## ResFPN Classifier Tutorial - COVIDx
by *Ming Ming Zhang*
```
import os, sys
# python files directory
PY_DIR = 'directory/to/python/files'  # TODO: set to your python files directory
sys.path.append(PY_DIR)
import covidx, resnet_fpn
```
### Preprocess Data
```
dataset_dir = 'directory/to/covidx'  # TODO: set to your COVIDx directory
covidx.move_imgs(dataset_dir, subset='train')
covidx.move_imgs(dataset_dir, subset='test')
train_dir = os.path.join(dataset_dir, 'train')
test_dir = os.path.join(dataset_dir, 'test')
# for the first time use, set convert_rgb to True
convert_rgb = False
if convert_rgb:
    covidx.covert_rgb_dir(train_dir)
    covidx.covert_rgb_dir(test_dir)
batch_size = 32
image_size = (256,256)
train_ds, val_ds, test_ds, classes = covidx.load_data(
dataset_dir, batch_size, image_size)
total = 13917
class_weight = []
for class_name in classes:
    imgs_list = os.listdir(os.path.join(train_dir, class_name))
    num_imgs = len(imgs_list)
    weight = (1/num_imgs) * (total/len(classes))
    class_weight.append(weight)
    print('%s: %d, ratio=%.4f' % (class_name, num_imgs, num_imgs/total))
class_weight = [x/sum(class_weight) for x in class_weight]
for i in range(len(classes)):
    print('%s: weight=%f' % (classes[i], class_weight[i]))
```
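As a sanity check, the inverse-frequency weighting used above can be verified on a toy two-class split (the image counts below are hypothetical):

```python
# Toy check of the inverse-frequency class weighting:
# weight_c = (1 / n_c) * (total / num_classes), then normalised to sum to 1.
counts = {'covid': 100, 'normal': 400}   # hypothetical image counts per class
total, k = sum(counts.values()), len(counts)
raw = [(1 / n) * (total / k) for n in counts.values()]
weights = [w / sum(raw) for w in raw]
print(weights)  # the rare class receives the larger weight
```

Here the minority class ends up with weight 0.8 and the majority class with 0.2, which is what lets the loss compensate for class imbalance.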
### ResFPN Classifiers
```
image_shape = (256,256,3)
filepath = 'path/to/pretrained_resnet_weights.h5'  # TODO: set to your pretrained ResNet weights
```
#### Cross-entropy Loss
```
ResFPN_ce = resnet_fpn.ResFPN_Classifier(
image_shape=image_shape,
num_classes=len(classes),
num_filters=256,
architecture='resnet50',
augmentation=False,
checkpoint_path=None,
resnet_weights_path=filepath)
ResFPN_ce.train(
train_dataset=train_ds,
val_dataset=val_ds,
params={'lr':0.01, 'l2':0.1, 'epochs':5},
loss_type='ce',
save_weights=False)
top_idxes, ensemble_acc = ResFPN_ce.select_top(val_ds, top=3)
ensemble_class_ids, metrics, f1_score = ResFPN_ce.predict(
test_ds, classes, display_metrics=True, top_idxes=top_idxes)
import numpy as np
f1s_ce = np.array([0.70,0.72,0.73,0.72,0.75])
print('ce: mean %.2f, std %.2f' %(np.mean(f1s_ce), np.std(f1s_ce)))
```
#### Focal Loss
```
alphas=[class_weight]
gammas=[0,1,2,5]
best_f1, best_alpha, best_gamma = covidx.tune_focal(
train_ds, val_ds, classes, alphas, gammas, image_shape, filepath)
ResFPN_focal = resnet_fpn.ResFPN_Classifier(
image_shape=image_shape,
num_classes=len(classes),
num_filters=256,
architecture='resnet50',
augmentation=False,
checkpoint_path=None,
resnet_weights_path=filepath)
ResFPN_focal.train(
train_dataset=train_ds,
val_dataset=val_ds,
params={'lr':0.01, 'l2':0.1, 'epochs':5,
'alpha':class_weight, 'gamma':2.0},
#'alpha':best_alpha, 'gamma':best_gamma},
loss_type='focal',
save_weights=False)
top_idxes, ensemble_acc = ResFPN_focal.select_top(val_ds, top=3)
ensemble_class_ids, metrics, f1_score = ResFPN_focal.predict(
test_ds, classes, display_metrics=True, top_idxes=top_idxes)
f1s_focal = np.array([0.77,0.78,0.77,0.76,0.78])
print('focal: mean %.2f, std %.2f' %(np.mean(f1s_focal), np.std(f1s_focal)))
```
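The focal loss itself (`loss_type='focal'`) is implemented in the accompanying python files rather than in this notebook. As a rough reference only, a per-batch sketch following Lin et al. (2017), with `alpha` a per-class weight vector and `gamma` the focusing parameter, might look like:

```python
import numpy as np

def focal_loss(probs, labels, alpha, gamma):
    """probs: (N, C) softmax outputs; labels: (N,) integer class ids."""
    p_t = probs[np.arange(len(labels)), labels]   # probability of the true class
    a_t = np.asarray(alpha)[labels]               # per-class weight
    # the (1 - p_t)^gamma factor down-weights easy, well-classified examples
    return -np.mean(a_t * (1 - p_t) ** gamma * np.log(p_t))

probs = np.array([[0.9, 0.1], [0.3, 0.7]])
labels = np.array([0, 1])
print(focal_loss(probs, labels, alpha=[0.5, 0.5], gamma=2.0))
```

With `gamma=0` and uniform `alpha` this reduces to plain cross-entropy; larger `gamma` focuses training on hard examples, which is why `tune_focal` searches over several values.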
# Autoregressive model using a feedforward neural network (pytorch implementation)
In this notebook we will use a feedforward neural network to fit a linear model to time series data.
> In this notebook all models are implemented with pytorch.
<div class="alert alert-success">
1. PyTorch has an excellent, but slightly more involved, interface than the one Keras provides for TensorFlow.
2. The data preprocessing requirements are similar to those for Keras and TensorFlow, but there are additional steps: creating mini-batches and making sure the data are tensors.
3. Forecasting **h-steps** ahead uses the same approach as Keras/TensorFlow, but there are subtle differences in the code.
</div>
---
**LEARNING OBJECTIVES**
* Use a NN to mimic a linear model using PyTorch.
* Train a deep learning model implemented using PyTorch.
* Generate h-step forecasts using an iterative approach
* Generate h-step forecast using a direct modelling approach
* Construct a deep feedforward neural network for forecasting
* Use an ensemble of neural networks to forecast
---
## 1. Python dependencies
It is recommended that you use the forecasting course conda environment provided for this work. We are going to implement neural networks using `pytorch`. You should be using at least `pytorch` version `1.4.0`.
```
import statsmodels.api as sm
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# pytorch imports
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import TensorDataset, DataLoader
import time
torch.__version__
```
## 2. Reminder: The forecasting process for AR
1. Select $l$ the number of autoregressive lags and forecast horizon $h$
2. Pre-process the data into tabular form [[$lag_1, lag_2, ... lag_l$], [$y_t$]]
3. Train the NN model using the tabular data
4. Iteratively forecast 1-step ahead gradually replacing ground truth observations with predictions.
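As a sketch of step 2, a toy series in tabular form (the helper below is illustrative; the notebook's own `sliding_window` function appears later):

```python
import numpy as np

def to_tabular(series, n_lags):
    # Each row: [lag_1 ... lag_l] paired with the target y_t
    X, y = [], []
    for i in range(len(series) - n_lags):
        X.append(series[i:i + n_lags])
        y.append(series[i + n_lags])
    return np.array(X), np.array(y)

series = np.array([10, 20, 30, 40, 50])
X, y = to_tabular(series, n_lags=2)
print(X)  # rows of 2 lags: [10 20], [20 30], [30 40]
print(y)  # matching targets: 30, 40, 50
```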
### 2.1 Synthetic data without noise
Given the extra complexities of forecasting using OLS, we will use simple synthetic data before exploring real healthcare data. The synthetic data we will use is a cosine.
```
t = np.arange(200)
ts_data = np.cos(0.2 * t)
plt.plot(ts_data);
```
## 2.2 Preprocess the time series into tabular autoregressive form
An autoregressive model consists of $l$ lags of the time series.
An easy way to think about the form of the data for an autoregressive OLS model is as a table of variables. The first $l$ columns are the lags (the independent predictor variables) and the final column is $y$ at time $t$ ($y_t$), that is, the target/dependent variable.
We therefore need to manipulate the time series so that it is in that format. More precisely, for each row we need:
**A vector representing the lags at time t:**
* $X_t = $ [$lag_{t-l}, ... lag_{t-2}, lag_{t-1}$]
**A scalar value representing y at time t:**
* $y_t$
For training we need a vector of rows ($X_t$) and a vector of targets ($y_t$), e.g.
```python
X_train = [X_1, X_2, X_3, ..., X_t]
y_train = [y_1, y_2, y_3, ..., y_t]
```
---
The function `sliding_window` illustrates how to preprocess time series data into tabular form in python.
```
def sliding_window(train, window_size=2, horizon=1):
    '''
    sliding window.

    Parameters:
    --------
    train: array-like
        training data for time series method
    window_size: int, optional (default=2)
        lookback - how much lagged data to include.
    horizon: int, optional (default=1)
        number of observations ahead to predict

    Returns:
    --------
    array-like, array-like
        preprocessed X, preprocessed y
    '''
    tabular_X = []
    tabular_y = []
    for i in range(0, len(train) - window_size - horizon):
        X_train = train[i:window_size+i]
        y_train = train[i+window_size+horizon-1]
        tabular_X.append(X_train)
        tabular_y.append(y_train)
    return np.asarray(tabular_X), np.asarray(tabular_y).reshape(-1, )

def to_tensors(*arrays):
    results = ()
    for a in arrays:
        results += torch.FloatTensor(a),
    return results
def get_data_loader(X, y, batch_size=32):
    '''
    Set up the training data as a TensorDataset wrapped in a DataLoader.
    '''
    tensor_data = TensorDataset(torch.FloatTensor(X),
                                torch.FloatTensor(y))
    return DataLoader(tensor_data,
                      batch_size=batch_size,
                      shuffle=False)

def ts_train_test_split(*arrays, train_size, as_tensors=True):
    '''
    time series train test split

    Parameters:
    --------
    *arrays: array-like
        arrays to split (e.g. X data, y data)
    train_size: int
        number of observations in the training set
    as_tensors: bool, optional (default=True)
        return the splits as torch Tensors
    '''
    results = ()
    for a in arrays:
        if as_tensors:
            results += to_tensors(a[:train_size], a[train_size:])
        else:
            results += a[:train_size], a[train_size:]
    return results
WINDOW_SIZE = 5
BATCH_SIZE = 16
# preprocess time series into a supervised learning problem
X_data, y_data = sliding_window(ts_data, window_size=WINDOW_SIZE)
# train test split
train_size = int(len(y_data) * (2/3))
X_train, X_test, y_train, y_test = ts_train_test_split(X_data,
y_data,
train_size=train_size,
as_tensors=True)
```
## 2.3 Create the PyTorch model.
After pre-processing the data, we need to create our PyTorch model. The first model we create will mimic the linear autoregressive model we built using an instance of `OLS`. This means we have a neural network with a single layer with `window_size` inputs and a single output.
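Because the single layer has no activation function, the network computes exactly $\hat{y} = w^\top x + b$, i.e. a learned AR($l$) model. A quick numpy sketch of what a `Linear(window_size, 1)` layer with bias evaluates for one input (the weights below are hypothetical):

```python
import numpy as np

# What a bias-enabled Linear(window_size=5, out_features=1) layer computes:
w = np.array([0.1, -0.2, 0.3, 0.0, 0.5])   # 5 learned lag coefficients
b = 0.25                                   # learned bias (the AR intercept)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # the 5 most recent lags
y_hat = w @ x + b                          # weighted sum of lags plus bias
print(y_hat)
```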
```
# Base model on PyTorch nn.Module class
class LinearModel(nn.Module):
    def __init__(self, window_size):
        # Inherit parent (nn.Module) methods using super init
        super(LinearModel, self).__init__()
        # Linear model only has a single layer:
        # window_size inputs containing our lags;
        # a Linear object is the same as Keras' Dense layer.
        self.layer1 = nn.Linear(in_features=window_size,
                                out_features=1,
                                bias=True)

    def forward(self, x):
        # Pass data through the net.
        y_pred = self.layer1(x)
        return y_pred
def fit(model, optimizer, criterion, n_epochs,
        X_train, y_train, X_test=None, y_test=None,
        batch_size=32, verbose=0):
    '''
    train the pytorch model

    Parameters:
    ------
    model: torch.nn.Module
        PyTorch neural network model; implements .forward()
    optimizer: torch.optim.Optimizer
        PyTorch optimization engine e.g. Adam
    criterion: torch.nn criterion
        PyTorch loss criterion e.g. MSELoss
    n_epochs: int
        number of epochs to train
    X_train: Tensor
        X training matrix 2D
    y_train: Tensor
        y training vector
    X_test: Tensor, optional (default=None)
        X test matrix 2D
    y_test: Tensor, optional (default=None)
        y test vector
    batch_size: int, optional (default=32)
        size of the mini-batches used in training
    verbose: int, optional (default=0)
        0 == no output during training
        1 == output loss every 10 epochs
        (includes validation loss if X_test, y_test included)

    Returns:
    -------
    dict
        training and validation loss history. keys are
        'loss' and 'val_loss'
    '''
    PRINT_STEPS = 10
    history = {'loss': [],
               'val_loss': []}
    # create the mini-batches
    train_loader = get_data_loader(X_train,
                                   y_train,
                                   batch_size=batch_size)
    start_time = time.time()
    # Loop through the required number of epochs
    for epoch in range(n_epochs):
        # Train the model (using batches): switch to training mode
        model.train()
        for batch in train_loader:
            y_pred = model.forward(batch[0])
            loss = criterion(y_pred, batch[1].reshape(y_pred.shape[0], -1))
            # Zero gradients, perform a backward pass,
            # and update the weights.
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # Get results for the complete training set: switch to evaluation mode
        model.eval()
        y_pred_train = model.forward(X_train)
        history['loss'].append(criterion(y_pred_train,
                                         y_train.reshape(y_pred_train.shape[0],
                                                         -1)).detach())
        if X_test is not None:
            # Get results for the test set
            y_pred_test = model.forward(X_test)
            history['val_loss'].append(criterion(y_pred_test,
                                                 y_test.reshape(y_pred_test.shape[0],
                                                                -1)).detach())
        # Print losses every 10 epochs (print last item of the history lists)
        if verbose == 1:
            if (epoch + 1) % PRINT_STEPS == 0:
                print(f'Epoch {epoch+1}. ', end='')
                print(f"Train loss {history['loss'][-1]: 0.3f}. ", end='')
                if X_test is not None:
                    print(f"Test loss {history['val_loss'][-1]: 0.3f}.")
                else:
                    print()
    duration = time.time() - start_time
    if verbose == 1:
        print(f'Training time {duration:.2f}s')
    return history
torch.manual_seed(1234)
N_EPOCHS = 100
# Create model
model = LinearModel(WINDOW_SIZE)
# Set loss
criterion = nn.MSELoss()
# Set optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
history = fit(model, optimizer, criterion, N_EPOCHS,
X_train, y_train, X_test=X_test, y_test=y_test,
batch_size=32, verbose=1)
plt.plot(range(N_EPOCHS), history['loss'], label='loss')
plt.plot(range(N_EPOCHS), history['val_loss'], label='val_loss')
plt.legend()
```
## 2.4 Forecasting 1 step ahead
To forecast 1-step ahead we just need to pass the first element of the `X_test` tensor to the model.
```
with torch.no_grad():
    pred = model(X_test[0])[0]
print(f'1-step forecast: {pred:10.4f}')
print(f'ground truth value: {y_test[0]:6.4f}')
```
## 2.4 Forecast h periods ahead using the iterative method.
**We have trained our `NN` model to predict 1-step**. When forecasting 2 or more steps ahead we still only have five ground truth observations ($lag_1$ ... $lag_5$). This means that when forecasting h-steps ahead we need to do this in a loop where we iteratively replace our ground truth observations with our predictions.
There's an easy way to do this in PyTorch using the `torch.roll(tensor, shifts)` function, which shifts every element in the tensor by `shifts` positions. The roll is **circular**, so with a negative shift the value in element 0 wraps around to become the final value in the tensor.
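For example, on a hypothetical 5-element tensor (assuming PyTorch is installed):

```python
import torch

x = torch.tensor([1, 2, 3, 4, 5])
# shifts=-1 moves every element one slot earlier;
# the value in element 0 wraps around to the end
rolled = torch.roll(x, shifts=-1)   # values become [2, 3, 4, 5, 1]
# overwrite the wrapped-around final slot with a new prediction
rolled[-1] = 99                     # values become [2, 3, 4, 5, 99]
print(rolled)
```

This roll-then-overwrite step is exactly what the iterative forecast function below performs once per horizon step.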
```
def autoregressive_iterative_forecast(model, exog, h):
    '''
    h-step forecast for an autoregressive
    model using the iterative prediction method.

    Conducts h one-step forecasts, gradually
    replacing ground truth autoregressive X
    values with predictions.

    Parameters:
    ------
    model: nn.Module
        trained model mapping a lag vector to a 1-step prediction
    exog: array-like
        initial vector of lagged values (X)
    h: int
        forecast horizon. assumed to be > 0

    Returns:
    ------
    numpy.ndarray
        y_predictions
    '''
    y_preds = []
    current_X = exog
    for i in range(h):
        with torch.no_grad():
            y_pred = model(current_X)[0]
        y_preds.append(y_pred)
        # shift the lags left and replace the final slot with the prediction
        current_X = torch.roll(current_X, shifts=-1)
        current_X[-1] = y_pred
    return np.array(y_preds)
H = 5
y_preds = autoregressive_iterative_forecast(model, X_test[0], h=H)
print(f'Iterative forecast: {y_preds}')
print(f'Ground truth y: {y_test[:H].numpy().T}')
```
#### Adding some noise
To make this a bit more interesting we will add some normally distributed noise to the synthetic time series.
```
#set the random seed so that we all get the same results
np.random.seed(12)
t = np.arange(200)
ts_data = np.cos(0.2 * t)
noise = np.random.normal(loc=0.0, scale=0.2, size=200)
ts_data = ts_data + noise
plt.plot(ts_data);
WINDOW_SIZE = 12
BATCH_SIZE = 16
#preprocess time series into a supervised learning problem
X_data, y_data = sliding_window(ts_data, window_size=WINDOW_SIZE)
#train test split
train_size = int(len(y_data) * (2/3))
X_train, X_test, y_train, y_test = ts_train_test_split(X_data,
y_data,
train_size=train_size,
as_tensors=True)
torch.manual_seed(1234)
# Create model
model = LinearModel(WINDOW_SIZE)
# Set loss
criterion = nn.MSELoss()
# Set optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
history = fit(model, optimizer, criterion, n_epochs=100,
X_train=X_train, y_train=y_train,
X_test=X_test, y_test=y_test, verbose=1)
plt.plot(history['loss'], label='loss')
plt.plot(history['val_loss'], label='val_loss')
plt.legend();
plt.plot(ts_data[WINDOW_SIZE+1:], label='ground truth')
plt.plot(model(X_train).detach().numpy(), label='NN fitted')
plt.legend();
#make iterative predictions
H = len(y_test)
y_preds_iter = autoregressive_iterative_forecast(model,
X_test[0],
h=H)
#plot
plt.plot(y_preds_iter, label='iterative forecast method')
plt.plot(y_test, label='ground truth')
plt.legend();
```
## 2.5 The direct h-step forecasting method.
In the direct method, to forecast h-steps ahead we train **$h$ forecasting models**. Each model provides a single point forecast for one specific number of steps ahead. In the example here, `y_test` is 57 periods long, so the direct method requires 57 NNs to make its prediction!
Recall the `sliding_window` function. We ignored an optional parameter `horizon` in the iterative example. By default `horizon=1`, i.e. the function returns target values that are only a single period ahead. We can vary the step size by increasing the value of `horizon`.
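For instance, re-stating the notebook's `sliding_window` logic on a toy series, `horizon=3` pairs each window with the value three steps ahead of the last lag:

```python
import numpy as np

def sliding_window(train, window_size=2, horizon=1):
    # same windowing logic as the notebook's function
    tabular_X, tabular_y = [], []
    for i in range(len(train) - window_size - horizon):
        tabular_X.append(train[i:window_size + i])
        tabular_y.append(train[i + window_size + horizon - 1])
    return np.asarray(tabular_X), np.asarray(tabular_y)

data = np.arange(10)                                   # 0, 1, ..., 9
X, y = sliding_window(data, window_size=2, horizon=3)
print(X[0], y[0])                                      # window [0 1] -> target 4
```

So the h-th direct model is trained on targets shifted h steps ahead, which is exactly what the training loop below does with `horizon=h+1`.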
**Training multiple models**
1. Create a for loop and set it to iterate 57 times.
2. In each loop call `sliding_window` setting `horizon` to the iteration number + 1
3. Create a new instance of the model
4. Train the model and save in a list.
5. Save the model to a `.pt` file. (recommended so you can reload without retraining)
```
def train_direct_models(data, n_epochs, horizon, window_size,
                        train_size, save=True):
    models = []
    print('Training model =>', end=' ')
    for h in range(horizon):
        print(f'{h+1}', end=', ')
        # preprocess time series into a supervised learning problem
        X, y = sliding_window(data, window_size=window_size,
                              horizon=h+1)
        # train test split
        X_train, X_test, y_train, y_test = ts_train_test_split(X, y,
                                                               train_size=train_size,
                                                               as_tensors=True)
        # Create model
        model_h = LinearModel(window_size)
        # Set loss
        criterion = nn.MSELoss()
        # Set optimizer
        optimizer = torch.optim.Adam(model_h.parameters(), lr=0.01)
        # fit model silently (verbose=0)
        history = fit(model_h, optimizer, criterion, n_epochs=n_epochs,
                      X_train=X_train,
                      y_train=y_train,
                      verbose=0)
        if save:
            path = f'./output/direct_model_h{h+1}.pt'
            torch.save(model_h.state_dict(), path)
        models.append(model_h)
    print('done')
    return models
def load_linear_direct_models(horizon, ws):
    models = []
    print('Loading direct model horizon =>', end=' ')
    for h in range(horizon):
        print(f'{h+1},', end=' ')
        model_h = LinearModel(ws)
        path = f'./output/direct_model_h{h+1}.pt'
        model_h.load_state_dict(torch.load(path))
        model_h.eval()
        models.append(model_h)
    return models
# set pytorch random seed
torch.manual_seed(1234)
N_EPOCHS = 100
HORIZON = len(y_test)
WINDOW_SIZE = 12
TRAIN_SIZE = 130
LOAD_FROM_FILE = True
if LOAD_FROM_FILE:
    direct_models = load_linear_direct_models(HORIZON,
                                              WINDOW_SIZE)
else:
    direct_models = train_direct_models(ts_data,
                                        n_epochs=N_EPOCHS,
                                        horizon=HORIZON,
                                        window_size=WINDOW_SIZE,
                                        train_size=TRAIN_SIZE)
```
We now create the `direct_forecast` function. This is just a for loop that passes the input X data to each model. Remember that the input to each model is the **same**, i.e. `exog`, which in our case will be `X_test[0]`.
```
def direct_forecast(models, exog):
    '''
    h-step forecast for an autoregressive
    model using the direct prediction method.

    Each model contained in models has been trained
    to predict a unique number of steps ahead.
    Each model forecasts and the results are
    combined in an ordered array and returned.

    Parameters:
    ------
    models: list
        direct models; each is called directly on exog
        and returns the forecast for its own horizon
    exog: Tensor
        initial vector of lagged values (X)

    Returns:
    ------
    numpy.ndarray
        y_predictions
    '''
    preds = []
    for model_h in models:
        with torch.no_grad():
            pred_h = model_h(exog)[0]
        preds.append(pred_h)
    return np.array(preds)
# make the direct forecast
y_preds_direct = direct_forecast(direct_models, X_test[0])
# plot the direct forecast against the test data
plt.plot(y_preds_direct, label='direct forecast method')
plt.plot(y_test, label='ground truth')
plt.legend()
```
Like the iterative method, the direct method looks like a close match to the ground truth test set! Let's plot all three datasets on the same chart and then take a look at the **RMSE** of each method.
```
# plot iterative and direct
plt.plot(y_preds_direct, label='direct forecast method')
plt.plot(y_preds_iter, label='iterative forecast method')
plt.plot(y_test, label='ground truth', linestyle='-.')
plt.legend()
from statsmodels.tools.eval_measures import rmse
rmse(y_test, y_preds_iter)
rmse(y_test, y_preds_direct)
```
In this particular example (and single holdout set) the direct method outperformed the iterative method. You should not assume this is always the case!
## 2.6 Forecasting a vector of y
In the **iterative** and **direct** methods we always forecast a *scalar* value. A modification is to adapt the feedforward neural network to predict a **vector of y values**. Using this architecture we train our model on sliding windows of $X$ and $y$, where $y$ is a vector of length $h$ and $X$ is a vector of length $ws$ (window size).
### 2.6.1 Exercise: preprocessing the time series into vectors of y
Task: modify the function `sliding_window` (provided below) so that it returns vectors of y.
Hints:
* Assume you are standing at time $t$. With a forecasting horizon of $h$, y would be $[y_{t+1}, y_{t+2}, ... , y_{t+h}]$.
* Array slicing might prove useful:
```python
train = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
print(train[2:6])
```
```
>> [3 4 5 6]
```
```
def sliding_window(train, window_size=2, horizon=2):
    '''
    sliding window.

    Parameters:
    --------
    train: array-like
        training data for time series method
    window_size: int, optional (default=2)
        lookback - how much lagged data to include.
    horizon: int, optional (default=2)
        number of observations ahead to predict

    Returns:
    --------
    array-like, array-like
        preprocessed X, preprocessed y
    '''
    tabular_X = []
    tabular_y = []
    for i in range(0, len(train) - window_size - horizon):
        X_train = train[i:window_size+i]
        # we use list slicing to return a vector of training data for y_train
        y_train = train[i+window_size:i+window_size+horizon]
        tabular_X.append(X_train)
        tabular_y.append(y_train)
    return np.asarray(tabular_X), np.array(tabular_y)
```
**After** you have modified `sliding_window` run the code below to preprocess the time series.
### 2.6.2 Build a model that predicts vectors in PyTorch
```
# Base model on PyTorch nn.Module class
class VectorModel(nn.Module):
    def __init__(self, in_features, n_neurons, n_outputs):
        # Inherit parent (nn.Module) methods using super init
        super(VectorModel, self).__init__()
        # This time we have two Linear layers.
        # The first layer maps the input features to a second layer
        # with a user specified number of neurons.
        self.layer1 = nn.Linear(in_features=in_features,
                                out_features=n_neurons,
                                bias=True)
        # The output layer has a user specified n_outputs.
        # This is equal to the horizon you are predicting.
        self.out = nn.Linear(in_features=n_neurons,
                             out_features=n_outputs,
                             bias=True)

    def forward(self, x):
        # Pass data through the net.
        x = F.relu(self.layer1(x))
        y_pred = self.out(x)
        return y_pred
```
### 2.6.3 Train the model
```
# set PyTorch random seed
torch.manual_seed(1234)
WINDOW_SIZE = 12
HORIZON = 12
N_EPOCHS = 100
TRAIN_SIZE = 130
BATCH_SIZE = 32
# convert time series to supervised learning format
X, y = sliding_window(ts_data,
window_size=WINDOW_SIZE,
horizon=HORIZON)
#train-test split
X_train, X_test, y_train, y_test = ts_train_test_split(X, y,
train_size=TRAIN_SIZE,
as_tensors=True)
# Create an instance of VectorModel
model_v = VectorModel(WINDOW_SIZE, n_neurons=10, n_outputs=HORIZON)
# Set loss function
criterion = nn.MSELoss()
# Set optimizer to Adam
optimizer = torch.optim.Adam(model_v.parameters(), lr=0.01)
# train the model
history = fit(model_v, optimizer, criterion, N_EPOCHS, X_train, y_train)
plt.plot(history['loss'], label='loss')
plt.plot(history['val_loss'], label='val_loss')
plt.legend()
```
### 2.6.4 Predict a single vector ahead.
Predicting a single vector ahead is actually making an h-step forecast. This is done in the same way as for the other models: call the model directly on the input, i.e. `model_v(X)`.
```
with torch.no_grad():
    y_preds = model_v(X_test[0])
y_preds.numpy()
#plot the prediction
plt.plot(y_test[0], label='test')
plt.plot(y_preds, label='forecast')
plt.legend()
```
### 2.6.5 Exercise predicting multiple vectors ahead (> h-steps)
It is important to remember that with the vector output you predict in multiples of $h$: if you make two predictions you have predictions for $2h$ periods. In general this works in the same way as the iterative method, in that each time you forecast you replace $h$ values in the X vector with the predictions.
**Task**:
* Modify `autoregressive_iterative_forecast` (provided below) so that it works with the new model. After you are done rename the function `vector_iterative_forecast`
* Predict 4 vector lengths ahead and plot the result against the test set.
Hints:
* For simplicity, you could make the parameter `h` the number of vectors ahead to predict.
* Each call of `model(X)` returns a vector. At the end of the iterative forecast you will have a list of vectors. Call `np.concatenate(list)` to transform this into a single list.
* In the notebook the X and y vectors are both of size 12 (`WINDOW_SIZE == 12` and `len(y_train[0]) == 12`). This means you could simplify your code for this example. Alternatively, it could work with different sized X and y vectors.
* Remember that `y_test` contains sliding windows of size 12. So if you predict 2 vectors ahead then you will need to plot `y_test[0]` and `y_test[12]`
```
def vector_iterative_forecast(model, exog, h):
    '''
    Forecast h vectors ahead using the iterative
    prediction method.

    Conducts h vector forecasts, replacing the
    ground truth autoregressive X values with the
    predicted vector each time.

    Parameters:
    ------
    model: nn.Module
        trained model mapping a lag vector to a vector prediction
    exog: array-like
        initial vector of lagged values (X)
    h: int
        number of vectors ahead to predict. assumed to be > 0

    Returns:
    ------
    numpy.ndarray
        y_predictions
    '''
    y_preds = []
    current_X = exog
    for i in range(h):
        with torch.no_grad():
            y_pred = model(current_X)
        y_preds.append(y_pred.numpy())
        # in numpy we would roll and overwrite:
        # current_X = np.roll(current_X, shift=-h); current_X[-h:] = y_pred
        # here X and y are the same length, so in pytorch we simply clone:
        current_X = y_pred.clone()
    return np.concatenate(y_preds)
H = 4
y_preds = vector_iterative_forecast(model_v, X_test[0], H)
y_preds
y_test_to_plot = []
for i in range(H):
    y_test_to_plot.append(y_test[WINDOW_SIZE*i].numpy())
plt.plot(np.concatenate(y_test_to_plot), label='test')
plt.plot(y_preds, label='forecast')
plt.legend()
```
# EDA of Supermarket Sales
**Context**
The growth of supermarkets in the most populated cities is increasing and market competition is high. The dataset is a historical record of a supermarket company's sales, recorded across 3 different branches over 3 months.
**Data Dictionary**
1. ***Invoice id:*** Computer generated sales slip invoice identification number
2. ***Branch:*** Branch of supercenter (3 branches are available identified by A, B and C).
3. ***City:*** Location of supercenters
4. ***Customer type:*** Type of customers, recorded by Members for customers using member card and Normal for without member card.
5. ***Gender:*** Gender type of customer
6. ***Product line:*** General item categorization groups - Electronic accessories, Fashion accessories, Food and beverages, Health and beauty, Home and lifestyle, Sports and travel
7. ***Unit price:*** Price of each product in USD
8. ***Quantity:*** Number of products purchased by customer
9. ***Tax:*** 5% tax fee for customer buying
10. ***Total:*** Total price including tax
11. ***Date:*** Date of purchase (Record available from January 2019 to March 2019)
12. ***Time:*** Purchase time (10am to 9pm)
13. ***Payment:*** Payment used by customer for purchase (3 methods are available – Cash, Credit card and Ewallet)
14. ***COGS:*** Cost of goods sold
15. ***Gross margin percentage:*** Gross margin percentage
16. ***Gross income:*** Gross income
17. ***Rating:*** Customer stratification rating on their overall shopping experience (On a scale of 1 to 10)
#### Import libraries
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import calmap
from pandas_profiling import ProfileReport
sns.set_style("darkgrid", {"axes.facecolor": ".9"})
print('Libraries imported')
df = pd.read_csv('data.csv')
df.head(-3)
df.info()
# Setting date as index is more convenient
df['Date'] = pd.to_datetime(df['Date'])
df.set_index('Date', inplace = True)
df.head(-3)
```
#### Univariate Analysis
```
plt.figure(figsize = (22,14))
# histplot draws on the current figure; displot would create its own
# figure and ignore the figsize above (assumes seaborn >= 0.11)
sns.histplot(df['Rating'], kde = True)
plt.axvline(x = np.mean(df['Rating']), c = 'green', ls = '--', label = 'Mean')
plt.axvline(x = np.percentile(df['Rating'], 25), c = 'red', ls = '--', label = '25th percentile')
plt.axvline(x = np.percentile(df['Rating'], 75), c = 'red', ls = '--', label = '75th percentile')
plt.legend()
plt.show()
```
* Not every one of them is uniformly distributed
```
df.hist(figsize= (13,13))
sns.countplot(data = df, x = 'Branch')
plt.figure()
sns.countplot(data = df, x = 'Payment')
print(df['Branch'].value_counts())
print('\n')
print(df['Payment'].value_counts())
```
#### Bivariate Analysis
```
sns.regplot(data = df, x = 'Rating', y = 'gross income')
sns.boxplot(data = df, x = 'Branch', y = 'gross income')
sns.boxplot(data = df, x = 'Gender', y = 'gross income')
plt.figure(figsize = (14,9))
sns.lineplot(x = df.groupby(df.index).mean().index, y =df.groupby(df.index).mean()['gross income'])
```
#### Duplicate Rows and Missing Values
```
df[df.duplicated() == True] # no duplicates
# df.drop_duplicates in case if you have them
df.isna().sum() # no missing values
# df.fillna(df.mean()) --> fills only numeric columns
# df.fillna(df.model.iloc[0]) --> fills categorical with their mode
# dataset = pd.read_csv('data.csv')
# rep = ProfileReport(dataset)
# rep
```
#### Correlation Analysis
```
np.round(df.corr(), 3)
corr = df.corr()
mask = np.triu(np.ones_like(corr, dtype=bool))
cmap = sns.diverging_palette(230, 20, as_cmap=True)
plt.figure(figsize = (9,9))
sns.heatmap(np.round(df.corr(), 3), annot = True, mask = mask, cmap = cmap)
```
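The `np.triu` mask used above hides the redundant upper triangle of the heatmap: every off-diagonal correlation appears twice and the diagonal is all ones. On a toy matrix:

```python
import numpy as np

corr = np.array([[1.0, 0.8, 0.1],
                 [0.8, 1.0, 0.3],
                 [0.1, 0.3, 1.0]])
# True where cells should be hidden: the diagonal and upper triangle
mask = np.triu(np.ones_like(corr, dtype=bool))
# seaborn's heatmap draws only the cells where mask is False
print(np.where(mask, np.nan, corr))
```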
In this blog, we want to discuss multiple linear regression. We will include both numerical and categorical features and feed those into the model. We will discuss how to select which features are significant predictors, and how to perform diagnostics.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/161) 00:33*
<!--TEASER_END-->
In this example, we want to use volume and cover (hardcover/paperback) as our explanatory variables, the predictors, to predict weight, the response variable.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/161) 01:19*
Plotting the data in a scatter plot, we can see the difference in weight for hardcover vs paperback. Hardcovers generally weigh more than paperbacks.
This data currently lives in the DAAG R library. So let's load that and look at the summary of a multiple linear regression.
```
#load data
library(DAAG)
data(allbacks)
#fit model using lm (#response# ~ #explanatories#, #data#)
book_mlr = lm(weight ~ volume + cover, data = allbacks)
summary(book_mlr)
```
In this summary we can see the MLR output. Remember the reference level is the level that does not appear in the output; in this case the cover coefficient lacks hardcover, which means hardcover is the reference level for cover. We can also see that 92.75% of the variability in weight can be explained by the volume and cover type of the books. This is a good model, and it seems volume and cover type capture most of the signal, with the residuals caused by unexplained explanatory variables.
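The reference level comes from dummy coding: a two-level factor becomes a single 0/1 column, and the dropped level is absorbed into the intercept. A pandas sketch of the same idea (the labels 'hb'/'pb' are hypothetical stand-ins for hardcover/paperback):

```python
import pandas as pd

cover = pd.Series(['hb', 'pb', 'hb', 'pb'], name='cover')
# drop_first=True drops the first level ('hb'), making it the reference:
dummies = pd.get_dummies(cover, drop_first=True)
print(dummies.columns.tolist())  # only 'pb' survives; 'hb' is the reference level
```

This mirrors what R's `lm` does automatically when a factor is included in the formula.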

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/161) 05:25*
Using the earlier information as a basis, we can simplify our MLR into a single line for each category. Recall that hardcover is the reference level for cover: we can drop the cover slope and get our predictor for hardcover books. For paperback, we include the cover slope and get the paperback predictor. The result is intuitive: as you can see from the intercepts, paperbacks weigh less than hardcovers.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/161) 06:07*
Notice that the plot shows multiple lines for the multiple linear regression, instead of imposing one line that tries to summarize both levels of cover. This way we get a more accurate predictor.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/161) 08:06*
So how do we interpret the slope? Here we have a new phrase, **all else held constant**: we hold all other variables fixed and focus on just one explanatory variable, in this case volume. Recall that the interpretation for a numerical explanatory variable is that for a 1-unit increase in the explanatory variable, the response is predicted to be higher or lower by the slope. Since volume is measured in cm³ and weight in grams, we can interpret the slope as: **all else held constant, for each 1 cm³ increase in volume, the model predicts the books to be heavier on average by 0.72 grams**.
And what about cover? Remember that hardcover is the reference level; as with any categorical explanatory variable, we say: **all else held constant, the model predicts that paperback books weigh 185.05 grams less than hardcover books, on average.**

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/161) 08:54*
This is how we interpret the intercept. If everything is set to zero (the categorical variable falls back to its reference level), hardcover books with no volume are expected, on average, to weigh 198 grams. This is meaningless, since a book with no volume is impossible, but the intercept still serves to adjust the height of the line.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/161) 09:42*
For prediction, we can just plug all the explanatory variables into the formula and calculate the predicted weight.
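That plug-in prediction can be sketched in Python, using the approximate coefficients quoted in this post (intercept of about 198 g, volume slope 0.72 g/cm³, paperback offset of -185.05 g); the volume value below is made up for illustration:

```
# coefficients as quoted in this post (approximate values from the slides)
intercept = 198.0        # grams: hardcover book with zero volume
b_volume = 0.72          # grams per cm^3
b_paperback = -185.05    # grams: offset for paperback relative to hardcover

def predict_weight(volume_cm3, paperback):
    """Predicted weight (grams) for a book with the given volume and cover type."""
    return intercept + b_volume * volume_cm3 + b_paperback * (1 if paperback else 0)

print(predict_weight(600, paperback=False))  # hardcover, 600 cm^3
print(predict_weight(600, paperback=True))   # paperback, 600 cm^3
```

The paperback indicator simply switches the 185.05 g offset on or off, which is exactly the two-line picture from the plot above.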
### Interaction variables

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/161) 10:53*
Note that the books example is an oversimplification: we assume the slope to be the same for hardcover and paperback, and that is not always the case. In such cases an **interaction** variable has to be used in the model, but that is not discussed here.
# Adjusted R Squared

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/163) 01:42*
Recall that in [my blog post](http://napitupulu-jon.appspot.com/posts/intro-linear-regression-coursera-statistics.html) we talked earlier about predicting poverty with the HS graduation rate. Here we have a scatter matrix, which plots each pairwise relationship along with its correlation coefficient. Let's load the dataset (taken from Data Analysis and Statistical Inference at Coursera).
```
states = read.csv('http://bit.ly/dasi_states')
head(states)
```
Next, we're going to fit the model, using female_house as the only explanatory variable.
```
pov_slr = lm(poverty ~ female_house, data=states)
summary(pov_slr)
```
And here we see the R squared.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/163) 03:30*
Take a look at how the data is plotted along with the linear model. The p-value is approximately zero, meaning female_house is a significant predictor of poverty.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/163) 03:31*
We can also get it from the ANOVA table, and we have 0.28. Now what if we add one more feature, in this case white?
```
pov_mlr = lm(poverty ~ female_house + white, data=states)
summary(pov_mlr)
anova(pov_mlr)
```
We can calculate our R Squared from ANOVA table,
```
ssq_fh = 132.57
ssq_white = 8.21
total_ssq = 480.25
(ssq_fh+ssq_white)/total_ssq
```
And the result matches the R squared from the linear regression summary. Since this is the proportion of variability in poverty that can be explained by both explanatory variables, female_house and white, the proportion explained by female_house alone incorporates only the female_house sum of squares, which in this case is:
```
ssq_fh/total_ssq
```

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/163) 07:36*
As we add features, R squared increases as well (due to the added degrees of freedom). So we also want to know whether an additional feature is a significant predictor or not, and for that we use **adjusted R squared**. Its formula incorporates the number of predictors: if the gain in R squared can't offset the penalty for the extra predictors, the additional feature should be excluded.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/163) 08:40*
The sample size is 51 (50 states + DC), and there are 2 explanatory variables, female_house and white, so we have n and k. Plugging these into the formula, which incorporates the residual (unexplained) sum of squares, the total sum of squares, n, and k, we get 0.26. This means the additional predictor incurs a penalty and may not contribute significantly to the model.
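The same calculation can be sketched in Python with the sums of squares from the ANOVA table above (SSE is SST minus the two explained sums of squares):

```
def adjusted_r_squared(sse, sst, n, k):
    """Adjusted R^2 = 1 - (SSE / (n - k - 1)) / (SST / (n - 1))."""
    return 1 - (sse / (n - k - 1)) / (sst / (n - 1))

# sums of squares from the ANOVA table above
sst = 480.25
sse = sst - (132.57 + 8.21)   # residual (unexplained) sum of squares
n, k = 51, 2                  # 50 states + DC; two predictors

print(round(adjusted_r_squared(sse, sst, n, k), 2))  # prints 0.26
```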

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/163) 09:18*
In the previous model, where we had only one explanatory variable, R squared is not penalized, as shown by the same value for both R squared and adjusted R squared. However, after we add the new predictor, white, R squared increases (remember it always increases as predictors are added), but adjusted R squared does not increase; it even decreases. This means 'white' does not contribute to the model significantly.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/163) 10:00*
Since k is never zero or lower, adjusted R squared is always smaller than R squared. Still, adjusted R squared is preferable for comparing models. Remember that this doesn't change the fact that R squared is the proportion of variability in the response explained by the model; adjusted R squared has no such interpretation.
# Collinearity and Parsimony
## Collinear
Two predictors are said to be **collinear** when they are correlated with each other. Since predictors are assumed to be independent of one another, this assumption is violated and may give an unreliable model. The inclusion of collinear predictors is called **multicollinearity**.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/165) 02:15*
Earlier we incorporated 'white' as our second predictor, and it turned out that R squared barely changed and was even penalized by adjusted R squared. Looking at the pairwise plot once again, we can infer from the correlation coefficient and scatter plot that the two variables have a strong negative correlation. This brings **multicollinearity**, which will disrupt our model, so we should exclude 'white'. That conclusion brings us to **parsimony**.
## Parsimony
By avoiding features that correlate with each other, we practice **parsimony**. The reason to exclude such a feature is that it brings no useful information to the model. The **parsimonious model** is said to be the best simplest model. The idea arises from Occam's razor, which states that among competing hypotheses, the one with the fewest assumptions should be selected. In other words, when models are equally good, we avoid the model with the higher number of features. It is impossible to control features in an observational study, but it is possible in an experiment.
In the machine learning field, it is often useful to exclude such features. By avoiding them, we also avoid the [**curse of dimensionality**](http://napitupulu-jon.appspot.com/posts/kNN.html), where learning takes exponentially longer as the number of features grows; this is why feature (predictor) selection is used. It also reduces the variance of our model, which otherwise tends toward overfitting and makes it harder to generalize to future problems.
In summary:
we can define MLR as
$$\hat{y} = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k$$
where $k$ is the number of predictors, the $x_i$ are the explanatory variables, $\beta_0$ is the intercept, and the $\beta_i$ are the slopes.
As for the interpretation of an MLR: the intercept is the expected value of the response variable, on average, when all explanatory variables are zero. For a slope, all else held constant, for each 1-unit increase in the corresponding explanatory variable we expect the response to increase/decrease on average by that slope.
In MLR, we want to avoid two explanatory variables that are dependent, because collinearity reduces the reliability of our model. One way to avoid this is to select just one of the variables, or to merge them into one (e.g., with PCA).
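A minimal numpy sketch of both options on synthetic data: detecting a collinear pair via the correlation coefficient, and merging the pair into one feature with the first principal component:

```
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = -0.9 * x1 + rng.normal(scale=0.1, size=200)   # nearly collinear with x1

# option 1: detect the collinearity and keep only one of the pair
r = np.corrcoef(x1, x2)[0, 1]
print(round(r, 2))   # strongly negative, close to -1

# option 2: merge the pair into one feature via the first principal component
X = np.column_stack([x1, x2])
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
merged = Xc @ vt[0]          # scores on the first principal component
print(merged.shape)          # one column instead of two
```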
R squared always gets higher as you add variables. To make the measure more robust, we use adjusted R squared, which penalizes features that don't provide useful information to the model. If a feature does help, the model should have a higher adjusted R squared, rather than a constant or even decreasing one. It uses the formula
$$R_{adj}^2 = 1 - \frac{SSE / (n-k-1)}{SST/(n-1)}$$
where $n$ is the number of observations and $k$ is the number of predictors.
Lastly, we want a parsimonious model: the model with the best accuracy but also the simplest. The full model uses all the explanatory variables to predict the response; we can reduce the variables using **model selection**, which we will discuss in my next blog.
> **REFERENCES**:
> Dr. Mine Çetinkaya-Rundel, [Coursera](https://class.coursera.org/statistics-003/lecture)
---
```
from pyltp import Segmentor
segmentor = Segmentor()
segmentor.load("/pi/ai/ltp/ltp_data_v3.4.0/cws.model")
words = segmentor.segment("元芳你怎么看")
print("|".join(words))
segmentor.release()
```
⊕ [pyltp/example.py at master · HIT-SCIR/pyltp](https://github.com/HIT-SCIR/pyltp/blob/master/example/example.py)
```
from pyltp import SentenceSplitter, Segmentor, Postagger, Parser, NamedEntityRecognizer, SementicRoleLabeller
import sys, os
MODELDIR='/pi/ai/ltp/ltp_data_v3.4.0'
paragraph = '中国进出口银行与中国银行加强合作。中国进出口银行与中国银行加强合作!'
sentence = SentenceSplitter.split(paragraph)[0]
segmentor = Segmentor()
segmentor.load(os.path.join(MODELDIR, "cws.model"))
words = segmentor.segment(sentence)
print(" ".join(words))
postagger = Postagger()
postagger.load(os.path.join(MODELDIR, "pos.model"))
postags = postagger.postag(words)
# list-of-string parameter is supported since 0.1.5
# postags = postagger.postag(["中国","进出口","银行","与","中国银行","加强","合作"])
print(" ".join(postags))
parser = Parser()
parser.load(os.path.join(MODELDIR, "parser.model"))
arcs = parser.parse(words, postags)
print("\t".join("%d:%s" % (arc.head, arc.relation) for arc in arcs))
recognizer = NamedEntityRecognizer()
recognizer.load(os.path.join(MODELDIR, "ner.model"))
netags = recognizer.recognize(words, postags)
print("\t".join(netags))
labeller = SementicRoleLabeller()
labeller.load(os.path.join(MODELDIR, "pisrl.model"))
roles = labeller.label(words, postags, arcs)
for role in roles:
print(role.index, "".join(
["%s:(%d,%d)" % (arg.name, arg.range.start, arg.range.end) for arg in role.arguments]))
segmentor.release()
postagger.release()
parser.release()
recognizer.release()
labeller.release()
parser = Parser()
parser.load(os.path.join(MODELDIR, "parser.model"))
segmentor = Segmentor()
segmentor.load(os.path.join(MODELDIR, "cws.model"))
paragraph = '中国进出口银行与中国银行加强合作。中国进出口银行与中国银行加强合作!'
sentence = SentenceSplitter.split(paragraph)[0]
words = segmentor.segment(sentence)
print("\t".join(words))
arcs = parser.parse(words, postags)
print("\t".join("%d:%s" % (arc.head, arc.relation) for arc in arcs))
import sagas
rs=[]
for i in range(len(words)):
print("%s --> %s|%s|%s|%s" % (words[int(arcs[i].head)-1], words[i], \
arcs[i].relation, postags[i], netags[i]))
rs.append((words[int(arcs[i].head)-1], words[i], \
arcs[i].relation, postags[i], netags[i]))
sagas.to_df(rs, ['弧头','弧尾','依存关系','词性','命名实体'])
```
## Semantic Role Labeling (语义角色标注)
Semantic Role Labeling (SRL) is a shallow semantic analysis technique that labels certain phrases in a sentence as the arguments (semantic roles) of a given predicate, such as agent, patient, time, and place. It can benefit applications such as question answering, information extraction, and machine translation. For the same example as above, the semantic role labeling result is shown in the figure below: ...
There are three predicates: 提出 (propose), 调研 (investigate), and 探索 (explore). Taking 探索 as an example, 积极 (actively) is its manner (usually tagged ADV), and 新机制 (new mechanism) is its patient (usually tagged A1).
The core semantic roles are the six types A0-A5: A0 usually denotes the agent of the action, A1 usually denotes the entity affected by it, and A2-A5 take on different meanings depending on the predicate verb. The remaining 15 roles are adjunct semantic roles, such as LOC for location and TMP for time. The adjunct roles are listed in the following table:
https://www.ltp-cloud.com/intro#srl_how

```
descs='''标记 说明
ADV adverbial, default tag ( 附加的,默认标记 )
BNE beneficiary ( 受益人 )
CND condition ( 条件 )
DIR direction ( 方向 )
DGR degree ( 程度 )
EXT extent ( 扩展 )
FRQ frequency ( 频率 )
LOC locative ( 地点 )
MNR manner ( 方式 )
PRP purpose or reason ( 目的或原因 )
TMP temporal ( 时间 )
TPC topic ( 主题 )
CRD coordinated arguments ( 并列参数 )
PRD predicate ( 谓语动词 )
PSR possessor ( 持有者 )
PSE possessee ( 被持有 )'''
desc_rs=[]
for desc in descs.split('\n')[1:]:
desc_rs.append(desc.split('\t'))
sagas.to_df(desc_rs, ['mark', 'description'])
labeller = SementicRoleLabeller()
labeller.load(os.path.join(MODELDIR, "pisrl.model"))
def join_words(words, arg_range):
return ''.join([str(words[idx]) for idx in range(arg_range.start, arg_range.end+1)])
def explain(arg_name):
if arg_name=='A0':
return '动作的施事'
elif arg_name=='A1':
return '动作的影响'
return arg_name
print('sents', " ".join(words))
roles = labeller.label(words, postags, arcs)
for role in roles:
print(role.index, '->', "".join(
["%s:(%d,%d) _ " % (arg.name, arg.range.start, arg.range.end) for arg in role.arguments]))
for arg in role.arguments:
# print(''.join(words[arg.range.start:arg.range.end]))
print(arg.name, arg.range.start, arg.range.end)
# print(arg.name, words[arg.range.start], words[arg.range.end])
print(explain(arg.name), join_words(words, arg.range))
print('----')
from termcolor import colored
query="That was very bad"
topk = 5
print('top %d questions similar to "%s"' % (topk, colored(query, 'green')))
import sagas
dep_defs='''主谓关系 SBV subject-verb 我送她一束花 (我 <– 送)
动宾关系 VOB 直接宾语,verb-object 我送她一束花 (送 –> 花)
间宾关系 IOB 间接宾语,indirect-object 我送她一束花 (送 –> 她)
前置宾语 FOB 前置宾语,fronting-object 他什么书都读 (书 <– 读)
兼语 DBL double 他请我吃饭 (请 –> 我)
定中关系 ATT attribute 红苹果 (红 <– 苹果)
状中结构 ADV adverbial 非常美丽 (非常 <– 美丽)
动补结构 CMP complement 做完了作业 (做 –> 完)
并列关系 COO coordinate 大山和大海 (大山 –> 大海)
介宾关系 POB preposition-object 在贸易区内 (在 –> 内)
左附加关系 LAD left adjunct 大山和大海 (和 <– 大海)
右附加关系 RAD right adjunct 孩子们 (孩子 –> 们)
独立结构 IS independent structure 两个单句在结构上彼此独立
核心关系 HED head 指整个句子的核心'''.split('\n')
def_rs=[]
for dep in dep_defs:
def_rs.append(dep.split('\t'))
sagas.to_df(def_rs, ['type', 'tag', 'desc', 'example'])
import sagas
# https://ltp.readthedocs.io/zh_CN/latest/appendix.html#id3
# LTP uses the 863 POS tag set; the meaning of each tag is shown in the table below
pos_defs='''a adjective 美丽 ni organization name 保险公司
b other noun-modifier 大型, 西式 nl location noun 城郊
c conjunction 和, 虽然 ns geographical name 北京
d adverb 很 nt temporal noun 近日, 明代
e exclamation 哎 nz other proper noun 诺贝尔奖
g morpheme 茨, 甥 o onomatopoeia 哗啦
h prefix 阿, 伪 p preposition 在, 把
i idiom 百花齐放 q quantity 个
j abbreviation 公检法 r pronoun 我们
k suffix 界, 率 u auxiliary 的, 地
m number 一, 第一 v verb 跑, 学习
n general noun 苹果 wp punctuation ,。!
nd direction noun 右侧 ws foreign words CPU
nh person name 杜甫, 汤姆 x non-lexeme 萄, 翱
z descriptive words 瑟瑟,匆匆'''.split('\n')
upos_maps={'a':'ADJ', 'p':'ADP', 'd':'ADV',
'u':'AUX', 'c':'CCONJ', 'h':'DET',
'e':'INTJ', 'n':'NOUN', 'm':'NUM',
'z':'PART', 'r':'PRON', 'nh':'PROPN',
'wp':'PUNCT', 'ws':'SYM',
'v':'VERB', 'x':'X'
}
upos_rev_maps={'SCONJ':['c'], 'NOUN':['ni', 'nl', 'ns', 'nt', 'nz', 'n', 'nd', 'nh']}
pos_rs=[]
for pos in pos_defs:
parts=pos.split('\t')
pos_rs.append(parts[:3])
pos_rs.append(parts[3:])
sagas.to_df(pos_rs, ['tag','desc','example'])
upos_maps['v']
from sagas.zh.ltp_procs import LtpProcs
ltp=LtpProcs()
def in_filters(val, filters):
for f in filters:
# if val.endswith(f):
# support suffix, also support as 'nsubj:pass'
if f in val:
return True
return False
def parse_sentence(sentence, filters):
words = ltp.segmentor.segment(sentence)
postags = ltp.postagger.postag(words)
arcs = ltp.parser.parse(words, postags)
roles = ltp.labeller.label(words, postags, arcs)
netags = ltp.recognizer.recognize(words, postags)
root=''
root_idx=0
collector=[]
verbs=[]
for i in range(len(words)):
rel=arcs[i].relation
pos=postags[i]
if rel=='HED':
root=words[i]
root_idx=i
if pos=='v':
verbs.append(words[i])
print('root', root, root_idx)
rs = []
for i in range(len(words)):
print("%s --> %s|%s|%s|%s" % (words[int(arcs[i].head) - 1], words[i], \
arcs[i].relation, postags[i], netags[i]))
pos=postags[i]
dep_idx=int(arcs[i].head) - 1
head=words[dep_idx]
rel=arcs[i].relation
rs.append((head, words[i], \
rel, pos, netags[i]))
if dep_idx==root_idx:
collector.append((rel.lower(), words[i]))
df=sagas.to_df(rs, ['弧头', '弧尾', '依存关系', '词性', '命名实体'])
return df, collector, verbs
df, collector, verbs=parse_sentence('我是一个好老师', ['SBV', 'OB'])
print(collector, verbs)
df
def ltp_parse(sentence):
words = ltp.segmentor.segment(sentence)
postags = ltp.postagger.postag(words)
arcs = ltp.parser.parse(words, postags)
roles = ltp.labeller.label(words, postags, arcs)
netags = ltp.recognizer.recognize(words, postags)
rs = []
for i in range(len(words)):
pos=postags[i]
dep_idx=int(arcs[i].head) - 1
head=words[dep_idx]
rel=arcs[i].relation
ne=netags[i]
rs.append({'head':head, 'text':words[i],
'rel':rel, 'pos':pos, 'entity':ne})
# df=sagas.to_df(rs, ['弧头', '弧尾', '依存关系', '词性', '命名实体'])
return rs
def ltp_ner(sents):
running_offset = 0
rs = []
tokens=ltp_parse(sents)
for token in tokens:
word=token['text']
word_offset = sents.index(word, running_offset)
word_len = len(word)
running_offset = word_offset + word_len
rs.append({"start":word_offset,
"end":running_offset,
'text': word, 'entity': token['entity']
})
return [w for w in rs if w['entity']!='O']
ltp_ner('我在北京工作')
def proc(sents):
import requests
data = {'lang':'zh', "sents":sents}
response = requests.post('http://localhost:8091/digest', json=data)
print(response.status_code, response.json())
proc("我是一个好老师")
proc('你是如何获得语言的?')
proc('你如何获得语言?')
proc('你如何获得语言的?')
class LtpViz(object):
def __init__(self, ltp=None):
from graphviz import Digraph
from sagas.zh.ltp_procs import LtpProcs
if ltp is None:
ltp = LtpProcs()
self.ltp=ltp
self.f = Digraph('deps', filename='deps.gv')
self.f.attr(rankdir='LR', size='6,4')
self.f.attr('node', shape='circle')
def deps(self, sentence):
words = self.ltp.segmentor.segment(sentence)
postags = self.ltp.postagger.postag(words)
arcs = self.ltp.parser.parse(words, postags)
roles = self.ltp.labeller.label(words, postags, arcs)
netags = self.ltp.recognizer.recognize(words, postags)
for i in range(len(words)):
a=words[int(arcs[i].head) - 1]
print("%s --> %s|%s|%s|%s" % (a, words[i], \
arcs[i].relation, postags[i], netags[i]))
self.f.edge(a, words[i], label=arcs[i].relation.lower())
return self.f
viz=LtpViz(ltp)
viz.deps('你如何获得语言的?')
ana=lambda sents: LtpViz(ltp).deps(sents)
ana('我送她一束花')
ana('朝鲜语现有11172个音节可用')
from sagas.zh.ltp_procs import LtpProcs
ltp=LtpProcs()
```
---
```
import numpy as np
import random
from utils.gradcheck import gradcheck_naive
from utils.utils import normalizeRows, softmax
def sigmoid(x):
"""
Compute the sigmoid function for the input here.
Arguments:
x -- A scalar or numpy array.
Return:
s -- sigmoid(x)
"""
### YOUR CODE HERE
s=1/(1+np.exp(-x))
### END YOUR CODE
return s
def naiveSoftmaxLossAndGradient(
centerWordVec,
outsideWordIdx,
outsideVectors,
dataset
):
""" Naive Softmax loss & gradient function for word2vec models
Implement the naive softmax loss and gradients between a center word's
embedding and an outside word's embedding. This will be the building block
for our word2vec models.
Arguments:
centerWordVec -- numpy ndarray, center word's embedding
(v_c in the pdf handout)
outsideWordIdx -- integer, the index of the outside word
(o of u_o in the pdf handout)
outsideVectors -- outside vectors (rows of matrix) for all words in vocab
(U in the pdf handout)
dataset -- needed for negative sampling, unused here.
Return:
loss -- naive softmax loss
gradCenterVec -- the gradient with respect to the center word vector
(dJ / dv_c in the pdf handout)
gradOutsideVecs -- the gradient with respect to all the outside word vectors
(dJ / dU)
"""
### YOUR CODE HERE
### Please use the provided softmax function (imported earlier in this file)
### This numerically stable implementation helps you avoid issues pertaining
### to integer overflow.
    # dot product similarity between the center vector and every outside vector
    dot_product=np.matmul(outsideVectors,centerWordVec)
    s_dot=softmax(dot_product)
    loss=-np.log(s_dot[outsideWordIdx])
    # delta = y_hat - y, where y is the one-hot vector for the true outside word
    delta=s_dot.copy()
    delta[outsideWordIdx]-=1.0
    # dJ/dv_c = U^T (y_hat - y)
    gradCenterVec=np.matmul(outsideVectors.T,delta)
    # dJ/dU = outer product of (y_hat - y) and v_c
    gradOutsideVecs=np.outer(delta,centerWordVec)
### END YOUR CODE
return loss, gradCenterVec, gradOutsideVecs
def getNegativeSamples(outsideWordIdx, dataset, K):
""" Samples K indexes which are not the outsideWordIdx """
negSampleWordIndices = [None] * K
for k in range(K):
newidx = dataset.sampleTokenIdx()
while newidx == outsideWordIdx:
newidx = dataset.sampleTokenIdx()
negSampleWordIndices[k] = newidx
return negSampleWordIndices
def negSamplingLossAndGradient(
centerWordVec,
outsideWordIdx,
outsideVectors,
dataset,
K=10
):
""" Negative sampling loss function for word2vec models
Implement the negative sampling loss and gradients for a centerWordVec
and a outsideWordIdx word vector as a building block for word2vec
models. K is the number of negative samples to take.
Note: The same word may be negatively sampled multiple times. For
example if an outside word is sampled twice, you shall have to
double count the gradient with respect to this word. Thrice if
it was sampled three times, and so forth.
Arguments/Return Specifications: same as naiveSoftmaxLossAndGradient
"""
# Negative sampling of words is done for you. Do not modify this if you
# wish to match the autograder and receive points!
negSampleWordIndices = getNegativeSamples(outsideWordIdx, dataset, K)
indices = [outsideWordIdx] + negSampleWordIndices
### YOUR CODE HERE
### Please use your implementation of sigmoid in here.
dot_product=np.matmul(outsideVectors,centerWordVec)
loss=0
for ind in indices:
if ind==outsideWordIdx:
loss=loss-np.log(sigmoid(dot_product[ind]))
else:
loss=loss-np.log(sigmoid(-dot_product[ind]))
gradCenterVec=-outsideVectors[outsideWordIdx,:]*(1-sigmoid(dot_product[outsideWordIdx]))
for k in negSampleWordIndices:
gradCenterVec=gradCenterVec+outsideVectors[k,:]*(1-sigmoid(-dot_product[k]))
gradOutsideVecs=np.zeros(outsideVectors.shape)
for ind in indices:
if ind==outsideWordIdx:
gradOutsideVecs[ind,:]=-centerWordVec*(1-sigmoid(dot_product[ind]))
else:
gradOutsideVecs[ind,:]=centerWordVec*(1-sigmoid(-dot_product[ind]))
indexCount = np.bincount(indices)
for i in np.unique(indices):
gradOutsideVecs[i] *= indexCount[i]
### END YOUR CODE
return loss, gradCenterVec, gradOutsideVecs
def skipgram(currentCenterWord, windowSize, outsideWords, word2Ind,
centerWordVectors, outsideVectors, dataset,
word2vecLossAndGradient=naiveSoftmaxLossAndGradient):
""" Skip-gram model in word2vec
Implement the skip-gram model in this function.
Arguments:
currentCenterWord -- a string of the current center word
windowSize -- integer, context window size
outsideWords -- list of no more than 2*windowSize strings, the outside words
word2Ind -- a dictionary that maps words to their indices in
the word vector list
centerWordVectors -- center word vectors (as rows) for all words in vocab
(V in pdf handout)
outsideVectors -- outside word vectors (as rows) for all words in vocab
(U in pdf handout)
word2vecLossAndGradient -- the loss and gradient function for
a prediction vector given the outsideWordIdx
word vectors, could be one of the two
loss functions you implemented above.
Return:
loss -- the loss function value for the skip-gram model
(J in the pdf handout)
gradCenterVecs -- the gradient with respect to the center word vectors
(dJ / dV in the pdf handout)
gradOutsideVectors -- the gradient with respect to the outside word vectors
(dJ / dU in the pdf handout)
"""
loss = 0.0
gradCenterVecs = np.zeros(centerWordVectors.shape)
gradOutsideVectors = np.zeros(outsideVectors.shape)
### YOUR CODE HERE
centerWordInd=word2Ind.get(currentCenterWord)
centerwordvec= centerWordVectors[centerWordInd,:]
outWordInd=[word2Ind.get(x) for x in outsideWords]
for ind in outWordInd:
ploss,pgcv,pgov=word2vecLossAndGradient(centerWordVec=centerwordvec,
outsideWordIdx=ind,
outsideVectors=outsideVectors,
dataset=dataset)
loss=loss+ploss
gradCenterVecs[centerWordInd,:]=gradCenterVecs[centerWordInd,:]+pgcv
gradOutsideVectors=gradOutsideVectors+pgov
### END YOUR CODE
return loss, gradCenterVecs, gradOutsideVectors
#############################################
# Testing functions below. DO NOT MODIFY! #
#############################################
def word2vec_sgd_wrapper(word2vecModel, word2Ind, wordVectors, dataset,
windowSize,
word2vecLossAndGradient=naiveSoftmaxLossAndGradient):
batchsize = 50
loss = 0.0
grad = np.zeros(wordVectors.shape)
N = wordVectors.shape[0]
centerWordVectors = wordVectors[:int(N/2),:]
outsideVectors = wordVectors[int(N/2):,:]
for i in range(batchsize):
windowSize1 = random.randint(1, windowSize)
centerWord, context = dataset.getRandomContext(windowSize1)
c, gin, gout = word2vecModel(
centerWord, windowSize1, context, word2Ind, centerWordVectors,
outsideVectors, dataset, word2vecLossAndGradient
)
loss += c / batchsize
grad[:int(N/2), :] += gin / batchsize
grad[int(N/2):, :] += gout / batchsize
return loss, grad
def test_word2vec():
""" Test the two word2vec implementations, before running on Stanford Sentiment Treebank """
dataset = type('dummy', (), {})()
def dummySampleTokenIdx():
return random.randint(0, 4)
def getRandomContext(C):
tokens = ["a", "b", "c", "d", "e"]
return tokens[random.randint(0,4)], \
[tokens[random.randint(0,4)] for i in range(2*C)]
dataset.sampleTokenIdx = dummySampleTokenIdx
dataset.getRandomContext = getRandomContext
random.seed(31415)
np.random.seed(9265)
dummy_vectors = normalizeRows(np.random.randn(10,3))
dummy_tokens = dict([("a",0), ("b",1), ("c",2),("d",3),("e",4)])
print("==== Gradient check for skip-gram with naiveSoftmaxLossAndGradient ====")
gradcheck_naive(lambda vec: word2vec_sgd_wrapper(
skipgram, dummy_tokens, vec, dataset, 5, naiveSoftmaxLossAndGradient),
dummy_vectors, "naiveSoftmaxLossAndGradient Gradient")
print("==== Gradient check for skip-gram with negSamplingLossAndGradient ====")
gradcheck_naive(lambda vec: word2vec_sgd_wrapper(
skipgram, dummy_tokens, vec, dataset, 5, negSamplingLossAndGradient),
dummy_vectors, "negSamplingLossAndGradient Gradient")
print("\n=== Results ===")
print ("Skip-Gram with naiveSoftmaxLossAndGradient")
print ("Your Result:")
print("Loss: {}\nGradient wrt Center Vectors (dJ/dV):\n {}\nGradient wrt Outside Vectors (dJ/dU):\n {}\n".format(
*skipgram("c", 3, ["a", "b", "e", "d", "b", "c"],
dummy_tokens, dummy_vectors[:5,:], dummy_vectors[5:,:], dataset)
)
)
print ("Expected Result: Value should approximate these:")
print("""Loss: 11.16610900153398
Gradient wrt Center Vectors (dJ/dV):
[[ 0. 0. 0. ]
[ 0. 0. 0. ]
[-1.26947339 -1.36873189 2.45158957]
[ 0. 0. 0. ]
[ 0. 0. 0. ]]
Gradient wrt Outside Vectors (dJ/dU):
[[-0.41045956 0.18834851 1.43272264]
[ 0.38202831 -0.17530219 -1.33348241]
[ 0.07009355 -0.03216399 -0.24466386]
[ 0.09472154 -0.04346509 -0.33062865]
[-0.13638384 0.06258276 0.47605228]]
""")
print ("Skip-Gram with negSamplingLossAndGradient")
print ("Your Result:")
print("Loss: {}\nGradient wrt Center Vectors (dJ/dV):\n {}\n Gradient wrt Outside Vectors (dJ/dU):\n {}\n".format(
*skipgram("c", 1, ["a", "b"], dummy_tokens, dummy_vectors[:5,:],
dummy_vectors[5:,:], dataset, negSamplingLossAndGradient)
)
)
print ("Expected Result: Value should approximate these:")
print("""Loss: 16.15119285363322
Gradient wrt Center Vectors (dJ/dV):
[[ 0. 0. 0. ]
[ 0. 0. 0. ]
[-4.54650789 -1.85942252 0.76397441]
[ 0. 0. 0. ]
[ 0. 0. 0. ]]
Gradient wrt Outside Vectors (dJ/dU):
[[-0.69148188 0.31730185 2.41364029]
[-0.22716495 0.10423969 0.79292674]
[-0.45528438 0.20891737 1.58918512]
[-0.31602611 0.14501561 1.10309954]
[-0.80620296 0.36994417 2.81407799]]
""")
test_word2vec()
#!/usr/bin/env python
# Save parameters every few SGD iterations as a fail-safe
SAVE_PARAMS_EVERY = 5000
import pickle
import glob
import random
import numpy as np
import os.path as op
def load_saved_params():
"""
A helper function that loads previously saved parameters and resets
iteration start.
"""
st = 0
for f in glob.glob("saved_params_*.npy"):
iter = int(op.splitext(op.basename(f))[0].split("_")[2])
if (iter > st):
st = iter
if st > 0:
params_file = "saved_params_%d.npy" % st
state_file = "saved_state_%d.pickle" % st
params = np.load(params_file)
with open(state_file, "rb") as f:
state = pickle.load(f)
return st, params, state
else:
return st, None, None
def save_params(iter, params):
params_file = "saved_params_%d.npy" % iter
np.save(params_file, params)
with open("saved_state_%d.pickle" % iter, "wb") as f:
pickle.dump(random.getstate(), f)
def sgd(f, x0, step, iterations, postprocessing=None, useSaved=False,
PRINT_EVERY=10):
""" Stochastic Gradient Descent
Implement the stochastic gradient descent method in this function.
Arguments:
f -- the function to optimize, it should take a single
argument and yield two outputs, a loss and the gradient
with respect to the arguments
x0 -- the initial point to start SGD from
step -- the step size for SGD
iterations -- total iterations to run SGD for
postprocessing -- postprocessing function for the parameters
if necessary. In the case of word2vec we will need to
normalize the word vectors to have unit length.
PRINT_EVERY -- specifies how many iterations to output loss
Return:
x -- the parameter value after SGD finishes
"""
# Anneal learning rate every several iterations
ANNEAL_EVERY = 20000
if useSaved:
start_iter, oldx, state = load_saved_params()
if start_iter > 0:
x0 = oldx
step *= 0.5 ** (start_iter / ANNEAL_EVERY)
if state:
random.setstate(state)
else:
start_iter = 0
x = x0
if not postprocessing:
postprocessing = lambda x: x
exploss = None
for iter in range(start_iter + 1, iterations + 1):
# You might want to print the progress every few iterations.
loss = None
### YOUR CODE HERE
loss,gradient=f(x)
x=x-gradient*step
### END YOUR CODE
x = postprocessing(x)
if iter % PRINT_EVERY == 0:
if not exploss:
exploss = loss
else:
exploss = .95 * exploss + .05 * loss
print("iter %d: %f" % (iter, exploss))
if iter % SAVE_PARAMS_EVERY == 0 and useSaved:
save_params(iter, x)
if iter % ANNEAL_EVERY == 0:
step *= 0.5
return x
def sanity_check():
quad = lambda x: (np.sum(x ** 2), x * 2)
print("Running sanity checks...")
t1 = sgd(quad, 0.5, 0.01, 1000, PRINT_EVERY=100)
print("test 1 result:", t1)
assert abs(t1) <= 1e-6
t2 = sgd(quad, 0.0, 0.01, 1000, PRINT_EVERY=100)
print("test 2 result:", t2)
assert abs(t2) <= 1e-6
t3 = sgd(quad, -1.5, 0.01, 1000, PRINT_EVERY=100)
print("test 3 result:", t3)
assert abs(t3) <= 1e-6
print("-" * 40)
print("ALL TESTS PASSED")
print("-" * 40)
sanity_check()
```
---
#### Organic molecule with SMILES input, CSEARCH performs conformational sampling with Fullmonte, QPREP creates Gaussian input files
###### Step 1: CSEARCH conformational sampling (creates SDF files)
```
import os, glob
from pathlib import Path
from aqme.csearch import csearch
from aqme.qprep import qprep
# set working directory and SMILES string
w_dir_main = Path(os.getcwd())
sdf_path = w_dir_main.joinpath('quinine')
smi = 'COC1=CC2=C(C=CN=C2C=C1)[C@H]([C@@H]3C[C@@H]4CCN3C[C@@H]4C=C)O'
# run CSEARCH conformational sampling, specifying:
# 1) Working directory (w_dir_main=w_dir_main)
# 2) PATH to create the new SDF files (destination=sdf_path)
# 3) SMILES string (smi=smi)
# 4) Name for the output SDF files (name='quinine')
# 5) Fullmonte sampling (program='fullmonte')
csearch(w_dir_main=w_dir_main,destination=sdf_path,
smi=smi,name='quinine',program='fullmonte')
```
###### Step 2: Writing Gaussian input files with the SDF obtained from CSEARCH
```
# set SDF filenames and directory where the new com files will be created
com_path = sdf_path.joinpath('com_files')
sdf_fullmonte_files = glob.glob(f'{sdf_path}/*.sdf')
# run QPREP input files generator, with:
# 1) Working directory (w_dir_main=sdf_path)
# 2) PATH to create the new COM files (destination=com_path)
# 3) Files to convert (files=sdf_fullmonte_files)
# 4) QM program for the input (program='gaussian')
# 5) Keyword line for the Gaussian inputs (qm_input='wb97xd/6-31+G* opt freq')
# 6) Memory to use in the calculations (mem='24GB')
# 7) Processors to use in the calcs (nprocs=8)
qprep(w_dir_main=sdf_path,destination=com_path,files=sdf_fullmonte_files,program='gaussian',
qm_input='wb97xd/6-31+G* opt freq',mem='24GB',nprocs=8)
```
###### Bonus 1: If you want to use the same functions using a YAML file that stores all the variables
```
# to load the variables from a YAML file, use the varfile option
csearch(varfile='FILENAME.yaml')
# for each option, specify it in the YAML file as follows:
# program='fullmonte' --> program: 'fullmonte'
# name='quinine' --> name: 'quinine'
# etc
```
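For illustration, a minimal YAML file for the CSEARCH call above might look like this (the keys are assumed to mirror the Python keyword arguments, following the mapping described in the comments; check the AQME documentation for the exact format):
```
# FILENAME.yaml -- hypothetical contents mirroring the csearch() call above
program: 'fullmonte'
name: 'quinine'
smi: 'COC1=CC2=C(C=CN=C2C=C1)[C@H]([C@@H]3C[C@@H]4CCN3C[C@@H]4C=C)O'
```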
###### Bonus 2: If you want to use the same functions through command lines
```
csearch(w_dir_main=w_dir_main,destination=sdf_path,
smi=smi,name='quinine',program='fullmonte')
# for each option, specify it in the command line as follows:
# program='fullmonte' --> --program 'fullmonte'
# name='quinine' --> --name quinine
# etc
# for example: python -m aqme --program fullmonte --smi COC1=CC2=C(C=CN=C2C=C1)[C@H]([C@@H]3C[C@@H]4CCN3C[C@@H]4C=C)O --name quinine
```
# Conjugate Priors
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
```
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
from utils import set_pyplot_params
set_pyplot_params()
```
In the previous chapters we have used grid approximations to solve a variety of problems.
One of my goals has been to show that this approach is sufficient to solve many real-world problems.
And I think it's a good place to start because it shows clearly how the methods work.
However, as we saw in the previous chapter, grid methods will only get you so far.
As we increase the number of parameters, the number of points in the grid grows (literally) exponentially.
With more than 3-4 parameters, grid methods become impractical.
So, in the remaining three chapters, I will present three alternatives:
1. In this chapter we'll use **conjugate priors** to speed up some of the computations we've already done.
2. In the next chapter, I'll present Markov chain Monte Carlo (MCMC) methods, which can solve problems with tens of parameters, or even hundreds, in a reasonable amount of time.
3. And in the last chapter we'll use Approximate Bayesian Computation (ABC) for problems that are hard to model with simple distributions.
We'll start with the World Cup problem.
## The World Cup Problem Revisited
In <<_PoissonProcesses>>, we solved the World Cup problem using a Poisson process to model goals in a soccer game as random events that are equally likely to occur at any point during a game.
We used a gamma distribution to represent the prior distribution of $\lambda$, the goal-scoring rate. And we used a Poisson distribution to compute the probability of $k$, the number of goals scored.
Here's a gamma object that represents the prior distribution.
```
from scipy.stats import gamma
alpha = 1.4
dist = gamma(alpha)
```
And here's a grid approximation.
```
import numpy as np
from utils import pmf_from_dist
lams = np.linspace(0, 10, 101)
prior = pmf_from_dist(dist, lams)
```
Here's the likelihood of scoring 4 goals for each possible value of `lam`.
```
from scipy.stats import poisson
k = 4
likelihood = poisson(lams).pmf(k)
```
And here's the update.
```
posterior = prior * likelihood
posterior.normalize()
```
So far, this should be familiar.
Now we'll solve the same problem using the conjugate prior.
## The Conjugate Prior
In <<_TheGammaDistribution>>, I presented three reasons to use a gamma distribution for the prior and said there was a fourth reason I would reveal later.
Well, now is the time.
The other reason I chose the gamma distribution is that it is the "conjugate prior" of the Poisson distribution, so-called because the two distributions are connected or coupled, which is what "conjugate" means.
In the next section I'll explain *how* they are connected, but first I'll show you the consequence of this connection, which is that there is a remarkably simple way to compute the posterior distribution.
However, in order to demonstrate it, we have to switch from the one-parameter version of the gamma distribution to the two-parameter version. Since the first parameter is called `alpha`, you might guess that the second parameter is called `beta`.
The following function takes `alpha` and `beta` and makes an object that represents a gamma distribution with those parameters.
```
def make_gamma_dist(alpha, beta):
"""Makes a gamma object."""
dist = gamma(alpha, scale=1/beta)
dist.alpha = alpha
dist.beta = beta
return dist
```
Here's the prior distribution with `alpha=1.4` again and `beta=1`.
```
alpha = 1.4
beta = 1
prior_gamma = make_gamma_dist(alpha, beta)
prior_gamma.mean()
```
Now I claim without proof that we can do a Bayesian update with `k` goals in `t` games just by making a gamma distribution with parameters `alpha+k` and `beta+t`.
```
def update_gamma(prior, data):
"""Update a gamma prior."""
k, t = data
alpha = prior.alpha + k
beta = prior.beta + t
return make_gamma_dist(alpha, beta)
```
Here's how we update it with `k=4` goals in `t=1` game.
```
data = 4, 1
posterior_gamma = update_gamma(prior_gamma, data)
```
After all the work we did with the grid, it might seem absurd that we can do a Bayesian update by adding two pairs of numbers.
So let's confirm that it works.
I'll make a `Pmf` with a discrete approximation of the posterior distribution.
```
posterior_conjugate = pmf_from_dist(posterior_gamma, lams)
```
The following figure shows the result along with the posterior we computed using the grid algorithm.
```
from utils import decorate
def decorate_rate(title=''):
decorate(xlabel='Goal scoring rate (lam)',
ylabel='PMF',
title=title)
posterior.plot(label='grid posterior', color='C1')
posterior_conjugate.plot(label='conjugate posterior',
color='C4', linestyle='dotted')
decorate_rate('Posterior distribution')
```
They are the same other than small differences due to floating-point approximations.
```
np.allclose(posterior, posterior_conjugate)
```
## What the Actual?
To understand how that works, we'll write the PDF of the gamma prior and the PMF of the Poisson likelihood, then multiply them together, because that's what the Bayesian update does.
We'll see that the result is a gamma distribution, and we'll derive its parameters.
Here's the PDF of the gamma prior, which is the probability density for each value of $\lambda$, given parameters $\alpha$ and $\beta$:
$$\lambda^{\alpha-1} e^{-\lambda \beta}$$
I have omitted the normalizing factor; since we are planning to normalize the posterior distribution anyway, we don't really need it.
Now suppose a team scores $k$ goals in $t$ games.
The probability of this data is given by the PMF of the Poisson distribution, which is a function of $k$ with $\lambda$ and $t$ as parameters.
$$\lambda^k e^{-\lambda t}$$
Again, I have omitted the normalizing factor, which makes it clearer that the gamma and Poisson distributions have the same functional form.
When we multiply them together, we can pair up the factors and add up the exponents.
The result is the unnormalized posterior distribution,
$$\lambda^{\alpha-1+k} e^{-\lambda(\beta + t)}$$
which we can recognize as an unnormalized gamma distribution with parameters $\alpha + k$ and $\beta + t$.
This derivation provides insight into what the parameters of the posterior distribution mean: $\alpha$ reflects the number of events that have occurred; $\beta$ reflects the elapsed time.
## Binomial Likelihood
As a second example, let's look again at the Euro problem.
When we solved it with a grid algorithm, we started with a uniform prior:
```
from utils import make_uniform
xs = np.linspace(0, 1, 101)
uniform = make_uniform(xs, 'uniform')
```
We used the binomial distribution to compute the likelihood of the data, which was 140 heads out of 250 attempts.
```
from scipy.stats import binom
k, n = 140, 250
xs = uniform.qs
likelihood = binom.pmf(k, n, xs)
```
Then we computed the posterior distribution in the usual way.
```
posterior = uniform * likelihood
posterior.normalize()
```
We can solve this problem more efficiently using the conjugate prior of the binomial distribution, which is the beta distribution.
The beta distribution is bounded between 0 and 1, so it works well for representing the distribution of a probability like `x`.
It has two parameters, called `alpha` and `beta`, that determine the shape of the distribution.
SciPy provides an object called `beta` that represents a beta distribution.
The following function takes `alpha` and `beta` and returns a new `beta` object.
```
import scipy.stats
def make_beta(alpha, beta):
"""Makes a beta object."""
dist = scipy.stats.beta(alpha, beta)
dist.alpha = alpha
dist.beta = beta
return dist
```
It turns out that the uniform distribution, which we used as a prior, is the beta distribution with parameters `alpha=1` and `beta=1`.
So we can make a `beta` object that represents a uniform distribution, like this:
```
alpha = 1
beta = 1
prior_beta = make_beta(alpha, beta)
```
Now let's figure out how to do the update. As in the previous example, we'll write the PDF of the prior distribution and the PMF of the likelihood function, and multiply them together. We'll see that the product has the same form as the prior, and we'll derive its parameters.
Here is the PDF of the beta distribution, which is a function of $x$ with $\alpha$ and $\beta$ as parameters.
$$x^{\alpha-1} (1-x)^{\beta-1}$$
Again, I have omitted the normalizing factor, which we don't need because we are going to normalize the distribution after the update.
And here's the PMF of the binomial distribution, which is a function of $k$ with $n$ and $x$ as parameters.
$$x^{k} (1-x)^{n-k}$$
Again, I have omitted the normalizing factor.
Now when we multiply the beta prior and the binomial likelihood, the result is
$$x^{\alpha-1+k} (1-x)^{\beta-1+n-k}$$
which we recognize as an unnormalized beta distribution with parameters $\alpha+k$ and $\beta+n-k$.
So if we observe `k` successes in `n` trials, we can do the update by making a beta distribution with parameters `alpha+k` and `beta+n-k`.
That's what this function does:
```
def update_beta(prior, data):
"""Update a beta distribution."""
k, n = data
alpha = prior.alpha + k
beta = prior.beta + n - k
return make_beta(alpha, beta)
```
Again, the conjugate prior gives us insight into the meaning of the parameters; $\alpha$ is related to the number of observed successes; $\beta$ is related to the number of failures.
Here's how we do the update with the observed data.
```
data = 140, 250
posterior_beta = update_beta(prior_beta, data)
```
To confirm that it works, I'll evaluate the posterior distribution for the possible values of `xs` and put the results in a `Pmf`.
```
posterior_conjugate = pmf_from_dist(posterior_beta, xs)
```
And we can compare the posterior distribution we just computed with the results from the grid algorithm.
```
def decorate_euro(title):
decorate(xlabel='Proportion of heads (x)',
ylabel='Probability',
title=title)
posterior.plot(label='grid posterior', color='C1')
posterior_conjugate.plot(label='conjugate posterior',
color='C4', linestyle='dotted')
decorate_euro(title='Posterior distribution of x')
```
They are the same other than small differences due to floating-point approximations.
The examples so far are problems we have already solved, so let's try something new.
```
np.allclose(posterior, posterior_conjugate)
```
## Lions and Tigers and Bears
Suppose we visit a wild animal preserve where we know that the only animals are lions and tigers and bears, but we don't know how many of each there are.
During the tour, we see 3 lions, 2 tigers, and one bear. Assuming that every animal had an equal chance to appear in our sample, what is the probability that the next animal we see is a bear?
To answer this question, we'll use the data to estimate the prevalence of each species, that is, what fraction of the animals belong to each species.
If we know the prevalences, we can use the multinomial distribution to compute the probability of the data.
For example, suppose we know that the fraction of lions, tigers, and bears is 0.4, 0.3, and 0.3, respectively.
In that case the probability of the data is:
```
from scipy.stats import multinomial
data = 3, 2, 1
n = np.sum(data)
ps = 0.4, 0.3, 0.3
multinomial.pmf(data, n, ps)
```
Now, we could choose a prior for the prevalences and do a Bayesian update using the multinomial distribution to compute the probability of the data.
But there's an easier way, because the multinomial distribution has a conjugate prior: the Dirichlet distribution.
## The Dirichlet Distribution
The Dirichlet distribution is a multivariate distribution, like the multivariate normal distribution we used in <<_MultivariateNormalDistribution>> to describe the distribution of penguin measurements.
In that example, the quantities in the distribution are pairs of flipper length and culmen length, and the parameters of the distribution are a vector of means and a matrix of covariances.
In a Dirichlet distribution, the quantities are vectors of probabilities, $\mathbf{x}$, and the parameter is a vector, $\mathbf{\alpha}$.
An example will make that clearer. SciPy provides a `dirichlet` object that represents a Dirichlet distribution.
Here's an instance with $\mathbf{\alpha} = 1, 2, 3$.
```
from scipy.stats import dirichlet
alpha = 1, 2, 3
dist = dirichlet(alpha)
```
Since we provided three parameters, the result is a distribution of three variables.
If we draw a random value from this distribution, like this:
```
rv = dist.rvs()
rv, rv.sum()
```
The result is an array of three values.
They are bounded between 0 and 1, and they always add up to 1, so they can be interpreted as the probabilities of a set of outcomes that are mutually exclusive and collectively exhaustive.
Let's see what the distributions of these values look like. I'll draw 1000 random vectors from this distribution, like this:
```
sample = dist.rvs(1000)
sample.shape
```
The result is an array with 1000 rows and three columns. I'll compute the `Cdf` of the values in each column.
```
from empiricaldist import Cdf
cdfs = [Cdf.from_seq(col)
for col in sample.transpose()]
```
The result is a list of `Cdf` objects that represent the marginal distributions of the three variables. Here's what they look like.
```
for i, cdf in enumerate(cdfs):
label = f'Column {i}'
cdf.plot(label=label)
decorate()
```
Column 0, which corresponds to the lowest parameter, contains the lowest probabilities.
Column 2, which corresponds to the highest parameter, contains the highest probabilities.
As it turns out, these marginal distributions are beta distributions.
The following function takes a sequence of parameters, `alpha`, and computes the marginal distribution of variable `i`:
```
def marginal_beta(alpha, i):
"""Compute the ith marginal of a Dirichlet distribution."""
total = np.sum(alpha)
return make_beta(alpha[i], total-alpha[i])
```
We can use it to compute the marginal distribution for the three variables.
```
marginals = [marginal_beta(alpha, i)
for i in range(len(alpha))]
```
The following plot shows the CDF of these distributions as gray lines and compares them to the CDFs of the samples.
```
xs = np.linspace(0, 1, 101)
for i in range(len(alpha)):
label = f'Column {i}'
pmf = pmf_from_dist(marginals[i], xs)
pmf.make_cdf().plot(color='C5')
cdf = cdfs[i]
cdf.plot(label=label, style=':')
decorate()
```
This confirms that the marginals of the Dirichlet distribution are beta distributions.
And that's useful because the Dirichlet distribution is the conjugate prior for the multinomial likelihood function.
If the prior distribution is Dirichlet with parameter vector `alpha` and the data is a vector of observations, `data`, the posterior distribution is Dirichlet with parameter vector `alpha + data`.
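As a minimal sketch of this update rule (using the example parameters from above with made-up counts, not the exercise data):

```python
import numpy as np
from scipy.stats import dirichlet

alpha = np.array([1, 2, 3])   # prior parameters from the example above
data = np.array([2, 1, 0])    # hypothetical observed counts

# the conjugate update: just add the counts to the parameter vector
posterior = dirichlet(alpha + data)
print(posterior.mean())       # posterior mean prevalences; they sum to 1
```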
As an exercise at the end of this chapter, you can use this method to solve the Lions and Tigers and Bears problem.
## Summary
After reading this chapter, if you feel like you've been tricked, I understand. It turns out that many of the problems in this book can be solved with just a few arithmetic operations. So why did we go to all the trouble of using grid algorithms?
Sadly, there are only a few problems we can solve with conjugate priors; in fact, this chapter includes most of the ones that are useful in practice.
For the vast majority of problems, there is no conjugate prior and no shortcut to compute the posterior distribution.
That's why we need grid algorithms and the methods in the next two chapters, Markov chain Monte Carlo (MCMC) and Approximate Bayesian Computation (ABC).
## Exercises
**Exercise:** In the second version of the World Cup problem, the data we use for the update is not the number of goals in a game, but the time until the first goal.
So the probability of the data is given by the exponential distribution rather than the Poisson distribution.
But it turns out that the gamma distribution is *also* the conjugate prior of the exponential distribution, so there is a simple way to compute this update, too.
The PDF of the exponential distribution is a function of $t$ with $\lambda$ as a parameter.
$$\lambda e^{-\lambda t}$$
Multiply the PDF of the gamma prior by this likelihood, confirm that the result is an unnormalized gamma distribution, and see if you can derive its parameters.
Write a few lines of code to update `prior_gamma` with the data from this version of the problem, which was a first goal after 11 minutes and a second goal after an additional 12 minutes.
Remember to express these quantities in units of games, which are approximately 90 minutes.
```
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
```
**Exercise:** For problems like the Euro problem where the likelihood function is binomial, we can do a Bayesian update with just a few arithmetic operations, but only if the prior is a beta distribution.
If we want a uniform prior, we can use a beta distribution with `alpha=1` and `beta=1`.
But what can we do if the prior distribution we want is not a beta distribution?
For example, in <<_TrianglePrior>> we also solved the Euro problem with a triangle prior, which is not a beta distribution.
In these cases, we can often find a beta distribution that is a good-enough approximation for the prior we want.
See if you can find a beta distribution that fits the triangle prior, then update it using `update_beta`.
Use `pmf_from_dist` to make a `Pmf` that approximates the posterior distribution and compare it to the posterior we just computed using a grid algorithm. How big is the largest difference between them?
Here's the triangle prior again.
```
from empiricaldist import Pmf
ramp_up = np.arange(50)
ramp_down = np.arange(50, -1, -1)
a = np.append(ramp_up, ramp_down)
xs = uniform.qs
triangle = Pmf(a, xs, name='triangle')
triangle.normalize()
```
And here's the update.
```
k, n = 140, 250
likelihood = binom.pmf(k, n, xs)
posterior = triangle * likelihood
posterior.normalize()
```
To get you started, here's the beta distribution that we used as a uniform prior.
```
alpha = 1
beta = 1
prior_beta = make_beta(alpha, beta)
prior_beta.mean()
```
And here's what it looks like compared to the triangle prior.
```
prior_pmf = pmf_from_dist(prior_beta, xs)
triangle.plot(label='triangle')
prior_pmf.plot(label='beta')
decorate_euro('Prior distributions')
```
Now you take it from there.
```
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
```
**Exercise:** [3Blue1Brown](https://en.wikipedia.org/wiki/3Blue1Brown) is a YouTube channel about math; if you are not already aware of it, I recommend it highly.
In [this video](https://www.youtube.com/watch?v=8idr1WZ1A7Q) the narrator presents this problem:
> You are buying a product online and you see three sellers offering the same product at the same price. One of them has a 100% positive rating, but with only 10 reviews. Another has a 96% positive rating with 50 total reviews. And yet another has a 93% positive rating, but with 200 total reviews.
>
>Which one should you buy from?
Let's think about how to model this scenario. Suppose each seller has some unknown probability, `x`, of providing satisfactory service and getting a positive rating, and we want to choose the seller with the highest value of `x`.
This is not the only model for this scenario, and it is not necessarily the best. An alternative would be something like item response theory, where sellers have varying ability to provide satisfactory service and customers have varying difficulty of being satisfied.
But the first model has the virtue of simplicity, so let's see where it gets us.
1. As a prior, I suggest a beta distribution with `alpha=8` and `beta=2`. What does this prior look like and what does it imply about sellers?
2. Use the data to update the prior for the three sellers and plot the posterior distributions. Which seller has the highest posterior mean?
3. How confident should we be about our choice? That is, what is the probability that the seller with the highest posterior mean actually has the highest value of `x`?
4. Consider a beta prior with `alpha=0.7` and `beta=0.5`. What does this prior look like and what does it imply about sellers?
5. Run the analysis again with this prior and see what effect it has on the results.
Note: When you evaluate the beta distribution, you should restrict the range of `xs` so it does not include 0 and 1. When the parameters of the beta distribution are less than 1, the probability density goes to infinity at 0 and 1. From a mathematical point of view, that's not a problem; it is still a proper probability distribution. But from a computational point of view, it means we have to avoid evaluating the PDF at 0 and 1.
```
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
```
**Exercise:** Use a Dirichlet prior with parameter vector `alpha = [1, 1, 1]` to solve the Lions and Tigers and Bears problem:
>Suppose we visit a wild animal preserve where we know that the only animals are lions and tigers and bears, but we don't know how many of each there are.
>
>During the tour, we see three lions, two tigers, and one bear. Assuming that every animal had an equal chance to appear in our sample, estimate the prevalence of each species.
>
>What is the probability that the next animal we see is a bear?
```
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
```
# Proving Universality
## Contents
1. [Introduction](#intro)
2. [Fun With Matrices](#fun)
2.1 [Matrices as outer products](#outer)
2.2 [Unitary and Hermitian matrices](#u-and-h)
    2.3 [Pauli decomposition](#pauli)
3. [Defining Universality](#defining)
4. [Basic Gate Sets](#basic)
4.1 [Clifford Gates](#big-red)
4.2 [Non-Clifford Gates](#non-clifford)
4.3 [Expanding the Gate Set](#expanding)
5. [Proving Universality](#proving)
6. [Universal Sets of Quantum Gates](#gate-sets)
## 1. Introduction <a id='intro'></a>
What does it mean for a computer to be able to do everything that it could possibly do? This was a question tackled by Alan Turing before we even had a good idea of what a computer was, or how to build one.
To ask this question for our classical computers, and specifically for our standard digital computers, we need to strip away all the screens, speakers and fancy input devices. What we are left with is simply a machine that converts input bit strings into output bit strings. If a device can perform any such conversion, taking any arbitrary set of inputs and converting them to an arbitrarily chosen set of corresponding outputs, we call it *universal*.
Quantum computers similarly take input states and convert them into output states. We will therefore be able to define universality in a similar way. To be more precise, and to be able to prove when universality can and cannot be achieved, it is useful to use the matrix representation of our quantum gates. But first we'll need to brush up on a few techniques.
## 2. Fun With Matrices <a id='fun'></a>
### 2.1 Matrices as outer products <a id='outer'></a>
In previous sections we have calculated many inner products, such as $\langle0|0\rangle =1$. These combine a bra and a ket to give us a single number. We can also combine them in a way that gives us a matrix, simply by putting them in the opposite order. This is called an outer product, and works by standard matrix multiplication. For example
$$
|0\rangle\langle0|= \begin{pmatrix} 1 \\\\ 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \end{pmatrix} = \begin{pmatrix} 1&0 \\\\ 0&0 \end{pmatrix},\\\\
|0\rangle\langle1| = \begin{pmatrix} 1 \\\\ 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \end{pmatrix} = \begin{pmatrix} 0&1 \\\\ 0&0 \end{pmatrix},\\\\
|1\rangle\langle0| = \begin{pmatrix} 0 \\\\ 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \end{pmatrix} = \begin{pmatrix} 0&0 \\\\ 1&0 \end{pmatrix},\\\\
|1\rangle\langle1| = \begin{pmatrix} 0 \\\\ 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \end{pmatrix} = \begin{pmatrix} 0&0 \\\\ 0&1 \end{pmatrix}.\\\\
$$
This also means that we can write any matrix purely in terms of outer products. In the examples above, we constructed the four matrices that cover each of the single elements in a single-qubit matrix, so we can write any other single-qubit matrix in terms of them.
$$
M= \begin{pmatrix} m_{0,0}&m_{0,1} \\\\ m_{1,0}&m_{1,1} \end{pmatrix} = m_{0,0} |0\rangle\langle0|+ m_{0,1} |0\rangle\langle1|+ m_{1,0} |1\rangle\langle0|+ m_{1,1} |1\rangle\langle1|
$$
This property also extends to matrices for any number of qubits, $n$. We simply use the outer products of the corresponding $n$-bit strings.
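As a quick numerical check of this idea (a NumPy sketch, not part of the original text), we can rebuild an arbitrary single-qubit matrix from the four outer products:

```python
import numpy as np

# basis kets as column vectors
ket0 = np.array([[1], [0]])
ket1 = np.array([[0], [1]])

# outer products |a><b|: a ket times the conjugate transpose (a bra)
P00 = ket0 @ ket0.conj().T   # |0><0|
P01 = ket0 @ ket1.conj().T   # |0><1|
P10 = ket1 @ ket0.conj().T   # |1><0|
P11 = ket1 @ ket1.conj().T   # |1><1|

# rebuild an arbitrary single-qubit matrix from its elements
M = np.array([[1.0, 2.0], [3.0, 4.0]])
M_rebuilt = M[0, 0]*P00 + M[0, 1]*P01 + M[1, 0]*P10 + M[1, 1]*P11
print(np.allclose(M, M_rebuilt))  # True
```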
### 2.2 Unitary and Hermitian matrices <a id='u-and-h'></a>
The Hermitian conjugate $M^\dagger$ of a matrix $M$ is the combination of the transpose (replace the top left element with the bottom right, and so on) and the complex conjugate of each element. Two families of matrices that are very important to quantum computing are defined by their relationship with the Hermitian conjugate. One is the family of unitary matrices, for which
$$
U U^\dagger = U^\dagger U = 1.
$$
This means that the Hermitian conjugate of a unitary is its inverse: another unitary $U^\dagger$ with the power to undo the effects of $U$. All gates in quantum computing, with the exception of measurement and reset operations, can be represented by unitary matrices.
Another consequence of unitarity is that it preserves the inner product between two arbitrary states. Specifically, take two states $\left| \psi_0 \right\rangle$ and $\left| \psi_1 \right\rangle$. The inner product of these is $\left\langle \psi_0 | \psi_1 \right\rangle$. If we apply the same unitary $U$ to both, the inner product of the resulting states is exactly the same,
$$
\left( \left\langle \psi_0 \right| U^\dagger \right) \left( U \left| \psi_1 \right\rangle \right) = \left\langle \psi_0 |U^\dagger U| \psi_1 \right\rangle = \left\langle \psi_0 | \psi_1 \right\rangle.
$$
This property provides us with a useful way of thinking about these gates. It means that for any set of states $\{ \left| \psi_j \right\rangle \}$ that provide an orthonormal basis for our system, the set of states $\{ \left| \phi_j \right\rangle = U \left| \psi_j \right\rangle \}$ will also be an orthonormal basis. The unitary can then be thought of as a rotation between these bases, and can be written accordingly as
$$
U = \sum_j \left| \phi_j \right\rangle \left\langle \psi_j \right|.
$$
This is essentially the quantum version of the 'truth tables' that describe the action of classical Boolean gates.
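To make the 'rotation between bases' picture concrete, here is a small NumPy sketch (the choice of bases is an assumed example): building $U = \sum_j |\phi_j\rangle\langle\psi_j|$ with the computational basis as input and the $|+\rangle, |-\rangle$ states as output reproduces the Hadamard gate.

```python
import numpy as np

# input basis: the computational basis |0>, |1>
psi = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# output basis: the |+>, |-> states
phi = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

# U = sum_j |phi_j><psi_j|
U = sum(np.outer(p, q.conj()) for p, q in zip(phi, psi))

print(np.allclose(U @ U.conj().T, np.eye(2)))  # True: U is unitary
print(np.allclose(U, np.array([[1, 1], [1, -1]]) / np.sqrt(2)))  # True: it is the Hadamard
```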
The other important family of matrices are the Hermitian matrices. These are those that are unaffected by the Hermitian conjugate
$$
H = H^\dagger.
$$
The matrices $X$, $Y$, $Z$ and $H$ are examples of Hermitian matrices that you've already seen (coincidentally, they are also all unitary since they are their own inverses).
All unitary and Hermitian matrices have the property of being diagonalizable. This means that they can be written in the form
$$
M = \sum_j \lambda_j |h_j\rangle\langle h_j|,
$$
where the $\lambda_j$ are the eigenvalues of the matrix and $|h_j\rangle$ are the corresponding eigenstates.
For unitaries, applying the $U U^\dagger=1$ condition in this diagonal form implies that $\lambda_j \lambda_j^* = 1$. The eigenvalues are therefore always complex numbers of magnitude 1, and so can be expressed $e^{ih_j}$ for some real value $h_j$. For Hermitian matrices the condition $H = H^\dagger$ implies $\lambda_j = \lambda_j^*$, and hence that all eigenvalues are real.
These two types of matrices therefore differ only in that one must have real numbers for eigenvalues, and the other must have complex exponentials of real numbers. This means that, for every unitary, we can define a corresponding Hermitian matrix. For this we simply reuse the same eigenstates, and use the $h_j$ from each $e^{ih_j}$ as the corresponding eigenvalue.
Similarly, for each Hermitian matrix there exists a unitary. We simply reuse the same eigenstates, and exponentiate the $h_j$ to create the eigenvalues $e^{ih_j}$. This can be expressed as
$$
U = e^{iH}
$$
Here we have used the standard definition of how to exponentiate a matrix, which has exactly the properties we require: preserving the eigenstates and exponentiating the eigenvalues.
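As a sketch (assuming SciPy's `expm` for the matrix exponential), we can check numerically that exponentiating $i$ times a Hermitian matrix yields a unitary:

```python
import numpy as np
from scipy.linalg import expm

# an arbitrary Hermitian matrix
H = np.array([[1.0, 0.5], [0.5, -1.0]])
assert np.allclose(H, H.conj().T)  # it is its own Hermitian conjugate

# exponentiate i*H; the result should be unitary
U = expm(1j * H)
print(np.allclose(U @ U.conj().T, np.eye(2)))  # True
```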
### 2.3 Pauli decomposition <a id='pauli'></a>
As we saw above, it is possible to write matrices entirely in terms of outer products.
$$
M= \begin{pmatrix} m_{0,0}&m_{0,1} \\\\ m_{1,0}&m_{1,1} \end{pmatrix} = m_{0,0} |0\rangle\langle0|+ m_{0,1} |0\rangle\langle1|+ m_{1,0} |1\rangle\langle0|+ m_{1,1} |1\rangle\langle1|
$$
Now we will see that it is also possible to write them completely in terms of Pauli operators. For this, the key thing to note is that
$$
\frac{1+Z}{2} = \frac{1}{2}\left[ \begin{pmatrix} 1&0 \\\\0&1 \end{pmatrix}+\begin{pmatrix} 1&0 \\\\0&-1 \end{pmatrix}\right] = |0\rangle\langle0|,\\\\\frac{1-Z}{2} = \frac{1}{2}\left[ \begin{pmatrix} 1&0 \\\\0&1 \end{pmatrix}-\begin{pmatrix} 1&0 \\\\0&-1 \end{pmatrix}\right] = |1\rangle\langle1|
$$
This shows that $|0\rangle\langle0|$ and $|1\rangle\langle1|$ can be expressed using the identity matrix and $Z$. Now, using the property that $X|0\rangle = |1\rangle$, we can also produce
$$
|0\rangle\langle1| = |0\rangle\langle0|X = \frac{1}{2}(1+Z)~X = \frac{X+iY}{2},\\\\
|1\rangle\langle0| = X|0\rangle\langle0| = X~\frac{1}{2}(1+Z) = \frac{X-iY}{2}.
$$
Since we have all the outer products, we can now use this to write the matrix in terms of Pauli matrices:
$$
M = \frac{m_{0,0}+m_{1,1}}{2}~1~+~\frac{m_{0,1}+m_{1,0}}{2}~X~+~i\frac{m_{0,1}-m_{1,0}}{2}~Y~+~\frac{m_{0,0}-m_{1,1}}{2}~Z.
$$
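A short NumPy sketch confirms these coefficient formulas for an arbitrary (here, randomly chosen) single-qubit matrix:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

# An arbitrary single-qubit matrix
M = np.array([[1+2j, 3], [4j, 5]])

# Coefficients from the formula above
c_I = (M[0, 0] + M[1, 1]) / 2
c_X = (M[0, 1] + M[1, 0]) / 2
c_Y = 1j * (M[0, 1] - M[1, 0]) / 2
c_Z = (M[0, 0] - M[1, 1]) / 2

# Reconstruct M from its Pauli decomposition
M_rec = c_I * I + c_X * X + c_Y * Y + c_Z * Z
print(np.allclose(M, M_rec))  # True
```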
This example was for a general single-qubit matrix, but the corresponding result is true also for matrices for any number of qubits. We simply start from the observation that
$$
\left(\frac{1+Z}{2}\right)\otimes\left(\frac{1+Z}{2}\right)\otimes\ldots\otimes\left(\frac{1+Z}{2}\right) = |00\ldots0\rangle\langle00\ldots0|,
$$
and can then proceed in the same manner as above. In the end it can be shown that any matrix can be expressed in terms of tensor products of Pauli matrices:
$$
M = \sum_{P_{n-1},\ldots,P_0 \in \{1,X,Y,Z\}} C_{P_{n-1}\ldots,P_0}~~P_{n-1} \otimes P_{n-2}\otimes\ldots\otimes P_0.
$$
For Hermitian matrices, note that the coefficients $C_{P_{n-1}\ldots,P_0}$ here will all be real.
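The coefficients can also be computed directly: tensor products of Paulis are orthogonal under the trace inner product, so $C_{P_1 P_0} = \mathrm{Tr}[(P_1\otimes P_0)\,M]/2^n$. A small NumPy sketch for two qubits checks that the coefficients of a Hermitian matrix are indeed real, and that they reconstruct the matrix:

```python
import numpy as np
from itertools import product

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
paulis = {'I': I, 'X': X, 'Y': Y, 'Z': Z}

# A random two-qubit Hermitian matrix
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

# C_{P1 P0} = Tr[(P1 ⊗ P0) H] / 2^n, using trace orthogonality of the Paulis
coeffs = {p1 + p0: np.trace(np.kron(paulis[p1], paulis[p0]) @ H) / 4
          for p1, p0 in product('IXYZ', repeat=2)}

# All coefficients of a Hermitian matrix are real (up to rounding)
print(all(abs(c.imag) < 1e-12 for c in coeffs.values()))  # True

# Reconstruct H from the decomposition
H_rec = sum(c * np.kron(paulis[k[0]], paulis[k[1]]) for k, c in coeffs.items())
print(np.allclose(H, H_rec))  # True
```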
## 3. Defining Universality <a id='defining'></a>
Just as each quantum gate can be represented by a unitary, so too can we describe an entire quantum computation by a (very large) unitary operation. The effect of this is to rotate the input state to the output state.
One possible special case of this is that the input and output states describe bit strings expressed in quantum form. The mapping of each input $x$ to its output $f(x)$ could be described by some (reversible) classical computation. Any such computation could therefore be expressed as a unitary.
$$
U = \sum_x \left| f(x) \right\rangle \left\langle x \right|.
$$
If we were able to implement any possible unitary, it would therefore mean we could achieve universality in the sense of standard digital computers.
Another special case is that the input and output states could describe a physical system, and the computation we perform is to simulate the dynamics of that system. This is an important problem that is impractical for classical computers, but is a natural application of quantum computers. The time evolution of the system in this case corresponds to the unitary that we apply, and the associated Hermitian matrix is the Hamiltonian of the system. Achieving any unitary would therefore correspond to simulating any time evolution, and engineering the effects of any Hamiltonian.
Combining these insights we can define what it means for quantum computers to be universal. It is simply the ability to achieve any desired unitary on any arbitrary number of qubits. If we have this, we know that we can reproduce anything a digital computer can do, simulate any quantum system, and do everything else that is possible for a quantum computer.
## 4. Basic Gate Sets <a id='basic'></a>
Whether or not we can build any unitary from a set of basic gates depends greatly on which basic gates we have access to. For every possible realization of fault-tolerant quantum computing, there is a set of quantum operations that are most straightforward to realize. Often these consist of single- and two-qubit gates, most of which correspond to the set of so-called *Clifford gates*. This is a very important set of operations, which do a lot of the heavy lifting in any quantum algorithm.
### 4.1 Clifford Gates <a id='big-red'></a>
To understand Clifford gates, let's start with an example that you have already seen many times: the Hadamard.
$$
H = |+\rangle\langle0|~+~ |-\rangle\langle1| = |0\rangle\langle+|~+~ |1\rangle\langle-|.
$$
This gate is expressed above using outer products, as described above. When expressed in this form, its famous effect becomes obvious: it takes $|0\rangle$, and rotates it to $|+\rangle$. More generally, we can say it rotates the basis states of the z measurement, $\{ |0\rangle,|1\rangle \}$, to the basis states of the x measurement, $\{ |+\rangle,|-\rangle \}$, and vice versa.
In this way, the effect of the Hadamard is to move information around a qubit. It swaps any information that would previously be accessed by an x measurement with that accessed by a z measurement.
The Hadamard can be combined with other gates to perform different operations, for example:
$$
H X H = Z,\\\\
H Z H = X.
$$
By doing a Hadamard before and after an $X$, we cause the action it previously applied to the z basis states to be transferred to the x basis states instead. The combined effect is then identical to that of a $Z$. Similarly, we can create an $X$ from Hadamards and a $Z$.
Similar behavior can be seen for the $S$ gate and its Hermitian conjugate,
$$
S X S^{\dagger} = Y,\\\\
S Y S^{\dagger} = -X,\\\\
S Z S^{\dagger} = Z.
$$
This has a similar effect to the Hadamard, except that it swaps $X$ and $Y$ instead of $X$ and $Z$. In combination with the Hadamard, we could then make a composite gate that shifts information between y and z.
This property of transforming Paulis into other Paulis is the defining feature of Clifford gates. Stated explicitly for the single-qubit case: if $U$ is a Clifford and $P$ is a Pauli, $U P U^{\dagger}$ will also be a Pauli. For Hermitian gates, like the Hadamard, we can simply use $U P U$.
Further examples of single-qubit Clifford gates are the Paulis themselves. These do not transform the Pauli they act on. Instead, they simply assign a phase of $-1$ to the two that they anticommute with. For example,
$$
Z X Z = -X,\\\\
Z Y Z = -Y,\\\\
Z Z Z= ~~~~Z.
$$
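All of these conjugation relations are simple to verify numerically using the matrix representations of the gates:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])

print(np.allclose(H @ X @ H, Z))           # H X H = Z -> True
print(np.allclose(S @ X @ S.conj().T, Y))  # S X S† = Y -> True
print(np.allclose(Z @ X @ Z, -X))          # Z X Z = -X -> True
```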
You may have noticed that a similar phase also arose in the effect of the $S$ gate. By combining this with a Pauli, we could make a composite gate that would cancel this phase, and swap $X$ and $Y$ in a way more similar to the Hadamard's swap of $X$ and $Z$.
For multiple-qubit Clifford gates, the defining property is that they transform tensor products of Paulis to other tensor products of Paulis. For example, the most prominent two-qubit Clifford gate is the CNOT. The property of this that we will make use of in this chapter is
$$
{ CX}_{j,k}~ (X \otimes 1)~{ CX}_{j,k} = X \otimes X.
$$
This effectively 'copies' an $X$ from the control qubit over to the target.
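We can check this identity directly with a Kronecker-product calculation (taking the control qubit as the left tensor factor):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
# CNOT with the control as the left tensor factor
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])

# CX (X ⊗ 1) CX = X ⊗ X: an X on the control is 'copied' to the target
print(np.allclose(CX @ np.kron(X, I2) @ CX, np.kron(X, X)))  # True
```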
The process of sandwiching a matrix between a unitary and its Hermitian conjugate is known as conjugation by that unitary. This process transforms the eigenstates of the matrix, but leaves the eigenvalues unchanged. The reason why conjugation by Cliffords can transform between Paulis is because all Paulis share the same set of eigenvalues.
### 4.2 Non-Clifford Gates <a id='non-clifford'></a>
The Clifford gates are very important, but they are not powerful enough on their own. In order to do any quantum computation, we need gates that are not Cliffords. Three important examples are arbitrary rotations around the three axes of the qubit: $R_x(\theta)$, $R_y(\theta)$ and $R_z(\theta)$.
Let's focus on $R_x(\theta)$. As we saw above, any unitary can be expressed in an exponential form using a Hermitian matrix. For this gate, we find
$$
R_x(\theta) = e^{i \frac{\theta}{2} X}.
$$
The last section also showed us that the unitary and its corresponding Hermitian matrix have the same eigenstates. In this section, we've seen that conjugation by a unitary transforms eigenstates and leaves eigenvalues unchanged. With this in mind, it can be shown that
$$
U R_x(\theta)U^\dagger = e^{i \frac{\theta}{2} ~U X U^\dagger}.
$$
By conjugating this rotation by a Clifford, we can therefore transform it to the same rotation around another axis. So even if we didn't have a direct way to perform $R_y(\theta)$ and $R_z(\theta)$, we could do it with $R_x(\theta)$ combined with Clifford gates. This technique of boosting the power of non-Clifford gates by combining them with Clifford gates is one that we make great use of in quantum computing.
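For example, conjugating by a Hadamard turns an x rotation into a z rotation, since $HXH = Z$. A NumPy sketch, using this chapter's sign convention $R_x(\theta)=e^{i\theta X/2}$ (note that Qiskit's `rx` gate uses the opposite sign in the exponent):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def rot(P, theta):
    """e^{i theta P / 2} for a Pauli P (this chapter's convention)."""
    return np.cos(theta / 2) * I2 + 1j * np.sin(theta / 2) * P

theta = 0.7
# H R_x(theta) H = R_z(theta), because H X H = Z
print(np.allclose(H @ rot(X, theta) @ H, rot(Z, theta)))  # True
```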
### 4.3 Expanding the Gate Set <a id='expanding'></a>
As another example of combining $R_x(\theta)$ with Cliffords, let's conjugate it with a CNOT.
$$
CX_{j,k} ~(R_x(\theta) \otimes 1)~ CX_{j,k} = CX_{j,k} ~ e^{i \frac{\theta}{2} ~ (X\otimes 1)}~ CX_{j,k} = e^{i \frac{\theta}{2} ~CX_{j,k} ~ (X\otimes 1)~ CX_{j,k}} = e^{i \frac{\theta}{2} ~ X\otimes X}
$$
This transforms our simple, single-qubit rotation into a much more powerful two-qubit gate. This is not just equivalent to performing the same rotation independently on both qubits. Instead, it is a gate capable of generating and manipulating entangled states.
We needn't stop there. We can use the same trick to extend the operation to any number of qubits. All that's needed is more conjugations by CNOTs to keep copying the $X$ over to new qubits.
Furthermore, we can use single-qubit Cliffords to transform the Pauli on different qubits. For example, in our two-qubit example we could conjugate by $S$ on the qubit on the right to turn the $X$ there into a $Y$:
$$
\left( I \otimes S \right) ~e^{i \frac{\theta}{2} ~ X\otimes X}~\left( I \otimes S^\dagger \right) = e^{i \frac{\theta}{2} ~ X\otimes Y}.
$$
With these techniques, we can make complex entangling operations that act on any arbitrary number of qubits, of the form
$$
U = e^{i\frac{\theta}{2} ~ P_{n-1}\otimes P_{n-2}\otimes...\otimes P_0}, ~~~ P_j \in \{I,X,Y,Z\}.
$$
This all goes to show that combining the single and two-qubit Clifford gates with rotations around the x axis gives us a powerful set of possibilities. What's left to demonstrate is that we can use them to do anything.
## 5. Proving Universality <a id='proving'></a>
As for classical computers, we will need to split this big job up into manageable chunks. We'll need to find a basic set of gates that will allow us to achieve this. As we'll see, the single- and two-qubit gates of the last section are sufficient for the task.
Suppose we wish to implement the unitary
$$
U = e^{i(aX + bZ)},
$$
but the only gates we have are $R_x(\theta) = e^{i \frac{\theta}{2} X}$ and $R_z(\theta) = e^{i \frac{\theta}{2} Z}$. The best way to solve this problem would be to use Euler angles. But let's instead consider a different method.
The Hermitian matrix in the exponential for $U$ is simply the sum of those for the $R_x(\theta)$ and $R_z(\theta)$ rotations. This suggests a naive approach to solving our problem: we could apply $R_z(2b) = e^{i bZ}$ followed by $R_x(2a) = e^{i a X}$. Unfortunately, because we are exponentiating matrices that do not commute, this approach will not work.
$$
e^{i a X} e^{i b Z} \neq e^{i(aX + bZ)}
$$
However, we could use the following modified version:
$$
U = \lim_{n\rightarrow\infty} ~ \left(e^{iaX/n}e^{ibZ/n}\right)^n.
$$
Here we split $U$ up into $n$ small slices. For each slice, it is a good approximation to say that
$$
e^{iaX/n}e^{ibZ/n} \approx e^{i(aX + bZ)/n}.
$$
The error in this approximation scales as $1/n^2$. When we combine the $n$ slices, we get an approximation of our target unitary whose error scales as $1/n$. So by simply increasing the number of slices, we can get as close to $U$ as we need. Other methods of creating the sequence are also possible to get even more accurate versions of our target unitary.
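We can see this $1/n$ scaling numerically. The sketch below (with arbitrarily chosen values of $a$ and $b$) compares the sliced product against the exact $e^{i(aX+bZ)}$, and shows that doubling $n$ roughly halves the error:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def exp_iH(Hm):
    """e^{iH} for a Hermitian matrix H via its eigendecomposition."""
    evals, evecs = np.linalg.eigh(Hm)
    return evecs @ np.diag(np.exp(1j * evals)) @ evecs.conj().T

a, b = 1.0, 1.0
target = exp_iH(a * X + b * Z)

def trotter_error(n):
    """Distance between n slices of e^{iaX/n} e^{ibZ/n} and the target."""
    slice_ = exp_iH(a * X / n) @ exp_iH(b * Z / n)
    return np.linalg.norm(np.linalg.matrix_power(slice_, n) - target)

# Doubling the number of slices roughly halves the error (1/n scaling)
print(trotter_error(100), trotter_error(200))
```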
The power of this method is that it can be used in more complex cases than just a single qubit. For example, consider the unitary
$$
U = e^{i(aX\otimes X\otimes X + bZ\otimes Z\otimes Z)}.
$$
We know how to create the unitary $e^{i\frac{\theta}{2} X\otimes X\otimes X}$ from a single-qubit $R_x(\theta)$ and two controlled-NOTs.
```python
from qiskit import QuantumCircuit

theta = 0.5  # an example value for the rotation angle
qc = QuantumCircuit(3)
qc.cx(0,2)
qc.cx(0,1)
qc.rx(theta,0)
qc.cx(0,1)
qc.cx(0,2)
```
With a few Hadamards, we can do the same for $e^{i\frac{\theta}{2} Z\otimes Z\otimes Z}$.
```python
qc.h(0)
qc.h(1)
qc.h(2)
qc.cx(0,2)
qc.cx(0,1)
qc.rx(theta,0)
qc.cx(0,1)
qc.cx(0,2)
qc.h(2)
qc.h(1)
qc.h(0)
```
This gives us the ability to reproduce a small slice of our new, three-qubit $U$:
$$
e^{iaX\otimes X\otimes X/n}e^{ibZ\otimes Z\otimes Z/n} \approx e^{i(aX\otimes X\otimes X + bZ\otimes Z\otimes Z)/n}.
$$
As before, we can then combine the slices together to get an arbitrarily accurate approximation of $U$.
This method continues to work as we increase the number of qubits, and also the number of terms that need simulating. Care must be taken to ensure that the approximation remains accurate, but this can be done in ways that require reasonable resources. Adding extra terms to simulate, or increasing the desired accuracy, only require the complexity of the method to increase polynomially.
Methods of this form can reproduce any unitary $U = e^{iH}$ for which $H$ can be expressed as a sum of tensor products of Paulis. Since we have shown previously that all matrices can be expressed in this way, this is sufficient to show that we can reproduce all unitaries. Though other methods may be better in practice, the main concept to take away from this chapter is that there is certainly a way to reproduce all multi-qubit unitaries using only the basic operations found in Qiskit. Quantum universality can be achieved!
This gate set is not the only one that can achieve universality. For example, it can be shown that just the Hadamard and Toffoli are sufficient for universality. Multiple other gate sets have also been considered and proven universal, each motivated by different routes toward achieving the gates fault-tolerantly.
```
import qiskit
qiskit.__qiskit_version__
```
# Shor's Algorithm
Shor’s algorithm is famous for factoring integers in polynomial time. Since the best-known classical algorithm requires superpolynomial time to factor the product of two primes, the widely used cryptosystem, RSA, relies on factoring being impossible for large enough integers.
In this chapter we will focus on the quantum part of Shor’s algorithm, which actually solves the problem of _period finding_. Since a factoring problem can be turned into a period finding problem in polynomial time, an efficient period finding algorithm can be used to factor integers efficiently too. For now it's enough to know that if we can compute the period of $a^x\bmod N$ efficiently, then we can also factor efficiently. Since period finding is a worthy problem in its own right, we will first solve this, then discuss how it can be used to factor in section 5.
```
import matplotlib.pyplot as plt
import numpy as np
from qiskit import QuantumCircuit, Aer, transpile, assemble
from qiskit.visualization import plot_histogram
from math import gcd
from numpy.random import randint
import pandas as pd
from fractions import Fraction
print("Imports Successful")
```
## 1. The Problem: Period Finding
Let’s look at the periodic function:
$$ f(x) = a^x \bmod{N}$$
<details>
<summary>Reminder: Modulo & Modular Arithmetic (Click here to expand)</summary>
The modulo operation (abbreviated to 'mod') simply means to find the remainder when dividing one number by another. For example:
$$ 17 \bmod 5 = 2 $$
Since $17 \div 5 = 3$ with remainder $2$. (i.e. $17 = (3\times 5) + 2$). In Python, the modulo operation is denoted through the <code>%</code> symbol.
This behaviour is used in <a href="https://en.wikipedia.org/wiki/Modular_arithmetic">modular arithmetic</a>, where numbers 'wrap round' after reaching a certain value (the modulus). Using modular arithmetic, we could write:
$$ 17 = 2 \pmod 5$$
Note that here the $\pmod 5$ applies to the entire equation (since it is in parentheses), unlike the equation above where it only applied to the left-hand side of the equation.
</details>
where $a$ and $N$ are positive integers, $a$ is less than $N$, and they have no common factors. The period, or order ($r$), is the smallest (non-zero) integer such that:
$$a^r \bmod N = 1 $$
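Classically, $r$ can be found by brute force: keep multiplying by $a$ modulo $N$ until the register returns to 1. This takes time exponential in the number of digits of $N$, which is why an efficient quantum period-finding routine matters. A minimal sketch:

```python
from math import gcd

def find_order(a, N):
    """Smallest r > 0 with a^r = 1 (mod N); requires gcd(a, N) = 1."""
    assert gcd(a, N) == 1
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

print(find_order(3, 35))  # 12
print(find_order(7, 15))  # 4
```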
We can see an example of this function plotted on the graph below. Note that the lines between points are to help see the periodicity and do not represent the intermediate values between the x-markers.
```
N = 35
a = 3
# Calculate the plotting data
xvals = np.arange(35)
yvals = [np.mod(a**x, N) for x in xvals]
# Use matplotlib to display it nicely
fig, ax = plt.subplots()
ax.plot(xvals, yvals, linewidth=1, linestyle='dotted', marker='x')
ax.set(xlabel='$x$', ylabel='$%i^x$ mod $%i$' % (a, N),
title="Example of Periodic Function in Shor's Algorithm")
try: # plot r on the graph
r = yvals[1:].index(1) +1
plt.annotate('', xy=(0,1), xytext=(r,1), arrowprops=dict(arrowstyle='<->'))
plt.annotate('$r=%i$' % r, xy=(r/3,1.5))
except ValueError:
print('Could not find period, check a < N and have no common factors.')
```
## 2. The Solution
Shor’s solution was to use [quantum phase estimation](./quantum-phase-estimation.html) on the unitary operator:
$$ U|y\rangle \equiv |ay \bmod N \rangle $$
To see how this is helpful, let’s work out what an eigenstate of U might look like. If we started in the state $|1\rangle$, we can see that each successive application of U will multiply the state of our register by $a \pmod N$, and after $r$ applications we will arrive at the state $|1\rangle$ again. For example with $a = 3$ and $N = 35$:
$$\begin{aligned}
U|1\rangle &= |3\rangle & \\
U^2|1\rangle &= |9\rangle \\
U^3|1\rangle &= |27\rangle \\
& \vdots \\
U^{(r-1)}|1\rangle &= |12\rangle \\
U^r|1\rangle &= |1\rangle
\end{aligned}$$
```
ax.set(xlabel='Number of applications of U', ylabel='End state of register',
title="Effect of Successive Applications of U")
fig
```
So a superposition of the states in this cycle ($|u_0\rangle$) would be an eigenstate of $U$:
$$|u_0\rangle = \tfrac{1}{\sqrt{r}}\sum_{k=0}^{r-1}{|a^k \bmod N\rangle} $$
<details>
<summary>Click to Expand: Example with $a = 3$ and $N=35$</summary>
$$\begin{aligned}
|u_0\rangle &= \tfrac{1}{\sqrt{12}}(|1\rangle + |3\rangle + |9\rangle \dots + |4\rangle + |12\rangle) \\[10pt]
U|u_0\rangle &= \tfrac{1}{\sqrt{12}}(U|1\rangle + U|3\rangle + U|9\rangle \dots + U|4\rangle + U|12\rangle) \\[10pt]
&= \tfrac{1}{\sqrt{12}}(|3\rangle + |9\rangle + |27\rangle \dots + |12\rangle + |1\rangle) \\[10pt]
&= |u_0\rangle
\end{aligned}$$
</details>
This eigenstate has an eigenvalue of 1, which isn’t very interesting. A more interesting eigenstate could be one in which the phase is different for each of these computational basis states. Specifically, let’s look at the case in which the phase of the $k$th state is proportional to $k$:
$$\begin{aligned}
|u_1\rangle &= \tfrac{1}{\sqrt{r}}\sum_{k=0}^{r-1}{e^{-\tfrac{2\pi i k}{r}}|a^k \bmod N\rangle}\\[10pt]
U|u_1\rangle &= e^{\tfrac{2\pi i}{r}}|u_1\rangle
\end{aligned}
$$
<details>
<summary>Click to Expand: Example with $a = 3$ and $N=35$</summary>
$$\begin{aligned}
|u_1\rangle &= \tfrac{1}{\sqrt{12}}(|1\rangle + e^{-\tfrac{2\pi i}{12}}|3\rangle + e^{-\tfrac{4\pi i}{12}}|9\rangle \dots + e^{-\tfrac{20\pi i}{12}}|4\rangle + e^{-\tfrac{22\pi i}{12}}|12\rangle) \\[10pt]
U|u_1\rangle &= \tfrac{1}{\sqrt{12}}(|3\rangle + e^{-\tfrac{2\pi i}{12}}|9\rangle + e^{-\tfrac{4\pi i}{12}}|27\rangle \dots + e^{-\tfrac{20\pi i}{12}}|12\rangle + e^{-\tfrac{22\pi i}{12}}|1\rangle) \\[10pt]
U|u_1\rangle &= e^{\tfrac{2\pi i}{12}}\cdot\tfrac{1}{\sqrt{12}}(e^{\tfrac{-2\pi i}{12}}|3\rangle + e^{-\tfrac{4\pi i}{12}}|9\rangle + e^{-\tfrac{6\pi i}{12}}|27\rangle \dots + e^{-\tfrac{22\pi i}{12}}|12\rangle + e^{-\tfrac{24\pi i}{12}}|1\rangle) \\[10pt]
U|u_1\rangle &= e^{\tfrac{2\pi i}{12}}|u_1\rangle
\end{aligned}$$
(We can see $r = 12$ appears in the denominator of the phase.)
</details>
This is a particularly interesting eigenvalue as it contains $r$. In fact, $r$ has to be included to make sure the phase differences between the $r$ computational basis states are equal. This is not the only eigenstate with this behaviour; to generalise this further, we can multiply this phase difference by an integer, $s$, which will show up in our eigenvalue:
$$\begin{aligned}
|u_s\rangle &= \tfrac{1}{\sqrt{r}}\sum_{k=0}^{r-1}{e^{-\tfrac{2\pi i s k}{r}}|a^k \bmod N\rangle}\\[10pt]
U|u_s\rangle &= e^{\tfrac{2\pi i s}{r}}|u_s\rangle
\end{aligned}
$$
<details>
<summary>Click to Expand: Example with $a = 3$ and $N=35$</summary>
$$\begin{aligned}
|u_s\rangle &= \tfrac{1}{\sqrt{12}}(|1\rangle + e^{-\tfrac{2\pi i s}{12}}|3\rangle + e^{-\tfrac{4\pi i s}{12}}|9\rangle \dots + e^{-\tfrac{20\pi i s}{12}}|4\rangle + e^{-\tfrac{22\pi i s}{12}}|12\rangle) \\[10pt]
U|u_s\rangle &= \tfrac{1}{\sqrt{12}}(|3\rangle + e^{-\tfrac{2\pi i s}{12}}|9\rangle + e^{-\tfrac{4\pi i s}{12}}|27\rangle \dots + e^{-\tfrac{20\pi i s}{12}}|12\rangle + e^{-\tfrac{22\pi i s}{12}}|1\rangle) \\[10pt]
U|u_s\rangle &= e^{\tfrac{2\pi i s}{12}}\cdot\tfrac{1}{\sqrt{12}}(e^{-\tfrac{2\pi i s}{12}}|3\rangle + e^{-\tfrac{4\pi i s}{12}}|9\rangle + e^{-\tfrac{6\pi i s}{12}}|27\rangle \dots + e^{-\tfrac{22\pi i s}{12}}|12\rangle + e^{-\tfrac{24\pi i s}{12}}|1\rangle) \\[10pt]
U|u_s\rangle &= e^{\tfrac{2\pi i s}{12}}|u_s\rangle
\end{aligned}$$
</details>
We now have a unique eigenstate for each integer value of $s$ where $$0 \leq s \leq r-1.$$ Very conveniently, if we sum up all these eigenstates, the different phases cancel out all computational basis states except $|1\rangle$:
$$ \tfrac{1}{\sqrt{r}}\sum_{s=0}^{r-1} |u_s\rangle = |1\rangle$$
<details>
<summary>Click to Expand: Example with $a = 7$ and $N=15$</summary>
For this, we will look at a smaller example where $a = 7$ and $N=15$. In this case $r=4$:
$$\begin{aligned}
\tfrac{1}{2}(\quad|u_0\rangle &= \tfrac{1}{2}(|1\rangle \hphantom{e^{-\tfrac{2\pi i}{12}}}+ |7\rangle \hphantom{e^{-\tfrac{12\pi i}{12}}} + |4\rangle \hphantom{e^{-\tfrac{12\pi i}{12}}} + |13\rangle)\dots \\[10pt]
+ |u_1\rangle &= \tfrac{1}{2}(|1\rangle + e^{-\tfrac{2\pi i}{4}}|7\rangle + e^{-\tfrac{\hphantom{1}4\pi i}{4}}|4\rangle + e^{-\tfrac{\hphantom{1}6\pi i}{4}}|13\rangle)\dots \\[10pt]
+ |u_2\rangle &= \tfrac{1}{2}(|1\rangle + e^{-\tfrac{4\pi i}{4}}|7\rangle + e^{-\tfrac{\hphantom{1}8\pi i}{4}}|4\rangle + e^{-\tfrac{12\pi i}{4}}|13\rangle)\dots \\[10pt]
+ |u_3\rangle &= \tfrac{1}{2}(|1\rangle + e^{-\tfrac{6\pi i}{4}}|7\rangle + e^{-\tfrac{12\pi i}{4}}|4\rangle + e^{-\tfrac{18\pi i}{4}}|13\rangle)\quad) = |1\rangle \\[10pt]
\end{aligned}$$
</details>
Since the computational basis state $|1\rangle$ is a superposition of these eigenstates, if we do QPE on $U$ using the state $|1\rangle$, we will measure a phase:
$$\phi = \frac{s}{r}$$
where $s$ is a random integer between $0$ and $r-1$. We finally use the [continued fractions](https://en.wikipedia.org/wiki/Continued_fraction) algorithm on $\phi$ to find $r$. The circuit diagram looks like this (note that this diagram uses Qiskit's qubit ordering convention):
<img src="images/shor_circuit_1.svg">
We will next demonstrate Shor’s algorithm using Qiskit’s simulators. For this demonstration we will provide the circuits for $U$ without explanation, but in section 4 we will discuss how circuits for $U^{2^j}$ can be constructed efficiently.
## 3. Qiskit Implementation
In this example we will solve the period finding problem for $a=7$ and $N=15$. We provide the circuits for $U$ where:
$$U|y\rangle = |ay\bmod 15\rangle $$
without explanation. To create $U^x$, we will simply repeat the circuit $x$ times. In the next section we will discuss a general method for creating these circuits efficiently. The function `c_amod15` returns the controlled-U gate for `a`, repeated `power` times.
```
def c_amod15(a, power):
"""Controlled multiplication by a mod 15"""
if a not in [2,7,8,11,13]:
raise ValueError("'a' must be 2,7,8,11 or 13")
U = QuantumCircuit(4)
for iteration in range(power):
if a in [2,13]:
U.swap(0,1)
U.swap(1,2)
U.swap(2,3)
if a in [7,8]:
U.swap(2,3)
U.swap(1,2)
U.swap(0,1)
if a == 11:
U.swap(1,3)
U.swap(0,2)
if a in [7,11,13]:
for q in range(4):
U.x(q)
U = U.to_gate()
U.name = "%i^%i mod 15" % (a, power)
c_U = U.control()
return c_U
```
We will use 8 counting qubits:
```
# Specify variables
n_count = 8 # number of counting qubits
a = 7
```
We also import the circuit for the QFT (you can read more about the QFT in the [quantum Fourier transform chapter](./quantum-fourier-transform.html#generalqft)):
```
def qft_dagger(n):
"""n-qubit inverse QFT on the first n qubits in circ"""
qc = QuantumCircuit(n)
# Don't forget the Swaps!
for qubit in range(n//2):
qc.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
qc.cp(-np.pi/float(2**(j-m)), m, j)
qc.h(j)
qc.name = "QFT†"
return qc
```
With these building blocks we can easily construct the circuit for Shor's algorithm:
```
# Create QuantumCircuit with n_count counting qubits
# plus 4 qubits for U to act on
qc = QuantumCircuit(n_count + 4, n_count)
# Initialize counting qubits
# in state |+>
for q in range(n_count):
qc.h(q)
# And auxiliary register in state |1>
qc.x(3+n_count)
# Do controlled-U operations
for q in range(n_count):
qc.append(c_amod15(a, 2**q),
[q] + [i+n_count for i in range(4)])
# Do inverse-QFT
qc.append(qft_dagger(n_count), range(n_count))
# Measure circuit
qc.measure(range(n_count), range(n_count))
qc.draw(fold=-1) # -1 means 'do not fold'
```
Let's see what results we measure:
```
aer_sim = Aer.get_backend('aer_simulator')
t_qc = transpile(qc, aer_sim)
qobj = assemble(t_qc)
results = aer_sim.run(qobj).result()
counts = results.get_counts()
plot_histogram(counts)
```
Since we have 8 qubits, these results correspond to measured phases of:
```
rows, measured_phases = [], []
for output in counts:
decimal = int(output, 2) # Convert (base 2) string to decimal
phase = decimal/(2**n_count) # Find corresponding eigenvalue
measured_phases.append(phase)
# Add these values to the rows in our table:
rows.append([f"{output}(bin) = {decimal:>3}(dec)",
f"{decimal}/{2**n_count} = {phase:.2f}"])
# Print the rows in a table
headers=["Register Output", "Phase"]
df = pd.DataFrame(rows, columns=headers)
print(df)
```
We can now use the continued fractions algorithm to attempt to find $s$ and $r$. Python has this functionality built in: We can use the `fractions` module to turn a float into a `Fraction` object, for example:
```
Fraction(0.666)
```
Because `Fraction` represents the float exactly (in this case, `0.6660000...`, since floats are stored with finite binary precision), this can give gnarly results like the one above. We can use the `.limit_denominator()` method to get the fraction that most closely resembles our float, with denominator below a certain value:
```
# Get fraction that most closely resembles 0.666
# with denominator < 15
Fraction(0.666).limit_denominator(15)
```
Much nicer! The order (r) must be less than N, so we will set the maximum denominator to be `15`:
```
rows = []
for phase in measured_phases:
frac = Fraction(phase).limit_denominator(15)
rows.append([phase, f"{frac.numerator}/{frac.denominator}", frac.denominator])
# Print as a table
headers=["Phase", "Fraction", "Guess for r"]
df = pd.DataFrame(rows, columns=headers)
print(df)
```
We can see that two of the measured phases gave us the correct result, $r=4$, and that Shor’s algorithm has a chance of failing. These bad results occur because $s = 0$, or because $s$ and $r$ are not coprime, in which case we are given a factor of $r$ instead of $r$ itself. The easiest solution is to simply repeat the experiment until we get a satisfying result for $r$.
### Quick Exercise
- Modify the circuit above for values of $a = 2, 8, 11$ and $13$. What results do you get and why?
## 4. Modular Exponentiation
You may have noticed that the method of creating the $U^{2^j}$ gates by repeating $U$ grows exponentially with $j$ and will not result in a polynomial time algorithm. We want a way to create the operator:
$$ U^{2^j}|y\rangle = |a^{2^j}y \bmod N \rangle $$
that grows polynomially with $j$. Fortunately, calculating:
$$ a^{2^j} \bmod N$$
efficiently is possible. Classical computers can use an algorithm known as _repeated squaring_ to calculate an exponential. In our case, since we are only dealing with exponentials of the form $2^j$, the repeated squaring algorithm becomes very simple:
```
def a2jmodN(a, j, N):
"""Compute a^{2^j} (mod N) by repeated squaring"""
for i in range(j):
a = np.mod(a**2, N)
return a
a2jmodN(7, 2049, 53)
```
If an efficient algorithm is possible in Python, then we can use the same algorithm on a quantum computer. Unfortunately, despite scaling polynomially with $j$, modular exponentiation circuits are not straightforward and are the bottleneck in Shor’s algorithm. A beginner-friendly implementation can be found in reference [1].
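As a sanity check, the repeated-squaring loop agrees with Python's built-in three-argument `pow`, which also performs modular exponentiation efficiently (the function is repeated here so the cell is self-contained):

```python
def a2jmodN(a, j, N):
    """Compute a^{2^j} (mod N) by repeated squaring"""
    for _ in range(j):
        a = (a * a) % N
    return a

# 7^(2^1) mod 15 = 49 mod 15 = 4
print(a2jmodN(7, 1, 15))  # 4
# Agrees with the built-in modular exponentiation
print(a2jmodN(7, 2049, 53) == pow(7, 2**2049, 53))  # True
```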
## 5. Factoring from Period Finding
Not all factoring problems are difficult; we can spot an even number instantly and know that one of its factors is 2. In fact, there are [specific criteria](https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf#%5B%7B%22num%22%3A127%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22XYZ%22%7D%2C70%2C223%2C0%5D) for choosing numbers that are difficult to factor, but the basic idea is to choose the product of two large prime numbers.
A general factoring algorithm will first check to see if there is a shortcut to factoring the integer (is the number even? Is the number of the form $N = a^b$?), before using Shor’s period finding for the worst-case scenario. Since we aim to focus on the quantum part of the algorithm, we will jump straight to the case in which N is the product of two primes.
### Example: Factoring 15
To see an example of factoring on a small number of qubits, we will factor 15, which we all know is the product of the not-so-large prime numbers 3 and 5.
```
N = 15
```
The first step is to choose a random number, $a$, between $1$ and $N-1$:
```
np.random.seed(1) # This is to make sure we get reproducible results
a = randint(2, 15)
print(a)
```
Next we quickly check it isn't already a non-trivial factor of $N$:
```
from math import gcd # greatest common divisor
gcd(a, N)
```
Great. Next, we do Shor's order finding algorithm for `a = 7` and `N = 15`. Remember that the phase we measure will be $s/r$ where:
$$ a^r \bmod N = 1 $$
and $s$ is a random integer between 0 and $r-1$.
```
def qpe_amod15(a):
n_count = 8
qc = QuantumCircuit(4+n_count, n_count)
for q in range(n_count):
qc.h(q) # Initialize counting qubits in state |+>
qc.x(3+n_count) # And auxiliary register in state |1>
for q in range(n_count): # Do controlled-U operations
qc.append(c_amod15(a, 2**q),
[q] + [i+n_count for i in range(4)])
qc.append(qft_dagger(n_count), range(n_count)) # Do inverse-QFT
qc.measure(range(n_count), range(n_count))
# Simulate Results
aer_sim = Aer.get_backend('aer_simulator')
# Setting memory=True below allows us to see a list of each sequential reading
t_qc = transpile(qc, aer_sim)
qobj = assemble(t_qc, shots=1)
result = aer_sim.run(qobj, memory=True).result()
readings = result.get_memory()
print("Register Reading: " + readings[0])
phase = int(readings[0],2)/(2**n_count)
print("Corresponding Phase: %f" % phase)
return phase
```
From this phase, we can easily find a guess for $r$:
```
phase = qpe_amod15(a) # Phase = s/r
Fraction(phase).limit_denominator(15) # Denominator should (hopefully!) tell us r
frac = Fraction(phase).limit_denominator(15)
s, r = frac.numerator, frac.denominator
print(r)
```
Now that we have $r$, we might be able to use it to find a factor of $N$. Since:
$$a^r \bmod N = 1 $$
then:
$$(a^r - 1) \bmod N = 0 $$
which means $N$ must divide $a^r-1$. And if $r$ is also even, then we can write:
$$a^r -1 = (a^{r/2}-1)(a^{r/2}+1)$$
(if $r$ is not even, we cannot go further and must try again with a different value for $a$). There is then a high probability that the greatest common divisor of $N$ and either $a^{r/2}-1$, or $a^{r/2}+1$ is a proper factor of $N$ [2]:
```
guesses = [gcd(a**(r//2)-1, N), gcd(a**(r//2)+1, N)]
print(guesses)
```
The cell below repeats the algorithm until at least one factor of 15 is found. You should try re-running the cell a few times to see how it behaves.
```
a = 7
factor_found = False
attempt = 0
while not factor_found:
attempt += 1
print("\nAttempt %i:" % attempt)
phase = qpe_amod15(a) # Phase = s/r
frac = Fraction(phase).limit_denominator(N) # Denominator should (hopefully!) tell us r
r = frac.denominator
print("Result: r = %i" % r)
if phase != 0:
# Guesses for factors are gcd(x^{r/2} ±1 , 15)
guesses = [gcd(a**(r//2)-1, N), gcd(a**(r//2)+1, N)]
print("Guessed Factors: %i and %i" % (guesses[0], guesses[1]))
for guess in guesses:
if guess not in [1,N] and (N % guess) == 0: # Check to see if guess is a factor
print("*** Non-trivial factor found: %i ***" % guess)
factor_found = True
```
## 6. References
1. Stephane Beauregard, _Circuit for Shor's algorithm using 2n+3 qubits,_ [arXiv:quant-ph/0205095](https://arxiv.org/abs/quant-ph/0205095)
2. M. Nielsen and I. Chuang, _Quantum Computation and Quantum Information,_ Cambridge Series on Information and the Natural Sciences (Cambridge University Press, Cambridge, 2000). (Page 633)
```
import qiskit.tools.jupyter
%qiskit_version_table
```
## Computer vision data
```
%matplotlib inline
from fastai.gen_doc.nbdoc import *
from fastai.vision import *
```
This module contains the classes that define datasets handling [`Image`](/vision.image.html#Image) objects and their transformations. As usual, we'll start with a quick overview, before we get into the detailed API docs.
Before any work can be done a dataset needs to be converted into a [`DataBunch`](/basic_data.html#DataBunch) object, and in the case of the computer vision data - specifically into an [`ImageDataBunch`](/vision.data.html#ImageDataBunch) subclass.
This is done with the help of [data block API](/data_block.html) and the [`ImageList`](/vision.data.html#ImageList) class and its subclasses.
However, there is also a group of shortcut methods provided by [`ImageDataBunch`](/vision.data.html#ImageDataBunch) which reduce the multiple stages of the data block API into a single wrapper method. These shortcut methods work really well for:
- Imagenet-style of datasets ([`ImageDataBunch.from_folder`](/vision.data.html#ImageDataBunch.from_folder))
- A pandas `DataFrame` with a column of filenames and a column of labels which can be strings for classification, strings separated by a `label_delim` for multi-classification or floats for a regression problem ([`ImageDataBunch.from_df`](/vision.data.html#ImageDataBunch.from_df))
- A csv file with the same format as above ([`ImageDataBunch.from_csv`](/vision.data.html#ImageDataBunch.from_csv))
- A list of filenames and a list of targets ([`ImageDataBunch.from_lists`](/vision.data.html#ImageDataBunch.from_lists))
- A list of filenames and a function to get the target from the filename ([`ImageDataBunch.from_name_func`](/vision.data.html#ImageDataBunch.from_name_func))
- A list of filenames and a regex pattern to get the target from the filename ([`ImageDataBunch.from_name_re`](/vision.data.html#ImageDataBunch.from_name_re))
In the last five factory methods, a random split is performed between train and validation; in the first one, it can be a random split or a separation from a training and a validation folder.
If you're just starting out you may choose to experiment with these shortcut methods, as they are also used in the first lessons of the fastai deep learning course. However, you can completely skip them and start building your code using the data block API from the very beginning. Internally, these shortcuts use this API anyway.
The first part of this document is dedicated to the shortcut [`ImageDataBunch`](/vision.data.html#ImageDataBunch) factory methods. Then all the other computer vision data-specific methods that are used with the data block API are presented.
## Quickly get your data ready for training
To get you started as easily as possible, the fastai library provides two helper functions to create a [`DataBunch`](/basic_data.html#DataBunch) object that you can directly use for training a classifier. To demonstrate them, you'll first need to download and untar the file by executing the following cell. This will create a data folder containing an MNIST subset in `data/mnist_sample`.
```
path = untar_data(URLs.MNIST_SAMPLE); path
```
There are a number of ways to create an [`ImageDataBunch`](/vision.data.html#ImageDataBunch). One common approach is to use *Imagenet-style folders* (see further down the page for details) with [`ImageDataBunch.from_folder`](/vision.data.html#ImageDataBunch.from_folder):
```
tfms = get_transforms(do_flip=False)
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24)
```
Here the datasets will be automatically created from the structure of *Imagenet-style folders*. The parameters specified are:
- the transforms to apply to the images in `ds_tfms` (here with `do_flip`=False because we don't want to flip numbers),
- the target `size` of our pictures (here 24).
As with all [`DataBunch`](/basic_data.html#DataBunch) usage, a `train_dl` and a `valid_dl` are created that are of the type PyTorch [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader).
If you want to have a look at a few images inside a batch, you can use [`DataBunch.show_batch`](/basic_data.html#DataBunch.show_batch). The `rows` argument is the number of rows and columns to display.
```
data.show_batch(rows=3, figsize=(5,5))
```
The second way to define the data for a classifier requires a structure like this:
```
path\
train\
test\
labels.csv
```
where the labels.csv file defines the label(s) of each image in the training set. This is the format you will need to use when each image can have multiple labels. It also works with single labels:
```
pd.read_csv(path/'labels.csv').head()
```
You can then use [`ImageDataBunch.from_csv`](/vision.data.html#ImageDataBunch.from_csv):
```
data = ImageDataBunch.from_csv(path, ds_tfms=tfms, size=28)
data.show_batch(rows=3, figsize=(5,5))
```
An example of multiclassification can be downloaded with the following cell. It's a sample of the [planet dataset](https://www.google.com/search?q=kaggle+planet&rlz=1C1CHBF_enFR786FR786&oq=kaggle+planet&aqs=chrome..69i57j0.1563j0j7&sourceid=chrome&ie=UTF-8).
```
planet = untar_data(URLs.PLANET_SAMPLE)
```
If we open the labels file, we see that each image has one or more tags, separated by a space.
```
df = pd.read_csv(planet/'labels.csv')
df.head()
data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim=' ',
ds_tfms=get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.))
```
The `show_batch` method will then print all the labels that correspond to each image.
```
data.show_batch(rows=3, figsize=(10,8), ds_type=DatasetType.Valid)
```
You can find more ways to build an [`ImageDataBunch`](/vision.data.html#ImageDataBunch) without the factory methods in [`data_block`](/data_block.html#data_block).
```
show_doc(ImageDataBunch)
```
This is the same initialization as a regular [`DataBunch`](/basic_data.html#DataBunch) so you probably don't want to use this directly, but one of the factory methods instead.
### Factory methods
If you quickly want to get a [`ImageDataBunch`](/vision.data.html#ImageDataBunch) and train a model, you should process your data to have it in one of the formats the following functions handle.
```
show_doc(ImageDataBunch.from_folder)
```
Refer to [`create_from_ll`](#ImageDataBunch.create_from_ll) to see all the `**kwargs` arguments.
"*Imagenet-style*" datasets look something like this (note that the test folder is optional):
```
path\
train\
clas1\
clas2\
...
valid\
clas1\
clas2\
...
test\
```
For example:
```
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24)
```
Note that this (and all factory methods in this section) pass any `kwargs` to [`DataBunch.create`](/basic_data.html#DataBunch.create).
```
show_doc(ImageDataBunch.from_csv)
```
Refer to [`create_from_ll`](#ImageDataBunch.create_from_ll) to see all the `**kwargs` arguments.
Create an [`ImageDataBunch`](/vision.data.html#ImageDataBunch) from `path` by splitting the data in `folder`, labelled in the file `csv_labels`, between a training and a validation set. Use `valid_pct` to indicate the percentage of the total images to use as the validation set. An optional `test` folder contains unlabelled data and `suffix` contains an optional suffix to add to the filenames in `csv_labels` (such as '.jpg'). `fn_col` is the index (or the name) of the column containing the filenames and `label_col` is the index (indices) (or the name(s)) of the column(s) containing the labels. Use [`header`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html#pandas-read-csv) to specify the format of the csv header, and `delimiter` to specify a non-standard csv-field separator. In case your csv has no header, column parameters can only be specified as indices. If `label_delim` is passed, split what's in the label column according to that separator.
For example:
```
data = ImageDataBunch.from_csv(path, ds_tfms=tfms, size=24);
show_doc(ImageDataBunch.from_df)
```
Refer to [`create_from_ll`](#ImageDataBunch.create_from_ll) to see all the `**kwargs` arguments.
Same as [`ImageDataBunch.from_csv`](/vision.data.html#ImageDataBunch.from_csv), but passing in a `DataFrame` instead of a csv file. For example:
```
df = pd.read_csv(path/'labels.csv', header='infer')
df.head()
data = ImageDataBunch.from_df(path, df, ds_tfms=tfms, size=24)
```
Different datasets are labeled in many different ways. The following methods can help extract the labels from the dataset in a wide variety of situations. The way they are built in fastai is constructive: some methods do a lot for you but apply only in specific circumstances, while others do less for you but give you more flexibility.
In this case the hierarchy is:
1. [`ImageDataBunch.from_name_re`](/vision.data.html#ImageDataBunch.from_name_re): Gets the labels from the filenames using a regular expression
2. [`ImageDataBunch.from_name_func`](/vision.data.html#ImageDataBunch.from_name_func): Gets the labels from the filenames using any function
3. [`ImageDataBunch.from_lists`](/vision.data.html#ImageDataBunch.from_lists): Labels need to be provided as an input in a list
```
show_doc(ImageDataBunch.from_name_re)
```
Refer to [`create_from_ll`](#ImageDataBunch.create_from_ll) to see all the `**kwargs` arguments.
Creates an [`ImageDataBunch`](/vision.data.html#ImageDataBunch) from `fnames`, calling a regular expression (containing one *re group*) on the file names to get the labels, putting aside `valid_pct` for the validation. In the same way as [`ImageDataBunch.from_csv`](/vision.data.html#ImageDataBunch.from_csv), an optional `test` folder contains unlabelled data.
Our previously created dataframe contains the labels in the filenames so we can leverage it to test this new method. [`ImageDataBunch.from_name_re`](/vision.data.html#ImageDataBunch.from_name_re) needs the exact path of each file so we will append the data path to each filename before creating our [`ImageDataBunch`](/vision.data.html#ImageDataBunch) object.
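To see concretely how a one-group pattern pulls the label out of a path, here is a minimal standard-library sketch; the filename below is a made-up MNIST-style example, not an actual file in the dataset:

```python
import re

# The single capture group (\d) is what from_name_re uses as the label.
# The path below is a hypothetical MNIST-style filename for illustration.
pat = r"/(\d)/\d+\.png$"
fn = "data/mnist_sample/train/3/7463.png"
print(re.search(pat, fn).group(1))  # prints: 3
```

The group's text, not the whole match, becomes the class label, which is why the pattern must contain exactly one *re group*.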
```
fn_paths = [path/name for name in df['name']]; fn_paths[:2]
pat = r"/(\d)/\d+\.png$"
data = ImageDataBunch.from_name_re(path, fn_paths, pat=pat, ds_tfms=tfms, size=24)
data.classes
show_doc(ImageDataBunch.from_name_func)
```
Refer to [`create_from_ll`](#ImageDataBunch.create_from_ll) to see all the `**kwargs` arguments.
Works in the same way as [`ImageDataBunch.from_name_re`](/vision.data.html#ImageDataBunch.from_name_re), but instead of a regular expression it expects a function that will determine how to extract the labels from the filenames. (Note that `from_name_re` uses this function in its implementation).
To test it we could build a function with our previous regex. Let's try another, similar approach to show that the labels can be obtained in a different way.
```
def get_labels(file_path): return '3' if '/3/' in str(file_path) else '7'
data = ImageDataBunch.from_name_func(path, fn_paths, label_func=get_labels, ds_tfms=tfms, size=24)
data.classes
show_doc(ImageDataBunch.from_lists)
```
Refer to [`create_from_ll`](#ImageDataBunch.create_from_ll) to see all the `**kwargs` arguments.
The most flexible factory function; pass in a list of `labels` that correspond to each of the filenames in `fnames`.
To show an example we have to build the labels list outside our [`ImageDataBunch`](/vision.data.html#ImageDataBunch) object and give it as an argument when we call `from_lists`. Let's use our previously created function to create our labels list.
```
labels_ls = list(map(get_labels, fn_paths))
data = ImageDataBunch.from_lists(path, fn_paths, labels=labels_ls, ds_tfms=tfms, size=24)
data.classes
show_doc(ImageDataBunch.create_from_ll)
```
Use `bs`, `num_workers`, `collate_fn` and a potential `test` folder. `ds_tfms` is a tuple of two lists of transforms to be applied to the training and the validation (plus test optionally) set. `tfms` are the transforms to apply to the [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader). The `size` and the `kwargs` are passed to the transforms for data augmentation.
```
show_doc(ImageDataBunch.single_from_classes)
jekyll_note('This method is deprecated, you should use DataBunch.load_empty now.')
```
### Other methods
In the next few methods we will use another dataset, CIFAR. This is because the second method will get the statistics for our dataset and we want to be able to show different statistics per channel. If we were to use MNIST, these statistics would be the same for every channel. White pixels are [255,255,255] and black pixels are [0,0,0] (or in normalized form [1,1,1] and [0,0,0]) so there is no variance between channels.
```
path = untar_data(URLs.CIFAR); path
show_doc(channel_view)
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, valid='test', size=24)
def channel_view(x:Tensor)->Tensor:
"Make channel the first axis of `x` and flatten remaining axes"
return x.transpose(0,1).contiguous().view(x.shape[1],-1)
```
This function takes a tensor and flattens all dimensions except the channels, which it keeps as the first axis. This function is used to feed [`ImageDataBunch.batch_stats`](/vision.data.html#ImageDataBunch.batch_stats) so that it can get the pixel statistics of a whole batch.
Let's take as an example the dimensions of our batches: 128, 3, 24, 24.
```
t = torch.Tensor(128, 3, 24, 24)
t.size()
tensor = channel_view(t)
tensor.size()
show_doc(ImageDataBunch.batch_stats)
data.batch_stats()
show_doc(ImageDataBunch.normalize)
```
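The reshaping above can also be sketched in plain numpy, independent of fastai; `channel_view_np` is an illustrative analogue, not the library function:

```python
import numpy as np

# A numpy analogue of channel_view: put channels first, flatten the rest,
# so per-channel statistics reduce to per-row statistics.
def channel_view_np(x):
    "(batch, channels, h, w) -> (channels, batch*h*w)"
    return np.swapaxes(x, 0, 1).reshape(x.shape[1], -1)

t = np.arange(128 * 3 * 24 * 24, dtype=np.float32).reshape(128, 3, 24, 24)
flat = channel_view_np(t)
print(flat.shape)  # (3, 73728)
```

A row mean of the flattened array equals the per-channel mean of the original batch, which is exactly what the batch statistics need.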
In the fast.ai library we have `imagenet_stats`, `cifar_stats` and `mnist_stats`, so we can add normalization easily with any of these datasets. Let's see an example with our current dataset of choice: CIFAR.
```
data.normalize(cifar_stats)
data.batch_stats()
```
## Data normalization
You may also want to normalize your data, which can be done by using the following functions.
```
show_doc(normalize)
show_doc(denormalize)
show_doc(normalize_funcs)
```
On MNIST the mean and std are 0.1307 and 0.3081 respectively (commonly reported values). If you're using a pretrained model, you'll need to use the normalization that was used to train the model. The imagenet norm and denorm functions are stored as constants inside the library named <code>imagenet_norm</code> and <code>imagenet_denorm</code>. If you're training a model on CIFAR-10, you can also use <code>cifar_norm</code> and <code>cifar_denorm</code>.
You may sometimes see warnings about *clipping input data* when plotting normalized data. That's because even though the data is automatically denormalized when plotting, floating point errors may push some values slightly outside the correct range. You can safely ignore these warnings in this case.
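As a plain numpy sketch of what per-channel normalization and its inverse compute (the statistics are the MNIST values quoted above; the function names are illustrative, not the library's):

```python
import numpy as np

# Per-channel normalization over a batch of shape (batch, channels, h, w).
mean, std = np.array([0.1307]), np.array([0.3081])

def normalize_batch(x, mean, std):
    "Shift and scale each channel to roughly zero mean, unit std."
    return (x - mean[None, :, None, None]) / std[None, :, None, None]

def denormalize_batch(x, mean, std):
    "Invert normalize_batch so images plot in their original range."
    return x * std[None, :, None, None] + mean[None, :, None, None]

batch = np.random.rand(4, 1, 24, 24)
roundtrip = denormalize_batch(normalize_batch(batch, mean, std), mean, std)
print(np.allclose(batch, roundtrip))  # True: the two are inverses
```

The tiny floating point error left by this round trip is what triggers the clipping warnings mentioned above.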
```
data = ImageDataBunch.from_folder(untar_data(URLs.MNIST_SAMPLE),
ds_tfms=tfms, size=24)
data.normalize()
data.show_batch(rows=3, figsize=(6,6))
show_doc(get_annotations)
```
To use this dataset and collate samples into batches, you'll need the following function:
```
show_doc(bb_pad_collate)
```
Finally, to apply transformations to [`Image`](/vision.image.html#Image) in a [`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset), we use this last class.
## ItemList specific to vision
The vision application adds a few subclasses of [`ItemList`](/data_block.html#ItemList) specific to images.
```
show_doc(ImageList, title_level=3)
```
Create a [`ItemList`](/data_block.html#ItemList) in `path` from filenames in `items`. `create_func` will default to [`open_image`](/vision.image.html#open_image). `label_cls` can be specified for the labels, `xtra` contains any extra information (usually in the form of a dataframe) and `processor` is applied to the [`ItemList`](/data_block.html#ItemList) after splitting and labelling.
```
show_doc(ImageList.from_folder)
show_doc(ImageList.from_df)
show_doc(get_image_files)
show_doc(ImageList.open)
show_doc(ImageList.show_xys)
show_doc(ImageList.show_xyzs)
show_doc(ObjectCategoryList, title_level=3)
show_doc(ObjectItemList, title_level=3)
show_doc(SegmentationItemList, title_level=3)
show_doc(SegmentationLabelList, title_level=3)
show_doc(PointsLabelList, title_level=3)
show_doc(PointsItemList, title_level=3)
show_doc(ImageImageList, title_level=3)
```
## Building your own dataset
This module also contains a few helper functions to allow you to build your own dataset for image classification.
```
show_doc(download_images)
show_doc(verify_images)
```
It will check whether every image in this folder can be opened and has `n_channels`. If `n_channels` is 3, it will try to convert the image to RGB. If `delete=True`, the image will be removed if this check fails. If `resume` is set, images already present in `dest` will be skipped. If `max_size` is specified, the image is resized keeping its aspect ratio so that both dimensions are smaller than `max_size`, using `interp`. The result is stored in `dest`; `ext` forces an extension type, and `img_format` and `kwargs` are passed to `PIL.Image.save`. Use `max_workers` CPUs.
## Undocumented Methods - Methods moved below this line will intentionally be hidden
```
show_doc(PointsItemList.get)
show_doc(SegmentationLabelList.new)
show_doc(ImageList.from_csv)
show_doc(ObjectCategoryList.get)
show_doc(ImageList.get)
show_doc(SegmentationLabelList.reconstruct)
show_doc(ImageImageList.show_xys)
show_doc(ImageImageList.show_xyzs)
show_doc(ImageList.open)
show_doc(PointsItemList.analyze_pred)
show_doc(SegmentationLabelList.analyze_pred)
show_doc(PointsItemList.reconstruct)
show_doc(SegmentationLabelList.open)
show_doc(ImageList.reconstruct)
show_doc(resize_to)
show_doc(ObjectCategoryList.reconstruct)
show_doc(PointsLabelList.reconstruct)
show_doc(PointsLabelList.analyze_pred)
show_doc(PointsLabelList.get)
```
## New Methods - Please document or move to the undocumented section
```
show_doc(ObjectCategoryList.analyze_pred)
```
```
import pandas as pd
import itertools
import math
import matplotlib.pyplot as plt
import random
import seaborn
number_rows = 1000
file = pd.read_csv("rockyousubset.csv", nrows=number_rows)
country_dataset = pd.read_csv("../Datasets/top_200_password_2020_by_country_extended.csv")
country_dataset
file['Data Length'] = file['name'].str.len()
file
L = file['Data Length']
N = 127 #number of symbols that can be typed (ascii table)
file['strength'] = L*(math.log(N))/math.log(2)
file['Data Length'] = file['Data Length'].sort_values()
file.sort_values(by=['Data Length'])
file.dropna()
plt.xlabel('strength')
plt.ylabel('number of occurrences')
plt.hist(file['strength'], bins=60)
common_words = pd.read_csv("unigram_freq.csv")
list_of_bad_words = []
for i in common_words.index:
if(len(str(common_words['name'][i])) <= 1):
list_of_bad_words.append(i)
common_words = common_words.drop(list_of_bad_words)
common_words
list_of_matches = []
#Calculate the list of matches with the most common words
for i in file.index:
match = ""
for j in common_words.index:
if(str(common_words['name'][j]) in str(file['name'][i])):
match += str(common_words['name'][j]) + ","
list_of_matches.append(match[:-1])
file.index
list_of_matches
len(list_of_matches)
file['matches'] = list_of_matches
pd.set_option('display.max_rows', 1000)
file
country_dataset = pd.read_csv("../Datasets/top_200_password_2020_by_country_extended.csv")
country_dataset['Data Length'] = country_dataset['Password'].str.len()
L = country_dataset['Data Length']
N = 127 #number of symbols that can be typed (ascii table)
country_dataset['strength'] = L*(math.log(N))/math.log(2)
country_dataset.sort_values(by=['Data Length'])
country_dataset.dropna()
plt.hist(country_dataset['strength'], bins=60)
china_strengths = []
russia_strengths = []
spain_strengths = []
us_strengths = []
vietnam_strengths = []
for i in country_dataset.index:
if(country_dataset['country'][i] == "China"):
china_strengths.append(country_dataset['strength'][i])
if(country_dataset['country'][i] == "Russia"):
russia_strengths.append(country_dataset['strength'][i])
if(country_dataset['country'][i] == "Spain"):
spain_strengths.append(country_dataset['strength'][i])
if(country_dataset['country'][i] == "United States"):
us_strengths.append(country_dataset['strength'][i])
if(country_dataset['country'][i] == "Vietnam"):
vietnam_strengths.append(country_dataset['strength'][i])
vietnam_strengths
```
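The nested word-matching loop above scales as O(passwords × words). A hedged sketch of a faster alternative compiles the word list into a single alternation so each password is scanned once; the tiny series below are stand-ins for the real `common_words['name']` and `file['name']` columns:

```python
import re
import pandas as pd

# Toy stand-ins for the real CSV columns.
words = pd.Series(["love", "star", "123"])
passwords = pd.Series(["ilovestars", "qwerty", "abc123"])

# One compiled pattern: love|star|123 (escaped in case of regex chars).
pattern = re.compile("|".join(re.escape(w) for w in words))

def find_matches(pw):
    "Comma-join every common word found inside the password."
    # Note: findall is non-overlapping, good enough for this tagging use.
    return ",".join(sorted(set(pattern.findall(pw))))

print(passwords.apply(find_matches).tolist())  # ['love,star', '', '123']
```

On the full 1000-row sample this replaces roughly a million substring tests per word list pass with a single regex scan per password.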
# Analyses of brain diseases in TMS dataset
```
%load_ext lab_black
import matplotlib
from scdrs.util import test_gearysc
import submitit
from os.path import join
import glob
import pandas as pd
import scanpy as sc
import seaborn as sns
import matplotlib.pyplot as plt
DATA_ROOT_DIR = "/n/holystore01/LABS/price_lab/Users/mjzhang/scDRS_data"
URL_SUPP_TABLE = "https://www.dropbox.com/s/qojbzu5zln33j7f/supp_tables.xlsx?dl=1"
file_list = glob.glob(
join(
DATA_ROOT_DIR,
"tabula_muris_senis/tabula-muris-senis-facs-processed-official-annotations-*",
)
)
tissue_list = [f.split("annotations-")[-1].split(".")[0] for f in file_list]
df_trait_info = pd.read_excel(
URL_SUPP_TABLE,
sheet_name=0,
)
dict_trait_code = {
row["Trait_Identifier"]: row["Code"] for _, row in df_trait_info.iterrows()
}
df_celltype_info = pd.read_excel(
URL_SUPP_TABLE,
sheet_name=1,
)
trait_list = [
"PASS_MDD_Howard2019",
"PASS_Schizophrenia_Pardinas2018",
"UKB_460K.mental_NEUROTICISM",
"UKB_460K.cov_SMOKING_STATUS",
"UKB_460K.cov_EDU_COLLEGE",
"UKB_460K.body_BMIz",
"UKB_460K.body_HEIGHTz",
]
SCORE_DIR = join(DATA_ROOT_DIR, "score_file/score.tms_facs_with_cov.magma_10kb_1000")
adata = sc.read_h5ad(
join(
DATA_ROOT_DIR,
f"tabula_muris_senis/tabula-muris-senis-facs-processed-official-annotations-Brain_Non-Myeloid.h5ad",
)
)
for trait in trait_list:
df_score = pd.read_csv(join(SCORE_DIR, f"{trait}.score.gz"), sep="\t", index_col=0)
adata.obs[trait] = df_score["zscore"].reindex(adata.obs.index)
sc.pl.umap(
adata,
color=["age", "sex", "subtissue", "cell_ontology_class", "free_annotation"],
ncols=1,
)
sc.pl.umap(adata, color=trait_list, cmap="RdBu_r")
adata_neuron = adata[adata.obs.cell_ontology_class == "neuron"].copy()
adata_neuron.obs["subtissue"] = adata_neuron.obs.subtissue.apply(lambda x: x.strip())
fig, ax = plt.subplots(ncols=len(trait_list), figsize=(13, 4))
for i, trait in enumerate(trait_list):
df = adata_neuron.obs[["subtissue", trait]]
g = sns.violinplot(x=df["subtissue"].values, y=df[trait].values, ax=ax[i])
g.set_xticklabels(g.get_xticklabels(), rotation=60, ha="right")
g.set_title(dict_trait_code[trait])
g.set_ylabel("scDRS score")
g.set_ylim(-6, 8)
plt.tight_layout()
plt.savefig("results/tms_stratify_subtissue.pdf", bbox_inches="tight")
dict_df_rls = dict()
for trait in trait_list:
df_score_full = pd.read_csv(
join(SCORE_DIR, f"{trait}.full_score.gz"), sep="\t", index_col=0
)
dict_df_rls[trait] = test_gearysc(adata_neuron, df_score_full, "subtissue")
def pval2str(p_):
if p_ > 0.05:
return ""
elif p_ > 0.005:
return "×"
else:
return "××"
df_pval = pd.DataFrame(
{dict_trait_code[trait]: dict_df_rls[trait]["pval"] for trait in trait_list}
).T
df_zscore = pd.DataFrame(
{dict_trait_code[trait]: dict_df_rls[trait]["zsc"] for trait in trait_list}
).T
df_pval = df_pval[["Cerebellum", "Cortex", "Hippocampus", "Striatum"]]
df_zscore = df_zscore[["Cerebellum", "Cortex", "Hippocampus", "Striatum"]]
fig, ax = plt.subplots(figsize=(1.5, 1.5))
h = sns.heatmap(
df_zscore,
annot=df_pval.applymap(pval2str),
linewidths=0.2,
linecolor="gray",
fmt="s",
cmap="RdBu_r",
center=0,
annot_kws={"size": 6},
xticklabels=True,
yticklabels=True,
)
h.set_xticklabels(h.get_xticklabels(), rotation=45, ha="right", fontsize=8)
h.set_yticklabels(h.get_yticklabels(), fontsize=8)
h.set_title("Heterogeneity level within sub-tissue \n (TMS FACS)", fontsize=8)
plt.savefig("results/tms_subtissue_hetero.pdf", bbox_inches="tight")
```
## RDF
The radial distribution function (RDF), denoted in equations by g(r), defines the probability of finding a particle at a distance r from another, tagged particle. The RDF depends strongly on the type of matter, so it varies greatly between solids, gases and liquids.
<img src="../images/rdf.png" width="60%" height="60%">
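Before opening the C++ source, the heart of the computation can be sketched in a few lines of Python: an all-pairs loop that histograms minimum-image distances, then normalizes each shell by its ideal-gas count. The box size, particle count and bin width below are illustrative only; the lab's actual code is C++ and reads a DCD trajectory rather than random points:

```python
import math
import random

random.seed(0)
box, n, nbins = 10.0, 200, 50              # toy box length, particles, bins
dr = (box / 2) / nbins                     # bin width, out to half the box
pts = [tuple(random.uniform(0, box) for _ in range(3)) for _ in range(n)]

hist = [0] * nbins
for i in range(n):                         # the O(N^2) pair loop
    for j in range(i + 1, n):
        d2 = 0.0
        for a, b in zip(pts[i], pts[j]):
            dx = a - b
            dx -= box * round(dx / box)    # minimum-image convention
            d2 += dx * dx
        d = math.sqrt(d2)
        if d < box / 2:
            hist[int(d / dr)] += 1

rho = n / box ** 3                         # mean number density
g = [hist[k] / (0.5 * n * rho * 4 * math.pi * ((k + 0.5) * dr) ** 2 * dr)
     for k in range(nbins)]                # pair count / ideal-gas count
tail_avg = sum(g[25:]) / 25
print(round(tail_avg, 1))                  # ~1.0 for an uncorrelated gas
```

The doubly nested pair loop is the hotspot the profiler will point at, and the part the rest of this lab parallelizes.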
As you might have observed, the computational complexity of the algorithm is $O(N^{2})$. Let us get into the details of the sequential code. **Understand and analyze** the code present at:
[RDF Serial Code](../../source_code/serial/rdf.cpp)
[File Reader](../../source_code/serial/dcdread.h)
[Makefile](../../source_code/serial/Makefile)
Open the downloaded file for inspection.
```
!cd ../../source_code/serial && make clean && make
```
We plan to follow the typical optimization cycle that every code needs to go through
<img src="../images/workflow.png" width="70%" height="70%">
To analyze the application, we will make use of the `nsys` profiler and add NVTX markers to the code to get more information out of the serial run. Before running the cells below, let's first dive into the profiler lab to learn more about the tools. Using the profiler gives us the hotspots and helps us understand which functions are worth parallelizing.
-----
# <div style="text-align: center ;border:3px; border-style:solid; border-color:#FF0000 ; padding: 1em">[Profiling lab](../../../../../profiler/English/jupyter_notebook/nsight_systems.ipynb)</div>
-----
Now that we are familiar with the Nsight profiler and know how to use [NVTX](../../../../../profiler/English/jupyter_notebook/nsight_systems.ipynb#nvtx) annotations, let's profile the serial code and check the output.
```
!cd ../../source_code/serial&& nsys profile -t nvtx --stats=true --force-overwrite true -o rdf_serial ./rdf
```
Once you run the above cell, you should see the following in the terminal.
<img src="../images/serial.png" width="70%" height="70%">
To view the profiler report, you need to download and save the report file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../../source_code/serial/rdf_serial.qdrep), then open it via the GUI. For more information on how to open the report via the GUI, please check out the section on [How to view the report](../../../../../profiler/English/jupyter_notebook/profiling-c.ipynb#gui-report).
From the timeline view, right click on the NVTX row and click "Show in Events View". You can then see the NVTX statistics at the bottom of the window, which show the duration of each range. In the following labs, we will look into the profiler report in more detail.
<img src="../images/nvtx_serial.png" width="100%" height="100%">
The obvious next step is to make the **Pair Calculation** algorithm parallel using different approaches to GPU programming. Please follow the link below and choose one of the approaches to parallelize the serial code.
-----
# <div style="text-align: center ;border:3px; border-style:solid; border-color:#FF0000 ; padding: 1em">[HOME](../../../nways_MD_start.ipynb)</div>
-----
# Links and Resources
<!--[OpenACC API guide](https://www.openacc.org/sites/default/files/inline-files/OpenACC%20API%202.6%20Reference%20Guide.pdf)-->
[NVIDIA Nsight System](https://docs.nvidia.com/nsight-systems/)
<!--[NVIDIA Nsight Compute](https://developer.nvidia.com/nsight-compute)-->
<!--[CUDA Toolkit Download](https://developer.nvidia.com/cuda-downloads)-->
[Profiling timelines with NVTX](https://devblogs.nvidia.com/cuda-pro-tip-generate-custom-application-profile-timelines-nvtx/)
**NOTE**: To be able to see the Nsight Systems profiler output, please download the latest version of Nsight Systems from [here](https://developer.nvidia.com/nsight-systems).
Don't forget to check out additional [OpenACC Resources](https://www.openacc.org/resources) and join our [OpenACC Slack Channel](https://www.openacc.org/community#slack) to share your experience and get more help from the community.
---
## Licensing
This material is released by OpenACC-Standard.org, in collaboration with NVIDIA Corporation, under the Creative Commons Attribution 4.0 International (CC BY 4.0).
# Data Leakage
Most people find target leakage very tricky until they've thought about it for a long time.
So, before trying to think about leakage in the housing price example, we'll go through a few examples in other applications. Things will feel more familiar once you come back to a question about house prices.
# 1. The Data Science of Shoelaces
Nike has hired you as a data science consultant to help them save money on shoe materials. Your first assignment is to review a model one of their employees built to predict how many shoelaces they'll need each month. The features going into the machine learning model include:
- The current month (January, February, etc)
- Advertising expenditures in the previous month
- Various macroeconomic features (like the unemployment rate) as of the beginning of the current month
- The amount of leather they ended up using in the current month
The results show the model is almost perfectly accurate if you include the feature about how much leather they used. But it is only moderately accurate if you leave that feature out. You realize this is because the amount of leather they use is a perfect indicator of how many shoes they produce, which in turn tells you how many shoelaces they need.
Do you think the _leather used_ feature constitutes a source of data leakage? If your answer is "it depends," what does it depend on?
After you have thought about your answer, check it against the solution below.
#### Solution:
This is tricky, and it depends on details of how data is collected (which is common when thinking about leakage). Would you at the beginning of the month decide how much leather will be used that month? If so, this is ok. But if that is determined during the month, you would not have access to it when you make the prediction. If you have a guess at the beginning of the month, and it is subsequently changed during the month, the actual amount used during the month cannot be used as a feature (because it causes leakage).
# 2. Return of the Shoelaces
You have a new idea. You could use the amount of leather Nike ordered (rather than the amount they actually used) leading up to a given month as a predictor in your shoelace model.
Does this change your answer about whether there is a leakage problem? If you answer "it depends," what does it depend on?
#### Solution:
This could be fine, but it depends on whether they order shoelaces first or leather first. If they order shoelaces first, you won't know how much leather they've ordered when you predict their shoelace needs. If they order leather first, then you'll have that number available when you place your shoelace order, and you should be ok.
# 3. Getting Rich With Cryptocurrencies?
You saved Nike so much money that they gave you a bonus. Congratulations.
Your friend, who is also a data scientist, says he has built a model that will let you turn your bonus into millions of dollars. Specifically, his model predicts the price of a new cryptocurrency (like Bitcoin, but a newer one) one day ahead of the moment of prediction. His plan is to purchase the cryptocurrency whenever the model says the price of the currency (in dollars) is about to go up.
The most important features in his model are:
- Current price of the currency
- Amount of the currency sold in the last 24 hours
- Change in the currency price in the last 24 hours
- Change in the currency price in the last 1 hour
- Number of new tweets in the last 24 hours that mention the currency
The value of the cryptocurrency in dollars has fluctuated up and down by over \$100 in the last year, and yet his model's average error is less than \$1. He says this is proof his model is accurate, and you should invest with him, buying the currency whenever the model says it is about to go up.
Is he right? If there is a problem with his model, what is it?
##### Solution:
There is no source of leakage here. These features should be available at the moment you want to make a prediction, and they're unlikely to be changed in the training data after the prediction target is determined. But, the way he describes accuracy could be misleading if you aren't careful. If the price moves gradually, today's price will be an accurate predictor of tomorrow's price, but it may not tell you whether it's a good time to invest. For instance, if the price is \$100 today, a model predicting a price of \$100 tomorrow may seem accurate, even if it can't tell you whether the price is going up or down from the current price. A better prediction target would be the change in price over the next day. If you can consistently predict whether the price is about to go up or down (and by how much), you may have a winning investment opportunity.
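To see why a tiny average error can coexist with zero investment value, here is a toy sketch (purely hypothetical prices) of the naive "tomorrow equals today" model:

```python
import numpy as np

# Toy illustration with hypothetical prices: a coin drifting
# smoothly between $90 and $110 over a year.
days = np.arange(365)
price = 100 + 10 * np.sin(2 * np.pi * days / 180)

# "Model": tomorrow's price equals today's price.
pred_tomorrow = price[:-1]
actual_tomorrow = price[1:]

mae = np.mean(np.abs(actual_tomorrow - pred_tomorrow))
price_range = price.max() - price.min()

print(f"price range over the year: ${price_range:.0f}")  # swings of $20
print(f"naive model's average error: ${mae:.2f}")        # well under $1

# Yet its predicted *change* is identically zero: it can never say "buy".
predicted_change = pred_tomorrow - price[:-1]
```

The small average error comes entirely from the price moving slowly, not from the model knowing anything about direction.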
# 4. Preventing Infections
An agency that provides healthcare wants to predict which patients from a rare surgery are at risk of infection, so it can alert the nurses to be especially careful when following up with those patients.
You want to build a model. Each row in the modeling dataset will be a single patient who received the surgery, and the prediction target will be whether they got an infection.
Some surgeons may do the procedure in a manner that raises or lowers the risk of infection. But how can you best incorporate the surgeon information into the model?
You have a clever idea.
1. Take all surgeries by each surgeon and calculate the infection rate among those surgeries.
2. For each patient in the data, find out who the surgeon was and plug in that surgeon's average infection rate as a feature.
Does this pose any target leakage issues?
Does it pose any train-test contamination issues?
##### Solution:
This poses a risk of both target leakage and train-test contamination (though you may be able to avoid both if you are careful).
You have target leakage if a given patient's outcome contributes to the infection rate for his surgeon, which is then plugged back into the prediction model for whether that patient becomes infected. You can avoid target leakage if you calculate the surgeon's infection rate by using only the surgeries before the patient we are predicting for. Calculating this for each surgery in your training data may be a little tricky.
You also have a train-test contamination problem if you calculate this using all surgeries a surgeon performed, including those from the test-set. The result would be that your model could look very accurate on the test set, even if it wouldn't generalize well to new patients after the model is deployed. This would happen because the surgeon-risk feature accounts for data in the test set. Test sets exist to estimate how the model will do when seeing new data. So this contamination defeats the purpose of the test set.
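The leakage-free surgeon feature described above can be sketched with pandas — the surgeon names and outcomes here are hypothetical, and `shift(1)` is what keeps each patient's own outcome out of their feature:

```python
import pandas as pd

# Hypothetical surgery log, one row per surgery, ordered by date.
surgeries = pd.DataFrame({
    'surgeon':  ['A', 'A', 'B', 'A', 'B', 'B'],
    'infected': [ 0,   1,   0,   0,   1,   0 ],
})

# Leaky feature: the surgeon's overall rate, which includes the row itself.
surgeries['rate_leaky'] = (
    surgeries.groupby('surgeon')['infected'].transform('mean')
)

# Leakage-free feature: the surgeon's rate over *prior* surgeries only.
# shift(1) drops the current row before the expanding mean; the first
# surgery per surgeon has no history, so its feature is NaN.
surgeries['rate_prior'] = (
    surgeries.groupby('surgeon')['infected']
             .transform(lambda s: s.shift(1).expanding().mean())
)

print(surgeries)
```

Computing `rate_prior` on the training data alone (before any split touches the test set) also avoids the contamination problem.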
# 5. Housing Prices
You will build a model to predict housing prices. The model will be deployed on an ongoing basis, to predict the price of a new house when a description is added to a website. Here are four features that could be used as predictors.
1. Size of the house (in square meters)
2. Average sales price of homes in the same neighborhood
3. Latitude and longitude of the house
4. Whether the house has a basement
You have historic data to train and validate the model.
Which of the features is most likely to be a source of leakage?
potential_leakage_feature = 2
Correct:
2 is the source of target leakage. Here is an analysis for each feature:
The size of a house is unlikely to be changed after it is sold (though technically it's possible). But typically this will be available when we need to make a prediction, and the data won't be modified after the home is sold. So it is pretty safe.
We don't know the rules for when this is updated. If the field is updated in the raw data after a home was sold, and the home's sale is used to calculate the average, this constitutes a case of target leakage. At an extreme, if only one home is sold in the neighborhood, and it is the home we are trying to predict, then the average will be exactly equal to the value we are trying to predict. In general, for neighborhoods with few sales, the model will perform very well on the training data. But when you apply the model, the home you are predicting won't have been sold yet, so this feature won't work the same as it did in the training data.
These don't change, and will be available at the time we want to make a prediction. So there's no risk of target leakage here.
This also doesn't change, and it is available at the time we want to make a prediction. So there's no risk of target leakage here.
# Conclusion
Leakage is a hard and subtle issue. You should be proud if you picked up on the issues in these examples.
Now you have the tools to make highly accurate models, and pick up on the most difficult practical problems that arise with applying these models to solve real problems.
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# Solution Notebook
## Problem: Compress a string such that 'AAABCCDDDD' becomes 'A3BC2D4'. Only compress the string if it saves space.
* [Constraints](#Constraints)
* [Test Cases](#Test-Cases)
* [Algorithm](#Algorithm)
* [Code](#Code)
* [Unit Test](#Unit-Test)
## Constraints
* Can we assume the string is ASCII?
* Yes
* Note: Unicode strings could require special handling depending on your language
* Is this case sensitive?
* Yes
* Can we use additional data structures?
* Yes
* Can we assume this fits in memory?
* Yes
## Test Cases
* None -> None
* '' -> ''
* 'AABBCC' -> 'AABBCC'
* 'AAABCCDDDD' -> 'A3BC2D4'
## Algorithm
* For each char in string
* If char is the same as last_char, increment count
* Else
* Append last_char and count to compressed_string
* last_char = char
* count = 1
* Append last_char and count to compressed_string
* If the compressed string size is < string size
* Return compressed string
* Else
* Return string
Complexity:
* Time: O(n)
* Space: O(n)
Complexity Note:
* Although strings are immutable in Python, appending to strings is optimized in CPython so that it now runs in O(n) and extends the string in-place. Refer to this [Stack Overflow post](http://stackoverflow.com/a/4435752).
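If you'd rather not depend on that CPython-specific optimization, the same algorithm can collect the pieces in a list and join once at the end — a sketch of the approach outlined above:

```python
def compress(string):
    # None and '' pass through unchanged, per the constraints above.
    if not string:
        return string
    parts = []  # collect pieces, join once at the end
    prev_char = string[0]
    count = 0
    for char in string:
        if char == prev_char:
            count += 1
        else:
            parts.append(prev_char + (str(count) if count > 1 else ''))
            prev_char = char
            count = 1
    parts.append(prev_char + (str(count) if count > 1 else ''))
    result = ''.join(parts)
    # Only compress if it saves space.
    return result if len(result) < len(string) else string

print(compress('AAABCCDDDD'))  # A3BC2D4
```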
## Code
```
class CompressString(object):
def compress(self, string):
if string is None or not string:
return string
result = ''
prev_char = string[0]
count = 0
for char in string:
if char == prev_char:
count += 1
else:
result += self._calc_partial_result(prev_char, count)
prev_char = char
count = 1
result += self._calc_partial_result(prev_char, count)
return result if len(result) < len(string) else string
def _calc_partial_result(self, prev_char, count):
return prev_char + (str(count) if count > 1 else '')
```
## Unit Test
```
%%writefile test_compress.py
from nose.tools import assert_equal
class TestCompress(object):
def test_compress(self, func):
assert_equal(func(None), None)
assert_equal(func(''), '')
assert_equal(func('AABBCC'), 'AABBCC')
assert_equal(func('AAABCCDDDDE'), 'A3BC2D4E')
assert_equal(func('BAAACCDDDD'), 'BA3C2D4')
assert_equal(func('AAABAACCDDDD'), 'A3BA2C2D4')
print('Success: test_compress')
def main():
test = TestCompress()
compress_string = CompressString()
test.test_compress(compress_string.compress)
if __name__ == '__main__':
main()
%run -i test_compress.py
```
# Bayesian Hierarchical Linear Regression
Author: [Carlos Souza](mailto:souza@gatech.edu)
Probabilistic Machine Learning models can not only make predictions about future data, but also **model uncertainty**. In areas such as **personalized medicine**, there might be a large amount of data, but there is still a relatively **small amount of data for each patient**. To customize predictions for each person it becomes necessary to **build a model for each person** — with its inherent **uncertainties** — and to couple these models together in a **hierarchy** so that information can be borrowed from other **similar people** [1].
The purpose of this tutorial is to demonstrate how to **implement a Bayesian Hierarchical Linear Regression model using NumPyro**. To motivate the tutorial, I will use [OSIC Pulmonary Fibrosis Progression](https://www.kaggle.com/c/osic-pulmonary-fibrosis-progression) competition, hosted at Kaggle.
## 1. Understanding the task
Pulmonary fibrosis is a disorder with no known cause and no known cure, created by scarring of the lungs. In this competition, we were asked to predict a patient’s severity of decline in lung function. Lung function is assessed based on output from a spirometer, which measures the forced vital capacity (FVC), i.e. the volume of air exhaled.
In medical applications, it is useful to **evaluate a model's confidence in its decisions**. Accordingly, the metric used to rank the teams was designed to reflect **both the accuracy and certainty of each prediction**. It's a modified version of the Laplace Log Likelihood (more details on that later).
Let's explore the data and see what's that all about:
```
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro arviz
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
train = pd.read_csv('https://gist.githubusercontent.com/ucals/'
'2cf9d101992cb1b78c2cdd6e3bac6a4b/raw/'
'43034c39052dcf97d4b894d2ec1bc3f90f3623d9/'
'osic_pulmonary_fibrosis.csv')
train.head()
```
In the dataset, we were provided with a baseline chest CT scan and associated clinical information for a set of patients. A patient has an image acquired at time Week = 0 and has numerous follow up visits over the course of approximately 1-2 years, at which time their FVC is measured. For this tutorial, I will use only the Patient ID, the weeks and the FVC measurements, discarding all the rest. Using only these columns enabled our team to achieve a competitive score, which shows the power of Bayesian hierarchical linear regression models especially when gauging uncertainty is an important part of the problem.
Since this is real medical data, the relative timing of FVC measurements varies widely, as shown in the 3 sample patients below:
```
def chart(patient_id, ax):
data = train[train['Patient'] == patient_id]
x = data['Weeks']
y = data['FVC']
ax.set_title(patient_id)
ax = sns.regplot(x, y, ax=ax, ci=None, line_kws={'color':'red'})
f, axes = plt.subplots(1, 3, figsize=(15, 5))
chart('ID00007637202177411956430', axes[0])
chart('ID00009637202177434476278', axes[1])
chart('ID00010637202177584971671', axes[2])
```
On average, each of the 176 provided patients made 9 visits, when FVC was measured. The visits happened in specific weeks in the [-12, 133] interval. The decline in lung capacity is very clear. We see, though, they are very different from patient to patient.
We were asked to predict every patient's FVC measurement for every possible week in the [-12, 133] interval, and the confidence for each prediction. In other words: we were asked to fill in a matrix like the one below, and provide a confidence score for each prediction:
<img src="https://i.ibb.co/0Z9kW8H/matrix-completion.jpg" alt="drawing" width="600"/>
The task was perfect to apply Bayesian inference. However, the vast majority of solutions shared by the Kaggle community used discriminative machine learning models, disregarding the fact that most discriminative methods are very poor at providing realistic uncertainty estimates. Because they are typically trained in a manner that optimizes the parameters to minimize some loss criterion (e.g. the predictive error), they do not, in general, encode any uncertainty in either their parameters or the subsequent predictions. Though many methods can produce uncertainty estimates either as a by-product or from a post-processing step, these are typically heuristic-based, rather than stemming naturally from a statistically principled estimate of the target uncertainty distribution [2].
## 2. Modelling: Bayesian Hierarchical Linear Regression with Partial Pooling
The simplest possible linear regression, not hierarchical, would assume all FVC decline curves have the same $\alpha$ and $\beta$. That's the **pooled model**. In the other extreme, we could assume a model where each patient has a personalized FVC decline curve, and **these curves are completely unrelated**. That's the **unpooled model**, where each patient has completely separate regressions.
Here, I'll use the middle ground: **Partial pooling**. Specifically, I'll assume that while $\alpha$'s and $\beta$'s are different for each patient as in the unpooled case, **the coefficients all share similarity**. We can model this by assuming that each individual coefficient comes from a common group distribution. The image below represents this model graphically:
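To build intuition for partial pooling before writing the model, here is a tiny NumPy toy (all numbers hypothetical, and the variances `sigma2`/`tau2` assumed known — the NumPyro model below infers them): each patient's estimate is a precision-weighted blend of their own data and the group mean, so patients with few visits are shrunk the most.

```python
import numpy as np

# Hypothetical per-patient slope estimates (unpooled fits) and visit counts.
patient_slopes = np.array([-10.0, -2.0, -6.0])
n_obs = np.array([2, 9, 9])

sigma2 = 4.0  # assumed within-patient noise variance
tau2 = 4.0    # assumed between-patient variance

pooled = patient_slopes.mean()  # single estimate shared by everyone

# Precision-weighted shrinkage: the partially pooled estimate blends each
# patient's own slope with the pooled mean; fewer visits -> more shrinkage.
w = (n_obs / sigma2) / (n_obs / sigma2 + 1.0 / tau2)
partial = w * patient_slopes + (1 - w) * pooled

print(np.round(partial, 2))  # patient 0 (only 2 visits) moves most toward the mean
```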
<img src="https://i.ibb.co/H7NgBfR/Artboard-2-2x-100.jpg" alt="drawing" width="600"/>
Mathematically, the model is described by the following equations:
\begin{align}
\mu_{\alpha} &\sim \mathcal{N}(0, 100) \\
\sigma_{\alpha} &\sim |\mathcal{N}(0, 100)| \\
\mu_{\beta} &\sim \mathcal{N}(0, 100) \\
\sigma_{\beta} &\sim |\mathcal{N}(0, 100)| \\
\alpha_i &\sim \mathcal{N}(\mu_{\alpha}, \sigma_{\alpha}) \\
\beta_i &\sim \mathcal{N}(\mu_{\beta}, \sigma_{\beta}) \\
\sigma &\sim |\mathcal{N}(0, 100)| \\
FVC_{ij} &\sim \mathcal{N}(\alpha_i + t \beta_i, \sigma)
\end{align}
where *t* is the time in weeks. Those are very uninformative priors, but that's ok: our model will converge!
Implementing this model in NumPyro is pretty straightforward:
```
import numpyro
from numpyro.infer import MCMC, NUTS, Predictive
import numpyro.distributions as dist
from jax import random
assert numpyro.__version__.startswith('0.6.0')
def model(PatientID, Weeks, FVC_obs=None):
μ_α = numpyro.sample("μ_α", dist.Normal(0., 100.))
σ_α = numpyro.sample("σ_α", dist.HalfNormal(100.))
μ_β = numpyro.sample("μ_β", dist.Normal(0., 100.))
σ_β = numpyro.sample("σ_β", dist.HalfNormal(100.))
unique_patient_IDs = np.unique(PatientID)
n_patients = len(unique_patient_IDs)
with numpyro.plate("plate_i", n_patients):
α = numpyro.sample("α", dist.Normal(μ_α, σ_α))
β = numpyro.sample("β", dist.Normal(μ_β, σ_β))
σ = numpyro.sample("σ", dist.HalfNormal(100.))
FVC_est = α[PatientID] + β[PatientID] * Weeks
with numpyro.plate("data", len(PatientID)):
numpyro.sample("obs", dist.Normal(FVC_est, σ), obs=FVC_obs)
```
That's all for modelling!
## 3. Fitting the model
A great achievement of Probabilistic Programming Languages such as NumPyro is to decouple model specification and inference. After specifying my generative model, with priors, condition statements and data likelihood, I can leave the hard work to NumPyro's inference engine.
Calling it requires just a few lines. Before we do it, let's add a numerical Patient ID for each patient code. That can be easily done with scikit-learn's LabelEncoder:
```
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
train['PatientID'] = le.fit_transform(train['Patient'].values)
FVC_obs = train['FVC'].values
Weeks = train['Weeks'].values
PatientID = train['PatientID'].values
```
Now, calling NumPyro's inference engine:
```
nuts_kernel = NUTS(model)
mcmc = MCMC(nuts_kernel, num_samples=2000, num_warmup=2000)
rng_key = random.PRNGKey(0)
mcmc.run(rng_key, PatientID, Weeks, FVC_obs=FVC_obs)
posterior_samples = mcmc.get_samples()
```
## 4. Checking the model
### 4.1. Inspecting the learned parameters
First, let's inspect the parameters learned. To do that, I will use [ArviZ](https://arviz-devs.github.io/arviz/), which perfectly integrates with NumPyro:
```
import arviz as az
data = az.from_numpyro(mcmc)
az.plot_trace(data, compact=True);
```
Looks like our model learned personalized alphas and betas for each patient!
### 4.2. Visualizing FVC decline curves for some patients
Now, let's visually inspect FVC decline curves predicted by our model. We will completely fill in the FVC table, predicting all missing values. The first step is to create a table to fill:
```
pred_template = []
for i in range(train['Patient'].nunique()):
df = pd.DataFrame(columns=['PatientID', 'Weeks'])
df['Weeks'] = np.arange(-12, 134)
df['PatientID'] = i
pred_template.append(df)
pred_template = pd.concat(pred_template, ignore_index=True)
```
Predicting the missing values in the FVC table and confidence (sigma) for each value becomes really easy:
```
PatientID = pred_template['PatientID'].values
Weeks = pred_template['Weeks'].values
predictive = Predictive(model, posterior_samples,
return_sites=['σ', 'obs'])
samples_predictive = predictive(random.PRNGKey(0),
PatientID, Weeks, None)
```
Let's now put the predictions together with the true values, to visualize them:
```
df = pd.DataFrame(columns=['Patient', 'Weeks', 'FVC_pred', 'sigma'])
df['Patient'] = le.inverse_transform(pred_template['PatientID'])
df['Weeks'] = pred_template['Weeks']
df['FVC_pred'] = samples_predictive['obs'].T.mean(axis=1)
df['sigma'] = samples_predictive['obs'].T.std(axis=1)
df['FVC_inf'] = df['FVC_pred'] - df['sigma']
df['FVC_sup'] = df['FVC_pred'] + df['sigma']
df = pd.merge(df, train[['Patient', 'Weeks', 'FVC']],
how='left', on=['Patient', 'Weeks'])
df = df.rename(columns={'FVC': 'FVC_true'})
df.head()
```
Finally, let's see our predictions for 3 patients:
```
def chart(patient_id, ax):
data = df[df['Patient'] == patient_id]
x = data['Weeks']
ax.set_title(patient_id)
ax.plot(x, data['FVC_true'], 'o')
ax.plot(x, data['FVC_pred'])
ax = sns.regplot(x, data['FVC_true'], ax=ax, ci=None,
line_kws={'color':'red'})
ax.fill_between(x, data["FVC_inf"], data["FVC_sup"],
alpha=0.5, color='#ffcd3c')
ax.set_ylabel('FVC')
f, axes = plt.subplots(1, 3, figsize=(15, 5))
chart('ID00007637202177411956430', axes[0])
chart('ID00009637202177434476278', axes[1])
chart('ID00011637202177653955184', axes[2])
```
The results are exactly what we expected to see! Highlight observations:
- The model adequately learned Bayesian Linear Regressions! The orange line (learned predicted FVC mean) is very inline with the red line (deterministic linear regression). But most important: it learned to predict uncertainty, showed in the light orange region (one sigma above and below the mean FVC line)
- The model predicts a higher uncertainty where the data points are more disperse (1st and 3rd patients). Conversely, where the points are closely grouped together (2nd patient), the model predicts a higher confidence (narrower light orange region)
- Finally, in all patients, we can see that the uncertainty grows as we look further into the future: the light orange region widens as the number of weeks grows!
### 4.3. Computing the modified Laplace Log Likelihood and RMSE
As mentioned earlier, the competition was evaluated on a modified version of the Laplace Log Likelihood. In medical applications, it is useful to evaluate a model's confidence in its decisions. Accordingly, the metric is designed to reflect both the accuracy and certainty of each prediction.
For each true FVC measurement, we predicted both an FVC and a confidence measure (standard deviation $\sigma$). The metric was computed as:
\begin{align}
\sigma_{clipped} &= max(\sigma, 70) \\
\delta &= min(|FVC_{true} - FVC_{pred}|, 1000) \\
metric &= -\dfrac{\sqrt{2}\delta}{\sigma_{clipped}} - \ln(\sqrt{2} \sigma_{clipped})
\end{align}
The error was thresholded at 1000 ml to avoid large errors adversely penalizing results, while the confidence values were clipped at 70 ml to reflect the approximate measurement uncertainty in FVC. The final score was calculated by averaging the metric across all (Patient, Week) pairs. Note that metric values will be negative and higher is better.
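The metric above can also be wrapped as a standalone function — a sketch using the same clipping thresholds:

```python
import numpy as np

def laplace_log_likelihood(fvc_true, fvc_pred, sigma):
    """Modified Laplace Log Likelihood, averaged over all predictions."""
    sigma_clipped = np.maximum(sigma, 70)                  # confidence floor: 70 ml
    delta = np.minimum(np.abs(fvc_true - fvc_pred), 1000)  # error cap: 1000 ml
    metric = -np.sqrt(2) * delta / sigma_clipped - np.log(np.sqrt(2) * sigma_clipped)
    return metric.mean()

# Best achievable score: a perfect prediction at the tightest allowed sigma.
best_possible = laplace_log_likelihood(
    np.array([2000.0]), np.array([2000.0]), np.array([70.0]))
print(f'{best_possible:.3f}')  # -ln(sqrt(2) * 70) ≈ -4.595
```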
Next, we calculate the metric and RMSE:
```
y = df.dropna()
rmse = ((y['FVC_pred'] - y['FVC_true']) ** 2).mean() ** (1/2)
print(f'RMSE: {rmse:.1f} ml')
sigma_c = y['sigma'].values
sigma_c[sigma_c < 70] = 70
delta = (y['FVC_pred'] - y['FVC_true']).abs()
delta[delta > 1000] = 1000
lll = - np.sqrt(2) * delta / sigma_c - np.log(np.sqrt(2) * sigma_c)
print(f'Laplace Log Likelihood: {lll.mean():.4f}')
```
What do these numbers mean? It means if you adopted this approach, you would **outperform most of the public solutions** in the competition. Curiously, the vast majority of public solutions adopt a standard deterministic Neural Network, modelling uncertainty through a quantile loss. **Most of the people still adopt a frequentist approach**.
**Uncertainty** for single predictions becomes more and more important in machine learning and is often a requirement. **Especially when the consequences of a wrong prediction are high**, we need to know what the probability distribution of an individual prediction is. For perspective, Kaggle just launched a new competition sponsored by Lyft, to build motion prediction models for self-driving vehicles. "We ask that you predict a few trajectories for every agent **and provide a confidence score for each of them**."
Finally, I hope the great work done by Pyro/NumPyro developers help democratize Bayesian methods, empowering an ever growing community of researchers and practitioners to create models that can not only generate predictions, but also assess uncertainty in their predictions.
## References
1. Ghahramani, Z. Probabilistic machine learning and artificial intelligence. Nature 521, 452–459 (2015). https://doi.org/10.1038/nature14541
2. Rainforth, Thomas William Gamlen. Automating Inference, Learning, and Design Using Probabilistic Programming. University of Oxford, 2017.
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Automated Machine Learning
**BikeShare Demand Forecasting**
## Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Compute](#Compute)
1. [Data](#Data)
1. [Train](#Train)
1. [Featurization](#Featurization)
1. [Evaluate](#Evaluate)
## Introduction
This notebook demonstrates demand forecasting for a bike-sharing service using AutoML.
AutoML highlights here include built-in holiday featurization, accessing engineered feature names, and working with the `forecast` function. Please also look at the additional forecasting notebooks, which document lagging, rolling windows, forecast quantiles, other ways to use the forecast function, and forecaster deployment.
Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.
Notebook synopsis:
1. Creating an Experiment in an existing Workspace
2. Configuration and local run of AutoML for a time-series model with lag and holiday features
3. Viewing the engineered names for featurized data and featurization summary for all raw features
4. Evaluating the fitted model using a rolling test
## Setup
```
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core import Workspace, Experiment, Dataset
from azureml.train.automl import AutoMLConfig
from datetime import datetime
```
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
```
print("This notebook was created using version 1.32.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
```
As part of the setup you have already created a <b>Workspace</b>. To run AutoML, you also need to create an <b>Experiment</b>. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
```
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-bikeshareforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
```
## Compute
You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.
> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.
#### Creation of AmlCompute takes approximately 5 minutes.
If the AmlCompute with that name is already in your workspace this code will skip the creation process.
As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your cluster.
amlcompute_cluster_name = "bike-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',
max_nodes=4)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
```
## Data
The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the bike share data and create [tabular dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
```
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./bike-no.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
```
Let's set up what we know about the dataset.
**Target column** is what we want to forecast.
**Time column** is the time axis along which to predict.
```
target_column_name = 'cnt'
time_column_name = 'date'
dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, 'dataset/bike-no.csv')]).with_timestamp_columns(fine_grain_timestamp=time_column_name)
# Drop the columns 'casual' and 'registered' as these columns are a breakdown of the total and therefore a leak.
dataset = dataset.drop_columns(columns=['casual', 'registered'])
dataset.take(5).to_pandas_dataframe().reset_index(drop=True)
```
### Split the data
The first split we make is into train and test sets. Note we are splitting on time. Data before 9/1 will be used for training, and data after and including 9/1 will be used for testing.
```
# select data that occurs before a specified date
train = dataset.time_before(datetime(2012, 8, 31), include_boundary=True)
train.to_pandas_dataframe().tail(5).reset_index(drop=True)
test = dataset.time_after(datetime(2012, 9, 1), include_boundary=True)
test.to_pandas_dataframe().head(5).reset_index(drop=True)
```
## Forecasting Parameters
To define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.
|Property|Description|
|-|-|
|**time_column_name**|The name of your time column.|
|**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).|
|**country_or_region_for_holidays**|The country/region used to generate holiday features. These should be ISO 3166 two-letter country/region codes (i.e. 'US', 'GB').|
|**target_lags**|The target_lags specifies how far back we will construct the lags of the target variable.|
|**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information.|
## Train
Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.
|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>
|**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).|
|**experiment_timeout_hours**|Experimentation timeout in hours.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|
|**compute_target**|The remote compute for training.|
|**n_cross_validations**|Number of cross validation splits.|
|**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.|
|**forecasting_parameters**|A class that holds all the forecasting related parameters.|
This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results.
### Setting forecaster maximum horizon
The forecast horizon is the number of periods into the future that the model should predict. Here, we set the horizon to 14 periods (i.e. 14 days). Notice that this is much shorter than the number of days in the test set; we will need to use a rolling test to evaluate the performance on the whole test set. For more discussion of forecast horizons and guiding principles for setting them, please see the [energy demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand).
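The rolling test idea can be sketched generically: slide the forecast origin through the test period, predicting at most `forecast_horizon` steps from each origin. Here a naive last-value forecaster stands in for the fitted AutoML model:

```python
import numpy as np

def rolling_naive_eval(series, n_test, horizon):
    """Walk the forecast origin through the test period, `horizon` steps at a time.

    Toy stand-in: the "model" just repeats the last observed value.
    Returns the absolute error for every test point.
    """
    errors = []
    for origin in range(len(series) - n_test, len(series), horizon):
        history_last = series[origin - 1]           # last value before the origin
        actuals = series[origin:origin + horizon]   # at most `horizon` targets
        preds = np.full(len(actuals), history_last)
        errors.extend(np.abs(actuals - preds))
    return np.array(errors)

series = np.arange(100, dtype=float)  # toy daily series
errors = rolling_naive_eval(series, n_test=30, horizon=14)
print(len(errors), errors.mean())
```

With a real forecaster, the model's `forecast` call (fed the history up to each origin) would replace the `np.full` line.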
```
forecast_horizon = 14
```
### Config AutoML
```
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=forecast_horizon,
country_or_region_for_holidays='US', # set country_or_region will trigger holiday featurizer
target_lags='auto', # use heuristic based lag setting
freq='D' # Set the forecast frequency to be daily
)
automl_config = AutoMLConfig(task='forecasting',
primary_metric='normalized_root_mean_squared_error',
blocked_models = ['ExtremeRandomTrees'],
experiment_timeout_hours=0.3,
training_data=train,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
n_cross_validations=3,
max_concurrent_iterations=4,
max_cores_per_iteration=-1,
verbosity=logging.INFO,
forecasting_parameters=forecasting_parameters)
```
We will now run the experiment; you can go to the Azure ML portal to view the run details.
```
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
```
### Retrieve the Best Model
Below we select the best model from all the training iterations using the get_output method.
```
best_run, fitted_model = remote_run.get_output()
fitted_model.steps
```
## Featurization
You can access the engineered feature names generated in time-series featurization. Note that a number of named holiday periods are represented. We recommend that you have at least one year of data when using this feature to ensure that all yearly holidays are captured in the training featurization.
```
fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names()
```
### View the featurization summary
You can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:
- Raw feature name
- Number of engineered features formed out of this raw feature
- Type detected
- If feature was dropped
- List of feature transformations for the raw feature
```
# Get the featurization summary as a list of JSON
featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary()
# View the featurization summary as a pandas dataframe
pd.DataFrame.from_records(featurization_summary)
```
## Evaluate
We now use the best fitted model from the AutoML Run to make forecasts for the test set. We will do batch scoring on the test dataset, which should have the same schema as the training dataset.
The scoring will run on a remote compute. In this example, it will reuse the training compute.
```
test_experiment = Experiment(ws, experiment_name + "_test")
```
### Retrieving forecasts from the model
To run the forecast on the remote compute we will use a helper script: forecasting_script. This script contains the utility methods which will be used by the remote estimator. We copy the script to the project folder so it can be uploaded to the remote compute.
```
import os
import shutil
script_folder = os.path.join(os.getcwd(), 'forecast')
os.makedirs(script_folder, exist_ok=True)
shutil.copy('forecasting_script.py', script_folder)
```
For brevity, we have created a function called run_forecast that submits the test data to the best model determined during the training run and retrieves forecasts. The test set is longer than the forecast horizon specified at train time, so the forecasting script uses a so-called rolling evaluation to generate predictions over the whole test set. A rolling evaluation iterates the forecaster over the test set, using the actuals in the test set to make lag features as needed.
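The rolling evaluation inside the forecasting script can be pictured with a short sketch. This is illustrative only — `model.forecast` and the column handling here are stand-ins, not the actual AutoML forecaster API:

```python
import pandas as pd

def rolling_forecast_sketch(model, test_df, horizon, time_col):
    """Illustrative rolling (origin-advancing) evaluation.

    `model.forecast(window)` stands in for the fitted forecaster;
    each pass predicts one horizon-length window, and the window's
    actual values then re-enter the context so lag features for the
    next origin are built from actuals rather than predictions.
    """
    predictions = []
    times = test_df[time_col].unique()
    for start in range(0, len(times), horizon):
        # slice out the next horizon-length window of the test set
        window = test_df[test_df[time_col].isin(times[start:start + horizon])]
        predictions.append(model.forecast(window))
        # actuals in `window` remain available as lag-feature context
    return pd.concat(predictions)
```

With a 14-period horizon and a longer test set, the origin advances 14 periods at a time until the whole test set is covered.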
```
from run_forecast import run_rolling_forecast
remote_run = run_rolling_forecast(test_experiment, compute_target, best_run, test, target_column_name)
remote_run
remote_run.wait_for_completion(show_output=False)
```
### Download the prediction result for metrics calculation
The test data with predictions are saved in the artifact outputs/predictions.csv. You can download it, calculate some error metrics for the forecasts, and visualize the predictions vs. the actuals.
```
remote_run.download_file('outputs/predictions.csv', 'predictions.csv')
df_all = pd.read_csv('predictions.csv')
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from sklearn.metrics import mean_absolute_error, mean_squared_error
from matplotlib import pyplot as plt
# use automl metrics module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
```
For more details on what metrics are included and how they are calculated, please refer to [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics). You could also calculate residuals, like described [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).
Since we did a rolling evaluation on the test set, we can analyze the predictions by their forecast horizon relative to the rolling origin. The model was initially trained at a forecast horizon of 14, so each prediction from the model is associated with a horizon value from 1 to 14. The horizon values are in a column named "horizon_origin" in the prediction set. For example, we can calculate some of the error metrics grouped by the horizon:
```
from metrics_helper import MAPE, APE
df_all.groupby('horizon_origin').apply(
lambda df: pd.Series({'MAPE': MAPE(df[target_column_name], df['predicted']),
'RMSE': np.sqrt(mean_squared_error(df[target_column_name], df['predicted'])),
'MAE': mean_absolute_error(df[target_column_name], df['predicted'])}))
```
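The MAPE and APE helpers are imported from metrics_helper, which isn't shown here; they are presumably defined along these lines (a sketch, not the file's actual contents):

```python
import numpy as np

def APE(actual, pred):
    """Absolute percentage error per observation, in percent.

    Note: undefined where actual == 0, which is why small actual
    values can blow up the aggregate MAPE.
    """
    actual = np.asarray(actual, dtype=float)
    pred = np.asarray(pred, dtype=float)
    return 100 * np.abs(actual - pred) / np.abs(actual)

def MAPE(actual, pred):
    """Mean absolute percentage error, in percent."""
    return np.mean(APE(actual, pred))
```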
To drill down more, we can look at the distributions of APE (absolute percentage error) by horizon. From the chart, it is clear that the overall MAPE is being skewed by one particular point where the actual value is of small absolute value.
```
df_all_APE = df_all.assign(APE=APE(df_all[target_column_name], df_all['predicted']))
APEs = [df_all_APE[df_all['horizon_origin'] == h].APE.values for h in range(1, forecast_horizon + 1)]
%matplotlib inline
plt.boxplot(APEs)
plt.yscale('log')
plt.xlabel('horizon')
plt.ylabel('APE (%)')
plt.title('Absolute Percentage Errors by Forecast Horizon')
plt.show()
```
# Inference in the full model
This is the same example as considered in [Liu et al.](https://arxiv.org/abs/1801.09037) though we
do not consider the special analysis in that paper. We let the computer
guide us in correcting for selection.
The functions `full_model_inference` and `pivot_plot` below are just simulation utilities
used to simulate results in least squares regression. The underlying functionality
is contained in the function `selectinf.learning.core.infer_full_target`.
```
import functools
import numpy as np, pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import regreg.api as rr
from selectinf.tests.instance import gaussian_instance # to generate the data
from selectinf.learning.core import normal_sampler # our representation of the (limiting) Gaussian data
from selectinf.learning.utils import full_model_inference, pivot_plot
from selectinf.learning.fitters import gbm_fit_sk
```
We will now generate some data from an OLS regression model and fit the LASSO
with a fixed value of $\lambda$. In the simulation world, we know the
true parameters, hence we can then return
pivots for each variable selected by the LASSO. These pivots should look
(marginally) like a draw from `np.random.sample`. This is the plot below.
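One quick way to check that the pivots look uniform — a diagnostic of my own, not part of the selectinf package — is a Kolmogorov–Smirnov test against the uniform distribution:

```python
import numpy as np
from scipy.stats import kstest

def check_uniform(pivots, alpha=0.05):
    """KS test of the pivots against Uniform(0, 1).

    Returns True when the test does *not* reject uniformity at
    level `alpha`, i.e. the pivots are consistent with a draw
    from np.random.sample.
    """
    stat, pvalue = kstest(np.asarray(pivots), 'uniform')
    return pvalue > alpha
```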
```
np.random.seed(0) # for replicability
def simulate(n=200,
p=20,
s=5,
signal=(0.5, 1),
sigma=2,
alpha=0.1,
B=6000,
verbose=False):
# description of statistical problem
X, y, truth = gaussian_instance(n=n,
p=p,
s=s,
equicorrelated=False,
rho=0.5,
sigma=sigma,
signal=signal,
random_signs=True,
scale=False)[:3]
XTX = X.T.dot(X)
XTXi = np.linalg.inv(XTX)
resid = y - X.dot(XTXi.dot(X.T.dot(y)))
dispersion = np.linalg.norm(resid)**2 / (n-p)
S = X.T.dot(y)
covS = dispersion * X.T.dot(X)
# this declares our target as linear in S where S has a given covariance
sampler = normal_sampler(S, covS)
def base_algorithm(XTX, lam, sampler):
p = XTX.shape[0]
success = np.zeros(p)
loss = rr.quadratic_loss((p,), Q=XTX)
pen = rr.l1norm(p, lagrange=lam)
scale = 0.
noisy_S = sampler(scale=scale)
loss.quadratic = rr.identity_quadratic(0, 0, -noisy_S, 0)
problem = rr.simple_problem(loss, pen)
soln = problem.solve(max_its=100, tol=1.e-10)
success += soln != 0
return set(np.nonzero(success)[0])
lam = 3.5 * np.sqrt(n)
selection_algorithm = functools.partial(base_algorithm, XTX, lam)
if verbose:
print(selection_algorithm(sampler))
# run selection algorithm
return full_model_inference(X,
y,
truth,
selection_algorithm,
sampler,
success_params=(1, 1),
B=B,
fit_probability=gbm_fit_sk,
fit_args={'n_estimators':500})
```
Let's take a look at what we get as a return value:
```
while True:
df = simulate(verbose=True)
if df is not None:
break
df.columns
dfs = []
for i in range(30):
df = simulate()
if df is not None:
dfs.append(df)
fig = plt.figure(figsize=(8, 8))
results = pd.concat(dfs)
pivot_plot(results, fig=fig);
```
```
import xgboost as xgb
from sklearn.metrics import mean_squared_error
import pandas as pd
import numpy as np
from pathlib import Path
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.model_selection import train_test_split, RandomizedSearchCV, PredefinedSplit
from sklearn.feature_extraction import DictVectorizer
from scipy.stats import uniform, randint
from matplotlib import pyplot as plt
import re
from tqdm import tqdm
from helpers import *
# path to project directory
path = Path('./')
# read in training dataset
train_df = pd.read_csv(path/'data/train_v7.csv', index_col=0, dtype={'season':str,
'squad':str,
'comp':str})
# for experimenting with expected stats
# only available from 17/18 season
# train_df = train_df[train_df['season'] != '1617']
```
## XGBoost model
XGBoost is another ensemble tree-based predictive algorithm that performs well across a range of applications. Preparation of the data is very similar to the random forest approach.
Again this is a time series problem, where metrics from recent time periods can be predictive, so we need to add in window features (e.g. points scored last gameweek).
```
# add a bunch of player lag features
lag_train_df, team_lag_vars = team_lag_features(train_df, ['total_points'], ['all', 1, 2, 3, 4, 5, 10])
lag_train_df, player_lag_vars = player_lag_features(lag_train_df, ['total_points'], ['all', 1, 2, 3, 4, 5, 10])
team_lag_vars
```
Again we have introduced the lag (window) features for each player's points per game, their team's points per game and the opposition team's points per game over the previous 1, 2, 3, 4, 5, 10 and all gameweeks.
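The player_lag_features helper lives in helpers.py and isn't shown here, but its core idea can be sketched with pandas: shift by one gameweek so the current row never sees its own result, then take a rolling mean (function and column names here are illustrative, not the helper's actual implementation):

```python
import pandas as pd

def add_player_lag(df, metric, windows):
    """Per-player lagged rolling means of `metric`.

    shift(1) excludes the current gameweek, so each feature only
    uses information available before the match being predicted.
    """
    grouped = df.groupby('player')[metric]
    for w in windows:
        col = f'{metric}_pg_last_{w}'
        df[col] = grouped.transform(
            lambda s: s.shift(1).rolling(w, min_periods=1).mean())
    return df
```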
Next we can again do everything we need to train the model, using a function to make things cleaner for the experimentation below. We do the following:
- Set the validation point and length as well as the categorical and continuous features (additional to the player and team continuous features set above) we'll be using to predict the dependent variable, total points for each game
- Various operations for ordering fields and setting categories
- Use the create_lag_train function to get our training set (including appropriate lag values in the validation set)
- Build the input (X) and dependent (y) variable datasets. Again this includes encoding the categorical features so that each level is represented in its own column (e.g. position_1, position_2, etc.)
- Split out training and validation datasets, limiting validation to rows with >0 minutes
- Return the training sets for use with the XGBoost API, and validation sets
```
# set validaton point/length and categorical/continuous variables
valid_season = '1920'
valid_gw = 20
valid_len = 6
cat_vars = ['season', 'position', 'was_home']
cont_vars = ['gw', 'minutes']
dep_var = ['total_points']
def xgb_data(lag_train_df, cat_vars, cont_vars, player_lag_vars, team_lag_vars, dep_var,
valid_season, valid_gw, valid_len):
# we want to set gw and season as ordered categorical variables
# need lists with ordered categories
ordered_gws = list(range(1,39))
ordered_seasons = ['1617', '1718', '1819', '1920', '2021']
# set as categories with correct order
lag_train_df['gw'] = lag_train_df['gw'].astype('category')
lag_train_df['season'] = lag_train_df['season'].astype('category')
lag_train_df['gw'].cat.set_categories(ordered_gws, ordered=True, inplace=True)
lag_train_df['season'].cat.set_categories(ordered_seasons, ordered=True, inplace=True)
# create dataset with adjusted post-validation lag numbers
train_valid_df, train_idx, valid_idx = create_lag_train(lag_train_df, cat_vars, cont_vars,
player_lag_vars, team_lag_vars, dep_var,
valid_season, valid_gw, valid_len)
# split out dependent variable
X, y = train_valid_df[cat_vars + cont_vars + player_lag_vars + team_lag_vars].copy(), train_valid_df[dep_var].copy()
# since position is categorical, it should be a string
X['position'] = X['position'].apply(str)
# need to transform season
enc = LabelEncoder()
X['season'] = enc.fit_transform(X['season'])
X_dict = X.to_dict("records")
# Create the DictVectorizer object: dv
dv = DictVectorizer(sparse=False, separator='_')
# Apply dv on df: df_encoded
X_encoded = dv.fit_transform(X_dict)
X_df = pd.DataFrame(X_encoded, columns=dv.feature_names_)
# split out training and validation sets
X_train = X_df.loc[train_idx]
# train_mask = X_train['minutes_last_all'] > 0
# X_train = X_train[train_mask]
y_train = y.loc[train_idx]#[train_mask]
X_test = X_df.loc[valid_idx]
# we only want look at rows with >0 minutes (i.e. the player played)
test_mask = (X_test['minutes'] > 0) #& (X_test['total_points_pg_last_38'] > 4)
X_test = X_test[test_mask]
y_test = y.loc[valid_idx][test_mask]
return X_train, y_train, X_test, y_test
```
We'll also define a function to instantiate and fit an XGBoost regressor object with some set parameters.
```
def xg(xs, y, objective="reg:squarederror", gamma=0.35, learning_rate=0.08, max_depth=4,
       n_estimators=100, subsample=0.87, **kwargs):
    # pass any extra keyword arguments through to the regressor
    return xgb.XGBRegressor(n_jobs=-1, objective=objective, gamma=gamma, learning_rate=learning_rate,
                            max_depth=max_depth, n_estimators=n_estimators, subsample=subsample,
                            **kwargs).fit(xs, y)
```
Now let's train a model and look at the performance
```
X_train, y_train, X_test, y_test = xgb_data(lag_train_df, cat_vars, cont_vars, player_lag_vars,
team_lag_vars, dep_var, valid_season, valid_gw, valid_len)
m = xg(X_train, y_train)
preds = m.predict(X_test)
print("RMSE: %f" % (r_mse(preds, y_test['total_points'])))
print("MAE: %f" % mae(preds, y_test['total_points']))
```
This is another clear improvement on the previous best (random forest), but perhaps it can be improved by doing a parameter search.
To do this we will first define the grid of parameters to be searched.
```
# parameter search space
params = {#"colsample_bytree": uniform(0.7, 0.3),
"gamma": uniform(0.3, 0.05),
"learning_rate": uniform(0.08, 0.04), # default 0.1
"max_depth": randint(3, 6), # default 3
"n_estimators": randint(25, 150), # default 100
"subsample": uniform(0.7, 0.2)}
```
In this case we will pass both train and validation parts of the dataset, along with a series telling the XGBRegressor object which rows to use for training, and which for validation. We'll do this by recombining the train and test sets and creating a predefined split array to tell the search object what is what.
We can then again instantiate the XGBRegressor object, but this time pass it to a randomised search validation object, along with the parameter grid, validation splits, and number of iterations we want to run.
We fit this to the training data - 100 random parameter selections will be made and the best parameters for the validation set can be found (takes about 30 minutes to run for me).
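The -1/0 convention passed to PredefinedSplit is worth a minimal standalone example (independent of the FPL data): rows labelled -1 stay in training for every split, and each non-negative label defines one validation fold.

```python
import numpy as np
from sklearn.model_selection import PredefinedSplit

# -1 -> always in training; 0 -> validation fold 0
test_fold = np.array([-1, -1, -1, -1, 0, 0])
ps = PredefinedSplit(test_fold)

train_idx, valid_idx = next(ps.split())
print(ps.get_n_splits())  # 1: a single train/validation split
print(train_idx)          # [0 1 2 3]
print(valid_idx)          # [4 5]
```

Because there is only one fold label, the "search" evaluates every parameter draw against our fixed validation window rather than doing k-fold cross-validation.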
```
X_train, y_train, X_test, y_test = xgb_data(lag_train_df, cat_vars, cont_vars, player_lag_vars,
team_lag_vars, dep_var, valid_season, valid_gw, valid_len)
X_train['ps'] = -1
X_test['ps'] = 0
X_train_search = pd.concat([X_train, X_test])
y_train_search = pd.concat([y_train, y_test])
ps = PredefinedSplit(X_train_search['ps'])
X_train.drop(['ps'], axis=1, inplace=True)
X_test.drop(['ps'], axis=1, inplace=True)
X_train_search.drop(['ps'], axis=1, inplace=True)
# Instantiate the regressor: gbm
gbm = xgb.XGBRegressor(objective="reg:squarederror")
# Perform random search: grid_mse
randomized_mse = RandomizedSearchCV(estimator=gbm,
param_distributions=params,
scoring="neg_mean_squared_error",
n_iter=100,
cv=ps,
verbose=1)
# Fit randomized_mse to the data
randomized_mse.fit(X_train_search, y_train_search)
# Print the best parameters and lowest RMSE
print("Best parameters found: ", randomized_mse.best_params_)
print("Lowest RMSE found: ", np.sqrt(np.abs(randomized_mse.best_score_)))
# rerun it with the new parameters to get the mae too
X_train, y_train, X_test, y_test = xgb_data(lag_train_df, cat_vars, cont_vars, player_lag_vars,
team_lag_vars, dep_var, valid_season, valid_gw, valid_len)
m = xg(X_train, y_train, gamma=0.325, learning_rate=0.105,
max_depth=4, n_estimators=133, subsample=0.76)
preds = m.predict(X_test)
print("RMSE: %f" % (r_mse(preds, y_test['total_points'])))
print("MAE: %f" % mae(preds, y_test['total_points']))
```
Another nice improvement, but perhaps we can do better by adding additional data.
## Expected goals
The problem with goals is that they are actually quite rare - just 2 to 3 per game on average - and don't come evenly. A good team or player can apparently under-perform in the long term, even though they are playing well and creating chances. They can just be unlucky (despite what pundits might make you believe). To get around this we'll introduce expected goals as a metric (see https://fbref.com/en/expected-goals-model-explained/ for an explanation), specifically for teams in the more recent past.
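To see why raw goals are noisy, compare a team's cumulative goals with its cumulative xG. This is a sketch on made-up numbers (not the actual training data):

```python
import pandas as pd

# illustrative per-match data for one hypothetical team
matches = pd.DataFrame({
    'goals': [0, 1, 0, 2, 0, 1],
    'xg':    [1.4, 1.1, 1.8, 1.2, 1.6, 1.3],
})
# a team can trail its xG for long stretches purely through
# finishing variance; the lag features use xG to smooth this out
matches['cum_goals'] = matches['goals'].cumsum()
matches['cum_xg'] = matches['xg'].cumsum()
print(matches[['cum_goals', 'cum_xg']].tail(1))
```

Here the team has scored 4 goals from roughly 8.4 expected, so goal-based features would rate it much worse than its chance creation suggests.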
```
# add a bunch of player lag features
lag_train_df, team_lag_vars = team_lag_features(train_df, ['total_points', 'xg'], [5, 10, 20])
lag_train_df, player_lag_vars = player_lag_features(lag_train_df, ['total_points'],
[3, 5, 10, 20, 38, 'all'])
```
We're interested in the recent number of expected goals for the team and opposition, but it will also be useful to know how many expected goals they've been conceding too. This is calculated in the team_lag_features function, where we've used 5, 10 and 20 weeks for the lag period. We just need to add them to our team_lag_vars.
```
# manually add team pg conceded fields for xg
team_lag_vars += [x.replace('team', 'team_conceded') for x in team_lag_vars if 'xg' in x]
# team_lag_vars += [x.replace('team', 'team_conceded') for x in team_lag_vars if 'total_points_team' in x]
```
Using the same model parameters as previously, let's see if these additional features make a difference.
```
X_train, y_train, X_test, y_test = xgb_data(lag_train_df, cat_vars, cont_vars, player_lag_vars,
team_lag_vars, dep_var, valid_season, valid_gw, valid_len)
m = xg(X_train, y_train, gamma=0.325, learning_rate=0.105,
max_depth=4, n_estimators=133, subsample=0.76)
preds = m.predict(X_test)
print("RMSE: %f" % (r_mse(preds, y_test['total_points'])))
print("MAE: %f" % mae(preds, y_test['total_points']))
X_train, y_train, X_test, y_test = xgb_data(lag_train_df, cat_vars, cont_vars, player_lag_vars,
team_lag_vars, dep_var, valid_season, valid_gw, valid_len)
X_train['ps'] = -1
X_test['ps'] = 0
X_train_search = pd.concat([X_train, X_test])
y_train_search = pd.concat([y_train, y_test])
ps = PredefinedSplit(X_train_search['ps'])
X_train.drop(['ps'], axis=1, inplace=True)
X_test.drop(['ps'], axis=1, inplace=True)
X_train_search.drop(['ps'], axis=1, inplace=True)
# Instantiate the regressor: gbm
gbm = xgb.XGBRegressor(objective="reg:squarederror")
# Perform random search: grid_mse
randomized_mse = RandomizedSearchCV(estimator=gbm,
param_distributions=params,
scoring="neg_mean_squared_error",
n_iter=100,
cv=ps,
verbose=1)
# Fit randomized_mse to the data
randomized_mse.fit(X_train_search, y_train_search)
# Print the best parameters and lowest RMSE
print("Best parameters found: ", randomized_mse.best_params_)
print("Lowest RMSE found: ", np.sqrt(np.abs(randomized_mse.best_score_)))
X_train, y_train, X_test, y_test = xgb_data(lag_train_df, cat_vars, cont_vars, player_lag_vars,
team_lag_vars, dep_var, valid_season, valid_gw, valid_len)
m = xg(X_train, y_train, gamma=0.321, learning_rate=0.0882,
max_depth=5, n_estimators=70, subsample=0.859)
preds = m.predict(X_test)
print("RMSE: %f" % (r_mse(preds, y_test['total_points'])))
print("MAE: %f" % mae(preds, y_test['total_points']))
```
Seems to be improving performance some more, at least for the point of the season that we're using for validation.
The XGBoost package also allows us to look at features importance, just like with Random Forest in sklearn.
```
xgb.plot_importance(m, max_num_features=25)
plt.show()
```
Looks like we're getting a few of the expected goal features in there, which may account for the apparent improvement in performance.
Now we can go ahead and run validation across the whole 2019/20 season.
```
def xgb_season(df, cat_vars, cont_vars, player_lag_vars, team_lag_vars,
dep_var, model_params, valid_season='1920'):
# empty list for scores
scores_rmse = []
scores_mae = []
valid_len = 6
for valid_gw in tqdm(range(1,34)):
X_train, y_train, X_test, y_test = xgb_data(df, cat_vars, cont_vars,
player_lag_vars, team_lag_vars, dep_var,
valid_season, valid_gw, valid_len)
m = xg(X_train, y_train, gamma=model_params['gamma'], learning_rate=model_params['learning_rate'],
max_depth=model_params['max_depth'], n_estimators=model_params['n_estimators'],
subsample=model_params['subsample'])
preds = m.predict(X_test)
gw_rmse = r_mse(preds, y_test['total_points'])
gw_mae = mae(preds, y_test['total_points'])
# print("GW%d RMSE: %f MAE: %f" % (valid_gw, gw_rmse, gw_mae))
scores_rmse.append(gw_rmse)
scores_mae.append(gw_mae)
print('Season RMSE: %f' % np.mean(scores_rmse))
print('Season MAE: %f' % np.mean(scores_mae))
return [scores_rmse, scores_mae]
lag_train_df, team_lag_vars = team_lag_features(train_df, ['total_points', 'xg'], [5, 10, 20])
lag_train_df, player_lag_vars = player_lag_features(lag_train_df, ['total_points'], [3, 5, 10, 20, 38, 'all'])
# manually add team pg conceded fields for xg
team_lag_vars += [x.replace('team', 'team_conceded') for x in team_lag_vars if 'xg' in x]
model_params = {"gamma": 0.321,
"learning_rate": 0.0882,
"max_depth": 5,
"n_estimators": 70,
"subsample": 0.859}
scores = xgb_season(lag_train_df, cat_vars, cont_vars, player_lag_vars, team_lag_vars, dep_var, model_params)
plt.plot(scores[1])
plt.ylabel('GW MAE')
plt.xlabel('GW')
plt.text(15, 1.95, 'Season Avg MAE: %.2f' % np.mean(scores[1]), bbox={'facecolor':'white', 'alpha':1, 'pad':5})
plt.show()
```
It appears that the performance improvement is sustained across the whole season, which is good to see.
Again, we'll add these scores to the comparison dataset.
```
model_validation_scores = pd.read_csv(path/'charts/model_validation_scores.csv', index_col=0)
model_validation_scores['xgboost'] = scores[1]
model_validation_scores.to_csv(path/'charts/model_validation_scores.csv')
```
Finally, in the next notebook we'll move away from tree-based algorithms and try a neural network with embeddings. (Below are my workings trying to find the best inputs for the model).
### Model development notes
Weirdly, the model performs worse when I remove the rows with 0 minutes from training (removing them after I've calculated all the lag values of course), so I keep them in.
```
# for experimenting with team points split by position
# train_df['total_points_def'] = ((train_df['position'] == 1) | (train_df['position'] == 2)).astype(int) * train_df['total_points']
# train_df['total_points_mid'] = (train_df['position'] == 3).astype(int) * train_df['total_points']
# train_df['total_points_fwd'] = (train_df['position'] == 4).astype(int) * train_df['total_points']
# for experimenting with expected stats
# only available from 17/18 season
# train_df = train_df[train_df['season'] != '1617']
# original non-xg model (hmmm, seems to be better...)
lag_train_df, team_lag_vars = team_lag_features(train_df, ['total_points'], ['all', 1, 2, 3, 4, 5, 10])
lag_train_df, player_lag_vars = player_lag_features(lag_train_df, ['total_points'], ['all', 1, 2, 3, 4, 5, 10])
model_params = {"gamma": 0.42,
"learning_rate": 0.047,
"max_depth": 4,
"n_estimators": 171,
"subsample": 0.6}
scores = xgb_season(lag_train_df, cat_vars, cont_vars, player_lag_vars, team_lag_vars, dep_var, model_params)
# original model with above params
lag_train_df, team_lag_vars = team_lag_features(train_df, ['total_points'], ['all', 1, 2, 3, 4, 5, 10])
lag_train_df, player_lag_vars = player_lag_features(lag_train_df, ['total_points'], ['all', 1, 2, 3, 4, 5, 10])
model_params = {"gamma": 0.325,
"learning_rate": 0.105,
"max_depth": 4,
"n_estimators": 133,
"subsample": 0.76}
scores = xgb_season(lag_train_df, cat_vars, cont_vars, player_lag_vars, team_lag_vars, dep_var, model_params)
# try it with total points for the team and total points conceded for the opposition - bit of improvement
lag_train_df, team_lag_vars = team_lag_features(train_df, ['total_points'], ['all', 1, 2, 3, 4, 5, 10])
lag_train_df, player_lag_vars = player_lag_features(lag_train_df, ['total_points'], ['all', 1, 2, 3, 4, 5, 10])
# add team conceded fields
pattern = re.compile('total_points_team_pg_last_.*_opponent')
team_lag_vars += [x.replace('team', 'team_conceded') for x in team_lag_vars if pattern.match(x)]
# but remove them for the player's team
pattern = re.compile('total_points_team_pg_last_.*_opponent')
team_lag_vars = [x for x in team_lag_vars if not pattern.match(x)]
print(team_lag_vars)
model_params = {"gamma": 0.42,
"learning_rate": 0.047,
"max_depth": 4,
"n_estimators": 171,
"subsample": 0.6}
scores = xgb_season(lag_train_df, cat_vars, cont_vars, player_lag_vars, team_lag_vars, dep_var, model_params)
plt.plot(scores[1])
plt.ylabel('GW MAE')
plt.xlabel('GW')
plt.text(15, 1.95, 'Season Avg MAE: %.2f' % np.mean(scores[1]), bbox={'facecolor':'white', 'alpha':1, 'pad':5})
plt.show()
model_validation_scores = pd.read_csv(path/'charts/model_validation_scores.csv', index_col=0)
model_validation_scores['xgboost'] = scores[1]
model_validation_scores.to_csv(path/'charts/model_validation_scores.csv')
# try it with total points for the team and total points conceded for the opposition
# but change up the lags a bit - improves little bit
lag_train_df, team_lag_vars = team_lag_features(train_df, ['total_points'], ['all', 3, 5, 10, 20])
lag_train_df, player_lag_vars = player_lag_features(lag_train_df, ['total_points'], ['all', 1, 2, 3, 4, 5, 10])
# add team conceded fields
pattern = re.compile('total_points_team_pg_last_.*_opponent')
team_lag_vars += [x.replace('team', 'team_conceded') for x in team_lag_vars if pattern.match(x)]
# but remove them for the player's team
pattern = re.compile('total_points_team_pg_last_.*_opponent')
team_lag_vars = [x for x in team_lag_vars if not pattern.match(x)]
print(team_lag_vars)
model_params = {"gamma": 0.42,
"learning_rate": 0.047,
"max_depth": 4,
"n_estimators": 171,
"subsample": 0.6}
scores = xgb_season(lag_train_df, cat_vars, cont_vars, player_lag_vars, team_lag_vars, dep_var, model_params)
plt.plot(scores[1])
plt.ylabel('GW MAE')
plt.xlabel('GW')
plt.text(15, 1.95, 'Season Avg MAE: %.2f' % np.mean(scores[1]), bbox={'facecolor':'white', 'alpha':1, 'pad':5})
plt.show()
model_validation_scores = pd.read_csv(path/'charts/model_validation_scores.csv', index_col=0)
model_validation_scores['xgboost'] = scores[1]
model_validation_scores.to_csv(path/'charts/model_validation_scores.csv')
X_train, y_train, X_test, y_test = xgb_data(lag_train_df, cat_vars, cont_vars, player_lag_vars,
team_lag_vars, dep_var, valid_season, valid_gw, valid_len)
m = xg(X_train, y_train, gamma=0.42,
learning_rate=0.047,
max_depth=4,
n_estimators=171,
subsample=0.6)
preds = m.predict(X_test)
print("RMSE: %f" % (r_mse(preds, y_test['total_points'])))
print("MAE: %f" % mae(preds, y_test['total_points']))
xgb.plot_importance(m, max_num_features=25)
plt.show()
# xg (including conceded) and points for teams
preds = xg_reg.predict(X_test)
print("RMSE: %f" % (r_mse(preds, y_test['total_points'])))
print("MAE: %f" % mae(preds, y_test['total_points']))
# xg conceded only and points for teams
preds = xg_reg.predict(X_test)
print("RMSE: %f" % (r_mse(preds, y_test['total_points'])))
print("MAE: %f" % mae(preds, y_test['total_points']))
# xg (including conceded) and points (including conceded) for teams
preds = xg_reg.predict(X_test)
print("RMSE: %f" % (r_mse(preds, y_test['total_points'])))
print("MAE: %f" % mae(preds, y_test['total_points']))
# only points (including conceded for teams)
preds = xg_reg.predict(X_test)
print("RMSE: %f" % (r_mse(preds, y_test['total_points'])))
print("MAE: %f" % mae(preds, y_test['total_points']))
# xg (including conceded) and points (excluding opponent) for teams
preds = xg_reg.predict(X_test)
print("RMSE: %f" % (r_mse(preds, y_test['total_points'])))
print("MAE: %f" % mae(preds, y_test['total_points']))
# includes 0 minute rows
# validation only >0 minute rows
preds = xg_reg.predict(X_test)
print("RMSE: %f" % (r_mse(preds, y_test['total_points'])))
print("MAE: %f" % mae(preds, y_test['total_points']))
# excludes 0 minute all history rows
# validation only >0 minute rows
preds = xg_reg.predict(X_test)
print("RMSE: %f" % (r_mse(preds, y_test['total_points'])))
print("MAE: %f" % mae(preds, y_test['total_points']))
for i in range(1,5):
position = 'position_' + str(i)
mask = X_test[position] == 1
print(r_mse(xg_reg.predict(X_test[mask]), y_test[mask]['total_points']))
mask = (X_test['total_points_pg_last_10'] > X_test['total_points_pg_last_10'].mean() + 2)
sum(X_test['total_points_pg_last_10'] > X_test['total_points_pg_last_10'].mean())/6
len(X_test)/6
# lag_train_df[['season', 'gw', 'team', 'xg_team_conceded_pg_last_10']][(lag_train_df['season'] == '2021') & (lag_train_df['gw'] == 15)].drop_duplicates().sort_values('xg_team_conceded_pg_last_10')
# train_df[(train_df['season'] == '2021')][['team', 'xg']].groupby(['team']).sum()
# train_df[(train_df['team'] == 'Burnley') & (train_df['season'] == '2021')][['player', 'xg']].groupby(['player']).sum()
# train_df[(train_df['team'] == 'Burnley') & (train_df['season'] == '2021') & (train_df['player'] == 'Charlie Taylor')][['gw', 'xg']]
```
```
import gym
import numpy as np
from stable_baselines.sac.policies import MlpPolicy as MlpPolicy_SAC
from stable_baselines import SAC
from citylearn import CityLearn
import matplotlib.pyplot as plt
from pathlib import Path
import time
# Central agent controlling one of the buildings using the OpenAI Stable Baselines
climate_zone = 1
data_path = Path("data/Climate_Zone_"+str(climate_zone))
building_attributes = data_path / 'building_attributes.json'
weather_file = data_path / 'weather_data.csv'
solar_profile = data_path / 'solar_generation_1kW.csv'
building_state_actions = 'buildings_state_action_space.json'
building_ids = ['Building_3']
objective_function = ['ramping','1-load_factor','average_daily_peak','peak_demand','net_electricity_consumption','quadratic']
env = CityLearn(data_path, building_attributes, weather_file, solar_profile, building_ids, buildings_states_actions = building_state_actions, cost_function = objective_function, central_agent = True, verbose = 1)
model = SAC(MlpPolicy_SAC, env, verbose=0, learning_rate=0.01, gamma=0.99, tau=3e-4, batch_size=2048, learning_starts=8759)
start = time.time()
model.learn(total_timesteps=8760*7, log_interval=1000)
print(time.time()-start)
obs = env.reset()
dones = False
counter = []
while dones == False:
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    counter.append(rewards)
env.cost()
# Plotting a full year of operation
interval = range(0,8759)
plt.figure(figsize=(16,5))
plt.plot(env.net_electric_consumption_no_pv_no_storage[interval])
plt.plot(env.net_electric_consumption_no_storage[interval])
plt.plot(env.net_electric_consumption[interval], '--')
plt.xlabel('time (hours)')
plt.ylabel('kW')
plt.legend(['Electricity demand without storage or generation (kW)', 'Electricity demand with PV generation and without storage (kW)', 'Electricity demand with PV generation and using SAC for storage (kW)'])
# Central agent controlling all the buildings using the OpenAI Stable Baselines
climate_zone = 1
data_path = Path("data/Climate_Zone_"+str(climate_zone))
building_attributes = data_path / 'building_attributes.json'
weather_file = data_path / 'weather_data.csv'
solar_profile = data_path / 'solar_generation_1kW.csv'
building_state_actions = 'buildings_state_action_space.json'
building_ids = ['Building_1',"Building_2","Building_3","Building_4","Building_5","Building_6","Building_7","Building_8","Building_9"]
objective_function = ['ramping','1-load_factor','average_daily_peak','peak_demand','net_electricity_consumption']
env = CityLearn(data_path, building_attributes, weather_file, solar_profile, building_ids, buildings_states_actions = building_state_actions, cost_function = objective_function, central_agent = True, verbose = 1)
# model = SAC(MlpPolicy_SAC, env, verbose=0, gamma=0.985, learning_rate=0.01, learning_starts=8759)
model = SAC(MlpPolicy_SAC, env, verbose=0, learning_rate=0.01, gamma=0.985, batch_size=2048, learning_starts=8759)
start = time.time()
model.learn(total_timesteps=8760*5, log_interval=1000)
print(time.time()-start)
obs = env.reset()
dones = False
counter = []
while dones == False:
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    counter.append(rewards)
env.cost()
# Plotting winter operation
interval = range(5000,5200)
plt.figure(figsize=(16,5))
plt.plot(env.net_electric_consumption_no_pv_no_storage[interval])
plt.plot(env.net_electric_consumption_no_storage[interval])
plt.plot(env.net_electric_consumption[interval], '--')
plt.xlabel('time (hours)')
plt.ylabel('kW')
plt.legend(['Electricity demand without storage or generation (kW)', 'Electricity demand with PV generation and without storage (kW)', 'Electricity demand with PV generation and using SAC for storage (kW)'])
# model = SAC(MlpPolicy_SAC, env, verbose=0, gamma=0.985, learning_rate=0.01, learning_starts=8759)
model = SAC(MlpPolicy_SAC, env, verbose=0, learning_rate=0.01, gamma=0.99, tau=3e-4, batch_size=2048, learning_starts=8759)
start = time.time()
model.learn(total_timesteps=8760*4, log_interval=1000)
print(time.time()-start)
obs = env.reset()
dones = False
counter = []
while dones == False:
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    counter.append(rewards)
env.cost()
# Plotting winter operation
interval = range(5000,5200)
plt.figure(figsize=(16,5))
plt.plot(env.net_electric_consumption_no_pv_no_storage[interval])
plt.plot(env.net_electric_consumption_no_storage[interval])
plt.plot(env.net_electric_consumption[interval], '--')
plt.xlabel('time (hours)')
plt.ylabel('kW')
plt.legend(['Electricity demand without storage or generation (kW)', 'Electricity demand with PV generation and without storage (kW)', 'Electricity demand with PV generation and using SAC for storage (kW)'])
```
```
import matplotlib.pyplot as plt
import numpy as np
from sklearn import svm
import pandas as pd
from scipy.stats import multivariate_normal
import seaborn as sns
plt.style.use('ggplot')
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_predict
from sklearn.model_selection import KFold, StratifiedKFold
from sklearn.metrics import f1_score, confusion_matrix
random_state = 42
```
## Data Loading
```
#training data
train_df = pd.read_csv('tr_server_data.csv')
print('data shape', train_df.shape)
train_df.head()
# CV data: labeled data used to select the detection threshold
cv_df = pd.read_csv('cv_server_data.csv')
print('data shape', cv_df.shape)
cv_df.head()
# CV ground-truth labels: 0 for normal, 1 for outlier
result_df = pd.read_csv('gt_server_data.csv')
print('data shape', result_df.shape)
result_df.head()
result_df['results'].value_counts()
```
### Test Data
```
plt.figure(figsize= (10,6))
plt.plot(train_df['Latency'], train_df['Throughput'], 'bx' )
plt.xlabel('Latency (ms)')
plt.ylabel('Throughput (mb/s)')
plt.show()
```
### Training data
```
normal = cv_df.iloc[result_df.index[result_df['results'] == 0]]
print(normal.shape)
abnormal = cv_df.iloc[result_df.index[result_df['results'] == 1]]
plt.figure(figsize= (10,6))
plt.plot(normal['Latency'], normal['Throughput'], 'bx' )
plt.plot(abnormal['Latency'], abnormal['Throughput'], 'ro' )
plt.xlabel('Latency (ms)')
plt.ylabel('Throughput (mb/s)')
plt.show()
```
### Using Support Vector Machine
We will use only the training data, so that the SVM finds the outliers without any cross-validation data (unsupervised learning).
```
# nu is an upper bound on the fraction of training errors (roughly, the expected fraction of outliers)
clf = svm.OneClassSVM(nu = 0.02, kernel = 'rbf', gamma = 0.01)
clf.fit(train_df)
pred = clf.predict(train_df)
#1 means normal, -1 means abnormal
normal = train_df[pred == 1]
abnormal = train_df[pred == -1]
plt.figure(figsize= (10,6))
plt.plot(normal['Latency'], normal['Throughput'], 'bx' )
plt.plot(abnormal['Latency'], abnormal['Throughput'], 'ro' )
plt.xlabel('Latency (ms)')
plt.ylabel('Throughput (mb/s)')
plt.show()
```
### Using multivariateGaussian
```
# calculate mean
tr_data = train_df.values
mu = np.mean(tr_data, axis = 0)
mu
# calculate Sigma
sigma = np.cov(tr_data.T)
sigma
# Gaussian distribution for the training data
p = multivariate_normal(mean=mu, cov=sigma)
p_tr = p.pdf(tr_data)
# Gaussian distribution for the CV data
p = multivariate_normal(mean=mu, cov=sigma)
p_cv = p.pdf(cv_df.values)
p_cv.shape
# sns.distplot(p_cv, hist=False, rug=True)
# plt.show()
# Find the threshold epsilon that maximizes F1 on the CV data
best_epsilon = 0
best_f1 = 0
f = 0
stepsize = (max(p_cv) - min(p_cv)) / 1000
print('stepsize', stepsize)
epsilons = np.arange(min(p_cv), max(p_cv), stepsize)
for epsilon in np.nditer(epsilons):
    predictions = (p_cv < epsilon)
    f = f1_score(result_df.values, predictions, average="binary")
    if f > best_f1:
        best_f1 = f
        best_epsilon = epsilon
fscore, ep = best_f1, best_epsilon
outliers = np.asarray(np.where(p_tr < ep))
plt.figure(figsize= (10,6))
plt.xlabel("Latency (ms)")
plt.ylabel("Throughput (mb/s)")
plt.plot(tr_data[:,0],tr_data[:,1],"bx")
plt.plot(tr_data[outliers,0],tr_data[outliers,1],"ro")
plt.show()
```
### Using Supervised Approach
```
def random_search(clf, param_dist, X, y):
    grid_clf = RandomizedSearchCV(clf,
                                  param_distributions=param_dist,
                                  cv=5,
                                  n_iter=50,
                                  verbose=2,
                                  scoring='accuracy',
                                  random_state=42,
                                  n_jobs=4)
    grid_clf.fit(X, y)
    print('Shape', X.shape)
    # grid_scores_ was removed in scikit-learn 0.20; use best_params_/best_score_
    best_parameters = grid_clf.best_params_
    print('Score:', grid_clf.best_score_)
    for param_name in sorted(best_parameters.keys()):
        print("%s: %r" % (param_name, best_parameters[param_name]))
    return grid_clf.best_estimator_
from scipy import stats
from scipy.stats import randint
from sklearn.model_selection import RandomizedSearchCV
clf = RandomForestClassifier( random_state = 42,
# class_weight={0:20,1:1}
)
param_dist = { 'n_estimators': randint(2, 1000),
"max_depth": randint(2, 20),
"min_samples_split": randint(2, 11),
"min_samples_leaf": randint(1, 11),
"bootstrap": [True, False],
"criterion": ["gini", "entropy"]}
X_train = cv_df.values
y_train = result_df.values.ravel()
X_test = tr_data
#clf = random_search(clf, param_dist, X_train, y_train)
clf = RandomForestClassifier(random_state = 42,
max_depth = 6,
n_estimators = 412,
min_samples_split =2,
min_samples_leaf = 1,
bootstrap = False,
criterion = 'entropy',
class_weight={0:20,1:1}
)
y_pred = cross_val_predict(clf, X_train, y_train, cv= 7)
print('CV Accuracy RF Model', accuracy_score(y_train, y_pred))
print('confusion_matrix', confusion_matrix(y_train, y_pred))
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
```
## Test Data Results
```
#0 means normal, 1 means abnormal
print(train_df.shape)
normal = train_df[y_pred == 0]
print(normal.shape)
abnormal = train_df[y_pred == 1]
plt.figure(figsize= (10,6))
plt.plot(normal['Latency'], normal['Throughput'], 'bx' )
plt.plot(abnormal['Latency'], abnormal['Throughput'], 'ro' )
plt.xlabel('Latency (ms)')
plt.ylabel('Throughput (mb/s)')
plt.show()
```
## Using K means
```
from sklearn.cluster import KMeans
# Number of clusters
kmeans = KMeans(n_clusters=1)
# Fitting the input data
kmeans = kmeans.fit(X_train)
# Getting the cluster labels
labels = kmeans.predict(X_train)
labels
normal = train_df[labels == 0]
abnormal = train_df[labels == 1]
plt.figure(figsize= (10,6))
plt.plot(normal['Latency'], normal['Throughput'], 'bx' )
plt.plot(abnormal['Latency'], abnormal['Throughput'], 'ro' )
plt.xlabel('Latency (ms)')
plt.ylabel('Throughput (mb/s)')
plt.show()
kmeans.cluster_centers_
# dist = np.linalg.norm(X_train - kmeans.cluster_centers_[0], axis=1)
```
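With `n_clusters=1` every point gets label 0, so the label-based normal/abnormal split above cannot flag anything. A self-contained sketch of the idea hinted at by the commented-out line, scoring each point by its distance to the single centroid, on synthetic data (the 98th-percentile threshold is an arbitrary choice, not from the notebook):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 200 normal points around the origin plus 5 distant anomalies
X = np.vstack([rng.normal(0, 1, size=(200, 2)),
               rng.normal(8, 0.5, size=(5, 2))])

kmeans = KMeans(n_clusters=1, n_init=10, random_state=0).fit(X)
# distance of each point to the single cluster centre
dist = np.linalg.norm(X - kmeans.cluster_centers_[0], axis=1)
# flag roughly the farthest 2% of points as outliers
outlier_mask = dist > np.percentile(dist, 98)
print(outlier_mask.sum(), np.where(outlier_mask)[0])
```

On this toy data the flagged indices all fall among the five appended anomalies.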
# The Best Video Games of All Time
**Project assignment for the course Programming 1**
# 0. Data Preparation
Before we can start analyzing the data, we of course need to set everything up. To that end, we import the pandas library and load our tables.
```
import pandas as pd
pd.options.display.max_rows = 30
%matplotlib inline
igre = pd.read_csv("igre.csv")
zanri = pd.read_csv("zanri.csv")
igre
zanri
```
# 1. What does "the best video game" mean?
To justify the title of this project, we should first agree on what it even means for a game to be "the best". On Metacritic, the site the data was scraped from, games are ranked by their **metascore**. Here we will rate them a little differently.
We will construct our own ranking metric. The reader may wonder why this is necessary. There are several reasons:
* The raw data is ranked by **metascore**, i.e. by critics' reviews, so the opinion of the ordinary player is ignored.
* The number of votes is not taken into account anywhere, which can lead to pathological cases. A game with 5 votes and a score of 9.0 is intuitively not as good as a game with 5000 votes and a score of 8.5.
* To some extent, we want to account for the fact that a game with few votes and a good score is better than a game with many votes saying it is bad.
First, let's skim the data to see what we are dealing with.
## Userscore
The **userscore** is the rating given by ordinary users of the site. Let's see what it can tell us.
```
igre.plot.scatter(x="glasovi userscore", y="userscore")
igre[igre["glasovi userscore"] > 10 ** 5]
```
All right, an interesting start ...
For readers not up to date with the world of video games: *The Last of Us* was one of the most successful games for the *PlayStation 3* and *PlayStation 4*, and one of the most successful video games overall. It is the canonical example cited when arguing that games are not just shooters but a full medium that can stand alongside books and film. When we look for the best games on these platforms, we will certainly find it among them.
In short, the game came out in 2013 and fans waited 7 years for its sequel, *The Last of Us Part II*, which turned out to be rather controversial. It moved away from the message of the first game and leaned into LGBT themes. Combined with the audience's inflated expectations, this meant that a great many people wanted to voice their opinion about the game. That is exactly what we can observe in the chart above.
Let us now focus on the region where the bulk of the data lies.
```
igre[igre["glasovi userscore"] < 1000].plot.scatter(x="glasovi userscore", y="userscore")
```
We can see that the more votes a game has, the better it tends to be. It therefore makes sense to take the number of votes into account in our score. We do the following:
The **adjusted userscore** is computed with the formula `Up = U * (Gu / (Gu + GuAv))`,
where `U` is the original score, `Gu` the number of votes and `GuAv` the average number of votes.
This achieves the following:
* Games with a high score and many votes keep their prestigious rating.
* Games with a low score and many votes remain poorly rated.
* Games with few votes but a truly great score end up rated average or even above average.
* Games with a low score and few votes are deemed not popular enough, so their rating shrinks.
If there are many votes, the score stays almost unchanged. If there are few votes, the score is adjusted accordingly, while a truly good game can still stand out.
This also makes the score-versus-votes chart somewhat more linear.
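As a quick standalone illustration of the shrinkage (the site-wide average of 300 votes is a made-up number here; the notebook computes the real mean below):

```python
def adjusted_userscore(score, votes, avg_votes):
    # Up = U * Gu / (Gu + GuAv): few votes shrink the score toward 0,
    # many votes leave it almost unchanged
    return round(score * votes / (votes + avg_votes), 2)

avg_votes = 300  # hypothetical average vote count
print(adjusted_userscore(9.0, 5, avg_votes))     # 5 votes: 0.15, heavily shrunk
print(adjusted_userscore(8.5, 5000, avg_votes))  # 5000 votes: 8.02, nearly unchanged
```

This is exactly the pathological case from the bullet list above: the 5-vote 9.0 falls far below the 5000-vote 8.5.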
```
def prilagodi_glasove_userscore(tabela_iger):
    def prilagodi(ocena, st_glasov, medijana_glasov):
        return round((st_glasov / (st_glasov + medijana_glasov)) * ocena, 2)
    tabela_iger["prilagojen userscore"] = prilagodi(
        tabela_iger["userscore"], tabela_iger["glasovi userscore"],
        tabela_iger["glasovi userscore"].mean()
    )
prilagodi_glasove_userscore(igre)
igre[igre["glasovi userscore"] < 1000].plot.scatter(x="glasovi userscore", y="prilagojen userscore")
```
## Metascore
```
igre.plot.scatter(x="glasovi metascore", y="metascore")
```
Here we notice that game quality does not increase with the number of votes. This is as expected, since publishers often give review copies to critics to get coverage and publicity. The number of votes is therefore largely publisher-driven, so we will not use it here. Moreover, critic vote counts are much lower than user vote counts, so applying the formula above would not be realistic in this case.
We will simply divide the critics' score by 10 so that the userscore and metascore are numbers of the same order of magnitude.
So the **adjusted metascore** is given by `Mp = M / 10`, where `M` is the metascore.
```
def prilagodi_glasove_metascore(tabela_iger):
    tabela_iger["prilagojen metascore"] = tabela_iger["metascore"] / 10
prilagodi_glasove_metascore(igre)
```
## Adjusted score
Our **adjusted score** will be the sum of the adjusted metascore and the adjusted userscore. While we are at it, we will also drop the games that have no rating.
```
def prilagojena_ocena(tabela_iger):
    tabela_iger["prilagojena ocena"] = tabela_iger["prilagojen userscore"] + tabela_iger["prilagojen metascore"]
    tabela_iger.dropna(subset=["prilagojena ocena"], inplace=True)
prilagojena_ocena(igre)
igre
```
We now have our metric for rating games, and the real data analysis can begin.
Let's see which video games are the best.
```
igre.sort_values("prilagojena ocena", ascending=False).head(15)
```
Now let's sort the games by adjusted score and reindex them. This way we can quickly see where a given game ranks on the list.
```
igre = igre.sort_values("prilagojena ocena", ascending=False)
igre.reset_index(drop=True, inplace=True)
igre
```
# 2. What do the data tell us?
## Userscore and metascore
We have already mentioned the difference between the *userscore* and the *metascore*. Let's explore it in more detail.
The metascore is the rating given by "experts" in the field of video games. They play a game right at release and are the first source of information about its quality. For the same reason, companies try to get good reviews out of them with gifts and other perks.
The userscore is the rating given by ordinary players and users. They spend more time with a game and therefore offer a more long-term opinion.
Let's see what differences we can spot.
```
igre.plot.scatter(x="metascore", y="userscore")
igre.plot.scatter(x="prilagojen metascore", y="prilagojen userscore")
```
Nothing surprising here; the points are spread fairly evenly. We can perhaps see that the two camps agree on the quality of the very best games. But we can extract something quite interesting.
We must not forget that the raw data consists of the 10000 games with the best metascore. Yet the userscore shows considerable variance: games that critics rate above average are still of very mixed quality. The first chart hints at this, and the second confirms it. We can conclude that the opinions of critics and users do not necessarily agree.
## Which games are the best?
Before answering this question, we will separate the games by platform. Different hardware and different implementations mean that quality can vary considerably from platform to platform. In certain respects we will therefore look at the best games on each platform separately.
```
igre.platforma.unique()
```
At this point we will exclude the games on the *Stadia* platform from further analysis. Well, actually just two games:
```
igre[igre["platforma"] == "Stadia"]
```
*Stadia* is a *cloud gaming service* developed by *Google*. The idea is simple: with a monthly subscription you get access to computing hardware provided by *Google*, so you don't need the most powerful console at home. All you need is a screen, a keyboard or a controller, and a good enough internet connection. The idea is fine in theory, but it hasn't caught on much in practice. The fact that only two of the 10,000 best games come from this platform proves the point, so we will exclude *Stadia* from further analysis with a clear conscience.
```
seznam_platform = igre[igre["platforma"] != "Stadia"]["platforma"].unique()
seznam_platform
igre_po_platformi = {}
for platforma in seznam_platform:
    igre_po_platformi[platforma] = igre[igre.platforma == platforma]
igre_po_platformi["PC"]
```
Now we can finally answer the main question of this project. Which are the best video games?
```
igre[["naslov", "platforma", "prilagojena ocena"]].head(20)
```
Let's also see which games are the most popular on each platform:
```
for platforma in igre_po_platformi:
    display(platforma)
    display(igre_po_platformi[platforma][["naslov", "prilagojena ocena"]].sort_values(
        "prilagojena ocena", ascending=False).head(5))
```
In parts of the further analysis we will only look at the data for **good games**, i.e. those in the top half, and the **best games**, those in the top 5%. It will therefore come in handy to introduce these two groups here:
```
dobre_igre = igre.head(5000)
najboljse_igre = igre.head(500)
```
## Which platform is the most successful?
We want to know which **platform** has been the most successful. Let's simply count how many games appear in our data:
```
st_iger_na_platformo = igre.groupby("platforma").size().sort_values()
st_iger_na_platformo.plot.bar()
```
That the PC is the most popular platform does not surprise us. Comparing consoles to the PC is somewhat unfair, since PCs change over time and get hardware upgrades while keeping the same name, whereas a console ships with fixed hardware and, once that becomes obsolete, the next generation is released. Hence the successions *PlayStation 2*, *PlayStation 3*, *PlayStation 4* and *Xbox*, *Xbox 360*, *Xbox One*.
Among consoles, the best are therefore the *PlayStation 4*, *Xbox 360* and *PlayStation 3*.
Since we went to all the trouble of building our own metric, let's see which consoles the best games appear on.
```
st_iger_na_platformo_najboljse = najboljse_igre.groupby("platforma").size().sort_values()
st_iger_na_platformo_najboljse.plot.bar()
```
We observe very similar results. Among consoles the *PlayStation 4* still dominates, this time closely followed by the handheld *Switch* and then the *PlayStation 2*.
An attentive reader will notice that games with the same name appear multiple times on different platforms. As mentioned, the experience can differ considerably from platform to platform. The *PlayStation 4*, for example, has a controller that vibrates while you play, the *Switch* lets you play on the go, and the *PC* of course offers the best graphics capabilities.
The fact that games appear on several platforms gives us an excellent opportunity to compare the platforms directly: the experience will not depend on the game's content, only on the platform's capabilities.
```
naslovi = igre.naslov.unique()
veckratni = []
for naslov in naslovi:
    if igre.groupby("naslov").size()[naslov] > 1:
        veckratni.append(naslov)
score = {}
for naslov in veckratni:
    top_platforma = igre[igre.naslov == naslov].sort_values("prilagojena ocena").iloc[0]["platforma"]
    if top_platforma in score.keys():
        score[top_platforma] += 1
    else:
        score[top_platforma] = 1
konzole = pd.DataFrame(list(score.items()), columns = ['platforme','točke']).sort_values("točke")
konzole.plot.bar(x="platforme", y="točke")
```
The results surprisingly suggest that the *Xbox One* is the best platform to play on.
One of the questions I wanted to answer has been a fixture of the gaming world since 2013: which console is better, the *PlayStation 4* or the *Xbox One*? I still won't answer it here, since I don't know which side the grader of this project is on. Based on the data, however, we can draw the following conclusions.
In terms of capability the *Xbox One* is better, since games are rated highest on it. Nevertheless, as the previous two charts show, the larger share of the best games is on the *PlayStation 4*. The reason is that *Sony*, the company that owns the *PlayStation* consoles, develops exclusive games that can only be played on its platforms. Since these games have to convince people to buy the console, they are often better. *Microsoft*, the owner of the *Xbox* consoles, long tried to find its own exclusives, but they simply cannot compete with titles like *The Last of Us*, *God of War* or *Spider Man*. In general *Microsoft* has a different mentality in this respect: at most it ensures that its games don't come out on *PlayStation* consoles, while they still come out on PC.
The debate remains open.
At the time of writing we are right in the exciting period when a new generation of consoles is being released: the *PlayStation 5* and the *Xbox Series X*.
At the moment it looks like the *PlayStation 5* is leading the battle.
## Which studio is the most successful?
Now let's see which **studio** can boast the best games.
```
st_iger_na_studio_najboljse = najboljse_igre.groupby("studio").size().sort_values(ascending=False).head(20)
st_iger_na_studio_najboljse.plot.bar()
```
*Nintendo*. This result surely surprises no one. *Mario*, *Legend of Zelda*, *Pokemon*, *Metroid*, *Animal Crossing*, *Smash Bros.* We all surely know these titles.
The year was 1985 when *Nintendo* released *Super Mario Bros.* in Japan, and a year later this hit, together with *Metroid* and *Legend of Zelda*, took the USA by storm.
That was the start of the era of video games as we know them today: played at home rather than in arcades. The *Nintendo Entertainment System*, or *NES*, was one of the first consoles to make this possible. Nintendo were also pioneers of 3D video games with their *Nintendo 64* console.
Over time they shifted their focus to handheld consoles such as the *DS*, *3DS* and now the *Switch*.
The company makes exclusive games solely for its own consoles.
## What can we say about the release date?
Among other things, I collected the **month** and **year** in which each game was released. Let's play with that a little.
It will be interesting to see which years were the best for video games.
I also wonder whether the release month affects a game's success. More precisely, are more successful games released during the summer holidays, when youngsters have more time, or in December, when parents are more likely to buy games?
First, let's look at the year.
```
st_iger_na_leto_dobre = dobre_igre.groupby("leto").size()
st_iger_na_leto_dobre.plot.bar()
st_iger_na_leto_najboljse = najboljse_igre.groupby("leto").size()
st_iger_na_leto_najboljse.plot.bar()
```
The first chart shows that there are more and more good games over the years. I suspect this is simply a consequence of how our metric works.
We put a lot of weight on user votes and their number, and many factors have led to more users rating games over time.
The *Metacritic* site itself launched in 2001, and it certainly took a few more years before it caught on and people started posting their opinions en masse.
In addition, over the last 10 years publishers have been running huge campaigns and promotions so that a wider audience hears about their games.
One more observation: 2020 has fewer games, for obvious reasons. Besides, the data was collected in the autumn, before most of that year's games had been released.
The second chart shows that the best year for games was 2011. And if we look below at which games those were, the result is not surprising: *Minecraft*, *Portal*, *The Elder Scrolls V: Skyrim*, and plenty more.
```
igre[igre["leto"] == 2011].head(10)
```
Let's also look at how the release month affects success. To get the months in their usual order, we write out the dictionary below.
```
meseci = {"Jan" : 0, "Feb" : 1, "Mar" : 2, "Apr" : 3, "May" : 4, "Jun" : 5,
"Jul" : 6, "Aug" : 7, "Sep" : 8, "Oct" : 9, "Nov" : 10, "Dec" : 11}
st_iger_na_mesec_dobre = dobre_igre.groupby("mesec").size().sort_index(key = lambda x: x.map(meseci))
st_iger_na_mesec_dobre.plot.bar()
st_iger_na_mesec_najboljse = najboljse_igre.groupby("mesec").size().sort_index(key = lambda x: x.map(meseci))
st_iger_na_mesec_najboljse.plot.bar()
```
The two charts above show that my hypothesis was correct in that the summer and Christmas seasons do affect the number of good games, both among the good and the best games. Surprisingly, though, the correlation is negative: precisely in these periods the fewest successful games are released. I believe the reason is the following.
Video games are an extremely demanding medium to produce, perhaps even more so than film: programming the actual game, recording voices, creating the art. Honestly, I don't even know everything that goes on behind the scenes. One thing is clear, though: it is an expensive business that can return a huge profit. Emphasis on can. Companies therefore look for investors, who care only about the money. Just as I assumed, they too believe that the holidays and the summer season are the best time to sell, so they encourage or demand that studios release games in those seasons. Time pressure and the complexity of production mean a real rush in the final months of development. The result is products that are not as polished as they could be. A concrete example is *Cyberpunk 2077*: it was supposed to be the most ambitious game of the decade, but for exactly these reasons it is, sadly, another victim of 2020.
## The effect of ratings
We want to know how games' **content ratings** affect their success.
Like films, games can contain more or less appropriate content. Publishers flag this with a rating that hints at which age groups a game is intended for.
Since *Metacritic* is an American website, the ratings in my data follow American standards, but we can still draw some conclusions. For easier reading, here is what each rating means:
* **E Everyone** Suitable for all ages. Formerly also under the label **K-A**.
* **E10+ Everyone 10+** Suitable for ages 10 and up.
* **T Teen** Suitable for teenagers, i.e. ages 13 and up.
* **M Mature 17+** For ages 17 and up.
* **AO Adults Only** For ages 18 and up.
* **RP Rating Pending** The game has not been rated yet.
A more detailed description of each category is available on the [*ESRB*](https://www.esrb.org/ratings-guide/) website.
```
oznake_dobre = dobre_igre.groupby("oznaka").size()
oznake_dobre.plot.bar()
```
Most good games fall into the *Teen* and *Mature* categories. These games may contain violence, blood, profanity and sexual content. They are usually more realistic games, so the violence needs an appropriate warning. Still, there does not seem to be a big difference between games with lighter and more mature content. Perhaps we will discover more if we look at the best games.
```
oznake_najboljse = najboljse_igre.groupby("oznaka").size()
oznake_najboljse.plot.bar()
```
In the chart above the *M* rating stands out. I won't comment on what it says about us that we most enjoy such content. For fun, though, I checked what the film data from the lectures says, and it seems that there too the most popular films are rated *R*, i.e. likewise intended for ages 17 and up.
In case anyone is curious about an example of an "adults only" game:
```
igre[igre.oznaka == "AO"]
```
## The effect of genre
Let's see what we can say about **genre**. Unlike in film, a game's genre refers not only to the content of its story but also to what kind of game it is (shooter, racing, platformer, etc.) and how it plays (first-person, 3D or 2D, etc.).
```
zanri_dobrih_iger = dobre_igre.merge(zanri)
zanri_dobre = zanri_dobrih_iger.groupby("zanr").size().sort_values(ascending=False).head(15)
zanri_dobre.plot.bar()
zanri_najboljsih_iger = najboljse_igre.merge(zanri)
zanri_najboljse = zanri_najboljsih_iger.groupby("zanr").size().sort_values(ascending=False).head(15)
zanri_najboljse.plot.bar()
```
Similar genres seem to stand out among both the good and the best games.
Action games prevail. Again, a similar trend can be seen with films if we dig into the lecture data.
A few more observations. *Role-Playing* games are those where your actions influence the course of the story, which is therefore not fully predetermined and linear. You can immerse yourself more deeply in the story and the virtual world around you, and this seems to be what most players enjoy most. Shooters being popular is no surprise. *Western-Style* may catch the eye; *Red Dead Redemption* and *Red Dead Redemption 2*, also among the most acclaimed games of the decade, are probably responsible for that.
In fact, these data tell us that although the most successful games are action games, they are quite diverse in other respects, such as style or story.
## Single-player or multiplayer
We will now look at whether **single-player** or **multiplayer** games are more successful. In single-player games you play alone; they are usually more self-contained experiences with an emphasis on story, like a film you can take part in. Multiplayer games are by nature more open, and the real fun lies in playing with or against a friend. They are playgrounds for entertainment online. We also want to know whether multiplayer games are more successful when you play with just one friend or with several.
```
igralski_nacin_dobre = dobre_igre.groupby("stevilo igralcev").size()
igralski_nacin_dobre.plot.bar()
igralski_nacin_najboljse = najboljse_igre.groupby("stevilo igralcev").size()
igralski_nacin_najboljse.plot.bar()
```
The answer is clear: single-player games are more successful, both among the good and the best games. Among multiplayer games, those where fewer people play together at once do better. I see several reasons for this.
* Single-player games are far more accessible. Anyone can sink into the experience, and you can often even adjust the difficulty. Multiplayer games are much more complicated: you have to learn a great deal before you start playing, and becoming good at them is even more work. That is why the *esports* scene developed, i.e. competitive tournaments in video games, which are becoming ever more popular, with ever larger prize pools. Some examples are *Super Smash Bros.*, *League of Legends* and *Overwatch*.
* Because multiplayer games bring you into contact with strangers online, the competitiveness of professional sport combines with the anonymity of the internet, and the result can be a truly unpleasant environment. Swearing and insults are nothing unusual. There is little patience for newcomers, which is where the term *noob*, which kids love to throw around these days, comes from. If a game is designed for fewer players, you will probably play it with friends and these problems do not arise. I believe this is why such games are the more successful multiplayer titles.
* Because single-player games are self-contained, you can finish them. For some people this matters a lot: you feel you have achieved something, yet you can still return to the experience and play it from the beginning. Multiplayer games, on the other hand, are designed to absorb as much of your time as possible; the experience has to be fun enough that you keep coming back. Most people simply don't have the time to afford that.
* A final, somewhat banal point: you can pause a single-player game, but not a multiplayer one.
# 3. Which games should I play?
As a conscientious, hard-working student I don't have the privilege of wasting time searching for video games to fill my free time, once it arrives. To that end I wrote a program that takes a list of the **ids of games** I have already played, the **platforms** available to me, and the acceptable **ratings**. It then returns recommendations for games that might interest me. It takes the following criteria into account:
* The platforms available to me.
* The ratings of games I am allowed to play.
* The genres of my games.
* The text in the games' descriptions.
* The quality of the games according to our metric.
More precisely:
The program first selects all suitable candidates, i.e. games on the right platforms with an acceptable rating, and removes the games I have already played.
Genre is taken into account as follows. The program first looks at the data of the games I have already played. For each genre it computes what share of the played games' genre tags belongs to that genre. For a given candidate game, we sum the shares of its genres. This gives the genre weight, a number `Uz` between 0 and 1.
Now the description. The program builds the set of all word stems that appear in the descriptions of the played games. Then it breaks each game's description, if it has one, into stems and computes what share of that description's stems is in the set. This gives the description weight `Uo`.
The **recommendation score** is our adjusted score weighted by the two weights above, i.e. `Op = Po * Uz * Uo`, where `Po` is the adjusted score.
For the word-stem processing I borrowed code from the lectures.
```
def koren_besede(beseda):
    beseda = ''.join(znak for znak in beseda if znak.isalpha())
    if not beseda:
        return '$'
    konec = len(beseda) - 1
    if beseda[konec] in 'ds':
        konec -= 1
    while konec >= 0 and beseda[konec] in 'aeiou':
        konec -= 1
    return beseda[:konec + 1]

def koreni_besed(niz):
    if type(niz) == str:  # in case the game has no description
        return [koren_besede(beseda) for beseda in niz.lower().split() if len(koren_besede(beseda)) > 2]
    else:
        return []

def priporocene_igre(ze_igrano, platforme, oznake):
    # Masks
    def pomozna_platforma(niz):
        return niz in platforme
    def pomozna_oznaka(niz):
        return niz in oznake
    def pomozna_id(n):
        return n in ze_igrano
    # Candidates
    kandidati = igre[(igre["platforma"].apply(pomozna_platforma)) & (igre["oznaka"].apply(pomozna_oznaka))
                     & (igre["id"].apply(lambda x: not pomozna_id(x)))].copy()
    # Genre
    def utez_zanr(tabela_iger):
        zanri_igranih = zanri[zanri["id"].apply(pomozna_id)].groupby("zanr").size()
        odstotki_zanri_igranih = (zanri_igranih / zanri_igranih.sum()).to_dict()
        def ujemanje_zanrov(_id):
            zanri_igre = zanri[zanri["id"] == _id].copy()
            def zanr_v_delez(niz):
                if niz in odstotki_zanri_igranih:
                    return odstotki_zanri_igranih[niz]
                else:
                    return 0
            return zanri_igre["zanr"].apply(zanr_v_delez).sum()
        tabela_iger["utež žanr"] = tabela_iger["id"].apply(ujemanje_zanrov)
    # Description
    def utez_opis(tabela_iger):
        besede_igranih = set()
        for opis in igre[igre["id"].apply(pomozna_id)]["opis"].tolist():
            besede_igranih.update(koreni_besed(opis))
        def delez_dobrih_besed(niz):
            besede = koreni_besed(niz)
            count = 0
            for beseda in besede:
                if beseda in besede_igranih:
                    count += 1
            return count / (len(besede) + 0.01)  # so we never divide by 0
        tabela_iger["utež opis"] = tabela_iger["opis"].apply(delez_dobrih_besed)
    # Recommendation score
    def ocena_priporocila(tabela_iger):
        utez_zanr(tabela_iger)
        utez_opis(tabela_iger)
        tabela_iger["ocena priporočila"] = (
            tabela_iger["prilagojena ocena"] * tabela_iger["utež žanr"] * tabela_iger["utež opis"]
        )
    ocena_priporocila(kandidati)
    return kandidati.sort_values("ocena priporočila", ascending=False).head(15)
```
To test the program, I put together a list of games I have finished, along with my platforms and content ratings:
```
moje_igre = [143104, 235439, 510872, 113955, 108940, 527417, 204712, 381511, 412871, 297560, 167419, 491138, 231309,
455529, 455528, 230240, 107838, 109864, 115164, 135708, 110750, 527326, 170232, 477066, 213209, 499153,
182288, 316002, 510045, 492827, 535703, 109325, 154578, 206739, 114802, 155836, 143153, 491124, 480909]
moje_platforme = ['PC', 'DS']
moje_oznake = ['K-A', 'E', 'E10+', 'M', 'T', 'AO']
priporocene_igre(moje_igre, moje_platforme, moje_oznake)
```
If the reader wants to try out the program, here are the full lists of platforms and content ratings. Remove the entries you do not need and pass the lists to the program. Below there is also a small "browser" to help you find the ids of games you have already played.
Its first argument should be a string that appears in a game's title. As an optional second argument you can also pass a platform.
Have fun!
```
platforme = ['Nintendo 64', 'PC', 'PlayStation 4', 'PlayStation 3', 'Switch', 'Wii', 'Xbox 360', 'Xbox One', 'PlayStation 2',
             'GameCube', 'PlayStation', 'Xbox', 'Wii U', '3DS', 'PlayStation Vita', 'DS', 'PSP', 'Game Boy Advance', 'Dreamcast', 'Stadia']
oznake = ['K-A', 'E', 'E10+', 'M', 'T', 'AO', 'RP']
def brskalnik(naslov_igre, platforma=None):
    def pomozna_naslov(niz):
        return naslov_igre in niz
    def pomozna_platforma(niz):
        return True if platforma is None else platforma == niz
    return igre[
        (igre["naslov"].apply(pomozna_naslov)) & (igre["platforma"].apply(pomozna_platforma))
    ].sort_values("prilagojena ocena", ascending=False)
brskalnik("Assassin's Creed", "PC")
```
Center for Continuing Education
# Program "Python for Automation and Data Analysis"
Week 1 - 1
*Tatyana Rogovich, HSE University*
## Introduction to Python. Integers and Floating-Point Numbers. Boolean Variables.
# The print() Function
Python can be used to solve an enormous range of tasks. We will start with very simple ones and gradually make them more complex, finishing the course with a small project. If you have encountered programming before, you will remember that the very first program is usually printing "Hello, world". Let's try to do that in Python.
```
print('Hello, world!')
print(1)
```
Note that we wrote "Hello, world!" in quotes, but the number one without them. This is because in programming we deal with different data types, and Python treats text as text (a string variable) only if we enclose it in quotes (single or double, it does not matter). When printed, the quotes themselves are not shown; they are a signal to Python that what is inside them is text.
print() is a function that outputs whatever we pass to it. In other IDEs this would be output to the terminal; in Jupyter the output is printed below the cell being run. You can recognize a function in Python by the parentheses after its name, inside which we pass the argument the function should be applied to (the text "Hello, world" or 1 in our case).
```
print(Hello, world)
```
We wrote it without quotes and got an error. Note, by the way, that error messages very often indicate exactly what went wrong, and you can try to work out what needs fixing. Python treats unquoted text as the name of a variable that has not yet been defined. Incidentally, if you forget to open or close a quote (or use mismatched quotes), you will also get an error.
Sometimes we want to comment our code, so that our future selves or our colleagues have fewer questions about what was actually meant. Comments can be written right in the code; they do not affect the program's execution if formatted correctly.
```
# Note: this is what a comment looks like - a part of the script that will not be executed
# when the program runs.
# Each comment line starts with a hash sign.
'''
This is also a comment - triple quotes are usually used
when we want to write a long, detailed text.
'''
print('Hello, world')
```
Note that, unlike other IDEs (for example, PyCharm), Jupyter Notebook does not always require print() to display something. But do not treat this as the standard for all IDEs.
```
"Hello, world"
```
Above, an Out[] label appeared next to the output. Here Jupyter shows us the last value left in the cell's buffer. In PyCharm, for example, such output would always remain hidden until we "print" it with print(). This feature of Jupyter helps us quickly check, say, what is inside a variable while debugging our code.
The next thing to know about a programming language is how it defines variables. Variables are containers that store information (text, numbers, or more complex kinds of data). In Python the assignment operator is the = sign.
```
x = 'Hello, world!'
print(x) # Note that the result of calling this function is the same as above,
# only the text is now stored inside a variable.
```
Python is a case-sensitive language. So when you create or call variables or functions, be sure to use the correct case. The following line will raise an error.
```
print(X) # we created the variable x, but X does not exist
```
Let's look at the error once more. *NameError: name 'X' is not defined* means that no variable with this name has been created in this program. Note, by the way, that variables in Jupyter persist for the whole session (while you are working with the notebook and have not closed it), and can be created in one cell and used in another. Let's try accessing x again.
```
print(x) # works!
```
# Data Types: Integers
We begin our tour of data types with whole numbers. If you happen to know other programming languages, note that typing in Python is dynamic. This means you do not have to declare what type of data you are putting into a variable - Python figures it out itself. You can check the type with the type() function, passing it the data or a variable as the argument.
**INTEGERS (INT, INTEGER):** 1, 2, 592, 1030523235 - any whole number without a fractional part.
```
y = 2
print(type(2))
print(type(y))
```
Note that above we called a function inside another function.
type(2) returns a hidden value: the type of the variable (int for integer).
To display this hidden value, we have to "print" it.
The most elementary way to use Python is as a calculator. Let's see how it subtracts, adds, and multiplies.
```
print(2 + 2)
print(18 - 9)
print(4 * 3)
```
Division requires a little care. There are two kinds of division: the familiar one, which gives a fraction when dividing 5 by 2, and integer division, which returns only the whole part of the quotient.
```
print(5 / 2) # this kind of division produces a different data type (float); more on it later.
print(5 // 2)
```
And if we need the remainder of a division instead, we can use the modulo operator %.
```
print(5 % 2)
```
One more mathematical operation we can perform without loading special math libraries is exponentiation.
```
print(5**2)
```
All of this works the same way with variables containing numbers.
```
a = 2
b = 3
print(a ** b)
# will the result change if we reassign the variable a?
a = 5
print(a ** b)
```
It often happens that we have read some data in text form, and arithmetic operations do not work on it. In that case we can use the int() function to convert a string variable (more on strings below) to a number, provided the string can be interpreted as a number in base ten.
```
print(2 + '5') # error: we cannot add an integer and a string
print(2 + int('5')) # converted the string to a number and everything works
int('text') # this also raises an error, because the string does not represent a number
```
## (∩`-´)⊃━☆゚.*・。゚ Task
### Digit sum of a three-digit number
Given the three-digit number 179, find the sum of its digits.
**Output format**
Print the answer to the problem.
**Answer**
Program output:
17
```
# (∩`-´)⊃━☆゚.*・。゚
x = 179
x_1 = x // 100
x_2 = x // 10 % 10
x_3 = x % 10
print(x_1, x_2, x_3) # test output: check that we extracted the digits from the number correctly
print(x_1 + x_2 + x_3) # the answer to the problem
```
## (∩`-´)⊃━☆゚.*・。゚ Task
### Digital clock
Given a number N: N minutes have passed since the start of the day. Determine what hours and minutes a digital clock shows at that moment.
**Input format**
An integer N - positive, not exceeding 10⁷.
**Output format**
The program must print two numbers: the number of hours (from 0 to 23) and the number of minutes (from 0 to 59).
Note that N may be larger than the number of minutes in a day.
#### Examples
Test 1
**Input:**
150
**Program output:**
2 30
```
# (∩`-´)⊃━☆゚.*・。゚
minutes = 150
print(minutes // 60 % 24, minutes % 60)
```
# Data Types: Boolean Variables
The next data type is boolean variables. A boolean variable can take only two values: **true (True)** or **false (False)**. In Python the type is denoted **bool**.
```
print(type(True), type(False))
```
Boolean variables are most often used in if-else conditionals and in while loops with a stopping condition. In the data analysis part of the course we will see one more common use: building boolean masks for filtering data (for example, showing only the rows where age is greater than 20).
Note that True and False must be written with a capital letter and without quotes, otherwise you may get an error.
```
print(type('True')) # type str - a string variable
print(true) # error: Python thinks this is the name of a variable
```
As with numbers and strings, type conversion works for boolean variables. Anything can be turned into a boolean with the bool() function.
Conversion of numbers works as follows: 0 becomes False, everything else becomes True.
```
print(bool(0))
print(bool(23))
print(bool(-10))
```
An empty string converts to False, all other strings to True.
```
print(bool(''))
print(bool('Hello'))
print(bool(' ')) # even a string containing a single space is True
print(bool('False')) # and even the string 'False' is True
```
And, if needed, a boolean variable can be converted to int. No surprises here: zero and one.
```
print(int(True))
print(int(False))
```
## Logical Expressions
Let's see where the new data type is used.
We will work with logical expressions, whose results are boolean values.
A logical expression is a truth check: it evaluates to True if it holds and to False if it does not.
Logical expressions use comparison operators:
* == (equal)
* != (not equal)
* \> (greater than)
* < (less than)
* \>= (greater than or equal)
* <= (less than or equal)
```
print(1 == 1)
print(1 != '1')
c = 1 > 3
print(c)
print(type(c))
x = 5
print(1 < x <= 5) # comparisons can be chained
```
Logical expressions can be combined using the following logical operators:
* logical AND (and) - the expression is true only when both parts are true; otherwise it is false
* logical OR (or) - the expression is false only when both parts are false; otherwise it is true
* logical negation (not) - turns True into False and vice versa
```
print((1 == 1) and ('1' == 1))
print((1 == 1) or ('1' == 1))
print(not(1 == 1))
print(((1 == 1) or ('1' == 1)) and (2 == 2))
```
## (∩`-´)⊃━☆゚.*・。゚ Задача
## Вася в Италии
Вася уехал учиться по обмену на один семестр в Италию. Единственный магазин в городе открыт с 6 до 8 утра и с 16 до 17 вечера (включительно). Вася не мог попасть в магазин уже несколько дней и страдает от голода. Он может прийти в магазин в X часов. Если магазин открыт в X часов, то выведите True, а если закрыт - выведите False.
В единственной строке входных данных вводится целое число X, число находится в пределах от 0 до 23
**Формат ввода**
Целое число X, число находится в пределах от 0 до 23
**Формат вывода**
True или False
#### Примеры
Тест 1
**Входные данные:**
16
**Вывод программы:**
True
```
## (∩`-´)⊃━☆゚.*・。゚
time = 16
can_visit = 6 <= time <= 8
can_visit2 = 16 <= time <= 17
print(can_visit or can_visit2)
```
# Data Types: Floating-Point Numbers (float)
Essentially, these are decimal fractions written with a dot. In Python they are denoted float (for the "floating" point in them). They can also be written in scientific notation: 1/100000 = 1e-05.
**FLOATING-POINT NUMBERS (FLOAT):** 3.42, 2.323212, 3.0, 1e-05 - a number with a fractional part (including a whole number whose fractional part equals 0).
```
4.5 + 5
```
If an operation involves both an integer and a float, the answer is always a float (see above).
Let's also recall "ordinary" division, which produces a float.
```
print(11 / 2)
print(type(11 / 2))
print(11 // 2)
print(type(11 // 2))
```
Be careful when comparing floats. Because of how computer memory works, fractional numbers are stored in a rather tricky way, and the nominal 0.2 is not always what it seems. This is the floating-point precision problem.
You can read more [here](https://pythoner.name/documentation/tutorial/floatingpoint).
```
0.2 + 0.1 == 0.3
```
You probably expected this equality to be True, but no. So be careful, and try not to make your program's behavior depend on conditions involving float arithmetic. Let's see what these numbers actually look like in the computer's memory.
```
print(0.2 + 0.1)
print(0.3)
```
Floating-point numbers are represented in computer hardware as base-2 (binary) fractions. For example, the decimal fraction
0.125
has the value 1/10 + 2/100 + 5/1000, and in the same way the binary fraction
0.001
has the value 0/2 + 0/4 + 1/8. The two fractions have identical values; the only difference is that the first is written in base-10 fractional notation and the second in base 2.
Unfortunately, most decimal fractions cannot be represented exactly in binary. As a consequence, the decimal floating-point numbers you enter are in general only approximated by the binary numbers actually stored in the machine.
If you really cannot avoid such a comparison, you can do this: instead of comparing the sum with the number directly, compare the difference between the two with some very small number (one whose size is certainly not critical for your computation). This threshold will differ, say, between physics computations that require high precision and comparisons of citizens' incomes.
```
0.2 + 0.1 - 0.3 < 0.000001
```
The next problem you may run into is getting the error 'Result too large' instead of a result. This is due to the limit on the memory allocated for storing a float.
```
1.5 ** 100000
1.5 ** 1000
```
And if everything worked, the answer may still look like this. This is called scientific notation, and it saves space: it stores the significant digits of the number (the mantissa) and the power of ten by which to multiply it (the exponent). Here the result of raising 1.5 to the power 1000 is the number 1.2338405969061735 multiplied by 10 to the power 176 - clearly a very large number. If there were a minus instead of the plus sign, the power of ten would be negative (10 to the power -176), and the number would be very, very small.
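To make the notation concrete, here is a short sketch (plain Python, no extra libraries) showing scientific notation in output, in explicit formatting, and in string-to-float conversion:

```python
# A very large result is displayed in scientific notation automatically.
big = 1.5 ** 1000
print(big)              # mantissa 1.2338..., exponent +176

# You can also request the notation explicitly with the "e" format code.
print(f"{12300.0:e}")   # 1.230000e+04, i.e. 1.23 times 10 to the power 4

# And scientific notation in a string converts back to a float.
print(float('1e-05'))   # 1e-05, i.e. 0.00001
```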
As with integers, you can convert a string to a floating-point number when that is possible. This is done with the float() function.
```
print(2.5 + float('2.4'))
```
## Rounding Floats
We often need to turn a float into an integer ("round" it). Python offers several ways to do this, but unfortunately none of them works like the rounding we are used to, and this is always worth remembering.
Most of these functions are not part of Python's basic set of commands; to use them, we have to load the additional math library, which contains all kinds of special functions for mathematical computation.
```
import math # the import command loads the module called math
```
The math module is installed as part of the Anaconda distribution, which we used to install Jupyter Notebook, so it does not need to be downloaded separately - it can simply be imported (loaded into the memory of the current session). Sometimes a library must first be installed on the computer with the command !pip install <module name> and only then imported.
The simplest way to round a number is to apply the int function to it.
```
int(2.6)
```
Note that this method simply truncates the fractional part (values above 0.5 are not rounded up).
```
print(int(2.6))
print(int(-2.6))
```
Rounding "to the floor" from the math module rounds down to the nearest smaller integer.
```
print(math.floor(2.6)) # to use a function from an additional module,
# write the module name first, then a dot, then the function name
print(math.floor(-2.6))
```
Rounding "to the ceiling" works exactly the other way around: it rounds up to the nearest larger integer, regardless of the fractional part.
```
print(math.ceil(2.6))
print(math.ceil(-2.6))
```
Python itself also has the round() function. It works almost the way we expect - if it were not for one "but"...
```
print(round(2.2))
print(round(2.7))
print(round(2.5)) # pay attention to this line
print(round(3.5))
```
Unexpected? The thing is that Python implements a rounding rule for numbers with fractional part 0.5 that differs from the one we are used to: such a number is rounded to the nearest even integer - 2 for 2.5 and 4 for 3.5.
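If you need the "school" rule instead (ties always round away from zero), the standard library's decimal module provides it. A minimal sketch:

```python
from decimal import Decimal, ROUND_HALF_UP

# round() uses banker's rounding: ties go to the nearest even integer.
print(round(2.5), round(3.5))  # 2 4

# Decimal with ROUND_HALF_UP rounds 0.5 away from zero, as taught in school.
print(Decimal('2.5').quantize(Decimal('1'), rounding=ROUND_HALF_UP))  # 3
print(Decimal('3.5').quantize(Decimal('1'), rounding=ROUND_HALF_UP))  # 4
```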
## A Note on Importing Functions
Sometimes we do not need the whole library, just one function from it. It would be strange to keep the whole of "War and Peace" in memory if we are only interested in the fifth sentence on page eight.
For this we can import the function from the library, and then we will not have to write the library name before the dot. The only pitfall is that if one of Python's built-in commands has a function with the same name, it will be overwritten, and you will have to restart your notebook to get everything back the way it was.
```
from math import ceil
ceil(2.6) # now it works without the math. prefix
```
## (∩`-´)⊃━☆゚.*・。゚ Task
## Fractional part
Given a real number, print its fractional part.
**Input format**
A real number
**Output format**
A real number (the answer to the problem)
#### Examples
Test 1
**Input:**
4.0
**Program output:**
0.0
```
# (∩`-´)⊃━☆゚.*・。゚
x = 4.0
print(x - int(x))
x = 5.2
print(x - int(x))
```
# RadiusNeighborsRegressor with MinMaxScaler & PowerTransformer
This code template performs regression analysis using RadiusNeighborsRegressor, with MinMaxScaler for feature rescaling and PowerTransformer as a feature transformation technique, combined in a pipeline.
### Required Packages
```
import warnings as wr
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler,PowerTransformer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import RadiusNeighborsRegressor
from sklearn.metrics import mean_squared_error, r2_score,mean_absolute_error
wr.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the initial rows.
```
df=pd.read_csv(file_path) # reading file
df.head() # displaying initial entries
```
### Data Preprocessing
Since most machine learning models in the Sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below contains functions that remove null values if any exist, and that convert string-class data in the dataset by encoding the strings as integer classes.
```
def NullClearner(df):
    if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
        df.fillna(df.mean(),inplace=True)
        return df
    elif(isinstance(df, pd.Series)):
        df.fillna(df.mode()[0],inplace=True)
        return df
    else:return df

def EncodeX(df):
    return pd.get_dummies(df)
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
plt.figure(figsize = (15, 10))
corr = df.corr()
mask = np.triu(np.ones_like(corr, dtype = bool))
sns.heatmap(corr, mask = mask, linewidths = 1, annot = True, fmt = ".2f")
plt.show()
correlation = df[df.columns[1:]].corr()[target][:]
correlation
```
### Feature Selections
It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
#spliting data into X(features) and Y(Target)
X=df[features]
Y=df[target]
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
    X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
# we can choose random_state and test_size as per our requirement
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 1) #performing datasplitting
```
### Data Rescaling
**Used MinMaxScaler**
* Transform features by scaling each feature to a given range.
* This estimator scales and translates each feature individually such that it is in the given range on the training set, e.g. between zero and one.
### Feature Transformation
**Power Transformer :**
* Apply a power transform featurewise to make data more Gaussian-like.
* Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.
* [For more Reference](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html)
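To build intuition for what MinMaxScaler does before it feeds the PowerTransformer, here is a pure-Python sketch of min-max rescaling on a hypothetical toy column - an illustration of the formula (x - min) / (max - min), not the sklearn implementation itself:

```python
# Hypothetical feature column, for illustration only.
values = [10.0, 20.0, 40.0, 50.0]

lo, hi = min(values), max(values)
# Min-max rescaling maps the smallest value to 0 and the largest to 1.
scaled = [(v - lo) / (hi - lo) for v in values]
print(scaled)  # [0.0, 0.25, 0.75, 1.0]
```

In the pipeline below, `MinMaxScaler()` applies this same rescaling to every feature column, fitting the min and max on the training set only.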
## Model
**RadiusNeighborsRegressor**
RadiusNeighborsRegressor implements learning based on the neighbors within a fixed radius r of the query point, where r is a floating-point value specified by the user.
**Tuning parameters :-**
* **radius:** Range of parameter space to use by default for radius_neighbors queries.
* **algorithm:** Algorithm used to compute the nearest neighbors.
* **leaf_size:** Leaf size passed to BallTree or KDTree.
* **p:** Power parameter for the Minkowski metric.
* **metric:** the distance metric to use for the tree.
* **outlier_label:** label for outlier samples
* **weights:** weight function used in prediction.
```
#training the RadiusNeighborsRegressor
model = make_pipeline(MinMaxScaler(),PowerTransformer(),RadiusNeighborsRegressor(radius=1.5))
model.fit(X_train,y_train)
```
#### Model Accuracy
For a regressor such as this pipeline, the score() method returns the coefficient of determination (R²) of the prediction on the given test data and labels. (The subset-accuracy interpretation of score() applies only to classifiers.)
```
print("R2 score {:.2f} %\n".format(model.score(X_test,y_test)*100))
#prediction on testing set
prediction=model.predict(X_test)
```
### Model evaluation
**r2_score:** The r2_score function computes the coefficient of determination - the proportion of the variance in the target that is explained by the model.
**MAE:** The mean absolute error function computes the average absolute distance between the real data and the predicted data.
**MSE:** The mean squared error function averages the squared errors, penalizing the model more heavily for large errors.
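As a sanity check of the definitions above, here is a tiny hand computation of MAE and MSE on made-up values (the sklearn functions compute the same quantities):

```python
# Hypothetical true and predicted values, for illustration only.
y_true = [3.0, 5.0, 2.0]
y_pred = [2.0, 7.0, 2.0]

errors = [t - p for t, p in zip(y_true, y_pred)]
mae = sum(abs(e) for e in errors) / len(errors)  # mean of |error|
mse = sum(e ** 2 for e in errors) / len(errors)  # mean of squared error
print(mae, mse)  # 1.0 1.6666666666666667
```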
```
print('Mean Absolute Error:', mean_absolute_error(y_test, prediction))
print('Mean Squared Error:', mean_squared_error(y_test, prediction))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, prediction)))
print("R-squared score : ",r2_score(y_test,prediction))
#ploting actual and predicted
red = plt.scatter(np.arange(0,80,5),prediction[0:80:5],color = "red")
green = plt.scatter(np.arange(0,80,5),y_test[0:80:5],color = "green")
plt.title("Comparison of Regression Algorithms")
plt.xlabel("Index of Candidate")
plt.ylabel("target")
plt.legend((red,green),('RadiusNeighborsRegressor', 'REAL'))
plt.show()
```
### Prediction Plot
First, we plot the actual test observations in green, with the record number on the x-axis and the target value on the y-axis. We then overlay the model's predictions for the same records in red.
```
plt.figure(figsize=(10,6))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(X_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Vamsi Mukkamala , Github: [Profile](https://github.com/vmc99)
# Selecting values from an array
```
import numpy as np
```
We use arrays all the time, in data science.
```
my_array = np.array([10, 11, 12])
```
One of the most common tasks we have to do on arrays, is to select values.
We do this with *array slicing*.
We do array slicing when we follow the **array variable name** with `[` (open
square brackets), followed by **something to specify which elements we want**,
followed by `]` (close square brackets).
For example:
```
my_array[0]
```
This example shows the simplest case, where we want a single element
from a one-dimensional array. In that case the thing between the `[` and the
`]` is the *index* of the value that we want.
The *index* of the first value is 0, the index of the second value is 1, and so
on.
```
my_array[0]
```
We start by loading data from a Comma Separated Value file (CSV file).
This is summary data that [Andrew Rosen](https://asrosen.com) downloaded from
<https://www.ratemyprofessors.com> for an analysis he reported in [a 2018
paper](https://asrosen.com/wp-content/uploads/2018/07/postprint_rmp-1.pdf). The
data file here is from the [supplementary
material](https://www.tandfonline.com/doi/suppl/10.1080/02602938.2016.1276155);
it has the average rating across academic discipline for all professors in a
particular discipline who have more than 20 ratings.
See [the dataset page](../data/rate_my_professors) for more
detail.
If you are running on your laptop, you should download the
{download}`rate_my_course.csv <../data/rate_my_course.csv>` file to the same
directory as this notebook.
Don't worry about the code next cell. It loads the data we need from a data
file. We will come on the machinery to do this very soon.
```
# This code we have not covered yet. We will cover it soon.
# It loads the data file, and makes some arrays.
# Load the library for reading data files.
import pandas as pd
# Safe settings for Pandas. We'll come on to this later.
pd.set_option('mode.chained_assignment', 'raise')
# Read the file.
courses = pd.read_csv('rate_my_course.csv')
# Sort the courses by number of rated teachers, most first.
courses_by_n = courses.sort_values(
'Number of Professors', ascending=False)
# Select the eight courses with the most professors.
big_courses = courses_by_n.head(8)
big_courses
# Again - don't worry about this code - we will cover it later.
# Put the columns into arrays
disciplines = np.array(big_courses['Discipline'])
easiness = np.array(big_courses['Easiness'])
quality = np.array(big_courses['Overall Quality'])
```
We now have the names of the disciplines with the largest number of professors.
```
disciplines
```
Here we get the first value. This value is at index 0.
```
# Get the first value (at index position 0)
disciplines[0]
```
Notice that this is another way of writing:
```
disciplines.item(0)
# Get the second value (at index position 1)
disciplines[1]
# Get the third value (at index position 2)
disciplines[2]
```
At first it will take some time to get used to the fact that the first value is
at index position 0. There are good reasons for this, and many programming
languages have the same convention, but it does take a while to develop this
habit of mind.
## Square brackets and indexing
Notice too that we use square brackets for indexing.
We have seen square brackets before, when we create lists. For example, we can
create a list with `my_list = [1, 2, 3]`. (We usually then create an array with
something like `my_array = np.array(my_list)`.) This is *a different use of
square brackets*. When we create a list, the square brackets tell Python that
the elements between the brackets should be the elements of the list. This use
is called a *list literal* expression. The square brackets follow an equals
sign, or some other operator, or start the line.
When we use square brackets for indexing, the square brackets always follow an expression that returns an array. For example, consider this line:
```
disciplines[2]
```
`disciplines` is an expression that returns an array. The opening square bracket follows this expression, so Python can tell that this use of square brackets means *select an element or elements from the array*.
So we have seen different uses of square brackets:
* Creating a list (a *list literal*); often used in making arrays.
* Indexing into an array.
Python can tell which of these two we mean, from the context.
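To make the distinction concrete, here is a small sketch (using a toy array rather than the course data) showing both uses side by side:

```python
import numpy as np

# Square brackets after an equals sign: a list literal.
my_list = [10, 20, 30]
my_array = np.array(my_list)

# Square brackets after an expression that returns an array: indexing.
first = my_array[0]  # the element at index position 0
print(first)  # 10
```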
## Indexing with negative numbers
If we know how many elements the array has, then we can get the last element by using the number of elements, minus one (why?).
Here the number of elements is:
```
len(disciplines)
```
So, the last element of the array is at index position 7:
```
disciplines[7]
```
In fact, there is a short cut for getting elements at the end of the array, and
that is to use an offset with a minus in front. The number is then the offset
from one past the last item. For example, here is another way to get the last
element:
```
disciplines[-1]
```
The last but one element:
```
disciplines[-2]
```
## Index with slices
Sometimes we want more than one element from the array. For example, we might want the first 4 elements from the array. We can get these using an array *slice*. It looks like this:
```
# All the elements from offset zero up to (not including) 4
disciplines[0:4]
# All the elements from offset 4 up to (not including) 8
disciplines[4:8]
```
You can omit the first number, if you mean to start at offset 0:
```
disciplines[:4]
```
You can omit the last number if you mean to end at the last element of the array:
```
disciplines[4:]
```
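As a quick check on a small stand-in array, the omitted-number forms give the same elements as the explicit ones:

```python
import numpy as np

a = np.array([0, 1, 2, 3, 4, 5, 6, 7])

# Omitting the first number means: start at offset 0.
print(np.array_equal(a[:4], a[0:4]))  # True
# Omitting the last number means: go to the end of the array.
print(np.array_equal(a[4:], a[4:8]))  # True
```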
## Another way of indexing into arrays
The methods of indexing above work just as well for [lists](../data-types/lists) as they do for arrays.
There are other, particularly useful forms of indexing that only work for
arrays. We will cover these later - but the most important of these is the
ability to [index arrays with Boolean arrays](../data-frames/boolean_arrays).
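As a small preview of that Boolean-array indexing (a sketch on a toy array, not the course data):

```python
import numpy as np

easiness_demo = np.array([3.2, 2.8, 3.5, 2.9])
# A Boolean array saying, element by element, whether the value is above 3.
above_three = easiness_demo > 3
# Indexing with the Boolean array keeps only the True positions.
print(easiness_demo[above_three])  # [3.2 3.5]
```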
```
import os
import pickle
import warnings
import matplotlib.pyplot as plt
import nltk
import numpy as np
import pandas as pd
from nltk.corpus import stopwords
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
recall_score)
from sklearn.model_selection import (GridSearchCV, KFold, RandomizedSearchCV,
learning_curve)
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
%matplotlib inline
warnings.filterwarnings('ignore')
np.random.seed(71)
train_data = pd.read_csv(r'C:\Users\My PC\Documents\TXT\Fake-News-Detection-System-master\datasets\train.csv')
valid_data = pd.read_csv(r'C:\Users\My PC\Documents\TXT\Fake-News-Detection-System-master\datasets\valid.csv')
test_data = pd.read_csv(r'C:\Users\My PC\Documents\TXT\Fake-News-Detection-System-master\datasets\test.csv')
def show_eval_scores(model, test_set, model_name):
y_pred = model.predict(test_set['news'])
y_true = test_set['label']
f1 = f1_score(y_true, y_pred)
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
accuracy = accuracy_score(y_true, y_pred)
print('Report for {}'.format(model_name))
print('Accuracy is: {}'.format(accuracy))
print('F1 score is: {}'.format(f1))
print('Precision score is: {}'.format(precision))
print('Recall score is: {}'.format(recall))
print('Train dataset size: {}'.format(train_data.shape))
print('Valid dataset size: {}'.format(valid_data.shape))
print('Test dataset size: {}'.format(test_data.shape))
training_set = pd.concat([train_data, valid_data], ignore_index=True)
print('Training set size: {}'.format(training_set.shape))
training_set.sample(5)
stopwords_list = list(stopwords.words('english'))
tfidf_V = TfidfVectorizer(stop_words=stopwords_list, use_idf=True, smooth_idf=True)
train_count = tfidf_V.fit_transform(training_set['news'].values)
```
## Logistic Regression
```
stopwords_list = list(stopwords.words('english'))
lr_pipeline = Pipeline([
('lr_TF', TfidfVectorizer(lowercase=False, ngram_range=(1, 5), stop_words=stopwords_list, use_idf=True, smooth_idf=True)),
('lr_clf', LogisticRegression(C=1.0, random_state=42, n_jobs=-1))
])
lr_pipeline.fit(training_set['news'], training_set['label'])
show_eval_scores(lr_pipeline, test_data, 'Logistic Regression TFIDF Vectorizer')
```
## Naive Bayes
```
nb_pipeline = Pipeline([
('nb_TF', TfidfVectorizer(lowercase=True, ngram_range=(1, 2), stop_words=stopwords_list, use_idf=True, smooth_idf=True)),
('nb_clf', MultinomialNB(alpha=2.0))
])
nb_pipeline.fit(training_set['news'], training_set['label'])
show_eval_scores(nb_pipeline, test_data, 'Naive Bayes TFIDF Vectorizer')
```
## Random Forest
```
rf_pipeline = Pipeline([
('rf_TF', TfidfVectorizer(lowercase=True, ngram_range=(1, 2), stop_words=stopwords_list, use_idf=True, smooth_idf=True)),
('rf_clf', RandomForestClassifier(n_estimators=200, max_depth=15, random_state=42, n_jobs=-1))
])
rf_pipeline.fit(training_set['news'], training_set['label'])
show_eval_scores(rf_pipeline, test_data, 'Random Forest Classifier TFIDF Vectorizer')
```
## Support Vector Machines
```
svm_pipeline = Pipeline([
('svm_TF', TfidfVectorizer(lowercase=True, ngram_range=(1, 2), stop_words=stopwords_list, use_idf=True, smooth_idf=True)),
('svm_clf', SVC(gamma=0.2, kernel='rbf', random_state=42))
])
svm_pipeline.fit(training_set['news'], training_set['label'])
show_eval_scores(svm_pipeline, test_data, 'SVM Classifier TFIDF Vectorizer')
```
## Soft Estimator
```
lr_voting_pipeline = Pipeline([
('lr_TF', TfidfVectorizer(lowercase=False, ngram_range=(1, 5), stop_words=stopwords_list, use_idf=True, smooth_idf=True)),
('lr_clf', LogisticRegression(C=1.0, random_state=42, n_jobs=-1))
])
nb_voting_pipeline = Pipeline([
('nb_TF', TfidfVectorizer(lowercase=True, ngram_range=(1, 2), stop_words=stopwords_list, use_idf=True, smooth_idf=True)),
('nb_clf', MultinomialNB(alpha=2.0))
])
svm_voting_pipeline = Pipeline([
('svm_TF', TfidfVectorizer(lowercase=True, ngram_range=(1, 2), stop_words=stopwords_list, use_idf=True, smooth_idf=True)),
('svm_clf', SVC(gamma=0.2, kernel='rbf', random_state=42, probability=True))
])
rf_voting_pipeline = Pipeline([
('rf_TF', TfidfVectorizer(lowercase=True, ngram_range=(1, 2), stop_words=stopwords_list, use_idf=True, smooth_idf=True)),
('rf_clf', RandomForestClassifier(n_estimators=200, max_depth=15, random_state=42, n_jobs=-1))
])
voting_classifier = VotingClassifier(estimators=[
('lr', lr_voting_pipeline), ('nb', nb_voting_pipeline),
('svm', svm_voting_pipeline), ('rf', rf_voting_pipeline)], voting='soft', n_jobs=-1)
voting_classifier.fit(training_set['news'], training_set['label'])
show_eval_scores(voting_classifier, test_data, 'Voting Classifier(soft) TFIDF Vectorizer')
```
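Soft voting works by averaging each pipeline's predicted class probabilities and taking the argmax, rather than counting hard class votes. A minimal numpy sketch of the mechanics (with made-up probabilities, not the fitted pipelines above):

```python
import numpy as np

# Hypothetical P(class 0), P(class 1) for two samples from three models.
probas = np.array([
    [[0.6, 0.4], [0.2, 0.8]],  # model 1
    [[0.7, 0.3], [0.4, 0.6]],  # model 2
    [[0.4, 0.6], [0.3, 0.7]],  # model 3
])

avg = probas.mean(axis=0)   # average probabilities over the models
pred = avg.argmax(axis=1)   # class with the highest mean probability
print(pred)  # [0 1]
```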
```
import regex as re
```
# Implement opcodes
```
# Addition
def addr(reg, A, B, C):
reg[C] = reg[A] + reg[B]
def addi(reg, A, B, C):
reg[C] = reg[A] + B
# Multiplication
def mulr(reg, A, B, C):
reg[C] = reg[A] * reg[B]
def muli(reg, A, B, C):
reg[C] = reg[A] * B
# Bitwise-AND
def banr(reg, A, B, C):
reg[C] = reg[A] & reg[B]
def bani(reg, A, B, C):
reg[C] = reg[A] & B
# Bitwise-OR
def borr(reg, A, B, C):
reg[C] = reg[A] | reg[B]
def bori(reg, A, B, C):
reg[C] = reg[A] | B
# Assignment
def setr(reg, A, B, C):
reg[C] = reg[A]
def seti(reg, A, B, C):
reg[C] = A
# Greater-than testing
def gtir(reg, A, B, C):
if A > reg[B]: reg[C] = 1
else: reg[C] = 0
def gtri(reg, A, B, C):
if reg[A] > B: reg[C] = 1
else: reg[C] = 0
def gtrr(reg, A, B, C):
if reg[A] > reg[B]: reg[C] = 1
else: reg[C] = 0
# Equality testing
def eqir(reg, A, B, C):
if A == reg[B]: reg[C] = 1
else: reg[C] = 0
def eqri(reg, A, B, C):
if reg[A] == B: reg[C] = 1
else: reg[C] = 0
def eqrr(reg, A, B, C):
if reg[A] == reg[B]: reg[C] = 1
else: reg[C] = 0
# Store a list of all the opcodes
opcodes = [
addr, addi, mulr, muli,
banr, bani, borr, bori,
setr, seti,
gtir, gtri, gtrr,
eqir, eqri, eqrr
]
# Also create a lookup by name since we're going to need this
opcodes_by_name = { f.__name__: f for f in opcodes }
def test(before, sample, after):
"""
Tests a program sample to see which opcodes produces the observed output.
Returns possible opcodes by name.
"""
valid = []
for op in opcodes:
reg = before.copy()
op(reg, *sample[1:])
if (reg == after):
valid.append(op.__name__)
return valid
test([3,2,1,1], [9,2,1,2], [3,2,2,1])
with open("./16-input.txt", "r") as FILE:
data = FILE.read()
# pattern = re.compile(r"BEFORE: [(.*)]\s+(\d+ \d+ \d+ \d+)\s+AFTER: [(.*)]", re.MULTILINE)
pattern = re.compile(r"Before:\s+\[(.*)\]\s+(\d+ \d+ \d+ \d+)\s+After:\s+\[(.*)\]", re.MULTILINE)
matches = pattern.findall(data)
count = 0
print(len(matches),"samples read")
for m in matches:
before = [int(n.strip()) for n in m[0].split(',')]
sample = [int(n.strip()) for n in m[1].split(' ')]
after = [int(n.strip()) for n in m[2].split(',')]
ops = test(before, sample, after)
if len(ops) >= 3:
count+=1
print(count, "samples matched 3 or more operations")
```
# Part 2
Using the same approach, we now want to narrow down what the opcode is for each operation
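The elimination in the next cell is constraint propagation: whenever an opcode number is down to a single candidate operation, that operation is removed from every other opcode's candidate set, and this repeats until every opcode is resolved. A tiny self-contained sketch of the idea (with toy candidate sets, not the real puzzle data):

```python
# Toy mapping: opcode number -> set of operation names still possible.
candidates = {
    0: {'addr', 'mulr', 'seti'},
    1: {'addr', 'seti'},
    2: {'seti'},
}

resolved = {}
while candidates:
    # Find an opcode that is down to exactly one possibility.
    code, ops = next((k, v) for k, v in candidates.items() if len(v) == 1)
    op = ops.pop()
    resolved[code] = op
    del candidates[code]
    # That operation can no longer belong to any other opcode.
    for v in candidates.values():
        v.discard(op)

print(resolved)  # {2: 'seti', 1: 'addr', 0: 'mulr'}
```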
```
operations = dict()
for m in matches:
before = [int(n.strip()) for n in m[0].split(',')]
sample = [int(n.strip()) for n in m[1].split(' ')]
after = [int(n.strip()) for n in m[2].split(',')]
ops = test(before, sample, after)
opcode = sample[0]
possible_operations = operations.get(opcode, None)
if possible_operations is None:
possible_operations = set(ops)
else:
possible_operations = possible_operations & set(ops)
operations[opcode] = possible_operations
# display(operations)
display("We will now reduce this to a simple set through elimination")
while max([len(x) for x in operations.values()]) > 1:
for k, v in operations.items():
if len(v) == 1:
for k2, v2 in operations.items():
if k != k2:
operations[k2] = v2 - v
operations = {k: v.pop() for k,v in operations.items()}
operations = {k: opcodes_by_name[v] for k,v in operations.items()}
display(operations)
def execute_line(reg, opcode, A, B, C):
reg = reg.copy()
op = operations[opcode]
op(reg, A, B, C)
return reg
# Let's do a simple test - let's run a simple program. Expect [1,2,3,4] at the end
# You may need to adjust the opcodes to match your input
reg = [0,0,0,0]
reg = execute_line(reg, 5, 1, 0, 0) # setr(1,0,0) - Value 1 => Register 0
reg = execute_line(reg, 1, 0, 1, 1) # addi(0,1,1) - Register 0 + Value 1 => Register 1
reg = execute_line(reg, 12, 0, 1, 2) # addr(0,1,2) - Register 0 + Register 1 => Register 2
reg = execute_line(reg, 4, 1, 1, 3) # mulr(1,1,3) - Register 1 * Register 1 => Register 3
reg
```
We can now read the program line by line and execute it. We find the start of the program by looking for the first run of three consecutive newlines (the blank lines separating the samples from the program).
```
program = data[data.index("\n\n\n"):].strip().splitlines()
reg = [0,0,0,0]
for line in program:
line = [int(x.strip()) for x in line.split(' ')]
reg = execute_line(reg, *line)
reg
```
```
import pickle
import numpy as np
import os
from sklearn.model_selection import train_test_split
from collections import Counter
from imblearn.over_sampling import RandomOverSampler
f = open("../../dataset/sense/dict_sense-keys", 'rb')
dict_sense_keys = pickle.load(f)
f.close()
f = open("../../dataset/sense/dict_word-sense", 'rb')
dict_word_sense = pickle.load(f)
f.close()
f = open('../Glove/word_embedding_glove', 'rb')
word_embedding = pickle.load(f)
f.close()
word_embedding = word_embedding[: len(word_embedding)-1]
f = open('../Glove/vocab_glove', 'rb')
vocab = pickle.load(f)
f.close()
word2id = dict((w, i) for i,w in enumerate(vocab))
id2word = dict((i, w) for i,w in enumerate(vocab))
unknown_token = "UNKNOWN_TOKEN"
with open('/data/aviraj/dataset/raw_preprocess_train','rb') as f:
data=pickle.load(f)
with open('/data/aviraj/dataset/fulldata_vocab_sense','rb') as f:
vocab_lex=pickle.load(f)
lex2id = dict((s, i) for i,s in enumerate(vocab_lex))
id2lex = dict((i, s) for i,s in enumerate(vocab_lex))
print(len(vocab_lex))
max_sent_size = 200
_pos = []
for i in range(len(data)):
for pp in data[i][4]:
_pos.append(pp)
pos_count = Counter(_pos)
pos_count = pos_count.most_common()
vocab_pos = [pp for pp, c in pos_count]
pos2id = dict((s, i) for i,s in enumerate(vocab_pos))
print(len(vocab_pos))
data_y1 = []
data_y2 = []
data_y3 = []
for i in range(len(data)):
if (len(data[i][1])<=200):
for j in range(len(data[i][2])):
if data[i][2][j] is not None:
data_y1.append(dict_sense_keys[data[i][2][j]][3])
data_y2.append(dict_sense_keys[data[i][2][j]][4])
data_y3.append(dict_sense_keys[data[i][2][j]][5])
sense_count1 = Counter(data_y1)
sense_count1 = sense_count1.most_common()
sense_count2 = Counter(data_y2)
sense_count4 = sense_count2.most_common(272)
sense_count2 = sense_count2.most_common(312)
sense_count3 = Counter(data_y3)
sense_count5 = sense_count3.most_common(505)
sense_count3 = sense_count3.most_common(1051)
dict_sense_count1 = dict(sense_count1)
dict_sense_count2 = dict(sense_count2)
dict_sense_count3 = dict(sense_count3)
dict_sense_count4 = dict(sense_count4)
dict_sense_count5 = dict(sense_count5)
print(len(sense_count1), len(sense_count2), len(sense_count3), len(sense_count4), len(sense_count5))
data_x = []
data_pos = []
data_label1 = []
data_label2 = []
data_label3 = []
data_label4 = []
data_label5 = []
for i in range(len(data)):
if not all(np.array(data[i][2])==None) and (len(data[i][1])<=200):
data_label1.append([ss if ss is not None and dict_sense_keys[ss][3] in dict_sense_count1 else None for ss in data[i][2]])
data_label2.append([ss if ss is not None and dict_sense_keys[ss][4] in dict_sense_count2 else None for ss in data[i][2]])
data_label3.append([ss if ss is not None and dict_sense_keys[ss][5] in dict_sense_count3 else None for ss in data[i][2]])
data_label4.append([ss if ss is not None and dict_sense_keys[ss][4] in dict_sense_count4 else None for ss in data[i][2]])
data_label5.append([ss if ss is not None and dict_sense_keys[ss][5] in dict_sense_count5 else None for ss in data[i][2]])
data_x.append(data[i][1])
data_pos.append(data[i][4])
def data_prepare(sense_id, x, pos, y, sense_count, lex_cond=False, pos_cond=False):
num_examples = len(x)
vocab_sense = [s for s, c in sense_count]
sense2id = dict((s, i) for i,s in enumerate(vocab_sense))
xx = np.zeros([num_examples, max_sent_size], dtype=int)
xx_mask = np.zeros([num_examples, max_sent_size], dtype=bool)
ss_mask = np.zeros([num_examples, max_sent_size], dtype=bool)
yy = np.zeros([num_examples,max_sent_size], dtype=int)
y_lex = np.zeros([num_examples, max_sent_size], dtype=int)
y_pos = np.zeros([num_examples, max_sent_size], dtype=int)
for j in range(num_examples):
for i in range(max_sent_size):
if(i>=len(x[j])):
break
w = x[j][i]
s = y[j][i]
p = pos[j][i]
xx[j][i] = word2id[w] if w in word2id else word2id['UNKNOWN_TOKEN']
xx_mask[j][i] = True
ss_mask[j][i] = True if s is not None and dict_sense_keys[s][sense_id] in vocab_sense else False
yy[j][i] = sense2id[dict_sense_keys[s][sense_id]] if s is not None and dict_sense_keys[s][sense_id] in vocab_sense else 0
if(lex_cond):
y_lex[j][i] = lex2id[dict_sense_keys[s][3]] if s is not None and dict_sense_keys[s][3] in vocab_lex else len(vocab_lex)
if(pos_cond):
y_pos[j][i] = pos2id[p] if p in vocab_pos else len(vocab_pos)
return xx, xx_mask, ss_mask, yy, y_lex, y_pos
data_x = np.array(data_x)
data_pos = np.array(data_pos)
def train_val_data(name, sense_id, index, split_label, data_label, sense_count, sampling_list, lex_cond=False, pos_cond=False, sampling=False):
index_train, index_val, label_train_id, label_val_id = train_test_split(index, split_label, train_size=0.8, shuffle=True, stratify=split_label, random_state=0)
if(sampling):
dict_sample = dict(sampling_list)
# NOTE: newer imbalanced-learn versions rename `ratio` to `sampling_strategy`
# and `fit_sample` to `fit_resample`.
sm = RandomOverSampler(ratio=dict_sample)
index_train1 = np.array(index_train).reshape(-1, 1)
sampled_index, _ = sm.fit_sample(index_train1, label_train_id)
count = Counter(_)
count = count.most_common()
sampled_index_train = np.array(sampled_index).reshape(1, -1)
index_train = sampled_index_train[0]
data_label = np.array(data_label)
x_train = data_x[index_train]
y_train = data_label[index_train]
x_val = data_x[index_val]
y_val = data_label[index_val]
pos_train = []
pos_val = []
if(pos_cond):
pos_train = data_pos[index_train]
pos_val = data_pos[index_val]
x_id_train, mask_train, sense_mask_train, y_id_train, lex_train, pos_id_train = data_prepare(sense_id, x_train, pos_train, y_train, sense_count, lex_cond=lex_cond, pos_cond=pos_cond)
x_id_val, mask_val, sense_mask_val, y_id_val, lex_val, pos_id_val = data_prepare(sense_id, x_val, pos_val, y_val, sense_count, lex_cond=lex_cond, pos_cond=pos_cond)
train_data = {'x':x_id_train,'x_mask':mask_train, 'sense_mask':sense_mask_train, 'y':y_id_train, 'lex':lex_train, 'pos':pos_id_train}
val_data = {'x':x_id_val,'x_mask':mask_val, 'sense_mask':sense_mask_val, 'y':y_id_val, 'lex':lex_val, 'pos':pos_id_val}
with open('/data/aviraj/dataset/train_val_data_coarse/all_word_'+ name,'wb') as f:
pickle.dump([train_data,val_data], f)
print(len(x_id_train)+len(x_id_val))
split_label1 = []
split_label2 = []
split_label3 = []
split_label4 = []
split_label5 = []
index1 = []
index2 = []
index3 = []
index4 = []
index5 = []
for jj, lab in enumerate(data_label1):
min_idx = np.argmin([dict_sense_count1[dict_sense_keys[lab[i]][3]] if lab[i] is not None else np.inf for i in range(len(lab)) ])
if(lab[min_idx] is not None):
index1.append(jj)
split_label1.append(dict_sense_keys[lab[min_idx]][3])
for jj, lab in enumerate(data_label2):
min_idx = np.argmin([dict_sense_count2[dict_sense_keys[lab[i]][4]] if lab[i] is not None else np.inf for i in range(len(lab)) ])
if(lab[min_idx] is not None):
index2.append(jj)
split_label2.append(dict_sense_keys[lab[min_idx]][4])
for jj, lab in enumerate(data_label3):
min_idx = np.argmin([dict_sense_count3[dict_sense_keys[lab[i]][5]] if lab[i] is not None else np.inf for i in range(len(lab)) ])
if(lab[min_idx] is not None):
index3.append(jj)
split_label3.append(dict_sense_keys[lab[min_idx]][5])
for jj, lab in enumerate(data_label4):
min_idx = np.argmin([dict_sense_count4[dict_sense_keys[lab[i]][4]] if lab[i] is not None else np.inf for i in range(len(lab)) ])
if(lab[min_idx] is not None):
index4.append(jj)
split_label4.append(dict_sense_keys[lab[min_idx]][4])
for jj, lab in enumerate(data_label5):
min_idx = np.argmin([dict_sense_count5[dict_sense_keys[lab[i]][5]] if lab[i] is not None else np.inf for i in range(len(lab)) ])
if(lab[min_idx] is not None):
index5.append(jj)
split_label5.append(dict_sense_keys[lab[min_idx]][5])
print(len(split_label1))
print(len(split_label2))
print(len(split_label3))
print(len(split_label4))
print(len(split_label5))
train_val_data('lex1', 3, index1, split_label1, data_label1, sense_count1, [], lex_cond=False, pos_cond=True)
train_val_data('lex2', 3, index2, split_label2, data_label2, sense_count1, [], lex_cond=False, pos_cond=True)
train_val_data('lex3', 3, index3, split_label3, data_label3, sense_count1, [], lex_cond=False, pos_cond=True)
train_val_data('sense1', 4, index4, split_label4, data_label4, sense_count4, [], lex_cond=True, pos_cond=True)
train_val_data('sense2', 4, index5, split_label5, data_label5, sense_count4, [], lex_cond=True, pos_cond=True)
train_val_data('full_sense', 5, index5, split_label5, data_label5, sense_count5, [], lex_cond=True, pos_cond=True)
sampled_sense_count1 = [('1:19', 10000),
('1:17', 10000),
('2:34', 10000),
('2:33', 10000),
('1:27', 10000),
('2:37', 8000),
('1:24', 8000),
('1:08', 8000),
('1:12', 7000),
('1:22', 5000),
('2:29', 5000),
('1:05', 3000),
('1:16', 3000),
('1:25', 3000),
('1:20', 3000),
('1:13', 2000),
('2:43', 1100),
('3:44', 1000)]
sampled_sense_count2= []
for s, c in sense_count2[260:]:
sampled_sense_count2.append((s, 500))
for s, c in sense_count2[180:260]:
sampled_sense_count2.append((s, 2000))
for s, c in sense_count2[140:180]:
sampled_sense_count2.append((s, 5000))
for s, c in sense_count2[75:140]:
sampled_sense_count2.append((s, 8000))
for s, c in sense_count2[25:75]:
sampled_sense_count2.append((s, 12000))
sampled_sense_count3= []
for s, c in sense_count3[400:]:
sampled_sense_count3.append((s, 500))
for s, c in sense_count3[200:400]:
sampled_sense_count3.append((s, 2000))
for s, c in sense_count3[100:200]:
sampled_sense_count3.append((s, 5000))
for s, c in sense_count3[70:100]:
sampled_sense_count3.append((s, 8000))
for s, c in sense_count3[25:70]:
sampled_sense_count3.append((s, 12000))
sampled_sense_count4= []
for s, c in sense_count4[260:]:
sampled_sense_count4.append((s, 500))
for s, c in sense_count4[180:260]:
sampled_sense_count4.append((s, 2000))
for s, c in sense_count4[140:180]:
sampled_sense_count4.append((s, 5000))
for s, c in sense_count4[75:140]:
sampled_sense_count4.append((s, 8000))
for s, c in sense_count4[25:75]:
sampled_sense_count4.append((s, 12000))
sampled_sense_count5= []
for s, c in sense_count5[400:]:
sampled_sense_count5.append((s, 500))
for s, c in sense_count5[200:400]:
sampled_sense_count5.append((s, 2000))
for s, c in sense_count5[100:200]:
sampled_sense_count5.append((s, 5000))
for s, c in sense_count5[70:100]:
sampled_sense_count5.append((s, 8000))
for s, c in sense_count5[25:70]:
sampled_sense_count5.append((s, 12000))
train_val_data('lex1_sampled', 3, index1, split_label1, data_label1, sense_count1, sampled_sense_count1, lex_cond=False, pos_cond=True, sampling=True)
train_val_data('lex2_sampled', 3, index2, split_label2, data_label2, sense_count1, sampled_sense_count2, lex_cond=False, pos_cond=True, sampling=True)
train_val_data('lex3_sampled', 3, index3, split_label3, data_label3, sense_count1, sampled_sense_count3, lex_cond=False, pos_cond=True, sampling=True)
train_val_data('sense1_sampled', 4, index4, split_label4, data_label4, sense_count4, sampled_sense_count4, lex_cond=True, pos_cond=True, sampling=True)
train_val_data('sense2_sampled', 4, index5, split_label5, data_label5, sense_count4, sampled_sense_count5, lex_cond=True, pos_cond=True, sampling=True)
train_val_data('full_sense_sampled', 5, index5, split_label5, data_label5, sense_count5, sampled_sense_count5, lex_cond=True, pos_cond=True, sampling=True)
```
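`RandomOverSampler` above works by duplicating indices of under-represented classes until each class reaches its requested count. A minimal numpy sketch of the same idea (with toy labels, not the real sense data):

```python
import numpy as np
from collections import Counter

rng = np.random.RandomState(0)
labels = np.array(['a', 'a', 'a', 'a', 'b'])  # class 'b' is the minority
targets = {'a': 4, 'b': 4}                    # requested count per class

resampled_idx = []
for cls, n in targets.items():
    idx = np.flatnonzero(labels == cls)
    # Draw indices with replacement until the class reaches its target count.
    resampled_idx.extend(rng.choice(idx, size=n, replace=True))

print(Counter(labels[resampled_idx]))  # both classes now have 4 samples
```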
```
# Remove input cells at runtime (nbsphinx)
import IPython.core.display as d
d.display_html('<script>jQuery(function() {if (jQuery("body.notebook_app").length == 0) { jQuery(".input_area").toggle(); jQuery(".prompt").toggle();}});</script>', raw=True)
```
# Energy estimation (TRAINING)
**WARNING**
This is still a work-in-progress, it will evolve with the pipeline comparisons and converge with ctaplot+cta-benchmarks.
Part of this notebook's content was previously produced by `protopipe.scripts.model_diagnostics`, which will be discontinued.
**Author(s):**
- Dr. Michele Peresano (CEA-Saclay/IRFU/DAp/LEPCHE), 2020
based on previous work by J. Lefacheur.
**Description:**
This notebook contains benchmarks for the _protopipe_ pipeline regarding information from training data used for the training of the energy model.
Additional information is provided by `protopipe.scripts.model_diagnostics`, which is gradually being migrated here and will eventually be discontinued.
**NOTES:**
- these benchmarks will be cross-validated and migrated in cta-benchmarks/ctaplot
- Let's try to follow [this](https://www.overleaf.com/16933164ghbhvjtchknf) document by adding those benchmarks or proposing new ones.
**Requirements:**
To run this notebook you will need a set of trained data produced on the grid with protopipe.
The MC production to be used and the appropriate set of files to use for this notebook can be found [here](https://forge.in2p3.fr/projects/step-by-step-reference-mars-analysis/wiki#The-MC-sample).
The data format required to run the notebook is the current one used by _protopipe_ .
Later on it will be the same as in _ctapipe_ (1 full DL1 file + 1 DL2 file with only shower geometry information).
**Development and testing:**
As with any other part of _protopipe_ and being part of the official repository, this notebook can be further developed by any interested contributor.
The execution of this notebook is not currently automatic, it must be done locally by the user - preferably _before_ pushing a pull-request.
**IMPORTANT:** Please, if you wish to contribute to this notebook, before pushing anything to your branch (better even before opening the PR) clear all the output and remove any local directory paths that you used for testing (leave empty strings).
**TODO:**
* finish to merge model diagnostics output
* add remaining benchmarks from CTA-MARS comparison
* same for EventDisplay
## Table of contents
- [Charge profile](#Charge-profile)
## Imports
```
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from matplotlib.colors import LogNorm, PowerNorm
count = 0
cmap = dict()
for key in colors.cnames:
if 'dark' in key:
#if key in key:
cmap[count] = key
count = count + 1
#cmap = {'black': 0, 'red': 1, 'blue': 2, 'green': 3}
cmap = {0: 'black', 1: 'red', 2: 'blue', 3: 'green'}
import os
from pathlib import Path
import numpy as np
import pandas as pd
import tables
```
## Functions
```
def plot_profile(ax, data, xcol, ycol, n_xbin, x_range, logx=False, **kwargs):
color = kwargs.get('color', 'red')
label = kwargs.get('label', '')
fill = kwargs.get('fill', False)
alpha = kwargs.get('alpha', 1)
xlabel = kwargs.get('xlabel', '')
ylabel = kwargs.get('ylabel', '')
xlim = kwargs.get('xlim', None)
ms = kwargs.get('ms', 8)
if logx is False:
bin_edges = np.linspace(x_range[0], x_range[-1], n_xbin, True)
bin_center = 0.5 * (bin_edges[1:] + bin_edges[:-1])
bin_width = bin_edges[1:] - bin_edges[:-1]
else:
bin_edges = np.logspace(np.log10(x_range[0]), np.log10(x_range[-1]), n_xbin, True)
bin_center = np.sqrt(bin_edges[1:] * bin_edges[:-1])
bin_width = bin_edges[1:] - bin_edges[:-1]
y = []
yerr = []
for idx in range(len(bin_center)):
counts = data[ (data[xcol] > bin_edges[idx]) & (data[xcol] <= bin_edges[idx+1]) ][ycol]
y.append(counts.mean())
yerr.append(counts.std() / np.sqrt(len(counts)))
ax.errorbar(x=bin_center, y=y, xerr=bin_width / 2., yerr=yerr, label=label, fmt='o', color=color, ms=ms)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
if logx is True:
ax.set_xscale('log')
ax.legend(loc='upper right', framealpha=1, fontsize='medium')
#ax.grid(which='both')
return ax
```
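The per-bin statistics that `plot_profile` computes — the mean of `ycol` and its standard error in each `xcol` bin — can be checked on synthetic data without any plotting (a sketch with made-up numbers):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': [1., 2., 3., 11., 12., 13.],
                   'y': [10., 20., 30., 100., 200., 300.]})
bin_edges = np.array([0., 10., 20.])

means, sems = [], []
for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
    in_bin = df[(df['x'] > lo) & (df['x'] <= hi)]['y']
    means.append(in_bin.mean())
    sems.append(in_bin.std() / np.sqrt(len(in_bin)))

print(means[0], means[1])  # 20.0 200.0
```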
## Load
```
# First we check if a _plots_ folder exists already.
# If not, we create it.
Path("./plots").mkdir(parents=True, exist_ok=True)
# Setup for data loading
parentDir = "/Users/michele/Applications/ctasoft/dirac" # Full path location of 'shared_folder'
analysisName = "v0.4.0_dev1"
# Load data
mode = "tail"
indir = os.path.join(parentDir, "shared_folder/analyses", analysisName, "data", "TRAINING/for_energy_estimation")
infile = 'TRAINING_energy_{}_gamma_merged.h5'.format(mode)
data_image = pd.read_hdf(os.path.join(indir,infile), key='LSTCam')
print('#Images={}'.format(len(data_image)))
data_image['log10_hillas_intensity'] = np.log10(data_image['hillas_intensity'])
```
## Benchmarks
### Charge profile
[back to top](#Table-of-contents)
LST-subarray with the condition of `N_LST >= 2`
```
tel_ids = [1, 2, 3, 4] # WARNING! These are only the LSTs!
n_feature = len(tel_ids)
nrows = int(n_feature / 2) if n_feature % 2 == 0 else int((n_feature + 1) / 2)
emin = 0.03
emax = 10
nbin = 4
energy_range = np.logspace(np.log10(emin), np.log10(emax), nbin + 1, True)
fig = plt.figure(figsize=(5,5))
ax = plt.gca()
for jdx in range(0, len(energy_range) - 1):
data_sel = data_image[data_image['N_LST'] >= 2]
data_sel = data_sel[(data_sel['true_energy'] >= energy_range[jdx]) &
(data_sel['true_energy'] < energy_range[jdx + 1])]
xbins = 10 + 1
xrange = [10, 2000]
opt = {'xlabel': 'Impact parameter [m]', 'ylabel': 'Charge [p.e.]', 'color': cmap[jdx],
'label': 'E [{:.2f},{:.2f}] TeV'.format(energy_range[jdx], energy_range[jdx+1]),
'ms': 6}
plot_profile(ax, data=data_sel,
xcol='impact_dist', ycol='hillas_intensity',
n_xbin=xbins, x_range=xrange, logx=True, **opt)
#ax.grid(which='both')
ax.set_yscale('log')
ax.set_ylim([100, 2. * 100000.])
ax.set_xlim([10, 2000])
ax.grid(which='both')
plt.tight_layout()
fig.savefig(f"./plots/to_energy_estimation_intensity_profile_protopipe_{analysisName}.png")
```
```
#3.6 Implementation of Softmax Regression from Scratch
import torch
import torchvision
from torch.utils import data
from torchvision import transforms
from IPython import display
from d2l import torch as d2l
def load_data_fashion_mnist(batch_size, resize=None): # Download and read the Fashion-MNIST dataset
"""Download the Fashion-MNIST dataset and then load it into memory."""
trans = [transforms.ToTensor()]
if resize:
trans.insert(0, transforms.Resize(resize))
trans = transforms.Compose(trans)
mnist_train = torchvision.datasets.FashionMNIST(root="../data", train=True, transform=trans, download=True)
mnist_test = torchvision.datasets.FashionMNIST(root="../data", train=False, transform=trans, download=True)
return (data.DataLoader(mnist_train, batch_size, shuffle=True, num_workers=d2l.get_dataloader_workers()),
data.DataLoader(mnist_test, batch_size, shuffle=False, num_workers=d2l.get_dataloader_workers()))
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
#3.6.1 Initializing Model Parameters
num_inputs = 784
num_outputs = 10
W = torch.normal(0, 0.01, size=(num_inputs, num_outputs), requires_grad=True)
b = torch.zeros(num_outputs, requires_grad=True)
#3.6.2 Defining the Softmax Operation
X = torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
X.sum(0, keepdim=True), X.sum(1, keepdim=True)
def softmax(X):
X_exp = torch.exp(X)
partition = X_exp.sum(1, keepdim=True)
return X_exp / partition # The broadcasting mechanism is applied here
#The three steps of softmax:
#1. Exponentiate each entry with exp
#2. Sum each row (one row per example in the minibatch) to get each example's normalization constant
#3. Divide each row by its normalization constant, so each row sums to 1
X = torch.normal(0, 1, (2, 5))
print("X init==>\n",X)
X_prob = softmax(X)
print("X_prob==>\n",X_prob)
print("X_prob's row sum==>\n",X_prob.sum(1))
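# Aside (not from the book): the softmax above can overflow for large inputs,
# since torch.exp(X) blows up. A numerically stable variant subtracts the
# row-wise max first -- softmax is unchanged by this constant shift:
import torch
def stable_softmax(X):
    X_shifted = X - X.max(1, keepdim=True).values
    X_exp = torch.exp(X_shifted)
    return X_exp / X_exp.sum(1, keepdim=True)
print(stable_softmax(torch.tensor([[1000.0, 1000.0]])))  # tensor([[0.5000, 0.5000]]) instead of nan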
#3.6.3 Defining the Model
def net(X): # The softmax regression model
return softmax(torch.matmul(X.reshape((-1, W.shape[0])), W) + b) # reshape flattens each raw image in the batch into a vector
#3.6.4 Defining the Loss Function
y = torch.tensor([0, 2])
y_hat = torch.tensor([[0.1, 0.3, 0.6], [0.3, 0.2, 0.5]])
y_hat[[0, 1], y]
def cross_entropy(y_hat, y): #cross-entropy loss function
return -torch.log(y_hat[range(len(y_hat)), y])
cross_entropy(y_hat, y)
#3.6.5 Classification Accuracy
def accuracy(y_hat, y):
"""Compute the number of correct predictions."""
if len(y_hat.shape) > 1 and y_hat.shape[1] > 1:
y_hat = y_hat.argmax(axis=1) # argmax returns the index of the row-wise (axis=1) maximum, i.e. the predicted class
#print("predicted class index = ", y_hat)
cmp = y_hat.type(y.dtype) == y
#print("actual class index = ", y)
return float(cmp.type(y.dtype).sum())
accuracy(y_hat, y) / len(y)
def evaluate_accuracy(net, data_iter):
"""Compute the accuracy for a model on a dataset."""
if isinstance(net, torch.nn.Module):
net.eval() # Set the model to evaluation mode
metric = Accumulator(2) # No. of correct predictions, no. of predictions
with torch.no_grad():
for X, y in data_iter:
metric.add(accuracy(net(X), y), y.numel())
return metric[0] / metric[1]
evaluate_accuracy(net, test_iter)
#3.6.6 Training
def train_epoch_ch3(net, train_iter, loss, updater):
"""The training loop defined in Chapter 3."""
# Set the model to training mode
if isinstance(net, torch.nn.Module):
net.train()
# Sum of training loss, sum of training accuracy, no. of examples
metric = Accumulator(3)
for X, y in train_iter:
# Compute gradients and update parameters
y_hat = net(X)
l = loss(y_hat, y)
if isinstance(updater, torch.optim.Optimizer):
# Using PyTorch in-built optimizer & loss criterion
updater.zero_grad()
l.backward()
updater.step()
metric.add(float(l) * len(y), accuracy(y_hat, y), y.numel())
else:
# Using custom built optimizer & loss criterion
l.sum().backward()
updater(X.shape[0])
metric.add(float(l.sum()), accuracy(y_hat, y), y.numel())
# Return training loss and training accuracy
return metric[0] / metric[2], metric[1] / metric[2]
class Animator: # plots data incrementally as an animation
"""For plotting data in animation."""
def __init__(self, xlabel=None, ylabel=None, legend=None, xlim=None,
ylim=None, xscale='linear', yscale='linear',
fmts=('-', 'm--', 'g-.', 'r:'), nrows=1, ncols=1,
figsize=(3.5, 2.5)):
# Incrementally plot multiple lines
if legend is None:
legend = []
d2l.use_svg_display()
self.fig, self.axes = d2l.plt.subplots(nrows, ncols, figsize=figsize)
if nrows * ncols == 1:
self.axes = [self.axes,]
# Use a lambda function to capture arguments
self.config_axes = lambda: d2l.set_axes(self.axes[
0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
self.X, self.Y, self.fmts = None, None, fmts
def add(self, x, y):
# Add multiple data points into the figure
if not hasattr(y, "__len__"):
y = [y]
n = len(y)
if not hasattr(x, "__len__"):
x = [x] * n
if not self.X:
self.X = [[] for _ in range(n)]
if not self.Y:
self.Y = [[] for _ in range(n)]
for i, (a, b) in enumerate(zip(x, y)):
if a is not None and b is not None:
self.X[i].append(a)
self.Y[i].append(b)
self.axes[0].cla()
for x, y, fmt in zip(self.X, self.Y, self.fmts):
self.axes[0].plot(x, y, fmt)
self.config_axes()
display.display(self.fig)
display.clear_output(wait=True)
def train_ch3(net, train_iter, test_iter, loss, num_epochs, updater):
"""Train a model (defined in Chapter 3)."""
animator = Animator(xlabel='epoch', xlim=[1, num_epochs], ylim=[0.3, 0.9],
legend=['train loss', 'train acc', 'test acc'])
for epoch in range(num_epochs):
train_metrics = train_epoch_ch3(net, train_iter, loss, updater)
test_acc = evaluate_accuracy(net, test_iter)
animator.add(epoch + 1, train_metrics + (test_acc,))
train_loss, train_acc = train_metrics
assert train_loss < 0.5, train_loss
assert train_acc <= 1 and train_acc > 0.7, train_acc
assert test_acc <= 1 and test_acc > 0.7, test_acc
lr = 0.1
def updater(batch_size): # use minibatch stochastic gradient descent to optimize the model's loss function
return d2l.sgd([W, b], lr, batch_size)
num_epochs = 10
train_ch3(net, train_iter, test_iter, cross_entropy, num_epochs, updater)
#3.6.7 Prediction
def predict_ch3(net, test_iter, n=6):
"""Predict labels (defined in Chapter 3)."""
for X, y in test_iter:
break
trues = d2l.get_fashion_mnist_labels(y)
preds = d2l.get_fashion_mnist_labels(net(X).argmax(axis=1))
titles = [true + '\n' + pred for true, pred in zip(trues, preds)]
d2l.show_images(X[0:n].reshape((n, 28, 28)), 1, n, titles=titles[0:n])
predict_ch3(net, test_iter)
```
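The `Accumulator` helper used by `evaluate_accuracy` and `train_epoch_ch3` above comes from the d2l utilities and is not defined in this notebook; a minimal sketch consistent with how it is called here (`Accumulator(n)`, `add`, indexing):

```python
class Accumulator:
    """Accumulate sums over `n` variables, e.g. correct predictions and counts."""
    def __init__(self, n):
        self.data = [0.0] * n

    def add(self, *args):
        # Element-wise add the new values to the running sums
        self.data = [a + float(b) for a, b in zip(self.data, args)]

    def reset(self):
        self.data = [0.0] * len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]
```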

# Operational Modelling Workshop
# Table of contents
- [1. Introduction](#1.-Introduction)
- Environment Setup
- [2. Subset and download data using motuclient](#2.-Subset-and-download-data-using-motuclient)
- [3. Read the files downloaded and access their metadata](#3.-Read-the-files-downloaded-and-access-their-metadata)
- [4. Data conversion](#4.-Data-conversion)
- Conversion to CSV
- Conversion to Shapefile
- [5. Plot the Data](#5.-Plot-the-Data)
- Data visualization at global scale
- Map creation at local scale
- [6. Recursive data plotting](#6.-Recursive-data-plotting)
- Plots Animation (Gif)
***
## 1. Introduction
We will focus on the following product (available on the [Copernicus Catalogue](http://marine.copernicus.eu/services-portfolio/access-to-products/?option=com_csw&task=results)):
- [SST_GLO_SST_L4_NRT_OBSERVATIONS_010_001](http://marine.copernicus.eu/services-portfolio/access-to-products/?option=com_csw&view=details&product_id=SST_GLO_SST_L4_NRT_OBSERVATIONS_010_001)
This product combines satellite data provided by the [GHRSST project](https://www.ghrsst.org) with in-situ observations to determine the sea surface temperature.

- `Level4`: Data have had the greatest amount of processing applied, possibly including modelled output and measurements from several satellites and several days. All input data are validated.
To access the full product information page and its download services, documentation and news flash, please click [here](https://resources.marine.copernicus.eu/?option=com_csw&task=results?option=com_csw&view=details&product_id=SST_GLO_SST_L4_NRT_OBSERVATIONS_010_001)
**Please remember that this product has two resolutions, 0.05° x 0.05° and 0.25° x 0.25° (the highest and lowest resolution, respectively). The resolution shown on the product information page (0.05° x 0.05°) applies only to the following dataset:**
- **METOFFICE-GLO-SST-L4-NRT-OBS-SST-V2**
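The grid spacing can be checked directly from a file's coordinate variables. A sketch using a hypothetical longitude axis matching the 0.25° dataset and the bounding box used later in this workshop (the array here is synthetic, not read from the product):

```python
import numpy as np

# Hypothetical longitude coordinate for the lower-resolution (0.25°) dataset
# over the bounding box used in Section 2 (-15°E to 0°E)
lons = np.arange(-15.0, 0.0 + 0.25, 0.25)

# The resolution is simply the spacing between adjacent grid points
resolution = np.diff(lons)
print(resolution[0])  # 0.25
```

With a real file, one would use the `lon`/`lat` variables read in Section 3 instead of a synthetic array.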
### 1.1 Environment Setup
There are two options to reproduce the content of this workshop:
- **No install - ready to use**: To launch this environment on the cloud anytime from a web browser, click here https://tiny.cc/20200527
- **Regular Jupyter installation**: To execute this environment and [notebook](https://github.com/copernicusmarine/copernicus-jupyter-notebook-gallery/blob/master/10-01-Subset-Download-Read-Convert-Plot-NetCDF-files-over-Global-Ocean.ipynb) on a local machine, follow these instructions: https://tiny.cc/copernicus-training
The computing language used to perform the analysis on the selected data is **"Python"**. In order to subset, download, read, convert and plot the data, the first code cell below imports the required tools.
```
import csv
import glob
import warnings
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import moviepy.editor as mpy
import numpy as np
import pandas as pd
import shapefile
import xarray as xr
from fiona import collection
from matplotlib.dates import date2num, num2date
from mpl_toolkits.basemap import Basemap
from natsort import natsorted
from shapely.geometry import Point, mapping
from shapely.geometry.polygon import Polygon
warnings.filterwarnings('ignore')
%matplotlib inline
```
## 2. Subset and download data using motuclient
The motuclient scripts to request and download data from the SST_GLO_SST_L4_NRT_OBSERVATIONS_010_001 product, from both the monthly-mean (2020-04-15) and daily-mean (2020-04-01 to 2020-04-30) datasets (lower resolution, 0.25° x 0.25°), are shown below.
The files are called "download_motuclient.nc" and "download_motuclient30days.nc" respectively and, once downloaded, they are automatically stored inside the "Data" folder.
The only variable downloaded and used in this tutorial is "analysed_sst" (long name: "analysed sea surface temperature"), expressed in Kelvin. The same geographic bounding box is set for both data requests.
**Below you need to type your CMEMS login credentials (between the quotes "") to be able to download the data:**
```
Username = ""
Password = ""
```
**One month data request (from monthly-mean dataset):**
```
!python -m motuclient \
--user $Username --pwd $Password \
--motu http://nrt.cmems-du.eu/motu-web/Motu \
--service-id SST_GLO_SST_L4_NRT_OBSERVATIONS_010_001-TDS \
--product-id METOFFICE-GLO-SST-L4-NRT-OBS-SST-MON-V2 \
--longitude-min -15 --longitude-max 0 \
--latitude-min 35 --latitude-max 45 \
--date-min "2020-04-15 12:00:00" --date-max "2020-04-15 12:00:00" \
--variable analysed_sst \
--out-dir Data --out-name download_motuclient.nc
```
**30 days data request (from daily-mean dataset):**
```
!python -m motuclient \
--user $Username --pwd $Password \
--motu http://nrt.cmems-du.eu/motu-web/Motu \
--service-id SST_GLO_SST_L4_NRT_OBSERVATIONS_010_001-TDS \
--product-id METOFFICE-GLO-SST-L4-NRT-OBS-SST-V2 \
--longitude-min -15 --longitude-max 0 \
--latitude-min 35 --latitude-max 45 \
--date-min "2020-04-01 12:00:00" --date-max "2020-04-30 12:00:00" \
--variable analysed_sst \
--out-dir Data --out-name download_motuclient30days.nc
```
## 3. Read the files downloaded and access their metadata
**For the monthly-mean file:**
```
#Path where the data file is located
data = "Data/download_motuclient.nc"
#Read the data as dataset (DS)
DS = xr.open_dataset(data)
#Show the metadata
DS
#Extract from the dataset (DS) the variable needed
lons = DS.variables['lon'][:]
lats = DS.variables['lat'][:]
sst = DS.variables['analysed_sst'][0, :, :] #[time,lat,long]
#It is good habit to close a dataset (DS) when done
DS.close()
```
**For the daily-mean file:**
```
data30days = "Data/download_motuclient30days.nc"
DS30d = xr.open_dataset(data30days)
DS30d
DS30d.close()
```
## 4. Data conversion
The NetCDF format is used to store multidimensional scientific data. Like Python, many programming languages and software packages can access NetCDF files. However, it can sometimes be handy to convert them to other formats, such as CSV ("Comma-Separated Values", where each row is a single data record) or Shapefile (a geospatial vector data format for geographic information system software, developed and regulated by Esri). Below are examples of conversion to CSV and Shapefile:
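The idea of "one row per record, with NaN where there is no data" can be illustrated on a tiny synthetic table before applying it to the real file (the values here are made up for illustration):

```python
import numpy as np
import pandas as pd

# A toy flattened grid like the one ds.to_dataframe() produces:
# one row per (lat, lon) record, NaN where analysed_sst is undefined (land)
df = pd.DataFrame({
    "lat": [35.0, 35.0, 35.25, 35.25],
    "lon": [-15.0, -14.75, -15.0, -14.75],
    "analysed_sst": [287.1, np.nan, 287.3, 287.6],
})

csv_text = df.to_csv(index=False)             # every row becomes one CSV record
csv_no_nan = df.dropna().to_csv(index=False)  # drop the empty (land) records
print(csv_no_nan)
```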
### 4.1 Conversion to CSV
```
#Read the file and leave the times in our file encoded as numbers. We create a dataset called "ds"
ds = xr.open_dataset("Data/download_motuclient.nc", decode_times=False)
ds
# Convert our dataset into a dataframe called "df" where the data is organized in tabular form
df = ds.to_dataframe()
df
#The entire dataframe (also the "NaN" values) is converted as csv file
df.to_csv("Data/CSV/download_motuclient.csv")
#Another csv file, without "NaN" values, is produced
data = pd.read_csv("Data/CSV/download_motuclient.csv")
data.dropna().to_csv("Data/CSV/download_motuclient_NoNAN.csv", index = False)
```
### 4.2 Conversion to Shapefile
To produce the shapefile we are going to use the CSV file generated before, in particular the one without NaN values ("download_motuclient_NoNAN.csv"). Proceeding this way avoids including empty or invalid records (e.g. data acquisitions over land) in the shapefile. The geometry chosen to best represent our data is the "Point": starting from our CSV we generate a 2D point cloud describing the "analysed_sst" data distribution. Below is the code to convert the CSV into a shapefile:
```
#Read the CSV header line to inspect the available columns
with open("Data/CSV/download_motuclient_NoNAN.csv") as filecsv:
    columns = [u.strip() for u in filecsv.readline().split(',')]
variable = "analysed_sst"
schema = { 'geometry': 'Point', 'properties': { variable : 'float' } }
with collection("Data/SHP/download_motuclient_NoNAN.shp", "w", "ESRI Shapefile", schema) as output:
with open("Data/CSV/download_motuclient_NoNAN.csv", 'r') as f:
reader = csv.DictReader(f)
for row in reader:
point = Point(float(row['lon']), float(row['lat']))
output.write({
'properties': {
variable: row[variable]
},
'geometry': mapping(point)
})
#To read the shapefile generated
sf = shapefile.Reader("Data/SHP/download_motuclient_NoNAN.shp")
#To check the total number of records
len(sf)
#To show the first 10 records of the "analysed_sst" variable
sf.records()[0:10]
```
To verify that the shapefile was generated correctly, it is best to plot it (done by the code below). It is interesting to see all the nodes, i.e. the data records (shown as red dots), inside the area of interest. The distance between each node is 0.25 degrees, as the product documentation confirms.
```
plt.rcParams["figure.figsize"] = (9,6)
listx=[]
listy=[]
T =[]
for sr in sf.shapeRecords():
for xNew,yNew in sr.shape.points:
listx.append(xNew-360)
listy.append(yNew)
plt.xlabel("Longitude", fontsize=15, labelpad=20)
plt.ylabel("Latitude", fontsize=15, labelpad=20)
plt.plot(listx,listy,'ro', markersize=2)
plt.show()
```
## 5. Plot the Data
This section shows how to visualise the data at both global and local scale (the latter giving us a more detailed map). For this purpose we use the monthly-mean file downloaded previously and stored as "download_motuclient.nc".
### 5.1 Data visualization at global scale
In the following plot a red polygon is overlaid on the location where we expect to visualise the data.
```
plt.rcParams["figure.figsize"] = (18,12)
ax = plt.axes(projection=ccrs.PlateCarree())
plt.title('SST_GLO_SST_L4_NRT_OBSERVATIONS_010_001 - 2020-04-15')
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=1, color='gray', alpha=0.5, linestyle='--')
gl.xlabels_top = False
gl.ylabels_left = False
ax.stock_img()
ax.coastlines()
#Area where we expect to have the data
pgon = Polygon(((-15, 35),(-15,45),(0,45),(0,35)))
ax.add_geometries([pgon], crs=ccrs.PlateCarree(), facecolor = 'r', edgecolor='red', alpha=0.5)
plt.show()
```
Once we add the data to the plot above, we can see that the area previously covered by the red polygon now contains the "analysed_sst" data. This is just a preliminary plot to get a general data overview in a global map context.
```
ax = plt.axes(projection=ccrs.PlateCarree())
plt.title('SST_GLO_SST_L4_NRT_OBSERVATIONS_010_001 - 2020-04-15')
#Data added to the plot
plt.contourf(lons, lats, sst, 60, cmap='jet',
transform=ccrs.PlateCarree())
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=1, color='gray', alpha=0.5, linestyle='--')
gl.xlabels_top = False
gl.ylabels_left = False
ax.stock_img()
ax.coastlines()
plt.show()
```
### 5.2 Map creation at local scale
At this point we can create a detailed map (restricted to the data inside its bounding box) which will give us a clearer idea of the data distribution and lead us to a better data insight.
```
ax = plt.axes(projection=ccrs.PlateCarree())
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.BORDERS, linestyle=':')
ax.add_feature(cfeature.LAKES, alpha=0.5)
ax.add_feature(cfeature.RIVERS)
plt.title('SST_GLO_SST_L4_NRT_OBSERVATIONS_010_001 - 2020-04-15')
plt.contourf(lons, lats, sst, 60, cmap='jet',
transform=ccrs.PlateCarree())
cb = plt.colorbar(orientation="vertical", pad=0.1)
cb.set_label(label='Analysed_SST [Kelvin]', size='xx-large', weight='bold',labelpad=20)
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=1, color='gray', alpha=0.5, linestyle='--')
gl.xlabels_top = False
gl.ylabels_left = True
gl.ylabels_right = False
ax.text(-0.15, 0.5, 'Latitude', va='bottom', ha='center',
rotation='vertical', rotation_mode='anchor',
transform=ax.transAxes, fontsize=15)
ax.text(0.5, -0.15, 'Longitude', va='bottom', ha='center',
rotation='horizontal', rotation_mode='anchor',
transform=ax.transAxes, fontsize=15)
plt.show()
```
And if we convert to degrees Celsius:
```
ax = plt.axes(projection=ccrs.PlateCarree())
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.BORDERS, linestyle=':')
ax.add_feature(cfeature.LAKES, alpha=0.5)
ax.add_feature(cfeature.RIVERS)
plt.title('SST_GLO_SST_L4_NRT_OBSERVATIONS_010_001 - 2020-04-15')
sstC= sst-273.15
plt.contourf(lons, lats, sstC, 60, cmap='jet',
transform=ccrs.PlateCarree())
cb = plt.colorbar(orientation="vertical", pad=0.1)
cb.set_label(label='Analysed_SST [Celsius]', size='xx-large', weight='bold',labelpad=20)
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=1, color='gray', alpha=0.5, linestyle='--')
gl.xlabels_top = False
gl.ylabels_left = True
gl.ylabels_right = False
ax.text(-0.15, 0.5, 'Latitude', va='bottom', ha='center',
rotation='vertical', rotation_mode='anchor',
transform=ax.transAxes, fontsize=15)
ax.text(0.5, -0.15, 'Longitude', va='bottom', ha='center',
rotation='horizontal', rotation_mode='anchor',
transform=ax.transAxes, fontsize=15)
plt.show()
```
## 6. Recursive data plotting
This section is useful for recursively producing many plots of the same variable as a function of time. It uses the 30-day file downloaded previously and saved as "download_motuclient30days.nc". We also need to find the maximum and minimum values of the "analysed_sst" variable contained in the file, so as to set a fixed colour scale used for all the plots (this allows the plots to be compared over time, since the "analysed_sst" map variation is always displayed with the same colour scale range). A total of 30 plots will be produced, one for each time step (30 days).
```
plt.rcParams["figure.figsize"] = (9,6)
with xr.open_dataset('Data/download_motuclient30days.nc') as file:
minvar = file.analysed_sst.min()
maxvar = file.analysed_sst.max()
for t in range(file.time.shape[0]):
da = file.analysed_sst.isel(time=t)
num = date2num(file.time[t])
date = num2date(num)
lat = file.variables['lat'][:]
lon = file.variables['lon'][:]
title = file.id+" "+"-"+" "+str(date)
plt.title(title, fontsize=10)
plt.xlabel("Longitude", fontsize=15, labelpad=40)
plt.ylabel("Latitude", fontsize=15, labelpad=50)
m=Basemap(projection='mill',lat_ts=10,llcrnrlon=lon.min(), \
urcrnrlon=lon.max(),llcrnrlat=lat.min(),urcrnrlat=lat.max(), \
resolution='h')
m.drawcoastlines()
m.fillcontinents()
m.drawmapboundary()
m.drawparallels(np.arange(-80., 81., 2.5), labels=[1,0,0,0], fontsize=15)
m.drawmeridians(np.arange(-180., 181., 2.5), labels=[0,0,0,1], fontsize=15)
x, y = m(*np.meshgrid(lon,lat))
col = m.pcolormesh(x,y,da,shading='flat',cmap=plt.cm.jet, vmin=minvar, vmax=maxvar)
cbar = plt.colorbar(col)
cbar.ax.yaxis.set_ticks_position('right')
for I in cbar.ax.yaxis.get_ticklabels():
I.set_size(10)
cbar.set_label("Analysed_SST", size = 15, weight='bold',labelpad=20)
plt.savefig('Data/PNG/{}.png'.format(t), dpi=100)
plt.show()
```
### 6.1 Plots animation (Gif)
Below is the code that stacks all the plots obtained before into an animated GIF file:
```
gif_name = 'Analysed_SST'
fps = 2
file_list = glob.glob('Data/PNG/*.png') # Get all the pngs in the current directory
lsorted = natsorted(file_list)
#file_list.sort(key=os.path.getmtime) # Sort the images by generation time
clip = mpy.ImageSequenceClip(lsorted, fps=fps)
clip.write_gif('Data/GIF/{}.gif'.format(gif_name), fps=fps)
```
Finally, we can display the generated GIF file as follows:
```
from IPython.display import display, HTML
HTML('''<div style="display: flex; justify-content: row;">
<img src="Data/GIF/Analysed_SST.gif">
</div>''')
```
-----------------

```
import pandas as pd
df=pd.read_excel("data.xls")
df.head()
df.isnull().sum().sum()
import pandas as pd
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression, LogisticRegression, Ridge, RidgeClassifier
from sklearn.model_selection import train_test_split
class MiceImputer(object):
def __init__(self, seed_values = True, seed_strategy="mean", copy=True):
self.strategy = seed_strategy # seed_strategy in ['mean','median','most_frequent', 'constant']
self.seed_values = seed_values # seed_values = False initializes missing_values using not_null columns
self.copy = copy
self.imp = SimpleImputer(strategy=self.strategy, copy=self.copy)
def fit_transform(self, X, method = 'Linear', iter = 5, verbose = True):
# Why use Pandas?
# http://gouthamanbalaraman.com/blog/numpy-vs-pandas-comparison.html
# Pandas < Numpy if X.shape[0] < 50K
# Pandas > Numpy if X.shape[0] > 500K
# Data necessary for masking missing-values after imputation
null_cols = X.columns[X.isna().any()].tolist()
null_X = X.isna()[null_cols]
### Initialize missing_values
if self.seed_values:
# Impute all missing values using SimpleImputer
if verbose:
print('Initialization of missing-values using SimpleImputer')
new_X = pd.DataFrame(self.imp.fit_transform(X))
new_X.columns = X.columns
new_X.index = X.index
else:
# Initialize a copy based on value of self.copy
if self.copy:
new_X = X.copy()
else:
new_X = X
not_null_cols = X.columns[X.notna().any()].tolist()
if verbose:
print('Initialization of missing-values using regression on non-null columns')
for column in null_cols:
null_rows = null_X[column]
train_x = new_X.loc[~null_rows, not_null_cols]
test_x = new_X.loc[null_rows, not_null_cols]
train_y = new_X.loc[~null_rows, column]
if X[column].nunique() > 2:
m = LinearRegression(n_jobs = -1)
m.fit(train_x, train_y)
new_X.loc[null_rows,column] = m.predict(test_x) # assign positionally; a default-indexed Series would misalign with the null-row labels
not_null_cols.append(column)
elif X[column].nunique() == 2:
m = LogisticRegression(n_jobs = -1, solver = 'lbfgs')
m.fit(train_x, train_y)
new_X.loc[null_rows,column] = m.predict(test_x) # assign positionally; a default-indexed Series would misalign with the null-row labels
not_null_cols.append(column)
### Begin iterations of MICE
model_score = {}
for i in range(iter):
if verbose:
print('Beginning iteration ' + str(i) + ':')
model_score[i] = []
for column in null_cols:
null_rows = null_X[column]
not_null_y = new_X.loc[~null_rows, column]
not_null_X = new_X[~null_rows].drop(column, axis = 1)
train_x, val_x, train_y, val_y = train_test_split(not_null_X, not_null_y, test_size=0.33, random_state=42)
test_x = new_X.drop(column, axis = 1)
if new_X[column].nunique() > 2:
if method == 'Linear':
m = LinearRegression(n_jobs = -1)
elif method == 'Ridge':
m = Ridge()
m.fit(train_x, train_y)
model_score[i].append(m.score(val_x, val_y))
new_X.loc[null_rows,column] = m.predict(test_x[null_rows]) # predict only the missing rows and assign positionally
if verbose:
print('Model score for ' + str(column) + ': ' + str(m.score(val_x, val_y)))
elif new_X[column].nunique() == 2:
if method == 'Linear':
m = LogisticRegression(n_jobs = -1, solver = 'lbfgs')
elif method == 'Ridge':
m = RidgeClassifier()
m.fit(train_x, train_y)
model_score[i].append(m.score(val_x, val_y))
new_X.loc[null_rows,column] = m.predict(test_x[null_rows]) # predict only the missing rows and assign positionally
if verbose:
print('Model score for ' + str(column) + ': ' + str(m.score(val_x, val_y)))
if model_score[i] == []:
model_score[i] = 0
else:
model_score[i] = sum(model_score[i])/len(model_score[i])
return new_X
imp=MiceImputer()
df_new=imp.fit_transform(df)
df_new.isnull().sum().sum()
df_new.to_csv("inputed_data.csv")
```
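The core of each MICE pass above — fit a regression on the observed rows, then predict the missing ones — can be seen end-to-end on synthetic data; a minimal single-column sketch (the data here is fabricated for illustration):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(scale=0.1, size=100)
df = pd.DataFrame({"x": x, "y": y})
df.loc[::10, "y"] = np.nan  # knock out every 10th value of y

# Fit on the observed rows, impute the missing ones
null_rows = df["y"].isna()
m = LinearRegression()
m.fit(df.loc[~null_rows, ["x"]], df.loc[~null_rows, "y"])
df.loc[null_rows, "y"] = m.predict(df.loc[null_rows, ["x"]])
print(df["y"].isna().sum())  # 0
```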
<table>
<tr align=left><td><img align=left src="https://i.creativecommons.org/l/by/4.0/88x31.png">
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli</td>
</table>
```
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
```
# Review: Finite Differences
Finite differences are expressions that approximate derivatives of a function evaluated at a set of points, often called a *stencil*. These expressions can come in many different flavors including types of stencils, order of accuracy, and order of derivatives. In this lecture we will review the process of derivation, error analysis and application of finite differences.
## Derivation of Finite Differences
The general approach to deriving finite differences should be familiar for at least the first order differences. Consider three different ways to define a derivative at a point $x_i$
$$
u'(x_i) = \lim_{\Delta x \rightarrow 0} \left \{ \begin{aligned}
&\frac{u(x_i + \Delta x) - u(x_i)}{\Delta x} & \equiv D_+ u(x_i)\\
&\frac{u(x_i + \Delta x) - u(x_i - \Delta_x)}{2 \Delta x} & \equiv D_0 u(x_i)\\
&\frac{u(x_i) - u(x_i - \Delta_x)}{\Delta x} & \equiv D_- u(x_i).
\end{aligned} \right .
$$

If instead of allowing $\Delta x \rightarrow 0$ we come up with an approximation to the slope $u'(x_i)$ and hence our definitions of derivatives can directly be seen as approximations to derivatives when $\Delta x$ is perhaps small but non-zero.
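A quick numeric check of the three operators on $u(x) = \sin x$ at $x_i = 1$, where $u'(1) = \cos 1$:

```python
import numpy

u = numpy.sin
x_i, delta_x = 1.0, 0.1
exact = numpy.cos(x_i)

# The three difference operators defined above
D_plus = (u(x_i + delta_x) - u(x_i)) / delta_x
D_zero = (u(x_i + delta_x) - u(x_i - delta_x)) / (2.0 * delta_x)
D_minus = (u(x_i) - u(x_i - delta_x)) / delta_x

for name, approx in [("D+", D_plus), ("D0", D_zero), ("D-", D_minus)]:
    print(name, abs(approx - exact))
```

The one-sided differences are first-order accurate while the centered difference is second-order, so $D_0$ gives a noticeably smaller error at the same $\Delta x$.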
For the rest of the review we will delve into a more systematic way to derive these approximations as well as find higher order accurate approximations, higher order derivative approximations, and understand the error associated with the approximations.
### Interpolating Polynomials
One way to derive finite difference approximations is by finding an interpolating polynomial through the given stencil and differentiating that directly. Given $N+1$ points $(x_0,u(x_0)), (x_1,u(x_1)), \ldots, (x_{N},u(x_{N}))$ assuming the $x_i$ are all unique, the interpolating polynomial $P_N(x)$ can be written as
$$
P_N(x) = \sum^{N}_{i=0} u(x_i) \ell_i(x)
$$
where
$$
\ell_i(x) = \prod^{N}_{j=0, j \neq i} \frac{x - x_j}{x_i - x_j} = \frac{x - x_0}{x_i - x_0} \frac{x - x_1}{x_i - x_1} \cdots \frac{x - x_{i-1}}{x_i - x_{i-1}}\frac{x - x_{i+1}}{x_i - x_{i+1}} \cdots \frac{x - x_{N}}{x_i - x_{N}}
$$
Note that $\ell_i(x_i) = 1$ and $\forall j\neq i, ~~ \ell_i(x_j) = 0$.
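These basis properties are easy to verify numerically; a small sketch with an arbitrary 3-point stencil:

```python
import numpy

def lagrange_basis(x_points, i, x):
    """Evaluate the Lagrange basis polynomial ell_i(x) for the given stencil."""
    ell = 1.0
    for j, x_j in enumerate(x_points):
        if j != i:
            ell *= (x - x_j) / (x_points[i] - x_j)
    return ell

x_points = [0.0, 0.5, 1.0]
# ell_i(x_i) = 1 and ell_i(x_j) = 0 for j != i
for i in range(len(x_points)):
    print([round(lagrange_basis(x_points, i, x_j), 12) for x_j in x_points])
```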
Since we know how to differentiate a polynomial we should be able to then compute the given finite difference approximation given these data points.
#### Example: 2-Point Stencil
Say we have two points to form the approximation to the derivative with. The interpolating polynomial through two points is a linear function with the form
$$
P_1(x) = u(x_0) \frac{x - x_1}{x_0 - x_1} + u(x_1) \frac{x - x_0}{x_1 - x_0}.
$$
Differentiating $P_1(x)$ leads to
$$
P'_1(x) = u(x_0) \frac{1}{x_0 - x_1} + u(x_1) \frac{1}{x_1 - x_0}.
$$
If we allow the spacing between $x_0$ and $x_1$ to be $\Delta x = x_1 - x_0$ we can then write this as
$$
P'_1(x) = \frac{u(x_1) - u(x_0)}{\Delta x}
$$
which is the general form of $D_-u(x)$ and $D_+u(x)$ above.
If we extend this to have three points we have the interpolating polynomial
$$
P_2(x) = u(x_0) \frac{x - x_1}{x_0 - x_1} \frac{x - x_2}{x_0 - x_2} + u(x_1) \frac{x - x_0}{x_1 - x_0} \frac{x - x_2}{x_1 - x_2} + u(x_2) \frac{x - x_0}{x_2 - x_0} \frac{x - x_1}{x_2 - x_1}.
$$
Differentiating this leads to
$$\begin{aligned}
P'_2(x) &= u(x_0) \left( \frac{1}{x_0 - x_1} \frac{x - x_2}{x_0 - x_2} + \frac{x - x_1}{x_0 - x_1} \frac{1}{x_0 - x_2}\right )+ u(x_1) \left ( \frac{1}{x_1 - x_0} \frac{x - x_2}{x_1 - x_2} + \frac{x - x_0}{x_1 - x_0} \frac{1}{x_1 - x_2} \right )+ u(x_2)\left ( \frac{1}{x_2 - x_0} \frac{x - x_1}{x_2 - x_1} + \frac{x - x_0}{x_2 - x_0} \frac{1}{x_2 - x_1} \right ) \\
&= u(x_0) \left(\frac{x - x_2}{2 \Delta x^2} + \frac{x - x_1}{2 \Delta x^2} \right )+ u(x_1) \left ( \frac{x - x_2}{-\Delta x^2} + \frac{x - x_0}{-\Delta x^2} \right )+ u(x_2)\left ( \frac{x - x_1}{2\Delta x^2} + \frac{x - x_0}{2 \Delta x^2} \right ) \\
&=\frac{u(x_0)}{2\Delta x^2} (2x - x_2 - x_1)+ \frac{u(x_1)}{-\Delta x^2} ( 2x - x_2 - x_0)+ \frac{u(x_2)}{2\Delta x^2}( 2x - x_1 - x_0).
\end{aligned}$$
If we now evaluate the derivative at $x_1$, assuming this is the central point, we have
$$\begin{aligned}
P'_2(x_1) &= \frac{u(x_0)}{2\Delta x^2} (x_1 - x_2)+ \frac{u(x_1)}{-\Delta x^2} ( x_1 - x_2 + x_1 - x_0)+ \frac{u(x_2)}{2\Delta x^2}( x_1 - x_0) \\
&= \frac{u(x_0)}{2\Delta x^2} (-\Delta x)+ \frac{u(x_1)}{-\Delta x^2} ( -\Delta x + \Delta x)+ \frac{u(x_2)}{2\Delta x^2}( \Delta x) \\
&= \frac{u(x_2) - u(x_0)}{2 \Delta x}
\end{aligned}$$
giving us the centered approximation $D_0 u(x_1)$ from above.
### Taylor-Series Methods
Another way to derive finite difference approximations can be computed by using the Taylor series and the method of undetermined coefficients.
$$u(x) = u(x_n) + (x - x_n) u'(x_n) + \frac{(x - x_n)^2}{2!} u''(x_n) + \frac{(x - x_n)^3}{3!} u'''(x_n) + \mathcal{O}((x - x_n)^4)$$
Say we want to derive the second order accurate, first derivative approximation that we just found. This requires the values $(x_{n+1}, u(x_{n+1}))$ and $(x_{n-1}, u(x_{n-1}))$. We can express these values via our Taylor series approximation above as
$$\begin{aligned}
u(x_{n+1}) &= u(x_n) + (x_{n+1} - x_n) u'(x_n) + \frac{(x_{n+1} - x_n)^2}{2!} u''(x_n) + \frac{(x_{n+1} - x_n)^3}{3!} u'''(x_n) + \mathcal{O}((x_{n+1} - x_n)^4) \\
&= u(x_n) + \Delta x u'(x_n) + \frac{\Delta x^2}{2!} u''(x_n) + \frac{\Delta x^3}{3!} u'''(x_n) + \mathcal{O}(\Delta x^4)
\end{aligned}$$
and
$$\begin{aligned}
u(x_{n-1}) &= u(x_n) + (x_{n-1} - x_n) u'(x_n) + \frac{(x_{n-1} - x_n)^2}{2!} u''(x_n) + \frac{(x_{n-1} - x_n)^3}{3!} u'''(x_n) + \mathcal{O}((x_{n-1} - x_n)^4) \\
&= u(x_n) - \Delta x u'(x_n) + \frac{\Delta x^2}{2!} u''(x_n) - \frac{\Delta x^3}{3!} u'''(x_n) + \mathcal{O}(\Delta x^4)
\end{aligned}$$
Now to find out how to combine these into an expression for the derivative we assume our approximation looks like
$$u'(x_n) + R(x_n) = A u(x_{n+1}) + B u(x_n) + C u(x_{n-1})$$
where $R(x_n)$ is our error.
Plugging in the Taylor series approximations we find
$$u'(x_n) + R(x_n) = A \left ( u(x_n) + \Delta x u'(x_n) + \frac{\Delta x^2}{2!} u''(x_n) + \frac{\Delta x^3}{3!} u'''(x_n) + \mathcal{O}(\Delta x^4)\right ) + B u(x_n) + C \left ( u(x_n) - \Delta x u'(x_n) + \frac{\Delta x^2}{2!} u''(x_n) - \frac{\Delta x^3}{3!} u'''(x_n) + \mathcal{O}(\Delta x^4) \right )$$
Since we want $R(x_n) = \mathcal{O}(\Delta x^2)$, all lower-order terms must vanish, except those multiplying $u'(x_n)$, which should sum to 1 to give us our approximation. Collecting terms with a common power of $\Delta x$ yields a system of equations for the coefficients $A$, $B$, and $C$. For example, the $\Delta x^0$ terms collect to $A + B + C$, which is set to 0 so that the $u(x_n)$ term vanishes:
$$\Delta x^0: ~~~~ A + B + C = 0$$
$$\Delta x^1: ~~~~ A \Delta x - C \Delta x = 1 $$
$$\Delta x^2: ~~~~ A \frac{\Delta x^2}{2} + C \frac{\Delta x^2}{2} = 0 $$
This last equation $\Rightarrow A = -C$, using this in the second equation gives $A = \frac{1}{2 \Delta x}$ and $C = -\frac{1}{2 \Delta x}$. The first equation then leads to $B = 0$. Putting this altogether then gives us our previous expression including an estimate for the error:
$$u'(x_n) + R(x_n) = \frac{u(x_{n+1}) - u(x_{n-1})}{2 \Delta x}, \qquad R(x_n) = \frac{1}{2 \Delta x} \frac{\Delta x^3}{3!} u'''(x_n) + \frac{1}{2 \Delta x} \frac{\Delta x^3}{3!} u'''(x_n) + \mathcal{O}(\Delta x^4)$$
$$R(x_n) = \frac{\Delta x^2}{3!} u'''(x_n) + \mathcal{O}(\Delta x^3) = \mathcal{O}(\Delta x^2)$$
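The predicted $\mathcal{O}(\Delta x^2)$ behaviour can be confirmed empirically by halving $\Delta x$ and watching the error drop by a factor of roughly four:

```python
import numpy

u = numpy.sin
u_prime = numpy.cos
x = 1.0

# Error of the centered difference at successively halved step sizes
errors = []
for delta_x in [0.1, 0.05, 0.025]:
    D_zero = (u(x + delta_x) - u(x - delta_x)) / (2.0 * delta_x)
    errors.append(abs(D_zero - u_prime(x)))

# Observed order p from successive halvings: error ~ C * delta_x^p
orders = [numpy.log2(errors[k] / errors[k + 1]) for k in range(len(errors) - 1)]
print(orders)  # both entries close to 2
```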
### Example: First Order Derivatives
```
f = lambda x: numpy.sin(x)
f_prime = lambda x: numpy.cos(x)
# Use uniform discretization
x = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, 1000)
N = 20
x_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N)
delta_x = x_hat[1] - x_hat[0]
# Compute forward difference using a loop
f_prime_hat = numpy.empty(x_hat.shape)
for i in range(N - 1):
f_prime_hat[i] = (f(x_hat[i+1]) - f(x_hat[i])) / delta_x
# Vector based calculation
# f_prime_hat[:-1] = (f(x_hat[1:]) - f(x_hat[:-1])) / (delta_x)
# Use first-order differences for points at edge of domain
f_prime_hat[-1] = (f(x_hat[-1]) - f(x_hat[-2])) / delta_x # Backward Difference at x_N
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, f_prime(x), 'k')
axes.plot(x_hat + 0.5 * delta_x, f_prime_hat, 'ro')
axes.set_xlim((x[0], x[-1]))
axes.set_ylim((-1.1, 1.1))
plt.show()
```
### Example: Second Order Derivative
Using our Taylor series approach, let's derive the second order accurate second derivative formula. Again we will use the same points and the Taylor series centered at $x = x_n$ so we end up with the same expression as before:
$$u''(x_n) + R(x_n) = A \left ( u(x_n) + \Delta x u'(x_n) + \frac{\Delta x^2}{2!} u''(x_n) + \frac{\Delta x^3}{3!} u'''(x_n) + \frac{\Delta x^4}{4!} u^{(4)}(x_n) + \mathcal{O}(\Delta x^5)\right ) + B u(x_n) + C \left ( u(x_n) - \Delta x u'(x_n) + \frac{\Delta x^2}{2!} u''(x_n) - \frac{\Delta x^3}{3!} u'''(x_n) + \frac{\Delta x^4}{4!} u^{(4)}(x_n) + \mathcal{O}(\Delta x^5) \right )$$
except this time we want to leave $u''(x_n)$ on the right hand side. Doing the same trick as before we have the following expressions:
$$\Delta x^0: ~~~~ A + B + C = 0$$
$$\Delta x^1: ~~~~ A \Delta x - C \Delta x = 0$$
$$\Delta x^2: ~~~~ A \frac{\Delta x^2}{2} + C \frac{\Delta x^2}{2} = 1$$
The second equation implies $A = C$ which combined with the third implies
$$A = C = \frac{1}{\Delta x^2}$$
Finally the first equation gives
$$B = -\frac{2}{\Delta x^2}$$
leading to the final expression
$$u''(x_n) + R(x_n) = \frac{u(x_{n+1}) - 2 u(x_n) + u(x_{n-1})}{\Delta x^2} + \frac{1}{\Delta x^2} \left(\frac{\Delta x^3}{3!} u'''(x_n) + \frac{\Delta x^4}{4!} u^{(4)}(x_n) - \frac{\Delta x^3}{3!} u'''(x_n) + \frac{\Delta x^4}{4!} u^{(4)}(x_n) + \mathcal{O}(\Delta x^5) \right)$$
with
$$R(x_n) = \frac{\Delta x^2}{12} u^{(4)}(x_n) + \mathcal{O}(\Delta x^3)$$
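The same numerical check works for the second-derivative formula (again a standalone sketch): the scaled residual should approach $u^{(4)}(x_n) / 12$:

```python
import numpy

u = numpy.sin                      # u''(x) = -sin(x), u''''(x) = sin(x)
x_n = 0.5
ratios = []
for dx in (0.1, 0.05, 0.025):
    d2 = (u(x_n + dx) - 2.0 * u(x_n) + u(x_n - dx)) / dx**2
    # (D^2 u - u'') / dx^2 should tend to u''''(x_n) / 12
    ratios.append((d2 - (-numpy.sin(x_n))) / dx**2)
print(ratios)
```

The printed values settle near $\sin(0.5)/12 \approx 0.040$, matching $R(x_n)$ above.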
```
f = lambda x: numpy.sin(x)
f_dubl_prime = lambda x: -numpy.sin(x)
# Use uniform discretization
x = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, 1000)
N = 10
x_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N)
delta_x = x_hat[1] - x_hat[0]
# Compute derivative
f_dubl_prime_hat = numpy.empty(x_hat.shape)
f_dubl_prime_hat[1:-1] = (f(x_hat[2:]) -2.0 * f(x_hat[1:-1]) + f(x_hat[:-2])) / (delta_x**2)
# Use second-order one-sided differences for points at edge of domain
f_dubl_prime_hat[0] = (2.0 * f(x_hat[0]) - 5.0 * f(x_hat[1]) + 4.0 * f(x_hat[2]) - f(x_hat[3])) / delta_x**2
f_dubl_prime_hat[-1] = (2.0 * f(x_hat[-1]) - 5.0 * f(x_hat[-2]) + 4.0 * f(x_hat[-3]) - f(x_hat[-4])) / delta_x**2
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, f_dubl_prime(x), 'k')
axes.plot(x_hat, f_dubl_prime_hat, 'ro')
axes.set_xlim((x[0], x[-1]))
axes.set_ylim((-1.1, 1.1))
plt.show()
```
### General Derivation
For a general finite difference approximation of the $k$th derivative located at $\bar{x}$, using an arbitrary stencil of $N \geq k + 1$ points $x_1, \ldots, x_N$, we can generalize the method above. Note that although it is common for $\bar{x}$ to be one of the stencil points, this is not necessary. We also assume that $u(x)$ is sufficiently smooth so that our Taylor series are valid.
At each stencil point we have the approximation
$$
u(x_i) = u(\bar{x}) + (x_i - \bar{x})u'(\bar{x}) + \cdots + \frac{1}{k!}(x_i - \bar{x})^k u^{(k)}(\bar{x}) + \cdots.
$$
Following our methodology above we want to find the linear combination of these Taylor series expansions such that
$$
u^{(k)}(\bar{x}) + \mathcal{O}(\Delta x^p) = a_1 u(x_1) + a_2 u(x_2) + a_3 u(x_3) + \cdots + a_N u(x_N).
$$
Note that $\Delta x$ can vary in general and the asymptotic behavior of the method will be characterized by some sort of average distance or sometimes the maximum distance between the stencil points.
Generalizing the approach above with the method of undetermined coefficients we want to eliminate the pieces of the above approximation that are in front of the derivatives less than order $k$. The condition for this is
$$
\frac{1}{(i - 1)!} \sum^N_{j=1} a_j (x_j - \bar{x})^{i-1} = \left \{ \begin{aligned}
1 & & \text{if}~i - 1 = k, \\
0 & & \text{otherwise}
\end{aligned} \right .
$$
for $i=1, \ldots, N$. Assuming the $x_j$ are distinct, we can write this system of equations as a Vandermonde system, which will have a unique solution.
```
import scipy.special
def finite_difference(k, x_bar, x):
"""Compute the finite difference stencil for the kth derivative"""
N = x.shape[0]
A = numpy.ones((N, N))
x_row = x - x_bar
for i in range(1, N):
A[i, :] = x_row ** i / scipy.special.factorial(i)
b = numpy.zeros(N)
b[k] = 1.0
c = numpy.linalg.solve(A, b)
return c
print(finite_difference(2, 0.0, numpy.asarray([-1.0, 0.0, 1.0])))
print(finite_difference(1, 0.0, numpy.asarray([-1.0, 0.0, 1.0])))
print(finite_difference(1, -2.0, numpy.asarray([-2.0, -1.0, 0.0, 1.0, 2.0])))
```
## Error Analysis
### Polynomial View
Given $N + 1$ points we can form an interpolant $P_N(x)$ of degree $N$ where
$$u(x) = P_N(x) + R_N(x)$$
We know from Lagrange's Theorem that the remainder term looks like
$$R_N(x) = (x - x_0)(x - x_1)\cdots (x - x_{N}) \frac{u^{(N+1)}(c)}{(N+1)!}$$
noting that we need to require that $u(x) \in C^{N+1}$ on the interval of interest. Taking the derivative of the interpolant $P_N(x)$ (in terms of Newton polynomials) then leads to
$$P_N'(x) = [u(x_0), u(x_1)] + ((x - x_1) + (x - x_0)) [u(x_0), u(x_1), u(x_2)] + \cdots + \left(\sum^{N-1}_{i=0}\left( \prod^{N-1}_{j=0,~j\neq i} (x - x_j) \right )\right ) [u(x_0), u(x_1), \ldots, u(x_N)]$$
Similarly we can find the derivative of the remainder term $R_N(x)$ as
$$R_N'(x) = \left(\sum^{N}_{i=0} \left( \prod^{N}_{j=0,~j\neq i} (x - x_j) \right )\right ) \frac{u^{(N+1)}(c)}{(N+1)!}$$
Now if we consider the approximation of the derivative evaluated at one of our data points $(x_k, y_k)$ these expressions simplify such that
$$u'(x_k) = P_N'(x_k) + R_N'(x_k)$$
If we let $\Delta x = \max_i |x_k - x_i|$ we then know that the remainder term will be $\mathcal{O}(\Delta x^N)$ as $\Delta x \rightarrow 0$ thus showing that this approach converges and we can find arbitrarily high order approximations.
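To illustrate this claim numerically, here is a self-contained sketch (re-deriving the stencil weights via the Vandermonde system from the previous section) showing that a 5-point centered stencil for $u'$ converges at fourth order:

```python
import numpy
import scipy.special

def fd_weights(k, x_bar, x):
    # Solve the Vandermonde system for the stencil weights a_j
    N = x.shape[0]
    A = numpy.ones((N, N))
    for i in range(1, N):
        A[i, :] = (x - x_bar)**i / scipy.special.factorial(i)
    b = numpy.zeros(N)
    b[k] = 1.0
    return numpy.linalg.solve(A, b)

u, u_prime = numpy.sin, numpy.cos
x_bar = 0.7
errors = []
for dx in (0.1, 0.05, 0.025):
    x = x_bar + dx * numpy.array([-2.0, -1.0, 0.0, 1.0, 2.0])
    a = fd_weights(1, x_bar, x)
    errors.append(abs(a @ u(x) - u_prime(x_bar)))
orders = numpy.log2(numpy.array(errors[:-1]) / numpy.array(errors[1:]))
print(orders)   # observed order of accuracy; should be close to 4
```

Each halving of $\Delta x$ reduces the error by a factor of roughly $2^4 = 16$, confirming that wider stencils yield higher-order approximations.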
### Truncation Error
If we are using a Taylor series approach we can also look at the dominant term left over in the Taylor series to find the *truncation error*.
As an example, let's again consider the first derivative approximations above; we need the Taylor expansions
$$
u(\bar{x} + \Delta x) = u(\bar{x}) + \Delta x u'(\bar{x}) + \frac{1}{2} \Delta x^2 u''(\bar{x}) + \frac{1}{3!} \Delta x^3 u'''(\bar{x}) + \mathcal{O}(\Delta x^4)
$$
and
$$
u(\bar{x} - \Delta x) = u(\bar{x}) - \Delta x u'(\bar{x}) + \frac{1}{2} \Delta x^2 u''(\bar{x}) - \frac{1}{3!} \Delta x^3 u'''(\bar{x}) + \mathcal{O}(\Delta x^4).
$$
Plugging these into our expressions we have
$$\begin{aligned}
D_+ u(\bar{x}) &= \frac{u(\bar{x} + \Delta x) - u(\bar{x})}{\Delta x} \\
&= \frac{\Delta x u'(\bar{x}) + \frac{1}{2} \Delta x^2 u''(\bar{x}) + \frac{1}{3!} \Delta x^3 u'''(\bar{x}) + \mathcal{O}(\Delta x^4)}{\Delta x} \\
&= u'(\bar{x}) + \frac{1}{2} \Delta x u''(\bar{x}) + \frac{1}{3!} \Delta x^2 u'''(\bar{x}) + \mathcal{O}(\Delta x^3).
\end{aligned}$$
If we now difference $D_+ u(\bar{x}) - u'(\bar{x})$ we get the truncation error
$$
\frac{1}{2} \Delta x u''(\bar{x}) + \frac{1}{3!} \Delta x^2 u'''(\bar{x}) + \mathcal{O}(\Delta x^3)
$$
so the error for $D_+$ goes as $\mathcal{O}(\Delta x)$ and is controlled by $u''(\bar{x})$. Note that the error depends only on $\Delta x$, since the derivatives evaluated at $\bar{x}$ are constants.
Similarly for the centered approximation we have
$$
D_0 u(\bar{x}) - u'(\bar{x}) = \frac{1}{6} \Delta x^2 u'''(\bar{x}) + \mathcal{O}(\Delta x^4).
$$
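A quick numerical check of these two truncation-error rates (a standalone sketch): halving $\Delta x$ should roughly halve the $D_+$ error and quarter the $D_0$ error:

```python
import numpy

u, u_prime = numpy.sin, numpy.cos
x_bar = 1.0
errs_plus, errs_0 = [], []
for dx in (0.1, 0.05):
    # Forward difference D_+: first-order error
    errs_plus.append(abs((u(x_bar + dx) - u(x_bar)) / dx - u_prime(x_bar)))
    # Centered difference D_0: second-order error
    errs_0.append(abs((u(x_bar + dx) - u(x_bar - dx)) / (2.0 * dx) - u_prime(x_bar)))
print(errs_plus[0] / errs_plus[1])   # ~2 (first order)
print(errs_0[0] / errs_0[1])         # ~4 (second order)
```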
### Computing Order of Accuracy Graphically
Model the error as
$$\begin{aligned}
e(\Delta x) &= C \Delta x^n \\
\log e(\Delta x) &= \log C + n \log \Delta x
\end{aligned}$$
The slope of the resulting line on a log-log plot is $n$. We can also match the first point by solving for $C$:
$$C = e^{\log e(\Delta x) - n \log \Delta x}$$
```
f = lambda x: numpy.sin(x) + x**2 + 3.0 * x**3
f_prime = lambda x: numpy.cos(x) + 2.0 * x + 9.0 * x**2
# Compute the error as a function of delta_x
delta_x = []
error = []
# for N in range(2, 101):
for N in range(50, 1000, 50):
x_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N)
delta_x.append(x_hat[1] - x_hat[0])
# Compute forward difference
f_prime_hat = numpy.empty(x_hat.shape)
f_prime_hat[:-1] = (f(x_hat[1:]) - f(x_hat[:-1])) / (delta_x[-1])
# Use first-order differences for points at edge of domain
f_prime_hat[-1] = (f(x_hat[-1]) - f(x_hat[-2])) / delta_x[-1] # Backward Difference at x_N
error.append(numpy.linalg.norm(numpy.abs(f_prime(x_hat + 0.5 * delta_x[-1]) - f_prime_hat), ord=numpy.infty))
error = numpy.array(error)
delta_x = numpy.array(delta_x)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.loglog(delta_x, error, 'ko', label="Approx. Derivative")
order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
axes.loglog(delta_x, order_C(delta_x[0], error[0], 1.0) * delta_x**1.0, 'r--', label="1st Order")
axes.loglog(delta_x, order_C(delta_x[0], error[0], 2.0) * delta_x**2.0, 'b--', label="2nd Order")
axes.legend(loc=4)
axes.set_title("Convergence of 1st Order Differences")
axes.set_xlabel("$\Delta x$")
axes.set_ylabel("$|f'(x) - \hat{f}'(x)|$")
plt.show()
f = lambda x: numpy.sin(x) + x**2 + 3.0 * x**3
f_prime = lambda x: numpy.cos(x) + 2.0 * x + 9.0 * x**2
# Compute the error as a function of delta_x
delta_x = []
error = []
# for N in range(2, 101):
for N in range(50, 1000, 50):
x_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N + 1)
delta_x.append(x_hat[1] - x_hat[0])
# Compute derivative
f_prime_hat = numpy.empty(x_hat.shape)
f_prime_hat[1:-1] = (f(x_hat[2:]) - f(x_hat[:-2])) / (2 * delta_x[-1])
# Use first-order differences for points at edge of domain
# f_prime_hat[0] = (f(x_hat[1]) - f(x_hat[0])) / delta_x[-1]
# f_prime_hat[-1] = (f(x_hat[-1]) - f(x_hat[-2])) / delta_x[-1]
# Use second-order differences for points at edge of domain
    f_prime_hat[0] = (-3.0 * f(x_hat[0]) + 4.0 * f(x_hat[1]) - f(x_hat[2])) / (2.0 * delta_x[-1])
    f_prime_hat[-1] = (3.0 * f(x_hat[-1]) - 4.0 * f(x_hat[-2]) + f(x_hat[-3])) / (2.0 * delta_x[-1])
error.append(numpy.linalg.norm(numpy.abs(f_prime(x_hat) - f_prime_hat), ord=numpy.infty))
error = numpy.array(error)
delta_x = numpy.array(delta_x)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.loglog(delta_x, error, "ro", label="Approx. Derivative")
order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
axes.loglog(delta_x, order_C(delta_x[0], error[0], 1.0) * delta_x**1.0, 'b--', label="1st Order")
axes.loglog(delta_x, order_C(delta_x[0], error[0], 2.0) * delta_x**2.0, 'r--', label="2nd Order")
axes.legend(loc=4)
axes.set_title("Convergence of 2nd Order Differences")
axes.set_xlabel("$\Delta x$")
axes.set_ylabel("$|f'(x) - \hat{f}'(x)|$")
plt.show()
f = lambda x: numpy.sin(x) + x**2 + 3.0 * x**3
f_dubl_prime = lambda x: -numpy.sin(x) + 2.0 + 18.0 * x
# Compute the error as a function of delta_x
delta_x = []
error = []
# for N in range(2, 101):
for N in range(50, 1000, 50):
x_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N)
delta_x.append(x_hat[1] - x_hat[0])
# Compute derivative
f_dubl_prime_hat = numpy.empty(x_hat.shape)
f_dubl_prime_hat[1:-1] = (f(x_hat[2:]) -2.0 * f(x_hat[1:-1]) + f(x_hat[:-2])) / (delta_x[-1]**2)
# Use second-order differences for points at edge of domain
f_dubl_prime_hat[0] = (2.0 * f(x_hat[0]) - 5.0 * f(x_hat[1]) + 4.0 * f(x_hat[2]) - f(x_hat[3])) / delta_x[-1]**2
f_dubl_prime_hat[-1] = (2.0 * f(x_hat[-1]) - 5.0 * f(x_hat[-2]) + 4.0 * f(x_hat[-3]) - f(x_hat[-4])) / delta_x[-1]**2
error.append(numpy.linalg.norm(numpy.abs(f_dubl_prime(x_hat) - f_dubl_prime_hat), ord=numpy.infty))
error = numpy.array(error)
delta_x = numpy.array(delta_x)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
# axes.plot(delta_x, error)
axes.loglog(delta_x, error, "ko", label="Approx. Derivative")
order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
axes.loglog(delta_x, order_C(delta_x[2], error[2], 1.0) * delta_x**1.0, 'b--', label="1st Order")
axes.loglog(delta_x, order_C(delta_x[2], error[2], 2.0) * delta_x**2.0, 'r--', label="2nd Order")
axes.legend(loc=4)
plt.show()
```
# TRAIN HANGUL-RNN
```
# -*- coding: utf-8 -*-
# Import Packages
import numpy as np
import tensorflow as tf
import collections
import argparse
import time
import os
from six.moves import cPickle
from TextLoader import *
from Hangulpy import *
print ("Packages Imported")
```
# LOAD DATASET WITH TEXTLOADER
```
data_dir = "data/nine_dreams"
batch_size = 50
seq_length = 50
data_loader = TextLoader(data_dir, batch_size, seq_length)
# This makes "vocab.pkl" and "data.npy" in "data/nine_dreams"
# from "data/nine_dreams/input.txt"
```
# VOCAB AND CHARS
```
vocab_size = data_loader.vocab_size
vocab = data_loader.vocab
chars = data_loader.chars
print ( "type of 'data_loader.vocab' is %s, length is %d"
% (type(data_loader.vocab), len(data_loader.vocab)) )
print ( "type of 'data_loader.chars' is %s, length is %d"
% (type(data_loader.chars), len(data_loader.chars)) )
```
# VOCAB: DICTIONARY (CHAR->INDEX)
```
print (data_loader.vocab)
```
# CHARS: LIST (INDEX->CHAR)
```
print (data_loader.chars)
# USAGE
print (data_loader.chars[0])
```
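The two containers are inverses of each other: `vocab[chars[i]] == i` for every index. As a minimal standalone sketch of this round trip (using a toy string, not the actual `TextLoader` implementation):

```python
# Toy illustration of the vocab (char -> index) / chars (index -> char) pair
text = "hello hangul"
chars = tuple(sorted(set(text)))                  # index -> char
vocab = {c: i for i, c in enumerate(chars)}       # char -> index
encoded = [vocab[c] for c in text]
decoded = "".join(chars[i] for i in encoded)
print(decoded == text)   # the mappings round-trip losslessly
```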
# TRAINING BATCH (IMPORTANT!!)
```
x, y = data_loader.next_batch()
print ("Type of 'x' is %s. Shape is %s" % (type(x), x.shape,))
print ("x looks like \n%s" % (x))
print()
print ("Type of 'y' is %s. Shape is %s" % (type(y), y.shape,))
print ("y looks like \n%s" % (y))
```
# DEFINE A MULTILAYER LSTM NETWORK
```
rnn_size = 512
num_layers = 3
grad_clip = 5. # <= GRADIENT CLIPPING (PRACTICALLY IMPORTANT)
vocab_size = data_loader.vocab_size
# SELECT RNN CELL (MULTI LAYER LSTM)
unitcell = tf.nn.rnn_cell.BasicLSTMCell(rnn_size)
cell = tf.nn.rnn_cell.MultiRNNCell([unitcell] * num_layers)
# Define placeholders for the graph inputs
input_data = tf.placeholder(tf.int32, [batch_size, seq_length])
targets = tf.placeholder(tf.int32, [batch_size, seq_length])
initial_state = cell.zero_state(batch_size, tf.float32)
# Set Network
with tf.variable_scope('rnnlm'):
softmax_w = tf.get_variable("softmax_w", [rnn_size, vocab_size])
softmax_b = tf.get_variable("softmax_b", [vocab_size])
with tf.device("/cpu:0"):
embedding = tf.get_variable("embedding", [vocab_size, rnn_size])
inputs = tf.split(1, seq_length, tf.nn.embedding_lookup(
embedding, input_data))
inputs = [tf.squeeze(input_, [1]) for input_ in inputs]
print ("Network ready")
```
# Define functions
```
# Output of RNN
outputs, last_state = tf.nn.seq2seq.rnn_decoder(inputs, initial_state
, cell, loop_function=None, scope='rnnlm')
output = tf.reshape(tf.concat(1, outputs), [-1, rnn_size])
logits = tf.nn.xw_plus_b(output, softmax_w, softmax_b)
# Next word probability
probs = tf.nn.softmax(logits)
print ("FUNCTIONS READY")
```
# DEFINE LOSS FUNCTION
```
loss = tf.nn.seq2seq.sequence_loss_by_example([logits], # Input
[tf.reshape(targets, [-1])], # Target
[tf.ones([batch_size * seq_length])], # Weight
vocab_size)
print ("LOSS FUNCTION")
```
# DEFINE COST FUNCTION
```
cost = tf.reduce_sum(loss) / batch_size / seq_length
# GRADIENT CLIPPING !
lr = tf.Variable(0.0, trainable=False) # <= LEARNING RATE
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
_optm = tf.train.AdamOptimizer(lr)
optm = _optm.apply_gradients(zip(grads, tvars))
final_state = last_state
print ("NETWORK READY")
```
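What `tf.clip_by_global_norm` does can be sketched in plain NumPy (an illustration, not TensorFlow's implementation): every gradient is rescaled by a single common factor whenever their joint L2 norm exceeds the threshold, so directions are preserved while the overall step size is bounded:

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # Joint L2 norm over all gradient tensors in the list
    global_norm = np.sqrt(sum(np.sum(g**2) for g in grads))
    # Rescale only when the joint norm exceeds clip_norm
    scale = clip_norm / max(global_norm, clip_norm)
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0]), np.array([0.0])]   # global norm = 5
clipped, gn = clip_by_global_norm(grads, 1.0)
print(gn, clipped[0])   # 5.0 [0.6 0.8]
```

This is why clipping is practically important for RNN training: it tames the occasional exploding-gradient step without distorting typical updates.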
# OPTIMIZE NETWORK WITH LR SCHEDULING
```
num_epochs = 500
save_every = 1000
learning_rate = 0.0002
decay_rate = 0.97
save_dir = 'data/nine_dreams'
sess = tf.Session()
sess.run(tf.initialize_all_variables())
summary_writer = tf.train.SummaryWriter(save_dir
, graph=sess.graph)
saver = tf.train.Saver(tf.all_variables())
for e in range(num_epochs): # for all epochs
# LEARNING RATE SCHEDULING
sess.run(tf.assign(lr, learning_rate * (decay_rate ** e)))
data_loader.reset_batch_pointer()
state = sess.run(initial_state)
for b in range(data_loader.num_batches):
start = time.time()
x, y = data_loader.next_batch()
feed = {input_data: x, targets: y, initial_state: state}
# Train!
train_loss, state, _ = sess.run([cost, final_state, optm], feed)
end = time.time()
# PRINT
if b % 100 == 0:
print ("%d/%d (epoch: %d), loss: %.3f, time/batch: %.3f"
% (e * data_loader.num_batches + b
, num_epochs * data_loader.num_batches
, e, train_loss, end - start))
# SAVE MODEL
if (e * data_loader.num_batches + b) % save_every == 0:
checkpoint_path = os.path.join(save_dir, 'model.ckpt')
saver.save(sess, checkpoint_path
, global_step = e * data_loader.num_batches + b)
print("model saved to {}".format(checkpoint_path))
# NOTE: TRAINING TAKES A VERY LONG TIME
```
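The schedule above is simple exponential decay of the learning rate per epoch; its effect can be sketched directly with the same constants:

```python
learning_rate, decay_rate = 0.0002, 0.97

def lr_at(epoch):
    # Exponential decay, as applied by sess.run(tf.assign(lr, ...)) above
    return learning_rate * decay_rate**epoch

print(lr_at(0), lr_at(100), lr_at(499))
```

By epoch 100 the rate has dropped to roughly 5% of its initial value, which stabilizes the late stages of training.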
```
import pandas as pd
import numpy as np
import recordlinker
from recordlinker import preprocess
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore', 'info')
%load_ext autoreload
%autoreload 2
iowa_matches = pd.read_csv('/Users/kailinlu/Desktop/QMSSWork/RecordLinking/recordlinker/recordlinker/data/iowa_matches.csv')
iowa_nonmatches = pd.read_csv('/Users/kailinlu/Desktop/QMSSWork/RecordLinking/recordlinker/recordlinker/data/iowa_nonmatches.csv')
iowa_matches.head()
union_matches = pd.read_csv('/Users/kailinlu/Desktop/QMSSWork/RecordLinking/recordlinker/recordlinker/data/unionarmy_matches.csv')
union_matches.head()
```
### String Embedding Example
```
from recordlinker.preprocess import embed_letters, embed_shingles, disembed_letters, disembed_shingles
name = 'kailin lu'
max_length = 12
print('Embed Letters: \n',
embed_letters(name, max_length),
disembed_letters(embed_letters(name, max_length)))
print('Embed Letters Normalized: \n',
embed_letters(name, max_length, normalize=True),
disembed_letters(embed_letters(name, max_length, normalize=True)))
print('Embed 2-Shingles: \n',
embed_shingles(name, max_length),
disembed_shingles(embed_shingles(name, max_length)))
print('Embed 2-Shingles Normalized: \n',
embed_shingles(name, max_length, normalize=True),
disembed_shingles(embed_shingles(name, max_length, normalize=True)))
name = 'kailin'
max_length = 12
print('Embed Letters: \n',
embed_letters(name, max_length),
disembed_letters(embed_letters(name, max_length)))
print('Embed Letters Normalized: \n',
embed_letters(name, max_length, normalize=True),
disembed_letters(embed_letters(name, max_length, normalize=True)))
print('Embed 2-Shingles: \n',
embed_shingles(name, max_length),
disembed_shingles(embed_shingles(name, max_length)))
print('Embed 2-Shingles Normalized: \n',
embed_shingles(name, max_length, normalize=True),
disembed_shingles(embed_shingles(name, max_length, normalize=True)))
```
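For intuition, here is a hypothetical sketch of what a fixed-length letter embedding can look like. This is NOT recordlinker's actual `embed_letters`; the alphabet, the 0 padding code, and the truncation scheme are assumptions for illustration only:

```python
import string

# Hypothetical alphabet: a-z plus space; 0 reserved for padding / unknown
ALPHABET = {c: i + 1 for i, c in enumerate(string.ascii_lowercase + " ")}

def embed_letters_sketch(name, max_length):
    # Map each character to an integer code, truncate, then right-pad
    codes = [ALPHABET.get(c, 0) for c in name[:max_length]]
    return codes + [0] * (max_length - len(codes))

emb = embed_letters_sketch("kailin lu", 12)
print(emb)
```

Shorter names are zero-padded to `max_length`, so all names share one fixed-size representation suitable for a dense autoencoder input.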
## Train and save autoencoders
### Dense
```
ORIG_LENGTH = 12
BATCH_SIZE = 32
ENCODE_DIM = [256, 128]
DECODE_DIM = [128, 256]
LR = 1e-4
EPOCHS=301
LATENT_DIM = [48]
# Embed letters
namesA = preprocess.embed(iowa_matches['lname1915'],
max_length=ORIG_LENGTH,
embed_type='letters',
normalize=True)
namesB = preprocess.embed(iowa_matches['lname1940'],
max_length=ORIG_LENGTH,
embed_type='letters',
normalize=True)
for latent_dim in LATENT_DIM:
save_path = '/Users/kailinlu/Desktop/QMSSWork/RecordLinking/models/dense_letter_{}_iowa_last/'.format(latent_dim)
run_id = 'dense_{}'.format(latent_dim)
vae = recordlinker.model.VAE(batch_size=BATCH_SIZE,
orig_dim=ORIG_LENGTH,
latent_dim=latent_dim,
encode_dim=ENCODE_DIM,
decode_dim=DECODE_DIM,
lr=LR)
model, encoder, decoder = vae.train(namesA, namesB,
epochs=EPOCHS,
run_id=run_id,
save_path=save_path,
optimizer='adam',
tensorboard=True,
earlystop=True,
earlystop_patience=10,
reconstruct=True,
reconstruct_display=10)
ORIG_LENGTH = 12
BATCH_SIZE = 32
ENCODE_DIM = [128, 128]
DECODE_DIM = [128, 128]
LR = 5e-4
EPOCHS=301
EMBED_TYPE = 'shingles'
# Embed letters
namesA = preprocess.embed(union_matches['first1'],
max_length=ORIG_LENGTH,
embed_type=EMBED_TYPE,
normalize=True)
namesB = preprocess.embed(union_matches['first2'],
max_length=ORIG_LENGTH,
embed_type=EMBED_TYPE,
normalize=True)
LATENT_DIM = [2,4,8,16,24]
for latent_dim in LATENT_DIM:
save_path = '/Users/kailinlu/Desktop/QMSSWork/RecordLinking/models/dense_shingle_{}_union_first/'.format(latent_dim)
run_id = 'dense_{}'.format(latent_dim)
vae = recordlinker.model.VAE(batch_size=BATCH_SIZE,
orig_dim=ORIG_LENGTH,
latent_dim=latent_dim,
encode_dim=ENCODE_DIM,
decode_dim=DECODE_DIM,
lr=LR)
model, encoder, decoder = vae.train(namesA, namesB,
epochs=EPOCHS,
run_id=run_id,
save_path=save_path,
optimizer='adam',
tensorboard=True,
earlystop=True,
earlystop_patience=15,
reconstruct=True,
reconstruct_type='s',
reconstruct_display=10)
```
### LSTM
```
# Train
# One hot encoding of names
ORIG_LENGTH = 12
LR = 5e-4
BATCH_SIZE = 32
EPOCHS = 350
namesA = preprocess.embed(iowa_matches['lname1915'],
max_length=ORIG_LENGTH,
embed_type='letters',
normalize=False,
categorical=True)
namesB = preprocess.embed(iowa_matches['lname1940'],
max_length=ORIG_LENGTH,
embed_type='letters',
normalize=False,
categorical=True)
LATENT_DIM = [96,192,384]
for latent_dim in LATENT_DIM:
save_path = '/Users/kailinlu/Desktop/QMSSWork/RecordLinking/models/lstm_letter_{}_iowa_last/'.format(latent_dim)
run_id = 'lstm_{}'.format(latent_dim)
lstm_vae = recordlinker.model.LSTMVAE(batch_size=BATCH_SIZE,
timesteps=ORIG_LENGTH,
orig_dim=classes,
latent_dim=latent_dim,
encode_dim=[64,64],
decode_dim=[64,64],
lr=LR)
model_lstm, model_encoder, model_decoder = lstm_vae.train(namesA, namesB,
epochs=EPOCHS,
run_id=run_id,
save_path=save_path,
earlystop=True,
earlystop_patience=10,
tensorboard=True,
reconstruct=True,
reconstruct_display=10)
```
```
# matplotlib plots within notebook
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from datetime import datetime
from tqdm import tqdm
import os, sys
from l5kit.configs import load_config_data
from l5kit.data import ChunkedDataset, LocalDataManager
from l5kit.dataset import AgentDataset
from l5kit.rasterization import build_rasterizer
from l5kit.evaluation import write_pred_csv
from l5kit.geometry import transform_points
# Custom libs
sys.path.insert(0, './LyftAgent_lib')
from LyftAgent_lib import train_support as lyl_ts
from LyftAgent_lib import topologies as lyl_nn
# Print Code Version
import git
def print_git_info(path, nombre):
repo = git.Repo(path)
print('Using: %s \t branch %s \t commit hash %s'%(nombre, repo.active_branch.name, repo.head.object.hexsha))
changed = [ item.a_path for item in repo.index.diff(None) ]
if len(changed)>0:
print('\t\t WARNING -- modified files:')
print(changed)
print_git_info('.', 'LyftAgent_lib')
import platform
print("python: "+platform.python_version())
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.backend as K
print('Using TensorFlow version: '+tf.__version__)
print('Using Keras version: '+keras.__version__)
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
```
# Definitions and configuration
```
# Test model loction
path_load = ''
# Test model base name
net_base_name = ''
# set env variable for data
os.environ["L5KIT_DATA_FOLDER"] = ''
# get config
cfg = load_config_data("./AgentPrediction_config.yaml")
cfg = lyl_ts.fill_defaults(cfg)
# Validation chopped dataset:
eval_base_path = ''
model_map_input_shape = (cfg["raster_params"]["raster_size"][0],
cfg["raster_params"]["raster_size"][1])
num_hist_frames = cfg["model_params"]["history_num_frames"]
num_future_frames = cfg["model_params"]["future_num_frames"]
increment_net = cfg["model_params"]["increment_net"]
mruv_guiding = cfg["model_params"]["mruv_guiding"]
mruv_model_trainable = cfg["model_params"]["mruv_model_trainable"]
retrain_inputs_image_model = cfg["training_params"]["retrain_inputs_image_model"]
gen_batch_size = cfg["train_data_loader"]["batch_size"]
model_version = cfg["model_params"]["version"]
base_image_preprocess = cfg["model_params"]["base_image_preprocess"]
use_fading = cfg["model_params"]["use_fading"]
use_angle = cfg["model_params"]["use_angle"]
isBaseModel = False
if model_version == 'Base':
isBaseModel = True
forward_pass_use = lyl_nn.modelBaseline_forward_pass
elif model_version == 'V1':
forward_pass_use = lyl_nn.modelV1_forward_pass
elif model_version == 'V2':
forward_pass_use = lyl_nn.modelV2_forward_pass
# Get image preprocessing function (depends on image encoding base architecture)
if base_image_preprocess is None:
base_image_preprocess_fcn = lambda x: x
else:
try:
base_image_preprocess_fcn = getattr(keras.applications, base_image_preprocess.split('.')[0])
base_image_preprocess_fcn = getattr(base_image_preprocess_fcn, base_image_preprocess.split('.')[1])
except:
raise Exception('Base image pre-processing not found. Requested function: %s'%base_image_preprocess)
```
### Load model
```
model_list = lyl_ts.load_models(path_load, net_base_name,
    load_img_model = retrain_inputs_image_model,
isBaseModel = isBaseModel,
mruv_guiding = mruv_guiding)
ImageEncModel = model_list[0]
HistEncModel = model_list[1]
PathDecModel = model_list[2]
mruv_model = model_list[3]
```
### Load dataset
```
dm = LocalDataManager()
rast = build_rasterizer(cfg, dm)
dataset_path_test = dm.require(cfg["test_data_loader"]["key"])
test_zarr = ChunkedDataset(dataset_path_test)
test_zarr.open()
print(test_zarr)
test_mask = np.load("../prediction-dataset/scenes/mask.npz")["arr_0"]
test_dataset = AgentDataset(cfg, test_zarr, rast, agents_mask=test_mask)
tf_test_dataset = lyl_ts.get_tf_dataset(test_dataset,
num_hist_frames,
model_map_input_shape,
num_future_frames,
meta_dict_use = lyl_ts.meta_dict_pass)
# Map sample pre-processing function
tf_test_dataset = tf_test_dataset.map(lambda x: lyl_ts.tf_get_input_sample(x,
image_preprocess_fcn=base_image_preprocess_fcn,
use_fading = use_fading,
use_angle = use_angle))
# Set batch size
tf_test_dataset = tf_test_dataset.batch(batch_size=gen_batch_size)
```
# Test!
```
future_coords_offsets_pd = []
timestamps = []
agent_ids = []
test_dataset_prog_bar = tqdm(tf_test_dataset, total=int(np.ceil(len(test_dataset)/gen_batch_size)))
test_dataset_prog_bar.set_description('Testing: ')
for (thisSampleMapComp, thisSampeHistPath, thisSampeTargetPath,
thisHistAvail, thisTargetAvail,
thisTimeStamp, thisTrackID, thisRasterFromAgent, thisWorldFromAgent, thisCentroid, thisSampleIdx) in test_dataset_prog_bar:
pad_size = gen_batch_size-thisSampleMapComp.shape[0]
if pad_size != 0:
# Create padded batch if necessary
aux_SampleMapComp = np.zeros((gen_batch_size,thisSampleMapComp.shape[1],thisSampleMapComp.shape[2],thisSampleMapComp.shape[3]), dtype = np.float32)
aux_SampeHistPath = np.zeros((gen_batch_size,thisSampeHistPath.shape[1],thisSampeHistPath.shape[2]), dtype = np.float32)
aux_SampeTargetPath = np.zeros((gen_batch_size,thisSampeTargetPath.shape[1],thisSampeTargetPath.shape[2]), dtype = np.float32)
aux_HistAvail = np.zeros((gen_batch_size,thisHistAvail.shape[1]), dtype = np.float32)
aux_TargetAvail = np.zeros((gen_batch_size,thisTargetAvail.shape[1]), dtype = np.float32)
aux_TimeStamp = np.zeros((gen_batch_size), dtype = np.float32)
aux_TrackID = np.zeros((gen_batch_size), dtype = np.float32)
aux_SampleMapComp[:gen_batch_size-pad_size,:,:,:] = thisSampleMapComp
aux_SampeHistPath[:gen_batch_size-pad_size,:,:] = thisSampeHistPath
aux_SampeTargetPath[:gen_batch_size-pad_size,:,:] = thisSampeTargetPath
aux_HistAvail[:gen_batch_size-pad_size,:] = thisHistAvail
aux_TargetAvail[:gen_batch_size-pad_size,:] = thisTargetAvail
aux_TimeStamp[:gen_batch_size-pad_size] = thisTimeStamp
aux_TrackID[:gen_batch_size-pad_size] = thisTrackID
thisSampleMapComp = aux_SampleMapComp
thisSampeHistPath = aux_SampeHistPath
thisSampeTargetPath = aux_SampeTargetPath
thisHistAvail = aux_HistAvail
thisTargetAvail = aux_TargetAvail
        thisTimeStamp = aux_TimeStamp
thisTrackID = aux_TrackID
# Predict
    if isBaseModel:
        predPath = forward_pass_use(thisSampleMapComp, ImageEncModel, PathDecModel)
else:
PathDecModel.reset_states()
HistEncModel.reset_states()
        predPath = forward_pass_use(thisSampleMapComp, thisSampeHistPath, thisHistAvail, 50,
ImageEncModel, HistEncModel, PathDecModel,
use_teacher_force=False,
increment_net = increment_net,
mruv_guiding = mruv_guiding,
mruv_model = mruv_model,
mruv_model_trainable = mruv_model_trainable)
predPath = predPath.numpy()
# convert agent coordinates into world offsets
predPath = predPath[:,:,:2]
world_from_agents = thisWorldFromAgent.numpy()
centroids = thisCentroid.numpy()
for idx_sample in range(gen_batch_size-pad_size):
# Save info
        future_coords_offsets_pd.append(transform_points(predPath[idx_sample,:,:], world_from_agents[idx_sample,:,:]) - centroids[idx_sample,:])
timestamps.append(thisTimeStamp[idx_sample].numpy())
agent_ids.append(thisTrackID[idx_sample])
assert len(agent_ids) == len(test_dataset), "Test data size not equal to dataset."
```
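The padding logic in the loop above follows a common pattern for models that require a fixed batch size: zero-pad the final partial batch, run the forward pass, then keep only the real rows. A standalone sketch of the pattern:

```python
import numpy as np

def pad_batch(batch, batch_size):
    # Zero-pad a partial batch up to batch_size; also return the number
    # of real rows so predictions can be trimmed afterwards.
    n_real = batch.shape[0]
    padded = np.zeros((batch_size,) + batch.shape[1:], dtype=batch.dtype)
    padded[:n_real] = batch
    return padded, n_real

batch = np.ones((3, 2))            # last partial batch: 3 of 8 samples
padded, n_real = pad_batch(batch, 8)
preds = padded * 2.0               # stand-in for the model's forward pass
print(preds[:n_real].shape)        # only the real rows are kept
```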
### Write result csv
Encode the predictions into a CSV file with `write_pred_csv`. Coords can have an additional axis for multi-mode predictions; up to `MAX_MODES` modes are handled. For the uni-modal case (i.e. all predictions have just a single mode), `coords` should not have the additional axis and `confs` should be set to `None`; in this case a single mode with confidence 1 will be written.
Arguments:
- `csv_path` (str): path to the csv to write
- `timestamps` (np.ndarray): `(num_example,)` frame timestamps
- `track_ids` (np.ndarray): `(num_example,)` agent ids
- `coords` (np.ndarray): `(num_example x (modes) x future_len x num_coords)` displacements in meters
- `confs` (Optional[np.ndarray]): `(num_example x modes)` confidence of each mode in each example; rows should sum to 1
```
write_pred_csv('submission.csv',
timestamps=np.array(timestamps),
track_ids=np.array(agent_ids),
coords=np.array(future_coords_offsets_pd),
confs=None)
```
# Validation
This will perform validation against a chopped dataset. It provides a value close to the test score on the leaderboard.
```
from l5kit.evaluation.chop_dataset import MIN_FUTURE_STEPS
from l5kit.evaluation import write_pred_csv, compute_metrics_csv, read_gt_csv, create_chopped_dataset
from l5kit.evaluation.metrics import neg_multi_log_likelihood, time_displace, rmse, average_displacement_error_mean
dm_val = LocalDataManager()
# ===== GENERATE AND LOAD CHOPPED DATASET
num_frames_to_chop = 100
eval_cfg = cfg["val_data_loader"]
if not os.path.exists(eval_base_path):
eval_base_path = create_chopped_dataset(dm_val.require(eval_cfg["key"]),
cfg["raster_params"]["filter_agents_threshold"],
num_frames_to_chop,
cfg["model_params"]["future_num_frames"],
MIN_FUTURE_STEPS)
rast_val = build_rasterizer(cfg, dm_val)
eval_zarr_path = os.path.join(eval_base_path, 'validate.zarr')
eval_mask_path = os.path.join(eval_base_path, 'mask.npz')
eval_gt_path = os.path.join(eval_base_path, 'gt.csv')
eval_zarr = ChunkedDataset(eval_zarr_path).open()
eval_mask = np.load(eval_mask_path)["arr_0"]
# ===== INIT DATASET AND LOAD MASK
validation_dataset = AgentDataset(cfg, eval_zarr, rast_val, agents_mask=eval_mask)
# eval_dataloader = DataLoader(eval_dataset, shuffle=eval_cfg["shuffle"], batch_size=eval_cfg["batch_size"],
# num_workers=eval_cfg["num_workers"])
print(validation_dataset)
tf_validation_dataset = lyl_ts.get_tf_dataset(validation_dataset,
num_hist_frames,
model_map_input_shape,
num_future_frames,
meta_dict_use = lyl_ts.meta_dict_pass)
# Map sample pre-processing function
tf_validation_dataset = tf_validation_dataset.map(lambda x: lyl_ts.tf_get_input_sample(x,
image_preprocess_fcn=base_image_preprocess_fcn,
use_fading = use_fading,
use_angle = use_angle))
# Set batch size
tf_validation_dataset = tf_validation_dataset.batch(batch_size=gen_batch_size)
future_coords_offsets_pd = []
timestamps = []
agent_ids = []
val_dataset_prog_bar = tqdm(tf_validation_dataset, total=int(np.ceil(len(validation_dataset)/gen_batch_size)))
val_dataset_prog_bar.set_description('Validating: ')
for (thisSampleMapComp, thisSampeHistPath, thisSampeTargetPath,
thisHistAvail, thisTargetAvail,
thisTimeStamp, thisTrackID, thisRasterFromAgent, thisWorldFromAgent, thisCentroid, thisSampleIdx) in val_dataset_prog_bar:
pad_size = gen_batch_size-thisSampleMapComp.shape[0]
if pad_size != 0:
# Create padded batch if necessary
aux_SampleMapComp = np.zeros((gen_batch_size,thisSampleMapComp.shape[1],thisSampleMapComp.shape[2],thisSampleMapComp.shape[3]), dtype = np.float32)
aux_SampeHistPath = np.zeros((gen_batch_size,thisSampeHistPath.shape[1],thisSampeHistPath.shape[2]), dtype = np.float32)
aux_SampeTargetPath = np.zeros((gen_batch_size,thisSampeTargetPath.shape[1],thisSampeTargetPath.shape[2]), dtype = np.float32)
aux_HistAvail = np.zeros((gen_batch_size,thisHistAvail.shape[1]), dtype = np.float32)
aux_TargetAvail = np.zeros((gen_batch_size,thisTargetAvail.shape[1]), dtype = np.float32)
aux_TimeStamp = np.zeros((gen_batch_size), dtype = np.float32)
aux_TrackID = np.zeros((gen_batch_size), dtype = np.float32)
aux_SampleMapComp[:gen_batch_size-pad_size,:,:,:] = thisSampleMapComp
aux_SampeHistPath[:gen_batch_size-pad_size,:,:] = thisSampeHistPath
aux_SampeTargetPath[:gen_batch_size-pad_size,:,:] = thisSampeTargetPath
aux_HistAvail[:gen_batch_size-pad_size,:] = thisHistAvail
aux_TargetAvail[:gen_batch_size-pad_size,:] = thisTargetAvail
aux_TimeStamp[:gen_batch_size-pad_size] = thisTimeStamp
aux_TrackID[:gen_batch_size-pad_size] = thisTrackID
thisSampleMapComp = aux_SampleMapComp
thisSampeHistPath = aux_SampeHistPath
thisSampeTargetPath = aux_SampeTargetPath
thisHistAvail = aux_HistAvail
thisTargetAvail = aux_TargetAvail
thisTimeStamp = aux_TimeStamp
thisTrackID = aux_TrackID
# Predict
if base_model:
predPath = forwardpass_use(thisSampleMapComp, ImageEncModel, PathDecModel)
else:
PathDecModel.reset_states()
HistEncModel.reset_states()
predPath = forwardpass_use(thisSampleMapComp, thisSampeHistPath, thisHistAvail, 50,
ImageEncModel, HistEncModel, PathDecModel,
use_teacher_force=False,
increment_net = increment_net,
mruv_guiding = mruv_guiding,
mruv_model = mruv_model,
mruv_model_trainable = mruv_model_trainable)
predPath = predPath.numpy()
# convert agent coordinates into world offsets
predPath = predPath[:,:,:2]
world_from_agents = thisWorldFromAgent.numpy()
centroids = thisCentroid.numpy()
for idx_sample in range(gen_batch_size-pad_size):
# Save info
future_coords_offsets_pd.append(transform_points(predPath[idx_sample,:,:], world_from_agents[idx_sample,:,:]) - centroids[idx_sample,:])
timestamps.append(thisTimeStamp[idx_sample].numpy())
agent_ids.append(thisTrackID[idx_sample])
assert len(agent_ids) == len(validation_dataset), "Validation data size not equal to dataset."
pred_path = '/tf/2020-10-Lyft/prediction-dataset/validate_chopped_100/validation_submission.csv'
write_pred_csv(pred_path,
timestamps=np.array(timestamps),
track_ids=np.array(agent_ids, dtype=np.int32),
coords=np.array(future_coords_offsets_pd),
confs=None)
metricas_out = compute_metrics_csv(eval_gt_path, pred_path, [neg_multi_log_likelihood,
time_displace,
rmse,
average_displacement_error_mean])
for metric_name, metric_mean in metricas_out.items():
print(metric_name, metric_mean)
```
# **Handwritten Digit Recognition with the Backpropagation Method**
---
```
import tensorflow as tf #machine-learning library
import matplotlib.pyplot as plt #for data visualization
```
## Dataset description
* The data comes from the MNIST database (Modified National Institute of Standards and Technology database), which contains 60,000 training samples and 10,000 test samples.
* Each sample is a grayscale image of a single digit, 0 through 9, with dimensions of 28 x 28 pixels.
* x_train and x_test hold the image arrays, while y_train and y_test hold the corresponding digit labels.
```
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
```
## Sample image from the dataset
Display the first training image to confirm that the dataset import succeeded.
```
plt.imshow(x_train[0], cmap="binary")
plt.show()
print("Image of digit", y_train[0])
```
## Data normalization
* The input values of the imported images, integers in the range 0 - 255, are first normalized to the range 0 - 1.
* The target outputs, integers 0 - 9, are converted to one-hot encoded vectors of 0s and 1s.
```
# convert integers to floats
train_norm = x_train.astype('float32')
test_norm = x_test.astype('float32')
# normalize to the 0-1 range
x_train = train_norm / 255.0
x_test = test_norm / 255.0
# one-hot encode the target values
y_train_enc = tf.keras.utils.to_categorical(y_train)
y_test_enc = tf.keras.utils.to_categorical(y_test)
```
## Defining the Neural Network Model
```
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu, kernel_initializer="glorot_uniform"))
model.add(tf.keras.layers.Dropout(0.1))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu, kernel_initializer="glorot_uniform"))
model.add(tf.keras.layers.Dropout(0.1))
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax, kernel_initializer="glorot_uniform"))
```
### Training configuration
* The loss function used is categorical crossentropy,<br>
which suits multi-class classification with one-hot encoded outputs.
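As a quick illustration (not part of the tutorial, with made-up probabilities), categorical crossentropy for a single one-hot target can be computed by hand with NumPy:

```python
import numpy as np

# Hypothetical one-hot target and predicted class probabilities (illustrative values)
y_true = np.array([0.0, 1.0, 0.0])
y_pred = np.array([0.1, 0.8, 0.1])

# Categorical crossentropy: -sum over classes of y_true * log(y_pred)
ce = -np.sum(y_true * np.log(y_pred))
print(round(ce, 4))  # -ln(0.8) ≈ 0.2231
```

Only the probability assigned to the true class contributes to the loss, which is why confident wrong predictions are penalized heavily.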
```
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```
## Training process
Run the training process (minimizing the loss and updating the weights) for the model defined above.<br>
The process is repeated for the specified number of epochs.
```
history = model.fit(x=x_train, y=y_train_enc, epochs=10, validation_split=0.1)
```
## NN model evaluation
```
test_loss, test_acc = model.evaluate(x=x_test, y=y_test_enc)
```
## Testing with the test data
With training finished and the model's quality summarized by the evaluation above, the model is ready for testing.
```
predictions = model.predict(x_test)
```
## Displaying sample recognition results
Below are some of the test-set predictions.
```
x_test__ = x_test.reshape(x_test.shape[0], 28, 28)
fig, axis = plt.subplots(2, 5, figsize=(12, 6))
for i, ax in enumerate(axis.flat):
ax.imshow(x_test__[i], cmap='binary')
    ax.set(title = f"True label {y_test[i]}\nPredicted {predictions[i].argmax()}")
```
## Visualizing the training results
1. Training accuracy
```
# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
```
2. Training loss
```
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
```
# Hands-On Data Preprocessing in Python
Learn how to effectively prepare data for successful data analytics
AUTHOR: Dr. Roy Jafari
### Chapter 11: Data Cleaning - Level Ⅲ
#### Exercises
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
```
# Exercise 1
In this exercise, we will be using 'Temperature_data.csv'. This dataset has some missing values. Do the following.
a. After reading the file into a Pandas DataFrame, check whether the dataset is level Ⅰ clean and, if not, clean it. Also, describe any cleanings performed.
```
day_df = pd.read_csv('Temperature_data.csv')
day_df
```
b. Check whether the dataset is level Ⅱ clean and, if not, clean it. Also, describe any cleanings performed.
c. The dataset has missing values. See how many there are, and run a diagnosis to determine what types of missing values they are.
d. Are there any outliers in the dataset?
e. How should we best deal with the missing values if our goal is to draw multiple boxplots that show the central tendency and variation of temperature across the months? Draw the described visualization after dealing with the missing values.
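As a starting point for part (c), a quick way to count missing values per column is pandas' `isna().sum()`. The sketch below uses a tiny made-up frame, since `Temperature_data.csv` itself is not bundled with this notebook:

```python
import pandas as pd
import numpy as np

# Tiny illustrative frame standing in for Temperature_data.csv
toy_df = pd.DataFrame({'Month': ['Jan', 'Feb', 'Mar', 'Apr'],
                       'Temperature': [2.1, np.nan, 8.4, np.nan]})

# Count missing values per column
missing_per_column = toy_df.isna().sum()
print(missing_per_column['Temperature'])  # 2
```

The per-column counts tell you where to focus the MCAR/MAR/MNAR diagnosis that the exercise asks for.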
# Exercise 2
In this exercise, we are going to use the file ‘Iris_wMV.csv’. Iris data includes 50 samples of each of three types of iris flowers, totaling 150 rows of data. Each flower is described by its sepal and petal length or width. The column PetalLengthCm has some missing values.
a. Confirm that PetalLengthCm has 6 missing values.
```
iris_df = pd.read_csv('Iris_wMV.csv')
iris_df
```
b. Figure out the types of missing values (MCAR, MAR, MNAR).
c. How would you best deal with the missing values, if your end goal was to draw the following visualization? Comment on all four approaches to dealing with missing values in this chapter, citing why each approach would or wouldn’t be appropriate.
d. Draw the preceding figure twice, once after adopting the “keep as is” approach, and once after adopting “imputing with the central tendency of the appropriate iris Species”. Compare the two figures and comment on their differences.
# Exercise 3
In this exercise, we will be using ‘imdb_top_1000.csv’. More information about this dataset may be found at this link: https://www.kaggle.com/harshitshankhdhar/imdb-dataset-of-top-1000-movies-and-tv-shows. Perform the following steps for this dataset.
a. Read the file into movie_df, and list the level Ⅰ data cleaning steps that the dataset needs. Implement the listed items, if any.
```
movie_df = pd.read_csv('imdb_top_1000.csv')
movie_df.head(1)
```
b. We want to employ a Decision Tree Classification algorithm using the following columns to predict the IMDB_rating: Certificate, Runtime, Genre, and Gross. For this analytic goal, list the level Ⅱ data cleanings that need to be done, and then implement them.
c. Does the dataset have issues, regarding missing values? If yes, how best should we deal with them given the listed data analytic goals in b.
d. Use the following function from sklearn.tree to create a prediction model that can predict IMDB_rating using Certificate, Runtime, Genre and Gross:
*DecisionTreeRegressor(max_depth=5, min_impurity_decrease=0, min_samples_split=20, splitter='random')*
The tuning parameters have been set for you so the DecisionTreeRegressor can perform better. Once the model is trained, draw the trained tree and check if the attribute Gross is used for the prediction of IMDB_rating.
e. Run the following code and then explain what summary_df is.
`
dt_predicted_IMDB_rating = RegressTree.predict(Xs)
mean_predicted_IMDB_rating = np.ones(len(y))*y.mean()
summary_df = pd.DataFrame({'Prediction by Decision Tree': dt_predicted_IMDB_rating, 'Prediction by mean': mean_predicted_IMDB_rating, 'Actual IMDB_rating': y})
`
f. Run the following code and explain the visualization it creates. What can you learn from the visualization?
`summary_df['Decision Tree Error'] = abs(summary_df['Prediction by Decision Tree']- summary_df['Actual IMDB_rating'])
summary_df['Mean Error'] = abs(summary_df['Prediction by mean'] - summary_df['Actual IMDB_rating'])
plt.figure(figsize=(2,10))
table = summary_df[['Decision Tree Error','Mean Error']]
sns.heatmap(table, cmap='Greys')`
# Exercise 4
In this exercise, we will be using two CSV files: responses.csv and columns.csv. The two files are used to record the data of a survey conducted in Slovakia. To access the data on Kaggle.com use this link: https://www.kaggle.com/miroslavsabo/young-people-survey. Perform the following items for this data source.
```
column_df = pd.read_csv('columns.csv')
column_df.head(2)
response_df = pd.read_csv('responses.csv')
response_df.head(2)
```
a. Are there respondents in this survey that are suspected to be outliers based on their age? How many? list them in a separate data frame.
b. Are there respondents in this survey that are suspected to be outliers based on their level of liking for Country and Hardrock music? How many? list them in a separate data frame.
c. Are there respondents in this survey that are suspected to be outliers based on their BMI or Education level? How many? List them in a separate data frame. BMI can be calculated using the following formula:
`BMI = Weight/Height^2`
Weight must be in kilograms and Height in meters for this formula. In the dataset, Weight is recorded in kilograms, but Height is recorded in centimeters and has to be converted to meters.
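For instance (an illustrative sketch with made-up respondent values, not part of the exercise solution), the unit conversion and formula look like:

```python
# Illustrative BMI computation with the centimeter-to-meter conversion
weight_kg = 70.0   # hypothetical respondent weight in kilograms
height_cm = 175.0  # hypothetical respondent height in centimeters

height_m = height_cm / 100.0       # convert centimeters to meters
bmi = weight_kg / height_m ** 2    # BMI = Weight / Height^2
print(round(bmi, 2))  # 22.86
```

Applied to the survey DataFrame, the same arithmetic would be vectorized over the Weight and Height columns.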
d. Are there respondents in this survey that are suspected to be outliers based on their BMI and Age? How many? list them in a separate data frame.
e. Are there respondents in this survey that are suspected to be outliers based on their BMI and Gender? How many? list them in a separate data frame.
# Exercise 5
One of the most common approaches for fraud detection is outlier detection. In this exercise, you will use 'creditcard.csv' from https://www.kaggle.com/mlg-ulb/creditcardfraud to evaluate the effectiveness of outlier detection for credit card fraud detection. Note that most of the columns in this data source are processed values that uphold data anonymity. Perform the following steps.
a. Check the state of the dataset for missing values and address them if any.
```
transaction_df = pd.read_csv('creditcard.csv')
transaction_df
```
b. Using the column Class, which shows if the transaction has been fraudulent or not, find out what percentage of the transactions in the dataset are fraudulent.
c. Using data visualization or the appropriate statistical test, and if necessary both, specify which univariate outliers have a relationship with the column Class. In other words, outliers in which column’s values should make us suspect fraudulent activity? Which statistical test is appropriate here?
d. First, use the K-Means algorithm to group the transactions into 200 clusters by the attributes that were found to have a relationship with the column Class in part c. Then, filter out the members of the clusters with less than 50 transactions. Does any of them contain significantly fraudulent transactions?
e. If there are any clusters with significant fraudulent transactions, perform centroid analysis for them.
# Exercise 6
In Chapter 5 and Chapter 8 we used ‘WH Report_preprocessed.csv’ which is the preprocessed version of ‘WH Report.csv’. Now that you have learned numerous data preprocessing skills, you will be preprocessing the dataset yourself.
a. Check the status of the dataset for missing values.
```
country_df = pd.read_csv('WH Report.csv')
country_df
```
b. Check the status of the dataset for outliers.
c. We would like to cluster the countries based on their happiness indices over the years. Based on these analytic goals, address the missing values.
d. Based on the listed goal in c, address the outliers.
e. Does data need any level Ⅰ or level Ⅱ data cleaning, before clustering is possible? If any, prepare the dataset for k-means clustering.
f. Perform K-means clustering to separate the countries into three groups, and do all the possible analytics that one does when clustering.
# Exercise 7
Specify if the following items describe random errors or systematic errors.
a. The data has these types of errors because the thermometer that the lab purchased can give precise readings to one-thousandth of a degree
b. The data has these types of errors because the survey records were gathered by 5 different surveyors who attended 5 rigorous training sessions
c. The data has these types of errors because when asking salary questions in the survey there were no options such as "I would not like to share"
d. The data has these types of errors because the cameras were tampered with so the robbery would not be taped.
# Exercise 8
Study Figure 11.13 one more time, and run the first three Exercises by the flowchart in this figure and note down the path that led to our decisions regarding the missing values. Did we take steps in dealing with missing values that were not listed in this figure or this chapter? Would it be better to have a more complex figure so every possibility would be included, or not? Why or why not?
# Exercise 9
Explain why the following statement is incorrect: A row may have a significant number of MCAR missing values.
# NER Training Data Creation (Training-based)
```
from docx import Document
from io import BytesIO

def para_to_text(p):
"""
A function to find every texts in the paragraph
params
----
p : docx.Document.Paragraph object
returns
----
str
"""
rs = p._element.xpath(".//w:t")
return u"".join([r.text for r in rs])
def sop_to_text(file_path):
"""
Converts SOP.docx into plain text
params
----
file_path : str (path to the SOP document)
returns
----
str
"""
text = []
with open(file_path, 'rb') as f:
source_stream = BytesIO(f.read())
f.close()
doc = Document(source_stream)
paras = doc.paragraphs
for p in paras:
text.append(para_to_text(p))
text = " ".join(text).strip()
return text
```
#### Spacy requires training data to be in the following format:
```python
train_data = [
    ("SENTENCES BLABLABLA", {"entities": [(entity_start_index, entity_end_index, "LABEL A"),
                                          (entity_start_index, entity_end_index, "LABEL B")]})
]
```
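The start/end indices are character offsets into the sentence. A minimal sketch of building one such tuple (the sentence, entity text, and label below are made up for illustration) uses `str.find` to locate the offsets:

```python
def make_train_example(sentence, entity_text, label):
    """Build one spaCy-style training tuple with character offsets."""
    start = sentence.find(entity_text)
    if start == -1:
        raise ValueError("entity text not found in sentence")
    end = start + len(entity_text)
    return (sentence, {"entities": [(start, end, label)]})

example = make_train_example("Dispatch notified the RCMP immediately.", "RCMP", "ORG")
print(example)
# ('Dispatch notified the RCMP immediately.', {'entities': [(22, 26, 'ORG')]})
```

Note that `end` is exclusive, matching Python slicing: `sentence[start:end]` returns the entity text itself.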
# Event type entity training (Rule-based)
```
import os
import json
from docx import Document
from io import StringIO, BytesIO
import re
path = "/Users/Public/Desktop/SOPs/"
SOPs = os.listdir(path)
events = []
code_pattern = r"^[A-Z0-9][A-Z0-9]+[\- ]+.+"
for sop in SOPs:
filepath = path+sop
with open(filepath, 'rb') as f:
source_stream = BytesIO(f.read())
f.close()
doc = Document(source_stream)
paras = doc.paragraphs
for p in paras:
style = p.style.name
text = para_to_text(p)
if "Normal" in style:
code = text.split(" event type")[0].split("for the ")[-1]
r = re.findall(code_pattern, code)
if r:
events.append(r[0])
raw_event_list = []
event_code = []
pattern = r"[A-Z0-9][A-Z0-9]+"
inconsistency = set()
for sop in SOPs:
event_des = "".join(sop.split("-")[1:]).strip().split(".")[0]
raw_event_list.append(event_des)
p = re.findall(pattern, event_des)
if p:
event_code.append(p[0])
else:
event_code.append(event_des)
list(set(event_code))
events
pattern = r'^([A-Z]+)[ -]+([\w ]*)-([\w ,-]*)\.docx'
events_code = []
for sop in SOPs:
match = re.findall(pattern, sop, re.IGNORECASE)
event_parts = [x.strip() for x in list(filter(None, match[0][1:]))]
events_code.extend(event_parts)
events_code.append(" - ".join(event_parts))
import pandas as pd
df = pd.read_csv("../data/interim/sop_types_valid.csv")
df
for x in df["juri"]:
x =
len(event_code)
inconsistency
```
## SITUATION
```
situations = []
for sop in SOPs:
filepath = path+sop
with open(filepath, 'rb') as f:
source_stream = BytesIO(f.read())
f.close()
doc = Document(source_stream)
paras = doc.paragraphs
for p in paras:
style = p.style.name
text = para_to_text(p)
if "heading 2" in style.lower():
situations.append(text)
situations = set(situations)
```
## ROLES
```
roles = []
for sop in SOPs:
filepath = path+sop
with open(filepath, 'rb') as f:
source_stream = BytesIO(f.read())
f.close()
doc = Document(source_stream)
paras = doc.paragraphs
for p in paras:
style = p.style.name
text = para_to_text(p)
if "heading 1" in style.lower():
roles.append(text.strip())
roles = list(set(roles))[1:]
```
### Extract data from acronym
```
import os
import json
from docx import Document
from io import StringIO, BytesIO
import re
file_path = "/Users/flu/Desktop/capstone-2020/utils/acronyms.docx"
with open(file_path, 'rb') as f:
source_stream = BytesIO(f.read())
f.close()
doc = Document(source_stream)
paras = doc.paragraphs
import pandas as pd
df = pd.DataFrame(columns=["jargon","meaning"])
jargon = []
meaning = []
for p in paras:
parsed = p.text.strip().split("\t")
if len(parsed) == 2:
jargon.append(parsed[0].strip())
meaning.append(parsed[1].strip())
tables = doc.tables
for t in tables:
for r in t.rows:
i = 0
for c in r.cells:
if i % 2 == 0:
i += 1
jargon.append(c.text.strip())
else:
meaning.append(c.text.strip())
df["jargon"] = jargon
df["meaning"] = meaning
import numpy as np
df["jargon"].replace("",np.nan, inplace=True)
df["meaning"].replace("",np.nan, inplace=True)
df = df.dropna().reset_index()[["jargon","meaning"]]
terms = list(df["jargon"])
for term in terms:
print(term)
definitions = list(df["meaning"])
definitions
```
#### Organization pattern
```
import spacy
import numpy as np
nlp = spacy.load("en_core_web_sm")
ind = 0
orgs = []
for d in definitions:
doc = nlp(d)
for ent in doc.ents:
if ent.label_ == "ORG":
orgs.append(terms[ind])
ind += 1
orgs = list(set(orgs))
filtered = []
for org in orgs:
if org.upper() != org:
pass
else:
filtered.append(org)
filtered
org_cand = []
glossary = zip(terms, definitions)
for org, mean in glossary:
if org in filtered:
org_cand.append((org, mean))
org_key = ["British","Columbia",
"Service","Services",
"Police","Institute",
"Ltd.", "Association",
"Ministry","National",
"Unit","Incorporated",
"Corporation"]
organization = []
for x,y in org_cand:
if len(set(org_key + y.split())) < len(org_key + y.split()):
organization.append(x)
organization = list(set(organization))
```
## SITUATION, ACTION, QUESTION, CONDITION
```
raw_texts = []
for sop in SOPs:
filepath = path+sop
with open(filepath, 'rb') as f:
source_stream = BytesIO(f.read())
f.close()
doc = Document(source_stream)
paras = doc.paragraphs
for p in paras:
style = p.style.name
text = para_to_text(p)
if "style1" in style.lower():
if text.strip() == "":
pass
else:
raw_texts.append(text.strip())
questions = []
conditions = []
actions = []
others = []
for t in raw_texts:
if t.endswith("?"):
questions.append(t)
elif t.startswith("If"):
conditions.append(t)
elif nlp(t)[0].pos_ == "VERB":
actions.append(nlp(t)[0])
else:
others.append(t)
questions = list(set(questions))
conditions = list(set(conditions))
actions = list(set(actions))
filtered_conditions = []
for c in conditions:
filtered_conditions.append(c.replace("\\","").replace("/","or").replace("\t","").replace(":","").replace(";",""))
others[1300:1380]
```
## Agency, Jurisdiction
```
jurisdiction = ['AB', 'BI', 'BU', 'DE', 'DFPF',
'NW', 'PO', 'RI', 'RM', 'SC',
'SQ', 'SX', 'UN', 'VA', 'WP',
'WV', 'DF PF']
```
## Pattern Based Approach
## Regex Based Approach
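These two headings were left empty in the notebook; as a sketch (the label values and sample text are made up), the two entry shapes written to PATTERNS.JSONL in the next section differ as follows: a phrase pattern matches literal text, while a regex pattern matches a single token's text against a regular expression.

```python
import re

# Phrase-style entry: the ruler matches this exact text span
phrase_pattern = {"label": "EVENT", "pattern": "AB - Abandoned Vehicle"}

# Regex-style entry: one token whose text matches the regular expression
regex_pattern = {"label": "JURI", "pattern": [{"TEXT": {"REGEX": "^(AB)[ -]+"}}]}

# The same regex can be sanity-checked directly with the re module
assert re.match(regex_pattern["pattern"][0]["TEXT"]["REGEX"], "AB - Abandoned Vehicle")
```

The dict vs. list shape of the `pattern` value is what distinguishes the two approaches in the JSONL file.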
# Writing Entities
```
with open("./entity_train/PATTERNS.JSONL","w", encoding="utf-8") as f:
# EVENT CODE EXTRACTION
for ec in event_code:
f.write('{"label":"EVENT", "pattern":"%s"}\n' %ec)
for ic in inconsistency:
f.write('{"label":"EVENT", "pattern":"%s"}\n' %ic)
for e in events:
f.write('{"label":"EVENT", "pattern":"%s"}\n' %e)
# SITUATION EXTRACTION
for s in situations:
f.write('{"label":"SITUATION", "pattern":"%s"}\n' %s)
# ROLE EXTRACTION
for r in roles:
f.write('{"label":"ROLE", "pattern":"%s"}\n' %r)
# QUESTION EXTRACTION
for q in questions:
f.write('{"label":"QUESTION", "pattern":"%s"}\n' %q)
# CONDITION EXTRACTION
for c in filtered_conditions:
f.write('{"label":"CONDITION", "pattern":"%s"}\n' %c)
# ACTION EXTRACTION
for a in actions:
f.write('{"label":"ACTION", "pattern":"%s"}\n' %a)
# ORGANIZATION EXTRACTION
for o in organization:
f.write('{"label":"ORG", "pattern":"%s"}\n' %o)
# JURISDICTION / AGENCY
for j in jurisdiction:
regex_pattern = "^({})[ -]+".format(j)
        f.write('{"label":"JURI", "pattern":[{"TEXT":{"REGEX": "%s"}}]}\n' % regex_pattern)
f.close()
```
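One caveat about the `%s`-formatting above: if a pattern contains a double quote or backslash, the resulting line is no longer valid JSON (which is likely why `filtered_conditions` strips those characters). A safer variant, sketched here rather than taken from the notebook, serializes each entry with `json.dumps`:

```python
import json
from io import StringIO

def write_patterns(stream, labeled_patterns):
    """Write (label, pattern) pairs as newline-delimited JSON, escaping quotes safely."""
    for label, pattern in labeled_patterns:
        stream.write(json.dumps({"label": label, "pattern": pattern}) + "\n")

buf = StringIO()
write_patterns(buf, [("EVENT", 'AB - "armed" robbery'), ("ROLE", "Dispatcher")])

# Each line parses back to the original pattern, even with embedded quotes
lines = buf.getvalue().splitlines()
print(json.loads(lines[0])["pattern"])  # AB - "armed" robbery
```

With this approach the cleanup of backslashes and quotes before writing becomes unnecessary.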
# RandomForestRegressor with StandardScaler
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features which are required for model training.
```
#x_values
features=[]
```
Target variable for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since the majority of the machine learning models in the Sklearn library don't handle string category data and null values, we have to explicitly remove or replace them. The snippet below defines functions that fill null values, if any exist, and encode the string-class columns in the dataset as integer classes.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 123)
```
### Data Rescaling
Performing StandardScaler data rescaling operation on dataset. The StandardScaler standardize features by removing the mean and scaling to unit variance.
We will fit an object of StandardScaler to **train data** then transform the same data via <Code>fit_transform(X_train)</Code> method, following which we will transform **test data** via <Code>transform(X_test)</Code> method.
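Conceptually (a NumPy sketch with toy numbers, not sklearn's internals), standardization subtracts the training mean and divides by the training standard deviation, and those same training statistics are reused to transform the test data:

```python
import numpy as np

X_train_toy = np.array([[1.0], [2.0], [3.0], [4.0]])
X_test_toy = np.array([[2.0], [6.0]])

# Statistics come from the training data only
mu = X_train_toy.mean(axis=0)
sigma = X_train_toy.std(axis=0)

X_train_scaled = (X_train_toy - mu) / sigma
X_test_scaled = (X_test_toy - mu) / sigma  # test data uses the train statistics
print(X_train_scaled.mean(axis=0))  # train mean ≈ 0 after scaling
```

Reusing the train statistics on the test set is exactly the `fit_transform(X_train)` / `transform(X_test)` split described above, and it prevents information from the test set leaking into the preprocessing.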
```
standard_scaler = StandardScaler()
X_train = standard_scaler.fit_transform(X_train)
X_test = standard_scaler.transform(X_test)
```
### Model
A random forest is a meta estimator that fits a number of classifying decision trees on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is controlled with the <code>max_samples</code> parameter if <code>bootstrap=True</code> (default), otherwise the whole dataset is used to build each tree.
#### Model Tuning Parameters
1. n_estimators : int, default=100
> The number of trees in the forest.
2. criterion : {“mae”, “mse”}, default=”mse”
> The function to measure the quality of a split. Supported criteria are “mse” for the mean squared error, which is equal to variance reduction as feature selection criterion, and “mae” for the mean absolute error.
3. max_depth : int, default=None
> The maximum depth of the tree.
4. max_features : {“auto”, “sqrt”, “log2”}, int or float, default=”auto”
> The number of features to consider when looking for the best split:
5. bootstrap : bool, default=True
> Whether bootstrap samples are used when building trees. If False, the whole dataset is used to build each tree.
6. oob_score : bool, default=False
> Whether to use out-of-bag samples to estimate the generalization accuracy.
7. n_jobs : int, default=None
> The number of jobs to run in parallel. fit, predict, decision_path and apply are all parallelized over the trees. <code>None</code> means 1 unless in a joblib.parallel_backend context. <code>-1</code> means using all processors. See Glossary for more details.
8. random_state : int, RandomState instance or None, default=None
> Controls both the randomness of the bootstrapping of the samples used when building trees (if <code>bootstrap=True</code>) and the sampling of the features to consider when looking for the best split at each node (if <code>max_features < n_features</code>).
9. verbose : int, default=0
> Controls the verbosity when fitting and predicting.
```
model = RandomForestRegressor(n_jobs = -1,random_state = 123)
model.fit(X_train, y_train)
```
#### Model Accuracy
We will use the trained model to make predictions on the test set. Then we use the predicted values to measure the accuracy of our model.
> **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction.
```
print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the coefficient of determination R<sup>2</sup>, the proportion of the target's variability explained by our model.
> **mae**: The **mean absolute error** function calculates the total error as the average absolute distance between the real data and the predicted data.
> **mse**: The **mean squared error** function averages the squared errors, penalizing the model more heavily for large errors.
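All three metrics can be computed by hand on a toy example (illustrative numbers only, not from the dataset), which makes the formulas concrete:

```python
import numpy as np

# Toy actual and predicted values
y_true = np.array([3.0, 5.0, 7.0])
y_hat = np.array([2.5, 5.0, 8.0])

mae = np.mean(np.abs(y_true - y_hat))  # mean absolute error
mse = np.mean((y_true - y_hat) ** 2)   # mean squared error
# R^2 = 1 - (residual sum of squares / total sum of squares)
r2 = 1 - np.sum((y_true - y_hat) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
print(mae, round(mse, 4), r2)  # 0.5 0.4167 0.84375
```

The sklearn functions used in the next cell compute exactly these quantities on the real test set.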
```
y_pred=model.predict(X_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Feature Importances
The Feature importance refers to techniques that assign a score to features based on how useful they are for making the prediction.
```
plt.figure(figsize=(8,6))
n_features = len(X.columns)
plt.barh(range(n_features), model.feature_importances_, align='center')
plt.yticks(np.arange(n_features), X.columns)
plt.xlabel("Feature importance")
plt.ylabel("Feature")
plt.ylim(-1, n_features)
```
#### Prediction Plot
First, we plot the actual observations of the test set, with the record number on the x-axis and the target value on the y-axis.
Then we overlay the model's predictions for the same records.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(X_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Viraj Jayant, Github: [Profile](https://github.com/Viraj-Jayant)
# Implementation of FFT and IFFT
### Libraries
```
import cmath
import numpy as np
from math import log, ceil
import pylab as plt
```
### Utils
```
def omega(p, q):
''' The omega term in DFT and IDFT formulas'''
return cmath.exp((2.0 * cmath.pi * 1j * q) / p)
def pad(lst):
'''padding the list to next nearest power of 2 as FFT implemented is radix 2'''
k = 0
while 2**k < len(lst):
k += 1
return np.concatenate((lst, ([0] * (2 ** k - len(lst)))))
def pad2(x):
m, n = np.shape(x)
M, N = 2 ** int(ceil(log(m, 2))), 2 ** int(ceil(log(n, 2)))
F = np.zeros((M,N), dtype = x.dtype)
F[0:m, 0:n] = x
return F, m, n
```
## FFT
```
## FFT - 1D
def fft(x):
''' FFT of 1-d signals
usage : X = fft(x)
where input x = list containing sequences of a discrete time signals
and output X = dft of x '''
n = len(x)
if n == 1:
return x
Feven, Fodd = fft(x[0::2]), fft(x[1::2])
combined = [0] * n
    for m in range(n // 2):
        combined[m] = Feven[m] + omega(n, -m) * Fodd[m]
        combined[m + n // 2] = Feven[m] - omega(n, -m) * Fodd[m]
return combined
## FFT - 2D
def fft2(f):
'''FFT of 2-d signals/images with padding
usage X, m, n = fft2(x), where m and n are dimensions of original signal'''
f, m, n = pad2(f)
return np.transpose(fft(np.transpose(fft(f)))), m, n
```
## IFFT
```
## ifft - 1D
def ifft(X):
    ''' IFFT of 1-d signals
    usage x = ifft(X)
    unpadding must be done explicitly by the caller'''
    fx = fft([v.conjugate() for v in X])
    return [v.conjugate() / len(X) for v in fx]
## ifft - 2D
def ifft2(F, m, n):
    ''' IFFT of 2-d signals
    usage x = ifft2(X, m, n) with unpadded output,
    where m and n are the dimensions of the original signal before padding'''
f, M, N = fft2(np.conj(F))
f = np.matrix(np.real(np.conj(f)))/(M*N)
return f[0:m, 0:n]
```
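The `ifft` above relies on the identity IFFT(X) = conj(FFT(conj(X))) / N. That identity can be checked independently of the code here with NumPy's reference FFT (random input values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal(8) + 1j * rng.standard_normal(8)

# IFFT via the conjugation trick, compared with NumPy's reference IFFT
via_conjugate = np.conj(np.fft.fft(np.conj(X))) / len(X)
reference = np.fft.ifft(X)
print(np.allclose(via_conjugate, reference))  # True
```

This is why the inverse transform can reuse the forward `fft` instead of needing its own recursion.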
### FFT shift
```
def fftshift(F):
''' this shifts the centre of FFT of images/2-d signals'''
M, N = F.shape
    R1, R2 = F[0: M//2, 0: N//2], F[M//2: M, 0: N//2]
    R3, R4 = F[0: M//2, N//2: N], F[M//2: M, N//2: N]
    sF = np.zeros(F.shape, dtype=F.dtype)
    sF[M//2: M, N//2: N], sF[0: M//2, 0: N//2] = R1, R4
    sF[M//2: M, 0: N//2], sF[0: M//2, N//2: N] = R3, R2
return sF
import numpy as np
def DFT_1D(fx):
    '''Direct O(M^2) DFT of a 1-d signal.'''
    fx = np.asarray(fx, dtype=complex)
    M = fx.shape[0]
    fu = np.zeros(M, dtype=complex)
    for u in range(M):
        total = 0
        for x in range(M):
            total += fx[x] * np.exp(-2j * np.pi * x * u / M)
        fu[u] = total
    return fu
def inverseDFT_1D(fu):
    '''Direct O(M^2) inverse DFT of a 1-d signal.'''
    fu = np.asarray(fu, dtype=complex)
    M = fu.shape[0]
    fx = np.zeros(M, dtype=complex)
    for x in range(M):
        total = 0
        for u in range(M):
            total += fu[u] * np.exp(2j * np.pi * x * u / M)
        fx[x] = total / M
    return fx
def FFT_1D(fx):
""" use recursive method to speed up"""
fx = np.asarray(fx, dtype=complex)
M = fx.shape[0]
minDivideSize = 4
if M % 2 != 0:
        raise ValueError("the input size must be a power of 2")
if M <= minDivideSize:
return DFT_1D(fx)
else:
fx_even = FFT_1D(fx[::2]) # compute the even part
fx_odd = FFT_1D(fx[1::2]) # compute the odd part
W_ux_2k = np.exp(-2j * np.pi * np.arange(M) / M)
f_u = fx_even + fx_odd * W_ux_2k[:M//2]
f_u_plus_k = fx_even + fx_odd * W_ux_2k[M//2:]
fu = np.concatenate([f_u, f_u_plus_k])
return fu
def inverseFFT_1D(fu):
""" use recursive method to speed up"""
fu = np.asarray(fu, dtype=complex)
fu_conjugate = np.conjugate(fu)
fx = FFT_1D(fu_conjugate)
fx = np.conjugate(fx)
fx = fx / fu.shape[0]
return fx
def FFT_2D(fx):
h, w = fx.shape[0], fx.shape[1]
fu = np.zeros(fx.shape, dtype=complex)
if len(fx.shape) == 2:
for i in range(h):
fu[i, :] = FFT_1D(fx[i, :])
for i in range(w):
fu[:, i] = FFT_1D(fu[:, i])
elif len(fx.shape) == 3:
for ch in range(3):
fu[:, :, ch] = FFT_2D(fx[:, :, ch])
return fu
def inverseDFT_2D(fu):
h, w = fu.shape[0], fu.shape[1]
fx = np.zeros(fu.shape, dtype=complex)
if len(fu.shape) == 2:
for i in range(h):
fx[i, :] = inverseDFT_1D(fu[i, :])
for i in range(w):
fx[:, i] = inverseDFT_1D(fx[:, i])
elif len(fu.shape) == 3:
for ch in range(3):
fx[:, :, ch] = inverseDFT_2D(fu[:, :, ch])
fx = np.real(fx)
return fx
def inverseFFT_2D(fu):
h, w = fu.shape[0], fu.shape[1]
fx = np.zeros(fu.shape, dtype=complex)
if len(fu.shape) == 2:
for i in range(h):
fx[i, :] = inverseFFT_1D(fu[i, :])
for i in range(w):
fx[:, i] = inverseFFT_1D(fx[:, i])
elif len(fu.shape) == 3:
for ch in range(3):
fx[:, :, ch] = inverseFFT_2D(fu[:, :, ch])
fx = np.real(fx)
return fx
```
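`FFT_2D` exploits separability: a 2-D DFT is a 1-D FFT over every row followed by a 1-D FFT over every column. A short check of that decomposition, using NumPy's transforms for validation only:

```python
import numpy as np

# The 2-D routine above works row-by-row, then column-by-column.
# Check that this separable scheme matches a direct 2-D FFT.
img = np.random.rand(8, 16)
rows = np.fft.fft(img, axis=1)   # 1-D FFT of every row
sep = np.fft.fft(rows, axis=0)   # then a 1-D FFT of every column
ref = np.fft.fft2(img)
print("max abs error:", np.max(np.abs(sep - ref)))
```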
## Evaluating
```
print ('Testing for 1-d signals')
# Generate a sine curve over [-2*pi, 2*pi] with 128 sample points
f = np.sin(np.linspace(-2*np.pi, 2*np.pi, 128))
# add uniform noise drawn from [0.5, 1.25)
f = f + 0.75 * np.random.rand(128) + 0.5
F = FFT_1D(f)
fig = plt.figure()
fig.add_subplot(311)
plt.plot(f)
plt.title('Original Signal')
fig.add_subplot(312)
plt.plot(np.log(np.abs(F[:64]) + 1))
plt.title('magnitude plot')
fig.add_subplot(313)
plt.plot(np.angle(F[:64]))
plt.title('Phase plot')
plt.show()
print ('\ntesting for 2-d signals/images')
x = np.matrix([[1,2,1],[2,1,2],[0,1,1]])
X, m, n = fft2(x)
print ('\nDFT is :')
print (X)
print ('\nOriginal signal is :')
print(ifft2(X, m, n))
```
<a href="https://colab.research.google.com/github/cxbxmxcx/EvolutionaryDeepLearning/blob/main/EDL_6_1_MLP_NumPy.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Original source https://github.com/zinsmatt/Neural-Network-Numpy/blob/master/neural-network.py
"""
Created on Thu Nov 15 20:42:52 2018
@author: matthieu
"""
```
#@title Imports
import numpy as np
import sklearn
import sklearn.datasets
import sklearn.linear_model
import matplotlib.pyplot as plt
#@title Dataset Parameters { run: "auto" }
number_samples = 100 #@param {type:"slider", min:100, max:1000, step:25}
difficulty = 1 #@param {type:"slider", min:1, max:5, step:1}
problem = "circles" #@param ["classification", "blobs", "gaussian quantiles", "moons", "circles"]
number_features = 2
number_classes = 2
middle_layer = 5 #@param {type:"slider", min:5, max:25, step:1}
epochs = 25000 #@param {type:"slider", min:1000, max:50000, step:1000}
def load_data(problem):
if problem == "classification":
clusters = 1 if difficulty < 3 else 2
informs = 1 if difficulty < 4 else 2
data = sklearn.datasets.make_classification(
n_samples = number_samples,
n_features=number_features,
n_redundant=0,
class_sep=1/difficulty,
n_informative=informs,
n_clusters_per_class=clusters)
if problem == "blobs":
data = sklearn.datasets.make_blobs(
n_samples = number_samples,
n_features=number_features,
centers=number_classes,
cluster_std = difficulty)
if problem == "gaussian quantiles":
data = sklearn.datasets.make_gaussian_quantiles(mean=None,
cov=difficulty,
n_samples=number_samples,
n_features=number_features,
n_classes=number_classes,
shuffle=True,
random_state=None)
if problem == "moons":
data = sklearn.datasets.make_moons(
n_samples = number_samples)
if problem == "circles":
data = sklearn.datasets.make_circles(
n_samples = number_samples)
return data
data = load_data(problem)
X, Y = data
# Input Data
plt.figure("Input Data")
plt.scatter(X[:, 0], X[:, 1], c=Y, s=40, cmap=plt.cm.Spectral)
#@title Helper Function to Show Model Predictions
def show_predictions(model, X, Y, name=""):
""" display the labeled data X and a surface of prediction of model """
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.01), np.arange(y_min, y_max, 0.01))
X_temp = np.c_[xx.flatten(), yy.flatten()]
Z = model.predict(X_temp)
plt.figure("Predictions " + name)
plt.contourf(xx, yy, Z.reshape(xx.shape), cmap=plt.cm.Spectral)
plt.ylabel('x2')
plt.xlabel('x1')
plt.scatter(X[:, 0], X[:, 1],c=Y, s=40, cmap=plt.cm.Spectral)
#@title Logistic Regression with SKLearn
clf = sklearn.linear_model.LogisticRegressionCV()
clf.fit(X, Y)
show_predictions(clf, X, Y, "Logistic regression")
LR_predictions = clf.predict(X)
print("Logistic Regression accuracy : ", np.sum(LR_predictions == Y) / Y.shape[0])
#@title MLP in NumPy
def sigmoid(x):
return 1.0 / (1.0 + np.exp(-x))
## Neural Network
class Neural_Network:
def __init__(self, n_in, n_hidden, n_out):
# Network dimensions
self.n_x = n_in
self.n_h = n_hidden
self.n_y = n_out
# Parameters initialization
self.W1 = np.random.randn(self.n_h, self.n_x) * 0.01
self.b1 = np.zeros((self.n_h, 1))
self.W2 = np.random.randn(self.n_y, self.n_h) * 0.01
self.b2 = np.zeros((self.n_y, 1))
def forward(self, X):
""" Forward computation """
self.Z1 = self.W1.dot(X.T) + self.b1
self.A1 = np.tanh(self.Z1)
self.Z2 = self.W2.dot(self.A1) + self.b2
self.A2 = sigmoid(self.Z2)
def back_prop(self, X, Y):
""" Back-progagate gradient of the loss """
m = X.shape[0]
self.dZ2 = self.A2 - Y
self.dW2 = (1 / m) * np.dot(self.dZ2, self.A1.T)
self.db2 = (1 / m) * np.sum(self.dZ2, axis=1, keepdims=True)
self.dZ1 = np.multiply(np.dot(self.W2.T, self.dZ2), 1 - np.power(self.A1, 2))
self.dW1 = (1 / m) * np.dot(self.dZ1, X)
self.db1 = (1 / m) * np.sum(self.dZ1, axis=1, keepdims=True)
def train(self, X, Y, epochs, learning_rate=1.2):
""" Complete process of learning, alternates forward pass,
backward pass and parameters update """
m = X.shape[0]
for e in range(epochs):
self.forward(X)
loss = -np.sum(np.multiply(np.log(self.A2), Y) + np.multiply(np.log(1-self.A2), (1 - Y))) / m
self.back_prop(X, Y)
self.W1 -= learning_rate * self.dW1
self.b1 -= learning_rate * self.db1
self.W2 -= learning_rate * self.dW2
self.b2 -= learning_rate * self.db2
if e % 1000 == 0:
print("Loss ", e, " = ", loss)
def predict(self, X):
""" Compute predictions with just a forward pass """
self.forward(X)
        return np.round(self.A2).astype(int)
#@title Create the model and train it
nn = Neural_Network(2, middle_layer, 1)
nn.train(X, Y, epochs, 1.2)
show_predictions(nn, X, Y, "Neural Network")
nn_predictions = nn.predict(X)
print("Neural Network accuracy : ", np.sum(nn_predictions == Y) / Y.shape[0])
```
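`back_prop` uses the identity tanh'(z) = 1 − tanh(z)² for the hidden-layer gradient (the `1 - np.power(self.A1, 2)` factor). A standalone finite-difference sketch confirming that identity, independent of the class above:

```python
import numpy as np

# Verify d/dz tanh(z) = 1 - tanh(z)^2 by central finite differences
z = np.linspace(-3, 3, 101)
h = 1e-6
numeric = (np.tanh(z + h) - np.tanh(z - h)) / (2 * h)
analytic = 1 - np.tanh(z) ** 2
print("max abs error:", np.max(np.abs(numeric - analytic)))
```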
# Interdisciplinary Communication Exploration
In this final segment you will use cyberinfrastructure to computationally explore the words used in academic articles. This is a peek into computational linguistics and we'll use some natural language processing tools. If you want to know more about those things, google them!
This segment is displayed in "Notebook Mode" rather than "Presentation Mode." So you will need to scroll down to explore the content. Notebook mode allows you to see more content at once. It also allows you to easily compare and contrast cells and visualizations.
Here you are free to explore as much as you want. Once you see how the code works, feel free to change attributes, code pieces, etc.
```
# This code cell starts the necessary setup for Hour of CI lesson notebooks.
# First, it enables users to hide and unhide code by producing a 'Toggle raw code' button below.
# Second, it imports the hourofci package, which is necessary for lessons and interactive Jupyter Widgets.
# Third, it helps hide/control other aspects of Jupyter Notebooks to improve the user experience
# This is an initialization cell
# It is not displayed because the Slide Type is 'Skip'
from IPython.display import HTML, IFrame, Javascript, display
from ipywidgets import interactive
import ipywidgets as widgets
from ipywidgets import Layout
import getpass # This library allows us to get the username (User agent string)
# import package for hourofci project
import sys
sys.path.append('../../supplementary') # relative path (may change depending on the location of the lesson notebook)
import hourofci
# Retrieve the user agent string, it will be passed to the hourofci submit button
agent_js = """
IPython.notebook.kernel.execute("user_agent = " + "'" + navigator.userAgent + "'");
"""
Javascript(agent_js)
# load javascript to initialize/hide cells, get user agent string, and hide output indicator
# hide code by introducing a toggle button "Toggle raw code"
HTML('''
<script type="text/javascript" src=\"../../supplementary/js/custom.js\"></script>
<input id="toggle_code" type="button" value="Toggle raw code">
''')
```
## Setup
As always, you have to import the specific Python packages you'll need. However, since many of the functions used in this exploration are basic Python functions, they don't require separate imports. So we'll import the packages as we need them, so that you can see when we are using something more than the base functionality of Python.
Remember to run each code cell by clicking the "Run" button to the left of the code cell. Wait for the <pre>In [ ]:</pre> to change from an asterisk to a number. That is when you know the code is finished running.
## What text to explore?
As you'll see here, you can use the computer to deconstruct simple strings of letters into sentences, phrases and words. Words can be tagged with their "parts of speech"--nouns, verbs, prepositions, etc. That allows us to quantify and compare content, vocabulary, writing styles and sentiment, all things that make different disciplines unique in their forms of communication. The word clouds you worked with earlier in this lesson were created using these tools.
Although the code in this exploration will work with any PDF in which the text is encoded (i.e. not just images of pages), to get started we'll work with some geospatial science journal articles. Later you can use any PDF file you wish. We'll start with a pair of articles published in a special 10 year anniversary issue of the open *Journal of Spatial Information Science* (http://JOSIS.org). Both articles discuss how spatial information science can help us examine mobility in transportation systems. This is a research area that often requires analysis of massive datasets varying over space and time (i.e. big data!). Good fodder for cyberinfrastructure and cyber literacy for GIScience.
We'll compare these two short articles to see how similar they are:
- Martin Raubal, 2020. <u>Spatial data science for sustainable mobility</u>. JOSIS 20, pp. 109–114, doi:10.5311/JOSIS.2020.20.651
- Harvey J. Miller, 2020. <u>Movement analytics for sustainable mobility</u>. JOSIS 20, pp. 115–123 doi:10.5311/JOSIS.2020.20.663
Since these are open source we can download them directly from their URLs:
- Raubal - http://josis.org/index.php/josis/article/viewFile/651/271
- Miller - http://josis.org/index.php/josis/article/viewFile/663/279
We'll start with Raubal's article, then you can try your hand at processing Miller's article. As much as possible, this code below is generic to make it easier to run again. You'll see a few times where we've created specially named result files to be sure we've still got them when we get around to comparing the two articles at the end.
## Get the data
First we get the document we want to examine from the web using the system command *!wget*. **wget** is a software package that helps us download files from the internet. We use it here to download our PDF files. Since you want to reuse this code later, we're going to put the downloaded document into a temporary file called *article.pdf*.
Then we have to translate the PDF format into plain text using the system command *!pdftotext*, naming the output *article.txt*.
```
!wget http://josis.org/index.php/josis/article/viewFile/651/271 -O article.pdf
!pdftotext -enc ASCII7 'article.pdf' 'article.txt'
```
Finally, we need to open the text file so that we can read it and begin our analysis. The following code opens *article.txt* for "r"eading, reads the file, saves the contents in the variable *article_string*, and then closes the file.
```
file_text = open("article.txt", "r", encoding='utf-8')
article_string = file_text.read()
file_text.close()
```
## View the data
Once you have read the data, you should look at it to make sure it is what you expected before you start your analysis.
```
print(article_string)
```
## Clean the data
You'll notice that there are a lot of "typos" in this translation from PDF to text. While we could, with some clever coding, get rid of the strange text, for now we'll just take off the top bit and the references, since they, in particular, are not very clean.
We'll do this by finding the index (location) of the words "Abstract" and "References". Using these locations we will extract the text between these start and end words (see the [start:end] piece of code below). This is sometimes called 'trimming.'
```
start = article_string.index('Abstract')
end = article_string.index('References')
article_extract = article_string[start:end]
print(article_extract)
```
# The Fun Begins!
Now, we introduce some advanced chunks of code that will enable text processing. We do not expect you to fully understand all of the details of the code, nor become an expert in this advanced technology. Just enjoy the exploration and see what makes sense to you. There are brief explanations of each code chunk that explain the key concepts involved. The code and the explanations contain technical jargon that are most likely unfamiliar to you. As you learn to communicate interdisciplinarily, it is important to reflect on these new experiences where you are exposed to unfamiliar concepts, words, and jargon. This is an opportunity to try out a new experience and hopefully pick up one or two ideas along the way. The most important aspect is to have fun!
## Start processing
To begin, we'll use a new module called TextBlob from the textblob package. Textblob is a Python package for processing textual data. For more information see the QuickStart guide: https://textblob.readthedocs.io/en/dev/quickstart.html
We have to import the module, then turn the trimmed text *article_extract* into a *textblob*, a specific data format used by TextBlob. Then we will be able to start analyzing this text using functions provided by the TextBlob module.
```
from textblob import TextBlob
tblob = TextBlob(article_extract)
```
Now we'll try some of the functions that can be applied to the textblob called *tblob*. Remember that to the computer this is just a string of characters representing text. To do textual analysis, we have to get the computer to recognize meaningful "chunks" of this text. Thus, we break the text string up into words, phrases and sentences.
Here we will focus only on words since we'd need a cleaner version of the text for more complex analysis such as phrases. Run the next code and see what textblob comes up with.
```
words = tblob.words
words
```
That's pretty impressive when you realize the computer just started with a string of characters. Let's take this to the next level, by getting the computer to decide what kinds of words each of these are. This is called Part Of Speech (POS) tagging. For more information about POS tagging, see this Wikipedia article https://en.wikipedia.org/wiki/Part-of-speech_tagging.
```
tags = tblob.tags
tags
```
This function produces a *list of tuples*. A **tuple** is an ordered collection of items (such as words) that cannot change. A **list** is an ordered collection of items that can change. Here, each tuple contains a word and its associated tag (i.e. a pair of items). Can you figure out what the tags mean?
[OK, here's the hint: NN are variations of nouns, NNP are proper nouns (names of people and places), VB are verbs, CC conjunctions, etc. ]
Next we can quantify the words this author chose to use in their article. Let's focus only on nouns (NN and NNS, singular and plural).
```
article_nouns = [word for (word, tag) in tags if tag == 'NN' or tag == 'NNS']
#returns a Python list, as indicated by the square bracket at the start.
article_nouns
```
Alright! But like any good programmer, you should scan this output to see if it looks good.
Wait! There are lots of instances of ']' and 'https' in this supposed list of nouns. Let's quickly get rid of them so they don't contaminate our results.
```
non_nouns = {']', 'https'}
all_nouns = [noun for noun in article_nouns if noun not in non_nouns]
all_nouns
```
That's better! There are still a few unique non-nouns, but we're only interested in the frequent nouns, so this is good.
Now, before we go any further, let's keep a copy of these nouns for later.
**IMPORTANT!** Read the comment in the code chunk below that instructs you to change the variable name *Raubal_nouns* when you rerun the code as instructed later.
```
#IMPORTANT! Change this variable name when you run your code again otherwise you will overwrite the file!
Raubal_nouns = all_nouns
```
OK, back to our processing. It's hard to see from this long list which nouns are the most frequently used. It is too much data. Let's see if we can get some information by visualizing this data in a word cloud. This is basically a data to information transformation through visualization, which is a lot of words to say that we are distilling a lot of data into a small amount of useful information in the form of a visual word cloud.
```
from wordcloud import WordCloud
import matplotlib.pyplot as plt
wordcloud = WordCloud(colormap='cividis', background_color='white').generate(str(all_nouns))
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
```
OK, that's interesting. Let's make a copy of that word cloud for later...
**IMPORTANT!** Again, pay attention to the important note to change the variable name when rerunning the code.
```
#IMPORTANT! Change this variable name when you run the code again!
Raubal_wordcloud = wordcloud
```
But there are a lot of words in this visualization and it's hard to see which words are, say, the top 25 most common words. Let's dig into this deeper with a bit of computation.
We can start by counting up how many times each of these nouns was used. There are plenty of ways to do this frequency count. We'll use a **for loop**, which will iterate over each item in our list of article_nouns. We will store the results in a Python **dictionary**, a special kind of Python data format that stores an unordered collection of items organized as pairs of keys and values. To learn more about dictionaries and loops check out the link [here](https://www.w3schools.com/python/python_dictionaries.asp).
```
#create an empty dictionary, indicated by the curly brackets
noun_count = {}
# loop over the list of nouns, identifying unique words and incrementing
# the total for repeated words
for item in all_nouns:
if item in noun_count: # We already saw this noun, so add 1
noun_count[item] += 1
else: # We have not seen this noun yet, so make the count 1
noun_count[item] = 1
#show the resulting dictionary (note the curly brackets)
noun_count
```
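As an aside, the counting loop above can be written in one line with Python's built-in `collections.Counter`, which also offers `most_common` as an alternative to the `itemgetter`-based sorting used later in this notebook. A sketch with a toy list (the words here are made up for illustration):

```python
from collections import Counter

# Counter does the same unique-word tally as the loop above
toy_nouns = ["data", "science", "data", "mobility", "data", "science"]
noun_count = Counter(toy_nouns)
print(noun_count)                  # Counter({'data': 3, 'science': 2, 'mobility': 1})
print(noun_count.most_common(2))   # [('data', 3), ('science', 2)]
```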
Cool! Of course there are a lot of words used only once (after all that's good grammatical style), so let's create a new dictionary of only the top 25 words.
While the code chunk below is deceptively simple, it's pretty sophisticated. See if this description makes sense to you. Deconstruct the code from the inside out.
- We use a module from the package *operator* called **itemgetter**. Remember that dictionaries are composed of pairs of keys (also called items, here it's the nouns) and values (the counts). The module itemgetter will sequentially get the value for each item from our dictionary.
- That result is sent to the **sorted** function to sort all of the items in our dictionary by value.
- Then the result of the **sorted** function is transformed back to a dictionary using the **dict** function. This will create a *topN* dictionary containing the N most common words in our article.
```
N = 25 #sets the number of items to extract
from operator import itemgetter
topN = dict(sorted(noun_count.items(), key = itemgetter(1), reverse = True)[:N])
topN
```
That's it! We've got a sorted dictionary of the top 25 nouns in the article.
OK, let's save this result and then you can run the other article!
```
#IMPORTANT! Change this file name when you run again!
Raubal_top25 = topN
```
# Process the other article
It's your turn to run the Miller article through the same analysis. Start at the beginning again, running each code cell, being sure to make the necessary changes to process the Miller article and not overwrite the key Raubal files. When you've generated 'Miller_nouns', 'Miller_wordcloud', and 'Miller_top25', you'll be ready for the next step.
GO BACK TO THE TOP!
_______________________________________________________________
# Compare the articles
OK, now you've got two sets of files, Raubal's and Miller's. First, let's look at the two word clouds.
```
plt.imshow(Raubal_wordcloud, interpolation='bilinear')
plt.axis("off")
plt.imshow(Miller_wordcloud, interpolation='bilinear')
plt.axis("off")
```
What differences and similarities do you see?
Finally, let's look at the word counts in a table. For visualization purposes we can put the two top 25 lists into a single pandas dataframe.
```
import pandas
Raubal_top25_df = pandas.DataFrame(list(Raubal_top25.items()),columns = ['noun','count'])
Miller_top25_df = pandas.DataFrame(list(Miller_top25.items()),columns = ['noun','count'])
result = pandas.concat([Raubal_top25_df, Miller_top25_df], axis=1).reindex(Raubal_top25_df.index)
result
```
What do these two lists tell you about differences in the articles? Can you spot some "words" that shouldn't be in this list? Oops, that would call for a bit more cleaning up, but for now, we're good!
# Congratulations!
**You have finished an Hour of CI!**
But, before you go ...
1. Please fill out a very brief questionnaire to provide feedback and help us improve the Hour of CI lessons. It is fast and your feedback is very important to let us know what you learned and how we can improve the lessons in the future.
2. If you would like a certificate, then please type your name below and click "Create Certificate" and you will be presented with a PDF certificate.
<font size="+1"><a style="background-color:blue;color:white;padding:12px;margin:10px;font-weight:bold;" href="https://forms.gle/JUUBm76rLB8iYppN7">Take the questionnaire and provide feedback</a></font>
```
# This code cell has a tag "Hide" (Setting by going to Toolbar > View > Cell Toolbar > Tags)
# Code input is hidden when the notebook is loaded and can be hide/show using the toggle button "Toggle raw code" at the top
# This code cell loads the Interact Textbox that will ask users for their name
# Once they click "Create Certificate" then it will add their name to the certificate template
# And present them a PDF certificate
from PIL import Image
from PIL import ImageFont
from PIL import ImageDraw
from ipywidgets import interact
def make_cert(learner_name):
cert_filename = 'hourofci_certificate.pdf'
img = Image.open("../../supplementary/hci-certificate-template.jpg")
draw = ImageDraw.Draw(img)
    try:
        cert_font = ImageFont.truetype('times.ttf', 150)
    except OSError:
        cert_font = ImageFont.load_default()  # fall back if times.ttf is unavailable
    # font.getsize was removed in Pillow 10; measure the text via textbbox instead
    left, top, right, bottom = draw.textbbox((0, 0), learner_name, font=cert_font)
    w, h = right - left, bottom - top
draw.text( xy = (1650-w/2,1100-h/2), text = learner_name, fill=(0,0,0),font=cert_font)
img.save(cert_filename, "PDF", resolution=100.0)
return cert_filename
interact_cert=interact.options(manual=True, manual_name="Create Certificate")
@interact_cert(name="Your Name")
def f(name):
print("Congratulations",name)
filename = make_cert(name)
print("Download your certificate by clicking the link below.")
```
<font size="+1"><a style="background-color:blue;color:white;padding:12px;margin:10px;font-weight:bold;" href="hourofci_certificate.pdf">Download your certificate</a></font>
# SD211 TP1: Recommender Systems
*<p>Author: Pengfei MI</p>*
*<p>Date: 05/05/2017</p>*
```
# Import the libraries we will use.
# from movielens_utils import *
import numpy as np
from scipy import sparse
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from time import time
from scipy.sparse.linalg import svds
from scipy.optimize import check_grad
from scipy.optimize import line_search
def load_movielens(filename, minidata=False):
"""
    This function reads the file `filename` from the Movielens
    database, for example
    filename = '~/datasets/ml-100k/u.data'
    It returns
    R : a user-item matrix containing the ratings
    mask : a matrix equal to 1 where a rating exists and 0 otherwise
"""
data = np.loadtxt(filename, dtype=int)
R = sparse.coo_matrix((data[:, 2], (data[:, 0]-1, data[:, 1]-1)),
dtype=float)
R = R.toarray() # not optimized for big data
    # encode the indicator function 1_K
mask = sparse.coo_matrix((np.ones(data[:, 2].shape),
(data[:, 0]-1, data[:, 1]-1)), dtype=bool )
mask = mask.toarray() # not optimized for big data
if minidata is True:
R = R[0:100, 0:200].copy()
mask = mask[0:100, 0:200].copy()
return R, mask
```
## Part 1: Presentation of the Model
$\textbf{Question 1.1}\quad \text{Retrieve the Movielens database.}$
```
filename = "ml-100k/u.data"
R, mask = load_movielens(filename, minidata=False)
print(R.shape)
```
<div class="alert alert-success">
<p>
The $minidata$ option reduces the size of the dataset. If $True$ is passed for the $minidata$ parameter, the function returns the ratings of the first 100 users for the first 200 movies.
</p>
</div>
$\textbf{Question 1.2}\quad \text{Investigating the database.}$
```
print(R.shape, mask.sum())
```
<div class="alert alert-success">
<p>
The database contains 943 users and 1682 movies, with 100,000 ratings in total.
</p>
</div>
$\textbf{Question 1.3}\quad \text{Investigating the objective function.}$
<div class="alert alert-success">
a)
<p>
The function is not convex. To show this, we only need to exhibit a counterexample.
</p>
<p>
Let $|U| = |C| = 1$; the objective then becomes a function over $\mathbb{R\times R}$, i.e. $g(x, y) = \frac{1}{2}(1-xy)^2 + \frac{\rho}{2}(x^2+y^2)$.
</p>
<p>
We obtain the following plot of the function:
</p>
</div>
```
fig = plt.figure(1)
ax = fig.add_subplot(projection='3d')  # Axes3D(fig) is deprecated in recent matplotlib
rho = 0.2
X = np.arange(-10, 10, 0.1)
Y = np.arange(-10, 10, 0.1)
X, Y = np.meshgrid(X, Y)
Z = (1 - X*Y)**2/2. + rho/2.*(X**2 + Y**2)
ax.plot_surface(X, Y, Z, cmap='rainbow')
plt.show()
```
<div class="alert alert-success">
<p>
We plot the value of this function along the line $x+y=10$ within its domain.
</p>
</div>
```
X = np.arange(0, 10.01, 0.01)
Y = 10 - X
Z = (1 - X*Y)**2/2. + rho/2.*(X**2 + Y**2)
plt.figure(2, figsize=(8,6))
plt.plot(X, Z)
plt.xlabel("$x$")
plt.ylabel("$g(x, 10-x)$")
plt.xlim(0, 10)
plt.show()
```
<div class="alert alert-success">
<p>
Let $p_1=(0,10)$ and $p_2=(10,0)$; then $p_3=(5,5)=0.5\,p_1+(1-0.5)\,p_2$.
</p>
</div>
```
g1 = (1 - 0*10)**2/2. + rho/2.*(0**2 + 10**2)
g2 = (1 - 10*0)**2/2. + rho/2.*(10**2 + 0**2)
g3 = (1 - 5*5)**2/2. + rho/2.*(5**2 + 5**2)
print "g(p1) = %f" % g1
print "g(p2) = %f" % g2
print "g(p3) = %f" % g3
```
<div class="alert alert-success">
<p>
Since $g(0.5\,p_1+0.5\,p_2) = g(p_3) > 0.5\,g(p_1)+0.5\,g(p_2)$, the convexity inequality is violated, so the objective function is not convex.
</p>
</div>
<div class="alert alert-success">
b)
<p>Let $g$ denote the objective function. We have:</p>
<p>$g(P,Q) = \frac{1}{2}\|1_K\circ(R-QP)\|_F^2 + \frac{\rho}{2}\|Q\|_F^2 + \frac{\rho}{2}\|P\|_F^2$</p>
<p>Its gradients with respect to $P$ and $Q$ are:</p>
<p>$\nabla_P g = -Q^T(1_K\circ(R-QP)) + \rho P$</p>
<p>$\nabla_Q g = -(1_K\circ(R-QP))P^T + \rho Q$</p>
</div>
<div class="alert alert-success">
c)
<p>Neither gradient is globally Lipschitz.</p>
<p>Using the same example, $g(x, y) = \frac{1}{2}(1-xy)^2 + \frac{\rho}{2}(x^2+y^2)$, we have:</p>
$$\frac{\partial g}{\partial x} = (y^2+\rho)x - y$$
$$\frac{\partial^2 g}{\partial x^2} = y^2 + \rho$$
<p>As $y \rightarrow \infty$, $\frac{\partial^2 g}{\partial x^2} \rightarrow \infty$, so $\nabla_P g$ is not Lipschitz. In the same way, $\nabla_Q g$ is not Lipschitz.</p>
</div>
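This blow-up can also be seen numerically. A small sketch with the same toy function and ρ = 0.2 assumed: the slope of the map x ↦ ∂g/∂x equals y² + ρ, which grows without bound as |y| increases.

```python
import numpy as np

rho = 0.2

def grad_x(x, y):
    # partial derivative of g(x, y) = (1 - x*y)^2 / 2 + rho/2 * (x^2 + y^2)
    return (y ** 2 + rho) * x - y

# The local Lipschitz constant of x -> grad_x(x, y) is y^2 + rho:
# it grows without bound, so the gradient is not globally Lipschitz.
for y in [1.0, 10.0, 100.0]:
    slope = (grad_x(1.0, y) - grad_x(0.0, y)) / 1.0
    print("y = %6.1f  ->  slope = %.1f" % (y, slope))
```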
## Part 2: Finding $P$ when $Q_0$ is fixed
$\textbf{Question 2.1}\quad \text{Investigating the simplified objective function.}$
<div class="alert alert-success">
<p>The simplified objective function is convex.</p>
<p>Its gradient is $\nabla g(P) = -(Q^0)^T(1_K\circ(R-Q^0P)) + \rho P$, and we have:</p>
<p>$\nabla^2 g(P) = (Q^0)^TQ^0 + \rho I_{|C|}$; this matrix is positive definite, so $g(P)$ is convex.</p>
</div>
$\textbf{Question 2.2}\quad \text{Compute the gradient.}$
<div class="alert alert-success">
<p>We use the result of the previous question to implement the function.</p>
</div>
```
def objective(P, Q0, R, mask, rho):
"""
    The objective function of the simplified problem.
    Inputs:
        P : the matrix variable, of size C x I
        Q0 : a matrix of size U x C
        R : a matrix of size U x I
        mask : a 0-1 matrix of size U x I
        rho : a nonnegative real number
    Outputs:
        val : the value of the function
        grad_P : the gradient with respect to P
"""
tmp = (R - Q0.dot(P)) * mask
val = np.sum(tmp ** 2)/2. + rho/2.*(np.sum(Q0.T**2) + np.sum(P**2))
grad_P = -Q0.T.dot(tmp) + rho*P
return val, grad_P
# Initialize the variables
rho = 0.2
CC = 7
R, mask = load_movielens(filename, minidata=False)
u, i = R.shape
U, S, V_T = svds(R, k=CC, return_singular_vectors=True)
def funcObj(P_vec, Q0, R, mask, rho, c):
    P = P_vec.reshape(c, i)
    val, grad_P = objective(P, Q0, R, mask, rho)
    return val

def gradObj(P_vec, Q0, R, mask, rho, c):
    P = P_vec.reshape(c, i)
    val, grad_P = objective(P, Q0, R, mask, rho)
    return grad_P.ravel()
t0 = time()
print("The difference of gradient is %f" % check_grad(funcObj, gradObj, np.zeros_like(V_T).ravel(), U, R, mask, rho, CC))
print("Done in %0.3fs." % (time()-t0))
```
$\textbf{Question 2.3}\quad \text{Minimizing a function }g\text{ with the gradient method.}$
```
def gradient(g, P0, gamma, epsilon):
    P = P0
    val, grad_P = g(P, U, R, mask, rho)
    cnt = 0
    while (np.sum(grad_P**2) > epsilon**2):
        P = P - gamma*grad_P
        val, grad_P = g(P, U, R, mask, rho)
        cnt = cnt + 1
    return val, P, cnt
```
$\textbf{Question 2.4}\quad \text{Minimize the function }g\text{ down to precision }\epsilon\text{ = 1.}$
<div class="alert alert-success">
<p>We compute an SVD and use the first $|C|$ left singular vectors as the value of $Q_0$, and the zero matrix of size $|C|\times|I|$ as the initial value of $P$.</p>
<p>We compute the Lipschitz constant $L$ of $\nabla_g(P)$ and use $\frac{1}{L}$ as the step size.</p>
</div>
```
R, mask = load_movielens(filename, minidata=False)
U, S, V_T = svds(R, k=CC, return_singular_vectors=True)
t0 = time()
L = rho + np.sqrt(np.sum(U.T**2))
gamma = 1./L
val_grad, P, cnt_grad = gradient(objective, np.zeros_like(V_T), gamma, 1)
print("The minimal value of the objective function is %f" % val_grad)
print("The solution matrix P is:")
print(P)
t_grad = time() - t0
print("Done in %0.3fs, number of iterations: %d." % (t_grad, cnt_grad))
```
## Part 3: Algorithmic refinements for the problem with $Q_0$ fixed
$\textbf{Question 3.1}\quad\text{Add a line-search method to the gradient method.}$
<div class="alert alert-success">
<p>The exact solution of a line-search problem is the solution of the optimization problem:</p>
$$\gamma_k = \arg\,\underset{\gamma\in\mathbb{R}_+}{\min}\, g(P^k-\gamma\nabla_g(P^k))$$
<p>In practice, we use an Armijo line search. Writing $P^+(\gamma_k)=P^k - \gamma_k\nabla_g(P^k)$, we look for the first integer $l$ such that:</p>
$$g(P^+(ba^l)) \leq g(P^k) + \beta\langle\nabla_g(P^k),\,P^+(ba^l)-P^k\rangle$$
<p>We set $a = 0.5$, $b = 0.5$, $\beta = 0.5$; this is equivalent to the Taylor line search.</p>
</div>
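As a self-contained illustration of the Armijo loop (on a hypothetical one-dimensional quadratic, not the notebook's objective), the step is shrunk by the factor $a$ until the sufficient-decrease condition above holds.

```python
# Self-contained illustration of Armijo backtracking on a hypothetical
# one-dimensional quadratic (not the notebook's objective).
def armijo_step(g, grad_g, x, a=0.5, b=1.0, beta=0.5):
    """Return gamma = b * a**l for the first l meeting the Armijo condition."""
    gx, dx = g(x), grad_g(x)
    gamma = b
    # Shrink until g(x - gamma*dx) <= g(x) - beta*gamma*||grad||^2
    while g(x - gamma * dx) > gx - beta * gamma * dx**2:
        gamma *= a
    return gamma

g = lambda x: 2.0 * x**2        # toy objective, minimized at 0
grad_g = lambda x: 4.0 * x
gamma = armijo_step(g, grad_g, x=1.0)
print(gamma)                    # 0.25 for this quadratic
```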
```
def gradient_LS(func, P0, a, b, beta, epsilon):
    P = P0
    val, grad_P = func(P, U, R, mask, rho)
    grad_norm_square = np.sum(grad_P**2)
    gamma = b
    cnt = 0
    while (grad_norm_square > epsilon**2):
        gamma = 2*gamma
        val_, grad_P_ = func(P - gamma*grad_P, U, R, mask, rho)
        while (val_ > val - gamma*beta*grad_norm_square):
            gamma = gamma*a
            val_, grad_P_ = func(P - gamma*grad_P, U, R, mask, rho)
        P = P - gamma*grad_P
        val = val_
        grad_P = grad_P_
        grad_norm_square = np.sum(grad_P**2)
        cnt = cnt + 1
    return val, P, cnt
R, mask = load_movielens(filename, minidata=False)
U, S, V_T = svds(R, k=CC, return_singular_vectors=True)
t0 = time()
a = 0.5
b = 0.5
beta = 0.5
val_ls, P, cnt_ls = gradient_LS(objective, np.zeros_like(V_T), a, b, beta, 1)
print("The minimal value of the objective function is %f" % val_ls)
print("The solution matrix P is:")
print(P)
t_ls = time() - t0
print("Done in %0.3fs, number of iterations: %d." % (t_ls, cnt_ls))
```
$\textbf{Question 3.2}\quad\text{Use the conjugate gradient method for this problem.}$
<div class="alert alert-success">
<p>We saw in the previous questions that $\nabla_g^2(P) = (Q^0)^TQ^0 + \rho I_{|C|}$ is a symmetric positive definite matrix, so the conjugate gradient method can be applied to this problem.</p>
<p>In the code below, to find an exact solution in the line-search step, we use the $line\_search$ function provided by SciPy.</p>
</div>
```
shape = V_T.shape

def funcObj(P_vec):
    P = P_vec.reshape(shape)
    val = np.sum(((R - U.dot(P))*mask)**2)/2. + rho/2.*(np.sum(U.T**2) + np.sum(P**2))
    return val

def gradObj(P_vec):
    P = P_vec.reshape(shape)
    grad_P = -U.T.dot(mask*(R - U.dot(P))) + rho*P
    return grad_P.ravel()

def gradient_cg(func, grad, P0, epsilon):
    P = P0
    val = func(P)
    grad_P = grad(P)
    grad_norm_square = np.sum(grad_P**2)
    d = -grad_P
    cnt = 0
    while (grad_norm_square > epsilon**2):
        # exact line search along d via scipy's line_search
        alpha, fc, gc, val, old_fval, new_slope = line_search(func, grad, xk=P, pk=d)
        P = P + alpha*d
        grad_P = grad(P)
        grad_norm_square_ = np.sum(grad_P**2)
        beta = grad_norm_square_/grad_norm_square   # Fletcher-Reeves coefficient
        d = -grad_P + beta*d
        grad_norm_square = grad_norm_square_
        cnt = cnt + 1
    return val, P.reshape(shape), cnt
R, mask = load_movielens(filename, minidata=False)
U, S, V_T = svds(R, k=CC, return_singular_vectors=True)
t0 = time()
val_cg, P, cnt_cg = gradient_cg(funcObj, gradObj, np.zeros(shape).ravel(), 1)
print("The minimal value of the objective function is %f" % val_cg)
print("The solution matrix P is:")
print(P)
t_cg = time() - t0
print("Done in %0.3fs, number of iterations: %d." % (t_cg, cnt_cg))
```
$\textbf{Question 3.3}\quad\text{Compare the performance of the three algorithms.}$
```
index = np.arange(3)
bar_width = 0.5
methods = ["gradient", "gradient_ls", "gradient_cg"]
times = [t_grad, t_ls, t_cg]
values = [val_grad, val_ls, val_cg]
cnts = [cnt_grad, cnt_ls, cnt_cg]
plt.figure(3, figsize=(8,6))
plt.title("Function value of the three methods")
plt.bar(index, values, bar_width)
plt.xticks(index, methods)
plt.xlabel("Method")
plt.ylabel("Value")
plt.show()
plt.figure(4, figsize=(8,6))
plt.title("Running time of the three methods")
plt.bar(index, times, bar_width)
plt.xticks(index, methods)
plt.xlabel("Method")
plt.ylabel("Time")
plt.show()
plt.figure(5, figsize=(8,6))
plt.title("Number of iterations of the three methods")
plt.bar(index, cnts, bar_width)
plt.xticks(index, methods)
plt.xlabel("Method")
plt.ylabel("Number of iterations")
plt.show()
```
<div class="alert alert-success">
<p>The minimal values found by the three algorithms are very close; the one found by the conjugate gradient method is slightly better. Comparing the matrices $P$ they return, they in fact find the same solution.</p>
<p>The running time of the gradient method with line search is the best of the three. Keep in mind that the implementation details of the functions differ.</p>
<p>The line-search and conjugate-gradient methods need fewer iterations than the plain gradient method.</p>
</div>
## Part 4: Solving the full problem
$\textbf{Question 4.1}\quad\text{Solve the full problem with the gradient method with line search.}$
<div class="alert alert-success">
<p>We stack $P$ and $Q$ into a single variable, then apply the method implemented in the previous question.</p>
</div>
```
def total_objective(P, Q, R, mask, rho):
    """
    Objective function of the full problem.
    Inputs:
        P    : matrix variable of size C x I
        Q    : matrix variable of size U x C
        R    : a matrix of size U x I
        mask : a 0-1 matrix of size U x I
        rho  : a non-negative real number
    Outputs:
        val    : the value of the function
        grad_P : the gradient with respect to P
        grad_Q : the gradient with respect to Q
    """
    tmp = (R - Q.dot(P)) * mask
    val = np.sum(tmp**2)/2. + rho/2.*(np.sum(Q**2) + np.sum(P**2))
    grad_P = -Q.T.dot(tmp) + rho*P
    grad_Q = -tmp.dot(P.T) + rho*Q
    return val, grad_P, grad_Q

def total_objective_vectorized(PQvec, R, mask, rho):
    """
    Vectorized version of the previous function, so that the
    gradient function does not have to be rewritten.
    """
    # rebuild P and Q
    n_items = R.shape[1]
    n_users = R.shape[0]
    F = PQvec.shape[0] // (n_items + n_users)
    Pvec = PQvec[0:n_items*F]
    Qvec = PQvec[n_items*F:]
    P = np.reshape(Pvec, (F, n_items))
    Q = np.reshape(Qvec, (n_users, F))
    val, grad_P, grad_Q = total_objective(P, Q, R, mask, rho)
    return val, np.concatenate([grad_P.ravel(), grad_Q.ravel()])
def gradient_LS(func, PQ0, a, b, beta, epsilon):
    PQ = PQ0
    val, grad = func(PQ, R, mask, rho)
    grad_norm_square = np.sum(grad**2)
    gamma = b
    cnt = 0
    while (grad_norm_square > epsilon**2):
        gamma = 2 * gamma
        PQ_ = PQ - gamma*grad
        val_, grad_ = func(PQ_, R, mask, rho)
        while (val_ > val - gamma*beta*grad_norm_square):
            gamma = gamma*a
            PQ_ = PQ - gamma*grad
            val_, grad_ = func(PQ_, R, mask, rho)
        PQ = PQ_
        val = val_
        grad = grad_
        grad_norm_square = np.sum(grad**2)
        cnt = cnt + 1
    return val, PQ, cnt
R, mask = load_movielens(filename, minidata=False)
U, S, V_T = svds(R, k=CC, return_singular_vectors=True)
t0 = time()
a = 0.5
b = 1
beta = 0.5
val_grad, PQvec, cnt_grad = gradient_LS(total_objective_vectorized, np.concatenate([V_T.ravel(), U.ravel()]), a, b, beta, 100)
n_items = R.shape[1]
n_users = R.shape[0]
F = PQvec.shape[0] // (n_items + n_users)
Pvec = PQvec[0:n_items*F]
Qvec = PQvec[n_items*F:]
P_grad = np.reshape(Pvec, (F, n_items))
Q_grad = np.reshape(Qvec, (n_users, F))
print("The minimal value of the objective function is %f" % val_grad)
print("The solution matrix P is:")
print(P_grad)
print("The solution matrix Q is:")
print(Q_grad)
t_grad = time() - t0
print("Done in %0.3fs, number of iterations: %d." % (t_grad, cnt_grad))
```
$\textbf{Question 4.2}\quad\text{Show that the objective value decreases at every iteration. Deduce that it converges.}$
<div class="alert alert-success">
<p>In Question 2.1 we showed that when $Q$ (resp. $P$) is fixed, the function $g(P)$ (resp. $g(Q)$) is convex, and each partial step minimizes the objective in one block of variables. Hence every step within an iteration can only decrease the value of the objective function.</p>
<p>The sequence of objective values is therefore non-increasing, and it is bounded below by $0$ since $g \geq 0$. A monotone bounded sequence converges, so the objective value converges.</p>
</div>
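The monotone decrease can be checked on a toy instance (a hypothetical $1\times 1$ problem, not the MovieLens data): each exact partial minimization can only lower $g$, so the objective values form a non-increasing sequence.

```python
# Toy illustration (hypothetical 1x1 problem): alternately minimizing
# g(p, q) = 0.5*(r - q*p)**2 + rho/2*(p**2 + q**2) exactly in p, then
# exactly in q. Each exact partial minimization can only lower g, so the
# objective values are non-increasing and bounded below by 0.
r, rho = 3.0, 0.2
g = lambda p, q: 0.5 * (r - q * p) ** 2 + rho / 2 * (p**2 + q**2)

p, q = 0.0, 1.0
vals = [g(p, q)]
for _ in range(20):
    p = q * r / (q**2 + rho)   # argmin of g over p with q fixed
    q = p * r / (p**2 + rho)   # argmin of g over q with p fixed
    vals.append(g(p, q))

print(vals[0], vals[-1])
```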
$\textbf{Question 4.3}\quad\text{Implement the alternating least squares method.}$
<div class="alert alert-success">
<p>
Write $P=(\mathbf{p}_1,...,\mathbf{p}_{|I|})$, $Q=(\mathbf{q}_1,...,\mathbf{q}_{|U|})^T$, where $\mathbf{p}_i, \mathbf{q}_u\in\mathbb{R}^{|C|}$. Let $K_u^I$ denote the set of movies rated by user $u$ and $K_i^U$ the set of users who rated movie $i$. We obtain:
</p>
$$
\begin{aligned}
g(P,Q) &= \frac{1}{2}\sum_{(u,i)\in K}(r_{ui}-\sum_{c\in C}q_{uc}p_{ci})^2 + \frac{\rho}{2}(\sum_{u,c}q_{uc}^{2}+\sum_{c,i}p_{ci}^{2}) \\
& = \frac{1}{2}\sum_{(u,i)\in K}(r_{ui}-\mathbf{q}_u^T\mathbf{p}_i)^2 + \frac{\rho}{2}(\sum_{u}\|\mathbf{q}_{u}\|^{2}+\sum_{i}\|\mathbf{p}_{i}\|^{2})\\
& = \frac{1}{2} \sum_{u\in U} \sum_{i\in K_u^I}(r_{ui}-\mathbf{q}_u^T\mathbf{p}_i)^2 + \frac{\rho}{2}(\sum_{u}\|\mathbf{q}_{u}\|^{2}+\sum_{i}\|\mathbf{p}_{i}\|^{2}) \\
& = \frac{1}{2} \sum_{i\in I} \sum_{u\in K_i^U}(r_{ui}-\mathbf{q}_u^T\mathbf{p}_i)^2 + \frac{\rho}{2}(\sum_{u}\|\mathbf{q}_{u}\|^{2}+\sum_{i}\|\mathbf{p}_{i}\|^{2})
\end{aligned}
$$
<p>
At each iteration, finding the matrix $Q$ with $P$ fixed amounts to solving a penalized least-squares problem. We compute the partial derivative with respect to $q_{uc}$ for every pair $(u, c) \in \mathbf{U \times C}.$
</p>
$$
\begin{aligned}
&\quad\frac{\partial g}{\partial q_{uc}} = 0 \\
\Rightarrow &\quad\sum_{i\in K_u^I}(\mathbf{q}_u^T\mathbf{p}_i - r_{ui})p_{ci} + \rho q_{uc} = 0 \\
\Rightarrow &\quad\sum_{i\in K_u^I}p_{ci}\mathbf{p}_i^T\mathbf{q}_u + \rho q_{uc} = \sum_{i\in K_u^I}p_{ci}r_{ui} \\
\Rightarrow &\quad(P_{K_u}P_{K_u^I}^T + \rho I_{|C|})\mathbf{q}_u = P_{K_u^I}R^T(u, K_u^I)
\end{aligned}
$$
<p>We obtain an explicit expression for $\mathbf{q}_u$ for each $u \in \mathbf{U}$ as the solution of a linear system: $\,\mathbf{q}_u = (P_{K_u^I}P_{K_u^I}^T + \rho I_{|C|})^{-1}P_{K_u^I}R^T(u, K_u^I)$, where $P_{K_u^I}$ is the submatrix of $P$ obtained by selecting the columns $i \in K_u^I$, $I_{|C|}$ is the identity matrix of size $|C|\times|C|$, and $R(u, K_u^I)$ is the row vector obtained by selecting the entries $i \in K_u^I$ of row $u$ of the matrix $R$.
</p>
<p>
Similarly, $\mathbf{p}_i = (Q_{K_i^U}^TQ_{K_i^U} + \rho I_{|C|})^{-1}Q_{K_i^U}^TR(K_i^U,i)$, where $Q_{K_i^U}$ is the submatrix of $Q$ obtained by selecting the rows $u \in K_i^U$, $I_{|C|}$ is the identity matrix of size $|C|\times|C|$, and $R(K_i^U,i)$ is the column vector obtained by selecting the entries $u \in K_i^U$ of column $i$ of the matrix $R$.
</p>
</div>
```
def ALS(P0, Q0, R, mask, rho, tol=1, max_iter=1e4):
    u, i = R.shape
    mask_i = [mask[:, k].ravel() for k in range(i)]
    mask_u = [mask[k, :].ravel() for k in range(u)]
    R_i = [R[mask_i[k], k] for k in range(i)]
    R_u = [R[k, mask_u[k]] for k in range(u)]
    P = P0
    Q = Q0
    val = 1e10
    delta_val = 1e10
    t = 0
    while (delta_val >= tol and t < max_iter):
        for k in range(i):
            Q_i = Q[mask_i[k], :]
            P[:, k] = np.linalg.solve(Q_i.T.dot(Q_i) + rho*np.identity(CC), Q_i.T.dot(R_i[k]))
        for k in range(u):
            P_u = P[:, mask_u[k]]
            Q[k, :] = np.linalg.solve(P_u.dot(P_u.T) + rho*np.identity(CC), P_u.dot(R_u[k].T))
        val_ = np.sum((mask*(R - Q.dot(P)))**2)/2. + rho/2.*(np.sum(Q**2) + np.sum(P**2))
        delta_val = val - val_
        val = val_
        t = t+1
    return val, P, Q, t
R, mask = load_movielens(filename, minidata=False)
U, S, V_T = svds(R, k=CC, return_singular_vectors=True)
t0 = time()
P0 = V_T
Q0 = U
tol = 1
max_iter = 1e4
val_als, P_als, Q_als, cnt_als = ALS(P0, Q0, R, mask, rho, tol, max_iter)
print("The minimal value of the objective function is %f" % val_als)
print("The solution matrix P is:")
print(P_als)
print("The solution matrix Q is:")
print(Q_als)
t_als = time() - t0
print("Done in %0.3fs, number of iterations: %d." % (t_als, cnt_als))
```
$\textbf{Question 4.4}\quad\text{Compare the gradient method with line search and the alternating least squares method.}$
```
index = np.arange(2)
bar_width = 0.5
methods = ["gradient", "ALS"]
times = [t_grad, t_als]
values = [val_grad, val_als]
cnts = [cnt_grad, cnt_als]
plt.figure(6, figsize=(8,6))
plt.title("Function value of the two methods")
plt.bar(index, values, bar_width)
plt.xticks(index, methods)
plt.xlabel("Method")
plt.ylabel("Value")
plt.show()
plt.figure(7, figsize=(8,6))
plt.title("Running time of the two methods")
plt.bar(index, times, bar_width)
plt.xticks(index, methods)
plt.xlabel("Method")
plt.ylabel("Time")
plt.show()
plt.figure(8, figsize=(8,6))
plt.title("Number of iterations of the two methods")
plt.bar(index, cnts, bar_width)
plt.xticks(index, methods)
plt.xlabel("Method")
plt.ylabel("Number of iterations")
plt.show()
```
<div class="alert alert-success">
<p>Comparing the matrices $P$ and $Q$ obtained by the two algorithms, we can conclude that the solutions they find are different. The reconstructed matrices $\hat{R}$ differ as well.</p>
<p>The objective values also differ; the one given by the ALS algorithm is better.</p>
<p>In terms of number of iterations, the gradient method with line search is faster, although the implementation details could be improved.</p>
</div>
$\textbf{Question 4.5}\quad\text{Which movie would you recommend to user 449?}$
<div class="alert alert-success">
<p>We rebuild the matrix $\hat{R}$ from the $P^*$ and $Q^*$ obtained in the previous questions and sort row 449.</p>
</div>
```
R_grad = Q_grad.dot(P_grad)
score = R_grad[449, :].ravel()
rank = (-score).argsort()
print("-----------------------------------------------------------------------")
print("By the gradient method, the 10 favorite movies of user 449 are:")
for i in range(10):
    print("%d: movie %d, score %f" % (i+1, rank[i], score[rank[i]]))

R_als = Q_als.dot(P_als)
score = R_als[449, :].ravel()
rank = (-score).argsort()
print("-----------------------------------------------------------------------")
print("By the ALS method, the 10 favorite movies of user 449 are:")
for i in range(10):
    print("%d: movie %d, score %f" % (i+1, rank[i], score[rank[i]]))
```
<div class="alert alert-success">
<p>According to both methods we used, the movie user 449 prefers most is movie 317.</p>
</div>
# $\mathbf{Bibliography}$
Zhou, Yunhong, et al. "Large-scale parallel collaborative filtering for the netflix prize." International Conference on Algorithmic Applications in Management. Springer Berlin Heidelberg, 2008.
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content-dl/blob/main/tutorials/W2D1_ConvnetsAndRecurrentNeuralNetworks/student/W2D1_Tutorial3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Neuromatch Academy: Week 2, Day 1, Tutorial 3
# Day 1: Parameter Sharing (CNNs and RNNs)
__Content creators:__ Alona Fyshe, Dawn McKnight, Richard Gerum, Cassidy Pirlot, Rohan Saha, Liam Peet-Pare, Saeed Najafi
__Content reviewers:__ Saeed Salehi, Lily Cheng, Yu-Fang Yang, Polina Turishcheva
__Production editors:__ Anmol Gupta, Spiros Chavlis
__Based on material from:__ Konrad Kording, Hmrishav Bandyopadhyay, Rahul Shekhar, Tejas Srivastava
---
# Tutorial Objectives
At the end of this tutorial, we will be able to:
- Understand the structure of a Recurrent Neural Network (RNN)
- Build a simple RNN model
```
#@markdown Tutorial slides
# you should link the slides for all tutorial videos here (we will store pdfs on osf)
from IPython.display import HTML
HTML('<iframe src="https://docs.google.com/presentation/d/1jrKnXGoCXovB5pMGtnsF0ICxr1Cu_vDk/edit#slide=id.p1" frameborder="100" width="960" height="569" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe>')
#@title Dependencies
!pip install livelossplot --quiet
# Imports
import torch
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import HTML
from tqdm.notebook import tqdm, trange
from time import sleep
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device, torch.get_num_threads()
!mkdir images
!wget "https://raw.githubusercontent.com/dem1995/generally_available_files/main/chicago_skyline_shrunk_v2.bmp"
!mv *.bmp images
# @title Figure Settings
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
fig_w, fig_h = (8, 6)
plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})
plt.rcParams["mpl_toolkits.legacy_colorbar"] = False
import warnings
warnings.filterwarnings("ignore", category=UserWarning, module="matplotlib")
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/"
"course-content/master/nma.mplstyle")
# @title Set seed for reproducibility
seed = 2021
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
np.random.seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
import random

def seed_worker(worker_id):
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

print('Seed has been set.')
```
# Section 1: Recurrent Neural Networks (RNNs)
```
#@title Video 1: Intro to RNNs
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="0Majex-aF0E", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```
RNNs are compact models that operate over time series and have the ability to remember past input. They also save parameters by using the same weights at every time step. If you've heard of Transformers, those models don't have this kind of temporal weight sharing, and so they are *much* larger.
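A minimal sketch of this weight sharing in plain NumPy (hypothetical sizes, biases omitted): the same two matrices are applied at every time step, so the parameter count does not depend on the sequence length.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, T = 4, 8, 10        # hypothetical sizes

# One pair of weight matrices, reused at every time step.
W_xh = rng.normal(size=(n_hidden, n_in))
W_hh = rng.normal(size=(n_hidden, n_hidden))

def rnn_forward(xs):
    h = np.zeros(n_hidden)
    for x in xs:                    # same W_xh and W_hh at each of the T steps
        h = np.tanh(W_xh @ x + W_hh @ h)
    return h

xs = rng.normal(size=(T, n_in))
h_T = rnn_forward(xs)

# The parameter count is fixed regardless of the sequence length T.
n_params = W_xh.size + W_hh.size    # 8*4 + 8*8 = 96
print(n_params)
```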
The code below is adapted from https://github.com/spro/char-rnn.pytorch
```
#@title RNN framework (run me)
!pip install unidecode
!wget --output-document=/content/sample_data/twain.txt https://github.com/amfyshe/amfyshe.github.io/blob/master/twain.txt
# https://github.com/spro/char-rnn.pytorch
import torch
import torch.nn as nn
from torch.autograd import Variable
class CharRNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, model="gru", n_layers=1):
        super(CharRNN, self).__init__()
        self.model = model.lower()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.n_layers = n_layers

        self.encoder = nn.Embedding(input_size, hidden_size)
        if self.model == "gru":
            self.rnn = nn.GRU(hidden_size, hidden_size, n_layers)
        elif self.model == "lstm":
            self.rnn = nn.LSTM(hidden_size, hidden_size, n_layers)
        elif self.model == "rnn":
            self.rnn = nn.RNN(hidden_size, hidden_size, n_layers)
        self.decoder = nn.Linear(hidden_size, output_size)

    def forward(self, input, hidden):
        batch_size = input.size(0)
        encoded = self.encoder(input)
        output, hidden = self.rnn(encoded.view(1, batch_size, -1), hidden)
        output = self.decoder(output.view(batch_size, -1))
        return output, hidden

    def init_hidden(self, batch_size):
        if self.model == "lstm":
            return (Variable(torch.zeros(self.n_layers, batch_size, self.hidden_size)),
                    Variable(torch.zeros(self.n_layers, batch_size, self.hidden_size)))
        return Variable(torch.zeros(self.n_layers, batch_size, self.hidden_size))
#@title Helpers (run me)
# https://github.com/spro/char-rnn.pytorch
import unidecode
import string
import random
import time
import math
import torch
# Reading and un-unicode-encoding data
all_characters = string.printable
n_characters = len(all_characters)
def read_file(filename):
    file = unidecode.unidecode(open(filename).read())
    return file, len(file)

# Turning a string into a tensor
def char_tensor(string):
    tensor = torch.zeros(len(string)).long()
    for c in range(len(string)):
        try:
            tensor[c] = all_characters.index(string[c])
        except ValueError:
            continue
    return tensor

# Readable time elapsed
def time_since(since):
    s = time.time() - since
    m = math.floor(s / 60)
    s -= m * 60
    return '%dm %ds' % (m, s)

def generate(decoder, prime_str='A', predict_len=100, temperature=0.8, cuda=False):
    hidden = decoder.init_hidden(1)
    prime_input = Variable(char_tensor(prime_str).unsqueeze(0))
    if cuda:
        hidden = hidden.cuda()
        prime_input = prime_input.cuda()
    predicted = prime_str

    # Use priming string to "build up" hidden state
    for p in range(len(prime_str) - 1):
        _, hidden = decoder(prime_input[:, p], hidden)
    inp = prime_input[:, -1]

    for p in range(predict_len):
        output, hidden = decoder(inp, hidden)

        # Sample from the network as a multinomial distribution
        output_dist = output.data.view(-1).div(temperature).exp()
        top_i = torch.multinomial(output_dist, 1)[0]

        # Add predicted character to string and use as next input
        predicted_char = all_characters[top_i]
        predicted += predicted_char
        inp = Variable(char_tensor(predicted_char).unsqueeze(0))
        if cuda:
            inp = inp.cuda()

    return predicted
# https://github.com/spro/char-rnn.pytorch
!ls
!pwd
batch_size = 50
chunk_len = 200
model = "rnn" # other options: lstm, gru
#hyperparams
n_layers = 2
hidden_size = 200
n_epochs = 2000
learning_rate = 0.01
print_every = 25
def train(inp, target):
    hidden = decoder.init_hidden(batch_size)
    decoder.zero_grad()
    loss = 0
    for c in range(chunk_len):
        output, hidden = decoder(inp[:, c], hidden)
        loss += criterion(output.view(batch_size, -1), target[:, c])
    loss.backward()
    decoder_optimizer.step()
    return loss.item() / chunk_len

file, file_len = read_file('/content/sample_data/twain.txt')

def random_training_set(chunk_len, batch_size):
    inp = torch.LongTensor(batch_size, chunk_len)
    target = torch.LongTensor(batch_size, chunk_len)
    for bi in range(batch_size):
        start_index = random.randint(0, file_len - chunk_len - 1)
        end_index = start_index + chunk_len + 1
        chunk = file[start_index:end_index]
        inp[bi] = char_tensor(chunk[:-1])
        target[bi] = char_tensor(chunk[1:])
    inp = Variable(inp)
    target = Variable(target)
    return inp, target
decoder = CharRNN(
n_characters,
hidden_size,
n_characters,
model=model,
n_layers=n_layers,
)
decoder_optimizer = torch.optim.Adagrad(decoder.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
start = time.time()
all_losses = []
loss_avg = 0
print("Training for %d epochs..." % n_epochs)
for epoch in tqdm(range(1, n_epochs + 1), position=0, leave=True):
    loss = train(*random_training_set(chunk_len, batch_size))
    loss_avg += loss
    if epoch % print_every == 0:
        print('[%s (%d %d%%) %.4f]' % (time_since(start), epoch,
                                       epoch / n_epochs * 100, loss))
        print(generate(decoder, 'Wh', 100), '\n')
```
# Section 2: Power consumption in Deep Learning
Training NN models can be incredibly costly, both in actual money but also in power consumption.
```
#@title Video 2: Carbon Footprint of AI
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="2upwdK3bcXU", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```
Take a few moments to chat with your pod about the following points:
* Which societal costs of training do you find most compelling?
* When is training an AI model worth the cost? Who should make that decision?
* Should there be additional taxes on energy costs for compute centers?
# Section 3: Wrap up
What a day! We've learned a lot: the basics of CNNs and RNNs, and how changes to architecture that allow models to share parameters can greatly reduce the size of the model. We learned about convolution and pooling, as well as the basic idea behind RNNs. To wrap up, we thought about the impact of training large NN models.
```
#@title Video 3: Wrap-up
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Ikb4hwR4pU0", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```
# Fandango score analysis
To investigate the potential bias of movie review sites, FiveThirtyEight compiled data for 147 films from 2015 that have substantive reviews from both critics and consumers. Every time Hollywood releases a movie, critics from Metacritic, Fandango, Rotten Tomatoes, and IMDB review and rate the film. They also ask the users in their respective communities to review and rate the film. Then, they calculate the average rating from both critics and users and display it on their site.
**FiveThirtyEight** compiled this dataset to investigate if there was any bias to Fandango's ratings.
We'll be working with the following columns:
| Data | Description |
| ---- | ---- |
| FILM | film name |
| RT_user_norm | average user rating from Rotten Tomatoes, normalized to a 1 to 5 point scale |
| Metacritic_user_nom | average user rating from Metacritic, normalized to a 1 to 5 point scale |
| IMDB_norm | average user rating from IMDB, normalized to a 1 to 5 point scale |
| Fandango_Ratingvalue | average user rating from Fandango, normalized to a 1 to 5 point scale |
| Fandango_Stars | the rating displayed on the Fandango website |
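For instance, assuming the underlying scales (IMDB on 0-10 and Rotten Tomatoes user scores on 0-100; an assumption, since the table does not state them), the normalization is a simple rescaling onto the common 5-point scale:

```python
# Sketch of the normalization (assumed source scales: IMDB on 0-10,
# Rotten Tomatoes user scores on 0-100).
def normalize_imdb(score):
    return score / 2.0        # 0-10  ->  0-5

def normalize_rt_user(score):
    return score / 20.0       # 0-100 ->  0-5

print(normalize_imdb(7.8))    # 3.9
print(normalize_rt_user(86))  # 4.3
```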
## Imports and Loading the dataset
```
import pandas as pd
from numpy import arange
import matplotlib.pyplot as plt
%matplotlib inline
reviews = pd.read_csv("fandango_score_comparison.csv")
norm_reviews = reviews.loc[:,["FILM","RT_user_norm","Metacritic_user_nom","IMDB_norm","Fandango_Ratingvalue","Fandango_Stars"]]
norm_reviews.loc[0,:]
num_cols = ['RT_user_norm', 'Metacritic_user_nom', 'IMDB_norm', 'Fandango_Ratingvalue', 'Fandango_Stars']
bar_heights = norm_reviews[num_cols].iloc[0].values
bar_positions = arange(5) + 0.75
fig, ax = plt.subplots()
ax.bar(x = bar_positions, height = bar_heights, width = 0.5)
plt.show()
tick_positions = range(1,6)
f,ax = plt.subplots()
ax.barh(y = bar_positions, width = bar_heights, height = 0.5)
ax.set_yticks(tick_positions)
ax.set_yticklabels(num_cols)
ax.set_xlabel("Average Rating")
ax.set_ylabel("Rating Source")
ax.set_title("Average User Rating For Avengers: Age of Ultron (2015)")
plt.show()
f,ax = plt.subplots()
ax.scatter(norm_reviews["Fandango_Ratingvalue"],norm_reviews["RT_user_norm"])
ax.set_xlabel("Fandango")
ax.set_ylabel("Rotten Tomatoes")
plt.show()
fig = plt.figure(figsize=(5,10))
ax1 = fig.add_subplot(3,1,1)
ax2 = fig.add_subplot(3,1,2)
ax3 = fig.add_subplot(3,1,3)
ax1.scatter(norm_reviews["Fandango_Ratingvalue"],norm_reviews["RT_user_norm"])
ax1.set_xlabel("Fandango")
ax1.set_ylabel("Rotten Tomatoes")
ax1.set_xlim(0, 5)
ax1.set_ylim(0, 5)
ax2.scatter(norm_reviews["Fandango_Ratingvalue"],norm_reviews["Metacritic_user_nom"])
ax2.set_xlabel("Fandango")
ax2.set_ylabel("Metacritic")
ax2.set_xlim(0, 5)
ax2.set_ylim(0, 5)
ax3.scatter(norm_reviews["Fandango_Ratingvalue"],norm_reviews["IMDB_norm"])
ax3.set_xlabel("Fandango")
ax3.set_ylabel("IMDB")
ax3.set_xlim(0, 5)
ax3.set_ylim(0, 5)
plt.show()
fandango_distribution = norm_reviews["Fandango_Ratingvalue"].value_counts().sort_index()
imdb_distribution = norm_reviews["IMDB_norm"].value_counts().sort_index()
print(fandango_distribution,imdb_distribution)
fig, ax = plt.subplots()
ax.hist(norm_reviews["Fandango_Ratingvalue"],range = (0,5))
plt.show()
fig = plt.figure(figsize=(5,20))
ax1 = fig.add_subplot(4,1,1)
ax2 = fig.add_subplot(4,1,2)
ax3 = fig.add_subplot(4,1,3)
ax4 = fig.add_subplot(4,1,4)
ax1.hist(norm_reviews["Fandango_Ratingvalue"],range = (0,5),bins=20)
ax1.set_title("Distribution of Fandango Ratings")
ax1.set_ylim(0,50)
ax2.hist(norm_reviews["RT_user_norm"],range = (0,5),bins=20)
ax2.set_title("Distribution of Rotten Tomatoes Ratings")
ax2.set_ylim(0,50)
ax3.hist(norm_reviews["Metacritic_user_nom"],range = (0,5),bins=20)
ax3.set_title("Distribution of Metacritic Ratings")
ax3.set_ylim(0,50)
ax4.hist(norm_reviews["IMDB_norm"],range = (0,5),bins=20)
ax4.set_title("Distribution of IMDB Ratings")
ax4.set_ylim(0,50)
plt.show()
fig, ax = plt.subplots()
ax.boxplot(norm_reviews["RT_user_norm"])
ax.set_ylim(0,5)
ax.set_xticklabels(["Rotten Tomatoes"])
plt.show()
fig, ax = plt.subplots()
ax.boxplot(norm_reviews[num_cols].values)
ax.set_ylim(0,5)
ax.set_xticklabels(num_cols,rotation=90)
plt.show()
```
This notebook is part of the $\omega radlib$ documentation: https://docs.wradlib.org.
Copyright (c) $\omega radlib$ developers.
Distributed under the MIT License. See LICENSE.txt for more info.
# xarray GAMIC backend
In this example, we read GAMIC (HDF5) data files using the xarray `gamic` backend.
```
import glob
import wradlib as wrl
import warnings
warnings.filterwarnings('ignore')
import matplotlib.pyplot as pl
import numpy as np
import xarray as xr
try:
    get_ipython().magic("matplotlib inline")
except:
    pl.ion()
```
## Load GAMIC Volume Data
```
fpath = 'hdf5/DWD-Vol-2_99999_20180601054047_00.h5'
f = wrl.util.get_wradlib_data_file(fpath)
vol = wrl.io.open_gamic_dataset(f)
```
### Inspect RadarVolume
```
display(vol)
```
### Inspect root group
The `sweep` dimension contains the number of scans in this radar volume. Further the dataset consists of variables (location coordinates, time_coverage) and attributes (Conventions, metadata).
```
vol.root
```
### Inspect sweep group(s)
The sweep-groups can be accessed via their respective keys. The dimensions consist of `range` and `time` with added coordinates `azimuth`, `elevation`, `range` and `time`. There will be variables like radar moments (DBZH etc.) and sweep-dependent metadata (like `fixed_angle`, `sweep_mode` etc.).
```
display(vol[0])
```
### Georeferencing
```
swp = vol[0].copy().pipe(wrl.georef.georeference_dataset)
```
### Plotting
```
swp.DBZH.plot.pcolormesh(x='x', y='y')
pl.gca().set_aspect('equal')
fig = pl.figure(figsize=(10,10))
swp.DBZH.wradlib.plot_ppi(proj='cg', fig=fig)
import cartopy
import cartopy.crs as ccrs
import cartopy.feature as cfeature
map_trans = ccrs.AzimuthalEquidistant(central_latitude=swp.latitude.values,
central_longitude=swp.longitude.values)
map_proj = ccrs.AzimuthalEquidistant(central_latitude=swp.latitude.values,
central_longitude=swp.longitude.values)
pm = swp.DBZH.wradlib.plot_ppi(proj=map_proj)
ax = pl.gca()
ax.gridlines(crs=map_proj)
print(ax)
map_proj = ccrs.Mercator(central_longitude=swp.longitude.values)
fig = pl.figure(figsize=(10,8))
ax = fig.add_subplot(111, projection=map_proj)
pm = swp.DBZH.wradlib.plot_ppi(ax=ax)
ax.gridlines(draw_labels=True)
import cartopy.feature as cfeature
def plot_borders(ax):
    borders = cfeature.NaturalEarthFeature(category='physical',
                                           name='coastline',
                                           scale='10m',
                                           facecolor='none')
    ax.add_feature(borders, edgecolor='black', lw=2, zorder=4)
map_proj = ccrs.Mercator(central_longitude=swp.longitude.values)
fig = pl.figure(figsize=(10,8))
ax = fig.add_subplot(111, projection=map_proj)
DBZH = swp.DBZH
pm = DBZH.where(DBZH > 0).wradlib.plot_ppi(ax=ax)
plot_borders(ax)
ax.gridlines(draw_labels=True)
import matplotlib.path as mpath
theta = np.linspace(0, 2*np.pi, 100)
center, radius = [0.5, 0.5], 0.5
verts = np.vstack([np.sin(theta), np.cos(theta)]).T
circle = mpath.Path(verts * radius + center)
map_proj = ccrs.AzimuthalEquidistant(central_latitude=swp.latitude.values,
central_longitude=swp.longitude.values,
)
fig = pl.figure(figsize=(10,8))
ax = fig.add_subplot(111, projection=map_proj)
ax.set_boundary(circle, transform=ax.transAxes)
pm = swp.DBZH.wradlib.plot_ppi(proj=map_proj, ax=ax)
ax = pl.gca()
ax.gridlines(crs=map_proj)
fig = pl.figure(figsize=(10, 8))
proj=ccrs.AzimuthalEquidistant(central_latitude=swp.latitude.values,
central_longitude=swp.longitude.values)
ax = fig.add_subplot(111, projection=proj)
pm = swp.DBZH.wradlib.plot_ppi(ax=ax)
ax.gridlines()
swp.DBZH.wradlib.plot_ppi()
```
### Inspect radar moments
The DataArrays can be accessed by key or by attribute. Each DataArray has the dimensions and coordinates of its parent dataset. Attached attributes are defined by the ODIM_H5 standard.
```
display(swp.DBZH)
```
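The equivalence of key and attribute access can be seen on a toy dataset (synthetic values, independent of the radar data above):

```python
import numpy as np
import xarray as xr

# Key access and attribute access return the same DataArray.
ds = xr.Dataset({"DBZH": ("range", np.array([10.0, 20.0, 30.0]))})
assert ds["DBZH"].equals(ds.DBZH)
```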
### Create simple plot
Using xarray's plotting features, a simple plot can be created like this. Note the `sortby('rtime')` method, which sorts the radials by time.
```
swp.DBZH.sortby('rtime').plot(x="range", y="rtime", add_labels=False)
fig = pl.figure(figsize=(5,5))
pm = swp.DBZH.wradlib.plot_ppi(proj={'latmin': 3e3}, fig=fig)
```
### Mask some values
```
swp['DBZH'] = swp['DBZH'].where(swp['DBZH'] >= 0)
swp['DBZH'].plot()
```
### Export to ODIM and CfRadial2
```
vol.to_odim('gamic_as_odim.h5')
vol.to_cfradial2('gamic_as_cfradial2.nc')
```
### Import again
```
vola = wrl.io.open_odim_dataset('gamic_as_odim.h5')
volb = wrl.io.open_cfradial2_dataset('gamic_as_cfradial2.nc')
```
### Check equality
We have to drop the time variable when checking equality since GAMIC has millisecond resolution, ODIM has seconds.
```
xr.testing.assert_allclose(vol.root.drop("time"), vola.root.drop("time"))
xr.testing.assert_allclose(vol[0].drop(["rtime", "time"]), vola[0].drop(["rtime", "time"]))
xr.testing.assert_allclose(vol.root.drop("time"), volb.root.drop("time"))
xr.testing.assert_equal(vol[0].drop("time"), volb[0].drop("time"))
xr.testing.assert_allclose(vola.root, volb.root)
xr.testing.assert_allclose(vola[0].drop("rtime"), volb[0].drop("rtime"))
```
## More GAMIC loading mechanisms
### Use `xr.open_dataset` to retrieve explicit group
```
swp = xr.open_dataset(f, engine="gamic", group="scan9")
display(swp)
```
```
import re
import glob
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
pd.options.display.max_rows = 10
plt.rcParams['figure.figsize'] = (12, 6)
```
We'll use the same dataset of beer reviews.
```
df = pd.read_csv('data/beer_subset.csv.gz', parse_dates=['time'], compression='gzip')
review_cols = ['review_appearance', 'review_aroma', 'review_overall',
'review_palate', 'review_taste']
df.head()
df.beer_id.nunique()
df.profile_name.nunique()
df.beer_id.value_counts().plot(kind='hist', bins=10, color='k', log=True,
title='Log reviews per beer');
ax = df.brewer_id.value_counts().plot(kind='hist', bins=15, color='k', log=True,
title='Log reviews per brewer');
df.profile_name.value_counts().plot(kind='hist', bins=15, color='k', log=True,
title='Log reviews per person');
df.review_overall.value_counts().sort_index().plot(kind='bar', width=.8, rot=0);
fig, ax = plt.subplots(figsize=(5, 10))
sns.countplot(hue='kind', y='stars', data=(df[review_cols]
.stack()
.reset_index(level=1)
.rename(columns={'level_1': 'kind',
0: 'stars',})),
ax=ax, order=np.arange(0, 5.5, .5));
```
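The wide-to-long reshape feeding `countplot` above can be seen in isolation on a tiny frame (toy values, not the beer data):

```python
import pandas as pd

wide = pd.DataFrame({'review_aroma': [4.0, 3.5], 'review_taste': [5.0, 4.0]})
long = (wide.stack()                # -> Series with a (row, column) MultiIndex
            .reset_index(level=1)   # column level becomes the 'level_1' column
            .rename(columns={'level_1': 'kind', 0: 'stars'}))
```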
# Groupby
Groupby is a fundamental operation to pandas and data analysis.
The components of a groupby operation are to
1. Split a table into groups
2. Apply a function to each group
3. Combine the results
In pandas the first step looks like
```python
df.groupby( grouper )
```
`grouper` can be many things
- Series (or string indicating a column in `df`)
- function (to be applied on the index)
- dict : groups by *values*
- `levels=[]`, names of levels in a MultiIndex
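Each flavor of grouper can be tried on a toy frame (hypothetical data, for illustration only):

```python
import pandas as pd

toy = pd.DataFrame({'style': ['ipa', 'stout', 'ipa'], 'abv': [6.5, 8.0, 7.0]},
                   index=['a', 'b', 'c'])

by_col = toy.groupby('style')['abv'].mean()      # grouper is a column name
by_func = toy.groupby(lambda i: 'vowel' if i in 'aeiou' else 'consonant')['abv'].mean()
by_dict = toy.groupby({'a': 'g1', 'b': 'g2', 'c': 'g1'})['abv'].mean()  # dict on index values
```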
```
gr = df.groupby('beer_style')
gr
```
Haven't really done anything yet. Just some book-keeping to figure out which **keys** go with which rows. Keys are the things we've grouped by (each `beer_style` in this case).
The last two steps, apply and combine, are just:
```
gr.agg('mean')
```
This says apply the `mean` function to each column. Non-numeric columns (nuisance columns) are excluded. We can also select a subset of columns to perform the aggregation on.
```
gr[review_cols].agg('mean')
```
`.` attribute lookup works as well.
```
gr.abv.agg('mean')
```
Certain operations are attached directly to the `GroupBy` object, letting you bypass the `.agg` part
```
gr.abv.mean()
```
Exercise: Find the `beer_style`s with the greatest variance in `abv`.
- hint: `.std` calculates the standard deviation, and is available on `GroupBy` objects like `gr.abv`.
- hint: use `.sort_values` to sort a Series (`.order` in older pandas versions)
```
# your code goes here
%load -r 15:17 solutions_groupby.py
```
Now we'll run the gamut on a bunch of grouper / apply combinations.
Keep sight of the target though: split, apply, combine.
Single grouper, multiple aggregations:
Multiple Aggregations on one column
```
gr['review_aroma'].agg([np.mean, np.std, 'count']).head()
```
Single Aggregation on multiple columns
```
gr[review_cols].mean()
```
Multiple aggregations on multiple columns
```
gr[review_cols].agg(['mean', 'count', 'std'])
```
Hierarchical Indexes in the columns can be awkward to work with, so I'll usually
move a level to the Index with `.stack`.
```
gr[review_cols].agg(['mean', 'count', 'std']).stack(level=0)
```
You can group by **levels** of a MultiIndex.
```
multi = gr[review_cols].agg(['mean', 'count', 'std']).stack(level=0)
multi.head()
multi.groupby(level='beer_style')['mean'].agg(['min', 'max'])
```
Group by **multiple** columns
```
df.groupby(['brewer_id', 'beer_style']).review_overall.mean()
df.groupby(['brewer_id', 'beer_style'])[review_cols].mean()
```
### Exercise: Plot the relationship between review length (the `text` column) and average `review_overall`.
Hint: Break the problem into pieces:
- Find the **len**gth of each review (remember the `df.text.str` namespace?)
- Group by that Series of review lengths
- I used `style='k.'` in the plot
```
# Your code goes here
%load -r 1:5 solutions_groupby.py
```
Bonus exercise:
- Try grouping by the number of words.
- Try grouping by the number of sentences.
Remember that `str.count` accepts a regular expression.
Don't worry too much about these, especially if you don't remember the syntax
for regular expressions (I never can). Just jump to the next exercise.
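As a sketch of the `str.count` approach (toy strings and illustrative patterns, not the actual solution):

```python
import pandas as pd

text = pd.Series(["One sentence. Two sentences!", "Just one."])
n_words = text.str.count(r'\w+')        # count word-like tokens
n_sentences = text.str.count(r'[.!?]')  # count sentence-ending punctuation
```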
```
%load -r 18:20 solutions_groupby.py
%load -r 21:26 solutions_groupby.py
# Your code goes here
```
### Exercise: Which **brewer** (`brewer_id`) has the largest gap between the min and max `review_overall` for two of their beers.
Hint: You'll need to do this in two steps.
1. Find the average `review_overall` by brewer and beer name.
2. Find the difference between the max and min by brewer (remember `.groupby(level=)`)
```
# Your code goes here. You've got this!
%load -r 6:13 solutions_groupby.py
# Show for those with counts > 20ish
```
Create our own "kind" of beer, which aggregates `style`.
```
style = df.beer_style.str.lower()
style.head()
kinds = ['ipa', 'apa', 'amber ale', 'rye', 'scotch', 'stout', 'barleywine', 'porter', 'brown ale', 'lager', 'pilsner',
         'tripel', 'bitter', 'farmhouse', 'malt liquor', 'rice']
expr = '|'.join(['(?P<{name}>{pat})'.format(pat=kind, name=kind.replace(' ', '_')) for kind in kinds])
expr
beer_kind = (style.replace({'india pale ale': 'ipa',
'american pale ale': 'apa'})
.str.extract(expr).fillna('').sum(1)
.str.lower().replace('', 'other'))
beer_kind.head()
df.groupby(['brewer_id', beer_kind]).review_overall.mean()
df.groupby(['brewer_id', beer_kind]).beer_id.nunique().unstack(1).fillna(0)
```
### Exercise: Which Brewers have the most different `kinds` of beer?
Hint: we used `df.profile_name.nunique()` to find the number of different profile names.
What are we grouping, and what is our grouper?
```
# %load -r 27:29 solutions_groupby.py
```
### Exercise: Which kinds of beer have the most brewers?
```
%load -r 30:32 solutions_groupby.py
```
We've seen a lot of permutations among number of groupers, number of columns to aggregate, and number of aggregators.
In fact, the `.agg`, which returns one row per group, is just one kind of way to combine the results. The three ways are
- `agg`: one row per group
- `transform`: identically shaped output as input
- `apply`: anything goes
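The shape difference between `agg` and `transform` is easiest to see on a tiny frame (toy data):

```python
import pandas as pd

tiny = pd.DataFrame({'g': ['a', 'a', 'b'], 'v': [1.0, 3.0, 5.0]})
agg_out = tiny.groupby('g')['v'].agg('mean')         # one row per group
trans_out = tiny.groupby('g')['v'].transform('mean') # one row per input row
```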
# Transform
The combined Series / DataFrame is the same shape as the input. For example, say you want to standardize the reviews by subtracting the mean.
```
def de_mean(reviews):
s = reviews - reviews.mean()
return s
de_mean(df.review_overall)
```
We can do this at the *person* level with `groupby` and `transform`.
```
df.groupby('profile_name').transform(de_mean)
```
# Apply
So there's `gr.agg`, `gr.transform`, and finally `gr.apply`. We're going to skip apply for now. I have an example in a later notebook.
# Resample
Resample is a special kind of groupby operation for when you have a `DatetimeIndex`.
```
review_times = df.time.value_counts().sort_index()
review_times
review_times.index
```
The number of reviews within a given second isn't that interesting.
```
review_times.plot()
```
Right now the frequency is way too high to be meaningful. `resample` lets you adjust the frequency.
```
review_times.resample("D").mean().plot()
```
We've essentially grouped by day here (the `.resample('D')` syntax is much nicer than what we'd have to do with the `.groupby(grouper)` spelling), then taken the average of all the values that fall on each day. You can also sum.
```
review_times.resample('D').sum().plot()
```
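The same idea on synthetic hourly data, using the modern `.resample(...).sum()` spelling (the older `how='sum'` argument has since been removed from pandas):

```python
import pandas as pd

hourly = pd.Series(1, index=pd.date_range('2024-01-01', periods=48, freq='h'))
daily = hourly.resample('D').sum()  # 24 observations fall on each day
```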
The `freq` you pass in to `resample` is pretty flexible.
```
review_times.resample('5D').sum()
c = review_times.resample('3H').sum()
ax = c.plot(alpha=.5)
c.rolling(window=8).mean().plot(ax=ax)
```
### Exercise: Plot the number of distinct brewers reviewed per day
- Hint: The documentation for `resample` is being worked on, but the aggregation you apply after `.resample` is *really* flexible. Try the first thing that comes to mind. What function do you want to apply to get the number of unique values?
- Hint2: (Sorry this is harder than I thought). `resample` needs a `DatetimeIndex`. We have datetimes in `time`. If only there was a way to take a column and **set** it as the **index**...
```
%load -r 33:36 solutions_groupby.py
```
# Aside: Beer Recommender
See [Harvard CS109](https://github.com/cs109/content) for a more complete example (with chocolate instead of beer).
One place where transform comes in handy is as a preprocessing step for any kind of recommender. In some sense, the raw score I assign a beer is less important than the score relative to *my* mean.
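Centering per user, sketched on hypothetical users and scores:

```python
import pandas as pd

ratings = pd.DataFrame({'user': ['u1', 'u1', 'u2'],
                        'score': [5.0, 3.0, 4.0]})
# Each user's scores minus that user's own mean.
ratings['centered'] = (ratings.groupby('user')['score']
                              .transform(lambda s: s - s.mean()))
```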
```
deduped = df[['beer_id', 'profile_name', 'review_overall']].drop_duplicates()
deduped.head()
user_counts = deduped.profile_name.value_counts()
top_users = user_counts[user_counts > user_counts.quantile(.75)].index
beer_counts = deduped.beer_id.value_counts()
top_beers = beer_counts[beer_counts > beer_counts.quantile(.9)].index
top = deduped.query('beer_id in @top_beers and profile_name in @top_users')
user_means = top.groupby('profile_name').review_overall.mean()
beer_means = top.groupby('beer_id').review_overall.mean()
fig, axes = plt.subplots(figsize=(16, 4), ncols=2, sharey=True, sharex=True)
sns.distplot(user_means, kde=False, ax=axes[0], color='k', norm_hist=True, hist_kws={'alpha': 1})
sns.distplot(beer_means, kde=False, ax=axes[1], color='k', norm_hist=True, hist_kws={'alpha': 1})
axes[0].set_title("User Averages")
axes[1].set_title("Beer Averages")
s = top.set_index(['beer_id', 'profile_name']).review_overall.sort_index()
s.head()
```
### `de_mean` the scores in `s`
```
standardized = s.groupby(level='profile_name').transform(de_mean)
standardized.head()
from scipy.stats import pearsonr
def pearson_sim(reviews_1, reviews_2, reg=2):
"""
(regularized) Pearson correlation coefficient between sets
of reviews for two beers, made by a common subset
of reviewers.
    `reviews_1` and `reviews_2` should have the same index,
the `profile_name`s of people who reviewed both beers.
"""
n_common = len(reviews_1)
if n_common == 0:
similarity = 0
else:
rho = pearsonr(reviews_1, reviews_2)[0]
similarity = (n_common * rho) / (n_common + reg) # regularization if few reviews
return similarity, n_common
def beer_similarity(standardized, beer_1, beer_2, simfunc=pearson_sim, **simfunc_kwargs):
"""
Compute the similarity between two beers.
"""
# get common subset...
reviewers_1 = standardized.loc[beer_1].index
reviewers_2 = standardized.loc[beer_2].index
    common_idx = reviewers_1.intersection(reviewers_2)  # set intersection
# slice the Multiindex, unstack to be N x 2
common_reviews = standardized.loc[[beer_1, beer_2], common_idx].unstack('beer_id')
    # ... review similarity for the common subset
rho, n_common = simfunc(common_reviews[beer_1], common_reviews[beer_2], **simfunc_kwargs)
return rho, n_common
beer_ids = s.index.levels[0]
len(beer_ids)
beer_similarity(standardized, beer_ids[0], beer_ids[10])
%%time
sims = []
for i, beer_1 in enumerate(beer_ids):
for j, beer_2 in enumerate(beer_ids):
if j >= i:
continue
        sim, n_common = beer_similarity(standardized, beer_1, beer_2)
sims.append((beer_1, beer_2, sim, n_common))
print((i, j), end='\r')
sim = pd.DataFrame(sims, columns=['beer_1', 'beer_2', 'score', 'n_common'])
sim.to_csv('beer_subset_similarity.csv', index=False)
sim = pd.read_csv('beer_subset_similarity.csv.gz')
sim.head()
sns.kdeplot(sim[sim.score != 0].dropna().score)
sim = sim.set_index(['beer_1', 'beer_2']).score
sim.loc[21690].nlargest(5)
```
# Summary of fit and evaluate experiments for type 1 graph neural networks
```
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
import graph_utils as graph_utils
import graph_neural_networks as graph_nn
import data_preparation_utils as data_prep
from iterative_updaters import VanillaGradientDescent, MomentumGradientDescent, NesterovMomentumGradientDescent, RMSPropGradientDescent, AdamGradientDescent
import training_and_evaluation as train_eval
import graph_nn_experiments as experiments
experiments_directory = "100k_fit_and_evaluate_experiments"
def gradient_descent_set_file_name(no_of_layers, no_of_channels, activation):
return "%s/simulation_Xy_%d_%d_%s.csv" % (experiments_directory, no_of_layers, no_of_channels, activation)
gradient_descent_set_file_name(10, 11, "relu")
```
Seems to work fine. Unfortunately, we haven't saved these file names to `fit_eval_results.csv`.
```
ochota_adj_matrix = np.genfromtxt("macierz_sasiedztwa.txt")
```
We need this adjacency matrix.
```
df = pd.read_csv("100k_fit_and_evaluate_experiments/fit_eval_results.csv", header=None)
df.sort_values([5])
```
We need the original training data for scaling and maximum error calculation:
```
traffic_lights_data = pd.read_csv("100k.csv", header=None)
traffic_lights_data.head()
X, y, X_scaler, y_scaler = data_prep.scale_standard_traffic_light_data(traffic_lights_data)
np.mean(traffic_lights_data[21])
ps = [i / 4.0 for i in range(401)]
percentiles=-np.percentile(-traffic_lights_data[21], ps)
plt.xlim(65000.0,35000.0)
plt.xlabel("Simulator output")
plt.ylabel("Percentile in 100k dataset")
plt.plot(percentiles, ps)
```
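The double negation in the cell above turns NumPy's ascending percentiles into a descending curve: the p-th percentile of `-x`, negated, is the (100-p)-th percentile of `x`. On a toy array:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
desc = -np.percentile(-x, [0, 50, 100])  # descending: maximum first
```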
Train test split:
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=831191)
```
This code not really used, just kept in case:
```
tf.reset_default_graph()
no_of_layers = 3
no_of_channels = 6
activation_name = "tanh"
activation = tf.nn.tanh
model_checkpoint_file = "100k_fit_and_evaluate_experiments/model_3_6_tanh.ckpt"
nn_input = tf.placeholder(dtype=tf.float32, shape=[None, 21])
targets = tf.placeholder(dtype=tf.float32, shape=[None, 1])
print("Constructing network with %d layers, %d channels per layer and %s activation function" % (no_of_layers, no_of_channels, activation_name))
nn_output = graph_nn.transfer_matrix_neural_net(nn_input, no_of_layers, no_of_channels, activation, ochota_adj_matrix, verbose=True, share_weights_in_transfer_matrix=False, share_biases_in_transfer_matrix=False)
print("Restoring gradient descent test set")
#X, y = load_gradient_descent_data(no_of_layers, no_of_channels, activation_name)
print("Scaling")
#X_test, y_test = scale_data(X, y, X_scaler, y_scaler)
print("Restoring network weights from %s and evaluating on test set" % model_checkpoint_file)
model_avg_error, actual_vs_predicted = train_eval.evaluate_model_on_a_dataset(model_checkpoint_file, nn_output,nn_input, X_test, y_test, y_scaler)
print("Model avg. error on test set: %f" % model_avg_error)
# close session (if open)
try:
sess.close()
except:
pass
# open new session
sess = tf.Session()
saver = tf.train.Saver()
saver.restore(sess, model_checkpoint_file)
for row in df.head(1).iterrows():
print(row[1])
```
Core code to be used in this notebook:
```
def load_gradient_descent_data(no_of_layers, no_of_channels, activation_name):
data_file_name = gradient_descent_set_file_name(no_of_layers, no_of_channels, activation_name)
data = pd.read_csv(data_file_name, header=None)
no_of_columns = data.shape[1]
X = data.iloc[:,0:(no_of_columns-1)]
y = data.iloc[:,no_of_columns-1].values.reshape(-1,1)
data_normalized = data.copy()
data_normalized.iloc[:,0:(no_of_columns-1)] = X_scaler.transform(data_normalized.iloc[:,0:(no_of_columns-1)])
data_normalized.iloc[:,no_of_columns-1] = y_scaler.transform(data_normalized.iloc[:,no_of_columns-1].values.reshape(-1,1))
X_scaled = data_normalized.iloc[:,0:(no_of_columns-1)]
y_scaled = data_normalized.iloc[:,no_of_columns-1]
return X, y, X_scaled, y_scaled
def scale_data(X_input, y_input, X_scaler, y_scaler):
X_input_ = X_input.copy()
y_input_ = y_input.copy()
X_scaled = X_scaler.transform(X_input_)
y_scaled = y_scaler.transform(y_input_)
return X_scaled, y_scaled
def find_lowest_avg_waiting_time(no_of_layers, no_of_channels, activation_name):
X, y, X_scaled, y_scaled = load_gradient_descent_data(no_of_layers, no_of_channels, activation_name)
return min(y)[0]
def assess_avg_error_below_some_target_values(target_values,
no_of_layers,
no_of_channels,
activation_name,
model_file_name,
X_scaler,
y_scaler):
g_X, g_y, scaled_g_X, scaled_g_y = load_gradient_descent_data(no_of_layers, no_of_channels, activation_name)
#scaled_g_X, scaled_g_y = scale_data(g_X, g_y, X_scaler, y_scaler)
if activation_name == "relu":
activation = tf.nn.relu
else:
activation = tf.nn.tanh
tf.reset_default_graph()
nn_input = tf.placeholder(dtype=tf.float32, shape=[None, 21])
targets = tf.placeholder(dtype=tf.float32, shape=[None, 1])
print("Constructing network with %d layers, %d channels per layer and %s activation function" % (no_of_layers, no_of_channels, activation_name))
nn_output = graph_nn.transfer_matrix_neural_net(nn_input, no_of_layers, no_of_channels, activation, ochota_adj_matrix, verbose=True, share_weights_in_transfer_matrix=False, share_biases_in_transfer_matrix=False)
avg_errors = []
for target_value in target_values:
selected_indices = np.argwhere(g_y.reshape(-1) < target_value).reshape(-1)
if len(selected_indices) > 0:
X_test = scaled_g_X.iloc[selected_indices,:]
y_test = scaled_g_y[selected_indices]
model_avg_error, actual_vs_predicted = train_eval.evaluate_model_on_a_dataset(model_file_name, nn_output, nn_input, X_test, y_test, y_scaler)
else:
model_avg_error = np.nan
avg_errors.append(model_avg_error)
return avg_errors
def find_maximum_relative_error(no_of_layers, no_of_channels, activation_name, model_file_name, X_scaler, y_scaler):
if activation_name == "relu":
activation = tf.nn.relu
else:
activation = tf.nn.tanh
tf.reset_default_graph()
nn_input = tf.placeholder(dtype=tf.float32, shape=[None, 21])
targets = tf.placeholder(dtype=tf.float32, shape=[None, 1])
print("Constructing network with %d layers, %d channels per layer and %s activation function" % (no_of_layers, no_of_channels, activation_name))
nn_output = graph_nn.transfer_matrix_neural_net(nn_input, no_of_layers, no_of_channels, activation, ochota_adj_matrix, verbose=True, share_weights_in_transfer_matrix=False, share_biases_in_transfer_matrix=False)
    # NOTE: relies on the globally defined test split (X_test, y_test),
    # not on this configuration's gradient-descent data
    model_max_error = train_eval.find_model_maximum_relative_error_on_a_dataset(model_file_name, nn_output, nn_input, X_test, y_test, y_scaler)
return model_max_error
def assess_error_stdev_below_some_target_values(target_values,
no_of_layers,
no_of_channels,
activation_name,
model_file_name,
X_scaler,
y_scaler):
g_X, g_y, scaled_g_X, scaled_g_y = load_gradient_descent_data(no_of_layers, no_of_channels, activation_name)
#scaled_g_X, scaled_g_y = scale_data(g_X, g_y, X_scaler, y_scaler)
if activation_name == "relu":
activation = tf.nn.relu
else:
activation = tf.nn.tanh
tf.reset_default_graph()
nn_input = tf.placeholder(dtype=tf.float32, shape=[None, 21])
targets = tf.placeholder(dtype=tf.float32, shape=[None, 1])
print("Constructing network with %d layers, %d channels per layer and %s activation function" % (no_of_layers, no_of_channels, activation_name))
nn_output = graph_nn.transfer_matrix_neural_net(nn_input, no_of_layers, no_of_channels, activation, ochota_adj_matrix, verbose=True, share_weights_in_transfer_matrix=False, share_biases_in_transfer_matrix=False)
error_stdevs = []
for target_value in target_values:
selected_indices = np.argwhere(g_y.reshape(-1) < target_value).reshape(-1)
if len(selected_indices) > 0:
X_test = scaled_g_X.iloc[selected_indices,:]
y_test = scaled_g_y[selected_indices]
model_error_stdev = train_eval.find_model_relative_error_stdev_on_a_dataset(model_file_name, nn_output, nn_input, X_test, y_test, y_scaler)
else:
model_error_stdev = np.nan
error_stdevs.append(model_error_stdev)
return error_stdevs
```
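The index-selection idiom used in the stratified helpers above, isolated on toy data:

```python
import numpy as np

g_y = np.array([[36000.0], [31000.0], [34500.0]])  # shaped like the loaded targets
selected = np.argwhere(g_y.reshape(-1) < 35000.0).reshape(-1)
```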
Add min waiting times (from actual simulation) to the data frame:
```
min_waiting_times = []
for row in df.iterrows():
no_of_layers = row[1][0]
no_of_channels = row[1][1]
activation_name = row[1][2]
min_waiting_time = find_lowest_avg_waiting_time(no_of_layers, no_of_channels, activation_name)
min_waiting_times.append(min_waiting_time)
df_plus_min_waiting_time = df.copy()
df_plus_min_waiting_time[len(df_plus_min_waiting_time.columns)] = pd.Series(min_waiting_times, df_plus_min_waiting_time.index)
df_plus_min_waiting_time.sort_values([5])
```
Also calculate max errors on test set:
```
max_errors = []
for row in df.iterrows():
no_of_layers = row[1][0]
no_of_channels = row[1][1]
activation_name = row[1][2]
model_file = row[1][3]
max_error = find_maximum_relative_error(no_of_layers, no_of_channels, activation_name, model_file, X_scaler, y_scaler)
max_errors.append(max_error)
df_plus_max_errors = df_plus_min_waiting_time.copy()
df_plus_max_errors[len(df_plus_max_errors.columns)] = pd.Series(max_errors, df_plus_max_errors.index)
df_plus_max_errors.sort_values([6])
```
Add stratified simulation errors along gradient descent trajectories:
```
avg_errors_list = []
for row in df.iterrows():
r = row[1]
avg_errors = assess_avg_error_below_some_target_values([37000.0,36000.0,35000.0,34000.0,33000.0,32000.0],
r[0],
r[1],
r[2],
r[3],
X_scaler,
y_scaler)
avg_errors_list.append(avg_errors)
avg_errors_arr = np.array(avg_errors_list)
avg_errors_df = pd.DataFrame(avg_errors_arr,
                             columns=["37000.0","36000.0","35000.0","34000.0","33000.0","32000.0"])
df_with_avg_errors = df_plus_min_waiting_time.copy()
df_with_avg_errors = pd.concat([df_with_avg_errors, avg_errors_df],axis=1)
df_with_avg_errors.sort_values([5])
```
Now add stratified error stdev:
```
error_stdev_list = []
for row in df.iterrows():
r = row[1]
error_stdev = assess_error_stdev_below_some_target_values([37000.0,36000.0,35000.0,34000.0,33000.0,32000.0],
r[0],
r[1],
r[2],
r[3],
X_scaler,
y_scaler)
error_stdev_list.append(error_stdev)
error_stdev_arr = np.array(error_stdev_list)
error_stdev_df = pd.DataFrame(error_stdev_arr,
                              columns=["37000.0","36000.0","35000.0","34000.0","33000.0","32000.0"])
df_with_error_stdev = df_plus_min_waiting_time.copy()
df_with_error_stdev = pd.concat([df_with_error_stdev, error_stdev_df],axis=1)
df_with_error_stdev.sort_values([5])
```
### LaTeX tables and CSVs:
```
df_with_avg_errors.columns = ["Layers","Channels","Activation","Model file","Test","Sim","Min sim","Sim<37000.0","Sim<36000.0","Sim<35000.0","Sim<34000.0","Sim<33000.0","Sim<32000.0"]
df_with_avg_errors_1 = df_with_avg_errors[["Layers","Channels","Activation","Model file","Min sim","Test","Sim","Sim<37000.0","Sim<36000.0","Sim<35000.0","Sim<34000.0","Sim<33000.0","Sim<32000.0"]]
perc_cols = ["Test","Sim","Sim<37000.0","Sim<36000.0","Sim<35000.0","Sim<34000.0","Sim<33000.0","Sim<32000.0"]
perc_format = {c: lambda x: "{:.2%}".format(x) for c in perc_cols}
int_cols = ["Layers","Channels","Min sim"]
int_format = {c: lambda x: str(int(x)) for c in int_cols}
for k in int_format.keys():
perc_format[k] = int_format[k]
df_with_avg_errors_1.style.format(perc_format)
df_with_avg_errors_1[["Layers","Channels","Activation","Min sim","Test","Sim","Sim<37000.0","Sim<36000.0","Sim<35000.0","Sim<34000.0","Sim<33000.0","Sim<32000.0"]].sort_values("Min sim")[0:15].to_latex(formatters=perc_format,index=None)
df_with_avg_errors_1.to_csv(r'fit_eval_results_additional.csv', sep=',', index=None)
```
Now print stdevs table to LaTeX:
```
df_with_error_stdev.columns = ["#Lyr","#Ch","Act","Model file","Err. test","Err. sim","Min sim","<37000.0","<36000.0","<35000.0","<34000.0","<33000.0","<32000.0"]
df_with_error_stdev_1 = df_with_error_stdev[["#Lyr","#Ch","Act","Min sim","<37000.0","<36000.0","<35000.0","<34000.0","<33000.0","<32000.0"]]
perc_cols = ["<37000.0","<36000.0","<35000.0","<34000.0","<33000.0","<32000.0"]
perc_format = {c: lambda x: "{:.2%}".format(x) for c in perc_cols}
int_cols = ["#Lyr","#Ch","Min sim"]
int_format = {c: lambda x: str(int(x)) for c in int_cols}
for k in int_format.keys():
perc_format[k] = int_format[k]
df_with_error_stdev_1.style.format(perc_format)
df_with_error_stdev_1[["#Lyr","#Ch","Act","Min sim","<37000.0","<36000.0","<35000.0","<34000.0","<33000.0","<32000.0"]].sort_values("Min sim").loc[:,["#Lyr","#Ch","Act","<37000.0","<36000.0","<35000.0","<34000.0","<33000.0","<32000.0"]][0:15].to_latex(formatters=perc_format,index=None)
df_with_error_stdev.to_csv(r'fit_eval_results_stdevs.csv', sep=',', index=None)
```
Now print df plus max errors to LaTeX:
```
df_plus_max_errors.columns = ["#Lyr","#Ch","Act","Model file","Err. test","Err. sim","Min sim", "Max err."]
perc_cols = ["Err. test","Err. sim","Max err."]
perc_format = {c: lambda x: "{:.2%}".format(x) for c in perc_cols}
#for k in int_format.keys():
# perc_format[k] = int_format[k]
df_plus_max_errors.style.format(perc_format)
df_plus_max_errors[["#Lyr","#Ch","Act","Err. test","Err. sim","Min sim","Max err."]].sort_values("Min sim").loc[:,["#Lyr","#Ch","Act","Err. test","Err. sim","Max err."]][0:15].to_latex(formatters=perc_format,index=None)
df_plus_max_errors.to_csv(r'fit_eval_results_max_error.csv', sep=',', index=None)
```
Insert test-to-sim differences and column averages:
```
df_with_avg_errors_1 = pd.read_csv(r'fit_eval_results_additional.csv', sep=',')
df_with_avg_errors_1
df_with_avg_errors_1["Err. sim-test"]=df_with_avg_errors_1["Sim"]-df_with_avg_errors_1["Test"]
df_with_avg_errors_1
df_with_avg_errors_1.columns = ["#Lyr","#Ch","Act","Model file","Min sim","Err. test","Err. sim","<37000.0","<36000.0","<35000.0","<34000.0","<33000.0","<32000.0","Err. sim-test"]
df_with_avg_errors_1 = df_with_avg_errors_1[["#Lyr","#Ch","Act","Model file","Min sim","Err. test","Err. sim","Err. sim-test","<37000.0","<36000.0","<35000.0","<34000.0","<33000.0","<32000.0"]]
perc_cols = ["Err. test","Err. sim","Err. sim-test","<37000.0","<36000.0","<35000.0","<34000.0","<33000.0","<32000.0"]
perc_format = {c: lambda x: "{:.2%}".format(x) for c in perc_cols}
int_cols = ["#Lyr","#Ch","Min sim"]
int_format = {c: lambda x: str(int(x)) for c in int_cols}
for k in int_format.keys():
perc_format[k] = int_format[k]
df_with_avg_errors_1.style.format(perc_format)
data_frame = df_with_avg_errors_1[["#Lyr","#Ch","Act","Min sim","Err. test","Err. sim","Err. sim-test","<37000.0","<36000.0","<35000.0","<34000.0","<33000.0","<32000.0"]].sort_values("Min sim")[0:15]
column_averages = data_frame.mean(axis=0, numeric_only=True)
data_frame = pd.concat([data_frame, column_averages.to_frame().T], ignore_index=True)
data_frame.to_latex(formatters=perc_format,index=None)
data_frame
```
```
import keras
import os
from deepsense import neptune
from keras_retinanet.bin.train import create_models
from keras_retinanet.callbacks.eval import Evaluate
from keras_retinanet.models.resnet import download_imagenet, resnet_retinanet as retinanet
from keras_retinanet.preprocessing.detgen import DetDataGenerator
from keras_retinanet.utils.transform import random_transform_generator
from detdata import DetGen
from detdata.augmenters import crazy_augmenter
import warnings
warnings.filterwarnings("ignore")
ctx = neptune.Context()
detg= DetGen('/home/i008/malaria_data/dataset_train.mxrecords',
'/home/i008/malaria_data/dataset_train.csv',
'/home/i008/malaria_data/dataset_valid.mxindex', batch_size=4)
train_generator = DetDataGenerator(detg, augmenter=crazy_augmenter)
train_generator.image_max_side = 750
train_generator.image_min_side = 750
weights = download_imagenet('resnet50')
model_checkpoint = keras.callbacks.ModelCheckpoint('mod-{epoch:02d}_loss-{loss:.4f}.h5',
monitor='loss',
verbose=2,
save_best_only=False,
save_weights_only=False,
mode='auto',
period=1)
callbacks = []
class LossHistory(keras.callbacks.Callback):
def on_train_begin(self, logs={}):
self.losses = []
def on_batch_end(self, batch, logs={}):
ctx.channel_send("loss", logs.get('loss'))
def on_epoch_end(self, epoch, logs={}):
ctx.channel_send("val_loss", logs.get('val_loss'))
callbacks.append(keras.callbacks.ReduceLROnPlateau(
monitor='loss',
factor=0.1,
patience=2,
verbose=1,
mode='auto',
epsilon=0.0001,
cooldown=0,
min_lr=0
))
# callbacks.append(LossHistory())
callbacks.append(model_checkpoint)
model, training_model, prediction_model = create_models(
    backbone_retinanet=retinanet,
    backbone='resnet50',
    num_classes=train_generator.num_classes(),
    weights=weights,
    multi_gpu=0,
    freeze_backbone=True
)
training_model.fit_generator(
generator=train_generator,
steps_per_epoch=5000,
epochs=100,
verbose=1,
callbacks=callbacks,
)
```
# COMP9417 19T2 Homework 2: Applying and Implementing Machine Learning
_Mon Jul 29 09:18:30 AEST 2019_
The aim of this homework is to enable you to:
- **apply** parameter search for machine learning algorithms implemented in the Python [scikit-learn](http://scikit-learn.org/stable/index.html) machine learning library
- answer questions based on your **analysis** and **interpretation** of the empirical results of such applications, using your knowledge of machine learning
- **complete** an implementation of a different version of a learning algorithm you have previously seen
After completing this homework you will be able to:
- set up a simple grid search over different hyper-parameter settings based on $k$-fold cross-validation to obtain performance measures on different datasets
- compare the performance measures of different algorithm settings
- propose properties of algorithms and their hyper-parameters, or datasets, which
may lead to performance differences being observed
- suggest reasons for actual observed performance differences in terms of
properties of algorithms, parameter settings or datasets.
- read and understand incomplete code for a learning algorithm to the point of being able to complete the implementation and run it successfully on a dataset.
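As a minimal sketch of that k-fold grid-search procedure, consider the toy example below. It uses a hypothetical one-feature "classifier" in place of a real learner, so all names and data here are illustrative, not part of the homework code:

```python
import numpy as np

def kfold_cv_score(fit_predict, X, y, k=5):
    """Mean held-out accuracy across k folds."""
    idx = np.arange(len(X))
    accs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)          # everything not in this fold
        preds = fit_predict(X[train], y[train], X[fold])
        accs.append(np.mean(preds == y[fold]))
    return float(np.mean(accs))

# hypothetical "hyper-parameter": a threshold for a 1-feature classifier
def make_model(threshold):
    def fit_predict(X_train, y_train, X_test):
        return (X_test[:, 0] > threshold).astype(int)
    return fit_predict

X = np.arange(20, dtype=float).reshape(-1, 1)
y = (X[:, 0] > 9.5).astype(int)
# grid search: pick the hyper-parameter value with the best CV score
best = max([5.0, 9.5, 15.0], key=lambda t: kfold_cv_score(make_model(t), X, y))
print(best)  # 9.5
```

The code for Question 1 below does the same thing with scikit-learn's `GridSearchCV` over `min_samples_leaf`.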
There are a total of *10 marks* available.
Each homework mark is worth *0.5 course mark*, i.e., homework marks will be scaled
to a **course mark out of 5** to contribute to the course total.
Deadline: 17:59:59, Monday August 5, 2019.
Submission will be via the CSE *give* system (see below).
Late penalties: one mark will be deducted from the total for each day late, up to a total of five days. If six or more days late, no marks will be given.
Recall the guidance regarding plagiarism in the course introduction: this applies to this homework and if evidence of plagiarism is detected it may result in penalties ranging from loss of marks to suspension.
### Format of the questions
There are 2 questions in this homework. Question 1 requires answering some multiple-choice questions in the file [*answers.txt*](http://www.cse.unsw.edu.au/~cs9417/19T2/hw2/answers.txt). Both questions require you to copy and paste text into the file [*answers.txt*](http://www.cse.unsw.edu.au/~cs9417/19T2/hw2/answers.txt). This file **MUST CONTAIN ONLY PLAIN TEXT WITH NO SPECIAL CHARACTERS**.
This file will form your submission.
In summary, your submission will comprise a single file which should be named as follows:
```
answers.txt
```
Please note: files in any format other than plain text **cannot be accepted**.
Submit your files using ```give```. On a CSE Linux machine, type the following on the command-line:
```
$ give cs9417 hw2 answers.txt
```
Alternatively, you can submit using the web-based interface to ```give```.
### Datasets
You can download the datasets required for the homework [here](http://www.cse.unsw.edu.au/~cs9417/hw2/datasets.zip).
Note: you will need to ensure the dataset files are in the same directory from which you are running this notebook.
**Please Note**: this homework uses some datasets in the Attribute-Relation File Format (.arff). To load datasets from '.arff' formatted files, you will need to have installed the ```liac-arff``` package. You can do this using ```pip``` at the command-line, as follows:
```
$ pip install liac-arff
```
## Question 1 – Overfitting avoidance [Total: 3 marks]
Dealing with noisy data is a key issue in machine learning. Unfortunately, even algorithms that have noise-handling mechanisms built-in, like decision trees, can overfit noisy data, unless their "overfitting avoidance" or *regularization* hyper-parameters are set properly.
You will be using datasets that have had various amounts of "class noise" added
by randomly changing the actual class value to a different one for a
specified percentage of the training data.
Here we will specify three arbitrarily chosen levels of noise: low
($20\%$), medium ($50\%$) and high ($80\%$).
The learning algorithm must try to "see through" this noise and learn
the best model it can, which is then evaluated on test data *without*
added noise to evaluate how well it has avoided fitting the noise.
We will also let the algorithm do a limited _grid search_ using cross-validation
for the best *over-fitting avoidance* parameter settings on each training set.
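The class-noise injection described above can be sketched as follows (the same idea is implemented in the `preprocess` function in the code for Question 1 below; the function name here is illustrative):

```python
import numpy as np

def add_class_noise(labels, num_classes, noise_ratio, seed=1234):
    # flip a noise_ratio fraction of labels to a *different* class
    rng = np.random.RandomState(seed)
    labels = np.asarray(labels)
    n_noisy = int(len(labels) * noise_ratio)
    # offsets in 1..num_classes-1 guarantee the class actually changes
    offsets = rng.randint(1, num_classes, size=n_noisy)
    offsets = np.concatenate([offsets, np.zeros(len(labels) - n_noisy, dtype=int)])
    return (labels + offsets) % num_classes

clean = np.zeros(100, dtype=int)
noisy = add_class_noise(clean, num_classes=3, noise_ratio=0.5)
print((noisy != clean).mean())  # 0.5: exactly half the labels were flipped
```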
### Running the classifiers
**1(a). [1 mark]**
Run the code section in the notebook cells below. This will generate a table of results, which you should copy and paste **WITHOUT MODIFICATION** into the file [*answers.txt*](http://www.cse.unsw.edu.au/~cs9417/19T2/hw2/answers.txt)
as your answer for "Question 1(a)".
The output of the code section is a table, which represents the percentage accuracy of classification for the decision tree algorithm. The first column contains the result of the "Default" classifier, which is the decision tree algorithm with default parameter settings running on each of the datasets which have had $50\%$ noise added. From the second column on, in each column the results are obtained by running the decision tree algorithm on $0\%$, $20\%$, $50\%$ and $80\%$ noise added to each of the datasets, and in the parentheses is shown the result of a [grid search](http://en.wikipedia.org/wiki/Hyperparameter_optimization) that has been applied to determine the best value for a basic parameter of the decision tree algorithm, namely [min_samples_leaf](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) i.e., the minimum number of examples that can be used to make a prediction in the tree, on that dataset.
### Result interpretation
Answer these questions in the file called [*answers.txt*](http://www.cse.unsw.edu.au/~cs9417/19T2/hw2/answers.txt). Your answers must be based on the results table you saved in "Question 1(a)".
**1(b). [1 mark]** Refer to [*answers.txt*](http://www.cse.unsw.edu.au/~cs9417/19T2/hw2/answers.txt).
**1(c). [1 mark]** Refer to [*answers.txt*](http://www.cse.unsw.edu.au/~cs9417/19T2/hw2/answers.txt).
### Code for question 1
It is only necessary to run the following code to answer the question, but you should also go through it to make sure you know what is going on.
```
# Code for question 1
import arff, numpy as np
import pandas as pd
from sklearn.base import TransformerMixin
from sklearn import tree
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn import svm, datasets
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
import sys
import warnings
# fixed random seed
np.random.seed(1)
def warn(*args, **kwargs):
pass
def label_enc(labels):
le = preprocessing.LabelEncoder()
le.fit(labels)
return le
def features_encoders(features,categorical_features='all'):
n_samples, n_features = features.shape
label_encoders = [preprocessing.LabelEncoder() for _ in range(n_features)]
X_int = np.zeros_like(features, dtype=np.int)
for i in range(n_features):
feature_i = features[:, i]
label_encoders[i].fit(feature_i)
X_int[:, i] = label_encoders[i].transform(feature_i)
enc = preprocessing.OneHotEncoder(categorical_features=categorical_features)
return enc.fit(X_int),label_encoders
def feature_transform(features,label_encoders, one_hot_encoder):
n_samples, n_features = features.shape
X_int = np.zeros_like(features, dtype=np.int)
for i in range(n_features):
feature_i = features[:, i]
X_int[:, i] = label_encoders[i].transform(feature_i)
return one_hot_encoder.transform(X_int).toarray()
warnings.warn = warn
class DataFrameImputer(TransformerMixin):
def fit(self, X, y=None):
self.fill = pd.Series([X[c].value_counts().index[0]
if X[c].dtype == np.dtype('O') else X[c].mean() for c in X],
index=X.columns)
return self
def transform(self, X, y=None):
return X.fillna(self.fill)
def load_data(path):
dataset = arff.load(open(path, 'r'))
data = np.array(dataset['data'])
data = pd.DataFrame(data)
data = DataFrameImputer().fit_transform(data).values
attr = dataset['attributes']
# mask categorical features
masks = []
for i in range(len(attr)-1):
if attr[i][1] != 'REAL':
masks.append(i)
return data, masks
def preprocess(data,masks, noise_ratio):
# split data
train_data, test_data = train_test_split(data,test_size=0.3,random_state=0)
# test data
test_features = test_data[:,0:test_data.shape[1]-1]
test_labels = test_data[:,test_data.shape[1]-1]
# training data
features = train_data[:,0:train_data.shape[1]-1]
print(features)
labels = train_data[:,train_data.shape[1]-1]
classes = list(set(labels))
# categorical features need to be encoded
if len(masks):
one_hot_enc, label_encs = features_encoders(data[:,0:data.shape[1]-1],masks)
test_features = feature_transform(test_features,label_encs,one_hot_enc)
features = feature_transform(features,label_encs,one_hot_enc)
le = label_enc(data[:,data.shape[1]-1])
labels = le.transform(train_data[:,train_data.shape[1]-1])
test_labels = le.transform(test_data[:,test_data.shape[1]-1])
# add noise
np.random.seed(1234)
noise = np.random.randint(len(classes)-1, size=int(len(labels)*noise_ratio))+1
noise = np.concatenate((noise,np.zeros(len(labels) - len(noise),dtype=np.int)))
labels = (labels + noise) % len(classes)
return features,labels,test_features,test_labels
# load data
paths = ['./datasets/balance-scale','./datasets/primary-tumor',
'./datasets/glass','./datasets/heart-h']
noise = [0,0.2,0.5,0.8]
scores = []
params = []
for path in paths:
score = []
param = []
path += '.arff'
data, masks = load_data(path)
# training on data with 50% noise and default parameters
features, labels, test_features, test_labels = preprocess(data, masks, 0.5)
tree = DecisionTreeClassifier(random_state=0,min_samples_leaf=2, min_impurity_decrease=0)
tree.fit(features, labels)
tree_preds = tree.predict(test_features)
tree_performance = accuracy_score(test_labels, tree_preds)
score.append(tree_performance)
param.append(tree.get_params()['min_samples_leaf'])
# training on data with noise levels of 0%, 20%, 50% and 80%
for noise_ratio in noise:
features, labels, test_features, test_labels = preprocess(data, masks, noise_ratio)
param_grid = {'min_samples_leaf': np.arange(2,30,5)}
grid_tree = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid,cv=10,return_train_score=True)
grid_tree.fit(features, labels)
estimator = grid_tree.best_estimator_
tree_preds = grid_tree.predict(test_features)
tree_performance = accuracy_score(test_labels, tree_preds)
score.append(tree_performance)
param.append(estimator.get_params()['min_samples_leaf'])
scores.append(score)
params.append(param)
# print the results
header = "{:^112}".format("Decision Tree Results") + '\n' + '-' * 112 + '\n' + \
"{:^15} | {:^16} | {:^16} | {:^16} | {:^16} | {:^16} |".format("Dataset", "Default", "0%", "20%", "50%", "80%") + \
'\n' + '-' * 112 + '\n'
# print result table
print(header)
for i in range(len(scores)):
#scores = score_list[i][1]
print("{:<16}".format(paths[i]),end="")
for j in range(len(params[i])):
print("| {:>6.2%} ({:>2}) " .format(scores[i][j],params[i][j]),end="")
print('|\n')
print('\n')
```
## Question 2 – Implementation of a simple RNN [Total: 7 marks]
In this question, you will implement a simple recurrent neural network (RNN).
Recurrent neural networks are commonly used when the input data has temporal dependencies among consecutive observations, for example time series and text data. With such data, having knowledge of the previous data points in addition to the current helps in prediction.
## Recurrent neural networks
RNNs are suitable in such scenarios because they keep a state derived from all previously seen data, which in combination with the current input is used to predict the output.
In general, recurrent neural networks work like the following:

_(Image credit: Goodfellow, Bengio & Courville (2015) - Deep Learning)_
Here, $x$ is the input, and $h$ is the hidden state maintained by the RNN. For each input in the sequence, the RNN takes both the previous state $h_{t-1}$ and the current input $x_t$ to do the prediction.
Notice there is only one set of weights in the RNN, but this set of weights is used for the whole sequence of input. In effect, the RNN is chained with itself a number of times equalling the length of the input.
Thus for the purpose of training the RNN, a common practice is to unfold the computational graph, and run the standard back-propagation thereon. This technique is also known as back-propagation through time.
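Before looking at the full implementation, the single shared-weight step described above can be sketched in a few lines of numpy (shapes and names here are illustrative):

```python
import numpy as np

def rnn_step(h_prev, x_t, W, b):
    # one unrolled step: the new hidden state is a function of the
    # previous state and the current input, using the *same* W at every step
    z = np.concatenate([h_prev, x_t], axis=1) @ W + b
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid activation

hidden, n_in = 4, 3
rng = np.random.RandomState(0)
W = rng.randn(hidden + n_in, hidden)   # single weight matrix, reused
b = np.zeros((1, hidden))
h = np.zeros((1, hidden))              # initial state h_0
for t in range(5):                     # unrolled over a length-5 input
    h = rnn_step(h, rng.randn(1, n_in), W, b)
print(h.shape)  # (1, 4)
```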
## Your task
Given a dataset of partial words (words without the last character), your task is to implement an RNN to predict the last character in the word. Specifically, your RNN will have the first 9 characters of a word as its input, and you need to predict the 10th character. If there are fewer than 10 characters in a word, spaces are used to pad it.
Most of the code needed is provided below, what you need to do is to implement the back-propagation through time section in ```NeuralNetwork.fit()```.
There are four sections marked ```TO DO: ``` where you need to add your own code to complete a working implementation.
**HINT:** review the implementation of the ```backpropagate``` method of the ```NeuralNetwork``` class in the code in the notebook for Lab6 on "Neural Learning". That should give you a starting point for your implementation.
Ensure that you have the following files you need for training and testing in the directory in which you run this notebook:
```
training_input.txt
training_label.txt
testing_input.txt
testing_label.txt
```
**HINT:** if your implementation is correct your output should look something like the following:

## Submission
Submit a text file ```RNN_solutions.txt```, containing only the four sections (together with their comments).
Sample submission:
```
# setup for the current step
layer_input = []
weight = []
# calculate gradients
gradients, dW, db = [], [], []
# update weights
self.weights[0] += 0
self.biases[0] += 0
# setup for the next step
previous_gradients = []
layer_output = []
```
## Marking
If your implementation runs and obtains a testing accuracy of more than 0.5 then your submission will be given full marks.
Otherwise, each submitted correct section of your code will receive some part of the total marks, as follows:
```
# setup for the current step [2 marks]
layer_input = []
weight = []
# calculate gradients [1 mark]
gradients, dW, db = [], [], []
# update weights [2 marks]
self.weights[0] += 0
self.biases[0] += 0
# setup for the next step [2 marks]
previous_gradients = []
layer_output = []
```
**NOTE:** it is OK to split your code for each section over multiple lines.
```
import time
import numpy
# helper functions to read data
def read_data(file_name, encoding_function, expected_length):
with open(file_name, 'r') as f:
return numpy.array([encoding_function(row) for row in f.read().split('\n') if len(row) == expected_length])
def encode_string(s):
return [one_hot_encode_character(c) for c in s]
def one_hot_encode_character(c):
base = [0] * 26
index = ord(c) - ord('a')
if index >= 0 and index <= 25:
base[index] = 1
return base
def reverse_one_hot_encode(v):
return chr(numpy.argmax(v) + ord('a')) if max(v) > 0 else ' '
# functions used in the neural network
def sigmoid(x):
return 1 / (1 + numpy.exp(-x))
def sigmoid_derivative(x):
return (1 - x) * x
def argmax(x):
return numpy.argmax(x, axis=1)
class NeuralNetwork:
def __init__(self, learning_rate=2, epochs=5000, input_size=9, hidden_layer_size=64):
# activation function and its derivative to be used in backpropagation
self.activation_function = sigmoid
self.derivative_of_activation_function = sigmoid_derivative
self.map_output_to_prediction = argmax
# parameters
self.learning_rate = learning_rate
self.epochs = epochs
self.input_size = input_size
self.hidden_layer_size = hidden_layer_size
# initialisation
numpy.random.seed(77)
def fit(self, X, y):
# reset timer
timer_base = time.time()
# initialise the weights of the NN
input_dim = X.shape[2] + self.hidden_layer_size
output_dim = y.shape[1]
self.weights, self.biases = [], []
previous_layer_size = input_dim
for current_layer_size in [self.hidden_layer_size, output_dim]:
print( current_layer_size )
# random initial weights and zero biases
weights_of_current_layer = numpy.random.randn(previous_layer_size, current_layer_size)
bias_of_current_layer = numpy.zeros((1, current_layer_size))
self.weights.append(weights_of_current_layer)
self.biases.append(bias_of_current_layer)
previous_layer_size = current_layer_size
# train the NN
self.accuracy_log = []
for epoch in range(self.epochs + 1):
outputs = self.forward_propagate(X)
prediction = outputs.pop()
if epoch % 100 == 0:
accuracy = self.evaluate(prediction, y)
print(f"In iteration {epoch}, training accuracy is {accuracy}.")
self.accuracy_log.append(accuracy)
# first step of back-propagation
dEdz = y - prediction
layer_input = outputs.pop()
layer_output = prediction
# calculate gradients
dEds, dW, db = self.derivatives_of_last_layer(dEdz, layer_output, layer_input)
# print(dW.shape)
# print(self.weights[1].shape)
# update weights
self.weights[1] += self.learning_rate / X.shape[0] * dW
self.biases[1] += self.learning_rate / X.shape[0] * db
# setup for the next step
previous_gradients = dEds
layer_output = layer_input
# back-propagation through time (unrolled)
# print(self.input_size)
# print(len(outputs))
# back-propagation through time (unrolled)
for step in range(self.input_size - 1, -1, -1):
# TO DO: setup for the current step
layer_input = numpy.concatenate((outputs[step], X[:,step, :]), axis=1)
if step == self.input_size - 1:
    # gradient arrives from the output layer at the last unrolled step
    weight = self.weights[1][:self.hidden_layer_size, :]
else:
    # otherwise from the next step's hidden layer; keep only the rows
    # that feed the hidden state (instead of the hard-coded 64)
    weight = self.weights[0][:self.hidden_layer_size, :]
# TO DO: calculate gradients
gradients, dW, db = self.derivatives_of_hidden_layer(previous_gradients, layer_output, layer_input, weight)
# TO DO: update weights
self.weights[0] += self.learning_rate / X.shape[0] * dW
self.biases[0] += self.learning_rate/ X.shape[0] * db
# TO DO: setup for the next step
previous_gradients = gradients
layer_output = outputs[step]
print(f"Finished training in {time.time() - timer_base} seconds")
def test(self, X, y, verbose=True):
predictions = self.forward_propagate(X)[-1]
if verbose:
for index in range(len(predictions)):
prefix = ''.join(reverse_one_hot_encode(v) for v in X[index])
print(f"Expected {prefix + reverse_one_hot_encode(y[index])}, predicted {prefix + reverse_one_hot_encode(predictions[index])}")
print(f"Testing accuracy: {self.evaluate(predictions, y)}")
def evaluate(self, predictions, target_values):
successful_predictions = numpy.where(self.map_output_to_prediction(predictions) == self.map_output_to_prediction(target_values))
return successful_predictions[0].shape[0] / len(predictions) if successful_predictions else 0
def forward_propagate(self, X):
# initial states
current_state = numpy.zeros((X.shape[0], self.hidden_layer_size))
outputs = [current_state]
# forward propagation through time (unrolled)
for step in range(self.input_size):
x = numpy.concatenate((current_state, X[:, step, :]), axis=1)
current_state = self.apply_neuron(self.weights[0], self.biases[0], x)
outputs.append(current_state)
# the last layer
output = self.apply_neuron(self.weights[1], self.biases[1], current_state)
outputs.append(output)
return outputs
def apply_neuron(self, w, b, x):
return self.activation_function(numpy.dot(x, w) + b)
def derivatives_of_last_layer(self, dEdz, layer_output, layer_input):
dEds = self.derivative_of_activation_function(layer_output) * dEdz
dW = numpy.dot(layer_input.T, dEds)
db = numpy.sum(dEds, axis=0, keepdims=True)
return dEds, dW, db
def derivatives_of_hidden_layer(self, layer_difference, layer_output, layer_input, weight):
gradients = self.derivative_of_activation_function(layer_output) * numpy.dot(layer_difference, weight.T)
dW = numpy.dot(layer_input.T, gradients)
db = numpy.sum(gradients, axis=0, keepdims=True)
return gradients, dW, db
training_input = read_data("./datasets/training_input.txt", encode_string, 9)
training_input
training_label = read_data("./datasets/training_label.txt", one_hot_encode_character, 1)
# print(training_label)
model = NeuralNetwork()
model.fit(training_input, training_label)
testing_input = read_data("./datasets/testing_input.txt", encode_string, 9)
testing_label = read_data("./datasets/testing_label.txt", one_hot_encode_character, 1)
model.test(testing_input, testing_label, verbose=True)
```
# Server OLED Display Controller
This notebook displays a parking lot counter on an OLED display. It communicates with `/server` using HTTP, updating the counters every 2 seconds.
First we must import some stuff and load `config.json`. This is used mainly to store the server address for us.
```
from pynq.lib.pmod import *
from pynq import PL
import oled
import json
import requests
import time
# load config
config = json.load(open('config.json'))
config
```
Now we instantiate the `OledDisplay` class found in `oled.py`. It is mostly a clone of `~/base/pmod/pmod_grove_oled.ipynb`.
We then reset and clear the display, and obtain the LED bar if the config requests it.
```
# display lib from oled.py
show_oled = config.get('show_oled')
if show_oled:
display = oled.OledDisplay(PMODA, PMOD_GROVE_G3)
# reset display
PL.reset()
display.clear()
# get light bar if config requests it
show_bar = config.get('show_bar')
if show_bar:
bar = Grove_LEDbar(PMODB, PMOD_GROVE_G3)
```
Next we define the `write` function, which simply prints a string on a line of the OLED display. It keeps track of the current length of each line, so that when a shorter line replaces a longer one, it can clear out the trailing characters.
```
# track line lengths
old_lines = [0 for _ in range(8)]
# easy printing
def write(text, line=0):
global display
global old_lines
old_len = old_lines[line]
# prevent overflow
if len(text) > 16:
text = text[:16]
new_len = len(text)
# write the text to the line
display.set_XY(line, 0)
display.write(text)
# clear trailing chars
if new_len < old_len:
display.set_XY(line, new_len)
display.write(' ' * (old_len - new_len))
# save the new line length
old_lines[line] = new_len
```
Now we connect to the server and update the display every 2 seconds. We iterate through the first 4 (or fewer) lots, printing their names and counts to the display. Finally the display object is deleted, cleaning things up.
```
server = config['server']
while True:
res = requests.get('{}/lots'.format(server)).json()
lots = res['lots']
# send to light bar
if show_bar:
total = sum(min(max(l['cars'], 0), l['capacity']) for l in lots)
capacity = sum(l['capacity'] for l in lots)
bar.write_level(int(10 * total / capacity), 2, 1)
if show_oled:
# show detailed text
for i in range(4):
if i < len(lots):
lot = lots[i]
write('{}: {}'.format(lot['name'], lot['cars']), i * 2)
else:
write('', i * 2)
time.sleep(2)
if show_oled:
del display
```
<table width="100%"> <tr>
<td style="background-color:#ffffff;">
<a href="http://qworld.lu.lv" target="_blank"><img src="../images/qworld.jpg" width="35%" align="left"> </a></td>
<td style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
prepared by Abuzer Yakaryilmaz (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>)
<br>
updated by Özlem Salehi | September 17, 2020
</td>
</tr></table>
<table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
<h2> <font color="blue"> Solution for </font>Entanglement and Superdense Coding</h2>
<a id="task1"></a>
<h3> Task 1</h3>
Verify the correctness of the above protocol.
For each pair of $ (x,y) \in \left\{ (0,0), (0,1), (1,0),(1,1) \right\} $:
<ul>
<li> Create a quantum circuit with two qubits: Asja's and Balvis' qubits.</li>
<li> Both are initially set to $ \ket{0} $.</li>
<li> Apply h-gate (Hadamard) to the first qubit. </li>
<li> Apply cx-gate (CNOT) with parameters first-qubit and second-qubit. </li>
</ul>
Assume that they are separated now.
<ul>
<li> If $ x $ is 1, then apply z-gate to the first qubit. </li>
<li> If $ y $ is 1, then apply x-gate (NOT) to the first qubit. </li>
</ul>
Assume that Asja sends her qubit to Balvis.
<ul>
<li> Apply cx-gate (CNOT) with parameters first-qubit and second-qubit.</li>
<li> Apply h-gate (Hadamard) to the first qubit. </li>
<li> Measure both qubits, and compare the results with pair $ (x,y) $. </li>
</ul>
<h3> Solution </h3>
```
# import all necessary objects and methods for quantum circuits
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
all_pairs = ['00','01','10','11']
for pair in all_pairs:
# create a quantum circuit with two qubits: Asja's and Balvis' qubits.
# both are initially set to |0>.
qreg = QuantumRegister(2) # quantum register with 2 qubits
creg = ClassicalRegister(2) # classical register with 2 bits
mycircuit = QuantumCircuit(qreg,creg) # quantum circuit with quantum and classical registers
# apply h-gate (Hadamard) to the first qubit.
mycircuit.h(qreg[0])
# apply cx-gate (CNOT) with parameters first-qubit and second-qubit.
mycircuit.cx(qreg[0],qreg[1])
# they are separated now.
# if a is 1, then apply z-gate to the first qubit.
if pair[0]=='1':
mycircuit.z(qreg[0])
# if b is 1, then apply x-gate (NOT) to the first qubit.
if pair[1]=='1':
mycircuit.x(qreg[0])
# Asja sends her qubit to Balvis.
# apply cx-gate (CNOT) with parameters first-qubit and second-qubit.
mycircuit.cx(qreg[0],qreg[1])
# apply h-gate (Hadamard) to the first qubit.
mycircuit.h(qreg[0])
# measure both qubits
mycircuit.measure(qreg,creg)
# compare the results with pair (a,b)
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=100)
counts = job.result().get_counts(mycircuit)
for outcome in counts:
reverse_outcome = ''
for i in outcome:
reverse_outcome = i + reverse_outcome
print("(a,b) is",pair,": ",reverse_outcome,"is observed",counts[outcome],"times")
```
<a id="task3"></a>
<h3>Task 3</h3>
Can the above set-up be used by Balvis?
Verify that the following modified protocol allows Balvis to send two classical bits by sending only his qubit.
For each pair of $ (a,b) \in \left\{ (0,0), (0,1), (1,0),(1,1) \right\} $:
- Create a quantum circuit with two qubits: Asja's and Balvis' qubits
- Both are initially set to $ \ket{0} $
- Apply h-gate (Hadamard) to the Asja's qubit
- Apply cx-gate as CNOT(Asja's-qubit,Balvis'-qubit)
Assume that both qubits are separated from each other.
<ul>
<li> If $ a $ is 1, then apply z-gate to Balvis' qubit. </li>
<li> If $ b $ is 1, then apply x-gate (NOT) to Balvis' qubit. </li>
</ul>
Assume that Balvis sends his qubit to Asja.
- Apply cx-gate as CNOT(Asja's-qubit,Balvis'-qubit)
- Apply h-gate (Hadamard) to the Asja's qubit
- Measure both qubits and compare the results with pair $ (a,b) $
<h3> Solution </h3>
```
# import all necessary objects and methods for quantum circuits
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
all_pairs = ['00','01','10','11']
for pair in all_pairs:
# create a quantum circuit with two qubits: Asja's and Balvis' qubits.
# both are initially set to |0>.
q = QuantumRegister(2,"q") # quantum register with 2 qubits
c = ClassicalRegister(2,"c") # classical register with 2 bits
qc = QuantumCircuit(q,c) # quantum circuit with quantum and classical registers
# apply h-gate (Hadamard) to the Asja's qubit
qc.h(q[1])
# apply cx-gate as CNOT(Asja's-qubit,Balvis'-qubit)
qc.cx(q[1],q[0])
# they are separated from each other now
# if a is 1, then apply z-gate to Balvis' qubit
if pair[0]=='1':
qc.z(q[0])
# if b is 1, then apply x-gate (NOT) to Balvis' qubit
if pair[1]=='1':
qc.x(q[0])
# Balvis sends his qubit to Asja
qc.barrier()
# apply cx-gate as CNOT(Asja's-qubit,Balvis'-qubit)
qc.cx(q[1],q[0])
# apply h-gate (Hadamard) to the Asja's qubit
qc.h(q[1])
# measure both qubits
qc.barrier()
qc.measure(q,c)
# draw the circuit in Qiskit's reading order
display(qc.draw(output='mpl',reverse_bits=True))
# compare the results with pair (a,b)
job = execute(qc,Aer.get_backend('qasm_simulator'),shots=100)
counts = job.result().get_counts(qc)
print(pair,"-->",counts)
```
<a href="https://colab.research.google.com/github/claytonchagas/intpy_prod/blob/main/3_3_automatic_evaluation_power_recursive_ast_only_DB.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!sudo apt-get update
!sudo apt-get install python3.9
!python3.9 -V
!which python3.9
```
#**i. Colab hardware and software specs:**
- n1-highmem-2 instance
- 2vCPU @ 2.3GHz
- 13GB RAM
- 100GB Free Space
- idle cut-off 90 minutes
- maximum lifetime 12 hours
```
# Colab hardware info (processor and memory):
# !cat /proc/cpuinfo
# !cat /proc/memoinfo
# !lscpu
!lscpu | egrep 'Model name|Socket|Thread|NUMA|CPU\(s\)'
print("---------------------------------")
!free -m
# Colab SO structure and version
!ls -a
print("---------------------------------")
!ls -l /
print("---------------------------------")
!lsb_release -a
```
#**ii. Cloning IntPy repository:**
- https://github.com/claytonchagas/intpy_dev.git
```
!git clone https://github.com/claytonchagas/intpy_dev.git
!ls -a
print("---------------------------------")
%cd intpy_dev/
!git checkout c27b261
!ls -a
print("---------------------------------")
!git branch
print("---------------------------------")
#!git log --pretty=oneline --abbrev-commit
#!git log --all --decorate --oneline --graph
```
#**iii. Power's evolutions and cutoff by approach**
- Evaluating recursive power code and its cutoff by approach
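`power_recursive.py` itself is not shown in this notebook; purely as an illustration of the kind of recursive power function and intra-execution caching being benchmarked (the names below are hypothetical and do not reflect IntPy's actual API), a memoized version could look like:

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # stands in for an intra-execution cache
def power(base, exp):
    # recursive power: each call reuses the cached result for exp - 1
    if exp == 0:
        return 1
    return base * power(base, exp - 1)

print(power(2, 10))  # 1024
```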
```
!ls -a
print("---------------------------------")
print("Cleaning up cache")
!rm -rf .intpy
!rm -rf output_iii.dat
print("--no-cache execution")
#1..327
!for i in {1..300}; do python3.9 power_recursive.py 1 $i --no-cache >> output_iii.dat; rm -rf .intpy; done
print("done!")
print("only intra cache")
!for i in {1..300}; do python3.9 power_recursive.py 1 $i -v v01x >> output_iii.dat; rm -rf .intpy; done
print("done!")
print("full cache")
!for i in {1..300}; do python3.9 power_recursive.py 1 $i -v v01x >> output_iii.dat; done
print("done!")
import matplotlib.pyplot as plt
import numpy as np
f1 = open("output_iii.dat", "r")
data1 = []
dataf1 = []
for x in f1.readlines()[3:1200:4]:
data1.append(float(x))
f1.close()
for datas1 in data1:
dataf1.append(round(datas1, 5))
print(dataf1)
f2 = open("output_iii.dat", "r")
data2 = []
dataf2 = []
for x in f2.readlines()[1203:2400:4]:
data2.append(float(x))
f2.close()
for datas2 in data2:
dataf2.append(round(datas2, 5))
print(dataf2)
f3 = open("output_iii.dat", "r")
data3 = []
dataf3 = []
for x in f3.readlines()[2403:3600:4]:
data3.append(float(x))
f3.close()
for datas3 in data3:
dataf3.append(round(datas3, 5))
print(dataf3)
x = np.arange(1, 301)
#plt.style.use('classic')
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.set_figheight(5)
fig.set_figwidth(14)
fig.suptitle("Power's evolutions and cutoff by approach", fontweight='bold')
ax1.plot(x, dataf1, "tab:blue", label="no-cache")
ax1.plot(x, dataf2, "tab:orange", label="intra cache")
ax1.plot(x, dataf3, "tab:green", label="full cache")
#ax1.set_title("Power's evolutions and cutoff by approach")
ax1.set_xlabel("Power's Series Value")
ax1.set_ylabel("Time in seconds")
ax1.grid()
lex = ax1.legend()
ax2.plot(x, dataf1, "tab:blue", label="no-cache")
ax2.plot(x, dataf3, "tab:green", label="full cache")
#ax2.set_title("Power's evolutions and cutoff by approach")
ax2.set_xlabel("Power's Series Value")
ax2.set_ylabel("Time in seconds")
ax2.grid()
lex = ax2.legend()
plt.show()
```
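The three near-identical read blocks above could be collapsed into one helper. A sketch, assuming the same output layout (each run contributes four lines to the `.dat` file, with the elapsed time on the fourth):

```python
def read_times(path, start, stop, digits=5):
    """Read every 4th line (the timing line) from a slice of the output file."""
    with open(path, "r") as f:
        lines = f.readlines()[start:stop:4]
    return [round(float(x), digits) for x in lines]

# dataf1 = read_times("output_iii.dat", 3, 1200)     # no-cache runs
# dataf2 = read_times("output_iii.dat", 1203, 2400)  # intra-cache runs
# dataf3 = read_times("output_iii.dat", 2403, 3600)  # full-cache runs
```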
#**iv. Power 1 ^ 300, 150 and 75 recursive, three mixed trials**
- Evaluating recursive power code, input 300, 150, and 75, three trials and plot.
- First trial: inputs: 1 ^ 300, 150 and 75, no inter-cache (baseline).
- Second trial: with inter and intra-cache, inputs: 1 ^ 300, 150 and 75, analyzing the cache's behavior with different inputs.
- Third trial: with inter and intra-cache, inputs: 1 ^ 75, 150 and 300, analyzing the cache's behavior with different inputs, in a different order of the previous running.
```
print("---------------------------------")
print("Cleaning up cache")
!rm -rf .intpy
!rm -rf output_iv.dat
print("First running, Power 1^300: value and time in sec")
!python3.9 power_recursive.py 1 300 -v v01x | tee -a output_iv.dat
print("---------------------------------")
print("Cleaning up cache")
!rm -rf .intpy
print("Second running, Power 1^150: value and time in sec")
!python3.9 power_recursive.py 1 150 -v v01x | tee -a output_iv.dat
print("---------------------------------")
print("Cleaning up cache")
!rm -rf .intpy
print("Third running, Power 1^75: value and time in sec")
!python3.9 power_recursive.py 1 75 -v v01x | tee -a output_iv.dat
print("---------------------------------")
```
- Second trial: with inter and intra-cache, inputs: 1 ^ 300, 150 and 75.
```
print("---------------------------------")
print("Cleaning up cache")
!rm -rf .intpy
print("First running, Power 1^300: value and time in sec")
!python3.9 power_recursive.py 1 300 -v v01x | tee -a output_iv.dat
print("---------------------------------")
print("Second running, Power 1^150: value and time in sec")
!python3.9 power_recursive.py 1 150 -v v01x | tee -a output_iv.dat
print("---------------------------------")
print("Third running, Power 1^75: value and time in sec")
!python3.9 power_recursive.py 1 75 -v v01x | tee -a output_iv.dat
print("---------------------------------")
```
- Third trial: with inter and intra-cache, inputs: 1 ^ 75, 150 and 300.
```
print("---------------------------------")
print("Cleaning up cache")
!rm -rf .intpy
print("First running, Power 1^75: value and time in sec")
!python3.9 power_recursive.py 1 75 -v v01x | tee -a output_iv.dat
print("---------------------------------")
print("Second running, Power 1^150: value and time in sec")
!python3.9 power_recursive.py 1 150 -v v01x | tee -a output_iv.dat
print("---------------------------------")
print("Third running, Power 1^300: value and time in sec")
!python3.9 power_recursive.py 1 300 -v v01x | tee -a output_iv.dat
print("---------------------------------")
```
- Plotting the comparison: first graph.
```
import numpy as np
f4 = open("output_iv.dat", "r")
data4 = []
dataf4 = []
for x in f4.readlines()[3::4]:
    data4.append(float(x))
f4.close()
for datas4 in data4:
    dataf4.append(round(datas4, 6))
print(dataf4)
pow300 = [dataf4[0], dataf4[3], dataf4[8]]
print(pow300)
pow150 = [dataf4[1], dataf4[4], dataf4[7]]
print(pow150)
pow75 = [dataf4[2], dataf4[5], dataf4[6]]
print(pow75)
running3to5 = ['1st trial: cache intra', '2nd trial: cache inter-intra/desc', '3rd trial: cache inter-intra/asc']
y = np.arange(len(running3to5))
width = 0.40
z = ['Pow 300', 'Pow 150', 'Pow 75']
list_color_z = ['blue', 'orange', 'green']
zr = ['Pow 75', 'Pow 150', 'Pow 300']
list_color_zr = ['green', 'orange', 'blue']
t1=[dataf4[0], dataf4[1], dataf4[2]]
t2=[dataf4[3], dataf4[4], dataf4[5]]
t3=[dataf4[6], dataf4[7], dataf4[8]]
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, sharey=True, figsize=(11,5))
rects1 = ax1.bar(z, t1,width, label='1st trial', color=list_color_z)
rects2 = ax2.bar(z, t2, width, label='2nd trial', color=list_color_z)
rects3 = ax3.bar(zr, t3, width, label='3rd trial', color=list_color_zr)
ax1.set_ylabel('Time in seconds', fontweight='bold')
ax1.set_xlabel('1st trial: cache intra', fontweight='bold')
ax2.set_xlabel('2nd trial: cache inter-intra/desc', fontweight='bold')
ax3.set_xlabel('3rd trial: cache inter-intra/asc', fontweight='bold')
ax2.set_title('Power recursive 300, 150 and 75 v0.1.x', fontweight='bold')
for index, datas in enumerate(t1):
    ax1.text(x=index, y=datas, s=t1[index], ha='center', va='bottom', fontweight='bold')
for index, datas in enumerate(t2):
    ax2.text(x=index, y=datas, s=t2[index], ha='center', va='bottom', fontweight='bold')
for index, datas in enumerate(t3):
    ax3.text(x=index, y=datas, s=t3[index], ha='center', va='bottom', fontweight='bold')
ax1.grid(axis='y')
ax2.grid(axis='y')
ax3.grid(axis='y')
fig.tight_layout()
plt.savefig('chart_iv_pow_75_150_300_v01x.png')
plt.show()
```
#**1. Fast execution, all versions (v0.1.x and from v0.2.1.x to v0.2.7.x)**
##**1.1 Fast execution: only intra-cache**
###**1.1.1 Fast execution: only intra-cache => experiment's executions**
```
!rm -rf .intpy;\
rm -rf stats_intra.dat;\
echo "IntPy only intra-cache";\
experimento=power_recursive.py;\
param1=1;\
param2=300;\
echo "Experiment: $experimento";\
echo "Params: $param1 $param2";\
for i in "--no-cache" "v01x" "v021x" "v022x" "v023x" "v024x" "v025x" "v027x";\
do rm -rf output_intra_$i.dat;\
rm -rf .intpy;\
echo "---------------------------------";\
echo "IntPy version $i";\
for j in {1..5};\
do echo "Execution $j";\
rm -rf .intpy;\
if [ "$i" = "--no-cache" ]; then python3.9 $experimento $param1 $param2 $i >> output_intra_$i.dat;\
else python3.9 $experimento $param1 $param2 -v $i >> output_intra_$i.dat;\
fi;\
echo "Done execution $j";\
done;\
echo "Done IntPy version $i";\
done;\
echo "---------------------------------";\
echo "---------------------------------";\
echo "Statistics evaluation:";\
for k in "--no-cache" "v01x" "v021x" "v022x" "v023x" "v024x" "v025x" "v027x";\
do echo "Statistics version $k" >> stats_intra.dat;\
echo "Statistics version $k";\
python3.9 stats_colab.py output_intra_$k.dat;\
python3.9 stats_colab.py output_intra_$k.dat >> stats_intra.dat;\
echo "---------------------------------";\
done;\
```
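`stats_colab.py` is not shown in this notebook. A minimal stand-in that is consistent with the timing files above might look like the following; note that the chart cells below parse `stats_*.dat` expecting a fixed number of lines per version block with the median after an 8-character label, so the exact output format here is an assumption and would need to be aligned with that parser:

```python
import statistics
import sys

def print_stats(path):
    """Print summary statistics for a timing file in which every 4th line
    (offset 3), as in the output_*.dat files above, holds an elapsed time."""
    with open(path) as f:
        times = [float(x) for x in f.readlines()[3::4]]
    print(f"n:      {len(times)}")
    print(f"min:    {min(times):.5f}")
    print(f"max:    {max(times):.5f}")
    print(f"mean:   {statistics.mean(times):.5f}")
    print(f"median: {statistics.median(times):.5f}")
    print(f"stdev:  {statistics.pstdev(times):.5f}")

if __name__ == "__main__" and len(sys.argv) > 1:
    print_stats(sys.argv[1])
```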
###**1.1.2 Fast execution: only intra-cache => charts generation**
```
%matplotlib inline
import matplotlib.pyplot as plt
versions = ['--no-cache', 'v01x', 'v021x', 'v022x', 'v023x', 'v024x', 'v025x', 'v027x']
colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:purple', 'tab:grey', 'tab:olive', 'tab:cyan', 'tab:pink']
filev = "f_intra_"
data = "data_intra_"
dataf = "dataf_intra_"
for i, j in zip(versions, colors):
    filev_version = filev+i
    data_version = data+i
    dataf_version = dataf+i
    file_intra = open("output_intra_"+i+".dat", "r")
    data_intra = []
    dataf_intra = []
    for x in file_intra.readlines()[3::4]:
        data_intra.append(float(x))
    file_intra.close()
    for y in data_intra:
        dataf_intra.append(round(y, 5))
    print(i+": ", dataf_intra)
    running1_1 = ['1st', '2nd', '3rd', '4th', '5th']
    plt.figure(figsize=(10, 5))
    plt.bar(running1_1, dataf_intra, color=j, width=0.4)
    plt.grid(axis='y')
    for index, datas in enumerate(dataf_intra):
        plt.text(x=index, y=datas, s=datas, ha='center', va='bottom', fontweight='bold')
    plt.xlabel("Running only with intra cache "+i, fontweight='bold')
    plt.ylabel("Time in seconds", fontweight='bold')
    plt.title("Chart "+i+" intra - Power 1^300 recursive - with intra cache, no inter cache - IntPy "+i+" version", fontweight='bold')
    plt.savefig("chart_intra_"+i+".png")
    plt.close()
    #plt.show()
import matplotlib.pyplot as plt
file_intra = open("stats_intra.dat", "r")
data_intra = []
for x in file_intra.readlines()[5::8]:
    data_intra.append(round(float(x[8::]), 5))
file_intra.close()
print(data_intra)
versions = ["--no-cache", "0.1.x", "0.2.1.x", "0.2.2.x", "0.2.3.x", "0.2.4.x", "0.2.5.x", "0.2.7.x"]
#colors =['royalblue', 'forestgreen', 'orangered', 'purple', 'skyblue', 'lime', 'lightgrey', 'tan']
colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:purple', 'tab:grey', 'tab:olive', 'tab:cyan', 'tab:pink']
plt.figure(figsize = (10, 5))
plt.bar(versions, data_intra, color = colors, width = 0.7)
plt.grid(axis='y')
for index, datas in enumerate(data_intra):
    plt.text(x=index, y=datas, s=datas, ha='center', va='bottom', fontweight='bold')
plt.xlabel("Median for 5 executions in each version, intra cache", fontweight='bold')
plt.ylabel("Time in seconds", fontweight='bold')
plt.title("Power 300 recursive, cache intra-running, comparison of all versions", fontweight='bold')
plt.savefig('compare_median_intra.png')
plt.close()
#plt.show()
```
##**1.2 Fast execution: full cache -> intra and inter-cache**
###**1.2.1 Fast execution: full cache -> intra and inter-cache => experiment's executions**
```
!rm -rf .intpy;\
rm -rf stats_full.dat;\
echo "IntPy full cache -> intra and inter-cache";\
experimento=power_recursive.py;\
param1=1;\
param2=300;\
echo "Experiment: $experimento";\
echo "Params: $param1 $param2";\
for i in "--no-cache" "v01x" "v021x" "v022x" "v023x" "v024x" "v025x" "v027x";\
do rm -rf output_full_$i.dat;\
rm -rf .intpy;\
echo "---------------------------------";\
echo "IntPy version $i";\
for j in {1..5};\
do echo "Execution $j";\
if [ "$i" = "--no-cache" ]; then python3.9 $experimento $param1 $param2 $i >> output_full_$i.dat;\
else python3.9 $experimento $param1 $param2 -v $i >> output_full_$i.dat;\
fi;\
echo "Done execution $j";\
done;\
echo "Done IntPy version $i";\
done;\
echo "---------------------------------";\
echo "---------------------------------";\
echo "Statistics evaluation:";\
for k in "--no-cache" "v01x" "v021x" "v022x" "v023x" "v024x" "v025x" "v027x";\
do echo "Statistics version $k" >> stats_full.dat;\
echo "Statistics version $k";\
python3.9 stats_colab.py output_full_$k.dat;\
python3.9 stats_colab.py output_full_$k.dat >> stats_full.dat;\
echo "---------------------------------";\
done;\
```
###**1.2.2 Fast execution: full cache -> intra and inter-cache => charts generation**
```
%matplotlib inline
import matplotlib.pyplot as plt
versions = ['--no-cache', 'v01x', 'v021x', 'v022x', 'v023x', 'v024x', 'v025x', 'v027x']
colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:purple', 'tab:grey', 'tab:olive', 'tab:cyan', 'tab:pink']
filev = "f_full_"
data = "data_full_"
dataf = "dataf_full_"
for i, j in zip(versions, colors):
    filev_version = filev+i
    data_version = data+i
    dataf_version = dataf+i
    file_full = open("output_full_"+i+".dat", "r")
    data_full = []
    dataf_full = []
    for x in file_full.readlines()[3::4]:
        data_full.append(float(x))
    file_full.close()
    for y in data_full:
        dataf_full.append(round(y, 5))
    print(i+": ", dataf_full)
    running1_1 = ['1st', '2nd', '3rd', '4th', '5th']
    plt.figure(figsize=(10, 5))
    plt.bar(running1_1, dataf_full, color=j, width=0.4)
    plt.grid(axis='y')
    for index, datas in enumerate(dataf_full):
        plt.text(x=index, y=datas, s=datas, ha='center', va='bottom', fontweight='bold')
    plt.xlabel("Running full cache "+i, fontweight='bold')
    plt.ylabel("Time in seconds", fontweight='bold')
    plt.title("Chart "+i+" full - Power 1^300 recursive - with intra and inter cache - IntPy "+i+" version", fontweight='bold')
    plt.savefig("chart_full_"+i+".png")
    plt.close()
    #plt.show()
import matplotlib.pyplot as plt
file_full = open("stats_full.dat", "r")
data_full = []
for x in file_full.readlines()[5::8]:
    data_full.append(round(float(x[8::]), 5))
file_full.close()
print(data_full)
versions = ["--no-cache", "0.1.x", "0.2.1.x", "0.2.2.x", "0.2.3.x", "0.2.4.x", "0.2.5.x", "0.2.7.x"]
#colors =['royalblue', 'forestgreen', 'orangered', 'purple', 'skyblue', 'lime', 'lightgrey', 'tan']
colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:purple', 'tab:grey', 'tab:olive', 'tab:cyan', 'tab:pink']
plt.figure(figsize = (10, 5))
plt.bar(versions, data_full, color = colors, width = 0.7)
plt.grid(axis='y')
for index, datas in enumerate(data_full):
    plt.text(x=index, y=datas, s=datas, ha='center', va='bottom', fontweight='bold')
plt.xlabel("Median for 5 executions in each version, full cache", fontweight='bold')
plt.ylabel("Time in seconds", fontweight='bold')
plt.title("Power 300 recursive, cache intra and inter-running, comparison of all versions", fontweight='bold')
plt.savefig('compare_median_full.png')
plt.close()
#plt.show()
```
##**1.3 Displaying charts to all versions**
###**1.3.1 Only intra-cache charts**
```
versions = ['--no-cache', 'v01x', 'v021x', 'v022x', 'v023x', 'v024x', 'v025x', 'v027x']
from IPython.display import Image, display
for i in versions:
    display(Image("chart_intra_"+i+".png"))
    print("=====================================================================================")
```
###**1.3.2 Full cache charts -> intra and inter-cache**
```
versions = ['--no-cache', 'v01x', 'v021x', 'v022x', 'v023x', 'v024x', 'v025x', 'v027x']
from IPython.display import Image, display
for i in versions:
    display(Image("chart_full_"+i+".png"))
    print("=====================================================================================")
```
###**1.3.3 Only intra-cache: median comparison chart of all versions**
```
from IPython.display import Image, display
display(Image("compare_median_intra.png"))
```
###**1.3.4 Full cache -> intra and inter-cache: median comparison chart of all versions**
```
from IPython.display import Image, display
display(Image("compare_median_full.png"))
```
| github_jupyter |
<a href="https://colab.research.google.com/github/suyash091/EEG-MNIST-Analysis/blob/master/Training_EEG_randomforest.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import pandas as pd
import gc
import numpy as np
from google.colab import drive
drive.mount('/content/drive')
Df1=pd.read_csv('/content/drive/My Drive/BCI MNIST/xvar1.csv')
Df2=pd.read_csv('/content/drive/My Drive/BCI MNIST/xvar2.csv')
Df3=pd.read_csv('/content/drive/My Drive/BCI MNIST/xvar3.csv')
bigdata = pd.concat([Df1, Df2, Df3], ignore_index=True, sort =False)
del Df1
del Df2
del Df3
bigdata.drop(bigdata.columns[bigdata.columns.str.contains('unnamed',case = False)],axis = 1, inplace = True)
bigdata.head()
pdf1=pd.read_csv('/content/drive/My Drive/BCI MNIST/yvar1.csv')
pdf2=pd.read_csv('/content/drive/My Drive/BCI MNIST/yvar2.csv')
pdf3=pd.read_csv('/content/drive/My Drive/BCI MNIST/yvar3.csv')
bigpred = pd.concat([pdf1, pdf2, pdf3], ignore_index=True, sort =False)
digit_map = {'-1-1-1-1-1-1-1-1-1-1-1-1-1-1': 10,
             '00000000000000': 0, '11111111111111': 1, '22222222222222': 2,
             '33333333333333': 3, '44444444444444': 4, '55555555555555': 5,
             '66666666666666': 6, '77777777777777': 7, '88888888888888': 8,
             '99999999999999': 9,
             # the same labels sometimes appear as plain integers
             0: 0, 11111111111111: 1, 22222222222222: 2, 33333333333333: 3,
             44444444444444: 4, 55555555555555: 5, 66666666666666: 6,
             77777777777777: 7, 88888888888888: 8, 99999999999999: 9}
bigpred = bigpred['0'].map(digit_map)
del pdf1
del pdf2
del pdf3
gc.collect()
len(bigpred.dropna())==len(bigpred)
bigpred.head()
# bigpred is a Series after the map above, so there are no unnamed columns left to drop
#bigpred.head(100)
len(bigdata)==len(bigpred)
#Import models from scikit learn module:
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold #For K-fold cross validation
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn import metrics
import numpy as np
from sklearn.metrics import roc_curve, auc, precision_score, confusion_matrix
#Generic function for making a classification model and accessing performance:
def classification_model(model, predictors, outcome):
    history = model.fit(predictors, outcome.values.ravel())
    predictions = model.predict(predictors)
    accuracy = metrics.accuracy_score(predictions, outcome)
    print('Accuracy : %s' % '{0:.3%}'.format(accuracy))
    kf = KFold(n_splits=2)
    error = []
    for train, test in kf.split(predictors):
        train_predictors = predictors.iloc[train, :]
        train_target = outcome.iloc[train]
        model.fit(train_predictors, train_target.values.ravel())
        error.append(model.score(predictors.iloc[test, :], outcome.iloc[test]))
    print('Cross-Validation Score : %s' % '{0:.3%}'.format(np.mean(error)))
    return history
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(bigdata, bigpred, test_size=0.20, random_state=42)
model = RandomForestClassifier(n_estimators=100)
model.fit(X_train,y_train)
idx = np.random.permutation(X_test.index)
X_test=X_test.reindex(idx)
y_test=y_test.reindex(idx)
predictions = model.predict(X_test)
prc=precision_score(predictions,y_test, average=None)
cfm=confusion_matrix(predictions,y_test)
accuracy = metrics.accuracy_score(predictions,y_test)
#false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, predictions)
#auc = auc(false_positive_rate, true_positive_rate)
print(accuracy)
print(prc)
print(cfm)
#history=classification_model(model,X_test,y_test)
```
| github_jupyter |
<a href="https://colab.research.google.com/github/jy6zheng/FacialExpressionRecognition/blob/master/Facial_recognition.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Set-up
First, go to Runtime -> Change runtime type and select GPU to improve the speed of training
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
# !curl -s https://course.fast.ai/setup/colab | bash
```
Mount Google Drive to enable file browsing. Otherwise, the data saved in your files will be deleted when the runtime ends
```
# from google.colab import drive
# drive.mount('/content/gdrive', force_remount=True)
# root_dir = "/content/gdrive/My Drive/"
# base_dir = root_dir + 'projects/'
from fastai.vision import *
from fastai.metrics import error_rate
import numpy as np
import os
!pip --version
```
# Download dataset
```
!wget https://www.kaggle.com/jonathanoheix/face-expression-recognition-dataset
```
# Process Data
Create directory for face images. Then, unzip the dataset from kaggle (or unzip my cleaned dataset) and move to directory
```
base_dir = "~/.datasets/"
path = Path(base_dir+'faces')
path.mkdir(parents=True, exist_ok=True)
# !unzip -o face-expression-recognition-dataset.zip
# !mv images/ 'gdrive/My Drive/gaia/faces'
```
The np.random.seed(42) ensures that the random numbers are replicable. Using the fast.ai library, an image data bunch is created from the train and validation folders. </br>
All images are cropped to 224x224 since that is what the resnet34 architecture is trained on. The **transformations** adjust the photos by cropping, centering and zooming in on the images. </br>
**Normalization** ensures that the three color channels (red, green and blue) have normalized pixel values (mean of 0 and standard deviation of 1)
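The effect of per-channel normalization can be illustrated directly with NumPy, independent of the fastai pipeline (the batch shape and values below are made up):

```python
import numpy as np

# a fake batch of RGB images: (batch, height, width, channels)
imgs = np.random.rand(8, 224, 224, 3).astype(np.float32)

# per-channel statistics across the whole batch
mean = imgs.mean(axis=(0, 1, 2))
std = imgs.std(axis=(0, 1, 2))

# normalize: each color channel now has mean ~0 and std ~1
normed = (imgs - mean) / std

print(normed.mean(axis=(0, 1, 2)))  # ~[0. 0. 0.]
print(normed.std(axis=(0, 1, 2)))   # ~[1. 1. 1.]
```

In practice, pretrained models are normalized with the statistics of their pretraining data (e.g. `imagenet_stats`) rather than batch statistics, which is what the commented-out `normalize(imagenet_stats)` call below refers to.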
```
np.random.seed(42)
path_data = os.path.join(base_dir+'images')
data = ImageDataLoaders.from_folder(path_data, train="train", valid="validation",
size=224, num_workers=4)
# normalize(imagenet_stats)
# ds_tfms=get_transforms()
data.valid_ds
data.show_batch(rows=3, figsize=(7,6))
print(data.classes)
len(data.classes),data.c
```
# Training
We will use a learner to train the model. The learner takes in the image data bunch as well as the resnet34 architecture to train the model. The metrics will be used to print the error_rate when training
```
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
```
Then we train the model. With learn.fit_one_cycle(4) we pass over the complete dataset four times. After 4 epochs, the error rate has decreased to 29.8%
```
learn.fit_one_cycle(4)
learn.save('stage-1')
learn.load('stage-1')
```
Before, we were only training the few extra layers of the model near the end. With **learn.unfreeze()** we are able to train the whole model. </br>
Then we run the learning rate finder and plot loss against learning rate. From there, we find where the steepest downward slope of the loss occurs (around 1e-4) and use that range to fit 8 more epochs
```
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8, max_lr=slice(1e-4,3e-4))
learn.save('stage-2')
learn.load('stage-2');
```
# Interpreting Results
```
interp = ClassificationInterpretation.from_learner(learn)
```
For interpreting the results, it is clear from the confusion matrix that sad and neutral faces are often confused. This makes sense since many of the sad and neutral faces are similar and hard to distinguish
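The way a confusion matrix exposes confused class pairs can be checked on a toy example; the labels and predictions below are made up, not taken from the model:

```python
# a tiny confusion matrix by hand: rows = true class, cols = predicted class
labels = ["happy", "neutral", "sad"]
y_true = ["sad", "sad", "neutral", "neutral", "happy", "happy"]
y_pred = ["sad", "neutral", "neutral", "sad", "happy", "happy"]

idx = {lab: i for i, lab in enumerate(labels)}
cm = [[0] * len(labels) for _ in labels]
for t, p in zip(y_true, y_pred):
    cm[idx[t]][idx[p]] += 1

for lab, row in zip(labels, cm):
    print(f"{lab:>8}: {row}")
# off-diagonal entries show which classes get confused
# (here sad <-> neutral, mirroring the observation above)
```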
```
interp.plot_confusion_matrix()
```
Also, looking at the photos that caused the top losses (the largest differences between the predicted and actual labels), it is clear that some data is still dirty and mislabelled, so there is still room for improvement
```
interp.plot_top_losses(30, figsize=(15,11))
```
Now export the model, and download it (export.pkl)
```
learn.export()
```
| github_jupyter |
```
import os
import pickle as pkl
import numpy as np
import pandas as pd
from collections import defaultdict
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="darkgrid")
BASE_PATH = "/home/nrahaman/python/simulator/work/fancyplots_bigomo_careful_biginit"
METHODS = {p: os.path.join(BASE_PATH, p) for p in os.listdir(BASE_PATH)}
# Shortcuts
BDT1 = "binary_digital_tracing_order_1"
BDT2 = "binary_digital_tracing_order_2"
UNMITIGATED = "no_intervention"
ORACLE = "oracle"
TRANSFORMER = lambda name: f"transformer{name}"
METHODS
def load_tracker_data(method, condition=lambda people, days, init, uptake, seed: True):
    pickles, paths = [], []
    sims = os.listdir(METHODS[method])
    for sim in sims:
        sim_stats = sim.split("_")
        people = int(sim_stats[2].split("-")[-1])
        days = int(sim_stats[3].split("-")[-1])
        init = float(sim_stats[4].split("-")[-1])
        uptake = float(sim_stats[5].split("-")[-1])
        seed = int(sim_stats[6].split("-")[-1])
        if condition(people, days, init, uptake, seed):
            sim_path = os.path.join(METHODS[method], sim)
            for artefact in os.listdir(sim_path):
                if not artefact.startswith("tracker_data"):
                    continue
                artefact_path = os.path.join(sim_path, artefact)
                print(f"Loading artefact: {artefact_path}")
                with open(artefact_path, "rb") as f:
                    pickles.append(pkl.load(f))
                paths.append(artefact_path)
    return pickles, paths
# # This is commented out to prevent accidental overwriting
stats = {}
# ------------------------------CONTROL PANEL--------------------------------
exclude = {}
# exclude = {TRANSFORMER("PROUD-DONKEY-686B")}
# include = {ORACLE, TRANSFORMER("AVID-WAVE-652B"), TRANSFORMER("PROUD-DONKEY-686A"), BDT1, BDT2, UNMITIGATED}
include = METHODS
reload = {TRANSFORMER("PROUD-DONKEY-686B")}
# reload = {}
# ---------------------------------------------------------------------------
# ---------------------------------------------------------------------------
for method in include:
    if method in exclude:
        continue
    if method in reload and method in stats:
        del stats[method]
    if method in stats:
        continue
    print(f"---------{method}----------")
    pickles, _ = load_tracker_data(method)
    stats[method] = defaultdict(list)
    for pickle in pickles:
        stats[method]["cases"].append(pickle["cases_per_day"])
        stats[method]["infected"].append(sum(pickle["cases_per_day"]))
        stats[method]["effective_contacts"].append(pickle["effective_contacts_since_intervention"])
        stats[method]["expected_mobility"].append(pickle["expected_mobility"])
        stats[method]["rec_level"].append(pickle["humans_intervention_level"])
        stats[method]["outside_daily_contacts"].append(pickle["outside_daily_contacts"])
        stats[method]["outside_effective_contacts"].append(np.mean(pickle["outside_daily_contacts"][pickle["intervention_day"]:]))
    del pickles
# ------------------------------CONTROL PANEL--------------------------------
labels = {
UNMITIGATED: "Unmitigated",
BDT1: "BDT1",
BDT2: "BDT2",
TRANSFORMER("AVID-WAVE-652B"): "Transformer",
TRANSFORMER("PROUD-DONKEY-686A"): "Forward-Transformer (O)",
TRANSFORMER("PROUD-DONKEY-686B"): "Forward-Transformer",
TRANSFORMER("SCARLET-DAWN-653A"): "LR",
TRANSFORMER("OLIVE-PLANT-687A"): "Forward-LR",
ORACLE: "Oracle",
}
exclude = []
# include = METHODS
include = [
# TRANSFORMER("OLIVE-PLANT-687A"),
# TRANSFORMER("SCARLET-DAWN-653A"),
TRANSFORMER("AVID-WAVE-652B"),
TRANSFORMER("PROUD-DONKEY-686A"),
# TRANSFORMER("PROUD-DONKEY-686B"),
BDT1,
ORACLE,
]
plot_type = "scatter"
kde_alpha = 0.7
x_key = "outside_effective_contacts"
y_key = "infected"
# ---------------------------------------------------------------------------
# ---------------------------------------------------------------------------
plt.figure(figsize=(10, 6))
ax = plt.gca()
for method, stat in stats.items():
    if method in exclude or method not in include:
        continue
    if plot_type == "kde":
        sns.kdeplot(stat[x_key], stat[y_key], ax=ax,
                    label=labels[method], shade=True,
                    alpha=kde_alpha, shade_lowest=False)
    elif plot_type == "scatter":
        plt.scatter(stat[x_key], stat[y_key],
                    label=labels[method])
plt.xlabel(x_key.replace("_", " ").title())
plt.ylabel(y_key.replace("_", " ").title())
plt.legend()
plt.title("Uptake = 84.15%")
plt.show()
# ------------------------------CONTROL PANEL--------------------------------
plot_type = "print"
kde_alpha = 1.0
kde_shade = False
# ---------------------------------------------------------------------------
# ---------------------------------------------------------------------------
pretty_x = x_key.replace("_", " ").title()
pretty_y = y_key.replace("_", " ").title()
if plot_type != "print":
    plt.figure(figsize=(10, 6))
for method in stats:
    if method in exclude or method not in include:
        continue
    ratio = np.array(stats[method][y_key]) / np.array(stats[method][x_key])
    if plot_type == "print":
        print(f"Method: {labels[method]} >> ")
        print(f"{pretty_y}/{pretty_x} (Mean) : {ratio.mean():.4f}")
        print(f"{pretty_y}/{pretty_x} (Std) : {ratio.std():.4f}")
        print(f"{pretty_y}/{pretty_x} (Median) : {np.median(ratio):.4f}")
    elif plot_type == "hist":
        plt.hist(ratio, label=labels[method], alpha=0.6)
    elif plot_type == "kde":
        sns.kdeplot(ratio, label=labels[method], alpha=kde_alpha, shade=kde_shade)
if plot_type != "print":
    plt.xlabel(f"{pretty_y} / {pretty_x}")
    plt.legend()
    plt.show()
```
| github_jupyter |
# Chapter 4 - Introduction to Autoregressive and Automated Methods for Time Series Forecasting - Azure Machine Learning Example
## Automated Machine Learning
```
# This should be done in a separate environment, as azureml-sdk conflicts with some of our package versions such as statsmodels 0.12
# The environment this was tested with is
# name: azureml
# channels:
# - defaults
# - conda-forge
# dependencies:
# - python=3.6
# - matplotlib=3.1.1
# - pandas=1.1.1
# - pip
# - pip:
# - azureml-sdk[automl,notebooks,explain]
# - azuremlftk
# - azure-cli
import logging
import os
import warnings
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from datetime import datetime
import azureml.core
from azureml.core import Dataset, Experiment, Workspace
from azureml.train.automl import AutoMLConfig
warnings.showwarning = lambda *args, **kwargs: None
# connect to the workspace (assumes a config.json is present for this workspace)
ws = Workspace.from_config()
experiment_name = 'automatedML-timeseriesforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
from azureml.core.compute import AmlCompute, ComputeTarget

amlcompute_cluster_name = "ts-cluster"  # placeholder: set to your own AmlCompute cluster name
found = False
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == "AmlCompute":
    found = True
    print("Found existing compute target.")
    compute_target = cts[amlcompute_cluster_name]
if not found:
    print("Creating a new compute target...")
    provisioning_config = AmlCompute.provisioning_configuration(
        vm_size="STANDARD_DS12_V2",
        max_nodes=6,
    )
    compute_target = ComputeTarget.create(
        ws, amlcompute_cluster_name, provisioning_config
    )
print("Checking cluster status...")
compute_target.wait_for_completion(
    show_output=True, min_node_count=None, timeout_in_minutes=20
)
target_column_name = "demand"
time_column_name = "timeStamp"
ts_data = Dataset.Tabular.from_delimited_files(
path="https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv"
).with_timestamp_columns(fine_grain_timestamp=time_column_name)
ts_data.take(5).to_pandas_dataframe().reset_index(drop=True)
ts_data = ts_data.time_before(datetime(2017, 10, 10, 5))
train = ts_data.time_before(datetime(2017, 8, 8, 5), include_boundary=True)
train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5)
test = ts_data.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5))
test.to_pandas_dataframe().reset_index(drop=True).head(5)
max_horizon = 24
automl_settings = {
"time_column_name": time_column_name,
"max_horizon": max_horizon,
}
automl_config = AutoMLConfig(
task="forecasting",
primary_metric="normalized_root_mean_squared_error",
blocked_models=["ExtremeRandomTrees", "AutoArima", "Prophet"],
experiment_timeout_hours=0.3,
training_data=train,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
n_cross_validations=3,
verbosity=logging.INFO,
**automl_settings
)
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
best_run, fitted_model = remote_run.get_output()
fitted_model.steps
featurization_summary = fitted_model.named_steps[
"timeseriestransformer"
].get_featurization_summary()
pd.DataFrame.from_records(featurization_summary)
X_test = test.to_pandas_dataframe().reset_index(drop=True)
y_test = X_test.pop(target_column_name).values
y_predictions, X_trans = fitted_model.forecast(X_test)
from common.forecasting_helper import align_outputs
ts_results_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name)
from automl.client.core.common import constants
from azureml.automl.core._vendor.automl.client.core.common import metrics
from matplotlib import pyplot as plt
scores = metrics.compute_metrics_regression(
ts_results_all["predicted"],
ts_results_all[target_column_name],
list(constants.Metric.SCALAR_REGRESSION_SET),
None,
None,
None,
)
print("[Test data scores]\n")
for key, value in scores.items():
print("{}: {:.3f}".format(key, value))
%matplotlib inline
test_pred = plt.scatter(ts_results_all[target_column_name], ts_results_all["predicted"], color="b")
test_test = plt.scatter(
ts_results_all[target_column_name], ts_results_all[target_column_name], color="g"
)
plt.legend(
(test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8
)
plt.show()
```
| github_jupyter |
```
# only for development
%load_ext autoreload
%autoreload 2
import warnings
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt
from diffractem import io, tools
from diffractem.stream_parser import StreamParser, make_substream
import numpy as np
import pandas as pd
import os
import matplotlib
import seaborn as sns
bin_path = '/opts/crystfel_latest/bin/' # might be different than standard
from glob import glob
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
```
# Merging and first validation of serial data sets
...from stream files, mostly using `partialator`, `ambigator`, `check_hkl` and `compare_hkl` from CrystFEL, plus some nice plotting functions. This notebook handles parallel processing of merging runs with different parameters and/or input stream files, as well as the creation of custom-split files.
Contains the following parts:
* Generation of a command script for partialator, which runs it with a bunch of different settings, either directly in a shell, or by submission to a SLURM queue
* Batch analysis of `hkl` files using CrystFEL's tools (in parallel) and results parsing
* Plotting of results as function of resolution shell and crystal number
**Note: please change `bin_path` above to your CrystFEL `bin` directory**
Please also check out `merging_fractionated.ipynb`, which takes a closer look at dose fractionation, custom-split files etc.
## Preparation of a partialator script
...which can run partialator with a range of different settings, in order to find the one that fits your data best.
The partialator parameters are all set in the `popts` dictionary.
Please consult the parameter descriptions available via `man partialator`.
If you put a list in any of them, `partialator` will be run in all possible combinations.
In this example we use `stop-after` to get a range of different crystal numbers (we might as well have used the random sampling from streams as shown above) and test two merging models.
It comes in different flavors - either for direct execution, or for submission to a SLURM queue, depending on what you set for the `slurm` argument\*.
If you want a custom-split, set `split=True` and create a split list file with the same name and path as the stream file and ending `_split.txt`. We will however not work with this for now -- see `merging_fractionated.ipynb`.
`call_partialator` will create a script file `partialator_run.sh` and return a `DataFrame` containing a list of the settings with which partialator is run, and what the filename will be.
\*(Note that, unlike with `call_indexamajig_slurm`, no further options are available to e.g. make an archive file for transfer.
Just copy the script and your stream file(s) to the cluster)
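To make the combination logic concrete, here is a minimal sketch (with a hypothetical subset of options, not the full `popts` below) of how list-valued entries expand into one run per combination — presumably similar to what `call_partialator` does internally:

```python
from itertools import product

# a hypothetical subset of the popts dictionary used below
popts_demo = {'model': ['unity', 'xsphere'],
              'stop-after': [200, 400, 600],
              'symmetry': '422'}

# wrap scalar values into one-element lists, then take the Cartesian product
vals = [v if isinstance(v, list) else [v] for v in popts_demo.values()]
runs = [dict(zip(popts_demo, combo)) for combo in product(*vals)]
print(len(runs))  # 2 models x 3 stop-after values = 6 runs
```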
```
# get a list of stream files
stream_list = glob('streams/*.stream')
# reject stream files with "all" or "cum" in them, which contain many shots
# per pattern.
stream_list = [st for st in stream_list if not (('all' in st) or ('cum' in st))]
popts = {'no-polarisation': True, 'no-Bscale': False, 'no-scale': False,
'force-bandwidth': 2e-5, 'force-radius': False, 'force-lambda': 0.0251,
'push-res': 1.4, 'min-measurements': [3, ], 'model': ['unity', 'xsphere'],
'symmetry': '422', 'stop-after': list(range(200, 1147, 200)) + [1147],
'no-logs': False, 'iterations': 3, 'j': 10}
# you need to set those if you want to use slurm to submit merging runs
slurm_opts = {'C': 'scratch',
'partition': 'medium',
'time': '"04:00:00"',
'nodes': 1}
tools.call_partialator(stream_list, popts, par_runs=4,
split=False, out_dir='merged',
slurm=False, cache_streams=False,
slurm_opts=slurm_opts)
!chmod +x partialator_run.sh
# example how to send data to a cluster
# !scp -r streams rbuecke1@transfer.gwdg.de:~/SHARED/EDIFF/temp
# !scp partialator_run.sh rbuecke1@transfer.gwdg.de:~/SHARED/EDIFF/temp
# example how to get it back from a cluster
# %mkdir merged
# !scp 'rbuecke1@transfer.gwdg.de:~/SHARED/EDIFF/temp/merged/*.hkl*' merged/
```
## Analyze and validate results
...assuming your partialator run has finished, and using CrystFEL's `check_hkl` and `compare_hkl` tools. The results are parsed into _pandas_ DataFrames using `tools.analyze_hkl` and automatically labeled. While `overall` contains the overall figures of merit for all data sets, `sd` is a flat list of all per-shell and per-setting FOMs. You can use pandas' `groupby` and `pivot` features to conveniently extract whatever you need from this table.
Finally, plots of figures of merit vs. resolution shell can be generated.
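As a concrete illustration of the `pivot` and `groupby` idea (using a tiny made-up table in place of the real `sd`):

```python
import pandas as pd

# a tiny stand-in for the per-shell table `sd` (hypothetical values)
sd_demo = pd.DataFrame({
    'hklfile': ['a.hkl', 'a.hkl', 'b.hkl', 'b.hkl'],
    'd/A':     [2.0, 1.8, 2.0, 1.8],
    'CC':      [0.90, 0.50, 0.80, 0.40],
})
# one row per resolution shell, one column per merging run
piv = sd_demo.pivot(index='d/A', columns='hklfile', values='CC')
print(piv)
# mean CC per run via groupby
print(sd_demo.groupby('hklfile')['CC'].mean())
```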
#### Parse available files
If you haven't just run the merging above, but started from here, you need a `settings` DataFrame to get started.
This cell builds it for you, usually quite accurately, using `tools.get_hkl_settings`.
It can then also be used to reject some settings from further analysis (not needed now, but examples are in the comments), or to mangle the data columns into more convenient indicators (done here).
```
# check what hkls we have available....
settings = tools.get_hkl_settings('merged/hits_agg*.hkl', unique_only=True, custom_split=False)
# or do some name mangling... (here: strip the folder from the stream filename)
if 'input' in settings.columns:
    settings['input'] = settings['input'].str.rsplit('/', n=1, expand=True).iloc[:, -1]
```
#### Run the analysis
This is done using the `tools.analyze_hkl` function.
However, as the analysis is single-threaded, we can speed it up considerably by running it in parallel using a `ProcessPoolExecutor` (see the documentation of the `concurrent.futures` package if you want to know more).
Concise tables of the results are written into the `shell/` subdirectory.
(if there is trouble with finding the CrystFEL executables, set the bin_path parameter manually)
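The submit-and-collect pattern used below can be sketched in miniature; a `ThreadPoolExecutor` stands in here so the snippet is self-contained, but the interface is identical for `ProcessPoolExecutor`:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(x):
    # stand-in for tools.analyze_hkl: any single-threaded analysis
    return x * x

# submit one future per setting, keyed by an identifier (like hklfile)
with ThreadPoolExecutor() as exc:
    ftrs = {n: exc.submit(analyze, n) for n in range(4)}

# collect errors first, then results from the futures that succeeded
err = {k: f.exception() for k, f in ftrs.items() if f.exception()}
out = {k: f.result() for k, f in ftrs.items() if not f.exception()}
print(out)  # {0: 0, 1: 1, 2: 4, 3: 9}
```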
```
default_symmetry = '422'
highres = 1.75 # highest shell, in A
nshells = 10
# tools.analyze_hkl() is run via a ProcessPoolExecutor
ftrs = {}
with ProcessPoolExecutor() as exc:
    for _, s in settings.iterrows():
        ftrs[s.hklfile] = exc.submit(tools.analyze_hkl, fn=s.hklfile, cell='refined.cell',
                                     point_group=s.symmetry if 'symmetry' in s else default_symmetry,
                                     highres=highres, nshells=nshells, bin_path=bin_path)
err = {lbl: v.exception() for lbl, v in ftrs.items() if v.exception()}
if err:
print('Analysis gave errors!', str(err))
out = {lbl: v.result() for lbl, v in ftrs.items() if not v.exception()}
sd = pd.concat([v.result()[0].assign(hklfile=lbl)
for lbl, v in ftrs.items()
if not v.exception()], axis=0).merge(
settings, on='hklfile')
overall = pd.concat([pd.DataFrame(v.result()[1], index=[lbl])
for lbl, v in ftrs.items()
if not v.exception()], axis=0).merge(
settings, right_on='hklfile', left_index=True).rename(
columns={'<snr>': 'SNR', 'redundancy': 'Red', 'completeness': 'Compl', 'CC*': 'CCstar'})
# write out results
%rm -f shell/*
for ident, grp in sd.groupby('hklfile'):
    grp.sort_values('Center 1/nm')[['Center 1/nm', 'nref', 'Possible', 'Compl', 'Meas', 'Red', 'SNR',
                                    'Mean', 'd/A', 'Min 1/nm', 'Max 1/nm', 'CC', 'CCstar',
                                    'Rsplit']].to_csv(f'shell/{ident.rsplit("/",1)[-1]}.csv', index=False, float_format='%.2f')
```
#### Example to show results
...using the result DataFrame's `pivot` function.
```
# convenient function to get FOMs. Set the one you want as 'value'
sd.pivot(index='d/A', columns='hklfile', values=['CC']).sort_index(ascending=False)
```
## Analysis Plots
...ideally using an interactive backend, like `qt`, or `widget`. You can choose which 4 FOMs to plot as a function of resolution, and their display ranges.
From the $CC=1/7$ or $CC^*=1/2$ criterion (which are automatically drawn if you ask for those FOMs), we can read off a reasonable resolution cut-off.
To learn about what FOMs you can plot, please look at `sd.columns`.
The `sdsel=sd.query(......)` allows you to only plot a sub-selection - e.g. here only the full set with unity merging, but all aggregations.
In the first line, you can choose your matplotlib backend. `widget` is highly recommended (interactive), but may not always work.
### Fig 1: Figures of Merit vs resolution shell
You can pick subsets of merging runs using `sd.query`, and set the x axis by setting `angstrom`.
```
# %matplotlib inline
%matplotlib widget
# SETTINGS ---
fh, axs = plt.subplots(2, 2, figsize=(18/2.54,15/2.54), dpi=120, sharex=True)
lsp, lrow = 0.85, 3 # space near top left for legend, and # of legend columns
# pick your FOMs and their y ranges
FOMs = [('CC', 0, 1), ('Mean', 0, 100), ('Compl', 0, 100), ('Red', 0, 100)]
sdsel = sd.query('stop_after > 1000')
angstrom = False # if True, show x axis in A, instead of 1/nm
# ------
try:
import seaborn as sns
sns.set('notebook','whitegrid') # optional. Set a style...
except:
print('Seaborn not installed, it seems.')
axs = axs.ravel()
# ids = get_id_table(sdsel['identifier'])
idcols = [cn for cn, col in sdsel[settings.columns].items()
          if len(col.unique()) > 1 and (cn != 'hklfile')]
print('Legend is', ' '.join(idcols))
for ident, grp in sdsel.groupby('hklfile'):
ls = '-'
lbl = tuple(grp[idcols].drop_duplicates().values.astype(str).ravel())
for ax, (fom, ymin, ymax) in zip(axs, FOMs):
ax.plot(grp['d/A'] if angstrom else grp['Center 1/nm'], grp[fom],
label=' '.join(lbl), ls=ls)
ax.set_title(fom)
ax.set_ylim((ymin, ymax))
if angstrom:
ax.set_xlim(sorted(ax.get_xlim(), reverse=True))
ax.grid(True)
if fom in ['CC', 'CCstar']:
ax.axhline(0.143 if fom == 'CC' else 0.5,ls=':')
lg = fh.legend(*ax.get_legend_handles_labels(), ncol=lrow,
fontsize='xx-small', loc='lower center',
bbox_to_anchor=(0.5, lsp), frameon=True)
axs[-1].set_xlabel(r'Resolution shell/Å' if angstrom else r'Resolution shell/nm$^{-1}$')
plt.draw()
# lpos = lg.get_window_extent()
fh.subplots_adjust(wspace=0.3, top=lsp-0.05)
```
### Fig. 2: overall and single-shell FOM vs. crystal number.
Obviously this only makes sense for data sets that include multiple crystal numbers. If you did not define these using `stop_after` or `start_after`, you'll have to mangle e.g. the hklfile names yourself beforehand. The plots display values of a selected shell as dashed lines; you pick the shell via the variable `res`. The available values are printed when running the cell.
```
# SETTINGS ---
N_col = 'stop_after' # column containing the number of crystals
fh, axs = plt.subplots(2,2,figsize=(18/2.54,15/2.54),dpi=120,sharex=True)
# pick your FOMs and their y ranges
FOMs = [('CC', 0, 1), ('Compl', 0, 100), ('Rsplit', 0, 100), ('Red', 0, 50)]
# resolution bin for dashed plot
res = 1.85
# ovsel = overall.query('input in ["0to1.stream", "0to2.stream", "0to8.stream"]')
ovsel = overall
# -----
try:
import seaborn as sns
sns.set('notebook','whitegrid') # optional. Set a style...
except:
print('Seaborn not installed, it seems.')
axs = axs.ravel()
idcols = [cn for cn, col in sd[settings.columns].items()
          if len(col.unique()) > 1 and (cn not in ['hklfile', N_col])]
print('Available resolution bins are', ' '.join(sd['d/A'].unique().astype(str)))
print('Legend is', ' '.join(idcols))
for idcol, grp in ovsel.groupby(idcols if len(idcols) else np.ones(len(ovsel))):
sdsel = sd.merge(grp, on='hklfile', validate='m:1', suffixes=('','_ov'))
sdsel = sdsel[sdsel['d/A'] == res].sort_values(N_col)
grp = grp.sort_values(N_col)
lbl = tuple(grp[idcols].drop_duplicates().values.astype(str).ravel())
for ax, (fom, xmin, xmax) in zip(axs, FOMs):
ph = ax.plot(grp[N_col], grp[fom], label=' '.join(lbl), ls='-')
ax.plot(sdsel[N_col], sdsel[fom], color=ph[0].get_color(), ls='--')
ax.set_title(fom)
ax.set_ylim((xmin, xmax))
ax.grid(True)
if fom in ['CC', 'CCstar']:
ax.axhline(0.143 if fom == 'CC' else 0.5,ls=':')
axs[-1].legend(ncol=2, fontsize='xx-small')
axs[-1].set_xlabel(r'Crystal number')
fh.subplots_adjust(wspace=0.3)
```
## Step 5: publication-ready plots
Another plot section, which is more dataset-specific and less convenient, but really makes good plots for final reports with slightly more effort.
Needs to be adapted to each dataset manually, so you'll have to do that if you copied this notebook from another dataset.
**This is _not_ part of the general workflow tutorial - generally most likely it will not work without customization**
#### Versus Resolution shell
```
import matplotlib as mpl
mpl.rc('axes', titleweight='normal', labelweight='normal')
# Dataset-separated FOM vs resolution plot
import seaborn as sns
sns.set('paper','whitegrid') # optional. Set a style...
FOMs = [(['CC', None], 0, 1, '$CC_{1/2}$'),
(['Compl', None], 0, 100, 'Completeness'),
(['Red', None], 0, 50, 'Redundancy')]#, R_\mathrm{work}$ (dashed)')]
# select the data subset to show. Add "cmp" to the method list if you want relative data
sd_sel = sd.query(f'stop_after > 1000 and input in ["0to1.stream", "0to2.stream", "0to8.stream"]')
plt.close('all')
fh, axs= plt.subplots(len(FOMs), 1, figsize=(8.5/2.54, 15/2.54), sharex=True, dpi=300,
gridspec_kw={'hspace':0.1, 'wspace':0.1, 'height_ratios': [1]*len(FOMs)})
# The 'zzz' rename is required to put the plots in proper order
last_model = ''
for (model, agg, hklfile), grp in \
sd_sel.groupby(['model', 'input', 'hklfile'], sort=True):
for ii, (fom, ymin, ymax, cpt) in enumerate(FOMs):
ax = axs[ii]
if model != last_model:
ax.set_prop_cycle(None)
if fom[0] == 'CC':
ax.axhline(0.143, color='k', alpha=0.1)
lbl = f'{2*int(agg[3])+1} ms' if model=='unity' else ''
src = grp
ph = ax.plot(src['Center 1/nm'], src[fom[0]],
label=lbl,
ls='-' if model=='unity' else '--',
marker='o' if model=='unity' else 'v', markersize=3,
fillstyle='none')
c = [p.get_color() for p in ph]
if fom[1] is not None:
lbl = None
ax.plot(src['Center 1/nm'], src[fom[1]],
label=lbl, color=c[0], ls='--',
marker='o' if model=='unity' else 'v', fillstyle='none')
# common axis settings
ax.set_ylim((ymin, ymax))
ax.set_xticks([10/d for d in range(6,1,-1)])
ax.grid(True)
# stuff appearing for specific axes only
if ii == 0:
ax.legend()
ax.set_ylabel(cpt)
if ii == (len(FOMs)-1):
ax.set_xlabel('Resolution (Å)')
ax.set_xticklabels([f'{(10/float(l)):.0f}' for l in ax.get_xticks()])
else:
ax.set_xticklabels([])
last_model = model
ax.set_xlim(10/4.5, ax.get_xlim()[1])
plt.savefig(f'fom_vs_res.pdf', transparent=True, bbox_inches='tight')
```
#### Versus Crystal Number
```
# PART 2: FOMs at fixed resolution/overall vs. crystal number
# %matplotlib inline
# MAIN TEXT
target = 'main'
FOMs = [(['CC_ov', 'CC'], 0, 1, '$CC_{1/2}$'),
(['Compl_ov', 'Compl', None], 0, 100, 'Completeness'),
(['Red_ov', 'Red'], 0, 50, 'Redundancy')]
# make subset (including carving out the highest shell only)
hishell = 1.85
sd_sel = sd.loc[sd['d/A'] == hishell,:].sort_index() # xfel9 / ssx9: 0.9.0
# merge overall data. Inner merge, so it is sub-selected automatically
sd_sel = sd_sel.merge(overall, on=['model', 'input', 'stop_after'],
how='inner', suffixes=('', '_ov'))
# select crystal number range
sd_sel = sd_sel.query('input == "0to2.stream"')
fh, axs= plt.subplots(len(FOMs), 1, figsize=(8.5/2.54, 15/2.54), sharex=True, dpi=150,
gridspec_kw={'hspace':0.1, 'wspace':0.1, 'height_ratios': [1]*len(FOMs)})
for (model, agg), grp in sd_sel.groupby(['model', 'input'], sort=True):
grps = grp.sort_values(by='stop_after').rename(columns={'stop_after': 'crystals'})
for ii, (fom, ymin, ymax, cpt) in enumerate(FOMs):
ax = axs[ii]
if fom[0] in grps.columns:
ph = ax.plot(grps.crystals, grps[fom[0]],
label=f'{model}',
linestyle='-', marker='o' if model=='unity' else 'v',
fillstyle='none', markersize=3)
if (fom[1] is not None) and (fom[1] in grps.columns):
ax.plot(grps.crystals, grps[fom[1]],
# label=f'{method.upper()} ({fom[1]})',
label = None,
color=ph[0].get_color(),
linestyle='--', marker='o' if model=='unity' else 'v',
fillstyle='none', markersize=3)
ax.set_ylim((ymin, ymax))
ax.set_ylabel(cpt)
if (ii == 0):
ax.legend()
ax.set_xlim(100, 1300)
if ii == (len(FOMs)-1):
ax.set_xlabel('Crystal number')
plt.savefig(f'fom_vs_number.pdf', transparent=True, bbox_inches='tight')
```
#### Both together
...as seen in the workflow paper
```
import matplotlib as mpl
mpl.rc('axes', titleweight='normal', labelweight='normal')
plt.close('all')
# Dataset-separated FOM vs resolution plot
import seaborn as sns
sns.set('paper','whitegrid') # optional. Set a style...
FOMs = [(['CC', None], 0, 1, '$CC_{1/2}$'),
(['Compl', None], 0, 100, 'Completeness'),
(['Red', None], 0, 50, 'Redundancy')]#, R_\mathrm{work}$ (dashed)')]
fh, axs_both = plt.subplots(len(FOMs), 2, figsize=(17/2.54, 15/2.54), sharex=False, sharey=False, dpi=150,
gridspec_kw={'hspace':0.1, 'wspace':0.1, 'height_ratios': [1]*len(FOMs)})
# select the data subset to show. Add "cmp" to the method list if you want relative data
sd_sel = sd.query(f'stop_after > 1000 and input in ["0to1.stream", "0to2.stream", "0to8.stream"]')
axs = axs_both[:, 0]
# The 'zzz' rename is required to put the plots in proper order
last_model = ''
for (model, agg, hklfile), grp in \
sd_sel.groupby(['model', 'input', 'hklfile'], sort=True):
for ii, (fom, ymin, ymax, cpt) in enumerate(FOMs):
ax = axs[ii]
if model != last_model:
# print(last_model, model)
ax.set_prop_cycle(None)
if fom[0] == 'CC':
ax.axhline(0.143, color='k', alpha=0.1)
lbl = f'{2*int(agg[3])+1} ms' if model=='unity' else ''
# lbl = f'{model}, {2*int(agg[3])+1} ms'
src = grp
ph = ax.plot(src['Center 1/nm'], src[fom[0]],
label=lbl,
ls='-' if model=='unity' else '--',
marker='o' if model=='unity' else 'v', markersize=3,
fillstyle='none')
c = [p.get_color() for p in ph]
if fom[1] is not None:
lbl = None
ax.plot(src['Center 1/nm'], src[fom[1]],
label=lbl, color=c[0], ls='--',
marker='o' if model=='unity' else 'v', fillstyle='none')
# common axis settings
ax.set_ylim((ymin, ymax))
ax.set_xticks([10/d for d in range(6,1,-1)])
ax.grid(True)
# stuff appearing for specific axes only
if ii == 0:
# ax.set_title('MB ($N=8000$)' if sample=='mb' else 'FAcD ($N=10000$)')
ax.legend(ncol=1)
ax.set_ylabel(cpt)
if ii == (len(FOMs)-1):
ax.set_xlabel('Resolution (Å)')
ax.set_xticklabels([f'{(10/float(l)):.0f}' for l in ax.get_xticks()])
else:
ax.set_xticklabels([])
ax.set_xlim(10/4.5, ax.get_xlim()[1])
last_model = model
# plt.savefig(f'fom_vs_res.pdf', transparent=True, bbox_inches='tight')
# PART 2: FOMs at fixed resolution/overall vs. crystal number
# %matplotlib inline
# MAIN TEXT
target = 'main'
FOMs = [(['CC_ov', 'CC'], 0, 1, '$CC_{1/2}$'),
(['Compl_ov', 'Compl', None], 0, 100, 'Completeness'),
(['Red_ov', 'Red'], 0, 50, 'Redundancy')]
# make subset (including carving out the highest shell only)
hishell = 1.85
sd_sel = sd.loc[sd['d/A'] == hishell,:].sort_index() # xfel9 / ssx9: 0.9.0
# merge overall data. Inner merge, so it is sub-selected automatically
sd_sel = sd_sel.merge(overall, on=['model', 'input', 'stop_after'],
how='inner', suffixes=('', '_ov'))
# select crystal number range
sd_sel = sd_sel.query('input == "0to2.stream"')
axs = axs_both[:, 1]
for (model, agg), grp in sd_sel.groupby(['model', 'input'], sort=True):
grps = grp.sort_values(by='stop_after').rename(columns={'stop_after': 'crystals'})
for ii, (fom, ymin, ymax, cpt) in enumerate(FOMs):
ax = axs[ii]
if fom[0] in grps.columns:
ph = ax.plot(grps.crystals, grps[fom[0]],
label=f'{model}',
linestyle='-', marker='o' if model=='unity' else 'v',
fillstyle='none', markersize=3)
if (fom[1] is not None) and (fom[1] in grps.columns):
ax.plot(grps.crystals, grps[fom[1]],
# label=f'{method.upper()} ({fom[1]})',
label = None,
color=ph[0].get_color(),
linestyle=':', marker='o' if model=='unity' else 'v',
fillstyle='none', markersize=3)
ax.set_ylim((ymin, ymax))
# ax.set_ylabel(cpt)
ax.set_yticklabels([])
if (ii == 0):
ax.legend()
ax.set_xlim(100, 1300)
if ii == (len(FOMs)-1):
ax.set_xlabel('Crystal number')
else:
ax.set_xticklabels([])
plt.savefig(f'fom_all.pdf', transparent=True, bbox_inches='tight')
```
# Chapter 3: Introducing Lists
## 3.1 What Is a List?
#### In Python, square brackets ([]) denote a list, and commas separate its elements.
```
# bicycles.py
bicycles = ['trek','cannondale','redline','specialized']
print(bicycles)
```
### 3.1.1 Accessing Elements in a List
#### To access a list element, write the list's name followed by the element's index in square brackets.
```
bicycles = ['trek','cannondale','redline','specialized']
print(bicycles[0])
bicycles = ['trek','cannondale','redline','specialized']
print(bicycles[0].title())
```
### 3.1.2 Index Positions Start at 0, Not 1
#### In Python, the first list element has index 0, not 1.
```
bicycles = ['trek','cannondale','redline','specialized']
print(bicycles[1])
print(bicycles[3])
bicycles = ['trek','cannondale','redline','specialized']
print(bicycles[-1])
print(bicycles[-2])
print(bicycles[-3])
```
### 3.1.3 Using Individual Values from a List
```
bicycles = ['trek','cannondale','redline','specialized']
message = "My first bicycle was a " + bicycles[0].title() + "."
print(message)
```
## 3.2 Changing, Adding, and Removing Elements
### 3.2.1 Modifying Elements in a List
```
# motorcycles.py
motorcycles = ["honda","yamaha","suzuki"]
print(motorcycles)
motorcycles[0] = "ducati"
print(motorcycles)
```
### 3.2.2 Adding Elements to a List
#### 1. Appending elements to the end of a list
#### The append() method adds a new element to the end of a list.
```
motorcycles = ["honda","yamaha","suzuki"]
print(motorcycles)
motorcycles.append("ducati")
print(motorcycles)
motorcycles = []
motorcycles.append("honda")
motorcycles.append("yamaha")
motorcycles.append("suzuki")
print(motorcycles)
```
#### 2. Inserting elements into a list
#### The insert() method adds a new element at any position in a list.
```
motorcycles = ["honda","yamaha","suzuki"]
motorcycles.insert(0,"ducati")
print(motorcycles)
# this operation shifts every existing element one position to the right
```
### 3.2.3 Removing Elements from a List
#### 1. Removing an element with the del statement
```
motorcycles = ["honda","yamaha","suzuki"]
print(motorcycles)
del motorcycles[0]
print(motorcycles)
motorcycles = ["honda","yamaha","suzuki"]
print(motorcycles)
del motorcycles[1]
print(motorcycles)
```
#### 2. Removing an element with the pop() method
#### The pop() method removes the last element of a list and lets you keep using it.
```
motorcycles = ["honda","yamaha","suzuki"]
print(motorcycles)
popped_motorcycle = motorcycles.pop()
print(motorcycles)
print(popped_motorcycle)
motorcycles = ['honda','yamaha','suzuki']
last_owned = motorcycles.pop()
print("The last motorcycle I owned was a " + last_owned.title() + ".")
```
#### 3. Popping items from any position in a list
```
motorcycles = ['honda','yamaha','suzuki']
first_owned = motorcycles.pop(0)
print("The first motorcycle I owned was a " + first_owned.title() + ".")
## If you'll never use an element again after removing it, use the del statement; if you need to keep using it, use the pop() method.
```
#### 4. Removing an item by value
#### If you only know the value of the element to remove, use the remove() method.
```
motorcycles = ['honda','yamaha','suzuki','ducati']
print(motorcycles)
motorcycles.remove('ducati')
print(motorcycles)
# after removing an element with remove(), you can keep using its value
motorcycles = ['honda','yamaha','suzuki','ducati']
print(motorcycles)
too_expensive = 'ducati'
motorcycles.remove(too_expensive)
print(motorcycles)
print("\nA " + too_expensive.title() + " is too expensive for me.")
```
## 3.3 Organizing a List
### 3.3.1 Sorting a List Permanently with the sort() Method
```
# cars.py
cars = ['bmw','audi','toyota','subaru']
cars.sort()
print(cars)
cars = ['bmw','audi','toyota','subaru']
cars.sort(reverse = True)
print(cars)
```
### 3.3.2 Sorting a List Temporarily with the sorted() Function
```
cars = ['bmw','audi','toyota','subaru']
print("Here is the original list:")
print(cars)
print("\nHere is the sorted list:")
print(sorted(cars))
print("\nHere is the original list again:")
print(cars)
# display the list in reverse alphabetical order
print("\nHere is the list in reverse order:")
print(sorted(cars,reverse = True))
```
### 3.3.3 Printing a List in Reverse Order
```
cars = ['bmw','audi','toyota','subaru']
print(cars)
cars.reverse()
print(cars)
```
### 3.3.4 Finding the Length of a List
#### Python counts list items starting at 1, so you won't run into off-by-one errors when determining a list's length.
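For example, the len() function returns the number of elements in a list:

```python
bicycles = ['trek', 'cannondale', 'redline', 'specialized']
print(len(bicycles))  # 4
```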
## 3.4 Avoiding Index Errors When Working with Lists
```
motorcycles = ['honda','yamaha','suzuki']
print(motorcycles[2])
# list indices start at 0
motorcycles = ['honda','yamaha','suzuki']
print(motorcycles[-1])
```
## 3.5 Summary
### In this chapter, you learned:
#### 1. What lists are and how to work with their elements;
#### 2. How to define lists and how to add and remove elements;
#### 3. How to sort a list permanently, and how to sort it temporarily for display;
#### 4. How to find the length of a list, and how to avoid index errors when using lists.
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn import decomposition
bms = pd.read_csv('bms.csv', sep=';', index_col=0)
bms.head()
bms.dtypes
bms.shape
bms.describe()
bms["Item_Fat_Content"].value_counts()
bms["Item_Fat_Content"].value_counts().plot(kind="bar")
```
There are inconsistent entries in the Item_Fat_Content variable, so we need to replace "LF" and "low fat" with "Low Fat", and "reg" with "Regular".
```
bms["Item_Fat_Content"]=bms["Item_Fat_Content"].replace(["LF","low fat"],"Low Fat")
bms["Item_Fat_Content"]=bms["Item_Fat_Content"].replace("reg","Regular")
bms['Item_Fat_Content'].value_counts()
```
# Detecting Missing Values
```
bms.info()
bms.isnull().sum()
```
We can see missing values in the Item_Weight and Outlet_Size variables: about 17.16% of Item_Weight entries and 28.28% of Outlet_Size entries are missing. Based on the correlation matrix, we don't need to drop these rows or variables; instead we clean the missing values by imputing them with the mean, median, or mode.
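The missing-value percentages can be computed directly; here is a small sketch with a made-up frame standing in for the real data:

```python
import numpy as np
import pandas as pd

# hypothetical frame standing in for bms, with some missing entries
demo = pd.DataFrame({'Item_Weight': [9.3, np.nan, 17.5, np.nan],
                     'Outlet_Size': ['Medium', 'High', None, 'Small']})
# fraction of nulls per column, expressed as a percentage
pct_missing = demo.isnull().mean() * 100
print(pct_missing)
```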
# Cleaning Missing Values
```
from sklearn.impute import SimpleImputer
from scipy.stats import mode
bms["Item_Weight"]=bms["Item_Weight"].fillna(bms["Item_Weight"].mean())
bms["Outlet_Size"]=bms["Outlet_Size"].fillna(bms["Outlet_Size"].mode()[0])
bms.isnull().sum()
```
# Handling Duplicate Records
```
dup = bms.duplicated()
print(dup.sum())
bms[dup]
```
We can confirm that this dataset contains no duplicate records.
# Handling Outliers
```
bms.boxplot(["Item_Outlet_Sales"])
bms["Item_Outlet_Sales"].describe()
bms["Item_Outlet_Sales"].plot(kind="hist")
```
# Normalization
```
from scipy import stats
z=np.abs(stats.zscore(bms._get_numeric_data()))
print(z)
z.shape
bms_new=bms[(z<3).all(axis=1)]
print(bms_new.shape)
bms_new.head()
import seaborn as sns
sns.set(style="whitegrid")
c=pd.DataFrame([])
c["Keterangan:"]=["Tidak ada outlier","Outlier"]  # "Keterangan" = description ("no outlier" / "outlier")
c["Jumlah observasi"]=[len(bms_new),len(bms)-len(bms_new)]  # "Jumlah observasi" = number of observations
c
plot=sns.barplot(x="Keterangan:",y="Jumlah observasi",data=c)
```
# Transformation / Recoding of Categorical Values
```
bms_new.head()
kategori=["Item_Fat_Content", "Item_Type","Outlet_Identifier","Outlet_Establishment_Year","Outlet_Size","Outlet_Location_Type","Outlet_Type"]
from sklearn import preprocessing
le=preprocessing.LabelEncoder()
for feature in kategori:
if feature in bms_new.columns.values:
bms_new[feature]=le.fit_transform(bms_new[feature])
bms_new.dtypes
```
# Feature Selection
```
bmsx=np.array([bms_new["Item_Weight"],bms_new["Item_Fat_Content"],bms_new["Item_Visibility"],bms_new['Item_MRP'],bms_new["Outlet_Size"]])
bmsx.shape
bmsx
bmsxt=bmsx.transpose()
bmsxt.shape
bms_std=StandardScaler().fit_transform(bmsxt)
bms_std
pcafs=PCA(n_components=0.70,whiten=True)
bmsfs_pca=pcafs.fit_transform(bms_std)
print('Original number of features:', bms_std.shape[1])
print('Reduced number of features:', bmsfs_pca.shape[1])
datafs_pca=pd.DataFrame(bmsfs_pca, columns=["PC1","PC2","PC3","PC4"])
datafs_pca
```
# Feature Extraction
```
pcafe=decomposition.PCA(n_components=3)
bmsfe_pca=pcafe.fit_transform(bms_std)
print('Original number of features:', bms_std.shape[1])
print('Reduced number of features:', bmsfe_pca.shape[1])
datafe_pca=pd.DataFrame(bmsfe_pca, columns=["PC1","PC2", "PC3"])
datafe_pca
```
## Let's take a look at the distribution
```
print(bms['Item_Outlet_Sales'].describe())
plt.figure(figsize=(15, 10))
sns.distplot(bms['Item_Outlet_Sales'], color='g', bins=100, hist_kws={'alpha': 0.4})
```
From the visualization above, we can see that Item_Outlet_Sales is skewed to the right.
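The skewness can also be quantified numerically; here is a sketch using a synthetic right-skewed sample in place of Item_Outlet_Sales:

```python
import numpy as np
from scipy import stats

# synthetic right-skewed sample standing in for Item_Outlet_Sales
rng = np.random.default_rng(42)
sample = rng.exponential(scale=2000, size=5000)
print(stats.skew(sample))  # positive value => right-skewed
```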
# Numerical Data Distribution
```
list(set(bms.dtypes.to_list()))
bms_num = bms.select_dtypes(include = ['float64', 'int64'])
bms_num.head()
```
Let's plot them all:
```
bms_num.hist(figsize=(16,20), bins=50, xlabelsize=8, ylabelsize=8)
```
# Correlation
```
bms_num_corr = bms_num.corr()['Item_Outlet_Sales'][:-1]  # drop the last entry, which is Item_Outlet_Sales itself
gold_features_list = bms_num_corr[abs(bms_num_corr) > 0.5].sort_values(ascending=False)
print("There are {} strongly correlated values with Item Outlet Sales:\n{}".format(len(gold_features_list), gold_features_list))
for i in range(0, len(bms_num.columns), 5):
sns.pairplot(data=bms_num,
x_vars=bms_num.columns[i:i+5],
y_vars=['Item_Outlet_Sales'])
```
# Feature Relationships using Heatmap
```
corr = bms_num.drop('Item_Outlet_Sales', axis=1).corr()
plt.figure(figsize=(12,10))
sns.heatmap(corr[(corr >= 0.5) | (corr <= -0.4)],
cmap='viridis', vmax=1.0, vmin=-1.0, linewidths=0.1, annot=True, annot_kws={'size': 8}, square=True)
sns.heatmap(bms_num[bms_num.columns].corr(),annot=True)
bms_num.hist(figsize=(10,8),bins=6,color='Y')
plt.tight_layout()
plt.show()
plt.figure(1)
plt.subplot(321)
bms['Outlet_Type'].value_counts().plot(figsize=(10,12),kind='bar',color='green')
plt.subplot(322)
bms['Item_Fat_Content'].value_counts().plot(figsize=(10,12),kind='bar',color='yellow')
plt.subplot(323)
bms['Item_Type'].value_counts().plot(figsize=(10,12),kind='bar',color='red')
plt.subplot(324)
bms['Outlet_Size'].value_counts().plot(figsize=(10,12),kind='bar',color='orange')
plt.subplot(325)
bms['Outlet_Location_Type'].value_counts().plot(figsize=(10,12),kind='bar',color='black')
plt.subplot(326)
bms['Outlet_Establishment_Year'].value_counts().plot(figsize=(10,12),kind='bar',color='olive')
plt.tight_layout()
plt.show()
bms[bms["Outlet_Type"]=="Supermarket Type1"]["Item_Type"].value_counts().plot(kind="bar")
```
So the item types most often sold in Supermarket Type1 are fruits and vegetables, snack foods, household, frozen foods, and dairy.
```
# check the Outlet_Size distribution in Supermarket Type1
bms[bms['Outlet_Type']=="Supermarket Type1"]["Outlet_Size"].value_counts().plot(kind="bar")
```
# Boxplot
```
ax = sns.boxplot(x="Outlet_Size", y="Item_Outlet_Sales", data=bms)
# restricted to Outlet_Size == 'Medium'
fig,axes = plt.subplots(figsize = (10,10))
ax = sns.boxplot(x="Outlet_Type", y="Item_Outlet_Sales", data=bms[bms["Outlet_Size"]=="Medium"])
```
For medium-sized outlets, sales are spread most widely in Supermarket Type3, ranging from roughly 0 to 13,000.
```
bms_medium = bms[bms["Outlet_Size"]=="Medium"]
bms_medium[bms_medium["Outlet_Type"]=="Supermarket Type3"]["Item_Type"].value_counts().plot(kind="bar")
```
Among medium-sized outlets, the item types sold in Supermarket Type3 are fruits and vegetables, snack foods, household, frozen foods, and canned goods.
```
ax = sns.boxplot(x="Item_Fat_Content", y="Item_Outlet_Sales", data=bms)
fig,axes = plt.subplots(figsize = (10,10))
sns.boxplot(x = bms['Outlet_Establishment_Year'], y = bms['Item_Outlet_Sales'], hue = bms['Outlet_Type'], ax = axes )
plt.plot
fig,axes = plt.subplots(figsize = (10,10))
bms.groupby("Outlet_Establishment_Year")["Item_Outlet_Sales"].sum().plot(kind="line")
bms.groupby("Outlet_Establishment_Year")["Item_Outlet_Sales"].sum()
# Supermarket Type3
# which item type has the highest sales?
fig,axes = plt.subplots(figsize = (20,20))
bms_market3=bms[bms['Outlet_Type']=="Supermarket Type3"]
sns.boxplot(x="Item_Type", y="Item_Outlet_Sales", data=bms_market3)
```
## Today's examples show how to use Python packages to
* simulate sample points from these distributions
* perform some probability calculations
Covering the following discrete distributions:
4. Negative Binomial Distribution
5. Hypergeometric Distribution
```
# library
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy import stats
import math
import statistics
```
### 4. Negative Binomial Distribution
```
'''
Negative Binomial Distribution
Setting: in a sequence of i.i.d. Bernoulli trials, X is the number of trials
needed to reach a specified number of successes (denoted k).
p: probability of success
k: stop once k successes have accumulated
r: generated sample-space points
'''
# 1. Define the basic parameters of the negative binomial distribution
p = 0.4 # probability of success
k = 3 # specified number of successes
# generate an evenly spaced sequence
#print(stats.nbinom.ppf(0.01, k, p)) #0.0
#print(stats.nbinom.ppf(0.99, k, p)) #15
r = np.arange(stats.nbinom.ppf(0.01, k,p),
stats.nbinom.ppf(0.99, k,p))
print(r)
# 2. Compute the probability mass function (pmf)
# P(X=x) --> a probability
probs = stats.nbinom.pmf(r,k,p)
print(probs)
print(type(probs))
plt.bar(r, probs)
plt.ylabel('P(X=x)')
plt.xlabel('x')
plt.title('NB(k=3,p=0.4)')
plt.show()
# 3. Compute the negative binomial cumulative distribution function (cdf), the running sum of the pmf
# P(X<=x) --> a probability
cumsum_probs = stats.nbinom.cdf(r, k, p)
plt.show()
plt.ylabel('P(X<=x)')
plt.xlabel('x')
plt.title('NB(k=3,p=0.4)')
plt.plot(r, cumsum_probs)
plt.show()
# 4. Given a probability value, invert the cdf (ppf) to recover the corresponding x
p_loc= stats.nbinom.ppf(cumsum_probs, k, p)
print(p_loc)
# compare with the figure above
# 5. Draw random samples from the negative binomial distribution
X = stats.nbinom.rvs(k,p,size=20)
print(X)
plt.hist(X)
plt.show()
# try it again -- do you get the same result every run?
# 6. Compute the mean, variance, skewness, and kurtosis of the random variable for fixed parameters.
stat_nbin=stats.nbinom.stats(k,p,moments='mvks')
print(stat_nbin)
print(type(stat_nbin))
#E(X)
print("negative binomial mean=",float(stat_nbin[0]))
print("negative binomial variance=",float(stat_nbin[1]))
print("negative binomial kurtosis=",float(stat_nbin[2]))
print("negative binomial skew=",float(stat_nbin[3]))
```
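Besides `pmf`, `cdf`, `ppf`, and `rvs`, scipy also exposes the survival function `sf`; a quick check that it matches `1 - cdf`:

```python
from scipy import stats

p, k = 0.4, 3
tail_sf = stats.nbinom.sf(5, k, p)        # P(X > 5) via the survival function
tail_cdf = 1 - stats.nbinom.cdf(5, k, p)  # same quantity via 1 - cdf
print(tail_sf, tail_cdf)
```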
### 5. Hypergeometric Distribution
```
'''
The Hypergeometric Distribution describes the number of objects of a
specified type obtained when drawing n objects from a finite population,
without replacement.
If a random variable X follows these parameters, we write X ~ H(n,K,N), where
N : total number of objects, N = 0,1,...
K : number of objects of the type of interest among the N objects, K = 0,1,2,...,N
n : number of objects drawn, n = 0,1,...,N
Example: there are 50 dice, 30 with red numbers and 20 with black numbers;
draw 10 of them, and let X = the number of red ones.
'''
# 1. Define the basic parameters of the hypergeometric distribution
N=50
K=30
n=10
# generate an evenly spaced sequence
r = np.arange(0, min(n+1,K+1)) # x values
print(r)
# 2. Compute the probability mass function (pmf)
# P(X=x) --> a probability
probs = stats.hypergeom.pmf(r, N,K,n)
print(probs)
print(type(probs))
plt.bar(r, probs)
plt.ylabel('P(X=x)')
plt.xlabel('x')
plt.title('pmf of Hypergeometric(N=50,K=30,n=10)')
plt.show()
# 3. Compute the cumulative distribution function (the running sum of the pmf)
# P(X<=x) --> a probability
cumsum_probs = stats.hypergeom.cdf(r, N, K, n)
plt.ylabel('P(X<=x)')
plt.xlabel('x')
plt.title('cdf of Hypergeometric(N=50,K=30,n=10)')
plt.plot(r, cumsum_probs)
plt.show()
# 4. Given a probability, invert the cdf with ppf to recover the corresponding x
p_loc = stats.hypergeom.ppf(cumsum_probs, N, K, n)
print(p_loc)
# compare the result with the plot above
# 5. Draw random sample points from the hypergeometric distribution
X = stats.hypergeom.rvs(N, K, n, size=20)
print(X)
plt.hist(X, bins=25)
plt.show()
# try re-running this: is the result the same every time?
# 6. Compute the mean, variance, skewness, and kurtosis for fixed parameters.
stat_hyperg=stats.hypergeom.stats(N,K,n,moments='mvks')
print(stat_hyperg)
print(type(stat_hyperg))
#E(X)
print("hypergeometric mean=", float(stat_hyperg[0]))
print("hypergeometric variance=", float(stat_hyperg[1]))
print("hypergeometric kurtosis=", float(stat_hyperg[2]))
print("hypergeometric skew=", float(stat_hyperg[3]))
```
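The hypergeometric pmf can also be verified by hand from its counting formula, P(X=x) = C(K,x)·C(N-K, n-x) / C(N,n), using only the standard library:

```python
from math import comb

def hypergeom_pmf(x, N, K, n):
    """P(X = x) objects of interest when drawing n from N without replacement (K of interest in total)."""
    return comb(K, x) * comb(N - K, n - x) / comb(N, n)

N, K, n = 50, 30, 10
probs = [hypergeom_pmf(x, N, K, n) for x in range(0, min(n, K) + 1)]
print(sum(probs))  # should be (essentially) 1.0 over the full support
```

A further check: the mean of these probabilities, sum(x·P(X=x)), should equal n·K/N = 6, matching `stats.hypergeom.stats` above.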
### Slide example: the probability of rolling red three times in five die rolls



```
# compute it with python
# directly from the formula
n = 5
p = 1/3
prob1 = math.factorial(5)/math.factorial(2)/math.factorial(3)*pow(p, 3)*pow((1-p), 2)
print(prob1)
# P(X=3)
probs = stats.binom.pmf(3, n, p)
print(probs)
# the two values agree
```
# RNN + GRU + bidirectional + Attentional context
This kernel uses a recurrent neural network in Keras built from GRU cells with a bidirectional layer and an attention-with-context layer. The model reads both the beginning and the end of each text, then joins the two outputs with a one-hot encoded layer for the gene and another for the variation. The variation has been encoded using its first and last letters.
This kernel is based on a kernel by [ReiiNakano](https://www.kaggle.com/reiinakano/basic-nlp-bag-of-words-tf-idf-word2vec-lstm).
```
%matplotlib inline
import pandas as pd
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss, accuracy_score
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import LabelEncoder
import gensim
import scikitplot.plotters as skplt
import nltk
import os
```
## Load data
```
df_train_txt = pd.read_csv('../input/training_text', sep='\|\|', engine='python', header=None, skiprows=1, names=["ID","Text"])
df_train_var = pd.read_csv('../input/training_variants')
df_val_txt = pd.read_csv('../input/test_text', sep='\|\|', engine='python', header=None, skiprows=1, names=["ID","Text"])
df_val_var = pd.read_csv('../input/test_variants')
df_test_txt = pd.read_csv('../input/stage2_test_text.csv', sep='\|\|', engine='python', header=None, skiprows=1, names=["ID","Text"])
df_test_var = pd.read_csv('../input/stage2_test_variants.csv')
df_val_labels = pd.read_csv('../input/stage1_solution_filtered.csv')
df_val_labels['Class'] = pd.to_numeric(df_val_labels.drop('ID', axis=1).idxmax(axis=1).str[5:])
df_val_labels = df_val_labels[['ID', 'Class']]
df_val_txt = pd.merge(df_val_txt, df_val_labels, how='left', on='ID')
df_train = pd.merge(df_train_var, df_train_txt, how='left', on='ID')
df_train.head()
df_test = pd.merge(df_test_var, df_test_txt, how='left', on='ID')
df_test.head()
df_val = pd.merge(df_val_var, df_val_txt, how='left', on='ID')
df_val = df_val[df_val['Class'].notnull()]
df_val.head()
```
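The `sep='\|\|'` argument is a regular expression matching the literal `||` delimiter; pandas can only parse multi-character/regex separators with the slower `engine='python'` (the C engine falls back with a warning). The parsing can be illustrated with a small in-memory sample (the two text rows below are invented; the real files live under the Kaggle `../input/` paths):

```python
import io
import pandas as pd

sample = ("ID,Text\n"
          "0||Cyclin-dependent kinases regulate the cell cycle.\n"
          "1||BRCA1 is a tumor suppressor gene.")

# skiprows=1 drops the "ID,Text" header line, as in the notebook
df = pd.read_csv(io.StringIO(sample), sep=r'\|\|', engine='python',
                 header=None, skiprows=1, names=["ID", "Text"])
print(df)
```

Using a raw string `r'\|\|'` also avoids Python's invalid-escape-sequence warning for `'\|'`.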
## Word2Vec model
```
class MySentences(object):
"""MySentences is a generator to produce a list of tokenized sentences
Takes a list of numpy arrays containing documents.
Args:
arrays: List of arrays, where each element in the array contains a document.
"""
def __init__(self, *arrays):
self.arrays = arrays
def __iter__(self):
for array in self.arrays:
for document in array:
for sent in nltk.sent_tokenize(document):
yield nltk.word_tokenize(sent)
def get_word2vec(sentences, location):
"""Returns trained word2vec
Args:
sentences: iterator for sentences
location (str): Path to save/load word2vec
"""
if os.path.exists(location):
print('Found {}'.format(location))
model = gensim.models.Word2Vec.load(location)
return model
print('{} not found. training model'.format(location))
model = gensim.models.Word2Vec(sentences, size=100, window=5, min_count=5, workers=8)
print('Model done training. Saving to disk')
model.save(location)
return model
w2vec = get_word2vec(
MySentences(
df_train['Text'].values,
df_val['Text'].values
),
'w2vmodel'
)
```
### Tokenizer
We'll define a transformer (with sklearn interface) to convert a document into its corresponding vector
```
class MyTokenizer:
def __init__(self):
pass
def fit(self, X, y=None):
return self
def transform(self, X):
transformed_X = []
for document in X:
tokenized_doc = []
for sent in nltk.sent_tokenize(document):
tokenized_doc += nltk.word_tokenize(sent)
transformed_X.append(np.array(tokenized_doc))
return np.array(transformed_X)
def fit_transform(self, X, y=None):
return self.transform(X)
class MeanEmbeddingVectorizer(object):
def __init__(self, word2vec):
self.word2vec = word2vec
# if a text is empty we should return a vector of zeros
# with the same dimensionality as all the other vectors
self.dim = len(word2vec.wv.syn0[0])
def fit(self, X, y=None):
return self
def transform(self, X):
X = MyTokenizer().fit_transform(X)
return np.array([
np.mean([self.word2vec.wv[w] for w in words if w in self.word2vec.wv]
or [np.zeros(self.dim)], axis=0)
for words in X
])
def fit_transform(self, X, y=None):
return self.transform(X)
```
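The averaging step in `MeanEmbeddingVectorizer` — look up each in-vocabulary token, average the vectors, and fall back to a zero vector for empty or all-unknown documents — can be illustrated without gensim using a toy word-vector dictionary (the 3-dimensional vectors below are invented for the example):

```python
# toy "word2vec" vocabulary with made-up 3-dimensional vectors
word_vectors = {
    "gene": [1.0, 0.0, 2.0],
    "mutation": [3.0, 2.0, 0.0],
}
dim = 3

def mean_embed(tokens):
    """Average the vectors of known tokens; zeros if none are known."""
    vecs = [word_vectors[w] for w in tokens if w in word_vectors]
    if not vecs:                      # empty or all-unknown document
        return [0.0] * dim
    return [sum(col) / len(vecs) for col in zip(*vecs)]

print(mean_embed(["gene", "mutation", "unseen"]))  # -> [2.0, 1.0, 1.0]
```

Unknown tokens ("unseen") are simply skipped, exactly as the `if w in self.word2vec.wv` guard does in the class above.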
## RNN in Keras
We use a vocabulary of the 10,000 most frequent words and a sequence length of 3000 tokens (3000 from the beginning of each text and 3000 from the end).
Training takes a few hours on a GPU.
```
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
# Use the Keras tokenizer
VOCABULARY_SIZE = 10000
SEQUENCE_LENGTH = 3000
tokenizer = Tokenizer(num_words=VOCABULARY_SIZE)
tokenizer.fit_on_texts(df_train['Text'].values)
# Train set
train_set = df_train.sample(frac=1) # shuffle data first
train_set_input = tokenizer.texts_to_sequences(train_set['Text'].values)
train_set_input_reverse = [list(reversed(x)) for x in train_set_input]
train_set_input_begin = pad_sequences(train_set_input, maxlen=SEQUENCE_LENGTH)
train_set_input_end = pad_sequences(train_set_input_reverse, maxlen=SEQUENCE_LENGTH)
train_set_output = pd.get_dummies(train_set['Class']).values
print(train_set_input_begin.shape, train_set_input_end.shape, train_set_output.shape)
# Validation set
val_set_input = tokenizer.texts_to_sequences(df_val['Text'].values)
val_set_input_reverse = [list(reversed(x)) for x in val_set_input]
val_set_input_begin = pad_sequences(val_set_input, maxlen=SEQUENCE_LENGTH)
val_set_input_end = pad_sequences(val_set_input_reverse, maxlen=SEQUENCE_LENGTH)
val_set_output = pd.get_dummies(df_val['Class']).values
print(val_set_input_begin.shape, val_set_input_end.shape, val_set_output.shape)
# Test set
test_set_input = tokenizer.texts_to_sequences(df_test['Text'].values)
test_set_input_reverse = [list(reversed(x)) for x in test_set_input]
test_set_input_begin = pad_sequences(test_set_input, maxlen=SEQUENCE_LENGTH)
test_set_input_end = pad_sequences(test_set_input_reverse, maxlen=SEQUENCE_LENGTH)
print(test_set_input_begin.shape, test_set_input_end.shape)
```
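Keras's `pad_sequences` (with its defaults `padding='pre'`, `truncating='pre'`) left-pads short sequences with zeros and keeps the last `maxlen` entries of long ones; reversing a sequence before padding therefore changes which end of the document survives truncation. A standard-library sketch of that behavior:

```python
def pad_sequence(seq, maxlen, value=0):
    """Left-pad with `value`; if longer than maxlen, keep the LAST maxlen items
    (mirrors Keras pad_sequences defaults: padding='pre', truncating='pre')."""
    seq = seq[-maxlen:]               # pre-truncation keeps the tail
    return [value] * (maxlen - len(seq)) + seq

print(pad_sequence([5, 6, 7], 5))        # -> [0, 0, 5, 6, 7]
print(pad_sequence([1, 2, 3, 4, 5], 3))  # -> [3, 4, 5]
```

Index 0 is reserved by the Keras tokenizer, which is why 0 is a safe padding value here.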
#### Add genes and variations as one hot encoding
We reduce each variation to its first and last letters; otherwise there would be almost one distinct variation per example and the feature would be useless.
```
# Add gene and variation to predictor
gene_le = LabelEncoder()
all_genes = np.concatenate([df_train['Gene'], df_val['Gene'], df_test['Gene']])
all_variations = np.concatenate([df_train['Variation'], df_val['Variation'], df_test['Variation']])
all_variations = np.asarray([v[0]+v[-1] for v in all_variations])
print ("Unique genes: ", len(np.unique(all_genes)))
print ("Unique variations:", len(np.unique(all_variations)))
# gene_encoded = gene_le.fit_transform(all_genes.ravel()).reshape(-1, 1)
# gene_encoded = gene_encoded / np.max(gene_encoded.ravel())
# variation_le = LabelEncoder()
# variation_encoded = variation_le.fit_transform(all_variations).reshape(-1, 1)
# variation_encoded = variation_encoded / np.max(variation_encoded)
gene_encoded = pd.get_dummies(all_genes).values
variation_encoded = pd.get_dummies(all_variations).values
len_train_set = len(train_set_input)
len_val_set = len(val_set_input)
len_test_set = len(test_set_input)
train_set_input_gene = gene_encoded[:len_train_set]
train_set_input_variation = variation_encoded[:len_train_set]
val_set_input_gene = gene_encoded[len_train_set:-len_test_set]
val_set_input_variation = variation_encoded[len_train_set:-len_test_set]
test_set_input_gene = gene_encoded[-len_test_set:]
test_set_input_variation = variation_encoded[-len_test_set:]
print (len_train_set, len(train_set_input_gene))
print (len_val_set, len(val_set_input_gene))
print (len_test_set, len(test_set_input_gene))
```
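Because the gene and variation one-hot matrices were built from the train, validation, and test rows concatenated in that order, they are split back purely by position. The slicing logic can be checked on plain lists (toy lengths, not the real dataset sizes):

```python
train, val, test = ["t1", "t2", "t3"], ["v1", "v2"], ["x1", "x2", "x3", "x4"]
encoded = train + val + test              # stands in for the concatenated one-hot rows
len_train, len_test = len(train), len(test)

enc_train = encoded[:len_train]           # first len_train rows
enc_val = encoded[len_train:-len_test]    # middle block
enc_test = encoded[-len_test:]            # last len_test rows
print(enc_val)  # -> ['v1', 'v2']
```

This only works because every split keeps its original row order, which `pd.get_dummies` on the concatenated array preserves.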
## Attention layer
from: https://gist.github.com/cbaziotis/7ef97ccf71cbc14366835198c09809d2
```
from keras import backend as K
from keras.engine.topology import Layer
from keras import initializers, regularizers, constraints
import numpy as np
def dot_product(x, kernel):
"""
Wrapper for dot product operation, in order to be compatible with both
Theano and Tensorflow
Args:
x (): input
kernel (): weights
Returns:
"""
if K.backend() == 'tensorflow':
return K.squeeze(K.dot(x, K.expand_dims(kernel)), axis=-1)
else:
return K.dot(x, kernel)
class AttentionWithContext(Layer):
"""
Attention operation, with a context/query vector, for temporal data.
Supports Masking.
Follows the work of Yang et al. [https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf]
"Hierarchical Attention Networks for Document Classification"
by using a context vector to assist the attention
# Input shape
3D tensor with shape: `(samples, steps, features)`.
# Output shape
2D tensor with shape: `(samples, features)`.
How to use:
Just put it on top of an RNN Layer (GRU/LSTM/SimpleRNN) with return_sequences=True.
The dimensions are inferred based on the output shape of the RNN.
Note: The layer has been tested with Keras 2.0.6
Example:
model.add(LSTM(64, return_sequences=True))
model.add(AttentionWithContext())
# next add a Dense layer (for classification/regression) or whatever...
"""
def __init__(self,
W_regularizer=None, u_regularizer=None, b_regularizer=None,
W_constraint=None, u_constraint=None, b_constraint=None,
bias=True, **kwargs):
self.supports_masking = True
self.init = initializers.get('glorot_uniform')
self.W_regularizer = regularizers.get(W_regularizer)
self.u_regularizer = regularizers.get(u_regularizer)
self.b_regularizer = regularizers.get(b_regularizer)
self.W_constraint = constraints.get(W_constraint)
self.u_constraint = constraints.get(u_constraint)
self.b_constraint = constraints.get(b_constraint)
self.bias = bias
super(AttentionWithContext, self).__init__(**kwargs)
def build(self, input_shape):
assert len(input_shape) == 3
self.W = self.add_weight((input_shape[-1], input_shape[-1],),
initializer=self.init,
name='{}_W'.format(self.name),
regularizer=self.W_regularizer,
constraint=self.W_constraint)
if self.bias:
self.b = self.add_weight((input_shape[-1],),
initializer='zero',
name='{}_b'.format(self.name),
regularizer=self.b_regularizer,
constraint=self.b_constraint)
self.u = self.add_weight((input_shape[-1],),
initializer=self.init,
name='{}_u'.format(self.name),
regularizer=self.u_regularizer,
constraint=self.u_constraint)
super(AttentionWithContext, self).build(input_shape)
def compute_mask(self, input, input_mask=None):
# do not pass the mask to the next layers
return None
def call(self, x, mask=None):
uit = dot_product(x, self.W)
if self.bias:
uit += self.b
uit = K.tanh(uit)
ait = dot_product(uit, self.u)
a = K.exp(ait)
# apply mask after the exp. will be re-normalized next
if mask is not None:
# Cast the mask to floatX to avoid float64 upcasting in theano
a *= K.cast(mask, K.floatx())
# in some cases especially in the early stages of training the sum may be almost zero
# and this results in NaN's. A workaround is to add a very small positive number ε to the sum.
# a /= K.cast(K.sum(a, axis=1, keepdims=True), K.floatx())
a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx())
a = K.expand_dims(a)
weighted_input = x * a
return K.sum(weighted_input, axis=1)
def compute_output_shape(self, input_shape):
return input_shape[0], input_shape[-1]
from keras.models import Sequential, Model
from keras.layers import Dense, Embedding, LSTM, GRU, Bidirectional, Input, concatenate
from keras.utils.np_utils import to_categorical
from keras.callbacks import ModelCheckpoint
from keras.models import load_model
from keras.optimizers import Adam
# Build the bidirectional GRU model with attention
embed_dim = 128
lstm_out = 196
# Model saving callback
ckpt_callback = ModelCheckpoint('keras_model',
monitor='val_loss',
verbose=1,
save_best_only=True,
mode='auto')
input_sequence_begin = Input(shape=(train_set_input_begin.shape[1],))
input_sequence_end = Input(shape=(train_set_input_end.shape[1],))
input_gene = Input(shape=(train_set_input_gene.shape[1],))
input_variant = Input(shape=(train_set_input_variation.shape[1],))
merged = concatenate([input_gene, input_variant])
dense = Dense(32, activation='sigmoid')(merged)
embeds_begin = Embedding(VOCABULARY_SIZE, embed_dim, input_length = SEQUENCE_LENGTH)(input_sequence_begin)
embeds_out_begin = Bidirectional(GRU(lstm_out, recurrent_dropout=0.2, dropout=0.2, return_sequences=True))(embeds_begin)
attention_begin = AttentionWithContext()(embeds_out_begin)
embeds_end = Embedding(VOCABULARY_SIZE, embed_dim, input_length = SEQUENCE_LENGTH)(input_sequence_end)
embeds_out_end = Bidirectional(GRU(lstm_out, recurrent_dropout=0.2, dropout=0.2, return_sequences=True))(embeds_end)
attention_end = AttentionWithContext()(embeds_out_end)
merged2 = concatenate([attention_begin, attention_end, dense])
dense2 = Dense(9,activation='softmax')(merged2)
model = Model(inputs=[input_sequence_begin, input_sequence_end, input_gene, input_variant], outputs=dense2)
model.compile(loss = 'categorical_crossentropy', optimizer='adam')
print(model.summary())
```
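The forward pass of `AttentionWithContext` — u_it = tanh(x·W + b), scores a_it = u_it·u, a softmax over time steps, then a weighted sum of the inputs — can be replayed in NumPy to check the shapes (random weights, no masking; this is a sketch of the layer's math, not the trained layer):

```python
import numpy as np

rng = np.random.default_rng(0)
samples, steps, features = 2, 4, 3
x = rng.normal(size=(samples, steps, features))   # RNN output sequence
W = rng.normal(size=(features, features))
b = np.zeros(features)
u = rng.normal(size=features)                     # context vector

uit = np.tanh(x @ W + b)               # (samples, steps, features)
ait = uit @ u                          # (samples, steps) attention scores
a = np.exp(ait)
a /= a.sum(axis=1, keepdims=True)      # softmax over the time axis
out = (x * a[..., None]).sum(axis=1)   # (samples, features) weighted sum
print(out.shape)
```

The output collapses the `steps` axis, matching `compute_output_shape` returning `(samples, features)`.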
### Training
```
model.fit([train_set_input_begin, train_set_input_end, train_set_input_gene, train_set_input_variation], train_set_output,
epochs=6, batch_size=16,
validation_data=([val_set_input_begin,val_set_input_end,val_set_input_gene,val_set_input_variation], val_set_output),
callbacks=[ckpt_callback])
```
### Validation
```
model = load_model('keras_model', custom_objects={'AttentionWithContext': AttentionWithContext})
probas = model.predict([val_set_input_begin, val_set_input_end, val_set_input_gene, val_set_input_variation])
pred_indices = np.argmax(probas, axis=1)
classes = np.array(range(1, 10))
preds = classes[pred_indices]
print('Log loss: {}'.format(log_loss(classes[np.argmax(val_set_output, axis=1)], probas)))
print('Accuracy: {}'.format(accuracy_score(classes[np.argmax(val_set_output, axis=1)], preds)))
skplt.plot_confusion_matrix(classes[np.argmax(val_set_output, axis=1)], preds)
```
## Train with validation set
For the final submission we add the validation set to the training set and train the network for 4 epochs; after 4 epochs it starts overfitting on the validation set. We do this only to add more training samples in the hope of better results.
```
model.fit([
np.concatenate([train_set_input_begin, val_set_input_begin]),
np.concatenate([train_set_input_end,val_set_input_end]),
np.concatenate([train_set_input_gene, val_set_input_gene]),
np.concatenate([train_set_input_variation, val_set_input_variation])
], np.concatenate([train_set_output,val_set_output]),
epochs=4, batch_size=16, callbacks=[ckpt_callback])
```
## Submission
```
probas = model.predict([test_set_input_begin, test_set_input_end, test_set_input_gene, test_set_input_variation])
submission_df = pd.DataFrame(probas, columns=['class'+str(c+1) for c in range(9)])
submission_df['ID'] = df_test['ID']
submission_df.head()
submission_df.to_csv('submission.csv', index=False)
```
# Public LB Score: 0.93662
The private leaderboard shows a score of 2.8. Everybody got much worse results on the private leaderboard; there has been a long discussion about it in the forums.
# Bayesian Classification for Machine Learning for Computational Linguistics
## Using token probabilities for classification
**(C) 2017-2020 by [Damir Cavar](http://damir.cavar.me/)**
**Download:** This and various other Jupyter notebooks are available from my [GitHub repo](https://github.com/dcavar/python-tutorial-notebooks).
**Version:** 1.3, September 2020
**License:** [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/) ([CA BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/))
This is a tutorial related to the discussion of a Bayesian classifier in the textbook [Machine Learning: The Art and Science of Algorithms that Make Sense of Data](https://www.cs.bris.ac.uk/~flach/mlbook/) by [Peter Flach](https://www.cs.bris.ac.uk/~flach/).
This tutorial was developed as part of my course material for the course Machine Learning for Computational Linguistics in the [Computational Linguistics Program](http://cl.indiana.edu/) of the [Department of Linguistics](http://www.indiana.edu/~lingdept/) at [Indiana University](https://www.indiana.edu/).
## Creating a Training Corpus
Assume that we have a set of e-mails that are annotated as *spam* or *ham*, as described in the textbook.
There are $4$ e-mails labeled *ham* and $1$ e-mail is labeled *spam*, that is we have a total of $5$ texts in our corpus.
If we would randomly pick an e-mail from the collection, the probability that we pick a spam e-mail would be $1 / 5$.
Spam emails might differ from ham e-mails just in some words. Here is a sample email constructed with typical keywords:
```
spam = [ """Our medicine cures baldness. No diagnostics needed.
We guarantee Fast Viagra delivery.
We can provide Human growth hormone. The cheapest Life
Insurance with us. You can Lose weight with this treatment.
Our Medicine now and No medical exams necessary.
Our Online pharmacy is the best. This cream Removes
wrinkles and Reverses aging.
One treatment and you will Stop snoring. We sell Valium
and Viagra.
Our Vicodin will help with Weight loss. Cheap Xanax.""" ]
```
The data structure above is a list of strings that contains only one string. The triple-double-quotes mark multi-line text. We can output the size of the variable *spam* this way:
```
print(len(spam))
```
We can create a list of *ham* mails in a similar way:
```
ham = [ """Hi Hans, hope to see you soon at our family party.
When will you arrive.
All the best to the family.
Sue""",
"""Dear Ata,
did you receive my last email related to the car insurance
offer? I would be happy to discuss the details with you.
Please give me a call, if you have any questions.
John Smith
Super Car Insurance""",
"""Hi everyone:
This is just a gentle reminder of today's first 2017 SLS
Colloquium, from 2.30 to 4.00 pm, in Ballantine 103.
Rodica Frimu will present a job talk entitled "What is
so tricky in subject-verb agreement?". The text of the
abstract is below.
If you would like to present something during the Spring,
please let me know.
The current online schedule with updated title
information and abstracts is available under:
http://www.iub.edu/~psyling/SLSColloquium/Spring2017.html
See you soon,
Peter""",
"""Dear Friends,
As our first event of 2017, the Polish Studies Center
presents an evening with artist and filmmaker Wojtek Sawa.
Please join us on JANUARY 26, 2017 from 5:30 p.m. to
7:30 p.m. in the Global and International Studies
Building room 1100 for a presentation by Wojtek Sawa
on his interactive installation art piece The Wall
Speaks–Voices of the Unheard. A reception will follow
the event where you will have a chance to meet the artist
and discuss his work.
Best,"""]
```
The ham-mail list contains $4$ e-mails:
```
print(len(ham))
```
We can access a particular e-mail via index from either spam or ham:
```
print(spam[0])
print(ham[3])
```
We can lower-case the email using the string *lower* function:
```
print(ham[3].lower())
```
We can loop over all e-mails in spam or ham and lower-case the content:
```
for text in ham:
print(text.lower())
```
We can use the tokenizer from NLTK to tokenize the lower-cased text into single tokens (words and punctuation marks):
```
from nltk import word_tokenize
print(word_tokenize(ham[0].lower()))
```
We can count the number of tokens and types in lower-cased text:
```
from collections import Counter
myCounts = Counter(word_tokenize("This is a test. Will this test teach us how to count tokens?".lower()))
print(myCounts)
print("number of types:", len(myCounts))
print("number of tokens:", sum(myCounts.values()))
```
Now we can create a frequency profile of ham and spam words given the two text collections:
```
hamFP = Counter()
spamFP = Counter()
for text in spam:
spamFP.update(word_tokenize(text.lower()))
for text in ham:
hamFP.update(word_tokenize(text.lower()))
print("Ham:\n", hamFP)
print("-" * 30)
print("Spam:\n", spamFP)
from math import log
tokenlist = []
frqprofiles = []
for x in spam:
frqprofiles.append( Counter(word_tokenize(x.lower())) )
tokenlist.append( set(word_tokenize(x.lower())) )
for x in ham:
frqprofiles.append( Counter(word_tokenize(x.lower())) )
tokenlist.append( set(word_tokenize(x.lower())) )
#print(tokenlist)
for x in frqprofiles[0]:
frq = frqprofiles[0][x]
counter = 0
for y in tokenlist:
if x in y:
counter += 1
print(x, frq * log(len(tokenlist)/counter, 2))
```
The probability that we pick randomly an e-mail that is spam or ham can be computed as the ratio of the counts divided by the number of e-mails:
```
total = len(spam) + len(ham)
spamP = len(spam) / total
hamP = len(ham) / total
print("probability to pick spam:", spamP)
print("probability to pick ham:", hamP)
```
We will need the total token counts to calculate the relative frequencies of the tokens, that is, to generate likelihood estimates. We *brute-force* add one to leave room in the probability mass for unknown tokens.
```
totalSpam = sum(spamFP.values()) + 1
totalHam = sum(hamFP.values()) + 1
print("total spam counts + 1:", totalSpam)
print("total ham counts + 1:", totalHam)
```
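The `+ 1` above is a crude form of additive (Laplace-style) smoothing: it enlarges the denominator so that an unseen token can later be assigned the non-zero default probability 1/total instead of 0, which would zero out the whole likelihood product. A compact sketch of the idea on a tiny made-up frequency profile:

```python
from collections import Counter

counts = Counter({"viagra": 2, "cheap": 1})
total = sum(counts.values()) + 1          # + 1 leaves mass for unknown tokens

def token_prob(token):
    """Known tokens get freq/total; unseen tokens get the default 1/total."""
    return counts.get(token, 1) / total

print(token_prob("viagra"), token_prob("medicine"))  # 0.5 0.25
```

This mirrors the notebook's scheme: `frequency/totalSpam` for known tokens and `defaultSpam = 1/totalSpam` for everything else.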
We can relativize the counts in the frequency profiles now:
```
hamFP = Counter( dict([ (token, frequency/totalHam) for token, frequency in hamFP.items() ]) )
spamFP = Counter( dict([ (token, frequency/totalSpam) for token, frequency in spamFP.items() ]) )
print(hamFP)
print("-" * 30)
print(spamFP)
```
We can now compute the default probability that we want to assign to unknown words as $1 / totalSpam$ or $1 / totalHam$ respectively. Whenever we encounter an unknown token that is not in our frequency profile, we will assign the default probability to it.
```
defaultSpam = 1 / totalSpam
defaultHam = 1 / totalHam
print("default spam probability:", defaultSpam)
print("default ham probability:", defaultHam)
```
We can test an unknown document by calculating how likely it was generated by the hamFP-distribution or the spamFP-distribution. We have to tokenize the lower-cased unknown document and compute the product of the likelihood of every single token in the text. We should scale this likelihood with the likelihood of randomly picking a ham or a spam e-mail. Let us calculate the likelihood that the random email is spam:
```
unknownEmail = """Dear ,
we sell the cheapest and best Viagra on the planet. Our delivery is guaranteed confident and cheap.
"""
# the next assignment overwrites the spam-like example above;
# comment it out to score the spam-like e-mail instead
unknownEmail = """Dear Hans,
I have not seen you for so long. When will we go out for a coffee again.
"""
tokens = word_tokenize(unknownEmail.lower())
result = 1.0
for token in tokens:
result *= spamFP.get(token, defaultSpam)
print(result * spamP)
```
Since this number is very small, a better strategy might be to sum up the log-likelihoods:
```
from math import log
resultSpam = 0.0
for token in tokens:
resultSpam += log(spamFP.get(token, defaultSpam), 2)
resultSpam += log(spamP)
print(resultSpam)
resultHam = 0.0
for token in tokens:
resultHam += log(hamFP.get(token, defaultHam), 2)
resultHam += log(hamP)
print(resultHam)
```
The log-likelihood for spam is larger than for *ham*. Our simple classifier would have guessed that this e-mail is *spam*.
```
if max(resultHam, resultSpam) == resultHam:
print("e-mail is ham")
else:
print("e-mail is spam")
```
There are numerous ways to improve the algorithm and this tutorial. Please [send me](http://cavar.me/damir/) your suggestions.
(C) 2017-2020 by [Damir Cavar](http://damir.cavar.me/) - [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/) ([CA BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/))
<h1><center style='color:#f95738'>Table of Contents</center></h1><a class ='anchor' id='a'></a>
* [Packages Used](#b)
* [Data](#c)
* [EDA](#d)
* [Data Visualization](#e)
* **[State-Wise Trends](#j)**
* [Andaman and Nicobar Islands](#1)
* [Andhra Pradesh](#2)
* [Arunachal Pradesh](#3)
* [Assam](#4)
* [Bihar](#5)
* [Chandigarh](#6)
* [Chhattisgarh](#7)
* [Dadra and Nagar Haveli and Daman and Diu](#8)
* [Delhi](#9)
* [Goa](#10)
* [Gujarat](#11)
* [Haryana](#12)
* [Himachal Pradesh](#13)
* [Jammu and Kashmir](#14)
* [Jharkhand](#15)
* [Karnataka](#16)
* [Kerala](#17)
* [Ladakh](#18)
* [Lakshadweep](#19)
* [Madhya Pradesh](#20)
* [Maharashtra](#21)
* [Manipur](#22)
* [Meghalaya](#23)
* [Mizoram](#24)
* [Nagaland](#25)
* [Odisha](#26)
* [Puducherry](#27)
* [Punjab](#28)
* [Rajasthan](#29)
* [Sikkim](#30)
* [Tamil Nadu](#31)
* [Telangana](#32)
* [Tripura](#33)
* [Uttar Pradesh](#34)
* [Uttarakhand](#35)
* [West Bengal](#36)
* [Stats](#f)
<h2><center style='color:#f95738'>Loading Packages</center></h2><a class='anchor' id='b'></a>
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
from datetime import datetime
today = datetime.now()
import os
# build the dated output directory under one base path, so the existence
# check and the mkdir refer to the same location
base_dir = 'E:/Data Science Datasets/Covid_19_Master/Local/'
dist_dir = os.path.join(base_dir, 'DIST_' + today.strftime('%d-%m-%Y'))
if not os.path.exists(dist_dir):
    os.mkdir(dist_dir)
%matplotlib inline
```
<h2><center style='color:#f95738'>Loading Data</center></h2><a class='anchor' id='c'></a>
```
district = pd.read_csv('https://api.covid19india.org/csv/latest/districts.csv')
```
<h2><center style='color:#f95738'>Exploratory Data Analysis</center></h2><a class='anchor' id='d'></a>
```
district.head()
district.tail()
district.describe(include = 'all').transpose()
#Dealing with `nan` and changing data-type
district['Tested'].fillna(0.0, inplace = True)
district['Tested'] = district['Tested'].astype('int64')
district.info()
```
##### Replacing names for districts which are present in 2 different states
* `Pratapgarh` (Rajasthan) ----> `Pratapgarh_rj`
* `Aurangabad` (Bihar) ----> `Aurangabad_br`
* `Balrampur` (Chhattisgarh) ----> `Balrampur_cg`
* `Bilaspur` (Himachal Pradesh) ----> `Bilaspur_hp`
```
district[(district['State'] == 'Rajasthan') & (district['District'] == 'Pratapgarh')]= district[(district['State'] == 'Rajasthan') & (district['District'] == 'Pratapgarh')].replace('Pratapgarh', 'Pratapgarh_rj')
district[(district['State'] == 'Bihar') & (district['District'] == 'Aurangabad')]= district[(district['State'] == 'Bihar') & (district['District'] == 'Aurangabad')].replace('Aurangabad', 'Aurangabad_br')
district[(district['State'] == 'Chhattisgarh') & (district['District'] == 'Balrampur')]= district[(district['State'] == 'Chhattisgarh') & (district['District'] == 'Balrampur')].replace('Balrampur', 'Balrampur_cg')
district[(district['State'] == 'Himachal Pradesh') & (district['District'] == 'Bilaspur')]= district[(district['State'] == 'Himachal Pradesh') & (district['District'] == 'Bilaspur')].replace('Bilaspur', 'Bilaspur_hp')
```
#### Classifying Unknown Districts of Each State
```
district[(district['State'] == 'Manipur') & (district['District'] == 'Unknown')]= district[(district['State'] == 'Manipur') & (district['District'] == 'Unknown')].replace('Unknown', 'Unknown_MN')
district[(district['State'] == 'Goa') & (district['District'] == 'Unknown')]= district[(district['State'] == 'Goa') & (district['District'] == 'Unknown')].replace('Unknown', 'Unknown_GA')
district[(district['State'] == 'Assam') & (district['District'] == 'Unknown')]= district[(district['State'] == 'Assam') & (district['District'] == 'Unknown')].replace('Unknown', 'Unknown_AS')
district[(district['State'] == 'Andaman and Nicobar Islands') & (district['District'] == 'Unknown')]= district[(district['State'] == 'Andaman and Nicobar Islands') & (district['District'] == 'Unknown')].replace('Unknown', 'Unknown_AN')
district[(district['State'] == 'Telangana') & (district['District'] == 'Unknown')]= district[(district['State'] == 'Telangana') & (district['District'] == 'Unknown')].replace('Unknown', 'Unknown_TS')
district[(district['State'] == 'Sikkim') & (district['District'] == 'Unknown')]= district[(district['State'] == 'Sikkim') & (district['District'] == 'Unknown')].replace('Unknown', 'Unknown_SK')
```
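The repeated replace-on-slice pattern above can be written more compactly (and without chained-assignment pitfalls) using boolean masks with `DataFrame.loc`. A sketch on a toy frame — the column names match the notebook, but the rows are invented:

```python
import pandas as pd

df = pd.DataFrame({
    "State": ["Manipur", "Goa", "Kerala"],
    "District": ["Unknown", "Unknown", "Ernakulam"],
})

# state -> suffix for its Unknown bucket
suffix = {"Manipur": "MN", "Goa": "GA"}
for state, code in suffix.items():
    mask = (df["State"] == state) & (df["District"] == "Unknown")
    df.loc[mask, "District"] = "Unknown_" + code

print(df["District"].tolist())  # -> ['Unknown_MN', 'Unknown_GA', 'Ernakulam']
```

`df.loc[mask, col] = value` assigns in place on the original frame, which is what the notebook's slice-replace-assign dance is trying to achieve.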
#### Dropping Invalid Districts from DataFrame
```
dst = ['Italians', 'Foreign Evacuees','Other State' ,'Others','Unknown']
for i in dst:
italian_index = district[district['District'] == i].index
district.drop(index = italian_index, inplace = True)
```
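The drop loop above can also be expressed as a single boolean filter with `Series.isin`, which avoids recomputing the index on every pass. A toy demonstration (invented rows):

```python
import pandas as pd

district = pd.DataFrame({"District": ["Patna", "Others", "Unknown", "Gaya"]})
invalid = ['Italians', 'Foreign Evacuees', 'Other State', 'Others', 'Unknown']

# keep only rows whose District is NOT in the invalid list
district = district[~district["District"].isin(invalid)].reset_index(drop=True)
print(district["District"].tolist())  # -> ['Patna', 'Gaya']
```

`reset_index(drop=True)` is optional; it just renumbers the rows after the filter.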
### Districts
<h5 style='color:#312244'>Districts separated for each state and saved to each state alias.</h5>
```
#----------------Unknown-district buckets (AN, GA, SK, AS, MN, TS)----
an = ['Unknown_AN']
ga = ['Unknown_GA']
sk = ['Unknown_SK']
aS = ['Unknown_AS']
mn = ['Unknown_MN']
ts = ['Unknown_TS']
#----------------Andhra Pradesh--------------------------
ap = ['Anantapur' ,'Chittoor', 'East Godavari', 'Guntur', 'Krishna', 'Kurnool','Prakasam', 'S.P.S. Nellore', 'Srikakulam' ,
'Visakhapatnam', 'West Godavari', 'Y.S.R. Kadapa', 'Vizianagaram']
#----------------Arunachal Pradesh--------------------------
ar = ['Lohit' ,'Papum Pare', 'Changlang', 'Anjaw' ,'Capital Complex', 'East Kameng','East Siang', 'Kamle' ,'Kra Daadi',
'Kurung Kumey', 'Lepa Rada' ,'Longding','Lower Dibang Valley', 'Lower Siang', 'Lower Subansiri', 'Namsai','Pakke Kessang'
,'Shi Yomi' ,'Siang' ,'Tawang' ,'Tirap' ,'Upper Dibang Valley', 'Upper Siang', 'Upper Subansiri', 'West Kameng'
,'West Siang']
#----------------Bihar--------------------------
br = ['Arwal' ,'Aurangabad_br' ,'Banka', 'Begusarai', 'Bhagalpur' ,'Bhojpur', 'Buxar','East Champaran' ,'Gaya' ,'Gopalganj',
'Jehanabad' ,'Kaimur', 'Lakhisarai','Madhepura', 'Munger', 'Nalanda' ,'Nawada' ,'Patna', 'Rohtas', 'Saran', 'Siwan',
'Vaishali' ,'Darbhanga' ,'Madhubani' ,'Purnia','Araria', 'Sheikhpura','Sitamarhi' ,'West Champaran', 'Katihar' ,'Sheohar'
,'Samastipur','Kishanganj' ,'Khagaria' ,'Saharsa' ,'Supaul' ,'Muzaffarpur' ,'Jamui']
#----------------Chandigarh--------------------------
ch = ['Chandigarh']
#----------------Chhattisgarh--------------------------
cg = ['Bilaspur', 'Durg','Korba', 'Raipur', 'Rajnandgaon' ,'Surajpur', 'Kabeerdham', 'Balod', 'Janjgir Champa' ,'Koriya',
'Baloda Bazar', 'Gariaband' ,'Raigarh','Surguja', 'Balrampur_cg' ,'Bametara' ,'Bastar' ,'Bijapur','Dakshin Bastar Dantewada'
,'Dhamtari' ,'Jashpur', 'Kondagaon' ,'Mahasamund','Mungeli', 'Narayanpur' ,'Sukma' ,'Uttar Bastar Kanker'
,'Gaurela Pendra Marwahi' ]
#----------------Delhi--------------------------
dl = ['Delhi']
#----------------Gujarat--------------------------
gj = ['Ahmedabad', 'Anand' ,'Aravalli' ,'Banaskantha', 'Bharuch', 'Bhavnagar','Botad', 'Chhota Udaipur', 'Dahod',
'Dang', 'Gandhinagar', 'Gir Somnath','Jamnagar', 'Kheda', 'Kutch', 'Mahisagar', 'Mehsana', 'Morbi', 'Narmada',
'Navsari', 'Panchmahal', 'Patan', 'Porbandar' ,'Rajkot', 'Sabarkantha', 'Surat', 'Surendranagar' ,'Tapi',
'Vadodara', 'Valsad' ,'Devbhumi Dwarka', 'Junagadh', 'Amreli']
#----------------Himachal Pradesh--------------------------
hp = ['Chamba', 'Hamirpur', 'Kangra' ,'Sirmaur' ,'Solan', 'Una', 'Mandi', 'Shimla','Bilaspur_hp', 'Kullu', 'Kinnaur'
,'Lahaul and Spiti']
#----------------Haryana--------------------------
hr = ['Ambala', 'Bhiwani', 'Charkhi Dadri' ,'Faridabad' ,'Fatehabad' ,'Gurugram','Hisar' ,'Jind' ,'Kaithal','Karnal' ,
'Kurukshetra' ,'Nuh','Palwal','Panchkula', 'Panipat', 'Rohtak', 'Sirsa', 'Sonipat' ,'Yamunanagar' ,'Jhajjar',
'Mahendragarh', 'Rewari']
#----------------Jharkhand--------------------------
jh = ['Bokaro', 'Deoghar' ,'Dhanbad', 'Garhwa' ,'Giridih' ,'Hazaribagh', 'Koderma', 'Palamu', 'Ranchi', 'Simdega' ,'Jamtara'
,'Godda' ,'Dumka', 'East Singhbhum','Latehar' ,'Lohardaga' ,'Ramgarh', 'West Singhbhum' ,'Gumla', 'Saraikela-Kharsawan' ,
'Chatra' , 'Khunti','Pakur', 'Sahibganj']
#----------------Jammu and Kashmir--------------------------
jk = ['Anantnag', 'Bandipora', 'Baramulla', 'Budgam', 'Ganderbal' ,'Jammu' ,'Kathua','Kishtwar', 'Kulgam', 'Kupwara', 'Pulwama'
,'Rajouri' ,'Ramban', 'Samba', 'Shopiyan', 'Srinagar' ,'Udhampur' ,'Reasi', 'Punch', 'Doda']
#----------------Karnataka--------------------------
ka = ['Bagalkote', 'Ballari', 'Belagavi', 'Bengaluru Rural', 'Bengaluru Urban','Bidar', 'Chikkaballapura' ,'Chitradurga',
'Dakshina Kannada' ,'Davanagere','Dharwad','Gadag' ,'Kalaburagi' ,'Kodagu', 'Mandya' ,'Mysuru' , 'Tumakuru', 'Udupi',
'Uttara Kannada' ,'Vijayapura', 'Haveri' ,'Shivamogga','Hassan' ,'Kolar' ,'Yadgir', 'Koppal','Raichur' ,'Chikkamagaluru',
'Ramanagara' ,'Chamarajanagara']
#----------------Kerala--------------------------
kl = ['Alappuzha' ,'Ernakulam', 'Idukki' ,'Kannur' ,'Kasaragod' ,'Kollam' ,'Kottayam','Kozhikode', 'Malappuram' ,'Palakkad'
,'Pathanamthitta' ,'Thiruvananthapuram','Thrissur' ,'Wayanad']
#----------------Ladakh--------------------------
la = ['Kargil','Leh']
#----------------Maharashtra--------------------------
mh = ['Ahmednagar' ,'Akola', 'Amravati', 'Aurangabad' ,'Beed', 'Buldhana','Chandrapur', 'Dhule', 'Gondia' ,'Hingoli','Jalgaon',
'Jalna', 'Kolhapur','Latur', 'Mumbai', 'Nagpur', 'Nanded' ,'Nandurbar' ,'Nashik', 'Osmanabad','Palghar',
'Parbhani', 'Pune', 'Raigad', 'Ratnagiri', 'Sangli','Satara', 'Sindhudurg', 'Solapur', 'Thane', 'Washim', 'Yavatmal',
'Bhandara','Wardha', 'Gadchiroli']
#----------------Meghalaya--------------------------
ml = ['East Khasi Hills', 'West Garo Hills', 'North Garo Hills','South West Garo Hills', 'West Khasi Hills',
'West Jaintia Hills' ,'Ribhoi','East Jaintia Hills', 'East Garo Hills', 'South Garo Hills','South West Khasi Hills']
#----------------Madhya Pradesh--------------------------
mp = ['Agar Malwa', 'Alirajpur' ,'Barwani' ,'Betul' ,'Bhopal', 'Chhindwara' ,'Dewas', 'Dhar' ,'Dindori' ,'Gwalior',
'Hoshangabad', 'Indore', 'Jabalpur', 'Khandwa','Khargone', 'Mandsaur' ,'Morena', 'Other Region', 'Raisen', 'Ratlam',
'Sagar','Shajapur', 'Sheopur' ,'Shivpuri' ,'Tikamgarh', 'Ujjain' ,'Vidisha', 'Harda','Ashoknagar', 'Burhanpur', 'Rewa',
'Shahdol', 'Anuppur', 'Katni', 'Niwari','Panna', 'Satna', 'Neemuch' ,'Jhabua', 'Guna', 'Sehore' ,'Bhind' ,'Mandla',
'Seoni', 'Sidhi' ,'Damoh', 'Datia' ,'Umaria', 'Rajgarh' ,'Singrauli','Chhatarpur', 'Balaghat', 'Narsinghpur']
#----------------Mizoram--------------------------
mz = ['Aizawl', 'Lunglei','Mamit','Saitual' ,'Lawngtlai', 'Champhai' ,'Khawzawl', 'Saiha', 'Kolasib', 'Serchhip', 'Hnahthial']
#----------------Odisha--------------------------
od = ['Balasore', 'Bhadrak', 'Cuttack' ,'Dhenkanal', 'Jajpur' ,'Kalahandi', 'Kendrapara', 'Khordha', 'Puri' ,'Sundargarh',
'Koraput', 'Deogarh', 'Jharsuguda', 'Kendujhar' ,'Balangir', 'Ganjam' ,'Jagatsinghpur', 'Mayurbhanj', 'Angul', 'Nayagarh',
'Boudh', 'Gajapati', 'Kandhamal', 'Malkangiri','Nuapada', 'Rayagada', 'Sambalpur', 'Nabarangapur', 'Bargarh','Subarnapur',
'State Pool']
#----------------Punjab--------------------------
pb = ['Amritsar', 'Barnala' ,'Faridkot', 'Fatehgarh Sahib', 'Ferozepur' ,'Gurdaspur','Hoshiarpur', 'Jalandhar', 'Kapurthala'
,'Ludhiana' ,'Mansa', 'Moga','Pathankot', 'Patiala', 'Rupnagar', 'S.A.S. Nagar', 'Sangrur','Shahid Bhagat Singh Nagar'
,'Sri Muktsar Sahib', 'Tarn Taran', 'Bathinda', 'Fazilka']
#----------------Puducherry--------------------------
py = ['Mahe', 'Puducherry', 'Karaikal', 'Yanam']
#----------------Rajasthan--------------------------
rj = ['Ajmer', 'Alwar', 'Banswara', 'Barmer', 'Bharatpur', 'Bhilwara', 'Bikaner','Chittorgarh' ,'Churu', 'Dausa' ,'Dholpur'
,'Dungarpur', 'Evacuees','Hanumangarh','Jaipur', 'Jaisalmer', 'Jhalawar', 'Jhunjhunu','Jodhpur' ,'Karauli'
,'Kota' ,'Nagaur' ,'Pali', 'Pratapgarh_rj','Rajsamand' ,'Sawai Madhopur', 'Sikar', 'Tonk', 'Udaipur', 'Baran',
'BSF Camp','Jalore', 'Sirohi' ,'Ganganagar', 'Bundi']
#----------------Tamil Nadu--------------------------
tn = ['Ariyalur', 'Chengalpattu' ,'Chennai' ,'Coimbatore' ,'Cuddalore', 'Dharmapuri','Dindigul' ,'Erode' ,'Kallakurichi',
'Kancheepuram' ,'Kanyakumari' ,'Karur','Madurai', 'Nagapattinam', 'Namakkal', 'Nilgiris', 'Perambalur', 'Pudukkottai',
'Ramanathapuram', 'Ranipet', 'Salem' ,'Sivaganga', 'Tenkasi', 'Thanjavur','Theni', 'Thiruvallur', 'Thiruvarur',
'Thoothukkudi', 'Tiruchirappalli','Tirunelveli', 'Tirupathur', 'Tiruppur', 'Tiruvannamalai', 'Vellore', 'Viluppuram',
'Virudhunagar', 'Krishnagiri', 'Airport Quarantine', 'Railway Quarantine']
#----------------Tripura--------------------------
tr = ['Gomati', 'North Tripura' ,'Dhalai' ,'Khowai' ,'Sipahijala', 'South Tripura', 'Unokoti' ,'West Tripura']
#----------------Uttar Pradesh--------------------------
up = ['Agra', 'Aligarh', 'Amroha', 'Auraiya', 'Ayodhya', 'Azamgarh', 'Baghpat', 'Bahraich', 'Balrampur', 'Banda', 'Barabanki',
'Bareilly', 'Basti', 'Bhadohi','Bijnor', 'Budaun', 'Bulandshahr', 'Etah', 'Etawah', 'Firozabad','Gautam Buddha Nagar',
'Ghaziabad', 'Ghazipur', 'Gonda', 'Hapur', 'Hardoi','Hathras', 'Jalaun', 'Jaunpur', 'Kannauj', 'Kanpur Nagar', 'Kasganj',
'Kaushambi', 'Lakhimpur Kheri', 'Lucknow' ,'Maharajganj', 'Mainpuri','Mathura', 'Mau', 'Meerut', 'Mirzapur', 'Moradabad',
'Muzaffarnagar','Pilibhit', 'Pratapgarh', 'Prayagraj', 'Rae Bareli', 'Rampur', 'Saharanpur','Sambhal' ,'Sant Kabir Nagar',
'Shahjahanpur' ,'Shamli' ,'Shrawasti','Sitapur' ,'Sultanpur', 'Unnao', 'Varanasi', 'Gorakhpur', 'Jhansi','Kanpur Dehat',
'Deoria', 'Siddharthnagar', 'Mahoba', 'Amethi', 'Kushinagar','Chitrakoot', 'Fatehpur', 'Farrukhabad', 'Hamirpur',
'Lalitpur', 'Sonbhadra','Ambedkar Nagar', 'Ballia', 'Chandauli']
#----------------Uttarakhand--------------------------
uk = ['Almora', 'Dehradun', 'Haridwar', 'Nainital' ,'Pauri Garhwal','Udham Singh Nagar', 'Uttarkashi', 'Bageshwar', 'Chamoli',
'Champawat','Pithoragarh', 'Rudraprayag' ,'Tehri Garhwal' ]
#----------------West Bengal--------------------------
wb = ['Darjeeling' ,'Hooghly', 'Howrah', 'Jalpaiguri', 'Kalimpong', 'Kolkata','Murshidabad', 'Nadia', 'North 24 Parganas',
'Paschim Bardhaman','Purba Bardhaman' ,'Purba Medinipur', 'South 24 Parganas', 'Birbhum', 'Malda' ,'Paschim Medinipur' ,
'Jhargram','Uttar Dinajpur', 'Dakshin Dinajpur', 'Bankura' ,'Purulia' ,'Alipurduar','Cooch Behar']
#----------------Dadra and Nagar Haveli and Daman and Diu--------------------------
dnd = ['Dadra and Nagar Haveli' ,'Daman' ,'Diu']
#----------------Nagaland--------------------------
ng = ['Dimapur', 'Kohima', 'Mokokchung', 'Mon' ,'Peren' ,'Phek','Tuensang' ,'Wokha', 'Kiphire' ,'Zunheboto','Longleng']
#----------------Lakshadweep--------------------------
lk = ['Lakshadweep']
```
<h5 style='color:#312244'>Function for plotting graph by district for each state.</h5>
```
def district_plot(st):
    for i in st:
        # copy() avoids pandas' SettingWithCopyWarning on the assignment below
        a = district[district['District'] == i].copy()
        a['Date'] = pd.to_datetime(a['Date'], format='%Y-%m-%d')
        plt.figure(figsize=(15, 10))
        sns.set_style("dark")
        # one random colour per curve
        rgb = np.random.rand(3,)
        rgb1 = np.random.rand(3,)
        rgb2 = np.random.rand(3,)
        ax = sns.lineplot(x=a['Date'], y=a['Confirmed'], label='Confirmed', linestyle='--', color=rgb, linewidth=2.5)
        ax = sns.lineplot(x=a['Date'], y=a['Recovered'], label='Recovered', linestyle='-.', color=rgb1, linewidth=2.5)
        ax = sns.lineplot(x=a['Date'], y=a['Deceased'], label='Deceased', linestyle=':', color=rgb2, linewidth=2.5)
        # annotate each curve with its latest value at the right edge
        for value in ax.lines:
            y = value.get_ydata()
            if len(y) > 0:
                ax.annotate(f'{y[-1]:.0f}', xy=(1, y[-1]), xycoords=('axes fraction', 'data'),
                            ha='left', va='center', color=value.get_color())
        ax.set_xlim(a['Date'].iloc[0], a['Date'].iloc[-1])
        plt.ylabel('Counts', fontsize=15)
        plt.xlabel('Date', fontsize=15)
        plt.title('{} district of {} state'.format(a['District'].iloc[0], a['State'].iloc[0]), fontsize=15)
        plt.savefig(os.path.join('DIST_' + today.strftime('%d-%m-%Y'), '{}.png'.format(i)))
        plt.close()  # free the figure so memory does not grow over hundreds of districts
```
<h2><center style='color:#f95738'>Data Visualization</h2><a class='anchor' id='e'></a>
<h4 align='center' style='color:#4d194d'>District wise plot for each state.</h4>
#### An alternative way to make lineplots for all the districts
`(To use, un-comment all the lines and run the code)`
```
#
#st = district['State'].unique()
#for i in st:
#    ds = district[district['State'] == i]['District'].unique()
#    for j in ds:  # separate name so the state loop variable is not shadowed
#        a = district[district['District'] == j].copy()
#        a['Date'] = pd.to_datetime(a['Date'], format='%Y-%m-%d')
#        plt.figure(figsize=(15, 10))
#        sns.set_style("dark")
#        rgb = np.random.rand(3,)
#        rgb1 = np.random.rand(3,)
#        rgb2 = np.random.rand(3,)
#        ax = sns.lineplot(x=a['Date'], y=a['Confirmed'], label='Confirmed', linestyle='--', color=rgb, linewidth=2.5)
#        ax = sns.lineplot(x=a['Date'], y=a['Recovered'], label='Recovered', linestyle='-.', color=rgb1, linewidth=2.5)
#        ax = sns.lineplot(x=a['Date'], y=a['Deceased'], label='Deceased', linestyle=':', color=rgb2, linewidth=2.5)
#        for value in ax.lines:
#            y = value.get_ydata()
#            if len(y) > 0:
#                ax.annotate(f'{y[-1]:.0f}', xy=(1, y[-1]), xycoords=('axes fraction', 'data'), ha='left', va='center', color=value.get_color())
#        ax.set_xlim(a['Date'].iloc[0], a['Date'].iloc[-1])
#        plt.ylabel('Counts', fontsize=15)
#        plt.xlabel('Date', fontsize=15)
#        plt.title('{} district of {} state'.format(a['District'].iloc[0], a['State'].iloc[0]), fontsize=15)
#        plt.savefig(os.path.join('DIST_' + today.strftime('%d-%m-%Y'), '{}.png'.format(j)))
```
<b><i><h3 align='center' style='color:#33415c'>Andaman & Nicobar Islands<a class='anchor' id='1'></a>
```
district_plot(an)
```
<b><i><h3 align='center' style='color:#33415c'>Andhra Pradesh<a class='anchor' id='2'></a>
```
district_plot(ap)
```
<b><i><h3 align='center' style='color:#33415c'>Arunachal Pradesh<a class='anchor' id='3'></a>
```
district_plot(ar)
```
<b><i><h3 align='center' style='color:#33415c'>Assam<a class='anchor' id='4'></a>
```
district_plot(aS)
```
<b><i><h3 align='center' style='color:#33415c'>Bihar<a class='anchor' id='5'></a>
```
district_plot(br)
```
<b><i><h3 align='center' style='color:#33415c'>Chandigarh<a class='anchor' id='6'></a>
```
district_plot(ch)
```
<b><i><h3 align='center' style='color:#33415c'>Chhattisgarh<a class='anchor' id='7'></a>
```
district_plot(cg)
```
<b><i><h3 align='center' style='color:#33415c'>Dadra and Nagar Haveli and Daman and Diu<a class='anchor' id='8'></a>
```
district_plot(dnd)
```
<b><i><h3 align='center' style='color:#33415c'>Delhi<a class='anchor' id='9'></a>
```
district_plot(dl)
```
<b><i><h3 align='center' style='color:#33415c'>Goa<a class='anchor' id='10'></a>
```
district_plot(ga)
```
<b><i><h3 align='center' style='color:#33415c'>Gujarat<a class='anchor' id='11'></a>
```
district_plot(gj)
```
<b><i><h3 align='center' style='color:#33415c'>Haryana<a class='anchor' id='12'></a>
```
district_plot(hr)
```
<b><i><h3 align='center' style='color:#33415c'>Himachal Pradesh<a class='anchor' id='13'></a>
```
district_plot(hp)
```
<b><i><h3 align='center' style='color:#33415c'>Jammu and Kashmir<a class='anchor' id='14'></a>
```
district_plot(jk)
```
<b><i><h3 align='center' style='color:#33415c'>Jharkhand<a class='anchor' id='15'></a>
```
district_plot(jh)
```
<b><i><h3 align='center' style='color:#33415c'>Karnataka<a class='anchor' id='16'></a>
```
district_plot(ka)
```
<b><i><h3 align='center' style='color:#33415c'>Kerala<a class='anchor' id='17'></a>
```
district_plot(kl)
```
<b><i><h3 align='center' style='color:#33415c'>Ladakh<a class='anchor' id='18'></a>
```
district_plot(la)
```
<b><i><h3 align='center' style='color:#33415c'>Lakshadweep<a class='anchor' id='19'></a>
```
district_plot(lk)
```
<b><i><h3 align='center' style='color:#33415c'>Madhya Pradesh<a class='anchor' id='20'></a>
```
district_plot(mp)
```
<b><i><h3 align='center' style='color:#33415c'>Maharashtra<a class='anchor' id='21'></a>
```
district_plot(mh)
```
<b><i><h3 align='center' style='color:#33415c'>Manipur<a class='anchor' id='22'></a>
```
district_plot(mn)
```
<b><i><h3 align='center' style='color:#33415c'>Meghalaya<a class='anchor' id='23'></a>
```
district_plot(ml)
```
<b><i><h3 align='center' style='color:#33415c'>Mizoram<a class='anchor' id='24'></a>
```
district_plot(mz)
```
<b><i><h3 align='center' style='color:#33415c'>Nagaland<a class='anchor' id='25'></a>
```
district_plot(ng)
```
<b><i><h3 align='center' style='color:#33415c'>Odisha<a class='anchor' id='26'></a>
```
district_plot(od)
```
<b><i><h3 align='center' style='color:#33415c'>Puducherry<a class='anchor' id='27'></a>
```
district_plot(py)
```
<b><i><h3 align='center' style='color:#33415c'>Punjab<a class='anchor' id='28'></a>
```
district_plot(pb)
```
<b><i><h3 align='center' style='color:#33415c'>Rajasthan<a class='anchor' id='29'></a>
```
district_plot(rj)
```
<b><i><h3 align='center' style='color:#33415c'>Sikkim<a class='anchor' id='30'></a>
```
district_plot(sk)
```
<b><i><h3 align='center' style='color:#33415c'>Tamil Nadu<a class='anchor' id='31'></a>
```
district_plot(tn)
```
<b><i><h3 align='center' style='color:#33415c'>Telangana<a class='anchor' id='32'></a>
```
district_plot(ts)
```
<b><i><h3 align='center' style='color:#33415c'>Tripura<a class='anchor' id='33'></a>
```
district_plot(tr)
```
<b><i><h3 align='center' style='color:#33415c'>Uttar Pradesh<a class='anchor' id='34'></a>
```
district_plot(up)
```
<b><i><h3 align='center' style='color:#33415c'>Uttarakhand<a class='anchor' id='35'></a>
```
district_plot(uk)
```
<b><i><h3 align='center' style='color:#33415c'>West Bengal<a class='anchor' id='36'></a>
```
district_plot(wb)
```
<h2><center style='color:#f95738'>Stats</h2><a class='anchor' id='f'></a>
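The totals printed below are simply the last row of each district's cumulative series. On a toy stand-in for the `district` dataframe (the column names match the real one, the numbers are invented), the same lookup can be sketched as:

```python
import pandas as pd

# Toy stand-in for the `district` dataframe: cumulative daily counts.
district_toy = pd.DataFrame({
    'District':  ['Patna', 'Patna', 'Gaya', 'Gaya'],
    'State':     ['Bihar', 'Bihar', 'Bihar', 'Bihar'],
    'Date':      ['2021-01-01', '2021-01-02', '2021-01-01', '2021-01-02'],
    'Confirmed': [10, 15, 5, 9],
})

# Latest cumulative count per district = last row of each group,
# which is what `.iloc[-1]` extracts in dist_stat below.
latest = district_toy.groupby('District', sort=False)['Confirmed'].last()
print(latest['Patna'])  # cumulative confirmed cases for Patna
```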
```
def dist_stat(dis):
    for i in dis:
        sub = district[district['District'] == i]   # all rows for this district
        state = sub['State'].iloc[0]
        print(' ')
        print('-----------------------------------------Confirmed----------------------------------------------------------------------')
        print('Total Confirmed cases in {} district of {} are {}'.format(i, state, sub['Confirmed'].iloc[-1]))
        print('-----------------------------------------Recovered----------------------------------------------------------------------')
        print('Total Recovered cases in {} district of {} are {}'.format(i, state, sub['Recovered'].iloc[-1]))
        print('-----------------------------------------Deceased-----------------------------------------------------------------------')
        print('Total Deceased cases in {} district of {} are {}'.format(i, state, sub['Deceased'].iloc[-1]))
        print('-----------------------------------------Tested-------------------------------------------------------------------------')
        print('Total Tested cases in {} district of {} are {}'.format(i, state, sub['Tested'].iloc[-1]))
        print(' ')
        print('XOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOX')
dist_stat(ap)
dist_stat(ar)
dist_stat(br)
dist_stat(ch)
dist_stat(cg)
dist_stat(dnd)
dist_stat(dl)
dist_stat(gj)
dist_stat(hr)
dist_stat(hp)
dist_stat(jk)
dist_stat(jh)
dist_stat(ka)
dist_stat(kl)
dist_stat(la)
dist_stat(lk)
dist_stat(mp)
dist_stat(mh)
dist_stat(ml)
dist_stat(mz)
dist_stat(ng)
dist_stat(od)
dist_stat(py)
dist_stat(pb)
dist_stat(rj)
dist_stat(tn)
dist_stat(tr)
dist_stat(uk)
dist_stat(up)
dist_stat(wb)
```
[TOP](#a)
# COMPAS Recidivism
The COMPAS dataset consists of the results of a commercial algorithm called COMPAS (Correctional Offender Management
Profiling for Alternative Sanctions), used to assess a convicted criminal's likelihood of reoffending.
COMPAS has been used by judges and parole officers and is widely known for its bias against African-Americans.
In this notebook, we will use `fairlens` to explore the COMPAS dataset for bias toward legally protected
features. We will go on to show similar biases in a logistic regressor trained to forecast a criminal's risk of
reoffending using the dataset. [1]
```
# Import libraries
import numpy as np
import pandas as pd
import fairlens as fl
import matplotlib.pyplot as plt
from itertools import combinations, chain
from sklearn.linear_model import LogisticRegression
# Load in the 2 year COMPAS Recidivism dataset
df = pd.read_csv("https://raw.githubusercontent.com/propublica/compas-analysis/master/compas-scores-two-years.csv")
df
```
The analysis done by ProPublica suggests that certain cases may have had alternative reasons for being charged [1].
We will drop these rows, as they are not usable.
```
df = df[(df["days_b_screening_arrest"] <= 30)
& (df["days_b_screening_arrest"] >= -30)
& (df["is_recid"] != -1)
& (df["c_charge_degree"] != 'O')
& (df["score_text"] != 'N/A')].reset_index(drop=True)
df
```
## Analysis
We'll begin by identifying the legally protected attributes in the data. `fairlens` detects these using
fuzzy matching of the column names and values against a custom preset of expected values.
```
# Detect sensitive attributes
sensitive_attributes = fl.sensitive.detect_names_df(df, deep_search=True)
print(sensitive_attributes)
print(sensitive_attributes.keys())
```
We can see that the attributes that we should be concerned about correspond to gender, age and ethnicity.
```
df[["sex", "race", "age", "dob", "age_cat", "decile_score"]].head()
```
`fairlens` will discretize continuous sensitive attributes such as age to make the results more interpretable,
i.e. "Greater than 45", "25 - 45", "Less than 25" in the case of age. The COMPAS dataset comes with a categorical
column for age which we can use instead.
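As an illustration of that discretization step, a plausible binning of raw ages into the three COMPAS bands can be sketched with `pandas.cut`. The exact bin edges `fairlens` would pick are an assumption here:

```python
import pandas as pd

ages = pd.Series([19, 23, 25, 30, 45, 46, 70])
# right=False makes the bins [.., 25), [25, 46), [46, ..), so that
# ages 25-45 inclusive fall in the middle band -- an assumed convention.
age_cat = pd.cut(
    ages,
    bins=[-float("inf"), 25, 46, float("inf")],
    labels=["Less than 25", "25 - 45", "Greater than 45"],
    right=False,
)
```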
We can inspect potential biases in decile scores by visualizing the distributions of different sensitive
sub-groups in the data. Methods in `fairlens.plot` can be used to generate plots of distributions of variables
in different sub-groups in the data.
```
target_attribute = "decile_score"
sensitive_attributes = ["sex", "race", "age", "dob", "age_cat"]
# Set the seaborn style
fl.plot.use_style()
# Plot the distributions
fl.plot.mult_distr_plot(df, target_attribute, sensitive_attributes)
plt.show()
```
The largest horizontal disparity in scores seems to be in race, specifically between African-Americans and Caucasians,
who make up most of the sample. We can visualize or measure the distance between two arbitrary sub-groups using
predicates as shown below.
```
# Plot the distributions of decile scores in subgroups made of African-Americans and Caucasians
group1 = {"race": ["African-American"]}
group2 = {"race": ["Caucasian"]}
fl.plot.distr_plot(df, "decile_score", [group1, group2])
plt.legend(["African-American", "Caucasian"])
plt.show()
group1 = {"race": ["African-American"]}
group2 = df["race"] != "African-American"
fl.plot.distr_plot(df, "decile_score", [group1, group2])
plt.legend(["African-American", "Rest of Population"])
plt.show()
```
We can quantify the above disparity by measuring statistical distances between the two distributions. Since the
decile scores are categorical, metrics such as the Earth Mover's Distance, the LP-Norm, or the Hellinger Distance
are useful. `fairlens.metrics` provides a `stat_distance` method which can be used to compute these metrics.
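For intuition, the Hellinger distance between two categorical distributions can be hand-rolled in a few lines. This is a sketch of the quantity itself, not of fairlens' implementation, and the counts are invented:

```python
import numpy as np

def hellinger(p, q):
    # Normalise raw counts to probability vectors, then apply
    # H(p, q) = sqrt(1/2 * sum((sqrt(p_i) - sqrt(q_i))^2)).
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

d_same = hellinger([10, 20, 30], [10, 20, 30])   # identical distributions
d_diff = hellinger([10, 20, 30], [30, 20, 10])   # reversed distribution
```

The distance is 0 for identical distributions and bounded above by 1, which makes it easy to compare across subgroups.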
```
import fairlens.metrics as fm
group1 = {"race": ["African-American"]}
group2 = {"race": ["Caucasian"]}
distances = {}
for metric in ["emd", "norm", "hellinger"]:
distances[metric] = fm.stat_distance(df, "decile_score", group1, group2, mode=metric)
pd.DataFrame.from_dict(distances, orient="index", columns=["distance"])
```
Measuring the statistical distance between the distribution of a variable in a subgroup and in the entire dataset
can indicate how biased the variable is with respect to the subgroup. We can use the `fl.FairnessScorer` class
to compute this for each sub-group.
```
fscorer = fl.FairnessScorer(df, "decile_score", ["sex", "race", "age_cat"])
fscorer.distribution_score(max_comb=1, p_value=True)
```
The method `fl.FairnessScorer.distribution_score()` makes use of suitable hypothesis tests to determine how different
the distribution of the decile scores is in each sensitive subgroup.
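`fairlens` chooses the hypothesis test internally; as a rough, library-free analogue, a permutation test on a subgroup's mean score can be sketched like this (the scores are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
group = np.array([8, 9, 7, 8, 9, 10, 8])    # made-up decile scores for a subgroup
rest = np.array([3, 4, 2, 5, 3, 4, 3, 2])   # made-up scores for everyone else

observed = group.mean() - rest.mean()
pooled = np.concatenate([group, rest])

# Shuffle the group labels many times and count how often a difference
# at least as extreme as the observed one arises by chance.
n_perm = 2000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    diff = perm[:len(group)].mean() - perm[len(group):].mean()
    if abs(diff) >= abs(observed):
        count += 1
p_value = count / n_perm
```

A small p-value says the subgroup's scores are unlikely under the null of no group effect, which mirrors how the `p_value=True` column of `distribution_score` is read.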
## Training a Model
Our above analysis has confirmed that there are inherent biases present in the COMPAS dataset. We now show the
result of training a model on the COMPAS dataset and using it to predict an unknown criminal's
likelihood of reoffending.
We use a logistic regressor trained on a subset of the features.
```
# Select the features to use
df = df[["sex", "race", "age_cat", "c_charge_degree", "priors_count", "two_year_recid", "score_text"]]
# Split the dataset into train and test
sp = int(len(df) * 0.8)
df_train = df[:sp].reset_index(drop=True)
df_test = df[sp:].reset_index(drop=True)
# Convert categorical columns to numerical columns
def preprocess(df):
X = df.copy()
X["sex"] = pd.factorize(df["sex"])[0]
X["race"] = pd.factorize(df["race"])[0]
X["age_cat"].replace(["Greater than 45", "25 - 45", "Less than 25"], [2, 1, 0], inplace=True)
X["c_charge_degree"] = pd.factorize(df["c_charge_degree"])[0]
X.drop(columns=["score_text"], inplace=True)
X = X.to_numpy()
y = pd.factorize(df["score_text"] != "Low")[0]
return X, y
df_train = df[:sp].reset_index(drop=True)  # re-create df_train so re-runs start without a stale 'pred' column
# Train a regressor
X, y = preprocess(df_train)
clf = LogisticRegression(random_state=0).fit(X, y)
# Classify the training data
df_train["pred"] = clf.predict_proba(X)[:, 1]
# Plot the distributions
fscorer = fl.FairnessScorer(df_train, "pred", ["race", "sex", "age_cat"])
fscorer.plot_distributions()
plt.show()
X = df.copy()
X["sex"] = pd.factorize(df["sex"])[0]
X["race"] = pd.factorize(df["race"])[0]
X["age_cat"].replace(["Greater than 45", "25 - 45", "Less than 25"], [2, 1, 0], inplace=True)
X["c_charge_degree"] = pd.factorize(df["c_charge_degree"])[0]
X.drop(columns=["score_text"], inplace=True)
X.corr()
```
Above, we see that the distributions of the training predictions are similar to the distributions in the data itself.
Let's see the results on the held-out test set.
```
# Classify the test data
X, _ = preprocess(df_test)
df_test["pred"] = clf.predict_proba(X)[:, 1]
# Plot the distributions
fscorer = fl.FairnessScorer(df_test, "pred", ["race", "sex", "age_cat"])
fscorer.plot_distributions()
plt.show()
fscorer.distribution_score(max_comb=1, p_value=True).sort_values("Distance", ascending=False).reset_index(drop=True)
```
Let's try training a model after dropping "race" from the features and look at the results.
```
# Drop the predicted column before training again
df_train.drop(columns=["pred"], inplace=True)
# Preprocess the data and drop race
X, y = preprocess(df_train)
X = np.delete(X, df_train.columns.get_loc("race"), axis=1)
# Train a regressor and classify the training data
clf = LogisticRegression(random_state=0).fit(X, y)
df_train["pred"] = clf.predict_proba(X)[:, 1]
# Plot the distributions
fscorer = fl.FairnessScorer(df_train, "pred", ["race", "sex", "age_cat"])
fscorer.plot_distributions()
plt.show()
# Drop the predicted column before training again
df_test.drop(columns=["pred"], inplace=True)
# Classify the test data
X, _ = preprocess(df_test)
X = np.delete(X, df_test.columns.get_loc("race"), axis=1)
df_test["pred"] = clf.predict_proba(X)[:, 1]
# Plot the distributions
fscorer = fl.FairnessScorer(df_test, "pred", ["race", "sex", "age_cat"])
fscorer.plot_distributions()
plt.show()
fscorer.distribution_score(max_comb=1, p_value=True).sort_values("Distance", ascending=False).reset_index(drop=True)
```
We can see that despite dropping the attribute "race", the biases toward people in different racial groups are relatively
unaffected.
## References
[1] Jeff Larson, Surya Mattu, Lauren Kirchner, and Julia Angwin. How we analyzed the compas recidivism algorithm. 2016.
URL: <https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm>.
# Assignment 5-b
## Loan Pham and Brandan Owens
```
#q.1 Load dataset "tips.csv"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
df = pd.read_csv("../dataFiles/tips.csv")
df.head(5)
#q.1.a Create a dataframe, party_count, by counting the party size for each day of a week.
party_count = df.groupby('day')['size'].value_counts().reset_index(name="count")
party_count
#q.1.b Since there are not many parties with 1 person or 6 people, drop the party sizes of 1 and 6 from party_count.
party_count = party_count[(party_count['size'] > 1 ) & (party_count['size'] < 6)]
party_count
#q.1.c Normalize it so that the percentages of party size sum to 1 for each day. Then create a bar plot.
temp = party_count.groupby(['day', 'size']).agg({'count': 'sum'})
ct_pct = temp.groupby(level = 0).apply(lambda x: x / float(x.sum()))
new_df=ct_pct.groupby(['day', 'size'])['count'].sum().reset_index(name = 'counts')
sns.barplot(x = 'day', y = 'counts', data = new_df, hue = 'size')
#q.1.d From the original dataframe, create a column of tipping percentage which is given by the following formula: tip_pct = tip / (total bill – tip). Create a bar plot of the tipping percentage by day.
df['tip_pct'] = df['tip'] / (df['total_bill'] - df['tip'])
sns.barplot(x = 'tip_pct', y = 'day', data = df)
#q.1.e Create a bar plot of the tipping percentage by day and lunch/dinner.
sns.barplot(data = df, x = 'tip_pct', y = 'day', hue = 'time')
#q.1.f Plot the tipping percentage by day and lunch/dinner with seaborn for non-smokers. On the same figure, plot the tipping percentage by day and lunch/dinner for smokers.
sns.catplot(data = df, kind = 'bar', x = 'day', y = 'tip_pct', hue = 'time', col = 'smoker')
#q.1.g Create a density plot for tipping percentage for lunch. Create a density plot for tipping percentage for dinner on the same figure.
# distplot is deprecated in recent seaborn; kdeplot draws the requested density plot
sns.kdeplot(df['tip_pct'][df['time'] == 'Dinner'], label = 'Dinner')
sns.kdeplot(df['tip_pct'][df['time'] == 'Lunch'], label = 'Lunch')
plt.legend()
#q.2 Load the dataset 'diamonds.csv'
df = pd.read_csv("../dataFiles/diamonds.csv")
df.head(5)
#q.2.a Create a bar plot of different cuts.
sns.catplot(x = 'cut', kind = 'count', data = df)
#q.2.b Create a stacked bar plot of cuts vs. clarity.
g = df.groupby('cut')['clarity'].value_counts().unstack()
g.plot.bar(stacked = True)
#q.2.c Plot a histogram using 'carat'.
sns.histplot(data = df, x = 'carat')
#q.2.d Make density plots on 'carat' against 'cut'.
sns.kdeplot(df['carat'][df['cut'] == 'Fair'], label = 'Fair')
sns.kdeplot(df['carat'][df['cut'] == 'Good'], label = 'Good')
sns.kdeplot(df['carat'][df['cut'] == 'Ideal'], label = 'Ideal')
sns.kdeplot(df['carat'][df['cut'] == 'Premium'], label = 'Premium')
sns.kdeplot(df['carat'][df['cut'] == 'Very Good'], label = 'Very Good')
plt.xlabel('carat')
plt.legend()
#q.2.e Make scatter plots to show the relationship between 'carat' and 'price'. Mark different 'cut' with separate colors.
sns.scatterplot(x = 'carat', y = 'price', hue = 'cut', data = df)  # x_bins is not a scatterplot argument, so it is dropped
#q.2.f Create a heatmap using 'cut' and 'color' use cmap="Greens".
dx = df.groupby(['cut', 'color']).size().unstack(fill_value = 0)
a = ['Fair', 'Good', 'Very Good', 'Premium', 'Ideal']
dx = dx.reindex(index = a)
ax = sns.heatmap(dx, cmap = 'Greens').invert_yaxis()
#q.2.g Create a bar plot using 'cut' and 'color'.
sns.barplot(x = 'cut', y = 'price', hue='color', data = df)
```
<!-- dom:TITLE: PHY321: Classical Mechanics 1 -->
# PHY321: Classical Mechanics 1
<!-- dom:AUTHOR: First midterm project, due Wednesday March 12 -->
<!-- Author: -->
**First midterm project, due Wednesday March 12**
Date: **An attempt at a solution**
### Part 1, Particle in a one-dimensional potential
We consider a particle (for example an atom) of mass $m$ moving in a one-dimensional potential,
$$
V(x)=\frac{V_0}{d^4}\left(x^4-2x^2d^2+d^4\right).
$$
We will assume all other forces on the particle are small in
comparison, and neglect them in our model. The parameters $V_0$ and
$d$ are known constants.
1. (5pt) Plot the potential and find the equilibrium points (stable and unstable) by requiring that the first derivative of the potential is zero. Make an energy diagram (see for example Malthe-Sørenssen chapter 11.3) and mark the equilibrium points on the diagram and characterize their stability. The position of the particle is $x$.
We have chosen values $d=1$ and $V_0=4$.
The following Python code gives a plot of potential
```
%matplotlib inline
# Common imports
import numpy as np
from math import *
import matplotlib.pyplot as plt
Deltax = 0.01
#set up arrays
xinitial = -2.0
xfinal = 2.0
n = ceil((xfinal-xinitial)/Deltax)
x = np.zeros(n)
for i in range(n):
x[i] = xinitial+i*Deltax
V = np.zeros(n)
# Initial conditions as compact 2-dimensional arrays
d = 1.0
V0 = 4.0
V = (V0/d**4)*(x**4-2*x*x*d*d+d**4)
# Plot potential as function of position
fig, ax = plt.subplots()
#ax.set_xlim(0, tfinal)
ax.set_xlabel('x')
ax.set_ylabel('V')
ax.plot(x, V)
fig.tight_layout()
plt.show()
```
When we take the derivative of the potential we have
$$
\frac{dV}{dx} = \frac{V_0}{d^4}\left(4x^3-4xd^2\right).
$$
We see that we have minima or maxima when the derivative is zero. This leads to
$$
\frac{V_0}{d^4}\left(4x^3-4xd^2\right)=0,
$$
which gives us as solutions $x=0$ and $x=\pm d$.
Taking the second derivative we have that
$$
\frac{d^2V}{dx^2} = \frac{V_0}{d^4}\left(12x^2-4d^2\right).
$$
If we use $x=\pm d$ we find that the second derivative is given by
$$
\frac{d^2V}{dx^2}\vert_{x=\pm d} = \frac{8V_0}{d^2},
$$
and assuming $V_0 > 0$, these points correspond to stable equilibrium
points since the second derivative is positive, meaning that
$x=\pm d$ are minima. For $x=0$ we get
$$
\frac{d^2V}{dx^2}\vert_{x=0} = -\frac{4V_0}{d^2},
$$
and this point represents a maximum, so $x=0$ is an unstable equilibrium point.
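A quick numerical sanity check of these equilibrium points, using the same values $d=1$ and $V_0=4$ as in the plot above:

```python
# Check that V'(x) vanishes at x = 0 and x = +-d, and that the sign of
# V''(x) classifies the stability exactly as derived above.
d, V0 = 1.0, 4.0

def dV(x):
    return (V0 / d**4) * (4 * x**3 - 4 * x * d**2)

def d2V(x):
    return (V0 / d**4) * (12 * x**2 - 4 * d**2)

equilibria = [-d, 0.0, d]
```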
1. (5pt) Choose two different energies that give two distinct types of motions, draw them into the energy diagram, and describe the motion in each case.
We select an energy where the particle starts with an energy less than
$V_0$. It therefore oscillates within its starting well.

For the second energy, the particle starts with an energy greater than
$V_0$. It therefore moves through its starting well, passes over the hump at $x=0$,
and continues into the second well up to the same height as
its starting point. It then oscillates back to the original point.
1. (5pt) If the particle starts at rest at $x=2d$, what is the velocity of the particle at the point $x=d$?
Below we show that the above potential fulfills the requirements for
being an energy-conserving potential. Since energy is conserved, the
sum of the potential and kinetic energies at the two positions must be
equal. At $x=2d$ the velocity is $v_{2d}=0$ (we skip units) and at
$x=d$ we have the unknown velocity $v_d$. Inserting the values for the
kinetic and potential energies we get
$$
E_{2d}=0+V(x=2d)=9V_0=E_d=\frac{1}{2}mv^2_d+V(x=d)=\frac{1}{2}mv^2_d+0,
$$
which gives us
$$
v_d=\pm 3\sqrt{\frac{2V_0}{m}}.
$$
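This result can be verified numerically with the notebook's values $d=1$, $V_0=4$ and an assumed mass $m=1$:

```python
from math import sqrt

# Assumed values: d = 1, V0 = 4 as above, and m = 1 for illustration.
d, V0, m = 1.0, 4.0, 1.0

def V(x):
    return (V0 / d**4) * (x**4 - 2 * x**2 * d**2 + d**4)

E = V(2 * d)                     # starts at rest at x = 2d: energy is all potential
v_d = sqrt(2 * (E - V(d)) / m)   # speed at x = d from energy conservation
```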
1. (5pt) If the particle starts at $x=d$ with velocity $v_0$, how large must $v_0$ be for the particle to reach the point $x=−d$?
At $x=d$ the potential energy is zero. In order to reach the other side
of the well, the particle must have a kinetic energy which can
overcome the potential energy of the hump at $x=0$, where the
potential is $V_0$. Assuming that the velocity is zero there,
conservation of energy gives us
$$
E_{d}=\frac{1}{2}mv^2_d+0=E_0=0+V_0,
$$
meaning we have
$$
v_d=\pm\sqrt{\frac{2V_0}{m}}.
$$
1. (5pt) Use the above potential to set up the total forces acting on the particle. Find the acceleration acting on the particle. Is this a conservative force? Calculate also the **curl** of the force $\boldsymbol{\nabla}\times \boldsymbol{F}$ in order to validate your conclusion.
The potential depends only on position, and the **curl** of the force is
zero. The latter is easily seen since the components of the **curl** in
the $x$-, $y$- and $z$-directions are, respectively,
$$
\frac{\partial F_z}{\partial y} -\frac{\partial F_y}{\partial z} =0,
$$
and
$$
\frac{\partial F_x}{\partial z} -\frac{\partial F_z}{\partial x} =0,
$$
and
$$
\frac{\partial F_y}{\partial x} -\frac{\partial F_x}{\partial y} =0,
$$
which means the **curl** of the force is zero.
1. (5pt) Are linear momentum and angular momentum conserved? You need to show this by calculating the quantities.
Because an external force acts on the particle, linear momentum is not
conserved. The time-derivative of the momentum $\boldsymbol{p}$ is given by
$$
\frac{d\boldsymbol{p}}{dt}=\boldsymbol{F},
$$
and since the problem is a one-dimensional we have
$$
\frac{dp}{dt}=F=-\frac{dV(x)}{dx}=-\frac{V_0}{d^4}\left(4x^3-4xd^2\right).
$$
Linear momentum is thus not conserved. Angular momentum is defined as
$$
\boldsymbol{l} = \boldsymbol{r}\times \boldsymbol{p},
$$
and its time derivative is
$$
\frac{d\boldsymbol{l}}{dt} = \boldsymbol{r}\times \frac{d\boldsymbol{p}}{dt}=\boldsymbol{r} \times \boldsymbol{F},
$$
which gives us the following $x$, $y$ and $z$ components of the derivative of angular momentum
(and since we are in one dimension only we have $y=z=0$ and $F_y=F_z=0$)
$$
yF_z - zF_y =0,
$$
and
$$
zF_x - xF_z =0,
$$
and
$$
xF_y - yF_x =0.
$$
Angular momentum is thus conserved.
1. (10pt) Write a numerical algorithm to find the position and velocity of the particle at a time $t+\Delta t$ given the position and velocity at a time $t$. Here you can use either the standard forward Euler, or the Euler-Cromer or the Velocity Verlet algorithms. You need to justify your choice here (hint: consider energy conservation).
Since the force conserves energy, using either the Euler-Cromer or the
Velocity Verlet method preserves the energy as function of time. The
Velocity Verlet method also has a better mathematical truncation error,
$O(\Delta t^3)$, compared with $O(\Delta t^2)$ for the
Euler-Cromer method. The main difference is the additional evaluation of the
acceleration at time $t+\Delta t$. Thus, the Velocity Verlet method is the
preferred one here. The code examples in part 1 and part 2 focus on this method.
Setting up such a code has been discussed during the lectures; we refer to the slides from week 7 and week 8.
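A minimal sketch of such a Velocity Verlet implementation (function and parameter names are our own; $m=1$ kg, $V_0=1$ J and $d=0.1$ m are the values quoted in the exercise below):

```python
import numpy as np

m, V0, d = 1.0, 1.0, 0.1  # kg, J, m

def acceleration(x):
    # a = -(1/m) dV/dx = -(V0/(m d^4)) (4x^3 - 4x d^2)
    return -V0/(m*d**4)*(4*x**3 - 4*x*d**2)

def velocity_verlet(x0, v0, dt=1e-3, t_max=30.0):
    n = int(round(t_max/dt))
    x = np.empty(n + 1); v = np.empty(n + 1)
    x[0], v[0] = x0, v0
    a = acceleration(x[0])
    for i in range(n):
        x[i+1] = x[i] + dt*v[i] + 0.5*dt**2*a
        a_new = acceleration(x[i+1])      # extra force evaluation at t + dt
        v[i+1] = v[i] + 0.5*dt*(a + a_new)
        a = a_new
    return x, v

x, v = velocity_verlet(x0=d, v0=0.5)      # oscillates inside one well
```

The total energy $E=\frac{1}{2}mv^2+V(x)$ computed along this trajectory stays (nearly) constant, which is the reason for preferring this method over the standard forward Euler.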
1. (10pt) Use now your program to find the position of the particle as function of time from $t=0$ to $t=30$ s using a mass $m=1.0$ kg, the parameter $V_0=1$ J and $d=0.1$ m. Make a plot of three distinct positions with initial conditions $x_0=d$ and $v_0=0.5$ m/s, $x_0=d$ and $v_0=1.5$ m/s, and $x_0=d$ and $v_0=2.5$ m/s. Plot also the velocity. Perform calculations with and without the term $x^4$ in the potential. Do you see a difference? Compare and discuss the results obtained with the Velocity Verlet algorithm and the Euler-Cromer algorithm.
2. (10pt) Describe the behavior of the particle for the three initial conditions and sketch the motion in an energy diagram. Is energy conserved in your simulations?
We also see that the particle exhibits an oscillatory motion in all
cases. In the first case ($v_0=0.5$ m/s), it oscillates within one
potential well around the stable equilibrium point at $x=d$. In the
other two cases ($v_0=1.5$ m/s and $v_0=2.5$ m/s), the particle exhibits a
periodic oscillatory motion around the unstable equilibrium point at $x=0$,
moving across both potential wells at $x=\pm d$. Energy is
conserved in all three cases.
### Part 2, two dimensions
We move then to two dimensions. Our particle/object interacts with a surface potential given by
$$
V(r)=\frac{V_0}{d^4}\left(r^4-2r^2d^2+d^4\right),
$$
where $r=\sqrt{x^2+y^2}$ is the distance to the origin.
1. (5pt) Show that the acceleration is now $\boldsymbol{a}=-\frac{4V_0}{md^4}\left(r^3-rd^2\right)\frac{\boldsymbol{r}}{r}$.
The gradient of $r$ is thus
$$
\boldsymbol{\nabla} r=\frac{\partial r}{\partial x}\boldsymbol{e}_1+\frac{\partial r}{\partial y}\boldsymbol{e}_2,
$$
which becomes
$$
\boldsymbol{\nabla} r=\frac{x}{\sqrt{x^2+y^2}}\boldsymbol{e}_1+\frac{y}{\sqrt{x^2+y^2}}\boldsymbol{e}_2,
$$
or
$$
\boldsymbol{\nabla} r=\frac{\boldsymbol{r}}{r}.
$$
The force is given by the negative gradient of the potential, that is
$$
\boldsymbol{F}=-\boldsymbol{\nabla}V(\boldsymbol{r})=-\boldsymbol{\nabla}\left[\frac{V_0}{d^4}\left(r^4-2r^2d^2+d^4\right)\right],
$$
and using the chain rule and the above gradient of $r$, we have
$$
\boldsymbol{F}=-\frac{V_0}{d^4}\left(4r^3-4rd^2\right)\frac{\boldsymbol{r}}{r},
$$
and dividing by the mass $m$ we find the acceleration
$$
\boldsymbol{a}=-\frac{V_0}{md^4}\left(4r^3-4rd^2\right)\frac{\boldsymbol{r}}{r}=-\frac{4V_0}{md^4}\left(r^3-rd^2\right)\frac{\boldsymbol{r}}{r}.
$$
1. (10pt) Rewrite your program to find the velocity and position of the atom using the new expression for the force $\boldsymbol{F}$. Use vectorized expressions in your code as you did in homework 6 for the Earth-Sun system. See eventually the code from the [lectures](https://mhjensen.github.io/Physics321/doc/pub/energyconserv/html/energyconserv.html). We recommend to revisit the Earth-Sun problem from homework 6 since it has several similarities with the problem here.
2. (10pt) Plot the motion of a particle starting in $\boldsymbol{r}_0=(d,0)$ from $t=0$ s to $t=20$ s for the initial velocities $\boldsymbol{v}_0=(0,0.5)$ m/s, $\boldsymbol{v}_0=(0,1)$ m/s, and $\boldsymbol{v}_0=(0,1.5)$ m/s. The parameters $d$ and $V_0$ are as before.
3. (5pt) Is energy conserved?
4. (10pt) Can you choose initial conditions $r_0$ and $v_0$ in such a manner that the particle moves in a circular orbit with a constant radius? If so, what initial conditions are those? Plot the motion for these conditions.
From the centripetal force, we have (for a velocity perpendicular to the
radius vector)
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation}
\frac{v^2}{r}\frac{\boldsymbol{r}}{r} = \frac{4 V_0}{m d^4}(r^3 - r d^2) \frac{\boldsymbol{r}}{r}
\label{_auto1} \tag{1}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>
$$
\begin{equation} v^2 = \frac{4 V_0}{m d^4}(r^4 - r^2d^2)
\label{_auto2} \tag{2}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>
$$
\begin{equation} v_0 = \sqrt{\frac{4 V_0}{m d^4}(r_0^4 - r_0^2d^2)} \text{ for } r_0 > d
\end{equation}
\label{_auto3} \tag{3}
\end{equation}
$$
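This condition can be checked numerically. A hedged sketch (our own code, reusing the Velocity Verlet idea in two dimensions; $r_0=2d$ is an arbitrary choice with $r_0>d$, and $m=1$ kg, $V_0=1$ J, $d=0.1$ m as before):

```python
import numpy as np

m, V0, d = 1.0, 1.0, 0.1

def acc(r_vec):
    r = np.linalg.norm(r_vec)
    return -4*V0/(m*d**4)*(r**3 - r*d**2)*r_vec/r

r0 = 2*d
v0 = np.sqrt(4*V0/(m*d**4)*(r0**4 - r0**2*d**2))  # condition (3)
pos = np.array([r0, 0.0])
vel = np.array([0.0, v0])          # velocity perpendicular to the radius vector
dt = 1e-4
a = acc(pos)
radii = []
for _ in range(20000):             # about 2 s, several revolutions
    pos = pos + dt*vel + 0.5*dt**2*a
    a_new = acc(pos)
    vel = vel + 0.5*dt*(a + a_new)
    a = a_new
    radii.append(np.linalg.norm(pos))
# the orbit radius stays constant to numerical accuracy
```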
# Heat
In this example the laser-excitation of a sample `Structure` is shown.
It includes the actual absorption of the laser light as well as the transient temperature profile calculation.
## Setup
Do all necessary imports and settings.
```
import udkm1Dsim as ud
u = ud.u # import the pint unit registry from udkm1Dsim
import scipy.constants as constants
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
u.setup_matplotlib() # use matplotlib with pint units
```
## Structure
Refer to the [structure-example](structure.ipynb) for more details.
```
O = ud.Atom('O')
Ti = ud.Atom('Ti')
Sr = ud.Atom('Sr')
Ru = ud.Atom('Ru')
Pb = ud.Atom('Pb')
Zr = ud.Atom('Zr')
# c-axis lattice constants of the two layers
c_STO_sub = 3.905*u.angstrom
c_SRO = 3.94897*u.angstrom
# sound velocities [nm/ps] of the two layers
sv_SRO = 6.312*u.nm/u.ps
sv_STO = 7.800*u.nm/u.ps
# SRO layer
prop_SRO = {}
prop_SRO['a_axis'] = c_STO_sub # aAxis
prop_SRO['b_axis'] = c_STO_sub # bAxis
prop_SRO['deb_Wal_Fac'] = 0 # Debye-Waller factor
prop_SRO['sound_vel'] = sv_SRO # sound velocity
prop_SRO['opt_ref_index'] = 2.44+4.32j
prop_SRO['therm_cond'] = 5.72*u.W/(u.m*u.K) # heat conductivity
prop_SRO['lin_therm_exp'] = 1.03e-5 # linear thermal expansion
prop_SRO['heat_capacity'] = '455.2 + 0.112*T - 2.1935e6/T**2' # heat capacity [J/kg K]
SRO = ud.UnitCell('SRO', 'Strontium Ruthenate', c_SRO, **prop_SRO)
SRO.add_atom(O, 0)
SRO.add_atom(Sr, 0)
SRO.add_atom(O, 0.5)
SRO.add_atom(O, 0.5)
SRO.add_atom(Ru, 0.5)
# STO substrate
prop_STO_sub = {}
prop_STO_sub['a_axis'] = c_STO_sub # aAxis
prop_STO_sub['b_axis'] = c_STO_sub # bAxis
prop_STO_sub['deb_Wal_Fac'] = 0 # Debye-Waller factor
prop_STO_sub['sound_vel'] = sv_STO # sound velocity
prop_STO_sub['opt_ref_index'] = 2.1+0j
prop_STO_sub['therm_cond'] = 12*u.W/(u.m*u.K) # heat conductivity
prop_STO_sub['lin_therm_exp'] = 1e-5 # linear thermal expansion
prop_STO_sub['heat_capacity'] = '733.73 + 0.0248*T - 6.531e6/T**2' # heat capacity [J/kg K]
STO_sub = ud.UnitCell('STOsub', 'Strontium Titanate Substrate', c_STO_sub, **prop_STO_sub)
STO_sub.add_atom(O, 0)
STO_sub.add_atom(Sr, 0)
STO_sub.add_atom(O, 0.5)
STO_sub.add_atom(O, 0.5)
STO_sub.add_atom(Ti, 0.5)
S = ud.Structure('Single Layer')
S.add_sub_structure(SRO, 100) # add 100 layers of SRO to sample
S.add_sub_structure(STO_sub, 200) # add 200 layers of STO substrate
```
## Initialize Heat
The `Heat` class requires a `Structure` object and a boolean `force_recalc` in order to overwrite previous simulation results.
These results are saved in the `cache_dir` when `save_data` is enabled.
Printing of simulation messages can be en-/disabled using `disp_messages`, and progress bars can be toggled using the boolean switch `progress_bar`.
```
h = ud.Heat(S, True)
h.save_data = False
h.disp_messages = True
print(h)
```
## Simple Excitation
In order to calculate the temperature of the sample after quasi-instantaneous (delta) photoexcitation the `excitation` must be set with the following parameters:
* `fluence`
* `delay_pump`
* `pulse_width`
* `multilayer_absorption`
* `wavelength`
* `theta`
The angle of incidence `theta` changes the footprint of the excitation on the sample for any type of excitation.
The `wavelength` and `theta` angle of the excitation are also relevant if `multilayer_absorption = True`.
Otherwise the _Lambert-Beer_ law is used, and its absorption profile is independent of `wavelength` and `theta`.
__Note:__ the `fluence`, `delay_pump`, and `pulse_width` must be given as `array` or `list`.
The simulation also requires a `delay` array as temporal grid as well as an initial temperature `init_temp`.
The latter can be either a scalar, which is then the constant initial temperature of the whole sample structure, or an array of temperatures, one for each single layer in the structure.
```
h.excitation = {'fluence': [5]*u.mJ/u.cm**2,
'delay_pump': [0]*u.ps,
'pulse_width': [0]*u.ps,
'multilayer_absorption': True,
'wavelength': 800*u.nm,
'theta': 45*u.deg}
# when calculating the laser absorption profile using the Lambert-Beer law
# the opt_pen_depth must be set manually or calculated from the refractive index
SRO.set_opt_pen_depth_from_ref_index(800*u.nm)
STO_sub.set_opt_pen_depth_from_ref_index(800*u.nm)
# temporal and spatial grid
delays = np.r_[-10:200:0.1]*u.ps
_, _, distances = S.get_distances_of_layers()
```
### Laser Absorption Profile
Here the difference in the spatial laser absorption profile is shown between the multilayer absorption algorithm and the Lambert-Beer law.
Note that Lambert-Beer does not include reflection of the incident light from the surface of the sample structure:
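For orientation, the Lambert-Beer profile is simply a normalized exponential decay with the optical penetration depth. A self-contained sketch (the value of `delta` below is an assumption for illustration, not the value computed by `set_opt_pen_depth_from_ref_index`):

```python
import numpy as np

delta = 52e-9                     # assumed optical penetration depth [m]
z = np.linspace(0, 200e-9, 500)   # depth below the surface [m]
dAdz = np.exp(-z/delta)/delta     # differential absorption profile [1/m]
# fraction of the light entering the sample that is absorbed within z_max:
absorbed = 1 - np.exp(-z[-1]/delta)   # about 0.98 for z_max = 200 nm
```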
```
plt.figure()
dAdz, _, _, _ = h.get_multilayers_absorption_profile()
plt.plot(distances.to('nm'), dAdz, label='multilayer')
dAdz = h.get_Lambert_Beer_absorption_profile()
plt.plot(distances.to('nm'), dAdz, label='Lambert-Beer')
plt.legend()
plt.xlabel('Distance [nm]')
plt.ylabel('Differential Absorption')
plt.title('Laser Absorption Profile')
plt.show()
```
### Temperature Map
```
temp_map, delta_temp = h.get_temp_map(delays, 300*u.K)
plt.figure(figsize=[6, 8])
plt.subplot(2, 1, 1)
plt.plot(distances.to('nm').magnitude, temp_map[101, :])
plt.xlim([0, distances.to('nm').magnitude[-1]])
plt.xlabel('Distance [nm]')
plt.ylabel('Temperature [K]')
plt.title('Temperature Profile')
plt.subplot(2, 1, 2)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map, shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Temperature Map')
plt.tight_layout()
plt.show()
```
## Heat Diffusion
In order to enable heat diffusion the boolean switch `heat_diffusion` must be `True`.
```
# enable heat diffusion
h.heat_diffusion = True
# set the boundary conditions
h.boundary_conditions = {'top_type': 'isolator', 'bottom_type': 'isolator'}
# The resulting temperature profile is calculated in one line:
temp_map, delta_temp = h.get_temp_map(delays, 300*u.K)
plt.figure(figsize=[6, 8])
plt.subplot(2, 1, 1)
plt.plot(distances.to('nm').magnitude, temp_map[101, :], label=np.round(delays[101]))
plt.plot(distances.to('nm').magnitude, temp_map[501, :], label=np.round(delays[501]))
plt.plot(distances.to('nm').magnitude, temp_map[-1, :], label=np.round(delays[-1]))
plt.xlim([0, distances.to('nm').magnitude[-1]])
plt.xlabel('Distance [nm]')
plt.ylabel('Temperature [K]')
plt.legend()
plt.title('Temperature Profile with Heat Diffusion')
plt.subplot(2, 1, 2)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map, shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Temperature Map with Heat Diffusion')
plt.tight_layout()
plt.show()
```
### Heat Diffusion Parameters
For heat diffusion simulations various parameters for the underlying pdepe solver can be altered.
By default, the `backend` is set to `scipy` but can be switched to `matlab`.
Currently, there is no obvious reason to choose _MATLAB_ over _SciPy_.
Depending on the `backend` either the `ode_options` or `ode_options_matlab` can be configured and are directly handed to the actual solver.
Please refer to the documentation of the actual backend and solver and the __API documentation__ for more details.
The speed but also the result of the heat diffusion simulation strongly depends on the spatial grid handed to the solver.
By default, one spatial grid point is used for every `Layer` (`AmorphousLayer` or `UnitCell`) in the `Structure`.
The resulting `temp_map` will also always be interpolated in this spatial grid which is equivalent to the distance vector returned by `S.get_distances_of_layers()`.
As the solver for the heat diffusion usually suffers from large gradients, e.g. of thermal properties or initial temperatures, additional spatial grid points are added by default only for internal calculations.
The number of additional points (should be an odd number, default is 11) is set by:
```
h.intp_at_interface = 11
```
The internally used spatial grid can be returned by:
```
dist_interp, original_indices = S.interp_distance_at_interfaces(h.intp_at_interface)
```
The internal spatial grid can also be given by hand, e.g. to realize logarithmic steps for a rather large `Structure`:
```
h.distances = np.linspace(0, distances.magnitude[-1], 100)*u.m
```
As already shown above, the heat diffusion simulation also supports a top and a bottom boundary condition. These can have the types:
* `isolator`
* `temperature`
* `flux`
For the latter two types a value must also be provided:
```
h.boundary_conditions = {'top_type': 'temperature', 'top_value': 500*u.K,
'bottom_type': 'flux', 'bottom_value': 5e11*u.W/u.m**2}
print(h)
# The resulting temperature profile is calculated in one line:
temp_map, delta_temp = h.get_temp_map(delays, 300*u.K)
plt.figure(figsize=[6, 8])
plt.subplot(2, 1, 1)
plt.plot(distances.to('nm').magnitude, temp_map[101, :], label=np.round(delays[101]))
plt.plot(distances.to('nm').magnitude, temp_map[501, :], label=np.round(delays[501]))
plt.plot(distances.to('nm').magnitude, temp_map[-1, :], label=np.round(delays[-1]))
plt.xlim([0, distances.to('nm').magnitude[-1]])
plt.xlabel('Distance [nm]')
plt.ylabel('Temperature [K]')
plt.legend()
plt.title('Temperature Profile with Heat Diffusion and BC')
plt.subplot(2, 1, 2)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map, shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Temperature Map with Heat Diffusion and BC')
plt.tight_layout()
plt.show()
```
## Multipulse Excitation
As already stated above, multiple pulses of variable fluence, pulse width, and delay are also possible.
The heat diffusion simulation automatically splits the calculation in parts with and without excitation and adjusts the initial temporal step width according to the pulse width.
Hence the solver does not miss any excitation pulses when adjusting its temporal step size.
The temporal laser pulse profile is always assumed to be Gaussian and the pulse width must be given as FWHM:
```
h.excitation = {'fluence': [5, 5, 5, 5]*u.mJ/u.cm**2,
'delay_pump': [0, 10, 20, 20.5]*u.ps,
'pulse_width': [0.1, 0.1, 0.1, 0.5]*u.ps,
'multilayer_absorption': True,
'wavelength': 800*u.nm,
'theta': 45*u.deg}
h.boundary_conditions = {'top_type': 'isolator', 'bottom_type': 'isolator'}
# The resulting temperature profile is calculated in one line:
temp_map, delta_temp = h.get_temp_map(delays, 300*u.K)
plt.figure(figsize=[6, 8])
plt.subplot(2, 1, 1)
plt.plot(distances.to('nm').magnitude, temp_map[101, :], label=np.round(delays[101]))
plt.plot(distances.to('nm').magnitude, temp_map[201, :], label=np.round(delays[201]))
plt.plot(distances.to('nm').magnitude, temp_map[301, :], label=np.round(delays[301]))
plt.plot(distances.to('nm').magnitude, temp_map[-1, :], label=np.round(delays[-1]))
plt.xlim([0, distances.to('nm').magnitude[-1]])
plt.xlabel('Distance [nm]')
plt.ylabel('Temperature [K]')
plt.legend()
plt.title('Temperature Profile Multipulse')
plt.subplot(2, 1, 2)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map, shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Temperature Map Multipulse')
plt.tight_layout()
plt.show()
```
## $N$-Temperature Model
The heat diffusion is also capable of simulating an _N_-temperature model which is often applied to empirically simulate the energy flow between _electrons_, _phonons_, and _spins_.
In order to run the _NTM_ all thermo-elastic properties must be given as a list of _N_ elements corresponding to different sub-systems.
The actual external laser-excitation is always set to happen within the __first__ sub-system, which is usually the electron-system.
In addition the `sub_system_coupling` must be provided in order to allow for energy-flow between the sub-systems.
`sub_system_coupling` is often set to a constant prefactor multiplied with the difference between the electronic and phononic temperatures, as in the example below.
For sufficiently high temperatures, this prefactor also depends on temperature. See [here](https://faculty.virginia.edu/CompMat/electron-phonon-coupling/) for an overview.
In case the thermo-elastic parameters are provided as functions of the temperature $T$, the `sub_system_coupling` requires the temperature `T` to be a vector of all sub-system-temperatures which can be accessed in the function string via the underscore-notation. The `heat_capacity` and `lin_therm_exp` instead require the temperature `T` to be a scalar of only the current sub-system-temperature. For the `therm_cond` both options are available.
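The structure of such a coupling term can be sketched independently of the library with a minimal, zero-dimensional two-temperature model (all parameter values below are illustrative assumptions, not the SRO parameters used in this notebook):

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 5e17                      # electron-phonon coupling [W m^-3 K^-1] (assumed)
C_p = 1.7e6                   # phonon heat capacity [J m^-3 K^-1] (assumed)
def C_e(T):                   # electron heat capacity ~ gamma*T (assumed)
    return 112.0*T

def two_temperature_model(t, T):
    T_e, T_p = T
    # energy flows from the hot electrons into the phonon system
    return [g*(T_p - T_e)/C_e(T_e), g*(T_e - T_p)/C_p]

# electrons heated quasi-instantaneously to 1000 K, phonons start at 300 K
sol = solve_ivp(two_temperature_model, [0, 20e-12], [1000.0, 300.0],
                rtol=1e-6, atol=1e-3)
T_e_end, T_p_end = sol.y[:, -1]   # both sub-systems relax to a common temperature
```

After a few picoseconds both sub-systems share one equilibrium temperature; the library solves exactly this kind of coupled system, but spatially resolved and with the material parameters set below.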
```
# update the relevant thermo-elastic properties of the layers in the sample structure
SRO.therm_cond = [0,
5.72*u.W/(u.m*u.K)]
SRO.lin_therm_exp = [1.03e-5,
1.03e-5]
SRO.heat_capacity = ['0.112*T',
'455.2 - 2.1935e6/T**2']
SRO.sub_system_coupling = ['5e17*(T_1-T_0)',
'5e17*(T_0-T_1)']
STO_sub.therm_cond = [0,
12*u.W/(u.m*u.K)]
STO_sub.lin_therm_exp = [1e-5,
1e-5]
STO_sub.heat_capacity = ['0.0248*T',
'733.73 - 6.531e6/T**2']
STO_sub.sub_system_coupling = ['5e17*(T_1-T_0)',
'5e17*(T_0-T_1)']
```
As no new `Structure` is built here, `num_sub_systems` must be updated by hand; when a structure is (re)built this happens automatically.
```
S.num_sub_systems = 2
```
Set the excitation conditions:
```
h.excitation = {'fluence': [5]*u.mJ/u.cm**2,
'delay_pump': [0]*u.ps,
'pulse_width': [0.25]*u.ps,
'multilayer_absorption': True,
'wavelength': 800*u.nm,
'theta': 45*u.deg}
h.boundary_conditions = {'top_type': 'isolator', 'bottom_type': 'isolator'}
delays = np.r_[-5:15:0.01]*u.ps
# The resulting temperature profile is calculated in one line:
temp_map, delta_temp = h.get_temp_map(delays, 300*u.K)
plt.figure(figsize=[6, 8])
plt.subplot(2, 1, 1)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map[:, :, 0], shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Temperature Map Electrons')
plt.subplot(2, 1, 2)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map[:, :, 1], shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Temperature Map Phonons')
plt.tight_layout()
plt.show()
plt.figure()
plt.plot(delays.to('ps'), np.mean(temp_map[:, S.get_all_positions_per_unique_layer()['SRO'], 0], 1), label='SRO electrons')
plt.plot(delays.to('ps'), np.mean(temp_map[:, S.get_all_positions_per_unique_layer()['SRO'], 1], 1), label='SRO phonons')
plt.ylabel('Temperature [K]')
plt.xlabel('Delay [ps]')
plt.legend()
plt.title('Temperature Electrons vs. Phonons')
plt.show()
```
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
print(5*16*5*2)
# 5 datasets 16 threads 5 loops and 2 core modes
# 800 in runtime 160 for every dataset
# 400 for every socket
# 80 for every dataset divided by socket
import re
import itertools
import matplotlib.pyplot as plt
workfile = r"C:\Users\kevinpawiroredjo\traindata2411.txt"
workfile2 = r"C:\Users\kevinpawiroredjo\traindata_end.txt"
# results_sockets_newmodel:62,256,512,1024,2048
# results_new_datasets:1280,1532,1792,2304,2560
# traindata2411 1028.dat 1152.dat 1280.dat 1408.dat 1532.dat 1658.dat 1792.dat 2048.dat 2176.dat
# traindata_end 2304.dat 2432.dat 2560.dat
print(workfile)
#ALL FUNCTIONS
# function that splits a list into consecutive chunks of n elements each
def dividebyN(divide,n):
    x = 0
    newlist = []
    while x < len(divide):  # '<' avoids appending a trailing empty chunk
        newlist.append(divide[x:x+n])
        x += n
    return newlist
def makeAverageValue(averaging_list,threads):
    """
    Make a list of average values per thread count, averaged over all loops.
    """
    loop = dividebyN(averaging_list,threads)
    # only count non-empty chunks, so the divisor equals the number of loops
    n_loops = len([experiment for experiment in loop if experiment])
    average = []
    for val in range(threads):
        for result in range(len(loop[0][0])):
            value = 0
            for experiment in loop:
                if not experiment:
                    continue
                value += float(experiment[val][result])
            average.append(value/n_loops)
    return average
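# To make the averaging above concrete, a tiny hypothetical example:
# two loops of three per-thread measurements are averaged element-wise,
# so [[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]] becomes [2.0, 3.0, 4.0].
demo_loops = [[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]]
demo_average = [sum(vals)/len(demo_loops) for vals in zip(*demo_loops)]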
def analyse_data(workfile):
"""
analyses the given file by sorting into total energy, power and runtime.
"""
    variable = re.compile(r"(\d) thread (.*)")
    energy = re.compile(r"Energy consumed: (.*) Joules")
    power = re.compile(r"Power consumed: (.*) Watt")
    runtime = re.compile(r"Runtime: (.*) s")
variable_results = [re.findall(variable,line) for line in open(workfile) if re.findall(variable,line) !=[]]
energy_results = [re.findall(energy,line) for line in open(workfile) if re.findall(energy,line) !=[]]
power_results = [re.findall(power,line) for line in open(workfile) if re.findall(power,line) !=[]]
runtime_results = [re.findall(runtime,line) for line in open(workfile) if re.findall(runtime,line) !=[]]
powerconsumption = []
energyconsumption = []
testrange = 0
testpower = []
testenergy = []
# separate values in lists for each test
for i in range(len(power_results)):
testrange+=1
testpower.append(float(power_results[i][0]))
testenergy.append(float(energy_results[i][0]))
if testrange == 6:
testrange = 0
powerconsumption.append(testpower)
energyconsumption.append(testenergy)
testpower = []
testenergy = []
return [variable_results,energyconsumption,powerconsumption,runtime_results]
# Execute to initialise all variables.
#6 per variable in power and in energy
#5 datasets
#2 socketdiff
#16 different threadsizes
#5 loops
threads = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]
threads_no_ht = [1,2,3,4,5,6,7,8]
# 5 datasets 16 threads 5 loops and 2 core modes
# 800 in runtime 160 for every dataset
# 400 for every socket
# 80 for every dataset divided by socket
analysed_data = analyse_data(workfile)
variable_results = analysed_data[0][:1440]
energyconsumption = analysed_data[1][:1440]
powerconsumption = analysed_data[2][:1440]
runtime_results = analysed_data[3][:1440]
#dividing per dataset
#every variable in here is divided into one or two socket variants of the given variable.
"""
The variable samples is a list of all datasets list
Every dataset list comprises of 80 values which are the loops and the amount of threads.
so this dataset list is all the loops and values of a given dataset
"""
one_socket_runtime_samples = dividebyN([runtime_results[i] for i in range(len(runtime_results)) if i%2 == 0],80)[:9]
two_socket_runtime_samples = dividebyN([runtime_results[i] for i in range(len(runtime_results)) if i%2 == 1],80)[:9]
one_socket_energy_samples = dividebyN([energyconsumption[i] for i in range(len(energyconsumption)) if i%2 == 0],80)[:9]
two_socket_energy_samples = dividebyN([energyconsumption[i] for i in range(len(energyconsumption)) if i%2 == 1],80)[:9]
one_socket_power_samples = dividebyN([powerconsumption[i] for i in range(len(powerconsumption)) if i%2 == 0],80)[:9]
two_socket_power_samples = dividebyN([powerconsumption[i] for i in range(len(powerconsumption)) if i%2 == 1],80)[:9]
"""
same as the sample result only this time the 80 values get averaged out on thread level to give 16 values back.
This corresponds with the amount of threads.
"""
one_socket_runtime = [makeAverageValue(dataset,16) for dataset in one_socket_runtime_samples]
two_socket_runtime = [makeAverageValue(dataset,16) for dataset in two_socket_runtime_samples]
one_socket_energy = [makeAverageValue(dataset,16) for dataset in one_socket_energy_samples]
two_socket_energy = [makeAverageValue(dataset,16) for dataset in two_socket_energy_samples]
one_socket_power = [makeAverageValue(dataset,16) for dataset in one_socket_power_samples]
two_socket_power = [makeAverageValue(dataset,16) for dataset in two_socket_power_samples]
analysed_data2 = analyse_data(workfile2)
variable_results2 = analysed_data2[0]
energyconsumption2 = analysed_data2[1]
powerconsumption2 = analysed_data2[2]
runtime_results2 = analysed_data2[3]
one_socket_runtime_samples2 = dividebyN([runtime_results2[i] for i in range(len(runtime_results2)) if i%2 == 0],80)[:3]
two_socket_runtime_samples2 = dividebyN([runtime_results2[i] for i in range(len(runtime_results2)) if i%2 == 1],80)[:3]
one_socket_energy_samples2 = dividebyN([energyconsumption2[i] for i in range(len(energyconsumption2)) if i%2 == 0],80)[:3]
two_socket_energy_samples2 = dividebyN([energyconsumption2[i] for i in range(len(energyconsumption2)) if i%2 == 1],80)[:3]
one_socket_power_samples2 = dividebyN([powerconsumption2[i] for i in range(len(powerconsumption2)) if i%2 == 0],80)[:3]
two_socket_power_samples2 = dividebyN([powerconsumption2[i] for i in range(len(powerconsumption2)) if i%2 == 1],80)[:3]
one_socket_runtime2 = [makeAverageValue(dataset,16) for dataset in one_socket_runtime_samples2]
two_socket_runtime2 = [makeAverageValue(dataset,16) for dataset in two_socket_runtime_samples2]
one_socket_energy2 = [makeAverageValue(dataset,16) for dataset in one_socket_energy_samples2]
two_socket_energy2 = [makeAverageValue(dataset,16) for dataset in two_socket_energy_samples2]
one_socket_power2 = [makeAverageValue(dataset,16) for dataset in one_socket_power_samples2]
two_socket_power2 = [makeAverageValue(dataset,16) for dataset in two_socket_power_samples2]
one_socket_energy
#1028.dat 1152.dat 1280.dat 1408.dat 1532.dat 1658.dat 1792.dat 2048.dat 2176.dat
runtime1_1 = one_socket_runtime[0]
runtime2_1 = one_socket_runtime[1]
runtime3_1 = one_socket_runtime[2]
runtime4_1 = one_socket_runtime[3]
runtime5_1 = one_socket_runtime[4]
runtime6_1 = one_socket_runtime[5]
runtime7_1 = one_socket_runtime[6]
runtime8_1 = one_socket_runtime[7]
runtime9_1 = one_socket_runtime[8]
runtime10_1 = one_socket_runtime2[0]
runtime11_1 = one_socket_runtime2[1]
runtime12_1 = one_socket_runtime2[2]
runtime1_2 = two_socket_runtime[0]
runtime2_2 = two_socket_runtime[1]
runtime3_2 = two_socket_runtime[2]
runtime4_2 = two_socket_runtime[3]
runtime5_2 = two_socket_runtime[4]
runtime6_2 = two_socket_runtime[5]
runtime7_2 = two_socket_runtime[6]
runtime8_2 = two_socket_runtime[7]
runtime9_2 = two_socket_runtime[8]
runtime10_2 = two_socket_runtime2[0]
runtime11_2 = two_socket_runtime2[1]
runtime12_2 = two_socket_runtime2[2]
#energy
energy1_1 = one_socket_energy[0][0::6]
energy2_1 = one_socket_energy[1][0::6]
energy3_1 = one_socket_energy[2][0::6]
energy4_1 = one_socket_energy[3][0::6]
energy5_1 = one_socket_energy[4][0::6]
energy6_1 = one_socket_energy[5][0::6]
energy7_1 = one_socket_energy[6][0::6]
energy8_1 = one_socket_energy[7][0::6]
energy9_1 = one_socket_energy[8][0::6]
energy10_1 = one_socket_energy2[0][0::6]
energy11_1 = one_socket_energy2[1][0::6]
energy12_1 = one_socket_energy2[2][0::6]
energy1_2 = [sum(x) for x in zip(two_socket_energy[0][0::6], two_socket_energy[0][3::6])]
energy2_2 = [sum(x) for x in zip(two_socket_energy[1][0::6], two_socket_energy[1][3::6])]
energy3_2 = [sum(x) for x in zip(two_socket_energy[2][0::6], two_socket_energy[2][3::6])]
energy4_2 = [sum(x) for x in zip(two_socket_energy[3][0::6], two_socket_energy[3][3::6])]
energy5_2 = [sum(x) for x in zip(two_socket_energy[4][0::6], two_socket_energy[4][3::6])]
energy6_2 = [sum(x) for x in zip(two_socket_energy[5][0::6], two_socket_energy[5][3::6])]
energy7_2 = [sum(x) for x in zip(two_socket_energy[6][0::6], two_socket_energy[6][3::6])]
energy8_2 = [sum(x) for x in zip(two_socket_energy[7][0::6], two_socket_energy[7][3::6])]
energy9_2 = [sum(x) for x in zip(two_socket_energy[8][0::6], two_socket_energy[8][3::6])]
energy10_2 = [sum(x) for x in zip(two_socket_energy2[0][0::6], two_socket_energy2[0][3::6])]
energy11_2 = [sum(x) for x in zip(two_socket_energy2[1][0::6], two_socket_energy2[1][3::6])]
energy12_2 = [sum(x) for x in zip(two_socket_energy2[2][0::6], two_socket_energy2[2][3::6])]
#power
power1_1 = one_socket_power[0][0::6]
power2_1 = one_socket_power[1][0::6]
power3_1 = one_socket_power[2][0::6]
power4_1 = one_socket_power[3][0::6]
power5_1 = one_socket_power[4][0::6]
power6_1 = one_socket_power[5][0::6]
power7_1 = one_socket_power[6][0::6]
power8_1 = one_socket_power[7][0::6]
power9_1 = one_socket_power[8][0::6]
power10_1 = one_socket_power2[0][0::6]
power11_1 = one_socket_power2[1][0::6]
power12_1 = one_socket_power2[2][0::6]
power1_2 = two_socket_power[0][0::6]
power2_2 = two_socket_power[1][0::6]
power3_2 = two_socket_power[2][0::6]
power4_2 = two_socket_power[3][0::6]
power5_2 = two_socket_power[4][0::6]
power6_2 = two_socket_power[5][0::6]
power7_2 = two_socket_power[6][0::6]
power8_2 = two_socket_power[7][0::6]
power9_2 = two_socket_power[8][0::6]
power10_2 = two_socket_power2[0][0::6]
power11_2 = two_socket_power2[1][0::6]
power12_2 = two_socket_power2[2][0::6]
energy1_1
one_socket_energy[0][0::3]
#1028.dat 1152.dat 1280.dat 1408.dat 1532.dat 1658.dat 1792.dat 2048.dat 2176.dat 2304.dat 2432.dat 2560.dat
plt.figure(figsize=(14, 5))
ax = plt.subplot(1,2, 1)
ax.plot(threads, runtime1_1 ,label = "runtime1028")
ax.plot(threads, runtime2_1 ,label = "runtime1152")
ax.plot(threads, runtime3_1 ,label = "runtime1280")
ax.plot(threads, runtime4_1 ,label = "runtime1408")
ax.plot(threads, runtime5_1 ,label = "runtime1532")
ax.set_ylim(ymin=0,ymax = 40)
#setup
ax.legend()
plt.xlabel('threads')
plt.ylabel('runtime in sec')
plt.title('one socket:threads vs runtime')
ax = plt.subplot(1,2, 2)
ax.plot(threads, runtime1_2 ,label = "runtime1028")
ax.plot(threads, runtime2_2 ,label = "runtime1152")
ax.plot(threads, runtime3_2 ,label = "runtime1280")
ax.plot(threads, runtime4_2 ,label = "runtime1408")
ax.plot(threads, runtime5_2 ,label = "runtime1532")
ax.set_ylim(ymin=0,ymax = 40)
#setup
ax.legend()
plt.xlabel('threads')
plt.ylabel('runtime in sec')
plt.title('two socket:threads vs runtime')
plt.show()
#1028.dat 1152.dat 1280.dat 1408.dat 1532.dat 1658.dat 1792.dat 2048.dat 2176.dat 2304.dat 2432.dat 2560.dat
plt.figure(figsize=(14, 5))
ax = plt.subplot(1,2, 1)
ax.plot(threads, runtime7_1 ,label = "runtime1792")
ax.plot(threads, runtime8_1 ,label = "runtime2048")
ax.plot(threads, runtime9_1 ,label = "runtime2176")
ax.plot(threads, runtime10_1 ,label = "runtime2304")
ax.plot(threads, runtime11_1 ,label = "runtime2432")
ax.plot(threads, runtime12_1 ,label = "runtime2560")
ax.set_ylim(ymin=0,ymax = 250)
#setup
ax.legend()
plt.xlabel('threads')
plt.ylabel('runtime in sec')
plt.title('one socket:threads vs runtime')
ax = plt.subplot(1,2, 2)
ax.plot(threads, runtime7_2 ,label = "runtime1792")
ax.plot(threads, runtime8_2 ,label = "runtime2048")
ax.plot(threads, runtime9_2 ,label = "runtime2176")
ax.plot(threads, runtime10_2 ,label = "runtime2304")
ax.plot(threads, runtime11_2 ,label = "runtime2432")
ax.plot(threads, runtime12_2 ,label = "runtime2560")
ax.set_ylim(ymin=0,ymax = 250)
#setup
ax.legend()
plt.xlabel('threads')
plt.ylabel('runtime in sec')
plt.title('two socket:threads vs runtime')
plt.show()
#1028.dat 1152.dat 1280.dat 1408.dat 1532.dat 1658.dat 1792.dat 2048.dat 2176.dat 2304.dat 2432.dat 2560.dat
plt.figure(figsize=(14, 5))
ax = plt.subplot(1,2, 1)
ax.plot(threads, energy1_1 ,label = "energy1028")
ax.plot(threads, energy2_1 ,label = "energy1152")
ax.plot(threads, energy3_1 ,label = "energy1280")
ax.plot(threads, energy4_1 ,label = "energy1408")
ax.plot(threads, energy5_1 ,label = "energy1532")
ax.set_ylim(ymin=0,ymax=1500)
#setup
ax.legend()
plt.xlabel('threads')
plt.ylabel('energy in joule')
plt.title('one socket:threads vs energy')
ax = plt.subplot(1,2, 2)
ax.plot(threads, energy1_2 ,label = "energy1028")
ax.plot(threads, energy2_2 ,label = "energy1152")
ax.plot(threads, energy3_2 ,label = "energy1280")
ax.plot(threads, energy4_2 ,label = "energy1408")
ax.plot(threads, energy5_2 ,label = "energy1532")
ax.set_ylim(ymin=0,ymax=1500)
#setup
ax.legend()
plt.xlabel('threads')
plt.ylabel('energy in joule')
plt.title('two socket:threads vs energy')
plt.show()
#1028.dat 1152.dat 1280.dat 1408.dat 1532.dat 1658.dat 1792.dat 2048.dat 2176.dat 2304.dat 2432.dat 2560.dat
plt.figure(figsize=(14, 5))
ax = plt.subplot(1,2, 1)
ax.plot(threads, energy7_1 ,label = "energy1792")
ax.plot(threads, energy8_1 ,label = "energy2048")
ax.plot(threads, energy9_1 ,label = "energy2176")
ax.plot(threads, energy10_1 ,label = "energy2304")
ax.plot(threads, energy11_1 ,label = "energy2432")
ax.plot(threads, energy12_1 ,label = "energy2560")
ax.set_ylim(ymin=0,ymax=9000)
#setup
ax.legend()
plt.xlabel('threads')
plt.ylabel('energy in joule')
plt.title('one socket:threads vs energy')
ax = plt.subplot(1,2, 2)
ax.plot(threads, energy7_2 ,label = "energy1792")
ax.plot(threads, energy8_2 ,label = "energy2048")
ax.plot(threads, energy9_2 ,label = "energy2176")
ax.plot(threads, energy10_2 ,label = "energy2304")
ax.plot(threads, energy11_2 ,label = "energy2432")
ax.plot(threads, energy12_2 ,label = "energy2560")
ax.set_ylim(ymin=0,ymax=9000)
#setup
ax.legend()
plt.xlabel('threads')
plt.ylabel('energy in joule')
plt.title('two socket:threads vs energy')
plt.show()
#1028.dat 1152.dat 1280.dat 1408.dat 1532.dat 1658.dat 1792.dat 2048.dat 2176.dat 2304.dat 2432.dat 2560.dat
plt.figure(figsize=(14, 5))
ax = plt.subplot(1,2, 1)
ax.plot(threads, power1_1 ,label = "power1028")
ax.plot(threads, power2_1 ,label = "power1152")
ax.plot(threads, power3_1 ,label = "power1280")
ax.plot(threads, power4_1 ,label = "power1408")
ax.plot(threads, power5_1 ,label = "power1532")
ax.plot(threads, power6_1 ,label = "power1658")
ax.set_ylim(ymin=0)
plt.xlabel('threads')
plt.ylabel('power in watts')
plt.title("one socket:power vs threads")
#setup
ax.legend()
ax = plt.subplot(1,2, 2)
ax.plot(threads, power1_2 ,label = "power1028")
ax.plot(threads, power2_2 ,label = "power1152")
ax.plot(threads, power3_2 ,label = "power1280")
ax.plot(threads, power4_2 ,label = "power1408")
ax.plot(threads, power5_2 ,label = "power1532")
ax.plot(threads, power6_2 ,label = "power1658")
ax.set_ylim(ymin=0)
#setup
plt.legend()
plt.xlabel('threads')
plt.ylabel('power in watts')
plt.title("two socket:power vs threads")
```
power1028 = two_socket_power[0][3::6]
power2048 = two_socket_power[1][3::6]
power256 = two_socket_power[2][3::6]
power512 = two_socket_power[3][3::6]
power64 = two_socket_power[4][3::6]
power1028_1 = one_socket_power[0][3::6]
power2048_1 = one_socket_power[1][3::6]
power256_1 = one_socket_power[2][3::6]
power512_1 = one_socket_power[3][3::6]
power64_1 = one_socket_power[4][3::6]
plt.figure(figsize=(14, 5))
ax = plt.subplot(1,2, 1)
ax.plot(threads, power2048_1 ,label = "power2048")
ax.plot(threads, power1028_1 ,label = "power1028")
ax.plot(threads, power256_1 ,label = "power256")
ax.plot(threads, power512_1 ,label = "power512")
ax.plot(threads, power64_1 ,label = "power64")
ax.set_ylim(ymin=0)
plt.title("one socket:power S1:pkg vs threads")
#setup
ax.legend()
ax = plt.subplot(1,2, 2)
ax.plot(threads, power2048 ,label = "power2048")
ax.plot(threads, power1028 ,label = "power1028")
ax.plot(threads, power256 ,label = "power256")
ax.plot(threads, power512 ,label = "power512")
ax.plot(threads, power64 ,label = "power64")
ax.set_ylim(ymin=0)
#setup
ax.legend()
plt.xlabel('threads')
plt.ylabel('power in watts')
plt.title('two socket:power S1:pkg vs threads')
plt.show()
```
# benchmark is simple interpolation
# 2045
# 1532
# 1028
# ideas for machine learning:
# - linear regression model, limiting the polynomial degree to reduce overfitting
# - neural network using PCA to make it less complex
# LU Decomposition (LUD)
# - dwarf: Dense Linear Algebra
# - domain: Linear Algebra
# what is the bottleneck at 1 core / 8 threads? thermal design power (TDP)? inter-thread sharing overhead?
# measure internal thread sharing?
def elongate_list(threads,experiment):
thread_list = []
for i in range(experiment):
for j in range(threads):
thread_list.append(j+1)
return thread_list
def add_dataset_class(datasets,threads,experiments):
appendable = []
for dataset in datasets:
x = 0
for j in range(threads*experiments):
x+=1
appendable.append([dataset,x])
if x == threads:
x = 0
return appendable
#X is thread axis
X = np.array(elongate_list(16,5))
print(X[:, np.newaxis].shape)
#y is result axis
y = np.array([i[3] for i in two_socket_energy_samples[3]])
degrees = range(1,5)
plt.figure(figsize=(14, 5))
for i in range(len(degrees)):
ax = plt.subplot(1, len(degrees), i + 1)
polynomial_features = PolynomialFeatures(degree=degrees[i],
include_bias=False)
linear_regression = LinearRegression()
pipeline = Pipeline([("polynomial_features", polynomial_features),
("linear_regression", linear_regression)])
pipeline.fit(X[:, np.newaxis], y)
# Do we want to capture the outlier?
scores = cross_val_score(pipeline, X[:, np.newaxis], y,
scoring="neg_mean_squared_error")
X_test = np.linspace(1, 16,16)
plt.plot(X_test, pipeline.predict(X_test[:, np.newaxis]), label="fitted")
plt.scatter(X, y, edgecolor='b', label="results")
plt.xlabel("threads")
plt.ylabel("energy in joules")
plt.title("energy model poly:"+ str(degrees[i]))
plt.legend(loc="best")
plt.show()
#X is thread axis
X = np.array(elongate_list(16,5))
#y is result axis
y = np.array([i[3] for i in two_socket_energy_samples[0]])
print(y[0])
degrees = range(5,9)
plt.figure(figsize=(14, 5))
for i in range(len(degrees)):
ax = plt.subplot(1, len(degrees), i + 1)
polynomial_features = PolynomialFeatures(degree=degrees[i],
include_bias=False)
linear_regression = LinearRegression()
pipeline = Pipeline([("polynomial_features", polynomial_features),
("linear_regression", linear_regression)])
pipeline.fit(X[:, np.newaxis], y)
X_test = np.linspace(1, 16,16)
plt.plot(X_test, pipeline.predict(X_test[:, np.newaxis]), label="fitted")
plt.scatter(X, y, edgecolor='b', label="results")
plt.xlabel("Threads")
plt.ylabel("energy in joules")
plt.title("energy model poly:"+ str(degrees[i]))
plt.legend(loc="best")
plt.show()
#energy consumption
#two socket
training_half_sizes2 = [two_socket_energy_samples[datasetnr] for datasetnr in range(len(two_socket_energy_samples)) if datasetnr%2 == 0]
validation_half_sizes2 = [two_socket_energy_samples[datasetnr] for datasetnr in range(len(two_socket_energy_samples)) if datasetnr%2 == 1]
print(len(training_half_sizes2))
#one socket
training_half_sizes1 = [one_socket_energy_samples[datasetnr] for datasetnr in range(len(one_socket_energy_samples)) if datasetnr%2 == 0]
validation_half_sizes1 = [one_socket_energy_samples[datasetnr] for datasetnr in range(len(one_socket_energy_samples)) if datasetnr%2 == 1]
print(len(training_half_sizes1))
#one_socket no ht
def separate_ht(data):
no_ht = []
ht = []
for dataset in data:
dataset_no_ht = []
dataset_ht = []
nr = 0
for val in dataset:
nr+=1
if nr <= 8:
dataset_no_ht.append(val)
else:
dataset_ht.append(val)
if nr == 16:
nr = 0
ht.append(dataset_ht)
no_ht.append(dataset_no_ht)
return no_ht,ht
training_no_ht,training_ht = separate_ht(training_half_sizes1)
valid_no_ht,valid_ht = separate_ht(validation_half_sizes1)
print(len(valid_no_ht))
print(len(valid_ht))
print(len(training_half_sizes1[0]))
"""
initialising x and y
"""
energy_two_socket = []
for i in range(len(training_half_sizes2)):
for j in training_half_sizes2[i]:
energy_two_socket.append(j[0]+j[3])
energy_one_socket = []
for i in range(len(training_half_sizes1)):
for j in training_half_sizes1[i]:
energy_one_socket.append(j[0])
print(len(energy_one_socket))
energy_no_ht = []
for i in range(len(training_no_ht)):
for j in training_no_ht[i]:
energy_no_ht.append(j[0])
print(len(energy_no_ht))
#validation
validation_two_socket = []
for i in range(len(validation_half_sizes2)):
for j in validation_half_sizes2[i]:
validation_two_socket.append(j[0]+j[3])
validation_one_socket = []
for i in range(len(validation_half_sizes1)):
for j in validation_half_sizes1[i]:
validation_one_socket.append(j[0])
validation_no_ht = []
for i in range(len(valid_no_ht)):
for j in valid_no_ht[i]:
validation_no_ht.append(j[0])
#X is thread axis
X_test = []
X_valid = []
#X = np.array(add_dataset_class([1028, 1152, 1280,1408, 1532, 1658, 1792, 2048, 2176 ],16,5))
#X1 = np.array(add_dataset_class([2304, 2432, 2560 ],16,5))
X_test = np.array(add_dataset_class([1028, 1280,1532,1792,2176],16,5))
X_valid = np.array(add_dataset_class([1152, 1408, 1658, 2048],16,5))
X_test_no_ht = np.array(add_dataset_class([1028, 1280,1532,1792,2176],8,5))
X_valid_no_ht = np.array(add_dataset_class([1152, 1408, 1658, 2048],8,5))
#y is result axis
ytwo_socket = np.array(energy_two_socket)
yone_socket = np.array(energy_one_socket)
yno_ht = np.array(energy_no_ht)
polynomial_features = PolynomialFeatures(degree=3,
include_bias=False)
linear_regression = LinearRegression(normalize=True)
regr = Pipeline([("polynomial_features", polynomial_features),
("linear_regression", linear_regression)])
polynomial_features1 = PolynomialFeatures(degree=3,
include_bias=False)
linear_regression1 = LinearRegression(normalize=True)
regr_1 = Pipeline([("polynomial_features", polynomial_features1),
("linear_regression", linear_regression1)])
polynomial_features2 = PolynomialFeatures(degree=3,
include_bias=False)
linear_regression2 = LinearRegression(normalize=True)
regr_no_HT = Pipeline([("polynomial_features", polynomial_features2),
("linear_regression", linear_regression2)])
regr.fit(X_test, ytwo_socket)
regr_1.fit(X_test, yone_socket)
regr_no_HT.fit(X_test_no_ht, yno_ht)
print(regr.get_params(linear_regression))
print(regr)
#1028.dat 1152.dat 1280.dat 1408.dat 1532.dat 1658.dat 1792.dat 2048.dat 2176.dat
# test datasets:1028, 1280,1532,1792,2176
# validation 1152, 1408, 1658, 2048
# two socket
predicted_1 = []
predicted_2 = []
predicted_3 = []
predicted_4 = []
#control
predicted_check1 = []
predicted_check2 = []
#one socket
predicted_11 = []
predicted_21 = []
predicted_31 = []
predicted_41 = []
#no ht
predicted_12 = []
predicted_22 = []
predicted_32 = []
predicted_42 = []
for i in threads:
predicted_1.append(regr.predict([[1152, i]]))
predicted_2.append(regr.predict([[1408, i]]))
predicted_3.append(regr.predict([[1658, i]]))
predicted_4.append(regr.predict([[2048, i]]))
predicted_check1.append(regr.predict([[1280, i]]))
predicted_11.append(regr_1.predict([[1152, i]]))
predicted_21.append(regr_1.predict([[1408, i]]))
predicted_31.append(regr_1.predict([[1658, i]]))
predicted_41.append(regr_1.predict([[2048, i]]))
if i<=8:
predicted_12.append(regr_no_HT.predict([[1152, i]]))
predicted_22.append(regr_no_HT.predict([[1408, i]]))
predicted_32.append(regr_no_HT.predict([[1658, i]]))
predicted_42.append(regr_no_HT.predict([[2048, i]]))
print(predicted_1[0])
print(energy_two_socket[0])
def nmser(x,y):
z=0
if len(x)==len(y):
for k in range(len(x)):
z+=(((x[k]-y[k])**2)/x[k])
z=z/(len(x))
return z
#1152
error_list =[]
a = np.array(predicted_1)
b = np.array(validation_two_socket[0:16])
meansquared_error = ((a-b)**2).mean(axis=1)
error_list.append(sum(meansquared_error))
a = np.array(predicted_2)
b = np.array(validation_two_socket[80:96])
meansquared_error = ((a-b)**2).mean(axis=1)
error_list.append(sum(meansquared_error))
a = np.array(predicted_3)
b = np.array(validation_two_socket[160:176])
meansquared_error = ((a-b)**2).mean(axis=1)
error_list.append(sum(meansquared_error))
a = np.array(predicted_4)
b = np.array(validation_two_socket[240:256])
meansquared_error = ((a-b)**2).mean(axis=1)
error_list.append(sum(meansquared_error))
print(error_list)
predicted_1
predicted_11
plt.plot(threads, predicted_1 ,label = "prediction_1152")
plt.plot(threads, validation_two_socket[0:16] ,label = "actualenergy_1152")
plt.plot(threads, predicted_2 ,label = "prediction_1408")
plt.plot(threads, validation_two_socket[80:96] ,label = "actualenergy_1408")
plt.legend()
plt.xlabel('threads')
plt.ylabel('energy in joules')
plt.title('two socket:energy vs threads')
plt.show()
print(nmser(predicted_1,validation_two_socket[0:16]))
print(nmser(predicted_2,validation_two_socket[80:96]))
plt.plot(threads, predicted_3 ,label = "prediction_1658")
plt.plot(threads, validation_two_socket[160:176] ,label = "actualenergy_1658")
plt.plot(threads, predicted_4 ,label = "prediction_2048")
plt.plot(threads, validation_two_socket[240:256] ,label = "actualenergy_2048")
plt.legend()
plt.xlabel('threads')
plt.ylabel('energy in joules')
plt.title('two socket:energy vs threads')
plt.show()
print(nmser(predicted_3,validation_two_socket[160:176]))
print(nmser(predicted_4,validation_two_socket[240:256]))
```
Now trying:

- 0-8: single socket, no HT
- 0-16: single socket, HT
- 0-16: dual socket, no HT
- 0-32: dual socket, HT
```
#standard training on hyperthreading experiment
plt.plot(threads, predicted_11 ,label = "prediction_1152")
plt.plot(threads, validation_one_socket[0:16] ,label = "actualenergy_1152")
plt.plot(threads, predicted_21 ,label = "prediction_1408")
plt.plot(threads, validation_one_socket[80:96] ,label = "actualenergy_1408")
plt.legend()
plt.xlabel('threads')
plt.ylabel('energy in joules')
plt.title('one socket:energy vs threads')
plt.show()
print(nmser(predicted_11,validation_one_socket[0:16]))
print(nmser(predicted_21,validation_one_socket[80:96]))
```
How did I get these results?
I used data that was larger than the caches (L1, L2, L3).
```
plt.plot(threads, predicted_31 ,label = "prediction_1658")
plt.plot(threads, validation_one_socket[160:176] ,label = "actualenergy_1658")
plt.plot(threads, predicted_41 ,label = "prediction_2048")
plt.plot(threads, validation_one_socket[240:256] ,label = "actualenergy_2048")
plt.legend()
plt.xlabel('threads')
plt.ylabel('energy in joules')
plt.title('one socket:energy vs threads')
plt.show()
print(nmser(predicted_31,validation_one_socket[160:176]))
print(nmser(predicted_41,validation_one_socket[240:256]))
#no ht
predicted_12
#standard training on hyperthreading experiment
plt.plot(threads_no_ht, predicted_12 ,label = "prediction_1152")
plt.plot(threads_no_ht, validation_one_socket[0:8] ,label = "actualenergy_1152")
plt.plot(threads_no_ht, predicted_22 ,label = "prediction_1408")
plt.plot(threads_no_ht, validation_one_socket[80:88] ,label = "actualenergy_1408")
plt.legend()
plt.xlabel('threads')
plt.ylabel('energy in joules')
plt.title('one socket no HT:energy vs threads')
plt.show()
print(nmser(predicted_12,validation_one_socket[0:8]))
print(nmser(predicted_22,validation_one_socket[80:88]))
plt.plot(threads_no_ht, predicted_32 ,label = "prediction_1658")
plt.plot(threads_no_ht, validation_one_socket[160:168] ,label = "actualenergy_1658")
plt.plot(threads_no_ht, predicted_42 ,label = "prediction_2048")
plt.plot(threads_no_ht, validation_one_socket[240:248] ,label = "actualenergy_2048")
plt.legend()
plt.xlabel('threads')
plt.ylabel('energy in joules')
plt.title('one socket no HT:energy vs threads')
plt.show()
print(nmser(predicted_32,validation_one_socket[160:168]))
print(nmser(predicted_42,validation_one_socket[240:248]))
print(yone_socket)
X_test
```
## 1. Import Python libraries
<p><img src="https://s3.amazonaws.com/assets.datacamp.com/production/project_374/img/honey.jpg" alt="honey bee">
<em>A honey bee.</em></p>
<p>The question at hand is: can a machine identify a bee as a honey bee or a bumble bee? These bees have different <a href="http://bumblebeeconservation.org/about-bees/faqs/honeybees-vs-bumblebees/">behaviors and appearances</a>, but given the variety of backgrounds, positions, and image resolutions it can be a challenge for machines to tell them apart.</p>
<p>Being able to identify bee species from images is a task that ultimately would allow researchers to more quickly and effectively collect field data. Pollinating bees have critical roles in both ecology and agriculture, and diseases like <a href="http://news.harvard.edu/gazette/story/2015/07/pesticide-found-in-70-percent-of-massachusetts-honey-samples/">colony collapse disorder</a> threaten these species. Identifying different species of bees in the wild means that we can better understand the prevalence and growth of these important insects.</p>
<p><img src="https://s3.amazonaws.com/assets.datacamp.com/production/project_374/img/bumble.jpg" alt="bumble bee">
<em>A bumble bee.</em></p>
<p>This notebook walks through loading and processing images. After loading and processing these images, they will be ready for building models that can automatically detect honeybees and bumblebees.</p>
```
# Used to change filepaths
from pathlib import Path
# We set up matplotlib, pandas, and the display function
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import display
import pandas as pd
# import numpy to use in this cell
import numpy as np
# import Image from PIL so we can use it later
from PIL import Image
# generate test_data
test_data = np.random.beta(1,1, size=(100,100,3))
# display the test_data
plt.imshow(test_data)
```
## 2. Opening images with PIL
<p>Now that we have all of our imports ready, it is time to work with some real images.</p>
<p>Pillow is a very flexible image loading and manipulation library. It works with many different image formats, for example, <code>.png</code>, <code>.jpg</code>, <code>.gif</code> and more. For most image data, one can work with images using the Pillow library (which is imported as <code>PIL</code>).</p>
<p>Now we want to load an image, display it in the notebook, and print out the dimensions of the image. By dimensions, we mean the width of the image and the height of the image. These are measured in pixels. The documentation for <a href="https://pillow.readthedocs.io/en/5.1.x/reference/Image.html">Image</a> in Pillow gives a comprehensive view of what this object can do.</p>
```
# open the image
img = Image.open('datasets/bee_1.jpg')
# Get the image size
img_size = img.size
print("The image size is: {}".format(img_size))
# Just having the image as the last line in the cell will display it in the notebook
img
```
## 3. Image manipulation with PIL
<p>Pillow has a number of common image manipulation tasks built into the library. For example, one may want to resize an image so that the file size is smaller. Or, perhaps, convert an image to black-and-white instead of color. Operations that Pillow provides include:</p>
<ul>
<li>resizing</li>
<li>cropping</li>
<li>rotating</li>
<li>flipping</li>
<li>converting to greyscale (or other <a href="https://pillow.readthedocs.io/en/5.1.x/handbook/concepts.html#concept-modes">color modes</a>)</li>
</ul>
<p>Often, these kinds of manipulations are part of the pipeline for turning a small number of images into more images to create training data for machine learning algorithms. This technique is called <a href="http://cs231n.stanford.edu/reports/2017/pdfs/300.pdf">data augmentation</a>, and it is a common technique for image classification.</p>
<p>We'll try a couple of these operations and look at the results.</p>
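As a sketch of what an augmentation step might look like in practice, a single image can be expanded into several training variants. The `augment` helper below is hypothetical (not part of Pillow or this dataset), and a blank in-memory image stands in for one of the bee photos:

```
from PIL import Image

# Hypothetical augmentation helper: returns flipped and rotated variants
# of one input image, multiplying the amount of training data.
def augment(img):
    return [
        img.transpose(Image.FLIP_LEFT_RIGHT),
        img.transpose(Image.FLIP_TOP_BOTTOM),
        img.rotate(90),
        img.rotate(180),
    ]

# A blank 100x100 RGB image stands in for a bee photo here.
sample = Image.new("RGB", (100, 100))
variants = augment(sample)
print(len(variants))  # 4
```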
```
# Crop the image to 25, 25, 75, 75
img_cropped = img.crop([25, 25, 75, 75])
display(img_cropped)
# rotate the image by 45 degrees, expanding the output to hold the rotated corners
img_rotated = img.rotate(45, expand=True)
display(img_rotated)
# flip the image left to right
img_flipped = img.transpose(Image.FLIP_LEFT_RIGHT)
display(img_flipped)
```
## 4. Images as arrays of data
<p>What is an image? So far, PIL has handled loading images and displaying them. However, if we're going to use images as data, we need to understand what that data looks like.</p>
<p>Most image formats have three color <a href="https://en.wikipedia.org/wiki/RGB_color_model">"channels": red, green, and blue</a> (some images also have a fourth channel called "alpha" that controls transparency). For each pixel in an image, there is a value for every channel.</p>
<p><img src="https://s3.amazonaws.com/assets.datacamp.com/production/project_374/img/AdditiveColor.png" alt="RGB Colors"></p>
<p>The way this is represented as data is as a three-dimensional matrix. The width of the matrix is the width of the image, the height of the matrix is the height of the image, and the depth of the matrix is the number of channels. So, as we saw, the height and width of our image are both 100 pixels. This means that the underlying data is a matrix with the dimensions <code>100x100x3</code>.</p>
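To make that shape concrete before loading real pixels, here is a minimal synthetic stand-in for such an array, built with plain NumPy (no image file needed):

```
import numpy as np

# A 100x100 RGB "image": height x width x channels.
img_arr = np.zeros((100, 100, 3), dtype=np.uint8)
img_arr[:, :, 0] = 255           # set the red channel of every pixel

print(img_arr.shape)             # (100, 100, 3)
print(img_arr[0, 0])             # one pixel = three channel values [255 0 0]
```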
```
# Turn our image object into a NumPy array
img_data = np.array(img)
# get the shape of the resulting array
img_data_shape = img_data.shape
print("Our NumPy array has the shape: {}".format(img_data_shape))
# plot the data with `imshow`
plt.imshow(img_data)
plt.show()
# plot the red channel
plt.imshow(img_data[:, :, 0], cmap=plt.cm.Reds_r)
plt.show()
# plot the green channel
plt.imshow(img_data[:, :, 1], cmap=plt.cm.Greens_r)
plt.show()
# plot the blue channel
plt.imshow(img_data[:, :, 2], cmap=plt.cm.Blues_r)
plt.show()
```
## 5. Explore the color channels
<p>Color channels can help provide more information about an image. A picture of the ocean will be more blue, whereas a picture of a field will be more green. This kind of information can be useful when building models or examining the differences between images.</p>
<p>We'll look at the <a href="https://en.wikipedia.org/wiki/Kernel_density_estimation">kernel density estimate</a> for each of the color channels on the same plot so that we can understand how they differ.</p>
<p>When we make this plot, we'll see that a shape that appears further to the right means more of that color, whereas further to the left means less of that color.</p>
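A cruder one-number summary points the same way: a channel's mean rises as its density mass shifts right. The toy "ocean" array below is an illustrative assumption, not real image data:

```
import numpy as np

# A blue-dominated toy image: only the blue channel carries values,
# so its mean should dominate the other two channels.
ocean = np.zeros((10, 10, 3))
ocean[:, :, 2] = 200

channel_means = [ocean[:, :, ix].mean() for ix in range(3)]
print(channel_means)  # [0.0, 0.0, 200.0]
```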
```
def plot_kde(channel, color):
""" Plots a kernel density estimate for the given data.
`channel` must be a 2d array
`color` must be a color string, e.g. 'r', 'g', or 'b'
"""
data = channel.flatten()
return pd.Series(data).plot.density(c=color)
# create the list of channels
channels = ['r', 'g', 'b']
def plot_rgb(image_data):
# use enumerate to loop over colors and indexes
for ix, color in enumerate(channels):
plot_kde(image_data[:, :, ix], color)
plt.show()
plot_rgb(img_data)
```
## 6. Honey bees and bumble bees (i)
<p>Now we'll look at two different images and some of the differences between them. The first image is of a honey bee, and the second image is of a bumble bee.</p>
<p>First, let's look at the honey bee.</p>
```
# load bee_12.jpg as honey
honey = Image.open('datasets/bee_12.jpg')
# display the honey bee image
display(honey)
# NumPy array of the honey bee image data
honey_data = np.array(honey)
# plot the rgb densities for the honey bee image
plot_rgb(honey_data)
```
## 7. Honey bees and bumble bees (ii)
<p>Now let's look at the bumble bee.</p>
<p>When one compares these images, it is clear how different the colors are. The honey bee image above, with a blue flower, has a strong peak on the right-hand side of the blue channel. The bumble bee image, which has a lot of yellow for the bee and the background, has almost perfect overlap between the red and green channels (which together make yellow).</p>
```
# load bee_3.jpg as bumble
bumble = Image.open('datasets/bee_3.jpg')
# display the bumble bee image
display(bumble)
# NumPy array of the bumble bee image data
bumble_data = np.array(bumble)
# plot the rgb densities for the bumble bee image
plot_rgb(bumble_data)
```
## 8. Simplify, simplify, simplify
<p>While sometimes color information is useful, other times it can be distracting. In this example, where we are looking at bees, the bees themselves are very similar in color. On the other hand, the bees are often on top of different colored flowers. We know that the colors of the flowers may distract from separating honey bees from bumble bees, so let's convert these images to <a href="https://en.wikipedia.org/wiki/Grayscale">black-and-white, or "grayscale."</a></p>
<p>Grayscale is just one of the <a href="https://pillow.readthedocs.io/en/5.0.0/handbook/concepts.html#modes">modes that Pillow supports</a>. Switching between modes is done with the <code>.convert()</code> method, which is passed a string for the new mode.</p>
<p>Because we change the number of color "channels," the shape of our array changes with this change. It also will be interesting to look at how the KDE of the grayscale version compares to the RGB version above.</p>
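Under the hood, Pillow's <code>"L"</code> mode applies the ITU-R 601-2 luma transform, <code>L = R * 299/1000 + G * 587/1000 + B * 114/1000</code>. A quick NumPy sketch of the same weighting (an approximation of the conversion, not Pillow's exact rounding):

```
import numpy as np

# Approximate Pillow's RGB -> "L" conversion with the ITU-R 601-2 weights.
def to_luma(rgb):
    return rgb @ np.array([0.299, 0.587, 0.114])

white = np.array([255.0, 255.0, 255.0])
green = np.array([0.0, 255.0, 0.0])
print(to_luma(white))   # ~255: white stays white
print(to_luma(green))   # ~150: green contributes most to perceived brightness
```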
```
# convert honey to grayscale
honey_bw = honey.convert("L")
display(honey_bw)
# convert the image to a NumPy array
honey_bw_arr = np.array(honey_bw)
# get the shape of the resulting array
honey_bw_arr_shape = honey_bw_arr.shape
print("Our NumPy array has the shape: {}".format(honey_bw_arr_shape))
# plot the array using matplotlib
plt.imshow(honey_bw_arr, cmap=plt.cm.gray)
plt.show()
# plot the kde of the new black and white array
plot_kde(honey_bw_arr, 'k')
```
## 9. Save your work!
<p>We've been talking this whole time about making changes to images and the manipulations that might be useful as part of a machine learning pipeline. To use these images in the future, we'll have to save our work after we've made changes.</p>
<p>Now, we'll make a couple changes to the <code>Image</code> object from Pillow and save that. We'll flip the image left-to-right, just as we did with the color version. Then, we'll change the NumPy version of the data by clipping it. Using the <code>np.maximum</code> function, we can take any number in the array smaller than <code>100</code> and replace it with <code>100</code>. Because this reduces the range of values, it will increase the <a href="https://en.wikipedia.org/wiki/Contrast_(vision)">contrast of the image</a>. We'll then convert that back to an <code>Image</code> and save the result.</p>
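The <code>np.maximum</code> trick acts as a lower clip: every value below the threshold is raised to it, while everything at or above passes through unchanged. A tiny standalone illustration:

```
import numpy as np

# Clip from below at 100: values under 100 become 100, the rest are unchanged.
arr = np.array([50, 99, 100, 150, 255])
clipped = np.maximum(arr, 100)
print(clipped)  # [100 100 100 150 255]
```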
```
# flip the image left-right with transpose
honey_bw_flip = honey_bw.transpose(Image.FLIP_LEFT_RIGHT)
# show the flipped image
display(honey_bw_flip)
# save the flipped image
honey_bw_flip.save("saved_images/bw_flipped.jpg")
# create higher contrast by reducing range
honey_hc_arr = np.maximum(honey_bw_arr,100)
# show the higher contrast version
plt.imshow(honey_hc_arr, cmap=plt.cm.gray)
# convert the NumPy array of high contrast to an Image
honey_bw_hc = Image.fromarray(honey_hc_arr)
# save the high contrast version
honey_bw_hc.save("saved_images/bw_hc.jpg")
```
## 10. Make a pipeline
<p>Now it's time to create an image processing pipeline. We have all the tools in our toolbox to load images, transform them, and save the results.</p>
<p>In this pipeline we will do the following:</p>
<ul>
<li>Load the image with <code>Image.open</code> and create paths to save our images to</li>
<li>Convert the image to grayscale</li>
<li>Save the grayscale image</li>
<li>Rotate, crop, and zoom in on the image and save the new image</li>
</ul>
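One detail worth noting before the code: <code>Path.stem</code> gives the filename without its directory or extension, which is what lets the pipeline build derived output names. A small standalone illustration:

```
from pathlib import Path

# Path.stem strips the directory and extension -- handy for building
# derived output filenames like bw_<stem>.jpg.
p = Path('datasets/bee_1.jpg')
print(p.stem)                                    # bee_1
print("saved_images/bw_{}.jpg".format(p.stem))   # saved_images/bw_bee_1.jpg
```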
```
image_paths = ['datasets/bee_1.jpg', 'datasets/bee_12.jpg', 'datasets/bee_2.jpg', 'datasets/bee_3.jpg']
def process_image(path):
img = Image.open(path)
# create paths to save files to
bw_path = "saved_images/bw_{}.jpg".format(path.stem)
rcz_path = "saved_images/rcz_{}.jpg".format(path.stem)
print("Creating grayscale version of {} and saving to {}.".format(path, bw_path))
bw = img.convert('L')
bw.save(bw_path)
print("Creating rotated, cropped, and zoomed version of {} and saving to {}.".format(path, rcz_path))
rcz = bw.rotate(45).crop([25,25,75,75]).resize((100,100))
rcz.save(rcz_path)
# for loop over image paths
for img_path in image_paths:
process_image(Path(img_path))
```
# Detect moving objects in the screen
This document is used to analyze whether there are moving or changing objects in the frame, based on openCV.
# Import camera function libraries
After running the following code block, wait a moment for the camera to initialize. Once initialization succeeds, a 300x300 real-time video feed will appear below the code block.
You can right-click on this feed and click `Create New View for Output` to place the camera feed in a separate window, so that even when you browse to other parts of the document, you can still watch the camera feed at any time. This method applies to other widgets as well.
Initialization may fail if this code block is run multiple times. The cleanup logic is already included in `jetbot.Camera`; you only need to restart the kernel. Be careful not to use the circular arrow above the tab, as the camera will likely still fail to initialize.
The recommended way to restart the kernel:
In the `File Browser` on the left, right-click the `*.ipynb` file with a green dot in front of it (the green dot indicates that the kernel is running), select `Shut Down Kernel`, and the green dot will disappear. Then close this tab and double-click the `*.ipynb` file you just closed to reopen it.
Run the following code again, and the camera should initialize normally.
```
import traitlets
import ipywidgets
from IPython.display import display
from jetbot import Camera, bgr8_to_jpeg
camera = Camera.instance(width=300, height=300)
image_widget = ipywidgets.Image() # this width and height doesn't necessarily have to match the camera
camera_link = traitlets.dlink((camera, 'value'), (image_widget, 'value'), transform=bgr8_to_jpeg)
display(image_widget)
```
# Motion detection function
The motion detection function is based on OpenCV. OpenCV is pre-installed in JetPack, so you can run the following code block to import the required libraries directly. If you are not using JetPack, you may need to install OpenCV and imutils manually in the terminal, with `sudo pip3 install opencv-python` and `sudo pip3 install imutils` respectively. If no error is raised about these two libraries being missing, you can skip the installation and run the next code block directly.
```
import cv2
import imutils
import datetime

# avg holds a reference frame (the background); each new frame is compared
# against it to determine where the picture has changed.
avg = None
lastMotionCaptured = datetime.datetime.now()

# Motion detection function
def motionDetect(imgInput):
    global avg, lastMotionCaptured
    # Get the current timestamp.
    timestamp = datetime.datetime.now()
    # Convert the frame to grayscale, which speeds up the analysis.
    gray = cv2.cvtColor(imgInput, cv2.COLOR_BGR2GRAY)
    # Gaussian-blur the frame to avoid false positives caused by noise.
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
    # If the reference frame (background) has not been captured yet, create it.
    if avg is None:
        avg = gray.copy().astype("float")
        return imgInput
    # Update the background.
    cv2.accumulateWeighted(gray, avg, 0.5)
    # Compute the difference between the new frame and the background.
    frameDelta = cv2.absdiff(gray, cv2.convertScaleAbs(avg))
    # Threshold and dilate, then extract the contours of the changed areas.
    thresh = cv2.threshold(frameDelta, 5, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.dilate(thresh, None, iterations=2)
    cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)
    # More than one area may change in a frame, so loop over all the contours.
    for c in cnts:
        # 30 is the minimum contour area we analyze. The smaller the value, the
        # more sensitive the detection, but it may also pick up meaningless noise.
        if cv2.contourArea(c) < 30:
            continue
        # Draw elements, including a rectangle and text.
        (mov_x, mov_y, mov_w, mov_h) = cv2.boundingRect(c)
        cv2.rectangle(imgInput, (mov_x, mov_y), (mov_x+mov_w, mov_y+mov_h), (128, 255, 0), 1)
        # Record the time at which the change was detected.
        lastMotionCaptured = timestamp
    # Keep the "Motion Detected" label up for about 0.5 s after motion ends,
    # to avoid rapid flickering of the drawn elements.
    if (timestamp - lastMotionCaptured).seconds >= 0.5:
        cv2.putText(imgInput,"Motion Detecting",(10,80), cv2.FONT_HERSHEY_SIMPLEX, 0.5,(128,255,0),1,cv2.LINE_AA)
    else:
        cv2.putText(imgInput,"Motion Detected",(10,80), cv2.FONT_HERSHEY_SIMPLEX, 0.5,(0,128,255),1,cv2.LINE_AA)
    # Return the processed frame.
    return imgInput
```
# Process video frames and display
After running the following code, you can see that the color of the frame has changed, indicating that the video screen has been successfully processed by the `motionDetect()` function.
```
def execute(change):
    global image_widget
    image = change['new']
    image_widget.value = bgr8_to_jpeg(motionDetect(image))
execute({'new': camera.value})
camera.unobserve_all()
camera.observe(execute, names='value')
```
At this point you have run all the code. When an object moves or changes in the frame, the text content will change, and a green rectangle will mark the changed area.
# Turn off this processing and stop the camera
Run the following code to turn off the image processing function.
```
camera.unobserve(execute, names='value')
```
Again, let's close the camera connection properly so that we can use the camera in later notebooks.
```
camera.stop()
```
# Intermediate UNIX Shell
### The command line interface in a Linux environment
#### John Stachurski
```
!date
```
This notebook is to familiarize you with the UNIX shell in a Linux environment.
I'm using the [Z Shell](https://en.wikipedia.org/wiki/Z_shell) but most of the following will work on any standard UNIX shell (bash, etc.)
```
from IPython.display import Image
Image("terminal.png", width=600)
```
This lesson is written up in a Jupyter notebook because it's an easy way to store a list of commands and their output. In Jupyter, input is treated as a shell command whenever it starts with ``!``. For example
```
!echo $SHELL
```
If I forget the ``!`` I get an error because my command is interpreted as a Python command:
```
echo $SHELL
```
However, there are some [IPython magics](http://ipython.readthedocs.org/en/stable/interactive/magics.html) that mimic shell commands and work without any qualifier. The next command moves us to my home directory.
```
cd ~
```
To show the present working directory use `pwd`
```
pwd
```
To list its contents use `ls`
```
ls
```
We could add the ``!`` but it wouldn't make any difference.
```
!pwd
```
If you're working directly in the shell then of course you should omit the ``!``
### File System
The top of the directory tree looks like this on my machine:
```
ls /
```
Some comments
* ``bin`` and ``usr/bin`` directories are where most executables (applications) live
* many shared libraries in ``usr/lib``
* The ``home`` directory is where users store personal files
* ``etc`` is home to system wide configuration files
* ``var`` is where logs are written to
* ``media`` is where you'll find your USB stick after you plug it in
### Searching
I have a paper by Lars Hansen on asset pricing somewhere in my file system but I can't remember where. One quick way to find files is to use the ``locate`` command.
```
!locate hansen
```
For more sophisticated searches I use ``find``.
For example, let's find all Julia files in ``/home/john/sync_dir`` and below whose names contain ``cauchy``.
```
!find ~/sync_dir/ -name "*cauchy*.jl"
```
The next command finds all files ending in "tex" modified within the last week:
```
!find /home/john/sync_dir/ -mtime -7 -name "*.tex"
```
### Working with Text Files
When working with specific text files we often use a text editor. However, it's also possible to do a significant amount of work directly from the command line. Here are some examples.
```
cd ~/temp_dir
ls
```
We're going to make a text file using the shell redirection operator ``>``
```
!ls -l ~/sync_dir/books
!ls -l ~/sync_dir/books > list_books.txt
ls
```
We can read the contents of this text file with ``cat``
```
!cat list_books.txt
```
We can search within this file using ``grep``
```
!grep Dec list_books.txt
```
We can show just the top of the file using ``head``
```
!head -2 list_books.txt
```
We can also append to files using the ``>>`` operator
```
!date > new_file.txt
ls
!cat new_file.txt
!date >> new_file.txt
!cat new_file.txt
```
We can change "Jan" to "January" using the ``sed`` line editor
```
!sed 's/Jan/January/' new_file.txt
```
### Putting Commands Together
The command line becomes very powerful when we start linking the commands shown above into compound commands. Most often this is done with a pipe. The symbol for a pipe is ``|``.
Let's look at some examples.
The first command searches for files with the phrase ``hansen`` and pipes the output to ``grep``, which filters for lines containing ``sargent``
```
!locate hansen | grep sargent
```
Let's do the same but print only the first 5 hits
```
!locate hansen | grep sargent | head -5
```
As an exercise, let's see how many Python files I have in ``/home/john/``.
```
!find ~ -name "*.py" | wc -l
```
Here ``wc`` is a program that counts words, lines or characters, and ``-l`` requests the number of lines
Let's see if any Python files in my papers directory contain the phrase ``bellman_operator``. To do this we'll use ``xargs``, which sends a list of files or similar consecutively to the filter on its right.
```
!find /home/john/sync_dir/papers -name "*.py" | xargs grep bellman_operator
```
### Final Comments
One useful trick with bash and zsh is that CTRL-R implements backwards search through command history. Thus we can recall an earlier command by typing CTRL-R and then a fragment of it, such as `JET`.
Another comment is that file names starting with ``.`` are hidden by default. To view them use ``ls -a``
```
cd ~
ls
ls -a
```
Another thing to note is that files have permissions associated with them, so your system can keep track of whether they are executable, who is allowed to read / write to them and so on. To view permissions use ``ls -l``
```
ls -l ~/bin
```
The permissions are the characters on the far left. Here ``x`` means executable, ``r`` is readable and ``w`` is writable, ``d`` is directory and ``l`` is link. To learn more about permissions try googling ``linux file permissions``. To learn more about links google ``linux file links``.
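Putting those permission characters together, here is a quick experiment you can try (the filename `demo.sh` is just an illustration):

```shell
# Illustrative only: create an empty script and make it executable.
touch demo.sh
chmod 755 demo.sh   # rwx for the owner, r-x for group and others
ls -l demo.sh       # the leftmost field typically reads -rwxr-xr-x
```

The `755` is the octal form of those same flags: each digit encodes r (4), w (2), and x (1) for owner, group, and others.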
<a href="https://colab.research.google.com/github/Jamiil92/masakhane/blob/master/en-yo/jw300-baseline/English_to_Yoruba_BPE_notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Masakhane - Machine Translation for African Languages (Using JoeyNMT)
## Note before beginning:
### - The idea is that you should be able to make minimal changes to this in order to get SOME result for your own translation corpus.
### - The tl;dr: Go to the **"TODO"** comments which will tell you what to update to get up and running
### - If you actually want to have a clue what you're doing, read the text and peek at the links
### - With 100 epochs, it should take around 7 hours to run in Google Colab
### - Once you've gotten a result for your language, please attach and email your notebook that generated it to masakhanetranslation@gmail.com
### - If you care enough and get a chance, doing a brief background on your language would be amazing. See examples in [(Martinus, 2019)](https://arxiv.org/abs/1906.05685)
## Retrieve your data & make a parallel corpus
If you want to use the JW300 data referenced on the Masakhane website or in our GitHub repo, you can use `opus-tools` to convert the data into a convenient format. `opus_read` from that package provides a convenient tool for reading the native aligned XML files and converting them to TMX format. The tool can also be used to fetch relevant files from OPUS on the fly and to filter the data as necessary. [Read the documentation](https://pypi.org/project/opustools-pkg/) for more details.
Once you have your corpus files in TMX format (an xml structure which will include the sentences in your target language and your source language in a single file), we recommend reading them into a pandas dataframe. Thankfully, Jade wrote a silly `tmx2dataframe` package which converts your tmx file to a pandas dataframe.
```
from google.colab import drive
drive.mount('/content/drive')
# TODO: Set your source and target languages. Keep in mind, these traditionally use language codes as found here:
# These will also become the suffix's of all vocab and corpus files used throughout
import os
source_language = "en"
target_language = "yo"
lc = False # If True, lowercase the data.
seed = 42 # Random seed for shuffling.
tag = "baseline" # Give a unique name to your folder - this is to ensure you don't rewrite any models you've already submitted
os.environ["src"] = source_language # Sets them in bash as well, since we often use bash scripts
os.environ["tgt"] = target_language
os.environ["tag"] = tag
# This will save it to a folder in our gdrive instead!
!mkdir -p "/content/drive/My Drive/masakhane/$src-$tgt-$tag"
os.environ["gdrive_path"] = "/content/drive/My Drive/masakhane/%s-%s-%s" % (source_language, target_language, tag)
!echo $gdrive_path
# Install opus-tools
! pip install opustools-pkg
# Downloading our corpus
! opus_read -d JW300 -s $src -t $tgt -wm moses -w jw300.$src jw300.$tgt -q
# extract the corpus file
! gunzip JW300_latest_xml_$src-$tgt.xml.gz
# Download the global test set.
! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-any.en
# And the specific test set for this language pair.
os.environ["trg"] = target_language
os.environ["src"] = source_language
! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.en
! mv test.en-$trg.en test.en
! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.$trg
! mv test.en-$trg.$trg test.$trg
# Read the test data to filter from train and dev splits.
# Store english portion in set for quick filtering checks.
en_test_sents = set()
filter_test_sents = "test.en-any.en"
j = 0
with open(filter_test_sents) as f:
    for line in f:
        en_test_sents.add(line.strip())
        j += 1
print('Loaded {} global test sentences to filter from the training/dev data.'.format(j))
import pandas as pd
# TMX file to dataframe
source_file = 'jw300.' + source_language
target_file = 'jw300.' + target_language
source = []
target = []
skip_lines = [] # Collect the line numbers of the source portion to skip the same lines for the target portion.
with open(source_file) as f:
    for i, line in enumerate(f):
        # Skip sentences that are contained in the test set.
        if line.strip() not in en_test_sents:
            source.append(line.strip())
        else:
            skip_lines.append(i)
with open(target_file) as f:
    for j, line in enumerate(f):
        # Only add to corpus if the corresponding source was not skipped.
        if j not in skip_lines:
            target.append(line.strip())
print('Loaded data and skipped {}/{} lines since contained in test set.'.format(len(skip_lines), i))
df = pd.DataFrame(zip(source, target), columns=['source_sentence', 'target_sentence'])
# If you get "TypeError: data argument can't be an iterator", it is caused by
# your zip/pandas version; run the line below instead.
#df = pd.DataFrame(list(zip(source, target)), columns=['source_sentence', 'target_sentence'])
df.head(3)
```
## Pre-processing and export
It is generally a good idea to remove duplicate translations and conflicting translations from the corpus. In practice, these public corpora include some number of these that need to be cleaned.
In addition we will split our data into dev/test/train and export to the filesystem.
```
# drop duplicate translations
df_pp = df.drop_duplicates()
# drop conflicting translations
# (this is optional and something that you might want to comment out
# depending on the size of your corpus)
df_pp.drop_duplicates(subset='source_sentence', inplace=True)
df_pp.drop_duplicates(subset='target_sentence', inplace=True)
# Shuffle the data to remove bias in dev set selection.
df_pp = df_pp.sample(frac=1, random_state=seed).reset_index(drop=True)
# Install fuzzy wuzzy to remove "almost duplicate" sentences in the
# test and training sets.
! pip install fuzzywuzzy
! pip install python-Levenshtein
import time
from fuzzywuzzy import process
import numpy as np
# reset the index of the training set after previous filtering
df_pp.reset_index(drop=False, inplace=True)
# Remove samples from the training data set if they "almost overlap" with the
# samples in the test set.
# Filtering function. Adjust pad to narrow down the candidate matches to
# within a certain length of characters of the given sample.
def fuzzfilter(sample, candidates, pad):
    candidates = [x for x in candidates if len(x) <= len(sample)+pad and len(x) >= len(sample)-pad]
    if len(candidates) > 0:
        return process.extractOne(sample, candidates)[1]
    else:
        return np.nan
# NOTE - This might run slow depending on the size of your training set. We are
# printing some information to help you track how long it would take.
scores = []
start_time = time.time()
for idx, row in df_pp.iterrows():
    scores.append(fuzzfilter(row['source_sentence'], list(en_test_sents), 5))
    if idx % 1000 == 0:
        hours, rem = divmod(time.time() - start_time, 3600)
        minutes, seconds = divmod(rem, 60)
        print("{:0>2}:{:0>2}:{:05.2f}".format(int(hours), int(minutes), seconds), "%0.2f percent complete" % (100.0*float(idx)/float(len(df_pp))))
# Filter out "almost overlapping samples"
df_pp['scores'] = scores
df_pp = df_pp[df_pp['scores'] < 95]
# This section does the split between train/dev for the parallel corpora then saves them as separate files
# We use 1000 dev test and the given test set.
import csv
# Do the split between dev/train and create parallel corpora
num_dev_patterns = 1000
# Optional: lower case the corpora - this will make it easier to generalize, but without proper casing.
if lc:  # Julia: making lowercasing optional
    df_pp["source_sentence"] = df_pp["source_sentence"].str.lower()
    df_pp["target_sentence"] = df_pp["target_sentence"].str.lower()
# Julia: test sets are already generated
dev = df_pp.tail(num_dev_patterns) # Herman: Error in original
stripped = df_pp.drop(df_pp.tail(num_dev_patterns).index)
with open("train."+source_language, "w") as src_file, open("train."+target_language, "w") as trg_file:
    for index, row in stripped.iterrows():
        src_file.write(row["source_sentence"]+"\n")
        trg_file.write(row["target_sentence"]+"\n")
with open("dev."+source_language, "w") as src_file, open("dev."+target_language, "w") as trg_file:
    for index, row in dev.iterrows():
        src_file.write(row["source_sentence"]+"\n")
        trg_file.write(row["target_sentence"]+"\n")
#stripped[["source_sentence"]].to_csv("train."+source_language, header=False, index=False) # Herman: Added `header=False` everywhere
#stripped[["target_sentence"]].to_csv("train."+target_language, header=False, index=False) # Julia: Problematic handling of quotation marks.
#dev[["source_sentence"]].to_csv("dev."+source_language, header=False, index=False)
#dev[["target_sentence"]].to_csv("dev."+target_language, header=False, index=False)
# Doublecheck the format below. There should be no extra quotation marks or weird characters.
! head train.*
! head dev.*
```
---
## Installation of JoeyNMT
JoeyNMT is a simple, minimalist NMT package which is useful for learning and teaching. Check out the documentation for JoeyNMT [here](https://joeynmt.readthedocs.io)
```
# Install JoeyNMT
! git clone https://github.com/joeynmt/joeynmt.git
! cd joeynmt; pip3 install .
```
# Preprocessing the Data into Subword BPE Tokens
- One of the most powerful improvements for agglutinative languages (a feature of most Bantu languages) is using BPE tokenization [ (Sennrich, 2015) ](https://arxiv.org/abs/1508.07909).
- It was also shown that by optimizing the number of BPE codes we significantly improve results for low-resourced languages [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021) [(Martinus, 2019)](https://arxiv.org/abs/1906.05685)
- Below we have the scripts for doing BPE tokenization of our data. We use 4000 tokens as recommended by [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021). You do not need to change anything. Simply running the below will be suitable.
```
# One of the huge boosts in NMT performance was to use a different method of tokenizing.
# Usually, NMT would tokenize by words. However, using a method called BPE gave amazing boosts to performance
# Do subword NMT
from os import path
os.environ["src"] = source_language # Sets them in bash as well, since we often use bash scripts
os.environ["tgt"] = target_language
# Learn BPEs on the training data.
os.environ["data_path"] = path.join("joeynmt", "data", source_language + target_language) # Herman!
! subword-nmt learn-joint-bpe-and-vocab --input train.$src train.$tgt -s 4000 -o bpe.codes.4000 --write-vocabulary vocab.$src vocab.$tgt
# Apply BPE splits to the development and test data.
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < train.$src > train.bpe.$src
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < train.$tgt > train.bpe.$tgt
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < dev.$src > dev.bpe.$src
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < dev.$tgt > dev.bpe.$tgt
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < test.$src > test.bpe.$src
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < test.$tgt > test.bpe.$tgt
# Create directory, move everything we care about to the correct location
! mkdir -p $data_path
! cp train.* $data_path
! cp test.* $data_path
! cp dev.* $data_path
! cp bpe.codes.4000 $data_path
! ls $data_path
# Also move everything we care about to a mounted location in google drive (relevant if running in colab) at gdrive_path
! cp train.* "$gdrive_path"
! cp test.* "$gdrive_path"
! cp dev.* "$gdrive_path"
! cp bpe.codes.4000 "$gdrive_path"
! ls "$gdrive_path"
# Create that vocab using build_vocab
! sudo chmod 777 joeynmt/scripts/build_vocab.py
! joeynmt/scripts/build_vocab.py joeynmt/data/$src$tgt/train.bpe.$src joeynmt/data/$src$tgt/train.bpe.$tgt --output_path joeynmt/data/$src$tgt/vocab.txt
# Some output
! echo "BPE Yoruba Sentences"
! tail -n 5 test.bpe.$tgt
! echo "Combined BPE Vocab"
! tail -n 10 joeynmt/data/$src$tgt/vocab.txt # Herman
# Also move everything we care about to a mounted location in google drive (relevant if running in colab) at gdrive_path
! cp train.* "$gdrive_path"
! cp test.* "$gdrive_path"
! cp dev.* "$gdrive_path"
! cp bpe.codes.4000 "$gdrive_path"
! ls "$gdrive_path"
```
# Creating the JoeyNMT Config
JoeyNMT requires a yaml config. We provide a template below, and we've also set a number of defaults that you may play with!
- We used Transformer architecture
- We set our dropout to reasonably high: 0.3 (recommended in [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021))
Things worth playing with:
- The batch size (also recommended to change for low-resourced languages)
- The number of epochs (we've set it at 30 just so it runs in about an hour, for testing purposes)
- The decoder options (beam_size, alpha)
- Evaluation metrics (BLEU versus chrF)
```
# This creates the config file for our JoeyNMT system. It might seem overwhelming so we've provided a couple of useful parameters you'll need to update
# (You can of course play with all the parameters if you'd like!)
name = '%s%s' % (source_language, target_language)
gdrive_path = os.environ["gdrive_path"]
# Create the config
config = """
name: "{name}_transformer"

data:
    src: "{source_language}"
    trg: "{target_language}"
    train: "data/{name}/train.bpe"
    dev: "data/{name}/dev.bpe"
    test: "data/{name}/test.bpe"
    level: "bpe"
    lowercase: False
    max_sent_length: 100
    src_vocab: "data/{name}/vocab.txt"
    trg_vocab: "data/{name}/vocab.txt"

testing:
    beam_size: 5
    alpha: 1.0

training:
    #load_model: "{gdrive_path}/models/{name}_transformer/1.ckpt" # if uncommented, load a pre-trained model from this checkpoint
    random_seed: 42
    optimizer: "adam"
    normalization: "tokens"
    adam_betas: [0.9, 0.999]
    scheduling: "plateau"        # TODO: try switching from plateau to Noam scheduling
    patience: 5                  # For plateau: decrease learning rate by decrease_factor if validation score has not improved for this many validation rounds.
    learning_rate_factor: 0.5    # factor for Noam scheduler (used with Transformer)
    learning_rate_warmup: 1000   # warmup steps for Noam scheduler (used with Transformer)
    decrease_factor: 0.7
    loss: "crossentropy"
    learning_rate: 0.0003
    learning_rate_min: 0.00000001
    weight_decay: 0.0
    label_smoothing: 0.1
    batch_size: 4096
    batch_type: "token"
    eval_batch_size: 3600
    eval_batch_type: "token"
    batch_multiplier: 1
    early_stopping_metric: "ppl"
    epochs: 30                   # TODO: Decrease when playing around and checking that things work. Around 30 is sufficient to check if it's working at all.
    validation_freq: 1000        # TODO: Set to at least once per epoch.
    logging_freq: 100
    eval_metric: "bleu"
    model_dir: "models/{name}_transformer"
    overwrite: False             # TODO: Set to True if you want to overwrite possibly existing models.
    shuffle: True
    use_cuda: True
    max_output_length: 100
    print_valid_sents: [0, 1, 2, 3]
    keep_last_ckpts: 3

model:
    initializer: "xavier"
    bias_initializer: "zeros"
    init_gain: 1.0
    embed_initializer: "xavier"
    embed_init_gain: 1.0
    tied_embeddings: True
    tied_softmax: True
    encoder:
        type: "transformer"
        num_layers: 6
        num_heads: 4             # TODO: Increase to 8 for larger data.
        embeddings:
            embedding_dim: 256   # TODO: Increase to 512 for larger data.
            scale: True
            dropout: 0.2
        # typically ff_size = 4 x hidden_size
        hidden_size: 256         # TODO: Increase to 512 for larger data.
        ff_size: 1024            # TODO: Increase to 2048 for larger data.
        dropout: 0.3
    decoder:
        type: "transformer"
        num_layers: 6
        num_heads: 4             # TODO: Increase to 8 for larger data.
        embeddings:
            embedding_dim: 256   # TODO: Increase to 512 for larger data.
            scale: True
            dropout: 0.2
        # typically ff_size = 4 x hidden_size
        hidden_size: 256         # TODO: Increase to 512 for larger data.
        ff_size: 1024            # TODO: Increase to 2048 for larger data.
        dropout: 0.3
""".format(name=name, gdrive_path=os.environ["gdrive_path"], source_language=source_language, target_language=target_language)
with open("joeynmt/configs/transformer_{name}.yaml".format(name=name), 'w') as f:
    f.write(config)
```
# Train the Model
This single line of joeynmt runs the training using the config we made above
```
# Train the model
# You can press Ctrl-C to stop. And then run the next cell to save your checkpoints!
!cd joeynmt; python3 -m joeynmt train configs/transformer_$src$tgt.yaml
# Copy the created models from the notebook storage to google drive for persistent storage
!cp -r joeynmt/models/${src}${tgt}_transformer/* "$gdrive_path/models/${src}${tgt}_transformer/"
# Copy the created models from the notebook storage to google drive for persistent storage
!cp joeynmt/models/${src}${tgt}_transformer/best.ckpt "$gdrive_path/models/${src}${tgt}_transformer/"
!ls joeynmt/models/${src}${tgt}_transformer
# Output our validation accuracy
! cat "$gdrive_path/models/${src}${tgt}_transformer/validations.txt"
# Test our model
! cd joeynmt; python3 -m joeynmt test "$gdrive_path/models/${src}${tgt}_transformer/config.yaml"
```
<img style="float: center;" src="./images/CI_horizontal.png" width="600">
<center>
<span style="font-size: 1.5em;">
<a href='https://www.coleridgeinitiative.org'>Website</a>
</span>
</center>
Ghani, Rayid, Frauke Kreuter, Julia Lane, Adrianne Bradford, Alex Engler, Nicolas Guetta Jeanrenaud, Graham Henke, Daniela Hochfellner, Clayton Hunter, Brian Kim, Avishek Kumar, and Jonathan Morgan.
_Citation to be updated on export_
# Data Preparation for Machine Learning - Feature Creation
----
## Python Setup
- Back to [Table of Contents](#Table-of-Contents)
Before we begin, run the code cell below to initialize the libraries we'll be using in this assignment. We're already familiar with `numpy`, `pandas`, and `psycopg2` from previous tutorials.
```
%pylab inline
import pandas as pd
import psycopg2
from sqlalchemy import create_engine
db_name = "appliedda"
hostname = "10.10.2.10"
```
## Creating Features
Our features are our independent variables or predictors. Good features make machine learning systems effective.
The better the features, the easier it is to capture the structure of the data. You generate features using domain knowledge. In general, it is better to have more complex features and a simpler model rather than vice versa. Keeping the model simple makes it faster to train and easier to understand, rather than extensively searching for the "right" model and "right" set of parameters.
Machine Learning Algorithms learn a solution to a problem from sample data. The set of features is the best representation of the sample data to learn a solution to a problem.
- **Feature engineering** is "the process of transforming raw data into features that better represent the underlying problem/data/structure to the predictive models, resulting in improved model accuracy on unseen data" (from [Discover Feature Engineering](http://machinelearningmastery.com/discover-feature-engineering-how-to-engineer-features-and-how-to-get-good-at-it/)). In text, for example, this might involve deriving traits of the text like word counts, verb counts, or topics to feed into a model rather than simply giving it the raw text.
Examples of feature engineering are:
- **Transformations**, such as log, square, and square root.
- **Dummy (binary) variables**, also known as *indicator variables*, often done by taking categorical variables
(such as city) which do not have a numerical value, and adding them to models as a binary value.
- **Discretization**. Several methods require features to be discrete instead of continuous. This is often done
by binning, which you can do by various approaches like equal width, deciles, Fisher-Jenks, etc.
- **Aggregation.** Aggregate features often constitute the majority of features for a given problem. These use
different aggregation functions (*count, min, max, average, standard deviation, etc.*) which summarize several
values into one feature, aggregating over varying windows of time and space. For example, for policing or criminal justice problems, we may want to calculate the *number* (and *min, max, mean, variance*, etc.) of crimes within an *m*-mile radius of an address in the past *t* months for varying values of *m* and *t*, and then use all of them as features.
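The four techniques above can be sketched with `pandas` and `numpy`. The column names and values below are invented for illustration and are not drawn from the class tables:

```python
import numpy as np
import pandas as pd

# Illustrative data (made-up column names and values).
df = pd.DataFrame({
    "city": ["Chicago", "Springfield", "Chicago", "Peoria"],
    "wage": [52000.0, 38000.0, 61000.0, 45000.0],
    "age":  [29, 41, 35, 52],
})

# Transformation: log of a skewed variable.
df["log_wage"] = np.log(df["wage"])

# Dummy (binary) variables from a categorical column.
df = pd.get_dummies(df, columns=["city"], prefix="city")

# Discretization: bin age into three equal-width groups.
df["age_bin"] = pd.cut(df["age"], bins=3, labels=False)

# Aggregation: mean wage within each age bin, merged back as a feature.
df["bin_mean_wage"] = df.groupby("age_bin")["wage"].transform("mean")
```

Each step adds columns to the same dataframe, so the result can be fed directly to a model matrix.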
## Graduate demographics
### Step by Step Approach
```
conn = psycopg2.connect(database = db_name, host = hostname)
cursor = conn.cursor()
# first let's confirm what our cohort table currently has
sql = '''
select * from ada_edwork.no_job_cohort_2007
limit 10;
'''
pd.read_sql(sql, conn)
# add demographic columns to the cohort table
sql = """
ALTER TABLE ada_edwork.no_job_cohort_2007
ADD COLUMN years_old int,
ADD COLUMN gender text,
ADD COLUMN ethnicity text;
"""
cursor.execute(sql)
# update columns from oh_hei_demo table
sql = '''
UPDATE ada_edwork.no_job_cohort_2007 a SET (years_old, gender, ethnicity)
= (2007 - birth_year, b.gender_code, b.ethnicity_code)
FROM in_data_2019.che_completions b
WHERE a.ssn = b.ssn AND a.degree_conferred_date = b.degree_conferred_date;
'''
cursor.execute(sql)
df = pd.read_sql('select * from ada_edwork.no_job_cohort_2007;', conn)
df.head()
cursor.close()
conn = psycopg2.connect(database=db_name, host = hostname)
```
### Define Function
In order to facilitate creating this feature for several years of data, we combined all the above steps into a Python function, and added a final step that writes the feature table to the database.
Note that we assume the corresponding `<prefix>cohort_<year>` table has already been created.
```
# Insert team table prefix
tbl_prefix = 'no_job_'
def grad_demographics(YEAR, prefix = tbl_prefix):
    # set the database connection
    conn = psycopg2.connect(database=db_name, host = hostname)
    cursor = conn.cursor()

    print("Adding demographic features")
    sql = '''
    ALTER TABLE ada_edwork.{pref}cohort_{year}
    ADD COLUMN years_old int,
    ADD COLUMN gender text,
    ADD COLUMN ethnicity text;
    commit;

    UPDATE ada_edwork.{pref}cohort_{year} a
    SET (years_old, gender, ethnicity)
    = ({year} - birth_year, b.gender_code, b.ethnicity_code)
    FROM in_data_2019.che_completions b
    WHERE a.ssn = b.ssn AND a.degree_conferred_date = b.degree_conferred_date;
    commit;
    '''.format(pref=prefix, year=YEAR)
    # print(sql) # to debug
    cursor.execute(sql)
    print("demographic features added")
    cursor.close()

    sql = '''
    SELECT * FROM ada_edwork.{pref}cohort_{year};
    '''.format(pref=prefix, year=YEAR)
    df = pd.read_sql(sql, conn)
    return df
import time

start_time = time.time()
df_test1 = grad_demographics(2007)
print('demographic features added in {:.2f} seconds'.format(time.time()-start_time))

years = [2008, 2009, 2010, 2011]
for year in years:
    start_time = time.time()
    df = grad_demographics(year)
    print('demographic features added in {:.2f} seconds'.format(time.time()-start_time))
```
## Removing Outliers
**It is never a good idea to drop observations without prior investigation AND a good reason to believe the data is wrong!**
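For instance, a common way to flag candidate outliers for investigation, without dropping anything, is the interquartile-range (IQR) rule. The values below are invented for illustration:

```python
import pandas as pd

# Illustrative wages with one suspicious value (made-up data).
wages = pd.Series([38000, 41000, 45000, 52000, 61000, 9_900_000])

# IQR rule: flag values far outside the interquartile range for review,
# rather than silently deleting them.
q1, q3 = wages.quantile([0.25, 0.75])
iqr = q3 - q1
outlier_flag = (wages < q1 - 1.5 * iqr) | (wages > q3 + 1.5 * iqr)
print(wages[outlier_flag])  # candidates to investigate, not to delete
```

The flagged rows are then checked against the source data; only a documented data error justifies removal.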
## Imputing Missing Values
There are many ways of imputing missing values based on the rest of the data. Missing values can be imputed to the median of the rest of the data, or you can condition on other characteristics (e.g., industry or geography).
For our data, we have made an assumption about what "missing" means for each of our data's components (e.g., if an individual does not show up in the IDES data, we say they do not have a job in that time period).
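A minimal sketch of group-wise median imputation with `pandas` (the column names and values are invented for illustration):

```python
import numpy as np
import pandas as pd

# Illustrative: impute missing wages with the median of the worker's group,
# falling back to the overall median (made-up column names and values).
df = pd.DataFrame({
    "industry": ["retail", "retail", "tech", "tech", "tech"],
    "wage": [30000.0, np.nan, 90000.0, 110000.0, np.nan],
})

# Group-wise median imputation.
df["wage_imputed"] = df.groupby("industry")["wage"].transform(
    lambda s: s.fillna(s.median())
)
# Fallback for groups whose values are entirely missing.
df["wage_imputed"] = df["wage_imputed"].fillna(df["wage"].median())
```

Keeping the raw column alongside the imputed one preserves the record of which values were filled in.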
```
# Import Packages
import os
from glob import glob
import matplotlib.pyplot as plt
import requests
# import descartes
import urllib
import pandas as pd
from pandas.io.json import json_normalize
import geopandas as gpd
import rasterio as rio
from rasterio.plot import plotting_extent
import rasterstats as rs
from shapely.geometry import Point
import earthpy as et
import earthpy.plot as ep
%run ./data_grabber.ipynb
# Set working directory
os.chdir(os.path.join(et.io.HOME,'earth-analytics'))
# Get data
CPER_tif_files=open_ecosystem_structure('CPER','2017-05')
CPER_insitu_df=open_woody_veg_structure('CPER','2017-09')
ONAQ_tif_files=open_ecosystem_structure('ONAQ','2017-06')
ONAQ_insitu_df=open_woody_veg_structure('ONAQ','2017-09')
# Create shapefile of buffered insitu sites
CPER_insitu_gdf=gpd.GeoDataFrame(CPER_insitu_df,geometry=gpd.points_from_xy(
x=CPER_insitu_df.easting,y=CPER_insitu_df.northing),crs='epsg:32613')
CPER_buffered_points=CPER_insitu_gdf.copy()
CPER_buffered_points=CPER_insitu_gdf.geometry.buffer(100)
CPER_buffered_points_path=os.path.join(
'data','NEON','CPER','outputs')
ONAQ_insitu_gdf=gpd.GeoDataFrame(ONAQ_insitu_df,geometry=gpd.points_from_xy(
x=ONAQ_insitu_df.easting,y=ONAQ_insitu_df.northing),crs='epsg:32613')
ONAQ_buffered_points=ONAQ_insitu_gdf.copy()
ONAQ_buffered_points=ONAQ_insitu_gdf.geometry.buffer(100)
ONAQ_buffered_points_path=os.path.join(
'data','NEON','ONAQ','outputs')
# ONAQ_buffered_points.to_file(os.path.join(
# buffered_points_path, 'ONAQ_buffered_points.shp'))
with rio.open(CPER_tif_files[184]) as src:
CHM_arr=src.read(1,masked=True)
extent=src.bounds
CHM_meta=src.profile
extent=plotting_extent(src)
# buffered_points_import=gpd.read_file(buffered_points_shp)
# 0 overlaps slightly
# fig, ax = plt.subplots()
ep.plot_bands(CHM_arr,cmap='BrBG')
# extent=extent)
# CPER_buffered_points.plot(ax=ax,
# color='yellow')
# arr.max()
x=0
for tif_file in CPER_tif_files:
with rio.open(tif_file) as src:
CHM_arr=src.read(1,masked=True)
extent=src.bounds
CHM_meta=src.profile
CPER_tree_heights = rs.zonal_stats(CPER_buffered_points,
CHM_arr,
nodata=-999,
affine=CHM_meta['transform'],
geojson_out=True,
copy_properties=True,
stats=['count', 'min', 'mean','max','median'])
for i in CPER_tree_heights:
if i['properties']['max'] is not None:
x+=1
print(i['properties']['max'],x)
```
```
%matplotlib inline
```
Saving & Loading Models
=========================
**Author:** `Matthew Inkawhich <https://github.com/MatthewInkawhich>`_
**Translated by:** `박정환 <http://github.com/9bow>`_
This document provides a variety of recipes for saving and loading PyTorch models.
Feel free to read the entire document, or just skip to the code for the use case you need.
When it comes to saving and loading models, there are three core functions to be familiar with:
1) `torch.save <https://pytorch.org/docs/stable/torch.html?highlight=save#torch.save>`__:
Saves a serialized object to disk. This function uses Python's
`pickle <https://docs.python.org/3/library/pickle.html>`__ utility for serialization.
Models, tensors, and dictionaries of all kinds of objects can be saved with this function.
2) `torch.load <https://pytorch.org/docs/stable/torch.html?highlight=torch%20load#torch.load>`__:
Uses `pickle <https://docs.python.org/3/library/pickle.html>`__\ 's unpickling
facilities to deserialize pickled object files into memory. This function also
facilitates loading data onto a particular device.
(See `Saving & Loading Model Across Devices <#device>`__.)
3) `torch.nn.Module.load_state_dict <https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=load_state_dict#torch.nn.Module.load_state_dict>`__:
Loads a model's parameters using a deserialized *state_dict*.
For more information on *state_dict*, see `What is a state_dict?
<#state-dict>`__.
**Contents:**
- `What is a state_dict? <#state-dict>`__
- `Saving & Loading Model for Inference <#inference>`__
- `Saving & Loading a General Checkpoint <#checkpoint>`__
- `Saving Multiple Models in One File <#multiple>`__
- `Warmstarting Model Using Parameters from a Different Model <#warmstart>`__
- `Saving & Loading Model Across Devices <#device>`__
What is a ``state_dict``?
-------------------------------
In PyTorch, the learnable parameters (e.g. weights and biases) of a ``torch.nn.Module``
model are contained in the model's parameters (accessed with ``model.parameters()``).
A *state_dict* is simply a Python dictionary object that maps each layer to its
parameter tensors. Note that only layers with learnable parameters (convolutional
layers, linear layers, etc.) and registered buffers (batchnorm's running_mean)
have entries in the model's *state_dict*. Optimizer objects (``torch.optim``) also
have a *state_dict*, which contains information about the optimizer's state as well
as the hyperparameters used.
Because *state_dict* objects are Python dictionaries, they can be easily saved,
updated, altered, and restored, adding a great deal of modularity to PyTorch models and optimizers.
Example:
^^^^^^^^
Let's take a look at the *state_dict* from the simple model used in the
:doc:`/beginner/blitz/cifar10_tutorial` tutorial.
.. code:: python
# Define the model
class TheModelClass(nn.Module):
def __init__(self):
super(TheModelClass, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
# Initialize the model
model = TheModelClass()
# Initialize the optimizer
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
# Print the model's state_dict
print("Model's state_dict:")
for param_tensor in model.state_dict():
print(param_tensor, "\t", model.state_dict()[param_tensor].size())
# Print the optimizer's state_dict
print("Optimizer's state_dict:")
for var_name in optimizer.state_dict():
print(var_name, "\t", optimizer.state_dict()[var_name])
**Output:**
::
Model's state_dict:
conv1.weight torch.Size([6, 3, 5, 5])
conv1.bias torch.Size([6])
conv2.weight torch.Size([16, 6, 5, 5])
conv2.bias torch.Size([16])
fc1.weight torch.Size([120, 400])
fc1.bias torch.Size([120])
fc2.weight torch.Size([84, 120])
fc2.bias torch.Size([84])
fc3.weight torch.Size([10, 84])
fc3.bias torch.Size([10])
Optimizer's state_dict:
state {}
param_groups [{'lr': 0.001, 'momentum': 0.9, 'dampening': 0, 'weight_decay': 0, 'nesterov': False, 'params': [4675713712, 4675713784, 4675714000, 4675714072, 4675714216, 4675714288, 4675714432, 4675714504, 4675714648, 4675714720]}]
Saving & Loading Model for Inference
------------------------------------------------
Save/Load ``state_dict`` (Recommended)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**Save:**
.. code:: python
torch.save(model.state_dict(), PATH)
**Load:**
.. code:: python
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
<div class="alert alert-info"><h4>Note</h4><p>PyTorch 버전 1.6에서는 새로운 Zip파일-기반의 파일 포맷을 사용하도록
``torch.save`` 가 변경되었습니다. ``torch.load`` 는 예전 방식의 파일들을
읽어올 수 있도록 하고 있습니다. 어떤 이유에서든 ``torch.save`` 가 예전
방식을 사용하도록 하고 싶다면, ``_use_new_zipfile_serialization=False`` 을
kwarg로 전달하세요.</p></div>
When saving a model for inference, it is only necessary to save the trained model's
learned parameters. Saving the model's *state_dict* with the ``torch.save()``
function gives you the most flexibility for restoring the model later, which is
why it is the recommended method for saving models.
A common PyTorch convention is to save models using either a ``.pt`` or ``.pth``
file extension.
Remember that you must call ``model.eval()`` to set dropout and batch normalization
layers to evaluation mode before running inference. Failing to do this will yield
inconsistent inference results.
.. Note ::
Notice that the ``load_state_dict()`` function takes a dictionary object, NOT a
path to a saved object. This means that you must deserialize the saved *state_dict*
before passing it to the ``load_state_dict()`` function. For example, you CANNOT
load using ``model.load_state_dict(PATH)``.
.. Note ::
If you only plan to keep the best performing model (according to the validation
loss), don't forget that ``best_model_state = model.state_dict()`` returns a
reference to the model's current state, not a copy of it! You must either
serialize ``best_model_state`` or use
``best_model_state = deepcopy(model.state_dict())``; otherwise your best
``best_model_state`` will keep getting updated by the subsequent training steps.
As a result, the final model state would be the state of the overfitted model.
Save/Load Entire Model
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**Save:**
.. code:: python
torch.save(model, PATH)
**Load:**
.. code:: python
# Model class must be defined somewhere
model = torch.load(PATH)
model.eval()
This save/load process uses the most intuitive syntax and involves the least
amount of code. Saving a model this way saves the entire module using Python's
`pickle <https://docs.python.org/3/library/pickle.html>`__ module. The
disadvantage of this approach is that, because pickle does not save the model
class itself, the serialized data is bound to the specific classes and the exact
directory structure used when the model was saved. Instead, pickle saves a path
to the file containing the class, which is used at load time. For this reason,
your code can break in various ways when used in another project or after
refactoring.
A common PyTorch convention is to save models using either a ``.pt`` or ``.pth``
file extension.
Remember that you must call ``model.eval()`` to set dropout and batch normalization
layers to evaluation mode before running inference. Failing to do this will yield
inconsistent inference results.
Saving & Loading a General Checkpoint for Inference and/or Resuming Training
--------------------------------------------------------------------------
Save:
^^^^^^^^^^
.. code:: python
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': loss,
...
}, PATH)
Load:
^^^^^^^^^^
.. code:: python
model = TheModelClass(*args, **kwargs)
optimizer = TheOptimizerClass(*args, **kwargs)
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']
model.eval()
# - or -
model.train()
When saving a general checkpoint, to be used for either inference or resuming
training, you must save more than just the model's *state_dict*. It is important
to also save the optimizer's *state_dict*, since it contains buffers and
parameters that are updated as the model trains. Other items you may want to
save include the last epoch, the most recently recorded training loss, external
``torch.nn.Embedding`` layers, and so on. As a result, such a checkpoint is often
2~3 times larger than the model alone.
To save multiple components, organize them in a dictionary and use
``torch.save()`` to serialize the dictionary. A common PyTorch convention is to
save these checkpoints using the ``.tar`` file extension.
To load the items, first initialize the model and optimizer, then load the
dictionary using ``torch.load()``. From here, you can easily access the saved
items by simply querying the dictionary as you would expect.
Remember that you must call ``model.eval()`` to set dropout and batch normalization
layers to evaluation mode before running inference. Failing to do this will yield
inconsistent inference results. If you wish to resume training, call
``model.train()`` to ensure these layers are in training mode.
Saving Multiple Models in One File
-------------------------------------------------------
Save:
^^^^^^^^^^
.. code:: python
torch.save({
'modelA_state_dict': modelA.state_dict(),
'modelB_state_dict': modelB.state_dict(),
'optimizerA_state_dict': optimizerA.state_dict(),
'optimizerB_state_dict': optimizerB.state_dict(),
...
}, PATH)
Load:
^^^^^^^^^^
.. code:: python
modelA = TheModelAClass(*args, **kwargs)
modelB = TheModelBClass(*args, **kwargs)
optimizerA = TheOptimizerAClass(*args, **kwargs)
optimizerB = TheOptimizerBClass(*args, **kwargs)
checkpoint = torch.load(PATH)
modelA.load_state_dict(checkpoint['modelA_state_dict'])
modelB.load_state_dict(checkpoint['modelB_state_dict'])
optimizerA.load_state_dict(checkpoint['optimizerA_state_dict'])
optimizerB.load_state_dict(checkpoint['optimizerB_state_dict'])
modelA.eval()
modelB.eval()
# - or -
modelA.train()
modelB.train()
When saving a model composed of multiple ``torch.nn.Modules``, such as a GAN, a
Seq2Seq model, or an ensemble of models, you follow the same approach as when
saving a general checkpoint: save a dictionary of each model's *state_dict* and
its corresponding optimizer. As mentioned before, you can save any other items
needed to resume training by simply appending them to the dictionary.
A common PyTorch convention is to save these checkpoints using the ``.tar`` file
extension.
To load the models, first initialize the models and optimizers, then load the
dictionary using ``torch.load()``. From here, you can easily access the saved
items by simply querying the dictionary as you would expect.
Remember that you must call ``model.eval()`` to set dropout and batch normalization
layers to evaluation mode before running inference. Failing to do this will yield
inconsistent inference results. If you wish to resume training, call
``model.train()`` to set these layers to training mode.
Warmstarting Model Using Parameters from a Different Model
--------------------------------------------------------------------
Save:
^^^^^^^^^^
.. code:: python
torch.save(modelA.state_dict(), PATH)
Load:
^^^^^^^^^^
.. code:: python
modelB = TheModelBClass(*args, **kwargs)
modelB.load_state_dict(torch.load(PATH), strict=False)
Partially loading a model, or loading a partial model, are common scenarios in
transfer learning or when training a new, complex model. Leveraging trained
parameters, even if only a few are usable, helps to warmstart the training
process and will hopefully help your model converge much faster than training
from scratch.
Whether you are loading from a partial *state_dict*, which is missing some keys,
or loading a *state_dict* with more keys than the model you are loading into,
you can set the ``strict`` argument to **False** in the ``load_state_dict()``
function to ignore the non-matching keys.
If you want to load parameters from one layer to another, but some keys do not
match, simply rename the parameter keys in the *state_dict* you are loading to
match the keys of the model you are loading into.
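Renaming *state_dict* keys is plain dictionary manipulation; a sketch (the ``backbone.`` prefix and key names here are hypothetical, not part of the tutorial's API):

```python
def rename_keys(state_dict, old_prefix, new_prefix):
    """Return a new state_dict with old_prefix swapped for new_prefix."""
    return {
        (new_prefix + k[len(old_prefix):]) if k.startswith(old_prefix) else k: v
        for k, v in state_dict.items()
    }

# Checkpoint keys use "backbone.", but our model expects bare layer names.
loaded = {"backbone.conv1.weight": "w", "backbone.conv1.bias": "b"}
remapped = rename_keys(loaded, "backbone.", "")
print(sorted(remapped))  # ['conv1.bias', 'conv1.weight']
# modelB.load_state_dict(remapped, strict=False)  # then load as usual
```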
Saving & Loading Model Across Devices
----------------------------------------
Save on GPU, Load on CPU
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**Save:**
.. code:: python
torch.save(model.state_dict(), PATH)
**Load:**
.. code:: python
device = torch.device('cpu')
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH, map_location=device))
When loading a model on a CPU that was trained on a GPU, pass
``torch.device('cpu')`` to the ``map_location`` argument of the ``torch.load()``
function. In this case, the storages underlying the tensors are dynamically
remapped to the CPU device using the ``map_location`` argument.
Save on GPU, Load on GPU
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**Save:**
.. code:: python
torch.save(model.state_dict(), PATH)
**Load:**
.. code:: python
device = torch.device("cuda")
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.to(device)
# Make sure to call input = input.to(device) on any input tensors that you feed to the model
When loading a model on a GPU that was trained and saved on a GPU, convert the
initialized ``model`` to a CUDA optimized model by calling
``model.to(torch.device('cuda'))``. Also be sure to call ``.to(torch.device('cuda'))``
on all model inputs to prepare the data for the model. Note that calling
``my_tensor.to(device)`` returns a new copy of ``my_tensor`` on the GPU, so you
must overwrite the tensor manually:
``my_tensor = my_tensor.to(torch.device('cuda'))``.
Save on CPU, Load on GPU
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**Save:**
.. code:: python
torch.save(model.state_dict(), PATH)
**Load:**
.. code:: python
device = torch.device("cuda")
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH, map_location="cuda:0"))  # Choose whatever GPU device number you want
model.to(device)
# Make sure to call input = input.to(device) on any input tensors that you feed to the model
When loading a model on a GPU that was trained and saved on a CPU, set the
``map_location`` argument of the ``torch.load()`` function to *cuda:device_id*.
This loads the model onto the given GPU device. Next, be sure to call
``model.to(torch.device('cuda'))`` to convert the model's parameter tensors to
CUDA tensors. Finally, be sure to use ``.to(torch.device('cuda'))`` on all model
inputs to prepare the data for the CUDA optimized model. Note that calling
``my_tensor.to(device)`` returns a new copy of ``my_tensor`` on the GPU; it does
NOT overwrite ``my_tensor``, so you must overwrite the tensor manually:
``my_tensor = my_tensor.to(torch.device('cuda'))``.
Saving ``torch.nn.DataParallel`` Models
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**Save:**
.. code:: python
torch.save(model.module.state_dict(), PATH)
**Load:**
.. code:: python
# Load onto whatever device you want
``torch.nn.DataParallel`` is a model wrapper that enables parallel GPU
utilization. To save a ``DataParallel`` model generically, save
``model.module.state_dict()``. This way, you have the flexibility to load the
model any way you want onto any device you want.
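Conversely, if a checkpoint was saved from the wrapped model (``model.state_dict()`` instead of ``model.module.state_dict()``), every key carries a ``module.`` prefix. One common fix, sketched here as pure dictionary manipulation, strips that prefix before calling ``load_state_dict()``:

```python
def strip_module_prefix(state_dict):
    """Remove the 'module.' prefix that DataParallel adds to every key."""
    prefix = "module."
    return {k[len(prefix):] if k.startswith(prefix) else k: v
            for k, v in state_dict.items()}

saved = {"module.fc.weight": "w", "module.fc.bias": "b"}
print(sorted(strip_module_prefix(saved)))  # ['fc.bias', 'fc.weight']
# model.load_state_dict(strip_module_prefix(saved))  # then load as usual
```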
```
import numpy as np
import pandas as pd
import timeit
# Please, to run the experiment download the following dataset and put it in the Datasets/ folder:
# - household_power_consumption.txt -
# https://archive.ics.uci.edu/ml/machine-learning-databases/00235/household_power_consumption.zip
# (extract the .txt file)
filename = "Datasets/household_power_consumption.txt"
df = pd.read_csv(filename, sep=';', header=0, usecols=[2,3,4])
df = df.dropna()
print(list(df.columns.values))
df['Global_active_power'] = pd.to_numeric(df['Global_active_power'], errors='coerce')
df['Global_reactive_power'] = pd.to_numeric(df['Global_reactive_power'], errors='coerce')
df['Voltage'] = pd.to_numeric(df['Voltage'], errors='coerce')
df = df.dropna()
print(df.shape)
print(df.dtypes)
df.head()
x = df[['Global_active_power','Global_reactive_power']]
x = x.to_numpy()
y = df['Voltage']
y = y.to_numpy()
n = x.shape[1]
import recombination as rb
print(x.shape)
X = np.append(x,y[np.newaxis].T,1)
xy_sq = rb.tens_sq(X)
print(xy_sq.shape)
print(xy_sq[:,n+1:].shape)
N, d = xy_sq[:,n+1:].shape
mean_t = 0.
time_rand = []
iterations_rand = []
min_t = np.inf
max_t = 0.
sample = 1000
COV = np.matmul(x.T,x)/N
for i in range(sample):
tic = timeit.default_timer()
w_star, idx_star, _, _, _, iterations, eliminated_points = rb.recomb_Mor_reset(
xy_sq[:,n+1:]-np.mean(xy_sq[:,n+1:],0), 400)
time_rand.append((timeit.default_timer()-tic)*1000)
iterations_rand.append(iterations)
################ CHECK THE BARYCENTER IS THE SAME
COV_recomb = np.zeros(COV.shape)
jj = 0
for j in idx_star:
tmp = np.matmul(x[j,:][np.newaxis].T,x[j,:][np.newaxis])
COV_recomb += tmp * w_star[jj]
jj += 1
assert np.allclose(COV_recomb,COV), "ERROR COV != COV_RECOMB"
################ CHECK FINISHED
mean_t += time_rand[-1]
print("sample = ", i)
print("time = ", time_rand[-1], "ms")
print("mean time = ", mean_t/(i+1), "ms")
min_t = min(time_rand)
max_t = max(time_rand)
print("---------------------------------------")
print("max t = ", max_t, "ms")
print("min t = ", min_t, "ms")
print("mean = ", mean_t/sample, "ms")
print("---------------------------------------")
mean_t = 0.
sample = 100
time_MT = []
min_t = np.inf
max_t = 0.
COV = np.matmul(x.T,x)/N
for i in range(sample):
x_cp = np.copy(xy_sq[:,n+1:])
tic = timeit.default_timer()
w_star, idx_star, _, _, _, iterations, eliminated_points = rb.Tchernychova_Lyons(
x_cp)
time_MT.append((timeit.default_timer()-tic)*1000)
################ CHECK
COV_recomb = np.zeros(COV.shape)
jj = 0
for j in idx_star:
tmp = np.matmul(x[j,:][np.newaxis].T,x[j,:][np.newaxis])
COV_recomb += tmp * w_star[jj]
jj += 1
assert np.allclose(COV_recomb,COV), "ERROR COV != COV_RECOMB"
################ CHECK FINISHED
mean_t += time_MT[-1]
print("sample = ", i)
print("time = ", time_MT[-1], "ms")
print("mean time = ", mean_t/(i+1), "ms")
min_t = min(time_MT)
max_t = max(time_MT)
print("---------------------------------------")
print("max t = ", max_t, "ms")
print("min t = ", min_t, "ms")
print("mean = ", mean_t/sample, "ms")
print("std FC = ", np.std(time_MT))
print("---------------------------------------")
from Maalouf_Jubran_Feldman import Fast_Caratheodory
time_FC = []
mean_t = 0.
for i in range(100):
tic = timeit.default_timer()
Fast_Caratheodory(xy_sq[:,n+1:],np.ones(N),d+1)
time_FC.append((timeit.default_timer()-tic)*1000)
mean_t += time_FC[-1]
print("sample = ", i)
print("time = ", time_FC[-1], "ms")
print("mean time = ", mean_t/(i+1), "ms")
print("---------------------------------------")
print("max FC = ", np.max(time_FC), " ms")
print("min FC = ", np.min(time_FC), " ms")
print("mean FC = ", np.mean(time_FC), " ms")
print("std FC = ", np.std(time_FC))
print("---------------------------------------")
mean_t = 0.
sample = 1000
time_log = np.zeros(sample)
min_t = 0.
max_t = 0.
COV = np.matmul(x[:,:].T,x[:,:])/N
for i in range(sample):
x_cp = np.copy(xy_sq[:,n+1:])
tic = timeit.default_timer()
w_star, idx_star, _, _, _, _, _ = rb.recomb_log(x_cp)
time_log[i] = (timeit.default_timer()-tic)*1000
################ CHECK
COV_recomb = np.zeros(COV.shape)
jj = 0
for j in idx_star:
tmp = np.matmul(x[j,:][np.newaxis].T,x[j,:][np.newaxis])
COV_recomb += tmp * w_star[jj]
jj += 1
assert np.allclose(COV_recomb,COV), "ERROR COV != COV_RECOMB"
################ CHECK FINISHED
mean_t += time_log[i]
print("sample = ", i)
print("time = ", time_log[i], "ms")
print("mean time = ", mean_t/(i+1), "ms")
mean_t = np.mean(time_log)
min_t = np.min(time_log)
max_t = np.max(time_log)
print("---------------------------------------")
print("max t = ", max_t, "ms")
print("min t = ", min_t, "ms")
print("mean = ", mean_t, "ms")
print("---------------------------------------")
time_rand = np.array(time_rand)
iterations_rand = np.array(iterations_rand)
time_FC = np.array(time_FC)
time_log = np.array(time_log)
time_MT = np.array(time_MT)
np.set_printoptions(precision=1)
print("Probability to be faster = ",
np.sum(np.array(time_rand)<np.mean(time_FC))/sample*100, "%")
print("Probability to be 4x faster = ",
np.sum(np.array(time_rand)<np.mean(time_FC)/4)/sample*100, "%")
print("Standard deviation = ", np.std(time_rand))
print("The expected time of the log-random is ", np.mean(time_log), "ms")
print("Standard deviation of the log-random is = ", np.std(time_log))
np.set_printoptions(precision=1)
print('''Some statistics for the randomized algorithm are:
average running time = ''', np.round(np.mean(time_rand),1),
"ms, min = " , np.round(np.min(time_rand),1), "ms, max = ", np.round(np.max(time_rand),1),
"ms, std ", np.round(np.std(time_rand),1),
"ms. Using the log-random strategy they are: average running time = ", np.round(np.mean(time_log),1),
"ms, min = ", np.round(np.min(time_log),1), "ms, max = ", np.round(np.max(time_log),1),
", std = ", np.round(np.std(time_log),1), "ms.",
" Average runnig times of determinsitic: TL = ", np.round(np.mean(time_MT),1),
"ms, MJF = ", np.round(np.mean(time_FC),1),"ms.")
import matplotlib.pyplot as plt
fig, axs = plt.subplots(5,1,figsize=(7,12))
################################################
plt.subplot(5, 1, 1)
plt.hist(time_rand, bins=int(90))
plt.axvline(np.mean(time_rand), 0, max(time_rand), linestyle='dashed', color="blue", label="mean randomized algo")
plt.axvline(np.mean(time_MT), 0, max(time_MT), linestyle='dashed', color="orange", label="mean det3")
plt.axvline(np.mean(time_FC), 0, max(time_rand), linestyle='dashed', color="red", label="mean det4")
plt.xlim((0, max(time_rand)))
plt.legend()
plt.title('Distribution of the running time - Power Consumption')
plt.xlabel('time (ms)')
################################################
plt.subplot(5, 1, 2)
plt.hist(iterations_rand, bins=int(90))
plt.title('Distribution of the iterations')
plt.xlabel('number of iterations')
plt.xscale('linear')
################################################
plt.subplot(5, 1, 3)
plt.plot(iterations_rand,time_rand, '.')
plt.xlabel('iterations')
plt.ylabel('time (ms)')
plt.title('Iterations vs time')
################################################
plt.subplot(5, 1, 4)
plt.hist(time_log, bins=int(10),color='limegreen')
plt.axvline(np.mean(time_rand), 0, max(time_rand), linestyle='dashed', color="blue", label="mean randomized algo")
plt.axvline(np.mean(time_log), 0, max(time_log), linestyle='dashed', color="green", label="mean log-random algo")
plt.axvline(np.mean(time_MT), 0, max(time_MT), linestyle='dashed', color="orange", label="mean det3")
plt.axvline(np.mean(time_FC), 0, max(time_rand), linestyle='dashed', color="red", label="mean det4")
plt.xlim((0, max(time_rand)))
plt.title('Distribution of the running time of the log-random algorithm')
plt.legend()
plt.xlabel('time (ms)')
################################################
plt.subplot(5, 1, 5)
plt.hist(time_rand, bins=int(250))
plt.hist(time_log, bins=int(10),color='limegreen')
plt.axvline(np.mean(time_rand), 0, max(time_rand), linestyle='dashed', color="blue", label="mean randomized algo")
plt.axvline(np.mean(time_log), 0, max(time_log), linestyle='dashed', color="green", label="mean log-random algo")
plt.axvline(np.mean(time_MT), 0, max(time_MT), linestyle='dashed', color="orange", label="mean det3")
plt.axvline(np.mean(time_FC), 0, max(time_rand), linestyle='dashed', color="red", label="mean det4")
plt.xlim((min(time_rand), max(time_rand)/4.))
plt.title('Distribution of the running time of the log-random algorithm')
plt.legend()
plt.xlabel('time (ms)')
fig.tight_layout()
# plt.savefig('Distrib_running_time_elec.pdf')#, bbox_inches='tight')
plt.show()
mean_t = 0.
time_combined = []
min_t = np.inf
max_t = 0.
sample = 1000
COV = np.matmul(x.T,x)/N
for i in range(sample):
tic = timeit.default_timer()
w_star, idx_star, _, _, _, iterations, eliminated_points = rb.recomb_combined(
xy_sq[:,n+1:], 400)
time_combined.append((timeit.default_timer()-tic)*1000)
################ CHECK THE BARYCENTER IS THE SAME
COV_recomb = np.zeros(COV.shape)
jj = 0
for j in idx_star:
tmp = np.matmul(x[j,:][np.newaxis].T,x[j,:][np.newaxis])
COV_recomb += tmp * w_star[jj]
jj += 1
assert np.allclose(COV_recomb,COV), "ERROR COV != COV_RECOMB"
################ CHECK FINISHED
mean_t += time_combined[-1]
print("sample = ", i)
print("time = ", time_combined[-1], "ms")
print("mean time = ", mean_t/(i+1), "ms")
min_t = min(time_combined)
max_t = max(time_combined)
print("---------------------------------------")
print("max t = ", max_t, "ms")
print("min t = ", min_t, "ms")
print("mean = ", mean_t/sample, "ms")
print("std = ", np.std(time_combined))
print("---------------------------------------")
maximum = max(np.mean(time_rand),np.mean(time_log),np.mean(time_MT),np.mean(time_FC),np.mean(time_combined))*2
plt.hist(time_combined,bins=int(70),color='grey')
plt.axvline(np.mean(time_combined), linestyle='dashed', color="grey", label="mean combined algo")
plt.axvline(np.mean(time_rand), linestyle='dashed', color="blue", label="mean randomized algo")
plt.axvline(np.mean(time_log), linestyle='dashed', color="green", label="mean log-random algo")
plt.axvline(np.mean(time_MT), linestyle='dashed', color="orange", label="mean det3")
plt.axvline(np.mean(time_FC), linestyle='dashed', color="red", label="mean det4")
plt.xlim((0, maximum))
plt.legend()
plt.title('Distr. running time Combined Algorithm - Power Consumption')
plt.xlabel('time (ms)')
fig.tight_layout()
# plt.savefig('Distrib_running_time_elec_combined.pdf')#, bbox_inches='tight')
plt.show()
```

# Intro to Deep Learning with Keras
#### Author: Alexander Fred Ojala
_____
# Why Keras
Keras is a modular, powerful, and intuitive deep learning Python library that runs on top of TensorFlow, CNTK, or Theano.
* Minimalist, user-friendly interface
* Integrated with Tensorflow (`tf.keras`)
* Works on CPUs and GPUs
* Open-source, developed and maintained by a community of contributors, and
publicly hosted on github
* Extremely well documented, lots of working examples: https://keras.io/
* Very shallow learning curve —> it is by far one of the best tools for experimenting, both for beginners and experts
# Comparison: Deep Learning Frameworks
Keras compiles code down to the underlying deep learning framework, so it can take somewhat longer to run. See this speed comparison of different DL frameworks:
<img src='imgs/train_times.png' width=600px></img>
```
# Suppress TensorFlow and Keras warnings for cleaner output
import warnings
warnings.simplefilter("ignore")
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import keras
```
# Keras backend
We want Keras to use Tensorflow as a backend (should be default). If the warning above does not say:
<div class='alert alert-danger'>**Using TensorFlow backend.**</div>
Then open up the keras configuration file located in:
`$HOME/.keras/keras.json`
(On Windows replace `$HOME` with `%USERPROFILE%`)
and change the entries in the JSON file to:
```json
{
"floatx": "float32",
"epsilon": 1e-07,
"backend": "tensorflow",
"image_data_format": "channels_last"
}
```
After that restart your Kernel and run the code again.
# Keras "Hello World" on Iris
### Data preprocessing
```
from sklearn import datasets
data = datasets.load_iris()
print(data.DESCR[:980])
x = data['data']
y = data['target']
y[:5]
# one hot encode y
import pandas as pd
y = pd.get_dummies(y).values
y[:5,:]
# train test split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x,
y, test_size=0.4,
random_state=1337,
shuffle=True)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
```
### The Sequential model
The simplest model in Keras is the Sequential model, a linear stack of layers.
* **Sequential model** linear stack of layers: It allows us to build NNs like legos, by adding one layer on top of the other, and swapping layers
* Graph: multi-input, multi-output, with arbitrary connections inside
```
# Core data structure in Keras is a model
# The model is an object in which we organize layers
# model initialization
from keras.models import Sequential
model = Sequential() # instantiate empty Sequential model
```
We can import layer classes and stack layers (in an NN model for example), by using `.add()`
# Specifying the input shape
The model needs to know what input shape it should expect. For this reason, the first layer in a Sequential model needs to receive information about its input shape. There are several possible ways to do this:
* Pass an input_shape argument to the first layer. This is a shape tuple (a tuple of integers or `None` entries, where `None` indicates that any positive integer may be expected).
* Some 2D layers, such as Dense, support the specification of their input shape via the argument input_dim, and some 3D temporal layers support the arguments `input_dim` and `input_length`.
**The following snippets are strictly equivalent:**
> * `model.add(Dense(32, input_shape=(784,)))`
> * `model.add(Dense(32, input_dim=784))`
# Construction Phase
```
# model construction (build the computational graph architecture)
from keras.layers import Dense
model.add( Dense(units=64, activation='relu', input_shape=(4,) ))
model.add( Dense(units=3, activation='softmax') )
```
# Compilation phase, specify learning process
Run `.compile()` on the model to specify learning process.
Before training a model, you need to configure the learning process, which is done via the compile method. It receives three arguments:
* **An optimizer:** This could be the string identifier of an existing optimizer (such as `rmsprop` or `adagrad`), or an instance of the Optimizer class.
* **A loss function:** This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function (such as `categorical_crossentropy` or `mse`), or it can be an objective function.
* **(Optional) A list of metrics:** For any classification problem you will want to set this to `metrics=['accuracy']`. A metric could be the string identifier of an existing metric or a custom metric function.
```
model.compile(loss = 'categorical_crossentropy',
optimizer = 'adam',
metrics = ['accuracy'])
```
## We can also specify our own optimizer or loss function (even build it ourselves)
```python
# or with we can specify loss function
from keras.optimizers import SGD
model.compile(loss = 'categorical_crossentropy',
optimizer = SGD(lr=0.001, momentum = 0.9, nesterov=True),
metrics = ['accuracy'])
```
### Different optimizers and their trade-offs
To read more about gradient descent optimizers, hyperparameters etc. This is a recommended reading: http://ruder.io/optimizing-gradient-descent/index.html
### Training
Keras models are trained on Numpy arrays of input data and labels. For training a model, you will typically use the fit function.
```
# Fit the model by iterating over the training data in batches
model.fit(X_train, y_train, epochs = 50, batch_size= 32)
# # Evaluate the model Accuracy on test set
model.evaluate(X_test, y_test, batch_size=60,verbose=False)[1]
# Predictions on new data:
class_probabilities = model.predict(X_test, batch_size=128)
# gives output of the softmax function
class_probabilities[:5,:]
```
# Keras DNN on MNIST
Data preprocessing
```
# Load MNIST data
from keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
img_dim = 28*28
num_classes = 10
X_train = X_train.reshape(X_train.shape[0], img_dim)
X_test = X_test.reshape(X_test.shape[0], img_dim)
X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
print(y_train[:3])
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print(y_train[:3])
# Sequential model to stack layers
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
```
# Model construction
```
# Initialize model constructor
model = Sequential()
# Add layers sequentially
model.add(Dense(300, activation=tf.nn.leaky_relu, input_shape=(784,) ) )
model.add(Dropout(.1))
# Second..
model.add(Dense(200, activation=tf.nn.leaky_relu))
model.add(Dropout(.1))
# Third..
model.add(Dense(100, activation=tf.nn.leaky_relu))
model.add(Dropout(.1))
model.add(Dense(10, activation='softmax'))
model.summary()
# For a multi-class classification problem
model.compile(optimizer='adam', #chooses suitable learning rate for you.
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=4, batch_size=128,
verbose=True)
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
plt.plot(range(4),history.history['acc'])
plt.title('accuracy per iteration')
plt.grid();
# Great accuracy for an ANN in so few training steps
```
# CNN in Keras
## 99.5% accuracy on MNIST in 12 epochs
Note this takes ~1hr to run on a CPU
### 1. Data preprocessing
```
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
batch_size = 128
num_classes = 10
epochs = 12
# input image dimensions
img_rows, img_cols = 28, 28
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
# notice that we don't flatten image
input_shape = (img_rows, img_cols, 1)
#normalize
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
```
## Model construction
```
# Almost LeNet architecture
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
```
# Model compilation
```
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy'])
```
# Model training
```
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
```
# Model evaluation
```
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
```
from adversarial_autoencoder import *
aae = AdversarialAutoencoder()
aae.autoencoder.load_weights("autoencoder.h5")
import os
import pickle
import numpy as np
import matplotlib.pyplot as plt
from skimage.io import imread, imshow
from sklearn.decomposition import PCA
# pca and svm_clf are required by the sliding-window scoring loop below
svm_clf = pickle.load(open("finalized_model.sav", "rb"))
X = np.load('embeddings.npy')
print(X.shape)
pca = PCA(n_components=3)
pca.fit(X)
%matplotlib inline
imgs = os.listdir("data/forged_patches")
im = imread("data/forged_patches/" + np.random.choice(imgs))
plt.subplot(121)
plt.imshow(im)
if im.shape != (64, 64, 3):
    raise ValueError("expected a 64x64 RGB patch, got %s" % (im.shape,))
im = (im.astype(np.float32) - 175.0) / 175.0
restored = aae.autoencoder.predict(np.expand_dims(im, 0))
restored = 0.5 * restored + 0.5
plt.subplot(122)
plt.imshow(restored[0])
imshow(np.squeeze(restored))
import os
def sliding_window(image, stepSize, windowSize):
# slide a window across the image
for y in range(0, image.shape[0], stepSize):
for x in range(0, image.shape[1], stepSize):
# yield the current window
left_x = x if x + windowSize[1] < image.shape[1] else image.shape[1] - windowSize[1]
right_x = x + windowSize[1] if x + windowSize[1] < image.shape[1] else image.shape[1]
left_y = y if y + windowSize[0] < image.shape[0] else image.shape[0] - windowSize[0]
right_y = y + windowSize[0] if y + windowSize[0] < image.shape[0] else image.shape[0]
yield (left_x, left_y, image[left_y: right_y, left_x: right_x])
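# Hedged aside (illustrative values, not part of the detection pipeline):
# the border clamping in sliding_window above can be checked directly.
# Near the right edge the window start is pulled back so the patch still
# has the full requested width.
W, win = 100, 64          # image width, window width
x = 96                    # last start produced by range(0, W, 32)
left_x = x if x + win < W else W - win
right_x = x + win if x + win < W else W
assert (left_x, right_x) == (36, 100)
assert right_x - left_x == win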
imgs = os.listdir("test_data")
im = imread("test_data/" + np.random.choice(imgs))
print(im.shape)
imshow(im)
mask = np.zeros(im.shape[:2], dtype=np.float32)  # per-pixel forgery probability
print(mask.shape)
for (x, y, patch) in sliding_window(im, 32, (64, 64)):
patch = patch.astype(np.float32)/255.0
encoding = aae.adversarial_autoencoder.layers[1].predict(np.expand_dims(patch, 0))
encoding.resize((1, 2048))
en_3d = pca.transform(encoding)
probs = svm_clf.predict_proba(en_3d)
mask[y:y+64, x:x+64] = np.maximum(mask[y:y+64, x:x+64], np.ones((64,64), dtype=np.float32) * probs[0][0])
imshow(mask)
X_train = np.load("train_patches.npy")
# X_valid = np.load("valid_patches.npy")
mean = np.mean(X_train, axis=(0, 1, 2, 3))
std = np.std(X_train, axis=(0, 1, 2, 3))
import os
import cv2
# test_imgs = np.random.choice(os.listdir("forged_patches"), 500)
test_imgs = os.listdir("forged_patches")
pristines = []
stage = len(test_imgs) // 100
cur_stage = 0
for i, idx in enumerate(test_imgs):
if (i+1) % 100 == 0:
cur_stage +=1
print("[" + "="*cur_stage + " "*(stage - cur_stage) + "]", end='\r', flush= True)
im = cv2.imread("forged_patches/" + idx)
if im.shape != (64,64,3):
continue
im = im.astype(np.float32)/255.
encoding = aae.adversarial_autoencoder.layers[1].predict(np.expand_dims(im, 0))
encoding = encoding.reshape(2048, 1)
pristines.append(encoding)
pristines = np.asarray(pristines)
print(pristines.shape)
np.save('forged', np.squeeze(pristines))
aae.autoencoder.summary()
```
```
import os
import scenic
scenic_script = "./examples/carla/car.scenic"
scenario = scenic.scenarioFromFile(scenic_script)
pos = scenario.objects[1].position
print(pos)
regions = pos.operands[0].arguments[0].region.regions
print(regions[1])
from scenic.core.vectors import *
from scenic.core.distributions import *
from scenic.core.regions import *
import shapely.geometry
import scenic.domains.driving.roads as roads
map_path = '/Users/edwardkim/Desktop/Scenic-devel/examples/carla/../../tests/formats/opendrive/maps/CARLA/Town05.xodr'
network = Network.fromFile(map_path)  # needed later by cached_variables['network']
sample = Samplable.sampleAll(scenario.dependencies)
objs = scenario.objects
# print(scenario.egoObject.position)
# print("\n")
# print(scenario.objects[1].position)
pos = sample[objs[1]].position
print(objs[1].position)
d = objs[1].position.region.dist
print(d)
lane = d.operands[0].center.object.region
print(type(lane._conditioned))
# print(lane._conditioned)
# print(isinstance(lane, Region))
# print(lane.containsPoint(pos))
from scenic.core.distributions import *
import math
from scenic.nusc_query_api import NuscQueryAPI
query = NuscQueryAPI(version = 'v1.0-trainval', dataroot='/Users/edwardkim/Desktop/nusc-query')
map_name = 'boston-seaport'
directory = "/Users/edwardkim/Desktop/nuScenes_data/samples/boston_seaport"
import os
image_dir = '/Users/edwardkim/Desktop/nusc-query/nusc_500_images'
image_fileNames = [img for img in os.listdir(image_dir) if img.endswith('.jpg')]
img = image_fileNames[0]
print(query.get_img_data(img))
ego_x = 2154.481131509386
ego_y = 876.9252939736548
ego_heading = 1.5957193453819762
ego_visibleDistance = 50
ego_viewAngle = 140 * math.pi/ 180
ego = scenario.egoObject
pos = ego.position
ego_orientedVector = OrientedVector(ego_x, ego_y, ego_heading)
pos.conditionTo(ego_orientedVector)
ego_sectorRegion = SectorRegion(ego_orientedVector, ego_visibleDistance, \
ego_orientedVector.heading, ego_viewAngle)
otherCar = scenario.objects[1]
other_pos = otherCar.position
if scenic_script == "./examples/carla/lead_car.scenic":
operatorDist = other_pos.region.dist
sectorRegion = operatorDist.operands[0]
sectorRegion._conditioned = ego_sectorRegion
smt_file_path = './test_smt_encoding.smt2'
open(smt_file_path, 'w').close()
writeSMTtoFile(smt_file_path, '(set-logic QF_NRA)')
cached_variables ={'ego': ego_orientedVector}
cached_variables['network'] = network
cached_variables['ego_view_radius'] = ego_visibleDistance
cached_variables['ego_viewAngle'] = ego_viewAngle
cached_variables['variables'] = []
cached_variables['ego_visibleRegion'] = ego_sectorRegion
cached_variables['ego_sector_polygon'] = cached_variables['ego_visibleRegion'].polygon
x = findVariableName(cached_variables, smt_file_path, cached_variables['variables'],"x")
y = findVariableName(cached_variables, smt_file_path, cached_variables['variables'],"y")
cached_variables['current_obj'] = (x, y)
point = other_pos.encodeToSMT(smt_file_path, cached_variables, debug=False)
real_x = 2143.6701080247767
real_y = 881.0164951423136
x_label = str(real_x)
y_label = str(real_y)
x_diff = '(abs '+smt_subtract(point[0], x_label)+')'
y_diff = '(abs '+smt_subtract(point[1], y_label)+')'
smt_x = smt_lessThan(x_diff, '0.01')
smt_y = smt_lessThan(y_diff, '0.01')
smt_encoding = smt_assert("and", smt_x, smt_y)
writeSMTtoFile(smt_file_path, smt_encoding)
writeSMTtoFile(smt_file_path, '(check-sat)')
writeSMTtoFile(smt_file_path, '(get-model)')
writeSMTtoFile(smt_file_path, '(exit)')
ego = scenario.egoObject
pos = ego.position
otherCar = scenario.objects[1]
other_pos = otherCar.position
print(type(other_pos))
print(other_pos)
operatorDist = other_pos.region.dist
print("OPERATORDISTRIBUTION")
print("object type: ", type(operatorDist.object))
print(operatorDist.operator)
print(operatorDist.operands[0])
sectorRegion = operatorDist.operands[0]
attributeDist = operatorDist.object
print("AttributeDistribution: ")
print("object: ", type(attributeDist.object.dist))
print("attribute: ", attributeDist.attribute)
methodDist = attributeDist.object.dist
print("MethodDistribution")
print("object: ", type(methodDist.object))
print("method: ", methodDist.method)
print("arguments: ", type(methodDist.arguments[0]))
print(ego.position)
# import matplotlib.pyplot as plt
# from scenic.core.geometry import triangulatePolygon
# from scenic.core.vectors import Vector, OrientedVector
# def VectorToTuple(vector):
# return (vector.x, vector.y)
# half_angle = 2 / 2
# radius = 10
# resolution = 24
# center = OrientedVector(-224.72849179, -82.508920774, 2)
# circle_center_pt = (center.x, center.y)
# heading = center.heading
# ctr = shapely.geometry.Point(circle_center_pt)
# circle = ctr.buffer(radius, resolution)
# mask = shapely.geometry.Polygon([circle_center_pt, VectorToTuple(center.offsetRadially(radius, heading + half_angle)), \
# VectorToTuple(center.offsetRadially(2*radius, heading)), VectorToTuple(center.offsetRadially(radius, heading - half_angle))])
# sector = circle & mask
# print(multipolygon[0])
# intersection = multipolygon[0] & sector
# inter = triangulatePolygon(intersection)
# print(inter)
# plt.plot(*inter[0].exterior.xy)
# plt.plot(*inter[1].exterior.xy)
# plt.show()
# plt.plot(*multipolygon[0].exterior.xy, color='k')
# plt.show()
# plt.plot(*sector.exterior.xy, color='g')
# plt.show()
# intersection = []
# for polygon in multipolygon:
# inter = polygon & circle
# if inter != shapely.geometry.Polygon():
# intersection.append(inter)
# import matplotlib.pyplot as plt
# print(intersection[0])
# plt.plot(*intersection[1].exterior.xy)
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Distributed training with TensorFlow
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/distributed_training"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/distributed_training.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/distributed_training.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/distributed_training.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Overview
`tf.distribute.Strategy` is a TensorFlow API to distribute training
across multiple GPUs, multiple machines or TPUs. Using this API, you can distribute your existing models and training code with minimal code changes.
`tf.distribute.Strategy` has been designed with these key goals in mind:
* Easy to use and support multiple user segments, including researchers, ML engineers, etc.
* Provide good performance out of the box.
* Easy switching between strategies.
`tf.distribute.Strategy` can be used with a high-level API like [Keras](https://www.tensorflow.org/guide/keras), and can also be used to distribute custom training loops (and, in general, any computation using TensorFlow).
In TensorFlow 2.x, you can execute your programs eagerly, or in a graph using [`tf.function`](function.ipynb). `tf.distribute.Strategy` intends to support both these modes of execution, but works best with `tf.function`. Eager mode is only recommended for debugging purposes and is not supported for `TPUStrategy`. Although we discuss training most of the time in this guide, this API can also be used for distributing evaluation and prediction on different platforms.
You can use `tf.distribute.Strategy` with very few changes to your code, because we have changed the underlying components of TensorFlow to become strategy-aware. This includes variables, layers, models, optimizers, metrics, summaries, and checkpoints.
In this guide, we explain various types of strategies and how you can use them in different situations.
Note: For a deeper understanding of the concepts, please watch [this deep-dive presentation](https://youtu.be/jKV53r9-H14). This is especially recommended if you plan to write your own training loop.
```
# Import TensorFlow
import tensorflow as tf
```
## Types of strategies
`tf.distribute.Strategy` intends to cover a number of use cases along different axes. Some of these combinations are currently supported and others will be added in the future. Some of these axes are:
* *Synchronous vs asynchronous training:* These are two common ways of distributing training with data parallelism. In sync training, all workers train over different slices of the input data in sync, aggregating gradients at each step. In async training, all workers train independently over the input data and update variables asynchronously. Typically, sync training is supported via all-reduce and async training via a parameter-server architecture.
* *Hardware platform:* You may want to scale your training onto multiple GPUs on one machine, or multiple machines in a network (with 0 or more GPUs each), or on Cloud TPUs.
In order to support these use cases, there are six strategies available. In the next section we explain which of these are supported in which scenarios in TF 2.2 at this time. Here is a quick overview:
| Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy |
|:----------------------- |:------------------- |:--------------------- |:--------------------------------- |:--------------------------------- |:-------------------------- |
| **Keras API** | Supported | Supported | Experimental support | Experimental support | Support planned post 2.3 |
| **Custom training loop** | Supported | Supported | Experimental support | Experimental support | Support planned post 2.3 |
| **Estimator API** | Limited Support | Not supported | Limited Support | Limited Support | Limited Support |
Note: [Experimental support](https://www.tensorflow.org/guide/versions#what_is_not_covered) means the APIs are not covered by any compatibility guarantees.
Note: Estimator support is limited. Basic training and evaluation are experimental, and advanced features—such as scaffold—are not implemented. We recommend using Keras or custom training loops if a use case is not covered.
### MirroredStrategy
`tf.distribute.MirroredStrategy` supports synchronous distributed training on multiple GPUs on one machine. It creates one replica per GPU device. Each variable in the model is mirrored across all the replicas. Together, these variables form a single conceptual variable called `MirroredVariable`. These variables are kept in sync with each other by applying identical updates.
Efficient all-reduce algorithms are used to communicate the variable updates across the devices.
All-reduce aggregates tensors across all the devices by adding them up, and makes them available on each device.
It’s a fused algorithm that is very efficient and can reduce the overhead of synchronization significantly. There are many all-reduce algorithms and implementations available, depending on the type of communication available between devices. By default, it uses NVIDIA NCCL as the all-reduce implementation. You can choose from a few other options we provide, or write your own.
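Conceptually, the result of an all-reduce can be sketched in a few lines of NumPy (a hedged illustration of *what* it computes, not of how NCCL implements it):

```
import numpy as np

# Each replica holds its own gradient tensor; after an all-reduce with a
# sum reduction, every replica holds the elementwise sum of all of them.
replica_grads = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
reduced = np.sum(replica_grads, axis=0)  # same value ends up on every device
print(reduced)  # → [4. 6.]
```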
Here is the simplest way of creating `MirroredStrategy`:
```
mirrored_strategy = tf.distribute.MirroredStrategy()
```
This will create a `MirroredStrategy` instance which will use all the GPUs that are visible to TensorFlow, and use NCCL as the cross device communication.
If you wish to use only some of the GPUs on your machine, you can do so like this:
```
mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
```
If you wish to override the cross device communication, you can do so using the `cross_device_ops` argument by supplying an instance of `tf.distribute.CrossDeviceOps`. Currently, `tf.distribute.HierarchicalCopyAllReduce` and `tf.distribute.ReductionToOneDevice` are two options other than `tf.distribute.NcclAllReduce` which is the default.
```
mirrored_strategy = tf.distribute.MirroredStrategy(
cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
```
### TPUStrategy
`tf.distribute.TPUStrategy` lets you run your TensorFlow training on Tensor Processing Units (TPUs). TPUs are Google's specialized ASICs designed to dramatically accelerate machine learning workloads. They are available on Google Colab, the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) and [Cloud TPU](https://cloud.google.com/tpu).
In terms of distributed training architecture, `TPUStrategy` is the same as `MirroredStrategy` - it implements synchronous distributed training. TPUs provide their own implementation of efficient all-reduce and other collective operations across multiple TPU cores, which are used in `TPUStrategy`.
Here is how you would instantiate `TPUStrategy`:
Note: To run this code in Colab, you should select TPU as the Colab runtime. See [TensorFlow TPU Guide](https://www.tensorflow.org/guide/tpu).
```
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
tpu=tpu_address)
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
tpu_strategy = tf.distribute.TPUStrategy(cluster_resolver)
```
The `TPUClusterResolver` instance helps locate the TPUs. In Colab, you don't need to specify any arguments to it.
If you want to use this for Cloud TPUs:
- You must specify the name of your TPU resource in the `tpu` argument.
- You must initialize the tpu system explicitly at the *start* of the program. This is required before TPUs can be used for computation. Initializing the tpu system also wipes out the TPU memory, so it's important to complete this step first in order to avoid losing state.
### MultiWorkerMirroredStrategy
`tf.distribute.experimental.MultiWorkerMirroredStrategy` is very similar to `MirroredStrategy`. It implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to `MirroredStrategy`, it creates copies of all variables in the model on each device across all workers.
It uses [CollectiveOps](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/collective_ops.py) as the multi-worker all-reduce communication method used to keep variables in sync. A collective op is a single op in the TensorFlow graph which can automatically choose an all-reduce algorithm in the TensorFlow runtime according to hardware, network topology and tensor sizes.
It also implements additional performance optimizations. For example, it includes a static optimization that converts multiple all-reductions on small tensors into fewer all-reductions on larger tensors. In addition, we are designing it to have a plugin architecture - so that in the future, you will be able to plugin algorithms that are better tuned for your hardware. Note that collective ops also implement other collective operations such as broadcast and all-gather.
Here is the simplest way of creating `MultiWorkerMirroredStrategy`:
```
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
```
`MultiWorkerMirroredStrategy` currently allows you to choose between two different implementations of collective ops. `CollectiveCommunication.RING` implements ring-based collectives using gRPC as the communication layer. `CollectiveCommunication.NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `CollectiveCommunication.AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. You can specify them in the following way:
```
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
tf.distribute.experimental.CollectiveCommunication.NCCL)
```
One of the key differences to get multi-worker training going, as compared to multi-GPU training, is the multi-worker setup. The `TF_CONFIG` environment variable is the standard way in TensorFlow to specify the cluster configuration to each worker that is part of the cluster. Learn more about [setting up TF_CONFIG](#TF_CONFIG).
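`TF_CONFIG` is a JSON string set in each worker's environment. A hedged sketch of its shape (the host addresses below are placeholders):

```
import json
import os

# Placeholder cluster spec: two workers; `task` identifies this process.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host1:12345", "host2:23456"]},
    "task": {"type": "worker", "index": 0},
})

config = json.loads(os.environ["TF_CONFIG"])
print(config["task"]["index"])  # → 0
```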
Note: This strategy is [`experimental`](https://www.tensorflow.org/guide/versions#what_is_not_covered) as we are currently improving it and making it work for more scenarios. As part of this, please expect the APIs to change in the future.
### CentralStorageStrategy
`tf.distribute.experimental.CentralStorageStrategy` does synchronous training as well. Variables are not mirrored; instead, they are placed on the CPU and operations are replicated across all local GPUs. If there is only one GPU, all variables and operations will be placed on that GPU.
Create an instance of `CentralStorageStrategy` by:
```
central_storage_strategy = tf.distribute.experimental.CentralStorageStrategy()
```
This will create a `CentralStorageStrategy` instance which will use all visible GPUs and the CPU. Updates to variables on replicas will be aggregated before being applied to variables.
Note: This strategy is [`experimental`](https://www.tensorflow.org/guide/versions#what_is_not_covered) as we are currently improving it and making it work for more scenarios. As part of this, please expect the APIs to change in the future.
### ParameterServerStrategy
In TF1, `ParameterServerStrategy` is available only with estimator via `tf.compat.v1.distribute.experimental.ParameterServerStrategy` symbol, and usage in TF2 is being developed for a later release. For this strategy, some machines are designated as workers and some as parameter servers. Each variable of the model is placed on one parameter server. Computation is replicated across all the workers.
### Other strategies
In addition to the above strategies, there are two other strategies which might be useful for prototyping and debugging when using `tf.distribute` APIs.
#### Default Strategy
Default strategy is a distribution strategy which is present when no explicit distribution strategy is in scope. It implements the `tf.distribute.Strategy` interface but is a pass-through and provides no actual distribution. For instance, `strategy.run(fn)` will simply call `fn`. Code written using this strategy should behave exactly as code written without any strategy. You can think of it as a "no-op" strategy.
Default strategy is a singleton - and one cannot create more instances of it. It can be obtained using `tf.distribute.get_strategy()` outside any explicit strategy's scope (the same API that can be used to get the current strategy inside an explicit strategy's scope).
```
default_strategy = tf.distribute.get_strategy()
```
This strategy serves two main purposes:
* It allows writing distribution-aware library code unconditionally. For example, in an optimizer we can call `tf.distribute.get_strategy()` and use that strategy for reducing gradients - it will always return a strategy object on which we can call the reduce API.
```
# In optimizer or other library code
# Get currently active strategy
strategy = tf.distribute.get_strategy()
strategy.reduce("SUM", 1., axis=None) # reduce some values
```
* Similar to library code, it can be used to write end users' programs to work with and without distribution strategy, without requiring conditional logic. A sample code snippet illustrating this:
```
if tf.config.list_physical_devices('GPU'):
strategy = tf.distribute.MirroredStrategy()
else: # use default strategy
strategy = tf.distribute.get_strategy()
with strategy.scope():
# do something interesting
print(tf.Variable(1.))
```
#### OneDeviceStrategy
`tf.distribute.OneDeviceStrategy` is a strategy to place all variables and computation on a single specified device.
```
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
```
This strategy is distinct from the default strategy in a number of ways. In default strategy, the variable placement logic remains unchanged when compared to running TensorFlow without any distribution strategy. But when using `OneDeviceStrategy`, all variables created in its scope are explicitly placed on the specified device. Moreover, any functions called via `OneDeviceStrategy.run` will also be placed on the specified device.
Input distributed through this strategy will be prefetched to the specified device. In default strategy, there is no input distribution.
Similar to the default strategy, this strategy could also be used to test your code before switching to other strategies which actually distribute to multiple devices/machines. This will exercise the distribution strategy machinery somewhat more than default strategy, but not to the full extent as using `MirroredStrategy` or `TPUStrategy` etc. If you want code that behaves as if no strategy, then use default strategy.
So far we've talked about the different strategies available and how you can instantiate them. In the next few sections, we will talk about the different ways in which you can use them to distribute your training. We will show short code snippets in this guide and link off to full tutorials which you can run end to end.
## Using `tf.distribute.Strategy` with `tf.keras.Model.fit`
We've integrated `tf.distribute.Strategy` into `tf.keras` which is TensorFlow's implementation of the
[Keras API specification](https://keras.io). `tf.keras` is a high-level API to build and train models. By integrating into `tf.keras` backend, we've made it seamless for you to distribute your training written in the Keras training framework using `model.fit`.
Here's what you need to change in your code:
1. Create an instance of the appropriate `tf.distribute.Strategy`.
2. Move the creation of Keras model, optimizer and metrics inside `strategy.scope`.
We support all types of Keras models - sequential, functional and subclassed.
Here is a snippet of code to do this for a very simple Keras model with one dense layer:
```
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(loss='mse', optimizer='sgd')
```
In this example we used `MirroredStrategy` so we can run this on a machine with multiple GPUs. `strategy.scope()` indicates to Keras which strategy to use to distribute the training. Creating models/optimizers/metrics inside this scope allows us to create distributed variables instead of regular variables. Once this is set up, you can fit your model like you would normally. `MirroredStrategy` takes care of replicating the model's training on the available GPUs, aggregating gradients, and more.
```
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(10)
model.fit(dataset, epochs=2)
model.evaluate(dataset)
```
Here we used a `tf.data.Dataset` to provide the training and eval input. You can also use numpy arrays:
```
import numpy as np
inputs, targets = np.ones((100, 1)), np.ones((100, 1))
model.fit(inputs, targets, epochs=2, batch_size=10)
```
In both cases (dataset or numpy), each batch of the given input is divided equally among the multiple replicas. For instance, if using `MirroredStrategy` with 2 GPUs, each batch of size 10 will get divided among the 2 GPUs, with each receiving 5 input examples in each step. Each epoch will then train faster as you add more GPUs. Typically, you would want to increase your batch size as you add more accelerators so as to make effective use of the extra computing power. You will also need to re-tune your learning rate, depending on the model. You can use `strategy.num_replicas_in_sync` to get the number of replicas.
```
# Compute global batch size using number of replicas.
BATCH_SIZE_PER_REPLICA = 5
global_batch_size = (BATCH_SIZE_PER_REPLICA *
mirrored_strategy.num_replicas_in_sync)
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100)
dataset = dataset.batch(global_batch_size)
LEARNING_RATES_BY_BATCH_SIZE = {5: 0.1, 10: 0.15}
learning_rate = LEARNING_RATES_BY_BATCH_SIZE[global_batch_size]
```
### What's supported now?
| Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy |
|---------------- |--------------------- |----------------------- |----------------------------------- |----------------------------------- |--------------------------- |
| Keras APIs | Supported | Supported | Experimental support | Experimental support | Support planned post 2.3 |
### Examples and Tutorials
Here is a list of tutorials and examples that illustrate the above integration end to end with Keras:
1. [Tutorial](https://www.tensorflow.org/tutorials/distribute/keras) to train MNIST with `MirroredStrategy`.
2. [Tutorial](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras) to train MNIST using `MultiWorkerMirroredStrategy`.
3. [Guide](https://www.tensorflow.org/guide/tpu#train_a_model_using_keras_high_level_apis) on training MNIST using `TPUStrategy`.
4. TensorFlow Model Garden [repository](https://github.com/tensorflow/models/tree/master/official) containing collections of state-of-the-art models implemented using various strategies.
## Using `tf.distribute.Strategy` with custom training loops
As you've seen, using `tf.distribute.Strategy` with Keras `model.fit` requires changing only a couple lines of your code. With a little more effort, you can also use `tf.distribute.Strategy` with custom training loops.
If you need more flexibility and control over your training loops than is possible with Estimator or Keras, you can write custom training loops. For instance, when using a GAN, you may want to take a different number of generator or discriminator steps each round. Similarly, the high level frameworks are not very suitable for Reinforcement Learning training.
To support custom training loops, we provide a core set of methods through the `tf.distribute.Strategy` classes. Using these may require minor restructuring of the code initially, but once that is done, you should be able to switch between GPUs, TPUs, and multiple machines simply by changing the strategy instance.
Here we will show a brief snippet illustrating this use case for a simple training example using the same Keras model as before.
First, we create the model and optimizer inside the strategy's scope. This ensures that any variables created with the model and optimizer are mirrored variables.
```
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
optimizer = tf.keras.optimizers.SGD()
```
Next, we create the input dataset and call `tf.distribute.Strategy.experimental_distribute_dataset` to distribute the dataset based on the strategy.
```
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(
global_batch_size)
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
```
Then, we define one step of the training. We will use `tf.GradientTape` to compute gradients and the optimizer to apply those gradients to update our model's variables. To distribute this training step, we put it in a function `train_step` and pass it to `tf.distribute.Strategy.run` along with the dataset inputs that we get from `dist_dataset` created before:
```
loss_object = tf.keras.losses.BinaryCrossentropy(
from_logits=True,
reduction=tf.keras.losses.Reduction.NONE)
def compute_loss(labels, predictions):
per_example_loss = loss_object(labels, predictions)
return tf.nn.compute_average_loss(per_example_loss, global_batch_size=global_batch_size)
def train_step(inputs):
features, labels = inputs
with tf.GradientTape() as tape:
predictions = model(features, training=True)
loss = compute_loss(labels, predictions)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
return loss
@tf.function
def distributed_train_step(dist_inputs):
per_replica_losses = mirrored_strategy.run(train_step, args=(dist_inputs,))
return mirrored_strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses,
axis=None)
```
A few other things to note in the code above:
1. We used `tf.nn.compute_average_loss` to compute the loss. `tf.nn.compute_average_loss` sums the per-example loss and divides the sum by the `global_batch_size`. This is important because later, after the gradients are calculated on each replica, they are aggregated across the replicas by **summing** them.
2. We used the `tf.distribute.Strategy.reduce` API to aggregate the results returned by `tf.distribute.Strategy.run`. `tf.distribute.Strategy.run` returns results from each local replica in the strategy, and there are multiple ways to consume this result. You can `reduce` them to get an aggregated value. You can also do `tf.distribute.Strategy.experimental_local_results` to get the list of values contained in the result, one per local replica.
3. When `apply_gradients` is called within a distribution strategy scope, its behavior is modified. Specifically, before applying gradients on each parallel instance during synchronous training, it performs a sum-over-all-replicas of the gradients.
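To see why dividing by the global batch size composes correctly with summing across replicas, here is a small numpy-only sketch (the replica count and loss values are made up for illustration; no TensorFlow involved):

```
import numpy as np

# Toy illustration: a global batch of 8 examples split across 2 replicas.
per_example_loss = np.arange(8, dtype=np.float64)
global_batch_size = per_example_loss.size
true_mean = per_example_loss.mean()  # the loss we actually want to optimize

# Each replica sums its own 4 losses but divides by the GLOBAL batch size,
# which is what tf.nn.compute_average_loss does.
replica_losses = [chunk.sum() / global_batch_size
                  for chunk in np.split(per_example_loss, 2)]

# Cross-replica aggregation SUMS the per-replica values (gradients are
# aggregated the same way), recovering the true global mean.
assert np.isclose(sum(replica_losses), true_mean)
```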
Finally, once we have defined the training step, we can iterate over `dist_dataset` and run the training in a loop:
```
for dist_inputs in dist_dataset:
print(distributed_train_step(dist_inputs))
```
In the example above, we iterated over the `dist_dataset` to provide input to the training. We also provide the `tf.distribute.Strategy.make_experimental_numpy_dataset` API to support numpy inputs. You can use this API to create a dataset before calling `tf.distribute.Strategy.experimental_distribute_dataset`.
Another way of iterating over your data is to explicitly use iterators. You may want to do this when you want to run for a given number of steps as opposed to iterating over the entire dataset.
The above iteration would now be modified to first create an iterator and then explicitly call `next` on it to get the input data.
```
iterator = iter(dist_dataset)
for _ in range(10):
print(distributed_train_step(next(iterator)))
```
This covers the simplest case of using `tf.distribute.Strategy` API to distribute custom training loops. We are in the process of improving these APIs. Since this use case requires more work to adapt your code, we will be publishing a separate detailed guide in the future.
### What's supported now?
| Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy |
|:----------------------- |:------------------- |:------------------- |:----------------------------- |:------------------------ |:------------------------- |
| Custom Training Loop | Supported | Supported | Experimental support | Experimental support | Support planned post 2.3 |
### Examples and Tutorials
Here are some examples for using distribution strategy with custom training loops:
1. [Tutorial](https://www.tensorflow.org/tutorials/distribute/custom_training) to train MNIST using `MirroredStrategy`.
2. [Guide](https://www.tensorflow.org/guide/tpu#train_a_model_using_custom_training_loop) on training MNIST using `TPUStrategy`.
3. TensorFlow Model Garden [repository](https://github.com/tensorflow/models/tree/master/official) containing collections of state-of-the-art models implemented using various strategies.
## Using `tf.distribute.Strategy` with Estimator (Limited support)
`tf.estimator` is a distributed training TensorFlow API that originally supported the async parameter server approach. Like with Keras, we've integrated `tf.distribute.Strategy` into `tf.Estimator`. If you're using Estimator for your training, you can easily change to distributed training with very few changes to your code. With this, Estimator users can now do synchronous distributed training on multiple GPUs and multiple workers, as well as use TPUs. This support in Estimator is, however, limited. See [What's supported now](#estimator_support) section below for more details.
The usage of `tf.distribute.Strategy` with Estimator is slightly different than the Keras case. Instead of using `strategy.scope`, now we pass the strategy object into the [`RunConfig`](https://www.tensorflow.org/api_docs/python/tf/estimator/RunConfig) for the Estimator.
Here is a snippet of code that shows this with a premade Estimator `LinearRegressor` and `MirroredStrategy`:
```
mirrored_strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(
train_distribute=mirrored_strategy, eval_distribute=mirrored_strategy)
regressor = tf.estimator.LinearRegressor(
feature_columns=[tf.feature_column.numeric_column('feats')],
optimizer='SGD',
config=config)
```
We use a premade Estimator here, but the same code works with a custom Estimator as well. `train_distribute` determines how training will be distributed, and `eval_distribute` determines how evaluation will be distributed. This is another difference from Keras where we use the same strategy for both training and eval.
Now we can train and evaluate this Estimator with an input function:
```
def input_fn():
dataset = tf.data.Dataset.from_tensors(({"feats":[1.]}, [1.]))
return dataset.repeat(1000).batch(10)
regressor.train(input_fn=input_fn, steps=10)
regressor.evaluate(input_fn=input_fn, steps=10)
```
Another difference to highlight here between Estimator and Keras is the input handling. In Keras, we mentioned that each batch of the dataset is split automatically across the multiple replicas. In Estimator, however, we do not automatically split batches, nor automatically shard the data across different workers. You have full control over how you want your data to be distributed across workers and devices, and you must provide an `input_fn` to specify how to distribute your data.
Your `input_fn` is called once per worker, thus giving one dataset per worker. Then one batch from that dataset is fed to one replica on that worker, thereby consuming N batches for N replicas on 1 worker. In other words, the dataset returned by the `input_fn` should provide batches of size `PER_REPLICA_BATCH_SIZE`. And the global batch size for a step can be obtained as `PER_REPLICA_BATCH_SIZE * strategy.num_replicas_in_sync`.
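As a concrete sketch of this arithmetic (the numbers below are made up for illustration, not TensorFlow API calls):

```
# Illustrative numbers only.
PER_REPLICA_BATCH_SIZE = 16   # what input_fn's dataset should batch at
num_replicas_in_sync = 8      # e.g. 2 workers x 4 GPUs each; this is what
                              # strategy.num_replicas_in_sync would report
global_batch_size = PER_REPLICA_BATCH_SIZE * num_replicas_in_sync
assert global_batch_size == 128  # examples consumed per training step
```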
When doing multi-worker training, you should either split your data across the workers, or shuffle with a random seed on each. You can see an example of how to do this in the [Multi-worker Training with Estimator](../tutorials/distribute/multi_worker_with_estimator.ipynb) tutorial.
Similarly, you can use multi-worker and parameter server strategies. The code remains the same, but you need to use `tf.estimator.train_and_evaluate`, and set the `TF_CONFIG` environment variable for each binary running in your cluster.
<a name="estimator_support"></a>
### What's supported now?
There is limited support for training with Estimator using all strategies except `TPUStrategy`. Basic training and evaluation should work, but a number of advanced features such as scaffold do not yet work. There may also be a number of bugs in this integration. At this time, we do not plan to actively improve this support, and instead are focused on Keras and custom training loop support. If at all possible, you should prefer to use `tf.distribute` with those APIs instead.
| Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy |
|:--------------- |:------------------ |:------------- |:----------------------------- |:------------------------ |:------------------------- |
| Estimator API | Limited Support | Not supported | Limited Support | Limited Support | Limited Support |
### Examples and Tutorials
Here are some examples that show end to end usage of various strategies with Estimator:
1. [Multi-worker Training with Estimator](../tutorials/distribute/multi_worker_with_estimator.ipynb) to train MNIST with multiple workers using `MultiWorkerMirroredStrategy`.
2. [End to end example](https://github.com/tensorflow/ecosystem/tree/master/distribution_strategy) for multi worker training in tensorflow/ecosystem using Kubernetes templates. This example starts with a Keras model and converts it to an Estimator using the `tf.keras.estimator.model_to_estimator` API.
3. Official [ResNet50](https://github.com/tensorflow/models/blob/master/official/vision/image_classification/resnet_imagenet_main.py) model, which can be trained using either `MirroredStrategy` or `MultiWorkerMirroredStrategy`.
## Other topics
In this section, we will cover some topics that are relevant to multiple use cases.
<a name="TF_CONFIG"></a>
### Setting up TF\_CONFIG environment variable
For multi-worker training, as mentioned before, you need to set the `TF_CONFIG` environment variable for each
binary running in your cluster. The `TF_CONFIG` environment variable is a JSON string which specifies what
tasks constitute a cluster, their addresses and each task's role in the cluster. We provide a Kubernetes template in the
[tensorflow/ecosystem](https://github.com/tensorflow/ecosystem) repo which sets
`TF_CONFIG` for your training tasks.
There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster: it is a dict describing the different types of jobs, such as `worker`. In multi-worker training, there is usually one worker that takes on a little more responsibility than a regular worker, such as saving checkpoints and writing summary files for TensorBoard. Such a worker is referred to as the 'chief' worker, and it is customary that the worker with index 0 is appointed as the chief (in fact, this is how `tf.distribute.Strategy` is implemented). `task`, on the other hand, provides information about the current task. The `cluster` component is the same for all workers, while the `task` component is different on each worker and specifies that worker's type and index.
One example of `TF_CONFIG` is:
```
os.environ["TF_CONFIG"] = json.dumps({
"cluster": {
"worker": ["host1:port", "host2:port", "host3:port"],
"ps": ["host4:port", "host5:port"]
},
"task": {"type": "worker", "index": 1}
})
```
This `TF_CONFIG` specifies that there are three workers and two `ps` tasks in the cluster, along with their hosts and ports. The `task` part specifies the role of the current task in that cluster: worker 1 (the second worker). Valid roles in a cluster are "chief", "worker", "ps", and "evaluator". There should be no "ps" job except when using `tf.distribute.experimental.ParameterServerStrategy`.
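For example, a binary can parse its own `TF_CONFIG` to decide whether it should perform chief-only duties such as checkpointing. The following is a minimal standard-library sketch; the `is_chief` helper is our own, not a TensorFlow API:

```
import json
import os

# Assume the TF_CONFIG from the example above has been set for this binary.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "worker": ["host1:port", "host2:port", "host3:port"],
        "ps": ["host4:port", "host5:port"]
    },
    "task": {"type": "worker", "index": 1}
})

def is_chief():
    config = json.loads(os.environ.get("TF_CONFIG", "{}"))
    task = config.get("task", {})
    # If the cluster has no explicit "chief" job, worker 0 conventionally
    # acts as the chief.
    if "chief" in config.get("cluster", {}):
        return task.get("type") == "chief"
    return task.get("type") == "worker" and task.get("index") == 0

print(is_chief())  # False -- this task is worker 1, not worker 0
```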
## What's next?
`tf.distribute.Strategy` is actively under development. We welcome you to try it out and provide your feedback using [GitHub issues](https://github.com/tensorflow/tensorflow/issues/new).
### Dependencies
```
import os
import cv2
import math
import random
import shutil
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import multiprocessing as mp
import albumentations as albu
import matplotlib.pyplot as plt
from tensorflow import set_random_seed
from sklearn.model_selection import train_test_split
from keras import optimizers
from keras import backend as K
from keras.utils import Sequence
from keras.losses import binary_crossentropy
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint, Callback
def seed_everything(seed=0):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
set_random_seed(seed)
seed = 0
seed_everything(seed)
warnings.filterwarnings("ignore")
!pip install segmentation-models
import segmentation_models as sm
```
### Load data
```
train = pd.read_csv('../input/understanding_cloud_organization/train.csv')
submission = pd.read_csv('../input/understanding_cloud_organization/sample_submission.csv')
hold_out_set = pd.read_csv('../input/clouds-data-split/hold-out.csv')
X_train = hold_out_set[hold_out_set['set'] == 'train']
X_val = hold_out_set[hold_out_set['set'] == 'validation']
print('Complete set samples:', len(train))
print('Train samples: ', len(X_train))
print('Validation samples: ', len(X_val))
print('Test samples:', len(submission))
# Preprocess data
train['image'] = train['Image_Label'].apply(lambda x: x.split('_')[0])
submission['image'] = submission['Image_Label'].apply(lambda x: x.split('_')[0])
test = pd.DataFrame(submission['image'].unique(), columns=['image'])
test['set'] = 'test'
display(X_train.head())
```
# Model parameters
```
BACKBONE = 'resnet34'
BATCH_SIZE = 32
EPOCHS = 15
LEARNING_RATE = 3e-4
HEIGHT = 320
WIDTH = 480
CHANNELS = 3
N_CLASSES = 4
ES_PATIENCE = 5
RLROP_PATIENCE = 3
DECAY_DROP = 0.5
LR_WARMUP_EPOCHS = 1
WARMUP_BATCHES = (LR_WARMUP_EPOCHS * len(X_train)) // BATCH_SIZE
model_path = '../working/uNet_%s_%sx%s.h5' % (BACKBONE, HEIGHT, WIDTH)
preprocessing = sm.backbones.get_preprocessing(BACKBONE)
augmentation = albu.Compose([albu.HorizontalFlip(p=0.5),
albu.VerticalFlip(p=0.5),
albu.GridDistortion(p=0.2),
albu.ElasticTransform(p=0.2)
])
```
### Auxiliary functions
```
def np_resize(img, input_shape):
height, width = input_shape
return cv2.resize(img, (width, height))
def mask2rle(img):
    pixels = img.T.flatten()
pixels = np.concatenate([[0], pixels, [0]])
runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
runs[1::2] -= runs[::2]
return ' '.join(str(x) for x in runs)
def build_rles(masks, reshape=None):
width, height, depth = masks.shape
rles = []
for i in range(depth):
mask = masks[:, :, i]
if reshape:
mask = mask.astype(np.float32)
mask = np_resize(mask, reshape).astype(np.int64)
rle = mask2rle(mask)
rles.append(rle)
return rles
def build_masks(rles, input_shape, reshape=None):
depth = len(rles)
if reshape is None:
masks = np.zeros((*input_shape, depth))
else:
masks = np.zeros((*reshape, depth))
for i, rle in enumerate(rles):
if type(rle) is str:
if reshape is None:
masks[:, :, i] = rle2mask(rle, input_shape)
else:
mask = rle2mask(rle, input_shape)
reshaped_mask = np_resize(mask, reshape)
masks[:, :, i] = reshaped_mask
return masks
def rle2mask(rle, input_shape):
    width, height = input_shape[:2]
    mask = np.zeros(width*height).astype(np.uint8)
    array = np.asarray([int(x) for x in rle.split()])
    starts = array[0::2] - 1  # RLE pixel starts are one-indexed
    lengths = array[1::2]
    for start, length in zip(starts, lengths):
        mask[start:start+length] = 1
    return mask.reshape(height, width).T
def dice_coefficient(y_true, y_pred):
    y_true = np.asarray(y_true).astype(bool)  # plain bool: np.bool is deprecated
    y_pred = np.asarray(y_pred).astype(bool)
intersection = np.logical_and(y_true, y_pred)
return (2. * intersection.sum()) / (y_true.sum() + y_pred.sum())
def dice_coef(y_true, y_pred, smooth=1):
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = K.sum(y_true_f * y_pred_f)
return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
def post_process(probability, threshold=0.5, min_size=10000):
mask = cv2.threshold(probability, threshold, 1, cv2.THRESH_BINARY)[1]
num_component, component = cv2.connectedComponents(mask.astype(np.uint8))
predictions = np.zeros(probability.shape, np.float32)
for c in range(1, num_component):
p = (component == c)
if p.sum() > min_size:
predictions[p] = 1
return predictions
def preprocess_image(image_id, base_path, save_path, HEIGHT=HEIGHT, WIDTH=WIDTH):
image = cv2.imread(base_path + image_id)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (WIDTH, HEIGHT))
cv2.imwrite(save_path + image_id, image)
def preprocess_data(df, HEIGHT=HEIGHT, WIDTH=WIDTH):
df = df.reset_index()
for i in range(df.shape[0]):
item = df.iloc[i]
image_id = item['image']
item_set = item['set']
if item_set == 'train':
preprocess_image(image_id, train_base_path, train_images_dest_path)
if item_set == 'validation':
preprocess_image(image_id, train_base_path, validation_images_dest_path)
if item_set == 'test':
preprocess_image(image_id, test_base_path, test_images_dest_path)
def get_metrics(model, df, generator, min_mask_sizes, set_name='Complete set'):
generator.shuffle = False
generator.augment = None
generator.batch_size = 1
column_names = ['Fish', 'Flower', 'Gravel', 'Sugar', set_name]
index_name = ['Dice Coeff']
dice = []
dice_post = []
for sample in range(len(df)):
x, y = generator.__getitem__(sample)
preds = model.predict(x)[0]
sample_dice = []
sample_dice_post = []
for class_index in range(N_CLASSES):
label_mask = y[..., class_index]
pred_mask = preds[..., class_index]
class_dice = dice_coefficient(pred_mask, label_mask)
pred_mask_post = post_process(pred_mask, min_size=min_mask_sizes[class_index])
class_dice_post = dice_coefficient(pred_mask_post, label_mask)
if math.isnan(class_dice_post):
class_dice_post = 0.0
sample_dice.append(class_dice)
sample_dice_post.append(class_dice_post)
dice.append(sample_dice)
dice_post.append(sample_dice_post)
dice_class = np.mean(dice, axis=0)
dice = np.mean(dice_class, axis=0)
metrics = np.append(dice_class, dice)
metrics = pd.DataFrame(metrics.reshape(1, metrics.shape[0]), columns=column_names, index=index_name)
dice_class_post = np.mean(dice_post, axis=0)
dice_post = np.mean(dice_class_post, axis=0)
metrics_post = np.append(dice_class_post, dice_post)
metrics_post = pd.DataFrame(metrics_post.reshape(1, metrics_post.shape[0]), columns=column_names, index=index_name)
return metrics, metrics_post
def plot_metrics(history):
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex='col', figsize=(20, 14))
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['dice_coef'], label='Train Dice coefficient')
ax2.plot(history['val_dice_coef'], label='Validation Dice coefficient')
ax2.legend(loc='best')
ax2.set_title('Dice coefficient')
ax3.plot(history['score'], label='Train F-Score')
ax3.plot(history['val_score'], label='Validation F-Score')
ax3.legend(loc='best')
ax3.set_title('F-Score')
plt.xlabel('Epochs')
sns.despine()
plt.show()
def pre_process_set(df, preprocess_fn):
n_cpu = mp.cpu_count()
df_n_cnt = df.shape[0]//n_cpu
pool = mp.Pool(n_cpu)
dfs = [df.iloc[df_n_cnt*i:df_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = df.iloc[df_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_fn, [x_df for x_df in dfs])
pool.close()
class WarmUpLearningRateScheduler(Callback):
def __init__(self, warmup_batches, init_lr, verbose=0):
"""
Constructor for warmup learning rate scheduler
:param warmup_batches {int}: Number of batch for warmup.
:param init_lr {float}: Learning rate after warmup.
:param verbose {int}: 0: quiet, 1: update messages. (default: {0})
"""
super(WarmUpLearningRateScheduler, self).__init__()
self.warmup_batches = warmup_batches
self.init_lr = init_lr
self.verbose = verbose
self.batch_count = 0
self.learning_rates = []
def on_batch_end(self, batch, logs=None):
self.batch_count = self.batch_count + 1
lr = K.get_value(self.model.optimizer.lr)
self.learning_rates.append(lr)
def on_batch_begin(self, batch, logs=None):
if self.batch_count <= self.warmup_batches:
lr = self.batch_count * self.init_lr / self.warmup_batches
K.set_value(self.model.optimizer.lr, lr)
if self.verbose > 0:
print('\nBatch %02d: WarmUpLearningRateScheduler setting learning rate to %s.' % (self.batch_count + 1, lr))
class RAdam(optimizers.Optimizer):
"""RAdam optimizer.
# Arguments
lr: float >= 0. Learning rate.
beta_1: float, 0 < beta < 1. Generally close to 1.
beta_2: float, 0 < beta < 1. Generally close to 1.
epsilon: float >= 0. Fuzz factor. If `None`, defaults to `K.epsilon()`.
decay: float >= 0. Learning rate decay over each update.
weight_decay: float >= 0. Weight decay for each param.
amsgrad: boolean. Whether to apply the AMSGrad variant of this
algorithm from the paper "On the Convergence of Adam and
Beyond".
# References
- [Adam - A Method for Stochastic Optimization](https://arxiv.org/abs/1412.6980v8)
- [On the Convergence of Adam and Beyond](https://openreview.net/forum?id=ryQu7f-RZ)
- [On The Variance Of The Adaptive Learning Rate And Beyond](https://arxiv.org/pdf/1908.03265v1.pdf)
"""
def __init__(self, lr=0.001, beta_1=0.9, beta_2=0.999,
epsilon=None, decay=0., weight_decay=0., amsgrad=False, **kwargs):
super(RAdam, self).__init__(**kwargs)
with K.name_scope(self.__class__.__name__):
self.iterations = K.variable(0, dtype='int64', name='iterations')
self.lr = K.variable(lr, name='lr')
self.beta_1 = K.variable(beta_1, name='beta_1')
self.beta_2 = K.variable(beta_2, name='beta_2')
self.decay = K.variable(decay, name='decay')
self.weight_decay = K.variable(weight_decay, name='weight_decay')
if epsilon is None:
epsilon = K.epsilon()
self.epsilon = epsilon
self.initial_decay = decay
self.initial_weight_decay = weight_decay
self.amsgrad = amsgrad
def get_updates(self, loss, params):
grads = self.get_gradients(loss, params)
self.updates = [K.update_add(self.iterations, 1)]
lr = self.lr
if self.initial_decay > 0:
lr = lr * (1. / (1. + self.decay * K.cast(self.iterations, K.dtype(self.decay))))
t = K.cast(self.iterations, K.floatx()) + 1
ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p), name='m_' + str(i)) for (i, p) in enumerate(params)]
vs = [K.zeros(K.int_shape(p), dtype=K.dtype(p), name='v_' + str(i)) for (i, p) in enumerate(params)]
if self.amsgrad:
vhats = [K.zeros(K.int_shape(p), dtype=K.dtype(p), name='vhat_' + str(i)) for (i, p) in enumerate(params)]
else:
vhats = [K.zeros(1, name='vhat_' + str(i)) for i in range(len(params))]
self.weights = [self.iterations] + ms + vs + vhats
beta_1_t = K.pow(self.beta_1, t)
beta_2_t = K.pow(self.beta_2, t)
sma_inf = 2.0 / (1.0 - self.beta_2) - 1.0
sma_t = sma_inf - 2.0 * t * beta_2_t / (1.0 - beta_2_t)
for p, g, m, v, vhat in zip(params, grads, ms, vs, vhats):
m_t = (self.beta_1 * m) + (1. - self.beta_1) * g
v_t = (self.beta_2 * v) + (1. - self.beta_2) * K.square(g)
m_corr_t = m_t / (1.0 - beta_1_t)
if self.amsgrad:
vhat_t = K.maximum(vhat, v_t)
v_corr_t = K.sqrt(vhat_t / (1.0 - beta_2_t) + self.epsilon)
self.updates.append(K.update(vhat, vhat_t))
else:
v_corr_t = K.sqrt(v_t / (1.0 - beta_2_t) + self.epsilon)
r_t = K.sqrt((sma_t - 4.0) / (sma_inf - 4.0) *
(sma_t - 2.0) / (sma_inf - 2.0) *
sma_inf / sma_t)
p_t = K.switch(sma_t > 5, r_t * m_corr_t / v_corr_t, m_corr_t)
if self.initial_weight_decay > 0:
p_t += self.weight_decay * p
p_t = p - lr * p_t
self.updates.append(K.update(m, m_t))
self.updates.append(K.update(v, v_t))
new_p = p_t
# Apply constraints.
if getattr(p, 'constraint', None) is not None:
new_p = p.constraint(new_p)
self.updates.append(K.update(p, new_p))
return self.updates
def get_config(self):
config = {
'lr': float(K.get_value(self.lr)),
'beta_1': float(K.get_value(self.beta_1)),
'beta_2': float(K.get_value(self.beta_2)),
'decay': float(K.get_value(self.decay)),
'weight_decay': float(K.get_value(self.weight_decay)),
'epsilon': self.epsilon,
'amsgrad': self.amsgrad,
}
base_config = super(RAdam, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
```
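As a standalone sanity check of the RLE helpers defined above, the sketch below repeats the encode/decode pair on a toy mask. Note that `mask2rle` emits one-indexed pixel starts (the competition's RLE convention), so the decoder must subtract 1 before slicing; the helper names here are ours, used for this check only:

```
import numpy as np

def rle_encode(img):
    pixels = img.T.flatten()  # column-major traversal of the mask
    pixels = np.concatenate([[0], pixels, [0]])
    runs = np.where(pixels[1:] != pixels[:-1])[0] + 1  # one-indexed starts
    runs[1::2] -= runs[::2]
    return ' '.join(str(x) for x in runs)

def rle_decode(rle, shape):
    height, width = shape
    mask = np.zeros(height * width, dtype=np.uint8)
    values = np.asarray([int(x) for x in rle.split()])
    starts, lengths = values[0::2] - 1, values[1::2]
    for start, length in zip(starts, lengths):
        mask[start:start + length] = 1
    return mask.reshape(width, height).T  # undo the column-major flatten

toy_mask = np.array([[1, 1, 0],
                     [0, 0, 1]], dtype=np.uint8)
rle = rle_encode(toy_mask)
print(rle)  # "1 1 3 1 6 1"
assert np.array_equal(rle_decode(rle, toy_mask.shape), toy_mask)
```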
## Pre-process data
```
train_base_path = '../input/understanding_cloud_organization/train_images/'
test_base_path = '../input/understanding_cloud_organization/test_images/'
train_images_dest_path = 'base_dir/train_images/'
validation_images_dest_path = 'base_dir/validation_images/'
test_images_dest_path = 'base_dir/test_images/'
# Making sure directories don't exist
if os.path.exists(train_images_dest_path):
shutil.rmtree(train_images_dest_path)
if os.path.exists(validation_images_dest_path):
shutil.rmtree(validation_images_dest_path)
if os.path.exists(test_images_dest_path):
shutil.rmtree(test_images_dest_path)
# Creating train, validation and test directories
os.makedirs(train_images_dest_path)
os.makedirs(validation_images_dest_path)
os.makedirs(test_images_dest_path)
# Pre-process train set
pre_process_set(X_train, preprocess_data)
# Pre-process validation set
pre_process_set(X_val, preprocess_data)
# Pre-process test set
pre_process_set(test, preprocess_data)
```
### Data generator
```
class DataGenerator(Sequence):
def __init__(self, df, target_df=None, mode='fit', base_path=train_images_dest_path,
batch_size=BATCH_SIZE, n_channels=CHANNELS, reshape=(HEIGHT, WIDTH),
n_classes=N_CLASSES, random_state=seed, shuffle=True, preprocessing=None, augmentation=None):
self.batch_size = batch_size
self.df = df
self.mode = mode
self.base_path = base_path
self.target_df = target_df
self.reshape = reshape
self.n_channels = n_channels
self.n_classes = n_classes
self.shuffle = shuffle
self.augmentation = augmentation
self.preprocessing = preprocessing
self.list_IDs = self.df.index
self.random_state = random_state
self.mask_shape = (1400, 2100)
if self.random_state is not None:
np.random.seed(self.random_state)
self.on_epoch_end()
def __len__(self):
return len(self.list_IDs) // self.batch_size
def __getitem__(self, index):
indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
list_IDs_batch = [self.list_IDs[k] for k in indexes]
X = self.__generate_X(list_IDs_batch)
if self.mode == 'fit':
Y = self.__generate_y(list_IDs_batch)
if self.augmentation:
X, Y = self.__augment_batch(X, Y)
return X, Y
elif self.mode == 'predict':
return X
def on_epoch_end(self):
self.indexes = np.arange(len(self.list_IDs))
if self.shuffle == True:
np.random.shuffle(self.indexes)
def __generate_X(self, list_IDs_batch):
X = np.empty((self.batch_size, *self.reshape, self.n_channels))
for i, ID in enumerate(list_IDs_batch):
im_name = self.df['image'].loc[ID]
img_path = self.base_path + im_name
img = cv2.imread(img_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
if self.preprocessing:
img = self.preprocessing(img)
img = img.astype(np.float32) / 255.
X[i,] = img
return X
def __generate_y(self, list_IDs_batch):
Y = np.empty((self.batch_size, *self.reshape, self.n_classes), dtype=int)
for i, ID in enumerate(list_IDs_batch):
im_name = self.df['image'].loc[ID]
image_df = self.target_df[self.target_df['image'] == im_name]
rles = image_df['EncodedPixels'].values
masks = build_masks(rles, input_shape=self.mask_shape, reshape=self.reshape)
Y[i, ] = masks
return Y
def __augment_batch(self, img_batch, masks_batch):
for i in range(img_batch.shape[0]):
img_batch[i, ], masks_batch[i, ] = self.__random_transform(img_batch[i, ], masks_batch[i, ])
return img_batch, masks_batch
def __random_transform(self, img, masks):
composed = self.augmentation(image=img, mask=masks)
aug_img = composed['image']
aug_masks = composed['mask']
return aug_img, aug_masks
train_generator = DataGenerator(
base_path=train_images_dest_path,
df=X_train,
target_df=train,
batch_size=BATCH_SIZE,
reshape=(HEIGHT, WIDTH),
n_channels=CHANNELS,
n_classes=N_CLASSES,
preprocessing=preprocessing,
augmentation=augmentation,
random_state=seed)
valid_generator = DataGenerator(
base_path=validation_images_dest_path,
df=X_val,
target_df=train,
batch_size=BATCH_SIZE,
reshape=(HEIGHT, WIDTH),
n_channels=CHANNELS,
n_classes=N_CLASSES,
preprocessing=preprocessing,
random_state=seed)
```
# Model
```
model = sm.Unet(encoder_name=BACKBONE,
encoder_weights='imagenet',
classes=N_CLASSES,
activation='sigmoid',
input_shape=(HEIGHT, WIDTH, CHANNELS))
checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min', save_best_only=True, save_weights_only=True)
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
rlrop = ReduceLROnPlateau(monitor='val_loss', mode='min', patience=RLROP_PATIENCE, factor=DECAY_DROP, min_lr=1e-6, verbose=1)
warmup_lr = WarmUpLearningRateScheduler(WARMUP_BATCHES, LEARNING_RATE)
loss = sm.losses.bce_dice_loss
metric_list = [dice_coef, sm.metrics.f1_score]
callback_list = [checkpoint, es, rlrop, warmup_lr]
optimizer = RAdam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss=loss, metrics=metric_list)
model.summary()
STEP_SIZE_TRAIN = len(X_train)//BATCH_SIZE
STEP_SIZE_VALID = len(X_val)//BATCH_SIZE
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
callbacks=callback_list,
epochs=EPOCHS,
verbose=2).history
```
## Model loss graph
```
plot_metrics(history)
```
# Mask size tuning
```
mask_grid = [500, 1000, 5000, 7500, 10000, 15000, 20000]
class_names = ['Fish', 'Flower', 'Gravel', 'Sugar']
score = np.zeros((N_CLASSES, len(mask_grid)))
best_masks = []
for i in range(0, X_val.shape[0], 500):
batch_score = []
batch_idx = list(range(i, min(X_val.shape[0], i + 500)))
batch_set = X_val[batch_idx[0]: batch_idx[-1]+1]
ratio = len(batch_set) / len(X_val)
generator = DataGenerator(
base_path=validation_images_dest_path,
df=batch_set,
target_df=train,
batch_size=len(batch_set),
reshape=(HEIGHT, WIDTH),
n_channels=CHANNELS,
n_classes=N_CLASSES,
preprocessing=preprocessing,
random_state=seed,
mode='fit',
shuffle=False)
x, y = generator.__getitem__(0)
preds = model.predict(x)
for class_index in range(N_CLASSES):
class_score = []
label_class = y[..., class_index]
pred_class = preds[..., class_index]
for mask_size in mask_grid:
mask_score = []
for index in range(len(batch_idx)):
label_mask = label_class[index, ]
pred_mask = pred_class[index, ]
pred_mask = post_process(pred_mask, min_size=mask_size)
dice_score = dice_coefficient(pred_mask, label_mask)
if math.isnan(dice_score):
dice_score = 0.0
mask_score.append(dice_score)
class_score.append(np.mean(mask_score))
batch_score.append(class_score)
score += np.asarray(batch_score) * ratio
display(pd.DataFrame(score, columns=mask_grid, index=class_names))
for class_index in range(N_CLASSES):
pm = score[class_index].argmax()
best_mask, best_score = mask_grid[pm], score[class_index][pm].item()
best_masks.append(best_mask)
print('%s: mask size=%s, Dice=%.3f' % (class_names[class_index], best_mask, best_score))
```
# Model evaluation
## Without post processing
```
# Train metrics
train_metrics, train_metrics_post = get_metrics(model, X_train, train_generator, best_masks, 'Train')
display(train_metrics)
# Validation metrics
validation_metrics, validation_metrics_post = get_metrics(model, X_val, valid_generator, best_masks, 'Validation')
display(validation_metrics)
```
## With post processing
```
display(train_metrics_post)
display(validation_metrics_post)
```
# Apply model to test set
```
test_df = []
for i in range(0, test.shape[0], 500):
batch_idx = list(range(i, min(test.shape[0], i + 500)))
batch_set = test[batch_idx[0]: batch_idx[-1]+1]
test_generator = DataGenerator(
base_path=test_images_dest_path,
df=batch_set,
target_df=submission,
batch_size=1,
reshape=(HEIGHT, WIDTH),
n_channels=CHANNELS,
n_classes=N_CLASSES,
preprocessing=preprocessing,
random_state=seed,
mode='predict',
shuffle=False)
preds = model.predict_generator(test_generator)
for index, b in enumerate(batch_idx):
filename = test['image'].iloc[b]
image_df = submission[submission['image'] == filename].copy()
pred_masks = preds[index, ].round().astype(int)
pred_rles = build_rles(pred_masks, reshape=(350, 525))
image_df['EncodedPixels'] = pred_rles
### Post processing
pred_masks_post = preds[index, ].astype('float32')
for class_index in range(N_CLASSES):
pred_mask = pred_masks_post[...,class_index]
pred_mask = post_process(pred_mask, min_size=best_masks[class_index])
pred_masks_post[...,class_index] = pred_mask
pred_rles_post = build_rles(pred_masks_post, reshape=(350, 525))
image_df['EncodedPixels_post'] = pred_rles_post
###
### Post processing 2
pred_masks_post = preds[index, ].astype('float32')
for class_index in range(N_CLASSES):
pred_mask = pred_masks_post[...,class_index]
pred_mask = post_process(pred_mask, min_size=10000)
pred_masks_post[...,class_index] = pred_mask
pred_rles_post = build_rles(pred_masks_post, reshape=(350, 525))
image_df['EncodedPixels_post_10000'] = pred_rles_post
###
test_df.append(image_df)
sub_df = pd.concat(test_df)
```
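`build_rles` (defined earlier in the notebook) converts each predicted mask into the competition's run-length-encoded string: pixels are read in column-major order and encoded as 1-indexed `start length` pairs. A minimal single-mask encoder, assuming that submission format:

```python
import numpy as np

def mask_to_rle(mask):
    """Encode a binary mask as 'start length ...' (column-major, 1-indexed)."""
    pixels = mask.T.flatten()                  # column-major pixel order
    pixels = np.concatenate([[0], pixels, [0]])
    runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
    runs[1::2] -= runs[::2]                    # convert run ends into run lengths
    return ' '.join(str(x) for x in runs)

mask = np.array([[1, 0],
                 [1, 1]])
# column-major pixels: [1, 1, 0, 1] -> run at 1 (len 2), run at 4 (len 1)
print(mask_to_rle(mask))  # "1 2 4 1"
```

An all-zero mask encodes to an empty string, which is what the submission file expects for "no mask".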
### Regular submission
```
submission_df = sub_df[['Image_Label' ,'EncodedPixels']]
submission_df.to_csv('submission.csv', index=False)
display(submission_df.head())
```
### Submission with post processing
```
submission_df_post = sub_df[['Image_Label' ,'EncodedPixels_post']]
submission_df_post.columns = ['Image_Label' ,'EncodedPixels']
submission_df_post.to_csv('submission_post.csv', index=False)
display(submission_df_post.head())
submission_df_post = sub_df[['Image_Label' ,'EncodedPixels_post_10000']]
submission_df_post.columns = ['Image_Label' ,'EncodedPixels']
submission_df_post.to_csv('submission_post_10000.csv', index=False)
display(submission_df_post.head())
# Cleaning created directories
if os.path.exists(train_images_dest_path):
shutil.rmtree(train_images_dest_path)
if os.path.exists(validation_images_dest_path):
shutil.rmtree(validation_images_dest_path)
if os.path.exists(test_images_dest_path):
shutil.rmtree(test_images_dest_path)
```
```
#format the book
%matplotlib inline
from __future__ import division, print_function
import sys;sys.path.insert(0,'..')
from book_format import load_style;load_style('..')
```
# Computing and plotting PDFs of discrete data
So let's investigate how to compute and plot probability distributions.
First, let's make some data according to a normal distribution. We use `numpy.random.normal` for this. The parameters are not well named. `loc` is the mean of the distribution, and `scale` is the standard deviation. We can call this function to create an arbitrary number of data points that are distributed according to that mean and std.
```
import numpy as np
import numpy.random as random
mean = 3
std = 2
data = random.normal(loc=mean, scale=std, size=50000)
print(len(data))
print(data.mean())
print(data.std())
```
As you can see from the print statements we got 50,000 points that have a mean very close to 3, and a standard deviation close to 2.
We can plot this Gaussian by using `scipy.stats.norm` to create a frozen function that we will then use to compute the pdf (probability distribution function) of the Gaussian.
```
%matplotlib inline
import matplotlib.pyplot as plt
import scipy.stats as stats
def plot_normal(xs, mean, std, **kwargs):
norm = stats.norm(mean, std)
plt.plot(xs, norm.pdf(xs), **kwargs)
xs = np.linspace(-5, 15, num=200)
plot_normal(xs, mean, std, color='k')
```
But we really want to plot the PDF of the discrete data, not the idealized function.
There are a couple of ways of doing that. First, we can take advantage of `matplotlib`'s `hist` method, which computes a histogram of a collection of data. Normally `hist` computes the number of points that fall in a bin, like so:
```
plt.hist(data, bins=200)
plt.show()
```
That is not very useful to us - we want the PDF, not bin counts. Fortunately `hist` includes a `density` parameter which will plot the PDF for us.
```
plt.hist(data, bins=200, density=True)
plt.show()
```
I may not want bars, so I can specify the `histtype` as 'step' to get a line.
```
plt.hist(data, bins=200, density=True, histtype='step', lw=2)
plt.show()
```
To be sure it is working, let's also plot the idealized Gaussian in black.
```
plt.hist(data, bins=200, density=True, histtype='step', lw=2)
norm = stats.norm(mean, std)
plt.plot(xs, norm.pdf(xs), color='k', lw=2)
plt.show()
```
There is another way to get the approximate distribution of a set of data. There is a technique called *kernel density estimation* that uses a kernel to estimate the probability distribution of a set of data. SciPy implements it with the function `gaussian_kde`. Do not be misled by the name - Gaussian refers to the type of kernel used in the computation. This works for any distribution, not just Gaussians. In this section we have a Gaussian distribution, but soon we will not, and this same function will work.
```
kde = stats.gaussian_kde(data)
xs = np.linspace(-5, 15, num=200)
plt.plot(xs, kde(xs))
plt.show()
```
## Monte Carlo Simulations
We (well I) want to do this sort of thing because I want to use Monte Carlo simulations to compute distributions. It is easy to compute Gaussians when they pass through linear functions, but difficult or impossible to compute them analytically when they are passed through nonlinear functions. Techniques like particle filtering handle this by taking a large sample of points, passing them through a nonlinear function, and then computing statistics on the transformed points. Let's do that.
We will start with the linear function $f(x) = 2x + 12$ just to prove to ourselves that the code is working. I will alter the mean and std of the data we are working with to help ensure the numbers that are output are unique. It is easy to be fooled: for example, if the formula multiplies x by 2, the mean is 2, and the std is 2, and the output of something is 4, is that due to the multiplication factor, the mean, the std, or a bug? It's hard to tell.
```
def f(x):
return 2*x + 12
mean = 1.
std = 1.4
data = random.normal(loc=mean, scale=std, size=50000)
d_t = f(data) # transform data through f(x)
plt.hist(data, bins=200, density=True, histtype='step', lw=2)
plt.hist(d_t, bins=200, density=True, histtype='step', lw=2)
plt.ylim(0, .35)
plt.show()
print('mean = {:.2f}'.format(d_t.mean()))
print('std = {:.2f}'.format(d_t.std()))
```
This is what we expected. The input is the Gaussian $\mathcal{N}(\mu=1, \sigma=1.4)$, and the function is $f(x) = 2x+12$. Therefore we expect the mean to be shifted to $f(\mu) = 2*1+12=14$. We can see from the plot and the print statement that this is what happened.
Before I go on, can you explain what happened to the standard deviation? You may have thought that the new $\sigma$ should be passed through $f(x)$ like so: $2(1.4) + 12 = 14.8$. But that is not correct - the standard deviation is only affected by the multiplicative factor, not the shift. If you think about that for a moment you will see it makes sense. We multiply our samples by 2, so they are twice as spread out as before. Standard deviation is a measure of how spread out things are, so it should also double. It doesn't matter if we then shift that distribution 12 places, or 12 million for that matter - the spread is still twice the input data.
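This shift/scale behavior is easy to verify numerically: for samples x, the std of 2x + 12 is twice std(x), while the offset only moves the mean.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(loc=1.0, scale=1.4, size=100_000)
y = 2 * x + 12

print(y.mean())  # ~ 2*1 + 12 = 14
print(y.std())   # ~ 2*1.4 = 2.8; the +12 shift does not change the spread
```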
## Nonlinear Functions
Now that we believe in our code, let's try it with nonlinear functions.
```
def f2(x):
    return np.cos(1.5*x + 2.1) * np.sin(x/3) - 1.6*x
d_t = f2(data)
plt.subplot(121)
plt.hist(d_t, bins=200, density=True, histtype='step', lw=2)
plt.subplot(122)
kde = stats.gaussian_kde(d_t)
xs = np.linspace(-10, 10, 200)
plt.plot(xs, kde(xs), 'k')
plot_normal(xs, d_t.mean(), d_t.std(), color='g', lw=3)
plt.show()
print('mean = {:.2f}'.format(d_t.mean()))
print('std = {:.2f}'.format(d_t.std()))
```
Here I passed the data through the nonlinear function $f(x) = \cos(1.5x+2.1)\sin(\frac{x}{3}) - 1.6x$. That function is quite close to linear, but we can see how much it alters the pdf of the sampled data.
There is a lot of computation going on behind the scenes to transform 50,000 points and then compute their PDF. The Extended Kalman Filter (EKF) gets around this by linearizing the function at the mean and then passing the Gaussian through the linear equation. We saw above how easy it is to pass a Gaussian through a linear function. So let's try that.
We can linearize this by taking the derivative of the function at x. We can use sympy to get the derivative.
```
import sympy
x = sympy.symbols('x')
f = sympy.cos(1.5*x+2.1) * sympy.sin(x/3) - 1.6*x
dfx = sympy.diff(f, x)
dfx
```
We can now compute the slope of the function by evaluating the derivative at the mean.
```
m = dfx.subs(x, mean)
m
```
The equation of a line is $y=mx+b$, so the new standard deviation should be approximately $1.67$ times the input std. We can compute the new mean by passing it through the original function because the linearized function is just the slope of $f(x)$ evaluated at the mean. The slope is a tangent that touches the function at $x$, so both will return the same result. So, let's plot this and compare it to the results from the Monte Carlo simulation.
```
plt.hist(d_t, bins=200, density=True, histtype='step', lw=2)
plot_normal(xs, f2(mean), abs(float(m)*std), color='k', lw=3, label='EKF')
plot_normal(xs, d_t.mean(), d_t.std(), color='r', lw=3, label='MC')
plt.legend()
plt.show()
```
We can see from this that the estimate from the EKF (in black) is not exact, but it is not a bad approximation either.
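A quick way to sanity-check the linearization without sympy is a finite-difference slope at the mean: the transformed std should be close to |f'(mean)| times the input std, since the function is nearly linear over the spread of the data.

```python
import numpy as np

def f2(x):
    return np.cos(1.5*x + 2.1) * np.sin(x/3) - 1.6*x

mean, std = 1.0, 1.4
h = 1e-6
slope = (f2(mean + h) - f2(mean - h)) / (2 * h)   # numerical derivative at the mean

rng = np.random.default_rng(1)
data = rng.normal(mean, std, size=100_000)
mc_std = f2(data).std()

print(slope)             # ~ -1.67, matching the sympy result above
print(abs(slope) * std)  # EKF-style propagated std
print(mc_std)            # Monte Carlo std: close, but not identical
```

The small gap between the two std estimates is exactly the error the EKF accepts in exchange for avoiding the 50,000-point simulation.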
### Testing for Interactive use case
```
import mlflow
from azureml.core import Workspace, Experiment, Environment, Datastore, Dataset, ScriptRunConfig
from azureml.core.runconfig import PyTorchConfiguration
# from azureml.widgets import RunDetails
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
from azureml.core.runconfig import PyTorchConfiguration
from azureml.core.environment import Environment
from azureml.core.conda_dependencies import CondaDependencies
from IPython.display import clear_output
import time
import platform
# from ray_on_azureml.ray_on_aml import getRay
import sys
sys.path.append("../") # go to parent dir
import importlib
# pip install --upgrade ray[default]
%pip install raydp
# pip install raydp /azureml-envs/azureml_6cb52194be4f7fc697297312b8f55547/lib/python3.8/site-packages/ray/jars/ray_dist.jar
base_image="FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210615.v1"
dockerfile = r"""
{0}
ARG HTTP_PROXY
ARG HTTPS_PROXY
# set http_proxy & https_proxy
ENV http_proxy=${{HTTP_PROXY}}
ENV https_proxy=${{HTTPS_PROXY}}
RUN http_proxy=${{HTTP_PROXY}} https_proxy=${{HTTPS_PROXY}} apt-get update -y \
&& mkdir -p /usr/share/man/man1 \
&& http_proxy=${{HTTP_PROXY}} https_proxy=${{HTTPS_PROXY}} apt-get install -y openjdk-11-jdk \
&& mkdir /raydp \
&& pip --no-cache-dir install raydp
WORKDIR /raydp
# unset http_proxy & https_proxy
ENV http_proxy=
ENV https_proxy=
""".format(base_image)
print(dockerfile)
from src.ray_on_aml.core import Ray_On_AML
# from ray_on_aml.core import Ray_On_AML
ws = Workspace.from_config()
ray_on_aml =Ray_On_AML(ws=ws, compute_cluster ="d15-v2", additional_pip_packages=['torch==1.10.0', 'torchvision', 'sklearn'], maxnode=2)
# ray_on_aml =Ray_On_AML(ws=ws, compute_cluster ="gpunc6", base_pip_dep = ['ray[tune]==1.9.2', 'ray[rllib]==1.9.2','ray[serve]==1.9.2', 'xgboost_ray==0.1.6', 'dask==2021.12.0','pyarrow >= 5.0.0','fsspec==2021.10.1','fastparquet==0.7.2','tabulate==0.8.9','raydp'], maxnode=2)
# ray_on_aml =Ray_On_AML(ws=ws, compute_cluster ="worker-cpu-v3", base_pip_dep = ['ray[default]==1.8.0', 'xgboost_ray==0.1.6', 'dask==2021.12.0',\
# 'pyarrow >= 5.0.0','fsspec==2021.10.1','fastparquet==0.7.2','tabulate==0.8.9','pyspark'], maxnode=2)
ray = ray_on_aml.getRay()
# ray = ray_on_aml.getRay(gpu_support=True)
ray.cluster_resources()
ray.cluster_resources()
# You can run this code outside of the Ray cluster!
import ray
ray.__version__
conda_lib_path = sys.executable.split('/')[-3]+"/lib/python"+sys.version[:3]
conda_lib_path
%pip install --upgrade pyspark
!ls /anaconda/envs/azureml_py38/lib/python3.8/site-packages/pyspark/jars/
ray.cluster_resources()
```
### Testing with Dask on Ray
```
from adlfs import AzureBlobFileSystem
abfs = AzureBlobFileSystem(account_name="azureopendatastorage", container_name="isdweatherdatacontainer")
#if read all years and months
# data = ray.data.read_parquet("az://isdweatherdatacontainer/ISDWeather//", filesystem=abfs)
data =ray.data.read_parquet(["az://isdweatherdatacontainer/ISDWeather/year=2015/"], filesystem=abfs)
#Scaling up date with Dask dataframe API
import dask
from ray.util.dask import ray_dask_get,enable_dask_on_ray
enable_dask_on_ray()
import pandas as pd
import numpy as np
import dask.dataframe as dd
storage_options = {'account_name': 'azureopendatastorage'}
ddf = dd.read_parquet('az://nyctlc/green/puYear=2015/puMonth=*/*.parquet', storage_options=storage_options)
ddf.count().compute()
# Set the scheduler to ray_dask_get in your config so you don't have to
# specify it on each compute call.
df = dd.from_pandas(
pd.DataFrame(
np.random.randint(0, 10000, size=(1024, 2)), columns=["age", "grade"]),
npartitions=2)
df.groupby(["age"]).mean().compute()
# ray.shutdown()
import dask.dataframe as dd
storage_options = {'account_name': 'azureopendatastorage'}
ddf = dd.read_parquet('az://nyctlc/green/puYear=2019/puMonth=*/*.parquet', storage_options=storage_options)
ddf.count().compute()
#dask
# import ray
from ray.util.dask import ray_dask_get
import dask.array as da
import dask.dataframe as dd
import numpy as np
import pandas as pd
import dask.dataframe as dd
import matplotlib.pyplot as plt
from datetime import datetime
from azureml.core import Workspace, Dataset, Model
from adlfs import AzureBlobFileSystem
account_key = ws.get_default_keyvault().get_secret("adls7-account-key")
account_name="adlsgen7"
abfs = AzureBlobFileSystem(account_name="adlsgen7",account_key=account_key, container_name="mltraining")
abfs2 = AzureBlobFileSystem(account_name="azureopendatastorage", container_name="isdweatherdatacontainer")
storage_options={'account_name': account_name, 'account_key': account_key}
# ddf = dd.read_parquet('az://mltraining/ISDWeatherDelta/year2008', storage_options=storage_options)
data = ray.data.read_parquet("az://isdweatherdatacontainer/ISDWeather/year=2009", filesystem=abfs2)
data2 = ray.data.read_parquet("az://mltraining/ISDWeatherDelta/year2008", filesystem=abfs)
data.count()
from datetime import datetime
from azureml.core import Workspace, Dataset, Model
from adlfs import AzureBlobFileSystem
account_key = ws.get_default_keyvault().get_secret("adls7-account-key")
account_name="adlsgen7"
# abfs = AzureBlobFileSystem(account_name="adlsgen7",account_key=account_key, container_name="mltraining")
abfs2 = AzureBlobFileSystem(account_name="azureopendatastorage", container_name="isdweatherdatacontainer")
storage_options={'account_name': account_name, 'account_key': account_key}
# ddf = dd.read_parquet('az://mltraining/ISDWeatherDelta/year2008', storage_options=storage_options)
data = ray.data.read_parquet(["az://isdweatherdatacontainer/ISDWeather/year=2012/"], filesystem=abfs2)
data1 = ray.data.read_parquet(["az://isdweatherdatacontainer/ISDWeather/year=2015/"], filesystem=abfs2)
data2 = ray.data.read_parquet(["az://isdweatherdatacontainer/ISDWeather/year=2010/"], filesystem=abfs2)
data3 = ray.data.read_parquet(["az://isdweatherdatacontainer/ISDWeather/year=2009/"], filesystem=abfs2)
data4 = ray.data.read_parquet(["az://isdweatherdatacontainer/ISDWeather/year=2011/"], filesystem=abfs2)
data5 = ray.data.read_parquet(["az://isdweatherdatacontainer/ISDWeather/year=2013/"], filesystem=abfs2)
data6 = ray.data.read_parquet(["az://isdweatherdatacontainer/ISDWeather/year=2014/"], filesystem=abfs2)
all_data =data.union(data1).union(data2).union(data3).union(data4).union(data5).union(data6)
all_data.count()
start = time.time()
#convert Ray dataset to Dask dataframe
all_data_dask = data.to_dask().describe().compute()
print(all_data_dask)
stop = time.time()
print("duration ", (stop-start))
#717s for single machine nc6
# duration 307.69699811935425s for CI as head and 4 workers of DS14_v2
```
### Testing Ray Tune for distributed ML tunning
```
import numpy as np
import torch
import torch.optim as optim
import torch.nn as nn
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
import torch.nn.functional as F
# import ray
from ray import tune
from ray.tune.schedulers import ASHAScheduler
class ConvNet(nn.Module):
def __init__(self):
super(ConvNet, self).__init__()
# In this example, we don't change the model architecture
# due to simplicity.
self.conv1 = nn.Conv2d(1, 3, kernel_size=3)
self.fc = nn.Linear(192, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 3))
x = x.view(-1, 192)
x = self.fc(x)
return F.log_softmax(x, dim=1)
# Change these values if you want the training to run quicker or slower.
EPOCH_SIZE = 512
TEST_SIZE = 256
def train(model, optimizer, train_loader):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# We set this just for the example to run quickly.
if batch_idx * len(data) > EPOCH_SIZE:
return
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
def test(model, data_loader):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.eval()
correct = 0
total = 0
with torch.no_grad():
for batch_idx, (data, target) in enumerate(data_loader):
# We set this just for the example to run quickly.
if batch_idx * len(data) > TEST_SIZE:
break
data, target = data.to(device), target.to(device)
outputs = model(data)
_, predicted = torch.max(outputs.data, 1)
total += target.size(0)
correct += (predicted == target).sum().item()
return correct / total
def train_mnist(config):
# Data Setup
mnist_transforms = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.1307, ), (0.3081, ))])
train_loader = DataLoader(
datasets.MNIST("~/data", train=True, download=True, transform=mnist_transforms),
batch_size=64,
shuffle=True)
test_loader = DataLoader(
datasets.MNIST("~/data", train=False, transform=mnist_transforms),
batch_size=64,
shuffle=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = ConvNet()
model.to(device)
optimizer = optim.SGD(
model.parameters(), lr=config["lr"], momentum=config["momentum"])
for i in range(10):
train(model, optimizer, train_loader)
acc = test(model, test_loader)
# Send the current training result back to Tune
tune.report(mean_accuracy=acc)
if i % 5 == 0:
# This saves the model to the trial directory
torch.save(model.state_dict(), "./model.pth")
search_space = {
"lr": tune.sample_from(lambda spec: 10**(-10 * np.random.rand())),
"momentum": tune.uniform(0.01, 0.09)
}
# Uncomment this to enable distributed execution
# ray.shutdown()
# ray.init(address="auto",ignore_reinit_error=True)
# ray.init(address =f'ray://{headnode_private_ip}:10001',allow_multiple=True,ignore_reinit_error=True )
# Download the dataset first
datasets.MNIST("~/data", train=True, download=True)
analysis = tune.run(train_mnist, config=search_space)
import sklearn.datasets
import sklearn.metrics
from sklearn.model_selection import train_test_split
import xgboost as xgb
from ray import tune
def train_breast_cancer(config):
# Load dataset
data, labels = sklearn.datasets.load_breast_cancer(return_X_y=True)
# Split into train and test set
train_x, test_x, train_y, test_y = train_test_split(
data, labels, test_size=0.25)
# Build input matrices for XGBoost
train_set = xgb.DMatrix(train_x, label=train_y)
test_set = xgb.DMatrix(test_x, label=test_y)
# Train the classifier
results = {}
xgb.train(
config,
train_set,
evals=[(test_set, "eval")],
evals_result=results,
verbose_eval=False)
# Return prediction accuracy
accuracy = 1. - results["eval"]["error"][-1]
tune.report(mean_accuracy=accuracy, done=True)
config = {
"objective": "binary:logistic",
"eval_metric": ["logloss", "error"],
"max_depth": tune.randint(1, 9),
"min_child_weight": tune.choice([1, 2, 3]),
"subsample": tune.uniform(0.5, 1.0),
"eta": tune.loguniform(1e-4, 1e-1)
}
analysis = tune.run(
train_breast_cancer,
resources_per_trial={"cpu": 1},
config=config,
num_samples=20)
%pip install raydp-nightly
import subprocess
python_path = subprocess.check_output("which python", shell=True).strip()
python_path = python_path.decode('utf-8')
python_path = python_path +"/site-packages/raydp/jars/raydp-0.5.0-SNAPSHOT.jar"
os.listdir('/anaconda/envs/azureml_py38/bin//python/site-packages/raydp/jars')
```
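The cells above probe for raydp's bundled jar with hard-coded environment paths. A more portable approach is to derive the `jars` directory from the package's own `__file__`; the sketch below takes any module object so it can be exercised without raydp installed:

```python
import glob
import os

def package_jars(pkg):
    """Return the .jar files shipped inside a package's jars/ directory."""
    jars_dir = os.path.join(os.path.dirname(pkg.__file__), "jars")
    return sorted(glob.glob(os.path.join(jars_dir, "*.jar")))

# Usage with the real package would be:
#   import raydp
#   classpath = ":".join(package_jars(raydp))
```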
### Testing Spark on Ray
```
# shutil.copy('/anaconda/envs/azureml_py38/lib/python3.8/site-packages/raydp/jars/raydp-0.5.0-SNAPSHOT.jar', dst)  # incomplete scratch line: os has no copy(); the destination was never specified
import raydp
os.path.abspath(os.path.join(os.path.abspath(raydp.__file__), "../../jars/*"))
!ls /anaconda/envs/azureml_py38/lib/python3.8/site-packages/jars/*
# RAYDP_CP = os.path.abspath(os.path.join(os.path.abspath(__file__), "../../jars/*"))
import ray
RAY_CP = os.path.abspath(os.path.join(os.path.dirname(ray.__file__), "jars/*"))
RAY_CP
from azureml.core import Workspace, Experiment, Environment,ScriptRunConfig
from azureml.widgets import RunDetails
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
from azureml.core.runconfig import RunConfiguration
#Remember the AML job has to have distribted setings (MPI type) for ray-on-aml to work correctly.
ws = Workspace.from_config()
compute_cluster = 'd12-ssh-novnet' #This can be another cluster different from the interactive cluster.
ray_cluster = ComputeTarget(workspace=ws, name=compute_cluster)
aml_run_config_ml = RunConfiguration(communicator='OpenMpi')
#Check the conda_env.yml, it has an entry of ray-on-aml
rayEnv = Environment.from_conda_specification(name = "RLEnv",
file_path = "conda_env.yml")
dockerfile = r"""
FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210615.v1
ARG HTTP_PROXY
ARG HTTPS_PROXY
# set http_proxy & https_proxy
ENV http_proxy=${HTTP_PROXY}
ENV https_proxy=${HTTPS_PROXY}
RUN http_proxy=${HTTP_PROXY} https_proxy=${HTTPS_PROXY} apt-get update -y \
&& mkdir -p /usr/share/man/man1 \
&& http_proxy=${HTTP_PROXY} https_proxy=${HTTPS_PROXY} apt-get install -y openjdk-11-jdk \
&& mkdir /raydp \
&& pip --no-cache-dir install raydp
WORKDIR /raydp
# unset http_proxy & https_proxy
ENV http_proxy=
ENV https_proxy=
"""
# Set the base image to None, because the image is defined by Dockerfile.
rayEnv.docker.base_image = None
rayEnv.docker.base_dockerfile = dockerfile
aml_run_config_ml.target = ray_cluster
aml_run_config_ml.node_count = 3
aml_run_config_ml.environment = rayEnv
src = ScriptRunConfig(source_directory='job',
script='aml_job.py',
run_config = aml_run_config_ml,
)
run = Experiment(ws, "rl_on_aml_job").submit(src)
from azureml.widgets import RunDetails
RunDetails(run).show()
run.cancel()
import raydp
raydp.__version__
additional_spark_configs ={"fs.azure.account.key.adlsdatalakegen6.blob.core.windows.net":"<storage-account-key>",
"fs.azure.account.key.adlsdatalakegen6.dfs.core.windows.net":"<storage-account-key>"}
other_configs = {"raydp.executor.extraClassPath":"/azureml-envs/azureml_6cb52194be4f7fc697297312b8f55547/lib/python3.8/site-packages/pyspark/jars/*"}
spark = ray_on_aml.getSpark(executor_cores =3,num_executors =3 ,executor_memory='10GB', additional_spark_configs=additional_spark_configs)
# ray.cluster_resources()
# import ray
# import raydp
# # # # import os
# # # ray.init(address ='ray://10.0.0.11:6379')
# spark = raydp.init_spark(
# app_name = "example",
# num_executors = 2,
# executor_cores = 2,
# executor_memory = "1GB",
# # configs = {"spark.jars":"jars/*.jar"}
# configs = {"spark.jars":"jars/azure-storage-8.6.6.jar,jars/hadoop-azure-3.3.1.jar,jars/jetty-util-ajax-11.0.7.jar,jars/jetty-util-9.3.24.v20180605.jar,jars/delta-core_2.12-1.1.0.jar,jars/mssql-jdbc-9.4.1.jre8.jar",
# "fs.azure.account.key.adlsdatalakegen6.blob.core.windows.net":"<storage-account-key>",
# "fs.azure.account.key.adlsdatalakegen6.dfs.core.windows.net":"<storage-account-key>"}
# # configs = {"spark.jars":"jars/mssql-jdbc-9.4.1.jre8.jar"}
# # configs = {
# # "spark.driver.extraClassPath":"jars/mssql-jdbc-9.4.1.jre8.jar"}
# # configs = {"spark.jars.packages":"org.apache.hadoop:hadoop-azure:3.3.1,com.microsoft.azure:azure-storage:8.6.6"}
# )
import raydp
raydp.stop_spark()
adls_data = spark.read.format("delta").load("wasbs://mltraining@adlsdatalakegen6.blob.core.windows.net/ISDWeatherDelta")
adls_data = spark.read.format("delta").load("abfss://mltraining@adlsdatalakegen6.dfs.core.windows.net/ISDWeatherDelta")
adls_data.groupby("stationName").count().head(20)
# 73,696,631
ray.cluster_resources()
adls_data.write.format("delta").mode("overwrite").save("wasbs://mltraining@adlsdatalakegen6.blob.core.windows.net/FirstRaySave")
adls_data.selectExpr("snowDepth","stationName")
ray_data = ray.data.from_spark(adls_data)  # from_spark expects a Spark DataFrame
```
#### Synapse SQL Pool Data Access
```
server_name = "jdbc:sqlserver://sy2qwhqqkv7eacsws1.sql.azuresynapse.net:1433"
database_name = "sy2qwhqqkv7eacsws1p1"
url = server_name + ";" + "databaseName=" + database_name + ";"
table_name = "ISDWeatherDelta"
username = "azureuser"
password = "abcd@12345" # Please specify password here
# try:
# adls_data.write \
# .format("com.microsoft.sqlserver.jdbc.spark") \
# .mode("overwrite") \
# .option("url", url) \
# .option("dbtable", table_name) \
# .option("user", username) \
# .option("password", password) \
# .save()
# except ValueError as error :
# print("Connector write failed", error)
jdbcDF = spark.read \
.format("jdbc") \
.option("url", url) \
.option("dbtable", table_name) \
.option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver") \
.option("user", username) \
.option("password", password).load()
# sparkdf = spark.createDataFrame([(1, 2),(2, 9),(3, 7)],
# ("id", "name"))
# try:
# adls_data.selectExpr("snowDepth","latitude").write \
# .format("jdbc") \
# .mode("overwrite") \
# .option("url", url) \
# .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver") \
# .option("dbtable", table_name) \
# .option("user", username) \
# .option("batchsize", 10000) \
# .option("password", password) \
# .save()
# except ValueError as error :
# print("Connector write failed", error)
import pandas as pd
from pyspark.sql.functions import col, pandas_udf
from pyspark.sql.types import LongType
def multiply_func(a: pd.Series, b: pd.Series) -> pd.Series:
return a * b
multiply = pandas_udf(multiply_func, returnType=LongType())
# The function for a pandas_udf should be able to execute with local Pandas data
x = pd.Series([1, 2, 3])
df = spark.createDataFrame(pd.DataFrame(x, columns=["x"]))
# Execute function as a Spark vectorized UDF
df.select(multiply(col("x"), col("x"))).show()
df2.write.format("delta").mode("overwrite").save("data/delta_test")
import os
import re
import pandas as pd, numpy as np
from pyspark.sql.functions import *
from tensorflow import keras
import raydp
from raydp.tf import TFEstimator
from raydp.utils import random_split
def fill_na(data):
# Fill NA in column Fare, Age and Embarked
data = data.fillna({"Embarked": "S"})
fare_avg = data.select(mean(col("Fare")).alias("mean")).collect()
data = data.na.fill({"Fare": fare_avg[0]["mean"]})
age_avg = data.select(mean(col("Age")).alias("mean")).collect()
data = data.na.fill({'Age': age_avg[0]["mean"]})
return data
def do_features(data):
# Add some new features
data = data.withColumn("name_length", length("Name"))
data = data.withColumn("has_cabin", col("Cabin").isNotNull().cast('int'))
data = data.withColumn("family_size", col("SibSp") + col("Parch") + 1)
data = data.withColumn("is_alone", (col("family_size") == 1).cast('int'))
# Add some features about passengers' title with spark udf
@udf("string")
def get_title(name):
title = ''
title_match = re.search(' ([A-Za-z]+)\.', name)
if (title_match):
title = title_match.group(1)
if (title in ['Lady', 'Countess','Capt', 'Col','Don', 'Dr',
'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona']):
title = 'Rare'
return title
return title
data = data.withColumn("Title", get_title(col("Name")))
data = data.withColumn("Title", regexp_replace("Title", "Mlle|Ms", "Miss"))
data = data.withColumn("Title", regexp_replace("Title", "Mme", "Mrs"))
# Encode column Sex
sex_udf = udf(lambda x: 0 if x == "female" else 1)
data = data.withColumn("Sex", sex_udf(col("Sex")).cast('int'))
# Encode column Title
title_map = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
title_udf = udf(lambda x: title_map[x])
data = data.withColumn("Title", title_udf(col("Title")).cast('int'))
# Encode column Embarked
embarked_map = {'S': 0, 'C': 1, 'Q': 2}
embarked_udf = udf(lambda x: embarked_map[x])
data = data.withColumn("Embarked", embarked_udf(col("Embarked")).cast('int'))
# Categorize column Fare
@udf("int")
def fare_map(fare):
if (fare <= 7.91):
return 0
elif fare <= 14.454:
return 1
elif fare <= 31:
return 2
else:
return 3
data = data.withColumn("Fare", fare_map(col("Fare")))
# Categorize column Age
@udf("int")
def age_map(age):
if age <= 16:
return 0
elif age <= 32:
return 1
elif age <= 48:
return 2
elif age <= 64:
return 3
else:
return 4
data = data.withColumn("Age", age_map(col("Age")))
return data
def drop_cols(data):
# Drop useless columns
data = data.drop("PassengerId") \
.drop("Name") \
.drop("Ticket") \
.drop("Cabin") \
.drop("SibSp")
return data
train = fill_na(train)
train = do_features(train)
train = drop_cols(train)
train.show(5)
features = [field.name for field in list(train.schema) if field.name != "Survived"]
inTensor = []
for _ in range(len(features)):
inTensor.append(keras.Input((1,)))
concatenated = keras.layers.concatenate(inTensor)
fc1 = keras.layers.Dense(32, activation='relu')(concatenated)
fc2 = keras.layers.Dense(32, activation='relu')(fc1)
dp1 = keras.layers.Dropout(0.25)(fc2)
fc3 = keras.layers.Dense(16, activation='relu')(dp1)
dp2 = keras.layers.Dropout(0.25)(fc3)
fc4 = keras.layers.Dense(1, activation='sigmoid')(dp2)
model = keras.models.Model(inTensor, fc4)
rmsp = keras.optimizers.RMSprop()
loss = keras.losses.BinaryCrossentropy()
estimator = TFEstimator(num_workers=3, model=model, optimizer=rmsp, loss=loss, metrics=["binary_accuracy"],
feature_columns=features, label_column="Survived", batch_size=32, num_epochs=100,
config={"fit_config": {"steps_per_epoch": train.count() // 32}})
estimator.fit_on_spark(train, None)
raydp.stop_spark()
```
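The `fare_map` and `age_map` UDFs above are plain threshold binning, so the same logic is easy to sanity-check outside Spark with an ordinary function (thresholds copied from the cell above):

```python
def fare_bucket(fare):
    """Same thresholds as the fare_map Spark UDF above."""
    if fare <= 7.91:
        return 0
    elif fare <= 14.454:
        return 1
    elif fare <= 31:
        return 2
    return 3

print([fare_bucket(f) for f in (5.0, 10.0, 20.0, 100.0)])  # [0, 1, 2, 3]
```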
## Testing Ray on Job Cluster
```
# pyarrow >=6.0.1
# dask >=2021.11.2
# adlfs >=2021.10.0
# fsspec==2021.10.1
# ray[default]==1.9.0
ws = Workspace.from_config()
base_conda_dep =['adlfs==2021.10.0','pip']
base_pip_dep = ['ray[tune]==1.9.1', 'xgboost_ray==0.1.5', 'dask==2021.12.0','pyarrow >= 5.0.0','fsspec==2021.10.1', 'torch','torchvision==0.8.1']
compute_cluster = 'worker-cpu-v3'
maxnode =5
vm_size='STANDARD_DS3_V2'
vnet='rayvnet'
subnet='default'
exp ='ray_on_aml_job'
ws_detail = ws.get_details()
ws_rg = ws_detail['id'].split("/")[4]
vnet_rg=None
try:
ray_cluster = ComputeTarget(workspace=ws, name=compute_cluster)
print('Found existing cluster, use it.')
except ComputeTargetException:
if vnet_rg is None:
vnet_rg = ws_rg
compute_config = AmlCompute.provisioning_configuration(vm_size=vm_size,
min_nodes=0, max_nodes=maxnode,
vnet_resourcegroup_name=vnet_rg,
vnet_name=vnet,
subnet_name=subnet)
ray_cluster = ComputeTarget.create(ws, compute_cluster, compute_config)
ray_cluster.wait_for_completion(show_output=True)
python_version = ["python="+platform.python_version()]
conda_packages = python_version+base_conda_dep
pip_packages = base_pip_dep
conda_dep = CondaDependencies()
rayEnv = Environment(name="rayEnv")
# rayEnv = Environment.get(ws, "rayEnv", version=16)
for conda_package in conda_packages:
conda_dep.add_conda_package(conda_package)
for pip_package in pip_packages:
conda_dep.add_pip_package(pip_package)
# # Adds dependencies to PythonSection of myenv
rayEnv.python.conda_dependencies=conda_dep
src = ScriptRunConfig(source_directory='job',
script='aml_job.py',
environment=rayEnv,
compute_target=ray_cluster,
distributed_job_config=PyTorchConfiguration(node_count=maxnode),
# arguments = ["--master_ip",master_ip]
)
run = Experiment(ws, exp).submit(src)
from azureml.widgets import RunDetails
RunDetails(run).show()
import requests
from ray import serve
serve.start()
@serve.deployment
def hello(request):
    name = request.query_params["name"]
    return f"Hello {name}!"
hello.deploy()
# Query our endpoint over HTTP.
response = requests.get("http://127.0.0.1:8000/hello?name=serve").text
assert response == "Hello serve!"
import requests
import ray
from ray import serve
serve.start()
@serve.deployment
class Counter:
    def __init__(self):
        self.count = 0

    def __call__(self, *args):
        self.count += 1
        return {"count": self.count}
# Deploy our class.
Counter.deploy()
# Query our endpoint in two different ways: from HTTP and from Python.
assert requests.get("http://127.0.0.1:8000/Counter").json() == {"count": 1}
assert ray.get(Counter.get_handle().remote()) == {"count": 2}
from fastapi import FastAPI
app = FastAPI()
@serve.deployment
@serve.ingress(app)
class Counter:
    def __init__(self):
        self.count = 0

    @app.get("/")
    def get(self):
        return {"count": self.count}

    @app.get("/incr")
    def incr(self):
        self.count += 1
        return {"count": self.count}

    @app.get("/decr")
    def decr(self):
        self.count -= 1
        return {"count": self.count}
Counter.deploy()
!curl -X GET localhost:8000/Counter/
```
# Scraping MetaData and getting Fulltexts
This tutorial notebook uses the functions from `CorpusGenerator` and shows how to get the abstracts and fulltexts from Scopus.
The first part of the notebook is used for pulling metadata from articles via Scopus' literature search. It can technically be used to scrape abstracts from anywhere within Scopus' database, but we've specifically limited it to Elsevier journals, as Elsevier is the only publisher whose fulltexts we have access to. Specifically, this sets up a way to pull PII identification numbers automatically.
To manually test queries, go to https://www.scopus.com/search/form.uri?display=advanced
Elsevier maintains a list of all its active journals in a single Excel spreadsheet, available here: https://www.elsevier.com/__data/promis_misc/sd-content/journals/jnlactivesubject.xls
The second part of the notebook uses the metadata generated in the first part and retrieves the fulltexts from it.
```
import sys
sys.path.append('/Users/nisarg/Desktop/summer research/BETO_NLP/modules')
import corpus_generation
from corpus_generation import CorpusGenerator
```
In order to get the articles, the first step is to get an API key from Scopus and add it to your local config file. You can easily get an API key from https://dev.elsevier.com/documentation/SCOPUSSearchAPI.wadl with a quick registration.
Once you have your API key, you need to add it to your computer using the following command:
```
import pybliometrics
pybliometrics.scopus.utils.create_config()
```
This will prompt you to enter the API key you obtained from the Scopus website. Once that is done, you are ready to download the articles using the following functions.
**Note**: While downloading articles from Scopus, make sure you are connected to the UW VPN (All Internet Traffic) using the BIG-IP Edge Client. Without it you might get a Scopus authorization error.
Your `scopus path` is under the `.scopus` directory on your local machine.
`scopus_path = '/Users/nisarg/.scopus/'`
The config path for `pybliometrics` is `/Users/nisarg/.scopus/config.ini` (this will vary with your local path).
## Walking through the algorithm
The class takes the apikey and cache_path. We will also define the other parameters required by the class methods and show how to use the class to generate the corpus you need.
`apikey:` can be a single API key or a list of keys generated from Scopus.
```
#Enter your keys in the below list
apikey = ['a', 'b', 'c']
scopus_path = '/Users/nisarg/.scopus/'
c_gen = CorpusGenerator(apikey, scopus_path)
```
`term_list` is the list of keywords from which the function generates the corpus.
### Example for getting the corpus and the metadata from the journals
After specifying the term_list and save_dir (where the generated corpus is stored), we use the `get_corpus` function. `save_dir` stores the corpus as well as the fulltexts in the same path as different `.json` files. If you want to obtain the fulltexts along with the corpus, set `fulltexts=True` as shown in the example below.
You also have the option to generate the fulltexts later, using the function `get_fulltexts`.
```
term_list = ['deposition', 'corrosion', 'inhibit', 'corrosive', 'resistance', 'protect', 'acid', 'base', 'coke', 'coking', 'anti', \
'layer', 'steel', 'mild steel', 'coating', 'degradation', 'oxidation', \
'film', 'photo-corrosion', 'hydrolysis', 'Schiff']
save_dir = '/Users/nisarg/Desktop/summer research/Ci_pii'
c_gen.get_corpus(term_list, range(1995,2021), save_dir, fulltexts=True)
```
After obtaining the PIIs, the metadata will be generated in `save_dir`, which is then used for obtaining the fulltexts as well as the dataframe of all the abstracts.
```
#If fulltexts not obtained using the get_corpus, can be obtained separately using the below function
c_gen.get_fulltexts(save_dir)
dataframe_path = '/Users/nisarg/Desktop'
c_gen.make_dataframe(dataframe_path, save_dir)
```
# Illustration of loading GTC_FPSDP output files
This example shows how to use the `FPSDP` package to load and examine GTC output files.
## 1. Importing the GTC_Loader module
The `GTC_Loader` module is located in the `FPSDP.Plasma.GTC_Profile` package. In addition to the main `GTC_Loader` class, some other useful modules are loaded and can be accessed via this module; e.g. `FPSDP.Geometry.Grid.Cartesian2D` can be used to create 2D Cartesian grids. The following import statement fetches the `GTC_Loader` module and names it `gtc`.
```
import FPSDP.Plasma.GTC_Profile.GTC_Loader as gtc
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
## 2. Define relevant quantities
Before loading GTC output files, it is convenient to define some quantities for initialization. They are:
1.The directory that contains all GTC output files
```
gtc_path = 'Data/GTC_Outputs/oct25/'
```
2.The grid on which all data will be interpolated or extrapolated.
For now, only `Cartesian2D` grids are accepted. `Cartesian2D` grids can be generated by giving the __*DownLeft*__ and __*UpRight*__ coordinates (Z,R) of the box (in meters), plus either the numbers of grid points __*NR*__ and __*NZ*__, or the resolutions __*ResR*__ and __*ResZ*__ in each direction (in meters). For our GTC run, let's make a box that's larger than the simulation domain:
```
grid2d = gtc.Cartesian2D(DownLeft = (-0.6,0.6), UpRight = (0.6,1.2), ResR = 0.01, ResZ = 0.01)
```
3.The time steps we are interested in
Valid time steps are the integers in the "*snapXXXXXXX_fpsdp.json*" file names. We should provide a list of integers to `GTC_Loader`; if any of the time steps are not available, an exception will be raised along with information on the valid time steps. We'll see an example later.
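As a standalone illustration of how those integers can be recovered from the snapshot file names (the file names below are made up for the example, and this is not `GTC_Loader`'s internal code):

```python
import re

# Hypothetical directory listing: two snapshot files plus an unrelated file.
filenames = ["snap0000001_fpsdp.json", "snap0000002_fpsdp.json", "misc.txt"]

valid_steps = []
for f in filenames:
    # Capture the integer time step embedded in "snapXXXXXXX_fpsdp.json".
    m = re.match(r"snap(\d+)_fpsdp\.json$", f)
    if m:
        valid_steps.append(int(m.group(1)))
valid_steps.sort()
print(valid_steps)  # [1, 2]
```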
```
timesteps = [1,2,3]
```
## 3. Load data files
Now, we are ready to load the output files:
```
gtcdata = gtc.GTC_Loader(gtc_path,grid2d,timesteps)
```
As we can see, our 2D grid is first detected and accepted. Then an error occurs: since our output files only contain time steps 1 and 2, when we try to acquire time 3, the loader complains and tells us only `[1,2]` are available. Let's try again:
```
timesteps = [1,2]
gtcdata = gtc.GTC_Loader(gtc_path,grid2d,timesteps)
```
This time, all went well. Now that gtcdata is ready, we can take a look at its content.
## 4. Examine data
Python objects use `__dict__` to store all attributes; we can use its `keys()` method to list all the attribute names.
```
gtcdata.__dict__.keys()
```
That's a LOT of stuff... Fortunately, only some of them are supposed to be used directly. Let me introduce them one by one.
#### 1.GTC run parameters
Some relevant GTC run parameters are determined by data in gtc.in.out and gtc.out files. They are: isEM, HaveElectron **(MORE IS TO BE ADDED)**
`isEM` is a boolean flag showing if the GTC run has **electromagnetic perturbations**.
```
gtcdata.isEM
```
It's shown that our GTC run here has electromagnetic perturbations.
`HaveElectron` is a boolean flag showing if the GTC run has **non-adiabatic electrons**.
```
gtcdata.HaveElectron
```
Our GTC run apparently doesn't include non-adiabatic electrons
#### 2.Raw data got from GTC output files
The second kind of attributes stores the raw data read from GTC output files. They are normally named after their data entry names in the output files: **`R_gtc`, `Z_gtc`, `a_gtc`, `theta_gtc`, `R_eq`, `Z_eq`, `a_eq`, `B_phi`, `B_R`, `B_Z`**, and **`phi`**.
`R_gtc`, `Z_gtc` are the R,Z coordinates of each mesh grid point in the GTC simulation. `a_gtc` is the radial flux coordinate on this mesh; in our case, it's the poloidal magnetic flux $\psi_p$. `theta_gtc` is the poloidal flux coordinate $\theta$ on the same mesh. Let's take a look at them:
```
fig=plt.figure()
plt.scatter(gtcdata.R_gtc, gtcdata.Z_gtc, s=2, c=gtcdata.a_gtc, linewidth = 0.1)
plt.colorbar()
fig.axes[0].set_aspect(1)
fig = plt.figure()
plt.scatter(gtcdata.R_gtc, gtcdata.Z_gtc, s=2, c=gtcdata.theta_gtc, linewidth = 0.1)
plt.colorbar()
fig.axes[0].set_aspect(1)
```
It is clear that $\psi_p$ is not normalized, and $\theta$ is defined between $[0,2\pi)$
`R_eq`, `Z_eq` and `a_eq` have similar physical meanings to their `_gtc` counterparts. The difference is that, by definition, they cover the whole poloidal cross-section, from the magnetic axis to the outermost closed flux surface, while the `_gtc` quantities only cover the GTC simulation region, which usually excludes the magnetic axis and edge region. These `_eq` quantities are used to interpolate all equilibrium profiles and the magnetic field. While the equilibrium electron density and temperature are functions of $\psi_p$ only, i.e. $n_e(\psi_p)$ and $T_e(\psi_p)$, the equilibrium magnetic field is a vector field on the whole poloidal cross-section. We use `B_phi`, `B_R` and `B_Z` to store its 3 components. Let's take a look at them: **(ISSUE #1 NEEDS TO BE RESOLVED)**
```
fig = plt.figure()
plt.scatter(gtcdata.R_eq,gtcdata.Z_eq, s=2, c= gtcdata.B_phi, linewidth = 0.1)
plt.colorbar()
fig.axes[0].set_aspect(1)
fig = plt.figure()
plt.scatter(gtcdata.R_eq,gtcdata.Z_eq, s=2, c= gtcdata.B_R, linewidth = 0.1)
plt.colorbar()
fig.axes[0].set_aspect(1)
fig = plt.figure()
plt.scatter(gtcdata.R_eq,gtcdata.Z_eq, s=2, c= gtcdata.B_Z, linewidth = 0.1)
plt.colorbar()
fig.axes[0].set_aspect(1)
```
Finally, `phi` stores the perturbed potential at each requested time step. Let's see snapshot 1:
```
fig = plt.figure()
plt.scatter(gtcdata.R_gtc,gtcdata.Z_gtc, s=2, c= gtcdata.phi[0], linewidth = 0.1)
plt.colorbar()
fig.axes[0].set_aspect(1)
```
#### 3. 1D equilibrium profiles
As mentioned before, equilibrium density and temperature profiles are given as functions of $\psi_p$ only. These functions are specified by a $\psi_p$ array (`a_1D`) and corresponding $n_e$ (`ne0_1D`) and $T_e$ (`Te0_1D`) values. **(ISSUE #2 NEEDS TO BE RESOLVED)**
```
plt.plot(gtcdata.a_1D,gtcdata.ne0_1D)
plt.plot(gtcdata.a_1D,gtcdata.Te0_1D)
```
#### 4. Interpolated and/or extrapolated data
All interpolated data are stored in `_on_grid` quantities. Let's look at them one by one:
$\psi_p$ (`a_on_grid`) is interpolated inside the convex hull of points given by `R_eq` and `Z_eq`, and linearly extrapolated outside based on the two partial derivatives on the boundary.
```
fig = plt.figure()
plt.imshow(gtcdata.a_on_grid, extent = [0.6,1.2,-0.6,0.6])
plt.colorbar()
fig.axes[0].set_aspect(1)
```
$n_{e0}$ and $T_{e0}$ are interpolated on $\psi_p$, and then applied to `a_on_grid` to obtain values on grid. **(ISSUE #2 NEEDS TO BE RESOLVED)**
```
fig = plt.figure()
plt.imshow(gtcdata.ne0_on_grid, extent = [0.6,1.2,-0.6,0.6])
plt.colorbar()
fig.axes[0].set_aspect(1)
fig = plt.figure()
plt.imshow(gtcdata.Te0_on_grid, extent = [0.6,1.2,-0.6,0.6])
plt.colorbar()
fig.axes[0].set_aspect(1)
```
`Bphi_on_grid`, `BR_on_grid`, and `BZ_on_grid` are similarly interpolated and extrapolated. **(ISSUE #1 NEEDS TO BE RESOLVED)**
```
fig = plt.figure()
plt.imshow(gtcdata.Bphi_on_grid, extent = [0.6,1.2,-0.6,0.6])
plt.colorbar()
fig.axes[0].set_aspect(1)
fig = plt.figure()
plt.imshow(gtcdata.BR_on_grid, extent = [0.6,1.2,-0.6,0.6])
plt.colorbar()
fig.axes[0].set_aspect(1)
fig = plt.figure()
plt.imshow(gtcdata.BZ_on_grid, extent = [0.6,1.2,-0.6,0.6])
plt.colorbar()
fig.axes[0].set_aspect(1)
```
`phi` is interpolated on `R_gtc` and `Z_gtc`, but not extrapolated. All points outside the simulation grid are assigned 0 values.
```
fig = plt.figure()
plt.imshow(gtcdata.phi_on_grid[0], extent = [0.6,1.2,-0.6,0.6])
plt.colorbar()
fig.axes[0].set_aspect(1)
```
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# TensorFlow Addons Optimizers: LazyAdam
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/addons/tutorials/optimizers_lazyadam"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/addons/blob/master/docs/tutorials/optimizers_lazyadam.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/addons/blob/master/docs/tutorials/optimizers_lazyadam.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/addons/docs/tutorials/optimizers_lazyadam.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Overview
This notebook will demonstrate how to use the lazy adam optimizer from the Addons package.
## LazyAdam
> LazyAdam is a variant of the Adam optimizer that handles sparse updates more efficiently.
The original Adam algorithm maintains two moving-average accumulators for
each trainable variable; the accumulators are updated at every step.
This class provides lazier handling of gradient updates for sparse
variables. It only updates moving-average accumulators for sparse variable
indices that appear in the current batch, rather than updating the
accumulators for all indices. Compared with the original Adam optimizer,
it can provide large improvements in model training throughput for some
applications. However, it provides slightly different semantics than the
original Adam algorithm, and may lead to different empirical results.
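The update rule behind that description can be sketched in plain Python. This is a conceptual illustration only, not TFA's implementation; `lazy_adam_step` is a made-up helper, and each "row" is simplified to a scalar:

```python
import math

def lazy_adam_step(params, m, v, sparse_grads, t, lr=0.01,
                   beta1=0.9, beta2=0.999, eps=1e-8):
    """Apply one Adam step only to the rows present in sparse_grads.

    params, m, v: lists of floats (one scalar "row" each, for simplicity).
    sparse_grads: dict mapping row index -> gradient value.
    """
    for i, g in sparse_grads.items():
        m[i] = beta1 * m[i] + (1 - beta1) * g          # first-moment update
        v[i] = beta2 * v[i] + (1 - beta2) * g * g      # second-moment update
        m_hat = m[i] / (1 - beta1 ** t)                # bias correction
        v_hat = v[i] / (1 - beta2 ** t)
        params[i] -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return params, m, v

params, m, v = [1.0, 1.0, 1.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
# Only row 1 has a gradient in this "batch"; rows 0 and 2 (and their
# accumulators) are left completely untouched -- that is the "lazy" part.
params, m, v = lazy_adam_step(params, m, v, {1: 0.5}, t=1)
print(params)  # rows 0 and 2 unchanged; row 1 nudged by roughly lr
```

Standard Adam would instead decay `m` and `v` for every row at every step, even rows whose gradient was zero.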
## Setup
```
import tensorflow as tf
import tensorflow_addons as tfa
# Hyperparameters
batch_size=64
epochs=10
```
## Build the Model
```
model = tf.keras.Sequential([
tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])
```
## Prepare the Data
```
# Load MNIST dataset as NumPy arrays
dataset = {}
num_validation = 10000
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Preprocess the data
x_train = x_train.reshape(-1, 784).astype('float32') / 255
x_test = x_test.reshape(-1, 784).astype('float32') / 255
```
## Train and Evaluate
Simply replace typical Keras optimizers with the new TFA optimizer:
```
# Compile the model
model.compile(
optimizer=tfa.optimizers.LazyAdam(0.001), # Utilize TFA optimizer
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
# Train the network
history = model.fit(
x_train,
y_train,
batch_size=batch_size,
epochs=epochs)
# Evaluate the network
print('Evaluate on test data:')
results = model.evaluate(x_test, y_test, batch_size=128, verbose = 2)
print('Test loss = {0}, Test acc: {1}'.format(results[0], results[1]))
```
# Notebook on Diagnosing common LSTM problems and how to tune hyperparameters
## Common mistakes
### Evaluation of model skill
Remember to correctly evaluate your models. If you train only once on the training data and evaluate once on the test data, you fail to take the stochastic nature of deep learning models into consideration; this includes random weight initialization and shuffling after each epoch during SGD.
A recommended way of estimating the skill of a model is shown below. Note that even though this is the recommended approach, it might take too long if you have large amounts of data and deadlines to meet.
```
# pseudo-code: random_split, fit and compare stand in for your own
# data-splitting, training and metric routines
scores = []
for i in some_range:
    train, test = random_split(data)
    model = fit(train.X, train.y)
    predictions = model.predict(test.X)
    skill = compare(test.y, predictions)
    scores.append(skill)
final_skill = mean(scores)
```
### Instability of models
Once you have created a model, it can be a good idea to train it several times on the same dataset, changing only the seed for the random number generator. Afterwards, review the mean and standard deviation of the scores; a large standard deviation suggests your model might be unstable.
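A sketch of that check, where `build_and_evaluate` is a placeholder for your own seeded build/train/evaluate routine (the scores here are simulated for illustration):

```python
import statistics

def build_and_evaluate(seed):
    # Stand-in for: seed the RNGs, build the model, fit, return a test score.
    import random
    return 0.85 + random.Random(seed).uniform(-0.02, 0.02)

# Train the "same" model repeatedly, varying only the seed.
scores = [build_and_evaluate(seed) for seed in range(10)]
print('mean=%.4f std=%.4f' % (statistics.mean(scores), statistics.stdev(scores)))
```

A standard deviation that is large relative to the mean is the warning sign of instability.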
## Diagnosing under- and overfitting
Start off by looking at the history of the fitted model. Keras tracks the various metrics that you specify. Also remember to specify a validation set if you're working on a serious project. The history of parameters from a fitted model in Keras can be retrieved like so:
```
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X, Y, epochs=100, validation_split=0.33)
print(history.history['loss'])
print(history.history['acc'])
print(history.history['val_loss'])
print(history.history['val_acc'])
```
You can plot these values retrieved from the history object. This gives you a visual perspective on the loss and accuracy values.
```
history = model.fit(X, Y, epochs=100, validation_data=(valX, valY))
pyplot.plot(history.history['loss'])
pyplot.plot(history.history['val_loss'])
pyplot.title('model train vs validation loss')
pyplot.ylabel('loss')
pyplot.xlabel('epoch')
pyplot.legend(['train', 'validation'], loc='upper right')
pyplot.show()
```
### Underfitting example
An underfit model is one that performs poorly on the training dataset, and consequently cannot be expected to perform well on the test dataset either; a common sign is a training loss that is still decreasing at the end of the run, suggesting the model is capable of further learning.
```
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from matplotlib import pyplot
from numpy import array
# return training data
def get_train():
    seq = [[0.0, 0.1], [0.1, 0.2], [0.2, 0.3], [0.3, 0.4], [0.4, 0.5]]
    seq = array(seq)
    X, y = seq[:, 0], seq[:, 1]
    X = X.reshape((len(X), 1, 1))
    return X, y

# return validation data
def get_val():
    seq = [[0.5, 0.6], [0.6, 0.7], [0.7, 0.8], [0.8, 0.9], [0.9, 1.0]]
    seq = array(seq)
    X, y = seq[:, 0], seq[:, 1]
    X = X.reshape((len(X), 1, 1))
    return X, y
# define model
model = Sequential()
model.add(LSTM(10, input_shape=(1,1)))
model.add(Dense(1, activation='linear'))
# compile model
model.compile(loss='mse', optimizer='adam')
# fit model
X,y = get_train()
valX, valY = get_val()
history = model.fit(X, y, epochs=100,
validation_data=(valX, valY),
shuffle=False, verbose=0)
# plot train and validation loss
pyplot.plot(history.history['loss'])
pyplot.plot(history.history['val_loss'])
pyplot.title('model train vs validation loss')
pyplot.ylabel('loss')
pyplot.xlabel('epoch')
pyplot.legend(['train', 'validation'], loc='upper right')
pyplot.show()
```
Another sign of underfitting is a model whose loss flattens out early at a poor level of skill, regardless of further training. Below is an example of a model with too few memory cells, which underfits the problem.
```
# define model
model = Sequential()
model.add(LSTM(1, input_shape=(1,1)))
model.add(Dense(1, activation='linear'))
# compile model
model.compile(loss='mae', optimizer='sgd')
# fit model
X,y = get_train()
valX, valY = get_val()
history = model.fit(X, y, epochs=400,
validation_data=(valX, valY),
shuffle=False, verbose=0)
# plot train and validation loss
pyplot.plot(history.history['loss'])
pyplot.plot(history.history['val_loss'])
pyplot.title('model train vs validation loss')
pyplot.ylabel('loss')
pyplot.xlabel('epoch')
pyplot.legend(['train', 'validation'], loc='upper right')
pyplot.show()
```
This example shows the loss characteristics of an underfit model that appears under-provisioned, i.e. with too little capacity to learn the problem.
### Overfitting example
An overfit model is one where performance on the train set is good and continues to improve, whereas performance on the validation set improves to a point and then begins to degrade.
```
# define model
model = Sequential()
model.add(LSTM(10, input_shape=(1,1)))
model.add(Dense(1, activation='linear'))
# compile model
model.compile(loss='mse', optimizer='adam')
# fit model
X,y = get_train()
valX, valY = get_val()
history = model.fit(X, y, epochs=1200,
validation_data=(valX, valY),
shuffle=False, verbose=0)
# plot train and validation loss
pyplot.plot(history.history['loss'][500:])
pyplot.plot(history.history['val_loss'][500:])
pyplot.title('model train vs validation loss')
pyplot.ylabel('loss')
pyplot.xlabel('epoch')
pyplot.legend(['train', 'validation'], loc='upper right')
pyplot.show()
```
### Plotting multiple model fittings
It can be a good idea to plot the results of multiple fit runs. That way you can see whether the over- or underfitting is consistent, or simply due to the stochastic nature of neural networks.
```
from pandas import DataFrame
train = DataFrame()
val = DataFrame()
for i in range(5):
    # define model
    model = Sequential()
    model.add(LSTM(10, input_shape=(1,1)))
    model.add(Dense(1, activation='linear'))
    # compile model
    model.compile(loss='mse', optimizer='adam')
    X,y = get_train()
    valX, valY = get_val()
    # fit model
    history = model.fit(X, y, epochs=300,
                        validation_data=(valX, valY),
                        shuffle=False, verbose=0)
    # store history
    train[str(i)] = history.history['loss']
    val[str(i)] = history.history['val_loss']
# plot train and validation loss across multiple runs
pyplot.plot(train, color='blue', label='train')
pyplot.plot(val, color='orange', label='validation')
pyplot.title('model train vs validation loss')
pyplot.ylabel('loss')
pyplot.xlabel('epoch')
pyplot.show()
```
## Framing the problem
An outline of the biggest things to consider when framing the problem, so that we can create a robust model.
### Scaling values
Try different ways of scaling your data, such as:
* Normalization
* Standardization
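A minimal sketch of both scalings on a toy series, in plain Python for clarity (in practice you might use scikit-learn's `MinMaxScaler` and `StandardScaler`):

```python
data = [10.0, 20.0, 30.0, 40.0, 50.0]

# Normalization: rescale values to the range [0, 1].
lo, hi = min(data), max(data)
normalized = [(x - lo) / (hi - lo) for x in data]
print(normalized)        # [0.0, 0.25, 0.5, 0.75, 1.0]

# Standardization: rescale to zero mean and unit variance.
mean = sum(data) / len(data)
std = (sum((x - mean) ** 2 for x in data) / len(data)) ** 0.5
standardized = [(x - mean) / std for x in data]
print(round(sum(standardized), 6))  # 0.0
```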
### Encoding values
Try different ways of encoding your data, such as:
* Real-value encoding
* Integer encoding
* One hot encoding
* Word2Vec
* ...
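The first three encodings can be sketched in a few lines (toy categorical data for illustration):

```python
values = ['cold', 'warm', 'hot', 'warm', 'cold']

# Integer encoding: map each category to an index.
categories = sorted(set(values))             # ['cold', 'hot', 'warm']
to_int = {c: i for i, c in enumerate(categories)}
integer_encoded = [to_int[v] for v in values]
print(integer_encoded)                       # [0, 2, 1, 2, 0]

# One hot encoding: one indicator column per category.
one_hot = [[1 if i == idx else 0 for i in range(len(categories))]
           for idx in integer_encoded]
print(one_hot[0])                            # [1, 0, 0]
```

Real-value encoding simply keeps numeric values as they are; Word2Vec instead learns dense vectors from data.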
### Stationarity
When working with data such as time series, attempt to make the series stationary in order to properly perform statistical analysis. Some things to do include:
* Remove trends using differencing.
* Remove seasonality with seasonal adjustment.
* Stabilize the variance by log-transforming the values.
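For example, first-order differencing removes a linear trend (toy series for illustration):

```python
# A linearly trending series: 1.0, 3.0, 5.0, 7.0, 9.0, 11.0
series = [t * 2.0 + 1.0 for t in range(6)]

# First-order difference: diff[t] = x[t] - x[t-1]
diff = [series[t] - series[t - 1] for t in range(1, len(series))]
print(diff)  # [2.0, 2.0, 2.0, 2.0, 2.0] -- the trend is gone
```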
### Sequence Length (n_timesteps)
Remember, the length of the input sequence also impacts the backpropagation through time (BPTT) used to estimate the error gradient when updating the weights. It can have an effect on how quickly the model learns and what is learned.
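A common way to control this is the window length used when preparing the data: a long sequence is cut into fixed-length windows of `n_timesteps`, each predicting the next value. The helper below is illustrative, not part of Keras:

```python
def make_windows(sequence, n_timesteps):
    """Return (X, y) pairs: each window of n_timesteps predicts the next value."""
    X, y = [], []
    for i in range(len(sequence) - n_timesteps):
        X.append(sequence[i:i + n_timesteps])
        y.append(sequence[i + n_timesteps])
    return X, y

X, y = make_windows([0.1, 0.2, 0.3, 0.4, 0.5], n_timesteps=2)
print(X)  # [[0.1, 0.2], [0.2, 0.3], [0.3, 0.4]]
print(y)  # [0.3, 0.4, 0.5]
```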
### Model type
* One-to-one.
* One-to-many.
* Many-to-one.
* Many-to-many.
## Tune The Model
### Architectures
Choose an appropriate architecture such as those implemented in this repo, or if in doubt try all of them to see which yields the best result.
### Memory Cells
You can't know the *best* number of memory cells up front, so you have to search for it, for instance with some form of Bayesian optimization such as the scikit-optimize (skopt) library. More common approaches include:
* Grid search.
* Numbers quoted in papers describing similar architectures.
### Hidden Layers
* Try grid searching the number of layers and memory cells together.
* Try using patterns of stacking LSTM layers quoted in research papers.
### Weight Initialization
Some time-series problems, framed as regression problems in a specific way, might have to use a linear activation function. If this is the case, be careful with weight initialization to avoid vanishing gradients when training the network. Here are a couple of methods to try out:
* random uniform
* random normal
* glorot uniform
* glorot normal
### Activation functions
* Sigmoid
* ReLu
* Linear
* tanh
## Tune Learning Behavior
### Optimization algorithms
A good default implementation of gradient descent is the Adam algorithm. This is because it
automatically uses a custom learning rate for each parameter (weight) in the model, combining
the best properties of the AdaGrad and RMSProp methods.
Other popular optimizers include:
* RMSprop
* Adagrad
* Nadam
* SGD
### Learning Rate
The learning rate controls how much the weights are updated in response to the estimated gradient at the end of each batch. This can have a large impact on the trade-off between how quickly and how well the model learns the problem. Furthermore, small learning rates commonly require a large number of epochs. Consider using the same optimizer with different learning-rate values to see the difference clearly.
* Grid search learning rate values (e.g. 0.1, 0.001, 0.0001).
* Experiment with a learning rate that decays with the number of epochs (e.g. via callback).
* Experiment with updating a fit model with training runs at smaller and smaller learning rates.
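The second bullet can be sketched as a plain schedule function; in Keras you could pass something like this to a `LearningRateScheduler` callback (the function name here is illustrative):

```python
def decayed_lr(initial_lr, decay_rate, epoch):
    """Exponential decay: the learning rate is multiplied by decay_rate each epoch."""
    return initial_lr * (decay_rate ** epoch)

for epoch in range(5):
    print(epoch, decayed_lr(0.1, 0.5, epoch))
# lr halves every epoch: 0.1, 0.05, 0.025, 0.0125, 0.00625
```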
### Batch Size
The batch size is the number of samples between updates to the model weights. A good default batch size is 32 samples. Sometimes, however, it can be reasonable to set the batch size to a larger value, for instance to fully utilize your GPU. Common values for batch size include:
* Batch size of 1 for stochastic gradient descent.
* Batch size of n, where n is the number of samples for batch gradient descent.
* Grid search batch sizes in powers of 2 from 2 to 256 and beyond.
### Regularization
To avoid overfitting your model you can use regularization. Dropout randomly skips neurons during training; dropout values between 0 and 1 are common. In Keras you have:
* dropout: dropout applied on input connections.
* recurrent dropout: dropout applied to recurrent connections.
Experiment with dropout values, and dropout for different layers of your model.
LSTMs also support other forms of regularization, such as weight regularization, which imposes pressure to decrease the size of the network weights. Again, these can be set on the LSTM layer with the arguments:
* bias regularizer: regularization on the bias weights.
* kernel regularizer: regularization on the input weights.
* recurrent regularizer: regularization on the recurrent weights.
Some have found that dropout on the input connections, together with regularization on the input weights, results in better-performing models.
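As an illustration of what dropout actually does during training, here is a conceptual sketch of "inverted" dropout (not Keras' internals): each input survives with probability `1 - rate` and is scaled by `1/(1 - rate)` so the expected activation is unchanged.

```python
import random

def dropout(inputs, rate, rng):
    """Zero each input with probability `rate`; scale survivors to keep the mean."""
    keep = 1.0 - rate
    return [x / keep if rng.random() < keep else 0.0 for x in inputs]

rng = random.Random(0)
out = dropout([1.0] * 10, rate=0.3, rng=rng)
print(out)  # roughly 70% of entries survive, each scaled to ~1.43
```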
---
**API requests to UniProt**
---
Resources:
- https://www.uniprot.org/help/api_queries
- https://www.uniprot.org/help/query-fields
- https://www.uniprot.org/help/api_idmapping
- https://www.uniprot.org/docs/dbxref
- https://www.uniprot.org/taxonomy/
- https://www.uniprot.org/help/uniprotkb_column_names
- https://www.uniprot.org/docs/userman.htm#linetypes
```
import requests
import os
import pandas as pd
import re
import regex as re2
from sklearn.feature_extraction.text import CountVectorizer
from io import StringIO
import numpy as np
import pickle
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from toolbox import *
cfg = load_cfg()
logVersions = load_LogVersions()
```
# Download data
- v3.4 is downloaded on 09/11/2021
```
version_uniprot = "3-4"
baseRequest= "https://www.uniprot.org/uniprot/?"
requestParameters = {
    "query": [
        "active:yes",    # not obsolete
        "reviewed:yes",  # Swiss-prot
        "organism:9606"  # Human only
    ],
    "format": ["tab"],
    "columns": ["id,"+
                "go(biological process),"+
                "go(cellular component),"+
                "go(molecular function),"+
                # "database(ensembl),"+
                "database(bgee),"+
                "feature(DOMAIN EXTENT),"+
                "feature(MOTIF),"+
                "sequence"
    ]
}
baseRequest += '&'.join(['%s=%s' % (k,"+AND+".join(v)) for k,v in requestParameters.items()])
print(baseRequest)
results = requests.get(baseRequest)
# Sanity check
assert results.ok
uniprotEnriched1 = pd.read_csv(StringIO(results.content.decode("utf-8")), sep="\t")
glance(uniprotEnriched1)
```
---
**Clean columns**
```
uniprotEnriched1['Cross-reference (bgee)'] = uniprotEnriched1['Cross-reference (bgee)'].str.replace(';','')
uniprotEnriched1['Cross-reference (bgee)'][:5]
print(uniprotEnriched1.Sequence[0])
```
----
**Sort by alphabetical order**
```
uniprotEnriched2 = uniprotEnriched1.sort_values(by="Entry").reset_index(drop=True)
glance(uniprotEnriched2)
```
---
**Test uniprot matching**
```
uniprotkbIdsList = list(set(uniprotEnriched2.Entry))
uniprotMapping = mappingUniprotIDs(fromID = 'ACC', listIDs = uniprotkbIdsList)
assert len(uniprotMapping.loc[uniprotMapping.From != uniprotMapping.To]) == 0
```
---
**Export**
```
uniprotEnriched_export = uniprotEnriched2
# logVersions['UniProt'] = dict()
logVersions['UniProt']['rawData'] = version_uniprot
dump_LogVersions(logVersions)
# Export raw enriched data
uniprotEnriched_export.to_pickle(os.path.join(cfg['rawDataUniProt'],
"uniprot_allProteinsEnriched_Human_v{}.pkl".format(version_uniprot)))
# Export protein list
with open(os.path.join(cfg['rawDataUniProt'], "uniprot_allProteins_Human_v{}.pkl".format(version_uniprot)), 'w') as f:
    for item in uniprotEnriched_export.Entry:
        f.write("%s\n" % item)
# Export UniProt/Bgee matching
uniprotEnriched_export.loc[:, ['Entry', 'Cross-reference (bgee)']].to_pickle(os.path.join(cfg['rawDataUniProt'],
"uniprot_allProteinsBgee_Human_v{}.pkl".format(version_uniprot)))
uniprotEnriched_export.loc[:, ['Entry', 'Cross-reference (bgee)']]
uniprotBgeeMatching = pd.read_pickle(
os.path.join(cfg['rawDataUniProt'],
"uniprot_allProteinsBgee_Human_v{}.pkl".format(logVersions['UniProt']['rawData'])))
glance(uniprotBgeeMatching)
```
# Create features datasets
- v2.0 is current preprocessing
```
logVersions = load_LogVersions()
myVersionUniprot = '2-0'
logVersions['UniProt']['preprocessed'] = myVersionUniprot
dump_LogVersions(logVersions)
uniprotEnriched = pd.read_pickle(os.path.join(cfg['rawDataUniProt'],
"uniprot_allProteinsEnriched_Human_v{}.pkl".format(logVersions['UniProt']['rawData'])))
glance(uniprotEnriched)
uniprotEnriched.info()
uniprotEnriched.isna().sum()
uniprotEnriched2 = uniprotEnriched.fillna('')
```
## Biological process
```
bow, vectorizer = createBoW(
createGOlist(GOcol = uniprotEnriched2["Gene ontology (biological process)"],
regex0 = r"(?<=\[GO:)[\d]+(?=\])")
)
bow
# This one takes a while to run
bioProcessUniprot_BoW = pd.DataFrame(bow.todense())
bioProcessUniprot_BoW.columns = vectorizer.get_feature_names()
bioProcessUniprot_BoW['uniprotID'] = uniprotEnriched2['Entry']
glance(bioProcessUniprot_BoW)
```
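The look-behind/look-ahead regex passed to `createGOlist` above can be checked in isolation (the annotation string below is a made-up example of the UniProt GO-column format):

```python
import re

annotation = "cell adhesion [GO:0007155]; cell migration [GO:0016477]"
# Capture the digits of each GO identifier, e.g. "[GO:0007155]" -> "0007155".
go_ids = re.findall(r"(?<=\[GO:)[\d]+(?=\])", annotation)
print(go_ids)  # ['0007155', '0016477']
```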
---
**Export**
```
bioProcessUniprot_BoW.to_pickle(
os.path.join(
cfg['outputPreprocessingUniprot'],
"bioProcessUniprot_v{}--{}.pkl".format(logVersions['UniProt']['rawData'], logVersions['UniProt']['preprocessed'])
)
)
```
## Cellular component
```
bow, vectorizer = createBoW(
createGOlist(GOcol = uniprotEnriched2["Gene ontology (cellular component)"],
regex0 = r"(?<=\[GO:)[\d]+(?=\])")
)
bow
cellCompUniprot_BoW = pd.DataFrame(bow.todense())
cellCompUniprot_BoW.columns = vectorizer.get_feature_names()
cellCompUniprot_BoW['uniprotID'] = uniprotEnriched2['Entry']
glance(cellCompUniprot_BoW)
```
---
**Export**
```
cellCompUniprot_BoW.to_pickle(
os.path.join(
cfg['outputPreprocessingUniprot'],
"cellCompUniprot_v{}--{}.pkl".format(logVersions['UniProt']['rawData'], logVersions['UniProt']['preprocessed'])
)
)
```
## Molecular function
```
bow, vectorizer = createBoW(
createGOlist(GOcol = uniprotEnriched2["Gene ontology (molecular function)"],
regex0 = r"(?<=\[GO:)[\d]+(?=\])")
)
bow
molFuncUniprot_BoW = pd.DataFrame(bow.todense())
molFuncUniprot_BoW.columns = vectorizer.get_feature_names()
molFuncUniprot_BoW['uniprotID'] = uniprotEnriched2['Entry']
glance(molFuncUniprot_BoW)
```
---
**Export**
```
molFuncUniprot_BoW.to_pickle(
os.path.join(
cfg['outputPreprocessingUniprot'],
"molFuncUniprot_v{}--{}.pkl".format(logVersions['UniProt']['rawData'], logVersions['UniProt']['preprocessed'])
)
)
```
## Domain
```
bow, vectorizer = createBoW(
createGOlist(GOcol = uniprotEnriched2["Domain [FT]"],
regex0 = r"(?<=note=\")[^\"]+(?=\")")
)
bow
# This one takes a while to run
domain_BoW = pd.DataFrame(bow.todense())
domain_BoW.columns = vectorizer.get_feature_names()
domain_BoW['uniprotID'] = uniprotEnriched2['Entry']
glance(domain_BoW)
```
---
**Test of the parsing method**
```
temp1 = pd.DataFrame({'a': uniprotEnriched2["Domain [FT]"],
'b': createGOlist(GOcol = uniprotEnriched2["Domain [FT]"],
regex0 = r"(?<=note=\")[^\"]+(?=\")"
)}
)
temp1.loc[temp1.a != '']
# Sanity check
foo = temp1.loc[(temp1.a != '')&(temp1.b == '')]
assert len(foo) == 0
```
---
**Export**
```
domain_BoW.to_pickle(
os.path.join(
cfg['outputPreprocessingUniprot'],
"domainFT_v{}--{}.pkl".format(logVersions['UniProt']['rawData'], logVersions['UniProt']['preprocessed'])
)
)
```
## Motif
```
bow, vectorizer = createBoW(
createGOlist(GOcol = uniprotEnriched2["Motif"],
regex0 = r"(?<=note=\")[^\"]+(?=\")")
)
bow
# This one takes a while to run
motif_BoW = pd.DataFrame(bow.todense())
motif_BoW.columns = vectorizer.get_feature_names()
motif_BoW['uniprotID'] = uniprotEnriched2['Entry']
glance(motif_BoW)
```
---
**Test of the parsing method**
```
temp1 = pd.DataFrame({'a': uniprotEnriched2["Motif"],
'b': createGOlist(GOcol = uniprotEnriched2["Motif"],
regex0 = r"(?<=note=\")[^\"]+(?=\")"
)}
)
temp1.loc[temp1.a != '']
# Sanity check
foo = temp1.loc[(temp1.a != '')&(temp1.b == '')]
assert len(foo) == 0
```
---
**Export**
```
motif_BoW.to_pickle(
os.path.join(
cfg['outputPreprocessingUniprot'],
"motif_v{}--{}.pkl".format(logVersions['UniProt']['rawData'], logVersions['UniProt']['preprocessed'])
)
)
```
## Sequence
```
uniprotEnriched2.Sequence[0]
sequenceData = uniprotEnriched2.loc[:,['Entry','Sequence']]
sequenceData.columns = ['uniprotID','sequence']
glance(sequenceData)
```
---
**Export**
```
sequenceData.to_pickle(
os.path.join(
cfg['outputPreprocessingUniprot'],
"sequenceData_v{}--{}.pkl".format(logVersions['UniProt']['rawData'], logVersions['UniProt']['preprocessed'])
)
)
```
<img src="images/QISKit-c copy.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="250 px" align="left">
# Hadamard Action: Approach 3
## Jupyter Notebook 3/3 for the *Teach Me QISKIT* Tutorial Competition
- Connor Fieweger
<img src="images/hadamard_action.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="750 px" align="left">
### Starting with QISKit:
In order to run this notebook, one must first download the Quantum Information Software Kit (QISKit) library from IBM at https://github.com/QISKit/qiskit-sdk-py (as well as the supplementary libraries NumPy and SciPy, and an up-to-date version of Python).
One should also sign up for an IBM Q Experience account at https://quantumexperience.ng.bluemix.net/qx/experience in order to generate an APIToken (go to My Account > Advanced) for accessing the backends provided by IBM. The account sign-up and APIToken specification are not strictly necessary, since this notebook assumes use of the local qasm simulator for the sake of simplicity, but they are recommended: seeing your code executed on an actual quantum device in some other location is really quite amazing, and it is one of the unique capabilities of the QISKit library.
```
# import necessary libraries
import numpy as np
from pprint import pprint
from qiskit import QuantumProgram
from qiskit.tools.visualization import plot_histogram
#import Qconfig
# When working with external backends (more on this below),
# be sure that the working directory has a
# Qconfig.py file for importing your APIToken from
# your IBM Q Experience account.
# An example file has been provided, so for working
# in this notebook you can simply set
# the variable values to your credentials and rename
# this file as 'Qconfig.py'
```
The final approach to showing equivalence of the presented circuit diagrams is to use the QISKit library to compute and measure the final state. This is done by creating instances of Python classes that represent a circuit with a given set of registers, and then using class methods on these circuits as the programmatic equivalent of applying gate operations to the qubits. The operations are then executed by a method that calls a backend, i.e. some computing machine invisible to the programmer, to perform the computation and store the results. The backend can be either a classical simulator that mimics the behavior of a quantum circuit as best it can, or an actual quantum computer chip in the dilution refrigerators at the Watson research center.
In reading this notebook, one ought to dig around in the files for QISKit to find the relevant class and method definitions -- the particularly relevant ones in this notebook will be QuantumProgram, QuantumCircuit, and the Register family (ClassicalRegister, QuantumRegister, Register), so take some time now to read through these files.
## Circuit i)
For i), the initial state of the input is represented by the tensor product of the two input qubits in the initial register. This is given by:
$$|\Psi> = |\psi_1> \otimes |\psi_2> = |\psi_2\psi_1>$$
Where each |$\psi$> can be either |0> or |1>
*Note the convention change in the order of qubits in the product state representation on the right -- see appendix notebook under 'Reading a circuit diagram' for why there is a discrepancy here. This notebook will follow the above for consistency with IBM's documentation, which follows the same convention: (https://quantumexperience.ng.bluemix.net/qx/tutorial?sectionId=beginners-guide&page=006-Multi-Qubit_Gates~2F001-Multi-Qubit_Gates)*
```
# This initial state register
# can be realized in python by creating an instance of the
# QISKit QuantumProgram Class with a quantum register of 2 qubits
# and 2 classical ancilla bits for measuring the states
i = QuantumProgram()
n = 2
i_q = i.create_quantum_register("i_q", n)
i_c = i.create_classical_register("i_c", n)
#i.set_api(Qconfig.APItoken, Qconfig.config['url']) # set the APIToken and API url
i.available_backends() #check backends - if you've set up your APIToken properly you
#should be able to see the quantum chips and simulators at IBM
```
https://github.com/QISKit/ibmqx-backend-information/tree/master/backends/ -- follow this url for background on how the quantum chips/simulators work.
*Note: when working with the quantum chip backends, especially when applying CNOTs, be sure to check documentation on the allowed two-qubit gate configurations.*
```
for backend in i.available_backends(): #check backend status
print(backend)
pprint(i.get_backend_status(backend))
```
Throughout the notebook, we'll need to evaluate the final state of a given circuit and display the results, so let's define a function for this:
```
def execute_and_plot(qp, circuits, backend = "local_qasm_simulator"):
"""Executes circuits and plots the final
state histograms for each circuit.
Adapted from 'execute_and_plot' function
in the beginners_guide_composer_examples
notebook provided in IBM's QISKit
tutorial library on GitHub.
Args:
qp: QuantumProgram containing the circuits
circuits (list): list of circuits to execute
backend (string): allows for specifying the backend
to execute on. Defaults to local qasm simulator
downloaded with QISKit library, but can be specified
to run on an actual quantum chip by using the string
names of the available backends at IBM.
"""
# Store the results of the circuit implementation
# using the .execute() method
results = qp.execute(circuits, backend = backend)
for circuit in circuits:
plot_histogram(results.get_counts(circuit)) # .get_counts()
# method returns a dictionary that maps each possible
# final state to the number of instances of
# said state over n evaluations
# (n defaults to 1024 for local qasm simulator),
# where multiple evaluations are a necessity since
# quantum computation outputs are statistically
# informed
```
This program assumes use of the local qasm simulator.
Creating a QuantumCircuit instance and storing it in our QuantumProgram allows us to build up a set of operations to apply to this circuit through class methods and then execute this set of operations, so let's do this for each possible input state and read out the end result.
```
# Initialize circuit:
cnot_i_00 = i.create_circuit("cnot_i_00", [i_q], [i_c])
# Note: qubits are assumed by QISKit
# to be initialized in the |0> state
# Apply gates according to diagram:
cnot_i_00.cx(i_q[0], i_q[1]) # Apply CNOT on line 2 controlled by line 1
# Measure final state:
cnot_i_00.measure(i_q[0], i_c[0]) # Write qubit 1 state onto classical ancilla bit 1
cnot_i_00.measure(i_q[1], i_c[1]) # Write qubit 2 state onto classical ancilla bit 2
# Display final state probabilities:
execute_and_plot(i, ["cnot_i_00"])
```
*Note: The set of circuit operations to be executed can also be specified through a 'QASM', or a string that contains the registers and the set of operators to apply. We can get this string for the circuit we just made through the `.get_qasm()` method. This is also helpful for checking our implementation of the circuit, as we can read off the operations and make sure they match up with the diagram*
```
print(i.get_qasm('cnot_i_00'))
```
*These QASM strings can also be used the other way around to create a circuit through the `.load_qasm_file()` and `load_qasm_text()` methods for the QuantumProgram class.*
Continuing input by input,
```
# Initialize circuit:
cnot_i_01 = i.create_circuit("cnot_i_01", [i_q], [i_c])
cnot_i_01.x(i_q[0]) # Set the 1st qubit to |1> by flipping
# the initialized |0> with an X gate before implementing
# the circuit
# Apply gates according to diagram:
cnot_i_01.cx(i_q[0], i_q[1]) # Apply CNOT controlled by line 1
# Measure final state:
cnot_i_01.measure(i_q[0], i_c[0])
cnot_i_01.measure(i_q[1], i_c[1])
# Display final state probabilities:
execute_and_plot(i, ["cnot_i_01"])
# Initialize circuit:
cnot_i_10 = i.create_circuit("cnot_i_10", [i_q], [i_c])
cnot_i_10.x(i_q[1]) # Set the 2nd qubit to |1>
# Apply gates according to diagram:
cnot_i_10.cx(i_q[0], i_q[1]) # Apply CNOT controlled by line 1
# Measure final state:
cnot_i_10.measure(i_q[0], i_c[0])
cnot_i_10.measure(i_q[1], i_c[1])
# Display final state probabilities:
execute_and_plot(i, ["cnot_i_10"])
# Initialize circuit:
cnot_i_11 = i.create_circuit("cnot_i_11", [i_q], [i_c])
cnot_i_11.x(i_q[0]) # Set the 1st qubit to |1>
cnot_i_11.x(i_q[1]) # Set the 2nd qubit to |1>
# Apply gates according to diagram:
cnot_i_11.cx(i_q[0], i_q[1]) # Apply CNOT controlled by line 1
# Measure final states:
cnot_i_11.measure(i_q[0], i_c[0])
cnot_i_11.measure(i_q[1], i_c[1])
# Display final state probabilities:
execute_and_plot(i, ["cnot_i_11"])
```
Reading these off, we have $[\Psi = |00>,|10>,|01>,|11>]\rightarrow [\Psi' = |00>,|10>,|11>,|01>]$.
Note that this is the same answer (up to convention in product-state notation) as obtained for approaches 1 and 2, only this time we have had a far less tedious time of writing out logic operations or matrices thanks to the QISKit library abstracting much of this away for us. While the numpy library was helpful for making linear algebra operations, the matrices had to be user defined and this method does not have nearly the scalability or ease of computation that QISKit offers.
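The mapping read off above is exactly the classical CNOT truth table in the $|\psi_2\psi_1>$ convention, where line 1 (the rightmost symbol) controls line 2. A plain-Python sanity check, independent of any simulator:

```python
def cnot(state):
    # state is written |psi2 psi1>, following the IBM convention used above:
    # the rightmost bit is qubit 1 (line 1), which acts as the control.
    psi2, psi1 = int(state[0]), int(state[1])
    psi2 ^= psi1  # target (line 2) flips exactly when the control (line 1) is 1
    return f"{psi2}{psi1}"

mapping = {s: cnot(s) for s in ["00", "10", "01", "11"]}
print(mapping)  # {'00': '00', '10': '10', '01': '11', '11': '01'}
```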
## Circuit ii)
```
# For circuit ii, we can again create a QuantumProgram instance to
# realize a quantum register of size 2 with 2 classical ancilla bits
# for measurement
ii = QuantumProgram()
n = 2
ii_q = ii.create_quantum_register("ii_q", n)
ii_c = ii.create_classical_register("ii_c", n)
#ii.set_api(Qconfig.APItoken, Qconfig.config['url']) # set the APIToken and API url
ii.available_backends() #check backends - if you've set up your APIToken properly you
#should be able to see the quantum chips and simulators at IBM
for backend in ii.available_backends(): #check backend status
print(backend)
pprint(ii.get_backend_status(backend))
```
Now for executing circuit ii):
```
# Initialize circuit:
cnot_ii_00 = ii.create_circuit("cnot_ii_00", [ii_q], [ii_c])
# Apply gates according to diagram:
cnot_ii_00.h(ii_q) # Apply hadamards in parallel; note that passing
# a register as a gate method argument applies the operation to all
# qubits in the register
cnot_ii_00.cx(ii_q[1], ii_q[0]) #apply CNOT controlled by line 2
cnot_ii_00.h(ii_q) # Apply hadamards in parallel
# Measure final state:
cnot_ii_00.measure(ii_q[0], ii_c[0])
cnot_ii_00.measure(ii_q[1], ii_c[1])
# Display final state probabilities
execute_and_plot(ii, ["cnot_ii_00"])
# Initialize circuit:
cnot_ii_01 = ii.create_circuit("cnot_ii_01", [ii_q], [ii_c])
cnot_ii_01.x(ii_q[0]) # Set the 1st qubit to |1>
# Apply gates according to diagram:
cnot_ii_01.h(ii_q) # Apply hadamards in parallel (to this circuit, cnot_ii_01)
cnot_ii_01.cx(ii_q[1], ii_q[0]) # Apply CNOT controlled by line 2
cnot_ii_01.h(ii_q) # Apply hadamards in parallel
# Measure final state:
cnot_ii_01.measure(ii_q[0], ii_c[0])
cnot_ii_01.measure(ii_q[1], ii_c[1])
# Display final state probabilities:
execute_and_plot(ii, ["cnot_ii_01"])
# Initialize circuits
cnot_ii_10 = ii.create_circuit("cnot_ii_10", [ii_q], [ii_c])
cnot_ii_10.x(ii_q[1]) # Set the 2nd qubit to |1>
# Apply gates according to diagram:
cnot_ii_10.h(ii_q) # Apply hadamards in parallel (to this circuit, cnot_ii_10)
cnot_ii_10.cx(ii_q[1], ii_q[0]) # Apply CNOT controlled by line 2
cnot_ii_10.h(ii_q) # Apply hadamards in parallel
# Measure final state:
cnot_ii_10.measure(ii_q[0], ii_c[0])
cnot_ii_10.measure(ii_q[1], ii_c[1])
# Display final state probabilities:
execute_and_plot(ii, ["cnot_ii_10"])
# Initialize circuits:
cnot_ii_11 = ii.create_circuit("cnot_ii_11", [ii_q], [ii_c])
cnot_ii_11.x(ii_q[0]) # Set the 1st qubit to |1>
cnot_ii_11.x(ii_q[1]) # Set the 2nd qubit to |1>
# Apply gates according to diagram:
cnot_ii_11.h(ii_q) # Apply hadamards in parallel (to this circuit, cnot_ii_11)
cnot_ii_11.cx(ii_q[1], ii_q[0]) # Apply CNOT controlled by line 2
cnot_ii_11.h(ii_q) # Apply hadamards in parallel
# Measure final state
cnot_ii_11.measure(ii_q[0], ii_c[0])
cnot_ii_11.measure(ii_q[1], ii_c[1])
# Display final state probabilities
execute_and_plot(ii, ["cnot_ii_11"])
```
Reading off the computed final state, we see that it matches the computed final state of i), and so the circuits are considered equivalent $\square$.
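This equivalence is an instance of the standard identity $(H\otimes H)\,\mathrm{CNOT}_{2\to1}\,(H\otimes H) = \mathrm{CNOT}_{1\to2}$, which can also be cross-checked by direct matrix arithmetic with no backend at all. A plain-Python check (Hadamard normalization factors dropped, so an overall factor of 4 appears):

```python
from itertools import product

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def kron(A, B):
    # Kronecker product of two 2x2 matrices -> 4x4
    return [[A[i // 2][j // 2] * B[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

H = [[1, 1], [1, -1]]  # Hadamard, leaving out the 1/sqrt(2) factor
# Basis index = 2*q1 + q0; CNOT controlled by q0 (line 1) and by q1 (line 2):
cnot_ctrl0 = [[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]
cnot_ctrl1 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]

HH = kron(H, H)
conj = matmul(matmul(HH, cnot_ctrl1), HH)  # (H (x) H) CNOT_2 (H (x) H)
# The two dropped 1/sqrt(2) factors per Hadamard layer contribute 1/4 overall
assert all(conj[i][j] == 4 * cnot_ctrl0[i][j]
           for i, j in product(range(4), range(4)))
print("identity holds")  # prints "identity holds"
```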
<hr>
### Another implementation:
The input-by-input approach is helpful for first steps in understanding QISKit, but is also more long-winded than necessary. For a solution to the problem that uses QISKit more concisely/cleverly:
```
def circuit_i():
i = QuantumProgram()
i_q = i.create_quantum_register('i_q', 2)
i_c = i.create_classical_register('i_c', 2)
initial_states = ['00','01','10','11']
initial_circuits = {state: i.create_circuit('%s'%(state), [i_q], [i_c]) \
for state in initial_states}
final_circuits = {}
for state in initial_states:
if state[0] == '1': # use '==', not 'is', for string comparison
initial_circuits[state].x(i_q[0])
if state[1] == '1':
initial_circuits[state].x(i_q[1])
initial_circuits[state].cx(i_q[0], i_q[1])
initial_circuits[state].measure(i_q[0], i_c[0])
initial_circuits[state].measure(i_q[1], i_c[1])
final_circuits[state] = initial_circuits[state]
return i
def circuit_ii():
ii = QuantumProgram()
ii_q = ii.create_quantum_register('ii_q', 2)
ii_c = ii.create_classical_register('ii_c', 2)
initial_states = ['00','01','10','11']
circuits = {state: ii.create_circuit('%s'%(state), [ii_q], [ii_c]) \
for state in initial_states}
for state in initial_states:
if state[0] == '1':
circuits[state].x(ii_q[0])
if state[1] == '1':
circuits[state].x(ii_q[1])
circuits[state].h(ii_q)
circuits[state].cx(ii_q[1], ii_q[0])
circuits[state].h(ii_q)
circuits[state].measure(ii_q[0], ii_c[0])
circuits[state].measure(ii_q[1], ii_c[1])
return ii
i = circuit_i()
ii = circuit_ii()
#i.set_api(Qconfig.APItoken, Qconfig.config['url'])
#ii.set_api(Qconfig.APItoken, Qconfig.config['url'])
results_i = i.execute(list(i.get_circuit_names()))
results_ii = ii.execute(list(ii.get_circuit_names()))
results_i_mapping = {circuit: results_i.get_counts(circuit) for circuit in list(i.get_circuit_names())}
results_ii_mapping = {circuit: results_ii.get_counts(circuit) for circuit in list(ii.get_circuit_names())}
print(results_i_mapping)
print(results_ii_mapping)
```
$\square$.
## Next steps:
Thank you for reading through this tutorial! The author hopes that it has been a helpful experience in getting started with QISKit and quantum circuitry. For moving forward, consider the following projects:
- Edit this notebook such that it can run on an actual quantum chip! By looking at the documentation/configuration of backends either online or through the `.get_backend_configuration()` method, you can find the connectivity map and then set up a circuit implementation that satisfies the 2-qubit gate constraints from this connectivity map. I strongly recommend this, as this is a vital step to unlocking the true power of QISKit, which is interfacing with actual quantum backends.
- Implement an entirely different circuit.
- Try generating circuits by uploading QASM strings.
- Follow through with the provided tutorial files on the QISKit GitHub
- Look through the provided further readings in the appendix notebook.
# Scaling
```
# import packages
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# read dataframe in
df = pd.read_csv('data/kickstarter_preprocess.csv')
df.columns
```
### features to keep: preparation, duration_days, goal, pledged_per_backer, parent_name, blurb_len_w, slug_len_w, 'launched_month'
```
# drop unimportant features
df.drop(['backers_count', 'country', 'usd_pledged', 'blurb_len_c', 'slug_len_c', 'cat_in_slug',
'category_parent_id', 'category_id', 'category_name', 'created_year', 'created_month', 'deadline_year',
'deadline_month', 'launched_year', 'rel_pledged_goal', 'filled_parent', 'staff_pick'],
axis=1, inplace=True)
df.columns
df.info()
```
## drop rows with state == canceled, rows with wrong categories
```
df = df.drop(df[df['state'] == "canceled" ].index)
df.shape
categories = ["Games", "Art", "Photography", "Film & Video", "Design", "Technology"]
df = df[df.parent_name.isin(categories)]
df.shape
```
## make dummies (state, category_name)
```
#df.staff_pick = df.staff_pick.astype('int')
df['state'] = np.where(df['state'] == 'successful', 1, 0)
df.groupby('state').state.count()
# convert the categorical variable parent_name into dummy/indicator variables
df_dum2 = pd.get_dummies(df.parent_name, prefix='parent_name')
df = df.drop(['parent_name'], axis=1)
df = pd.concat([df, df_dum2], axis=1)
# making a categorical variable for launched_month q1, q2, q3, q4
df.loc[df['launched_month'] < 4, 'time_yr'] = 'q1'
df.loc[(df['launched_month'] >= 4) & (df['launched_month'] < 7), 'time_yr'] = 'q2'
df.loc[(df['launched_month'] >= 7) & (df['launched_month'] < 10), 'time_yr'] = 'q3'
df.loc[df['launched_month'] > 9, 'time_yr'] = 'q4'
df_dum3 = pd.get_dummies(df.time_yr, prefix='time_yr')
df = df.drop(['time_yr'], axis=1)
df = df.drop(['launched_month'], axis=1)
df = pd.concat([df, df_dum3], axis=1)
df.columns
df.info()
df.head()
```
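The four chained `.loc` assignments above implement a month-to-quarter mapping. As a side note, the same bucketing can be expressed arithmetically (or with `pd.cut`); a small sketch of the arithmetic version, not the notebook's actual code:

```python
def month_to_quarter(month):
    # Months 1-3 -> 'q1', 4-6 -> 'q2', 7-9 -> 'q3', 10-12 -> 'q4'
    return f"q{(month - 1) // 3 + 1}"

print([month_to_quarter(m) for m in (1, 3, 4, 9, 10, 12)])
# ['q1', 'q1', 'q2', 'q3', 'q4', 'q4']
```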
## Train-Test-Split
```
from sklearn.model_selection import train_test_split, cross_val_score
y = df.state
X = df.drop('state', axis=1)
# Train-test-split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
```
## Scaling
```
from sklearn.preprocessing import StandardScaler
# we have to define which columns we want to scale.
col_scale = ['goal', 'blurb_len_w', 'slug_len_w', 'duration_days', 'preparation', 'pledged_per_backer']
```
### Data standardization
```
# Scaling with standard scaler
scaler = StandardScaler()
X_train_scaled_st = scaler.fit_transform(X_train[col_scale])
X_test_scaled_st = scaler.transform(X_test[col_scale])
# Concatenating scaled and dummy columns
X_train_preprocessed_st = np.concatenate([X_train_scaled_st, X_train.drop(col_scale, axis=1)], axis=1)
X_test_preprocessed_st = np.concatenate([X_test_scaled_st, X_test.drop(col_scale, axis=1)], axis=1)
```
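`StandardScaler` learns each column's mean μ and standard deviation σ on the training split and maps x → (x − μ)/σ; the test split is transformed with the *training* statistics, which is why only `transform` is called on it. A tiny hand-rolled equivalent for a single column (illustrative only, not a replacement for sklearn):

```python
class SimpleStandardScaler:
    def fit(self, column):
        n = len(column)
        self.mean = sum(column) / n
        # population std (ddof=0), matching sklearn's StandardScaler
        self.std = (sum((x - self.mean) ** 2 for x in column) / n) ** 0.5
        return self

    def transform(self, column):
        return [(x - self.mean) / self.std for x in column]

scaler = SimpleStandardScaler().fit([2.0, 4.0, 6.0])  # mean 4, std ~1.633
print(scaler.transform([4.0, 6.0]))
```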
### Data normalization
```
# Scaling with MinMaxScaler
# Try to scale your data with the MinMaxScaler() from sklearn.
# It follows the same syntax as the StandardScaler.
# Don't forget: you have to import the scaler at the top of your notebook.
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_train_scaled_nor = scaler.fit_transform(X_train[col_scale])
X_test_scaled_nor = scaler.transform(X_test[col_scale])
# Concatenating scaled and dummy columns
X_train_preprocessed_nor = np.concatenate([X_train_scaled_nor, X_train.drop(col_scale, axis=1)], axis=1)
X_test_preprocessed_nor = np.concatenate([X_test_scaled_nor, X_test.drop(col_scale, axis=1)], axis=1)
```
```
df.groupby('state').state.count()
```
## Model Classification and Gridsearch (tuning hyperparameters)
### Logistic Regression
```
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
# fit model
lr = LogisticRegression()
lr.fit(X_train_preprocessed_st, y_train)
y_pred = lr.predict(X_test_preprocessed_st)
confusion_matrix(y_test, y_pred)
# normalization
#print (classification_report(y_test, y_pred))
# standardization
print(classification_report(y_test, y_pred))
# Gridsearch https://www.kaggle.com/enespolat/grid-search-with-logistic-regression
grid = {"C":np.logspace(-3,3,7), "penalty":["l1","l2"]} # l1 lasso, l2 ridge; note that 'l1' requires a solver that supports it (e.g. liblinear or saga)
logreg = LogisticRegression()
logreg_cv = GridSearchCV(logreg,grid,cv=10)
logreg_cv.fit(X_train_preprocessed_st,y_train)
print("tuned hpyerparameters :(best parameters) ",logreg_cv.best_params_)
print("accuracy :",logreg_cv.best_score_)
# fit model
lr2 = LogisticRegression(C=1000.0,penalty="l2")
lr2.fit(X_train_preprocessed_st, y_train)
y_pred = lr2.predict(X_test_preprocessed_st)
confusion_matrix(y_test, y_pred)
print(classification_report(y_test, y_pred))
```
### Kernel SVM
```
import pylab as pl
import scipy.optimize as opt
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
#X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=4)
print ('Train set:', X_train_preprocessed_st.shape, y_train.shape)
print ('Test set:', X_test_preprocessed_st.shape, y_test.shape)
from sklearn import svm
clf = svm.SVC(kernel='rbf')
clf.fit(X_train_preprocessed_st, y_train)
y_pred = clf.predict(X_test_preprocessed_st) # without this, y_pred would still hold the logistic regression predictions
from sklearn.metrics import classification_report, confusion_matrix
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, y_pred, labels=[0,1])
np.set_printoptions(precision=2)
print (classification_report(y_test, y_pred))
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['failed','successful'],normalize= False, title='Confusion matrix')
param_grid = [{'kernel': ['rbf'],
'gamma': [0.0001, 0.001, 0.01, 0.1, 1],
'C': [1, 10, 100, 1000]},
{'kernel': ['linear'],
'C': [1, 10, 100, 1000]}]
grid = GridSearchCV(clf, param_grid, verbose=True, n_jobs=-1)
result = grid.fit(X_train_preprocessed_st, y_train)
# Print best parameters
print('Best Parameters:', result.best_params_)
# Print best score
print('Best Score:', result.best_score_)
clf2 = svm.SVC(**result.best_params_) # refit using the best parameters found by the grid search
clf2.fit(X_train_preprocessed_st, y_train)
y_pred = clf2.predict(X_test_preprocessed_st)
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, y_pred, labels=[0,1])
np.set_printoptions(precision=2)
print (classification_report(y_test, y_pred))
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['failed','successful'],normalize= False, title='Confusion matrix')
```
### Random Forest
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV
# Create the model with 100 trees
model = RandomForestClassifier(n_estimators=100,
random_state=42,
max_features = 'sqrt',
n_jobs=-1, verbose = 1)
# Fit on training data
model.fit(X_train_preprocessed_st, y_train)
y_pred = model.predict(X_test_preprocessed_st)
# Training predictions (to demonstrate overfitting)
train_rf_predictions = model.predict(X_train_preprocessed_st)
train_rf_probs = model.predict_proba(X_train_preprocessed_st)[:, 1]
# Testing predictions (to determine performance)
rf_predictions = model.predict(X_test_preprocessed_st)
rf_probs = model.predict_proba(X_test_preprocessed_st)[:, 1]
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, y_pred, labels=[0,1])
np.set_printoptions(precision=2)
print (classification_report(y_test, y_pred))
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['failed','successful'],normalize= False, title='Confusion matrix')
print (classification_report(y_test, y_pred))
```
### Random Forest: Optimization through Random Search
```
# Hyperparameter grid
param_grid = {
'n_estimators': np.linspace(10, 200).astype(int),
'max_depth': [None] + list(np.linspace(3, 20).astype(int)),
'max_features': ['auto', 'sqrt', None] + list(np.arange(0.5, 1, 0.1)),
'max_leaf_nodes': [None] + list(np.linspace(10, 50, 500).astype(int)),
'min_samples_split': [2, 5, 10],
'bootstrap': [True, False]
}
# Estimator for use in random search
estimator = RandomForestClassifier(random_state = 42)
# Create the random search model
rs = RandomizedSearchCV(estimator, param_grid, n_jobs = -1,
scoring = 'roc_auc', cv = 3,
n_iter = 10, verbose = 5, random_state=42)
# Fit
rs.fit(X_train_preprocessed_st, y_train)
rs.best_params_
# Create the model with the tuned hyperparameters
model = RandomForestClassifier(n_estimators=196,
random_state=42,
min_samples_split=10,
max_leaf_nodes=49,
max_features=0.7,
max_depth=17,
bootstrap=True,
n_jobs=-1, verbose = 1)
# Fit on training data
model.fit(X_train_preprocessed_st, y_train)
y_pred = model.predict(X_test_preprocessed_st)
# Training predictions (to demonstrate overfitting)
train_rf_predictions = model.predict(X_train_preprocessed_st)
train_rf_probs = model.predict_proba(X_train_preprocessed_st)[:, 1]
# Testing predictions (to determine performance)
rf_predictions = model.predict(X_test_preprocessed_st)
rf_probs = model.predict_proba(X_test_preprocessed_st)[:, 1]
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, y_pred, labels=[0,1])
np.set_printoptions(precision=2)
print (classification_report(y_test, y_pred))
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['failed','successful'],normalize= False, title='Confusion matrix')
```
### Use best model
```
best_model = rs.best_estimator_
train_rf_predictions = best_model.predict(X_train_preprocessed_st)
train_rf_probs = best_model.predict_proba(X_train_preprocessed_st)[:, 1]
rf_predictions = best_model.predict(X_test_preprocessed_st)
rf_probs = best_model.predict_proba(X_test_preprocessed_st)[:, 1]
n_nodes = []
max_depths = []
for ind_tree in best_model.estimators_:
n_nodes.append(ind_tree.tree_.node_count)
max_depths.append(ind_tree.tree_.max_depth)
print(f'Average number of nodes {int(np.mean(n_nodes))}')
print(f'Average maximum depth {int(np.mean(max_depths))}')
from sklearn.metrics import recall_score, precision_score, roc_auc_score, roc_curve
def evaluate_model(predictions, probs, train_predictions, train_probs):
"""Compare machine learning model to baseline performance.
Computes statistics and shows ROC curve."""
baseline = {}
baseline['recall'] = recall_score(y_test, [1 for _ in range(len(y_test))])
baseline['precision'] = precision_score(y_test, [1 for _ in range(len(y_test))])
baseline['roc'] = 0.5
results = {}
results['recall'] = recall_score(y_test, predictions)
results['precision'] = precision_score(y_test, predictions)
results['roc'] = roc_auc_score(y_test, probs)
train_results = {}
train_results['recall'] = recall_score(y_train, train_predictions)
train_results['precision'] = precision_score(y_train, train_predictions)
train_results['roc'] = roc_auc_score(y_train, train_probs)
for metric in ['recall', 'precision', 'roc']:
print(f'{metric.capitalize()} Baseline: {round(baseline[metric], 2)} Test: {round(results[metric], 2)} Train: {round(train_results[metric], 2)}')
# Calculate false positive rates and true positive rates
base_fpr, base_tpr, _ = roc_curve(y_test, [1 for _ in range(len(y_test))])
model_fpr, model_tpr, _ = roc_curve(y_test, probs)
plt.figure(figsize = (8, 6))
plt.rcParams['font.size'] = 16
# Plot both curves
plt.plot(base_fpr, base_tpr, 'b', label = 'baseline')
plt.plot(model_fpr, model_tpr, 'r', label = 'model')
plt.legend();
plt.xlabel('False Positive Rate'); plt.ylabel('True Positive Rate'); plt.title('ROC Curves');
evaluate_model(rf_predictions, rf_probs, train_rf_predictions, train_rf_probs)
```
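For intuition on the baseline that `evaluate_model` compares against: a classifier that predicts "successful" for every project has recall 1.0 by construction, while its precision equals the fraction of positives in the test set. A tiny stand-alone illustration with made-up labels:

```python
def recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return tp / sum(y_true)

def precision(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return tp / sum(y_pred)

y_true = [1, 0, 1, 1, 0]
all_ones = [1] * len(y_true)  # "predict successful for everything" baseline
print(recall(y_true, all_ones), precision(y_true, all_ones))  # 1.0 0.6
```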
https://plotly.com/
```
# pip3 install plotly
# pip3 install psutil
import pandas as pd
import numpy as np
import psutil
import plotly.io as pio
# Ploty offline
from plotly.offline import init_notebook_mode, iplot, plot
from plotly.graph_objs import Scatter, Box, Histogram2dContour, Contours, Marker
import plotly.graph_objects as go
df = pd.read_csv("https://raw.githubusercontent.com/plotly/datasets/master/school_earnings.csv")
df
init_notebook_mode(connected=True) # initiate notebook for offline plot
trace0 = Scatter(
x=df["Gap"],
y=df["Men"]
)
trace1 = Scatter(
x=df["Gap"],
y=df["Women"]
)
iplot([trace0, trace1])
iplot([Box(y = np.random.randn(50), showlegend=False) for i in range(45)], show_link=False)
x = np.random.randn(2000)
y = np.random.randn(2000)
iplot([Histogram2dContour(x=x, y=y, contours=Contours(coloring='heatmap')),
Scatter(x=x, y=y, mode='markers', marker=Marker(color='white', size=3, opacity=0.3))], show_link=False)
df_airports = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/2011_february_us_airport_traffic.csv')
df_airports.head()
df_flight_paths = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/2011_february_aa_flight_paths.csv')
df_flight_paths.head()
airports = [ dict(
type = 'scattergeo',
locationmode = 'USA-states',
lon = df_airports['long'],
lat = df_airports['lat'],
hoverinfo = 'text',
text = df_airports['airport'],
mode = 'markers',
marker = dict(
size=2,
color='rgb(255, 0, 0)',
line = dict(
width=3,
color='rgba(68, 68, 68, 0)'
)
))]
flight_paths = []
for i in range( len( df_flight_paths ) ):
flight_paths.append(
dict(
type = 'scattergeo',
locationmode = 'USA-states',
lon = [ df_flight_paths['start_lon'][i], df_flight_paths['end_lon'][i] ],
lat = [ df_flight_paths['start_lat'][i], df_flight_paths['end_lat'][i] ],
mode = 'lines',
line = dict(
width = 1,
color = 'red',
),
opacity = float(df_flight_paths['cnt'][i])/float(df_flight_paths['cnt'].max()),
)
)
layout = dict(
title = 'Feb. 2011 American Airline flight paths<br>(Hover for airport names)',
showlegend = False,
height = 800,
geo = dict(
scope='north america',
projection=dict( type='azimuthal equal area' ),
showland = True,
landcolor = 'rgb(243, 243, 243)',
countrycolor = 'rgb(204, 204, 204)',
),
)
fig = dict( data=flight_paths + airports, layout=layout )
iplot(fig)
fig = go.Figure(
data=[go.Bar(y=[2, 1, 3])],
layout_title_text="A Figure Displayed with fig.show()"
)
fig.show()
np.random.seed(1)
N = 100
x = np.random.rand(N)
y = np.random.rand(N)
colors = np.random.rand(N)
sz = np.random.rand(N) * 30
fig = go.Figure()
fig.add_trace(go.Scatter(
x=x,
y=y,
mode="markers",
marker=go.scatter.Marker(
size=sz,
color=colors,
opacity=0.6,
colorscale="Viridis"
)
))
html_file = 'plotly-circles-plot.html'
fname = 'plotly-circles-plot'
fig.show()
plot(fig, filename=html_file, auto_open=True,
image_width=1280, image_height=800,
image_filename=fname, image='png')
import plotly
import plotly.graph_objs as go
data = [go.Bar(x=['Apples', 'Pears', 'Nectarines', 'Plums', 'Grapes', 'Strawberries'],
y=[5, 3, 4, 2, 4, 6])]
html_file = 'plotly-fruit-plot.html'
fname = 'plotly-fruit-plot'
plotly.offline.plot(data, filename=html_file, auto_open=False,
image_width=1280, image_height=800,
image_filename=fname, image='png')
```
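One detail worth calling out from the flight-path traces: each path's `opacity` is its flight count divided by the busiest route's count, so the heaviest route renders fully opaque. In isolation, with made-up counts:

```python
# Hypothetical per-route flight counts (stand-in for df_flight_paths['cnt'])
cnt = [120, 300, 60, 600]

opacities = [float(c) / float(max(cnt)) for c in cnt]
print(opacities)  # [0.2, 0.5, 0.1, 1.0]
```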
| github_jupyter |
# Example 1 Physics
In this notebook, we'll explore some of the physics related methods that come with DarkHistory.
## Notebook Initialization
```
%load_ext autoreload
import sys
sys.path.append("..")
%matplotlib inline
%autoreload
import matplotlib
matplotlib.rc_file('matplotlibrc')
import matplotlib.pyplot as plt
import numpy as np
import darkhistory.physics as phys
```
## Constants
DarkHistory comes with a list of physical and cosmological constants, with values taken from the PDG 2018 (particle physics) [[1]](#cite_PDG) and the Planck 2018 results (cosmology) [[2]](#cite_Planck). We use centimeters for length, seconds for time and the electronvolt for energy, mass and temperature.
```
print('Speed of light (cm/s): ', phys.c)
print('Reduced Planck Constant (eV s): ', phys.hbar)
print('Boltzmann Constant (eV/K): ', phys.kB)
print('Hubble Constant (s^-1): ', phys.H0)
print('Critical Density (eV/cm^3): ', phys.rho_crit)
print('Ionization Potential of Hydrogen (eV): ', phys.rydberg)
```
A complete list of constants can be found in the [*darkhistory.physics*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/darkhistory.physics.html) documentation.
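Working in this unit system mostly comes down to a couple of conversion factors. A quick sketch (the numerical values below are standard reference values assumed for illustration, not pulled from `phys`):

```python
# Assumed reference values (illustration only; DarkHistory ships its own constants)
KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
KB_EV_PER_K = 8.617333e-5    # Boltzmann constant in eV/K

H0_km_s_Mpc = 67.4                     # Planck 2018 Hubble constant
H0_inv_s = H0_km_s_Mpc / KM_PER_MPC    # ~2.18e-18 s^-1, comparable to phys.H0

T_cmb_K = 2.7255                       # CMB temperature today in kelvin
T_cmb_eV = KB_EV_PER_K * T_cmb_K       # ~2.35e-4 eV

print(H0_inv_s, T_cmb_eV)
```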
## Methods
The code also comes with several functions that are useful for cosmology and atomic physics. Let's take a look at a few of them: we again refer the user to the documentation for a complete list.
## Hubble Parameter
The Hubble parameter as a function of redshift is built-in. In the code, we usually refer to redshift using the variable ``rs``, which is taken to be $1+z$, i.e. the value of ``rs`` today would be 1 throughout the code.
```
rs = 10**np.arange(-4, 5, 0.1)
hubble_arr = phys.hubble(rs)
plt.figure()
plt.loglog()
plt.plot(rs, hubble_arr)
plt.xlabel('Redshift $(1+z)$')
plt.ylabel('Hubble Parameter [s$^{-1}$]')
plt.title('Hubble Parameter')
```
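For intuition about what `phys.hubble` returns, a matter- plus Λ-dominated Friedmann equation reproduces the same qualitative curve. This is a sketch under assumed Planck 2018 values, not DarkHistory's implementation (which also includes radiation):

```python
import numpy as np

# Assumed Planck 2018 values, for illustration only
H0 = 2.184e-18                 # Hubble constant in s^-1 (~67.4 km/s/Mpc)
omega_m, omega_lam = 0.315, 0.685

def hubble_sketch(rs):
    """Hubble parameter H(rs), rs = 1 + z, ignoring radiation and curvature."""
    return H0 * np.sqrt(omega_m * rs**3 + omega_lam)

rs = 10**np.arange(0, 4, 0.5)
print(hubble_sketch(rs))       # grows roughly as rs**1.5 once matter dominates
```

At `rs = 1` this returns `H0` by construction, and at high redshift it scales as $rs^{3/2}$, which is the slope visible in the log-log plot above.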
## CMB Spectrum
The CMB spectrum $dn_\gamma/dE$ where $n_\gamma$ is the number density of photons is returned by [*physics.CMB_spec*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/physics/darkhistory.physics.CMB_spec.html). It takes an array of energy values or energy *abscissa*, and a temperature in eV.
```
eng = 10**np.arange(-4, 2, 0.1)
spec = phys.CMB_spec(eng, 0.1)
plt.figure()
plt.loglog()
plt.plot(eng, spec)
plt.xlabel('Energy [eV]')
plt.ylabel(r'$dn_\gamma/dE$ [eV$^{-1}$ cm$^{-3}$]')
plt.title('CMB Spectrum')
plt.axis([1e-4, 10, 1e2, 1e12])
```
## Bibliography
[1]<a id='cite_PDG'></a> M. Tanabashi et al. (Particle Data Group), “Review of Particle Physics,” Phys. Rev. D98, 030001 (2018).
[2]<a id='cite_Planck'></a> N. Aghanim et al. (Planck), “Planck 2018 results. VI. Cosmological parameters,” (2018), arXiv:1807.06209 [astro-ph.CO].
| github_jupyter |
# Time Series From Scratch (part. 7) — Train/Test Splits and Evaluation Metrics (Dario Radečić)
[Source](https://towardsdatascience.com/time-series-from-scratch-train-test-splits-and-evaluation-metrics-4fd654de1b37). From [Time Series From Scratch](https://towardsdatascience.com/tagged/time-series-from-scratch).
- Author: Israel Oliveira [e-mail](mailto:prof.israel@gmail.com)
```
%load_ext watermark
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import rcParams
from cycler import cycler
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error
rcParams['figure.figsize'] = 18, 5
rcParams['axes.spines.top'] = False
rcParams['axes.spines.right'] = False
rcParams['axes.grid'] = True
rcParams['axes.prop_cycle'] = cycler(color=['#365977'])
rcParams['lines.linewidth'] = 2.5
# Run this cell before close.
%watermark -d --iversion -b -r -g -m -v
!cat /proc/cpuinfo |grep 'model name'|head -n 1 |sed -e 's/model\ name/CPU/'
!free -h |cut -d'i' -f1 |grep -v total
# Load
df = pd.read_csv('/work/tmp/airline-passengers.csv', index_col='Month', parse_dates=True)
# Visualize
plt.title('Airline Passengers dataset', size=20)
plt.plot(df);
test_size = 24
df_train = df[:-test_size]
df_test = df[-test_size:]
plt.title('Airline passengers train and test sets', size=20)
plt.plot(df_train, label='Training set')
plt.plot(df_test, label='Test set', color='orange')
plt.legend();
rmse = lambda act, pred: np.sqrt(mean_squared_error(act, pred))
# Arbitrary data
actual_passengers = [300, 290, 320, 400, 500, 350]
predicted_passengers = [291, 288, 333, 412, 488, 344]
# Error metrics
print(f'RMSE: {rmse(actual_passengers, predicted_passengers)}')
print(f'MAPE: {mean_absolute_percentage_error(actual_passengers, predicted_passengers)}')
from prophet import Prophet
df = df_train.reset_index().copy()
df.columns
df = df.rename(columns={"Month": "ds", "Passengers": "y"})
df.head()
Prophet?
m = Prophet()
m.fit(df)
# make_future_dataframe could generate the horizon automatically, but here we
# forecast exactly the test-set months, so build `future` from the test index
future = pd.DataFrame()
future['ds'] = df_test.reset_index().Month
forecast = m.predict(future)
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
fig1 = m.plot(forecast)
fig2 = m.plot_components(forecast)
from prophet.plot import plot_plotly, plot_components_plotly
plot_plotly(m, forecast)
plot_components_plotly(m, forecast)
forecast_test = forecast.loc[forecast.ds > df_train.reset_index().Month.max()].copy()
df_hat = pd.DataFrame()
df_hat['Month'] = forecast_test['ds']
df_hat['Passengers'] = forecast_test['yhat']
df_hat = df_hat.set_index('Month')
df_hat.head()
test_size = 24
plt.title('Airline passengers: training set and forecast', size=20)
plt.plot(df_train, label='Training set')
plt.plot(df_hat, label='Forecast', color='orange')
plt.legend();
```
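The two error metrics used above are a few lines of arithmetic each; this sketch mirrors scikit-learn's definitions (MAPE as a fraction, not a percentage) on the same arbitrary numbers:

```python
import math

def rmse(actual, pred):
    """Root mean squared error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def mape(actual, pred):
    """Mean absolute percentage error, expressed as a fraction."""
    return sum(abs(a - p) / abs(a) for a, p in zip(actual, pred)) / len(actual)

actual_passengers = [300, 290, 320, 400, 500, 350]
predicted_passengers = [291, 288, 333, 412, 488, 344]
print(rmse(actual_passengers, predicted_passengers))   # ~9.81
print(mape(actual_passengers, predicted_passengers))   # ~0.0248
```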
| github_jupyter |
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg"
align="right"
width="30%"
     alt="Dask logo">
# Parallelize code with `dask.delayed`
In this section we parallelize simple for-loop style code with Dask and `dask.delayed`. Often, this is the only function that you will need to convert functions for use with Dask.
This is a simple way to use `dask` to parallelize existing codebases or build [complex systems](https://blog.dask.org/2018/02/09/credit-models-with-dask). This will also help us to develop an understanding for later sections.
**Related Documentation**
* [Delayed documentation](https://docs.dask.org/en/latest/delayed.html)
* [Delayed screencast](https://www.youtube.com/watch?v=SHqFmynRxVU)
* [Delayed API](https://docs.dask.org/en/latest/delayed-api.html)
* [Delayed examples](https://examples.dask.org/delayed.html)
* [Delayed best practices](https://docs.dask.org/en/latest/delayed-best-practices.html)
As we'll see in the [distributed scheduler notebook](05_distributed.ipynb), Dask has several ways of executing code in parallel. We'll use the distributed scheduler by creating a `dask.distributed.Client`. For now, this will provide us with some nice diagnostics. We'll talk about schedulers in depth later.
```
from dask.distributed import Client
client = Client(n_workers=4)
```
## Basics
First let's make some toy functions, `inc` and `add`, that sleep for a while to simulate work. We'll then time running these functions normally.
In the next section we'll parallelize this code.
```
from time import sleep
def inc(x):
sleep(1)
return x + 1
def add(x, y):
sleep(1)
return x + y
```
We time the execution of this normal code using the `%%time` magic, which is a special function of the Jupyter Notebook.
```
%%time
# This takes three seconds to run because we call each
# function sequentially, one after the other
x = inc(1)
y = inc(2)
z = add(x, y)
```
### Parallelize with the `dask.delayed` decorator
Those two increment calls *could* be called in parallel, because they are totally independent of one-another.
We'll transform the `inc` and `add` functions using the `dask.delayed` function. When we call the delayed version, we pass the arguments exactly as before, but the original function isn't actually called yet, which is why the cell execution finishes very quickly.
Instead, a *delayed object* is made, which keeps track of the function to call and the arguments to pass to it.
```
from dask import delayed
%%time
# This runs immediately, all it does is build a graph
x = delayed(inc)(1)
y = delayed(inc)(2)
z = delayed(add)(x, y)
```
This ran immediately, since nothing has really happened yet.
To get the result, call `compute`. Notice that this runs faster than the original code.
```
%%time
# This actually runs our computation using a local process pool
z.compute()
```
## What just happened?
The `z` object is a lazy `Delayed` object. This object holds everything we need to compute the final result, including references to all of the functions that are required and their inputs and relationship to one-another. We can evaluate the result with `.compute()` as above or we can visualize the task graph for this value with `.visualize()`.
```
z
# Look at the task graph for `z`
z.visualize()
```
Notice that this includes the names of the functions from before, and the logical flow of the outputs of the `inc` functions to the inputs of `add`.
### Some questions to consider:
- Why did we go from 3s to 2s? Why weren't we able to parallelize down to 1s?
- What would have happened if the inc and add functions didn't include the `sleep(1)`? Would Dask still be able to speed up this code?
- What if we have multiple outputs or also want to get access to x or y?
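The last question is easier to reason about with a toy model of what a `Delayed` object does; this is a sketch for intuition (Dask's real implementation builds a shared task graph and deduplicates intermediates), using sleep-free stand-ins for `inc` and `add`:

```python
class Lazy:
    """Toy stand-in for dask.delayed: record the call, evaluate on demand."""
    def __init__(self, fn, *args):
        self.fn, self.args = fn, args

    def compute(self):
        # Evaluate lazy inputs first, then apply the recorded function
        args = [a.compute() if isinstance(a, Lazy) else a for a in self.args]
        return self.fn(*args)

def inc(x):
    return x + 1

def add(x, y):
    return x + y

x = Lazy(inc, 1)
y = Lazy(inc, 2)
z = Lazy(add, x, y)
print(z.compute())  # 5
# Multiple outputs are no problem: x and y are graph nodes of their own,
# so x.compute() and y.compute() remain available. Dask additionally offers
# dask.compute(x, y, z) to evaluate several outputs while sharing work.
```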
## Exercise: Parallelize a for loop
`for` loops are one of the most common things that we want to parallelize. Use `dask.delayed` on `inc` and `sum` to parallelize the computation below:
```
data = [1, 2, 3, 4, 5, 6, 7, 8]
%%time
# Sequential code
results = []
for x in data:
y = inc(x)
results.append(y)
total = sum(results)
total
%%time
# Your parallel code here...
results = []
for x in data:
y = delayed(inc)(x)
results.append(y)
total = delayed(sum)(results)
print("Before computing:", total) # Let's see what type of thing total is
result = total.compute()
print("After computing :", result) # After it's computed
```
How do the graph visualizations compare between the given solution and a version with the `sum` function used directly rather than wrapped with `delayed`? Can you explain the latter version? You might find the result of the following expression illuminating
```python
delayed(inc)(1) + delayed(inc)(2)
```
## Exercise: Parallelizing a for-loop code with control flow
Often we want to delay only *some* functions, running a few of them immediately. This is especially helpful when those functions are fast and help us to determine what other slower functions we should call. This decision, to delay or not to delay, is usually where we need to be thoughtful when using `dask.delayed`.
In the example below we iterate through a list of inputs. If that input is even then we want to call `inc`. If the input is odd then we want to call `double`. This `is_even` decision to call `inc` or `double` has to be made immediately (not lazily) in order for our graph-building Python code to proceed.
```
def double(x):
sleep(1)
return 2 * x
def is_even(x):
return not x % 2
data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
%%time
# Sequential code
results = []
for x in data:
if is_even(x):
y = double(x)
else:
y = inc(x)
results.append(y)
total = sum(results)
print(total)
%%time
# Your parallel code here...
# TODO: parallelize the sequential code above using dask.delayed
# You will need to delay some functions, but not all
results = []
for x in data:
if is_even(x): # even
y = delayed(double)(x)
else: # odd
y = delayed(inc)(x)
results.append(y)
total = delayed(sum)(results)
%time total.compute()
total.visualize()
```
### Some questions to consider:
- What are other examples of control flow where we can't use delayed?
- What would have happened if we had delayed the evaluation of `is_even(x)` in the example above?
- What are your thoughts on delaying `sum`? This function is both computational but also fast to run.
## Exercise: Parallelizing a Pandas Groupby Reduction
In this exercise we read several CSV files and perform a groupby operation in parallel. We are given sequential code to do this and parallelize it with `dask.delayed`.
The computation we will parallelize is to compute the mean departure delay per airport from some historical flight data. We will do this by using `dask.delayed` together with `pandas`. In a future section we will do this same exercise with `dask.dataframe`.
## Create data
Run this code to prep some data.
This downloads and extracts some historical flight data for flights out of NYC between 1990 and 2000. The data is originally from [here](http://stat-computing.org/dataexpo/2009/the-data.html).
```
%run prep.py -d flights
```
### Inspect data
```
import os
sorted(os.listdir(os.path.join('data', 'nycflights')))
```
### Read one file with `pandas.read_csv` and compute mean departure delay
```
import pandas as pd
df = pd.read_csv(os.path.join('data', 'nycflights', '1990.csv'))
df.head()
# What is the schema?
df.dtypes
# What originating airports are in the data?
df.Origin.unique()
# Mean departure delay per-airport for one year
df.groupby('Origin').DepDelay.mean()
```
### Sequential code: Mean Departure Delay Per Airport
The above cell computes the mean departure delay per-airport for one year. Here we expand that to all years using a sequential for loop.
```
from glob import glob
filenames = sorted(glob(os.path.join('data', 'nycflights', '*.csv')))
%%time
sums = []
counts = []
for fn in filenames:
# Read in file
df = pd.read_csv(fn)
# Groupby origin airport
by_origin = df.groupby('Origin')
# Sum of all departure delays by origin
total = by_origin.DepDelay.sum()
# Number of flights by origin
count = by_origin.DepDelay.count()
# Save the intermediates
sums.append(total)
counts.append(count)
# Combine intermediates to get total mean-delay-per-origin
total_delays = sum(sums)
n_flights = sum(counts)
mean = total_delays / n_flights
mean
```
### Parallelize the code above
Use `dask.delayed` to parallelize the code above. Some extra things you will need to know.
1. Methods and attribute access on delayed objects work automatically, so if you have a delayed object you can perform normal arithmetic, slicing, and method calls on it and it will produce the correct delayed calls.
```python
x = delayed(np.arange)(10)
y = (x + 1)[::2].sum() # everything here was delayed
```
2. Calling the `.compute()` method works well when you have a single output. When you have multiple outputs you might want to use the `dask.compute` function:
```python
>>> x = delayed(np.arange)(10)
>>> y = x ** 2
>>> min_, max_ = compute(y.min(), y.max())
>>> min_, max_
(0, 81)
```
This way Dask can share the intermediate values (like `y = x**2`)
So your goal is to parallelize the code above (which has been copied below) using `dask.delayed`. You may also want to visualize a bit of the computation to see if you're doing it correctly.
```
from dask import compute
%%time
# copied sequential code
sums = []
counts = []
for fn in filenames:
# Read in file
df = pd.read_csv(fn)
# Groupby origin airport
by_origin = df.groupby('Origin')
# Sum of all departure delays by origin
total = by_origin.DepDelay.sum()
# Number of flights by origin
count = by_origin.DepDelay.count()
# Save the intermediates
sums.append(total)
counts.append(count)
# Combine intermediates to get total mean-delay-per-origin
total_delays = sum(sums)
n_flights = sum(counts)
mean = total_delays / n_flights
mean
%%time
# your code here
```
If you load the solution, add `%%time` to the top of the cell to measure the running time.
```
# This is just one possible solution, there are
# several ways to do this using `delayed`
sums = []
counts = []
for fn in filenames:
# Read in file
df = delayed(pd.read_csv)(fn)
# Groupby origin airport
by_origin = df.groupby('Origin')
# Sum of all departure delays by origin
total = by_origin.DepDelay.sum()
# Number of flights by origin
count = by_origin.DepDelay.count()
# Save the intermediates
sums.append(total)
counts.append(count)
# Compute the intermediates
sums, counts = compute(sums, counts)
# Combine intermediates to get total mean-delay-per-origin
total_delays = sum(sums)
n_flights = sum(counts)
mean = total_delays / n_flights
# ensure the results still match
mean
```
### Some questions to consider:
- How much speedup did you get? Is this how much speedup you'd expect?
- Experiment with where to call `compute`. What happens when you call it on `sums` and `counts`? What happens if you wait and call it on `mean`?
- Experiment with delaying the call to `sum`. What does the graph look like if `sum` is delayed? What does the graph look like if it isn't?
- Can you think of any reason why you'd want to do the reduction one way over the other?
### Learn More
Visit the [Delayed documentation](https://docs.dask.org/en/latest/delayed.html). In particular, this [delayed screencast](https://www.youtube.com/watch?v=SHqFmynRxVU) will reinforce the concepts you learned here and the [delayed best practices](https://docs.dask.org/en/latest/delayed-best-practices.html) document collects advice on using `dask.delayed` well.
## Close the Client
Before moving on to the next exercise, make sure to close your client or stop this kernel.
```
client.close()
```
| github_jupyter |
<div class="alert alert-success">
</div>
<div>
<h1 align="center">KoBERT Multi-label text classifier</h1>
<h4 align="center">By: Myeonghak Lee</h4>
</div>
<div class="alert alert-success">
</div>
```
# Input data preprocessing section
# import torchtext
import pandas as pd
import numpy as np
import os
import re
import config
from config import expand_pandas
from preprocess import preprocess
DATA_PATH=config.DATA_PATH
model_config=config.model_config
import warnings
warnings.filterwarnings("ignore")
config.expand_pandas(max_rows=100, max_cols=100,width=1000,max_info_cols=500)
```
### **configs**
```
num_class=17
ver_num=1
except_labels=["변경/취소","예약기타"]
version_info="{:02d}".format(ver_num)
weight_path=f"../weights/weight_{version_info}.pt"
```
### **preprocess**
```
data=preprocess()
# data_orig=data.voc_total["종합본"]
data.make_table()
# put labels
data.label_process(num_labels=num_class, except_labels=except_labels)
orig=data.voc_total["종합본"]
label_cols=data.label_cols
df=data.data.copy()
voc_dataset=df.reset_index(drop=True)
```
# Modeling part
```
import torch
from torch import nn
from metrics_for_multilabel import calculate_metrics, colwise_accuracy
from bert_model import Data_for_BERT, BERTClassifier, EarlyStopping
from transformers import get_linear_schedule_with_warmup, AdamW
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
from sklearn.model_selection import train_test_split
train_input, test_input, train_target, test_target = train_test_split(voc_dataset, voc_dataset["국내선"], test_size = 0.25, random_state = 42)
# train=pd.concat([train_input,train_target],axis=1)
# test=pd.concat([test_input,test_target],axis=1)
train=train_input.copy()
test=test_input.copy()
train=train.reset_index(drop=True)
test=test.reset_index(drop=True)
data_train = Data_for_BERT(train, model_config["max_len"], True, False, label_cols=label_cols)
data_test = Data_for_BERT(test, model_config["max_len"], True, False, label_cols=label_cols)
# Process the data so it can be fed into the PyTorch model.
# Wrap data_train and split it into batch_size chunks. num_workers is the number of subprocesses to use (parallelism).
train_dataloader = torch.utils.data.DataLoader(data_train, batch_size=model_config["batch_size"], num_workers=0)
test_dataloader = torch.utils.data.DataLoader(data_test, batch_size=model_config["batch_size"], num_workers=0)
# Instantiate the BERT model from the KoBERT library. The .to() method moves the entire model to the GPU device.
model = BERTClassifier(num_classes=num_class, dr_rate = model_config["dr_rate"]).to(device)
# Prepare the optimizer and schedule (linear warmup and decay)
no_decay = ['bias', 'LayerNorm.weight']
# Layers whose parameter names match no_decay get weight_decay 0 so they are excluded from decay; all others decay at 0.01
# Weight decay regularizes the parameter values via an l2 norm penalty
optimizer_grouped_parameters = [
{"params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], 'weight_decay' : 0.01},
{'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
# Optimizer is AdamW, loss function is BCE
# optimizer_grouped_parameters is the set of parameter groups to optimize
optimizer = AdamW(optimizer_grouped_parameters, lr= model_config["learning_rate"])
# loss_fn = nn.CrossEntropyLoss()
loss_fn=nn.BCEWithLogitsLoss()
# t_total = train_dataloader.dataset.labels.shape[0] * num_epochs
# Linear warmup gradually increases the learning rate during the early training steps (early batches), then holds it past a certain point
# This reduces instability in the early stages of training.
t_total = len(train_dataloader) * model_config["num_epochs"]
warmup_step = int(t_total * model_config["warmup_ratio"])
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=warmup_step, num_training_steps=t_total)
# model_save_name = 'classifier'
# model_file='.pt'
# path = f"./bert_weights/{model_save_name}_{model_file}"
def train_model(model, batch_size, patience, n_epochs,path):
# to track the training loss as the model trains
train_losses = []
# to track the validation loss as the model trains
valid_losses = []
# to track the average training loss per epoch as the model trains
avg_train_losses = []
# to track the average validation loss per epoch as the model trains
avg_valid_losses = []
early_stopping = EarlyStopping(patience=patience, verbose=True, path=path)
for epoch in range(1, n_epochs + 1):
# initialize the early_stopping object
model.train()
train_epoch_pred=[]
train_loss_record=[]
for batch_id, (token_ids, valid_length, segment_ids, label) in enumerate(train_dataloader):
optimizer.zero_grad()
token_ids = token_ids.long().to(device)
segment_ids = segment_ids.long().to(device)
valid_length= valid_length
# label = label.long().to(device)
label = label.float().to(device)
out= model(token_ids, valid_length, segment_ids)#.squeeze(1)
loss = loss_fn(out, label)
train_loss_record.append(loss)
train_pred=out.detach().cpu().numpy()
train_real=label.detach().cpu().numpy()
train_batch_result = calculate_metrics(np.array(train_pred), np.array(train_real))
if batch_id%50==0:
print(f"batch number {batch_id}, train col-wise accuracy is : {train_batch_result['Column-wise Accuracy']}")
# save prediction result for calculation of accuracy per batch
train_epoch_pred.append(train_pred)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), model_config["max_grad_norm"])
optimizer.step()
scheduler.step() # Update learning rate schedule
train_losses.append(loss.item())
train_epoch_pred=np.concatenate(train_epoch_pred)
train_epoch_target=train_dataloader.dataset.labels
train_epoch_result=calculate_metrics(target=train_epoch_target, pred=train_epoch_pred)
print(f"=====Training Report: mean loss is {sum(train_loss_record)/len(train_loss_record)}=====")
print(train_epoch_result)
print("=====train done!=====")
# if e % log_interval == 0:
# print("epoch {} batch id {} loss {} train acc {}".format(e+1, batch_id+1, loss.data.cpu().numpy(), train_acc / (batch_id+1)))
# print("epoch {} train acc {}".format(e+1, train_acc / (batch_id+1)))
test_epoch_pred=[]
test_loss_record=[]
model.eval()
with torch.no_grad():
for batch_id, (token_ids, valid_length, segment_ids, test_label) in enumerate(test_dataloader):
token_ids = token_ids.long().to(device)
segment_ids = segment_ids.long().to(device)
valid_length = valid_length
# test_label = test_label.long().to(device)
test_label = test_label.float().to(device)
test_out = model(token_ids, valid_length, segment_ids)
test_loss = loss_fn(test_out, test_label)
test_loss_record.append(test_loss)
valid_losses.append(test_loss.item())
test_pred=test_out.detach().cpu().numpy()
test_real=test_label.detach().cpu().numpy()
test_batch_result = calculate_metrics(np.array(test_pred), np.array(test_real))
if batch_id%50==0:
print(f"batch number {batch_id}, test col-wise accuracy is : {test_batch_result['Column-wise Accuracy']}")
# save prediction result for calculation of accuracy per epoch
test_epoch_pred.append(test_pred)
test_epoch_pred=np.concatenate(test_epoch_pred)
test_epoch_target=test_dataloader.dataset.labels
test_epoch_result=calculate_metrics(target=test_epoch_target, pred=test_epoch_pred)
print(f"=====Testing Report: mean loss is {sum(test_loss_record)/len(test_loss_record)}=====")
print(test_epoch_result)
train_loss = np.average(train_losses)
valid_loss = np.average(valid_losses)
avg_train_losses.append(train_loss)
avg_valid_losses.append(valid_loss)
# clear lists to track next epoch
train_losses = []
valid_losses = []
# early_stopping needs the validation loss to check if it has decresed,
# and if it has, it will make a checkpoint of the current model
early_stopping(valid_loss, model)
if early_stopping.early_stop:
print("Early stopping")
break
# load the last checkpoint with the best model
model.load_state_dict(torch.load(path))
return model, avg_train_losses, avg_valid_losses
# early stopping patience; how long to wait after last time validation loss improved.
patience = 10
model, train_loss, valid_loss = train_model(model,
model_config["batch_size"],
patience,
model_config["num_epochs"],
path=weight_path)
```
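The `calculate_metrics` and `colwise_accuracy` helpers are project files not shown here, but for multi-label outputs trained with `BCEWithLogitsLoss` the usual recipe is to threshold the raw logits at 0 (equivalently, the sigmoid at 0.5) and score each label column independently. A minimal sketch of that idea, an assumption about what the helpers compute rather than their actual code:

```python
import numpy as np

def colwise_accuracy_sketch(logits, targets, threshold=0.0):
    """Mean per-label accuracy for multi-label classification.

    logits  : (n_samples, n_labels) raw model outputs
    targets : (n_samples, n_labels) 0/1 ground-truth matrix
    """
    preds = (logits > threshold).astype(float)
    per_label = (preds == targets).mean(axis=0)   # accuracy of each column
    return per_label.mean()

logits = np.array([[ 2.0, -1.0,  0.5],
                   [-0.3,  1.2, -2.0]])
targets = np.array([[1.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0]])
print(colwise_accuracy_sketch(logits, targets))   # 1.0: every column matches
```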
# test performance
```
weight_path="../weights/weight_01.pt"
model.load_state_dict(torch.load(weight_path))
test_epoch_pred=[]
test_loss_record=[]
valid_losses=[]
model.eval()
with torch.no_grad():
for batch_id, (token_ids, valid_length, segment_ids, test_label) in enumerate(test_dataloader):
token_ids = token_ids.long().to(device)
segment_ids = segment_ids.long().to(device)
valid_length = valid_length
# test_label = test_label.long().to(device)
test_label = test_label.float().to(device)
test_out = model(token_ids, valid_length, segment_ids)
test_loss = loss_fn(test_out, test_label)
test_loss_record.append(test_loss)
valid_losses.append(test_loss.item())
test_pred=test_out.detach().cpu().numpy()
test_real=test_label.detach().cpu().numpy()
test_batch_result = calculate_metrics(np.array(test_pred), np.array(test_real))
if batch_id%50==0:
print(f"batch number {batch_id}, test col-wise accuracy is : {test_batch_result['Column-wise Accuracy']}")
# save prediction result for calculation of accuracy per epoch
test_epoch_pred.append(test_pred)
# if batch_id%10==0:
# print(test_batch_result["Accuracy"])
test_epoch_pred=np.concatenate(test_epoch_pred)
test_epoch_target=test_dataloader.dataset.labels
test_epoch_result=calculate_metrics(target=test_epoch_target, pred=test_epoch_pred)
# print(test_epoch_pred)
# print(test_epoch_target)
print(f"=====Testing Report: mean loss is {sum(test_loss_record)/len(test_loss_record)}=====")
print(test_epoch_result)
import metrics_for_multilabel as metrics
metrics.mean_ndcg_score(test_epoch_target,test_epoch_pred, k=17)
acc_cnt=0
for n in range(test_epoch_pred.shape[0]):
tar_cnt=np.count_nonzero(test_epoch_target[n])
pred_=test_epoch_pred[n].argsort()[-tar_cnt:]
tar_=test_epoch_target[n].argsort()[-tar_cnt:]
acc_cnt+=len(set(pred_)&set(tar_))/len(pred_)
print(f"accuracy: {acc_cnt/test_epoch_pred.shape[0]}")
calculate_metrics(target=test_epoch_target, pred=test_epoch_pred, threshold=-1)
label_cases_sorted_target=data.label_cols
# `nlp` (gluonnlp), `tok` (the KoBERT tokenizer) and `max_len` are assumed to come from the KoBERT setup cells
transform = nlp.data.BERTSentenceTransform(tok, max_seq_length = max_len, pad=True, pair=False)
def get_prediction_from_txt(input_text, threshold=0.0):
sentences = transform([input_text])
get_pred=model(torch.tensor(sentences[0]).long().unsqueeze(0).to(device),torch.tensor(sentences[1]).unsqueeze(0),torch.tensor(sentences[2]).to(device))
pred=np.array(get_pred.to("cpu").detach().numpy()[0] > threshold, dtype=float)
pred=np.nonzero(pred)[0].tolist()
    print(f"Predicted tags for this conversation: {[label_cases_sorted_target[i] for i in pred]}")
true=np.nonzero(input_text_label)[0].tolist()
    print(f"Actual tags: {[label_cases_sorted_target[i] for i in true]}")
input_text_num=17
input_text=voc_dataset.iloc[input_text_num,0]
# input_text=test.iloc[input_text_num,0]
input_text_label=voc_dataset.iloc[input_text_num,1:].tolist()
get_prediction_from_txt(input_text, -1)
```
# XAI
```
from captum_tools_vocvis import *
from captum.attr import LayerIntegratedGradients, TokenReferenceBase, visualization
# model = BERTClassifier(bertmodel, dr_rate = 0.4).to(device)
# model.load_state_dict(torch.load(os.getcwd()+"/chat_voc_model.pt", map_location=device))
model.eval()
PAD_IND = tok.vocab.padding_token
PAD_IND = tok.convert_tokens_to_ids(PAD_IND)
token_reference = TokenReferenceBase(reference_token_idx=PAD_IND)
lig = LayerIntegratedGradients(model,model.bert.embeddings)
transform = nlp.data.BERTSentenceTransform(tok, max_seq_length = 64, pad=True, pair=False)
voc_label_dict_inverse={ele:label_cols.index(ele) for ele in label_cols}
voc_label_dict={label_cols.index(ele):ele for ele in label_cols}
def forward_with_sigmoid_for_bert(input,valid_length,segment_ids):
return torch.sigmoid(model(input,valid_length,segment_ids))
def forward_for_bert(input,valid_length,segment_ids):
return torch.nn.functional.softmax(model(input,valid_length,segment_ids),dim=1)
# accumulate a few samples in this array for visualization purposes
vis_data_records_ig = []
def interpret_sentence(model, sentence, min_len = 64, label = 0, n_steps=10):
# text = [token for token in tok.sentencepiece(sentence)]
# if len(text) < min_len:
# text += ['pad'] * (min_len - len(text))
# indexed = tok.convert_tokens_to_ids(text)
# print(text)
# tokenize and build the input sequence
seq_tokens=transform([sentence])
indexed=torch.tensor(seq_tokens[0]).long()#.to(device)
valid_length=torch.tensor(seq_tokens[1]).long().unsqueeze(0)
segment_ids=torch.tensor(seq_tokens[2]).long().unsqueeze(0).to(device)
sentence=[token for token in tok.sentencepiece(sentence)]
with torch.no_grad():
model.zero_grad()
input_indices = indexed.to(device)  # indexed is already a tensor
input_indices = input_indices.unsqueeze(0)
seq_length = min_len
# predict
pred = forward_with_sigmoid_for_bert(input_indices,valid_length,segment_ids).detach().cpu().numpy().argmax().item()
print(forward_with_sigmoid_for_bert(input_indices,valid_length,segment_ids))
pred_ind = round(pred)
# generate reference indices for each sample
reference_indices = token_reference.generate_reference(seq_length, device=device).unsqueeze(0)
# compute attributions and approximation delta using layer integrated gradients
attributions_ig, delta = lig.attribute(input_indices, reference_indices,\
n_steps=n_steps, return_convergence_delta=True,target=label,\
additional_forward_args=(valid_length,segment_ids))
print('pred: ', voc_label_dict[pred_ind], '(', '%.2f'%pred, ')', ', delta: ', abs(delta))
add_attributions_to_visualizer(attributions_ig, sentence, pred, pred_ind, label, delta, vis_data_records_ig)
def add_attributions_to_visualizer(attributions, input_text, pred, pred_ind, label, delta, vis_data_records):
attributions = attributions.sum(dim=2).squeeze(0)
attributions = attributions / torch.norm(attributions)
attributions = attributions.cpu().detach().numpy()
# storing a few samples in an array for visualization purposes
vis_data_records.append(visualization.VisualizationDataRecord(
attributions,
pred,
voc_label_dict[pred_ind], #Label.vocab.itos[pred_ind],
voc_label_dict[label], # Label.vocab.itos[label],
100, # Label.vocab.itos[1],
attributions.sum(),
input_text,
delta))
sentence=voc_dataset.iloc[22].text
visualize_text(vis_data_records_ig)
```
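`add_attributions_to_visualizer` reduces the `(1, seq_len, emb_dim)` attribution tensor to one score per token (sum over the embedding dimension) and scales the result to unit L2 norm. A standalone check with made-up numbers:

```python
import torch

# illustrative attributions: batch 1, 4 tokens, embedding dim 3
attr = torch.tensor([[[0.2, 0.1, 0.0],
                      [0.0, 0.0, 0.0],
                      [0.3, 0.3, 0.3],
                      [-0.1, 0.0, 0.1]]])
scores = attr.sum(dim=2).squeeze(0)   # per-token relevance: [0.3, 0.0, 0.9, 0.0]
scores = scores / torch.norm(scores)  # unit L2 norm, sign preserved
print(scores)
```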
<a href="https://colab.research.google.com/github/BenM1215/fastai-v3/blob/master/00_notebook_tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**Important note:** You should always work on a duplicate of the course notebook. On the page you used to open this, tick the box next to the name of the notebook and click duplicate to easily create a new version of this notebook.
You will get errors each time you try to update your course repository if you don't do this, and your changes will end up being erased by the original course version.
```
!curl -s https://course.fast.ai/setup/colab | bash
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
root_dir = "/content/gdrive/My Drive/"
base_dir = root_dir + 'fastai-v3/'
```
# Welcome to Jupyter Notebooks!
If you want to learn how to use this tool you've come to the right place. This article will teach you all you need to know to use Jupyter Notebooks effectively. You only need to go through Section 1 to learn the basics and you can go into Section 2 if you want to further increase your productivity.
You might be reading this tutorial in a web page (maybe GitHub or the course's webpage). We strongly suggest reading this tutorial in a (yes, you guessed it) Jupyter Notebook. This way you will be able to actually *try* the different commands we will introduce here.
## Section 1: Need to Know
### Introduction
Let's build up from the basics, what is a Jupyter Notebook? Well, you are reading one. It is a document made of cells. You can write like I am writing now (markdown cells) or you can perform calculations in Python (code cells) and run them like this:
```
1+1
```
Cool huh? This combination of prose and code makes Jupyter Notebook ideal for experimentation: we can see the rationale for each experiment, the code and the results in one comprehensive document. In fast.ai, each lesson is documented in a notebook and you can later use that notebook to experiment yourself.
Other renowned institutions in academia and industry use Jupyter Notebooks: Google, Microsoft, IBM, Bloomberg, Berkeley and NASA among others. Even Nobel-winning economists [use Jupyter Notebooks](https://paulromer.net/jupyter-mathematica-and-the-future-of-the-research-paper/) for their experiments and some suggest that Jupyter Notebooks will be the [new format for research papers](https://www.theatlantic.com/science/archive/2018/04/the-scientific-paper-is-obsolete/556676/).
### Writing
A type of cell in which you can write like this is called _Markdown_. [_Markdown_](https://en.wikipedia.org/wiki/Markdown) is a very popular markup language. To specify that a cell is _Markdown_ you need to click in the drop-down menu in the toolbar and select _Markdown_.
Click on the '+' button on the left and select _Markdown_ from the toolbar.
Now you can type your first _Markdown_ cell. Write 'My first markdown cell' and press run.

You should see something like this:
My first markdown cell
Now try making your first _Code_ cell: follow the same steps as before but don't change the cell type (when you add a cell its default type is _Code_). Type something like 3/2. You should see '1.5' as output.
```
3/2
```
### Modes
If you made a mistake in your *Markdown* cell and you have already ran it, you will notice that you cannot edit it just by clicking on it. This is because you are in **Command Mode**. Jupyter Notebooks have two distinct modes:
1. **Edit Mode**: Allows you to edit a cell's content.
2. **Command Mode**: Allows you to edit the notebook as a whole and use keyboard shortcuts but not edit a cell's content.
You can toggle between these two by either pressing <kbd>ESC</kbd> and <kbd>Enter</kbd> or clicking outside a cell or inside it (you need to double click if it's a Markdown cell). You can always tell which mode you're in since the current cell has a green border in **Edit Mode** and a blue border in **Command Mode**. Try it!
### Other Important Considerations
1. Your notebook is autosaved every 120 seconds. If you want to manually save it you can just press the save button on the upper left corner or press <kbd>s</kbd> in **Command Mode**.

2. To know if your kernel is computing or not you can check the dot in your upper right corner. If the dot is full, it means that the kernel is working. If not, it is idle. You can place the mouse on it and see the state of the kernel be displayed.

3. There are a couple of shortcuts you must know about which we use **all** the time (always in **Command Mode**). These are:
<kbd>Shift</kbd>+<kbd>Enter</kbd>: Runs the code or markdown on a cell
<kbd>Up Arrow</kbd>+<kbd>Down Arrow</kbd>: Toggle across cells
<kbd>b</kbd>: Create new cell
<kbd>0</kbd>+<kbd>0</kbd>: Reset Kernel
You can find more shortcuts in the Shortcuts section below.
4. You may need to use a terminal in a Jupyter Notebook environment (for example to git pull on a repository). That is very easy to do, just press 'New' in your Home directory and 'Terminal'. Don't know how to use the Terminal? We made a tutorial for that as well. You can find it [here](https://course.fast.ai/terminal_tutorial.html).

That's it. This is all you need to know to use Jupyter Notebooks. That said, we have more tips and tricks below ↓↓↓
## Section 2: Going deeper
### Markdown formatting
#### Italics, Bold, Inline, Blockquotes and Links
The five most important concepts to format your code appropriately when using markdown are:
1. *Italics*: Surround your text with '\_' or '\*'
2. **Bold**: Surround your text with '\__' or '\**'
3. `inline`: Surround your text with '\`'
4. > blockquote: Place '\>' before your text.
5. [Links](https://course.fast.ai/): Surround the text you want to link with '\[\]' and place the link adjacent to the text, surrounded with '()'
#### Headings
Notice that including a hashtag before the text in a markdown cell makes the text a heading. The number of hashtags you include will determine the priority of the header ('#' is level one, '##' is level two, '###' is level three and '####' is level four). We will add three new cells with the '+' button on the left to see how every level of heading looks.
Double click on some headings and find out what level they are!
#### Lists
There are three types of lists in markdown.
Ordered list:
1. Step 1
2. Step 1B
3. Step 3
Unordered list
* learning rate
* cycle length
* weight decay
Task list
- [x] Learn Jupyter Notebooks
- [x] Writing
- [x] Modes
- [x] Other Considerations
- [ ] Change the world
Double click on each to see how they are built!
### Code Capabilities
**Code** cells are different than **Markdown** cells in that they have an output cell. This means that we can _keep_ the results of our code within the notebook and share them. Let's say we want to show a graph that explains the result of an experiment. We can just run the necessary cells and save the notebook. The output will be there when we open it again! Try it out by running the next four cells.
```
# Import necessary libraries
from fastai.vision import *
import matplotlib.pyplot as plt
from PIL import Image
a = 1
b = a + 1
c = b + a + 1
d = c + b + a + 1
a, b, c ,d
plt.plot([a,b,c,d])
plt.show()
```
We can also print images while experimenting. I am watching you.
```
Image.open(base_dir +'images/notebook_tutorial/cat_example.jpg')
```
### Running the app locally
You may be running Jupyter Notebook from an interactive coding environment like Gradient, Sagemaker or Salamander. You can also run a Jupyter Notebook server from your local computer. What's more, if you have installed Anaconda you don't even need to install Jupyter (if not, just `pip install jupyter`).
You just need to run `jupyter notebook` in your terminal. Remember to run it from a folder that contains all the folders/files you will want to access. You will be able to open, view and edit files located within the directory in which you run this command but not files in parent directories.
If a browser tab does not open automatically once you run the command, you should CTRL+CLICK the link starting with 'http://localhost:' and this will open a new tab in your default browser.
### Creating a notebook
Click on 'New' in the upper left corner and 'Python 3' in the drop-down list (we are going to use a [Python kernel](https://github.com/ipython/ipython) for all our experiments).

Note: You will sometimes hear people talking about the Notebook 'kernel'. The 'kernel' is just the Python engine that performs the computations for you.
### Shortcuts and tricks
#### Command Mode Shortcuts
There are a couple of useful keyboard shortcuts in `Command Mode` that you can leverage to make Jupyter Notebook faster to use. Remember that you can switch back and forth between `Command Mode` and `Edit Mode` with <kbd>Esc</kbd> and <kbd>Enter</kbd>.
<kbd>m</kbd>: Convert cell to Markdown
<kbd>y</kbd>: Convert cell to Code
<kbd>D</kbd>+<kbd>D</kbd>: Delete cell
<kbd>o</kbd>: Toggle between hide or show output
<kbd>Shift</kbd>+<kbd>Arrow up/Arrow down</kbd>: Selects multiple cells. Once you have selected them you can operate on them like a batch (run, copy, paste etc).
<kbd>Shift</kbd>+<kbd>M</kbd>: Merge selected cells.
<kbd>Shift</kbd>+<kbd>Tab</kbd>: [press once] Tells you which parameters to pass on a function
<kbd>Shift</kbd>+<kbd>Tab</kbd>: [press three times] Gives additional information on the method
#### Cell Tricks
```
from fastai import *
from fastai.vision import *
```
There are also some tricks that you can code into a cell.
`?function-name`: Shows the definition and docstring for that function
```
?ImageDataBunch
```
`??function-name`: Shows the source code for that function
```
??ImageDataBunch
```
`doc(function-name)`: Shows the definition, docstring **and links to the documentation** of the function
(only works with fastai library imported)
```
doc(ImageDataBunch)
```
#### Line Magics
Line magics are functions that you can run on cells and take as an argument the rest of the line from where they are called. You call them by placing a '%' sign before the command. The most useful ones are:
`%matplotlib inline`: This command ensures that all matplotlib plots will be plotted in the output cell within the notebook and will be kept in the notebook when saved.
`%reload_ext autoreload`, `%autoreload 2`: Reload all modules before executing a new line. If a module is edited, it is not necessary to rerun the import commands, the modules will be reloaded automatically.
These three commands are always called together at the beginning of every notebook.
```
%matplotlib inline
%reload_ext autoreload
%autoreload 2
```
`%timeit`: Runs a line ten thousand times and displays the average time it took to run it.
```
%timeit [i+1 for i in range(1000)]
```
`%debug`: Allows you to inspect a function which is showing an error using the [Python debugger](https://docs.python.org/3/library/pdb.html).
```
for i in range(1000):
a = i+1
b = 'string'
c = b+1
%debug
```
```
import numpy as np
import pandas as pd
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from matplotlib import pyplot as plt
%matplotlib inline
class MosaicDataset1(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list, mosaic_label,fore_idx):
"""
Args:
mosaic_list: mosaic images, one per sample.
mosaic_label: class label for each mosaic.
fore_idx: index of the foreground (target) patch within each mosaic.
"""
self.mosaic = mosaic_list
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] , self.fore_idx[idx]
data = np.load("type4_data.npy",allow_pickle=True)
mosaic_list_of_images = data[0]["mosaic_list"]
mosaic_label = data[0]["mosaic_label"]
fore_idx = data[0]["fore_idx"]
batch = 250
msd = MosaicDataset1(mosaic_list_of_images, mosaic_label, fore_idx)
train_loader = DataLoader( msd,batch_size= batch ,shuffle=True)
class Focus_deep(nn.Module):
'''
deep focus network averaged at zeroth layer
input : elemental data
'''
def __init__(self,inputs,output,K,d):
super(Focus_deep,self).__init__()
self.inputs = inputs
self.output = output
self.K = K
self.d = d
self.linear1 = nn.Linear(self.inputs,50) #,self.output)
self.linear2 = nn.Linear(50,50)
self.linear3 = nn.Linear(50,self.output)
def forward(self,z):
batch = z.shape[0]
x = torch.zeros([batch,self.K],dtype=torch.float64)
y = torch.zeros([batch,50], dtype=torch.float64) # number of features of output
features = torch.zeros([batch,self.K,50],dtype=torch.float64)
x,y = x.to("cuda"),y.to("cuda")
features = features.to("cuda")
for i in range(self.K):
alp,ftrs = self.helper(z[:,i] ) # self.d*i:self.d*i+self.d
x[:,i] = alp[:,0]
features[:,i] = ftrs
log_x = F.log_softmax(x,dim=1) #log alpha
x = F.softmax(x,dim=1) # alphas
for i in range(self.K):
x1 = x[:,i]
y = y+torch.mul(x1[:,None],features[:,i]) # self.d*i:self.d*i+self.d
return y , x,log_x
def helper(self,x):
x = self.linear1(x)
x1 = torch.tanh(x)  # F.tanh is deprecated
x = F.relu(x)
x = self.linear2(x)
x = F.relu(x)
x = self.linear3(x)
#print(x1.shape)
return x,x1
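# --- sketch of what forward() computes above: alpha = softmax(scores) over the
# K patches, then avg = sum_i alpha_i * features_i. Synthetic tensors only, to
# illustrate the shapes (batch=2, K=3, feature dim=5):
import torch
import torch.nn.functional as F
_scores = torch.randn(2, 3, dtype=torch.float64)
_feats = torch.randn(2, 3, 5, dtype=torch.float64)
_alpha = F.softmax(_scores, dim=1)                 # (2, 3), each row sums to 1
_avg = (_alpha.unsqueeze(-1) * _feats).sum(dim=1)  # (2, 5) attention-weighted average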
class Classification_deep(nn.Module):
'''
input : elemental data
deep classification module data averaged at zeroth layer
'''
def __init__(self,inputs,output):
super(Classification_deep,self).__init__()
self.inputs = inputs
self.output = output
self.linear1 = nn.Linear(self.inputs,50)
#self.linear2 = nn.Linear(50,50)
self.linear2 = nn.Linear(50,self.output)
def forward(self,x):
x = F.relu(self.linear1(x))
#x = F.relu(self.linear2(x))
x = self.linear2(x)
return x
criterion = nn.CrossEntropyLoss()
def my_cross_entropy(x, y,alpha,log_alpha,k):
loss = criterion(x,y)
b = -1.0* alpha * log_alpha
b = torch.mean(torch.sum(b,dim=1))
closs = loss
entropy = b
loss = (1-k)*loss + ((k)*b)
return loss,closs,entropy
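# --- sketch of the entropy term above: b = mean_n sum_k (-alpha_nk * log alpha_nk)
# is the Shannon entropy of the attention weights; it is maximal for uniform
# alphas and ~0 for one-hot alphas (hand-built values, illustrative only):
import torch, math
_uniform = torch.full((1, 4), 0.25, dtype=torch.float64)
_onehot = torch.tensor([[1.0, 0.0, 0.0, 0.0]], dtype=torch.float64)
_eps = 1e-12  # avoid log(0)
_H = lambda a: torch.mean(torch.sum(-a * torch.log(a + _eps), dim=1))
# _H(_uniform) is about log(4) = 1.386, _H(_onehot) is about 0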
def calculate_attn_loss(dataloader,what,where,k):
what.eval()
where.eval()
r_loss = 0
cc_loss = 0
cc_entropy = 0
alphas = []
lbls = []
pred = []
fidices = []
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels,fidx = data
lbls.append(labels)
fidices.append(fidx)
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
avg,alpha,log_alpha = where(inputs)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
alphas.append(alpha.cpu().numpy())
loss,closs,entropy = my_cross_entropy(outputs,labels,alpha,log_alpha,k)
r_loss += loss.item()
cc_loss += closs.item()
cc_entropy += entropy.item()
alphas = np.concatenate(alphas,axis=0)
pred = np.concatenate(pred,axis=0)
lbls = np.concatenate(lbls,axis=0)
fidices = np.concatenate(fidices,axis=0)
#print(alphas.shape,pred.shape,lbls.shape,fidices.shape)
analysis = analyse_data(alphas,lbls,pred,fidices)
return r_loss/(i+1),cc_loss/(i+1),cc_entropy/(i+1),analysis # average over i+1 batches
def analyse_data(alphas,lbls,predicted,f_idx):
'''
analysis data is created here
'''
batch = len(predicted)
amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0
for j in range (batch):
focus = np.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
amth +=1
else:
alth +=1
if(focus == f_idx[j] and predicted[j] == lbls[j]):
ftpt += 1
elif(focus != f_idx[j] and predicted[j] == lbls[j]):
ffpt +=1
elif(focus == f_idx[j] and predicted[j] != lbls[j]):
ftpf +=1
elif(focus != f_idx[j] and predicted[j] != lbls[j]):
ffpf +=1
#print(sum(predicted==lbls),ftpt+ffpt)
return [ftpt,ffpt,ftpf,ffpf,amth,alth]
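# --- sketch of the four cases above on 3 hand-made samples:
# focus = argmax over alphas; "FT" = focused on the true foreground patch,
# "PT" = prediction correct.
import numpy as np
_alp = np.array([[0.7, 0.2, 0.1], [0.2, 0.6, 0.2], [0.4, 0.3, 0.3]])
_lab = np.array([0, 1, 2]); _prd = np.array([0, 2, 2]); _fid = np.array([0, 0, 0])
# sample 0: focus 0 == fidx, pred correct -> FTPT
# sample 1: focus 1 != fidx, pred wrong   -> FFPF
# sample 2: focus 0 == fidx, pred correct -> FTPT
_ftpt = sum(int(a.argmax() == f and p == l)
            for a, l, p, f in zip(_alp, _lab, _prd, _fid))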
number_runs = 20
full_analysis = []
FTPT_analysis = pd.DataFrame(columns = ["FTPT","FFPT", "FTPF","FFPF"])
k = 0.009
r_loss = []
r_closs = []
r_centropy = []
for n in range(number_runs):
print("--"*40)
# instantiate focus and classification Model
torch.manual_seed(n)
where = Focus_deep(2,1,9,2).double()
torch.manual_seed(n)
what = Classification_deep(50,3).double()
where = where.to("cuda")
what = what.to("cuda")
# instantiate optimizer
optimizer_where = optim.Adam(where.parameters(),lr =0.001)#,momentum=0.9)
optimizer_what = optim.Adam(what.parameters(), lr=0.001)#,momentum=0.9)
#criterion = nn.CrossEntropyLoss()
acti = []
analysis_data = []
loss_curi = []
cc_loss_curi = []
cc_entropy_curi = []
epochs = 3000
# calculate zeroth epoch loss and FTPT values
running_loss,_,_,anlys_data = calculate_attn_loss(train_loader,what,where,k)
loss_curi.append(running_loss)
analysis_data.append(anlys_data)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
# training starts
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
what.train()
where.train()
for i, data in enumerate(train_loader, 0):
# get the inputs
inputs, labels,_ = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_where.zero_grad()
optimizer_what.zero_grad()
# forward + backward + optimize
avg, alpha,log_alpha = where(inputs)
outputs = what(avg)
loss,_,_ = my_cross_entropy( outputs,labels,alpha,log_alpha,k)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_where.step()
optimizer_what.step()
running_loss,ccloss,ccentropy,anls_data = calculate_attn_loss(train_loader,what,where,k)
analysis_data.append(anls_data)
print('epoch: [%d] loss: %.3f celoss: %.3f entropy: %.3f' %(epoch + 1,running_loss,ccloss,ccentropy))
loss_curi.append(running_loss) #loss per epoch
cc_loss_curi.append(ccloss)
cc_entropy_curi.append(ccentropy)
if running_loss<=0.01:
break
print('Finished Training run ' +str(n))
analysis_data = np.array(analysis_data)
FTPT_analysis.loc[n] = analysis_data[-1,:4]/30
full_analysis.append((epoch, analysis_data))
r_loss.append(np.array(loss_curi))
r_closs.append(np.array(cc_loss_curi))
r_centropy.append(np.array(cc_entropy_curi))
correct = 0
total = 0
with torch.no_grad():
for data in train_loader:
images, labels,_ = data
images = images.double()
images, labels = images.to("cuda"), labels.to("cuda")
avg, alpha,_ = where(images)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 3000 train images: %d %%' % ( 100 * correct / total))
a,b= full_analysis[0]
print(a)
cnt=1
for epoch, analysis_data in full_analysis:
analysis_data = np.array(analysis_data)
# print("="*20+"run ",cnt,"="*20)
plt.figure(figsize=(6,6))
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,0],label="ftpt")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,1],label="ffpt")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,2],label="ftpf")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,3],label="ffpf")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.title("Training trends for run "+str(cnt))
#plt.savefig("/content/drive/MyDrive/Research/alpha_analysis/100_300/k"+str(k)+"/"+"run"+str(cnt)+name+".png",bbox_inches="tight")
#plt.savefig("/content/drive/MyDrive/Research/alpha_analysis/100_300/k"+str(k)+"/"+"run"+str(cnt)+name+".pdf",bbox_inches="tight")
cnt+=1
# plt.figure(figsize=(6,6))
# plt.plot(np.arange(0,epoch+2,1),analysis_data[:,0],label="ftpt")
# plt.plot(np.arange(0,epoch+2,1),analysis_data[:,1],label="ffpt")
# plt.plot(np.arange(0,epoch+2,1),analysis_data[:,2],label="ftpf")
# plt.plot(np.arange(0,epoch+2,1),analysis_data[:,3],label="ffpf")
# plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.plot(loss_curi)
np.mean(np.array(FTPT_analysis),axis=0)
FTPT_analysis.to_csv("type4_first_k_value_01_lr_001.csv",index=False)
FTPT_analysis
```
# Entropy
```
entropy_1 = r_centropy[11] # FTPT 100 ,FFPT 0 k value =0.01
loss_1 = r_loss[11]
ce_loss_1 = r_closs[11]
entropy_2 = r_centropy[16] # kvalue = 0 FTPT 99.96, FFPT 0.03
ce_loss_2 = r_closs[16]
# plt.plot(r_closs[1])
plt.plot(entropy_1,label = "entropy k_value=0.01")
plt.plot(loss_1,label = "overall k_value=0.01")
plt.plot(ce_loss_1,label = "ce kvalue = 0.01")
plt.plot(entropy_2,label = "entropy k_value = 0")
plt.plot(ce_loss_2,label = "ce k_value=0")
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
plt.savefig("second_layer.png")
```