**Chapter 09: Support Vector Machines**
# 1 Maximal Margin Classifier
## 1.1 What Is a Hyperplane?
In a $p$-dimensional space, a *hyperplane* is a flat affine subspace of dimension $p-1$.
Mathematically, a hyperplane in $p$-dimensional space is the set of points satisfying the equation
$$
\beta_0+\beta_1X_1+\beta_2X_2+\ldots+\beta_pX_p=0
$$
## 1.2 Classification Using a Separating Hyperplane
Suppose that we have an $n \times p$ data matrix $X$ that consists of $n$ training observations in $p$-dimensional space, and that these observations fall into two classes; that is, $y_1,\ldots,y_n \in \{-1,1\}$.
Then a separating hyperplane has the property that
$$
(\beta_0+\beta_1x_{i1} +\beta_2x_{i2}+\ldots+\beta_px_{ip})y_i > 0
$$
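As a quick numerical illustration (the hyperplane and points below are toy values, not from the text), the sign of $\beta_0 + \beta^T x$ tells us which side of the hyperplane an observation falls on:

```python
import numpy as np

# Hypothetical hyperplane: beta_0 + beta_1*X_1 + beta_2*X_2 = 0
beta0, beta = -1.0, np.array([1.0, 1.0])

# Toy observations with labels y_i in {-1, +1}
X = np.array([[2.0, 2.0],   # beta0 + beta.x =  3 > 0
              [0.0, 0.0]])  # beta0 + beta.x = -1 < 0
y = np.array([1, -1])

# A point is classified by the sign of its decision value
decision = beta0 + X @ beta
y_hat = np.sign(decision)

# The hyperplane separates the classes iff y_i * decision_i > 0 for all i
assert np.all(y * decision > 0)
print(y_hat)  # [ 1. -1.]
```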
## 1.3 The Maximal Margin Classifier
A natural choice is the maximal margin hyperplane, which is the separating hyperplane that is farthest from the training observations.
Briefly, the maximal margin hyperplane is the solution to the optimization problem
$$
\underset{\beta_0,\beta_1,\ldots,\beta_p}{\text{maximize}}\; M
$$
subject to
$$\sum_{j=1}^{p}\beta_j^2=1$$
$$(\beta_0+\beta_1x_{i1} +\beta_2x_{i2}+\ldots+\beta_px_{ip})y_i \ge M \quad \text{for all } i=1,\ldots,n$$
The constraint $\sum_{j=1}^{p}\beta_j^2=1$ ensures that the left-hand side of the second constraint equals the perpendicular distance from the $i$th observation to the hyperplane, so that $M$ represents the width of the margin.
# 2 Support Vector Classifier
## 2.1 Soft Margin Classifier
The support vector classifier, sometimes called a soft margin classifier, is the solution to the optimization problem
$$
\underset{\beta_0,\beta_1,\ldots,\beta_p}{\text{maximize}}\; M
$$
subject to
$$\sum_{j=1}^{p}\beta_j^2=1$$
$$(\beta_0+\beta_1x_{i1} +\beta_2x_{i2}+\ldots+\beta_px_{ip})y_i \ge M(1-\epsilon_i)$$
$$\epsilon_i \ge 0, \quad \sum_{i=1}^n\epsilon_i \le C$$
where the $\epsilon_i$ are slack variables that allow individual observations to fall on the wrong side of the margin or the hyperplane, and $C$ is a nonnegative tuning parameter that bounds the total budget for such violations.
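This quadratic program is normally solved with standard QP solvers; purely as an illustrative sketch (toy data, an equivalent hinge-loss reformulation, and parameter names of my choosing), sub-gradient descent on the hinge loss produces a similar soft-margin separator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data, labels in {-1, +1}
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

# Hinge-loss form: minimize ||beta||^2 / 2 + C * sum_i max(0, 1 - y_i f(x_i))
beta, beta0, C, lr = np.zeros(2), 0.0, 1.0, 0.01
for _ in range(200):
    margins = y * (X @ beta + beta0)
    viol = margins < 1                      # observations inside or beyond the margin
    grad_beta = beta - C * (y[viol, None] * X[viol]).sum(axis=0)
    grad_beta0 = -C * y[viol].sum()
    beta -= lr * grad_beta
    beta0 -= lr * grad_beta0

accuracy = np.mean(np.sign(X @ beta + beta0) == y)
print(accuracy)
```

The slack budget $C$ of the text corresponds (inversely) to the penalty weight `C` here: a larger penalty tolerates fewer margin violations.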
# 3 Support Vector Machines
It is an extension of the support vector classifier that results from enlarging the feature space in a specific way, using *kernels*. The solution to the support vector classifier problem involves only the *inner products* of the observations. The inner product of two $r$-vectors $a$ and $b$ is defined as $\langle a,b\rangle=\sum_{i=1}^ra_ib_i$. Thus the inner product of two observations $x_i, x_{i'}$ is given by
$$
\langle x_i,x_{i'}\rangle=\sum_{j=1}^{p}x_{ij}x_{i'j}
$$
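Numerically (toy matrix below), the $n \times n$ Gram matrix of inner products is all the optimization needs, and swapping in a kernel function generalizes the classifier:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # n = 3 observations, p = 2 features

# Gram matrix of pairwise inner products <x_i, x_i'> = sum_j x_ij * x_i'j
K = X @ X.T

# Entry (0, 1) is <x_1, x_2> = 1*3 + 2*4 = 11
assert K[0, 1] == 11.0

# Replacing the inner product with a kernel function, e.g. the RBF kernel
# K(x, x') = exp(-gamma * ||x - x'||^2), implicitly enlarges the feature space
gamma = 0.5
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K_rbf = np.exp(-gamma * sq_dists)
assert np.allclose(np.diag(K_rbf), 1.0)  # K(x, x) = 1 on the diagonal
```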
# 4 SVMs with more than two classes
## 4.1 One-Versus-One Classification
This approach constructs ${K \choose 2}$ SVMs, each comparing a pair of classes. We classify a test observation using each of the ${K \choose 2}$ classifiers, and tally the number of times that the test observation is assigned to each of the $K$ classes; the final prediction is the class that receives the most votes.
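A minimal sketch of the vote tally (the per-pair predictions below are hypothetical, standing in for trained pairwise SVMs):

```python
from itertools import combinations
from collections import Counter

K = 3  # number of classes
# Hypothetical outcome of each of the C(K, 2) pairwise SVMs on one
# test observation: pair (a, b) -> predicted class, either a or b
pairwise_prediction = {(0, 1): 0, (0, 2): 2, (1, 2): 2}

# Tally the votes across all pairwise classifiers
votes = Counter(pairwise_prediction[pair] for pair in combinations(range(K), 2))
predicted_class = votes.most_common(1)[0][0]
print(predicted_class)  # class 2 wins with two votes
```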
## 4.2 One-Versus-All Classification
We fit $K$ SVMs, each time comparing one of the $K$ classes to the remaining $K-1$ classes. Let $x^{*}$ denote a test observation. We assign the observation to the class for which $\beta_{0k}+\beta_{1k}x_1^*+\ldots+\beta_{pk}x_p^*$ is largest, as this amounts to a high level of confidence that the test observation belongs to the $k$th class.
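In code, the one-versus-all decision is just an argmax over the $K$ linear scores (the fitted coefficients below are purely illustrative):

```python
import numpy as np

# Hypothetical fitted coefficients for K = 3 one-versus-all SVMs:
# row k holds (beta_1k, ..., beta_pk) for p = 2 features
beta0 = np.array([0.1, -0.3, 0.2])
betas = np.array([[ 1.0, -1.0],
                  [ 0.5,  0.5],
                  [-1.0,  1.0]])

x_star = np.array([0.2, 0.9])            # test observation
scores = beta0 + betas @ x_star          # confidence level for each class
predicted_class = int(np.argmax(scores))
print(predicted_class)  # class 2 has the largest score (0.9)
```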
# Building Neural Networks
### Contents
1. Introduction
2. MNIST Dataset
3. Define your own Neural Network
4. Train the Model
5. Test the Model
6. Demonstrate the Model
7. Save the Model
## 1. Introduction
A neural network is a network of artificial neurons and nodes, modeled after the architecture and activation processes of the human brain. It is widely used to solve artificial intelligence problems. <br/>
<br> The diagram below describes the basic process of a neural network. Inputs are multiplied by weights and summed, and the result is passed to an activation function, which converts the output to a value in a specific range, such as -1 to 1 or 0 to 1.
<img src = "./Images/DNN.png">
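The weighted-sum-then-activation step can be sketched in a few lines (the weights and inputs below are illustrative; the sigmoid squashes any real number into (0, 1)):

```python
import numpy as np

def sigmoid(z):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

inputs  = np.array([0.5, -1.0, 2.0])    # example inputs
weights = np.array([0.4, 0.3, -0.2])    # illustrative weights
bias = 0.1

# neuron output: activation(weighted sum of inputs + bias)
z = inputs @ weights + bias
out = sigmoid(z)
assert 0.0 < out < 1.0
print(out)  # ≈ 0.401
```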
There are variations of the neural network, including fully connected networks, convolutional neural networks, and recurrent neural networks, that work in different areas. <br/>
<br> **Fully connected networks**, or deep neural networks, are composed of stacked linear layers. <br/>
<br> **Convolutional Neural Networks (CNNs)** are typically made up of convolutional layers, pooling layers, and fully connected layers. They effectively extract useful features from visual data, so they are widely used for visual tasks. <br/>
<br> **Recurrent Neural Networks (RNNs)** carry previously processed information forward so it can be reused later, which makes them effective for natural language and time-series data.
```
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torch.autograd import Variable
import torchvision
import torchvision.datasets as datasets
import torchvision.transforms as transforms
import helper
import os
import random
import numpy as np
from matplotlib import pyplot as plt
import warnings
warnings.filterwarnings('ignore')
print(torch.__version__)
```
Set the device to GPU if available, otherwise CPU:
```
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
```
## 2. MNIST Dataset
The neural network introduced in this tutorial performs **digit classification** using the `MNIST (Modified National Institute of Standards and Technology) database`. <br/>
<br> MNIST is a large database of handwritten digits and one of the most widely used datasets for training and testing neural networks, especially for beginners. It contains 60,000 training images and 10,000 test images; each image has a size of 28 * 28 (= 784) pixels and a single grayscale color channel.
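As a quick check of the `Normalize` step used below: with mean 0.5 and std 0.5, a pixel value $v \in [0, 1]$ is mapped to $(v - 0.5)/0.5 \in [-1, 1]$:

```python
def normalize(v, mean=0.5, std=0.5):
    # the same arithmetic transforms.Normalize applies per channel
    return (v - mean) / std

assert normalize(0.0) == -1.0   # black pixel
assert normalize(1.0) == 1.0    # white pixel
assert normalize(0.5) == 0.0    # mid-gray
```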
```
# define a transform to normalize the data
transform = transforms.Compose([
    transforms.ToTensor(),   # ToTensor() must be declared before Normalize();
                             # it converts the range from [0, 255] to [0, 1]
    transforms.Normalize(mean = (0.5, ), std = (0.5, ))
    # mean: (m1, m2, ..., mn) and std: (s1, s2, ..., sn)
])
# batch_size is the number of training examples in one forward and backward pass.
# Powers of two (2^n) are commonly chosen because they map well onto binary hardware.
batch_size = 64
# download and load the train data
train_data = datasets.MNIST('./data/MNIST',
download = True,
train = True,
transform = transform)
train_loader = DataLoader(train_data,
batch_size = batch_size,
shuffle = True)
# download and load the test data
test_data = datasets.MNIST('./data/MNIST',
download = True, # already downloaded so don't have to declare again
train = False,
transform = transform)
test_loader = DataLoader(test_data,
batch_size = batch_size,
shuffle = False)
train_data.data.size(), test_data.data.size()
```
To check the data, the code below shows a random example from the training dataset.
```
idx = torch.randint(0, len(train_data), (1, )).item()
random_image = train_data[idx][0].squeeze().numpy()
target_num = train_data[idx][1]
print("Target: {}".format(target_num))
print("Size of Image: {}".format(random_image.shape))
plt.imshow(random_image, cmap = "gray")
plt.axis("off")
plt.show()
```
## 3. Define your own Neural Network
<img src = "./Images/MNIST_Network.png" align = "left">
<br> [Image source](http://neuralnetworksanddeeplearning.com/chap1.html) </br>
Let's build a neural network that classifies this dataset with PyTorch. Its input size should be 784 (= 28 * 28) and its output size 10. The figure above is for reference only and does not match the network built below. More layers or other deep learning techniques can be added to increase accuracy. <br/>
<br> `nn.Sequential` is a sequential container holding modules such as `nn.Linear`, `nn.Conv2d`, and `nn.ReLU`. It applies the modules in the order they were added. <br/>
<br> The `forward` function runs the layers declared in `__init__`. Read the code below for better understanding.
```
# define hyper-parameters
input_size = 28 * 28 #(= 784)
hidden_size = [512, 256, 64] # multi-layer perceptron
num_classes = 10
learning_rate = 0.001
class Model(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(Model, self).__init__()
        self.main = nn.Sequential(
            nn.Linear(input_size, hidden_size[0]),
            nn.ReLU(),
            nn.Linear(hidden_size[0], hidden_size[1]),
            nn.ReLU(),
            nn.Linear(hidden_size[1], hidden_size[2]),
            nn.ReLU(),
            nn.Linear(hidden_size[2], num_classes)
        )

    def forward(self, x):
        out = self.main(x)
        return out
model = Model(input_size, hidden_size, num_classes)
print(model.to(device))
```
For training, an objective function and an optimizer are required. The objective function calculates the difference between the model's output and the answer. The optimizer updates the weights during backpropagation to drive the outputs closer to the answers.<br/>
<br> `torch.nn.CrossEntropyLoss` is commonly used when solving classification problems.
<br> The `torch.optim.Adam` optimizer is used here as it generally performs well.
```
# Loss Function and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr = learning_rate)
# train list
train_cost = []
```
## 4. Train the Model
The designed neural network must be trained on the MNIST dataset to achieve its goal: producing the desired output on unseen data. The diagram below gives a brief overview of training a neural network.
<img src = './images/TrainingNN.png'>
The input is fed into the neural network. Then, it outputs a prediction, which is compared with the true targets using the loss function. The optimizer then updates the weights of the network using the loss value.
In addition, `num_epochs` is the total number of training epochs.
<br> `model.train()` sets the model to training mode.
```
def train(model, num_epochs):
    model.train()
    for epoch in range(num_epochs):
        total_batch = len(train_data) // batch_size
        for i, (batch_images, batch_labels) in enumerate(train_loader):
            X = batch_images.view(-1, 28 * 28).to(device)
            Y = batch_labels.to(device)

            #######################################################################
            ### The below five or six lines should be memorized for further use.###
            # forward pass
            pred = model(X)
            # calculation of loss value
            cost = criterion(pred, Y)
            # add the cost value to the train_cost list for plotting later
            train_cost.append(cost.item())

            ## backward pass and optimization
            # gradient initialization
            optimizer.zero_grad()
            # backward pass
            cost.backward()
            # parameter update
            optimizer.step()
            #######################################################################

            # print statistics of the training process
            if (i+1) % 300 == 0:
                print('Epoch [%d/%d], Iter [%d/%d], Loss: %.4f'
                      %(epoch+1, num_epochs, i+1, total_batch, np.mean(train_cost)))

    # save the model weights for future inference
    torch.save(model.state_dict(), './data/Tutorial_2_BasicNN.pkl')
    print("Learning Finished!")

train(model = model, num_epochs = 7)
```
Adding more deep learning techniques should further improve the model's performance.
## 5. Test the Model
After training the model, we should evaluate it on unseen data. First, use `load_state_dict(torch.load())` to load the saved model weights. Then, run `model.eval()` to set the model to evaluation mode. This turns off any dropout or batch-normalization layers in the model that are not desirable during testing. Also, `torch.no_grad()` disables gradient computation.
```
model = Model(input_size, hidden_size, num_classes).to(device)
model.load_state_dict(torch.load('./data/Tutorial_2_BasicNN.pkl'))
model.eval()
def test(model):
    # declare that the model is about to be evaluated
    model.eval()
    correct = 0
    total = 0
    with torch.no_grad():
        for images, labels in test_loader:
            images = images.view(-1, 28 * 28).to(device)
            labels = labels.to(device)
            # forward pass
            outputs = model(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    print("Accuracy of Test Images: %f %%" % (100 * float(correct) / total))

test(model = model)
```
## 6. Demonstrate the Model
This section draws a random image from the test dataset to check whether learning was carried out properly.
```
r = random.randint(0, len(test_data)-1)
# note: raw pixel values are used here (not the normalized transform)
X_single_data = test_data.data[r:r + 1].view(-1, 28 * 28).float().to(device)
Y_single_data = test_data.targets[r:r + 1]

single_prediction = model(X_single_data)
plt.imshow(X_single_data.view(28, 28).cpu().numpy(), cmap='gray')
print('Label: ', Y_single_data.view(1).numpy())
print('Prediction: ', torch.max(single_prediction.data, 1)[1].cpu().numpy())
```
# Data Processing Pipelines
```
import pandas as pd
import holoviews as hv
from holoviews import opts
from bokeh.sampledata import stocks
from holoviews.operation.timeseries import rolling, rolling_outlier_std
hv.extension('bokeh')
opts.defaults(opts.Curve(width=600, framewise=True))
```
In the previous guides we discovered how to load and declare [dynamic, live data](./07-Live_Data.ipynb) and how to [transform elements](./11-Transforming_Elements.ipynb) using `dim` expressions and operations. In this guide we will discover how to combine dynamic data with operations to declare lazy and declarative data processing pipelines, which can be used for interactive exploration but can also drive complex dashboards or even bokeh apps.
## Declaring dynamic data
We will begin by declaring a function which loads some data. In this case we will just load some stock data from the bokeh sample data, but you could imagine querying this data through a REST interface or some other API, loading a large collection of data from disk, or generating the data from a simulation or data processing job.
```
def load_symbol(symbol, **kwargs):
    df = pd.DataFrame(getattr(stocks, symbol))
    df['date'] = df.date.astype('datetime64[ns]')
    return hv.Curve(df, ('date', 'Date'), ('adj_close', 'Adjusted Close'))

stock_symbols = ['AAPL', 'FB', 'GOOG', 'IBM', 'MSFT']
dmap = hv.DynamicMap(load_symbol, kdims='Symbol').redim.values(Symbol=stock_symbols)
```
We begin by displaying our DynamicMap to see what we are dealing with. Recall that a ``DynamicMap`` is only evaluated when you request the key so the ``load_symbol`` function is only executed when first displaying the ``DynamicMap`` and whenever we change the widget dropdown:
```
dmap
```
## Processing data
It is very common to want to process some data, and for this purpose HoloViews provides so-called ``Operations``, which are described in detail in the [Transforming Elements](./11-Transforming_Elements.ipynb) user guide. ``Operations`` are simply parameterized functions, which take HoloViews objects as input, transform them in some way, and then return the output.
In combination with [Dimensioned Containers](./05-Dimensioned_Containers.ipynb) such as ``HoloMap`` and ``GridSpace`` they are a powerful way to explore how the parameters of your transform affect the data. We will start with a simple example. HoloViews provides a ``rolling`` function which smoothes timeseries data with a rolling window. We will apply this operation with a ``rolling_window`` of 30, i.e. roughly a month of our daily timeseries data:
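Before applying it, it may help to see what a rolling window actually computes. A minimal pandas sketch (toy series; `center=True` is chosen here for illustration, not necessarily the operation's exact defaults): each value becomes the mean of its window, with NaN at the edges where the window is incomplete:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])

# centered window of 3: each value becomes the mean of itself and its neighbors
smoothed = s.rolling(window=3, center=True).mean()

# the edges are NaN because the window is incomplete there
print(smoothed.tolist())  # [nan, 2.0, 3.0, 4.0, nan]
```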
```
smoothed = rolling(dmap, rolling_window=30)
smoothed
```
As you can see the ``rolling`` operation applies directly to our ``DynamicMap``, smoothing each ``Curve`` before it is displayed. Applying an operation to a ``DynamicMap`` keeps the data as a ``DynamicMap``, this means the operation is also applied lazily whenever we display or select a different symbol in the dropdown widget.
### Dynamically evaluating parameters on operations and transforms with ``.apply``
The ``.apply`` method allows us to automatically build a dynamic pipeline given an object and some operation or function, along with parameter, stream, or widget instances passed in as keyword arguments. Internally it will then build a `Stream` to ensure that whenever one of these changes the plot is updated. To learn more about streams, see the [Responding to Events](./12-Responding_to_Events.ipynb) user guide.
This mechanism allows us to build powerful pipelines by linking parameters on a user defined class or even an external widget, e.g. here we import an ``IntSlider`` widget from [``panel``](https://pyviz.panel.org):
```
import panel as pn
slider = pn.widgets.IntSlider(name='rolling_window', start=1, end=100, value=50)
```
Using the ``.apply`` method we could now apply the ``rolling`` operation to the DynamicMap and link the slider to the operation's ``rolling_window`` parameter (which also works for simple functions as will be shown below). However, to further demonstrate the features of `dim` expressions and the `.transform` method, which we first introduced in the [Transforming elements user guide](11-Transforming_Elements.ipynb), we will instead apply the rolling mean using the `.df` namespace accessor on a `dim` expression:
```
rolled_dmap = dmap.apply.transform(adj_close=hv.dim('adj_close').df.rolling(slider).mean())
rolled_dmap
```
The ``rolled_dmap`` is another DynamicMap that defines a simple two-step pipeline, which calls the original callback when the ``symbol`` changes and reapplies the expression whenever the slider value changes. Since the widget's value is now linked to the plot via a ``Stream`` we can display the widget and watch the plot update:
```
slider
```
The power of building pipelines is that different visual components can share the same inputs but compute very different things from that data. The part of the pipeline that is shared is only evaluated once making it easy to build efficient data processing code. To illustrate this we will also apply the ``rolling_outlier_std`` operation which computes outliers within the ``rolling_window`` and again we will supply the widget ``value``:
```
outliers = dmap.apply(rolling_outlier_std, rolling_window=slider.param.value)
rolled_dmap * outliers.opts(color='red', marker='triangle')
```
We can chain operations like this indefinitely and attach parameters or explicit streams to each stage. By chaining we can watch our visualization update whenever we change a stream value anywhere in the pipeline and HoloViews will be smart about which parts of the pipeline are recomputed, which allows us to build complex visualizations very quickly.
The ``.apply`` method is also not limited to operations. We can just as easily apply a simple Python function to each object in the ``DynamicMap``. Here we define a function to compute the residual between the original ``dmap`` and the ``rolled_dmap``.
```
def residual_fn(overlay):
    # get first and second Element in overlay
    el1, el2 = overlay.get(0), overlay.get(1)

    # get x-values and y-values of curves
    xvals  = el1.dimension_values(0)
    yvals  = el1.dimension_values(1)
    yvals2 = el2.dimension_values(1)

    # return new Element with subtracted y-values and new label
    return el1.clone((xvals, yvals-yvals2), vdims='Residual')
```
If we overlay the two DynamicMaps we can then dynamically broadcast this function to each of the overlays, producing a new DynamicMap which responds to both the symbol selector widget and the slider:
```
residual = (dmap * rolled_dmap).apply(residual_fn)
residual
```
In later guides we will see how we can combine HoloViews plots and Panel widgets into custom layouts allowing us to define complex dashboards. For more information on how to deploy bokeh apps from HoloViews and build dashboards see the [Deploying Bokeh Apps](./Deploying_Bokeh_Apps.ipynb) and [Dashboards](./17-Dashboards.ipynb) guides. To get a quick idea of what this might look like, let's compose all the components we have now built.
# Portfolio Selection Optimization
This model is an example of the classic [Markowitz portfolio selection optimization model](https://en.wikipedia.org/wiki/Markowitz_model). We want to find the fraction of the portfolio to invest among a set of stocks that balances risk and return. It is a Quadratic Programming (QP) model with vector and matrix data for returns and risk, respectively. This is best suited to a matrix formulation, so we use the Gurobi Python *matrix* interface. The basic model is fairly simple, so we also solve it parametrically to find the efficient frontier.
**Download the Repository** <br />
You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip).
## Model Formulation
### Parameters
We use the [Greek values](https://en.wikipedia.org/wiki/Greeks_\(finance\)) that are traditional in finance:
- $\delta$: n-element vector measuring the change in price for each stock
- $\sigma$: n x n matrix measuring the covariance among stocks
There is one additional parameter when solving the model parametrically:
- r: target return
### Decision Variables
- $x \ge 0$: n-element vector where each element represents the fraction of the portfolio to invest in each stock
### Objective Function
Minimize the total risk, a convex quadratic function:
\begin{equation}
\min x^t \cdot \sigma \cdot x
\end{equation}
### Constraints
Allocate the entire portfolio: the total investments should be 1.0 (100%), where $e$ is a unit vector (all 1's):
\begin{equation}
e \cdot x = 1
\end{equation}
Return: When we solve the model parametrically for different return values $r$, we add a constraint on the target return:
\begin{equation}
\delta \cdot x = r
\end{equation}
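Before building the Gurobi model, the pieces of this formulation can be checked numerically with a toy example (the covariance matrix, returns, and weights below are invented purely for illustration):

```python
import numpy as np

# Toy 3-stock example (hypothetical values)
sigma = np.array([[0.10, 0.02, 0.04],
                  [0.02, 0.08, 0.06],
                  [0.04, 0.06, 0.12]])   # covariance matrix
delta = np.array([0.05, 0.03, 0.07])     # mean returns

x = np.array([0.5, 0.3, 0.2])            # candidate portfolio weights
assert np.isclose(x.sum(), 1.0)          # budget constraint e.x = 1

risk = x @ sigma @ x                     # objective x^T sigma x
ret = delta @ x                          # expected return delta.x
print(risk, ret)  # risk ≈ 0.0582, return = 0.048
```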
## Python Implementation
### Stock data
Use [yfinance](https://pypi.org/project/yfinance/) library to get the latest 2 years of _actual stock data_ from the 20 most profitable US companies, [according to Wikipedia in April 2021](https://en.wikipedia.org/wiki/List_of_largest_companies_in_the_United_States_by_revenue#List_of_companies_by_profit).
```
%pip install gurobipy yfinance
import yfinance as yf
stocks = ['BRK-A', 'AAPL', 'MSFT', 'JPM', 'GOOG', 'BAC', 'INTC', 'WFC',
'C', 'VZ', 'FB', 'PFE', 'JNJ', 'WMT', 'XOM',
'FNMA', 'T', 'UNH', 'CMCSA', 'V' ]
data = yf.download(stocks, period='2y')
```
### Compute Greeks
Using the downloaded stock data, find the delta (return), sigma (covariance) and standard deviation values for stock prices:
```
import numpy as np
closes = np.transpose(np.array(data.Close)) # matrix of daily closing prices
absdiff = np.diff(closes) # change in closing price each day
reldiff = np.divide(absdiff, closes[:,:-1]) # relative change in daily closing price
delta = np.mean(reldiff, axis=1) # mean price change
sigma = np.cov(reldiff)                      # covariance matrix of daily returns
std = np.std(reldiff, axis=1) # standard deviation
```
## Minimize risk by solving QP model
```
import gurobipy as gp
from gurobipy import GRB
from math import sqrt
# Create an empty model
m = gp.Model('portfolio')
# Add matrix variable for the stocks
x = m.addMVar(len(stocks))
# Objective is to minimize risk (squared). This is modeled using the
# covariance matrix, which measures the historical correlation between stocks
portfolio_risk = x @ sigma @ x
m.setObjective(portfolio_risk, GRB.MINIMIZE)
# Fix budget with a constraint
m.addConstr(x.sum() == 1, 'budget')
# Verify model formulation
m.write('portfolio_selection_optimization.lp')
# Optimize model to find the minimum risk portfolio
m.optimize()
```
## Display minimum risk portfolio using Pandas
```
import pandas as pd
minrisk_volatility = sqrt(m.ObjVal)
minrisk_return = delta @ x.X
pd.DataFrame(data=np.append(x.X, [minrisk_volatility, minrisk_return]),
index=stocks + ['Volatility', 'Expected Return'],
columns=['Minimum Risk Portfolio'])
```
## Compute the efficient frontier
Solve the QP parametrically to find the lowest risk portfolio for different expected returns.
```
# Create an expression representing the expected return for the portfolio
portfolio_return = delta @ x
target = m.addConstr(portfolio_return == minrisk_return, 'target')
# Solve for efficient frontier by varying target return
frontier = np.empty((2,0))
for r in np.linspace(delta.min(), delta.max(), 25):
    target[0].rhs = r
    m.optimize()
    frontier = np.append(frontier, [[sqrt(m.ObjVal)], [r]], axis=1)
```
## Plot results
Use the matplot library to plot the optimized solutions, along with the individual stocks:
```
import matplotlib.pyplot as plt
#plt.figure(figsize=(10,10))
fig, ax = plt.subplots(figsize=(10,8))
# Plot volatility versus expected return for individual stocks
ax.scatter(x=std, y=delta,
color='Blue', label='Individual Stocks')
for i, stock in enumerate(stocks):
    ax.annotate(stock, (std[i], delta[i]))
# Plot volatility versus expected return for minimum risk portfolio
ax.scatter(x=minrisk_volatility, y=minrisk_return, color='DarkGreen')
ax.annotate('Minimum\nRisk\nPortfolio', (minrisk_volatility, minrisk_return),
horizontalalignment='right')
# Plot efficient frontier
ax.plot(frontier[0], frontier[1], label='Efficient Frontier', color='DarkGreen')
# Format and display the final plot
ax.axis([frontier[0].min()*0.7, frontier[0].max()*1.3, delta.min()*1.2, delta.max()*1.2])
ax.set_xlabel('Volatility (standard deviation)')
ax.set_ylabel('Expected Return')
ax.legend()
ax.grid()
plt.show()
```
**Objectives**

* How to load a large file into memory using Pandas?
* How to take a representative sample from a population?
* Stratified sample
* Feature selection
```
%matplotlib inline
import pandas as pd
import numpy as np
import os, sys
import re
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.cross_validation import train_test_split, cross_val_score, StratifiedKFold  # moved to sklearn.model_selection in sklearn >= 0.18
from sklearn.feature_selection import SelectKBest, f_classif, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import Imputer, MinMaxScaler  # Imputer was later replaced by sklearn.impute.SimpleImputer
from sklearn.metrics import roc_auc_score
from sklearn.decomposition import TruncatedSVD
from scipy import stats
from itertools import combinations
import warnings
warnings.filterwarnings('ignore')
np.random.seed(1)
basepath = os.path.expanduser('~/Desktop/src/Loan_Default_Prediction/')
sys.path.append(os.path.join(basepath, 'src'))
from data import *
```
**Stratified Sample**
```
def get_stratified_sample(X, y, train_size, random_state=10):
    """
    Takes a feature set and target with a training-size fraction and a seed for reproducibility.
    Returns indices for the training and test sets.
    """
    itrain, itest = train_test_split(range(len(X)), stratify=y, train_size=train_size, random_state=random_state)
    return itrain, itest
# load files
chunksize = 10 ** 4
train_chunks = pd.read_table(os.path.join(basepath, 'data/raw/train_v2.csv'), \
chunksize=chunksize, \
sep=',', \
index_col='id'
)
train = pd.concat(train_chunks)
# create a binary variable based on the target
train['is_default'] = (train.loss > 0).astype(np.int)
itrain, itest = get_stratified_sample(train, train.is_default, 0.4)
train_sample = train.iloc[itrain]
del train
print('Shape of the sample: ', (train_sample.shape))
features = train_sample.columns.drop(['is_default', 'loss'])
```
**Histogram of features (training set)**
```
start_index = 760
end_index = 770

train_sample.iloc[:, start_index:end_index].hist(figsize=(16, 12), bins=50)
plt.savefig(os.path.join(basepath, 'reports/figures/feat_%s-%s'%(start_index, end_index)))
```
**Save the histograms to disk so that we can observe the distributions.**
```
itrain, itest = get_stratified_sample(train_sample, train_sample.is_default, train_size=0.7, random_state=11)
X_train = train_sample.iloc[itrain][features]
X_test = train_sample.iloc[itest][features]
y_train = train_sample.is_default.iloc[itrain]
y_test = train_sample.is_default.iloc[itest]
class GoldenFeature(BaseEstimator, TransformerMixin):
    def __init__(self):
        pass

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        X['f528-f527'] = X['f528'] - X['f527']
        X['f528-f274'] = X['f528'] - X['f274']
        return X
```
**Feature Selection**
```
class TreeBasedSelection(object):
    def __init__(self, estimator, target, n_features_to_select=None):
        self.estimator = estimator
        self.n_features_to_select = n_features_to_select
        self.target = target

    def fit(self, X, y=None):
        self.estimator.fit(X, self.target)
        self.importances = self.estimator.feature_importances_
        self.indices = np.argsort(self.importances)[::-1]
        return self

    def transform(self, X):
        return X[:, self.indices[:self.n_features_to_select]]
```
**Feature Interaction**
```
class FeatureInteraction(BaseEstimator, TransformerMixin):
    def __init__(self):
        pass

    @staticmethod
    def _combinations(features):
        return combinations(features, 2)

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        features = map(str, list(range(X.shape[1])))
        interactions = []
        for comb in self._combinations(features):
            feat_1, feat_2 = comb
            interactions.append(X[:, int(feat_2)] - X[:, int(feat_1)])
        return np.vstack(interactions).T


class VarSelect(BaseEstimator, TransformerMixin):
    def __init__(self, features, regexp_feature=r'.*-.*'):
        self.keys = [col for col in features if len(re.findall(regexp_feature, col)) > 0]

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return X[self.keys]
cv = StratifiedKFold(y_train, n_folds=3, random_state=11)

score = 0
index = 0

for tr, ts in cv:
    print('Fold: %d'%(index))
    index += 1

    Xtr = X_train.iloc[tr]
    Xte = X_train.iloc[ts]

    ytr = y_train.iloc[tr]
    yte = y_train.iloc[ts]

    pipeline = Pipeline([
        ('feature_union', FeatureUnion([
            ('golden_feature', GoldenFeature())
        ])),
        ('imputer', Imputer()),
        ('scaler', MinMaxScaler()),
        ('select', TreeBasedSelection(ExtraTreesClassifier(), ytr, n_features_to_select=30)),
        # ('select', TruncatedSVD(n_components=30)),
        ('union', FeatureUnion([
            ('feature_interaction', FeatureInteraction())
        ])),
        ('model', RandomForestClassifier(n_estimators=25, n_jobs=2, random_state=5))
    ])

    pipeline.fit(Xtr, ytr)
    preds = pipeline.predict_proba(Xte)[:, 1]
    score += roc_auc_score(yte, preds)

print('CV scores ', score/len(cv))

preds = pipeline.predict_proba(X_test)[:, 1]
print('AUC score on unseen examples %f'%(roc_auc_score(y_test, preds)))
class FeatureExtractor:
    def __init__(self, train, test):
        self.train = train
        self.test = test

    def extract(self):
        self.round_values()
        self.create_features()
        return self.get_train(), self.get_test()

    def round_values(self):
        self.train = np.around(self.train, decimals=1)
        self.test = np.around(self.test, decimals=1)

    def create_features(self):
        # feature based on f1
        self.train['f1_cat'] = (self.train['f1'] < 140).astype(np.int)
        self.test['f1_cat'] = (self.test['f1'] < 140).astype(np.int)

        # feature based on f9
        self.train['f9_cat'] = (self.train['f9'] < 140).astype(np.int)
        self.test['f9_cat'] = (self.test['f9'] < 140).astype(np.int)

        # feature based on f10
        self.train['f10_cat'] = (self.train['f10'] < 140).astype(np.int)
        self.test['f10_cat'] = (self.test['f10'] < 140).astype(np.int)

        # feature based on f14
        self.train['f14_cat'] = (self.train['f14'] == 0.0).astype(np.int)
        self.test['f14_cat'] = (self.test['f14'] == 0.0).astype(np.int)

        # feature based on f6
        self.train['f6_cat'] = (self.train['f6'] < 2e4).astype(np.int)
        self.test['f6_cat'] = (self.test['f6'] < 2e4).astype(np.int)

    def get_train(self):
        return self.train

    def get_test(self):
        return self.test
feat = FeatureExtractor(train[train.columns[:12]], test[test.columns[:12]])
train_sub, test_sub = feat.extract()
train_sub.to_csv(os.path.join(basepath, 'data/processed/train_sub.csv'), index=False)
test_sub.to_csv(os.path.join(basepath, 'data/processed/test_sub.csv'), index=False)
train[['loss']].to_csv(os.path.join(basepath, 'data/processed/target.csv'), index=False)
```
# Queries on the Titanic dataset
## First, I will make some changes to the database before extracting information
```
import pandas as pd
df_titanic = pd.read_csv("data.csv")
print(df_titanic.info())
df_titanic.head()
```
## Removing duplicate rows from the database
```
df_titanic.drop_duplicates(keep='last', inplace=True )
```
## Converting genders to uppercase, and the age type to int
```
df_titanic['Sex'] = df_titanic['Sex'].str.upper()
df_titanic['Age'] = df_titanic['Age'].astype(int)
print(df_titanic.dtypes)
df_titanic.head(2)
```
## Creating a new column from the sum of the 'Siblings' and 'Parents' columns
```
df_titanic['n_relatives'] = df_titanic['Siblings/Spouses Aboard'] + df_titanic['Parents/Children Aboard']
df_titanic.head()
```
## Creating a new column classifying age groups: child, adolescent, adult, and elderly
```
idade_min = df_titanic['Age'].min()
idade_max = df_titanic['Age'].max()
intervalos = [idade_min-1, 10, 18, 65, idade_max]
rotulos = ['criança', 'adolecente', 'adulto', 'idoso']
df_titanic['faixa_idade'] = pd.cut(x = df_titanic['Age'], bins= intervalos, labels= rotulos )
print(df_titanic.info())
df_titanic.head()
```
## Getting the record count, the number of unique values, and the most frequent value
```
df_titanic['faixa_idade'].describe()
```
## Getting the frequency of each category
```
pd.Categorical(df_titanic['faixa_idade']).describe()
df_titanic.head()
```
## Removing columns that are no longer needed
```
df_titanic.drop(columns= ['Siblings/Spouses Aboard', 'Parents/Children Aboard'], inplace=True)
df_titanic.head()
```
# Information extraction
## Questions to be answered:
* #### How many distinct classes (Pclass) are there?
* #### How many passengers survived, and how many did not?
* #### How many elderly passengers were on board, and how many of them survived?
* #### Among the survivors, what is the share of men and of women?
* #### What was the average fare among survivors and non-survivors?
```
#How many distinct classes (Pclass) are there?
classes = sorted(list(df_titanic['Pclass'].unique()))
print(f'Distinct classes = {classes}')
#How many passengers survived, and how many did not?
filtro_sobreviventes = df_titanic['Survived'] == 1
filtro_nao_sobreviventes = df_titanic['Survived'] == 0
qtde_sobreviventes = df_titanic.loc[filtro_sobreviventes].shape[0]
qtde_nao_sobreviventes = df_titanic.loc[filtro_nao_sobreviventes].shape[0]
print(f'Number of survivors = {qtde_sobreviventes}')
print(f'Number of non-survivors = {qtde_nao_sobreviventes}')
#How many elderly passengers were on board, and how many of them survived?
filtro_idosos = df_titanic['faixa_idade'] == 'idoso'
qtde_idosos = df_titanic.loc[filtro_idosos].shape[0]
qtde_idosos_sobreviventes = df_titanic.loc[(filtro_idosos) & (filtro_sobreviventes)].shape[0]
print(f'Number of elderly passengers on board = {qtde_idosos}')
print(f'Number of elderly passengers who survived = {qtde_idosos_sobreviventes}')
#Among the survivors, what is the share of men and of women?
filtro_homens = df_titanic['Sex'] == 'MALE'
filtro_mulheres = df_titanic['Sex'] == 'FEMALE'
qtde_homens_sobreviventes = df_titanic.loc[(filtro_homens) & (filtro_sobreviventes)].shape[0]
qtde_mulheres_sobreviventes = df_titanic.loc[(filtro_mulheres) & (filtro_sobreviventes)].shape[0]
#To compute the shares, we need the total number of survivors
qtde_total_sobreviventes = df_titanic.loc[filtro_sobreviventes].shape[0]
taxa_homens = qtde_homens_sobreviventes * 100 / qtde_total_sobreviventes
taxa_mulheres = qtde_mulheres_sobreviventes * 100 / qtde_total_sobreviventes
print('Share of male survivors = {:.1f} %'.format(taxa_homens))
print('Share of female survivors = {:.1f} %'.format(taxa_mulheres))
#What was the average fare among survivors and non-survivors?
valor_medio_sobreviventes = df_titanic.loc[filtro_sobreviventes, 'Fare'].mean()
valor_medio_nao_sobreviventes = df_titanic.loc[filtro_nao_sobreviventes, 'Fare'].mean()
print('Average fare among survivors = ${:.2f}'.format(valor_medio_sobreviventes))
print('Average fare among non-survivors = ${:.2f}'.format(valor_medio_nao_sobreviventes))
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
fig, axs = plt.subplots(ncols=2, figsize=(30,5))
sns.violinplot(x='Survived', y='Age', hue='Sex', data=df_titanic, ax=axs[0])
sns.pointplot(x='faixa_idade', y='Survived', hue='Sex', data=df_titanic, ax=axs[1])
```
# Web and Web Analytics
## Scraping an html page (loading and searching its contents)
* Local: saved in a file on your computer
* Remote: somewhere on the web
To fully understand this notebook, please open the `example_html.html` file in another tab, and open `example_html.html`'s source code in a third tab (or, even better, in your browser's view > developer tools). You will see in a minute what the exact address of that file is.
For scraping, we need a few different libraries, most notably BeautifulSoup. Let's first import these:
```
import os
from urllib.request import urlopen
from bs4 import BeautifulSoup
```
We can simply enter a web page as a string and open it. Afterwards, BeautifulSoup converts it into a BeautifulSoup object which has many interesting functions and attributes:
```
# website address
#page = 'http://www.uebs.ed.ac.uk'
# open the url and store the website
#website = urlopen(page)
# for now we use a local file (os.getcwd() gets the Current Working Directory, aka. the folder you're in)
file_url = "file:///"+os.getcwd()+"/example_html.html"
website_source_code = urlopen(file_url)
# in another tab: open the example_html.html file directly in your browser to see how it will look
# then in your browser, right click and select 'view source', or open developer tools to see the source
print("Paste this url into your browser to see the demo website (copy the whole thing, together with the file:// part):")
print(file_url)
# convert the website's content, for this a parser is needed. In this case a html parser
soup = BeautifulSoup(website_source_code, 'html.parser')
# here's the complete html of the page, but it's easier to read if you open its source using the url above
print(soup)
# .find_all retrieves all 'h1' tags:
h1Tags = soup.find_all('h1')
for h1 in h1Tags:
print('Complete tag code: ', h1)
print("Just the text in the tag: ", h1.text)
```
Note that `.find_all()` matches tag names, not attributes of tags: searching for `'title'` looks for `<title>` tags, not `title=` attributes:
```
titleTags = soup.find_all('title')
for title in titleTags:
print('Complete tag code: ', title)
print("Just the text in the tag: ", title.text)
# nothing will be printed. there are no tags <title> </title> there
```
## Understanding the html is all about finding the components you need:
* .find_all( ) will find all things that match the criteria, in a list
* .find( ) will find just the first item that matches the criteria
You can use it on the whole website, like `a_table = soup.find("table")`, or on an element you found before, like `rows = a_table.find_all("tr")`
You can search for types of tags, classes or ids
* `soup.find("h1")`,
* `soup.find(id="main_navigation")`,
* `soup.find(class_="warning_message")` (note the trailing underscore: plain `class` is a reserved word in Python)
But it is very frequent to fetch an element by its unique id:
```
middle_row = soup.find(id='middle_row')
print('Complete tag code: ', middle_row)
print("Just the text in the tag: ", middle_row.text)
```
## Find children:
When, like above, a tag contains some children (tags inside it), you can extract them into a list. An example would be the table row `<tr></tr>` above, which includes three table data cells `<td></td>`
`.findChildren()` will give you a list with all tags inside of a given tag
You can specify exactly which children you want, like with `.find()`. So you could use
* `.findChildren("tr")` or
* `.findChildren(class_="warning_message")`
```
middle_row = soup.find(id='middle_row')
cells_in_the_row = middle_row.findChildren()
for cell in cells_in_the_row:
print('Complete tag code: ', cell, "Just the text in the tag: ", cell.text)
```
You can dive deeper into certain tags, for example here you look for all divs from the (CSS) class called hipster:
```
class_elements = soup.find_all("div", {"class" : "hipster" })
for element in class_elements:
print('whole tag:\n', str(element), '\n')
print('Just the text: ', element.text)
```
Getting all the elements out of the table:
```
# list all tables, since we only have 1, use the first in the list at index 0
my_table = soup.find_all('table')[0]
# or just use: my_table = soup.find('table')
# loop the rows and keep the row number
row_num = 0
for row in my_table.find_all('tr'):
print("Row: "+str(row_num))
row_num = row_num+1
#loop the cells in the row
for cell in row.find_all('td'):
print("whole html:", str(cell)+" \tJust content: "+cell.text)
# if you'd like, try to change this code to use .findChildren( ) rather than .find_all( )
```
## Minitask: Now attempt to scrape something from a real online website:
Use the above code to make a list of all the degrees available in the business school of the University of Edinburgh.
* You will need to get the source of the page the list is on and feed it into BeautifulSoup (see code above). (Instead of our demo website file://..... use this url: https://www.ed.ac.uk/studying/undergraduate/degrees/index.php?action=view&code=12)
* Get the html component that holds all the degrees. Use developer tools to identify what type of component it is (hint: `ul` stands for "unordered list"). Does this component have a class or an id? How would you get a component when you know its id? (hint: proxy_degreeList )
* What type of tag are the actual names of degrees in? (div, a, p, or something else) Hint: what tag surrounds the name of the course?
* Grab children of that type from the component with all the names and, in a loop, extract only the text of each one. And print them.
I am posting the solution lower down, but do try to solve it by yourself first!
```
# copy-paste relevant parts of the code from above to start:
```
Only uncover the solutions once you have tried to complete the task:
CLICK HERE TO SEE HINT 1.
1. You will need to get the source of the page the list is on and feed it into BeautifulSoup (see code above). (Instead of our demo website file://..... use this url: https://www.ed.ac.uk/studying/undergraduate/degrees/index.php?action=view&code=12)
```
file_url = "https://www.ed.ac.uk/studying/undergraduate/degrees/index.php?action=view&code=12"
website_source_code = urlopen(file_url)
soup_degrees_website = BeautifulSoup(website_source_code, 'html.parser')
```
CLICK HERE TO SEE HINT 2.
2. Get the html component that holds all the degrees. Use developer tools to identify what type of component it is (hint: `ul` stands for "unordered list").
Does this component have a class or an id? How would you get a component when you know its id? (hint: proxy_degreeList )
```
degrees = soup_degrees_website.find(id='proxy_degreeList')
```
CLICK HERE TO SEE HINT 3.
3. What type of tag are the actual names of degrees in? (div, a, p, or something else) Hint: what tag surrounds the name of the course?
```
for list_item in degrees.findChildren("a"):
```
CLICK HERE TO SEE HINT 4.
4. Grab children of that type from the component with all names and in a loop, extract only the text of each of them. And print them.
```
print("Degree Name:", list_item.text)
```
## Scraping reviews using Selenium
Here is another example of how Selenium can be used to interact with websites making use of Ajax (Asynchronous JavaScript):
### Selenium is a chrome automation framework
It will enable us to tell chrome:
* go to page bbc.co.uk/weather
* "click the word 'next'"
* scroll down
Selenium will basically open a simplified version of Chrome, for a few seconds, use it and close it afterwards. You might even see it flash on your screen quickly. Then we will use beautiful soup to understand the code.
### BeautifulSoup is an HTML parsing framework
It will enable us to:
* copy the html of the tags eg. div, table
* extract text from these tags
## Getting selenium (don't skip this!) -- You need to download the chromedriver by yourself.
1. Find out which version of chrome you have; in chrome open the page: chrome://settings/help
2. Go to the list of chromedriver versions and find the folder for your version (eg. 87.0.4280.88) https://chromedriver.storage.googleapis.com/index.html
3. Go into the folder for your version and download the zip file for your operating system (most likely `chromedriver_mac64.zip` or `chromedriver_win32.zip`).
4. Unzip that file on your machine and put it in the folder where this notebook is. The unzipped file will be called `chromedriver` or `chromedriver.exe`.
```
!pip install selenium
import time
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException
# define method that will create a browser, suitable to your operating system
import sys
def get_a_browser():
if sys.platform.startswith('win32') or sys.platform.startswith('cygwin'):
return webdriver.Chrome() # windows
else:
return webdriver.Chrome('./chromedriver') # mac
```
**Important Note**: allowing your system to run `chromedriver`. This needs to be done just once.
If you are on a mac, you will need to allow your system to run the chromedriver. Run the cell below, and you will likely see a warning the first time; click 'Cancel' (don't click 'Delete').
After you see the warning, go into `Settings > Security&Privacy > General` and `"Allow Anyway"`.
On a pc the process will be simpler. When asked you'll need to allow computer to use the `chromedriver.exe` file.
## Task: let's try to scrape an interactive website
What will be the weather in Edinburgh in 2 days?
You need a web browser, pen and paper!
In this task you will be asked to do something by yourself (using your web browser, mouse and keyboard), and then you will see how you can program `Selenium` to do it for you.
**Use www.bbc.co.uk/weather to find out what time will be the sunrise in EDINBURGH next Sunday.**
Do it at least 3 times and observe all the steps you are taking. Make a very detailed list of all the steps, as if you had to describe them to someone over the phone without seeing their screen. See example below.
it will look a bit like this:
* ok, go to www.bbc.co.uk/weather and wait for it to load
* scroll down, do you see a link with words 'Edinburgh' on it? Click it.
* Wait a minute for it to load.
* ok, now scroll down and ...
When you are done with this exercise, we will try to instruct Selenium (the Chrome automation tool) to do it for us. Do you think you can use Chrome Dev tools to make your steps more specific? eg. Instead of saying "copy the text in that bold link next to the word Sunrise", try to say "copy the text from the html span item with the class `wr-c-astro-data__time`".
**SERIOUSLY: Take a few minutes to do this. It will make you learn more from the below code!**
Ok. And now let's get the python to do it for us.
```
browser = get_a_browser()
# the url we want to open
url = u'https://www.bbc.co.uk/weather'
# the browser will start and load the webpage
browser.get(url)
# we wait 1 second to let the page load everything
time.sleep(1)
# we search for the link whose text is 'Edinburgh'
# a link element can be clicked with the .click() function
browser.find_element(By.LINK_TEXT,"Edinburgh").click()
# sleep again, let everything load
time.sleep(1)
# we load the HTML body (the main page content without headers, footers, etc.)
body = browser.find_element(By.TAG_NAME,'body')
# we use seleniums' send_keys() function to physically scroll down where we want to click
body.send_keys(Keys.PAGE_DOWN)
# search for the link for next Sunday to open that day's forecast
try:
# the link text will look like "Sun 12Dec", so we match it by partial link text
next_button = browser.find_element(By.PARTIAL_LINK_TEXT,'Sun ')
next_button.click()
except NoSuchElementException: # if no such element exists, report the problem
print("something went wrong. There was no Sunday link.")
# load current view of the page into a soup
soup = BeautifulSoup(browser.page_source, 'html.parser')
"""
1. Find all the span elements of class 'wr-c-astro-data__time' below.
2. These values include today's sunrise and sunset time, and those of the following 13 days.
3. `browser.page_source` always gets the whole page, so we can only find all of them at once.
4. A not-smart but workable solution is to count how many days there are between today and
next Sunday, and then pick the right element of the sunrise_tag list.
"""
# The whole list
sunrise_tag = soup.find_all("span", {"class" : 'wr-c-astro-data__time'})
# How many days between today and the next sunday
diff = int(next_button.get_attribute('id')[-1])
print("Sunrise next Sunday: ", sunrise_tag[2*diff].text)
for i in range(8):
print(i, sunrise_tag[i].text)
```
## Using API to access Twitter
Tweepy is a library that interfaces with the Twitter API:
```
# !pip install tweepy
import tweepy
# tweepy is a python library for accessing twitter data via the twitter API.
# Below I am sharing my demo credentials; they will work for testing,
# but for your project you'll need to create your own credentials:
# - create a twitter app with your twitter account (one per group will do) https://developer.twitter.com/en/apps
# - follow the tutorial on tweepy to set it up https://tweepy.readthedocs.io/
Bearer_token = 'AAAAAAAAAAAAAAAAAAAAAMi%2BYAEAAAAA%2F2LLeju%2BgWlNK34g6PMT14scXzQ%3DHa0gE8PJoBnMVlnyoC3648USErcR6E86QadKgbKlBMIrKVNiYz' # please generate it from twitter developer by yourself and put it here
client = tweepy.Client(Bearer_token)
for tweet in tweepy.Paginator(client.search_recent_tweets, "University of Edinburgh",
max_results=100).flatten(limit=10):
print(tweet.text)
```
**More details, please refer to https://developer.twitter.com/en/docs/twitter-api/tweets/search/api-reference/get-tweets-search-recent**
## Scraping tweets with Selenium
In this exercise we will use selenium to copy-paste some tweets straight from the twitter website.
Be aware that there are terms and conditions about how you can use these copied data. If you abuse or overuse scraping, twitter might block or throttle (slow down) your access to their site (so don't scrape 1000s of tweets in 100 parallel selenium windows).
This time, we import selenium first:
```
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
# define method that will create a browser, suitable to your operating system
import sys
def get_a_browser():
if sys.platform.startswith('win32') or sys.platform.startswith('cygwin'):
return webdriver.Chrome() # windows
else:
return webdriver.Chrome('./chromedriver') # mac
```
The webdriver object can launch Internet Explorer, Firefox, or Chrome. Whichever you prefer, ChromeDriver is the most widely used and complete one. You can use it to open a twitter page:
```
# launch the browser
browser = get_a_browser()
# launch the Twitter search page
twitter_url = u'https://twitter.com/search?q='
# Add the search term
query = u'%40edinburgh'
# note: %40 is a code for @ symbol, so we're asking for the tweets with @edinburgh
# Create the url
url = twitter_url+query
# Get the page
browser.get(url)
```
Let's do this again and unleash the power of Selenium by using keyboard controls to manipulate a page:
```
browser = get_a_browser()
browser.get(url)
# Let the Tweets load
time.sleep(1)
# Find the body of the HTML page
body = browser.find_element(By.TAG_NAME,'body')
# Keep scrolling down using a simulation of the PAGE_DOWN button
for _ in range(5):
body.send_keys(Keys.PAGE_DOWN)
time.sleep(1)
# Get the retweet elements by their data-testid attribute (similar to Beautifulsoup's find_all())
retweets = browser.find_elements(By.XPATH,"//div[@data-testid='retweet']")
print("number of tweets scraped: ", len(retweets))
# Print Tweets
for retweet in retweets:
print("\n--NEXT TWEET---\n", retweet.text, "\n-----\n")
```
> This is one of the 100 recipes of the [IPython Cookbook](http://ipython-books.github.io/), the definitive guide to high-performance scientific computing and data science in Python.
# 12.1. Plotting the bifurcation diagram of a chaotic dynamical system
1. We import NumPy and matplotlib.
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
2. We define the logistic function by:
$$f_r(x) = rx(1-x)$$
Our discrete dynamical system is defined by the recursive application of the logistic function:
$$x_{n+1}^{(r)} = f_r(x_n^{(r)}) = rx_n^{(r)}(1-x_n^{(r)})$$
```
def logistic(r, x):
return r*x*(1-x)
```
3. We will simulate this system for 10000 values of $r$ linearly spaced between 2.5 and 4. Of course, we vectorize the simulation with NumPy.
```
n = 10000
r = np.linspace(2.5, 4.0, n)
```
4. We will simulate 1000 iterations of the logistic map, and we will keep the last 100 iterations to display the bifurcation diagram.
```
iterations = 1000
last = 100
```
5. We initialize our system with the same initial condition $x_0 = 10^{-5}$.
```
x = 1e-5 * np.ones(n)
```
6. We will also compute an approximation of the Lyapunov exponent, for every value of $r$. The Lyapunov exponent is defined by:
$$\lambda(r) = \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} \log\left| \frac{df_r}{dx}\left(x_i^{(r)}\right) \right|$$
```
lyapunov = np.zeros(n)
```
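As a side check, the same running sum can be sketched for a single value of $r$ in plain Python (the function name and defaults below are illustrative, not part of the recipe); it mirrors the vectorized update used in the next step:

```python
import math

def lyapunov_estimate(r, x0=1e-5, n=1000):
    """Estimate the Lyapunov exponent of the logistic map for a single r."""
    x, total = x0, 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        # derivative of f_r(x) = r*x*(1-x) is r - 2*r*x
        total += math.log(abs(r - 2 * r * x))
    return total / n

print(lyapunov_estimate(2.9))  # negative: orbits converge to a fixed point
print(lyapunov_estimate(3.9))  # positive: chaotic regime
```

A negative estimate indicates a stable attractor and a positive one indicates chaos, which is exactly the distinction used for coloring the plot later.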
7. Now, we simulate the system and we plot the bifurcation diagram. The simulation only involves the iterative evaluation of the function $f$ on our vector $x$. Then, to display the bifurcation diagram, we draw one pixel per point $x_n^{(r)}$ during the last 100 iterations.
```
plt.figure(figsize=(6,7));
plt.subplot(211);
for i in range(iterations):
x = logistic(r, x)
# We compute the partial sum of the Lyapunov exponent.
lyapunov += np.log(abs(r-2*r*x))
# We display the bifurcation diagram.
if i >= (iterations - last):
plt.plot(r, x, ',k', alpha=.04)
plt.xlim(2.5, 4);
plt.title("Bifurcation diagram");
# We display the Lyapunov exponent.
plt.subplot(212);
plt.plot(r[lyapunov<0], lyapunov[lyapunov<0] / iterations,
',k', alpha=.2);
plt.plot(r[lyapunov>=0], lyapunov[lyapunov>=0] / iterations,
',r', alpha=.5);
plt.xlim(2.5, 4);
plt.ylim(-2, 1);
plt.title("Lyapunov exponent");
plt.tight_layout();
```
The bifurcation diagram brings out the existence of a fixed point for $r<3$, then two and four equilibria... until a chaotic behavior when $r$ belongs to certain areas of the parameter space.
We observe an important property of the Lyapunov exponent: it is positive when the system is chaotic (in red here).
> You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).
> [IPython Cookbook](http://ipython-books.github.io/), by [Cyrille Rossant](http://cyrille.rossant.net), Packt Publishing, 2014 (500 pages).
```
from itertools import product
import networkx as nx
import pulp
import matplotlib.pyplot as plt
%matplotlib inline
```
## Pipeline
```
m = 5
S, D = parse_input(m)
print(D)
S
ilp_solver, s_star_to_color_edges = build_ilp_solver(S, m, D)
s_star = solve_ilp(ilp_solver, s_star_to_color_edges)
draw_graph(S, m, s_star_to_color_edges)
m = 16
S, D = parse_input(m)
print(D)
S
ilp_solver, s_star_to_color_edges = build_ilp_solver(S, m, D)
s_star = solve_ilp(ilp_solver, s_star_to_color_edges)
draw_graph(S, m, s_star_to_color_edges)
```
## Functions definitions
```
def parse_input(m):
with open('data/input_{}.txt'.format(m), 'r') as myfile:
data = [line.rstrip() for line in myfile]
S = [[int(i) for i in item[:m]] for item in data]
D = [int(item[m+2]) for item in data]
return S, D
def build_ilp_solver(S, m, D):
ilp_solver = pulp.LpProblem("ilp_solver", pulp.LpMinimize)
s_star_to_color_edges = pulp.LpVariable.dicts("s_star_to_color_edges",
(('s_star_{}'.format(i), 'i_{}_c_{}'.format(i, c))
for i, c in product(range(m), range(10))),
cat="Binary")
ilp_solver += 0
for i in range(m):
ilp_solver += pulp.lpSum(s_star_to_color_edges['s_star_{}'.format(i), 'i_{}_c_{}'.format(i, c)]
for c in range(10)) == 1
for i, s in enumerate(S):
ilp_solver += pulp.lpSum(s_star_to_color_edges['s_star_{}'.format(i), 'i_{}_c_{}'.format(i, s[i])]
for i in range(m)) == D[i]
return ilp_solver, s_star_to_color_edges
def solve_ilp(ilp_solver, s_star_to_color_edges):
ilp_solver.solve()
assert(pulp.LpStatus[ilp_solver.status]=='Optimal')
s_star_unsorted = {i: c
for i,c in product(range(m), range(10))
if pulp.value(s_star_to_color_edges['s_star_{}'.format(i), 'i_{}_c_{}'.format(i, c)]) == 1.0}
s_star = [s_star_unsorted[i] for i in range(m)]
print(''.join([str(i) for i in s_star]))
return s_star
def draw_graph(S, m, s_star_to_color_edges):
G = nx.Graph()
G.add_nodes_from([('s_star_{}'.format(i), {'label':i, 'pos':[1, (i+1)*100]})
for i in range(m)])
G.add_nodes_from([('i_{}_c_{}'.format(i, c), {'label':i, 'pos':[2, (i+1)*100 + c*10]})
for i, c in product(range(m), range(10))])
loc_dif = (100*m)/len(S)
G.add_nodes_from([('s_{}'.format(i), {'label':i, 'pos':[3, int((i+1)*loc_dif)]})
for i, s in enumerate(S)])
G.add_edges_from([('i_{}_c_{}'.format(i, c), 's_star_{}'.format(i))
for i, c in product(range(m), range(10))])
G.add_edges_from([('i_{}_c_{}'.format(i, c), 's_{}'.format(j))
for i, c in product(range(m), range(10))
for j in range(len(S))
if S[j][i] == c])
pos = nx.get_node_attributes(G, 'pos')
labels = nx.get_node_attributes(G, 'label')
plt.figure()
nx.draw(G, pos, node_size=50, node_color='r', labels=labels, font_size=5, style='dashed')
G_sol = nx.Graph()
G_sol.add_nodes_from(G.nodes())
G_sol.add_edges_from([('i_{}_c_{}'.format(i, c), 's_star_{}'.format(i))
for i,c in product(range(m), range(10))
if pulp.value(s_star_to_color_edges['s_star_{}'.format(i), 'i_{}_c_{}'.format(i, c)]) == 1.0])
G_sol.add_edges_from([('i_{}_c_{}'.format(i, c), 's_{}'.format(i))
for i,c in product(range(m), range(10))
if pulp.value(s_star_to_color_edges['s_star_{}'.format(i), 'i_{}_c_{}'.format(i, c)]) == 1.0])
nx.draw(G_sol, pos, node_size=50, node_color='r', labels=labels, font_size=5, style='dashed', width=3.0, edge_color='m')
```
<small><small><i>
All the IPython Notebooks in this lecture series by Dr. Milan Parmar are available @ **[GitHub](https://github.com/milaan9/02_Python_Datatypes/tree/main/005_Python_Dictionary_Methods)**
</i></small></small>
# Python Dictionary Comprehension
In this tutorial, we will learn about Python dictionary comprehension and how to use it with the help of examples.
Dictionaries are data types in Python which allows us to store data in **key:value pair**. For example:
**Syntax**:
```python
my_dict = {1: 'apple', 2: 'ball'}
```
To learn more about them visit: **[Python Dictionary](https://github.com/milaan9/02_Python_Datatypes/blob/main/005_Python_Dictionary.ipynb)**
## What is Dictionary Comprehension in Python?
Dictionary comprehension is an elegant and concise way to create dictionaries.
```
# Example 1: Dictionary Comprehension
square_dict = dict()
for num in range(1, 11):
square_dict[num] = num*num
print(square_dict)
```
Now, let's create the dictionary in the above program using dictionary comprehension.
```
# dictionary comprehension example
square_dict = {num: num*num for num in range(1, 11)}
print(square_dict)
```
In both programs, we have created a dictionary **`square_dict`** with **number-square key/value pair**.
However, using dictionary comprehension allowed us to **create a dictionary in a single line**.
## Using Dictionary Comprehension
From the above example, we can see that dictionary comprehension should be written in a specific pattern.
**Syntax**:
```python
dictionary = {key: value for vars in iterable}
```
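As one more hedged illustration of this pattern (the names here are made up), two lists can be combined into a dictionary with `zip`:

```python
keys = [1, 2, 3]
values = ['apple', 'ball', 'cat']

# one key: value pair per element of the zipped iterable
mapping = {k: v for k, v in zip(keys, values)}
print(mapping)  # {1: 'apple', 2: 'ball', 3: 'cat'}
```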
Let's compare this syntax with dictionary comprehension from the above example.
<div>
<img src="img/dictcomprehension.png" width="500"/>
</div>
Now, let's see how we can use dictionary comprehension using data from another dictionary.
```
# Example 3: How to use Dictionary Comprehension
#item price in dollars
old_price = {'milk': 2.03, 'bread': 2.6, 'butter': 2.6}
dollar_to_pound = 0.76
new_price = {item: value*dollar_to_pound for (item, value) in old_price.items()}
print(new_price)
```
**Explanation:**
Here, we can see that we retrieved the item prices in dollars and converted them to pounds. Using dictionary comprehension makes this task much simpler and shorter.
## Conditionals in Dictionary Comprehension
We can further customize dictionary comprehension by adding conditions to it. Let's look at an example.
```
# Example 4: If Conditional Dictionary Comprehension
original_dict = {'Allan': 36, 'Bill': 48, 'Cory': 57, 'Dave': 33}
even_dict = {k: v for (k, v) in original_dict.items() if v % 2 == 0}
print(even_dict)
```
**Explanation:**
As we can see, only the items with even value have been added, because of the **`if`** clause in the dictionary comprehension.
```
# Example 5: Multiple if Conditional Dictionary Comprehension
original_dict = {'Allan': 36, 'Bill': 48, 'Cory': 57, 'Dave': 33}
new_dict = {k: v for (k, v) in original_dict.items() if v % 2 != 0 if v < 40}
print(new_dict)
```
**Explanation:**
In this case, only the items with an odd value of less than 40 have been added to the new dictionary.
It is because of the multiple **`if`** clauses in the dictionary comprehension. They are equivalent to **`and`** operation where both conditions have to be true.
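If it helps, that equivalence can be checked directly; the sketch below builds the same dictionary with an explicit `and` and compares the two:

```python
original_dict = {'Allan': 36, 'Bill': 48, 'Cory': 57, 'Dave': 33}

# two chained if clauses...
multi_if = {k: v for (k, v) in original_dict.items() if v % 2 != 0 if v < 40}
# ...behave the same as a single if with and
with_and = {k: v for (k, v) in original_dict.items() if v % 2 != 0 and v < 40}

print(multi_if == with_and)  # True
print(multi_if)              # {'Dave': 33}
```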
```
# Example 6: if-else Conditional Dictionary Comprehension
original_dict = {'Allan': 36, 'Bill': 48, 'Cory': 57, 'Dave': 33}
new_dict_1 = {k: ('old' if v > 40 else 'young')
for (k, v) in original_dict.items()}
print(new_dict_1)
```
**Explanation:**
In this case, a new dictionary is created via dictionary comprehension.
The items with a value of 40 or more have the value of 'old' while others have the value of 'young'.
## Nested Dictionary Comprehension
Just as a dictionary is a collection of **key-value** pairs, a Python nested dictionary is a dictionary whose values are themselves dictionaries. These can be represented as shown below.
**Syntax:**
```python
nested_dict = { 'dict1': {'key1': 'value1'}, 'dict2': {'key2': 'value2'}}
```
Here, nested_dict is a nested dictionary which has two dictionaries with keys **`dict1`** and **`dict2`**. The values of those two keys (**`dict1`**, **`dict2`**) are in-turn dictionaries.
In Python, nested dictionaries can be created by placing comma-separated dictionaries inside curly brackets. For example:
```
# Example 7: creating a nested dictionary
nes_dict = {'dict1': {'Color': 'Red', 'Shape': 'Square'}, 'dict2': {'Color': 'Pink', 'Shape': 'Round'}}
print (nes_dict)
```
<div>
<img src="img/nd0.png" width="300"/>
</div>
```
# Example 8: Nested Dictionary with Two Dictionary Comprehensions
dictionary = {k1: {k2: k1 * k2 for k2 in range(1, 6)} for k1 in range(2, 5)}
print(dictionary)
```
**Explanation:**
As you can see, we have constructed a multiplication table in a nested dictionary, for numbers from 2 to 4.
Whenever nested dictionary comprehension is used, Python first starts from the outer loop and then goes to the inner one.
So, the above code would be equivalent to:
```
# Example 8:
dictionary = dict()
for k1 in range(2, 5):
dictionary[k1] = {k2: k1*k2 for k2 in range(1, 6)}
print(dictionary)
```
It can further be unfolded:
```
# Example 8:
dictionary = dict()
for k1 in range(2, 5):
dictionary[k1] = dict()
for k2 in range(1, 6):
dictionary[k1][k2] = k1*k2
print(dictionary)
```
All these three programs give us the same output.
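To convince yourself of that, you can compare the comprehension with the fully unrolled loops in code (a quick sketch, using the 2-to-4 multiplication table from the nested-comprehension example):

```python
comp = {k1: {k2: k1 * k2 for k2 in range(1, 6)} for k1 in range(2, 5)}

loops = {}
for k1 in range(2, 5):
    loops[k1] = {}
    for k2 in range(1, 6):
        loops[k1][k2] = k1 * k2

print(comp == loops)  # True
print(comp[3][4])     # 12
```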
### How to Add a Dictionary to a Nested Dictionary
Another way to add a dictionary to an existing nested dictionary is shown below.
```
# Example 9: adding a dictionary to nested dictionary
nes_dict = {'dict1': {'Color': 'Red', 'Shape': 'Square'}, 'dict2': {'Color': 'Pink', 'Shape': 'Round'}}
nes_dict['dict3'] = {'Color': 'Blue', 'Shape': 'Rectangle'}
print(nes_dict)
```
### How to Access Values of a Nested Dictionary
Values of a nested dictionary can simply be accessed using their respective keys.
```
# Example 10: accessing a value in nested dictionary
nes_dict = {'dict1': {'Color': 'Red', 'Shape': 'Square'}, 'dict2': {'Color': 'Pink', 'Shape': 'Round'}}
print(nes_dict['dict1']['Color'])
print(nes_dict['dict2']['Shape'])
```
### How to Delete Elements from a Nested Dictionary
Key-value pairs in Python nested dictionaries can be deleted using the **`del`** statement. It can be used to delete either an entire inner dictionary or a particular key-value pair from a nested dictionary.
```
# Example 11: deleting an entire inner dictionary
nes_dict = {'dict1': {'Color': 'Red', 'Shape': 'Square'}, 'dict2': {'Color': 'Pink', 'Shape': 'Round'}}
del nes_dict['dict1']
print(nes_dict)
# deleting a key-value pair from a nested dictionary
nes_dict = {'dict1': {'Color': 'Red', 'Shape': 'Square'}, 'dict2': {'Color': 'Pink', 'Shape': 'Round'}}
del nes_dict['dict1']
del nes_dict['dict2']['Shape']
print(nes_dict)
```
## Advantages of Using Dictionary Comprehension
As we can see, dictionary comprehension greatly shortens dictionary initialization and makes the code more Pythonic.
Using dictionary comprehension reduces the number of lines of code while keeping the logic intact.
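For example, a one-line comprehension can replace a three-line loop (a generic illustration, not tied to the examples above):

```python
# Loop version: three lines to build a dict of squares
squares = {}
for n in range(1, 6):
    squares[n] = n * n

# Equivalent one-line comprehension
squares_comp = {n: n * n for n in range(1, 6)}

print(squares_comp)  # {1: 1, 2: 4, 3: 9, 4: 16, 5: 25}
```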
## Warnings on Using Dictionary Comprehension
Even though dictionary comprehensions are great for writing elegant code that is easy to read, they are not always the right choice.
We must be careful when using them:
* They can sometimes make the code run slower and consume more memory.
* They can also decrease the readability of the code.
We must not try to cram complex logic or many nested comprehensions into a single line just for the sake of brevity. In such cases, it is better to use alternatives such as plain loops.
# <img src="https://img.icons8.com/dusk/64/000000/code.png" style="height:50px;display:inline"> Soft-IntroVAE Code Tutorial - Image Datasets
---
Tal Daniel
<center>
<a href="https://colab.research.google.com/github/taldatech/soft-intro-vae-pytorch/blob/main/soft_intro_vae_tutorial/soft_intro_vae_image_code_tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
</center>
* Paper: [**Soft-IntroVAE: Analyzing and Improving the Introspective Variational Autoencoder**, Tal Daniel and Aviv Tamar](https://arxiv.org/abs/2012.13253)
* GitHub: <a href="https://github.com/taldatech/soft-intro-vae-pytorch">soft-intro-vae-pytorch</a>
### <img src="https://img.icons8.com/color/96/000000/loading.png" style="height:50px;display:inline"> Running Instructions
---
* This Jupyter Notebook can be opened locally with Anaconda, or online via Google Colab.
* To run online, go to https://colab.research.google.com/ and drag-and-drop the `soft_intro_vae_image_code_tutorial.ipynb` file.
* On Colab, note the "directory" icon on the left; figures and checkpoints are saved in that directory.
* To run the training on the image dataset, it is better to have a GPU. On Google Colab, select `Runtime->Change runtime type->GPU`.
### <img src="https://img.icons8.com/bubbles/50/000000/checklist.png" style="height:50px;display:inline"> Agenda
---
* [Variational Autoencoders (VAEs)](#-Variational-Autoencoders-(VAEs))
* [Soft-IntroVAE Objectives](#-Soft-IntroVAE-Objectives)
* [Image Generation Experiments](#-Image-Generation-Experiments)
* [Image Generation Experiments - Architectures](#Image-Generation-Experiments---Architectures)
* [Image Generation Experiments - Algorithm and Train Function](#Image-Generation-Experiments---Algorithm-and-Train-Function)
* [More Tutorials](#-But-Wait,-There-is-More...)
* [Credits](#-Credits)
### <img src="https://img.icons8.com/plasticine/100/000000/epsilon.png" style="height:50px;display:inline"> Variational Autoencoders (VAEs)
---
Unlike regular autoencoders, a Variational Autoencoder (VAE, <a href="https://arxiv.org/abs/1312.6114">Kingma & Welling, 2014</a>) maps the input to a distribution.
In a VAE we approximate $p_{\theta}(z|X)$ using a method called **Variational Inference (VI)** (hence the name **Variational** Autoencoder).
**Variational Inference (VI)** - we solve an optimization problem in which we model $p_{\theta}(z|X)$ with a simpler distribution, $q_{\phi}(z|X)$, that is easier to evaluate (e.g., a Gaussian), and **minimize the difference between these distributions using the KL-divergence**.
**Evidence Lower BOund (ELBO)** - the optimization problem is to make the simpler distribution, $q_{\phi}(z|X)$, as close as possible to $p_{\theta}(z|X)$. Using the KL-divergence, we get the evidence lower bound: $$ \log p_{\theta}(X) \geq \mathbb{E}_{q_{\phi}(z|X)}[\log p_{\theta}(X|z)] - D_{KL}[q_{\phi}(z|X) || p(z)] = ELBO(X; \theta, \phi).$$
$p(z)$ is a prior, independent of the model. In VAE, a common choice is the standard normal prior $$p(z) = \mathcal{N}(0, I).$$
$q_{\phi}(z|X)$ is also called the **encoder** and $p_{\theta}(X|z)$ the **decoder**.
In practice, the ELBO is decomposed to the **reconstruction error** and the KL-divergence, which has a closed-form solution in the Gaussian case.
The optimization is made possible thanks to the **reparameterization trick**, as it allows to backpropagate the gradients through the stochastic latent variable: $$ z \sim q_{\phi}(z|X) = \mathcal{N}(z; \mu, \sigma^2 I) $$ $$ \to z = \mu + \sigma \odot \epsilon, \text{where } \epsilon \sim \mathcal{N}(0, I) $$
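As a sanity check, the closed-form Gaussian KL term and the reparameterization trick can be sketched in plain Python (scalar case, with a fixed noise sample for reproducibility; these are toy versions for intuition, not the tutorial's implementation):

```python
import math

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, 1) ) = -0.5 * (1 + logvar - mu^2 - exp(logvar))
    return -0.5 * (1 + logvar - mu ** 2 - math.exp(logvar))

def reparameterize(mu, logvar, eps):
    # z = mu + sigma * eps, with sigma = exp(0.5 * logvar)
    return mu + math.exp(0.5 * logvar) * eps

print(kl_to_standard_normal(0.0, 0.0))  # 0.0  (posterior equals the prior)
print(kl_to_standard_normal(1.0, 0.0))  # 0.5
print(reparameterize(2.0, 0.0, 0.5))    # 2.5
```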
<img src="https://raw.githubusercontent.com/taldatech/soft-intro-vae-web/main/assets/vae_lilian_weng_lilianweng.github.io.png" style="height:300px">
* <a href="https://lilianweng.github.io/lil-log/2018/08/12/from-autoencoder-to-beta-vae.html#beta-vae">Image by Lilian Weng</a>
### <img src="https://img.icons8.com/fluent/96/000000/rubiks-cube.png" style="height:50px;display:inline"> Soft-IntroVAE Objectives
---
In Soft-IntroVAE, the encoder and decoder are trained to maximize the ELBO for real data (as in standard VAEs). In addition, the exponential of the ELBO (expELBO) is used to "push away" fake data, generated by the decoder, from the latent space learned by the encoder, while the decoder tries to pull its generated data back toward the latent space, improving over time.
Compared to GANs, the discriminatory signal comes from the encoder (the ELBO acts as an energy function); thus, the VAE is trained in an introspective manner (no need for an additional discriminator).
The objective of Soft-IntroVAE is written as follows:
$$ \mathcal{L}_{E_{\phi}}(x,z) = s \cdot(\beta_{rec}\mathcal{L}_r(x) +\beta_{kl}KL(x)) + \frac{1}{2}\exp(-2s\cdot (\beta_{rec}\mathcal{L}_r(D_{\theta}(z)) + \beta_{neg}KL(D_{\theta}(z)))), $$
$$ \mathcal{L}_{D_{\theta}}(x,z) = s \cdot \beta_{rec}\mathcal{L}_r(x) +s \cdot(\beta_{kl}KL(D_{\theta}(z)) +\gamma_r \cdot \beta_{rec}\mathcal{L}_r(D_{\theta}(z))), $$
where $\mathcal{L}_r(x) = - \mathbb{E}_{q_{\phi}(z\mid x)}\left[\log p_{\theta}(x \mid z)\right]$ denotes the reconstruction error, $s$ is a scaling constant which is set to the inverse of the input dimensions, and $\beta_{rec}, \beta_{kl}, \beta_{neg}$ and $\gamma_r$ are hyperparameters.
Note that in all our experiments the coefficient of the fake-data reconstruction error is $\gamma_r = 10^{-8}$ (in the bootstrap version it can be set to 1), as setting it to higher values may hold back the decoder and slow down convergence (since at the beginning, the generated data is of very low quality). This hyperparameter can be annealed to 1 over time, but for simplicity, we don't do that.
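For intuition, the scaling constant $s$ and the expELBO weight can be computed directly; a small sketch assuming CIFAR-10 sized inputs and the hyperparameter values used later in this tutorial:

```python
import math

# s is the inverse of the input dimensionality (channels * height * width)
ch, h, w = 3, 32, 32
s = 1.0 / (ch * h * w)  # 1/3072 for 3x32x32 inputs

# expELBO weight for a fake sample: exp(-2 s (beta_rec * L_r + beta_neg * KL))
def exp_elbo(rec_loss, kl, beta_rec=1.0, beta_neg=256.0):
    return math.exp(-2 * s * (beta_rec * rec_loss + beta_neg * kl))

# A fake sample with high KL contributes almost nothing to the encoder loss,
# which is what "pushing away" fake data means in practice.
print(exp_elbo(rec_loss=100.0, kl=50.0) < exp_elbo(rec_loss=100.0, kl=5.0))  # True
```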
<img src="https://raw.githubusercontent.com/taldatech/soft-intro-vae-web/main/assets/sintrovae_flow.PNG" style="height:350px">
## <img src="https://img.icons8.com/office/80/000000/edit-image.png" style="height:50px;display:inline"> Image Generation Experiments
---
In this part, we will demonstrate Soft-IntroVAE on the CIFAR-10 image dataset.
**For this part, it is highly recommended to enable a GPU** (instructions at the top if you are running on Google Colab).
```
# imports for the tutorial
import os
import time
import numpy as np
import random
import matplotlib.pyplot as plt
%matplotlib inline
# pytorch
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
import torch.nn.functional as F
from torchvision.utils import make_grid
from torchvision.datasets import CIFAR10, SVHN  # SVHN is referenced by the train function below
from torchvision import transforms
import torchvision.utils as vutils
```
### <img src="https://img.icons8.com/dusk/64/000000/venn-diagram.png" style="height:50px;display:inline">Image Generation Experiments - Architectures
---
This part defines the building blocks of the Soft-IntroVAE.
```
"""
Helper Functions
"""
def calc_kl(logvar, mu, mu_o=0.0, logvar_o=0.0, reduce='sum'):
"""
Calculate kl-divergence
:param logvar: log-variance from the encoder
:param mu: mean from the encoder
:param mu_o: negative mean for outliers (hyper-parameter)
:param logvar_o: negative log-variance for outliers (hyper-parameter)
:param reduce: type of reduce: 'sum', 'none'
:return: kld
"""
if not isinstance(mu_o, torch.Tensor):
mu_o = torch.tensor(mu_o).to(mu.device)
if not isinstance(logvar_o, torch.Tensor):
logvar_o = torch.tensor(logvar_o).to(mu.device)
kl = -0.5 * (1 + logvar - logvar_o - logvar.exp() / torch.exp(logvar_o) - (mu - mu_o).pow(2) / torch.exp(
logvar_o)).sum(1)
if reduce == 'sum':
kl = torch.sum(kl)
elif reduce == 'mean':
kl = torch.mean(kl)
return kl
def reparameterize(mu, logvar):
"""
This function applies the reparameterization trick:
z = mu(X) + sigma(X)^0.5 * epsilon, where epsilon ~ N(0,I)
:param mu: mean of x
:param logvar: log variance of x
:return z: the sampled latent variable
"""
device = mu.device
std = torch.exp(0.5 * logvar)
eps = torch.randn_like(std).to(device)
return mu + eps * std
def calc_reconstruction_loss(x, recon_x, loss_type='mse', reduction='sum'):
"""
:param x: original inputs
:param recon_x: reconstruction of the VAE's input
:param loss_type: "mse", "l1", "bce"
:param reduction: "sum", "mean", "none"
:return: recon_loss
"""
if reduction not in ['sum', 'mean', 'none']:
raise NotImplementedError
recon_x = recon_x.view(recon_x.size(0), -1)
x = x.view(x.size(0), -1)
if loss_type == 'mse':
recon_error = F.mse_loss(recon_x, x, reduction='none')
recon_error = recon_error.sum(1)
if reduction == 'sum':
recon_error = recon_error.sum()
elif reduction == 'mean':
recon_error = recon_error.mean()
elif loss_type == 'l1':
recon_error = F.l1_loss(recon_x, x, reduction=reduction)
elif loss_type == 'bce':
recon_error = F.binary_cross_entropy(recon_x, x, reduction=reduction)
else:
raise NotImplementedError
return recon_error
def load_model(model, pretrained, device):
weights = torch.load(pretrained, map_location=device)
model.load_state_dict(weights['model'], strict=False)
def save_checkpoint(model, epoch, iteration, prefix=""):
model_out_path = "./saves/" + prefix + "model_epoch_{}_iter_{}.pth".format(epoch, iteration)
state = {"epoch": epoch, "model": model.state_dict()}
if not os.path.exists("./saves/"):
os.makedirs("./saves/")
torch.save(state, model_out_path)
print("model checkpoint saved @ {}".format(model_out_path))
"""
Models
"""
class _Residual_Block(nn.Module):
"""
https://github.com/hhb072/IntroVAE
Difference: self.bn2 on output and not on (output + identity)
"""
def __init__(self, inc=64, outc=64, groups=1, scale=1.0):
super(_Residual_Block, self).__init__()
midc = int(outc * scale)
if inc != outc:  # compare values, not identity
self.conv_expand = nn.Conv2d(in_channels=inc, out_channels=outc, kernel_size=1, stride=1, padding=0,
groups=1, bias=False)
else:
self.conv_expand = None
self.conv1 = nn.Conv2d(in_channels=inc, out_channels=midc, kernel_size=3, stride=1, padding=1, groups=groups,
bias=False)
self.bn1 = nn.BatchNorm2d(midc)
self.relu1 = nn.LeakyReLU(0.2, inplace=True)
self.conv2 = nn.Conv2d(in_channels=midc, out_channels=outc, kernel_size=3, stride=1, padding=1, groups=groups,
bias=False)
self.bn2 = nn.BatchNorm2d(outc)
self.relu2 = nn.LeakyReLU(0.2, inplace=True)
def forward(self, x):
if self.conv_expand is not None:
identity_data = self.conv_expand(x)
else:
identity_data = x
output = self.relu1(self.bn1(self.conv1(x)))
output = self.conv2(output)
output = self.bn2(output)
output = self.relu2(torch.add(output, identity_data))
# output = self.relu2(self.bn2(torch.add(output, identity_data)))
return output
class Encoder(nn.Module):
def __init__(self, cdim=3, zdim=512, channels=(64, 128, 256, 512, 512, 512), image_size=256, conditional=False):
super(Encoder, self).__init__()
assert (2 ** len(channels)) * 4 == image_size
self.zdim = zdim
self.conditional = conditional
self.cond_dim = 10
cc = channels[0]
self.main = nn.Sequential(
nn.Conv2d(cdim, cc, 5, 1, 2, bias=False),
nn.BatchNorm2d(cc),
nn.LeakyReLU(0.2),
nn.AvgPool2d(2),
)
sz = image_size // 2
for ch in channels[1:]:
self.main.add_module('res_in_{}'.format(sz), _Residual_Block(cc, ch, scale=1.0))
self.main.add_module('down_to_{}'.format(sz // 2), nn.AvgPool2d(2))
cc, sz = ch, sz // 2
self.main.add_module('res_in_{}'.format(sz), _Residual_Block(cc, cc, scale=1.0))
if self.conditional:
self.fc = nn.Linear(cc * 4 * 4 + self.cond_dim, 2 * zdim)
else:
self.fc = nn.Linear(cc * 4 * 4, 2 * zdim)
def forward(self, x, o_cond=None):
y = self.main(x).view(x.size(0), -1)
if self.conditional and o_cond is not None:
y = torch.cat([y, o_cond], dim=1)
y = self.fc(y)
mu, logvar = y.chunk(2, dim=1)
return mu, logvar
class Decoder(nn.Module):
def __init__(self, cdim=3, zdim=512, channels=(64, 128, 256, 512, 512, 512), image_size=256, conditional=False):
super(Decoder, self).__init__()
assert (2 ** len(channels)) * 4 == image_size
self.conditional = conditional
cc = channels[-1]
self.cond_dim = 10
if self.conditional:
self.fc = nn.Sequential(
nn.Linear(zdim + self.cond_dim, cc * 4 * 4),
nn.ReLU(True),
)
else:
self.fc = nn.Sequential(
nn.Linear(zdim, cc * 4 * 4),
nn.ReLU(True),
)
sz = 4
self.main = nn.Sequential()
for ch in channels[::-1]:
self.main.add_module('res_in_{}'.format(sz), _Residual_Block(cc, ch, scale=1.0))
self.main.add_module('up_to_{}'.format(sz * 2), nn.Upsample(scale_factor=2, mode='nearest'))
cc, sz = ch, sz * 2
self.main.add_module('res_in_{}'.format(sz), _Residual_Block(cc, cc, scale=1.0))
self.main.add_module('predict', nn.Conv2d(cc, cdim, 5, 1, 2))
def forward(self, z, y_cond=None):
z = z.view(z.size(0), -1)
if self.conditional and y_cond is not None:
y_cond = y_cond.view(y_cond.size(0), -1)
z = torch.cat([z, y_cond], dim=1)
y = self.fc(z)
y = y.view(z.size(0), -1, 4, 4)
y = self.main(y)
return y
class SoftIntroVAE(nn.Module):
def __init__(self, cdim=3, zdim=512, channels=(64, 128, 256, 512, 512, 512), image_size=256, conditional=False):
super(SoftIntroVAE, self).__init__()
self.zdim = zdim
self.conditional = conditional
self.encoder = Encoder(cdim, zdim, channels, image_size, conditional=conditional)
self.decoder = Decoder(cdim, zdim, channels, image_size, conditional=conditional)
def forward(self, x, o_cond=None, deterministic=False):
if self.conditional and o_cond is not None:
mu, logvar = self.encode(x, o_cond=o_cond)
if deterministic:
z = mu
else:
z = reparameterize(mu, logvar)
y = self.decode(z, y_cond=o_cond)
return mu, logvar, z, y
else:
mu, logvar = self.encode(x)
if deterministic:
z = mu
else:
z = reparameterize(mu, logvar)
y = self.decode(z)
return mu, logvar, z, y
def sample(self, z, y_cond=None):
y = self.decode(z, y_cond=y_cond)
return y
def sample_with_noise(self, num_samples=1, device=torch.device("cpu"), y_cond=None):
z = torch.randn(num_samples, self.zdim).to(device)  # note: the attribute is zdim, not z_dim
return self.decode(z, y_cond=y_cond)
def encode(self, x, o_cond=None):
if self.conditional and o_cond is not None:
mu, logvar = self.encoder(x, o_cond=o_cond)
else:
mu, logvar = self.encoder(x)
return mu, logvar
def decode(self, z, y_cond=None):
if self.conditional and y_cond is not None:
y = self.decoder(z, y_cond=y_cond)
else:
y = self.decoder(z)
return y
```
### <img src="https://img.icons8.com/bubbles/50/000000/loading-bar.png" style="height:50px;display:inline">Image Generation Experiments - Algorithm and Train Function
---
This part defines the training algorithm of Soft-IntroVAE.
```
def train_soft_intro_vae(dataset='cifar10', z_dim=128, lr_e=2e-4, lr_d=2e-4, batch_size=128, num_workers=4, start_epoch=0,
num_epochs=250, num_vae=0, save_interval=5000, recon_loss_type="mse",
beta_kl=1.0, beta_rec=1.0, beta_neg=1.0, test_iter=1000, seed=-1, pretrained=None,
device=torch.device("cpu"), num_row=8, gamma_r=1e-8):
if seed != -1:
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
print("random seed: ", seed)
# --------------build models -------------------------
if dataset == 'cifar10':
image_size = 32
channels = [64, 128, 256]
train_set = CIFAR10(root='./cifar10_ds', train=True, download=True, transform=transforms.ToTensor())
ch = 3
elif dataset == 'svhn':
image_size = 32
channels = [64, 128, 256]
train_set = SVHN(root='./svhn', split='train', transform=transforms.ToTensor(), download=True)
ch = 3
else:
raise NotImplementedError("dataset is not supported")
model = SoftIntroVAE(cdim=ch, zdim=z_dim, channels=channels, image_size=image_size).to(device)
if pretrained is not None:
load_model(model, pretrained, device)
# print(model)
optimizer_e = optim.Adam(model.encoder.parameters(), lr=lr_e)
optimizer_d = optim.Adam(model.decoder.parameters(), lr=lr_d)
e_scheduler = optim.lr_scheduler.MultiStepLR(optimizer_e, milestones=(350,), gamma=0.1)
d_scheduler = optim.lr_scheduler.MultiStepLR(optimizer_d, milestones=(350,), gamma=0.1)
scale = 1 / (ch * image_size ** 2) # normalizing constant, 's' in the paper
train_data_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True, num_workers=num_workers)
start_time = time.time()
cur_iter = 0
kls_real = []
kls_fake = []
kls_rec = []
rec_errs = []
for epoch in range(start_epoch, num_epochs):
diff_kls = []
# save models
if epoch % save_interval == 0 and epoch > 0:
save_epoch = (epoch // save_interval) * save_interval
prefix = dataset + "_soft_intro_vae" + "_betas_" + str(beta_kl) + "_" + str(beta_neg) + "_" + str(
beta_rec) + "_"
save_checkpoint(model, save_epoch, cur_iter, prefix)
model.train()
batch_kls_real = []
batch_kls_fake = []
batch_kls_rec = []
batch_rec_errs = []
for iteration, batch in enumerate(train_data_loader, 0):
# --------------train------------
if dataset == "cifar10":
batch = batch[0]
if epoch < num_vae:
# vanilla vae training
if len(batch.size()) == 3:
batch = batch.unsqueeze(0)
batch_size = batch.size(0)
real_batch = batch.to(device)
# =========== Update E, D ================
real_mu, real_logvar, z, rec = model(real_batch)
loss_rec = calc_reconstruction_loss(real_batch, rec, loss_type=recon_loss_type, reduction="mean")
loss_kl = calc_kl(real_logvar, real_mu, reduce="mean")
loss = beta_rec * loss_rec + beta_kl * loss_kl
optimizer_e.zero_grad()
optimizer_d.zero_grad()
loss.backward()
optimizer_e.step()
optimizer_d.step()
if iteration % test_iter == 0:
info = "\nEpoch[{}]({}/{}): time: {:4.4f}: ".format(epoch, iteration, len(train_data_loader),
time.time() - start_time)
info += 'Rec: {:.4f}, KL: {:.4f}, '.format(loss_rec.data.cpu(), loss_kl.data.cpu())
print(info)
vutils.save_image(torch.cat([real_batch, rec], dim=0).data.cpu(),
'{}/image_{}.jpg'.format("./", cur_iter), nrow=num_row)
else:
# soft-intro-vae training
if len(batch.size()) == 3:
batch = batch.unsqueeze(0)
b_size = batch.size(0)
# generate random noise to produce 'fake' later
noise_batch = torch.randn(size=(b_size, z_dim)).to(device)
real_batch = batch.to(device)
# =========== Update E ================
for param in model.encoder.parameters():
param.requires_grad = True
for param in model.decoder.parameters():
param.requires_grad = False
# generate 'fake' data
fake = model.sample(noise_batch)
# ELBO for real data
real_mu, real_logvar = model.encode(real_batch)
z = reparameterize(real_mu, real_logvar)
rec = model.decoder(z)
loss_rec = calc_reconstruction_loss(real_batch, rec, loss_type=recon_loss_type, reduction="mean")
lossE_real_kl = calc_kl(real_logvar, real_mu, reduce="mean")
# prepare 'fake' data for expELBO
rec_mu, rec_logvar, z_rec, rec_rec = model(rec.detach())
fake_mu, fake_logvar, z_fake, rec_fake = model(fake.detach())
# KLD loss for the fake data
fake_kl_e = calc_kl(fake_logvar, fake_mu, reduce="none")
rec_kl_e = calc_kl(rec_logvar, rec_mu, reduce="none")
# reconstruction loss for the fake data
loss_fake_rec = calc_reconstruction_loss(fake, rec_fake, loss_type=recon_loss_type, reduction="none")
loss_rec_rec = calc_reconstruction_loss(rec, rec_rec, loss_type=recon_loss_type, reduction="none")
# expELBO
exp_elbo_fake = (-2 * scale * (beta_rec * loss_fake_rec + beta_neg * fake_kl_e)).exp().mean()
exp_elbo_rec = (-2 * scale * (beta_rec * loss_rec_rec + beta_neg * rec_kl_e)).exp().mean()
# total loss
lossE = scale * (beta_rec * loss_rec + beta_kl * lossE_real_kl) + 0.25 * (exp_elbo_fake + exp_elbo_rec)
# backprop
optimizer_e.zero_grad()
lossE.backward()
optimizer_e.step()
# ========= Update D ==================
for param in model.encoder.parameters():
param.requires_grad = False
for param in model.decoder.parameters():
param.requires_grad = True
# generate 'fake' data
fake = model.sample(noise_batch)
rec = model.decoder(z.detach())
# ELBO loss for real -- just the reconstruction, KLD for real doesn't affect the decoder
loss_rec = calc_reconstruction_loss(real_batch, rec, loss_type=recon_loss_type, reduction="mean")
# prepare 'fake' data for the ELBO
rec_mu, rec_logvar = model.encode(rec)
z_rec = reparameterize(rec_mu, rec_logvar)
fake_mu, fake_logvar = model.encode(fake)
z_fake = reparameterize(fake_mu, fake_logvar)
rec_rec = model.decode(z_rec.detach())
rec_fake = model.decode(z_fake.detach())
loss_rec_rec = calc_reconstruction_loss(rec.detach(), rec_rec, loss_type=recon_loss_type,
reduction="mean")
loss_fake_rec = calc_reconstruction_loss(fake.detach(), rec_fake, loss_type=recon_loss_type,
reduction="mean")
rec_kl = calc_kl(rec_logvar, rec_mu, reduce="mean")
fake_kl = calc_kl(fake_logvar, fake_mu, reduce="mean")
lossD = scale * (loss_rec * beta_rec + (rec_kl + fake_kl) * 0.5 * beta_kl + \
gamma_r * 0.5 * beta_rec * (loss_rec_rec + loss_fake_rec))
optimizer_d.zero_grad()
lossD.backward()
optimizer_d.step()
if torch.isnan(lossD) or torch.isnan(lossE):
raise SystemError("loss is NaN, training diverged")
# statistics for plotting later
diff_kls.append(-lossE_real_kl.data.cpu().item() + fake_kl.data.cpu().item())
batch_kls_real.append(lossE_real_kl.data.cpu().item())
batch_kls_fake.append(fake_kl.cpu().item())
batch_kls_rec.append(rec_kl.data.cpu().item())
batch_rec_errs.append(loss_rec.data.cpu().item())
if cur_iter % test_iter == 0:
info = "\nEpoch[{}]({}/{}): time: {:4.4f}: ".format(epoch, iteration, len(train_data_loader),
time.time() - start_time)
info += 'Rec: {:.4f}, '.format(loss_rec.data.cpu())
info += 'Kl_E: {:.4f}, expELBO_R: {:.4e}, expELBO_F: {:.4e}, '.format(lossE_real_kl.data.cpu(),
exp_elbo_rec.data.cpu(),
exp_elbo_fake.cpu())
info += 'KL_R: {:.4f}, KL_F: {:.4f}'.format(rec_kl.data.cpu(), fake_kl.data.cpu())
info += ' DIFF_Kl_F: {:.4f}'.format(-lossE_real_kl.data.cpu() + fake_kl.data.cpu())
print(info)
_, _, _, rec_det = model(real_batch, deterministic=True)
max_imgs = min(batch.size(0), 16)
vutils.save_image(
torch.cat([real_batch[:max_imgs], rec_det[:max_imgs], fake[:max_imgs]], dim=0).data.cpu(),
'{}/image_{}.jpg'.format("./", cur_iter), nrow=num_row)
cur_iter += 1
e_scheduler.step()
d_scheduler.step()
if epoch > num_vae - 1:
kls_real.append(np.mean(batch_kls_real))
kls_fake.append(np.mean(batch_kls_fake))
kls_rec.append(np.mean(batch_kls_rec))
rec_errs.append(np.mean(batch_rec_errs))
if epoch == num_epochs - 1:
with torch.no_grad():
_, _, _, rec_det = model(real_batch, deterministic=True)
noise_batch = torch.randn(size=(b_size, z_dim)).to(device)
fake = model.sample(noise_batch)
max_imgs = min(batch.size(0), 16)
vutils.save_image(
torch.cat([real_batch[:max_imgs], rec_det[:max_imgs], fake[:max_imgs]], dim=0).data.cpu(),
'{}/image_{}.jpg'.format("./", cur_iter), nrow=num_row)
# plot graphs
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(np.arange(len(kls_real)), kls_real, label="kl_real")
ax.plot(np.arange(len(kls_fake)), kls_fake, label="kl_fake")
ax.plot(np.arange(len(kls_rec)), kls_rec, label="kl_rec")
ax.plot(np.arange(len(rec_errs)), rec_errs, label="rec_err")
ax.set_ylim([0, 200])
ax.legend()
plt.savefig('./soft_intro_vae_train_graphs.jpg')
# save models
prefix = dataset + "_soft_intro_vae" + "_betas_" + str(beta_kl) + "_" + str(beta_neg) + "_" + str(
beta_rec) + "_"
save_checkpoint(model, epoch, cur_iter, prefix)
plt.show()
return model
# hyperparameters
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("device:", device)
num_epochs = 150
lr = 2e-4
batch_size = 32
beta_kl = 1.0
beta_rec = 1.0
beta_neg = 256
model = train_soft_intro_vae(dataset='cifar10', z_dim=128, lr_e=2e-4, lr_d=2e-4, batch_size=batch_size,
num_workers=0, start_epoch=0, num_epochs=num_epochs, num_vae=0, save_interval=5000,
recon_loss_type="mse", beta_kl=beta_kl, beta_rec=beta_rec, beta_neg=beta_neg,
test_iter=1000, seed=-1, pretrained=None, device=device)
# generate samples
print("Note that these results are for 150 epochs, usually more is needed.")
num_samples = 64
with torch.no_grad():
noise_batch = torch.randn(size=(num_samples, model.zdim)).to(device)
images = model.sample(noise_batch)
images = images.data.cpu().numpy()
images = np.clip(images * 255, 0, 255).astype(np.uint8)
images = images / 255.0
images = torch.from_numpy(images).type(torch.FloatTensor)
grid = make_grid(images, nrow=8)
grid_np = grid.permute(1, 2, 0).data.cpu().numpy()
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111)
ax.imshow(grid_np)
ax.set_axis_off()
plt.savefig('cifar10_grid_generated.png')
plt.show()
# reconstructions
num_recon = 8
test_dataset = CIFAR10(root='./cifar10_ds', train=False, download=True, transform=transforms.ToTensor())
test_loader = DataLoader(dataset=test_dataset, batch_size=num_recon, shuffle=True)
test_images = iter(test_loader)
with torch.no_grad():
total_grid = []
for _ in range(3):
data = next(test_images)
recon = model(data[0].to(device), deterministic=True)[3]
images = recon.data.cpu().numpy()
images = np.clip(images * 255, 0, 255).astype(np.uint8)
images = images / 255.0
images = torch.from_numpy(images).type(torch.FloatTensor)
grid = make_grid(torch.cat([data[0], images], dim=0), nrow=8)
total_grid.append(grid)
total_grid = torch.cat(total_grid, dim=1)
grid_np = total_grid.permute(1, 2, 0).data.cpu().numpy()
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111)
ax.imshow(grid_np)
ax.set_axis_off()
plt.savefig('cifar10_grid_reconstructions.png')
plt.show()
```
### <img src="https://img.icons8.com/nolan/64/more.png" style="height:50px;display:inline"> But Wait, There is More...
---
* Soft-IntroVAE Tutorials
* [Soft-IntroVAE for 2D Datasets](https://github.com/taldatech/soft-intro-vae-pytorch/blob/main/soft_intro_vae_tutorial/soft_intro_vae_2d_code_tutorial.ipynb)
* [Open in Colab](https://colab.research.google.com/github/taldatech/soft-intro-vae-pytorch/blob/main/soft_intro_vae_tutorial/soft_intro_vae_2d_code_tutorial.ipynb)
* [Bootstrap Soft-IntroVAE](https://github.com/taldatech/soft-intro-vae-pytorch/blob/main/soft_intro_vae_tutorial/soft_intro_vae_bootstrap_code_tutorial.ipynb)
* [Open in Colab](https://colab.research.google.com/github/taldatech/soft-intro-vae-pytorch/blob/main/soft_intro_vae_tutorial/soft_intro_vae_bootstrap_code_tutorial.ipynb)
* General Tutorials (Jupyter Notebooks with code)
* [CS236756 - Intro to Machine Learning](https://github.com/taldatech/cs236756-intro-to-ml)
* [EE046202 - Unsupervised Learning and Data Analysis](https://github.com/taldatech/ee046202-unsupervised-learning-data-analysis)
* [EE046746 - Computer Vision](https://github.com/taldatech/ee046746-computer-vision)
## <img src="https://img.icons8.com/dusk/64/000000/prize.png" style="height:50px;display:inline"> Credits
---
* Icons from <a href="https://icons8.com/">Icon8.com</a> - https://icons8.com
```
import tensorflow as tf
import os
import numpy as np
import matplotlib.pyplot as plt
from google.colab import drive
drive.mount('/content/drive')
X_train = np.load("/content/drive/MyDrive/Data/X_tr_concat.npy")
y_train = np.load("/content/drive/MyDrive/Data/y_tr_concat.npy")
X_val = np.load("/content/drive/MyDrive/Data/X_val_concat.npy")
y_val = np.load("/content/drive/MyDrive/Data/y_val_concat.npy")
a = np.array([[1,2],[3,4],[5,6],[7,8]])
b = a[0:a.shape[0]//2, :]
print(b)
X_valid = X_val[0:X_val.shape[0]//2,:]
X_test = X_val[X_val.shape[0]//2:,:]
y_valid = y_val[0:y_val.shape[0]//2,:]
y_test = y_val[y_val.shape[0]//2:,:]
print(X_train.shape, y_train.shape, X_valid.shape, y_valid.shape, X_test.shape, y_test.shape)
features = 252  # input feature count (defined but not used below)
batch_size = 512
numepochs = 50
def build_model():
model = tf.keras.Sequential(
[
tf.keras.layers.Dense(256, activation="relu"),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Dense(256, activation="relu"),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Dense(128, activation="tanh"),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Dense(88, activation="sigmoid")
]
)
return model
model = build_model()
optimizer = tf.keras.optimizers.Adam()
model.compile(optimizer=optimizer, loss = "binary_crossentropy", metrics=['accuracy'])
stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=8, mode='auto')
history = model.fit(X_train,y_train,epochs=numepochs,batch_size=batch_size,verbose=1,validation_data=(X_valid,y_valid), callbacks=stop)
def plot_history(history):
fig, axs = plt.subplots()
"""# create the accuracy subplot
axs[0].plot(history.history["accuracy"], label = "train accuracy")
axs[0].plot(history.history["val_accuracy"], label = "test accuracy")
axs[0].set_ylabel('Accuracy')
axs[0].set_xlabel("Epoch")
axs[0].legend(loc ="lower right")
axs[0].set_title("Accuracy eval")"""
# create the error subplot
axs.plot(history.history["loss"], label="train error")
axs.plot(history.history["val_loss"], label="test error")
axs.set_ylabel('Error')
axs.set_xlabel("Epoch")
axs.legend(loc="upper right")
axs.set_title("Error eval")
plt.show()
plot_history(history)
pred = model.predict(X_test)
print(pred.shape)
pred = (pred >= 0.5).astype(np.float32)  # threshold probabilities into hard 0/1 labels
true_positives = 0
false_positives = 0
false_negatives = 0
for j in range(pred.shape[0]):
for k in range(pred.shape[1]):
if y_test[j][k] == 1.0 and pred[j][k] == 1.0:
true_positives+=1
elif y_test[j][k] == 1.0 and pred[j][k] == 0.0:
false_negatives+=1
elif y_test[j][k] == 0.0 and pred[j][k] == 1.0:
false_positives+=1
accuracy = true_positives/(true_positives+false_positives+false_negatives)  # note: this is the Jaccard index, not standard accuracy
precision = true_positives/(true_positives+false_positives)
recall = true_positives/(true_positives+false_negatives)
fscore = (2*precision*recall)/(precision+recall)
print(accuracy, precision, recall, fscore)
model.save("dnn_trained.h5")
```
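The nested TP/FP/FN loop above can be vectorized with NumPy; a minimal sketch on small hypothetical arrays, using the same micro-averaged definitions:

```python
import numpy as np

def micro_prf(y_true, y_pred):
    """Micro-averaged precision, recall and F1 for multi-label 0/1 arrays."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Tiny hypothetical example: 2 samples x 3 labels
y_true = np.array([[1, 0, 1], [0, 1, 1]])
y_pred = np.array([[1, 1, 1], [0, 0, 1]])
print(micro_prf(y_true, y_pred))  # (0.75, 0.75, 0.75)
```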
<a href="https://colab.research.google.com/github/Tessellate-Imaging/monk_v1/blob/master/study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks%20in%20Deep%20Learning%20Networks/6)%20Resnet%20V1%20Bottleneck%20Block%20(Type%20-%202).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Goals
### 1. Learn to implement Resnet V1 Bottleneck Block (Type - 2) using monk
- Monk's Keras
- Monk's Pytorch
- Monk's Mxnet
### 2. Use network Monk's debugger to create complex blocks
### 3. Understand how syntactically different it is to implement the same using
- Traditional Keras
- Traditional Pytorch
- Traditional Mxnet
# Resnet V1 Bottleneck Block - Type 2
- Note: the block structure can have variations too; this is just one example
```
from IPython.display import Image
Image(filename='imgs/resnet_v1_bottleneck_without_downsample.png')
```
# Table of contents
[1. Install Monk](#1)
[2. Block basic Information](#2)
- [2.1) Visual structure](#2-1)
- [2.2) Layers in Branches](#2-2)
[3) Creating Block using monk visual debugger](#3)
- [3.1) Create the first branch](#3-1)
- [3.2) Create the second branch](#3-2)
- [3.3) Merge the branches](#3-3)
- [3.4) Debug the merged network](#3-4)
- [3.5) Compile the network](#3-5)
- [3.6) Visualize the network](#3-6)
- [3.7) Run data through the network](#3-7)
[4) Creating Block Using MONK one line API call](#4)
- [Mxnet Backend](#4-1)
- [Pytorch Backend](#4-2)
- [Keras Backend](#4-3)
[5) Appendix](#5)
- [Study Material](#5-1)
- [Creating block using traditional Mxnet](#5-2)
- [Creating block using traditional Pytorch](#5-3)
- [Creating block using traditional Keras](#5-4)
<a id='0'></a>
# Install Monk
## Using pip (Recommended)
- colab (gpu)
    - All backends: `pip install -U monk-colab`
- kaggle (gpu)
    - All backends: `pip install -U monk-kaggle`
- cuda 10.2
    - All backends: `pip install -U monk-cuda102`
    - Gluon backend: `pip install -U monk-gluon-cuda102`
    - Pytorch backend: `pip install -U monk-pytorch-cuda102`
    - Keras backend: `pip install -U monk-keras-cuda102`
- cuda 10.1
    - All backends: `pip install -U monk-cuda101`
    - Gluon backend: `pip install -U monk-gluon-cuda101`
    - Pytorch backend: `pip install -U monk-pytorch-cuda101`
    - Keras backend: `pip install -U monk-keras-cuda101`
- cuda 10.0
    - All backends: `pip install -U monk-cuda100`
    - Gluon backend: `pip install -U monk-gluon-cuda100`
    - Pytorch backend: `pip install -U monk-pytorch-cuda100`
    - Keras backend: `pip install -U monk-keras-cuda100`
- cuda 9.2
    - All backends: `pip install -U monk-cuda92`
    - Gluon backend: `pip install -U monk-gluon-cuda92`
    - Pytorch backend: `pip install -U monk-pytorch-cuda92`
    - Keras backend: `pip install -U monk-keras-cuda92`
- cuda 9.0
    - All backends: `pip install -U monk-cuda90`
    - Gluon backend: `pip install -U monk-gluon-cuda90`
    - Pytorch backend: `pip install -U monk-pytorch-cuda90`
    - Keras backend: `pip install -U monk-keras-cuda90`
- cpu
    - All backends: `pip install -U monk-cpu`
    - Gluon backend: `pip install -U monk-gluon-cpu`
    - Pytorch backend: `pip install -U monk-pytorch-cpu`
    - Keras backend: `pip install -U monk-keras-cpu`
## Install Monk Manually (Not recommended)
### Step 1: Clone the library
- git clone https://github.com/Tessellate-Imaging/monk_v1.git
### Step 2: Install requirements
- Linux
- Cuda 9.0
- `cd monk_v1/installation/Linux && pip install -r requirements_cu90.txt`
- Cuda 9.2
- `cd monk_v1/installation/Linux && pip install -r requirements_cu92.txt`
- Cuda 10.0
- `cd monk_v1/installation/Linux && pip install -r requirements_cu100.txt`
- Cuda 10.1
- `cd monk_v1/installation/Linux && pip install -r requirements_cu101.txt`
- Cuda 10.2
- `cd monk_v1/installation/Linux && pip install -r requirements_cu102.txt`
- CPU (Non gpu system)
- `cd monk_v1/installation/Linux && pip install -r requirements_cpu.txt`
- Windows
- Cuda 9.0 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu90.txt`
- Cuda 9.2 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu92.txt`
- Cuda 10.0 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu100.txt`
- Cuda 10.1 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu101.txt`
- Cuda 10.2 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu102.txt`
- CPU (Non gpu system)
- `cd monk_v1/installation/Windows && pip install -r requirements_cpu.txt`
- Mac
- CPU (Non gpu system)
- `cd monk_v1/installation/Mac && pip install -r requirements_cpu.txt`
- Misc
- Colab (GPU)
- `cd monk_v1/installation/Misc && pip install -r requirements_colab.txt`
- Kaggle (GPU)
- `cd monk_v1/installation/Misc && pip install -r requirements_kaggle.txt`
### Step 3: Add to system path (Required for every terminal or kernel run)
- `import sys`
- `sys.path.append("monk_v1/");`
# Imports
```
# Common
import numpy as np
import math
import netron
from collections import OrderedDict
from functools import partial
#Using mxnet-gluon backend
# When installed using pip
from monk.gluon_prototype import prototype
# When installed manually (Uncomment the following)
#import os
#import sys
#sys.path.append("monk_v1/");
#sys.path.append("monk_v1/monk/");
#from monk.gluon_prototype import prototype
```
<a id='2'></a>
# Block Information
<a id='2-1'></a>
## Visual structure
```
from IPython.display import Image
Image(filename='imgs/resnet_v1_bottleneck_without_downsample.png')
```
<a id='2-2'></a>
## Layers in Branches
- Number of branches: 2
- Branch 1
- identity
- Branch 2
- conv_1x1 -> batchnorm -> relu -> conv_3x3 -> batchnorm -> relu -> conv1x1 -> batchnorm
- Branches merged using
- Elementwise addition
(See Appendix to read blogs on resnets)
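The merge step above is a plain elementwise addition followed by a ReLU; a minimal NumPy sketch (the shapes below are illustrative) of why the identity branch and the convolutional branch must produce identical shapes in this no-downsample block:

```
import numpy as np

# Illustrative feature maps: (batch, channels, height, width)
identity = np.random.randn(1, 64, 8, 8)    # Branch 1: input passed through unchanged
branch_out = np.random.randn(1, 64, 8, 8)  # Branch 2: conv/batchnorm output

# Elementwise addition requires identical shapes, which is why this
# block type has no downsampling (and no channel change) on either path.
merged = identity + branch_out
activated = np.maximum(merged, 0)          # ReLU applied after the merge

assert activated.shape == identity.shape
```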
<a id='3'></a>
# Creating Block using monk debugger
```
# Imports and setup a project
# To use pytorch backend - replace gluon_prototype with pytorch_prototype
# To use keras backend - replace gluon_prototype with keras_prototype
from monk.gluon_prototype import prototype
# Create a sample project
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
```
<a id='3-1'></a>
## Create the first branch
```
def first_branch():
    network = [];
    network.append(gtf.identity());
    return network;
# Debug the branch
branch_1 = first_branch()
network = [];
network.append(branch_1);
gtf.debug_custom_model_design(network);
```
<a id='3-2'></a>
## Create the second branch
```
def second_branch(output_channels=128, stride=1):
    network = [];
    # Bottleneck 1x1 convolution (reduces channels; stride is applied here)
    network.append(gtf.convolution(output_channels=output_channels//4, kernel_size=1, stride=stride));
    network.append(gtf.batch_normalization());
    network.append(gtf.relu());
    # 3x3 convolution
    network.append(gtf.convolution(output_channels=output_channels//4, kernel_size=3, stride=1));
    network.append(gtf.batch_normalization());
    network.append(gtf.relu());
    # 1x1 convolution (restores the full channel count)
    network.append(gtf.convolution(output_channels=output_channels, kernel_size=1, stride=1));
    network.append(gtf.batch_normalization());
    return network;
# Debug the branch
branch_2 = second_branch(output_channels=64, stride=1)
network = [];
network.append(branch_2);
gtf.debug_custom_model_design(network);
```
<a id='3-3'></a>
## Merge the branches
```
def final_block(output_channels=64, stride=1):
    network = [];
    # Create subnetwork and add branches
    subnetwork = [];
    branch_1 = first_branch()
    branch_2 = second_branch(output_channels=output_channels, stride=stride)
    subnetwork.append(branch_1);
    subnetwork.append(branch_2);
    # Add merging element
    subnetwork.append(gtf.add());
    # Add the subnetwork
    network.append(subnetwork)
    network.append(gtf.relu());
    return network;
```
<a id='3-4'></a>
## Debug the merged network
```
final = final_block(output_channels=64, stride=1)
network = [];
network.append(final);
gtf.debug_custom_model_design(network);
```
<a id='3-5'></a>
## Compile the network
```
gtf.Compile_Network(network, data_shape=(64, 224, 224), use_gpu=False);
```
<a id='3-6'></a>
## Run data through the network
```
import mxnet as mx
x = np.zeros((1, 64, 224, 224));
x = mx.nd.array(x);
y = gtf.system_dict["local"]["model"].forward(x);
print(x.shape, y.shape)
```
<a id='3-7'></a>
## Visualize network using netron
```
gtf.Visualize_With_Netron(data_shape=(64, 224, 224))
```
<a id='4'></a>
# Creating Block Using MONK one line API call
<a id='4-1'></a>
## Mxnet backend
```
from monk.gluon_prototype import prototype
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
network = [];
# Single line addition of blocks
network.append(gtf.resnet_v1_bottleneck_block(output_channels=64, downsample=False));
gtf.Compile_Network(network, data_shape=(64, 224, 224), use_gpu=False);
```
<a id='4-2'></a>
## Pytorch backend
- Only the import changes
```
#Change gluon_prototype to pytorch_prototype
from monk.pytorch_prototype import prototype
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
network = [];
# Single line addition of blocks
network.append(gtf.resnet_v1_bottleneck_block(output_channels=64, downsample=False));
gtf.Compile_Network(network, data_shape=(64, 224, 224), use_gpu=False);
```
<a id='4-3'></a>
## Keras backend
- Only the import changes
```
#Change gluon_prototype to keras_prototype
from monk.keras_prototype import prototype
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
network = [];
# Single line addition of blocks
network.append(gtf.resnet_v1_bottleneck_block(output_channels=64, downsample=False));
gtf.Compile_Network(network, data_shape=(64, 224, 224), use_gpu=False);
```
<a id='5'></a>
# Appendix
<a id='5-1'></a>
## Study links
- https://towardsdatascience.com/residual-blocks-building-blocks-of-resnet-fd90ca15d6ec
- https://medium.com/@MaheshNKhatri/resnet-block-explanation-with-a-terminology-deep-dive-989e15e3d691
- https://medium.com/analytics-vidhya/understanding-and-implementation-of-residual-networks-resnets-b80f9a507b9c
- https://hackernoon.com/resnet-block-level-design-with-deep-learning-studio-part-1-727c6f4927ac
<a id='5-2'></a>
## Creating block using traditional Mxnet
- Code credits - https://mxnet.incubator.apache.org/
```
# Traditional-Mxnet-gluon
import numpy as np
import mxnet as mx
from mxnet.gluon import nn
from mxnet.gluon.nn import HybridBlock, BatchNorm
from mxnet.gluon.contrib.nn import HybridConcurrent, Identity
from mxnet import gluon, init, nd

def _conv3x3(channels, stride, in_channels):
    return nn.Conv2D(channels, kernel_size=3, strides=stride, padding=1,
                     use_bias=False, in_channels=in_channels)

class ResnetBlockV1(HybridBlock):
    def __init__(self, channels, stride, in_channels=0, **kwargs):
        super(ResnetBlockV1, self).__init__(**kwargs)
        # Branch - 1
        # Identity
        # Branch - 2
        self.body = nn.HybridSequential(prefix='')
        self.body.add(nn.Conv2D(channels//4, kernel_size=1, strides=stride,
                                use_bias=False, in_channels=in_channels))
        self.body.add(nn.BatchNorm())
        self.body.add(nn.Activation('relu'))
        self.body.add(_conv3x3(channels//4, stride, in_channels))
        self.body.add(nn.BatchNorm())
        self.body.add(nn.Activation('relu'))
        self.body.add(nn.Conv2D(channels, kernel_size=1, strides=stride,
                                use_bias=False, in_channels=in_channels))
        self.body.add(nn.BatchNorm())

    def hybrid_forward(self, F, x):
        residual = x
        x = self.body(x)
        x = F.Activation(residual+x, act_type='relu')
        return x
# Invoke the block
block = ResnetBlockV1(64, 1)
# Initialize network and load block on machine
ctx = [mx.cpu()];
block.initialize(init.Xavier(), ctx = ctx);
block.collect_params().reset_ctx(ctx)
block.hybridize()
# Run data through network
x = np.zeros((1, 64, 224, 224));
x = mx.nd.array(x);
y = block.forward(x);
print(x.shape, y.shape)
# Export Model to Load on Netron
block.export("final", epoch=0);
netron.start("final-symbol.json", port=8082)
```
<a id='5-3'></a>
## Creating block using traditional Pytorch
- Code credits - https://pytorch.org/
```
# Traditional-Pytorch
import torch
from torch import nn
from torch.jit.annotations import List
import torch.nn.functional as F

def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):
    """3x3 convolution with padding"""
    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                     padding=dilation, groups=groups, bias=False, dilation=dilation)

def conv1x1(in_planes, out_planes, stride=1):
    """1x1 convolution"""
    return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)

class ResnetBottleNeckBlock(nn.Module):
    expansion = 1
    __constants__ = ['downsample']

    def __init__(self, inplanes, planes, stride=1, groups=1,
                 base_width=64, dilation=1, norm_layer=None):
        super(ResnetBottleNeckBlock, self).__init__()
        norm_layer = nn.BatchNorm2d
        # Branch - 1
        # Identity
        # Branch - 2
        self.conv1 = conv1x1(inplanes, planes//4, stride)
        self.bn1 = norm_layer(planes//4)
        self.relu1 = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes//4, planes//4, stride)
        self.bn2 = norm_layer(planes//4)
        self.relu2 = nn.ReLU(inplace=True)
        self.conv3 = conv1x1(planes//4, planes)
        self.bn3 = norm_layer(planes)
        self.stride = stride
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu1(out)
        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu2(out)
        out = self.conv3(out)
        out = self.bn3(out)
        out += identity
        out = self.relu(out)
        return out
# Invoke the block
block = ResnetBottleNeckBlock(64, 64, stride=1);
# Initialize network and load block on machine
layers = []
layers.append(block);
net = nn.Sequential(*layers);
# Run data through network
x = torch.randn(1, 64, 224, 224)
y = net(x)
print(x.shape, y.shape);
# Export Model to Load on Netron
torch.onnx.export(net,                       # model being run
                  x,                         # model input (or a tuple for multiple inputs)
                  "model.onnx",              # where to save the model (can be a file or file-like object)
                  export_params=True,        # store the trained parameter weights inside the model file
                  opset_version=10,          # the ONNX version to export the model to
                  do_constant_folding=True,  # whether to execute constant folding for optimization
                  input_names = ['input'],   # the model's input names
                  output_names = ['output'], # the model's output names
                  dynamic_axes={'input' : {0 : 'batch_size'},    # variable length axes
                                'output' : {0 : 'batch_size'}})
netron.start('model.onnx', port=9998);
```
<a id='5-4'></a>
## Creating block using traditional Keras
- Code credits: https://keras.io/
```
# Traditional-Keras
import keras
import keras.layers as kla
import keras.models as kmo
import tensorflow as tf
from keras.models import Model
backend = 'channels_last'
from keras import layers
def resnet_conv_block(input_tensor,
                      kernel_size,
                      filters,
                      stage,
                      block,
                      strides=(1, 1)):
    filters1, filters2, filters3 = filters
    bn_axis = 3
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'
    # Branch - 1
    # Identity
    shortcut = input_tensor
    # Branch - 2
    x = layers.Conv2D(filters1, (1, 1), strides=strides,
                      kernel_initializer='he_normal',
                      name=conv_name_base + '2a')(input_tensor)
    x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2a')(x)
    x = layers.Activation('relu')(x)
    x = layers.Conv2D(filters2, (3, 3), strides=strides,
                      kernel_initializer='he_normal',
                      name=conv_name_base + '2b', padding="same")(x)
    x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2b')(x)
    x = layers.Activation('relu')(x)
    x = layers.Conv2D(filters3, (1, 1),
                      kernel_initializer='he_normal',
                      name=conv_name_base + '2c')(x)
    x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2c')(x)
    x = layers.add([x, shortcut])
    x = layers.Activation('relu')(x)
    return x

def create_model(input_shape, kernel_size, filters, stage, block):
    img_input = layers.Input(shape=input_shape);
    x = resnet_conv_block(img_input, kernel_size, filters, stage, block)
    return Model(img_input, x);
# Invoke the block
kernel_size=3;
filters=[16, 16, 64];
input_shape=(224, 224, 64);
model = create_model(input_shape, kernel_size, filters, 0, "0");
# Run data through network
x = tf.placeholder(tf.float32, shape=(1, 224, 224, 64))
y = model(x)
print(x.shape, y.shape)
# Export Model to Load on Netron
model.save("final.h5");
netron.start("final.h5", port=8082)
```
# Goals Completed
### 1. Learn to implement Resnet V1 Bottleneck Block (Type - 2) using monk
- Monk's Keras
- Monk's Pytorch
- Monk's Mxnet
### 2. Use network Monk's debugger to create complex blocks
### 3. Understand how syntactically different it is to implement the same using
- Traditional Keras
- Traditional Pytorch
- Traditional Mxnet
| github_jupyter |
### PoS - Part of Speech Tagging
As promised in the previous notebook, in this notebook we are going to show how to use a `Bi-LSTM` layer together with an `Embedding` layer, using `word2vec` vectors as the weights of the embedding layer.
**Note:** The rest of the notebook remains the same as the previous one; the only difference is that this time we load the data from a file called `pos.csv`.
### Part of Speech Tagging (PoS)
This is the process of classifying words into their parts of speech.
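As a toy illustration (the miniature lexicon below is hand-made, purely for illustration — a real tagger learns this mapping from data), tagging assigns one label to each token:

```
# Hypothetical miniature tag lexicon, for illustration only
lexicon = {"the": "DET", "dog": "NOUN", "runs": "VERB"}

sentence = "the dog runs".split(" ")
tags = [lexicon.get(word, "UNK") for word in sentence]
print(list(zip(sentence, tags)))  # [('the', 'DET'), ('dog', 'NOUN'), ('runs', 'VERB')]
```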
### Imports
```
import numpy as np
import tensorflow as tf
from tensorflow import keras
import pandas as pd
from sklearn.model_selection import train_test_split
from nltk.corpus import brown, treebank, conll2000
import os, time, re, string, random, nltk
tf.__version__
```
### Data loading
We are going to load the data from a `csv` file named `pos.csv` from our google drive.
```
from google.colab import drive
drive.mount("/content/drive")
file_path = "/content/drive/My Drive/NLP Data/pos-datasets/english/pos.csv"
os.path.exists(file_path)
dataframe = pd.read_csv(file_path)
dataframe.head(5)
```
### Dataset creation
Now in our dataset we have a sentence of words and a corresponding sentence of tags, where each tag corresponds to a word in the sentence of words.
```
X = dataframe.sentence.values
y = dataframe.tags.values
print(X[0])
print(y[0])
X = list(map(lambda x: x.split(" "), list(X)))
y = list(map(lambda x: x.split(" "), list(y)))
```
### Data Statistics
```
words = []
tags = []
for sentence in X:
    for word in sentence:
        words.append(word.lower())

for tag in y:
    for t in tag:
        tags.append(t.lower())
num_words = len(set(words))
num_tags = len(set(tags))
set(tags)
print("Total number of tagged sentences: {}".format(len(X)))
print("Vocabulary size: {}".format(num_words))
print("Total number of tags: {}".format(num_tags))
```
### Checking examples
```
print("Sample x: ", X[0], "\n")
print("Sample y: ", y[0], "\n")
```
### Text vectorization
We are going to use the `Tokenizer` class to encode text from sequences to sequence of integers.
```
tokenizer = keras.preprocessing.text.Tokenizer(split=' ')
tokenizer.fit_on_texts(X)
tag_tokenizer = keras.preprocessing.text.Tokenizer(split=' ')
tag_tokenizer.fit_on_texts(y)
tag_tokenizer.word_index
```
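Under the hood, `fit_on_texts` builds a word-to-index vocabulary (the most frequent words get the smallest indices, starting at 1, with 0 reserved for padding) and `texts_to_sequences` looks each token up in it; a rough pure-Python sketch of that behaviour:

```
from collections import Counter

corpus = [["the", "dog", "runs"], ["the", "cat", "sleeps"]]

# Rank words by frequency; index 0 is reserved (used later for padding)
counts = Counter(token for sentence in corpus for token in sentence)
word_index = {word: i + 1 for i, (word, _) in enumerate(counts.most_common())}

sequences = [[word_index[token] for token in sentence] for sentence in corpus]
print(word_index["the"])  # 1 -- "the" is the most frequent token
```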
Now we can convert our tokens to sequences.
```
sentences_sequences = tokenizer.texts_to_sequences(X)
tags_sequences = tag_tokenizer.texts_to_sequences(y)
```
### Checking a single example.
```
len(tags_sequences[3]), len(sentences_sequences[3])
```
Let's convert tag tokens back to word representations.
```
print("Y[0]: ", y[0])
print("tags_sequences[0]: ", tags_sequences[0])
print("sequences_to_tags[0]: ",
tag_tokenizer.sequences_to_texts([tags_sequences[0]]))
```
### Checking if the inputs and outputs have the same length.
```
different_length = [1 if len(input) != len(output) else 0 for input, output in zip(tags_sequences, sentences_sequences)]
print("{} sentences have disparate input-output lengths.".format(sum(different_length)))
```
### Padding sequences
Since the sentences have varying lengths, we are going to pad their sequences up to a fixed maximum length, making sure that all sequences end up the same length.
```
lengths = [len(seq) for seq in tags_sequences]
MAX_LENGTH = max(lengths)
print(f"Longest sentence: {MAX_LENGTH}")
MAX_LENGTH = 100 # we are going to set the max-length to 100
padded_sentences = keras.preprocessing.sequence.pad_sequences(
sentences_sequences,
maxlen=MAX_LENGTH,
padding="post",
truncating="post"
)
padded_tags = keras.preprocessing.sequence.pad_sequences(
tags_sequences,
maxlen=MAX_LENGTH,
padding="post",
truncating="post"
)
```
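The `padding="post"` / `truncating="post"` behaviour used above appends zeros up to `maxlen` and cuts anything beyond it from the end; a minimal plain-Python sketch of that rule:

```
def pad_post(sequence, maxlen, value=0):
    # Truncate from the end, then append the pad value up to maxlen
    truncated = sequence[:maxlen]
    return truncated + [value] * (maxlen - len(truncated))

print(pad_post([5, 8, 2], maxlen=5))           # [5, 8, 2, 0, 0]
print(pad_post([5, 8, 2, 9, 1, 7], maxlen=5))  # [5, 8, 2, 9, 1]
```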
Checking a single example of the padded sequences.
```
print(padded_sentences[0], "\n"*2)
print(padded_tags[0])
```
### One-hot Encode `padded_tags` labels
```
padded_tags = keras.utils.to_categorical(padded_tags)
padded_tags.shape
```
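`to_categorical` turns each integer tag id into a one-hot row vector; an equivalent NumPy sketch using identity-matrix indexing (tag ids assumed here to run from 0 to n_classes - 1):

```
import numpy as np

tag_ids = np.array([0, 2, 1])
n_classes = 3

one_hot = np.eye(n_classes)[tag_ids]  # each row has a single 1 at the tag id position
print(one_hot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```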
### Set's spliting.
We are going to split the data into three sets using `sklearn`'s `train_test_split` method: train, validation, and test sets.
```
X_train, X_test, y_train, y_test = train_test_split(
padded_sentences, padded_tags, random_state=42, test_size=.15
)
# Carve the validation set out of the training split (not the full data),
# so that it does not overlap with the test set
X_train, X_valid, y_train, y_valid = train_test_split(
    X_train, y_train, random_state=42, test_size=.15
)
```
### Counting examples
```
print("training: ", len(X_train))
print("testing: ", len(X_test))
print("validation: ", len(X_valid))
```
### A simple RNN with word Embeddings
As mentioned previously, we are going to make use of `word2vec`. To download these vectors, run the following code cell:
```
import gensim.downloader as api
word2vec = api.load('word2vec-google-news-300')
vec_king = word2vec['king']
vec_king
```
### Theory behind word vectors.
Words with similar meanings are closer to each other in the vector space.
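"Closer" here is typically measured with cosine similarity; a small NumPy sketch with made-up 3-dimensional vectors (not real word2vec embeddings):

```
import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical 3-dimensional "embeddings", for illustration only
king = np.array([0.9, 0.8, 0.1])
queen = np.array([0.85, 0.75, 0.2])
banana = np.array([0.1, 0.05, 0.9])

print(cosine_similarity(king, queen) > cosine_similarity(king, banana))  # True
```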
### Number of classes
```
n_classes = y_train.shape[-1]
n_classes
```
### Hyper parameters
```
VOCAB_SIZE = len(tokenizer.word_index) + 1
EMBEDDING_SIZE = 300 # each word in word2vec model is represented using a 300 dimensional vector
MAX_SEQUENCE_LENGTH = 100
VOCAB_SIZE
```
### Creating an embedding matrix that suits our data.
```
# Initialize the embedding weights with zeros
embedding_weigths = np.zeros([
    VOCAB_SIZE, EMBEDDING_SIZE
])
# Getting the string to integer mapping
stoi = tokenizer.word_index
# Copy vectors from word2vec into the embedding matrix that suits our data;
# words missing from word2vec keep their all-zero row
for word, index in stoi.items():
    try:
        embedding_weigths[index, :] = word2vec[word]
    except KeyError:
        pass
```
### Checking the dimensions of our embedding weights
```
print("Embeddings shape: {}".format(embedding_weigths.shape))
```
#### Checking a single word in our embedding weights
```
embedding_weigths[tokenizer.word_index['the']]
```
### Building an RNN using pretrained weights.
```
rnn_model = keras.Sequential([
    keras.layers.Embedding(
        VOCAB_SIZE, EMBEDDING_SIZE, input_length=MAX_SEQUENCE_LENGTH,
        trainable=True,
        weights=[embedding_weigths]
    ),
    keras.layers.Bidirectional(
        keras.layers.LSTM(64, return_sequences=True, dropout=.5)
    ),
    keras.layers.TimeDistributed(
        keras.layers.Dense(n_classes, activation="softmax")
    )
], name="lstm_rnn")
rnn_model.summary()
```
### Model training
```
rnn_model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['acc'])
rnn_model.fit(
X_train, y_train, batch_size=128,
epochs=2, validation_data=(X_valid, y_valid)
)
```
### Evaluating the model
```
rnn_model.evaluate(X_test, y_test, verbose=1)
```
### Model Inference
Now we are ready to make predictions of our tags. We are going to perform the following steps in the `tokenize_and_pad_sequences` function:
1. tokenize the sentence
2. convert the tokenized sentence to its integer representation
3. pad the tokenized sequences and pass them to the model
4. get the predictions and convert them back to `tags`
```
sent = "The Fulton County Grand Jury said Friday an investigation of Atlanta's recent primary election produced `` no evidence '' that any irregularities took place ."
tags = 'DET NOUN NOUN ADJ NOUN VERB NOUN DET NOUN ADP NOUN ADJ NOUN NOUN VERB . DET NOUN . ADP DET NOUN VERB NOUN .'.split(" ")
def tokenize_and_pad_sequences(sent):
    if isinstance(sent, str):
        tokens = sent.split(" ")
    else:
        tokens = sent
    tokens = [t.lower() for t in tokens]
    sequences = tokenizer.texts_to_sequences([tokens])
    padded_sequences = keras.preprocessing.sequence.pad_sequences(
        sequences,
        maxlen=MAX_LENGTH,
        padding="post",
        truncating="post"
    )
    predictions = rnn_model.predict(padded_sequences)
    predictions = tf.argmax(predictions, axis=-1).numpy().astype("int32")
    return tag_tokenizer.sequences_to_texts(predictions)
pred_tags = tokenize_and_pad_sequences(sent)
print("word\t\t\ttag\tpred-tag\t")
print("-"*40)
for word, tag, pred_tag in zip(sent.split(" "), tags, pred_tags[0].split(" ")):
    print(f"{word}\t\t\t{tag}\t{pred_tag.upper()}\t")
```
### Conclusion
You have made it to the end of this series, where we created models with different architectures and were able to make accurate predictions.
| github_jupyter |
## Deploy SSD-VGG Model as Web Service on FPGA
```
%load_ext autoreload
%autoreload 2
import os, sys
import tensorflow as tf
import azureml
import warnings
warnings.filterwarnings('ignore')
sys.path.insert(0, os.path.abspath("../tfssd"))
from tfutil import endpoints
from finetune.model_saver import SaverVggSsd
```
### Restore AzureML workspace & register Model
```
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
from azureml.core.model import Model
from azureml.core.image import Image
from azureml.accel import AccelOnnxConverter
from azureml.accel import AccelContainerImage
from os.path import expanduser
model_ckpt_dir = expanduser("~/azml_ssd_vgg")
model_name = r'ssdvgg-fpga'
model_save_path = os.path.join(model_ckpt_dir, model_name)
# model_save_path should NOT exist prior to saving the model
not os.path.exists(model_save_path)
with SaverVggSsd(model_ckpt_dir) as saver:
    saver.save_for_deployment(model_save_path)
# Register model
registered_model = Model.register(workspace = ws,
model_path = model_save_path,
model_name = model_name)
print("Successfully registered: ", registered_model.name, registered_model.description, registered_model.version, '\n', sep = '\t')
```
### Convert inference model to ONNX format
```
#Convert the TensorFlow graph to the Open Neural Network Exchange format (ONNX).
input_tensor = saver.input_name_str
output_tensors_str = ",".join(saver.output_names)
# Convert model
convert_request = AccelOnnxConverter.convert_tf_model(ws, registered_model, input_tensor, output_tensors_str)
# If it fails, you can run wait_for_completion again with show_output=True.
convert_request.wait_for_completion(show_output=True)
converted_model = convert_request.result
print("\nSuccessfully converted: ", converted_model.name, converted_model.url, converted_model.version,
converted_model.id, converted_model.created_time, '\n')
```
### Create Docker Image
```
image_config = AccelContainerImage.image_configuration()
# Image name must be lowercase
image_name = "{}-image".format(model_name)
image = Image.create(name = image_name,
models = [converted_model],
image_config = image_config,
workspace = ws)
image.wait_for_creation(show_output=True)
# List the images by tag and get the detailed logs for any debugging.
print("Created AccelContainerImage: {} {} {}\n".format(image.name, image.creation_state, image.image_location))
```
### Deploy to the cloud
```
#Create a new Azure Kubernetes Service
from azureml.core.compute import AksCompute, ComputeTarget
# Uses the specific FPGA enabled VM (sku: Standard_PB6s)
# Standard_PB6s are available in: eastus, westus2, westeurope, southeastasia
prov_config = AksCompute.provisioning_configuration(vm_size = "Standard_PB6s",
agent_count = 1,
location = "westus2")
aks_name = 'aks-pb6-obj'
# Create the cluster
aks_target = ComputeTarget.create(workspace = ws,
name = aks_name,
provisioning_configuration = prov_config)
# Monitor deployment
aks_target.wait_for_completion(show_output=True)
print(aks_target.provisioning_state)
print(aks_target.provisioning_errors)
from azureml.core.webservice import Webservice, AksWebservice
# Set the web service configuration (for creating a test service, we don't want autoscale enabled)
# Authentication is enabled by default, but for testing we specify False
aks_config = AksWebservice.deploy_configuration(autoscale_enabled=False,
num_replicas=1,
auth_enabled = False)
aks_service_name ='fpga-aks-service'
aks_service = Webservice.deploy_from_image(workspace = ws,
name = aks_service_name,
image = image,
deployment_config = aks_config,
deployment_target = aks_target)
aks_service.wait_for_deployment(show_output = True)
```
### Test the cloud service
```
# Using the grpc client in AzureML Accelerated Models SDK
from azureml.accel import PredictionClient
address = aks_service.scoring_uri
ssl_enabled = address.startswith("https")
address = address[address.find('/')+2:].strip('/')
port = 443 if ssl_enabled else 80
print(f"address={address}, port={port}, ssl={ssl_enabled}, name={aks_service.name}")
# Initialize AzureML Accelerated Models client
client = PredictionClient(address=address,
port=port,
use_ssl=ssl_enabled,
service_name=aks_service.name)
from tfutil import visualization
output_tensors = saver.output_names
```
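The address/port string slicing above can also be done with the standard library's `urllib.parse`, which is more robust against unusual URIs; a minimal sketch using a made-up scoring URI:

```
from urllib.parse import urlparse

# Hypothetical scoring URI, for illustration only
scoring_uri = "https://example-service.westus2.cloudapp.azure.com:443/api/v1/score"
parsed = urlparse(scoring_uri)

ssl_enabled = parsed.scheme == "https"
address = parsed.hostname
port = parsed.port or (443 if ssl_enabled else 80)
print(address, port, ssl_enabled)  # example-service.westus2.cloudapp.azure.com 443 True
```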
### Visualize prediction using the deployed model
```
import glob
import matplotlib.pyplot as plt
import cv2
# Select an example image to test.
# Default directory is the image_dir created in the Finetune VGG SSD notebook.
image_dir = expanduser("~/azml_ssd_vgg/JPEGImages")
im_files = glob.glob(os.path.join(image_dir, '*.jpg'))
im_file = im_files[0]
import azureml.accel._external.ssdvgg_utils as ssdvgg_utils
result = client.score_file(path=im_file,
input_name=saver.input_name_str,
outputs=output_tensors)
classes, scores, bboxes = ssdvgg_utils.postprocess(result, select_threshold=0.5)
plt.rcParams['figure.figsize'] = 15, 15
img = cv2.imread(im_file)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
visualization.plt_bboxes(img, classes, scores, bboxes)
```
### Clean up image (optional)
```
#Delete your web service, image, and model (must be done in this order since there are dependencies).
#aks_service.delete()
#aks_target.delete()
#image.delete()
#registered_model.delete()
#converted_model.delete()
```
| github_jupyter |
# RadarCOVID-Report
## Data Extraction
```
import datetime
import json
import logging
import os
import shutil
import tempfile
import textwrap
import uuid
import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np
import pandas as pd
import retry
import seaborn as sns
%matplotlib inline
current_working_directory = os.environ.get("PWD")
if current_working_directory:
    os.chdir(current_working_directory)
sns.set()
matplotlib.rcParams["figure.figsize"] = (15, 6)
extraction_datetime = datetime.datetime.utcnow()
extraction_date = extraction_datetime.strftime("%Y-%m-%d")
extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)
extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d")
extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H")
current_hour = datetime.datetime.utcnow().hour
are_today_results_partial = current_hour != 23
```
### Constants
```
from Modules.ExposureNotification import exposure_notification_io
spain_region_country_code = "ES"
germany_region_country_code = "DE"
default_backend_identifier = spain_region_country_code
backend_generation_days = 7 * 2
daily_summary_days = 7 * 4 * 3
daily_plot_days = 7 * 4
tek_dumps_load_limit = daily_summary_days + 1
```
### Parameters
```
environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER")
if environment_backend_identifier:
    report_backend_identifier = environment_backend_identifier
else:
    report_backend_identifier = default_backend_identifier
report_backend_identifier
environment_enable_multi_backend_download = \
os.environ.get("RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD")
if environment_enable_multi_backend_download:
    report_backend_identifiers = None
else:
    report_backend_identifiers = [report_backend_identifier]
report_backend_identifiers
environment_invalid_shared_diagnoses_dates = \
os.environ.get("RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES")
if environment_invalid_shared_diagnoses_dates:
    invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(",")
else:
    invalid_shared_diagnoses_dates = []
invalid_shared_diagnoses_dates
```
### COVID-19 Cases
```
report_backend_client = \
exposure_notification_io.get_backend_client_with_identifier(
backend_identifier=report_backend_identifier)
@retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10))
def download_cases_dataframe_from_ecdc():
    return pd.read_csv(
        "https://opendata.ecdc.europa.eu/covid19/casedistribution/csv/data.csv")
confirmed_df_ = download_cases_dataframe_from_ecdc()
confirmed_df = confirmed_df_.copy()
confirmed_df = confirmed_df[["dateRep", "cases", "geoId"]]
confirmed_df.rename(
columns={
"dateRep":"sample_date",
"cases": "new_cases",
"geoId": "country_code",
},
inplace=True)
confirmed_df["sample_date"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True)
confirmed_df["sample_date"] = confirmed_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_df.sort_values("sample_date", inplace=True)
confirmed_df.tail()
def sort_source_regions_for_display(source_regions: list) -> list:
    if report_backend_identifier in source_regions:
        source_regions = [report_backend_identifier] + \
            list(sorted(set(source_regions).difference([report_backend_identifier])))
    else:
        source_regions = list(sorted(source_regions))
    return source_regions
report_source_regions = report_backend_client.source_regions_for_date(
date=extraction_datetime.date())
report_source_regions = sort_source_regions_for_display(
source_regions=report_source_regions)
report_source_regions
confirmed_days = pd.date_range(
start=confirmed_df.iloc[0].sample_date,
end=extraction_datetime)
confirmed_days_df = pd.DataFrame(data=confirmed_days, columns=["sample_date"])
confirmed_days_df["sample_date_string"] = \
confirmed_days_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_days_df.tail()
source_regions_at_date_df = confirmed_days_df.copy()
source_regions_at_date_df["source_regions_at_date"] = \
source_regions_at_date_df.sample_date.apply(
lambda x: report_backend_client.source_regions_for_date(date=x))
source_regions_at_date_df.sort_values("sample_date", inplace=True)
source_regions_at_date_df["_source_regions_group"] = source_regions_at_date_df. \
source_regions_at_date.apply(lambda x: ",".join(sort_source_regions_for_display(x)))
source_regions_at_date_df.tail()
source_regions_for_summary_df = \
source_regions_at_date_df[["sample_date", "_source_regions_group"]].copy()
source_regions_for_summary_df.rename(columns={"_source_regions_group": "source_regions"}, inplace=True)
source_regions_for_summary_df.tail()
confirmed_output_columns = ["sample_date", "new_cases", "covid_cases"]
confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns)
for source_regions_group, source_regions_group_series in \
source_regions_at_date_df.groupby("_source_regions_group"):
source_regions_set = set(source_regions_group.split(","))
confirmed_source_regions_set_df = \
confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy()
confirmed_source_regions_group_df = \
confirmed_source_regions_set_df.groupby("sample_date").new_cases.sum() \
.reset_index().sort_values("sample_date")
confirmed_source_regions_group_df["covid_cases"] = \
confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round()
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[confirmed_output_columns]
confirmed_source_regions_group_df.fillna(method="ffill", inplace=True)
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[
confirmed_source_regions_group_df.sample_date.isin(
source_regions_group_series.sample_date_string)]
confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df)
confirmed_df = confirmed_output_df.copy()
confirmed_df.tail()
confirmed_df.rename(columns={"sample_date": "sample_date_string"}, inplace=True)
confirmed_df = confirmed_days_df[["sample_date_string"]].merge(confirmed_df, how="left")
confirmed_df.sort_values("sample_date_string", inplace=True)
confirmed_df.fillna(method="ffill", inplace=True)
confirmed_df.tail()
confirmed_df[["new_cases", "covid_cases"]].plot()
```
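The 7-day smoothing above relies on pandas' rolling-window mean with `min_periods=0`, which still yields a value for the first days where fewer than 7 samples are available. A standalone sketch with made-up case counts:

```python
import pandas as pd

# Hypothetical daily new-case counts (not real data).
df = pd.DataFrame({
    "sample_date": pd.date_range("2020-09-01", periods=10),
    "new_cases": [10, 20, 30, 40, 50, 60, 70, 80, 90, 100],
})
# min_periods=0 lets short leading windows produce a mean instead of NaN,
# matching the notebook's usage.
df["covid_cases"] = df.new_cases.rolling(7, min_periods=0).mean().round()
print(df["covid_cases"].tolist())
# → [10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0, 50.0, 60.0, 70.0]
```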
### Extract API TEKs
```
raw_zip_path_prefix = "Data/TEKs/Raw/"
fail_on_error_backend_identifiers = [report_backend_identifier]
multi_backend_exposure_keys_df = \
exposure_notification_io.download_exposure_keys_from_backends(
backend_identifiers=report_backend_identifiers,
generation_days=backend_generation_days,
fail_on_error_backend_identifiers=fail_on_error_backend_identifiers,
save_raw_zip_path_prefix=raw_zip_path_prefix)
multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"]
multi_backend_exposure_keys_df.rename(
columns={
"generation_datetime": "sample_datetime",
"generation_date_string": "sample_date_string",
},
inplace=True)
multi_backend_exposure_keys_df.head()
early_teks_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.rolling_period < 144].copy()
early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6
early_teks_df[early_teks_df.sample_date_string != extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
early_teks_df[early_teks_df.sample_date_string == extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[
"sample_date_string", "region", "key_data"]]
multi_backend_exposure_keys_df.head()
active_regions = \
multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
active_regions
multi_backend_summary_df = multi_backend_exposure_keys_df.groupby(
["sample_date_string", "region"]).key_data.nunique().reset_index() \
.pivot(index="sample_date_string", columns="region") \
.sort_index(ascending=False)
multi_backend_summary_df.rename(
columns={"key_data": "shared_teks_by_generation_date"},
inplace=True)
multi_backend_summary_df.rename_axis("sample_date", inplace=True)
multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int)
multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days)
multi_backend_summary_df.head()
def compute_keys_cross_sharing(x):
teks_x = x.key_data_x.item()
common_teks = set(teks_x).intersection(x.key_data_y.item())
common_teks_fraction = len(common_teks) / len(teks_x)
return pd.Series(dict(
common_teks=common_teks,
common_teks_fraction=common_teks_fraction,
))
multi_backend_exposure_keys_by_region_df = \
multi_backend_exposure_keys_df.groupby("region").key_data.unique().reset_index()
multi_backend_exposure_keys_by_region_df["_merge"] = True
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_df.merge(
multi_backend_exposure_keys_by_region_df, on="_merge")
multi_backend_exposure_keys_by_region_combination_df.drop(
columns=["_merge"], inplace=True)
if multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1:
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_combination_df[
multi_backend_exposure_keys_by_region_combination_df.region_x !=
multi_backend_exposure_keys_by_region_combination_df.region_y]
multi_backend_exposure_keys_cross_sharing_df = \
multi_backend_exposure_keys_by_region_combination_df \
.groupby(["region_x", "region_y"]) \
.apply(compute_keys_cross_sharing) \
.reset_index()
multi_backend_cross_sharing_summary_df = \
multi_backend_exposure_keys_cross_sharing_df.pivot_table(
values=["common_teks_fraction"],
columns="region_x",
index="region_y",
aggfunc=lambda x: x.item())
multi_backend_cross_sharing_summary_df
multi_backend_without_active_region_exposure_keys_df = \
multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier]
multi_backend_without_active_region = \
multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
multi_backend_without_active_region
exposure_keys_summary_df = multi_backend_exposure_keys_df[
    multi_backend_exposure_keys_df.region == report_backend_identifier].copy()
exposure_keys_summary_df.drop(columns=["region"], inplace=True)

exposure_keys_summary_df = \
exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame()
exposure_keys_summary_df = \
exposure_keys_summary_df.reset_index().set_index("sample_date_string")
exposure_keys_summary_df.sort_index(ascending=False, inplace=True)
exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True)
exposure_keys_summary_df.head()
```
### Dump API TEKs
```
tek_list_df = multi_backend_exposure_keys_df[
["sample_date_string", "region", "key_data"]].copy()
tek_list_df["key_data"] = tek_list_df["key_data"].apply(str)
tek_list_df.rename(columns={
"sample_date_string": "sample_date",
"key_data": "tek_list"}, inplace=True)
tek_list_df = tek_list_df.groupby(
["sample_date", "region"]).tek_list.unique().reset_index()
tek_list_df["extraction_date"] = extraction_date
tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour
tek_list_path_prefix = "Data/TEKs/"
tek_list_current_path = tek_list_path_prefix + "Current/RadarCOVID-TEKs.json"
tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json"
tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json"
for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]:
os.makedirs(os.path.dirname(path), exist_ok=True)
tek_list_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json(
tek_list_current_path,
lines=True, orient="records")
tek_list_df.drop(columns=["extraction_date_with_hour"]).to_json(
tek_list_daily_path,
lines=True, orient="records")
tek_list_df.to_json(
tek_list_hourly_path,
lines=True, orient="records")
tek_list_df.head()
```
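The dump above is written as JSON Lines (`lines=True, orient="records"`: one JSON object per line). A minimal round-trip with toy data, using an in-memory buffer instead of the real dump paths:

```python
import io
import pandas as pd

# Toy stand-in for a TEK dump: one row per (sample_date, region).
tek_df = pd.DataFrame({
    "sample_date": ["2020-09-01", "2020-09-02"],
    "region": ["ES", "ES"],
    "tek_list": [["aaa", "bbb"], ["ccc"]],
})
buffer = io.StringIO()
tek_df.to_json(buffer, lines=True, orient="records")  # one record per line
buffer.seek(0)
roundtrip_df = pd.read_json(buffer, lines=True)
print(roundtrip_df["tek_list"].tolist())
# → [['aaa', 'bbb'], ['ccc']]
```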
### Load TEK Dumps
```
import glob
def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame:
extracted_teks_df = pd.DataFrame(columns=["region"])
file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json"))))
if limit:
file_paths = file_paths[:limit]
for file_path in file_paths:
logging.info(f"Loading TEKs from '{file_path}'...")
iteration_extracted_teks_df = pd.read_json(file_path, lines=True)
extracted_teks_df = extracted_teks_df.append(
iteration_extracted_teks_df, sort=False)
extracted_teks_df["region"] = \
extracted_teks_df.region.fillna(spain_region_country_code).copy()
if region:
extracted_teks_df = \
extracted_teks_df[extracted_teks_df.region == region]
return extracted_teks_df
daily_extracted_teks_df = load_extracted_teks(
mode="Daily",
region=report_backend_identifier,
limit=tek_dumps_load_limit)
daily_extracted_teks_df.head()
exposure_keys_summary_df_ = daily_extracted_teks_df \
.sort_values("extraction_date", ascending=False) \
.groupby("sample_date").tek_list.first() \
.to_frame()
exposure_keys_summary_df_.index.name = "sample_date_string"
exposure_keys_summary_df_["tek_list"] = \
exposure_keys_summary_df_.tek_list.apply(len)
exposure_keys_summary_df_ = exposure_keys_summary_df_ \
.rename(columns={"tek_list": "shared_teks_by_generation_date"}) \
.sort_index(ascending=False)
exposure_keys_summary_df = exposure_keys_summary_df_
exposure_keys_summary_df.head()
```
### Daily New TEKs
```
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply(
lambda x: set(sum(x, []))).reset_index()
tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True)
tek_list_df.head()
def compute_teks_by_generation_and_upload_date(date):
day_new_teks_set_df = tek_list_df.copy().diff()
try:
day_new_teks_set = day_new_teks_set_df[
day_new_teks_set_df.index == date].tek_list.item()
except ValueError:
day_new_teks_set = None
if pd.isna(day_new_teks_set):
day_new_teks_set = set()
day_new_teks_df = daily_extracted_teks_df[
daily_extracted_teks_df.extraction_date == date].copy()
day_new_teks_df["shared_teks"] = \
day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set))
day_new_teks_df["shared_teks"] = \
day_new_teks_df.shared_teks.apply(len)
day_new_teks_df["upload_date"] = date
day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True)
day_new_teks_df = day_new_teks_df[
["upload_date", "generation_date", "shared_teks"]]
day_new_teks_df["generation_to_upload_days"] = \
(pd.to_datetime(day_new_teks_df.upload_date) -
pd.to_datetime(day_new_teks_df.generation_date)).dt.days
day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0]
return day_new_teks_df
shared_teks_generation_to_upload_df = pd.DataFrame()
for upload_date in daily_extracted_teks_df.extraction_date.unique():
shared_teks_generation_to_upload_df = \
shared_teks_generation_to_upload_df.append(
compute_teks_by_generation_and_upload_date(date=upload_date))
shared_teks_generation_to_upload_df \
.sort_values(["upload_date", "generation_date"], ascending=False, inplace=True)
shared_teks_generation_to_upload_df.tail()
today_new_teks_df = \
shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.upload_date == extraction_date].copy()
today_new_teks_df.tail()
if not today_new_teks_df.empty:
today_new_teks_df.set_index("generation_to_upload_days") \
.sort_index().shared_teks.plot.bar()
generation_to_upload_period_pivot_df = \
shared_teks_generation_to_upload_df[
["upload_date", "generation_to_upload_days", "shared_teks"]] \
.pivot(index="upload_date", columns="generation_to_upload_days") \
.sort_index(ascending=False).fillna(0).astype(int) \
.droplevel(level=0, axis=1)
generation_to_upload_period_pivot_df.head()
new_tek_df = tek_list_df.diff().tek_list.apply(
lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()
new_tek_df.rename(columns={
"tek_list": "shared_teks_by_upload_date",
"extraction_date": "sample_date_string",}, inplace=True)
new_tek_df.tail()
shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \
[["upload_date", "shared_teks"]].rename(
columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_teks_uploaded_on_generation_date",
})
shared_teks_uploaded_on_generation_date_df.head()
estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \
.groupby(["upload_date"]).shared_teks.max().reset_index() \
.sort_values(["upload_date"], ascending=False) \
.rename(columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_diagnoses",
})
invalid_shared_diagnoses_dates_mask = \
estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates)
estimated_shared_diagnoses_df.loc[invalid_shared_diagnoses_dates_mask, "shared_diagnoses"] = 0
estimated_shared_diagnoses_df.head()
```
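The "new TEKs per day" logic works because subtracting one Python set from another is a set difference; `tek_list_df.diff()` applies that subtraction row by row against the previous extraction date. The same computation spelled out with `shift()` on toy data:

```python
import pandas as pd

# Cumulative TEK sets per extraction date (toy data).
tek_sets = pd.Series(
    [{"a", "b"}, {"a", "b", "c"}, {"a", "b", "c", "d", "e"}],
    index=["2020-09-01", "2020-09-02", "2020-09-03"],
)
# Equivalent of .diff() on object dtype: current set minus previous set,
# i.e. the keys first seen on each extraction date.
new_teks = tek_sets.combine(
    tek_sets.shift(),
    lambda cur, prev: cur - prev if isinstance(prev, set) else None)
new_counts = new_teks.apply(lambda x: len(x) if x is not None else None)
print(new_counts.tolist()[1:])
# → [1, 2]  (one new key on the 2nd date, two on the 3rd)
```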
### Hourly New TEKs
```
hourly_extracted_teks_df = load_extracted_teks(
mode="Hourly", region=report_backend_identifier, limit=25)
hourly_extracted_teks_df.head()
hourly_new_tek_count_df = hourly_extracted_teks_df \
.groupby("extraction_date_with_hour").tek_list. \
apply(lambda x: set(sum(x, []))).reset_index().copy()
hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \
.sort_index(ascending=True)
hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff()
hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply(
lambda x: len(x) if not pd.isna(x) else 0)
hourly_new_tek_count_df.rename(columns={
"new_tek_count": "shared_teks_by_upload_date"}, inplace=True)
hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[
"extraction_date_with_hour", "shared_teks_by_upload_date"]]
hourly_new_tek_count_df.head()
hourly_summary_df = hourly_new_tek_count_df.copy()
hourly_summary_df.set_index("extraction_date_with_hour", inplace=True)
hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index()
hourly_summary_df["datetime_utc"] = pd.to_datetime(
hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H")
hourly_summary_df.set_index("datetime_utc", inplace=True)
hourly_summary_df = hourly_summary_df.tail(-1)
hourly_summary_df.head()
```
### Data Merge
```
result_summary_df = exposure_keys_summary_df.merge(
new_tek_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = confirmed_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string)
result_summary_df = result_summary_df.merge(source_regions_for_summary_df, how="left")
result_summary_df.set_index(["sample_date", "source_regions"], inplace=True)
result_summary_df.drop(columns=["sample_date_string"], inplace=True)
result_summary_df.sort_index(ascending=False, inplace=True)
result_summary_df.head()
with pd.option_context("mode.use_inf_as_na", True):
result_summary_df = result_summary_df.fillna(0).astype(int)
result_summary_df["teks_per_shared_diagnosis"] = \
(result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case"] = \
(result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0)
result_summary_df.head(daily_plot_days)
weekly_result_summary_df = result_summary_df \
.sort_index(ascending=True).fillna(0).rolling(7).agg({
"covid_cases": "sum",
"shared_teks_by_generation_date": "sum",
"shared_teks_by_upload_date": "sum",
"shared_diagnoses": "sum"
}).sort_index(ascending=False)
with pd.option_context("mode.use_inf_as_na", True):
weekly_result_summary_df = weekly_result_summary_df.fillna(0).astype(int)
weekly_result_summary_df["teks_per_shared_diagnosis"] = \
(weekly_result_summary_df.shared_teks_by_upload_date / weekly_result_summary_df.shared_diagnoses).fillna(0)
weekly_result_summary_df["shared_diagnoses_per_covid_case"] = \
(weekly_result_summary_df.shared_diagnoses / weekly_result_summary_df.covid_cases).fillna(0)
weekly_result_summary_df.head()
last_7_days_summary = weekly_result_summary_df.to_dict(orient="records")[1]
last_7_days_summary
```
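The merges above use `how="outer"` so that a date present in only one of the frames still survives; its missing columns become NaN, which a later `fillna(0)` replaces. A small sketch with hypothetical dates:

```python
import pandas as pd

cases = pd.DataFrame({"sample_date_string": ["2020-09-01", "2020-09-02"],
                      "covid_cases": [100, 120]})
shared = pd.DataFrame({"sample_date_string": ["2020-09-02", "2020-09-03"],
                       "shared_diagnoses": [5, 7]})
# Outer join keeps the union of dates; unmatched cells become NaN.
merged = (cases.merge(shared, on="sample_date_string", how="outer")
          .sort_values("sample_date_string")
          .reset_index(drop=True)
          .fillna(0))
print(merged["covid_cases"].tolist(), merged["shared_diagnoses"].tolist())
# → [100.0, 120.0, 0.0] [0.0, 5.0, 7.0]
```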
## Report Results
```
display_column_name_mapping = {
"sample_date": "Sample\u00A0Date\u00A0(UTC)",
"source_regions": "Source Countries",
"datetime_utc": "Timestamp (UTC)",
"upload_date": "Upload Date (UTC)",
"generation_to_upload_days": "Generation to Upload Period in Days",
"region": "Backend",
"region_x": "Backend\u00A0(A)",
"region_y": "Backend\u00A0(B)",
"common_teks": "Common TEKs Shared Between Backends",
"common_teks_fraction": "Fraction of TEKs in Backend (A) Available in Backend (B)",
"covid_cases": "COVID-19 Cases in Source Countries (7-day Rolling Average)",
"shared_teks_by_generation_date": "Shared TEKs by Generation Date",
"shared_teks_by_upload_date": "Shared TEKs by Upload Date",
"shared_diagnoses": "Shared Diagnoses (Estimation)",
"teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis",
"shared_diagnoses_per_covid_case": "Usage Ratio (Fraction of Cases in Source Countries Which Shared Diagnosis)",
"shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date",
}
summary_columns = [
"covid_cases",
"shared_teks_by_generation_date",
"shared_teks_by_upload_date",
"shared_teks_uploaded_on_generation_date",
"shared_diagnoses",
"teks_per_shared_diagnosis",
"shared_diagnoses_per_covid_case",
]
```
### Daily Summary Table
```
result_summary_df_ = result_summary_df.copy()
result_summary_df = result_summary_df[summary_columns]
result_summary_with_display_names_df = result_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
result_summary_with_display_names_df
```
### Daily Summary Plots
```
result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \
.droplevel(level=["source_regions"]) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar(
title=f"Daily Summary",
rot=45, subplots=True, figsize=(15, 22), legend=False)
ax_ = summary_ax_list[-1]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.95)
ax_.yaxis.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))
_ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist()))
```
### Daily Generation to Upload Period Table
```
display_generation_to_upload_period_pivot_df = \
generation_to_upload_period_pivot_df \
.head(backend_generation_days)
display_generation_to_upload_period_pivot_df \
.head(backend_generation_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping)
fig, generation_to_upload_period_pivot_table_ax = plt.subplots(
figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df)))
generation_to_upload_period_pivot_table_ax.set_title(
"Shared TEKs Generation to Upload Period Table")
sns.heatmap(
data=display_generation_to_upload_period_pivot_df
.rename_axis(columns=display_column_name_mapping)
.rename_axis(index=display_column_name_mapping),
fmt=".0f",
annot=True,
ax=generation_to_upload_period_pivot_table_ax)
generation_to_upload_period_pivot_table_ax.get_figure().tight_layout()
```
### Hourly Summary Plots
```
hourly_summary_ax_list = hourly_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.plot.bar(
title=f"Last 24h Summary",
rot=45, subplots=True, legend=False)
ax_ = hourly_summary_ax_list[-1]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.9)
_ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist()))
```
### Publish Results
```
def get_temporary_image_path() -> str:
return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png")
def save_temporary_plot_image(ax):
if isinstance(ax, np.ndarray):
ax = ax[0]
media_path = get_temporary_image_path()
ax.get_figure().savefig(media_path)
return media_path
def save_temporary_dataframe_image(df):
import dataframe_image as dfi
media_path = get_temporary_image_path()
dfi.export(df, media_path)
return media_path
github_repository = os.environ.get("GITHUB_REPOSITORY")
if github_repository is None:
github_repository = "pvieito/Radar-STATS"
github_project_base_url = "https://github.com/" + github_repository
display_formatters = {
display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}",
display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}",
}
daily_summary_table_html = result_summary_with_display_names_df \
.head(daily_plot_days) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.to_html(formatters=display_formatters)
multi_backend_summary_table_html = multi_backend_summary_df \
.head(daily_plot_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(formatters=display_formatters)
def format_multi_backend_cross_sharing_fraction(x):
if pd.isna(x):
return "-"
elif round(x * 100, 1) == 0:
return ""
else:
return f"{x:.1%}"
multi_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(
classes="table-center",
formatters=display_formatters,
float_format=format_multi_backend_cross_sharing_fraction)
multi_backend_cross_sharing_summary_table_html = \
multi_backend_cross_sharing_summary_table_html \
.replace("<tr>","<tr style=\"text-align: center;\">")
extraction_date_result_summary_df = \
result_summary_df[result_summary_df.index.get_level_values("sample_date") == extraction_date]
extraction_date_result_hourly_summary_df = \
hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour]
covid_cases = \
extraction_date_result_summary_df.covid_cases.item()
shared_teks_by_generation_date = \
extraction_date_result_summary_df.shared_teks_by_generation_date.item()
shared_teks_by_upload_date = \
extraction_date_result_summary_df.shared_teks_by_upload_date.item()
shared_diagnoses = \
extraction_date_result_summary_df.shared_diagnoses.item()
teks_per_shared_diagnosis = \
extraction_date_result_summary_df.teks_per_shared_diagnosis.item()
shared_diagnoses_per_covid_case = \
extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item()
shared_teks_by_upload_date_last_hour = \
extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int)
display_source_regions = ", ".join(report_source_regions)
if len(report_source_regions) == 1:
display_brief_source_regions = report_source_regions[0]
else:
display_brief_source_regions = f"{len(report_source_regions)} 🇪🇺"
summary_plots_image_path = save_temporary_plot_image(
ax=summary_ax_list)
summary_table_image_path = save_temporary_dataframe_image(
df=result_summary_with_display_names_df)
hourly_summary_plots_image_path = save_temporary_plot_image(
ax=hourly_summary_ax_list)
multi_backend_summary_table_image_path = save_temporary_dataframe_image(
df=multi_backend_summary_df)
generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image(
ax=generation_to_upload_period_pivot_table_ax)
```
### Save Results
```
report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-"
result_summary_df.to_csv(
report_resources_path_prefix + "Summary-Table.csv")
result_summary_df.to_html(
report_resources_path_prefix + "Summary-Table.html")
hourly_summary_df.to_csv(
report_resources_path_prefix + "Hourly-Summary-Table.csv")
multi_backend_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Summary-Table.csv")
multi_backend_cross_sharing_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Cross-Sharing-Summary-Table.csv")
generation_to_upload_period_pivot_df.to_csv(
report_resources_path_prefix + "Generation-Upload-Period-Table.csv")
_ = shutil.copyfile(
summary_plots_image_path,
report_resources_path_prefix + "Summary-Plots.png")
_ = shutil.copyfile(
summary_table_image_path,
report_resources_path_prefix + "Summary-Table.png")
_ = shutil.copyfile(
hourly_summary_plots_image_path,
report_resources_path_prefix + "Hourly-Summary-Plots.png")
_ = shutil.copyfile(
multi_backend_summary_table_image_path,
report_resources_path_prefix + "Multi-Backend-Summary-Table.png")
_ = shutil.copyfile(
generation_to_upload_period_pivot_table_image_path,
report_resources_path_prefix + "Generation-Upload-Period-Table.png")
```
### Publish Results as JSON
```
def generate_summary_api_results(df: pd.DataFrame) -> list:
api_df = df.reset_index().copy()
api_df["sample_date_string"] = \
api_df["sample_date"].dt.strftime("%Y-%m-%d")
api_df["source_regions"] = \
api_df["source_regions"].apply(lambda x: x.split(","))
return api_df.to_dict(orient="records")
summary_api_results = \
generate_summary_api_results(df=result_summary_df)
today_summary_api_results = \
generate_summary_api_results(df=extraction_date_result_summary_df)[0]
summary_results = dict(
backend_identifier=report_backend_identifier,
source_regions=report_source_regions,
extraction_datetime=extraction_datetime,
extraction_date=extraction_date,
extraction_date_with_hour=extraction_date_with_hour,
last_hour=dict(
shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour,
shared_diagnoses=0,
),
today=today_summary_api_results,
last_7_days=last_7_days_summary,
daily_results=summary_api_results)
summary_results = \
json.loads(pd.Series([summary_results]).to_json(orient="records"))[0]
with open(report_resources_path_prefix + "Summary-Results.json", "w") as f:
json.dump(summary_results, f, indent=4)
```
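The `pd.Series([...]).to_json(...)` round-trip just above exists because numpy scalars (and pandas timestamps) are not serializable by the standard `json` module; pandas' JSON writer coerces them to plain JSON values first. A minimal demonstration:

```python
import json
import numpy as np
import pandas as pd

summary = dict(shared_diagnoses=np.int64(5), usage_ratio=np.float64(0.02))
try:
    json.dumps(summary)          # np.int64 is not JSON-serializable
    raised = False
except TypeError:
    raised = True
# Route the dict through pandas' JSON writer, then back into plain Python.
serializable = json.loads(pd.Series([summary]).to_json(orient="records"))[0]
print(raised, serializable)
# → True {'shared_diagnoses': 5, 'usage_ratio': 0.02}
```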
### Publish on README
```
with open("Data/Templates/README.md", "r") as f:
readme_contents = f.read()
readme_contents = readme_contents.format(
extraction_date_with_hour=extraction_date_with_hour,
github_project_base_url=github_project_base_url,
daily_summary_table_html=daily_summary_table_html,
multi_backend_summary_table_html=multi_backend_summary_table_html,
multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html,
display_source_regions=display_source_regions)
with open("README.md", "w") as f:
f.write(readme_contents)
```
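The README template is filled with named `str.format` placeholders. A sketch with a hypothetical miniature template (note that any literal `{`/`}` a real template contains must be doubled as `{{`/`}}`):

```python
# Hypothetical miniature of Data/Templates/README.md
template = ("# RadarCOVID Report\n"
            "Extraction: {extraction_date_with_hour}\n"
            "Sources: {display_source_regions}\n")
readme_contents = template.format(
    extraction_date_with_hour="2020-09-07@12",
    display_source_regions="ES",
)
print(readme_contents)
```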
### Publish on Twitter
```
enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER")
github_event_name = os.environ.get("GITHUB_EVENT_NAME")
if enable_share_to_twitter and github_event_name == "schedule" and \
(shared_teks_by_upload_date_last_hour or not are_today_results_partial):
import tweepy
twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"]
twitter_api_auth_keys = twitter_api_auth_keys.split(":")
auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1])
auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3])
api = tweepy.API(auth)
summary_plots_media = api.media_upload(summary_plots_image_path)
summary_table_media = api.media_upload(summary_table_image_path)
generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path)
media_ids = [
summary_plots_media.media_id,
summary_table_media.media_id,
generation_to_upload_period_pivot_table_image_media.media_id,
]
if are_today_results_partial:
today_addendum = " (Partial)"
else:
today_addendum = ""
status = textwrap.dedent(f"""
#RadarCOVID – {extraction_date_with_hour}
Source Countries: {display_brief_source_regions}
Today{today_addendum}:
- Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour)
- Shared Diagnoses: ≤{shared_diagnoses:.0f}
- Usage Ratio: ≤{shared_diagnoses_per_covid_case:.2%}
Last 7 Days:
- Shared Diagnoses: ≤{last_7_days_summary["shared_diagnoses"]:.0f}
- Usage Ratio: ≤{last_7_days_summary["shared_diagnoses_per_covid_case"]:.2%}
Info: {github_project_base_url}#documentation
""")
status = status.encode(encoding="utf-8")
api.update_status(status=status, media_ids=media_ids)
```
## Color Processing
In this lab, we will learn to use color features in different color spaces to extract useful information from images. This notebook includes both coding and written questions. Please hand in this notebook file with all outputs and your answers.
Import OpenCV, NumPy, and Matplotlib as usual
```
import cv2
import math
import time
import threading
import numpy as np
import random as rng
from matplotlib import pyplot as plt
from ipywidgets import interact,interactive
import ipywidgets as widgets
from IPython.display import display, HTML,clear_output
import IPython.display
%matplotlib inline
```
## Grayscale Image Thresholding
Thresholding is the simplest method of image segmentation. This non-linear operation converts a grayscale image into a binary image, where the two colors (black/white) are assigned to pixels that fall below or above the specified threshold. <br>
Lena comes again! Can you adjust both sliders to segment Lena's skin?
```
inputImage = cv2.imread("assets/lena_std.tif",cv2.IMREAD_GRAYSCALE)
def grayscaleThresholding(minValue,maxValue):
thresholdImage = np.logical_and(inputImage > minValue, inputImage < maxValue)
inputImageCopy = inputImage.copy()
cv2.rectangle(inputImageCopy,(250,400),(340,500),255,3)
cropRegion = inputImage[400:500,250:340]
plt.figure(figsize=(10,10))
plt.subplot(131)
plt.title("Lena Image")
plt.imshow(inputImageCopy, cmap='gray')
plt.subplot(132)
plt.title("Segmentation Mask")
plt.imshow(thresholdImage, cmap='gray')
plt.subplot(133)
plt.title("Pixel Value Distribution")
    plt.hist(cropRegion.ravel(), range=(0,255))
plt.show()
interact(grayscaleThresholding, minValue=widgets.IntSlider(min=0,max=255,step=1,value=1),maxValue=widgets.IntSlider(min=0,max=255,step=1,value=1));
```
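The mask above keeps pixels strictly between the two slider values. The same `np.logical_and` band threshold on a tiny array, without the widget machinery:

```python
import numpy as np

# Toy 8-bit "image"; keep pixels with min_value < p < max_value.
img = np.array([[10, 120, 200],
                [90, 130, 250]], dtype=np.uint8)
min_value, max_value = 100, 220
mask = np.logical_and(img > min_value, img < max_value)
print(mask.astype(int))
# → [[0 1 1]
#    [0 1 0]]
```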
## Simple Image Segmentation using Color
As you can see from the sample above, grayscale information alone is usually not enough to segment "things" from images. In this section we will apply simple color segmentation in various color spaces. The following block is a code snippet that retrieves images from your webcam and applies thresholding to the BGR image using the defined values.
```
bMin = 50; bMax = 120
gMin = 6; gMax = 60
rMin = 5; rMax = 60
cameraNo = 0
# You can press the "Interrupt Kernel" button to stop the webcam
inputStream = cv2.VideoCapture(cameraNo)
try:
while True:
_, videoFrameBGR = inputStream.read()
if videoFrameBGR is not None:
outputVideoFrameBGR = videoFrameBGR.copy()
# Draw ROI
cv2.rectangle(outputVideoFrameBGR,(100,100),(200,200),(0,255,0),3)
# Cropped Region
croppedRegion = videoFrameBGR[100:200,100:200,:]
mask = cv2.inRange(videoFrameBGR,(bMin,gMin,rMin),(bMax,gMax,rMax))[:,:,np.newaxis]
mask = np.repeat(mask,3,axis=2)
## Draw Min/Max pixel value in BGR order on image
cv2.putText(outputVideoFrameBGR,str(np.min(croppedRegion[:,:,0]))+'/'+str(np.min(croppedRegion[:,:,1]))+'/'+str(np.min(croppedRegion[:,:,2])),(20,20),cv2.FONT_HERSHEY_SIMPLEX,1.0,(0,0,255))
cv2.putText(outputVideoFrameBGR,str(np.max(croppedRegion[:,:,0]))+'/'+str(np.max(croppedRegion[:,:,1]))+'/'+str(np.max(croppedRegion[:,:,2])),(20,50),cv2.FONT_HERSHEY_SIMPLEX,1.0,(0,0,255))
outputVideoFrameBGR = np.hstack((outputVideoFrameBGR,mask))
# Encode image as jpg numpy array
_, buf = cv2.imencode(".jpg", outputVideoFrameBGR)
# Draw result
IPython.display.display(IPython.display.Image(data=buf))
clear_output(wait=True)
else:
print("Cannot Open Webcam, hw problem?")
break
except KeyboardInterrupt:
print ("Stream stopped")
inputStream.release()
```
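`cv2.inRange(src, lowerb, upperb)` sets an output pixel to 255 when every channel lies inside the inclusive `[lowerb, upperb]` bounds, and to 0 otherwise. A NumPy-only equivalent, handy for experimenting without a webcam (the equivalence is my reading of the OpenCV docs, not part of the lab):

```python
import numpy as np

def in_range(img, lower, upper):
    """NumPy sketch of cv2.inRange: 255 where all channels are in range."""
    inside = np.all((img >= np.asarray(lower)) & (img <= np.asarray(upper)),
                    axis=-1)
    return (inside * 255).astype(np.uint8)

# One row of two BGR pixels: a dark blue-ish pixel and a bright grey one.
frame = np.array([[[60, 10, 10], [200, 200, 200]]], dtype=np.uint8)
mask = in_range(frame, (50, 6, 5), (120, 60, 60))
print(mask)
# → [[255   0]]
```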
Since the slider widget does not support the for-loop webcam retrieval method that we use, we can instead use the built-in OpenCV GUI library to create a color-range slider with the following code. (A window named <b>"Color Segmentation"</b> will pop up!)
```
def sliderCallback(x):
pass
# Create a OpenCV Window
windowName = 'Color Segmentation'
cv2.namedWindow(windowName)
cv2.createTrackbar('bMin',windowName,0,255,sliderCallback)
cv2.createTrackbar('gMin',windowName,0,255,sliderCallback)
cv2.createTrackbar('rMin',windowName,0,255,sliderCallback)
cv2.createTrackbar('bMax',windowName,0,255,sliderCallback)
cv2.createTrackbar('gMax',windowName,0,255,sliderCallback)
cv2.createTrackbar('rMax',windowName,0,255,sliderCallback)
inputStream = cv2.VideoCapture(cameraNo)
try:
while True:
_, videoFrameBGR = inputStream.read()
if videoFrameBGR is not None:
bMin = cv2.getTrackbarPos('bMin',windowName)
gMin = cv2.getTrackbarPos('gMin',windowName)
rMin = cv2.getTrackbarPos('rMin',windowName)
bMax = cv2.getTrackbarPos('bMax',windowName)
gMax = cv2.getTrackbarPos('gMax',windowName)
rMax = cv2.getTrackbarPos('rMax',windowName)
print((bMin,gMin,rMin),(bMax,gMax,rMax) )
mask = cv2.inRange(videoFrameBGR,(bMin,gMin,rMin),(bMax,gMax,rMax))[:,:,np.newaxis]
mask = np.repeat(mask,3,axis=2)
outputVideoFrameBGR = videoFrameBGR.copy()
outputVideoFrameBGR = np.hstack((outputVideoFrameBGR,mask))
cv2.imshow(windowName,outputVideoFrameBGR)
if cv2.waitKey(1) == ord('q'):
cv2.destroyAllWindows()
break
else:
print("Cannot Open Webcam, hw problem?")
break
except KeyboardInterrupt:
print ("Stream stopped")
inputStream.release()
cv2.destroyAllWindows()
```
OpenCV supports many well-known colorspaces. You can apply a colorspace transformation using <a href="https://docs.opencv.org/3.4.2/d7/d1b/group__imgproc__misc.html#ga397ae87e1288a81d2363b61574eb8cab">cv2.cvtColor</a> and see the list of supported transformation flags <a href="https://docs.opencv.org/3.4.2/d7/d1b/group__imgproc__misc.html#ga4e0972be5de079fed4e3a10e24ef5ef0">here</a>. Try to apply color segmentation to any object in another colorspace <b>(NOT BGR!!)</b> by filling in the following block.
```
import math
#Hmin = 100
cameraNo = 0
def sliderCallback(x):
pass
# Create a OpenCV Window
windowName = 'Color Segmentation'
cv2.namedWindow(windowName)
cv2.createTrackbar('Hmin',windowName,0,360,sliderCallback)
cv2.createTrackbar('Smin',windowName,0,100,sliderCallback)
cv2.createTrackbar('Vmin',windowName,0,100,sliderCallback)
cv2.createTrackbar('Hmax',windowName,0,360,sliderCallback)
cv2.createTrackbar('Smax',windowName,0,100,sliderCallback)
cv2.createTrackbar('Vmax',windowName,0,100,sliderCallback)
inputStream = cv2.VideoCapture(cameraNo)
try:
while True:
_, videoFrameBGR = inputStream.read()
if videoFrameBGR is not None:
# H=[0,179], S,V = [0,255] REF: https://stackoverflow.com/questions/10948589/choosing-the-correct-upper-and-lower-hsv-boundaries-for-color-detection-withcv
hsv_frame = cv2.cvtColor(videoFrameBGR, cv2.COLOR_BGR2HSV)
Hmin = cv2.getTrackbarPos('Hmin',windowName)//2
Smin = math.floor(cv2.getTrackbarPos('Smin',windowName)*2.55)
Vmin = math.floor(cv2.getTrackbarPos('Vmin',windowName)*2.55)
Hmax = cv2.getTrackbarPos('Hmax',windowName)//2
Smax = math.floor(cv2.getTrackbarPos('Smax',windowName)*2.55)
Vmax = math.floor(cv2.getTrackbarPos('Vmax',windowName)*2.55)
# parameter for face detection : I tune it myself
# Hmin = 9//2
# Smin = math.floor(9*2.55)
# Vmin = math.floor(29*2.55)
# Hmax = 30//2
# Smax = math.floor(46*2.55)
# Vmax = math.floor(64*2.55)
# print( (Hmin,Smin,Vmin),(Hmax,Smax,Vmax))
mask = cv2.inRange(hsv_frame,(Hmin,Smin,Vmin),(Hmax,Smax,Vmax))[:,:,np.newaxis]
mask = np.repeat(mask,3,axis=2)
output = np.hstack((videoFrameBGR,mask))
cv2.imshow(windowName,output)
if cv2.waitKey(1) == ord('q'):
cv2.destroyAllWindows()
break
else:
print("Cannot Open Webcam, hw problem?")
break
except KeyboardInterrupt:
print ("Stream stopped")
inputStream.release()
cv2.destroyAllWindows()
```
## Morphological Transformations
The field of mathematical morphology contributes a wide range of operators to image processing, all based on simple mathematical concepts from set theory. Morphological transformations are operations based on the image shape, applied to binary images. Each operation needs two inputs: a binary image, and a <b>structuring element (kernel)</b> that determines the nature of the operation. You can design the kernel to suit your application's needs. The two basic morphological operators are erosion and dilation.
The following mask image was segmented using color information. You can see that some of the hand's pixels are not connected into a complete hand shape. We can correct this using the basic morphological operators.
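To make the set-theory idea concrete, here is a pure-NumPy sketch of binary erosion with a 3×3 rectangular kernel (conceptually what `cv2.erode` computes; OpenCV's implementation is far more optimized): a pixel survives only if its entire 3×3 neighborhood is foreground.

```python
import numpy as np

def erode3x3(mask):
    """Binary erosion: keep a pixel only if all of its 3x3 neighbors are set."""
    padded = np.pad(mask, 1, mode='constant')  # zero (background) border
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= padded[1 + dy : 1 + dy + mask.shape[0],
                          1 + dx : 1 + dx + mask.shape[1]]
    return out

square = np.zeros((5, 5), dtype=np.uint8)
square[1:4, 1:4] = 1           # a 3x3 foreground block
print(erode3x3(square).sum())  # 1 -- eroded down to the single center pixel
```

Dilation is the dual operation: a pixel is set if *any* neighbor is foreground (replace the AND-accumulation with OR).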
```
handMask = cv2.imread('assets/SegmentedHand.png',cv2.IMREAD_GRAYSCALE)
plt.title('Segmented Hand Mask')
plt.imshow(handMask,cmap='gray')
plt.show()
def openAndCloseMorph(kernelSize,kernelShape, morphType):
kernel = cv2.getStructuringElement(kernelShape,(kernelSize,kernelSize))
outputImage = handMask.copy()
if morphType == 'erode':
outputImage = cv2.erode(outputImage,kernel,iterations = 1)
else:
outputImage = cv2.dilate(outputImage,kernel,iterations = 1)
plt.figure(figsize=(5,5))
plt.imshow(outputImage, cmap='gray')
plt.show()
print('Morphology Kernel Shape:')
display(kernel)
interact(openAndCloseMorph, kernelSize=widgets.IntSlider(min=1,max=11,step=1,value=1),
kernelShape=widgets.Dropdown(
options=[cv2.MORPH_RECT,cv2.MORPH_ELLIPSE, cv2.MORPH_CROSS],
value=cv2.MORPH_RECT,
description='kernelShape:',
disabled=False),
morphType=widgets.Dropdown(
options=['erode','dilate'],
value='erode',
description='Morph Type:',
disabled=False)
);
```
This <a href="https://docs.opencv.org/3.4.2/d9/d61/tutorial_py_morphological_ops.html">page</a> shows good morphological operation examples. Try to write an interactive visualization like the sample above for the <b>Opening and Closing</b> operations, and inspect the output results yourself.
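As a quick reminder before the exercise: opening is erosion followed by dilation (it removes small specks), while closing is dilation followed by erosion (it fills small holes). Below is a pure-NumPy sketch of the two compositions with a 3×3 rectangular kernel; `cv2.morphologyEx` with `cv2.MORPH_OPEN`/`cv2.MORPH_CLOSE` is the real API.

```python
import numpy as np

def _morph(mask, op):
    """Apply min (erode) or max (dilate) over each pixel's 3x3 neighborhood."""
    padded = np.pad(mask, 1, mode='constant')
    windows = [padded[1 + dy : 1 + dy + mask.shape[0], 1 + dx : 1 + dx + mask.shape[1]]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return op(np.stack(windows), axis=0).astype(mask.dtype)

def opening(mask):  # erosion then dilation: removes isolated specks
    return _morph(_morph(mask, np.min), np.max)

def closing(mask):  # dilation then erosion: fills small holes
    return _morph(_morph(mask, np.max), np.min)

blob = np.zeros((7, 7), dtype=np.uint8)
blob[1:6, 1:6] = 1
blob[3, 3] = 0        # a one-pixel hole inside the blob
blob[0, 6] = 1        # a one-pixel speck in the corner
opened, closed = opening(blob), closing(blob)
print(opened[0, 6], closed[3, 3])  # 0 1 -- speck removed, hole filled
```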
```
def openAndCloseMorph(kernelSize,kernelShape, morphType):
kernel = cv2.getStructuringElement(kernelShape,(kernelSize,kernelSize))
outputImage = handMask.copy()
if morphType == 'erode':
outputImage = cv2.erode(outputImage,kernel,iterations = 1)
if morphType == 'dilate':
outputImage = cv2.dilate(outputImage,kernel,iterations = 1)
if morphType == 'open':
outputImage = cv2.morphologyEx(outputImage, cv2.MORPH_OPEN, kernel)
if morphType == 'close':
outputImage = cv2.morphologyEx(outputImage, cv2.MORPH_CLOSE, kernel)
plt.figure(figsize=(5,5))
plt.imshow(outputImage, cmap='gray')
plt.show()
print('Morphology Kernel Shape:')
display(kernel)
interact(openAndCloseMorph, kernelSize=widgets.IntSlider(min=1,max=11,step=2,value=1),
kernelShape=widgets.Dropdown(
options=[cv2.MORPH_RECT,cv2.MORPH_ELLIPSE, cv2.MORPH_CROSS],
value=cv2.MORPH_RECT,
description='kernelShape:',
disabled=False),
morphType=widgets.Dropdown(
options=['erode','dilate','open','close'],
value='erode',
description='Morph Type:',
disabled=False)
);
```
## Color-Based Face Detector
<img src="assets/funnyface.gif"/>
By using the knowledge from lectures 1-4, you should be able to write your own simple color-based face detector. Use the code snippets above to help you write it. The output should be code that retrieves the video feed from <b>your webcam</b> and draws bounding boxes around detected faces. Write the detection results to video files and hand them in with this notebook. There should be <b>two video sequences</b>: one in good lighting and one in a different lighting condition. The output videos should demonstrate the robustness of your algorithm. (Optional) You will get extra points if you can use the <b>same parameters</b> for both sequences.
<b>Basic Guidance:</b>
1. Create a "face color segmentation mask" using your choice colorspace.
2. Filter out the outlier pixel!
3. Categorize each connected component into group by using cv2.findContours (from Lab 3)
4. Find the bounding box which can enclose those connect components by <a href="https://docs.opencv.org/3.4.2/d3/dc0/group__imgproc__shape.html#gacb413ddce8e48ff3ca61ed7cf626a366">cv2.boundingRect</a>
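For step 4, `cv2.boundingRect` returns `(x, y, w, h)` for a contour. If you just want the tight bounding box of all foreground pixels in a binary mask, a pure-NumPy sketch (an illustration, not a replacement for per-contour boxes) looks like this:

```python
import numpy as np

def mask_bounding_rect(mask):
    """(x, y, w, h) of the tight box around non-zero pixels, like cv2.boundingRect."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # nothing segmented
    x, y = xs.min(), ys.min()
    return (int(x), int(y), int(xs.max() - x + 1), int(ys.max() - y + 1))

mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:5, 3:8] = 255
print(mask_bounding_rect(mask))  # (3, 2, 5, 3)
```

Unlike the contour-based approach, this returns a single box around *all* foreground pixels, so it only works well after you have isolated one connected component (e.g. the largest contour).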
<b>Hints:</b>
- From today's lecture, how do you discard noise / fill small holes in the color segmentation mask output?
- Since this is a color-based problem, you can use earlier knowledge from lectures 1-3 to improve the segmentation result by applying <b>?</b> to the input image
- You can use specific thresholds based on shape properties, or simple morphological operations, to keep only the potential contours
- To achieve a better result in both lighting conditions, you may need to do some data analysis on the <b>region of interest</b> by plotting each channel's values and inspecting their distributions.
- The Internet is your friend. You can search for relevant research papers and use their algorithms/implementations, but you must <b>give proper credit</b> by citing them in this notebook.
```
### Describe how your algorithm work here (Thai or English). You can provide any visualization if you want.
'''
1. Convert the BGR image to an HSV image, which makes color segmentation simpler.
2. Tune the HSV range to match my face color.
3. Apply thresholding with the tuned HSV values to the image to get a mask.
4. Apply pre-processing techniques: // ref: https://www.pyimagesearch.com/2014/08/18/skin-detection-step-step-example-using-python-opencv/
4.1: Apply an "ellipse" structuring element to extract my face from the mask.
4.2: Apply dilation/erosion to eliminate noise, so the contours can be found accurately.
5. Find the contours.
6. Find the contour with the biggest area (likely to be the face).
7. I tried it in both conditions, dark and bright environments; the results are very accurate in both cases (without changing any parameters).
'''
#
import math
#Hmin = 100
cameraNo = 0
def sliderCallback(x):
pass
def plot_img(img,title):
plt.figure(figsize=(10,10))
plt.title(title)
plt.imshow(img)
plt.show()
def draw_contours(inputImage,contours):
img = np.array(inputImage)
import random as rng
for rcbContourIdx in range(len(contours)):
color = (rng.randint(0,256), rng.randint(0,256), rng.randint(0,256))
# Calculates the bounding rectangle of a contour
x, y, w, h = cv2.boundingRect(contours[rcbContourIdx])
cv2.drawContours(img, contours, rcbContourIdx, [255,0,0], 2)
cv2.rectangle(img,(x,y),(x+w,y+h),[0,0,255],3)
return img
def find_bigest_countour(input_img,contours ):
if len(contours) == 0: return input_img,None
biggest_contour_img = np.array(input_img)
areas = [cv2.contourArea(a) for a in contours]
sorted_areas = sorted(areas,reverse=True)
biggest_contour_idx = areas.index(sorted_areas[0])
biggest_countor = contours[biggest_contour_idx]
# cv2.drawContours(biggest_contour_img, contours, biggest_contour_idx, [0,0,255], 2)
x, y, w, h = cv2.boundingRect(contours[biggest_contour_idx])
cv2.rectangle(biggest_contour_img,(x,y),(x+w,y+h),[0,0,255],3)
return biggest_contour_img,biggest_contour_idx
def write_video(videos_frames,title):
inputWidth, inputHeight = videos_frames[0].shape[1], videos_frames[0].shape[0]
fname = f'{title}.mp4'
outputStream = cv2.VideoWriter(fname,
cv2.VideoWriter_fourcc('x', '2', '6', '4'),
25, (inputWidth, inputHeight))
for frame in videos_frames:
# Write frame to outputStream
outputStream.write(frame)
# Encode image as jpg numpy array
_, buffer = cv2.imencode(".jpg", frame)
# Draw result
IPython.display.display(IPython.display.Image(data=buffer))
# Discard old output
clear_output(wait=True)
outputStream.release()
print("write video file complete")
# Create a OpenCV Window
windowName = 'Color Segmentation'
cv2.namedWindow(windowName)
cv2.createTrackbar('Hmin',windowName,0,360,sliderCallback)
cv2.createTrackbar('Smin',windowName,0,100,sliderCallback)
cv2.createTrackbar('Vmin',windowName,0,100,sliderCallback)
cv2.createTrackbar('Hmax',windowName,0,360,sliderCallback)
cv2.createTrackbar('Smax',windowName,0,100,sliderCallback)
cv2.createTrackbar('Vmax',windowName,0,100,sliderCallback)
video_frames = []; face_detect_frames = []
inputStream = cv2.VideoCapture(cameraNo)
try:
while True:
_, videoFrameBGR = inputStream.read()
if videoFrameBGR is not None:
video_frames.append(videoFrameBGR)
hsv_frame = cv2.cvtColor(videoFrameBGR, cv2.COLOR_BGR2HSV)
# parameter for face detection : I tune it myself
Hmin = 9//2
Smin = math.floor(9*2.55)
Vmin = math.floor(29*2.55)
Hmax = 30//2
Smax = math.floor(46*2.55)
Vmax = math.floor(64*2.55)
# print( (Hmin,Smin,Vmin),(Hmax,Smax,Vmax))
mask = cv2.inRange(hsv_frame,(Hmin,Smin,Vmin),(Hmax,Smax,Vmax))[:,:,np.newaxis]
mask = np.repeat(mask,3,axis=2)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
face_mask = cv2.erode(mask, kernel, iterations = 2)
face_mask = cv2.dilate(face_mask, kernel, iterations = 2)
face = videoFrameBGR&face_mask
# plot_img(cv2.cvtColor(face, cv2.COLOR_BGR2RGB),'face_mask')
face_mask_gray = cv2.cvtColor(face_mask, cv2.COLOR_BGR2GRAY)
contours, _ = cv2.findContours(face_mask_gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contours_img = draw_contours(videoFrameBGR,contours)
biggest_contour_img, biggest_contour_idx = find_bigest_countour(videoFrameBGR,contours )
face_detect_frames.append(biggest_contour_img)
output = np.hstack((videoFrameBGR,contours_img,biggest_contour_img))
cv2.imshow(windowName,output)
if cv2.waitKey(1) == ord('q'):
cv2.destroyAllWindows()
break
else:
print("Cannot Open Webcam, hw problem?")
break
except KeyboardInterrupt:
print ("Stream stopped")
inputStream.release()
write_video(video_frames,'origin')
write_video(face_detect_frames,'face_detection')
cv2.destroyAllWindows()
def openVideo(fname):
inputStream = cv2.VideoCapture(fname)
try:
while True:
_, videoFrame = inputStream.read()
if videoFrame is not None:
# Encode image as jpg numpy array
_, buffer = cv2.imencode(".jpg", videoFrame)
# Draw result
IPython.display.display(IPython.display.Image(data=buffer))
# Discard old output
clear_output(wait=True)
else:
print("End of File")
break
except KeyboardInterrupt:
print("Stop by user")
inputStream.release()
openVideo('origin.mp4')
openVideo('face_detection.mp4')
# test on a static cap_screen image
import math
def plot_img(img,title):
plt.figure(figsize=(10,10))
plt.title(title)
plt.imshow(img)
plt.show()
def cap_screen(fname):
cameraNo = 0
inputStream = cv2.VideoCapture(cameraNo)
for i in range(4):
print(str(i)+"!")
time.sleep(1)
print("Action!")
_, videoFrameBGR = inputStream.read()
inputStream.release()
plot_img(cv2.cvtColor(videoFrameBGR, cv2.COLOR_BGR2RGB),'original_frame')
inputStream.release()
cv2.imwrite(fname,videoFrameBGR)
print("writing an image done!")
return videoFrameBGR
def load_img(path):
img = cv2.imread(path)
plot_img(cv2.cvtColor(img, cv2.COLOR_BGR2RGB),'original_frame')
return img
def hsv_filter(Hmin,Smin,Vmin,Hmax,Smax,Vmax):
hsv_frame = cv2.cvtColor(videoFrameBGR, cv2.COLOR_BGR2HSV)
Hmin = Hmin//2.0
Smin = math.floor(Smin*2.55)
Vmin = math.floor(Vmin*2.55)
Hmax = Hmax//2.0
Smax = math.floor(Smax*2.55)
Vmax = math.floor(Vmax*2.55)
print( (Hmin,Smin,Vmin),(Hmax,Smax,Vmax))
mask = cv2.inRange(hsv_frame,(Hmin,Smin,Vmin),(Hmax,Smax,Vmax))[:,:,np.newaxis]
mask = np.repeat(mask,3,axis=2)
plot_img(mask,'mask')
videoFrameBGR = load_img('it_s_me.jpg')
interact(hsv_filter,
Hmin=widgets.IntSlider(min=0,max=360,step=1,value=11),
Smin=widgets.IntSlider(min=0,max=100,step=1,value=23),
Vmin=widgets.IntSlider(min=0,max=100,step=1,value=25),
Hmax=widgets.IntSlider(min=0,max=360,step=1,value=36),
Smax=widgets.IntSlider(min=0,max=100,step=1,value=54),
Vmax=widgets.IntSlider(min=0,max=100,step=1,value=74));
import math
#Hmin = 100
cameraNo = 0
def sliderCallback(x):
pass
def plot_img(img,title):
plt.figure(figsize=(10,10))
plt.title(title)
plt.imshow(img)
plt.show()
def draw_contours(inputImage,contours):
img = np.array(inputImage)
import random as rng
for rcbContourIdx in range(len(contours)):
color = (rng.randint(0,256), rng.randint(0,256), rng.randint(0,256))
# Calculates the bounding rectangle of a contour
x, y, w, h = cv2.boundingRect(contours[rcbContourIdx])
cv2.drawContours(img, contours, rcbContourIdx, [255,0,0], 2)
cv2.rectangle(img,(x,y),(x+w,y+h),[0,0,255],3)
return img
def find_bigest_countour(input_img,contours ):
if len(contours) == 0: return input_img,None
biggest_contour_img = np.array(input_img)
areas = [cv2.contourArea(a) for a in contours]
sorted_areas = sorted(areas,reverse=True)
biggest_contour_idx = areas.index(sorted_areas[0])
biggest_countor = contours[biggest_contour_idx]
# cv2.drawContours(biggest_contour_img, contours, biggest_contour_idx, [0,0,255], 2)
x, y, w, h = cv2.boundingRect(contours[biggest_contour_idx])
cv2.rectangle(biggest_contour_img,(x,y),(x+w,y+h),[0,0,255],3)
return biggest_contour_img,biggest_contour_idx
def write_video(videos_frames,title):
inputWidth, inputHeight = videos_frames[0].shape[1], videos_frames[0].shape[0]
fname = f'{title}.mp4'
outputStream = cv2.VideoWriter(fname,
cv2.VideoWriter_fourcc('x', '2', '6', '4'),
25, (inputWidth, inputHeight))
for frame in videos_frames:
# Write frame to outputStream
outputStream.write(frame)
# Encode image as jpg numpy array
_, buffer = cv2.imencode(".jpg", frame)
# Draw result
IPython.display.display(IPython.display.Image(data=buffer))
# Discard old output
clear_output(wait=True)
outputStream.release()
print("write video file complete")
# Create a OpenCV Window
windowName = 'Color Segmentation'
cv2.namedWindow(windowName)
cv2.createTrackbar('Hmin',windowName,0,360,sliderCallback)
cv2.createTrackbar('Smin',windowName,0,100,sliderCallback)
cv2.createTrackbar('Vmin',windowName,0,100,sliderCallback)
cv2.createTrackbar('Hmax',windowName,0,360,sliderCallback)
cv2.createTrackbar('Smax',windowName,0,100,sliderCallback)
cv2.createTrackbar('Vmax',windowName,0,100,sliderCallback)
video_frames = []; face_detect_frames = []
inputStream = cv2.VideoCapture(cameraNo)
try:
while True:
_, videoFrameBGR = inputStream.read()
if videoFrameBGR is not None:
video_frames.append(videoFrameBGR)
hsv_frame = cv2.cvtColor(videoFrameBGR, cv2.COLOR_BGR2HSV)
# parameter for face detection : I tune it myself
Hmin = 11//2
Smin = math.floor(23*2.55)
Vmin = math.floor(25*2.55)
Hmax = 36//2
Smax = math.floor(54*2.55)
Vmax = math.floor(74*2.55)
# print( (Hmin,Smin,Vmin),(Hmax,Smax,Vmax))
mask = cv2.inRange(hsv_frame,(Hmin,Smin,Vmin),(Hmax,Smax,Vmax))[:,:,np.newaxis]
mask = np.repeat(mask,3,axis=2)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
face_mask = cv2.erode(mask, kernel, iterations = 2)
face_mask = cv2.dilate(face_mask, kernel, iterations = 2)
face = videoFrameBGR&face_mask
# plot_img(cv2.cvtColor(face, cv2.COLOR_BGR2RGB),'face_mask')
face_mask_gray = cv2.cvtColor(face_mask, cv2.COLOR_BGR2GRAY)
contours, _ = cv2.findContours(face_mask_gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contours_img = draw_contours(videoFrameBGR,contours)
biggest_contour_img, biggest_contour_idx = find_bigest_countour(videoFrameBGR,contours )
face_detect_frames.append(biggest_contour_img)
output = np.hstack((videoFrameBGR,contours_img,biggest_contour_img))
cv2.imshow(windowName,output)
if cv2.waitKey(1) == ord('q'):
cv2.destroyAllWindows()
break
else:
print("Cannot Open Webcam, hw problem?")
break
except KeyboardInterrupt:
print ("Stream stopped")
inputStream.release()
write_video(video_frames,'origin')
write_video(face_detect_frames,'face_detection')
cv2.destroyAllWindows()
def openVideo(fname):
inputStream = cv2.VideoCapture(fname)
try:
while True:
_, videoFrame = inputStream.read()
if videoFrame is not None:
# Encode image as jpg numpy array
_, buffer = cv2.imencode(".jpg", videoFrame)
# Draw result
IPython.display.display(IPython.display.Image(data=buffer))
# Discard old output
clear_output(wait=True)
else:
print("End of File")
break
except KeyboardInterrupt:
print("Stop by user")
inputStream.release()
openVideo('origin.mp4')
openVideo('face_detection.mp4')
```
# Smartphone 2D Spectral Camera Kit
Jun Hirabayashi (2020/10/03)
jun@hirax.net, twitter @ hirax
## 0. Overview
### 0.1 About the Kit
<img src="mft2020.kitSampleImage.JPG" style="width: 200px;float:right;"/>
This Jupyter Notebook contains the documentation and sample code for the "Smartphone 2D Spectral Camera Kit" distributed at Maker Faire Tokyo 2020. The kit lets you photograph a linear slit of light through a transmission diffraction grating. Attach it to your smartphone camera and scan by sweeping the phone in the direction perpendicular to the slit while shooting, and you obtain spectral information over a two-dimensional region of slit direction × scan direction.
### 0.2 A Short Explanatory Video
```
from IPython.display import YouTubeVideo
id = '9x6jsidYmKo'
YouTubeVideo(id=id,width=480,height=320)
```
### 0.3 How the Kit Works in a Little More Detail
The "Smartphone 2D Spectral Camera Kit" places a transmission diffraction grating in front of the smartphone camera and disperses the light arriving obliquely from the slit. The transmission grating used is a piece of a DVD, and the kit is built so that light with a wavelength of 550 nanometers appears directly in front of the smartphone camera.
### 0.4 About the Python Code
The Python code is intended to run under Jupyter. Running it from Pyto, a Python environment that runs on iOS, is also being tried.
## Python Code
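Before the main script, a rough, illustrative calculation: assuming a DVD track pitch of about 0.74 µm (an approximate, commonly quoted value) and the first-order grating equation d·sin θ = λ, the angle through which 550 nm light is diffracted comes out near 48°, which is why the grating sits at a slant relative to the camera axis.

```python
import math

d = 0.74e-6          # DVD track pitch in meters (approximate)
wavelength = 550e-9  # 550 nm, the wavelength the kit centers on
# First-order grating equation: d * sin(theta) = wavelength
theta = math.degrees(math.asin(wavelength / d))
print(round(theta, 1))  # about 48 degrees
```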
```
import cv2
import numpy as np
import math
# Load the "colour-science" library for spectral information processing
# conda install -c conda-forge colour-science
# https://github.com/colour-science/Colour/
import colour
from colour.plotting import *
import matplotlib.pyplot as plt
%matplotlib inline
# We assume the camera is held in landscape orientation, and that the
# (rotatable) 1D slit points along the vertical direction of the
# landscape frame.
# The wavelength axis then runs in the w (width) direction and the
# spatial axis of the image in the h (height) direction.
# The "camera direction that changes over time" during scan shooting
# becomes the spatial (w) direction of the output. (It could also be
# interesting to couple this with magnetometer, accelerometer, or
# gyro sensor readings.)
# Configure which processing steps to run
# Read the video from a file or from the camera
isCamera = False
# Save image files?
isFileSave = False
# Generate the spectral image during the frame-by-frame loop?
isCreateSpectrumImage = True
# Horizontally, pixels 780-1300 contain the spectrum
# Frame size the device is expected to capture (values are for an iPhone 11)
assumptionW = 1920
assumptionH = 1080
# Under that frame size, the expected X position and width of the
# spectral region (also an assumption)
spectorPixelStart = 800
spectorPixelWidth = 1300-780 # => 520
#----------- Choose the downscaling ratio -----------------------
# Linear factor by which captured frames are shrunk (to save computation)
compressionRatioInLength = 10 # one tenth
#----------- Pixel sizes under that downscaling ratio -----------------------
# a as assumption
aW = int( assumptionW / compressionRatioInLength ) # a as assumption
aH = int( assumptionH / compressionRatioInLength )
print([aW, aH])
aSpectorPixelWidth = int(spectorPixelWidth / compressionRatioInLength)
aSpectorPixelStart = int(spectorPixelStart / compressionRatioInLength)
# Which (x-direction) pixels the simplified RGB values are read from
aBOffset = int(aSpectorPixelWidth/6)
aGOffset = int(3*aSpectorPixelWidth/6)
aROffset = int(5*aSpectorPixelWidth/6)
# Decide the number of pixels to process
def normalDist(x, mu = 0, sigma = 1):
return np.exp( - ( x - mu )**2 / ( 2 * sigma**2 ) )
gg = np.array( [ ( 0.2 + normalDist( x, 40, 12 ) ) / 1.2 for x in np.linspace( 0, 99, aSpectorPixelWidth ) ] )
rg = np.array( [ ( 0.2 + normalDist( x, 65, 7 ) ) / 1.2 for x in np.linspace( 0, 99, aSpectorPixelWidth ) ] )
# Fit a spectral distribution for wavelengths 420 nm to 730 nm
# from the RGB image array captured by the RGB sensor elements, as follows
def slice2spec( slice ):
b = np.array( [ x**2.2 * 255 for x in slice[ ::1, 0 ] / 255 ] )
g = np.array( [ x**2.2 * 255 for x in slice[ ::1, 1 ] / 255 ] )
r = np.array( [ x**2.2 * 255 for x in slice[ ::1, 2 ] / 255 ] )
return 1.6 * b + 1.5 * g/gg + 0.7 * r/rg
# Wavelength list
wl = np.linspace( 420, 730, aSpectorPixelWidth )
# ------------ Build the remap matrices for shooting-distortion correction ------------
# If the image size and amount of distortion are fixed, this can be done
# once before reading the video
#m1 = 0.1 # slope adjustment of the shift correction; set below 1 (0 disables the correction)
m2 = int(-45.0/compressionRatioInLength) # horizontal shift, in pixels, between the center and the top/bottom edges
mapY = np.zeros( (aH, aW), dtype=np.float32 )
mapX = np.zeros( (aH, aW), dtype=np.float32 )
for y in range( aH ):
mapY[y, :] = y # the Y direction is left unchanged
for y in range( aH ):
for x in range( aW ):
mapX[y, x] = x + m2 * math.cos( (float(y)-float(aH)/2.0) / (float(aH)/2.0) * math.pi/2.0 )
#cv2.imwrite( 'mapX.png', mapX )
#----------- Open the video device (capture loop) -----------------------
if isCamera: # read the video from the camera
cap = cv2.VideoCapture(1)
# For the camera, fix the capture length (number of frames) in advance
frame_n = 25*10 # frame rate (frames/sec) * capture time (sec)
else: # read the video from a file
cap = cv2.VideoCapture("sample.MOV")
frame_n = round( cap.get(cv2.CAP_PROP_FRAME_COUNT) )
if not cap.isOpened(): # if the video could not be opened
exit()
# Get the image size
w = round( cap.get( cv2.CAP_PROP_FRAME_WIDTH ) )
h = round( cap.get( cv2.CAP_PROP_FRAME_HEIGHT ) )
# Matrix for storing the 3-channel RGB image
simpleRGBImg = np.zeros( ( aH, int(frame_n/compressionRatioInLength)+1, 3 ), float )
# Matrix for the 2D spectral image
spectorImg = []
# Video reading loop
n = 0
while( True ):
ret, frame = cap.read() # read one frame
# Termination handling
if not ret: # if a frame could not be read
break
if ( (cv2.waitKey(1) & 0xFF == ord('q')) ): # pressing q in the OpenCV window quits
break
frame = cv2.resize(frame , (aW, aH) )
undistortFrame = cv2.remap( frame, mapX, mapY, cv2.INTER_CUBIC)
# Save the first frame to files for distortion-correction and position checks
if n==0 and isFileSave:
cv2.imwrite('frame.png', frame)
cv2.imwrite('undistortFrame.png', undistortFrame)
# Calling destroyAllWindows() while using OpenCV from Jupyter can hang
# https://qiita.com/kemako/items/fd72c65ca964a1b74fef
cv2.startWindowThread()
cv2.imshow('frame', undistortFrame)
#cv2.imshow('frame', frame)
# Simplified RGB image output
if n % compressionRatioInLength == 0:
simpleRGBImg[ :, int(n/compressionRatioInLength), 0 ] = frame[ :, aSpectorPixelStart + aROffset, 2 ].astype( float ) # R
simpleRGBImg[ :, int(n/compressionRatioInLength), 1 ] = frame[ :, aSpectorPixelStart + aGOffset, 1 ].astype( float ) # G
simpleRGBImg[ :, int(n/compressionRatioInLength), 2 ] = frame[ :, aSpectorPixelStart + aBOffset,0 ].astype( float ) # B
# ----- Build a spectrum column and append it to the spectral image ----------
if isCreateSpectrumImage and n % compressionRatioInLength == 0:
spectorSlice = []
for y in range( 0, aH, 1 ):
spector = frame[ y,
aSpectorPixelStart : aSpectorPixelStart+aSpectorPixelWidth,
: ].astype( float )
powData = slice2spec( spector )
WLvsPow = dict( zip ( wl, powData ) ) # build a wavelength: intensity dictionary
sdm = colour.SpectralDistribution(WLvsPow, name='sdm')
spectorSlice.append( sdm )
spectorImg.append( spectorSlice )
n = n+1
if n >= frame_n:
break
cap.release() # end of video reading
cv2.waitKey(1)
cv2.destroyAllWindows() # end of video display
cv2.waitKey(1)
# Display the simplified RGB image
plt.figure( figsize=(10, 40), dpi=50 )
plt.axis("off")
plt.imshow( simpleRGBImg/255, aspect=0.1 )
cv2.imwrite( 'simpleRGBImg.png', simpleRGBImg )
sd_copy = spectorImg[10][50].copy()
# Interpolating the copied sample spectral distribution.
sd_copy.interpolate( colour.SpectralShape(420, 730, 10) )
colour.XYZ_to_sRGB( colour.sd_to_XYZ( sd_copy )/100.0 )
plot_single_sd( spectorImg[0][0] )
img500 = np.array([
[ spector.copy().interpolate(
colour.SpectralShape(420, 730,
10 # wavelength step
) )[ 500 ]
for spector in spectorSlice
] for spectorSlice in spectorImg
])
plt.figure( figsize=(12, 16), dpi=50 )
plt.axis('off')
plt.imshow( cv2.flip(cv2.rotate(img500, cv2.ROTATE_90_CLOCKWISE), 1)/np.max( img500 ),
aspect=0.1, cmap='gray')
cv2.imwrite('img500.tiff', img500/np.max( img500 ) )
```
```
import pandas as pd
Disease = pd.read_csv("Disease.csv")
Disease = Disease.replace(to_replace ="?", value ="")
Disease.head()
Disease.describe()
Disease.info()
Disease["fbs"].value_counts()
Disease.describe()
%matplotlib inline
import matplotlib.pyplot as plt
Disease.hist(bins=50, figsize=(20, 15))
import numpy as np
def split_train_test(data, test_ratio):
np.random.seed(42)
shuffled = np.random.permutation(len(data))
print(shuffled)
test_set_size = int(len(data) * test_ratio)
test_indices = shuffled[:test_set_size]
train_indices = shuffled[test_set_size:]
return data.iloc[train_indices], data.iloc[test_indices]
from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(Disease, test_size=0.2, random_state=42)
print(f"Rows in train set: {len(train_set)}\nRows in test set: {len(test_set)}\n")
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(Disease, Disease['fbs']):
strat_train_set = Disease.loc[train_index]
strat_test_set = Disease.loc[test_index]
strat_test_set['fbs'].value_counts()
strat_train_set['fbs'].value_counts()
52/9
206/36
Disease = strat_train_set.copy()
print(Disease)
corr_matrix = Disease.corr()
corr_matrix['NUM'].sort_values(ascending=False)
Disease.plot(kind="scatter", x="AGE", y="NUM", alpha=0.8)
Disease = strat_train_set.drop("NUM", axis=1)
Disease_labels = strat_train_set["NUM"].copy()
a = Disease.dropna(subset=["AGE"])
a.shape
Disease.drop("AGE", axis=1).shape
Disease= pd.get_dummies(Disease)
median = Disease["AGE"].median()
Disease["AGE"].fillna(median)
Disease.shape
Disease.describe()
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy="median")
imputer.fit(Disease)
imputer.statistics_
X = imputer.transform(Disease)
Disease_tr = pd.DataFrame(X, columns=Disease.columns)
Disease_tr.describe()
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
my_pipeline = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('std_scaler', StandardScaler()),
])
Disease_num_tr = my_pipeline.fit_transform(Disease)
Disease_num_tr.shape
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
model = LinearRegression()
#model = DecisionTreeRegressor()
#model = RandomForestRegressor()
model.fit(Disease_num_tr, Disease_labels)
some_data = Disease.iloc[:5]
some_labels = Disease_labels.iloc[:5]
prepared_data = my_pipeline.transform(some_data)
model.predict(prepared_data)
list(some_labels)
from sklearn.metrics import mean_squared_error
Disease_predictions = model.predict(Disease_num_tr)
mse = mean_squared_error(Disease_labels, Disease_predictions)
rmse = np.sqrt(mse)
rmse
from sklearn.model_selection import cross_val_score
scores = cross_val_score(model, Disease_num_tr, Disease_labels, scoring="neg_mean_squared_error", cv=10)
rmse_scores = np.sqrt(-scores)
rmse_scores
def print_scores(scores):
print("Scores:", scores)
print("Mean: ", scores.mean())
print("Standard deviation: ", scores.std())
print_scores(rmse_scores)
from joblib import dump, load
dump(model, 'Disease.joblib')
X_test = strat_test_set.drop("NUM", axis=1)
Y_test = strat_test_set["NUM"].copy()
X_test = pd.get_dummies(X_test)
X_test_prepared = my_pipeline.transform(X_test)
final_predictions = model.predict(X_test_prepared)
final_mse = mean_squared_error(Y_test, final_predictions)
final_rmse = np.sqrt(final_mse)
# print(final_predictions, list(Y_test))
final_rmse
prepared_data[0]
from joblib import dump, load
import numpy as np
model = load('Disease.joblib')
features = np.array([[-1.54682375, 0.68964466, -0.16363892, -1.14727992, 0.09776352,
-0.41803981, -1.05309766, 1.30786643, -0.70929937, -0.87416525,
-0.94627915, -0.77354494]])
model.predict(features)
from joblib import dump, load
import numpy as np
model = load('Disease.joblib')
features = np.array([[58, 1, 2, 120, 284, 0, 2, 160, 0, 1.8, 2, 1
]])
model.predict(features)
from joblib import dump, load
import numpy as np
model = load('Disease.joblib')
features = np.array([[57, 1, 4, 130, 131, 0, 0, 115, 1, 1.2, 2, 3
]])
model.predict(features)
```
# **Transfer Learning for Image Classification**
## Overview
Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smart phone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications.
In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using [this dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html) of 102 flower categories, you can see a few examples below.
The project is broken down into multiple steps:
* Load and preprocess the image dataset
* Train the image classifier on your dataset
* Use the trained classifier to predict image content
We'll lead you through each part which you'll implement in Python.
When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will be learning about flowers and end up as a command line application. But, what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car, it tells you what the make and model is, then looks up information about it. Go build your own dataset and make something new.
First up is importing the packages you'll need. It's good practice to keep all the imports at the beginning of your code. As you work through this notebook and find you need to import a package, make sure to add the import up here.
```
# Please notice that some parts of this project are based on the official pytorch tutorial.
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
%load_ext autoreload
%autoreload 2
import torch
import torch.utils.data as data
import torchvision
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from PIL import Image
import train
import predict
import json
```
## Load the data
Here you'll use `torchvision` to load the data ([documentation](http://pytorch.org/docs/master/torchvision/transforms.html#)). The data should be included alongside this notebook, otherwise you can [download it here](https://s3.amazonaws.com/content.udacity-data.com/nd089/flower_data.tar.gz). The dataset is split into three parts: training, validation, and testing. For the training, you'll want to apply transformations such as random scaling, cropping, and flipping. This will help the network generalize, leading to better performance. You'll also need to make sure the input data is resized to 224x224 pixels as required by the pre-trained networks.
The validation and testing sets are used to measure the model's performance on data it hasn't seen yet. For this you don't want any scaling or rotation transformations, but you'll need to resize then crop the images to the appropriate size.
For all three sets you'll need to normalize the means and standard deviations of the images to what the network expects. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`. This shifts and scales the values of each color channel so they are roughly centered around zero, instead of lying between 0 and 1.
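To make that concrete, here is a tiny standalone check of the per-channel normalization formula (an illustration only, not part of the notebook's pipeline):

```python
import numpy as np

# Per-channel normalization used by the pre-trained networks:
# normalized = (pixel - mean) / std, applied channel-wise.
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

pixel = np.array([0.5, 0.5, 0.5])  # a mid-gray pixel, channels already in [0, 1]
normalized = (pixel - mean) / std
print(normalized)  # each channel lands near zero for mid-gray input
```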
```
data_dir = 'flowers'
# Define your transforms for the training, validation, and testing sets
data_transforms = {
'train': transforms.Compose([
transforms.RandomRotation(45),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
]),
'valid': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
]),
'test': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
}
# Load the datasets with ImageFolder
image_datasets = {
x: datasets.ImageFolder(root=data_dir + '/' + x, transform=data_transforms[x])
for x in list(data_transforms.keys())
}
```
### Label mapping
You'll also need to load in a mapping from category label to category name. You can find this in the file `cat_to_name.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categories to the actual names of the flowers.
```
with open('cat_to_name.json', 'r') as f:
    cat_to_name = json.load(f)
```
### Visualize a few images
Let’s visualize a few training images:
```
def imshow(inp, title=None):
    """Imshow for Tensor."""
    inp = inp.numpy().transpose((1, 2, 0))
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    inp = std * inp + mean
    inp = np.clip(inp, 0, 1)
    plt.imshow(inp)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)  # pause a bit so that plots are updated
# Using the image datasets, define the dataloaders
dataloaders = {
x: data.DataLoader(image_datasets[x], batch_size=4, shuffle=True, num_workers=2)
for x in list(image_datasets.keys())
}
# Get a batch of training data
inputs, classes = next(iter(dataloaders['train']))
# Make a grid from batch
out = torchvision.utils.make_grid(inputs)
# Map each ImageFolder class index back to its folder name (the category id),
# then look up the flower name in cat_to_name.
class_names = image_datasets['train'].classes
imshow(out, title=[cat_to_name[class_names[x]] for x in classes])
```
# Building and training the classifier
Now that the data is ready, it's time to build and train the classifier. As usual, you should use one of the pretrained models from `torchvision.models` to get the image features. Build and train a new feed-forward classifier using those features.
We're going to leave this part up to you. If you want to talk through it with someone, chat with your fellow students! You can also ask questions on the forums or join the instructors in office hours.
Refer to [the rubric](https://review.udacity.com/#!/rubrics/1663/view) for guidance on successfully completing this section. Things you'll need to do:
* Load a [pre-trained network](http://pytorch.org/docs/master/torchvision/models.html) (If you need a starting point, the VGG networks work great and are straightforward to use)
* Define a new, untrained feed-forward network as a classifier, using ReLU activations and dropout
* Train the classifier layers using backpropagation using the pre-trained network to get the features
* Track the loss and accuracy on the validation set to determine the best hyperparameters
We've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal!
When training make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right. Make sure to try different hyperparameters (learning rate, units in the classifier, epochs, etc) to find the best model. Save those hyperparameters to use as default values in the next part of the project.
```
model = train.train_model(image_datasets, arch='alexnet', gpu=True, epochs=14, hidden_units=4096)
```
## Testing your network
It's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. Run the test images through the network and measure the accuracy, the same way you did validation. You should be able to reach around 70% accuracy on the test set if the model has been trained well.
```
phase = 'test'
correct = 0
total = 0

with torch.no_grad():
    if torch.cuda.is_available():
        print("Using GPU")
        device = torch.device("cuda:0")
    else:
        print("Using CPU")
        device = torch.device("cpu")

    for inputs, labels in dataloaders[phase]:
        inputs = inputs.to(device)
        labels = labels.to(device)
        outputs = model(inputs)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the test images: %d %%' % (
    100 * correct / total))
```
## Save the checkpoint
Now that your network is trained, save the model so you can load it later for making predictions. You probably want to save other things such as the mapping of classes to indices which you get from one of the image datasets: `image_datasets['train'].class_to_idx`. You can attach this to the model as an attribute which makes inference easier later on.
`model.class_to_idx = image_datasets['train'].class_to_idx`
Remember that you'll want to completely rebuild the model later so you can use it for inference. Make sure to include any information you need in the checkpoint. If you want to load the model and keep training, you'll want to save the number of epochs as well as the optimizer state, `optimizer.state_dict`. You'll likely want to use this trained model in the next part of the project, so best to save it now.
```
# Save a checkpoint
model.class_to_idx = image_datasets['train'].class_to_idx
checkpoint = {
'arch': 'alexnet',
'class_to_idx': model.class_to_idx,
'state_dict': model.state_dict(),
'hidden_units': 4096
}
torch.save(checkpoint, 'my_model.pt')
```
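If you also want to resume training later, the checkpoint can additionally carry the optimizer state and epoch count mentioned above. A rough sketch of that pattern, using a tiny hypothetical stand-in model and optimizer rather than the real ones from this notebook:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical stand-ins for the notebook's trained model and optimizer,
# so this sketch runs on its own.
model = nn.Linear(4, 2)
optimizer = optim.SGD(model.parameters(), lr=0.01)
epochs = 14

checkpoint = {
    'arch': 'alexnet',                               # needed to rebuild the model
    'state_dict': model.state_dict(),                # learned weights
    'optimizer_state_dict': optimizer.state_dict(),  # lets training resume
    'epochs': epochs,                                # how far training already got
}
torch.save(checkpoint, 'resumable_checkpoint.pt')

# Later: rebuild the same model and optimizer, then restore both states.
restored = torch.load('resumable_checkpoint.pt')
model.load_state_dict(restored['state_dict'])
optimizer.load_state_dict(restored['optimizer_state_dict'])
```

Keeping `class_to_idx` in the same dictionary, as the cell above does, is still a good idea; it's omitted here only because the stand-in model has no classes.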
## Loading the checkpoint
At this point it's good to write a function that can load a checkpoint and rebuild the model. That way you can come back to this project and keep working on it without having to retrain the network.
```
checkpoint = torch.load('my_model.pt')
arch = checkpoint['arch']
num_labels = len(checkpoint['class_to_idx'])
hidden_units = checkpoint['hidden_units']
model = train.load_model(arch=arch, num_labels=num_labels, hidden_units=hidden_units)
model.load_state_dict(checkpoint['state_dict'])
model.class_to_idx = checkpoint['class_to_idx']
```
# Inference for classification
Now you'll write a function to use a trained network for inference. That is, you'll pass an image into the network and predict the class of the flower in the image. Write a function called `predict` that takes an image and a model, then returns the top $K$ most likely classes along with the probabilities. It should look like
```python
probs, classes = predict(image_path, model)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
```
First you'll need to handle processing the input image such that it can be used in your network.
## Image Preprocessing
You'll want to use `PIL` to load the image ([documentation](https://pillow.readthedocs.io/en/latest/reference/Image.html)). It's best to write a function that preprocesses the image so it can be used as input for the model. This function should process the images in the same manner used for training.
First, resize the images where the shortest side is 256 pixels, keeping the aspect ratio. This can be done with the [`thumbnail`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) or [`resize`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.resize) methods. Then you'll need to crop out the center 224x224 portion of the image.
Color channels of images are typically encoded as integers 0-255, but the model expects floats 0-1. You'll need to convert the values. It's easiest with a Numpy array, which you can get from a PIL image like so: `np_image = np.array(pil_image)`.
As before, the network expects the images to be normalized in a specific way. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`. You'll want to subtract the means from each color channel, then divide by the standard deviation.
And finally, PyTorch expects the color channel to be the first dimension but it's the third dimension in the PIL image and Numpy array. You can reorder dimensions using [`ndarray.transpose`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.transpose.html). The color channel needs to be first and retain the order of the other two dimensions.
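The manual steps above can also be sketched directly with PIL and NumPy, without `torchvision` transforms. This is an illustration of the recipe, not the notebook's actual implementation; the function name `preprocess_pil` is invented here, and it uses `resize` rather than `thumbnail` for the aspect-ratio step.

```python
import numpy as np
from PIL import Image

def preprocess_pil(path):
    """Resize shortest side to 256, center-crop 224x224, normalize,
    and move the color channel first, using only PIL and NumPy."""
    img = Image.open(path)

    # Resize so the shortest side is 256 px, keeping the aspect ratio.
    w, h = img.size
    if w < h:
        img = img.resize((256, int(h * 256 / w)))
    else:
        img = img.resize((int(w * 256 / h), 256))

    # Center-crop a 224x224 region.
    w, h = img.size
    left, top = (w - 224) // 2, (h - 224) // 2
    img = img.crop((left, top, left + 224, top + 224))

    # Integers 0-255 -> floats 0-1, then normalize per channel.
    arr = np.array(img) / 255.0
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    arr = (arr - mean) / std

    # Channels-last (H, W, C) -> channels-first (C, H, W) for PyTorch.
    return arr.transpose((2, 0, 1))
```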
```
# Process a PIL image for use in a PyTorch model
def process_image(image):
    ''' Scales, crops, and normalizes a PIL image for a PyTorch model,
        returns a Numpy array.
    '''
    img_loader = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor()])

    pil_image = Image.open(image)
    pil_image = img_loader(pil_image).float()
    np_image = np.array(pil_image)

    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    np_image = (np.transpose(np_image, (1, 2, 0)) - mean) / std
    np_image = np.transpose(np_image, (2, 0, 1))

    return np_image
```
To check your work, the function below converts a PyTorch tensor and displays it in the notebook. If your `process_image` function works, running the output through this function should return the original image (except for the cropped out portions).
```
def imshow(image, ax=None, title=None):
    """Imshow for Tensor."""
    if ax is None:
        fig, ax = plt.subplots()

    # PyTorch tensors assume the color channel is the first dimension,
    # but matplotlib assumes it is the third dimension
    image = np.transpose(image, (1, 2, 0))

    # Undo preprocessing
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    image = std * image + mean

    # Image needs to be clipped between 0 and 1 or it looks like noise when displayed
    image = np.clip(image, 0, 1)

    ax.imshow(image)
    return ax
%matplotlib inline
_= imshow(process_image('sample_img.jpg'))
```
## Class Prediction
Once you can get images in the correct format, it's time to write a function for making predictions with your model. A common practice is to predict the top 5 or so (usually called top-$K$) most probable classes. You'll want to calculate the class probabilities then find the $K$ largest values.
To get the top $K$ largest values in a tensor use [`x.topk(k)`](http://pytorch.org/docs/master/torch.html#torch.topk). This method returns both the highest `k` probabilities and the indices of those probabilities corresponding to the classes. You need to convert from these indices to the actual class labels using `class_to_idx` which hopefully you added to the model or from an `ImageFolder` you used to load the data ([see here](#Save-the-checkpoint)). Make sure to invert the dictionary so you get a mapping from index to class as well.
Again, this method should take a path to an image and a model checkpoint, then return the probabilities and classes.
```python
probs, classes = predict(image_path, model)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
```
```
probs, classes = predict.predict(image='sample_img.jpg', checkpoint='my_model.pt', labels='cat_to_name.json', gpu=True)
print(probs)
print(classes)
```
## Sanity Checking
Now that you can use a trained model for predictions, check to make sure it makes sense. Even if the testing accuracy is high, it's always good to check that there aren't obvious bugs. Use `matplotlib` to plot the probabilities for the top 5 classes as a bar graph, along with the input image. It should look like this:
<img src='assets/inference_example.png' width=300px>
You can convert from the class integer encoding to actual flower names with the `cat_to_name.json` file (should have been loaded earlier in the notebook). To show a PyTorch tensor as an image, use the `imshow` function defined above.
```
# Display an image along with the top 5 classes
img = mpimg.imread('sample_img.jpg')
f, axarr = plt.subplots(2,1)
axarr[0].imshow(img)
axarr[0].set_title('hard-leaved pocket orchid')
probs, classes = predict.predict(image='sample_img.jpg', checkpoint='my_model.pt', labels='cat_to_name.json', gpu=True)
y_pos = np.arange(len(classes))
axarr[1].barh(y_pos, probs, align='center', color='blue')
axarr[1].set_yticks(y_pos)
axarr[1].set_yticklabels(classes)
axarr[1].invert_yaxis() # labels read top-to-bottom
_ = axarr[1].set_xlabel('Probs')
```
```
%pylab inline
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
pd.options.display.max_colwidth = 128
pylab.rcParams['figure.figsize'] = 12, 12
# ask where saturn is
# given bounds, how to render a star chart
# how to add lines to the star chart
# star chart colors
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
from skyfield.api import load
planets = load('de421.bsp')
earth, saturn = planets['earth'], planets['saturn barycenter']
ts = load.timescale()
t = ts.utc(2017, 1, range(365 * 3))
e = earth.at(t)
p = e.observe(saturn)
from skyfield import projections, charting
from skyfield.data import hipparcos
h = hipparcos.load_dataframe('hip_main.dat.gz')
h.head()
```
```
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
fig.set_tight_layout(True)

# Query the figure's on-screen size and DPI. Note that when saving the figure to
# a file, we need to provide a DPI for that separately.
print('fig size: {0} DPI, size in inches {1}'.format(
    fig.get_dpi(), fig.get_size_inches()))

# Plot a scatter that persists (isn't redrawn) and the initial line.
x = np.arange(0, 20, 0.1)
ax.scatter(x, x + np.random.normal(0, 3.0, len(x)))
line, = ax.plot(x, x - 5, 'r-', linewidth=2)

def update(i):
    label = 'timestep {0}'.format(i)
    print(label)
    # Update the line and the axes (with a new xlabel). Return a tuple of
    # "artists" that have to be redrawn for this frame.
    line.set_ydata(x - 5 + i)
    ax.set_xlabel(label)
    return line, ax

# animating over 10 frames, with an interval of 200ms between frames.
anim = FuncAnimation(fig, update, frames=np.arange(0, 10), interval=200)
anim.save('line.gif', dpi=80, writer='imagemagick')
plt.close()
```
```
project = projections.build_stereographic_projection(p)
x, y = project(p)
fig, ax = plt.subplots()
#print(fig.canvas.supports_blit)
planet, = ax.plot(x[100], y[100], 'ro') #, animated=True)
blue_art = ax.plot(x, y, color='b')
print(ax.collections)
star_art = charting._plot_stars(h, e, project, ax, 6.0, 8.0, 0.8)
print(star_art)
print(ax.collections)
fig.canvas.draw()
#planet.draw()
background = fig.canvas.copy_from_bbox(ax.bbox)
del ax.collections[:2]
print(ax.collections)
#def init():
# return planet, #blue_art, star_art
#gcf())
#background = fig.canvas.copy_from_bbox(ax.bbox)
def update(i):
    print(i, end=" ")
    fig.canvas.restore_region(background)
    planet.set_xdata(x[i+100])
    planet.set_ydata(y[i+100])
    #fig.canvas.blit(ax.bbox)
    return planet,
anim = FuncAnimation(fig, update, frames=5) #, #interval=50,
#init_func=init, blit=True)
plt.close()
#anim.save('line.gif', dpi=80, writer='imagemagick')
HTML(anim.to_html5_video())
background
fig, ax = plt.subplots()
fig.canvas.restore_region(background)
background
fig, ax = plt.subplots()
print(fig.canvas.supports_blit)
project = projections.build_stereographic_projection(p)
x, y = project(p)
planet = ax.plot(x[100], y[100], 'ro', animated=True)
blue_art = ax.plot(x, y, color='b')
star_art = charting._plot_stars(h, e, project, ax, 6.0, 8.0, 0.8)
def init():
    #return blue_art
    #return blue_art + star_art #blue_art, star_art
    #return blue_art + star_art + planet
    return planet

#gcf())
#background = fig.canvas.copy_from_bbox(ax.bbox)

def update(i):
    print(i, end=" ")
    planet[0].set_xdata(x[i+100])
    planet[0].set_ydata(y[i+100])
    #planet.set_data(x[i], y[i])
    return planet
anim = FuncAnimation(fig, update, frames=4, interval=50,
init_func=init, blit=True)
plt.close()
#anim.save('line.gif', dpi=80, writer='imagemagick')
from time import time
t0 = time()
html = anim.to_html5_video()
print(time() - t0)
HTML(html)
print(star_art)
dir(star_art[0])
star_art[0].draw
print(type(fig.canvas))
print(type(fig.canvas.renderer))
print(type(fig.canvas.renderer._renderer))
type(fig.canvas.renderer._renderer)
from IPython.display import HTML
HTML("<b>hi</b>")
project = projections.build_stereographic_projection(p)
x, y = project(p)
plot(x, y, color='b')
charting._plot_stars(h, e, project, gca(), 6.0, 6.0, 0.8) #gcf())
import imageio
images = []
imageio.mimsave('movie.gif', images)
tau = 2.0 * pi
from skyfield.data import hipparcos
#%%time
h = hipparcos.load_dataframe('hip_main.dat.gz')
h.head()
from skyfield import api
t = h
t = t[t['magnitude'] < 6.0]
c = t.loc[42911]
s = api.Star(ra_hours=c.ra_hours,
dec_degrees=c.dec_degrees)
from skyfield.positionlib import Barycentric
ts = api.load.timescale()
o = Barycentric([0,0,0], [0,0,0], t=ts.tt(api.T0))
x = o.observe(s)
ss = api.Star(ra_hours=t.ra_hours,
dec_degrees=t.dec_degrees)
xx = o.observe(ss)
xx.position.au.shape
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
# setup Lambert Conformal basemap.
m = Basemap(width=12000000,height=9000000,projection='lcc',
resolution='c',lat_1=45.,lat_2=55,lat_0=50,lon_0=-107.)
# draw coastlines.
m.drawcoastlines()
# draw a boundary around the map, fill the background.
# this background will end up being the ocean color, since
# the continents will be drawn on top.
m.drawmapboundary(fill_color='aqua')
# fill continents, set lake color same as ocean color.
m.fillcontinents(color='coral',lake_color='aqua')
# draw parallels and meridians.
# label parallels on right and top
# meridians on bottom and left
parallels = np.arange(0.,81,10.)
# labels = [left,right,top,bottom]
m.drawparallels(parallels,labels=[False,True,True,False])
meridians = np.arange(10.,351.,20.)
m.drawmeridians(meridians,labels=[True,False,False,True])
plt.show()
#xx.separation_from(x)
ra0, dec0, distance = x.radec()
zz = xx.rotate_to(x)
zz.shape
from skyfield.functions import to_polar
r, theta, phi = to_polar(zz)
#out = tau / 4 - r
ax = subplot(111, projection='polar')
ax.plot(phi, tau/4 - theta, 'k.')#, markersize=t['magnitude'])
ax.set_ylim([0, 5 * tau / 360])
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
width = 5000000.0
height = 5000000.0
plt.figure(figsize=(10, 10))
projection=ccrs.AzimuthalEquidistant(
central_latitude=45,
central_longitude=0,
)
ax = plt.axes(projection=projection)
ax.set_extent([-width/2., width/2., -height/2., height/2.],
crs=projection)
ax.coastlines(resolution='110m')
ax.gridlines()
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
width = 20e6
height = width
globe = ccrs.Globe(ellipse='sphere')
plt.figure(figsize=(10, 10))
projection=ccrs.AzimuthalEquidistant(
central_latitude=90,
central_longitude=0,
globe=globe,
)
ax = plt.axes(projection=projection)
ax.set_extent([-width/2., width/2., -height/2., height/2.],
crs=projection)
#ax.coastlines(resolution='110m')
ax.gridlines()
#ax.gridlines(draw_labels=True)
tr = ccrs.Geodetic()
ra, dec, distance = xx.radec()
#ax.plot(ra._degrees, dec.degrees, '.', transform=tr)
m = (8 - t.magnitude) ** 2.0
ax.scatter(ra._degrees, dec.degrees, m, transform=tr)
#ax.plot([0,0], [90,80], '.', transform=tr)
#ax.scatter([0,0], [90,80], transform=tr)
print(len(ra._degrees))
import astropy
import ephem
from ephem.stars import stars
len(stars)
starlist = stars.values()
for star in starlist:
    star.compute('2012/1/10')
degree = tau / 360.0
hour = tau / 24.0
ra_list = [star.ra / hour for star in starlist]
dec_list = [star.dec / degree for star in starlist]
scatter(ra_list, dec_list)
gca().invert_xaxis()
import matplotlib.animation as animation
fig = plt.figure()
ims = []
for i in range(10):
    x = i
    y = i + 1
    im = plt.plot([x], [y], animated=True)
    ims.append(im)
ani = animation.ArtistAnimation(fig, ims, interval=50, blit=True, repeat_delay=1000)
plt.show()
scatter(ra_list, dec_list)
axis([7.5, 3.5, -20, 20])
orion_axes = [7.5, 3.5, -20, 20]
print(starlist[0].name)
print(starlist[0].mag)
print(starlist[0]._spect)
[star.mag for star in starlist[:5]]
mag_array = np.array([ star.mag for star in starlist ])
mag_array[:5]
size_array = (5 - mag_array) ** 1.5 * 4
size_array[:5]
scatter(ra_list, dec_list, size_array)
axis(orion_axes)
spectral_list = [star._spect for star in starlist]
spectral_list[:10]
from spectral_classification import build_color_chart
color_chart = build_color_chart('starcolors.txt')
color_list = [color_chart[spectral_class + '(V)']
              for spectral_class in spectral_list]
scatter(ra_list, dec_list, size_array, color_list)
axis(orion_axes)
def pretty_hours(h, pos=None):
    if h % 1.0 == 0.0:
        return '{:.0g}h'.format(h)
    else:
        return '{:.2g}h'.format(h)

def pretty_degrees(d, pos=None):
    return u'{}°'.format(d)
print(pretty_hours(3.5))
print(pretty_hours(5.0))
print(pretty_degrees(125))
print(pretty_degrees(360))
from matplotlib.ticker import FuncFormatter
hours_formatter = FuncFormatter(pretty_hours)
degrees_formatter = FuncFormatter(pretty_degrees)
scatter(ra_list, dec_list, size_array, color_list)
axis(orion_axes)
gca().xaxis.set_major_formatter(hours_formatter)
gca().yaxis.set_major_formatter(degrees_formatter)
scatter(ra_list, dec_list, size_array, color_list)
axis(orion_axes)
gca().xaxis.set_major_formatter(hours_formatter)
gca().yaxis.set_major_formatter(degrees_formatter)
gca().xaxis.grid(True)
gca().yaxis.grid(True)
import matplotlib.pyplot as plt
from astropy.wcs import WCS
from astropy.io import fits
from astropy.utils.data import get_pkg_data_filename
filename = get_pkg_data_filename('galactic_center/gc_msx_e.fits')
hdu = fits.open(filename)[0]
wcs = WCS(hdu.header)
plt.subplot(projection=wcs)
plt.imshow(hdu.data, vmin=-2.e-5, vmax=2.e-4, origin='lower')
plt.grid(color='white', ls='solid')
plt.xlabel('Galactic Longitude')
plt.ylabel('Galactic Latitude')
wcs.wcs.crval = [1, 1]
wcs.wcs_pix2world([[0,0]], 1)
wcs
wcs.wcs.crval = [2, 2]
wcs.wcs_pix2world([[0,0]], 1)
wcs
# http://docs.astropy.org/en/stable/wcs/
#w.wcs.crval = [0, -90]
wcs.wcs.crval = [20, 89.9]
ax = plt.subplot(projection=wcs)
#ax = plt.subplot(projection=w)
ax.imshow(hdu.data, vmin=-2.e-5, vmax=2.e-4, origin='lower')
print(ax.coords)
ax.coords.grid(True, color='white', ls='solid')
ax.coords[0].set_axislabel('Galactic Lon-gi-tude')
ax.coords[1].set_axislabel('Galactic Latitude')
overlay = ax.get_coords_overlay('fk5')
overlay.grid(color='white', ls='dotted')
overlay[0].set_axislabel('Right Ascension (J2000)')
overlay[1].set_axislabel('Declination (J2000)')
wcs
' '.join(dir(wcs))
w = wcs.wcs
' '.join(dir(w))
type(w)
import numpy as np
from astropy import wcs
from astropy.io import fits
# Create a new WCS object. The number of axes must be set
# from the start
w = wcs.WCS(naxis=2)
# Set up an "Airy's zenithal" projection
# Vector properties may be set with Python lists, or Numpy arrays
w.wcs.crpix = [-234.75, 8.3393]
w.wcs.cdelt = np.array([-0.066667, 0.066667])
w.wcs.crval = [0, -90]
w.wcs.ctype = ["RA---AIR", "DEC--AIR"]
w.wcs.set_pv([(2, 1, 45.0)])
# Some pixel coordinates of interest.
pixcrd = np.array([[0, 0], [24, 38], [45, 98]], np.float_)
# Convert pixel coordinates to world coordinates
world = w.wcs_pix2world(pixcrd, 1)
print(world)
# Convert the same coordinates back to pixel coordinates.
pixcrd2 = w.wcs_world2pix(world, 1)
print(pixcrd2)
# These should be the same as the original pixel coordinates, modulo
# some floating-point error.
assert np.max(np.abs(pixcrd - pixcrd2)) < 1e-6
# Now, write out the WCS object as a FITS header
header = w.to_header()
# header is an astropy.io.fits.Header object. We can use it to create a new
# PrimaryHDU and write it to a file.
hdu = fits.PrimaryHDU(header=header)
from scipy import misc
image = misc.imread('/home/brandon/Downloads/magnitudes.png')
imshow(image)
row = image[0]
len(row)
red = row[:,0]
white = (red == 255)
black = (red < 255)
starts, = where(white[:-1] & black[1:])
starts
endings, = where(black[:-1] & white[1:])
endings
diameters = endings - starts
diameters
mag = arange(0.0, 10.1, 0.5)
plot(mag, diameters)
```
## High Schools dataset cleaning and exploration
In this notebook we will clean and explore the [2017 High Schools dataset](https://data.cityofnewyork.us/Education/DOE-High-School-Directory-2017/s3k6-pzi2) by the NYC Department of Education.
Let's start by opening and examining it.
```
import pandas as pd
all_high_schools = pd.read_csv('data/DOE_High_School_Directory_2017.csv')
all_high_schools.shape
pd.set_option('display.max_columns', 453)
all_high_schools.head(3)
```
As you can see there are over 400 columns, so let's keep only the columns of interest.
Also notice that the `boys` column is a flag for boys-only schools. Since we are trying to help solve the problem of women in tech, it wouldn't make sense to keep them, so let's filter them out.
```
boys_only = all_high_schools['boys'] == 1
columns_of_interest = ['dbn', 'school_name', 'boro', 'academicopportunities1',
'academicopportunities2', 'academicopportunities3',
'academicopportunities4', 'academicopportunities5', 'neighborhood',
'location', 'subway', 'bus', 'total_students', 'start_time', 'end_time',
'graduation_rate', 'attendance_rate', 'pct_stu_enough_variety',
'college_career_rate', 'girls', 'specialized', 'earlycollege',
'program1', 'program2', 'program3', 'program4', 'program5', 'program6',
'program7', 'program8', 'program9', 'program10', 'interest1',
'interest2', 'interest3', 'interest4', 'interest5', 'interest6',
'interest7', 'interest8', 'interest9', 'interest10', 'city', 'zip']
df = all_high_schools[~boys_only][columns_of_interest]
df.set_index('dbn', inplace=True)
df.shape
df.head(3)
```
Let's now make a quick comparison of `college_career_rate` in girls-only schools vs mixed ones.
```
df['all'] = ""
df['girls'] = df['girls'].map({1: 'Girls-only'})
df['girls'].fillna('Mixed', inplace=True)
%matplotlib inline
import pylab as plt
import seaborn as sns
sns.set(style="whitegrid", color_codes=True)
ax = sns.violinplot(data=df, x='all', y="college_career_rate", hue="girls", split=True)
sns.despine(left=True)
ax.set_xlabel("")
plt.suptitle('College Career Rate by Type of School (Mixed or Girls-only)')
plt.savefig('figures/girls-only.png', bbox_inches='tight')
```
Now notice that there are 5 columns on "academic opportunities", 10 columns on "programs", and 10 more columns on "interests", and that in each of these areas some schools might have something that is tech-related. For each school, let's check whether any tech-related words appear in any of those areas, and call that property "tech inclination".
```
import numpy as np
def contains_terms(column_name, terms=["tech"]):
    """Checks if at least one of the terms is present in the given column."""
    contains = []
    for i, term in enumerate(terms):
        contains.append(df[column_name].str.contains(terms[i], case=False))
    not_null = df[column_name].notnull()
    return (not_null) & (np.any(contains, axis=0))

def contains_terms_columns(column_root, n_columns, terms=["tech"]):
    """Checks if at least one of the terms is present in the columns given by its root name."""
    if n_columns == 1:
        return contains_terms(column_root, terms)
    tech = []
    for i in range(n_columns):
        column_name = column_root + str(i + 1)
        tech.append(contains_terms(column_name, terms))
    return np.any(tech, axis=0)
tech_academicopportunities = contains_terms_columns('academicopportunities', 5,
terms=['technology', 'computer', 'web',
'programming', 'coding'])
len(df[tech_academicopportunities])
# searching for 'tech' might match the word 'technical'
all_tech_program = contains_terms_columns('program', 10, terms=['programming', 'computer',
'tech'])
technical_program = contains_terms_columns('program', 10, terms=['technical'])
tech_program = (all_tech_program) & ~(technical_program)
len(df[tech_program])
tech_interest = contains_terms_columns('interest', 10, terms=['computer', 'technology'])
len(df[tech_interest])
tech_inclined = (tech_academicopportunities) | (tech_program) | (tech_interest)
print(len(df[tech_inclined]))
print("{:.1f}%".format(100 * len(df[tech_inclined]) / len(df)))
```
Since 46% of schools are tech inclined and our assumption here was that 200 high schools were enough let's use only tech-inclined schools going forward. It could help the canvassing team if they were talking to female students from schools that have some tech-inclination.
However, let's first see how schools compare with each other taking that into consideration.
```
df['tech_academicopportunities'] = tech_academicopportunities.astype(int)
df['tech_program'] = tech_program.astype(int)
df['tech_interest'] = tech_interest.astype(int)
df.head(3)
def fill_tech_summary(academicopportunities, program, interest):
    if academicopportunities:
        if program:
            if interest:
                return 'tech_academicopportunities+program+interest'
            else:
                return 'tech_academicopportunities+program'
        elif interest:
            return 'tech_academicopportunities+interest'
        else:
            return 'tech_academicopportunities'
    elif program:
        if interest:
            return 'tech_program+interest'
        else:
            return 'tech_program'
    elif interest:
        return 'tech_interest'
    else:
        return 'no_tech_inclination'
df['tech_summary'] = df.apply(lambda x: fill_tech_summary(x.loc['tech_academicopportunities'],
x.loc['tech_program'],
x.loc['tech_interest']),
axis='columns')
df['tech_summary'].head()
fig, ax = plt.subplots(figsize=(20, 10))
ax = sns.violinplot(data=df, x='all', y="college_career_rate", hue="tech_summary", ax=ax,
hue_order=['no_tech_inclination', 'tech_interest', 'tech_program',
'tech_academicopportunities', 'tech_program+interest',
'tech_academicopportunities+program',
'tech_academicopportunities+interest',
'tech_academicopportunities+program+interest'])
sns.despine(left=True)
ax.set_xlabel("")
plt.suptitle('College Career Rate by Types of Tech Inclination')
plt.savefig('figures/types-tech-inclination.png', bbox_inches='tight')
def fill_tech_summary_compact(academicopportunities, program, interest):
if academicopportunities or program or interest:
return 'tech_inclined'
else:
return 'not_tech_inclined'
df['tech_summary_compact'] = df.apply(lambda x: fill_tech_summary_compact(
x.loc['tech_academicopportunities'],
x.loc['tech_program'],
x.loc['tech_interest']),
axis='columns')
df['tech_summary_compact'].head()
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 8))
ax1 = sns.violinplot(data=df, x='all', y="college_career_rate", hue="tech_summary_compact",
split=True, inner="quartile", ax=ax1)
ax2 = sns.violinplot(data=df, x='all', y="total_students", hue="tech_summary_compact",
split=True, inner="quartile", ax=ax2)
ax1.set_xlabel("")
ax2.set_xlabel("")
sns.despine(left=True)
plt.suptitle('College Career Rate and Total Students by Tech Inclination')
plt.savefig('figures/breakdown-tech-inclination.png', bbox_inches='tight')
```
We can see from the violin plots above that even though tech-inclined high schools have a slightly higher median college career rate, they have slightly lower 25% and 75% quartiles. On the other hand, most high schools with 1500 or more students seem to have some kind of tech inclination.
```
new_columns = ['school_name', 'boro', 'tech_academicopportunities', 'neighborhood', 'location',
'subway', 'bus', 'total_students', 'start_time', 'end_time', 'graduation_rate',
'attendance_rate', 'pct_stu_enough_variety', 'college_career_rate', 'girls',
'specialized', 'earlycollege', 'tech_program', 'tech_interest', 'city', 'zip']
tech_schools = df[tech_inclined][new_columns]
tech_schools.head(3)
```
Let's now shift our focus to the `graduation_rate` and `college_career_rate` columns. In particular, `college_career_rate`'s definition is "at the end of the 2014-15 school year, the percent of students who graduated 'on time' by earning a diploma four years after they entered 9th grade".
We could multiply that by the total number of students in each school and calculate the potential number of college students each school has.
```
fig, ax = plt.subplots(figsize=(20, 8))
ax.set_xlim(0, 1.02)
ax.set_ylim(0, 1.05)
sns.regplot(tech_schools['graduation_rate'], tech_schools['college_career_rate'], order=3)
ax.set_xlabel('Graduation Rate')
ax.set_ylabel('College Career Rate')
plt.suptitle('College Career Rate by Graduation Rate')
plt.savefig('figures/college-career-and-graduation-rate.png', bbox_inches='tight')
```
We can see that `graduation_rate` and `college_career_rate` have a strong correlation. That means if we have too many `college_career_rate` null values we can use `graduation_rate` as a proxy.
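The proxy idea can be sketched quickly. The numbers below are made up for illustration; a simple linear fit on the rows where both rates are known could fill the missing `college_career_rate` values:

```python
import numpy as np
import pandas as pd

# Illustrative sketch only (synthetic numbers, not the real dataset): fit a
# line on rows where both rates are known, then predict the missing
# college_career_rate values from graduation_rate.
schools = pd.DataFrame({
    'graduation_rate':     [0.60, 0.70, 0.80, 0.90, 0.75, 0.85],
    'college_career_rate': [0.40, 0.55, 0.65, 0.80, np.nan, np.nan],
})

known = schools['college_career_rate'].notnull()
slope, intercept = np.polyfit(schools.loc[known, 'graduation_rate'],
                              schools.loc[known, 'college_career_rate'], deg=1)
missing = ~known
schools.loc[missing, 'college_career_rate'] = (
    slope * schools.loc[missing, 'graduation_rate'] + intercept)
```

In the real dataset we would also want to clip the imputed values to the `[0, 1]` range.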
```
potential = tech_schools['college_career_rate'] * tech_schools['total_students']
potential.sort_values(inplace=True, ascending=False)
potential
null_college_career_rate = tech_schools.college_career_rate.isnull()
print("{:.1f}%".format(100 * len(tech_schools[null_college_career_rate]) / len(tech_schools)))
null_graduation_rate = tech_schools.graduation_rate.isnull()
print("{:.1f}%".format(100 * len(tech_schools[null_graduation_rate]) / len(tech_schools)))
print("{:.1f}%".format(100 * len(tech_schools[(null_college_career_rate) & \
(null_graduation_rate)]) \
/ len(tech_schools)))
fig, ax = plt.subplots(figsize=(20, 8))
sns.distplot(tech_schools['total_students'], bins=range(0, 6000, 250), kde=False, rug=True)
ax.set_xlabel('Total Students')
ax.set_ylabel('Number of Schools with that Many Students')
plt.suptitle('Number of Schools by Total Students')
tech_schools[(null_college_career_rate) & (null_graduation_rate)]['total_students'].max()
```
It seems that 14% of schools don't have a college career rate figure. Let's plot its distribution to help decide whether we should ignore the column or the schools without that data.
```
import numpy as np
fig, ax = plt.subplots(figsize=(20, 8))
ax.set_xlim(0.05, 1.15)
schools_to_plot = tech_schools[~(null_college_career_rate)]
sns.distplot(schools_to_plot['college_career_rate'], bins=np.arange(0, 1, 0.1))
ax.set_xlabel('College Career Rate')
ax.set_ylabel('Number of Schools with that Rate')
plt.suptitle('Number of Schools by College Career Rate')
fig.savefig('figures/college-career-rate.png', bbox_inches='tight')
```
Since some schools have a really low college career rate, let's keep that column and filter out the schools that are missing it.
Let's do that and also plot the distribution of schools by their number of potential college students.
```
# Copy to avoid chained indexing and the SettingWithCopy warning (http://bit.ly/2kkXW5B)
tech_col_potential = pd.DataFrame(tech_schools, copy=True)
tech_col_potential.dropna(subset=['college_career_rate'], inplace=True)
tech_col_potential['potential_college_students'] = (tech_col_potential['total_students'] *\
tech_col_potential['college_career_rate'])\
.astype(int)
tech_col_potential.sort_values('potential_college_students', inplace=True, ascending=False)
tech_col_potential.head(3)
fig, ax = plt.subplots(figsize=(20, 8))
sns.distplot(tech_col_potential['potential_college_students'], bins=range(0, 6000, 250),
kde=False, rug=True)
ax.set_xlabel('Potential College Students')
ax.set_ylabel('Number of Schools')
plt.suptitle('Number of Schools by Potential College Students')
plt.savefig('figures/potential-college-students.png', bbox_inches='tight')
high_potential = tech_col_potential['potential_college_students'] > 1000
high_potential_schools = tech_col_potential[high_potential]
len(high_potential_schools)
```
There seems to be a big gap in the number of schools with more than 1000 potential college students as compared to the number of schools with fewer potential college students.
Since we want to reduce the number of recommended stations by at least 90% and there are 24 schools with at least 1000 potential college students let's filter those and ignore the other ones.
Next, let's examine the `subway` and `bus` columns, which tell us which subway and bus lines are near each school.
```
high_potential_schools.loc[:, ('subway', 'bus')]
high_potential_schools['subway_nearby'] = high_potential_schools.apply(
    lambda x: 'no subway' if pd.isnull(x['subway']) else 'subway nearby',
    axis='columns')
high_potential_schools['subway_nearby']
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 8))
high_potential_schools['all'] = ""
ax1 = sns.violinplot(data=high_potential_schools, x='all', y="college_career_rate",
hue="subway_nearby", split=True, inner="quartile", ax=ax1)
ax2 = sns.violinplot(data=high_potential_schools, x='all', y="total_students",
hue="subway_nearby", split=True, inner="quartile", ax=ax2)
ax1.set_xlabel("")
ax2.set_xlabel("")
sns.despine(left=True)
plt.suptitle('College Career Rate and Total Students by Subway Nearby')
fig.savefig('figures/subway-vs-no-subway.png', bbox_inches='tight')
```
Notice how the 75th percentile of college career rate in high schools with a subway nearby is much higher. Also notice that the schools with the highest number of students all seem to have a subway nearby.
Going forward we will filter schools without a subway station nearby.
```
# Copy to avoid chained indexing and the SettingWithCopy warning (http://bit.ly/2kkXW5B)
close_to_subway = pd.DataFrame(high_potential_schools, copy=True)
close_to_subway.dropna(subset=['subway'], inplace=True)
close_to_subway
```
Let's turn our attention to the `location` column. We have to extract latitude and longitude in order to be able to match this dataset with the subway stations location coordinates. Let's use `add_coord_columns()` which is defined in `coordinates.py`.
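`add_coord_columns()` itself isn't shown in this notebook. As a rough sketch of what such a helper might do — assuming the `location` strings end with a `(lat, lon)` pair, and using a made-up address for illustration:

```python
import pandas as pd

def add_coord_columns(frame, location_col):
    # Hypothetical sketch of the helper in coordinates.py: pull the trailing
    # "(lat, lon)" pair out of a free-text location field.
    coords = frame[location_col].str.extract(r'\((-?\d+\.\d+),\s*(-?\d+\.\d+)\)')
    frame['latitude'] = coords[0].astype(float)
    frame['longitude'] = coords[1].astype(float)

demo = pd.DataFrame({'location': ['500 Example St, Brooklyn NY 11201 (40.6915, -73.9871)']})
add_coord_columns(demo, 'location')
```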
```
import coordinates as coord
coord.add_coord_columns(close_to_subway, 'location')
close_to_subway.loc[:, ('latitude', 'longitude')]
```
Let's plot the schools' coordinates to see their geographical distribution:
```
!pip install folium
import folium
close_to_subway_map = folium.Map([40.72, -73.92], zoom_start=11, tiles='CartoDB positron',
width='60%')
for i, school in close_to_subway.iterrows():
marker = folium.RegularPolygonMarker([school['latitude'], school['longitude']],
popup=school['school_name'], color='RoyalBlue',
fill_color='RoyalBlue', radius=5)
marker.add_to(close_to_subway_map)
close_to_subway_map.save('maps/close_to_subway.html')
close_to_subway_map
```
The interactive map is available [here](https://cdn.rawgit.com/gabrielcs/nyc-subway-canvass/master/maps/close_to_subway.html).
It seems like we have all the school data we need to perform the recommendations. Let's just clean the `DataFrame` columns and save it as a `pickle` binary file for later use in another Jupyter notebook.
```
close_to_subway.rename(columns={'subway': 'subway_lines'}, inplace=True)
df_to_pickle = close_to_subway.loc[:, ('school_name', 'potential_college_students', 'latitude',
'longitude', 'start_time', 'end_time', 'subway_lines',
'city')]
df_to_pickle
df_to_pickle.to_pickle('pickle/high_schools.p')
```
# Glove
In this notebook, I'm trying to see if there are unique words that could be combined into larger categories to help documents match, using word similarity scores from a pretrained GloVe model.
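The similarity scores gensim exposes are cosine-based: `distance()` is one minus the cosine similarity of two embedding vectors. A toy sketch with made-up 3-d vectors (the real GloVe vectors used below are 100-d):

```python
import numpy as np

def cosine_distance(a, b):
    # gensim's KeyedVectors.distance() is 1 - cosine similarity
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# made-up embedding vectors for illustration, not real GloVe entries
cat = np.array([0.9, 0.1, 0.0])
dog = np.array([0.8, 0.2, 0.1])
car = np.array([0.0, 0.1, 0.9])

# semantically close words should get a smaller distance
print(cosine_distance(cat, dog), cosine_distance(cat, car))
```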
```
import pandas as pd
import numpy as np
from pathlib import Path
from gensim import corpora
from gensim.models import KeyedVectors
from budget_corpus import read_documents
import hdbscan
corpus = read_documents()
dictionary = corpora.Dictionary(tokens for tokens in corpus)
glove_file = '/Users/jlc/Downloads/glove.6B/glove.6B.100d.txt'
w2v_file = '/Users/jlc/Downloads/glove.6B/glove.6B.100d.txt.w2v'
glove = KeyedVectors.load_word2vec_format(w2v_file, binary=False)
rare_words = []
for tokenid, count in dictionary.dfs.items():
if count < 5:
rare_words.append(dictionary[tokenid])
prune = []
for word in rare_words:
try:
glove.word_vec(word)
except KeyError:
prune.append(word)
for word in prune:
rare_words.remove(word)
len(rare_words)
distance_arr = np.zeros((len(rare_words), len(rare_words)))
for i, wordi in enumerate(rare_words):
for j, wordj in enumerate(rare_words):
if i < j:
distance_arr[i,j] = glove.distance(wordi, wordj)
distance_arr[j,i] = distance_arr[i,j]
elif i == j:
distance_arr[i,j] = 0
hdb = hdbscan.HDBSCAN(metric='precomputed',
approx_min_span_tree=False,
cluster_selection_method='leaf',
alpha=.0001)
hdb.fit(distance_arr)
labels = pd.Series(hdb.labels_)
labels.value_counts()
for i in range(81):
print(f'--- word cluster {i} ----')
print([w for iw, w in enumerate(rare_words) if labels[iw] == i])
```
Some of these are pretty interesting: cluster 1 points to environmental issues, cluster 5 to human rights, cluster 7 to education, and cluster 10 to public health. Cluster 0 is maybe words ending in 'e'?? Looking at cluster 15, I get the impression that GloVe is better at clustering nouns (at least in English).
# and what words are closest to butterfly?
Butterfly was unclustered, but we can still look at the distance
array and see what is closest.
```
rare_words.index('butterfly')
distance_arr[838]
series = pd.Series(distance_arr[838])
series.sort_values().head(10)
words = series.sort_values().head(10).index
printable = [ rare_words[i] for i in words ]
printable
```
# Expanding to the full dictionary
```
all_words = list(dictionary.values())
prune = []
for word in all_words:
try:
glove.word_vec(word)
except KeyError:
prune.append(word)
for word in prune:
all_words.remove(word)
len(all_words)
all_distance_arr = np.zeros((len(all_words), len(all_words)))
for i, wordi in enumerate(all_words):
for j, wordj in enumerate(all_words):
if i < j:
all_distance_arr[i,j] = glove.distance(wordi, wordj)
all_distance_arr[j,i] = all_distance_arr[i,j]
elif i == j:
all_distance_arr[i,j] = 0
hdb = hdbscan.HDBSCAN(metric='precomputed',
approx_min_span_tree=False,
cluster_selection_method='leaf',
alpha=.0001)
hdb.fit(all_distance_arr)
labels = pd.Series(hdb.labels_)
labels.value_counts()
for i in range(117):
print(f'--- word cluster {i} ----')
print([w for iw, w in enumerate(all_words) if labels[iw] == i])
all_words.index('butterfly')
butterfly_links = [all_words[i] for i in pd.Series(all_distance_arr[2095]).sort_values().head(10).index]
butterfly_links
```
The butterfly is an animal, but also an event in Olympic swimming.
## Image网 Submission `128x128`
This contains a submission for the Image网 leaderboard in the `128x128` category.
In this notebook we:
1. Train on 1 pretext task:
- Train a network to do image inpainting on Image网's `/train`, `/unsup` and `/val` images.
2. Train on 4 downstream tasks:
- We load the pretext weights and train for `5` epochs.
- We load the pretext weights and train for `20` epochs.
- We load the pretext weights and train for `80` epochs.
- We load the pretext weights and train for `200` epochs.
Our leaderboard submissions are the accuracies we get on each of the downstream tasks.
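The inpainting pairing can be sketched in plain NumPy before diving into the fastai pipeline: the network's input is the image with random rectangles zeroed out, and its target is the untouched original.

```python
import numpy as np

# Plain-NumPy sketch of the inpainting pretext pairing (the actual pipeline
# below uses a fastai transform): the input is the image with holes cut out,
# and the target is the original.
rng = np.random.default_rng(0)
image = rng.random((32, 32, 3)).astype(np.float32)

corrupted = image.copy()
y, x = rng.integers(0, 24, size=2)
corrupted[y:y + 8, x:x + 8, :] = 0.0   # one hole; RandomCutout below cuts several

model_input, target = corrupted, image
```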
```
import json
import torch
import numpy as np
from functools import partial
from fastai2.layers import Mish, MaxPool, LabelSmoothingCrossEntropy
from fastai2.learner import Learner
from fastai2.metrics import accuracy, top_k_accuracy
from fastai2.basics import DataBlock, RandomSplitter, GrandparentSplitter, CategoryBlock
from fastai2.optimizer import ranger, Adam, SGD, RMSProp
from fastai2.vision.all import *
from fastai2.vision.core import *
from fastai2.vision.augment import *
from fastai2.vision.learner import unet_learner, unet_config
from fastai2.vision.models.xresnet import xresnet50, xresnet34
from fastai2.data.transforms import Normalize, parent_label
from fastai2.data.external import download_url, URLs, untar_data
from fastcore.utils import num_cpus
from torch.nn import MSELoss
from torchvision.models import resnet34
```
## Pretext Task: Image Inpainting
```
# We create this dummy class in order to create a transform that ONLY operates on images of this type
# We will use it to create all input images
class PILImageInput(PILImage): pass
class RandomCutout(RandTransform):
"Zeroes out a random number of rectangular patches of the image"
split_idx = None
def __init__(self, min_n_holes=5, max_n_holes=10, min_length=5, max_length=50, **kwargs):
super().__init__(**kwargs)
self.min_n_holes=min_n_holes
self.max_n_holes=max_n_holes
self.min_length=min_length
self.max_length=max_length
def encodes(self, x:PILImageInput):
"""
Note that we're accepting our dummy PILImageInput class
fastai2 will only pass images of this type to our encoder.
This means that our transform will only be applied to input images and won't
be run against output images.
"""
n_holes = np.random.randint(self.min_n_holes, self.max_n_holes)
pixels = np.array(x) # Convert to mutable numpy array. FeelsBadMan
h,w = pixels.shape[:2]
for n in range(n_holes):
h_length = np.random.randint(self.min_length, self.max_length)
w_length = np.random.randint(self.min_length, self.max_length)
h_y = np.random.randint(0, h)
h_x = np.random.randint(0, w)
y1 = int(np.clip(h_y - h_length / 2, 0, h))
y2 = int(np.clip(h_y + h_length / 2, 0, h))
x1 = int(np.clip(h_x - w_length / 2, 0, w))
x2 = int(np.clip(h_x + w_length / 2, 0, w))
pixels[y1:y2, x1:x2, :] = 0
return Image.fromarray(pixels, mode='RGB')
torch.cuda.set_device(1)
# Default parameters
gpu=None
lr=1e-2
size=128
sqrmom=0.99
mom=0.9
eps=1e-6
epochs=15
bs=64
mixup=0.
opt='ranger'
arch='xresnet50'
sh=0.
sa=0
sym=0
beta=0.
act_fn='Mish'
fp16=0
pool='AvgPool'
dump=0
runs=1
meta=''
# Chosen parameters
lr=8e-3
sqrmom=0.99
mom=0.95
eps=1e-6
bs=64
opt='ranger'
sa=1
fp16=1 #NOTE: My GPU cannot run fp16 :'(
arch='xresnet50'
pool='MaxPool'
gpu=0
# NOTE: Normally loaded from their corresponding string
m = xresnet34
act_fn = Mish
pool = MaxPool
def get_dbunch(size, bs, sh=0., workers=None):
if size<=224:
path = URLs.IMAGEWANG_160
else:
path = URLs.IMAGEWANG
source = untar_data(path)
if workers is None: workers = min(8, num_cpus())
#CHANGE: Input is ImageBlock(cls=PILImageInput)
#CHANGE: Output is ImageBlock
#CHANGE: Splitter is RandomSplitter (instead of on /val folder)
item_tfms=[RandomResizedCrop(size, min_scale=0.35), FlipItem(0.5), RandomCutout]
# batch_tfms=RandomErasing(p=0.9, max_count=3, sh=sh) if sh else None
batch_tfms = [Normalize.from_stats(*imagenet_stats)]
dblock = DataBlock(blocks=(ImageBlock(cls=PILImageInput), ImageBlock),
splitter=RandomSplitter(0.1),
get_items=get_image_files,
get_y=lambda o: o,
item_tfms=item_tfms,
batch_tfms=batch_tfms)
return dblock.dataloaders(source, path=source, bs=bs, num_workers=workers)
name = 'imagewang_inpainting_80_epochs_nopretrain_normalized.pth'
# Use the Ranger optimizer
opt_func = partial(ranger, mom=mom, sqr_mom=sqrmom, eps=eps, beta=beta)
size = 128
bs = 64
dbunch = get_dbunch(size, bs, sh=sh)
#CHANGE: We're predicting pixel values, so we're just going to predict an output for each RGB channel
dbunch.vocab = ['R', 'G', 'B']
len(dbunch.train.dataset), len(dbunch.valid.dataset)
dbunch.show_batch()
learn = unet_learner(dbunch, partial(m, sa=sa), pretrained=False, opt_func=opt_func, metrics=[], loss_func=MSELoss()).to_fp16()
cbs = MixUp(mixup) if mixup else []
learn.fit_flat_cos(80, lr, wd=1e-2, cbs=cbs)
# I'm not using fastai2's .export() because I only want to save
# the model's parameters.
torch.save(learn.model[0].state_dict(), name)
```
## Downstream Task: Image Classification
```
def get_dbunch(size, bs, sh=0., workers=None):
if size<=224:
path = URLs.IMAGEWANG_160
else:
path = URLs.IMAGEWANG
source = untar_data(path)
if workers is None: workers = min(8, num_cpus())
item_tfms=[RandomResizedCrop(size, min_scale=0.35), FlipItem(0.5)]
batch_tfms = [Normalize.from_stats(*imagenet_stats)]
# batch_tfms=RandomErasing(p=0.9, max_count=3, sh=sh) if sh else None
dblock = DataBlock(blocks=(ImageBlock, CategoryBlock),
splitter=GrandparentSplitter(valid_name='val'),
get_items=get_image_files, get_y=parent_label,
item_tfms=item_tfms, batch_tfms=batch_tfms)
return dblock.dataloaders(source, path=source, bs=bs, num_workers=workers,
)#item_tfms=item_tfms, batch_tfms=batch_tfms)
dbunch = get_dbunch(size, bs, sh=sh)
m_part = partial(m, c_out=20, act_cls=torch.nn.ReLU, sa=sa, sym=sym, pool=pool)
```
### 5 Epochs
```
epochs = 5
runs = 5
for run in range(runs):
print(f'Run: {run}')
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func, pretrained=False,
metrics=[accuracy,top_k_accuracy], loss_func=LabelSmoothingCrossEntropy(),
config={'custom_head':ch})
if dump: print(learn.model); exit()
if fp16: learn = learn.to_fp16()
cbs = MixUp(mixup) if mixup else []
# # Load weights generated from training on our pretext task
state_dict = torch.load(name)
learn.model[0].load_state_dict(state_dict)
learn.unfreeze()
learn.fit_flat_cos(epochs, lr, wd=1e-2, cbs=cbs)
```
* Run 1: 0.362942
* Run 2: 0.372868
* Run 3: 0.342326
* Run 4: 0.360143
* Run 5: 0.357088
Accuracy: **35.91%**
### 20 Epochs
```
epochs = 20
runs = 3
for run in range(runs):
print(f'Run: {run}')
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func, pretrained=False,
metrics=[accuracy,top_k_accuracy], loss_func=LabelSmoothingCrossEntropy(),
config={'custom_head':ch})
if dump: print(learn.model); exit()
if fp16: learn = learn.to_fp16()
cbs = MixUp(mixup) if mixup else []
# # Load weights generated from training on our pretext task
state_dict = torch.load(name)
learn.model[0].load_state_dict(state_dict)
learn.unfreeze()
learn.fit_flat_cos(epochs, lr, wd=1e-2, cbs=cbs)
```
* Run 1: 0.592263
* Run 2: 0.588445
* Run 3: 0.595571
Accuracy: **59.21%**
### 80 Epochs
```
epochs = 80
runs = 1
for run in range(runs):
print(f'Run: {run}')
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func, pretrained=False,
metrics=[accuracy,top_k_accuracy], loss_func=LabelSmoothingCrossEntropy(),
config={'custom_head':ch})
if dump: print(learn.model); exit()
if fp16: learn = learn.to_fp16()
cbs = MixUp(mixup) if mixup else []
# # Load weights generated from training on our pretext task
state_dict = torch.load(name)
learn.model[0].load_state_dict(state_dict)
learn.unfreeze()
learn.fit_flat_cos(epochs, lr, wd=1e-2, cbs=cbs)
```
Accuracy: **61.44%**
### 200 Epochs
```
epochs = 200
runs = 1
for run in range(runs):
print(f'Run: {run}')
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func, pretrained=False,
metrics=[accuracy,top_k_accuracy], loss_func=LabelSmoothingCrossEntropy(),
config={'custom_head':ch})
if dump: print(learn.model); exit()
if fp16: learn = learn.to_fp16()
cbs = MixUp(mixup) if mixup else []
# # Load weights generated from training on our pretext task
state_dict = torch.load('imagewang_inpainting_15_epochs_nopretrain.pth')
learn.model[0].load_state_dict(state_dict)
learn.unfreeze()
learn.fit_flat_cos(epochs, lr, wd=1e-2, cbs=cbs)
```
Accuracy: **59.18%**
# Comparing TensorFlow (original) and PyTorch models
You can use this small notebook to check the conversion of the model's weights from the TensorFlow model to the PyTorch model. In the following, we compare the weights of the last layer on a simple example (in `input.txt`), but both models return all the hidden layers so you can check every stage of the model.
To run this notebook, follow these instructions:
- make sure that your Python environment has both TensorFlow and PyTorch installed,
- download the original TensorFlow implementation,
- download a pre-trained TensorFlow model as indicated in the TensorFlow implementation readme,
- run the script `convert_tf_checkpoint_to_pytorch.py` as indicated in the `README` to convert the pre-trained TensorFlow model to PyTorch.
If needed, change the relative paths indicated in this notebook (at the beginning of Sections 1 and 2) to point to the relevant models and code.
```
import os
os.chdir('../')
```
## 1/ TensorFlow code
```
original_tf_inplem_dir = "./tensorflow_code/"
model_dir = "../google_models/uncased_L-12_H-768_A-12/"
vocab_file = model_dir + "vocab.txt"
bert_config_file = model_dir + "bert_config.json"
init_checkpoint = model_dir + "bert_model.ckpt"
input_file = "./samples/input.txt"
max_seq_length = 128
import importlib.util
import sys
spec = importlib.util.spec_from_file_location('*', original_tf_inplem_dir + '/extract_features_tensorflow.py')
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
sys.modules['extract_features_tensorflow'] = module
from extract_features_tensorflow import *
layer_indexes = list(range(12))
bert_config = modeling.BertConfig.from_json_file(bert_config_file)
tokenizer = tokenization.FullTokenizer(
vocab_file=vocab_file, do_lower_case=True)
examples = read_examples(input_file)
features = convert_examples_to_features(
examples=examples, seq_length=max_seq_length, tokenizer=tokenizer)
unique_id_to_feature = {}
for feature in features:
unique_id_to_feature[feature.unique_id] = feature
is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
run_config = tf.contrib.tpu.RunConfig(
master=None,
tpu_config=tf.contrib.tpu.TPUConfig(
num_shards=1,
per_host_input_for_training=is_per_host))
model_fn = model_fn_builder(
bert_config=bert_config,
init_checkpoint=init_checkpoint,
layer_indexes=layer_indexes,
use_tpu=False,
use_one_hot_embeddings=False)
# If TPU is not available, this will fall back to normal Estimator on CPU
# or GPU.
estimator = tf.contrib.tpu.TPUEstimator(
use_tpu=False,
model_fn=model_fn,
config=run_config,
predict_batch_size=1)
input_fn = input_fn_builder(
features=features, seq_length=max_seq_length)
tensorflow_all_out = []
for result in estimator.predict(input_fn, yield_single_examples=True):
unique_id = int(result["unique_id"])
feature = unique_id_to_feature[unique_id]
output_json = collections.OrderedDict()
output_json["linex_index"] = unique_id
tensorflow_all_out_features = []
# for (i, token) in enumerate(feature.tokens):
all_layers = []
for (j, layer_index) in enumerate(layer_indexes):
print("extracting layer {}".format(j))
layer_output = result["layer_output_%d" % j]
layers = collections.OrderedDict()
layers["index"] = layer_index
layers["values"] = layer_output
all_layers.append(layers)
tensorflow_out_features = collections.OrderedDict()
tensorflow_out_features["layers"] = all_layers
tensorflow_all_out_features.append(tensorflow_out_features)
output_json["features"] = tensorflow_all_out_features
tensorflow_all_out.append(output_json)
print(len(tensorflow_all_out))
print(len(tensorflow_all_out[0]))
print(tensorflow_all_out[0].keys())
print("number of tokens", len(tensorflow_all_out[0]['features']))
print("number of layers", len(tensorflow_all_out[0]['features'][0]['layers']))
tensorflow_all_out[0]['features'][0]['layers'][0]['values'].shape
tensorflow_outputs = list(tensorflow_all_out[0]['features'][0]['layers'][t]['values'] for t in layer_indexes)
```
## 2/ PyTorch code
```
os.chdir('./examples')
import extract_features
import pytorch_pretrained_bert as ppb
from extract_features import *
init_checkpoint_pt = "../../google_models/uncased_L-12_H-768_A-12/"
device = torch.device("cpu")
model = ppb.BertModel.from_pretrained(init_checkpoint_pt)
model.to(device)
all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_input_mask = torch.tensor([f.input_mask for f in features], dtype=torch.long)
all_input_type_ids = torch.tensor([f.input_type_ids for f in features], dtype=torch.long)
all_example_index = torch.arange(all_input_ids.size(0), dtype=torch.long)
eval_data = TensorDataset(all_input_ids, all_input_mask, all_input_type_ids, all_example_index)
eval_sampler = SequentialSampler(eval_data)
eval_dataloader = DataLoader(eval_data, sampler=eval_sampler, batch_size=1)
model.eval()
layer_indexes = list(range(12))
pytorch_all_out = []
for input_ids, input_mask, input_type_ids, example_indices in eval_dataloader:
print(input_ids)
print(input_mask)
print(example_indices)
input_ids = input_ids.to(device)
input_mask = input_mask.to(device)
all_encoder_layers, _ = model(input_ids, token_type_ids=input_type_ids, attention_mask=input_mask)
for b, example_index in enumerate(example_indices):
feature = features[example_index.item()]
unique_id = int(feature.unique_id)
# feature = unique_id_to_feature[unique_id]
output_json = collections.OrderedDict()
output_json["linex_index"] = unique_id
all_out_features = []
# for (i, token) in enumerate(feature.tokens):
all_layers = []
for (j, layer_index) in enumerate(layer_indexes):
print("layer", j, layer_index)
layer_output = all_encoder_layers[int(layer_index)].detach().cpu().numpy()
layer_output = layer_output[b]
layers = collections.OrderedDict()
layers["index"] = layer_index
layers["values"] = layer_output if not isinstance(layer_output, (int, float)) else [layer_output]
all_layers.append(layers)
out_features = collections.OrderedDict()
out_features["layers"] = all_layers
all_out_features.append(out_features)
output_json["features"] = all_out_features
pytorch_all_out.append(output_json)
print(len(pytorch_all_out))
print(len(pytorch_all_out[0]))
print(pytorch_all_out[0].keys())
print("number of tokens", len(pytorch_all_out))
print("number of layers", len(pytorch_all_out[0]['features'][0]['layers']))
print("hidden_size", len(pytorch_all_out[0]['features'][0]['layers'][0]['values']))
pytorch_all_out[0]['features'][0]['layers'][0]['values'].shape
pytorch_outputs = list(pytorch_all_out[0]['features'][0]['layers'][t]['values'] for t in layer_indexes)
print(pytorch_outputs[0].shape)
print(pytorch_outputs[1].shape)
print(tensorflow_outputs[0].shape)
print(tensorflow_outputs[1].shape)
```
## 3/ Comparing the standard deviation on the last layer of both models
```
import numpy as np
print('shape tensorflow layer, shape pytorch layer, standard deviation')
print('\n'.join(list(str((np.array(tensorflow_outputs[i]).shape,
np.array(pytorch_outputs[i]).shape,
np.sqrt(np.mean((np.array(tensorflow_outputs[i]) - np.array(pytorch_outputs[i]))**2.0)))) for i in range(12))))
```
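Besides the per-layer standard deviation, a stricter elementwise check is possible. The arrays below are synthetic stand-ins for the two frameworks' activations (running both models isn't assumed here); with a correct weight conversion the real outputs should agree to within float32 noise:

```python
import numpy as np

# Synthetic stand-ins for tensorflow_outputs[i] and pytorch_outputs[i]
tf_layer = np.random.RandomState(0).randn(128, 768).astype(np.float32)
pt_layer = tf_layer + np.float32(1e-7) * np.random.RandomState(1).randn(128, 768).astype(np.float32)

# with a faithful conversion, every element should match to float32 precision
max_abs_diff = float(np.max(np.abs(tf_layer - pt_layer)))
print(max_abs_diff)
print(np.allclose(tf_layer, pt_layer, atol=1e-5))
```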
## Analytics Vidhya Practise problem - HR Analytics
#### Step 1: Define your problem
- The main objective is to predict an employee's promotion. Binary classification problem.
----------
#### Step 2: Hypothesis generation
What can affect an employee promotion?
- Performance & KPIs
- How much impact s(he) made on the business
- How long has s(he) been associated with the company?
- Have they won any awards?
-------
#### Step 3: Understanding the dataset
<b> 3.1 - Import necessary libs and read data </b>
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
train = pd.read_csv('Practise/HR Analytics/train.csv')
test = pd.read_csv('Practise/HR Analytics/test.csv')
train.head(1)
print(train.dtypes)
print(train.shape)
print(test.shape)
```
Now we know about our predictors & target features. <br>
We also understand the data type of each feature
<br>
Understanding continuous & categorical variables accordingly
```
train.describe()
```
Here we can see - <br>
{{Some insights in bullets}}
```
cat_var = train.dtypes.loc[train.dtypes == 'object'].index
print(cat_var)
train['department'].value_counts()/train.shape[0]
train['region'].value_counts()/train.shape[0]
train['education'].value_counts()/train.shape[0]
# train['gender'].value_counts()/train.shape[0]
# train['recruitment_channel'].value_counts()/train.shape[0]
```
-------
#### Step 4: Data Preparation
<b> 4.1 - Missing value treatment </b>
```
train.isnull().sum()
test.isnull().sum()
```
Here we have to deal with cat and con vars separately. <br>
- Either we can drop the NaN observations
- Or we can impute them based on mean (con) or mode (cat) <br>
<i> Dealing with con vars </i>
```
from sklearn.preprocessing import Imputer
imputer = Imputer(missing_values = 'NaN', strategy = 'mean')
imputer = imputer.fit(train.iloc[:, 8:9])
train.iloc[:, 8:9] = imputer.transform(train.iloc[:, 8:9])
# now for test
from sklearn.preprocessing import Imputer
imputer = Imputer(missing_values = 'NaN', strategy = 'mean')
imputer = imputer.fit(test.iloc[:, 8:9])
test.iloc[:, 8:9] = imputer.transform(test.iloc[:, 8:9])
```
<i> Dealing with cat vars </i>
```
# education is a categorical var, so mode imputation
# (".loc" avoids the chained-assignment warning)
Education_Null_Indices = train[train.education.isnull()].index
train.loc[Education_Null_Indices, 'education'] = "Bachelor's"
# now for test
Education_Null_Indices = test[test.education.isnull()].index
test.loc[Education_Null_Indices, 'education'] = "Bachelor's"
```
And tada! No missing values! <br> <br>
<b> 4.2 - Outlier treatment </b> <br>
First let us detect outliers using a few different approaches:
* Using Scatter plot
```
fig, ax = plt.subplots(figsize=(16,8))
ax.scatter(train['age'], train['length_of_service'])
ax.set_xlabel('Age')
ax.set_ylabel('Length of Service')
plt.show()
```
* Using Box Plot
```
import seaborn as sns
sns.boxplot(x=train['length_of_service'])
```
* Using Z-Score
```
from scipy import stats
import numpy as np
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
con_train = train.select_dtypes(include=numerics)
z = np.abs(stats.zscore(con_train))
print(np.where(z > 3))
```
* Removing Outliers if Z-Score > 3
```
print(con_train.shape)
con_train = con_train[(z < 3).all(axis=1)]
print(con_train.shape)
```
<b> 4.3 - Redundancy Check </b>
```
train.duplicated('employee_id').sum()
```
<b>4.4 - Imbalance Check </b>
```
train['is_promoted'].value_counts()/train.shape[0]
```
This is a heavily imbalanced dataset. In order to deal with this, we can oversample the minority class, undersample the majority class, or use class weights while training.
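As one option, sketched here on synthetic data (only the `is_promoted` column name is taken from the dataset): upsample the minority class so both classes are equally represented.

```python
import pandas as pd
from sklearn.utils import resample

# synthetic toy frame with the same target column name as this dataset
toy = pd.DataFrame({'feature': range(100),
                    'is_promoted': [1] * 9 + [0] * 91})

majority = toy[toy['is_promoted'] == 0]
minority = toy[toy['is_promoted'] == 1]
# sample the minority class with replacement until it matches the majority
minority_up = resample(minority, replace=True,
                       n_samples=len(majority), random_state=42)
balanced = pd.concat([majority, minority_up])
balanced['is_promoted'].value_counts()
```

Upsampling should be done after the train/test split so duplicated rows don't leak into the validation set.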
----
#### 5. Exploratory Data Analysis
<b>5.1 Univariate Analysis </b>
```
#Histogram
plt.hist(train.age, bins=10)
plt.show()
#Box Plot
plt.boxplot(train.avg_training_score)
plt.show()
#Density Plot
sns.kdeplot(train['age'], shade=True)
plt.show()
```
<b> 5.2 Bivariate Analysis </b>
- Correlation (Con & Con)
```
train.corr()
#Scatter Plot
sns.regplot(x=train["age"], y=train["length_of_service"], fit_reg=False)
plt.show()
#Violin Plot
sns.violinplot(x = 'education', y = 'age', data = train)
```
----
#### 6. Feature Engineering:
<b> 6.1 Feature Transformation </b>
1. Normalisation
```
from sklearn import preprocessing
x = train.values #returns a numpy array
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
import pandas as pd
df = pd.DataFrame(x_scaled)
```
<b> 6.2 Feature Creation </b>
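As an illustration, new features can be derived by combining existing columns — a sketch on a toy frame (the derived column below is an assumption for illustration, not part of the original dataset):

```python
import pandas as pd

df = pd.DataFrame({'previous_year_rating': [3.0, 5.0],
                   'avg_training_score': [60, 85]})
# hypothetical composite feature combining two performance signals
df['performance_index'] = df['previous_year_rating'] * df['avg_training_score']
print(df['performance_index'].tolist())  # [180.0, 425.0]
```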
--------
#### 7. Predictive Modeling:
<b> 7.1 </b>
```
import cv2 as cv
import matplotlib.pyplot as plt
#PROCESSING PIKACHU IMAGE
# I used Pikachu and charmander for this project. You can use any low resolution pokemon images for this
aa = cv.imread(r'C:\Users\Jojo\Desktop\projects\minecraft\pik.png')
aa = cv.resize(aa, (32, 32), interpolation = cv.INTER_CUBIC)
aa = cv.rotate(aa, cv.ROTATE_90_CLOCKWISE)
for i in range(aa.shape[0]):
for j in range(aa.shape[1]):
if aa[i,j,0]>225 and aa[i,j,1]>225 and aa[i,j,2]>225:
aa[i,j,0] = 128
aa[i,j,1] = 128
aa[i,j,2] = 128
#<a-box id="new" color="blue" position="2 2 2" ></a-box>
ind = 1
bval = 1
strlis = []
for i in range(aa.shape[0]):
for j in range(aa.shape[1]):
if aa[i,j,0]!=128 and aa[i,j,1]!=128 and aa[i,j,2]!=128:
str1 = '<a-box id'+'="new'+str(bval)+'"'+ ' color="'
bval = bval + 1
colo = '#%02x%02x%02x' % (aa[i,j,2], aa[i,j,1], aa[i,j,0])
str2 = colo+'"'+' position='
str3 = '"'+str(i)+' '+str(j)+' '+str(ind+100)+'"'+'></a-box>'
strlis.append(str1+str2+str3)
ind = ind + 1
#PROCESSING CHARMANDER IMAGE
aa = cv.imread(r'C:\Users\Jojo\Desktop\projects\minecraft\char.png')
aa = cv.resize(aa, (32, 32), interpolation = cv.INTER_CUBIC)
aa = cv.rotate(aa, cv.ROTATE_90_CLOCKWISE)
for i in range(aa.shape[0]):
for j in range(aa.shape[1]):
if aa[i,j,0]>225 and aa[i,j,1]>225 and aa[i,j,2]>225:
aa[i,j,0] = 128
aa[i,j,1] = 128
aa[i,j,2] = 128
#<a-box id="new" color="blue" position="2 2 2" ></a-box>
ind = 1
strlis2 = []
for i in range(aa.shape[0]):
for j in range(aa.shape[1]):
if aa[i,j,0]!=128 and aa[i,j,1]!=128 and aa[i,j,2]!=128:
str1 = '<a-box id="new" color="'
colo = '#%02x%02x%02x' % (aa[i,j,2], aa[i,j,1], aa[i,j,0])
str2 = colo+'"'+' position='
str3 = '"'+str(i-40)+' '+str(j)+' '+str(ind+100)+'"'+'></a-box>'
strlis2.append(str1+str2+str3)
ind = ind + 1
#pikdestroy list contains code for destroying the pikachu figure. It's very simple (just changing the x-coordinates of the boxes)
pikdestroy = []
for i in range(1,467):
str1 = "document.getElementById('new"+str(i)+"').object3D.position.x += Math.floor(Math.random() * 10);"
pikdestroy.append(str1)
myfile = open(r"C:\Users\Jojo\Desktop\projects\minecraft\pik22.txt", "w")
for line in strlis:
#var1, var2 = line.split(",");
myfile.write("%s\n" % line)
for line in strlis2:
#var1, var2 = line.split(",");
myfile.write("%s\n" % line)
myfile.close()
```
# Simulation
```
import datetime
import time
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from graspologic.simulations import er_corr
from pkg.io import FIG_PATH, OUT_PATH
from pkg.io import glue as default_glue
from pkg.io import savefig
from pkg.match import GraphMatchSolver
from pkg.plot import method_palette, set_theme
from tqdm import tqdm
DISPLAY_FIGS = True
FILENAME = "simulations"
OUT_PATH = OUT_PATH / FILENAME
FIG_PATH = FIG_PATH / FILENAME
def glue(name, var, **kwargs):
default_glue(name, var, FILENAME, **kwargs)
def gluefig(name, fig, **kwargs):
savefig(name, foldername=FILENAME, **kwargs)
glue(name, fig, figure=True)
if not DISPLAY_FIGS:
plt.close()
t0 = time.time()
set_theme()
rng = np.random.default_rng(8888)
np.random.seed(88888888)
```
## Model
```
n_side = 10
glue("n_side", n_side)
n_sims = 1000
glue("n_sims", n_sims, form="long")
ipsi_rho = 0.8
glue("ipsi_rho", ipsi_rho)
ipsi_p = 0.3
glue("ipsi_p", ipsi_p)
contra_p = 0.2
glue("contra_p", contra_p)
```
- Let the directed correlated Erdos-Renyi model be written as $CorrER(n, p, \rho)$, where $n$ is
the number of nodes, $p$ is the density, and $\rho$ is the correlation between the
two networks.
- The ipsilateral subgraphs were sampled from a $CorrER$ model:
- $A_{LL}^{'}, A_{RR}^{'} \sim CorrER(${glue:text}`simulations-n_side`, {glue:text}`simulations-ipsi_p`, {glue:text}`simulations-ipsi_rho`$)$
- Independently from the ipsilateral networks, the contralateral subgraphs were also sampled from a $CorrER$ model:
- $A_{LR}^{'}, A_{RL}^{'} \sim CorrER(${glue:text}`simulations-n_side`, {glue:text}`simulations-contra_p`, $\rho_{contra})$
- The full network was then defined as
- $A^{'} = \begin{bmatrix} A_{LL}^{'} & A_{LR}^{'}\\ A_{RL}^{'} & A_{RR}^{'} \end{bmatrix}$
- A random permutation was applied to the nodes of the "right hemisphere" in each sampled network:
- $A = \begin{bmatrix} I_n & 0 \\ 0 & P_{rand} \end{bmatrix} A{'} \begin{bmatrix} I_n & 0 \\ 0 & P_{rand} \end{bmatrix}^T = \begin{bmatrix} A_{LL}^{'} & A_{LR}^{'} P_{rand}^T \\ P_{rand} A_{RL}^{'} & P_{rand} A_{RR}^{'} P_{rand}^T \end{bmatrix}$
- Thus we can write
- $A_{LL} = A_{LL}^{'}$
- $A_{RR} = P_{rand} A_{RR}^{'} P_{rand}^T$
- $A_{LR} = A_{LR}^{'} P_{rand}^T$
- $A_{RL} = P_{rand} A_{RL}^{'} $
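The block identities above can be checked numerically — a small sketch with random 0/1 matrices and an explicit permutation matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# random binary subgraph blocks
A_ll = rng.integers(0, 2, (n, n))
A_lr = rng.integers(0, 2, (n, n))
A_rl = rng.integers(0, 2, (n, n))
A_rr = rng.integers(0, 2, (n, n))
# build a random permutation matrix P_rand
perm = rng.permutation(n)
P = np.eye(n, dtype=int)[perm]
# full block matrix A' and the block-diagonal permutation
A_prime = np.block([[A_ll, A_lr], [A_rl, A_rr]])
Q = np.block([[np.eye(n, dtype=int), np.zeros((n, n), dtype=int)],
              [np.zeros((n, n), dtype=int), P]])
A = Q @ A_prime @ Q.T
# verify the four block identities
assert np.array_equal(A[:n, :n], A_ll)
assert np.array_equal(A[n:, n:], P @ A_rr @ P.T)
assert np.array_equal(A[:n, n:], A_lr @ P.T)
assert np.array_equal(A[n:, :n], P @ A_rl)
```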
## Experiment
- $\rho_{contra}$ was varied from 0 to 1.
- For each value of $\rho_{contra}$, {glue:text}`simulations-n_sims` networks were sampled according to the model above.
- For each sampled network, two algorithms were applied to try to recover the alignment between the left and the right:
- Graph matching **(GM)**, using only the ipsilateral subgraphs $A_{LR}$ and $A_{RR}$.
- $\min_{P} \|A_{LL} - P A_{RR} P^T\|_F^2$
- Bisected graph matching **(BGM)**, using ipsilateral and contralateral subgraphs:
- $\min_{P} \|A_{LL} - P A_{RR} P^T\|_F^2 + \|A_{LR} P^T - P A_{RL}\|_F^2$
- Both algorithms were run with the same (default) settings: one initialization at the barycenter,
maximum 30 Frank-Wolfe (FW) iterations, stopping tolerance (on the norm of the difference
between solutions at each FW iteration) of 0.01.
- For each sampled network and algorithm, we computed the matching accuracy for the
recovered permutation.
```
rows = []
for contra_rho in np.linspace(0, 1, 11):
for sim in tqdm(range(n_sims), leave=False, desc=str(contra_rho)):
# simulate the correlated subgraphs
A, B = er_corr(n_side, ipsi_p, ipsi_rho, directed=True)
AB, BA = er_corr(n_side, contra_p, contra_rho, directed=True)
# permute one side as appropriate
perm = rng.permutation(n_side)
undo_perm = np.argsort(perm)
B = B[perm][:, perm]
AB = AB[:, perm]
BA = BA[perm, :]
# run the matching
for method in ["GM", "BGM"]:
if method == "GM":
solver = GraphMatchSolver(A, B)
elif method == "BGM":
solver = GraphMatchSolver(A, B, AB=AB, BA=BA)
solver.solve()
match_ratio = (solver.permutation_ == undo_perm).mean()
rows.append(
{
"ipsi_rho": ipsi_rho,
"contra_rho": contra_rho,
"match_ratio": match_ratio,
"sim": sim,
"method": method,
}
)
results = pd.DataFrame(rows)
```
## Results
Below, the mean matching accuracy is plotted as a function of the strength of the
contralateral correlation $\rho_{contra}$ for each of the two algorithms. Shaded
regions show 95% confidence intervals.
```
zero_acc = results[results["contra_rho"] == 0].groupby("method")["match_ratio"].mean()
zero_diff = zero_acc[1] - zero_acc[0]
glue("zero_diff", zero_diff, form="2.0f%")
point_9_acc = (
results[results["contra_rho"] == 0.9].groupby("method")["match_ratio"].mean()
)
point_9_diff = point_9_acc[0] - point_9_acc[1]
glue("point_9_diff", point_9_diff, form="2.0f%")
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
sns.lineplot(
data=results,
x="contra_rho",
y="match_ratio",
hue="method",
style="method",
hue_order=["GM", "BGM"],
dashes={"GM": (3, 1), "BGM": ""},
ax=ax,
palette=method_palette,
)
ax.set_ylabel("Matching accuracy")
ax.set_xlabel("Contralateral edge correlation")
sns.move_legend(ax, loc="upper left", title="Method", frameon=True)
gluefig("match_ratio_by_contra_rho", fig)
```
## End
```
elapsed = time.time() - t0
delta = datetime.timedelta(seconds=elapsed)
print(f"Script took {delta}")
print(f"Completed at {datetime.datetime.now()}")
```
```
import numpy as np
from importlib import reload
import time
import nnmath
import ann
import cnn
training_input = np.load('training_input.npy')
training_output = np.load('training_output.npy')
test_input = np.load('test_input.npy')
test_output = np.load('test_output.npy')
validation_input = np.load('validation_input.npy')
validation_output = np.load('validation_output.npy')
weights0 = np.load('weights0.npy')
weights1 = np.load('weights1.npy')
biases0 = np.load('biases0.npy')
biases1 = np.load('biases1.npy')
ndim_input = 784
image_shape = (1, 28, 28)
ndim_output = 10
n_training = training_input.shape[0]
n_test = test_input.shape[0]
n_validation = validation_input.shape[0]
training_data = (training_input.reshape(np.r_[n_training, np.asarray(image_shape)]), training_output)
test_data = (test_input.reshape(np.r_[n_test, np.asarray(image_shape)]), test_output)
validation_data = (validation_input.reshape(np.r_[n_validation, np.asarray(image_shape)]), validation_output)
n = 5000
small_training = (training_input[0:n].reshape(np.r_[n, np.asarray(image_shape)]), training_output[0:n])
n = 1000
small_test = (test_input[0:n].reshape(np.r_[n, np.asarray(image_shape)]), test_output[0:n])
reload(nnmath)
reload(nnmath)
reload(cnn)
reload(cnn)
feature_shape = (6, 5, 5)
cplayer = cnn.ConvPoolLayer(image_shape, feature_shape)
cplayer.set_mini_batch_size(10)
#feature_shape2 = (12, 5, 5)
#cplayer2 = cnn.ConvPoolLayer((4,12,12), feature_shape2)
fcplayer = cnn.FullyConnectedLayer(6*12*12, 10)
net = cnn.Network((cplayer, fcplayer), cfunc='crossentropy')
#net = cnn.Network((cplayer, cplayer2, fcplayer), cfunc='crossentropy')
#outlayer = cnn.FullyConnectedLayer(20, 10)
#net = cnn.Network((cplayer, fcplayer, outlayer), cfunc='crossentropy')
net.set_parameters(stepsize=0.5, overfit_lambda=0E-0/50000.)
net.set_mini_batch_size(13)
net.max_epochs = 10
t0 = time.time()
net.SGD(small_training, test_data=small_test)
#net.SGD(training_data, test_data=test_data)
t1 = time.time()
print("Time elapsed {0:6.3f} secs".format(t1-t0))
f_fcplayer = cnn.FullyConnectedLayer(1*28*28, 100)
f_outlayer = cnn.FullyConnectedLayer(100, 10)
fnet = cnn.Network((f_fcplayer, f_outlayer), cfunc='crossentropy')
fnet.set_parameters(stepsize=0.125, overfit_lambda=2E-2/50000.)
fnet.set_mini_batch_size(12)
fnet.max_epochs = 10
t0 = time.time()
#fnet.SGD(small_training, test_data=small_test)
fnet.SGD(training_data, test_data=test_data)
t1 = time.time()
print("Time elapsed {0:6.3f} secs".format(t1-t0))
test_results = np.argmax(fnet.feedforward(validation_data[0]), axis=1)
#tmp_test_data = np.argmax(test_data[1], axis=1)
test_results_random = np.random.choice(np.arange(10), size=(test_results.shape))
fig = plt.figure(figsize=(20,4))
#fig.subplots_adjust(left=0, bottom=0, right=1, top=1,
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=0, hspace=0)
axnum = 0
#plt.subplots(figsize=(20, 4))
select_number = 7
index = np.where(test_results==select_number)[0]
tmp_image = validation_data[0][index][np.random.choice(np.arange(index.size), size=(20))].reshape(20, 28, 28)
output = np.zeros((28*2, 28*10))
for i in np.arange(2):
for j in np.arange(10):
axnum += 1
ax = fig.add_subplot(2, 10, axnum)
ax.imshow(tmp_image[i*10+j, ...], cmap=plt.cm.Greys)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# output[i*28:(i+1)*28, j*28:(j+1)*28] = tmp_image[i*10+j,:,:]
#ax.imshow(output,cmap = plt.cm.Greys)
#random_images = test_data[0][np.random.randint(0, test_results.size, size=20)].shape
fig = plt.figure(figsize=(20,4))
#fig.subplots_adjust(left=0, bottom=0, right=1, top=1,
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=0, hspace=0)
axnum = 0
#plt.subplots(figsize=(20, 4))
index = np.random.choice(np.arange(test_results.size), size=(20))
tmp_image = validation_data[0][index].reshape(20, 28, 28)
tmp_results = test_results[index]
output = np.zeros((28*2, 28*10))
for i in np.arange(2):
for j in np.arange(10):
axnum += 1
ax = fig.add_subplot(2, 10, axnum)
ax.imshow(tmp_image[i*10+j, ...], cmap=plt.cm.Greys)
ax.text(0.8, 8.0,tmp_results[i*10+j],fontsize=30)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
layer.activation_deriv(layer.z)
convolved = net.layers[0].feedforward(test_data[0])
fullyconnected = net.layers[1].feedforward(convolved)
allout = net.layers[2].feedforward(fullyconnected)
test_results = np.argmax(allout, axis=1)
tmp_test_data = np.argmax(test_data[1], axis=1)
print(np.count_nonzero(np.equal(test_results, tmp_test_data)))
tmprandom = np.random.choice(np.arange(10), size=10000)
print(np.count_nonzero(np.equal(tmprandom, tmp_test_data)))
864/12/12
training_data[0].shape
tt = np.zeros((10,1,28,28))
np.arange(2,3)
3*12*12
x = np.random.random((1000, 4))
xx = np.amax(x, axis=1)
xx.mean()
#atmp = np.arange(100).reshape(10,10)
atmp = np.random.random(8*8).reshape(8,8)
atmp
mask = np.zeros((8,8), dtype=bool)
aa = nnmath.maxpooling22_down(atmp, mask=mask)
reload(nnmath)
reload(nnmath)
mask
atmp[mask]
btmp = np.arange(4*4).reshape(4,4)+10
nnmath.maxpooling22_up(btmp, mask=mask)
mask
btmp
btmp[::-1, ::-1]
import matplotlib.pyplot as plt
%matplotlib inline
```
# Session Login info
# Installing Watson Community Edition CE
https://developer.ibm.com/linuxonpower/deep-learning-powerai/releases/
https://www.ibm.com/support/knowledgecenter/SS5SF7_1.7.0/navigation/wmlce_systemsetup.html
https://www.ibm.com/support/knowledgecenter/SS5SF7_1.7.0/navigation/wmlce_install.html
reference only:
https://www.ibm.com/support/knowledgecenter/SS5SF7_1.7.0/navigation/wmlce_getstarted.htm
# How to invoke python from command line
## Not for today's exercise!
sometimes invoking a Python script in batch mode is the order of the day
here's how you can take code and use an editor in the command line (such as gedit, vi, Sublime Text, Notepad, VS Code, etc.) then run the script from the command line
- launch a command line prompt (ssh into a machine or even launch a terminal session from your laptop)
- vi test.py
- press i (to go into insert mode)
- type the following code into the editor
- import numpy as np
- print(np.random.normal())
- press 'ESC'
- press :wq
- from commandline:
- python test.py
```
#! notepad test.py #on windows
#! gedit test.py #on linux
#!python test.py
import matplotlib
matplotlib.__version__
```
# Explore Python Tutorials: 20 minutes
https://www.w3schools.com/python/
These are very short, very quick interactive lessons to learn basic Python operations.
Suggestion: play with some of these short interactive live Python sessions to get familiar with Python coding:
| Recommended Exercises | Recommended Exercises |
| --- | --- |
| https://www.w3schools.com/python/ | https://www.w3schools.com/python/python_variables.asp |
| https://www.w3schools.com/python/python_datatypes.asp | https://www.w3schools.com/python/python_numbers.asp |
| https://www.w3schools.com/python/python_strings.asp | https://www.w3schools.com/python/python_operators.asp |
| https://www.w3schools.com/python/python_lists.asp | https://www.w3schools.com/python/python_dictionaries.asp |
| https://www.w3schools.com/python/python_conditions.asp | https://www.w3schools.com/python/python_for_loops.asp |
| https://www.w3schools.com/python/python_functions.asp | https://www.w3schools.com/python/python_modules.asp |
| https://www.w3schools.com/python/python_json.asp | https://www.w3schools.com/python/python_string_formatting.asp |
| https://www.w3schools.com/python/python_file_handling.asp | https://www.w3schools.com/python/numpy_creating_arrays.asp |
| https://www.w3schools.com/python/numpy_array_indexing.asp | https://www.w3schools.com/python/numpy_array_slicing.asp |
| https://www.w3schools.com/python/numpy_array_search.asp | https://www.w3schools.com/python/numpy_array_filter.asp |
## Pandas data table representation: 10 minutes
Pandas is one of the favorite libraries for data scientists. It is used primarily for data engineering and preparation tasks. If you are unfamiliar with Pandas, take a look through the link below to get a quick sense of the capabilities. We will be using this library later on in the session.
10 minutes to pandas https://pandas.pydata.org/pandas-docs/stable/getting_started/10min.html

| Topics | Topics | Topics | Topics |
| --- | --- | -- | -- |
| Object Creation | Viewing Data | Selection | Selection by Position |
| Setting | Missing Data | Stats | Apply | Concat |
# [Fairly] Quick Exercises
Here we introduce some basic Python exercises to get you warmed up. If you can't figure them out, don't worry: we will walk through them during the session. For selected exercises we have embedded hints. This is not an exhaustive list, but mainly a tour to get you started with Python and getting used to the syntax.
# Python Basics - Variables and Assignment
* create a variable named x and assign it a floating point value 12.3,
* create an integer named i and assign it a value of 10
* create a string called myString and assign it a value of "hello world"
<div class="panel-group" id="accordion-21">
<div class="panel panel-default">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" data-parent="#accordion-21" href="#collapse1-21">
Hint 1</a>
</h4>
</div>
<div id="collapse1-21" class="panel-collapse collapse">
<div class="panel-body">
Type:<br>
x = 12.3<br>
i = 10<br>
myString = "hello world"<br>
print(x, i, myString) #fix this cell!
</div>
</div>
</div>
</div>
```
print(x, i, myString) #fix this cell!
```
### Create some variables, print them with a format statement
create a print statement using a format statement which outputs text as follows:
hello world, the value of i is 10 and the value of x is 12.3
using the following pattern - replace # with correct variable names
#print("#, the value of i is # and the value of x is #".format(#, #, #))
<div class="panel-group" id="accordion-22">
<div class="panel panel-default">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" data-parent="#accordion-22" href="#collapse1-22">
Hint 1</a>
</h4>
</div>
<div id="collapse1-22" class="panel-collapse collapse">
<div class="panel-body">
Type:<br><br>
print("{}, the value of i is {} and the value of x is {}".format(myString, i, x))
</div>
</div>
</div>
</div>
```
print("{}, the value of i is {} and the value of x is {}".format(myString, i, x)) # fix this cell
```
# Python Basics - Lists
* create a list called myList containing items 1,2,3,4
<div class="panel-group" id="accordion-33">
<div class="panel panel-default">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" data-parent="#accordion-33" href="#collapse1-33">
Hint 1</a>
</h4>
</div>
<div id="collapse1-33" class="panel-collapse collapse">
<div class="panel-body">
Type:<br>
myList = [1,2,3,4]<br>
<br>
Lists require square brackets, tuples require parentheses
</div>
</div>
</div>
</div>
```
myList = (1,2,3,4) # this is NOT a list - it is a tuple in current form. This will cause issue later - fix this. Make it a list
```
## Python Basics - List Indexing
* print element [2] of myList above
* assign the value of 12 to the last element of myList
* using slicing or indexing to access the last two elements of myList
<div class="panel-group" id="accordion-44">
<div class="panel panel-default">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" data-parent="#accordion-44" href="#collapse1-44">
Hint</a>
</h4>
</div>
<div id="collapse1-44" class="panel-collapse collapse">
<div class="panel-body">
Type:<br>
<br> if you get an error similar to: TypeError: 'tuple' object does not support item assignment
<br>
<br> then go to the cells above and change myList to a List NOT a tuple<br><br><br>
<br>print(myList[2]) ## prints the third element of myList <br>
<br>myList[-1] = 12 ## assigns 12 to the last element of a list. The -1 is a special index that tell python to grab last element<br>
<br>print(myList[-2:4]) ## prints elements at index 2,3, not inclusive of 4<br>
</div>
</div>
</div>
</div>
```
print(myList[2])
myList[-1] = 12
```
# Python Basics - Dictionaries
Dictionaries are handy for storing key/value pairs; see the examples below.
* create a dictionary called myDict containing the following key, value pairs
* 'xmax': 3
* 'xmin': 4
* print the value of the dictionary 'xmax'
* assign a new value 6 to the entry 'xmax' and print it
```
myDict = {'xmax':3, 'xmin':4}
print(myDict['ymax']) #fix this cell
myDict['xmax'] = 6
print(myDict['xmax'])
```
## Python Basics - Magics
* use jupyter magics to read the list of files in the current folder
* ! dir #windows
* ! ls # linux
```
!dir #this will error on linux, fix it
```
## Python Basics - Numpy Arrays
NumPy is a numerical library written in C that should be used for any task requiring heavy computation.
* create two random numpy arrays of dimension 3,2 = call one array x, the other y, add x & y and print the result
<div class="panel-group" id="accordion-46">
<div class="panel panel-default">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" data-parent="#accordion-46" href="#collapse1-46">
Hint</a>
</h4>
</div>
<div id="collapse1-46" class="panel-collapse collapse">
<div class="panel-body">
Type:<br>
<br>print(x+y) <br>
</div>
</div>
</div>
</div>
```
import numpy as np #this cell is broken - fix it
x = np.random.rand(3,2)
y = np.random.rand(3,2)
print(x+y+z)
```
# Python Basics - Numpy Matrix Multiply
* matrix multiply two arrays ... a 3x2 times a 2x3 with the following names and values<br>
<code>
a = [[1,2],
[3,2],
[4,1]]
b = [[1,2,3],
[3,2,1]]
</code>
Multiply these two arrays to get a 2 x 2 array. (The order of a, b is important)
<div class="panel-group" id="accordion-55">
<div class="panel panel-default">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" data-parent="#accordion-55" href="#collapse1-55">
Hint</a>
</h4>
</div>
<div id="collapse1-55" class="panel-collapse collapse">
<div class="panel-body">
Type:<br>
<br>b@a ## method 1 <br>
<br>b.dot(a) ## method 2<br>
<br>np.matmul(b,a) ## method 3<br>
</div>
</div>
</div>
</div>
```
a = np.array([[1,2],[3,2], [4,1]]) # this cell is broken fix it
b = np.array([[1,2,3],[3,2,1]])
#compute b.dot(a)
print(a*b)
print("the answer should be \n", [[19, 9],[13, 11]])
```
## Python Basics - Pandas
* create a pandas dataframe called my_df by reading the contents of WA_Fn-UseC_-Telco-Customer-Churn_nans.csv
* and display the first 5 rows of the dataframe
<div class="panel-group" id="accordion-66">
<div class="panel panel-default">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" data-parent="#accordion-66" href="#collapse1-66">
Hint</a>
</h4>
</div>
<div id="collapse1-66" class="panel-collapse collapse">
<div class="panel-body">
Type:<br>
<br>my_df = pd.read_csv('WA_Fn-UseC_-Telco-Customer-Churn_nans.csv') <br>
<br>my_df.head() <br>
</div>
</div>
</div>
</div>
```
import pandas as pd #this cell is broken fix it
my_df = pd.read_csv('WA_Fn-UseC_-Telco-Customer-Churn_nans.csv')
df.head()
```
## Python Basics - Pandas NaNs
* determine which if any columns have nans in them
```
cols = my_df.isnull().sum() > 0 # this cell is broken, fix it
print(col)
# print the names of the columns that have nans
cols = (my_df.isnull().sum()>0).tolist() # this cell is broken, fix it
my_df.columns[cols].tolist()
```
<div class="panel-group" id="accordion-67">
<div class="panel panel-default">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" data-parent="#accordion-67" href="#collapse1-67">
Hint</a>
</h4>
</div>
<div id="collapse1-67" class="panel-collapse collapse">
<div class="panel-body">
Type:<br><br>
- isnull() <br>
<br>cols = (my_df.isnull().sum()>0).tolist() ## use isnull ... null isn't a method <br>
<br>my_df.columns[cols].tolist() <br>
</div>
</div>
</div>
</div>
```
cols = (my_df.null().sum()>0).tolist() # this cell is broken, fix it
my_df.columns[cols].tolist()
```
<div class="panel-group" id="accordion-77">
<div class="panel panel-default">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" data-parent="#accordion-77" href="#collapse1-77">
Hint</a>
</h4>
</div>
<div id="collapse1-77" class="panel-collapse collapse">
<div class="panel-body">
Type:<br>
<br>my_df = my_df.fillna(0)
<br>my_df.isnull().sum() <br>
<br> Google is your friend!
</div>
</div>
</div>

```
# replace the nans with 0 , also the cell is broken, fix it
my_df = my_df.fillna_na_na_na_na_na_na_na_hey_hey_goodby(0)
my_df.isnull().sum()
```
<div class="panel-group" id="accordion-88">
<div class="panel panel-default">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" data-parent="#accordion-88" href="#collapse1-88">
Hint</a>
</h4>
</div>
<div id="collapse1-88" class="panel-collapse collapse">
<div class="panel-body">
Here's a trick ... TYPE:<br>
<br>my_df.des + TAB ... see if you can use autocomplete to figure it out!
</div>
</div>
</div>
</div>
```
# perform basic statistics for the numerical columns of the my_df dataframe
my_df.described() # the cell is broken, fix it
```
## Python Basics - Pandas Unique values in a column
* display number of unique values for each column (the cardinality of each column)
<div class="panel-group" id="accordion-99">
<div class="panel panel-default">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" data-parent="#accordion-99" href="#collapse1-99">
Hint</a>
</h4>
</div>
<div id="collapse1-99" class="panel-collapse collapse">
<div class="panel-body">
Here's a trick ... TYPE:<br>
<br>my_df.nuni + TAB ... see if you can use autocomplete to figure it out!<br><br><br>
<br>my_df.nunique()
</div>
</div>
</div>
</div>
```
my_df.nmonique() # the cell is broken, fix it
```
## Python Basics - Pandas column data types
* find basic data types of each column
```
my_df.dtypes # a freebie !
```
## Python Basics - Add New Column to Pandas Dataframe
* add a new column to my_df called "myNew" and fill it with a sequence of values 0,1,2,3,4...
<div class="panel-group" id="accordion-111">
<div class="panel panel-default">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" data-parent="#accordion-111" href="#collapse1-111">
Hint</a>
</h4>
</div>
<div id="collapse1-111" class="panel-collapse collapse">
<div class="panel-body">
Answer:<br>
<br>my_df['myNew'] = np.arange(len(my_df))
</div>
</div>
</div>
</div>
```
import numpy as np # the cell is broken, fix it
my_df['myNew'] = np.range(len(my_df)) #lookup np range
my_df.head()
```
## Python Basics - Pandas indexing
* change the value of a specific row and column [0,0] to a new value using iloc
```
my_df.iloc[0,0]
```
## Python Basics - Filtering rows in Pandas
### filter my_df to display the following
| gender | SeniorCitizen | Partner | Dependents |
| --- | --- | ---| ---|
| Female | 0 | Yes | No |
```
my_df.columns
```
<div class="panel-group" id="accordion-112">
<div class="panel panel-default">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" data-parent="#accordion-112" href="#collapse1-112">
Hint</a>
</h4>
</div>
<div id="collapse1-112" class="panel-collapse collapse">
<div class="panel-body">
Answer:<br>
<br> use my_df['SeniorCitizen'] NOT my_df['Senior']
<br><br> use my_df['Dependents'] NOT my_df['Depends']
<br><br><br>
<br>my_df[(my_df['gender'] == 'Female') & (my_df['SeniorCitizen'] == 0) & (my_df['Partner'] == 'Yes') & (my_df['Dependents'] == 'No')]
</div>
</div>
</div>
</div>
```
# this cell is broken fix it
my_df[(my_df['gender'] == 'Female') & (my_df['Senior'] == 0) & (my_df['Partner'] == 'Yes') & (my_df['Depend'] == 'No')]
```
## Python Basics -Pandas sorting
* sort the my_df by columns [InternetService, PhoneService ]
<div class="panel-group" id="accordion-113">
<div class="panel panel-default">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" data-parent="#accordion-113" href="#collapse1-113">
Hint</a>
</h4>
</div>
<div id="collapse1-113" class="panel-collapse collapse">
<div class="panel-body">
Answer:<br>
<br> my_df.sort_values(['InternetService', 'PhoneService'])
</div>
</div>
</div>
</div>
```
my_df.sort(by=['InternetService', 'PhoneService']) # this cell is broken, fix it
```
## Python Basics - Pandas Groupby and Aggregate
* group the dataframe by ['InternetService', 'PhoneService'] and aggregate churn by count()
```
my_df.groupby(['InternetService', 'PhoneService'])['Churn'].count()
#how many Fiber Optic customers do NOT have phone service?
my_df[(my_df['InternetService'] == 'Fiber optic') & (my_df['PhoneService'] == 'No') ].shape
```
# Jupyter Gotchas
Can you tell which cell is code vs markdown? It makes a difference ....
in each cell below, assign the value of 12 to a variable
- first cell a = 12
- second cell b = 12
- third cell c = 12
There are different kinds of cells: Code cells, Markdown cells, and Raw NBConvert cells
- Code cells run code - pretty simple
- Markdown cells - provide prettier HTML-type documentation
- Raw NBConvert cells - usually used for "as is" text and primarily used for LaTeX for math
- Heading is deprecated

<div class="panel-group" id="accordion-114">
<div class="panel panel-default">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" data-parent="#accordion-114" href="#collapse1-114">
Hint</a>
</h4>
</div>
<div id="collapse1-114" class="panel-collapse collapse">
<div class="panel-body">
Answer:<br>
<br> b = 12 is incorrectly marked as a Markdown cell
<br><br> change it to a code cell
<br><br> c = 12 is incorrectly marked as a Raw NBConvert cell
<br><br> change it to a code cell
</div>
</div>
</div>
</div>
```
a = 12
```
b = 12
c
```
# examine the print statements of all those variables -
# Anything wrong?
print (a)
print (b)
print (c)
# go back into Jupyter and see what type of cell each each is marked as ... aha ...
```
```
import physipy
from physipy import Dimension
from fractions import Fraction
import sys
import warnings
if not sys.warnoptions:
warnings.simplefilter("ignore")
import numpy as np
import scipy.integrate as integrate
import scipy.constants as csts
from physipy import s, m, sr, K, units, constants
from physipy import quad
import physipy
import matplotlib.pyplot as plt
```
# Calculus : numerical toolbox
Some useful numerical functions are provided, which basically consist of dimension-wrapped functions.
The wrapping is needed because there is no way to hook the handling of Quantity objects into these functions (as there is for numpy's functions and ufuncs).
## Integrate with quad
Let's integrate Planck's law: [we know the expected result is $\sigma T^4/\pi$](https://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law):
```
# physical constants
hp = constants["h"]
c = constants["c"]
kB = constants["k"]
sigma = constants["Stefan_Boltzmann"]
nm = units["nm"]
mum = units["mum"]
# blackbody at temperature 300K
Tbb = 300*K
# note : computing this factor once saves from overflow problems
x = hp*c / (kB * Tbb)
x_ = csts.h * csts.c /(csts.k * 300)
# Planck's law
def planck(lmbda):
return 2*hp*c**2/lmbda**5 * 1/(np.exp(x/lmbda)-1) /sr
def planck_(lmbda):
return 2 * csts.h * csts.c / lmbda**5 * 1 / (np.exp(x_/lmbda)-1)
# expected value
expected = sigma * Tbb**4 / (np.pi*sr)
lmbda_start = 0.001*nm
lmbda_stop = 1000*mum
res, _ = quad(planck, lmbda_start, lmbda_stop)
print(res)
print(expected)
print("error : ", res/expected-1)
```
The convergence can be seen:
```
integrands = []
ech_stop = np.logspace(2, 4, 20)*mum
ech_stop.favunit = mum
for lmbda_stop in ech_stop:
res, _ = quad(planck, lmbda_start, lmbda_stop)
integrands.append(res)
integrands = physipy.quantity.utils.list_of_Q_to_Q_array(integrands)
from physipy import setup_matplotlib
setup_matplotlib()
plt.semilogx(ech_stop, integrands, "o", label="integral")
plt.axhline(expected, label="expected value")
plt.legend()
```
Processing is noticeably slower with Quantities, so use this wrapper only when speed is not critical.
```
%timeit quad(planck_, lmbda_start.value, lmbda_stop.value)
%timeit quad(planck, lmbda_start, lmbda_stop)
```
An alternative formulation:
```
def planck(lmbda, T):
x = hp*c / (kB * T)
return 2*hp*c**2/lmbda**5 * 1/(np.exp(x/lmbda)-1) /sr
res, _ = quad(lambda lmbda: planck(lmbda, 300*K), lmbda_start, lmbda_stop)
print(res)
```
Another alternative, passing the temperature via `args`:
```
res, _ = quad(planck, lmbda_start, lmbda_stop, args=(300*K,))
print(res)
```
## Root solver
A wrapper of `scipy.optimize.root`:
```
from physipy.quantity.calculus import root
def toto(t):
return -10*s + t
print(root(toto, 0*s))
def tata(t, p):
return -10*s*p + t
print(root(tata, 0*s, args=(0.5,)))
```
A wrapper of `scipy.optimize.brentq`:
```
from physipy.quantity.calculus import brentq
print(brentq(toto, -10*s, 10*s))
print(brentq(tata, -10*s, 10*s, args=(0.5,)))
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Text classification of movie reviews
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/beta/tutorials/keras/basic_text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/ja/beta/tutorials/keras/basic_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/ja/beta/tutorials/keras/basic_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: These documents were translated by the TensorFlow community. Because community translations are **best-effort**, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions for improving this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to translate or review, contact the [docs-ja@tensorflow.org mailing list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ja).
This notebook classifies movie reviews as **positive** or **negative** using the text of the review. This is an example of binary, or two-class, classification, an important and widely applicable kind of machine-learning problem.
We'll use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb), which contains 50,000 movie reviews extracted from the [Internet Movie Database](https://www.imdb.com/). The reviews are split into 25,000 for training and 25,000 for testing. The training and testing sets are **balanced**: each contains an equal number of positive and negative reviews.
This notebook uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow. For a more advanced text-classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/).
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# Colab only
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
```
## Download the IMDB dataset
The IMDB dataset comes packaged with TensorFlow. It has already been preprocessed so that the reviews (sequences of words) have been converted to arrays of integers, where each integer represents a specific word in a dictionary.
The following code downloads the IMDB dataset to your machine (a cached copy is used if you have already downloaded it):
```
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
```
The argument `num_words=10000` keeps the 10,000 most frequently occurring words in the training data. Rare words are discarded to keep the size of the data manageable.
## Explore the data
Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer of 0 or 1, where 0 indicates a negative review and 1 a positive review.
```
print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
```
The review text has been converted to integers, each representing a specific word in a dictionary. Here's what the first review looks like:
```
print(train_data[0])
```
Movie reviews can have different lengths. The code below shows the number of words in the first and second reviews. Since inputs to a neural network must all be the same length, we'll need to resolve this later.
```
len(train_data[0]), len(train_data[1])
```
### Convert the integers back to words
It can be useful to know how to convert integers back to text. Here we define a helper function that queries a dictionary object mapping integers to strings:
```
# A dictionary mapping words to integer indices
word_index = imdb.get_word_index()
# The first indices are reserved
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
```
Using `decode_review`, we can now display the text of the first review:
```
decode_review(train_data[0])
```
## Prepare the data
The reviews (arrays of integers) must be converted to tensors before being fed into the neural network. This can be done in two ways:
* One-hot encode the arrays into vectors of 0s and 1s indicating word occurrence. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then make this the first layer in the network, a Dense (fully connected) layer that can handle floating-point vector data. This approach is memory-intensive, though, requiring a `num_words * num_reviews` matrix.
* Alternatively, pad the arrays so they all have the same length, producing an integer tensor of shape `num_examples * max_length`, and make the first layer in the network an Embedding layer capable of handling this shape.
In this tutorial, we'll take the second approach.
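For reference, the first (one-hot) option can be sketched in a few lines; this is illustrative only, since the tutorial proceeds with padding:

```python
import numpy as np

def multi_hot_encode(sequences, dimension=10000):
    """Mark which word indices appear in each review with a 0/1 vector."""
    results = np.zeros((len(sequences), dimension), dtype=np.float32)
    for i, seq in enumerate(sequences):
        results[i, seq] = 1.0  # set every index present in the review to 1
    return results

# e.g. the sequence [3, 5] becomes a vector that is 1 at indices 3 and 5
encoded = multi_hot_encode([[3, 5]], dimension=10)
print(encoded[0])  # [0. 0. 0. 1. 0. 1. 0. 0. 0. 0.]
```

Applied to the full training set this would allocate a 25,000 × 10,000 float matrix, which is exactly the memory cost described above.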
Since the movie reviews must be the same length, we'll use the [pad_sequences](https://www.tensorflow.org/versions/r1.10/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences) function to standardize the lengths:
```
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
```
Let's look at the length of the examples now:
```
len(train_data[0]), len(train_data[1])
```
And inspect the (now padded) first review:
```
print(train_data[0])
```
## Build the model
A neural network is created by stacking layers. This requires two main architectural decisions:
* How many **layers** to use in the model?
* How many **hidden units** to use for each layer?
In this example, the input data consists of arrays of word indices, and the labels to predict are either 0 or 1. Let's build a model for this problem:
```
# Input shape is the vocabulary count used for the movie reviews (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.summary()
```
The layers are stacked sequentially to build the classifier:
1. The first layer is an `Embedding` layer. It takes the integer-encoded vocabulary and looks up the embedding vector for each word index. These vectors are learned as the model trains. The vectors add a dimension to the output array, so the resulting dimensions are `(batch, sequence, embedding)`.
2. Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model, in the simplest way possible, to handle input of variable length.
3. This fixed-length output vector is piped through a fully connected (`Dense`) layer with 16 hidden units.
4. The last layer is densely connected to a single output node. With the `sigmoid` activation function, its value is a float between 0 and 1, representing a probability or confidence level.
### Hidden units
The above model has two intermediate, or "hidden", layers between the input and output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer. In other words, it is the amount of freedom the network is allowed when learning an internal representation.
If a model has more hidden units (a higher-dimensional representation space), more layers, or both, then the network can learn more complex internal representations. However, it also makes the network more computationally expensive and may lead to learning unwanted patterns: patterns that improve performance on the training data but not on the test data. This is called **overfitting**, and we'll explore it later.
### Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we'll use the `binary_crossentropy` loss function.
This isn't the only choice of loss function; you could, for instance, use `mean_squared_error`. But, generally, `binary_crossentropy` is better for dealing with probabilities: it measures the "distance" between probability distributions, or in our case, between the true distribution and the predictions.
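As a quick illustration of that "distance": for a true label y and predicted probability p, binary cross-entropy is -(y*log(p) + (1-y)*log(1-p)), so a confident wrong prediction costs far more than a confident correct one. A hand-rolled sketch (not the Keras implementation):

```python
import math

def binary_crossentropy(y_true, p_pred, eps=1e-7):
    """Cross-entropy between a 0/1 label and a predicted probability."""
    p = min(max(p_pred, eps), 1.0 - eps)  # clip to avoid log(0)
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

loss_good = binary_crossentropy(1, 0.9)  # confident and correct -> small loss
loss_bad = binary_crossentropy(1, 0.1)   # confident and wrong -> large loss
print(loss_good, loss_bad)
```

Keras computes the same quantity (averaged over the batch) when you pass `loss='binary_crossentropy'`.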
Later, when exploring regression problems (say, predicting the price of a house), we'll see how to use another loss function called `mean_squared_error`.
Now, configure the model with an optimizer and a loss function:
```
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
```
## Create a validation set
When training, we want to check the accuracy of the model on data it hasn't seen before. Create a **validation set** by setting apart 10,000 examples from the original training data. (Why not use the test set now? Our goal is to develop and tune the model using only the training data, then use the test data just once to evaluate accuracy.)
```
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
```
## Train the model
Train the model for 40 epochs in mini-batches of 512 samples. This is 40 iterations over all samples in the `x_train` and `y_train` tensors. While training, monitor the model's loss and accuracy on the 10,000 samples of the validation set:
```
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
```
## Evaluate the model
Let's see how the model performs. Two values are returned: loss (a number representing the error; lower is better) and accuracy.
```
results = model.evaluate(test_data, test_labels)
print(results)
```
This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model could get closer to 95%.
## Create a graph of accuracy and loss over time
`model.fit()` returns a `History` object containing a dictionary that records everything that happened during training:
```
history_dict = history.history
history_dict.keys()
```
There are four entries, one for each metric monitored during training and validation. We can use them to plot the training and validation loss, and the training and validation accuracy, for comparison:
```
import matplotlib.pyplot as plt
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```
In this plot, the dots represent the training loss and accuracy, and the solid lines the validation loss and accuracy.
Notice that the training loss **decreases** and the training accuracy **increases** with each epoch. This is expected when using gradient descent optimization, which should minimize the monitored quantity on every iteration.
This isn't the case for the validation loss and accuracy: they seem to level off after about twenty epochs. This is an example of overfitting: the model performs better on the training data than on data it has never seen before. Beyond this point, the model over-optimizes and learns representations that are specific to the training data and do not generalize to test data.
For this particular case, we could prevent overfitting by simply stopping the training after twenty or so epochs. Later, you'll see how to do this automatically with a callback.
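In Keras this is typically done with the `EarlyStopping` callback (for example `keras.callbacks.EarlyStopping(monitor='val_loss', patience=2)` passed to `model.fit`). The stopping rule it automates can be sketched in plain Python; this is illustrative logic, not the Keras implementation:

```python
def early_stopping_epoch(val_losses, patience=2):
    """Return the 1-based epoch at which training would stop: after
    `patience` consecutive epochs without a new best validation loss."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, wait = loss, 0  # new best: reset the patience counter
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses)

# validation loss improves, then starts to rise: training halts at epoch 6
history_val_loss = [0.60, 0.45, 0.38, 0.36, 0.37, 0.39, 0.41]
stop_epoch = early_stopping_epoch(history_val_loss, patience=2)
print(stop_epoch)  # 6
```

Combined with `restore_best_weights=True`, the Keras callback also rolls the model back to the epoch with the best monitored value.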
## Dependencies
```
!pip install --quiet efficientnet
import warnings, time
from kaggle_datasets import KaggleDatasets
from sklearn.model_selection import KFold
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
from tensorflow.keras import optimizers, Sequential, losses, metrics, Model
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
import efficientnet.tfkeras as efn
from cassava_scripts import *
from scripts_step_lr_schedulers import *
import tensorflow_addons as tfa
seed = 0
seed_everything(seed)
warnings.filterwarnings('ignore')
```
### Hardware configuration
```
# TPU or GPU detection
# Detect hardware, return appropriate distribution strategy
strategy, tpu = set_up_strategy()
AUTO = tf.data.experimental.AUTOTUNE
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
```
# Model parameters
```
BATCH_SIZE = 8 * REPLICAS
BATCH_SIZE_CL = 8 * REPLICAS
LEARNING_RATE = 1e-5 * REPLICAS
EPOCHS_CL = 15
EPOCHS = 20
HEIGHT = 512
WIDTH = 512
HEIGHT_DT = 512
WIDTH_DT = 512
CHANNELS = 3
N_CLASSES = 5
N_FOLDS = 5
FOLDS_USED = 1
ES_PATIENCE = 5
```
# Load data
```
database_base_path = '/kaggle/input/cassava-leaf-disease-classification/'
train = pd.read_csv(f'{database_base_path}train.csv')
print(f'Train samples: {len(train)}')
GCS_PATH = KaggleDatasets().get_gcs_path(f'cassava-leaf-disease-tfrecords-center-{HEIGHT_DT}x{WIDTH_DT}') # Center cropped and resized (50 TFRecords)
# GCS_PATH_EXT = KaggleDatasets().get_gcs_path(f'cassava-leaf-disease-tfrecords-external-{HEIGHT_DT}x{WIDTH_DT}') # Center cropped and resized (50 TFRecords) (external)
# GCS_PATH_CLASSES = KaggleDatasets().get_gcs_path(f'cassava-leaf-disease-tfrecords-classes-{HEIGHT_DT}x{WIDTH_DT}') # Center cropped and resized (50 TFRecords), by class
# GCS_PATH_EXT_CLASSES = KaggleDatasets().get_gcs_path(f'cassava-leaf-disease-tfrecords-classes-ext-{HEIGHT_DT}x{WIDTH_DT}') # Center cropped and resized (50 TFRecords) (external), by class
FILENAMES_COMP = tf.io.gfile.glob(GCS_PATH + '/*.tfrec')
# FILENAMES_2019 = tf.io.gfile.glob(GCS_PATH_EXT + '/*.tfrec')
# FILENAMES_COMP_CBB = tf.io.gfile.glob(GCS_PATH_CLASSES + '/CBB*.tfrec')
# FILENAMES_COMP_CBSD = tf.io.gfile.glob(GCS_PATH_CLASSES + '/CBSD*.tfrec')
# FILENAMES_COMP_CGM = tf.io.gfile.glob(GCS_PATH_CLASSES + '/CGM*.tfrec')
# FILENAMES_COMP_CMD = tf.io.gfile.glob(GCS_PATH_CLASSES + '/CMD*.tfrec')
# FILENAMES_COMP_Healthy = tf.io.gfile.glob(GCS_PATH_CLASSES + '/Healthy*.tfrec')
# FILENAMES_2019_CBB = tf.io.gfile.glob(GCS_PATH_EXT_CLASSES + '/CBB*.tfrec')
# FILENAMES_2019_CBSD = tf.io.gfile.glob(GCS_PATH_EXT_CLASSES + '/CBSD*.tfrec')
# FILENAMES_2019_CGM = tf.io.gfile.glob(GCS_PATH_EXT_CLASSES + '/CGM*.tfrec')
# FILENAMES_2019_CMD = tf.io.gfile.glob(GCS_PATH_EXT_CLASSES + '/CMD*.tfrec')
# FILENAMES_2019_Healthy = tf.io.gfile.glob(GCS_PATH_EXT_CLASSES + '/Healthy*.tfrec')
TRAINING_FILENAMES = FILENAMES_COMP
NUM_TRAINING_IMAGES = count_data_items(TRAINING_FILENAMES)
print(f'GCS: train images: {NUM_TRAINING_IMAGES}')
display(train.head())
```
# Augmentation
```
def data_augment(image, label):
# p_rotation = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_pixel_1 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_pixel_2 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_pixel_3 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_shear = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_cutout = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# # Shear
# if p_shear > .2:
# if p_shear > .6:
# image = transform_shear(image, HEIGHT, shear=20.)
# else:
# image = transform_shear(image, HEIGHT, shear=-20.)
# # Rotation
# if p_rotation > .2:
# if p_rotation > .6:
# image = transform_rotation(image, HEIGHT, rotation=45.)
# else:
# image = transform_rotation(image, HEIGHT, rotation=-45.)
# Flips
image = tf.image.random_flip_left_right(image)
image = tf.image.random_flip_up_down(image)
if p_spatial > .75:
image = tf.image.transpose(image)
# Rotates
if p_rotate > .75:
image = tf.image.rot90(image, k=3) # rotate 270º
elif p_rotate > .5:
image = tf.image.rot90(image, k=2) # rotate 180º
elif p_rotate > .25:
image = tf.image.rot90(image, k=1) # rotate 90º
# Pixel-level transforms
if p_pixel_1 >= .4:
image = tf.image.random_saturation(image, lower=.7, upper=1.3)
if p_pixel_2 >= .4:
image = tf.image.random_contrast(image, lower=.8, upper=1.2)
if p_pixel_3 >= .4:
image = tf.image.random_brightness(image, max_delta=.1)
# Crops
if p_crop > .6:
if p_crop > .9:
image = tf.image.central_crop(image, central_fraction=.5)
elif p_crop > .8:
image = tf.image.central_crop(image, central_fraction=.6)
elif p_crop > .7:
image = tf.image.central_crop(image, central_fraction=.7)
else:
image = tf.image.central_crop(image, central_fraction=.8)
elif p_crop > .3:
crop_size = tf.random.uniform([], int(HEIGHT*.6), HEIGHT, dtype=tf.int32)
image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
image = tf.image.resize(image, size=[HEIGHT, WIDTH])
if p_cutout > .5:
image = data_augment_cutout(image)
return image, label
```
## Auxiliary functions
```
# CutOut
def data_augment_cutout(image, min_mask_size=(int(HEIGHT * .1), int(HEIGHT * .1)),
max_mask_size=(int(HEIGHT * .125), int(HEIGHT * .125))):
p_cutout = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
if p_cutout > .85: # 10~15 cut outs
n_cutout = tf.random.uniform([], 10, 15, dtype=tf.int32)
image = random_cutout(image, HEIGHT, WIDTH,
min_mask_size=min_mask_size, max_mask_size=max_mask_size, k=n_cutout)
elif p_cutout > .6: # 5~10 cut outs
n_cutout = tf.random.uniform([], 5, 10, dtype=tf.int32)
image = random_cutout(image, HEIGHT, WIDTH,
min_mask_size=min_mask_size, max_mask_size=max_mask_size, k=n_cutout)
elif p_cutout > .25: # 2~5 cut outs
n_cutout = tf.random.uniform([], 2, 5, dtype=tf.int32)
image = random_cutout(image, HEIGHT, WIDTH,
min_mask_size=min_mask_size, max_mask_size=max_mask_size, k=n_cutout)
else: # 1 cut out
image = random_cutout(image, HEIGHT, WIDTH,
min_mask_size=min_mask_size, max_mask_size=max_mask_size, k=1)
return image
# Datasets utility functions
def random_crop(image, label):
"""
Resize and reshape images to the expected size.
"""
image = tf.image.random_crop(image, size=[HEIGHT, WIDTH, CHANNELS])
return image, label
def prepare_image(image, label):
"""
Resize and reshape images to the expected size.
"""
image = tf.image.resize(image, [HEIGHT, WIDTH])
image = tf.reshape(image, [HEIGHT, WIDTH, CHANNELS])
return image, label
def center_crop_(image, label, height_rs, width_rs, height=HEIGHT_DT, width=WIDTH_DT, channels=3):
image = tf.reshape(image, [height, width, channels]) # Original shape
h, w = image.shape[0], image.shape[1]
if h > w:
image = tf.image.crop_to_bounding_box(image, (h - w) // 2, 0, w, w)
else:
image = tf.image.crop_to_bounding_box(image, 0, (w - h) // 2, h, h)
image = tf.image.resize(image, [height_rs, width_rs]) # Expected shape
return image, label
def read_tfrecord_(example, labeled=True, sparse=True, n_classes=5):
"""
1. Parse data based on the 'TFREC_FORMAT' map.
2. Decode image.
3. If 'labeled' returns (image, label) if not (image, name).
"""
if labeled:
TFREC_FORMAT = {
'image': tf.io.FixedLenFeature([], tf.string),
'target': tf.io.FixedLenFeature([], tf.int64),
}
else:
TFREC_FORMAT = {
'image': tf.io.FixedLenFeature([], tf.string),
'image_name': tf.io.FixedLenFeature([], tf.string),
}
example = tf.io.parse_single_example(example, TFREC_FORMAT)
image = decode_image(example['image'])
if labeled:
label_or_name = tf.cast(example['target'], tf.int32)
if not sparse: # One-Hot Encoding needed to use "categorical_crossentropy" loss
label_or_name = tf.one_hot(tf.cast(label_or_name, tf.int32), n_classes)
else:
label_or_name = example['image_name']
return image, label_or_name
def get_dataset(filenames, labeled=True, ordered=False, repeated=False,
cached=False, augment=False, batch_size=BATCH_SIZE, sparse=True):
"""
Return a Tensorflow dataset ready for training or inference.
"""
ignore_order = tf.data.Options()
if not ordered:
ignore_order.experimental_deterministic = False
dataset = tf.data.Dataset.list_files(filenames)
dataset = dataset.interleave(tf.data.TFRecordDataset, num_parallel_calls=AUTO)
else:
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=AUTO)
dataset = dataset.with_options(ignore_order)
dataset = dataset.map(lambda x: read_tfrecord_(x, labeled=labeled, sparse=sparse), num_parallel_calls=AUTO)
if augment:
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.map(scale_image, num_parallel_calls=AUTO)
dataset = dataset.map(prepare_image, num_parallel_calls=AUTO)
if labeled:
dataset = dataset.map(lambda x, y: conf_output(x, y, sparse=sparse), num_parallel_calls=AUTO)
if not ordered:
dataset = dataset.shuffle(2048)
if repeated:
dataset = dataset.repeat()
dataset = dataset.batch(batch_size)
if cached:
dataset = dataset.cache()
dataset = dataset.prefetch(AUTO)
return dataset
def conf_output(image, label, sparse=False):
"""
Configure the output of the dataset.
"""
aux_label = [0.]
aux_2_label = [0.]
if sparse:
if label == 4: # Healthy
aux_label = [1.]
if label == 3: # CMD
aux_2_label = [1.]
else:
if tf.math.argmax(label, axis=-1) == 4: # Healthy
aux_label = [1.]
if tf.math.argmax(label, axis=-1) == 3: # CMD
aux_2_label = [1.]
return (image, (label, aux_label, aux_2_label))
```
# Training data samples (with augmentation)
```
# train_dataset = get_dataset(FILENAMES_COMP, ordered=True, augment=True)
# train_iter = iter(train_dataset.unbatch().batch(20))
# display_batch_of_images(next(train_iter))
# display_batch_of_images(next(train_iter))
```
# Model
```
class UnitNormLayer(L.Layer):
"""
Normalize vectors (euclidean norm) in batch to unit hypersphere.
"""
def __init__(self, **kwargs):
super(UnitNormLayer, self).__init__(**kwargs)
def call(self, input_tensor):
norm = tf.norm(input_tensor, axis=1)
return input_tensor / tf.reshape(norm, [-1, 1])
def encoder_fn(input_shape):
inputs = L.Input(shape=input_shape, name='input_image')
base_model = efn.EfficientNetB3(input_tensor=inputs,
include_top=False,
weights='noisy-student',
pooling='avg')
norm_embeddings = UnitNormLayer()(base_model.output)
model = Model(inputs=inputs, outputs=norm_embeddings)
return model
def add_projection_head(input_shape, encoder, num_projection_layers=1):
inputs = L.Input(shape=input_shape, name='input_image')
features = encoder(inputs)
outputs = L.Dense(128, activation='relu', name='projection_head')(features)
outputs = UnitNormLayer(name='output')(outputs)
model = Model(inputs=inputs, outputs=outputs)
return model
def classifier_fn(input_shape, N_CLASSES, encoder, trainable=True):
for layer in encoder.layers:
layer.trainable = trainable
unfreeze_model(encoder) # unfreeze all layers except "batch normalization"
inputs = L.Input(shape=input_shape, name='input_image')
features = encoder(inputs)
features = L.Dropout(.5)(features)
features = L.Dense(512, activation='relu')(features)
features = L.Dropout(.5)(features)
output = L.Dense(N_CLASSES, activation='softmax', name='output')(features)
output_healthy = L.Dense(1, activation='sigmoid', name='output_healthy')(features)
output_cmd = L.Dense(1, activation='sigmoid', name='output_cmd')(features)
model = Model(inputs=inputs, outputs=[output, output_healthy, output_cmd])
return model
temperature = 0.1
class SupervisedContrastiveLoss(losses.Loss):
def __init__(self, temperature=0.1, name=None):
super(SupervisedContrastiveLoss, self).__init__(name=name)
self.temperature = temperature
def __call__(self, labels, feature_vectors, sample_weight=None):
# Normalize feature vectors
feature_vectors_normalized = tf.math.l2_normalize(feature_vectors, axis=1)
# Compute logits
logits = tf.divide(
tf.matmul(
feature_vectors_normalized, tf.transpose(feature_vectors_normalized)
),
self.temperature,
)
return tfa.losses.npairs_loss(tf.squeeze(labels), logits)
```
### Learning rate schedule
```
lr_start = 1e-8
lr_min = 1e-6
lr_max = LEARNING_RATE
num_cycles = 1
warmup_epochs = 3
hold_max_epochs = 0
total_epochs = EPOCHS
step_size = (NUM_TRAINING_IMAGES//BATCH_SIZE)
hold_max_steps = hold_max_epochs * step_size
total_steps = total_epochs * step_size
warmup_steps = warmup_epochs * step_size
def lrfn(total_steps, warmup_steps=0, lr_start=1e-4, lr_max=1e-3, lr_min=1e-4, num_cycles=1.):
@tf.function
def cosine_with_hard_restarts_schedule_with_warmup_(step):
""" Create a schedule with a learning rate that decreases following the
values of the cosine function with several hard restarts, after a warmup
period during which it increases linearly between 0 and 1.
"""
if step < warmup_steps:
lr = (lr_max - lr_start) / warmup_steps * step + lr_start
else:
progress = (step - warmup_steps) / (total_steps - warmup_steps)
lr = lr_max * (0.5 * (1.0 + tf.math.cos(np.pi * ((num_cycles * progress) % 1.0))))
if lr_min is not None:
lr = tf.math.maximum(lr_min, float(lr))
return lr
return cosine_with_hard_restarts_schedule_with_warmup_
lrfn_fn = lrfn(total_steps, warmup_steps, lr_start, lr_max, lr_min, num_cycles)
rng = [i for i in range(total_steps)]
y = [lrfn_fn(tf.cast(x, tf.float32)) for x in rng]
sns.set(style='whitegrid')
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print(f'{total_steps} total steps and {step_size} steps per epoch')
print(f'Learning rate schedule: {y[0]:.3g} to {max(y):.3g} to {y[-1]:.3g}')
print('Classifier Learning rate scheduler')
step_size_cl = (NUM_TRAINING_IMAGES//BATCH_SIZE_CL)
total_steps_cl = (EPOCHS_CL * step_size_cl)
warmup_steps_cl = warmup_epochs * step_size_cl
num_cycles_cl = 1
lrfn_fn = lrfn(total_steps_cl, warmup_steps_cl, lr_start, lr_max, lr_min, num_cycles_cl)
rng = [i for i in range(total_steps_cl)]
y = [lrfn_fn(tf.cast(x, tf.float32)) for x in rng]
sns.set(style='whitegrid')
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print(f'{total_steps_cl} total steps and {step_size_cl} steps per epoch')
print(f'Learning rate schedule: {y[0]:.3g} to {max(y):.3g} to {y[-1]:.3g}')
```
# Training
```
skf = KFold(n_splits=N_FOLDS, shuffle=True, random_state=seed)
oof_pred = []; oof_labels = []; oof_names = []; oof_folds = []; history_list = []; oof_embed = []
for fold,(idxT, idxV) in enumerate(skf.split(np.arange(15))):
if fold >= FOLDS_USED:
break
if tpu: tf.tpu.experimental.initialize_tpu_system(tpu)
K.clear_session()
print(f'\nFOLD: {fold+1}')
print(f'TRAIN: {idxT} VALID: {idxV}')
# Create train and validation sets
TRAIN_FILENAMES = tf.io.gfile.glob([GCS_PATH + '/Id_train%.2i*.tfrec' % x for x in idxT])
# FILENAMES_COMP_CBB = tf.io.gfile.glob([GCS_PATH_CLASSES + '/CBB%.2i*.tfrec' % x for x in idxT])
# FILENAMES_COMP_CBSD = tf.io.gfile.glob([GCS_PATH_CLASSES + '/CBSD%.2i*.tfrec' % x for x in idxT])
# FILENAMES_COMP_CGM = tf.io.gfile.glob([GCS_PATH_CLASSES + '/CGM%.2i*.tfrec' % x for x in idxT])
# FILENAMES_COMP_Healthy = tf.io.gfile.glob([GCS_PATH_CLASSES + '/Healthy%.2i*.tfrec' % x for x in idxT])
VALID_FILENAMES = tf.io.gfile.glob([GCS_PATH + '/Id_train%.2i*.tfrec' % x for x in idxV])
np.random.shuffle(TRAIN_FILENAMES)
ct_train = count_data_items(TRAIN_FILENAMES)
ct_valid = count_data_items(VALID_FILENAMES)
step_size = (ct_train // BATCH_SIZE)
warmup_steps = (warmup_epochs * step_size)
total_steps = (total_epochs * step_size)
### Pre-train the encoder
print('Pre-training the encoder using "Supervised Contrastive" Loss')
with strategy.scope():
encoder = encoder_fn((None, None, CHANNELS))
unfreeze_model(encoder) # unfreeze all layers except "batch normalization"
encoder_proj = add_projection_head((None, None, CHANNELS), encoder)
encoder_proj.summary()
lrfn_fn = lrfn(total_steps, warmup_steps, lr_start, lr_max, lr_min, num_cycles)
optimizer = optimizers.Adam(learning_rate=lambda: lrfn_fn(tf.cast(optimizer.iterations, tf.float32)))
encoder_proj.compile(optimizer=optimizer,
loss={'output': SupervisedContrastiveLoss(temperature)})
history_enc = encoder_proj.fit(x=get_dataset(TRAIN_FILENAMES, repeated=True),
# validation_data=get_dataset(VALID_FILENAMES, ordered=True),
steps_per_epoch=step_size,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
verbose=2).history
### Train the classifier with the frozen encoder
print('Training the classifier with the frozen encoder')
step_size_cl = (ct_train // BATCH_SIZE_CL)
total_steps_cl = (EPOCHS_CL * step_size_cl)
with strategy.scope():
model = classifier_fn((None, None, CHANNELS), N_CLASSES, encoder, trainable=True) #, trainable=False)
model.summary()
lrfn_fn = lrfn(total_steps_cl, warmup_steps_cl, lr_start, lr_max, lr_min, num_cycles_cl)
optimizer = optimizers.Adam(learning_rate=lambda: lrfn_fn(tf.cast(optimizer.iterations, tf.float32)))
model.compile(optimizer=optimizer,
loss={'output': losses.CategoricalCrossentropy(label_smoothing=.3),
'output_healthy': losses.BinaryCrossentropy(label_smoothing=.1),
'output_cmd': losses.BinaryCrossentropy(label_smoothing=.1)},
loss_weights={'output': 1.,
'output_healthy': .1,
'output_cmd': .1},
metrics={'output': metrics.CategoricalAccuracy(),
'output_healthy': metrics.BinaryAccuracy(),
'output_cmd': metrics.BinaryAccuracy()})
model_path = f'model_{fold}.h5'
ckpoint = ModelCheckpoint(model_path, mode='max', verbose=0,
save_best_only=True, save_weights_only=True,
monitor='val_output_categorical_accuracy')
es = EarlyStopping(patience=ES_PATIENCE, mode='max',
restore_best_weights=True, verbose=1,
monitor='val_output_categorical_accuracy')
history = model.fit(x=get_dataset(TRAIN_FILENAMES, repeated=True, augment=True, batch_size=BATCH_SIZE_CL, sparse=False),
validation_data=get_dataset(VALID_FILENAMES, ordered=True, batch_size=BATCH_SIZE_CL, sparse=False),
steps_per_epoch=step_size_cl,
epochs=EPOCHS_CL,
callbacks=[ckpoint, es],
verbose=2).history
### RESULTS
print(f"#### FOLD {fold+1} OOF Accuracy = {np.max(history['val_output_categorical_accuracy']):.3f}")
history_list.append(history)
# Load best model weights
model.load_weights(model_path)
# OOF predictions
ds_valid = get_dataset(VALID_FILENAMES, ordered=True)
oof_folds.append(np.full((ct_valid), fold, dtype='int8'))
oof_labels.append([target[0].numpy() for img, target in iter(ds_valid.unbatch())])
x_oof = ds_valid.map(lambda image, target: image)
oof_pred.append(model.predict(x_oof)[0])
# OOF names
ds_valid_names = get_dataset(VALID_FILENAMES, labeled=False, ordered=True)
oof_names.append(np.array([img_name.numpy().decode('utf-8') for img, img_name in iter(ds_valid_names.unbatch())]))
oof_embed.append(encoder.predict(x_oof)) # OOF embeddings
```
## Model loss graph
```
for fold, history in enumerate(history_list):
print(f'\nFOLD: {fold+1}')
plot_metrics(history, acc_name='output_categorical_accuracy')
```
# Model evaluation
```
y_true = np.concatenate(oof_labels)
# y_true = np.argmax(y_true, axis=-1)
y_prob = np.concatenate(oof_pred)
y_pred = np.argmax(y_prob, axis=-1)
folds = np.concatenate(oof_folds)
names = np.concatenate(oof_names)
acc = accuracy_score(y_true, y_pred)
print(f'Overall OOF Accuracy = {acc:.3f}')
df_oof = pd.DataFrame({'image_id':names, 'fold':folds,
'target':y_true, 'pred':y_pred})
df_oof = df_oof.assign(probs=[prob for prob in y_prob])
df_oof.to_csv('oof.csv', index=False)
display(df_oof.head())
print(classification_report(y_true, y_pred, target_names=CLASSES))
```
# Confusion matrix
```
fig, ax = plt.subplots(1, 1, figsize=(20, 12))
cfn_matrix = confusion_matrix(y_true, y_pred, labels=range(len(CLASSES)))
cfn_matrix = (cfn_matrix.T / cfn_matrix.sum(axis=1)).T
df_cm = pd.DataFrame(cfn_matrix, index=CLASSES, columns=CLASSES)
ax = sns.heatmap(df_cm, cmap='Blues', annot=True, fmt='.2f', linewidths=.5).set_title('Train', fontsize=30)
plt.show()
```
# Visualize embeddings outputs
```
y_embeddings = np.concatenate(oof_embed)
visualize_embeddings(y_embeddings, y_true)
```
# Visualize predictions
```
# train_dataset = get_dataset(TRAINING_FILENAMES, ordered=True)
# x_samp, y_samp = dataset_to_numpy_util(train_dataset, 18)
# y_samp = np.argmax(y_samp, axis=-1)
# x_samp_1, y_samp_1 = x_samp[:9,:,:,:], y_samp[:9]
# samp_preds_1 = model.predict(x_samp_1, batch_size=9)
# display_9_images_with_predictions(x_samp_1, samp_preds_1, y_samp_1)
# x_samp_2, y_samp_2 = x_samp[9:,:,:,:], y_samp[9:]
# samp_preds_2 = model.predict(x_samp_2, batch_size=9)
# display_9_images_with_predictions(x_samp_2, samp_preds_2, y_samp_2)
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
```
2. Second tab - NLP:
**a. letra - a song lyric excerpt;**
**b. artista - the artist the lyric belongs to.**
### Question 5:
#### Build a classifier from the second tab (NLP) of the data file that identifies which artist each song excerpt belongs to (suggestion: Naive Bayes Classifier).
```
DATA_RAW_PATH = '../data/raw/'
DATA_INTER_PATH = '../data/interim/'
FIGURES = '../figures/'
MODELS = '../models/'
DATA_RAW_NAME = 'teste_smarkio_lbs.xls'
DATA_INTER_NAME = 'df_2.csv'
df_raw = pd.read_excel(DATA_RAW_PATH+DATA_RAW_NAME, 'NLP')
df_raw.head(7)
df_raw.info()
df_raw['artista'].unique()
df_raw.groupby(by='artista')['letra'].count()
df = df_raw.copy()
df['letra'] = df['letra'].apply(lambda x: x.lower())
df['letra'] = df['letra'].str.replace(r'[^\w\s]', '', regex=True)
df.to_csv(DATA_INTER_PATH+DATA_INTER_NAME, index=False)
```
Above I remove all special characters and lowercase all the song lyrics.
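The same cleaning can be illustrated on a couple of hypothetical lyric snippets (note `regex=True`, which newer pandas versions require for pattern replacement):

```python
import pandas as pd

s = pd.Series(["Crazy In Love!!", "Don't stop..."])  # hypothetical examples
cleaned = s.str.lower().str.replace(r'[^\w\s]', '', regex=True)
print(cleaned.tolist())  # ['crazy in love', 'dont stop']
```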
```
df.head(7)
from sklearn.model_selection import train_test_split
X = df['letra']
y = df['artista']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=123, stratify=y)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import confusion_matrix
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
import pickle
import nltk
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('wordnet')
```
Using CountVectorizer to tokenize the song lyrics.
- CountVectorizer extracts every unique word in the corpus and turns each document into a vector of word counts, with one entry per vocabulary word.
```
pipe = Pipeline(
[('vect', CountVectorizer()),
('clf', MultinomialNB(alpha=0.1))])
pipe.fit(X_train, y_train)
y_pred = pipe.predict(X_test)
print('Classified songs correctly', round(np.mean(y_pred == y_test)*100, 2), '% of the time.')
```
Using TfidfVectorizer to tokenize the song lyrics.
- TfidfVectorizer measures the importance of each word: words frequent in one lyric but rare across the corpus receive higher weights.
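A sketch of the TF-IDF weighting on hypothetical fragments: a word shared by every document (like "baby" below) ends up with a lower weight than a word unique to one document.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["halo halo baby", "baby love"]  # hypothetical lyric fragments
vect = TfidfVectorizer()
X = vect.fit_transform(docs).toarray()
vocab = vect.vocabulary_  # maps each word to its column index
# in the first document, "halo" (unique to it) outweighs the shared "baby"
print(X[0, vocab['halo']] > X[0, vocab['baby']])  # True
```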
```
pipe = Pipeline(
[('vect', TfidfVectorizer()),
('clf', MultinomialNB(alpha=0.1))])
# train our model on training data
pipe.fit(X_train, y_train)
# score our model on testing data
y_pred = pipe.predict(X_test)
print('Classified songs correctly', round(np.mean(y_pred == y_test)*100, 2), '% of the time.')
artistas = df['artista'].unique()
mat = confusion_matrix(y_test, y_pred)
fig = sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False, xticklabels=artistas, yticklabels=artistas)
plt.xlabel('true label')
plt.ylabel('predicted label')
plt.savefig(FIGURES+'/matriz_artistas.png')
```
- As we can see, the model got Beyoncé's lyrics right 52 times and wrong 23 times, misclassifying them as Rihanna.
- For Rihanna it got 38 lyrics right and 17 wrong.
```
pickle.dump(pipe, open(MODELS+'modelo_nlp.sav', 'wb'))
```
Exporting the model pipeline.
# Tutorial: M4 Daily
This notebook is designed to give a simple introduction to forecasting using the Deep4Cast package. The time series data is taken from the [M4 dataset](https://github.com/M4Competition/M4-methods/tree/master/Dataset), specifically, the ``Daily`` subset of the data.
```
import numpy as np
import os
import pandas as pd
import datetime as dt
import matplotlib.pyplot as plt
import torch
from torch.utils.data import DataLoader
from deep4cast.forecasters import Forecaster
from deep4cast.models import WaveNet
from deep4cast.datasets import TimeSeriesDataset
import deep4cast.transforms as transforms
import deep4cast.metrics as metrics
# Make RNG predictable
np.random.seed(0)
torch.manual_seed(0)
# Use a gpu if available, otherwise use cpu
device = ('cuda' if torch.cuda.is_available() else 'cpu')
%matplotlib inline
```
## Dataset
In this section we inspect the dataset, split it into a training and a test set, and prepare it for easy consumption with PyTorch-based data loaders. Model construction and training will be done in the next section.
```
if not os.path.exists('data/Daily-train.csv'):
!wget https://raw.githubusercontent.com/M4Competition/M4-methods/master/Dataset/Train/Daily-train.csv -P data/
if not os.path.exists('data/Daily-test.csv'):
!wget https://raw.githubusercontent.com/M4Competition/M4-methods/master/Dataset/Test/Daily-test.csv -P data/
data_arr = pd.read_csv('data/Daily-train.csv')
data_arr = data_arr.iloc[:, 1:].values
data_arr = list(data_arr)
for i, ts in enumerate(data_arr):
data_arr[i] = ts[~np.isnan(ts)][None, :]
```
### Divide into train and test
We use the DataLoader object from PyTorch to build batches from the training data set.
However, we first need to specify how much history to use in creating a forecast of a given length:
- horizon = time steps to forecast
- lookback = time steps leading up to the period to be forecast
```
horizon = 14
lookback = 128
```
We've also found that it is not necessary to train on the full dataset, so we here select a 10% random sample of time series for training. We will evaluate on the full dataset later.
```
import random
data_train = []
for time_series in data_arr:
data_train.append(time_series[:, :-horizon],)
data_train = random.sample(data_train, int(len(data_train) * 0.1))
```
We follow [Torchvision](https://pytorch.org/docs/stable/torchvision) in processing examples using [Transforms](https://pytorch.org/docs/stable/torchvision/transforms.html) chained together by [Compose](https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.Compose).
* `Tensorize` creates a tensor of the example.
* `LogTransform` natural logarithm of the targets after adding the offset (similar to [torch.log1p](https://pytorch.org/docs/stable/torch.html#torch.log1p)).
* `RemoveLast` subtracts the final value in the `lookback` from both `lookback` and `horizon`.
* `Target` specifies which index in the array to forecast.
We need to perform these transformations to obtain input features of unit scale. If the input features are not of unit scale (i.e., O(1)) for all features, the optimizer won't be able to find an optimum due to blow-ups in the gradient calculations.
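As a rough sketch of what `LogTransform` and `RemoveLast` do to one series (this mirrors the description above, not Deep4Cast's actual implementation):

```python
import numpy as np

ts = np.array([100.0, 110.0, 121.0])  # hypothetical lookback values
logged = np.log(ts + 1.0)             # LogTransform with offset=1.0 (i.e. log1p)
anchored = logged - logged[-1]        # RemoveLast: subtract the final lookback value
# the series now ends at 0 and lives on a roughly unit scale
```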
```
transform = transforms.Compose([
transforms.ToTensor(),
transforms.LogTransform(targets=[0], offset=1.0),
transforms.RemoveLast(targets=[0]),
transforms.Target(targets=[0]),
])
```
`TimeSeriesDataset` inherits from [Torch Datasets](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) for use with [Torch DataLoader](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader). It handles the creation of the examples used to train the network using `lookback` and `horizon` to partition the time series.
The parameter `step` controls how far apart consecutive windowed samples from a time series are spaced. For example, for a time series of length 100 and a setup with lookback 24 and horizon 12, we split the original time series into smaller training examples of length 24+12=36. How much these examples overlap is controlled by the parameter `step` in `TimeSeriesDataset`.
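The windowing described above can be sketched with plain NumPy (an illustration of the example numbers, not `TimeSeriesDataset` itself):

```python
import numpy as np

def windows(ts, lookback, horizon, step):
    """Split a 1-D series into overlapping examples of length lookback + horizon."""
    length = lookback + horizon
    return [ts[i:i + length] for i in range(0, len(ts) - length + 1, step)]

examples = windows(np.arange(100), lookback=24, horizon=12, step=1)
print(len(examples), len(examples[0]))  # 65 36 — maximally overlapping windows
```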
```
data_train = TimeSeriesDataset(
data_train,
lookback,
horizon,
step=1,
transform=transform
)
# Create mini-batch data loader
dataloader_train = DataLoader(
data_train,
batch_size=512,
shuffle=True,
pin_memory=True,
num_workers=1
)
```
## Modeling and Forecasting
### Temporal Convolutions
The network architecture used here is based on ideas related to [WaveNet](https://deepmind.com/blog/wavenet-generative-model-raw-audio/). We employ the same architecture with a few modifications (e.g., a fully connected output layer for vector forecasts). It turns out that we do not need many layers in this example to achieve state-of-the-art results, most likely because of the simple autoregressive nature of the data.
In many ways, a temporal convolutional architecture is among the simplest possible architectures we could employ using neural networks. In our approach, every layer has the same number of convolutional filters and uses residual connections.
When it comes to loss functions, we use the log-likelihood of probability distributions from the `torch.distributions` module. This means that if one supplies a normal distribution, the likelihood of the transformed data is modeled as coming from a normal distribution.
```
# Define the model architecture
model = WaveNet(input_channels=1,
output_channels=1,
horizon=horizon,
hidden_channels=89,
skip_channels=199,
n_layers=7)
print('Number of model parameters: {}.'.format(model.n_parameters))
print('Receptive field size: {}.'.format(model.receptive_field_size))
# Enable multi-gpu if available
if torch.cuda.device_count() > 1:
print('Using {} GPUs.'.format(torch.cuda.device_count()))
model = torch.nn.DataParallel(model)
# .. and the optimizer
optim = torch.optim.Adam(model.parameters(), lr=0.0008097436666349985)
# .. and the loss
loss = torch.distributions.StudentT
# Fit the forecaster
forecaster = Forecaster(model, loss, optim, n_epochs=5, device=device)
forecaster.fit(dataloader_train, eval_model=True)
```
## Evaluation
Before any evaluation score can be calculated, we load the held out test data.
```
data_train = pd.read_csv('data/Daily-train.csv')
data_test = pd.read_csv('data/Daily-test.csv')
data_train = data_train.iloc[:, 1:].values
data_test = data_test.iloc[:, 1:].values
data_arr = []
for ts_train, ts_test in zip(data_train, data_test):
ts_a = ts_train[~np.isnan(ts_train)]
ts_b = ts_test
ts = np.concatenate([ts_a, ts_b])[None, :]
data_arr.append(ts)
# Sequentialize the training and testing dataset
data_test = []
for time_series in data_arr:
data_test.append(time_series[:, -horizon-lookback:])
data_test = TimeSeriesDataset(
data_test,
lookback,
horizon,
step=1,
transform=transform
)
dataloader_test = DataLoader(
data_test,
batch_size=1024,
shuffle=False,
num_workers=2
)
```
We need to transform the output forecasts. The output from the forecaster is of the form (n_samples, n_time_series, n_variables, n_timesteps).
This means that a point forecast needs to be calculated from the samples, for example, by taking the mean or the median.
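A sketch of that reduction (plain NumPy on hypothetical samples of that shape, not Deep4Cast's API):

```python
import numpy as np

# hypothetical samples with shape (n_samples, n_time_series, n_variables, n_timesteps)
y_samples = np.random.randn(100, 3, 1, 14)
y_mean = y_samples.mean(axis=0)                          # point forecast via the mean
y_median = np.median(y_samples, axis=0)                  # ... or via the median
lower, upper = np.percentile(y_samples, [5, 95], axis=0) # a 90% prediction interval
print(y_mean.shape)  # (3, 1, 14)
```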
```
# Get time series of actuals for the testing period
y_test = []
for example in dataloader_test:
example = dataloader_test.dataset.transform.untransform(example)
y_test.append(example['y'])
y_test = np.concatenate(y_test)
# Get corresponding predictions
y_samples = forecaster.predict(dataloader_test, n_samples=100)
```
We calculate the [symmetric MAPE](https://en.wikipedia.org/wiki/Symmetric_mean_absolute_percentage_error).
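For reference, the textbook definition is sMAPE = 200/n · Σ |F−A| / (|A|+|F|); Deep4Cast's `metrics.smape` may differ in how it handles sample dimensions, but a minimal version looks like:

```python
import numpy as np

def smape(forecast, actual):
    """Symmetric MAPE in percent: 200/n * sum(|F - A| / (|A| + |F|))."""
    forecast = np.asarray(forecast, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return 200.0 * np.mean(np.abs(forecast - actual) /
                           (np.abs(actual) + np.abs(forecast)))

print(smape([110.0], [90.0]))  # 20.0 — 200 * 20 / (110 + 90)
```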
```
# Evaluate forecasts
test_smape = metrics.smape(y_samples, y_test)
print('SMAPE: {}%'.format(test_smape.mean()))
```
# Install
> ## BASICALLY NONE OF THIS WORKS YET, WAIT FOR A [RELEASE](https://github.com/gtri/irobotframework/releases)
The Robot Framework and Jupyter ecosystems are mostly homed in Python, and the `irobotframework` [Kernel](#Kernel) can be installed via the two most popular package managers, `pip` and `conda`. [JupyterLab](#JupyterLab-Extension) is a bit special.
## Kernel
The latest stable release of the `irobotframework` kernel and all of its python dependencies can be installed with `conda` (recommended)...
```bash
# TODO conda install -c conda-forge irobotframework
```
or `pip`...
```bash
# TODO pip install irobotframework
```
> **NOTE** Neither of these will install `notebook` or `jupyterlab`, because sometimes you just want to run a kernel, such as in Continuous Integration.
## Lab Extension
The companion JupyterLab extension, `jupyterlab-robotframework`, provides syntax highlighting and prettier documentation. It is distributed on `conda` and `npm`.
With `conda`...
```bash
# TODO conda install -c conda-forge jupyterlab-robotframework
# jupyter lab build
```
> **NOTE** This will install `jupyterlab` and `nodejs`
Or without `conda`
```bash
# TODO pip install jupyterlab
# install nodejs... somehow...
# jupyter labextension install jupyterlab-robotframework
```
## Reproduce
While the above commands are great for trying out `irobotframework`, if you are going to invest time in doing non-trivial work with Robot Framework, maintaining all your python and non-python dependencies in one file can be very helpful.
### conda env
A `conda` environment is the simplest way to capture all your python and non-python dependencies.
Make an `environment.yml`:
```yaml
# environment.yml
# TODO THIS DOESN'T WORK YET
name: my-irobotframework-environment
channels:
- conda-forge
- defaults
dependencies:
- jupyterlab
- irobotframework
- jupyterlab-robotframework
## uncomment these to do browser-based testing
# - robotframework-seleniumlibrary
# - geckodriver
# - python-chromedriver-binary
```
Update and activate the environment
```bash
conda env update
conda activate my-irobotframework-environment
```
Rebuild JupyterLab
```bash
jupyter lab build
```
### Anaconda Project
When you get into a serious stack of Robot Framework environments, you may find that capturing their dependencies separately from the system-under-test can be valuable. Further, actually _dealing_ with cross-platform issues, rather than sweeping them under the rug with Docker or Vagrant, can reveal brittleness in the test environment and the system-under-test.
While outside the scope of this document, it's worth mentioning [anaconda-project](https://anaconda-project.readthedocs.io/en/latest/). See this project's `anaconda-project.yml` and [contribution guide](./Contributing.ipynb) for a taste of what `anaconda-project` can do to enable reproducible, multi-platform environments.
<img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# GitHub - Get weekly commits from repository
<a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/GitHub/GitHub_Get_weekly_commits_from_repository.ipynb" target="_parent"><img src="https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg"/></a>
**Tags:** #github #repos #commits #stats #naas_drivers #plotly #linechart #operations #analytics #dataframe #html
**Author:** [Sanjeet Attili](https://www.linkedin.com/in/sanjeet-attili-760bab190/)
## Input
```
import pandas as pd
import plotly.express as px
from naas_drivers import github
import naas
```
## Setup Github
**How to find your personal access token on Github?**
- First we need to create a personal access token to get the details of our organization from here: https://github.com/settings/tokens
- You will be asked to select scopes for the token. Which scopes you choose will determine what information and actions you will be able to perform against the API.
- You should be careful with the ones prefixed with write:, delete: and admin: as these might be quite destructive.
- You can find a description of each scope in the docs here (https://docs.github.com/en/developers/apps/building-oauth-apps/scopes-for-oauth-apps).
```
# Github repository url
REPO_URL = "https://github.com/jupyter-naas/awesome-notebooks"
# Github token
GITHUB_TOKEN = "ghp_fUYP0Z5i29AG4ggX8owctGnHUoVHi******"
```
## Model
### Get commits from repository url
```
df_commits = github.connect(GITHUB_TOKEN).repos.get_commits(REPO_URL)
df_commits
```
## Output
### Get weekly commits
```
def get_weekly_commits(df):
# Exclude Github commits
df = df[(df.COMMITTER_EMAIL.str[-10:] != "github.com")]
# Groupby and count
df = df.groupby(pd.Grouper(freq='W', key='AUTHOR_DATE')).agg({"ID": "count"}).reset_index()
df["WEEKS"] = df["AUTHOR_DATE"].dt.strftime("W%U-%Y")
# Cleaning
df = df.rename(columns={"ID": "NB_COMMITS"})
return df
df_weekly = get_weekly_commits(df_commits)
df_weekly
```
### Plot a bar chart of weekly commit activity
```
def create_barchart(df, repository):
# Get repository
repository = repository.split("/")[-1]
# Calc commits
commits = df.NB_COMMITS.sum()
# Create fig
fig = px.bar(df,
title=f"GitHub - {repository} : Weekly user commits <br><span style='font-size: 13px;'>Total commits: {commits}</span>",
x="WEEKS",
y="NB_COMMITS",
labels={
'WEEKS':'Weeks committed',
'NB_COMMITS':"Nb. commits"
})
fig.update_traces(marker_color='black')
fig.update_layout(
plot_bgcolor="#ffffff",
width=1200,
height=800,
font=dict(family="Arial", size=14, color="black"),
paper_bgcolor="white",
margin_pad=10,
)
fig.show()
return fig
fig = create_barchart(df_weekly, REPO_URL)
```
### Save and export html
```
output_path = f"{REPO_URL.split('/')[-1]}_weekly_commits.html"
fig.write_html(output_path)
naas.asset.add(output_path, params={"inline": True})
```
```
#hide
#skip
! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab
#default_exp vision.data
#export
from fastai.torch_basics import *
from fastai.data.all import *
from fastai.vision.core import *
#hide
from nbdev.showdoc import *
# from fastai.vision.augment import *
```
# Vision data
> Helper functions to get data in a `DataLoaders` in the vision application, plus the higher-level class `ImageDataLoaders`
The main classes defined in this module are `ImageDataLoaders` and `SegmentationDataLoaders`, so you probably want to jump to their definitions. They provide factory methods that are a great way to quickly get your data ready for training, see the [vision tutorial](http://docs.fast.ai/tutorial.vision) for examples.
## Helper functions
```
#export
@delegates(subplots)
def get_grid(n, nrows=None, ncols=None, add_vert=0, figsize=None, double=False, title=None, return_fig=False, **kwargs):
"Return a grid of `n` axes, `rows` by `cols`"
nrows = nrows or int(math.sqrt(n))
ncols = ncols or int(np.ceil(n/nrows))
if double: ncols*=2 ; n*=2
fig,axs = subplots(nrows, ncols, figsize=figsize, **kwargs)
axs = [ax if i<n else ax.set_axis_off() for i, ax in enumerate(axs.flatten())][:n]
if title is not None: fig.suptitle(title, weight='bold', size=14)
return (fig,axs) if return_fig else axs
```
This is used by the type-dispatched versions of `show_batch` and `show_results` for the vision application. By default, there will be `int(math.sqrt(n))` rows and `ceil(n/rows)` columns. `double` will double the number of columns and `n`. The default `figsize` is `(cols*imsize, rows*imsize+add_vert)`. If a `title` is passed it is set to the figure. `sharex`, `sharey`, `squeeze`, `subplot_kw` and `gridspec_kw` are all passed down to `plt.subplots`. If `return_fig` is `True`, returns `fig,axs`, otherwise just `axs`.
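The default row/column logic can be checked standalone (reimplementing just the arithmetic from `get_grid` above, without matplotlib):

```python
import math

def grid_shape(n, nrows=None, ncols=None):
    """Default grid shape used by get_grid: ~sqrt(n) rows, enough columns to fit n."""
    nrows = nrows or int(math.sqrt(n))
    ncols = ncols or int(math.ceil(n / nrows))
    return nrows, ncols

print(grid_shape(10))  # (3, 4): 3 rows x 4 columns hold 10 axes
```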
```
# export
def clip_remove_empty(bbox, label):
"Clip bounding boxes with image border and label background the empty ones"
bbox = torch.clamp(bbox, -1, 1)
empty = ((bbox[...,2] - bbox[...,0])*(bbox[...,3] - bbox[...,1]) < 0.)
return (bbox[~empty], label[~empty])
bb = tensor([[-2,-0.5,0.5,1.5], [-0.5,-0.5,0.5,0.5], [1,0.5,0.5,0.75], [-0.5,-0.5,0.5,0.5]])
bb,lbl = clip_remove_empty(bb, tensor([1,2,3,2]))
test_eq(bb, tensor([[-1,-0.5,0.5,1.], [-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5]]))
test_eq(lbl, tensor([1,2,2]))
#export
def bb_pad(samples, pad_idx=0):
"Function that collect `samples` of labelled bboxes and adds padding with `pad_idx`."
samples = [(s[0], *clip_remove_empty(*s[1:])) for s in samples]
max_len = max([len(s[2]) for s in samples])
def _f(img,bbox,lbl):
bbox = torch.cat([bbox,bbox.new_zeros(max_len-bbox.shape[0], 4)])
lbl = torch.cat([lbl, lbl .new_zeros(max_len-lbl .shape[0])+pad_idx])
return img,bbox,lbl
return [_f(*s) for s in samples]
img1,img2 = TensorImage(torch.randn(16,16,3)),TensorImage(torch.randn(16,16,3))
bb1 = tensor([[-2,-0.5,0.5,1.5], [-0.5,-0.5,0.5,0.5], [1,0.5,0.5,0.75], [-0.5,-0.5,0.5,0.5]])
lbl1 = tensor([1, 2, 3, 2])
bb2 = tensor([[-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5]])
lbl2 = tensor([2, 2])
samples = [(img1, bb1, lbl1), (img2, bb2, lbl2)]
res = bb_pad(samples)
non_empty = tensor([True,True,False,True])
test_eq(res[0][0], img1)
test_eq(res[0][1], tensor([[-1,-0.5,0.5,1.], [-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5]]))
test_eq(res[0][2], tensor([1,2,2]))
test_eq(res[1][0], img2)
test_eq(res[1][1], tensor([[-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5], [0,0,0,0]]))
test_eq(res[1][2], tensor([2,2,0]))
```
## Show methods -
```
#export
@typedispatch
def show_batch(x:TensorImage, y, samples, ctxs=None, max_n=10, nrows=None, ncols=None, figsize=None, **kwargs):
if ctxs is None: ctxs = get_grid(min(len(samples), max_n), nrows=nrows, ncols=ncols, figsize=figsize)
ctxs = show_batch[object](x, y, samples, ctxs=ctxs, max_n=max_n, **kwargs)
return ctxs
#export
@typedispatch
def show_batch(x:TensorImage, y:TensorImage, samples, ctxs=None, max_n=10, nrows=None, ncols=None, figsize=None, **kwargs):
if ctxs is None: ctxs = get_grid(min(len(samples), max_n), nrows=nrows, ncols=ncols, add_vert=1, figsize=figsize, double=True)
for i in range(2):
ctxs[i::2] = [b.show(ctx=c, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs[i::2],range(max_n))]
return ctxs
```
## `TransformBlock`s for vision
These are the blocks the vision application provides for the [data block API](http://docs.fast.ai/data.block).
```
#export
def ImageBlock(cls=PILImage):
"A `TransformBlock` for images of `cls`"
return TransformBlock(type_tfms=cls.create, batch_tfms=IntToFloatTensor)
#export
def MaskBlock(codes=None):
"A `TransformBlock` for segmentation masks, potentially with `codes`"
return TransformBlock(type_tfms=PILMask.create, item_tfms=AddMaskCodes(codes=codes), batch_tfms=IntToFloatTensor)
#export
PointBlock = TransformBlock(type_tfms=TensorPoint.create, item_tfms=PointScaler)
BBoxBlock = TransformBlock(type_tfms=TensorBBox.create, item_tfms=PointScaler, dls_kwargs = {'before_batch': bb_pad})
PointBlock.__doc__ = "A `TransformBlock` for points in an image"
BBoxBlock.__doc__ = "A `TransformBlock` for bounding boxes in an image"
show_doc(PointBlock, name='PointBlock')
show_doc(BBoxBlock, name='BBoxBlock')
#export
def BBoxLblBlock(vocab=None, add_na=True):
"A `TransformBlock` for labeled bounding boxes, potentially with `vocab`"
return TransformBlock(type_tfms=MultiCategorize(vocab=vocab, add_na=add_na), item_tfms=BBoxLabeler)
```
If `add_na` is `True`, a new category is added for NaN (that will represent the background class).
## ImageDataLoaders -
```
#export
class ImageDataLoaders(DataLoaders):
"Basic wrapper around several `DataLoader`s with factory methods for computer vision problems"
@classmethod
@delegates(DataLoaders.from_dblock)
def from_folder(cls, path, train='train', valid='valid', valid_pct=None, seed=None, vocab=None, item_tfms=None,
batch_tfms=None, **kwargs):
"Create from imagenet style dataset in `path` with `train` and `valid` subfolders (or provide `valid_pct`)"
splitter = GrandparentSplitter(train_name=train, valid_name=valid) if valid_pct is None else RandomSplitter(valid_pct, seed=seed)
get_items = get_image_files if valid_pct else partial(get_image_files, folders=[train, valid])
dblock = DataBlock(blocks=(ImageBlock, CategoryBlock(vocab=vocab)),
get_items=get_items,
splitter=splitter,
get_y=parent_label,
item_tfms=item_tfms,
batch_tfms=batch_tfms)
return cls.from_dblock(dblock, path, path=path, **kwargs)
@classmethod
@delegates(DataLoaders.from_dblock)
def from_path_func(cls, path, fnames, label_func, valid_pct=0.2, seed=None, item_tfms=None, batch_tfms=None, **kwargs):
"Create from list of `fnames` in `path`s with `label_func`"
dblock = DataBlock(blocks=(ImageBlock, CategoryBlock),
splitter=RandomSplitter(valid_pct, seed=seed),
get_y=label_func,
item_tfms=item_tfms,
batch_tfms=batch_tfms)
return cls.from_dblock(dblock, fnames, path=path, **kwargs)
@classmethod
def from_name_func(cls, path, fnames, label_func, **kwargs):
"Create from the name attrs of `fnames` in `path`s with `label_func`"
f = using_attr(label_func, 'name')
return cls.from_path_func(path, fnames, f, **kwargs)
@classmethod
def from_path_re(cls, path, fnames, pat, **kwargs):
"Create from list of `fnames` in `path`s with re expression `pat`"
return cls.from_path_func(path, fnames, RegexLabeller(pat), **kwargs)
@classmethod
@delegates(DataLoaders.from_dblock)
def from_name_re(cls, path, fnames, pat, **kwargs):
"Create from the name attrs of `fnames` in `path`s with re expression `pat`"
return cls.from_name_func(path, fnames, RegexLabeller(pat), **kwargs)
@classmethod
@delegates(DataLoaders.from_dblock)
def from_df(cls, df, path='.', valid_pct=0.2, seed=None, fn_col=0, folder=None, suff='', label_col=1, label_delim=None,
y_block=None, valid_col=None, item_tfms=None, batch_tfms=None, **kwargs):
"Create from `df` using `fn_col` and `label_col`"
pref = f'{Path(path) if folder is None else Path(path)/folder}{os.path.sep}'
if y_block is None:
is_multi = (is_listy(label_col) and len(label_col) > 1) or label_delim is not None
y_block = MultiCategoryBlock if is_multi else CategoryBlock
splitter = RandomSplitter(valid_pct, seed=seed) if valid_col is None else ColSplitter(valid_col)
dblock = DataBlock(blocks=(ImageBlock, y_block),
get_x=ColReader(fn_col, pref=pref, suff=suff),
get_y=ColReader(label_col, label_delim=label_delim),
splitter=splitter,
item_tfms=item_tfms,
batch_tfms=batch_tfms)
return cls.from_dblock(dblock, df, path=path, **kwargs)
@classmethod
def from_csv(cls, path, csv_fname='labels.csv', header='infer', delimiter=None, **kwargs):
"Create from `path/csv_fname` using `fn_col` and `label_col`"
df = pd.read_csv(Path(path)/csv_fname, header=header, delimiter=delimiter)
return cls.from_df(df, path=path, **kwargs)
@classmethod
@delegates(DataLoaders.from_dblock)
def from_lists(cls, path, fnames, labels, valid_pct=0.2, seed:int=None, y_block=None, item_tfms=None, batch_tfms=None,
**kwargs):
"Create from list of `fnames` and `labels` in `path`"
if y_block is None:
y_block = MultiCategoryBlock if is_listy(labels[0]) and len(labels[0]) > 1 else (
RegressionBlock if isinstance(labels[0], float) else CategoryBlock)
dblock = DataBlock.from_columns(blocks=(ImageBlock, y_block),
splitter=RandomSplitter(valid_pct, seed=seed),
item_tfms=item_tfms,
batch_tfms=batch_tfms)
return cls.from_dblock(dblock, (fnames, labels), path=path, **kwargs)
ImageDataLoaders.from_csv = delegates(to=ImageDataLoaders.from_df)(ImageDataLoaders.from_csv)
ImageDataLoaders.from_name_func = delegates(to=ImageDataLoaders.from_path_func)(ImageDataLoaders.from_name_func)
ImageDataLoaders.from_path_re = delegates(to=ImageDataLoaders.from_path_func)(ImageDataLoaders.from_path_re)
ImageDataLoaders.from_name_re = delegates(to=ImageDataLoaders.from_name_func)(ImageDataLoaders.from_name_re)
```
This class should not be used directly, one of the factory methods should be preferred instead. All those factory methods accept as arguments:
- `item_tfms`: one or several transforms applied to the items before batching them
- `batch_tfms`: one or several transforms applied to the batches once they are formed
- `bs`: the batch size
- `val_bs`: the batch size for the validation `DataLoader` (defaults to `bs`)
- `shuffle_train`: if we shuffle the training `DataLoader` or not
- `device`: the PyTorch device to use (defaults to `default_device()`)
```
show_doc(ImageDataLoaders.from_folder)
```
If `valid_pct` is provided, a random split is performed (with an optional `seed`) by setting aside that percentage of the data for the validation set (instead of looking at the grandparents folder). If a `vocab` is passed, only the folders with names in `vocab` are kept.
Here is an example loading a subsample of MNIST:
```
path = untar_data(URLs.MNIST_TINY)
dls = ImageDataLoaders.from_folder(path)
```
Passing `valid_pct` will ignore the valid/train folders and do a new random split:
```
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2)
dls.valid_ds.items[:3]
show_doc(ImageDataLoaders.from_path_func)
```
The validation set is a random `subset` of `valid_pct`, optionally created with `seed` for reproducibility.
Here is how to create the same `DataLoaders` on the MNIST dataset as the previous example with a `label_func`:
```
fnames = get_image_files(path)
def label_func(x): return x.parent.name
dls = ImageDataLoaders.from_path_func(path, fnames, label_func)
```
Here is another example on the pets dataset. Here filenames are all in an "images" folder and their names have the form `class_name_123.jpg`. One way to properly label them is thus to throw away everything after the last `_`:
```
show_doc(ImageDataLoaders.from_path_re)
```
The validation set is a random subset of `valid_pct`, optionally created with `seed` for reproducibility.
Here is how to create the same `DataLoaders` on the MNIST dataset as the previous example (you will need to change the initial two / by a \ on Windows):
```
pat = r'/([^/]*)/\d+.png$'
dls = ImageDataLoaders.from_path_re(path, fnames, pat)
show_doc(ImageDataLoaders.from_name_func)
```
The validation set is a random subset of `valid_pct`, optionally created with `seed` for reproducibility. This method does the same as `ImageDataLoaders.from_path_func` except `label_func` is applied to the name of each filenames, and not the full path.
```
show_doc(ImageDataLoaders.from_name_re)
```
The validation set is a random subset of `valid_pct`, optionally created with `seed` for reproducibility. This method does the same as `ImageDataLoaders.from_path_re` except `pat` is applied to the name of each filenames, and not the full path.
```
show_doc(ImageDataLoaders.from_df)
```
The validation set is a random subset of `valid_pct`, optionally created with `seed` for reproducibility. Alternatively, if your `df` contains a `valid_col`, give its name or its index to that argument (the column should have `True` for the elements going to the validation set).
You can add an additional `folder` to the filenames in `df` if they should not be concatenated directly to `path`. If they do not contain the proper extensions, you can add `suff`. If your label column contains multiple labels on each row, you can use `label_delim` to warn the library you have a multi-label problem.
`y_block` should be passed when the task automatically picked by the library is wrong, you should then give `CategoryBlock`, `MultiCategoryBlock` or `RegressionBlock`. For more advanced uses, you should use the data block API.
The tiny mnist example from before also contains a version in a dataframe:
```
path = untar_data(URLs.MNIST_TINY)
df = pd.read_csv(path/'labels.csv')
df.head()
```
Here is how to load it using `ImageDataLoaders.from_df`:
```
dls = ImageDataLoaders.from_df(df, path)
```
Here is another example with a multi-label problem:
```
path = untar_data(URLs.PASCAL_2007)
df = pd.read_csv(path/'train.csv')
df.head()
dls = ImageDataLoaders.from_df(df, path, folder='train', valid_col='is_valid')
```
Note that you can also pass `2` to `valid_col` (the index, starting with 0).
```
show_doc(ImageDataLoaders.from_csv)
```
Same as `ImageDataLoaders.from_df` after loading the file with `header` and `delimiter`.
Here is how to load the same dataset as before with this method:
```
dls = ImageDataLoaders.from_csv(path, 'train.csv', folder='train', valid_col='is_valid')
show_doc(ImageDataLoaders.from_lists)
```
The validation set is a random subset of `valid_pct`, optionally created with `seed` for reproducibility. `y_block` can be passed to specify the type of the targets.
```
path = untar_data(URLs.PETS)
fnames = get_image_files(path/"images")
labels = ['_'.join(x.name.split('_')[:-1]) for x in fnames]
dls = ImageDataLoaders.from_lists(path, fnames, labels)
#export
class SegmentationDataLoaders(DataLoaders):
"Basic wrapper around several `DataLoader`s with factory methods for segmentation problems"
@classmethod
@delegates(DataLoaders.from_dblock)
def from_label_func(cls, path, fnames, label_func, valid_pct=0.2, seed=None, codes=None, item_tfms=None, batch_tfms=None, **kwargs):
"Create from list of `fnames` in `path`s with `label_func`."
dblock = DataBlock(blocks=(ImageBlock, MaskBlock(codes=codes)),
splitter=RandomSplitter(valid_pct, seed=seed),
get_y=label_func,
item_tfms=item_tfms,
batch_tfms=batch_tfms)
res = cls.from_dblock(dblock, fnames, path=path, **kwargs)
return res
show_doc(SegmentationDataLoaders.from_label_func)
```
The validation set is a random subset of `valid_pct`, optionally created with `seed` for reproducibility. `codes` contains the mapping from index to label.
```
path = untar_data(URLs.CAMVID_TINY)
fnames = get_image_files(path/'images')
def label_func(x): return path/'labels'/f'{x.stem}_P{x.suffix}'
codes = np.loadtxt(path/'codes.txt', dtype=str)
dls = SegmentationDataLoaders.from_label_func(path, fnames, label_func, codes=codes)
```
# Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
# RadarCOVID-Report
## Data Extraction
```
import datetime
import json
import logging
import os
import shutil
import tempfile
import textwrap
import uuid
import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np
import pandas as pd
import pycountry
import retry
import seaborn as sns
%matplotlib inline
current_working_directory = os.environ.get("PWD")
if current_working_directory:
os.chdir(current_working_directory)
sns.set()
matplotlib.rcParams["figure.figsize"] = (15, 6)
extraction_datetime = datetime.datetime.utcnow()
extraction_date = extraction_datetime.strftime("%Y-%m-%d")
extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)
extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d")
extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H")
current_hour = datetime.datetime.utcnow().hour
are_today_results_partial = current_hour != 23
```
### Constants
```
from Modules.ExposureNotification import exposure_notification_io
spain_region_country_code = "ES"
germany_region_country_code = "DE"
default_backend_identifier = spain_region_country_code
backend_generation_days = 7 * 2
daily_summary_days = 7 * 4 * 3
daily_plot_days = 7 * 4
tek_dumps_load_limit = daily_summary_days + 1
```
### Parameters
```
environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER")
if environment_backend_identifier:
report_backend_identifier = environment_backend_identifier
else:
report_backend_identifier = default_backend_identifier
report_backend_identifier
environment_enable_multi_backend_download = \
os.environ.get("RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD")
if environment_enable_multi_backend_download:
report_backend_identifiers = None
else:
report_backend_identifiers = [report_backend_identifier]
report_backend_identifiers
environment_invalid_shared_diagnoses_dates = \
os.environ.get("RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES")
if environment_invalid_shared_diagnoses_dates:
invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(",")
else:
invalid_shared_diagnoses_dates = []
invalid_shared_diagnoses_dates
```
### COVID-19 Cases
```
report_backend_client = \
exposure_notification_io.get_backend_client_with_identifier(
backend_identifier=report_backend_identifier)
@retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10))
def download_cases_dataframe():
return pd.read_csv("https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv")
confirmed_df_ = download_cases_dataframe()
confirmed_df_.iloc[0]
confirmed_df = confirmed_df_.copy()
confirmed_df = confirmed_df[["date", "new_cases", "iso_code"]]
confirmed_df.rename(
columns={
"date": "sample_date",
"iso_code": "country_code",
},
inplace=True)
def convert_iso_alpha_3_to_alpha_2(x):
try:
return pycountry.countries.get(alpha_3=x).alpha_2
except Exception as e:
logging.info(f"Error converting country ISO Alpha 3 code '{x}': {repr(e)}")
return None
confirmed_df["country_code"] = confirmed_df.country_code.apply(convert_iso_alpha_3_to_alpha_2)
confirmed_df.dropna(inplace=True)
confirmed_df["sample_date"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True)
confirmed_df["sample_date"] = confirmed_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_df.sort_values("sample_date", inplace=True)
confirmed_df.tail()
confirmed_days = pd.date_range(
start=confirmed_df.iloc[0].sample_date,
end=extraction_datetime)
confirmed_days_df = pd.DataFrame(data=confirmed_days, columns=["sample_date"])
confirmed_days_df["sample_date_string"] = \
confirmed_days_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_days_df.tail()
def sort_source_regions_for_display(source_regions: list) -> list:
if report_backend_identifier in source_regions:
source_regions = [report_backend_identifier] + \
list(sorted(set(source_regions).difference([report_backend_identifier])))
else:
source_regions = list(sorted(source_regions))
return source_regions
report_source_regions = report_backend_client.source_regions_for_date(
date=extraction_datetime.date())
report_source_regions = sort_source_regions_for_display(
source_regions=report_source_regions)
report_source_regions
def get_cases_dataframe(source_regions_for_date_function, columns_suffix=None):
source_regions_at_date_df = confirmed_days_df.copy()
source_regions_at_date_df["source_regions_at_date"] = \
source_regions_at_date_df.sample_date.apply(
lambda x: source_regions_for_date_function(date=x))
source_regions_at_date_df.sort_values("sample_date", inplace=True)
source_regions_at_date_df["_source_regions_group"] = source_regions_at_date_df. \
source_regions_at_date.apply(lambda x: ",".join(sort_source_regions_for_display(x)))
source_regions_at_date_df.tail()
#%%
source_regions_for_summary_df_ = \
source_regions_at_date_df[["sample_date", "_source_regions_group"]].copy()
source_regions_for_summary_df_.rename(columns={"_source_regions_group": "source_regions"}, inplace=True)
source_regions_for_summary_df_.tail()
#%%
confirmed_output_columns = ["sample_date", "new_cases", "covid_cases"]
confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns)
for source_regions_group, source_regions_group_series in \
source_regions_at_date_df.groupby("_source_regions_group"):
source_regions_set = set(source_regions_group.split(","))
confirmed_source_regions_set_df = \
confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy()
confirmed_source_regions_group_df = \
confirmed_source_regions_set_df.groupby("sample_date").new_cases.sum() \
.reset_index().sort_values("sample_date")
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df.merge(
confirmed_days_df[["sample_date_string"]].rename(
columns={"sample_date_string": "sample_date"}),
how="right")
confirmed_source_regions_group_df["new_cases"] = \
confirmed_source_regions_group_df["new_cases"].clip(lower=0)
confirmed_source_regions_group_df["covid_cases"] = \
confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round()
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[confirmed_output_columns]
confirmed_source_regions_group_df = confirmed_source_regions_group_df.replace(0, np.nan)
confirmed_source_regions_group_df.fillna(method="ffill", inplace=True)
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[
confirmed_source_regions_group_df.sample_date.isin(
source_regions_group_series.sample_date_string)]
confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df)
result_df = confirmed_output_df.copy()
result_df.tail()
#%%
result_df.rename(columns={"sample_date": "sample_date_string"}, inplace=True)
result_df = confirmed_days_df[["sample_date_string"]].merge(result_df, how="left")
result_df.sort_values("sample_date_string", inplace=True)
result_df.fillna(method="ffill", inplace=True)
result_df.tail()
#%%
result_df[["new_cases", "covid_cases"]].plot()
if columns_suffix:
result_df.rename(
columns={
"new_cases": "new_cases_" + columns_suffix,
"covid_cases": "covid_cases_" + columns_suffix},
inplace=True)
return result_df, source_regions_for_summary_df_
confirmed_eu_df, source_regions_for_summary_df = get_cases_dataframe(
report_backend_client.source_regions_for_date)
confirmed_es_df, _ = get_cases_dataframe(
lambda date: [spain_region_country_code],
columns_suffix=spain_region_country_code.lower())
```
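The `covid_cases` column built above is a 7-day rolling mean of the raw case counts, with `min_periods=0` so the first few days use whatever partial window is available. A minimal pandas sketch of that smoothing step, with made-up numbers:

```python
import pandas as pd

# Hypothetical daily new-case counts (not real data).
new_cases = pd.Series([0, 14, 7, 7, 0, 14, 7])

# Same call as in the notebook: 7-day window, partial windows allowed.
covid_cases = new_cases.rolling(7, min_periods=0).mean().round()
print(covid_cases.tolist())
```

Early entries average only the rows seen so far, so the series starts reacting from day one instead of being NaN for the first six days.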
### Extract API TEKs
```
raw_zip_path_prefix = "Data/TEKs/Raw/"
base_backend_identifiers = [report_backend_identifier]
multi_backend_exposure_keys_df = \
exposure_notification_io.download_exposure_keys_from_backends(
backend_identifiers=report_backend_identifiers,
generation_days=backend_generation_days,
fail_on_error_backend_identifiers=base_backend_identifiers,
save_raw_zip_path_prefix=raw_zip_path_prefix)
multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"]
multi_backend_exposure_keys_df.rename(
columns={
"generation_datetime": "sample_datetime",
"generation_date_string": "sample_date_string",
},
inplace=True)
multi_backend_exposure_keys_df.head()
early_teks_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.rolling_period < 144].copy()
early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6
early_teks_df[early_teks_df.sample_date_string != extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
early_teks_df[early_teks_df.sample_date_string == extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[
"sample_date_string", "region", "key_data"]]
multi_backend_exposure_keys_df.head()
active_regions = \
multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
active_regions
multi_backend_summary_df = multi_backend_exposure_keys_df.groupby(
["sample_date_string", "region"]).key_data.nunique().reset_index() \
.pivot(index="sample_date_string", columns="region") \
.sort_index(ascending=False)
multi_backend_summary_df.rename(
columns={"key_data": "shared_teks_by_generation_date"},
inplace=True)
multi_backend_summary_df.rename_axis("sample_date", inplace=True)
multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int)
multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days)
multi_backend_summary_df.head()
def compute_keys_cross_sharing(x):
teks_x = x.key_data_x.item()
common_teks = set(teks_x).intersection(x.key_data_y.item())
common_teks_fraction = len(common_teks) / len(teks_x)
return pd.Series(dict(
common_teks=common_teks,
common_teks_fraction=common_teks_fraction,
))
multi_backend_exposure_keys_by_region_df = \
multi_backend_exposure_keys_df.groupby("region").key_data.unique().reset_index()
multi_backend_exposure_keys_by_region_df["_merge"] = True
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_df.merge(
multi_backend_exposure_keys_by_region_df, on="_merge")
multi_backend_exposure_keys_by_region_combination_df.drop(
columns=["_merge"], inplace=True)
if multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1:
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_combination_df[
multi_backend_exposure_keys_by_region_combination_df.region_x !=
multi_backend_exposure_keys_by_region_combination_df.region_y]
multi_backend_exposure_keys_cross_sharing_df = \
multi_backend_exposure_keys_by_region_combination_df \
.groupby(["region_x", "region_y"]) \
.apply(compute_keys_cross_sharing) \
.reset_index()
multi_backend_cross_sharing_summary_df = \
multi_backend_exposure_keys_cross_sharing_df.pivot_table(
values=["common_teks_fraction"],
columns="region_x",
index="region_y",
aggfunc=lambda x: x.item())
multi_backend_cross_sharing_summary_df
multi_backend_without_active_region_exposure_keys_df = \
multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier]
multi_backend_without_active_region = \
multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
multi_backend_without_active_region
exposure_keys_summary_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.region == report_backend_identifier]
exposure_keys_summary_df.drop(columns=["region"], inplace=True)
exposure_keys_summary_df = \
exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame()
exposure_keys_summary_df = \
exposure_keys_summary_df.reset_index().set_index("sample_date_string")
exposure_keys_summary_df.sort_index(ascending=False, inplace=True)
exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True)
exposure_keys_summary_df.head()
```
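The per-date, per-backend summary above is a groupby on unique key counts followed by a pivot. The same shape transformation on toy keys (backend names and key values here are invented for illustration):

```python
import pandas as pd

# Toy TEK rows: one row per (date, backend, key).
teks = pd.DataFrame({
    "sample_date_string": ["2020-10-01", "2020-10-01", "2020-10-01", "2020-10-02"],
    "region": ["ES", "ES", "DE", "ES"],
    "key_data": ["k1", "k2", "k3", "k4"],
})

# Unique keys per date and backend, pivoted to one column per backend.
summary = teks.groupby(["sample_date_string", "region"]).key_data.nunique() \
    .reset_index().pivot(index="sample_date_string", columns="region")
summary = summary.fillna(0).astype(int)
```

The pivot yields a `MultiIndex` on columns (`("key_data", region)`), which is why the notebook then renames the `key_data` level to `shared_teks_by_generation_date`.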
### Dump API TEKs
```
tek_list_df = multi_backend_exposure_keys_df[
["sample_date_string", "region", "key_data"]].copy()
tek_list_df["key_data"] = tek_list_df["key_data"].apply(str)
tek_list_df.rename(columns={
"sample_date_string": "sample_date",
"key_data": "tek_list"}, inplace=True)
tek_list_df = tek_list_df.groupby(
["sample_date", "region"]).tek_list.unique().reset_index()
tek_list_df["extraction_date"] = extraction_date
tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour
tek_list_path_prefix = "Data/TEKs/"
tek_list_current_path = tek_list_path_prefix + f"/Current/RadarCOVID-TEKs.json"
tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json"
tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json"
for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]:
os.makedirs(os.path.dirname(path), exist_ok=True)
tek_list_base_df = tek_list_df[tek_list_df.region == report_backend_identifier]
tek_list_base_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json(
tek_list_current_path,
lines=True, orient="records")
tek_list_base_df.drop(columns=["extraction_date_with_hour"]).to_json(
tek_list_daily_path,
lines=True, orient="records")
tek_list_base_df.to_json(
tek_list_hourly_path,
lines=True, orient="records")
tek_list_base_df.head()
```
### Load TEK Dumps
```
import glob
def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame:
extracted_teks_df = pd.DataFrame(columns=["region"])
file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json"))))
if limit:
file_paths = file_paths[:limit]
for file_path in file_paths:
logging.info(f"Loading TEKs from '{file_path}'...")
iteration_extracted_teks_df = pd.read_json(file_path, lines=True)
extracted_teks_df = extracted_teks_df.append(
iteration_extracted_teks_df, sort=False)
extracted_teks_df["region"] = \
extracted_teks_df.region.fillna(spain_region_country_code).copy()
if region:
extracted_teks_df = \
extracted_teks_df[extracted_teks_df.region == region]
return extracted_teks_df
daily_extracted_teks_df = load_extracted_teks(
mode="Daily",
region=report_backend_identifier,
limit=tek_dumps_load_limit)
daily_extracted_teks_df.head()
exposure_keys_summary_df_ = daily_extracted_teks_df \
.sort_values("extraction_date", ascending=False) \
.groupby("sample_date").tek_list.first() \
.to_frame()
exposure_keys_summary_df_.index.name = "sample_date_string"
exposure_keys_summary_df_["tek_list"] = \
exposure_keys_summary_df_.tek_list.apply(len)
exposure_keys_summary_df_ = exposure_keys_summary_df_ \
.rename(columns={"tek_list": "shared_teks_by_generation_date"}) \
.sort_index(ascending=False)
exposure_keys_summary_df = exposure_keys_summary_df_
exposure_keys_summary_df.head()
```
### Daily New TEKs
```
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply(
lambda x: set(sum(x, []))).reset_index()
tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True)
tek_list_df.head()
def compute_teks_by_generation_and_upload_date(date):
day_new_teks_set_df = tek_list_df.copy().diff()
try:
day_new_teks_set = day_new_teks_set_df[
day_new_teks_set_df.index == date].tek_list.item()
except ValueError:
day_new_teks_set = None
if pd.isna(day_new_teks_set):
day_new_teks_set = set()
day_new_teks_df = daily_extracted_teks_df[
daily_extracted_teks_df.extraction_date == date].copy()
day_new_teks_df["shared_teks"] = \
day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set))
day_new_teks_df["shared_teks"] = \
day_new_teks_df.shared_teks.apply(len)
day_new_teks_df["upload_date"] = date
day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True)
day_new_teks_df = day_new_teks_df[
["upload_date", "generation_date", "shared_teks"]]
day_new_teks_df["generation_to_upload_days"] = \
(pd.to_datetime(day_new_teks_df.upload_date) -
pd.to_datetime(day_new_teks_df.generation_date)).dt.days
day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0]
return day_new_teks_df
shared_teks_generation_to_upload_df = pd.DataFrame()
for upload_date in daily_extracted_teks_df.extraction_date.unique():
shared_teks_generation_to_upload_df = \
shared_teks_generation_to_upload_df.append(
compute_teks_by_generation_and_upload_date(date=upload_date))
shared_teks_generation_to_upload_df \
.sort_values(["upload_date", "generation_date"], ascending=False, inplace=True)
shared_teks_generation_to_upload_df.tail()
today_new_teks_df = \
shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.upload_date == extraction_date].copy()
today_new_teks_df.tail()
if not today_new_teks_df.empty:
today_new_teks_df.set_index("generation_to_upload_days") \
.sort_index().shared_teks.plot.bar()
generation_to_upload_period_pivot_df = \
shared_teks_generation_to_upload_df[
["upload_date", "generation_to_upload_days", "shared_teks"]] \
.pivot(index="upload_date", columns="generation_to_upload_days") \
.sort_index(ascending=False).fillna(0).astype(int) \
.droplevel(level=0, axis=1)
generation_to_upload_period_pivot_df.head()
new_tek_df = tek_list_df.diff().tek_list.apply(
lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()
new_tek_df.rename(columns={
"tek_list": "shared_teks_by_upload_date",
"extraction_date": "sample_date_string",}, inplace=True)
new_tek_df.tail()
shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \
[["upload_date", "shared_teks"]].rename(
columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_teks_uploaded_on_generation_date",
})
shared_teks_uploaded_on_generation_date_df.head()
estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \
.groupby(["upload_date"]).shared_teks.max().reset_index() \
.sort_values(["upload_date"], ascending=False) \
.rename(columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_diagnoses",
})
invalid_shared_diagnoses_dates_mask = \
estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates)
estimated_shared_diagnoses_df[invalid_shared_diagnoses_dates_mask] = 0
estimated_shared_diagnoses_df.head()
```
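The `tek_list_df.diff()` call above relies on Python set subtraction to isolate the keys appearing for the first time in each daily dump, treating the first dump's NaN diff as "no new keys". The same idea spelled out explicitly on toy data:

```python
import pandas as pd

# Full TEK sets visible in each daily dump (toy data).
dumps = pd.Series(
    [{"a", "b"}, {"a", "b", "c"}, {"a", "b", "c", "d", "e"}],
    index=["2020-10-01", "2020-10-02", "2020-10-03"],
)

# New keys per day: today's set minus yesterday's; the first day has no
# previous dump, so it contributes no "new" keys (mirroring the NaN handling).
previous = dumps.shift()
new_keys = pd.Series(
    [cur - prev if isinstance(prev, set) else set()
     for cur, prev in zip(dumps, previous)],
    index=dumps.index,
)
new_key_counts = new_keys.apply(len)
```

Intersecting each day's new-key set with the per-generation-date TEK lists, as the notebook does, then attributes those uploads back to the day the keys were generated.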
### Hourly New TEKs
```
hourly_extracted_teks_df = load_extracted_teks(
mode="Hourly", region=report_backend_identifier, limit=25)
hourly_extracted_teks_df.head()
hourly_new_tek_count_df = hourly_extracted_teks_df \
.groupby("extraction_date_with_hour").tek_list. \
apply(lambda x: set(sum(x, []))).reset_index().copy()
hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \
.sort_index(ascending=True)
hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff()
hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply(
lambda x: len(x) if not pd.isna(x) else 0)
hourly_new_tek_count_df.rename(columns={
"new_tek_count": "shared_teks_by_upload_date"}, inplace=True)
hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[
"extraction_date_with_hour", "shared_teks_by_upload_date"]]
hourly_new_tek_count_df.head()
hourly_summary_df = hourly_new_tek_count_df.copy()
hourly_summary_df.set_index("extraction_date_with_hour", inplace=True)
hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index()
hourly_summary_df["datetime_utc"] = pd.to_datetime(
hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H")
hourly_summary_df.set_index("datetime_utc", inplace=True)
hourly_summary_df = hourly_summary_df.tail(-1)
hourly_summary_df.head()
```
### Official Statistics
```
import requests
import pandas.io.json
official_stats_response = requests.get("https://radarcovid.covid19.gob.es/kpi/statistics/basics")
official_stats_response.raise_for_status()
official_stats_df_ = pandas.io.json.json_normalize(official_stats_response.json())
official_stats_df = official_stats_df_.copy()
official_stats_df["date"] = pd.to_datetime(official_stats_df["date"], dayfirst=True)
official_stats_df.head()
official_stats_column_map = {
"date": "sample_date",
"applicationsDownloads.totalAcummulated": "app_downloads_es_accumulated",
"communicatedContagions.totalAcummulated": "shared_diagnoses_es_accumulated",
}
accumulated_suffix = "_accumulated"
accumulated_values_columns = \
list(filter(lambda x: x.endswith(accumulated_suffix), official_stats_column_map.values()))
interpolated_values_columns = \
list(map(lambda x: x[:-len(accumulated_suffix)], accumulated_values_columns))
official_stats_df = \
official_stats_df[official_stats_column_map.keys()] \
.rename(columns=official_stats_column_map)
official_stats_df["extraction_date"] = extraction_date
official_stats_df.head()
official_stats_path = "Data/Statistics/Current/RadarCOVID-Statistics.json"
previous_official_stats_df = pd.read_json(official_stats_path, orient="records", lines=True)
previous_official_stats_df["sample_date"] = pd.to_datetime(previous_official_stats_df["sample_date"], dayfirst=True)
official_stats_df = official_stats_df.append(previous_official_stats_df)
official_stats_df.head()
official_stats_df = official_stats_df[~(official_stats_df.shared_diagnoses_es_accumulated == 0)]
official_stats_df.sort_values("extraction_date", ascending=False, inplace=True)
official_stats_df.drop_duplicates(subset=["sample_date"], keep="first", inplace=True)
official_stats_df.head()
official_stats_stored_df = official_stats_df.copy()
official_stats_stored_df["sample_date"] = official_stats_stored_df.sample_date.dt.strftime("%Y-%m-%d")
official_stats_stored_df.to_json(official_stats_path, orient="records", lines=True)
official_stats_df.drop(columns=["extraction_date"], inplace=True)
official_stats_df = confirmed_days_df.merge(official_stats_df, how="left")
official_stats_df.sort_values("sample_date", ascending=False, inplace=True)
official_stats_df.head()
official_stats_df[accumulated_values_columns] = \
official_stats_df[accumulated_values_columns] \
.astype(float).interpolate(limit_area="inside")
official_stats_df[interpolated_values_columns] = \
official_stats_df[accumulated_values_columns].diff(periods=-1)
official_stats_df.drop(columns="sample_date", inplace=True)
official_stats_df.head()
```
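The last step above converts the accumulated official counters into daily values: interpolate gaps strictly inside the known range, then difference each row against the next (the frame is sorted newest-first, hence `periods=-1`). A toy sketch, assuming descending dates and a fabricated counter:

```python
import pandas as pd

# Accumulated counter, newest first, with one missing day in the middle.
accumulated = pd.Series(
    [100.0, 90.0, None, 70.0],
    index=["2020-10-04", "2020-10-03", "2020-10-02", "2020-10-01"],
)

# Fill gaps only between known values, then take day-over-day increments.
filled = accumulated.interpolate(limit_area="inside")
daily = filled.diff(periods=-1)
```

`limit_area="inside"` leaves leading/trailing NaNs untouched, so days before the first report or after the last one are not invented; only the interior gap is filled linearly.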
### Data Merge
```
result_summary_df = exposure_keys_summary_df.merge(
new_tek_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
official_stats_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = confirmed_eu_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df = confirmed_es_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string)
result_summary_df = result_summary_df.merge(source_regions_for_summary_df, how="left")
result_summary_df.set_index(["sample_date", "source_regions"], inplace=True)
result_summary_df.drop(columns=["sample_date_string"], inplace=True)
result_summary_df.sort_index(ascending=False, inplace=True)
result_summary_df.head()
with pd.option_context("mode.use_inf_as_na", True):
result_summary_df = result_summary_df.fillna(0).astype(int)
result_summary_df["teks_per_shared_diagnosis"] = \
(result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case"] = \
(result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(result_summary_df.shared_diagnoses_es / result_summary_df.covid_cases_es).fillna(0)
result_summary_df.head(daily_plot_days)
def compute_aggregated_results_summary(days) -> pd.DataFrame:
aggregated_result_summary_df = result_summary_df.copy()
aggregated_result_summary_df["covid_cases_for_ratio"] = \
aggregated_result_summary_df.covid_cases.mask(
aggregated_result_summary_df.shared_diagnoses == 0, 0)
aggregated_result_summary_df["covid_cases_for_ratio_es"] = \
aggregated_result_summary_df.covid_cases_es.mask(
aggregated_result_summary_df.shared_diagnoses_es == 0, 0)
aggregated_result_summary_df = aggregated_result_summary_df \
.sort_index(ascending=True).fillna(0).rolling(days).agg({
"covid_cases": "sum",
"covid_cases_es": "sum",
"covid_cases_for_ratio": "sum",
"covid_cases_for_ratio_es": "sum",
"shared_teks_by_generation_date": "sum",
"shared_teks_by_upload_date": "sum",
"shared_diagnoses": "sum",
"shared_diagnoses_es": "sum",
}).sort_index(ascending=False)
with pd.option_context("mode.use_inf_as_na", True):
aggregated_result_summary_df = aggregated_result_summary_df.fillna(0).astype(int)
aggregated_result_summary_df["teks_per_shared_diagnosis"] = \
(aggregated_result_summary_df.shared_teks_by_upload_date /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case"] = \
(aggregated_result_summary_df.shared_diagnoses /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(aggregated_result_summary_df.shared_diagnoses_es /
aggregated_result_summary_df.covid_cases_for_ratio_es).fillna(0)
return aggregated_result_summary_df
aggregated_result_with_7_days_window_summary_df = compute_aggregated_results_summary(days=7)
aggregated_result_with_7_days_window_summary_df.head()
last_7_days_summary = aggregated_result_with_7_days_window_summary_df.to_dict(orient="records")[1]
last_7_days_summary
aggregated_result_with_14_days_window_summary_df = compute_aggregated_results_summary(days=13)
last_14_days_summary = aggregated_result_with_14_days_window_summary_df.to_dict(orient="records")[1]
last_14_days_summary
```
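`compute_aggregated_results_summary` masks out case counts on days with zero shared diagnoses before forming the usage ratio, so days that could not contribute any diagnosis do not deflate it. A minimal sketch of that masking step over a 7-day window (numbers are invented):

```python
import pandas as pd

df = pd.DataFrame({
    "covid_cases":      [10, 10, 10, 10, 10, 10, 10],
    "shared_diagnoses": [1, 0, 2, 1, 0, 1, 2],
})

# Ignore cases on days with zero shared diagnoses, as in the notebook.
df["covid_cases_for_ratio"] = df.covid_cases.mask(df.shared_diagnoses == 0, 0)

window = df.rolling(7).agg({
    "covid_cases_for_ratio": "sum",
    "shared_diagnoses": "sum",
})
ratio = window.shared_diagnoses / window.covid_cases_for_ratio
```

Here the full-window ratio is 7 diagnoses over 50 masked cases (five contributing days) rather than 7 over 70, which is the correction the mask implements.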
## Report Results
```
display_column_name_mapping = {
"sample_date": "Sample\u00A0Date\u00A0(UTC)",
"source_regions": "Source Countries",
"datetime_utc": "Timestamp (UTC)",
"upload_date": "Upload Date (UTC)",
"generation_to_upload_days": "Generation to Upload Period in Days",
"region": "Backend",
"region_x": "Backend\u00A0(A)",
"region_y": "Backend\u00A0(B)",
"common_teks": "Common TEKs Shared Between Backends",
"common_teks_fraction": "Fraction of TEKs in Backend (A) Available in Backend (B)",
"covid_cases": "COVID-19 Cases (Source Countries)",
"shared_teks_by_generation_date": "Shared TEKs by Generation Date (Source Countries)",
"shared_teks_by_upload_date": "Shared TEKs by Upload Date (Source Countries)",
"shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date (Source Countries)",
"shared_diagnoses": "Shared Diagnoses (Source Countries – Estimation)",
"teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis (Source Countries)",
"shared_diagnoses_per_covid_case": "Usage Ratio (Source Countries)",
"covid_cases_es": "COVID-19 Cases (Spain)",
"app_downloads_es": "App Downloads (Spain – Official)",
"shared_diagnoses_es": "Shared Diagnoses (Spain – Official)",
"shared_diagnoses_per_covid_case_es": "Usage Ratio (Spain)",
}
summary_columns = [
"covid_cases",
"shared_teks_by_generation_date",
"shared_teks_by_upload_date",
"shared_teks_uploaded_on_generation_date",
"shared_diagnoses",
"teks_per_shared_diagnosis",
"shared_diagnoses_per_covid_case",
"covid_cases_es",
"app_downloads_es",
"shared_diagnoses_es",
"shared_diagnoses_per_covid_case_es",
]
summary_percentage_columns = [
"shared_diagnoses_per_covid_case_es",
"shared_diagnoses_per_covid_case",
]
```
### Daily Summary Table
```
result_summary_df_ = result_summary_df.copy()
result_summary_df = result_summary_df[summary_columns]
result_summary_with_display_names_df = result_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
result_summary_with_display_names_df
```
### Daily Summary Plots
```
result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \
.droplevel(level=["source_regions"]) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar(
title=f"Daily Summary",
rot=45, subplots=True, figsize=(15, 30), legend=False)
ax_ = summary_ax_list[0]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.95)
_ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist()))
for percentage_column in summary_percentage_columns:
percentage_column_index = summary_columns.index(percentage_column)
summary_ax_list[percentage_column_index].yaxis \
.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))
```
### Daily Generation to Upload Period Table
```
display_generation_to_upload_period_pivot_df = \
generation_to_upload_period_pivot_df \
.head(backend_generation_days)
display_generation_to_upload_period_pivot_df \
.head(backend_generation_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping)
fig, generation_to_upload_period_pivot_table_ax = plt.subplots(
figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df)))
generation_to_upload_period_pivot_table_ax.set_title(
"Shared TEKs Generation to Upload Period Table")
sns.heatmap(
data=display_generation_to_upload_period_pivot_df
.rename_axis(columns=display_column_name_mapping)
.rename_axis(index=display_column_name_mapping),
fmt=".0f",
annot=True,
ax=generation_to_upload_period_pivot_table_ax)
generation_to_upload_period_pivot_table_ax.get_figure().tight_layout()
```
### Hourly Summary Plots
```
hourly_summary_ax_list = hourly_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.plot.bar(
title=f"Last 24h Summary",
rot=45, subplots=True, legend=False)
ax_ = hourly_summary_ax_list[-1]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.9)
_ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist()))
```
### Publish Results
```
github_repository = os.environ.get("GITHUB_REPOSITORY")
if github_repository is None:
github_repository = "pvieito/Radar-STATS"
github_project_base_url = "https://github.com/" + github_repository
display_formatters = {
display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case_es"]: lambda x: f"{x:.2%}" if x != 0 else "",
}
general_columns = \
list(filter(lambda x: x not in display_formatters, display_column_name_mapping.values()))
general_formatter = lambda x: f"{x}" if x != 0 else ""
display_formatters.update(dict(map(lambda x: (x, general_formatter), general_columns)))
daily_summary_table_html = result_summary_with_display_names_df \
.head(daily_plot_days) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.to_html(formatters=display_formatters)
multi_backend_summary_table_html = multi_backend_summary_df \
.head(daily_plot_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(formatters=display_formatters)
def format_multi_backend_cross_sharing_fraction(x):
if pd.isna(x):
return "-"
elif round(x * 100, 1) == 0:
return ""
else:
return f"{x:.1%}"
multi_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(
classes="table-center",
formatters=display_formatters,
float_format=format_multi_backend_cross_sharing_fraction)
multi_backend_cross_sharing_summary_table_html = \
multi_backend_cross_sharing_summary_table_html \
.replace("<tr>","<tr style=\"text-align: center;\">")
extraction_date_result_summary_df = \
result_summary_df[result_summary_df.index.get_level_values("sample_date") == extraction_date]
extraction_date_result_hourly_summary_df = \
hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour]
covid_cases = \
extraction_date_result_summary_df.covid_cases.item()
shared_teks_by_generation_date = \
extraction_date_result_summary_df.shared_teks_by_generation_date.item()
shared_teks_by_upload_date = \
extraction_date_result_summary_df.shared_teks_by_upload_date.item()
shared_diagnoses = \
extraction_date_result_summary_df.shared_diagnoses.item()
teks_per_shared_diagnosis = \
extraction_date_result_summary_df.teks_per_shared_diagnosis.item()
shared_diagnoses_per_covid_case = \
extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item()
shared_teks_by_upload_date_last_hour = \
extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int)
display_source_regions = ", ".join(report_source_regions)
if len(report_source_regions) == 1:
display_brief_source_regions = report_source_regions[0]
else:
display_brief_source_regions = f"{len(report_source_regions)} 🇪🇺"
def get_temporary_image_path() -> str:
return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png")
def save_temporary_plot_image(ax):
if isinstance(ax, np.ndarray):
ax = ax[0]
media_path = get_temporary_image_path()
ax.get_figure().savefig(media_path)
return media_path
def save_temporary_dataframe_image(df):
import dataframe_image as dfi
df = df.copy()
df_styler = df.style.format(display_formatters)
media_path = get_temporary_image_path()
dfi.export(df_styler, media_path)
return media_path
summary_plots_image_path = save_temporary_plot_image(
ax=summary_ax_list)
summary_table_image_path = save_temporary_dataframe_image(
df=result_summary_with_display_names_df)
hourly_summary_plots_image_path = save_temporary_plot_image(
ax=hourly_summary_ax_list)
multi_backend_summary_table_image_path = save_temporary_dataframe_image(
df=multi_backend_summary_df)
generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image(
ax=generation_to_upload_period_pivot_table_ax)
```
### Save Results
```
report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-"
result_summary_df.to_csv(
report_resources_path_prefix + "Summary-Table.csv")
result_summary_df.to_html(
report_resources_path_prefix + "Summary-Table.html")
hourly_summary_df.to_csv(
report_resources_path_prefix + "Hourly-Summary-Table.csv")
multi_backend_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Summary-Table.csv")
multi_backend_cross_sharing_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Cross-Sharing-Summary-Table.csv")
generation_to_upload_period_pivot_df.to_csv(
report_resources_path_prefix + "Generation-Upload-Period-Table.csv")
_ = shutil.copyfile(
summary_plots_image_path,
report_resources_path_prefix + "Summary-Plots.png")
_ = shutil.copyfile(
summary_table_image_path,
report_resources_path_prefix + "Summary-Table.png")
_ = shutil.copyfile(
hourly_summary_plots_image_path,
report_resources_path_prefix + "Hourly-Summary-Plots.png")
_ = shutil.copyfile(
multi_backend_summary_table_image_path,
report_resources_path_prefix + "Multi-Backend-Summary-Table.png")
_ = shutil.copyfile(
generation_to_upload_period_pivot_table_image_path,
report_resources_path_prefix + "Generation-Upload-Period-Table.png")
```
### Publish Results as JSON
```
def generate_summary_api_results(df: pd.DataFrame) -> list:
api_df = df.reset_index().copy()
api_df["sample_date_string"] = \
api_df["sample_date"].dt.strftime("%Y-%m-%d")
api_df["source_regions"] = \
api_df["source_regions"].apply(lambda x: x.split(","))
return api_df.to_dict(orient="records")
summary_api_results = \
generate_summary_api_results(df=result_summary_df)
today_summary_api_results = \
generate_summary_api_results(df=extraction_date_result_summary_df)[0]
summary_results = dict(
backend_identifier=report_backend_identifier,
source_regions=report_source_regions,
extraction_datetime=extraction_datetime,
extraction_date=extraction_date,
extraction_date_with_hour=extraction_date_with_hour,
last_hour=dict(
shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour,
shared_diagnoses=0,
),
today=today_summary_api_results,
last_7_days=last_7_days_summary,
last_14_days=last_14_days_summary,
daily_results=summary_api_results)
summary_results = \
json.loads(pd.Series([summary_results]).to_json(orient="records"))[0]
with open(report_resources_path_prefix + "Summary-Results.json", "w") as f:
json.dump(summary_results, f, indent=4)
```
### Publish on README
```
with open("Data/Templates/README.md", "r") as f:
readme_contents = f.read()
readme_contents = readme_contents.format(
extraction_date_with_hour=extraction_date_with_hour,
github_project_base_url=github_project_base_url,
daily_summary_table_html=daily_summary_table_html,
multi_backend_summary_table_html=multi_backend_summary_table_html,
multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html,
display_source_regions=display_source_regions)
with open("README.md", "w") as f:
f.write(readme_contents)
```
### Publish on Twitter
```
enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER")
github_event_name = os.environ.get("GITHUB_EVENT_NAME")
if enable_share_to_twitter and github_event_name == "schedule" and \
(shared_teks_by_upload_date_last_hour or not are_today_results_partial):
import tweepy
twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"]
twitter_api_auth_keys = twitter_api_auth_keys.split(":")
auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1])
auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3])
api = tweepy.API(auth)
summary_plots_media = api.media_upload(summary_plots_image_path)
summary_table_media = api.media_upload(summary_table_image_path)
generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path)
media_ids = [
summary_plots_media.media_id,
summary_table_media.media_id,
generation_to_upload_period_pivot_table_image_media.media_id,
]
if are_today_results_partial:
today_addendum = " (Partial)"
else:
today_addendum = ""
def format_shared_diagnoses_per_covid_case(value) -> str:
if value == 0:
return "–"
return f"≤{value:.2%}"
display_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=shared_diagnoses_per_covid_case)
display_last_14_days_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case"])
display_last_14_days_shared_diagnoses_per_covid_case_es = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case_es"])
status = textwrap.dedent(f"""
#RadarCOVID – {extraction_date_with_hour}
Today{today_addendum}:
- Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour)
- Shared Diagnoses: ≤{shared_diagnoses:.0f}
- Usage Ratio: {display_shared_diagnoses_per_covid_case}
Last 14 Days:
- Usage Ratio (Estimation): {display_last_14_days_shared_diagnoses_per_covid_case}
- Usage Ratio (Official): {display_last_14_days_shared_diagnoses_per_covid_case_es}
Info: {github_project_base_url}#documentation
""")
status = status.encode(encoding="utf-8")
api.update_status(status=status, media_ids=media_ids)
```
```
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
correlation_dataframe = training_examples.copy()
correlation_dataframe["target"] = training_targets["median_house_value"]
correlation_dataframe.corr()
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a linear regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_model(
learning_rate,
steps,
batch_size,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a linear regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `LinearRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a linear regressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
linear_regressor = tf.estimator.LinearRegressor(
feature_columns=construct_feature_columns(training_examples),
optimizer=my_optimizer
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
linear_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period,
)
# Take a break and compute predictions.
training_predictions = linear_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = linear_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
return linear_regressor
minimal_features = [
"median_income",
"latitude",
]
minimal_training_examples = training_examples[minimal_features]
minimal_validation_examples = validation_examples[minimal_features]
train_model(
learning_rate=0.001,
steps=500,
batch_size=5,
training_examples=minimal_training_examples,
training_targets=training_targets,
validation_examples=minimal_validation_examples,
validation_targets=validation_targets)
train_model(
learning_rate=0.01,
steps=500,
batch_size=5,
training_examples=minimal_training_examples,
training_targets=training_targets,
validation_examples=minimal_validation_examples,
validation_targets=validation_targets)
plt.scatter(training_examples["latitude"], training_targets["median_house_value"])
LATITUDE_RANGES = list(zip(range(32, 44), range(33, 45)))  # list() so the ranges survive being iterated more than once
def select_and_transform_features(source_df):
selected_examples = pd.DataFrame()
selected_examples["median_income"] = source_df["median_income"]
for r in LATITUDE_RANGES:
selected_examples["latitude_%d_to_%d" % r] = source_df["latitude"].apply(
lambda l: 1.0 if l >= r[0] and l < r[1] else 0.0)
return selected_examples
selected_training_examples = select_and_transform_features(training_examples)
selected_validation_examples = select_and_transform_features(validation_examples)
_ = train_model(
learning_rate=0.01,
steps=500,
batch_size=5,
training_examples=selected_training_examples,
training_targets=training_targets,
validation_examples=selected_validation_examples,
validation_targets=validation_targets)
```
## Classical Mechanics - Week 2
### Last week we:
- Analytically mapped 1D motion over some time
- Gained practice with functions
- Reviewed vectors and matrices in Python
### This week we will:
- Practice using Python syntax and variable manipulation
- Utilize analytical solutions to create more refined functions
- Work in 3 Dimensions
```
## As usual, here are some useful packages we will be using. Feel free to use more and experiment as you wish.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
%matplotlib inline
```
##### We previously plotted our variable as an equation.
However, this week we will begin storing the position information in vectors, implemented in code as arrays.
Let's get some practice with this. The cell below creates two arrays: one containing the times to be analyzed, and one containing the x and y components of the position vector at each point in time. The second array starts out filled with zeros; the code then sets the initial position to x = 2 and y = 1. Take a look at the code and comments to get a better understanding of what's happening.
```
tf = 4 #length of value to be analyzed
dt = .001 # step sizes
t = np.arange(0.0,tf,dt) # Creates an evenly spaced time array going from 0 to 3.999, with step sizes .001
p = np.zeros((len(t), 2)) # Creates an empty array of [x,y] arrays (our vectors). Array size is same as the one for time.
p[0] = [2.0,1.0] # This sets the inital position to be x = 2 and y = 1
```
##### Below we are printing specific values in our array to see what's being stored where. The first number in p[] selects which array iteration (row) we are looking at, while the number after the "," selects which entry within that row we get back. Notice how the indices start at 0.
Feel free to mess around with this as much as you want.
```
print(p[0]) # Prints the first array
print(p[0,:]) # Same as above, these commands are interchangeable
print(p[3999]) # Prints the 4000th array
print(p[0,0]) # Prints the first value of the first array
print(p[0,1]) # Prints the second value of first array
print(p[:,0]) # Prints the first value of all the arrays
# Try running this cell. Notice how it gives an error since we did not implement a third dimension into our arrays
print(p[:,2])
```
## In the cell below we want to manipulate the arrays.
Our goal is to make each vector's x component equal to its index in the array and its y component twice that value, EXCEPT the first vector, which we have already set.
i.e. $p[0] = [2,1], p[1] = [1,2], p[2] = [2,4], p[3] = [3,6], ...$
#### The skeleton code has already been provided for you, along with hints. Your job is to complete the code, execute it, and then run the checker in the cell below it.
We will be using a for loop and an if statement in the checker code.
If your code is working, the cell with the checker should print "Success!" If "There is an error in your code" appears, look for the error in your code and re-run the checker cell until you get the success message.
```
for i in range(1,3999):
p[i] = [,] # What equation should you put in the x and y components?
# Checker cell to make sure your code is performing correctly
c = 0
for i in range(0,3999):
if i == 0:
if p[i,0] != 2.0:
c += 1
if p[i,1] != 1.0:
c += 1
else:
if p[i,0] != 1.0*i:
c += 1
if p[i,1] != 2.0*i:
c += 1
if c == 0:
print("Success!")
else:
print("There is an error in your code")
```
### Last week:
We made basic plots of a ball moving in 1D space using Physics I equations, coded within a function. However, this week we will be working with a bit more advanced concepts for the same problem.
#### You learned to derive the equations of motions starting from Force in class/reading and how to solve integrations analytically. Now let's use those concepts to analyze such phenomena.
## Assume we have a soccer ball moving in 3 dimensions with the following trajectory:
- $x(t) = 10t\cos{45^{\circ}} $
- $y(t) = 10t\sin{45^{\circ}} $
- $z(t) = 10t - \dfrac{9.81}{2}t^2$
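These trajectories follow directly from Newton's second law with gravity as the only force. As a quick sanity check (taking the initial velocity components $10\cos{45^{\circ}}$, $10\sin{45^{\circ}}$, and $10$ implied by the equations above, with $\vec{r}(0)=0$ and $g=9.81\ \mathrm{m/s^2}$), integrating twice gives:
$$\vec{F} = -mg\,\hat{z} \quad\Rightarrow\quad \ddot{\vec{r}} = -g\,\hat{z}$$
$$\dot{\vec{r}}(t) = \left(10\cos{45^{\circ}},\ 10\sin{45^{\circ}},\ 10 - gt\right)$$
$$\vec{r}(t) = \left(10t\cos{45^{\circ}},\ 10t\sin{45^{\circ}},\ 10t - \tfrac{g}{2}t^2\right)$$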
Now let's create a 3D plot using these equations. In the cell below, fill in the equations for their respective variables. The time array we want to analyze is already provided for you, along with $x(t)$.
**Important Concept:** Numpy comes with many mathematical functions, including the trigonometric functions sine, cosine, and tangent, which we will use this week. These functions work in radians, so we will also use a numpy function that converts degrees to radians.
```
tf = 2.04 # The final time to be evaluated
dt = 0.1 # The time step size
t = np.arange(0,tf,dt) # The time array
theta_deg = 45 # Degrees
theta_rad = np.radians(theta_deg) # Converts our degrees to its radians counterpart
x = 10*t*np.cos(theta_rad) # Equation for our x component, utilizing np.cos() and our calculated radians
y = # Put the y equation here
z = # Put the z equation here
## Once you have entered the proper equations in the cell above, run this cell to plot in 3D
fig = plt.axes(projection='3d')
fig.set_xlabel('x')
fig.set_ylabel('y')
fig.set_zlabel('z')
fig.scatter(x,y,z)
```
# Q1.) How would you express $x(t)$, $y(t)$, $z(t)$ for this problem as a single vector, $\vec{r}(t)$?
✅ Double click this cell, erase its content, and put your answer to the above question here.
In the cell below we will put the equations into a single vector, $\vec{r}$. Fix any bugs you find and comment the fixes you made in the line(s) below the $\vec{r}$ array. Comments are made by putting a # before text in Python.
(***Hints:*** Compare the equations used in $\vec{r}$ to the ones above. Also, don't be afraid to run the cell and see what error message comes up)
```
r = np..array((10*t np.cos(theta_rad), 10*t*np.sin(theta_rad), 10*t - 9.81/2*t**2))
## Run this code to plot using our r array
fig = plt.axes(projection='3d')
fig.set_xlabel('x')
fig.set_ylabel('y')
fig.set_zlabel('z')
fig.scatter(r[0],r[1],r[2])
```
# Q2.) What do you think the benefits and/or disadvantages are from expressing our 3 equations as a single array/vector? This can be both from a computational and physics stand point.
✅ Double click this cell, erase its content, and put your answer to the above question here.
The cell below prints the maximum $x$ component from our $\vec{r}$ vector using the numpy package. Use the numpy package to also print the maximum $y$ and $z$ components **FROM** our $\vec{r}$.
```
print("Maximum x value is: ", np.max(r[0]))
## Put the code for printing out our maximum y and z values here
```
## Complete Taylor Question 1.35 before moving further.
(Recall that the golf ball is hit due east at an angle $\theta$ with respect to the horizontal, and the coordinate directions are $x$ measured east, $y$ north, and $z$ vertically up.)
# Q3.) What is the analytical solution for our theoretical golf ball's position $\vec{r}(t)$ over time from Taylor Question 1.35? Also what is the formula for the time $t_f$ when the golf ball returns to the ground?
✅ Double click this cell, erase its content, and put your answers to the above questions here.
## Using what we learned in this notebook and the previous one, set up a function called Golfball in the cell below that utilizes our analytical solution from above.
This function should take in an initial velocity, vi, and the angle $\theta$ that the golfball was hit in degrees. It should then return a 3D graph of the motion.
Also include code in the function to print the maximum $x$, $y$, and $z$ as above.
(A skeleton with hints has already been provided for you)
```
def Golfball(vi, theta_deg):
# Put formulae to obtain initial velocity components here.
# g is already given to you here
g = 9.81 # in m/s^2
# Set up the time array
tf = # Use the formula for tf from Taylor (1.35) to determine the length of the time array
dt = 0.1 # Choose the time step size to be 0.1 seconds.
t = # Define the time array here
# Code for position vector here
r = # meters
## Put code to print maximum x, y, z values here
## Put code for plotting in 3d here
```
In the cell below, write code that asks the user to input an initial velocity (in m/s) and the angle (in degrees), then use these inputs with the Golfball function you created. Play around with the values to check that your function works properly.
#### Hint: If you get stuck, look at the previous notebook. We did something very similar to this last time.
```
```
### Call the Golfball function in the empty cell provided below and produce a graph of the solution for initial conditions $v_i=90$ m/s and $\theta=30^\circ$.
```
```
# Q4.) Given initial values of $v_i = 90 m/s$, $\theta = 30^{\circ}$, what would our maximum x, y and z components be?
✅ Double click this cell, erase its content, and put your answer to the above question here.
### Call the Golfball function in the empty cell provided below and produce a graph of the solution for initial conditions $v_i=45$ m/s and $\theta=45^\circ$.
```
```
# Q5.) Given initial values of $v_i = 45 m/s$, $\theta = 45^{\circ}$, what would our maximum x, y and z components be?
✅ Double click this cell, erase its content, and put your answer to the above question here.
#### Notebook Wrap-up.
Run the cell below and copy-paste your answers into their corresponding cells.
Rename the notebook to CM_Week2_[Insert Your Name Here].ipynb and submit it to D2L dropbox.
```
from IPython.display import HTML
HTML(
"""
<iframe
src="https://forms.gle/gLVojgAUYao9K8VR7"
width="100%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
"""
)
```
# Second week done! Nice!
Hopefully you're starting to see how code is useful in analyzing physical problems. Feel free to review the past two weeks' worth of materials, watch videos on coding/Python, or just go on a nice walk while the weather is still good.
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Distributed training with Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/keras"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/distribute/keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/distribute/keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/distribute/keras.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Overview
The `tf.distribute.Strategy` API provides an abstraction for distributing your training
across multiple processing units. The goal is to allow users to enable distributed training using existing models and training code, with minimal changes.
This tutorial uses the `tf.distribute.MirroredStrategy`, which
does in-graph replication with synchronous training on many GPUs on one machine.
Essentially, it copies all of the model's variables to each processor.
Then, it uses [all-reduce](http://mpitutorial.com/tutorials/mpi-reduce-and-allreduce/) to combine the gradients from all processors and applies the combined value to all copies of the model.
`MirroredStrategy` is one of several distribution strategies available in TensorFlow core. You can read about more strategies in the [distribution strategy guide](../../guide/distributed_training.ipynb).
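As a rough illustration of the all-reduce step (a plain NumPy sketch with made-up gradient values, not the actual `MirroredStrategy` internals): each replica computes its own gradients, the gradients are combined across replicas, and every replica applies the same combined value.
```
import numpy as np

# Hypothetical gradients for one model variable, computed
# independently on three replicas (illustrative values only).
replica_grads = [
    np.array([0.2, -0.4]),
    np.array([0.4, 0.0]),
    np.array([0.0, -0.2]),
]

# All-reduce: combine the per-replica gradients (here by summing)...
combined = np.sum(replica_grads, axis=0)

# ...and hand every replica the identical combined gradient, so all
# copies of the model's variables stay in sync after the update.
synced = [combined.copy() for _ in replica_grads]
```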
### Keras API
This example uses the `tf.keras` API to build the model and training loop. For custom training loops, see the [tf.distribute.Strategy with training loops](training_loops.ipynb) tutorial.
## Import dependencies
```
from __future__ import absolute_import, division, print_function, unicode_literals
# Import TensorFlow and TensorFlow Datasets
try:
!pip install tf-nightly
except Exception:
pass
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
import os
print(tf.__version__)
```
## Download the dataset
Download the MNIST dataset and load it from [TensorFlow Datasets](https://www.tensorflow.org/datasets). This returns a dataset in `tf.data` format.
Setting `with_info` to `True` includes the metadata for the entire dataset, which is being saved here to `info`.
Among other things, this metadata object includes the number of train and test examples.
```
datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
```
## Define distribution strategy
Create a `MirroredStrategy` object. This will handle distribution, and provides a context manager (`tf.distribute.MirroredStrategy.scope`) to build your model inside.
```
strategy = tf.distribute.MirroredStrategy()
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
```
## Setup input pipeline
When training a model with multiple GPUs, you can use the extra computing power effectively by increasing the batch size. In general, use the largest batch size that fits the GPU memory, and tune the learning rate accordingly.
```
# You can also do info.splits.total_num_examples to get the total
# number of examples in the dataset.
num_train_examples = info.splits['train'].num_examples
num_test_examples = info.splits['test'].num_examples
BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync
```
Pixel values, which are 0-255, [have to be normalized to the 0-1 range](https://en.wikipedia.org/wiki/Feature_scaling). Define this scale in a function.
```
def scale(image, label):
  image = tf.cast(image, tf.float32)
  image /= 255
  return image, label
```
Apply this function to the training and test data, shuffle the training data, and [batch it for training](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch). Notice we are also keeping an in-memory cache of the training data to improve performance.
```
train_dataset = mnist_train.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)
```
## Create the model
Create and compile the Keras model in the context of `strategy.scope`.
```
with strategy.scope():
  model = tf.keras.Sequential([
      tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
      tf.keras.layers.MaxPooling2D(),
      tf.keras.layers.Flatten(),
      tf.keras.layers.Dense(64, activation='relu'),
      tf.keras.layers.Dense(10, activation='softmax')
  ])

  model.compile(loss='sparse_categorical_crossentropy',
                optimizer=tf.keras.optimizers.Adam(),
                metrics=['accuracy'])
```
## Define the callbacks
The callbacks used here are:
* *TensorBoard*: This callback writes a log for TensorBoard which allows you to visualize the graphs.
* *Model Checkpoint*: This callback saves the model after every epoch.
* *Learning Rate Scheduler*: Using this callback, you can schedule the learning rate to change after every epoch/batch.
For illustrative purposes, add a print callback to display the *learning rate* in the notebook.
```
# Define the checkpoint directory to store the checkpoints
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
# Function for decaying the learning rate.
# You can define any decay function you need.
def decay(epoch):
  if epoch < 3:
    return 1e-3
  elif epoch >= 3 and epoch < 7:
    return 1e-4
  else:
    return 1e-5

# Callback for printing the LR at the end of each epoch.
class PrintLR(tf.keras.callbacks.Callback):
  def on_epoch_end(self, epoch, logs=None):
    print('\nLearning rate for epoch {} is {}'.format(epoch + 1,
                                                      model.optimizer.lr.numpy()))

callbacks = [
    tf.keras.callbacks.TensorBoard(log_dir='./logs'),
    tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix,
                                       save_weights_only=True),
    tf.keras.callbacks.LearningRateScheduler(decay),
    PrintLR()
]
```
## Train and evaluate
Now, train the model in the usual way, calling `fit` on the model and passing in the dataset created at the beginning of the tutorial. This step is the same whether you are distributing the training or not.
```
model.fit(train_dataset, epochs=12, callbacks=callbacks)
```
As you can see below, the checkpoints are getting saved.
```
# check the checkpoint directory
!ls {checkpoint_dir}
```
To see how the model performs, load the latest checkpoint and call `evaluate` on the test data, as before, using the appropriate dataset.
```
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
eval_loss, eval_acc = model.evaluate(eval_dataset)
print('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
```
To see the output, you can download and view the TensorBoard logs at the terminal.
```
$ tensorboard --logdir=path/to/log-directory
```
```
!ls -sh ./logs
```
## Export to SavedModel
Export the graph and the variables to the platform-agnostic SavedModel format. After your model is saved, you can load it with or without the scope.
```
path = 'saved_model/'
model.save(path, save_format='tf')
```
Load the model without `strategy.scope`.
```
unreplicated_model = tf.keras.models.load_model(path)
unreplicated_model.compile(loss='sparse_categorical_crossentropy',
                           optimizer=tf.keras.optimizers.Adam(),
                           metrics=['accuracy'])
eval_loss, eval_acc = unreplicated_model.evaluate(eval_dataset)
print('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
```
Load the model with `strategy.scope`.
```
with strategy.scope():
  replicated_model = tf.keras.models.load_model(path)
  replicated_model.compile(loss='sparse_categorical_crossentropy',
                           optimizer=tf.keras.optimizers.Adam(),
                           metrics=['accuracy'])

  eval_loss, eval_acc = replicated_model.evaluate(eval_dataset)
  print('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
```
### Examples and Tutorials
Here are some examples of using distribution strategy with Keras `fit`/`compile`:
1. [Transformer](https://github.com/tensorflow/models/blob/master/official/transformer/v2/transformer_main.py) example trained using `tf.distribute.MirroredStrategy`.
2. [NCF](https://github.com/tensorflow/models/blob/master/official/recommendation/ncf_keras_main.py) example trained using `tf.distribute.MirroredStrategy`.
More examples are listed in the [Distribution strategy guide](../../guide/distributed_training.ipynb#examples_and_tutorials).
## Next steps
* Read the [distribution strategy guide](../../guide/distributed_training.ipynb).
* Read the [Distributed Training with Custom Training Loops](training_loops.ipynb) tutorial.
Note: `tf.distribute.Strategy` is actively under development and we will be adding more examples and tutorials in the near future. Please give it a try. We welcome your feedback via [issues on GitHub](https://github.com/tensorflow/tensorflow/issues/new).
Copyright Jana Schaich Borg/Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
# MySQL Exercise 4: Summarizing your Data
Last week you practiced retrieving and formatting selected subsets of raw data from individual tables in a database. In this lesson we are going to learn how to use SQL to run calculations that summarize your data without having to output all the raw rows or entries. These calculations will serve as building blocks for the queries that will address our business questions about how to improve Dognition test completion rates.
These are the five most common aggregate functions used to summarize information stored in tables:
<img src="https://duke.box.com/shared/static/bc3yclxtwmv8dffis09hwsvskx18u1mc.jpg" width=400 alt="AGGREGATE FUNCTIONS" />
You will use COUNT and SUM very frequently.
COUNT is the only aggregate function that can work on any type of variable. The other four aggregate functions are only appropriate for numerical data.
All aggregate functions require you to enter either a column name or a "\*" in the parentheses after the function word.
Let's begin by exploring the COUNT function.
## 1. The COUNT function
**First, load the sql library and the Dognition database, and set dognition as the default database.**
```
%load_ext sql
%sql mysql://studentuser:studentpw@localhost/dognitiondb
```
The Jupyter interface conveniently tells us how many rows are in our query output, so we can compare the results of the COUNT function to the results of our SELECT function. If you run:
```mySQL
SELECT breed
FROM dogs
```
Jupyter tells us that 35050 rows are "affected", meaning there are 35050 rows in the output of the query (although, of course, we have limited the display to only 1000 rows at a time).
**Now try running:**
```mySQL
SELECT COUNT(breed)
FROM dogs
```
```
%%sql
SELECT COUNT(breed)
FROM dogs;
```
COUNT reports how many rows are in the breed column in total. It gives you the same number as Jupyter's row count, without displaying the actual rows of data that are being aggregated.
You can use DISTINCT (which you learned about in MySQL Exercise 3) with COUNT to count all the unique values in a column, but it must be placed inside the parentheses, immediately before the column that is being counted. For example, to count the number of distinct breed names contained within all the entries in the breed column you could query:
```SQL
SELECT COUNT(DISTINCT breed)
FROM dogs
```
What if you wanted to know how many individual dogs successfully completed at least one test?
Since every row in the complete_tests table represents a completed test and we learned earlier that there are no NULL values in the created_at column of the complete_tests table, any non-null Dog_Guid in the complete_tests table will have completed at least one test. When a column is included in the parentheses, null values are automatically ignored. Therefore, you could use:
```SQL
SELECT COUNT(DISTINCT Dog_Guid)
FROM complete_tests
```
**Question 1: Try combining this query with a WHERE clause to find how many individual dogs completed tests after March 1, 2014 (the answer should be 13,289):**
```
%sql DESCRIBE complete_tests
```
```
%%sql
SELECT COUNT(DISTINCT dog_guid)
FROM complete_tests
WHERE created_at > "2014-03-01"
```
You can use the "\*" in the parentheses of a COUNT function to count how many rows are in the entire table (or subtable). There are two fundamental differences between COUNT(\*) and COUNT(column_name), though.
The first difference is that you cannot use DISTINCT with COUNT(\*).
**Question 2: To observe the second difference yourself first, count the number of rows in the dogs table using COUNT(\*):**
```
%%sql
SELECT COUNT(*)
FROM dogs
```
**Question 3: Now count the number of rows in the exclude column of the dogs table:**
```
%%sql
SELECT COUNT(exclude)
FROM dogs
```
The output of the second query should return a much smaller number than the output of the first query. That's because:
><mark> When a column is included in a count function, null values are ignored in the count. When an asterisk is included in a count function, nulls are included in the count.</mark>
This will be both useful and important to remember in future queries where you might want to use COUNT(\*) to count items in multiple groups at once.
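To see this NULL-handling behavior outside of the Dognition database, here is a small, self-contained sketch using Python's built-in `sqlite3` module with a few made-up rows (an illustration only — the table and values are hypothetical, but COUNT behaves the same way in MySQL):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
# "exclude" is quoted because it is a reserved word in newer SQLite versions
cur.execute('CREATE TABLE dogs (dog_guid TEXT, "exclude" INTEGER)')
cur.executemany("INSERT INTO dogs VALUES (?, ?)",
                [('a', 1), ('b', None), ('c', None), ('a', 1)])

# COUNT(*) counts every row, NULLs included
total_rows = cur.execute("SELECT COUNT(*) FROM dogs").fetchone()[0]
# COUNT(column) skips rows where the column is NULL
non_null = cur.execute('SELECT COUNT("exclude") FROM dogs').fetchone()[0]
# COUNT(DISTINCT column) counts unique non-NULL values
distinct_guids = cur.execute("SELECT COUNT(DISTINCT dog_guid) FROM dogs").fetchone()[0]

print(total_rows, non_null, distinct_guids)  # 4 2 3
```

The gap between `total_rows` and `non_null` is exactly the number of NULLs, which is the pattern you will see again with the dogs table.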
**Question 4: How many distinct dogs have an exclude flag in the dogs table (value will be "1")? (the answer should be 853)**
```
%%sql
SELECT COUNT(DISTINCT dog_guid)
FROM dogs
WHERE exclude = 1;
```
## 2. The SUM Function
The fact that the output of:
```mySQL
SELECT COUNT(exclude)
FROM dogs
```
was so much lower than:
```mySQL
SELECT COUNT(*)
FROM dogs
```
suggests that there must be many NULL values in the exclude column. Conveniently, we can combine the SUM function with ISNULL to count exactly how many NULL values there are. Look up "ISNULL" at this link to MySQL functions I included in an earlier lesson:
http://www.w3resource.com/mysql/mysql-functions-and-operators.php
You will see that ISNULL is a logical function that returns a 1 for every row that has a NULL value in the specified column, and a 0 for everything else. If we sum up the number of 1s outputted by ISNULL(exclude), then, we should get the total number of NULL values in the column. Here's what that query would look like:
```mySQL
SELECT SUM(ISNULL(exclude))
FROM dogs
```
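If you'd like to experiment with this counting trick outside MySQL, here is a sketch using Python's built-in `sqlite3` with made-up rows. Note one assumption being swapped in: SQLite has no `ISNULL()` function, so the equivalent boolean expression `"exclude" IS NULL` (which evaluates to 1 or 0 per row) is used instead:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE dogs ("exclude" INTEGER)')
cur.executemany("INSERT INTO dogs VALUES (?)", [(1,), (None,), (None,), (0,)])

# Summing the per-row 1/0 flags counts the NULL entries,
# just like SUM(ISNULL(exclude)) does in MySQL.
null_count = cur.execute('SELECT SUM("exclude" IS NULL) FROM dogs').fetchone()[0]
print(null_count)  # 2
```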
It might be tempting to treat SQL like a calculator and leave out the SELECT statement, but you will quickly see that doesn't work.
><mark>*Every SQL query that extracts data from a database MUST contain a SELECT statement.* <mark\>
**Try counting the number of NULL values in the exclude column:**
```
%%sql
SELECT SUM(ISNULL(exclude))
FROM dogs
```
The output should return a value of 34,025. When you add that number to the 1025 entries that have an exclude flag, you get a total of 35,050, which is the number of rows reported by SELECT COUNT(\*) from dogs.
## 3. The AVG, MIN, and MAX Functions
AVG, MIN, and MAX all work very similarly to SUM.
During the Dognition test, customers were asked the question: "How surprising were [your dog’s name]’s choices?” after completing a test. Users could choose any number between 1 (not surprising) to 9 (very surprising). We could retrieve the average, minimum, and maximum rating customers gave to this question after completing the "Eye Contact Game" with the following query:
```mySQL
SELECT test_name,
AVG(rating) AS AVG_Rating,
MIN(rating) AS MIN_Rating,
MAX(rating) AS MAX_Rating
FROM reviews
WHERE test_name="Eye Contact Game";
```
This would give us an output with 4 columns. The last three columns would have titles reflecting the names inputted after the AS clauses. Recall that if you want to title a column with a string of text that contains a space, that string will need to be enclosed in quotation marks after the AS clause in your query.
**Question 5: What are the average, minimum, and maximum ratings given to the "Memory versus Pointing" game? (Your answer should be 3.5584, 0, and 9, respectively)**
```
%sql DESCRIBE reviews
```
```
%%sql
SELECT test_name,
AVG(rating) AS avg_rating,
MIN(rating) AS min_rating,
MAX(rating) AS max_rating
FROM reviews
WHERE test_name = 'Memory versus Pointing';
```
What if you wanted the average rating for each of the 40 tests in the Reviews table? One way to do that with the tools you know already is to write 40 separate queries like the ones you wrote above for each test, and then copy or transcribe the results into a separate table in another program like Excel to assemble all the results in one place. That would be a very tedious and time-consuming exercise. Fortunately, there is a very simple way to produce the results you want within one query. That's what we will learn how to do in MySQL Exercise 5. However, it is important that you feel comfortable with the syntax we have learned thus far before we start taking advantage of that functionality. Practice is the best way to become comfortable!
## Practice incorporating aggregate functions with everything else you've learned so far in your own queries.
**Question 6: How would you query how much time it took to complete each test provided in the exam_answers table, in minutes? Title the column that represents this data "Duration."** Note that the exam_answers table has over 2 million rows, so if you don't limit your output, it will take longer than usual to run this query. (HINT: use the TIMESTAMPDIFF function described at: http://www.w3resource.com/mysql/date-and-time-functions/date-and-time-functions.php. It might seem unkind of me to keep suggesting you look up and use new functions I haven't demonstrated for you, but I really want you to become confident that you know how to look up and use new functions when you need them! It will give you a very competitive edge in the business world.)
```
%sql DESCRIBE exam_answers
```
```
%%sql
SELECT TIMESTAMPDIFF(MINUTE, start_time, end_time), start_time, end_time
FROM exam_answers
LIMIT 100;
```
**Question 7: Include a column for Dog_Guid, start_time, and end_time in your query, and examine the output. Do you notice anything strange?**
```
%%sql
SELECT dog_guid, start_time, end_time, TIMESTAMPDIFF(MINUTE, start_time, end_time) AS duration
FROM exam_answers
LIMIT 1000;
```
If you explore your output you will find that some of your calculated durations appear to be "0." In some cases, you will see many entries from the same Dog_ID with the same start time and end time. That should be impossible. These types of entries probably represent tests run by the Dognition team rather than real customer data. In other cases, though, a "0" is entered in the Duration column even though the start_time and end_time are different. This is because we instructed the function to output the time difference in minutes; unless you change your settings, it will output "0" for any time differences less than the integer 1. If you change your function to output the time difference in seconds, the duration in most of these columns will have a non-zero number.
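The truncation behavior described above can be mimicked in plain Python — this is a sketch of the semantics of `TIMESTAMPDIFF(MINUTE, ...)`, not a MySQL client, and the timestamps are hypothetical:

```python
from datetime import datetime

def timestampdiff_minutes(start, end):
    # Mimics MySQL's TIMESTAMPDIFF(MINUTE, start, end): the fractional part
    # is dropped, so any difference under a full minute reports as 0.
    seconds = (end - start).total_seconds()
    return int(seconds / 60)  # int() truncates toward zero for both signs

short = timestampdiff_minutes(datetime(2014, 3, 1, 10, 0, 0),
                              datetime(2014, 3, 1, 10, 0, 45))
one_min = timestampdiff_minutes(datetime(2014, 3, 1, 10, 0, 0),
                                datetime(2014, 3, 1, 10, 1, 30))
negative = timestampdiff_minutes(datetime(2014, 3, 1, 10, 1, 30),
                                 datetime(2014, 3, 1, 10, 0, 0))
print(short, one_min, negative)  # 0 1 -1
```

A 45-second test reports as 0 minutes even though start and end differ, which is exactly why switching the unit to seconds recovers non-zero durations.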
**Question 8: What is the average amount of time it took customers to complete all of the tests in the exam_answers table, if you do not exclude any data (the answer will be approximately 587 minutes)?**
```
%%sql
SELECT AVG(TIMESTAMPDIFF(MINUTE, start_time, end_time))
FROM exam_answers;
```
**Question 9: What is the average amount of time it took customers to complete the "Treat Warm-Up" test, according to the exam_answers table (about 165 minutes, if no data is excluded)?**
```
%%sql
SELECT DISTINCT test_name
FROM exam_answers
ORDER BY test_name;
```
```
%%sql
SELECT AVG(TIMESTAMPDIFF(MINUTE, start_time, end_time)) AS duration
FROM exam_answers
WHERE test_name = 'Treat Warm-up';
```
**Question 10: How many possible test names are there in the exam_answers table?**
```
%%sql
SELECT COUNT(DISTINCT test_name)
FROM exam_answers;
```
You should have discovered that the exam_answers table has many more test names than the completed_tests table. It turns out that this table has information about experimental tests that Dognition has not yet made available to its customers.
**Question 11: What is the minimum and maximum value in the Duration column of your query that included the data from the entire table?**
```
%%sql
SELECT MIN(TIMESTAMPDIFF(MINUTE, start_time, end_time)) AS minimum, MAX(TIMESTAMPDIFF(MINUTE, start_time, end_time)) AS maximum
FROM exam_answers;
```
The minimum Duration value is *negative*! The end_times entered in rows with negative Duration values are earlier than the start_times. Unless Dognition has created a time machine, that's impossible and these entries must be mistakes.
**Question 12: How many of these negative Duration entries are there? (the answer should be 620)**
```
%%sql
SELECT COUNT(TIMESTAMPDIFF(MINUTE, start_time, end_time)) AS duration
FROM exam_answers
WHERE TIMESTAMPDIFF(MINUTE, start_time, end_time) < 0
```
**Question 13: How would you query all the columns of all the rows that have negative durations so that you could examine whether they share any features that might give you clues about what caused the entry mistake?**
```
%%sql
SELECT *
FROM exam_answers
WHERE TIMESTAMPDIFF(minute,start_time,end_time)<0;
```
**Question 14: What is the average amount of time it took customers to complete all of the tests in the exam_answers table when the negative durations are excluded from your calculation (you should get 11233 minutes)?**
```
%%sql
SELECT AVG(TIMESTAMPDIFF(minute,start_time,end_time)) AS duration
FROM exam_answers
WHERE TIMESTAMPDIFF(minute,start_time,end_time) > 0;
```
You have just seen another first-hand example of how messy real-world data can be, and how easy it can be to miss the "mess" when your data sets are too large to examine thoroughly by eye. Before continuing on to the next SQL lesson, make sure to watch the video about how you as a data analyst can practice building habits that will prevent you from being fooled by messy data.
**And, as always, feel free to practice more queries here!**
# Solving in Python with LeNet
In this example, we'll explore learning with Caffe in Python, using the fully-exposed `Solver` interface.
### 1. Setup
* Set up the Python environment: we'll use the `pylab` import for numpy and plot inline.
```
from pylab import *
%matplotlib inline
```
* Import `caffe`, adding it to `sys.path` if needed. Make sure you've built pycaffe.
```
caffe_root = '../' # this file should be run from {caffe_root}/examples (otherwise change this line)
import sys
sys.path.insert(0, caffe_root + 'python')
import caffe
```
* We'll be using the provided LeNet example data and networks (make sure you've downloaded the data and created the databases, as below).
```
# run scripts from caffe root
import os
os.chdir(caffe_root)
# Download data
!data/mnist/get_mnist.sh
# Prepare data
!examples/mnist/create_mnist.sh
# back to examples
os.chdir('examples')
```
### 2. Creating the net
Now let's make a variant of LeNet, the classic 1989 convnet architecture.
We'll need two external files to help out:
* the net `prototxt`, defining the architecture and pointing to the train/test data
* the solver `prototxt`, defining the learning parameters
We start by creating the net. We'll write the net in a succinct and natural way as Python code that serializes to Caffe's protobuf model format.
This network expects to read from pregenerated LMDBs, but reading directly from `ndarray`s is also possible using `MemoryDataLayer`.
```
from caffe import layers as L, params as P
def lenet(lmdb, batch_size):
    # our version of LeNet: a series of linear and simple nonlinear transformations
    n = caffe.NetSpec()
    n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB, source=lmdb,
                             transform_param=dict(scale=1./255), ntop=2)
    n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20, weight_filler=dict(type='xavier'))
    n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=50, weight_filler=dict(type='xavier'))
    n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.fc1 = L.InnerProduct(n.pool2, num_output=500, weight_filler=dict(type='xavier'))
    n.relu1 = L.ReLU(n.fc1, in_place=True)
    n.score = L.InnerProduct(n.relu1, num_output=10, weight_filler=dict(type='xavier'))
    n.loss = L.SoftmaxWithLoss(n.score, n.label)
    return n.to_proto()

with open('mnist/lenet_auto_train.prototxt', 'w') as f:
    f.write(str(lenet('mnist/mnist_train_lmdb', 64)))
with open('mnist/lenet_auto_test.prototxt', 'w') as f:
    f.write(str(lenet('mnist/mnist_test_lmdb', 100)))
```
The net has been written to disk in a more verbose but human-readable serialization format using Google's protobuf library. You can read, write, and modify this description directly. Let's take a look at the train net.
```
!cat mnist/lenet_auto_train.prototxt
```
Now let's see the learning parameters, which are also written as a `prototxt` file (already provided on disk). We're using SGD with momentum, weight decay, and a specific learning rate schedule.
```
!cat mnist/lenet_auto_solver.prototxt
```
### 3. Loading and checking the solver
* Let's pick a device and load the solver. We'll use SGD (with momentum), but other methods (such as Adagrad and Nesterov's accelerated gradient) are also available.
```
caffe.set_device(0)
caffe.set_mode_gpu()
### load the solver and create train and test nets
solver = None # ignore this workaround for lmdb data (can't instantiate two solvers on the same data)
solver = caffe.SGDSolver('mnist/lenet_auto_solver.prototxt')
```
* To get an idea of the architecture of our net, we can check the dimensions of the intermediate features (blobs) and parameters (these will also be useful to refer to when manipulating data later).
```
# each output is (batch size, feature dim, spatial dim)
[(k, v.data.shape) for k, v in solver.net.blobs.items()]
# just print the weight sizes (we'll omit the biases)
[(k, v[0].data.shape) for k, v in solver.net.params.items()]
```
* Before taking off, let's check that everything is loaded as we expect. We'll run a forward pass on the train and test nets and check that they contain our data.
```
solver.net.forward() # train net
solver.test_nets[0].forward() # test net (there can be more than one)
# we use a little trick to tile the first eight images
imshow(solver.net.blobs['data'].data[:8, 0].transpose(1, 0, 2).reshape(28, 8*28), cmap='gray'); axis('off')
print 'train labels:', solver.net.blobs['label'].data[:8]
imshow(solver.test_nets[0].blobs['data'].data[:8, 0].transpose(1, 0, 2).reshape(28, 8*28), cmap='gray'); axis('off')
print 'test labels:', solver.test_nets[0].blobs['label'].data[:8]
```
### 4. Stepping the solver
Both train and test nets seem to be loading data, and to have correct labels.
* Let's take one step of (minibatch) SGD and see what happens.
```
solver.step(1)
```
Do we have gradients propagating through our filters? Let's see the updates to the first layer, shown here as a $4 \times 5$ grid of $5 \times 5$ filters.
```
imshow(solver.net.params['conv1'][0].diff[:, 0].reshape(4, 5, 5, 5)
       .transpose(0, 2, 1, 3).reshape(4*5, 5*5), cmap='gray'); axis('off')
```
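The reshape/transpose tiling trick used above can be illustrated with plain NumPy. This is a standalone sketch with synthetic filter values standing in for `solver.net.params['conv1'][0].diff`:

```python
import numpy as np

# 20 synthetic 5x5 "filters" standing in for the conv1 updates (hypothetical values)
filters = np.arange(20 * 5 * 5, dtype=float).reshape(20, 5, 5)

# reshape to (grid_row, grid_col, h, w), swap the inner axes so each filter's
# pixel rows interleave with its grid row, then flatten into one (4*5, 5*5) image
grid = filters.reshape(4, 5, 5, 5).transpose(0, 2, 1, 3).reshape(4 * 5, 5 * 5)
```

Filter `k` ends up occupying the 5x5 tile at grid position `(k // 5, k % 5)`, so a single `imshow` call can display all 20 filters at once.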
### 5. Writing a custom training loop
Something is happening. Let's run the net for a while, keeping track of a few things as it goes.
Note that this process will be the same as if training through the `caffe` binary. In particular:
* logging will continue to happen as normal
* snapshots will be taken at the interval specified in the solver prototxt (here, every 5000 iterations)
* testing will happen at the interval specified (here, every 500 iterations)
Since we have control of the loop in Python, we're free to compute additional things as we go, as we show below. We can do many other things as well, for example:
* write a custom stopping criterion
* change the solving process by updating the net in the loop
```
%%time
niter = 200
test_interval = 25
# losses will also be stored in the log
train_loss = zeros(niter)
test_acc = zeros(int(np.ceil(niter / test_interval)))
output = zeros((niter, 8, 10))
# the main solver loop
for it in range(niter):
    solver.step(1)  # SGD by Caffe

    # store the train loss
    train_loss[it] = solver.net.blobs['loss'].data

    # store the output on the first test batch
    # (start the forward pass at conv1 to avoid loading new data)
    solver.test_nets[0].forward(start='conv1')
    output[it] = solver.test_nets[0].blobs['score'].data[:8]

    # run a full test every so often
    # (Caffe can also do this for us and write to a log, but we show here
    #  how to do it directly in Python, where more complicated things are easier.)
    if it % test_interval == 0:
        print 'Iteration', it, 'testing...'
        correct = 0
        for test_it in range(100):
            solver.test_nets[0].forward()
            correct += sum(solver.test_nets[0].blobs['score'].data.argmax(1)
                           == solver.test_nets[0].blobs['label'].data)
        test_acc[it // test_interval] = correct / 1e4
```
* Let's plot the train loss and test accuracy.
```
_, ax1 = subplots()
ax2 = ax1.twinx()
ax1.plot(arange(niter), train_loss)
ax2.plot(test_interval * arange(len(test_acc)), test_acc, 'r')
ax1.set_xlabel('iteration')
ax1.set_ylabel('train loss')
ax2.set_ylabel('test accuracy')
ax2.set_title('Test Accuracy: {:.2f}'.format(test_acc[-1]))
```
The loss seems to have dropped quickly and converged (except for stochasticity), while the accuracy rose correspondingly. Hooray!
* Since we saved the results on the first test batch, we can watch how our prediction scores evolved. We'll plot time on the $x$ axis and each possible label on the $y$, with lightness indicating confidence.
```
for i in range(8):
    figure(figsize=(2, 2))
    imshow(solver.test_nets[0].blobs['data'].data[i, 0], cmap='gray')
    figure(figsize=(10, 2))
    imshow(output[:50, i].T, interpolation='nearest', cmap='gray')
    xlabel('iteration')
    ylabel('label')
```
We started with little idea about any of these digits, and ended up with correct classifications for each. If you've been following along, you'll see the last digit is the most difficult, a slanted "9" that's (understandably) most confused with "4".
* Note that these are the "raw" output scores rather than the softmax-computed probability vectors. The latter, shown below, make it easier to see the confidence of our net (but harder to see the scores for less likely digits).
```
for i in range(8):
    figure(figsize=(2, 2))
    imshow(solver.test_nets[0].blobs['data'].data[i, 0], cmap='gray')
    figure(figsize=(10, 2))
    imshow(exp(output[:50, i].T) / exp(output[:50, i].T).sum(0), interpolation='nearest', cmap='gray')
    xlabel('iteration')
    ylabel('label')
```
### 6. Experiment with architecture and optimization
Now that we've defined, trained, and tested LeNet, there are many possible next steps:
- Define new architectures for comparison
- Tune optimization by setting `base_lr` and the like, or simply training longer
- Switch the solver type from `SGD` to an adaptive method like `AdaDelta` or `Adam`
Feel free to explore these directions by editing the all-in-one example that follows.
Look for "`EDIT HERE`" comments for suggested choice points.
By default this defines a simple linear classifier as a baseline.
In case your coffee hasn't kicked in and you'd like inspiration, try out
1. Switch the nonlinearity from `ReLU` to `ELU` or a saturating nonlinearity like `Sigmoid`
2. Stack more fully connected and nonlinear layers
3. Search over learning rate 10x at a time (trying `0.1` and `0.001`)
4. Switch the solver type to `Adam` (this adaptive solver type should be less sensitive to hyperparameters, but no guarantees...)
5. Solve for longer by setting `niter` higher (to 500 or 1,000 for instance) to better show training differences
```
train_net_path = 'mnist/custom_auto_train.prototxt'
test_net_path = 'mnist/custom_auto_test.prototxt'
solver_config_path = 'mnist/custom_auto_solver.prototxt'
### define net
def custom_net(lmdb, batch_size):
    # define your own net!
    n = caffe.NetSpec()

    # keep this data layer for all networks
    n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB, source=lmdb,
                             transform_param=dict(scale=1./255), ntop=2)

    # EDIT HERE to try different networks
    # this single layer defines a simple linear classifier
    # (in particular this defines a multiway logistic regression)
    n.score = L.InnerProduct(n.data, num_output=10, weight_filler=dict(type='xavier'))

    # EDIT HERE this is the LeNet variant we have already tried
    # n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20, weight_filler=dict(type='xavier'))
    # n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    # n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=50, weight_filler=dict(type='xavier'))
    # n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    # n.fc1 = L.InnerProduct(n.pool2, num_output=500, weight_filler=dict(type='xavier'))
    # EDIT HERE consider L.ELU or L.Sigmoid for the nonlinearity
    # n.relu1 = L.ReLU(n.fc1, in_place=True)
    # n.score = L.InnerProduct(n.fc1, num_output=10, weight_filler=dict(type='xavier'))

    # keep this loss layer for all networks
    n.loss = L.SoftmaxWithLoss(n.score, n.label)
    return n.to_proto()

with open(train_net_path, 'w') as f:
    f.write(str(custom_net('mnist/mnist_train_lmdb', 64)))
with open(test_net_path, 'w') as f:
    f.write(str(custom_net('mnist/mnist_test_lmdb', 100)))
### define solver
from caffe.proto import caffe_pb2
s = caffe_pb2.SolverParameter()
# Set a seed for reproducible experiments:
# this controls for randomization in training.
s.random_seed = 0xCAFFE
# Specify locations of the train and (maybe) test networks.
s.train_net = train_net_path
s.test_net.append(test_net_path)
s.test_interval = 500 # Test after every 500 training iterations.
s.test_iter.append(100) # Test on 100 batches each time we test.
s.max_iter = 10000 # no. of times to update the net (training iterations)
# EDIT HERE to try different solvers
# solver types include "SGD", "Adam", and "Nesterov" among others.
s.type = "SGD"
# Set the initial learning rate for SGD.
s.base_lr = 0.01 # EDIT HERE to try different learning rates
# Set momentum to accelerate learning by
# taking weighted average of current and previous updates.
s.momentum = 0.9
# Set weight decay to regularize and prevent overfitting
s.weight_decay = 5e-4
# Set `lr_policy` to define how the learning rate changes during training.
# This is the same policy as our default LeNet.
s.lr_policy = 'inv'
s.gamma = 0.0001
s.power = 0.75
# EDIT HERE to try the fixed rate (and compare with adaptive solvers)
# `fixed` is the simplest policy that keeps the learning rate constant.
# s.lr_policy = 'fixed'
# Display the current training loss and accuracy every 1000 iterations.
s.display = 1000
# Snapshots are files used to store networks we've trained.
# We'll snapshot every 5K iterations -- twice during training.
s.snapshot = 5000
s.snapshot_prefix = 'mnist/custom_net'
# Train on the GPU
s.solver_mode = caffe_pb2.SolverParameter.GPU
# Write the solver to a temporary file and return its filename.
with open(solver_config_path, 'w') as f:
    f.write(str(s))
### load the solver and create train and test nets
solver = None # ignore this workaround for lmdb data (can't instantiate two solvers on the same data)
solver = caffe.get_solver(solver_config_path)
### solve
niter = 250 # EDIT HERE increase to train for longer
test_interval = niter / 10
# losses will also be stored in the log
train_loss = zeros(niter)
test_acc = zeros(int(np.ceil(niter / test_interval)))
# the main solver loop
for it in range(niter):
    solver.step(1)  # SGD by Caffe

    # store the train loss
    train_loss[it] = solver.net.blobs['loss'].data

    # run a full test every so often
    # (Caffe can also do this for us and write to a log, but we show here
    #  how to do it directly in Python, where more complicated things are easier.)
    if it % test_interval == 0:
        print 'Iteration', it, 'testing...'
        correct = 0
        for test_it in range(100):
            solver.test_nets[0].forward()
            correct += sum(solver.test_nets[0].blobs['score'].data.argmax(1)
                           == solver.test_nets[0].blobs['label'].data)
        test_acc[it // test_interval] = correct / 1e4
_, ax1 = subplots()
ax2 = ax1.twinx()
ax1.plot(arange(niter), train_loss)
ax2.plot(test_interval * arange(len(test_acc)), test_acc, 'r')
ax1.set_xlabel('iteration')
ax1.set_ylabel('train loss')
ax2.set_ylabel('test accuracy')
ax2.set_title('Custom Test Accuracy: {:.2f}'.format(test_acc[-1]))
```
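For reference, the `inv` policy configured above decays the learning rate as `lr(t) = base_lr * (1 + gamma*t) ** (-power)`. A minimal sketch of the schedule with the values set above (`inv_lr` is our own helper here, not a Caffe API):

```python
# Sketch of Caffe's 'inv' learning-rate policy with the solver values above.
def inv_lr(t, base_lr=0.01, gamma=0.0001, power=0.75):
    return base_lr * (1.0 + gamma * t) ** (-power)

# The rate starts at base_lr and decays smoothly with the iteration count.
rates = [inv_lr(t) for t in (0, 1000, 5000, 10000)]
```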
# Distributions of weights in ResNet34 and ResNet50
In this notebook we will compare the distributions of weights from two almost identical architectures. You can read more about the different architectures in [this](./../../tutorials/06_detailed_model_usage.ipynb) notebook.
```
import sys
sys.path.append('../../utils')
import pickle
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from utils import plot_weights
```
First of all, load the weights that we saved in the tutorials.
```
bottle_weights_path = 'path/with/saved_bottle_weights.pkl'
res_weights_path = 'path/with/saved_res_weights.pkl'
with open(bottle_weights_path, 'rb') as f:
bottle_names, bottle_weights, bottle_params = pickle.load(f)
with open(res_weights_path, 'rb') as f:
res_names, res_weights, res_params = pickle.load(f)
```
Below we draw the weight distributions of the 0th, 4th, 7th and 14th blocks of the ResNet50 model. The drawing function can be found in [utils](./../../utils/utils.py).
```
plot_weights(bottle_names, bottle_weights, bottle_params, ['r', 'c', 'b', 'g'], [4, 4], [0, 4, 7, 14])
```
It is easy to notice that the distribution of the 1x1 convolutions has a larger variance than that of the 3x3 convolutions. Therefore, they exert a stronger influence on the output.
__Black lines show the initial distribution of weights__
______
Now let's draw the distributions of the 0th, 3rd, 7th and 14th blocks of the ResNet34 model.
```
plot_weights(res_names, res_weights, res_params, ['g', 'y', 'r'], [4, 3], [0, 3, 7, 14], bottleneck=False)
```
It is easy to see that the distributions of the first and second 3x3 convolutions are the same.
____
Now let's compare the distribution of the second layer of the ResNet34 architecture with that of the 3x3 layer of ResNet50 in the 3rd, 6th, 9th and 13th blocks. Will they be the same?
```
indices = [i for i in range(len(bottle_names)) if 'conv' in bottle_names[i][:8]]
_, ax = plt.subplots(2, 2, sharex='all', figsize=(23, 24))
ax = ax.reshape(-1)
num_plot = 0
num_blocks = [3, 6, 9, 13]
res_layers = np.where(res_names == 'layer-4')[0][num_blocks]
bottle_layers = np.where(bottle_names == 'layer-4')[0][num_blocks]
for i,j in zip(res_layers, bottle_layers):
ax[num_plot].set_title('convolution layer with kernel 3x3 №{}'.format(num_blocks[num_plot]), fontsize=18)
sns.distplot(res_weights[i].reshape(-1), ax=ax[num_plot], color='y', label='simple')
sns.distplot(bottle_weights[j].reshape(-1), ax=ax[num_plot], color='c', label='bottleneck')
ax[num_plot].legend()
ax[num_plot].set_xlabel('value', fontsize=20)
ax[num_plot].set_ylabel('quantity', fontsize=20)
num_plot += 1
if num_plot == ax.shape[0]:
break
```
The graphs show that these distributions are the same. Therefore, the first 3x3 convolution layer of ResNet34 takes over the role of the two 1x1 convolutions of ResNet50.
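To go beyond a visual comparison, the plotted distributions can also be summarized numerically, for example with the per-layer mean and standard deviation. A small sketch on toy weight tensors (the arrays below are random stand-ins, not real ResNet weights):

```python
import numpy as np

def weight_stats(w):
    # Flatten a weight tensor and return its summary statistics.
    w = np.asarray(w).ravel()
    return {"mean": float(w.mean()), "std": float(w.std())}

rng = np.random.RandomState(0)
toy_1x1 = rng.normal(0.0, 0.12, size=(64, 64, 1, 1))  # toy stand-in
toy_3x3 = rng.normal(0.0, 0.04, size=(64, 64, 3, 3))  # toy stand-in
```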
### It's time to conclude:
* 1x1 convolutions exert a stronger influence on the output than 3x3 convolutions.
* The distributions of all layers with 3x3 convolutions are the same.
Read and try the other experiments:
* previous [experiment](./../augmentation/augmentation.ipynb)
* return to the [table of contents](./../experiments_description.ipynb).
If you have not completed our tutorials yet, you can do so right [now](./../../tutorial/00_description.ipynb)!
```
cd D:\ThisSemester\CompNeuro\Homeworks\Hw4\HW4_Can_Kocagil\Assignment
pwd
ls
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.decomposition import PCA
import h5py
```
### Part A
```
f = h5py.File('hw4_data1.mat','r')
faces = np.array(f['faces'][:]).T
print(faces.shape)
N, num_pixel = faces.shape
image_faces = faces.reshape(N, np.int(np.sqrt(num_pixel)), np.int(np.sqrt(num_pixel)))
print(image_faces.shape)
fig, axs = plt.subplots(3,3,figsize = (8,8))
num_examples = 9
for i in range(3):
for j in range(3):
axs[i,j].imshow(image_faces[3*i + j], cmap = 'gray')
axs[i,j].axis('off')
# For plotting
import plotly.io as plt_io
import plotly.graph_objects as go
%matplotlib inline
def plot_2d(component1, component2):
fig = go.Figure(data=go.Scatter(
x = component1,
y = component2,
mode='markers',
marker=dict(
size=20,
color=faces, #set color equal to a variable
colorscale='Rainbow', # one of plotly colorscales
showscale=True,
line_width=1
)
))
fig.update_layout(margin=dict( l=100,r=100,b=100,t=100),width=2000,height=1200)
fig.layout.template = 'plotly_dark'
fig.show()
def plot_3d(component1,component2,component3):
fig = go.Figure(data=[go.Scatter3d(
x=component1,
y=component2,
z=component3,
mode='markers',
marker=dict(
size=10,
#color=y, # set color to an array/list of desired values
colorscale='Rainbow', # choose a colorscale
opacity=1,
line_width=1
)
)])
# tight layout
fig.update_layout(margin=dict(l=50,r=50,b=50,t=50),width=1800,height=1000)
fig.layout.template = 'plotly_dark'
fig.show()
pca = PCA(n_components = 3)
principalComponents = pca.fit_transform(faces)
principal = pd.DataFrame(data = principalComponents
, columns = ['principal component 1', 'principal component 2','principal component 3'])
plot_2d(principalComponents[:, 0], principalComponents[:, 1])
latent_dim = 100
pca = PCA(n_components = latent_dim)
principalComponents = pca.fit_transform(faces)
plt.figure()
plt.plot(pca.explained_variance_ratio_)
plt.xlabel('Principal Component Index')
plt.ylabel('Proportion with Respect to Total Variance')
plt.title('Contribution of each Principal Component \n to Total Variance')
plt.grid()
plt.show()
fig, axs = plt.subplots(5,5,figsize = (12,12))
for i in range(5):
for j in range(5):
axs[i,j].imshow(pca.components_[5*i + j].reshape(32, 32).T, cmap = 'gray')
axs[i,j].axis('off')
fig, axes = plt.subplots(5, 5, figsize=(12,12),
subplot_kw={'xticks':[], 'yticks':[]},gridspec_kw=dict(hspace=0.01, wspace=0.01))
for i, ax in enumerate(axes.flat):
ax.imshow(pca.components_[i].reshape(32, 32).T, cmap = 'gray')
```
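The scree plot above shows each principal component's share of the total variance. When all components are kept these shares sum to one and arrive sorted in decreasing order; a quick sanity check on toy data (`X` here is a random stand-in for the faces matrix):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.rand(50, 5)  # toy stand-in for the faces matrix
# With no n_components argument, all components are kept.
ratios = PCA().fit(X).explained_variance_ratio_
```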
### Part B
```
def pca_reconstruction(data,trained_pca,up_to):
pca_mean = trained_pca.mean_
mean_removed = data - pca_mean
pca_components = trained_pca.components_[:up_to]
reconstructed = mean_removed @ pca_components.T @ pca_components + pca_mean
return reconstructed
faces_PCA_10 = pca_reconstruction(faces,pca,10)
faces_PCA_25 = pca_reconstruction(faces,pca,25)
faces_PCA_50 = pca_reconstruction(faces,pca,50)
fig, axes = plt.subplots(6, 6, figsize=(10,10), facecolor='white',subplot_kw={'xticks':[], 'yticks':[]})
fig.suptitle('Original Versions of the First 36 Images', fontsize='16')
fig.tight_layout(rect=[0, 0, 1, .95])
for i, ax in enumerate(axes.flat):
ax.imshow(faces[i].reshape(32, 32).T, cmap=plt.cm.gray)
ax.set_xlabel(i+1)
```
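The reconstruction above projects mean-centred data onto the first `up_to` components and maps it back. With all components retained the round trip is lossless, which gives a useful sanity check (toy data stands in for `faces`):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.rand(30, 8)  # toy stand-in for the faces matrix
pca_full = PCA().fit(X)

def reconstruct(data, trained_pca, up_to):
    # Same identity as pca_reconstruction above: project onto the first
    # `up_to` components, map back, and re-add the mean.
    centered = data - trained_pca.mean_
    comps = trained_pca.components_[:up_to]
    return centered @ comps.T @ comps + trained_pca.mean_

full = reconstruct(X, pca_full, X.shape[1])
```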
### Network Science Theory
***
***
# Network Science: Analyzing Complex Networks with NetworkX
***
***
王成军
wangchengjun@nju.edu.cn
计算传播网 http://computational-communication.com
http://networkx.readthedocs.org/en/networkx-1.11/tutorial/
```
%matplotlib inline
import networkx as nx
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import networkx as nx
G=nx.Graph() # G = nx.DiGraph() # directed network
# add an (isolated) node
G.add_node("spam")
# add nodes and an edge
G.add_edge(1,2)
print(G.nodes())
print(G.edges())
# draw the network
nx.draw(G, with_labels = True)
```
# WWW Data download
<del>http://www3.nd.edu/~networks/resources.htm</del>
https://pan.baidu.com/s/1o86ZaTc
World-Wide-Web: [README] [DATA]
Réka Albert, Hawoong Jeong and Albert-László Barabási:
Diameter of the World-Wide Web. Nature 401, 130 (1999) [PDF]
# Homework:
- Download the WWW data
- Build a networkx graph object g (hint: directed network)
- Add the WWW data to g
- Count the numbers of nodes and edges in the network
```
G = nx.Graph()
n = 0
with open ('/Users/chengjun/bigdata/www.dat.gz.txt') as f:
for line in f:
n += 1
#if n % 10**4 == 0:
#flushPrint(n)
x, y = line.rstrip().split(' ')
G.add_edge(x,y)
nx.info(G)
```
# Describing a Network
### nx.karate_club_graph
We start with karate_club_graph to explore the basic properties of a network.
```
G = nx.karate_club_graph()
clubs = [G.node[i]['club'] for i in G.nodes()]
colors = []
for j in clubs:
if j == 'Mr. Hi':
colors.append('r')
else:
colors.append('g')
nx.draw(G, with_labels = True, node_color = colors)
G.node[1], G.node[9] # attributes of node 1 and node 9
G.edge.keys()[:3] # ids of the first three edges
nx.info(G)
G.nodes()[:10]
G.edges()[:3]
G.neighbors(1)
nx.average_shortest_path_length(G)
```
### Network Diameter
```
nx.diameter(G) # return the diameter of graph G (the length of the longest shortest path)
```
### Density
```
nx.density(G)
nodeNum = len(G.nodes())
edgeNum = len(G.edges())
2.0*edgeNum/(nodeNum * (nodeNum - 1))
```
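The manual formula above, 2E/(N(N-1)), is exactly what `nx.density` computes for an undirected graph; a quick check on the karate club graph:

```python
import networkx as nx

G = nx.karate_club_graph()
n, m = G.number_of_nodes(), G.number_of_edges()
# Density of an undirected graph: actual edges over possible edges.
manual_density = 2.0 * m / (n * (n - 1))
```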
# Homework:
- Compute the network density of the WWW network
### Clustering Coefficient
```
cc = nx.clustering(G)
cc.items()[:5]
plt.hist(cc.values(), bins = 15)
plt.xlabel('$Clustering \, Coefficient, \, C$', fontsize = 20)
plt.ylabel('$Frequency, \, F$', fontsize = 20)
plt.show()
```
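The clustering coefficient of a node is the fraction of its neighbour pairs that are themselves connected; two extreme cases make this concrete:

```python
import networkx as nx

triangle = nx.complete_graph(3)  # every pair of neighbours is connected
path = nx.path_graph(3)          # the middle node's neighbours are not linked
cc_triangle = nx.clustering(triangle)
cc_path = nx.clustering(path)
```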
#### Spacing in Math Mode
In a math environment, LaTeX ignores the spaces you type and puts in the spacing that it thinks is best. LaTeX formats mathematics the way it's done in mathematics texts. If you want different spacing, LaTeX provides the following four commands for use in math mode:
\; - a thick space
\: - a medium space
\, - a thin space
\\! - a negative thin space
### Assortativity Coefficient
```
# M. E. J. Newman, Mixing patterns in networks Physical Review E, 67 026126, 2003
nx.degree_assortativity_coefficient(G) # compute the degree assortativity of the graph
Ge=nx.Graph()
Ge.add_nodes_from([0,1],size=2)
Ge.add_nodes_from([2,3],size=3)
Ge.add_edges_from([(0,1),(2,3)])
node_size = [Ge.node[i].values()[0]*1000 for i in Ge.nodes()]
nx.draw(Ge, with_labels = True, node_size = node_size)
print(nx.numeric_assortativity_coefficient(Ge,'size'))
# plot degree correlation
from collections import defaultdict
import numpy as np
l=defaultdict(list)
g = nx.karate_club_graph()
for i in g.nodes():
k = []
for j in g.neighbors(i):
k.append(g.degree(j))
l[g.degree(i)].append(np.mean(k))
#l.append([g.degree(i),np.mean(k)])
x = l.keys()
y = [np.mean(i) for i in l.values()]
#x, y = np.array(l).T
plt.plot(x, y, 'r-o', label = '$Karate\;Club$')
plt.legend(loc=1,fontsize=10, numpoints=1)
plt.xscale('log'); plt.yscale('log')
plt.ylabel(r'$<k_{nn}(k)>$', fontsize = 20)
plt.xlabel('$k$', fontsize = 20)
plt.show()
```
# Centrality Measures
* degree_centrality(G) # Compute the degree centrality for nodes.
* in_degree_centrality(G) # Compute the in-degree centrality for nodes.
* out_degree_centrality(G) # Compute the out-degree centrality for nodes.
* closeness_centrality(G[, v, weighted_edges]) # Compute closeness centrality for nodes.
* betweenness_centrality(G[, normalized, ...]) # Compute betweenness centrality for nodes.
```
dc = nx.degree_centrality(G)
closeness = nx.closeness_centrality(G)
betweenness= nx.betweenness_centrality(G)
fig = plt.figure(figsize=(15, 4),facecolor='white')
ax = plt.subplot(1, 3, 1)
plt.hist(dc.values(), bins = 20)
plt.xlabel('$Degree \, Centrality$', fontsize = 20)
plt.ylabel('$Frequency, \, F$', fontsize = 20)
ax = plt.subplot(1, 3, 2)
plt.hist(closeness.values(), bins = 20)
plt.xlabel('$Closeness \, Centrality$', fontsize = 20)
ax = plt.subplot(1, 3, 3)
plt.hist(betweenness.values(), bins = 20)
plt.xlabel('$Betweenness \, Centrality$', fontsize = 20)
plt.tight_layout()
plt.show()
fig = plt.figure(figsize=(15, 8),facecolor='white')
for k in betweenness:
plt.scatter(dc[k], closeness[k], s = betweenness[k]*1000)
plt.text(dc[k], closeness[k]+0.02, str(k))
plt.xlabel('$Degree \, Centrality$', fontsize = 20)
plt.ylabel('$Closeness \, Centrality$', fontsize = 20)
plt.show()
```
# Degree Distribution
```
from collections import defaultdict
import numpy as np
def plotDegreeDistribution(G):
degs = defaultdict(int)
for i in G.degree().values(): degs[i]+=1
items = sorted ( degs.items () )
x, y = np.array(items).T
y_sum = np.sum(y)
y = [float(i)/y_sum for i in y]
plt.plot(x, y, 'b-o')
plt.xscale('log')
plt.yscale('log')
plt.legend(['Degree'])
plt.xlabel('$K$', fontsize = 20)
plt.ylabel('$P(K)$', fontsize = 20)
plt.title('$Degree\,Distribution$', fontsize = 20)
plt.show()
G = nx.karate_club_graph()
plotDegreeDistribution(G)
```
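`plotDegreeDistribution` turns the raw degree counts into probabilities P(K); since the counts cover every node, the probabilities sum to one. A minimal check (written with `dict(G.degree())` so it works across networkx versions):

```python
from collections import defaultdict
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
degs = defaultdict(int)
for d in dict(G.degree()).values():
    degs[d] += 1
# Counts of each degree value, sorted by degree, normalized to probabilities.
counts = np.array(sorted(degs.items()), dtype=float)[:, 1]
probs = counts / counts.sum()
```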
### A Brief Introduction to Network Science Theory
***
***
# Network Science: Analyzing Network Structure
***
***
王成军
wangchengjun@nju.edu.cn
计算传播网 http://computational-communication.com
# Regular Networks
```
import networkx as nx
import matplotlib.pyplot as plt
RG = nx.random_graphs.random_regular_graph(3,200)
# generate a regular graph RG with 200 nodes, each node having 3 neighbors
pos = nx.spectral_layout(RG)
# define a layout; here the spectral layout is used (other layouts appear later; note the differences in the drawings)
nx.draw(RG,pos,with_labels=False,node_size = range(1, 201))
# draw the regular graph; with_labels controls whether nodes are labeled, node_size sets the node diameter
plt.show() # show the figure
plotDegreeDistribution(RG)
```
# ER Random Networks
```
import networkx as nx
import matplotlib.pyplot as plt
ER = nx.random_graphs.erdos_renyi_graph(200,0.05)
# generate a random graph with 200 nodes, each pair connected with probability 0.05
pos = nx.shell_layout(ER)
# define a layout; here the shell layout is used
nx.draw(ER,pos,with_labels=False,node_size = 30)
plt.show()
plotDegreeDistribution(ER)
```
# Small-World Networks
```
import networkx as nx
import matplotlib.pyplot as plt
WS = nx.random_graphs.watts_strogatz_graph(200,4,0.3)
# generate a small-world network with 200 nodes, each with 4 nearest neighbors, and rewiring probability 0.3
pos = nx.circular_layout(WS)
# define a layout; here the circular layout is used
nx.draw(WS,pos,with_labels=False,node_size = 30)
# draw the figure
plt.show()
plotDegreeDistribution(WS)
nx.diameter(WS)
cc = nx.clustering(WS)
plt.hist(cc.values(), bins = 10)
plt.xlabel('$Clustering \, Coefficient, \, C$', fontsize = 20)
plt.ylabel('$Frequency, \, F$', fontsize = 20)
plt.show()
import numpy as np
np.mean(cc.values())
```
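With rewiring probability p = 0, `watts_strogatz_graph` is a pure ring lattice whose clustering coefficient is 3(k-2)/(4(k-1)), i.e. 0.5 for k = 4; rewiring then gradually destroys this local structure. A small check of the p = 0 case:

```python
import networkx as nx

# p = 0: a ring lattice where each node links to its 4 nearest neighbours.
ring = nx.watts_strogatz_graph(200, 4, 0)
c_ring = nx.average_clustering(ring)
```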
# BA Networks
```
import networkx as nx
import matplotlib.pyplot as plt
BA= nx.random_graphs.barabasi_albert_graph(200,2)
# generate a BA scale-free network with n=200, m=2
pos = nx.spring_layout(BA)
# define a layout; here the spring layout is used
nx.draw(BA,pos,with_labels=False,node_size = 30)
# draw the figure
plt.show()
plotDegreeDistribution(BA)
BA= nx.random_graphs.barabasi_albert_graph(20000,2)
# generate a BA scale-free network with n=20000, m=2
plotDegreeDistribution(BA)
import networkx as nx
import matplotlib.pyplot as plt
BA= nx.random_graphs.barabasi_albert_graph(500,1)
# generate a BA scale-free network with n=500, m=1
pos = nx.spring_layout(BA)
# define a layout; here the spring layout is used
nx.draw(BA,pos,with_labels=False,node_size = 30)
# draw the figure
plt.show()
nx.degree_histogram(BA)[:3]
BA.degree().items()[:3]
plt.hist(BA.degree().values())
plt.show()
from collections import defaultdict
import numpy as np
def plotDegreeDistributionLongTail(G):
degs = defaultdict(int)
for i in G.degree().values(): degs[i]+=1
items = sorted ( degs.items () )
x, y = np.array(items).T
y_sum = np.sum(y)
y = [float(i)/y_sum for i in y]
plt.plot(x, y, 'b-o')
plt.legend(['Degree'])
plt.xlabel('$K$', fontsize = 20)
plt.ylabel('$P_K$', fontsize = 20)
plt.title('$Degree\,Distribution$', fontsize = 20)
plt.show()
BA= nx.random_graphs.barabasi_albert_graph(5000,2)
# generate a BA scale-free network with n=5000, m=2
plotDegreeDistributionLongTail(BA)
def plotDegreeDistribution(G):
degs = defaultdict(int)
for i in G.degree().values(): degs[i]+=1
items = sorted ( degs.items () )
x, y = np.array(items).T
y_sum = np.sum(y)
y = [float(i)/y_sum for i in y]
plt.plot(x, y, 'b-o')
plt.xscale('log')
plt.yscale('log')
plt.legend(['Degree'])
plt.xlabel('$K$', fontsize = 20)
plt.ylabel('$P_K$', fontsize = 20)
plt.title('$Degree\,Distribution$', fontsize = 20)
plt.show()
BA= nx.random_graphs.barabasi_albert_graph(50000,2)
# generate a BA scale-free network with n=50000, m=2
plotDegreeDistribution(BA)
```
# Homework:
- Read Barabasi (1999) Internet: Diameter of the World-Wide Web. Nature 401
- Plot the out-degree and in-degree distributions of the WWW network
- Use the BA model to generate networks with N nodes and power-law exponent $\gamma$
- Compute the relation between the average path length d and the number of nodes
<img src = './img/diameter.png' width = 10000>
```
Ns = [i*10 for i in [1, 10, 100, 1000]]
ds = []
for N in Ns:
print N
BA= nx.random_graphs.barabasi_albert_graph(N,2)
d = nx.average_shortest_path_length(BA)
ds.append(d)
plt.plot(Ns, ds, 'r-o')
plt.xlabel('$N$', fontsize = 20)
plt.ylabel('$<d>$', fontsize = 20)
plt.xscale('log')
plt.show()
```
# References
* https://networkx.readthedocs.org/en/stable/tutorial/tutorial.html
* http://computational-communication.com/wiki/index.php?title=Networkx
# Planar Flow
```
%matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import numpy.linalg as LA
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
def mvn_pdf(X, mu=np.array([[0, 0]]), sig=np.eye(2)):
# normalization constant of a 2-D Gaussian: 2*pi*sqrt(det(sig))
norm_2pi_sig = 2 * np.pi * np.sqrt(LA.det(sig))
sig_inv = LA.inv(sig)
X = X[:, None, :] - mu[None, :, :]
return np.exp(-np.matmul(np.matmul(X, np.expand_dims(sig_inv, 0)), (X.transpose(0, 2, 1)))/2)/norm_2pi_sig
```
Let $\mathbf{z}\sim q_0(\mathbf{z})$ where $q_0(\mathbf{z}) = \mathcal{N}(\mathbf{z};\mathbf{0},\mathbf{I})$.
Let $f(\mathbf{z})$ be an invertible transformation given by
$$ \mathbf{y} = f(\mathbf{z}) = \mathbf{z} + \mathbf{u}h(\mathbf{w}^\top\mathbf{z}+b)$$
The pdf of $\mathbf{y}$ is given by
$$q_1(\mathbf{y}) = q_0(\mathbf{z})\left|\det\frac{\partial f}{\partial \mathbf{z}}\right|^{-1}$$
where $\left|\det\frac{\partial f}{\partial \mathbf{z}}\right|$ can be computed as follows
$$ \psi(\mathbf{z}) = h'(\mathbf{w}^\top\mathbf{z}+b)\mathbf{w} $$
$$\left|\det\frac{\partial f}{\partial \mathbf{z}}\right| = |1 + \mathbf{u}^\top\psi(\mathbf{z})|$$
Here, we set $h(x)=\tanh(x)$ which gives us $h'(x)=(1-\tanh^2(x))$
```
w = np.array([5., 0])
u = np.array([1., 0])
b = 0
def h(x):
return np.tanh(x)
def h_prime(x):
return 1 - np.tanh(x) ** 2
def f(z):
y = z + np.dot(h(np.dot(z, w) + b).reshape(-1,1), u.reshape(1,-1))
return y
def det_J(z):
psi = h_prime(np.dot(z, w) + b).reshape(-1,1) * w
det = np.abs(1 + np.dot(psi, u.reshape(-1,1)))
return det
```
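The analytic determinant can be cross-checked numerically: build the Jacobian of $f$ by central finite differences at a single point and compare its determinant with the formula above. The sketch below redefines $f$ for a single 2-D point so it is self-contained (same values of `w`, `u`, `b` as above):

```python
import numpy as np

w = np.array([5., 0.])
u = np.array([1., 0.])
b = 0.0

def f_point(z):
    # Planar flow applied to one 2-D point.
    return z + u * np.tanh(np.dot(w, z) + b)

def det_J_point(z):
    # Analytic |det df/dz| = |1 + u^T psi(z)| with psi(z) = h'(w.z + b) w.
    psi = (1 - np.tanh(np.dot(w, z) + b) ** 2) * w
    return abs(1 + np.dot(u, psi))

z0 = np.array([0.3, -0.7])
eps = 1e-6
J = np.zeros((2, 2))
for j in range(2):
    dz = np.zeros(2)
    dz[j] = eps
    J[:, j] = (f_point(z0 + dz) - f_point(z0 - dz)) / (2 * eps)
numeric_det = abs(np.linalg.det(J))
```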
Let's set some values of $\mathbf{z}$ and see how $f$ moves $\mathbf{z}$ around in the 2D space.
```
r = np.linspace(-3, 3, 20)
z = np.array(np.meshgrid(r, r)).transpose(1, 2, 0)
z = np.reshape(z, [z.shape[0] * z.shape[1], -1])
fig, axn = plt.subplots(ncols=2, nrows=1, figsize=[12, 5])
colors = np.random.rand(z.shape[0])
axn[0].scatter(z[:, 0], z[:, 1], c=colors, cmap='rainbow', s=8)
axn[0].set_title(r'$\mathbf{z}$')
y = f(z)
axn[1].scatter(y[:, 0], y[:, 1], c=colors, cmap='rainbow', s=8)
axn[1].set_title(r'$\mathbf{y} = f(\mathbf{z}$)')
plt.show()
```
## Analytic Density
```
r = np.linspace(-3, 3, 1000)
z = np.array(np.meshgrid(r, r)).transpose(1, 2, 0)
z = np.reshape(z, [z.shape[0] * z.shape[1], -1])
```
Let's plot the probability density $q_0(\mathbf{z})$.
```
q0 = mvn_pdf(z)
plt.hexbin(z[:,0], z[:,1], C=q0.squeeze(), cmap='rainbow')
plt.gca().set_aspect('equal', adjustable='box')
plt.xlim([-3, 3])
plt.ylim([-3, 3])
plt.show()
```
Now, let's compute $\mathbf{y} = f(\mathbf{z})$ and the density $q_1(\mathbf{y})$. Note that we are not inverting $f$. Instead, we first set the $\mathbf{z}$ values and then plot the density $q_1(\mathbf{y})$ at the corresponding $\mathbf{y}$s.
```
q1 = q0.squeeze()/det_J(z).squeeze()
y = f(z)
plt.hexbin(y[:,0], y[:,1], C=q1.squeeze(), cmap='rainbow')
plt.gca().set_aspect('equal', adjustable='box')
plt.xlim([-4, 4])
plt.ylim([-3, 3])
plt.show()
```
## Empirical Density
Now, let's sample some $\mathbf{z}$s from $q_0$ and plot a 2D histogram.
```
z = np.random.normal(size=(int(1e6),2))
plt.hexbin(z[:,0], z[:,1], cmap='rainbow')
plt.gca().set_aspect('equal', adjustable='box')
plt.xlim([-3, 3])
plt.ylim([-3, 3])
plt.show()
```
Compute $\mathbf{y} = f(\mathbf{z})$ and plot the histogram. The empirical density looks like the analytic density $q_1$.
```
y = f(z)
plt.hexbin(y[:,0], y[:,1], cmap='rainbow')
plt.gca().set_aspect('equal', adjustable='box')
plt.xlim([-4, 4])
plt.ylim([-3, 3])
plt.show()
```
```
% pylab inline
import numpy as np
import pandas as pd
import os
import cv2
read_data=pd.read_csv('flickr_logos_27_dataset/flickr_logos_27_dataset_training_set_annotation.txt',sep=" ",header=None)
read_data.drop(read_data.columns[len(read_data.columns)-1], axis=1, inplace=True)
read_data.columns=["ID","labels","random","x1","y1","x2","y2"]
read_data
x=read_data.ID
y=read_data["labels"]
x1=read_data.x1
x2=read_data.x2
y1=read_data.y1
y2=read_data.y2
def crop_image(img_path,x1,y1,x2,y2):
img=imread(img_path)
# crop first so the red rectangle drawn below does not bleed into the returned crop
crop_img = img[y1:y2, x1:x2]
cv2.rectangle(img, (x1, y1), (x2, y2), (255,0,0), 2)
return crop_img
import cv2
import random
import matplotlib.pyplot as plt
i=random.randint(0,x.shape[0])
img_path=os.path.join("flickr_logos_27_dataset/flickr_logos_27_dataset_images",x[i])
crop_img=crop_image(img_path,x1[i],y1[i],x2[i],y2[i])
print "Original Image"
plt.imshow(imread(img_path))
plt.show()
print "Cropped Image"
plt.imshow(crop_img)
plt.show()
from scipy.misc import *
temp=[]
label=[]
for index, row in read_data.iterrows():
img_path=os.path.join('flickr_logos_27_dataset/flickr_logos_27_dataset_images',row["ID"])
#print row["x1"],row["y1"],row["x2"],row["y2"]
if row["labels"]=="Pepsi":
img=crop_image(img_path,row["x1"],row["y1"],row["x2"],row["y2"])
try:
img=imresize(img,(16,16))
except:
continue
img=img.astype('float32')
temp.append(img)
label.append("Pepsi")
else:
img=crop_image(img_path,row["x1"],row["y1"],row["x2"],row["y2"])
try:
img=imresize(img,(16,16))
except:
continue
img=img.astype('float32')
temp.append(img)
label.append("Not Pepsi")
import numpy as np
data=np.stack(temp)
labeled_data=np.stack(label)
print len(labeled_data)
print len(data)
plt.imshow(data[234],cmap='gray')
plt.show()
print labeled_data[234]
print data.shape
print labeled_data.shape
normalized_data=data/255
from sklearn.model_selection import StratifiedShuffleSplit
splitter = StratifiedShuffleSplit(n_splits=1, test_size=0.05, random_state=0)
# Loop through the splits (only one)
for train_indices, test_indices in splitter.split(normalized_data, labeled_data):
# Select the train and test data
x_train, y_train = normalized_data[train_indices], labeled_data[train_indices]
x_test, y_test = normalized_data[test_indices], labeled_data[test_indices]
print x_train.shape
print y_train.shape
print x_test.shape
print y_test.shape
from sklearn.preprocessing import LabelEncoder
lb = LabelEncoder()
y_train = lb.fit_transform(y_train)
from keras.models import Sequential
from keras.layers import *
model=Sequential()
model.add(Conv2D(16,(3,3),padding='same',activation='relu',input_shape=(16,16,3)))
model.add(Dropout(0.5))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(128,(3,3),padding='same',activation='relu'))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(128,activation='relu'))
model.add(Dense(1,activation='sigmoid'))
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=0, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images
vertical_flip=False) # randomly flip images
gen_flow=datagen.flow(x_train,y_train,batch_size=32,seed=0)
model.load_weights('models/my_model_weights.h5')
hist=model.fit_generator(gen_flow,steps_per_epoch=len(x_train) / 32, epochs=150) # 32 matches the batch_size used in gen_flow
plt.plot(hist.history['acc'],'g')
#plt.plot(hist.history['val_acc'],'b')
plt.plot(hist.history['loss'],'r')
model.save('models/my_model.h5')
```
## <center> Car Dekho selling price prediction for old cars </center>

<a id=top></a>
<b><u>Table of Content</u></b>
1. [Introduction](#section1)<br>
2. [About Data Set](#section2)<br>
3. [Exploratory Data Analysis](#section3)<br>
4. [Observations](#section4)<br>
5. [Feature Engineering](#section5)<br>
6. [Predictive Modeling](#section6)<br>
7. [Comparing all Regression Techniques](#section8)<br>
8. [Conclusion](#section7)<br>
<a id=section1></a>
## Introduction
CarDekho.com is India's leading car search venture that helps users buy cars that are right for them. Its website and app carry rich automotive content such as expert reviews, detailed specs and prices, comparisons as well as videos and pictures of all car brands and models available in India. The company has tie-ups with many auto manufacturers, more than 4000 car dealers and numerous financial institutions to facilitate the purchase of vehicles.
This study is based on data taken from the Car Dekho company, and its objective is to create a predictive model for the selling price of a car based on various features such as make, model, distance driven, transmission (manual or automatic), and many others.
```
## Import the required libraries
import pandas as pd # importing pandas
import numpy as np # importing numpy
import seaborn as sns #import seaborn
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder # label encoder for encoding Categorical Value
from sklearn.model_selection import train_test_split #Train Test split
from sklearn.preprocessing import StandardScaler # Standard Scaling
from sklearn.linear_model import LinearRegression,Lasso # Linear Regression
from sklearn.metrics import mean_absolute_error,mean_squared_error,r2_score # import regression model evaluation
from sklearn.ensemble import RandomForestRegressor # Random Forest Regressor
from sklearn.neighbors import KNeighborsRegressor # KNN regressor
# Read the file and show the top 5 rows
df=pd.read_csv("car data.csv")
df.head()
df.info()
```
<a id=section2></a>
## About Dataset

- There are a total of 301 observations, which is quite small for modeling.
- There are a total of 9 columns, as shown above.
- There are no null values, so the missing-value imputation step can be skipped.
- There are two kinds of variables, as tabled below:
| Continous varibale | Categorical Variable |
| ------------ | ------------ |
| Year | Car_Name (N) |
| Selling_Price | Fuel_Type (N)|
| Present_Price | Seller_Type (N) |
| Kms_Driven | Transmission (N) |
| Owner | |
(N: Nominal, O: Ordinal)
```
df["Depriciation"]=df["Present_Price"]-df["Selling_Price"]
df.head()
```
<a id=section3></a>
## Exploratory Data Analysis
<font color="Blue"><b> Q1. Get the top 5 depriciating cars with their make and model</b></font>
```
fig, ax = plt.subplots(figsize=(10,7))
df.groupby(["Year","Car_Name"])["Depriciation"].mean().nlargest(5).plot(kind="bar")
plt.ylabel("Depriciation")
plt.title("Top 5 Depriciated Cars with their model")
for p in ax.patches:
ax.annotate('{:.1f}'.format(p.get_height()), (p.get_x()+0.2, p.get_height()/2))
```
<font color="Blue"><b> Q2. How the depriciation varying based on KMs driven and Transmission(Manual or Automatic)</b></font>
```
sns.lmplot(x="Kms_Driven",y="Depriciation",hue="Transmission",data=df,fit_reg=False)
```
<font color="Blue"><b> Q3. Get the Least 5 depriciating cars with their make and model</b></font>
```
fig, ax = plt.subplots(figsize=(10,7))
df.groupby(["Year","Car_Name"])["Depriciation"].mean().nsmallest(5).plot(kind="bar")
plt.ylabel("Depriciation")
plt.title("Least 5 Depriciated Cars with their model")
for p in ax.patches:
ax.annotate('{:.1f}'.format(p.get_height()), (p.get_x()+0.2, p.get_height()/2))
```
<font color="Blue"><b> Q4. How depriciation depends on # of Owners changed (like first hand, second hand or more)</b></font>
```
fig, ax = plt.subplots(figsize=(10,7))
df.groupby("Owner")["Depriciation"].mean().plot(kind="bar")
plt.ylabel("Depriciation")
plt.xlabel("# of owners")
plt.title("Averge depriciation based on # of owners")
for p in ax.patches:
ax.annotate('{:.1f}'.format(p.get_height()), (p.get_x()+0.2, p.get_height()/2))
sns.heatmap(df.corr(),annot=True)
```
<a id=section4></a>
## Observations
1. The correlation between selling price and present price is high, so present price will be removed.
2. Depreciation increases with kilometers driven.
3. Depreciation is higher for automatic cars.
4. As the number of owners of a car increases, depreciation also increases.
5. The kilometers-driven and selling-price data contain many outliers, so these need to be handled.
6. As there are few observations, outlier values will not be removed but replaced by the 3rd-quartile value of that column.
<a id=section5></a>
## Feature Engineering
```
df=df.drop("Present_Price",axis=1) # dropping the highly correalted Present Price Column
df=df.drop("Depriciation",axis=1) # Dropping the depriciation coloumn as it is temp column for EDA
#There are 2 duplicated Rows to removed them fromm dataset.
duplicated_row=df[df.duplicated()]
duplicated_index=duplicated_row.index
duplicated_index
df=df.drop_duplicates()
# using Label encoder for Car name categorical values
le=LabelEncoder()
df["Car_Name_encoded"]=le.fit_transform(df["Car_Name"])
# for "Fuel_Type","Seller_Type","Transmission" using dummification to convert categorical to continous values
df_dummified=pd.get_dummies(df[["Fuel_Type","Seller_Type","Transmission"]],drop_first=True)
df=pd.concat([df,df_dummified],axis=1)
# Dropping Car_Name, Fuel_Type, Seller_type,Transmission
df=df.drop(["Car_Name", "Fuel_Type", "Seller_Type","Transmission"],axis=1)
df.head()
# Checking outliers for Selling price
sns.boxplot(df["Selling_Price"])
df["Selling_Price"].describe()
# function to cap outliers by replacing them with the 3rd quartile
def change_outliers_75Q(df,col_name):
df[col_name] = np.where(df[col_name] > df[col_name].quantile(q=.75), df[col_name].quantile(q=.75), df[col_name])
return df[col_name]
df["Selling_Price"]=change_outliers_75Q(df,"Selling_Price")
df["Kms_Driven"].describe()
sns.boxplot(df["Kms_Driven"])
df["Kms_Driven"]=change_outliers_75Q(df,"Kms_Driven")
y=df["Selling_Price"]
X=df.drop("Selling_Price",axis=1)
X_train,X_test,y_train,y_test=train_test_split(X,y,random_state=42,test_size=.2)
std_sclr=StandardScaler()
X_Train_scaled=std_sclr.fit_transform(X_train)
X_Test_sclaed=std_sclr.transform(X_test)
df_scaled=pd.DataFrame(X_Train_scaled,columns=X.columns)
df_scaled.boxplot()
```
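The capping rule in `change_outliers_75Q` replaces every value above the 75th percentile with the percentile itself, so the column maximum becomes the 3rd quartile; a toy check:

```python
import numpy as np
import pandas as pd

# Toy column with one obvious outlier.
toy = pd.DataFrame({"price": [1.0, 2.0, 3.0, 4.0, 100.0]})
q75 = toy["price"].quantile(q=.75)
# Same np.where pattern as change_outliers_75Q above.
toy["price"] = np.where(toy["price"] > q75, q75, toy["price"])
```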
<a id=section6></a>
## Predictive Modeling
```
lin_reg=LinearRegression()
#lasso_reg=Lasso(alpha=.1)
lin_reg.fit(X_Train_scaled,y_train)
y_pred=lin_reg.predict(X_Test_sclaed)
print("Mean Absolute Error: {}".format(round(mean_absolute_error(y_test,y_pred),2)))
print("Mean Squared Error: {}".format(round(mean_squared_error(y_test,y_pred),2)))
print("Root Mean Squared Error: {}".format(round(np.sqrt(mean_squared_error(y_test,y_pred)),2)))
print("R2 Score: {}".format(round(r2_score(y_test,y_pred),2)))
```
## Linear Regression
| Evaluation criteria | Score |
| ------------ | ------------ |
| Mean Absolute Error (MAE)| .65 |
| Mean Squared Error (MSE)| .75 |
| Root Mean Squared Error(RMSE)| .87 |
| R2 Score | .86 |
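The RMSE row is not an independent metric: it is the square root of the MSE, exactly as computed in the evaluation cells; a toy check:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_hat = np.array([2.5, 0.0, 2.0, 8.0])
mse = mean_squared_error(y_true, y_hat)   # mean of squared errors
rmse = np.sqrt(mse)                       # RMSE = sqrt(MSE)
```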
```
rndm_frst_reg=RandomForestRegressor(n_estimators=10)
rndm_frst_reg.fit(X_Train_scaled,y_train)
y_pred1=rndm_frst_reg.predict(X_Test_sclaed)
# Evaluation for the Random Forest regressor with n_estimators=10
print("Mean Absolute Error: {}".format(round(mean_absolute_error(y_test,y_pred1),2)))
print("Mean Squared Error: {}".format(round(mean_squared_error(y_test,y_pred1),2)))
print("Root Mean Squared Error: {}".format(round(np.sqrt(mean_squared_error(y_test,y_pred1)),2)))
print("R2 Score: {}".format(round(r2_score(y_test,y_pred1),2)))
```
## Random Forest Regressor
| Evaluation criteria | Score |
| ------------ | ------------ |
| Mean Absolute Error (MAE)| .45 |
| Mean Squared Error (MSE)| .55 |
| Root Mean Squared Error(RMSE)| .74 |
| R2 Score | .9 |
```
for eachK in range(1,18):
neigh_reg=KNeighborsRegressor(n_neighbors=eachK)
neigh_reg.fit(X_Train_scaled,y_train)
y_pred_neigh=neigh_reg.predict(X_Test_sclaed)
print("for k={} the score is {}".format(eachK,r2_score(y_test,y_pred_neigh)))
#Best R2 score is for k=3
neigh_at_3=KNeighborsRegressor(n_neighbors=3)
neigh_at_3.fit(X_Train_scaled,y_train)
y_pred_neigh_at_3=neigh_at_3.predict(X_Test_sclaed)
# Get Evaluation for KNN Regressor for K=3
print("Mean Absolute Error: {}".format(round(mean_absolute_error(y_test,y_pred_neigh_at_3),2)))
print("Mean Squared Error: {}".format(round(mean_squared_error(y_test,y_pred_neigh_at_3),2)))
print("Root Mean Squared Error: {}".format(round(np.sqrt(mean_squared_error(y_test,y_pred_neigh_at_3)),2)))
print("R2 Score: {}".format(round(r2_score(y_test,y_pred_neigh_at_3),2)))
```
## KNN Regressor
| Evaluation criteria | Score |
| ------------ | ------------ |
| Mean Absolute Error (MAE)| .56 |
| Mean Squared Error (MSE)| .66 |
| Root Mean Squared Error(RMSE)| .81 |
| R2 Score | .88 |
```
from sklearn.linear_model import Ridge,Lasso,ElasticNet
from sklearn.model_selection import GridSearchCV
rdg_reg=Ridge()
lasso=Lasso()
elastinet=ElasticNet()
parameter={'alpha':[1e-15,1e-10,1e-8,1e-4,1e-3,1e-2,0,1,2,4,10,20]}
regressor=GridSearchCV(rdg_reg,parameter,scoring="neg_mean_squared_error",cv=5)
regressor.fit(X_Train_scaled,y_train)
print(regressor.best_params_,regressor.best_score_)
y_ridge_predict=regressor.predict(X_Test_sclaed)
print("Mean Absolute Error: {}".format(round(mean_absolute_error(y_test,y_ridge_predict),2)))
print("Mean Squared Error: {}".format(round(mean_squared_error(y_test,y_ridge_predict),2)))
print("Root Mean Squared Error: {}".format(round(np.sqrt(mean_squared_error(y_test,y_ridge_predict)),2)))
print("R2 Score: {}".format(round(r2_score(y_test,y_ridge_predict),2)))
```
## Ridge Regressor
| Evaluation criteria | Score |
| ------------ | ------------ |
| Mean Absolute Error (MAE)| .65 |
| Mean Squared Error (MSE)| .75 |
| Root Mean Squared Error(RMSE)| .87 |
| R2 Score | .86 |
```
lasso_regressor=GridSearchCV(lasso,parameter,scoring="neg_mean_squared_error",cv=5)
lasso_regressor.fit(X_Train_scaled,y_train)
print(lasso_regressor.best_params_,lasso_regressor.best_score_)
y_lasso_pred=lasso_regressor.predict(X_Test_sclaed)
print("Mean Absolute Error: {}".format(round(mean_absolute_error(y_test,y_lasso_pred),2)))
print("Mean Squared Error: {}".format(round(mean_squared_error(y_test,y_lasso_pred),2)))
print("Root Mean Squared Error: {}".format(round(np.sqrt(mean_squared_error(y_test,y_lasso_pred)),2)))
print("R2 Score: {}".format(round(r2_score(y_test,y_lasso_pred),2)))
```
## Lasso Regressor
| Evaluation criteria | Score |
| ------------ | ------------ |
| Mean Absolute Error (MAE)| .67 |
| Mean Squared Error (MSE)| .80 |
| Root Mean Squared Error(RMSE)| .90 |
| R2 Score | .85 |
```
elastinet_regressor=GridSearchCV(elastinet,parameter,scoring="neg_mean_squared_error",cv=5)
elastinet_regressor.fit(X_Train_scaled,y_train)
print(elastinet_regressor.best_params_,elastinet_regressor.best_score_)
y_elastinet_pred=elastinet_regressor.predict(X_Test_sclaed)
print("Mean Absolute Error: {}".format(round(mean_absolute_error(y_test,y_elastinet_pred),2)))
print("Mean Squared Error: {}".format(round(mean_squared_error(y_test,y_elastinet_pred),2)))
print("Root Mean Squared Error: {}".format(round(np.sqrt(mean_squared_error(y_test,y_elastinet_pred)),2)))
print("R2 Score: {}".format(round(r2_score(y_test,y_elastinet_pred),2)))
```
## ElasticNet Regressor
| Evaluation criteria | Score |
| ------------ | ------------ |
| Mean Absolute Error (MAE)| .68 |
| Mean Squared Error (MSE)| .80 |
| Root Mean Squared Error(RMSE)| .89 |
| R2 Score | .85 |
<a id=section8></a>
## Comparing all Regression Techniques

<a id=section7></a>
## Conclusion
1. The data set is full of outliers, so corrective actions should be taken to clean the data.
2. Tried various regression techniques and found that the Random Forest Regressor gives the best R2 score for the given dataset.
3. Tried various K values for the KNN Regressor; the maximum R2 score achieved is ~88% (at k=3).
4. The best R2 score, ~90% from the Random Forest Regressor, makes it the final model that I am suggesting for this data set.
# Dictionaries and Functions
## Dictionaries
Dictionaries (or "dicts") are collections of item pairs, similar to a list of tuple pairs. You can think of a ``dict`` object as being conceptually similar to a physical dictionary: i.e. we can look up the definition of a word in a dictionary very quickly because the words are arranged in alphabetical order (a form of indexing). In a ``dict`` object, instead of words and their associated definitions, we have ``keys`` and their associated ``values``.
To define a ``dict`` object, we use the following syntax:
``dict_name = {key1: val1, key2: val2}``
For example:
```
d = {
'hangry': 'bad-tempered or irritable as a result of hunger.',
'LOL': 'To laugh out loud; to be amused.'
}
print(d)
```
We can then use an item ``key`` to obtain its associated value:
```
d['LOL']
```
And we can add another item to the dictionary using the following syntax:
```
d['selfie'] = 'a photograph that one has taken of oneself, typically one taken with a smartphone or webcam and shared via social media.'
d[7] = 3.2345
d['pi'] = 3.14
d
```
Now add a new key-value pair of your own:
```
d['my_name']= 'kaneez fizza'
d
d['learn'] = 'python dictionary'
print(d)
```
One of the benefits of a dictionary is that the item look-up time is very fast. If you have to search for an item in a list or tuple with millions of items, it can take a long time, but searching for an item in a dictionary containing millions of items returns a result almost instantaneously.
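To see this speed difference concretely, here is a small illustrative timing comparison between membership tests on a large list and a large dict (absolute times will vary by machine):

```python
import timeit

n = 1_000_000
big_list = list(range(n))
big_dict = {i: None for i in range(n)}

# Searching for an item near the end of the list must scan almost the whole
# list, while the dict finds its key via hashing in roughly constant time.
list_time = timeit.timeit(lambda: (n - 1) in big_list, number=10)
dict_time = timeit.timeit(lambda: (n - 1) in big_dict, number=10)
print(f"list: {list_time:.4f}s  dict: {dict_time:.4f}s")
```

The dict lookup should be faster by several orders of magnitude, and the gap grows with the size of the collection.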
## Creating and using functions
Functions are encapsulated segments of code that perform a set of pre-specified actions. You can use the same function again and again in many different situations. For example, one function you have been using already is the ``print()`` function, which takes a string as input and then outputs that string to the screen.
A function can take one or more values as input and return one or more values as output. A function is defined with the following syntax (don't worry about the errors in the below code block):
```
def function_name(argument1, argument2, ......):
<code_that_performs_specific_function>
return <values_to_return>
```
Let's define a new function that takes two numbers as input arguments. The function will calculate the square of each of those numbers, add the squares together, and return the resulting "sum of squares" value:
```
def sum_of_squares(x, y):
result = x**2 + y**2
return result
```
We can then call that function, passing two numbers as input:
```
sum_of_squares(2, 5)
```
We can also use the output of one function within another function, or use the output of one function as the input to another function:
```
def hypotenuse_length(x, y):
    # Use Pythagoras' theorem (c^2 = a^2 + b^2) to calculate the length of the hypotenuse of a right-angle triangle
result = sum_of_squares(x, y)**0.5 # Use the output of the sum_of_squares() function within the hypotenuse_length() function
return result
print(hypotenuse_length(2,5)) # Use the output of the hypotenuse_length() function as the input to the print() function
```
Note that we can also specify default values for each input argument. Python will use the default value if an argument is not specified during the function call.
Arguments that do not have default values are called positional arguments -- they must all be specified every time you call the function, and make sure you specify them in the correct order!
Note that you can use default argument values to change the behaviour of your function
```
def hypotenuse_length(x, y=None):
    # Use Pythagoras' theorem (c^2 = a^2 + b^2) to calculate the length of the hypotenuse of a right-angle triangle
# If the "y" argument is not passed, assume that y = x
if y is None:
y = x
result = sum_of_squares(x, y)**0.5 # Use the output of the sum_of_squares() function within the hypotenuse_length() function
return result
print(hypotenuse_length(2))
print(hypotenuse_length(2, 2))
print(hypotenuse_length(2, 1))
```
Default arguments are always placed after the positional arguments in the function definition. When we call a function with multiple default arguments, they can be specified by name in any order:
```
def hypotenuse_length(x, y=None, print_result=False):
    # Use Pythagoras' theorem (c^2 = a^2 + b^2) to calculate the length of the hypotenuse of a right-angle triangle
# If the "y" argument is not passed, assume that y = x
if y is None:
y = x
result = sum_of_squares(x, y)**0.5 # Use the output of the sum_of_squares() function within the hypotenuse_length() function
if print_result is True:
print('hypotenuse =', result)
return result
```
See if you can predict what each of the following lines will do before you run each cell:
```
hypotenuse_length(2, print_result=True)
hypotenuse_length(2, print_result=True, y=7)
hypotenuse_length(2, True, 7)
hypotenuse_length(2, 100)
hypotenuse_length(2, 30, True)
```
Often when I am writing code for a project I will try to break code up into many different functions, rather than have large, monolithic slabs of code. Ideally, each function you write should encapsulate a single, simple idea.
## Lambda Functions
In cases where you need to define a very small function, it may be better to instead use a lambda function, which can have cleaner syntax than separately defining a small function. The lambda function syntax is:
``lambda x: <some_calculations_using_x>``
Here is an example to show you what a lambda function does:
```
func = lambda x, y: x*y + x/y
func(3,4)
```
The above lambda function is equivalent to:
```
def func(x, y):
return x*y + x/y
func(3,4)
```
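In practice, lambdas most often appear inline as throw-away arguments, for example as the ``key`` function for ``sorted()``:

```python
# Sort a list of words by their length rather than alphabetically
words = ['banana', 'fig', 'cherry', 'kiwi']
by_length = sorted(words, key=lambda w: len(w))
print(by_length)
```

Because ``sorted()`` is stable, words of equal length keep their original relative order.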
## Exercise 5
(a) In the cell below, create a new ``sum_of_squares`` function that:
1. takes a list as input,
2. calculates the square of each of the numbers in the list,
3. adds those numbers together:
```
def sum_of_squares(l):
result=sum([x**2 for x in l])
return result
list_of_numbers = [2, 5, 7, 8, 9]
result = sum_of_squares(list_of_numbers)
print(result)
```
(b) Create a function that:
1. takes a list as input,
2. prints the list to screen if an associated input argument is ``True`` (but assign the argument a default value of ``False``),
3. uses your ``sum_of_squares()`` function to calculate the sum of squares of the items in the list, and saves the resulting value to a new variable called ``result``,
4. prints the value of the ``result`` variable.
then run the function:
```
def my_fun(l, print_result=False):
if print_result is True:
print(l)
result= sum_of_squares(l)
return result
list_of_numbers = [2, 5, 7, 8, 9, 11, 15, ]
# Now run your function
my_fun(list_of_numbers)
```
## Exercise 6: Keeping functions flexible and generalisable
Rather than write functions that apply to very specific cases, it is better to write functions that are flexible and generalisable, so that they can be used in as many different situations as possible.
For example, the following function is very specific. It calculates yearly energy usage of a house, based on the power consumption of a few different input appliances:
```
def yearly_energy_usage(tv, fridge, heating, cooling):
# Hours used per day for different appliances
tv_hours = 2
fridge_hours = 24
heating_hours = 2
cooling_hours = 1
# Calculate average daily energy use in Wh
E_Wh_perday = tv*tv_hours + \
fridge*fridge_hours + \
heating*heating_hours + \
cooling*cooling_hours
# Convert from Wh to kWh
E_kWh_perday = E_Wh_perday / 1000
    # Calculate average yearly energy use in kWh
E_kWh_peryear = E_kWh_perday * 365
print(E_kWh_peryear, 'kWh')
return E_kWh_peryear
yearly_energy_usage(120, 150, 1500, 2000)
yearly_energy_usage(120, 200, 1500, 0)
```
This function is quite specific, and has the following problems:
- if we want to change the hours used per day for different appliances, we have to change them in the function definition. Also, we would not be able to re-use the function if we wanted to call it with multiple different daily usages. Instead, it is better to pass those values as inputs when we call the function.
- power consumption for each appliance is passed in a separate input argument. It would be better to pass some sort of iterable collection of appliance power consumption data.
- the energy use is calculated over a one-year duration. If we want to calculate the energy use over a different duration, say 1 month or 1 week, we have to change the function definition. It would be preferable to simply change a variable passed to the function call. Also, we would not be able to re-use the function if we wanted to call it with multiple different durations.
Now use the cell below (which is a copy of the above cell) to modify the function so that it takes two input arguments:
- ``appliances``: a dict containing three key-value pairs:
- ``"name"``: value is a list of appliance names,
- ``"power"``: value is a list of each appliance's power consumption, in Watts (W),
- ``"daily_usage"``: value is a list of each appliance's average daily usage time, in hours,
- ``duration``: the length of time over which to make the energy calculation, in days.
and change the function's name to ``energy_usage()``:
```
def energy_usage(appliances, duration):
    # Calculate average daily energy use in Wh
    E_Wh_perday = sum([P*t for P, t in zip(appliances['power'], appliances['daily_usage'])])
    # Convert from Wh to kWh
    E_kWh_perday = E_Wh_perday / 1000
    # Calculate total energy use in kWh over the given duration (in days)
    E_kWh = E_kWh_perday * duration
    print(E_kWh, 'kWh')
    return E_kWh
appliances = {
    'name' : ['tv', 'fridge', 'heating', 'cooling', 'lighting'],
    'power' : [120, 150, 1500, 2000, 500],
    'daily_usage': [2, 24, 2, 1, 5]
}
#appliances['daily_usage'] = [1, 2, 15, 1, 5]
energy_usage(appliances, 30)
print(appliances['power'])
print(appliances['daily_usage'])
```
Now edit the cell above to:
- calculate the energy usage over the course of 1 month (30 days),
- add another appliance -- ``lighting`` -- which uses an average of 500W for 5h a day.
Note that if we had tried to make these changes before making the function more general, the edits would have been messier and taken longer to code -- i.e. consider how you would have implemented the above changes in the ``yearly_energy_usage()`` function (i.e. the earlier, less generalisable version of the function).
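For reference, here is one possible sketch of the generalised function the exercise asks for (the exact shape of your own solution may differ); with it, changing the duration or the appliance list is just a matter of passing different arguments:

```python
def energy_usage(appliances, duration):
    # Average daily energy use in Wh: sum of power (W) * usage time (h)
    E_Wh_perday = sum(P * t for P, t in zip(appliances['power'], appliances['daily_usage']))
    # Convert to kWh and scale to the requested duration in days
    return E_Wh_perday / 1000 * duration

appliances = {
    'name': ['tv', 'fridge', 'heating', 'cooling', 'lighting'],
    'power': [120, 150, 1500, 2000, 500],   # Watts
    'daily_usage': [2, 24, 2, 1, 5],        # hours per day
}
print(energy_usage(appliances, 7), 'kWh per week')
print(energy_usage(appliances, 30), 'kWh per month')
print(energy_usage(appliances, 365), 'kWh per year')
```

The same function now serves weekly, monthly, and yearly calculations without any edits to its body.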
# A demo of Hilbert-Huang transform
```
import torch
import numpy as np
from matplotlib import pyplot as plt
from torchHHT import hht, visualization
from scipy.signal import chirp
import IPython
```
Generate a mixture of two Gaussian-modulated chirps (one quadratic, one linear), both with a sample rate of 1000 Hz and a signal duration of 2.0 s.
```
fs = 1000
duration = 2.0
t = torch.arange(fs*duration) / fs
x = torch.from_numpy(chirp(t, 5, 0.8, 10, method = "quadratic", phi=100)) * torch.exp(-4*(t-1)**2) + \
torch.from_numpy(chirp(t, 40, 1.2, 50, method = "linear")) * torch.exp(-4*(t-1)**2)
plt.plot(t, x)
plt.title("$x(t)$")
plt.xlabel("time")
plt.show()
```
Now let's perform empirical mode decomposition (EMD).
```
imfs, imfs_env, imfs_freq = hht.hilbert_huang(x, fs, num_imf=3)
visualization.plot_IMFs(x, imfs, fs, save_fig="img/emd.png")
```
From the above illustration we can see that the two modulated chirps are successfully separated and represented by `IMF 0` and `IMF 1`, respectively. Now let's further compute the amplitude and frequency modulation of each IMF via the Hilbert transform, and obtain the Hilbert spectrum.
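For intuition, the demodulation step can be sketched with plain SciPy (this is separate from the torchHHT implementation used in this notebook): for a single-component signal, `scipy.signal.hilbert` returns the analytic signal, whose magnitude is the amplitude envelope and whose unwrapped phase derivative is the instantaneous frequency.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000
t = np.arange(fs) / fs
x = np.cos(2 * np.pi * 40 * t)  # a pure 40 Hz tone

analytic = hilbert(x)                    # analytic signal x + i*H(x)
envelope = np.abs(analytic)              # amplitude envelope
phase = np.unwrap(np.angle(analytic))    # instantaneous phase
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency in Hz

# Away from the signal edges, the estimates recover the known values:
print(envelope[100:-100].mean())   # close to 1.0
print(inst_freq[100:-100].mean())  # close to 40 Hz
```

Applying this per IMF (after EMD) is exactly what builds the Hilbert spectrum below.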
```
spectrum, t, f = hht.hilbert_spectrum(imfs_env, imfs_freq, fs, freq_lim = (0, 60), time_scale=1, freq_res = 1)
visualization.plot_HilbertSpectrum(spectrum, t, f,
save_spectrum="img/Hilbert_spectrum.png",
save_marginal="img/Hilbert_marginal.png")
```
In the illustration, the variation of frequencies over time is clearly visible and consistent with our configuration: one chirp increases linearly from 40 Hz, reaching 50 Hz at 1.2 s, and the other increases quadratically from 5 Hz, reaching 10 Hz at 0.8 s. From the color map one can observe that both amplitudes are modulated by a Gaussian envelope. The marginal spectrum also shows the two peaks of the frequency distribution.
------
Now let's perform short-time Fourier transform for comparison.
```
from scipy.signal import stft
from torch import fft
plt.figure(figsize=(20, 4))
f, t, Zxx = stft(x, fs, nperseg=1024, noverlap=1023, nfft=1024)
f_lim = int(60/f[1])
ax = plt.subplot(1, 3, 1)
plt.colorbar(ax.pcolormesh(t, f[:f_lim], 20 * np.log10(np.abs(Zxx))[:f_lim, :], shading='auto', cmap = plt.cm.jet),
label="energy(dB)")
ax.set_xlabel("time")
ax.set_ylabel("frequency")
ax.set_title("(a) Spectrogram (long window)")
f, t, Zxx = stft(x, fs, nperseg=128, noverlap=127, nfft = 1024)
f_lim = int(60/f[1])
ax = plt.subplot(1, 3, 2)
plt.colorbar(ax.pcolormesh(t, f[:f_lim], 20 * np.log10(np.abs(Zxx))[:f_lim, :], shading='auto', cmap = plt.cm.jet),
label="energy(dB)")
ax.set_xlabel("time")
ax.set_ylabel("frequency")
ax.set_title("(b) Spectrogram (short window)")
X = fft.fft(x)
ax = plt.subplot(1, 3, 3)
f_lim = int(100/fs*x.shape[0])
ax.plot(np.arange(f_lim)/x.shape[0]*fs, 20 * np.log10(np.abs(X))[:f_lim])
ax.set_xlim(0, 60)
ax.set_xlabel("frequency")
ax.set_ylabel("energy (dB)")
ax.set_title("(c) marginal spectrum.")
plt.savefig("img/STFT_spectrum.png", dpi = 600)
plt.show()
```
The FFT (marginal) spectrum (Fig **(c)**) is similar to the Hilbert one. However, in the spectrogram (the squared magnitude of the STFT), the uncertainty principle prevents the energy distribution from concentrating well along the frequency and time axes at the same time, resulting in a blurred spectrum (Fig **(a)(b)**). Furthermore, since the STFT is Fourier-based, the underlying non-linear modulation is still expanded linearly; thus physically implausible harmonics can still occur locally.
# 4. Modeling
## 4.1 Import of relevant Modules
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score, precision_score, recall_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
sns.set()
%matplotlib inline
```
## 4.2 Read Data
```
data = pd.read_csv('dataset_dummies.csv') # file is generated in notebook_1
data.head()
```
## 4.3 Data Preparation for Modeling
```
target = data.fraud_reported
features = data.drop('fraud_reported', axis=1)
# Split data in training and test datasets
x_train, x_test, y_train, y_test = train_test_split(features, target, test_size=0.2, random_state=365)
x_train.head()
# Scale data
scaler = StandardScaler()
scaler.fit(x_train)
x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
# distribution of target in train data
y_train.value_counts()
# distribution of target in test data
y_test.value_counts()
```
## 4.4 Modeling and Evaluation
### 4.4.1 Logistic regression
```
logreg = LogisticRegression()
logreg.fit(x_train, y_train)
# train data
print(classification_report(y_train, logreg.predict(x_train)))
# train data
print('Accuracy:', accuracy_score(y_train, logreg.predict(x_train))*100)
print('Precision:', precision_score(y_train, logreg.predict(x_train))*100)
print('Recall:', recall_score(y_train, logreg.predict(x_train))*100)
# test data
print(classification_report(y_test, logreg.predict(x_test)))
# test data
print('Accuracy:', accuracy_score(y_test, logreg.predict(x_test))*100)
print('Precision:', precision_score(y_test, logreg.predict(x_test))*100)
print('Recall:', recall_score(y_test, logreg.predict(x_test))*100)
tn, fp, fn, tp = confusion_matrix(y_test, logreg.predict(x_test)).ravel()
print(tn, fp, fn, tp)
cm = confusion_matrix(y_test, logreg.predict(x_test))
sns.heatmap(cm, annot=True, cmap='terrain', fmt='g')
plt.xlabel('Predicted data')
plt.ylabel('Real data')
plt.show()
logreg.intercept_
logreg.coef_
```
### 4.4.2 Decision Tree
```
tree = DecisionTreeClassifier()
tree.fit(x_train, y_train)
# train data
print(classification_report(y_train, tree.predict(x_train)))
# train data
print('Accuracy:', accuracy_score(y_train, tree.predict(x_train))*100)
print('Precision:', precision_score(y_train, tree.predict(x_train))*100)
print('Recall:', recall_score(y_train, tree.predict(x_train))*100)
# test data
print(classification_report(y_test, tree.predict(x_test)))
# test data
print('Accuracy:', accuracy_score(y_test, tree.predict(x_test))*100)
print('Precision:', precision_score(y_test, tree.predict(x_test))*100)
print('Recall:', recall_score(y_test, tree.predict(x_test))*100)
cm = confusion_matrix(y_test, tree.predict(x_test))
sns.heatmap(cm, annot=True, cmap='terrain', fmt='g')
plt.xlabel('Predicted data')
plt.ylabel('Real data')
plt.show()
```
### 4.4.3 Random Forest
```
forest = RandomForestClassifier()
forest.fit(x_train, y_train)
# train data
print(classification_report(y_train, forest.predict(x_train)))
# train data
print('Accuracy:', accuracy_score(y_train, forest.predict(x_train))*100)
print('Precision:', precision_score(y_train, forest.predict(x_train))*100)
print('Recall:', recall_score(y_train, forest.predict(x_train))*100)
# test data
print(classification_report(y_test, forest.predict(x_test)))
# test data
print('Accuracy:', accuracy_score(y_test, forest.predict(x_test))*100)
print('Precision:', precision_score(y_test, forest.predict(x_test))*100)
print('Recall:', recall_score(y_test, forest.predict(x_test))*100)
cm = confusion_matrix(y_test, forest.predict(x_test))
sns.heatmap(cm, annot=True, cmap='terrain', fmt='g')
plt.xlabel('Predicted data')
plt.ylabel('Real data')
plt.show()
```
### 4.4.4 Support Vector Machine
```
svc = SVC()
svc.fit(x_train, y_train)
# train data
print(classification_report(y_train, svc.predict(x_train)))
# train data
print('Accuracy:', accuracy_score(y_train, svc.predict(x_train))*100)
print('Precision:', precision_score(y_train, svc.predict(x_train))*100)
print('Recall:', recall_score(y_train, svc.predict(x_train))*100)
# test data
print(classification_report(y_test, svc.predict(x_test)))
# test data
print('Accuracy:', accuracy_score(y_test, svc.predict(x_test))*100)
print('Precision:', precision_score(y_test, svc.predict(x_test))*100)
print('Recall:', recall_score(y_test, svc.predict(x_test))*100)
cm = confusion_matrix(y_test, svc.predict(x_test))
sns.heatmap(cm, annot=True, cmap='terrain', fmt='g')
plt.xlabel('Predicted data')
plt.ylabel('Real data')
plt.show()
```
# 5. Deployment
```
# Select one scaled person of the dataset
sample_df = x_test[72]
# Features of the selected sample
sample_df
# Execute prediction
sample_pred = svc.predict([sample_df])
# Interpret the result
def check_prediction(pred):
if pred[0] == 1:
print("Fraud.")
else:
print("No Fraud.")
# call the prediction method
check_prediction(sample_pred)
```
```
import numpy as np
from random import shuffle
from math import log, floor
import pandas as pd
import tensorflow as tf
import tensorboard as tb
from keras import backend as K
from keras.models import *
from keras.layers import *
from keras.activations import *
from keras.callbacks import *
from keras.utils import *
from keras.layers.advanced_activations import *
# from keras.layers.advanced_activations import *
from keras import *
from keras.engine.topology import *
from keras.optimizers import *
import keras
# import pandas as pd
# import numpy as np
# import sklearn
import pickle
from keras.applications import *
from keras.preprocessing.image import *
# importing dependencies
import pandas as pd # data frame
import numpy as np # matrix math
from scipy.io import wavfile # reading the wavfile
from sklearn.utils import shuffle # shuffling of data
from random import sample # random selection
from tqdm import tqdm # progress bar
import matplotlib.pyplot as plt # to view graphs
import wave
from math import log, floor
# audio processing
from scipy import signal # audio processing
from scipy.fftpack import dct
import librosa # library for audio processing
import numpy as np
import pandas as pd
from sklearn.decomposition import *
from sklearn.cluster import KMeans
import sys, os
import tqdm ##
from tqdm import * ##
from xgboost.sklearn import XGBClassifier
import xgboost as xgb
import lightgbm as lgb
import catboost as ctb
from keras.utils import *
from sklearn.ensemble import *
import pickle
from bayes_opt import BayesianOptimization
from logHandler import Logger
from utils import readCSV, getPath, writePickle,readPickle
from keras.regularizers import l2
from keras.callbacks import History ,ModelCheckpoint, EarlyStopping
folder = 'data/predict_test/' # joint prediction results on the stacking-verified data
# acc_df = pd.read_csv('data/ens_unverified/validation_ACC_P1S1.csv') #accuracy csv
acc_df = pd.read_csv('data/predict_test/X_test_ACC.csv')
# acc_df.columns = ['model','csv_name','acc']
acc_df.columns = ['csv_name','acc']
acc_df = acc_df.filter(['csv_name','acc'])
# acc_df['csv_name'] = acc_df['csv_name'].str.replace('_unverified_','_')
files = os.listdir(folder)
ratio_all=0
df_dict = {}
for i,csv in enumerate(files):
df_name = csv[:csv.rfind('_')]
# print(df_name)
if csv.endswith('ACC.csv'):
continue
ratio = acc_df[acc_df['csv_name'] == csv]['acc'].values[0]
ratio_all += ratio
if i==0:
df = pd.read_csv(os.path.join(folder,csv))
df = df.drop('fname',axis=1)
df = np.array(df) * ratio
# df1_ = pd.DataFrame(df1_)
# df1_['acc'] = ratio
# print(len(df1_))
# df_dict[df_name] = df_dict[df_name].filter(['fname'])
# df_dict[df_name] = pd.merge(df_dict[df_name],df1_,how='inner',right_index=True,left_index=True)
else:
df_ = pd.read_csv(os.path.join(folder,csv))
df_ = df_.drop('fname',axis=1)
df += np.array(df_) * ratio
# df1_ = pd.DataFrame(df1_)
# df1_['acc'] = ratio
# # print(len(df1_))
# df1_name = df1_name.filter(['fname'])
# df1_ = pd.merge(df1_name, df1_,how='inner',right_index=True,left_index=True)
# df_dict[df_name] = df_dict[df_name].append(df1_, ignore_index=True)
# for k,v in df_dict.items():
# df_dict[k] = v.sort_values('fname')
# _ = list(df_dict.keys())
# sum_ = np.zeros((len(df_dict[_[0]]), 42))
# for k,v in df_dict.items():
# sum_ += v[v.columns[1:]].values
# ratio_ = np.tile(sum_[:, -1], (41, 1)).T
test_X = df / ratio_all
print(sum(test_X[0]))
test_X.shape
reverse_dict = pickle.load(open('data/map_reverse.pkl' , 'rb'))
```
## Ensemble Result
```
name = pd.read_csv('data/sample_submission.csv')
X_name = name['fname'].tolist()
output = []
for i , d in enumerate(test_X):
# print(i)
# print(d)
# print('\n')
top3 = d.argsort()[-3:][::-1]
result = [reverse_dict[x] for x in top3]
s = ' '.join(result)
output.append(s)
df = pd.DataFrame(output , columns = ['label'])
df.insert(0,'fname' , X_name)
df.to_csv('result/ens_ans.csv', index=False)
print(df.head(10))
df_ens = pd.DataFrame(df.label.str.split(' ',2).tolist(),columns=['1','2','3'])
df_ens = pd.merge(pd.DataFrame(df.fname),df_ens,how='inner',right_index=True,left_index=True)
df_ens
```
## NN predict
```
model1 = load_model('model/strong_S1P2_NNclf_41dim_softmax.h5')
model2 = load_model('model/strong_S1P2_NNclf_41dim.h5')
ans1 = model1.predict(test_X)
ans2 = model2.predict(test_X)
ans_NN = (ans1+ans2)/2
df_nn = pd.DataFrame(np.argmax(ans_NN,axis=1),columns=['nn1'])
df_nn['nn1_ch'] = df_nn['nn1'].map(reverse_dict)
df_nn = df_nn.filter(['nn1_ch'])
df_nn
```
## Merge Prediction
```
# 1 => nn => 2 => 3
def que(x): # decide fin2
if x['nn1_ch'] == x['1'] :
return x['2']
else:
return x['nn1_ch']
def que2(x): # decide fin3
if x['nn1_ch'] == x['1'] :
return x['3']
else:
return x['2']
# better variant: redefining que/que2 below overrides the versions above
# nn1_ch => 1 => 2 => 3
def que(x): # decide fin2
if x['nn1_ch'] == x['1'] :
return x['2']
else:
return x['1']
def que2(x): # decide fin3
if x['nn1_ch'] == x['1'] :
return x['3']
else:
return x['2']
df_ans = pd.merge(df_ens,df_nn,how='inner',left_index=True,right_index=True)
df_ans['fin2'] = df_ans.apply(que,axis=1)
df_ans['fin3'] = df_ans.apply(que2,axis=1)
df_ans['final'] = df_ans['nn1_ch']+' '+df_ans['fin2']+' '+df_ans['fin3'] # nn=>1=>2=>3
# df_ans['final'] = df_ans['1']+' '+df_ans['fin2']+' '+df_ans['fin3'] # 1=>nn=>2=>3
dfF = df_ans.filter(['fname','final'])
dfF.columns = ['fname','label']
dfF
dfF.to_csv('result/ans_2nnclf41dim_ori23.csv',index=False)
```
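To make the merge rule above concrete, here is a toy illustration with hypothetical labels: the NN's top choice leads, and the ensemble's top-3 fill the remaining two slots, skipping a duplicate of the NN pick.

```python
import pandas as pd

# Hypothetical ensemble top-3 ('1','2','3') and NN top-1 ('nn1_ch') per sample
df = pd.DataFrame({
    '1': ['dog', 'cat'], '2': ['cat', 'bird'], '3': ['bird', 'dog'],
    'nn1_ch': ['dog', 'fish'],
})

def que(x):   # second label: skip '1' if the NN already picked it
    return x['2'] if x['nn1_ch'] == x['1'] else x['1']

def que2(x):  # third label: shift accordingly
    return x['3'] if x['nn1_ch'] == x['1'] else x['2']

df['final'] = df['nn1_ch'] + ' ' + df.apply(que, axis=1) + ' ' + df.apply(que2, axis=1)
print(df['final'].tolist())  # ['dog cat bird', 'fish cat bird']
```

When NN and ensemble agree on the top label, the output is just the ensemble's top-3; when they disagree, the NN pick is prepended and the ensemble's third choice is dropped.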
<h1>Basic structures to use when reading RH output</h1>
This notebook showcases how to read some useful output of a 1D RH run using Han's rhanalyze Python module.
```
# some imports
import numpy as np
#import rhanalyze
import sys
sys.path.append('/Users/gcauzzi/Level2/WFA_June2021_workshop/RH/python/')
from rhanalyze.rhatmos import input_atmos
import matplotlib.pyplot as plt
# Path to an RH ouput directory. ".... rhf1d/run" is the default one but you can pick any other
rhoutput ='/Users/gcauzzi/Level2/WFA_June2021_workshop/RH/rhf1d/run'
```
<h1>Read RH output directory</h1>
```
# We are importing this again, just to be sure
import sys
sys.path.append('/Users/gcauzzi/Level2/WFA_June2021_workshop/RH/python/')
import numpy as np
import rhanalyze
from rhanalyze.rhatmos import input_atmos
import matplotlib.pyplot as plt
# Read all the contents of the output directory into the "falc" structure.
# Note that the directory contains BOTH input & output of the RH synthesis
# Of course, you can name this variable whatever you want.
falc = rhanalyze.rhout(rhoutput)
# See what's inside "falc" structure
print(dir(falc))
```
<h1>Geometry</h1>
```
# Geometry
print('What is in falc.geometry?')
print()
print(dir(falc.geometry))
print()
height = falc.geometry.height # height scale in m
tau500 = falc.geometry.tau500 # optical depth tau_500 scale
cmass = falc.geometry.cmass # column mass in kg m^-2
print('RH output path: ', falc.rhdir)
print('Number of rays: ', falc.geometry.Nrays)
print(' Mu values: ', falc.geometry.xmu)
print('Number of height points: ', falc.geometry.Ndep)
print(' Height of index 0: '+ format(height[0],'4.2f') + ' m')
print(' Height of index -1: '+ format(height[-1],'4.2f') + ' m')
#Let's plot a couple of variables:
fig, ax = plt.subplots(1,2, figsize = (10, 3))
ax[0].plot(np.log10(tau500), height/1000., linewidth=2.0) # height in km
ax[1].plot(np.log10(tau500), cmass, linewidth=2.0)
ax[0].set_xlabel("$\\log \\tau_{500}$")
ax[0].set_ylabel("Height (km)")
ax[1].set_xlabel("$\\log \\tau_{500}$")
ax[1].set_ylabel("Column mass (kg m$^{-2}$)")
```
$\bf Note:$ 1. Larger geometrical heights correspond to smaller optical depths; 2. the lowest height in the model might be negative, i.e. below the surface as defined by $\tau_{500} = 1$; 3. Column mass increases with depth. These variables are part of the *input* atmosphere.
<h1>Atmos</h1>
```
# Atmosphere
print('What is in falc.atmos?')
print()
print(dir(falc.atmos))
print()
print('Number of elements: falc.atmos.Nelem = ', falc.atmos.Nelem)
print('Number of Hydrogen levels: falc.atmos.NHydr = ', falc.atmos.NHydr)
print('Hydrogen densities for NHydr levels: falc.atmos.nH')
print(' Hydrogen populations for ground level: falc.atmos.nH[:,0]')
print(' Ionized Hydrogen populations: falc.atmos.nH[:,5]')
print('Stokes (true or false): falc.atmos.stokes = ', falc.atmos.stokes)
print('Temperature (Kelvin): falc.atmos.T')
print('Electron density (m^-3): falc.atmos.n_elec')
print('Magnetic field strength (Tesla): falc.atmos.B')
print('Magnetic field inclination (radians): falc.atmos.gamma_B')
print('Magnetic field azimuth (radians): falc.atmos.chi_B')
print('Microturbulent velocity (m/s): falc.atmos.vturb')
#Again, let's plot some variables
fig, ax = plt.subplots(1,4, figsize = (20, 3))
ax[0].plot( height/1000.,falc.atmos.T, linewidth=2.0)
ax[1].plot(height/1000.,np.log10(falc.atmos.n_elec), linewidth=2.0)
ax[2].plot(height/1000.,falc.atmos.vturb/1000., linewidth=2.0)# km/s
ax[3].plot(height/1000.,falc.atmos.B*10000., linewidth=2.0)# in Gauss
for jj in range(4):
ax[jj].set_xlabel('height (km)')
ax[0].set_ylim(4000, 9000)
ax[0].set_ylabel('T (K)')
ax[1].set_ylabel('log n$_e$ (m$^{-3}$)')
ax[2].set_ylabel('microturbulence (km s$^{-1}$)')
ax[3].set_ylabel('B (Gauss)')
```
$\bf Note:$ 1. Height now runs left to right; 2. The temperature plot shows the classical rise above ~500 km; 3. Electron density stays ~ constant in the chromosphere (even though the total density decreases by orders of magnitude!); 4. Microturbulence increases with height; 5. In the model used here, the magnetic field is constant throughout the whole atmosphere, at 500 G.
<h1>Spectrum</h1>
```
# Finally, we look at the resulting spectrum. In this test run, only the CaII lines were calculated (resonance H&K,
# and infrared triplet), while during the workshop we'll work also with Hydrogen and Sodium lines.
# The number of wavelength points for this default case is 291
# Let's look at them:
waves=falc.spectrum.waves
print('waves=', waves)
# A plot might be better ...
fig = plt.figure(figsize=(9, 5))
plt.plot(waves, linewidth=2.0)
plt.plot(waves, '+', markersize=10)
plt.ylabel("Wavelength (nm)")
```
$\bf Note:$ In this case, the wavelength points are not regularly distributed throughout the spectrum, but are clustered around the lines of interest (CaII H & K at ~ 400 nm; and CaII triplet at ~ 850 nm).
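This clustering is easy to see by inspecting the spacing between consecutive wavelength points. A small sketch below; `toy` is a synthetic stand-in for the `waves` array loaded above:

```python
import numpy as np

# Summarize the spacing of a wavelength grid: a clustered grid has tiny
# steps inside the line cores and large jumps between spectral regions.
def grid_spacing_summary(waves):
    dw = np.diff(np.sort(waves))
    return dw.min(), np.median(dw), dw.max()

# Toy grid clustered around a line at 854.2 nm (stand-in for falc.spectrum.waves)
toy = np.concatenate([np.linspace(850.0, 853.0, 10),
                      np.linspace(854.0, 854.4, 50),
                      np.linspace(856.0, 860.0, 10)])
mn, med, mx = grid_spacing_summary(toy)
print(mn, med, mx)  # the median step is far smaller than the largest jump
```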
<h1>Rays</h1>
```
# Finally, the resulting spectrum. The units are intensity, i.e. J / s / m^2 / sr / Hz
print('What is in falc.rays[0]:')
print(dir(falc.rays[0]))
print()
print('falc.rays contains the Stokes parameters for the observing geometry specified in ray.input.')
print('ray.input is a configuration file of RH where you request the formal')
print('solution to the RTE for an observing geometry of interest. This is often mu=1.0, i.e. disk center')
print()
print('mu value = falc.rays[0].muz = ', falc.rays[0].muz)
print('I = falc.rays[0].I ')
print('Q = falc.rays[0].Q ')
print('U = falc.rays[0].U ')
print('V = falc.rays[0].V ')
print()
print('I = ', falc.rays[0].I)
print('Q = ', falc.rays[0].Q)
print('U = ', falc.rays[0].U)
print('V = ', falc.rays[0].V)
# A plot of the whole spectrum:
fig = plt.figure(figsize=(9, 5))
plt.plot(waves, falc.rays[0].I, linewidth=2.0)
plt.plot(waves, falc.rays[0].I, '+', markersize=10)
plt.ylabel("Intensity")
```
$\bf Note:$ From the plot you can see which spectral lines were calculated.
```
# Now let's look at the lines.
# Let's start with the most useful line of all, CaII 854.2 nm
fig, ax = plt.subplots(1,4, figsize = (20, 5))
I=falc.rays[0].I
Q=falc.rays[0].Q
U=falc.rays[0].U
V=falc.rays[0].V
ax[0].plot( waves,I,linewidth=2.0)
ax[1].plot(waves,Q, linewidth=2.0)
ax[2].plot(waves, U, linewidth=2.0)
ax[3].plot(waves, V, linewidth=2.0)
for jj in range(4):
    ax[jj].set_xlabel('wavelength (nm)')
    ax[jj].set_xlim(853.8, 854.6) # here you select the wavelength range
#ax[0].set_ylim(0,1./1.e10)
ax[0].set_ylabel('Stokes I')
ax[1].set_ylabel('Stokes Q')
ax[2].set_ylabel('Stokes U')
ax[3].set_ylabel('Stokes V')
plt.tight_layout()
```
$\bf Note:$ 1. The line is fairly broad, but only the core (below I ~ 2) is chromospheric; 2. We ran a model with an inclined magnetic field, so also Q and U are non-zero; 3. The amplitude of V is ~ 2% of the continuum intensity, but Q and U are an order of magnitude smaller. Let's take a look at the core:
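The ~2% figure can be checked by comparing the peak |V| against the continuum intensity. A sketch with a synthetic antisymmetric profile; the real check would use `falc.rays[0].V` and a continuum value read off away from the line:

```python
import numpy as np

# Peak amplitude of a Stokes profile relative to the continuum intensity.
def relative_amplitude(stokes, i_continuum):
    return np.max(np.abs(stokes)) / i_continuum

# Synthetic V profile built with a 2% amplitude (continuum normalized to 1)
wav = np.linspace(-1.0, 1.0, 201)
line = 1.0 - 0.8 * np.exp(-(wav / 0.1) ** 2)   # absorption-line shape
lobes = np.gradient(line, wav)                 # antisymmetric lobes
V = 0.02 * lobes / np.max(np.abs(lobes))

print(relative_amplitude(V, i_continuum=1.0))  # 0.02, i.e. 2% of the continuum
```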
```
#
# Let's zoom in on the line core:
fig, ax = plt.subplots(1,4, figsize = (20, 5))
I=falc.rays[0].I
Q=falc.rays[0].Q
U=falc.rays[0].U
V=falc.rays[0].V
ax[0].plot( waves,I,linewidth=2.0)
ax[1].plot(waves,Q, linewidth=2.0)
ax[2].plot(waves, U, linewidth=2.0)
ax[3].plot(waves, V, linewidth=2.0)
for jj in range(4):
    ax[jj].set_xlabel('wavelength (nm)')
    ax[jj].set_xlim(854.1, 854.32) # here you select the wavelength range
#ax[0].set_ylim(0,1./1.e10)
ax[0].set_ylabel('Stokes I')
ax[1].set_ylabel('Stokes Q')
ax[2].set_ylabel('Stokes U')
ax[3].set_ylabel('Stokes V')
plt.tight_layout()
```
Many details! Now you could repeat the exercise with other lines: CaII K = 393.3 nm; CaII H = 396.8 nm; and the CaII infrared triplet = 849.8, 854.2 (the one we just did), and 866.2 nm.
```
# For example, let's look at CaII K
fig, ax = plt.subplots(1,4, figsize = (20, 5))
I=falc.rays[0].I
Q=falc.rays[0].Q
U=falc.rays[0].U
V=falc.rays[0].V
ax[0].plot( waves,I,linewidth=2.0)
ax[1].plot(waves,Q, linewidth=2.0)
ax[2].plot(waves, U, linewidth=2.0)
ax[3].plot(waves, V, linewidth=2.0)
for jj in range(4):
    ax[jj].set_xlabel('wavelength (nm)')
    ax[jj].set_xlim(393.2, 393.5) # here you select the wavelength range
#ax[0].set_ylim(0,1./1.e10)
ax[0].set_ylabel('Stokes I')
ax[1].set_ylabel('Stokes Q')
ax[2].set_ylabel('Stokes U')
ax[3].set_ylabel('Stokes V')
plt.tight_layout()
```
$\bf Note:$ While the relative amplitude of the Stokes parameters (in %) is similar to that of CaII 854.2 nm, the absolute intensity is almost an order of magnitude lower!
```
import cartopy
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111, projection=cartopy.crs.Mollweide())
ax.coastlines()
fig = plt.figure()
ax = fig.add_subplot(111, projection=cartopy.crs.Mollweide(central_longitude=180))
ax.coastlines()
fig = plt.figure()
ax = fig.add_subplot(111, projection=cartopy.crs.Orthographic())
ax.coastlines()
fig = plt.figure()
ax = fig.add_subplot(111, projection=cartopy.crs.PlateCarree())
ax.coastlines()
ax.gridlines()
c_lat, c_lon = 40.1164, -88.2434
a_lat, a_lon = -18.8792, 47.5079
fig = plt.figure(dpi=150)
ax = fig.add_subplot(111, projection=cartopy.crs.PlateCarree())
ax.scatter([c_lon, a_lon], [c_lat, a_lat])
ax.coastlines()
ax.gridlines()
ax.set_global()
c_lat, c_lon = 40.1164, -88.2434
a_lat, a_lon = -18.8792, 47.5079
fig = plt.figure(dpi=150)
ax = fig.add_subplot(111, projection=cartopy.crs.Mollweide())
ax.plot([c_lon, a_lon], [c_lat, a_lat], transform=cartopy.crs.PlateCarree())
ax.plot([c_lon, a_lon], [c_lat, a_lat], transform=cartopy.crs.Geodetic())
ax.coastlines()
ax.gridlines()
ax.set_global()
import numpy as np
vals = np.random.random((128, 128))
plt.imshow(vals)
c_lat, c_lon = 40.1164, -88.2434
a_lat, a_lon = -18.8792, 47.5079
fig = plt.figure(dpi=150)
ax = fig.add_subplot(111, projection=cartopy.crs.Mollweide())
ax.plot([c_lon, a_lon], [c_lat, a_lat], transform=cartopy.crs.PlateCarree())
ax.plot([c_lon, a_lon], [c_lat, a_lat], transform=cartopy.crs.Geodetic())
ax.imshow(vals, extent = [-60, -30, 30, 60], transform = cartopy.crs.PlateCarree())
ax.coastlines()
ax.gridlines()
ax.set_global()
import pandas as pd
!rm -f us-counties.csv ; wget https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv
!rm -f us-states.csv ; wget https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv
states = pd.read_csv("us-states.csv", parse_dates = ["date"])
import bqplot
proj = bqplot.AlbersUSA()
mark = bqplot.Map(map_data = bqplot.topo_load("map_data/USStatesMap.json"),
scales = {'projection': proj})
fig = bqplot.Figure(marks = [mark])
display(fig)
case_counts = states.groupby("fips")["cases"].max().to_dict()
proj = bqplot.AlbersUSA()
color_sc = bqplot.ColorScale(scheme = "viridis")
color_ax = bqplot.ColorAxis(scale = color_sc, label = 'Case Count')
mark = bqplot.Map(map_data = bqplot.topo_load("map_data/USStatesMap.json"),
scales = {'projection': proj, 'color': color_sc},
color = case_counts)
fig = bqplot.Figure(marks = [mark], axes = [color_ax])
display(fig)
total_cases = states.groupby("date").sum()
total_cases["cases"]
x_sc = bqplot.DateScale()
y_sc = bqplot.LogScale()
x_ax = bqplot.Axis(scale = x_sc)
y_ax = bqplot.Axis(scale = y_sc, orientation='vertical')
lines = bqplot.Lines(x = total_cases.index, y = total_cases["cases"],
scales = {'x': x_sc, 'y': y_sc})
interval_selector = bqplot.interacts.FastIntervalSelector(scale = x_sc)
fig = bqplot.Figure(marks = [lines], axes = [x_ax, y_ax], interaction = interval_selector)
display(fig)
interval_selector.selected
total_cases.loc[interval_selector.selected[0]:interval_selector.selected[1]]
```
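The two `ax.plot` calls on the Mollweide map above differ only in the `transform`: `PlateCarree` draws a straight segment in longitude/latitude space, while `Geodetic` follows the great circle. As a rough check of what that great-circle path represents, here is a haversine sketch of the distance between the two points plotted above (a spherical Earth of radius 6371 km is assumed):

```python
import numpy as np

# Great-circle (haversine) distance on a spherical Earth.
def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = np.radians(lat2 - lat1)
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * radius_km * np.arcsin(np.sqrt(a))

# Champaign, IL to Antananarivo -- the two points scattered above
print(haversine_km(40.1164, -88.2434, -18.8792, 47.5079))  # roughly 15,000 km
```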
# Training Neural Networks
The network we built in the previous part isn't so smart: it doesn't know anything about our handwritten digits. Neural networks with non-linear activations work like universal function approximators. There is some function that maps your input to the output. For example, images of handwritten digits to class probabilities. The power of neural networks is that we can train them to approximate this function, and in fact basically any function, given enough data and compute time.
<img src="assets/function_approx.png" width=500px>
At first the network is naive, it doesn't know the function mapping the inputs to the outputs. We train the network by showing it examples of real data, then adjusting the network parameters such that it approximates this function.
To find these parameters, we need to know how poorly the network is predicting the real outputs. For this we calculate a **loss function** (also called the cost), a measure of our prediction error. For example, the mean squared loss is often used in regression and binary classification problems
$$
\large \ell = \frac{1}{2n}\sum_i^n{\left(y_i - \hat{y}_i\right)^2}
$$
where $n$ is the number of training examples, $y_i$ are the true labels, and $\hat{y}_i$ are the predicted labels.
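Written out directly, the loss above is just a few lines (a sketch; the label values here are made up for illustration):

```python
import numpy as np

# Mean squared loss exactly as in the formula above: (1 / 2n) * sum((y - y_hat)^2)
def mse_loss(y_true, y_pred):
    n = len(y_true)
    return np.sum((y_true - y_pred) ** 2) / (2 * n)

y = np.array([1.0, 0.0, 1.0, 1.0])       # true labels (illustrative)
y_hat = np.array([0.9, 0.2, 0.8, 0.6])   # predicted labels (illustrative)
print(mse_loss(y, y_hat))  # (0.01 + 0.04 + 0.04 + 0.16) / 8 = 0.03125
```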
By minimizing this loss with respect to the network parameters, we can find configurations where the loss is at a minimum and the network is able to predict the correct labels with high accuracy. We find this minimum using a process called **gradient descent**. The gradient is the slope of the loss function and points in the direction of fastest change. To get to the minimum in the least amount of time, we then want to follow the gradient (downwards). You can think of this like descending a mountain by following the steepest slope to the base.
<img src='assets/gradient_descent.png' width=350px>
## Backpropagation
For single layer networks, gradient descent is straightforward to implement. However, it's more complicated for deeper, multilayer neural networks like the one we've built. Complicated enough that it took about 30 years before researchers figured out how to train multilayer networks.
Training multilayer networks is done through **backpropagation** which is really just an application of the chain rule from calculus. It's easiest to understand if we convert a two layer network into a graph representation.
<img src='assets/backprop_diagram.png' width=550px>
In the forward pass through the network, our data and operations go from bottom to top here. We pass the input $x$ through a linear transformation $L_1$ with weights $W_1$ and biases $b_1$. The output then goes through the sigmoid operation $S$ and another linear transformation $L_2$. Finally we calculate the loss $\ell$. We use the loss as a measure of how bad the network's predictions are. The goal then is to adjust the weights and biases to minimize the loss.
To train the weights with gradient descent, we propagate the gradient of the loss backwards through the network. Each operation has some gradient between the inputs and outputs. As we send the gradients backwards, we multiply the incoming gradient with the gradient for the operation. Mathematically, this is really just calculating the gradient of the loss with respect to the weights using the chain rule.
$$
\large \frac{\partial \ell}{\partial W_1} = \frac{\partial L_1}{\partial W_1} \frac{\partial S}{\partial L_1} \frac{\partial L_2}{\partial S} \frac{\partial \ell}{\partial L_2}
$$
**Note:** I'm glossing over a few details here that require some knowledge of vector calculus, but they aren't necessary to understand what's going on.
We update our weights using this gradient with some learning rate $\alpha$.
$$
\large W^\prime_1 = W_1 - \alpha \frac{\partial \ell}{\partial W_1}
$$
The learning rate $\alpha$ is set such that the weight update steps are small enough that the iterative method settles in a minimum.
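The update rule above, applied once to a small weight matrix (the weight and gradient values are illustrative, not from a real network):

```python
import numpy as np

alpha = 0.01                                  # learning rate
W1 = np.array([[0.5, -0.3], [0.2, 0.7]])      # current weights (illustrative)
grad = np.array([[0.1, -0.2], [0.05, 0.3]])   # dl/dW1 (illustrative)

# W1' = W1 - alpha * dl/dW1
W1_new = W1 - alpha * grad
print(W1_new)
```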
## Losses in PyTorch
Let's start by seeing how we calculate the loss with PyTorch. Through the `nn` module, PyTorch provides losses such as the cross-entropy loss (`nn.CrossEntropyLoss`). You'll usually see the loss assigned to `criterion`. As noted in the last part, with a classification problem such as MNIST, we're using the softmax function to predict class probabilities. With a softmax output, you want to use cross-entropy as the loss. To actually calculate the loss, you first define the criterion then pass in the output of your network and the correct labels.
Something really important to note here. Looking at [the documentation for `nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss),
> This criterion combines `nn.LogSoftmax()` and `nn.NLLLoss()` in one single class.
>
> The input is expected to contain scores for each class.
This means we need to pass in the raw output of our network into the loss, not the output of the softmax function. This raw output is usually called the *logits* or *scores*. We use the logits because softmax gives you probabilities which will often be very close to zero or one but floating-point numbers can't accurately represent values near zero or one ([read more here](https://docs.python.org/3/tutorial/floatingpoint.html)). It's usually best to avoid doing calculations with probabilities, typically we use log-probabilities.
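A small NumPy illustration of the numerical point: with a large gap between scores, the softmax probability underflows to exactly zero, while the log-probability stays finite.

```python
import numpy as np

scores = np.array([0.0, -800.0])

# Naive softmax: the small probability underflows to exactly 0.0
probs = np.exp(scores) / np.sum(np.exp(scores))
print(probs)

# Log-softmax computed stably via the max-shift trick: stays finite (0 and -800)
shifted = scores - scores.max()
log_probs = shifted - np.log(np.sum(np.exp(shifted)))
print(log_probs)
```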
```
import torch
from torch import nn
import torch.nn.functional as F
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,)),  # MNIST images have a single channel
                               ])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10))
# Define the loss
criterion = nn.CrossEntropyLoss()
# Get our data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)
# Forward pass, get our logits
logits = model(images)
# Calculate the loss with the logits and the labels
loss = criterion(logits, labels)
print(loss)
```
In my experience it's more convenient to build the model with a log-softmax output using `nn.LogSoftmax` or `F.log_softmax` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LogSoftmax)). Then you can get the actual probabilities by taking the exponential `torch.exp(output)`. With a log-softmax output, you want to use the negative log likelihood loss, `nn.NLLLoss` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.NLLLoss)).
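A quick sanity check of this pattern (a sketch with random logits): exponentiating a log-softmax output recovers proper probabilities.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(2, 5)            # raw scores for 2 samples, 5 classes

logps = F.log_softmax(logits, dim=1)  # log-probabilities
ps = torch.exp(logps)                 # actual probabilities

print(ps.sum(dim=1))  # each row sums to 1 (up to float precision)
```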
>**Exercise:** Build a model that returns the log-softmax as the output and calculate the loss using the negative log likelihood loss.
```
## Solution
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
# Define the loss
criterion = nn.NLLLoss()
# Get our data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)
# Forward pass, get our log-probabilities
logps = model(images)
# Calculate the loss with the logps and the labels
loss = criterion(logps, labels)
print(loss)
```
## Autograd
Now that we know how to calculate a loss, how do we use it to perform backpropagation? Torch provides a module, `autograd`, for automatically calculating the gradients of tensors. We can use it to calculate the gradients of all our parameters with respect to the loss. Autograd works by keeping track of operations performed on tensors, then going backwards through those operations, calculating gradients along the way. To make sure PyTorch keeps track of operations on a tensor and calculates the gradients, you need to set `requires_grad = True` on a tensor. You can do this at creation with the `requires_grad` keyword, or at any time with `x.requires_grad_(True)`.
You can turn off gradients for a block of code with the `torch.no_grad()` context:
```python
>>> x = torch.zeros(1, requires_grad=True)
>>> with torch.no_grad():
... y = x * 2
>>> y.requires_grad
False
```
Also, you can turn on or off gradients altogether with `torch.set_grad_enabled(True|False)`.
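A minimal sketch of both switches (note that `set_grad_enabled` changes global state, so it should be restored afterwards):

```python
import torch

x = torch.ones(3, requires_grad=True)

with torch.no_grad():          # per-block switch
    y = x * 2
print(y.requires_grad)         # False

torch.set_grad_enabled(False)  # global switch
z = x * 2
torch.set_grad_enabled(True)   # restore the default
print(z.requires_grad)         # False

w = x * 2
print(w.requires_grad)         # True
```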
The gradients are computed with respect to some variable `z` with `z.backward()`. This does a backward pass through the operations that created `z`.
```
x = torch.randn(2,2, requires_grad=True)
print(x)
y = x**2
print(y)
```
Below we can see the operation that created `y`, a power operation `PowBackward0`.
```
## grad_fn shows the function that generated this variable
print(y.grad_fn)
```
The autograd module keeps track of these operations and knows how to calculate the gradient for each one. In this way, it's able to calculate the gradients for a chain of operations, with respect to any one tensor. Let's reduce the tensor `y` to a scalar value, the mean.
```
z = y.mean()
print(z)
```
You can check the gradients for `x` and `y` but they are empty currently.
```
print(x.grad)
```
To calculate the gradients, you need to run the `.backward` method on a tensor, `z` for example. This will calculate the gradient of `z` with respect to `x`:
$$
\frac{\partial z}{\partial x} = \frac{\partial}{\partial x}\left[\frac{1}{n}\sum_i^n x_i^2\right] = \frac{x}{2}
$$
```
z.backward()
print(x.grad)
print(x/2)
```
These gradient calculations are particularly useful for neural networks. For training we need the gradients of the weights with respect to the cost. With PyTorch, we run data forward through the network to calculate the loss, then go backwards to calculate the gradients with respect to the loss. Once we have the gradients, we can make a gradient descent step.
## Loss and Autograd together
When we create a network with PyTorch, all of the parameters are initialized with `requires_grad = True`. This means that when we calculate the loss and call `loss.backward()`, the gradients for the parameters are calculated. These gradients are used to update the weights with gradient descent. Below you can see an example of calculating the gradients using a backwards pass.
```
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
images, labels = next(iter(trainloader))
images = images.view(images.shape[0], -1)
logps = model(images)
loss = criterion(logps, labels)
print('Before backward pass: \n', model[0].weight.grad)
loss.backward()
print('After backward pass: \n', model[0].weight.grad)
```
## Training the network!
There's one last piece we need to start training, an optimizer that we'll use to update the weights with the gradients. We get these from PyTorch's [`optim` package](https://pytorch.org/docs/stable/optim.html). For example we can use stochastic gradient descent with `optim.SGD`. You can see how to define an optimizer below.
```
from torch import optim
# Optimizers require the parameters to optimize and a learning rate
optimizer = optim.SGD(model.parameters(), lr=0.01)
```
Now we know how to use all the individual parts so it's time to see how they work together. Let's consider just one learning step before looping through all the data. The general process with PyTorch:
* Make a forward pass through the network
* Use the network output to calculate the loss
* Perform a backward pass through the network with `loss.backward()` to calculate the gradients
* Take a step with the optimizer to update the weights
Below I'll go through one training step and print out the weights and gradients so you can see how it changes. Note that I have a line of code `optimizer.zero_grad()`. When you do multiple backwards passes with the same parameters, the gradients are accumulated. This means that you need to zero the gradients on each training pass or you'll retain gradients from previous training batches.
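The accumulation behaviour is easy to see on a single tensor (a sketch, independent of the model above):

```python
import torch

# Gradients from repeated backward passes are summed into .grad, not replaced.
w = torch.tensor([2.0], requires_grad=True)

(w * 3).sum().backward()
print(w.grad)          # tensor([3.])

(w * 3).sum().backward()
print(w.grad)          # tensor([6.]) -- accumulated, not overwritten

w.grad.zero_()         # what optimizer.zero_grad() does for each parameter
print(w.grad)          # tensor([0.])
```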
```
print('Initial weights - ', model[0].weight)
images, labels = next(iter(trainloader))
images.resize_(64, 784)
# Clear the gradients, do this because gradients are accumulated
optimizer.zero_grad()
# Forward pass, then backward pass, then update weights
output = model(images)
loss = criterion(output, labels)
loss.backward()
print('Gradient -', model[0].weight.grad)
# Take an update step and view the new weights
optimizer.step()
print('Updated weights - ', model[0].weight)
```
### Training for real
Now we'll put this algorithm into a loop so we can go through all the images. Some nomenclature: one pass through the entire dataset is called an *epoch*. So here we're going to loop through `trainloader` to get our training batches. For each batch, we'll do a training pass where we calculate the loss, do a backwards pass, and update the weights.
> **Exercise:** Implement the training pass for our network. If you implemented it correctly, you should see the training loss drop with each epoch.
```
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)
epochs = 5
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        # Flatten MNIST images into a 784 long vector
        images = images.view(images.shape[0], -1)

        # TODO: Training pass
        optimizer.zero_grad()

        output = model(images)
        loss = criterion(output, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
    else:
        print(f"Training loss: {running_loss/len(trainloader)}")
```
With the network trained, we can check out its predictions.
```
%matplotlib inline
import helper
images, labels = next(iter(trainloader))
img = images[0].view(1, 784)
# Turn off gradients to speed up this part
with torch.no_grad():
    logps = model(img)
# Output of the network are log-probabilities, need to take exponential for probabilities
ps = torch.exp(logps)
helper.view_classify(img.view(1, 28, 28), ps)
```
Now our network is brilliant. It can accurately predict the digits in our images. Next up you'll write the code for training a neural network on a more complex dataset.
# Sequential Variational Autoencoders for Collaborative Filtering
**Noveen Sachdeva, Giuseppe Manco, Ettore Ritacco, and Vikram Pudi** - *12th International ACM Conference on Web Search and Data Mining - WSDM '19*
The notebook provides PyTorch code for the proposed model, "SVAE", along with the data preprocessing for the MovieLens-1M dataset.
# Imports
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import os
import sys  # used by split_train_test_proportion below (sys.stdout.flush)
import time
import json
import pickle
import random
import functools
import numpy as np
import pandas as pd
from tqdm import tqdm
import datetime as dt
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
```
# Hyper Parameters
```
### change `DATA_DIR` to the location where the dataset sits
### compatible datasets: ML-1M, Netflix-full
hyper_params = {
'data_base': 'saved_data/ml-1m/', # Don't remove the '/' at the end please :)
'project_name': 'svae_ml1m',
# 'data_base': 'saved_data/netflix-full/',
# 'project_name': 'svae_netflix_full',
'model_file_name': '',
'log_file': '',
'history_split_test': [0.8, 0.2], # Part of test history to train on : Part of test history to test
'learning_rate': 0.01, # learning rate is required only if optimizer is adagrad
'optimizer': 'adam',
'weight_decay': float(5e-3),
'epochs': 25,
'batch_size': 1, # Needs to be 1, because we don't pack multiple sequences in the same batch
'item_embed_size': 256,
'rnn_size': 200,
'hidden_size': 150,
'latent_size': 64,
'loss_type': 'next_k', # [predict_next, same, prefix, postfix, exp_decay, next_k]
'next_k': 4,
'number_users_to_keep': 1000000000,
'batch_log_interval': 1000,
'train_cp_users': 200,
'exploding_clip': 0.25,
}
file_name = '_optimizer_' + str(hyper_params['optimizer'])
if hyper_params['optimizer'] == 'adagrad':
    file_name += '_lr_' + str(hyper_params['learning_rate'])
file_name += '_weight_decay_' + str(hyper_params['weight_decay'])
file_name += '_loss_type_' + str(hyper_params['loss_type'])
file_name += '_item_embed_size_' + str(hyper_params['item_embed_size'])
file_name += '_rnn_size_' + str(hyper_params['rnn_size'])
file_name += '_latent_size_' + str(hyper_params['latent_size'])
log_file_root = "saved_logs/" # Don't remove the '/' at the end please :)
model_file_root = "saved_models/" # Don't remove the '/' at the end please :)
if not os.path.isdir(log_file_root): os.mkdir(log_file_root)
if not os.path.isdir(model_file_root): os.mkdir(model_file_root)
hyper_params['log_file'] = log_file_root + hyper_params['project_name'] + '_log' + file_name + '.txt'
hyper_params['model_file_name'] = model_file_root + hyper_params['project_name'] + '_model' + file_name + '.pt'
```
# Data Preprocessing
**Courtesy:** Dawen Liang et al. "*Variational autoencoders for collaborative filtering*" published at WWW '18. <br>
**Link:** https://github.com/dawenl/vae_cf
```
DATA_DIR = hyper_params['data_base']
pro_dir = os.path.join(DATA_DIR, 'pro_sg') # Path where preprocessed data will be saved
hyper_params['data_base'] += 'pro_sg/'
if not os.path.isdir(pro_dir): # We don't want to keep preprocessing every time we run the notebook
    cols = ['userId', 'movieId', 'rating', 'timestamp']
    dtypes = {'userId': 'int', 'movieId': 'int', 'timestamp': 'int', 'rating': 'int'}
    # The multi-character separator '::' needs the python parsing engine
    raw_data = pd.read_csv(os.path.join(DATA_DIR, 'ratings.dat'), sep='::', engine='python',
                           names=cols, parse_dates=['timestamp'])

    max_seq_len = 1000
    n_heldout_users = 750 # If total users = N; train_users = N - 2*heldout; test_users & val_users = heldout

    # binarize the data (only keep ratings >= 4)
    raw_data = raw_data[raw_data['rating'] > 3.5]

    # Remove users with greater than $max_seq_len number of watched movies
    raw_data = raw_data.groupby(["userId"]).filter(lambda x: len(x) <= max_seq_len)

    # Sort data values with the timestamp
    raw_data = raw_data.groupby(["userId"]).apply(lambda x: x.sort_values(["timestamp"], ascending = True)).reset_index(drop=True)

    raw_data.head()
def get_count(tp, id):
    playcount_groupbyid = tp[[id]].groupby(id, as_index=False)
    count = playcount_groupbyid.size()
    return count
def filter_triplets(tp, min_uc=5, min_sc=0):
    # Only keep the triplets for items which were clicked on by at least min_sc users.
    if min_sc > 0:
        itemcount = get_count(tp, 'movieId')
        tp = tp[tp['movieId'].isin(itemcount.index[itemcount >= min_sc])]

    # Only keep the triplets for users who clicked on at least min_uc items
    # After doing this, some of the items will have less than min_uc users, but should only be a small proportion
    if min_uc > 0:
        usercount = get_count(tp, 'userId')
        tp = tp[tp['userId'].isin(usercount.index[usercount >= min_uc])]

    # Update both usercount and itemcount after filtering
    usercount, itemcount = get_count(tp, 'userId'), get_count(tp, 'movieId')
    return tp, usercount, itemcount
def split_train_test_proportion(data, test_prop=0.2):
    data_grouped_by_user = data.groupby('userId')
    tr_list, te_list = list(), list()

    np.random.seed(98765)

    for i, (_, group) in enumerate(data_grouped_by_user):
        n_items_u = len(group)

        if n_items_u >= 5:
            idx = np.zeros(n_items_u, dtype='bool')
            # idx[np.random.choice(n_items_u, size=int(test_prop * n_items_u), replace=False).astype('int64')] = True
            idx[int((1.0 - test_prop) * n_items_u):] = True
            tr_list.append(group[np.logical_not(idx)])
            te_list.append(group[idx])
        else:
            tr_list.append(group)

        if i % 1000 == 0:
            print("%d users sampled" % i)
            sys.stdout.flush()

    data_tr = pd.concat(tr_list)
    data_te = pd.concat(te_list)

    return data_tr, data_te
def numerize(tp):
    uid = list(map(lambda x: profile2id[x], tp['userId']))
    sid = list(map(lambda x: show2id[x], tp['movieId']))
    ra = list(map(lambda x: x, tp['rating']))
    ret = pd.DataFrame(data={'uid': uid, 'sid': sid, 'rating': ra}, columns=['uid', 'sid', 'rating'])
    ret['rating'] = ret['rating'].apply(pd.to_numeric)
    return ret
if not os.path.isdir(pro_dir): # We don't want to keep preprocessing every time we run the notebook
    raw_data, user_activity, item_popularity = filter_triplets(raw_data)

    sparsity = 1. * raw_data.shape[0] / (user_activity.shape[0] * item_popularity.shape[0])
    print("After filtering, there are %d watching events from %d users and %d movies (sparsity: %.3f%%)" %
          (raw_data.shape[0], user_activity.shape[0], item_popularity.shape[0], sparsity * 100))

    unique_uid = user_activity.index
    np.random.seed(98765)
    idx_perm = np.random.permutation(unique_uid.size)
    unique_uid = unique_uid[idx_perm]

    # create train/validation/test users
    n_users = unique_uid.size
    tr_users = unique_uid[:(n_users - n_heldout_users * 2)]
    vd_users = unique_uid[(n_users - n_heldout_users * 2): (n_users - n_heldout_users)]
    te_users = unique_uid[(n_users - n_heldout_users):]

    train_plays = raw_data.loc[raw_data['userId'].isin(tr_users)]
    unique_sid = pd.unique(train_plays['movieId'])

    show2id = dict((sid, i) for (i, sid) in enumerate(unique_sid))
    profile2id = dict((pid, i) for (i, pid) in enumerate(unique_uid))

    if not os.path.exists(pro_dir):
        os.makedirs(pro_dir)

    with open(os.path.join(pro_dir, 'unique_sid.txt'), 'w') as f:
        for sid in unique_sid:
            f.write('%s\n' % sid)

    vad_plays = raw_data.loc[raw_data['userId'].isin(vd_users)]
    vad_plays = vad_plays.loc[vad_plays['movieId'].isin(unique_sid)]
    vad_plays_tr, vad_plays_te = split_train_test_proportion(vad_plays)

    test_plays = raw_data.loc[raw_data['userId'].isin(te_users)]
    test_plays = test_plays.loc[test_plays['movieId'].isin(unique_sid)]
    test_plays_tr, test_plays_te = split_train_test_proportion(test_plays)

    train_data = numerize(train_plays)
    train_data.to_csv(os.path.join(pro_dir, 'train.csv'), index=False)

    vad_data_tr = numerize(vad_plays_tr)
    vad_data_tr.to_csv(os.path.join(pro_dir, 'validation_tr.csv'), index=False)

    vad_data_te = numerize(vad_plays_te)
    vad_data_te.to_csv(os.path.join(pro_dir, 'validation_te.csv'), index=False)

    test_data_tr = numerize(test_plays_tr)
    test_data_tr.to_csv(os.path.join(pro_dir, 'test_tr.csv'), index=False)

    test_data_te = numerize(test_plays_te)
    test_data_te.to_csv(os.path.join(pro_dir, 'test_te.csv'), index=False)
```
# Utility functions
```
LongTensor = torch.LongTensor
FloatTensor = torch.FloatTensor
is_cuda_available = torch.cuda.is_available()
if is_cuda_available:
    print("Using CUDA...\n")
    LongTensor = torch.cuda.LongTensor
    FloatTensor = torch.cuda.FloatTensor
def save_obj(obj, name):
    with open(name + '.pkl', 'wb') as f:
        pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL)

def save_obj_json(obj, name):
    with open(name + '.json', 'w') as f:
        json.dump(obj, f)

def load_obj(name):
    with open(name + '.pkl', 'rb') as f:
        return pickle.load(f)

def load_obj_json(name):
    with open(name + '.json', 'r') as f:
        return json.load(f)

def file_write(log_file, s):
    print(s)
    f = open(log_file, 'a')
    f.write(s + '\n')
    f.close()

def clear_log_file(log_file):
    f = open(log_file, 'w')
    f.write('')
    f.close()
def pretty_print(h):
    print("{")
    for key in h:
        print(' ' * 4 + str(key) + ': ' + str(h[key]))  # str() so non-string values don't raise
    print('}\n')
def plot_len_vs_ndcg(len_to_ndcg_at_100_map):
    lens = list(len_to_ndcg_at_100_map.keys())
    lens.sort()
    X, Y = [], []

    for le in lens:
        X.append(le)
        ans = 0.0
        for i in len_to_ndcg_at_100_map[le]: ans += float(i)
        ans = ans / float(len(len_to_ndcg_at_100_map[le]))
        Y.append(ans * 100.0)

    # Smoothing: trailing moving average over the last (up to) 5 points
    Y_mine = []
    prev_5 = []
    for i in Y:
        prev_5.append(i)
        if len(prev_5) > 5: del prev_5[0]

        temp = 0.0
        for j in prev_5: temp += float(j)
        temp = float(temp) / float(len(prev_5))
        Y_mine.append(temp)

    plt.figure(figsize=(12, 5))
    plt.plot(X, Y_mine, label='SVAE')
    plt.xlabel("Number of items in the fold-out set")
    plt.ylabel("Average NDCG@100")
    plt.title(hyper_params['project_name'])
    leg = plt.legend(loc='best', ncol=2)  # draw the legend before saving so it appears in the PDF

    if not os.path.isdir("saved_plots/"): os.mkdir("saved_plots/")
    plt.savefig("saved_plots/seq_len_vs_ndcg_" + hyper_params['project_name'] + ".pdf")

    plt.show()
```
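The smoothing loop inside `plot_len_vs_ndcg` is a trailing moving average over (at most) the last 5 points; a vectorized sketch of the same idea:

```python
import numpy as np

# Trailing moving average over the last `window` points (fewer at the start),
# matching the prev_5 loop in plot_len_vs_ndcg above.
def trailing_mean(y, window=5):
    y = np.asarray(y, dtype=float)
    csum = np.cumsum(np.insert(y, 0, 0.0))   # prefix sums with a leading 0
    idx = np.arange(1, len(y) + 1)
    lo = np.maximum(idx - window, 0)         # start of each trailing window
    return (csum[idx] - csum[lo]) / (idx - lo)

print(trailing_mean([1, 2, 3, 4, 5, 6]))  # [1.  1.5 2.  2.5 3.  4. ]
```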
# Data Parsing
```
def load_data(hyper_params):
    file_write(hyper_params['log_file'], "Started reading data file")

    f = open(hyper_params['data_base'] + 'train.csv')
    lines_train = f.readlines()[1:]

    f = open(hyper_params['data_base'] + 'validation_tr.csv')
    lines_val_tr = f.readlines()[1:]

    f = open(hyper_params['data_base'] + 'validation_te.csv')
    lines_val_te = f.readlines()[1:]

    f = open(hyper_params['data_base'] + 'test_tr.csv')
    lines_test_tr = f.readlines()[1:]

    f = open(hyper_params['data_base'] + 'test_te.csv')
    lines_test_te = f.readlines()[1:]

    unique_sid = list()
    with open(hyper_params['data_base'] + 'unique_sid.txt', 'r') as f:
        for line in f:
            unique_sid.append(line.strip())

    num_items = len(unique_sid)

    file_write(hyper_params['log_file'], "Data Files loaded!")

    train_reader = DataReader(hyper_params, lines_train, None, num_items, True)
    val_reader = DataReader(hyper_params, lines_val_tr, lines_val_te, num_items, False)
    test_reader = DataReader(hyper_params, lines_test_tr, lines_test_te, num_items, False)

    return train_reader, val_reader, test_reader, num_items
class DataReader:
def __init__(self, hyper_params, a, b, num_items, is_training):
self.hyper_params = hyper_params
self.batch_size = hyper_params['batch_size']
num_users = 0
min_user = 1000000000000000000000000 # Infinity
for line in a:
line = line.strip().split(",")
num_users = max(num_users, int(line[0]))
min_user = min(min_user, int(line[0]))
num_users = num_users - min_user + 1
self.num_users = num_users
self.min_user = min_user
self.num_items = num_items
self.data_train = a
self.data_test = b
self.is_training = is_training
self.all_users = []
self.prep()
self.number()
def prep(self):
self.data = []
for i in range(self.num_users): self.data.append([])
for i in tqdm(range(len(self.data_train))):
line = self.data_train[i]
line = line.strip().split(",")
self.data[int(line[0]) - self.min_user].append([ int(line[1]), 1 ])
if self.is_training == False:
self.data_te = []
for i in range(self.num_users): self.data_te.append([])
for i in tqdm(range(len(self.data_test))):
line = self.data_test[i]
line = line.strip().split(",")
self.data_te[int(line[0]) - self.min_user].append([ int(line[1]), 1 ])
def number(self):
self.num_b = int(min(len(self.data), self.hyper_params['number_users_to_keep']) / self.batch_size)
def iter(self):
users_done = 0
x_batch = []
user_iterate_order = list(range(len(self.data)))
# Randomly shuffle the training order
np.random.shuffle(user_iterate_order)
for user in user_iterate_order:
if users_done > self.hyper_params['number_users_to_keep']: break
users_done += 1
y_batch_s = torch.zeros(self.batch_size, len(self.data[user]) - 1, self.num_items)
if is_cuda_available: y_batch_s = y_batch_s.cuda()
if self.hyper_params['loss_type'] == 'predict_next':
for timestep in range(len(self.data[user]) - 1):
y_batch_s[len(x_batch), timestep, :].scatter_(
0, LongTensor([ i[0] for i in [ self.data[user][timestep + 1] ] ]), 1.0
)
elif self.hyper_params['loss_type'] == 'next_k':
for timestep in range(len(self.data[user]) - 1):
y_batch_s[len(x_batch), timestep, :].scatter_(
0, LongTensor([ i[0] for i in self.data[user][timestep + 1:][:self.hyper_params['next_k']] ]), 1.0
)
elif self.hyper_params['loss_type'] == 'postfix':
for timestep in range(len(self.data[user]) - 1):
y_batch_s[len(x_batch), timestep, :].scatter_(
0, LongTensor([ i[0] for i in self.data[user][timestep + 1:] ]), 1.0
)
x_batch.append([ i[0] for i in self.data[user][:-1] ])
if len(x_batch) == self.batch_size: # batch_size always = 1
yield Variable(LongTensor(x_batch)), Variable(y_batch_s, requires_grad=False)
x_batch = []
def iter_eval(self):
x_batch = []
test_movies, test_movies_r = [], []
users_done = 0
for user in range(len(self.data)):
users_done += 1
if users_done > self.hyper_params['number_users_to_keep']: break
if self.is_training == True:
split = float(self.hyper_params['history_split_test'][0])
base_predictions_on = self.data[user][:int(split * len(self.data[user]))]
heldout_movies = self.data[user][int(split * len(self.data[user])):]
else:
base_predictions_on = self.data[user]
heldout_movies = self.data_te[user]
y_batch_s = torch.zeros(self.batch_size, len(base_predictions_on) - 1, self.num_items).cuda()
if self.hyper_params['loss_type'] == 'predict_next':
for timestep in range(len(base_predictions_on) - 1):
y_batch_s[len(x_batch), timestep, :].scatter_(
0, LongTensor([ i[0] for i in [ base_predictions_on[timestep + 1] ] ]), 1.0
)
elif self.hyper_params['loss_type'] == 'next_k':
for timestep in range(len(base_predictions_on) - 1):
y_batch_s[len(x_batch), timestep, :].scatter_(
0, LongTensor([ i[0] for i in base_predictions_on[timestep + 1:][:self.hyper_params['next_k']] ]), 1.0
)
elif self.hyper_params['loss_type'] == 'postfix':
for timestep in range(len(base_predictions_on) - 1):
y_batch_s[len(x_batch), timestep, :].scatter_(
0, LongTensor([ i[0] for i in base_predictions_on[timestep + 1:] ]), 1.0
)
test_movies.append([ i[0] for i in heldout_movies ])
test_movies_r.append([ i[1] for i in heldout_movies ])
x_batch.append([ i[0] for i in base_predictions_on[:-1] ])
if len(x_batch) == self.batch_size: # batch_size always = 1
yield Variable(LongTensor(x_batch)), Variable(y_batch_s, requires_grad=False), test_movies, test_movies_r
x_batch = []
test_movies, test_movies_r = [], []
```
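The `scatter_` calls in `iter` and `iter_eval` build one multi-hot target row per timestep out of the user's future items. A NumPy sketch of the three `loss_type` variants (`build_targets` is a hypothetical helper, not part of the original code):

```python
import numpy as np

def build_targets(items, num_items, loss_type, next_k=2):
    # one target row per input timestep; the row marks which future
    # items count as positives under the chosen loss_type
    steps = len(items) - 1
    y = np.zeros((steps, num_items), dtype=np.float32)
    for t in range(steps):
        if loss_type == 'predict_next':      # only the immediate next item
            future = items[t + 1:t + 2]
        elif loss_type == 'next_k':          # the next k items
            future = items[t + 1:t + 1 + next_k]
        else:                                # 'postfix': every remaining item
            future = items[t + 1:]
        y[t, future] = 1.0
    return y
```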
# Evaluation Code
```
def evaluate(model, criterion, reader, hyper_params, is_train_set):
model.eval()
metrics = {}
metrics['loss'] = 0.0
Ks = [10, 100]
for k in Ks:
metrics['NDCG@' + str(k)] = 0.0
metrics['Rec@' + str(k)] = 0.0
metrics['Prec@' + str(k)] = 0.0
batch = 0
total_users = 0.0
# For plotting the results (seq length vs. NDCG@100)
len_to_ndcg_at_100_map = {}
for x, y_s, test_movies, test_movies_r in reader.iter_eval():
batch += 1
if is_train_set == True and batch > hyper_params['train_cp_users']: break
decoder_output, z_mean, z_log_sigma = model(x)
metrics['loss'] += criterion(decoder_output, z_mean, z_log_sigma, y_s, 0.2).data[0]
# Making the logits of previous items in the sequence to be "- infinity"
decoder_output = decoder_output.data
x_scattered = torch.zeros(decoder_output.shape[0], decoder_output.shape[2])
if is_cuda_available: x_scattered = x_scattered.cuda()
x_scattered[0, :].scatter_(0, x[0].data, 1.0)
last_predictions = decoder_output[:, -1, :] - (torch.abs(decoder_output[:, -1, :] * x_scattered) * 100000000)
for batch_num in range(last_predictions.shape[0]): # batch_num is ideally only 0, since batch_size is enforced to be always 1
predicted_scores = last_predictions[batch_num]
actual_movies_watched = test_movies[batch_num]
actual_movies_ratings = test_movies_r[batch_num]
# Calculate NDCG
_, argsorted = torch.sort(-1.0 * predicted_scores)
for k in Ks:
best, now_at, dcg, hits = 0.0, 0.0, 0.0, 0.0
rec_list = list(argsorted[:k].cpu().numpy())
for m in range(len(actual_movies_watched)):
movie = actual_movies_watched[m]
now_at += 1.0
if now_at <= k: best += 1.0 / float(np.log2(now_at + 1))
if movie not in rec_list: continue
hits += 1.0
dcg += 1.0 / float(np.log2(float(rec_list.index(movie) + 2)))
metrics['NDCG@' + str(k)] += float(dcg) / float(best)
metrics['Rec@' + str(k)] += float(hits) / float(len(actual_movies_watched))
metrics['Prec@' + str(k)] += float(hits) / float(k)
# Only for plotting the graph (seq length vs. NDCG@100)
if k == 100:
seq_len = int(len(actual_movies_watched)) + int(x[batch_num].shape[0]) + 1
if seq_len not in len_to_ndcg_at_100_map: len_to_ndcg_at_100_map[seq_len] = []
len_to_ndcg_at_100_map[seq_len].append(float(dcg) / float(best))
total_users += 1.0
metrics['loss'] = float(metrics['loss']) / float(batch)
metrics['loss'] = round(metrics['loss'], 4)
for k in Ks:
metrics['NDCG@' + str(k)] = round((100.0 * metrics['NDCG@' + str(k)]) / float(total_users), 4)
metrics['Rec@' + str(k)] = round((100.0 * metrics['Rec@' + str(k)]) / float(total_users), 4)
metrics['Prec@' + str(k)] = round((100.0 * metrics['Prec@' + str(k)]) / float(total_users), 4)
return metrics, len_to_ndcg_at_100_map
```
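The NDCG computation inside the evaluation loop can be extracted into a standalone sketch with the same hit-based DCG and the same ideal-DCG normalization as above:

```python
import numpy as np

def ndcg_at_k(ranked_items, relevant_items, k):
    # DCG: each relevant item found in the top-k contributes 1/log2(rank + 2)
    rec_list = list(ranked_items[:k])
    dcg = sum(1.0 / np.log2(rec_list.index(m) + 2)
              for m in relevant_items if m in rec_list)
    # ideal DCG: all relevant items placed at the top of the list
    best = sum(1.0 / np.log2(i + 2)
               for i in range(min(k, len(relevant_items))))
    return dcg / best
```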
# Model
```
class Encoder(nn.Module):
def __init__(self, hyper_params):
super(Encoder, self).__init__()
self.linear1 = nn.Linear(
hyper_params['rnn_size'], hyper_params['hidden_size']
)
nn.init.xavier_normal(self.linear1.weight)
self.activation = nn.Tanh()
def forward(self, x):
x = self.linear1(x)
x = self.activation(x)
return x
class Decoder(nn.Module):
def __init__(self, hyper_params):
super(Decoder, self).__init__()
self.linear1 = nn.Linear(hyper_params['latent_size'], hyper_params['hidden_size'])
self.linear2 = nn.Linear(hyper_params['hidden_size'], hyper_params['total_items'])
nn.init.xavier_normal(self.linear1.weight)
nn.init.xavier_normal(self.linear2.weight)
self.activation = nn.Tanh()
def forward(self, x):
x = self.linear1(x)
x = self.activation(x)
x = self.linear2(x)
return x
class Model(nn.Module):
def __init__(self, hyper_params):
super(Model, self).__init__()
self.hyper_params = hyper_params
self.encoder = Encoder(hyper_params)
self.decoder = Decoder(hyper_params)
# Since we don't need padding, our vocabulary size = "hyper_params['total_items']" and not "hyper_params['total_items'] + 1"
self.item_embed = nn.Embedding(hyper_params['total_items'], hyper_params['item_embed_size'])
self.gru = nn.GRU(
hyper_params['item_embed_size'], hyper_params['rnn_size'],
batch_first = True, num_layers = 1
)
self.linear1 = nn.Linear(hyper_params['hidden_size'], 2 * hyper_params['latent_size'])
nn.init.xavier_normal(self.linear1.weight)
self.tanh = nn.Tanh()
def sample_latent(self, h_enc):
"""
Return the latent normal sample z ~ N(mu, sigma^2)
"""
temp_out = self.linear1(h_enc)
mu = temp_out[:, :self.hyper_params['latent_size']]
log_sigma = temp_out[:, self.hyper_params['latent_size']:]
sigma = torch.exp(log_sigma)
std_z = torch.from_numpy(np.random.normal(0, 1, size=sigma.size())).float()
if is_cuda_available: std_z = std_z.cuda()
self.z_mean = mu
self.z_log_sigma = log_sigma
return mu + sigma * Variable(std_z, requires_grad=False) # Reparameterization trick
def forward(self, x):
in_shape = x.shape # [bsz x seq_len] = [1 x seq_len]
x = x.view(-1) # [seq_len]
x = self.item_embed(x) # [seq_len x embed_size]
x = x.view(in_shape[0], in_shape[1], -1) # [1 x seq_len x embed_size]
rnn_out, _ = self.gru(x) # [1 x seq_len x rnn_size]
rnn_out = rnn_out.view(in_shape[0] * in_shape[1], -1) # [seq_len x rnn_size]
enc_out = self.encoder(rnn_out) # [seq_len x hidden_size]
sampled_z = self.sample_latent(enc_out) # [seq_len x latent_size]
dec_out = self.decoder(sampled_z) # [seq_len x total_items]
dec_out = dec_out.view(in_shape[0], in_shape[1], -1) # [1 x seq_len x total_items]
return dec_out, self.z_mean, self.z_log_sigma
```
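`sample_latent` above uses the reparameterization trick: instead of sampling z directly (which would block gradients), it samples noise ε ~ N(0, 1) and computes z = μ + σ·ε. A NumPy sketch of just that step:

```python
import numpy as np

def sample_latent(mu, log_sigma, eps=None):
    # z = mu + sigma * eps with eps ~ N(0, 1); in a framework with autograd,
    # gradients flow through mu and log_sigma but not through eps
    if eps is None:
        eps = np.random.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps
```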
# Custom loss
$$ Loss \; = \; \sum_{u \in U} Loss_u $$ <br>
$$ Loss_u \; = \; \beta \cdot KL( \, \phi(z \vert x) \, \Vert \, \mathcal{N}(0, I) \, ) \; - \; \log( \, P_{\phi}(g_{\theta}(x)) \, ) $$ <br>
$g_{\theta}(\cdot)$ is the encoder; $P_{\phi}(\cdot)$ is the decoded distribution; $\beta$ is the anneal factor.
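For a diagonal Gaussian posterior, the KL term has the usual closed form (this is what the `kld` line in the code below computes, with `logvar_q` = $\log\sigma^2$):

$$ KL\big(\, \mathcal{N}(\mu, \sigma^2) \,\Vert\, \mathcal{N}(0, I) \,\big) \; = \; \frac{1}{2}\sum_{j}\left(-\log\sigma_j^2 + \sigma_j^2 + \mu_j^2 - 1\right) $$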
```
class VAELoss(torch.nn.Module):
def __init__(self, hyper_params):
super(VAELoss,self).__init__()
self.hyper_params = hyper_params
def forward(self, decoder_output, mu_q, logvar_q, y_true_s, anneal):
# Calculate KL Divergence loss
kld = torch.mean(torch.sum(0.5 * (-logvar_q + torch.exp(logvar_q) + mu_q**2 - 1), -1))
# Calculate Likelihood
dec_shape = decoder_output.shape # [batch_size x seq_len x total_items] = [1 x seq_len x total_items]
decoder_output = F.log_softmax(decoder_output, -1)
num_ones = float(torch.sum(y_true_s[0, 0]))
likelihood = torch.sum(
-1.0 * y_true_s.view(dec_shape[0] * dec_shape[1], -1) * \
decoder_output.view(dec_shape[0] * dec_shape[1], -1)
) / (float(self.hyper_params['batch_size']) * num_ones)
final = (anneal * kld) + (likelihood)
return final
```
# Training loop
```
def train(reader):
model.train()
total_loss = 0
start_time = time.time()
batch = 0
batch_limit = int(train_reader.num_b)
total_anneal_steps = 200000
anneal = 0.0
update_count = 0.0
anneal_cap = 0.2
for x, y_s in reader.iter():
batch += 1
# Empty the gradients
model.zero_grad()
optimizer.zero_grad()
# Forward pass
decoder_output, z_mean, z_log_sigma = model(x)
# Backward pass
loss = criterion(decoder_output, z_mean, z_log_sigma, y_s, anneal)
loss.backward()
optimizer.step()
total_loss += loss.data
# Anneal logic
if total_anneal_steps > 0:
anneal = min(anneal_cap, 1. * update_count / total_anneal_steps)
else:
anneal = anneal_cap
update_count += 1.0
# Logging mechanism
if (batch % hyper_params['batch_log_interval'] == 0 and batch > 0) or batch == batch_limit:
div = hyper_params['batch_log_interval']
if batch == batch_limit: div = (batch_limit % hyper_params['batch_log_interval']) - 1
if div <= 0: div = 1
cur_loss = (total_loss[0] / div)
elapsed = time.time() - start_time
ss = '| epoch {:3d} | {:5d}/{:5d} batches | ms/batch {:5.2f} | loss {:5.4f}'.format(
epoch, batch, batch_limit, (elapsed * 1000) / div, cur_loss
)
file_write(hyper_params['log_file'], ss)
total_loss = 0
start_time = time.time()
# Train It..
train_reader, val_reader, test_reader, total_items = load_data(hyper_params)
hyper_params['total_items'] = total_items
hyper_params['testing_batch_limit'] = test_reader.num_b
file_write(hyper_params['log_file'], "\n\nSimulation run on: " + str(dt.datetime.now()) + "\n\n")
file_write(hyper_params['log_file'], "Data reading complete!")
file_write(hyper_params['log_file'], "Number of train batches: {:4d}".format(train_reader.num_b))
file_write(hyper_params['log_file'], "Number of validation batches: {:4d}".format(val_reader.num_b))
file_write(hyper_params['log_file'], "Number of test batches: {:4d}".format(test_reader.num_b))
file_write(hyper_params['log_file'], "Total Items: " + str(total_items) + "\n")
model = Model(hyper_params)
if is_cuda_available: model.cuda()
criterion = VAELoss(hyper_params)
if hyper_params['optimizer'] == 'adagrad':
optimizer = torch.optim.Adagrad(
model.parameters(), weight_decay=hyper_params['weight_decay'], lr = hyper_params['learning_rate']
)
elif hyper_params['optimizer'] == 'adadelta':
optimizer = torch.optim.Adadelta(
model.parameters(), weight_decay=hyper_params['weight_decay']
)
elif hyper_params['optimizer'] == 'adam':
optimizer = torch.optim.Adam(
model.parameters(), weight_decay=hyper_params['weight_decay']
)
elif hyper_params['optimizer'] == 'rmsprop':
optimizer = torch.optim.RMSprop(
model.parameters(), weight_decay=hyper_params['weight_decay']
)
file_write(hyper_params['log_file'], str(model))
file_write(hyper_params['log_file'], "\nModel Built!\nStarting Training...\n")
best_val_ndcg = None
try:
for epoch in range(1, hyper_params['epochs'] + 1):
epoch_start_time = time.time()
train(train_reader)
# Calculating the metrics on the train set
metrics, _ = evaluate(model, criterion, train_reader, hyper_params, True)
string = ""
for m in metrics: string += " | " + m + ' = ' + str(metrics[m])
string += ' (TRAIN)'
# Calculating the metrics on the validation set
metrics, _ = evaluate(model, criterion, val_reader, hyper_params, False)
string2 = ""
for m in metrics: string2 += " | " + m + ' = ' + str(metrics[m])
string2 += ' (VAL)'
ss = '-' * 89
ss += '\n| end of epoch {:3d} | time: {:5.2f}s'.format(epoch, (time.time() - epoch_start_time))
ss += string
ss += '\n'
ss += '-' * 89
ss += '\n| end of epoch {:3d} | time: {:5.2f}s'.format(epoch, (time.time() - epoch_start_time))
ss += string2
ss += '\n'
ss += '-' * 89
file_write(hyper_params['log_file'], ss)
if not best_val_ndcg or metrics['NDCG@100'] >= best_val_ndcg:
with open(hyper_params['model_file_name'], 'wb') as f: torch.save(model, f)
best_val_ndcg = metrics['NDCG@100']
except KeyboardInterrupt: print('Exiting from training early')
# Plot training graph
f = open(model.hyper_params['log_file'])
lines = f.readlines()
lines.reverse()
train = []
test = []
for line in lines:
if line[:10] == 'Simulation' and len(train) > 1: break
elif line[:10] == 'Simulation' and len(train) <= 1: train, test = [], []
if line[2:5] == 'end' and line[-5:-2] == 'VAL': test.append(line.strip().split("|"))
elif line[2:5] == 'end' and line[-7:-2] == 'TRAIN': train.append(line.strip().split("|"))
train.reverse()
test.reverse()
train_ndcg = []
test_ndcg = []
test_loss, train_loss = [], []
for i in train:
for metric in i:
if metric.split("=")[0] == " NDCG@100 ":
train_ndcg.append(float(metric.split('=')[1].split(' ')[1]))
if metric.split("=")[0] == " loss ":
train_loss.append(float(metric.split("=")[1].split(' ')[1]))
total, avg_runtime = 0.0, 0.0
for i in test:
avg_runtime += float(i[2].split(" ")[2][:-1])
total += 1.0
for metric in i:
if metric.split("=")[0] == " NDCG@100 ":
test_ndcg.append(float(metric.split('=')[1].split(' ')[1]))
if metric.split("=")[0] == " loss ":
test_loss.append(float(metric.split("=")[1].split(' ')[1]))
fig, ax1 = plt.subplots(figsize=(12, 5))
ax1.set_title(hyper_params["project_name"],fontweight="bold", size=20)
ax1.plot(test_ndcg, 'b-')
ax1.set_xlabel('Epochs', fontsize = 20.0)
ax1.set_ylabel('NDCG@100', color='b', fontsize = 20.0)
ax1.tick_params('y', colors='b')
ax2 = ax1.twinx()
ax2.plot(test_loss, 'r--')
ax2.set_ylabel('Loss', color='r')
ax2.tick_params('y', colors='r')
fig.tight_layout()
if not os.path.isdir("saved_plots/"): os.mkdir("saved_plots/")
fig.savefig("saved_plots/learning_curve_" + hyper_params["project_name"] + ".pdf")
plt.show()
# Checking metrics for the test set on best saved model
with open(hyper_params['model_file_name'], 'rb') as f: model = torch.load(f)
metrics, len_to_ndcg_at_100_map = evaluate(model, criterion, test_reader, hyper_params, False)
# Plot sequence length vs NDCG@100 graph
plot_len_vs_ndcg(len_to_ndcg_at_100_map)
string = ""
for m in metrics: string += " | " + m + ' = ' + str(metrics[m])
ss = '=' * 89
ss += '\n| End of training'
ss += string + " (TEST)"
ss += '\n'
ss += '=' * 89
file_write(hyper_params['log_file'], ss)
print("average runtime per epoch =", round(avg_runtime / float(total), 4), "s")
```
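The anneal logic in `train` linearly warms the KL weight up from 0 toward `anneal_cap` over `total_anneal_steps` updates. Pulled out as a sketch:

```python
def kl_anneal(update_count, total_anneal_steps=200000, anneal_cap=0.2):
    # linear warm-up of the KL weight, capped at anneal_cap
    if total_anneal_steps > 0:
        return min(anneal_cap, update_count / total_anneal_steps)
    return anneal_cap
```

Capping the weight below 1 (partial annealing) is the same trick used in Mult-VAE-style recommenders, where a full KL weight tends to over-regularize.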
# Exploratory Data Analysis Using Python and BigQuery
## Learning Objectives
1. Analyze a Pandas Dataframe
2. Create Seaborn plots for Exploratory Data Analysis in Python
3. Write a SQL query to pick up specific fields from a BigQuery dataset
4. Exploratory Analysis in BigQuery
## Introduction
This lab is an introduction to exploratory data analysis using Python and BigQuery. It serves as a foundation for the more complex algorithms and machine learning models that you will encounter in the course, such as a linear regression model that predicts housing prices from this same dataset.
Each learning objective will correspond to a __#TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/python.BQ_explore_data.ipynb).
### Import Libraries
```
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
```
Please ignore any incompatibility warnings and errors.
**Restart** the kernel before proceeding further (On the Notebook menu - Kernel - Restart Kernel).
```
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns # Seaborn is a Python data visualization library based on matplotlib.
%matplotlib inline
```
### Load the Dataset
Here, we create a directory called usahousing. This directory will hold the dataset that we copy from Google Cloud Storage.
```
if not os.path.isdir("../data/explore"):
os.makedirs("../data/explore")
```
Next, we copy the Usahousing dataset from Google Cloud Storage.
```
!gsutil cp gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/housing_pre-proc_toy.csv ../data/explore
```
Then we use the "ls" command to list files in the directory. This ensures that the dataset was copied.
```
!ls -l ../data/explore
```
Next, we read the dataset into a Pandas dataframe.
```
df_USAhousing = # TODO 1: Your code goes here
```
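One possible completion of TODO 1 is `pd.read_csv("../data/explore/housing_pre-proc_toy.csv")`, where the path and filename come from the `gsutil cp` step above. A self-contained sketch of the same call, using an in-memory CSV in place of the downloaded file:

```python
import io

import pandas as pd

# stand-in for the downloaded housing file; against the real file this would be
# df_USAhousing = pd.read_csv("../data/explore/housing_pre-proc_toy.csv")
csv_text = "median_income,median_house_value,ocean_proximity\n8.3,452600,NEAR BAY\n7.2,358500,INLAND\n"
df_USAhousing = pd.read_csv(io.StringIO(csv_text))
```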
### Inspect the Data
```
# Show the first five rows.
df_USAhousing.head()
```
Let's check for any null values.
```
df_USAhousing.isnull().sum()
df_stats = df_USAhousing.describe()
df_stats = df_stats.transpose()
df_stats
df_USAhousing.info()
```
Let's summarize the data: row and column counts, feature names, and the number of missing and unique values.
```
print("Rows : ", df_USAhousing.shape[0])
print("Columns : ", df_USAhousing.shape[1])
print("\nFeatures : \n", df_USAhousing.columns.tolist())
print("\nMissing values : ", df_USAhousing.isnull().sum().values.sum())
print("\nUnique values : \n", df_USAhousing.nunique())
```
## Explore the Data
Let's create some simple plots to check out the data!
```
sns.heatmap(df_USAhousing.corr())
```
Create a displot showing "median_house_value".
```
# TODO 2a: Your code goes here
sns.set_style('whitegrid')
df_USAhousing['median_house_value'].hist(bins=30)
plt.xlabel('median_house_value')
x = df_USAhousing['median_income']
y = df_USAhousing['median_house_value']
plt.scatter(x, y)
plt.show()
```
Create a jointplot showing "median_income" versus "median_house_value".
```
# TODO 2b: Your code goes here
sns.countplot(x = 'ocean_proximity', data=df_USAhousing)
# takes numeric only?
#plt.figure(figsize=(20,20))
g = sns.FacetGrid(df_USAhousing, col="ocean_proximity")
g.map(plt.hist, "households");
# takes numeric only?
#plt.figure(figsize=(20,20))
g = sns.FacetGrid(df_USAhousing, col="ocean_proximity")
g.map(plt.hist, "median_income");
```
You can see below that this is the state of California!
```
x = df_USAhousing['latitude']
y = df_USAhousing['longitude']
plt.scatter(x, y)
plt.show()
```
# Explore and create ML datasets
In this notebook, we will explore data corresponding to taxi rides in New York City to build a Machine Learning model in support of a fare-estimation tool. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected.
## Learning objectives
* Access and explore a public BigQuery dataset on NYC Taxi Cab rides
* Visualize your dataset using the Seaborn library
First, **restart the Kernel**. Now, let's start with the Python imports that we need.
```
from google.cloud import bigquery
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
```
<h3> Extract sample data from BigQuery </h3>
The dataset that we will use is <a href="https://console.cloud.google.com/bigquery?project=nyc-tlc&p=nyc-tlc&d=yellow&t=trips&page=table">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then switch to the Preview tab to look at a few rows.
Let's write a SQL query to pick up interesting fields from the dataset. It's a good idea to get the timestamp in a predictable format.
```
%%bigquery
SELECT
FORMAT_TIMESTAMP(
"%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude, dropoff_longitude,
dropoff_latitude, passenger_count, trip_distance, tolls_amount,
fare_amount, total_amount
# TODO 3: Set correct BigQuery public dataset for nyc-tlc yellow taxi cab trips
# Tip: For projects with hyphens '-' be sure to escape with backticks ``
FROM
LIMIT 10
```
Let's increase the number of records so that we can do some neat graphs. There is no guarantee about the order in which records are returned, and so no guarantee about which records get returned if we simply increase the LIMIT. To properly sample the dataset, let's use the HASH of the pickup time and return 1 in 100,000 records -- because there are 1 billion records in the data, we should get back approximately 10,000 records if we do this.
We will also store the BigQuery result in a Pandas dataframe named "trips"
```
%%bigquery trips
SELECT
FORMAT_TIMESTAMP(
"%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude,
dropoff_longitude, dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
print(len(trips))
# We can slice Pandas dataframes as if they were arrays
trips[:10]
```
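`FARM_FINGERPRINT` gives a stable hash of the pickup time, so `ABS(MOD(hash, 100000)) = 1` keeps a repeatable roughly 1-in-100,000 sample: the same rows come back on every run, unlike a bare `LIMIT`. The idea in miniature with Python's `hashlib` and a smaller modulus (a sketch of the technique, not BigQuery's actual hash function):

```python
import hashlib

def keep_row(key, modulus=100, remainder=1):
    # stable pseudo-random bucketing: roughly 1/modulus of keys are kept,
    # and a given key always lands in the same bucket
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % modulus == remainder

# with modulus=100 we expect to keep about 1% of 100,000 synthetic keys
kept = sum(keep_row(f"2014-01-01 00:00:{i}") for i in range(100000))
```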
<h3> Exploring data </h3>
Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering.
```
# TODO 4: Visualize your dataset using the Seaborn library.
# Plot the distance of the trip as X and the fare amount as Y.
ax = sns.regplot(x="", y="", fit_reg=False, ci=None, truncate=True, data=trips)
ax.figure.set_size_inches(10, 8)
```
Hmm ... do you see something wrong with the data that needs addressing?
It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50).
Note the extra WHERE clauses.
```
%%bigquery trips
SELECT
FORMAT_TIMESTAMP(
"%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude,
dropoff_longitude, dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
# TODO 4a: Filter the data to only include non-zero distance trips and fares above $2.50
AND
print(len(trips))
ax = sns.regplot(
x="trip_distance", y="fare_amount",
fit_reg=False, ci=None, truncate=True, data=trips)
ax.figure.set_size_inches(10, 8)
```
What's up with the streaks around 45 dollars and 50 dollars? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, i.e. to be expected. Let's list the data to make sure the values look reasonable.
Let's also examine whether the toll amount is captured in the total amount.
```
tollrides = trips[trips["tolls_amount"] > 0]
tollrides[tollrides["pickup_datetime"] == "2012-02-27 09:19:10 UTC"]
notollrides = trips[trips["tolls_amount"] == 0]
notollrides[notollrides["pickup_datetime"] == "2012-02-27 09:19:10 UTC"]
```
Looking at a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to be included in our fare estimation tool.
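That decision (predict fare plus tolls, and ignore the tip baked into `total_amount`) is a one-line label construction; the column values here are made up for illustration:

```python
import pandas as pd

sample = pd.DataFrame({
    "fare_amount":  [8.5, 45.0],
    "tolls_amount": [0.0, 5.5],
    "total_amount": [10.0, 56.0],  # includes a discretionary tip, so not used as label
})
# the quantity our fare-estimation model should predict
sample["target"] = sample["fare_amount"] + sample["tolls_amount"]
```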
Let's also look at the distribution of values within the columns.
```
trips.describe()
```
Copyright 2021 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
```
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
// Stops auto-scrolling so entire output is visible: see https://stackoverflow.com/a/41646403
# Default parameter values. They will be overwritten by papermill notebook parameters.
# This cell must carry the tag "parameters" in its metadata.
from pathlib import Path
import pickle
import codecs
innereye_path = Path.cwd().parent.parent.parent
train_metrics_csv = ""
val_metrics_csv = innereye_path / 'Tests' / 'ML' / 'reports' / 'val_metrics_classification.csv'
test_metrics_csv = innereye_path / 'Tests' / 'ML' / 'reports' / 'test_metrics_classification.csv'
number_best_and_worst_performing = 20
config = ""
is_crossval_report = False
import sys
if str(innereye_path) not in sys.path:
sys.path.append(str(innereye_path))
from InnerEye.Common.fixed_paths import add_submodules_to_path
add_submodules_to_path()
%matplotlib inline
import matplotlib.pyplot as plt
config = pickle.loads(codecs.decode(config.encode(), "base64"))
from InnerEye.ML.common import ModelExecutionMode
from InnerEye.ML.reports.notebook_report import print_header
from InnerEye.ML.reports.classification_report import plot_pr_and_roc_curves_from_csv, \
print_k_best_and_worst_performing, print_metrics_for_all_prediction_targets, \
plot_k_best_and_worst_performing
import warnings
warnings.filterwarnings("ignore")
plt.rcParams['figure.figsize'] = (20, 10)
#convert params to Path
train_metrics_csv = Path(train_metrics_csv)
val_metrics_csv = Path(val_metrics_csv)
test_metrics_csv = Path(test_metrics_csv)
```
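The `config = pickle.loads(codecs.decode(config.encode(), "base64"))` line above is the receiving half of a round trip: papermill can only inject plain strings as notebook parameters, so the caller base64-encodes a pickled config object and the notebook decodes it back. The full round trip as a sketch (the config contents are hypothetical):

```python
import codecs
import pickle

original = {"target_names": ["Default"], "epochs": 3}  # hypothetical config object

# sending side: picklable object -> base64 text that papermill can inject
as_string = codecs.encode(pickle.dumps(original), "base64").decode()

# receiving side: the notebook's decode line, applied in reverse
restored = pickle.loads(codecs.decode(as_string.encode(), "base64"))
```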
# Metrics
## Train Set
```
if train_metrics_csv.is_file():
print_metrics_for_all_prediction_targets(csv_to_set_optimal_threshold=train_metrics_csv,
data_split_to_set_optimal_threshold=ModelExecutionMode.TRAIN,
csv_to_compute_metrics=train_metrics_csv,
data_split_to_compute_metrics=ModelExecutionMode.TRAIN,
config=config, is_thresholded=False, is_crossval_report=is_crossval_report)
```
## Validation Set
```
if val_metrics_csv.is_file():
print_metrics_for_all_prediction_targets(csv_to_set_optimal_threshold=val_metrics_csv,
data_split_to_set_optimal_threshold=ModelExecutionMode.VAL,
csv_to_compute_metrics=val_metrics_csv,
data_split_to_compute_metrics=ModelExecutionMode.VAL,
config=config, is_thresholded=False, is_crossval_report=is_crossval_report)
```
## Test Set
```
if val_metrics_csv.is_file() and test_metrics_csv.is_file():
print_metrics_for_all_prediction_targets(csv_to_set_optimal_threshold=val_metrics_csv,
data_split_to_set_optimal_threshold=ModelExecutionMode.VAL,
csv_to_compute_metrics=test_metrics_csv,
data_split_to_compute_metrics=ModelExecutionMode.TEST,
config=config, is_thresholded=False, is_crossval_report=is_crossval_report)
```
# ROC and PR curves
## Train Set
```
if train_metrics_csv.is_file():
plot_pr_and_roc_curves_from_csv(metrics_csv=train_metrics_csv, data_split=ModelExecutionMode.TRAIN, config=config,
is_crossval_report=is_crossval_report)
```
## Validation set
```
if val_metrics_csv.is_file():
plot_pr_and_roc_curves_from_csv(metrics_csv=val_metrics_csv, data_split=ModelExecutionMode.VAL, config=config,
is_crossval_report=is_crossval_report)
```
## Test set
```
if test_metrics_csv.is_file():
# When analyzing cross validation runs, the report is started with a file that contains both
# validation and test set metrics in one. The test set metrics however may be missing if
# inference on the test set for cross validation child runs is turned off, leading to exceptions.
if not is_crossval_report or (is_crossval_report and config.inference_on_test_set):
plot_pr_and_roc_curves_from_csv(metrics_csv=test_metrics_csv, data_split=ModelExecutionMode.TEST, config=config,
is_crossval_report=is_crossval_report)
```
# Best and worst samples by ID
```
if not is_crossval_report and val_metrics_csv.is_file() and test_metrics_csv.is_file():
for prediction_target in config.target_names:
print_header(f"Class: {prediction_target}", level=3)
print_k_best_and_worst_performing(val_metrics_csv=val_metrics_csv, test_metrics_csv=test_metrics_csv,
k=number_best_and_worst_performing,
prediction_target=prediction_target)
```
# Plot best and worst sample images
```
if not is_crossval_report and val_metrics_csv.is_file() and test_metrics_csv.is_file():
for prediction_target in config.target_names:
print_header(f"Class: {prediction_target}", level=3)
plot_k_best_and_worst_performing(val_metrics_csv=val_metrics_csv, test_metrics_csv=test_metrics_csv,
k=number_best_and_worst_performing, prediction_target=prediction_target,
config=config)
```
# Lung Segmentation using Computer Vision
The objective of this work is to explore an efficient image segmentation algorithm for medical images, reducing the time a physician must spend interpreting computed tomography (CT) scan images. Modern medical imaging modalities generate large images that are extremely difficult to analyze manually, and the usefulness of a segmentation algorithm depends on its accuracy and convergence time, so there is a compelling need to explore and implement new evolutionary algorithms for medical image segmentation. Lung cancer is the most frequently diagnosed cancer among men across the world; early detection leads to appropriate treatment and can save lives, and CT is one of the most accessible medical imaging methods for diagnosing it.

In the present study, the performance of five optimization algorithms, namely k-means clustering, k-median clustering, particle swarm optimization, inertia-weighted particle swarm optimization, and guaranteed convergence particle swarm optimization (GCPSO), in extracting the tumor from the lung image has been implemented and analyzed. The performance of median, adaptive median, and average filters in the preprocessing stage was compared, and the adaptive median filter proved the most suitable for medical CT images. The image contrast was then enhanced using adaptive histogram equalization, and the preprocessed image with improved quality was passed to the segmentation algorithms. The results were verified on 20 sample lung images using MATLAB, and GCPSO achieved the highest accuracy, at 95.89%.
### CT imaging:
#### Physics of CT Scans
Computed Tomography (CT) uses X-ray beams to obtain 3D pixel intensities of the human body. A heated cathode releases high-energy electrons, which in turn release their energy as X-ray radiation. X-rays pass through human body tissues and hit a detector on the other side. A dense tissue (e.g. bone) will absorb more radiation than soft tissue (e.g. fat). When X-rays are not absorbed by the body (i.e. in the air region inside the lungs) and reach the detector, we see them as black, similar to a black film. Conversely, dense tissues are depicted as white.

### CT intensities and Hounsfield units
The X-ray absorption is measured on the Hounsfield scale. On this scale, air is fixed at -1000 and water at 0. It is essential to understand that Hounsfield units form an absolute scale, unlike MRI, where we have a relative scale (commonly mapped to 0 to 255).
The image illustrates some of the basic tissues and their corresponding intensity values. Keep in mind that the images are noisy. The numbers may slightly vary in real images.

Bones have high intensity. We usually clip the image to have an upper maximum range. For instance, the max value might be 1000, for practical reasons.
The problem: visualization libraries work on the scale [0, 255]. It wouldn't be very wise to map the whole Hounsfield range (from -1000 to 1000+) onto 256 values for medical diagnosis.
Instead, we limit our attention to different parts of this range and focus on the underlying tissues.
### CT data visualization: level and window
The medical imaging convention for clipping the Hounsfield range is to choose a central intensity, called the level, and a range width around it, called the window, as depicted:

It is actually quite an ugly convention for computer scientists. We would just like the min and max of the range:
max = level + window/2
min = level - window/2
```
import matplotlib.pyplot as plt
import numpy as np
def show_slice_window(slice, level, window):
    """
    Display an image slice clipped to a given level/window.
    Input is a numpy 2D array
    """
    max_val = level + window/2
    min_val = level - window/2
    slice = slice.clip(min_val, max_val)
    plt.figure()
    plt.imshow(slice.T, cmap="gray", origin="lower")
    plt.savefig('L' + str(level) + 'W' + str(window))
```
### Lung segmentation based on intensity values
We will not just segment the lungs; we will also find the real area in mm^2. To do that, we need to find the real size of the pixel dimensions. Each image may have different ones.

### Step 1: Find pixel dimensions to calculate the area in mm^2
```
def find_pix_dim(ct_img):
    """
    Get the pixdim of the CT image.
    A general solution that gets the pixdim from the image dimensions:
    from the last 2 image dimensions, we get their pixel dimensions.
    Args:
        ct_img: nib image
    Returns: List of the 2 pixel dimensions
    """
    pix_dim = ct_img.header["pixdim"]  # example [1, 2, 1.5, 1, 1]
    dim = ct_img.header["dim"]  # example [1, 512, 512, 1, 1]
    max_indx = np.argmax(dim)
    pixdimX = pix_dim[max_indx]
    dim = np.delete(dim, max_indx)
    pix_dim = np.delete(pix_dim, max_indx)
    max_indy = np.argmax(dim)
    pixdimY = pix_dim[max_indy]
    return [pixdimX, pixdimY]  # example [2, 1.5]
```
### Step 2: Binarize image using intensity thresholding
We expect the lungs to be in the Hounsfield unit range of [-1000, -300]. To this end, we clip the image range to [-1000, -300] and binarize the values to 0 and 1.

### Step 3: Contour finding
We care about the lung regions, which are shown in white. An algorithm that identifies closed sets, or any kind of contours, in the image would help. After some searching online, I found the marching squares method, which finds constant-valued contours in an image, implemented in skimage as skimage.measure.find_contours().
After using this function, I visualize the detected contours on the original CT image:

```
from skimage import measure

def intensity_seg(ct_numpy, min=-1000, max=-300):
    clipped = clip_ct(ct_numpy, min, max)  # clip and binarize the HU range (Step 2)
    return measure.find_contours(clipped, 0.95)
```
### Step 4: Find the lung area from a set of possible contours
Note that I used a different image to show an edge case in which the patient's body is not a closed set of points. OK, not exactly what we want, but let's see if we can work around it.
To do so, I first extracted a convex polygon from each contour using scipy. Then I apply two constraints:
The contour of the lungs must be a closed set (always true)
The contour must have a minimum area of 2000 pixels to represent the lungs (`ConvexHull`'s `volume` attribute is the area for 2D point sets).
These constraints may still be satisfied by the body contour, resulting in more than two remaining contours. When that happens, the body is easily discarded as the contour with the largest hull area among those that satisfy the constraints.
```
from scipy.spatial import ConvexHull

def find_lungs(contours):
    """
    Chooses the contours that correspond to the lungs and the body
    First, we exclude non-closed contours
    Then we assume a minimum hull area to exclude small contours
    Then the body is excluded as the closed contour with the highest area
    The remaining contours correspond to the lungs
    Args:
        contours: all the detected contours
    Returns: contours that correspond to the lung area
    """
    body_and_lung_contours = []
    vol_contours = []
    for contour in contours:
        hull = ConvexHull(contour)
        # set some constraints for the hull area
        # (set_is_closed is a helper, defined elsewhere, that checks
        # whether the first and last contour points coincide)
        if hull.volume > 2000 and set_is_closed(contour):
            body_and_lung_contours.append(contour)
            vol_contours.append(hull.volume)
    # Discard body contour
    if len(body_and_lung_contours) == 2:
        return body_and_lung_contours
    elif len(body_and_lung_contours) > 2:
        vol_contours, body_and_lung_contours = (list(t) for t in
                zip(*sorted(zip(vol_contours, body_and_lung_contours))))
        body_and_lung_contours.pop(-1)  # body is out!
    return body_and_lung_contours  # only lungs left!
```
As an edge case, I am showing that the algorithm is not restricted to only two regions of the lungs.

### Step 5: Contour to binary mask
Next, we want to save the result as a NIfTI file, so we need to convert the set of contour points to a binary lung mask. For this, I used the Pillow Python library, which can draw a polygon and create a binary image mask. Then I merge the masks of all the lung contours already found.
```
import numpy as np
from PIL import Image, ImageDraw
def create_mask_from_polygon(image, contours):
    """
    Creates a binary mask with the dimensions of the image and
    converts the list of polygon-contours to binary masks and merges them together
    Args:
        image: the image that the contours refer to
        contours: list of contours
    Returns: the merged binary lung mask
    """
    lung_mask = np.array(Image.new('L', image.shape, 0))
    for contour in contours:
        x = contour[:, 0]
        y = contour[:, 1]
        polygon_tuple = list(zip(x, y))
        img = Image.new('L', image.shape, 0)
        ImageDraw.Draw(img).polygon(polygon_tuple, outline=0, fill=1)
        mask = np.array(img)
        lung_mask += mask
    lung_mask[lung_mask > 1] = 1  # sanity check to make 100% sure that the mask is binary
    return lung_mask.T  # transpose it to be aligned with the image dims
```
The desired lung area in mm^2 is simply the number of nonzero elements multiplied by the two pixel dimensions of the corresponding image.
The lung areas are saved in a csv file along with the image name.
Finally, to save the mask as NIfTI, I used the value 255 for the lung area instead of 1, so that it can be displayed in a NIfTI viewer. Moreover, I save the mask with the affine transformation of the initial CT slice so that it is displayed correctly (aligned, without any rotation conflicts).
```
import nibabel as nib

def save_nifty(img_np, name, affine):
    """
    binary masks should be converted to 255 so they can be displayed in a nii viewer
    we pass the affine of the initial image to make sure the mask exists in the same
    image coordinate space
    Args:
        img_np: the binary mask
        name: output name
        affine: 4x4 np array
    """
    img_np[img_np == 1] = 255
    ni_img = nib.Nifti1Image(img_np, affine)
    nib.save(ni_img, name + '.nii.gz')
```
Finally, I opened the mask with a common NIfTI viewer for Linux to validate that everything went OK. Here are snapshots for slice number 4:

### Segment the main vessels and compute the vessels over lung area ratio
First, we do element-wise multiplication between the CT image and the lung mask to get only the lungs. Afterwards, we set the zeros that resulted from the element-wise multiplication to -1000 (AIR in HU) and finally keep only the intensities that are bigger than -500 as vessels.
```
def create_vessel_mask(lung_mask, ct_numpy, contours=None, denoise=False):
    vessels = lung_mask * ct_numpy  # isolate lung area
    vessels[vessels == 0] = -1000   # set everything outside the lungs to AIR (HU)
    vessels[vessels >= -500] = 1    # intensities above -500 HU are vessels
    vessels[vessels < -500] = 0
    show_slice(vessels)
    if denoise:
        # the original code referenced an undefined `lungs_contour`;
        # the lung contours must be passed in explicitly
        return denoise_vessels(contours, vessels)
    return vessels
```

### Analyzing and improving the segmentation’s result
As you can see, some parts of the lung contour remain in the vessel mask, which I believe we would like to avoid. To this end, I created a denoising function that considers the distance of each mask pixel to all the contour points. If it is at most 0.1, I set the pixel value to 0 and, as a result, exclude it from the detected vessels.
```
def denoise_vessels(lung_contour, vessels):
    vessels_coords_x, vessels_coords_y = np.nonzero(vessels)  # get non-zero coordinates
    for contour in lung_contour:
        x_points, y_points = contour[:, 0], contour[:, 1]
        for (coord_x, coord_y) in zip(vessels_coords_x, vessels_coords_y):
            for (x, y) in zip(x_points, y_points):
                # euclidean_dist is a helper returning sqrt(dx**2 + dy**2)
                d = euclidean_dist(x - coord_x, y - coord_y)
                if d <= 0.1:
                    vessels[coord_x, coord_y] = 0
    return vessels
```
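The nested loops above cost O(vessel pixels × contour points) per contour, which can be slow on 512×512 slices. A behavior-equivalent vectorized sketch using `scipy.spatial.distance.cdist` (the 0.1 threshold and the contour/vessel representations are the same as above):

```python
import numpy as np
from scipy.spatial.distance import cdist

def denoise_vessels_fast(lung_contours, vessels, threshold=0.1):
    """Zero out vessel pixels lying within `threshold` of any contour point."""
    coords = np.argwhere(vessels)  # (N, 2) coordinates of nonzero pixels
    if coords.size == 0:
        return vessels
    for contour in lung_contours:
        # pairwise distances between every vessel pixel and every contour point
        d = cdist(coords.astype(float), contour)
        near = d.min(axis=1) <= threshold
        vessels[coords[near, 0], coords[near, 1]] = 0
    return vessels
```

The design choice here is to trade memory (the N×M distance matrix) for speed; for very large contours one could chunk the computation.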


```
def overlay_plot(im, mask):
    plt.figure()
    plt.imshow(im.T, 'gray', interpolation='none')
    plt.imshow(mask.T, 'jet', interpolation='none', alpha=0.5)
```
Now that we have the mask, the vessel area is computed similarly to what I did for the lungs, taking into account the individual image pixel dimensions.
```
def compute_area(mask, pixdim):
    """
    Computes the area (number of pixels) of a binary mask and multiplies the pixels
    with the pixel dimensions of the acquired CT image
    Args:
        mask: binary mask
        pixdim: list or tuple with the two pixel dimensions
    Returns: the area in mm^2
    """
    mask[mask >= 1] = 1
    lung_pixels = np.sum(mask)
    return lung_pixels * pixdim[0] * pixdim[1]
```
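As a self-contained sanity check of the area formula (with `compute_area` reproduced from above and a hypothetical 0.7 mm × 0.7 mm pixel size):

```python
import numpy as np

def compute_area(mask, pixdim):
    """Number of mask pixels times the two pixel dimensions (mm^2)."""
    mask[mask >= 1] = 1
    return np.sum(mask) * pixdim[0] * pixdim[1]

# a 3x3 toy mask with 4 "lung" pixels
mask = np.array([[0, 1, 1],
                 [0, 1, 1],
                 [0, 0, 0]])
area = compute_area(mask, [0.7, 0.7])  # 4 pixels * 0.49 mm^2 each, roughly 1.96 mm^2
```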
### Conclusion
CT imaging is a reliable technique for lung cancer diagnosis because it can disclose both suspected and unsuspected lung cancer nodules.
# Now You Code In Class: Zomato
For this Now You Code, you will need the Zomato API https://developers.zomato.com/api, which provides API access to local-area restaurant information. **Sign up for your own FREE API key!**
Let's write a program to do the following
1. input the city you're travelling to
2. use the input city to lookup the zomato city ID
3. list the top 10 restaurants trending this week at the city ID
4. Allow user to input one of the 10 restaurants ID for details
5. Show the details of a selected restaurant ID: location/hours/phone number.
You will need to use the following Zomato API's
- https://developers.zomato.com/documentation#!/common/cities to get the city_id for the name of the city.
- https://developers.zomato.com/documentation#!/restaurant/search (collection_id = 1 is for top 10 trending restaurants) in the city_id
- https://developers.zomato.com/documentation#!/restaurant/restaurant (needs a res_id)
Let's follow the best practices from the lab and write each API call as a function once we get it working.
We will take a **top down** approach, since we have a basic algorithm
In your algorithm, frame your steps based on how the API must be used to complete the task.
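When we get to writing those functions, every Zomato call follows the same GET-with-a-key pattern, so one generic helper can back all three. This is only a sketch: the base URL and the `user-key` header name are assumptions based on Zomato's v2.1 documentation, and the `session` parameter exists just so the helper can be exercised without hitting the network:

```python
import requests

ZOMATO_API = "https://developers.zomato.com/api/v2.1"  # assumed base URL

def call_zomato(endpoint, params, key, session=requests):
    """Generic helper: each Zomato endpoint is a GET with a user-key header."""
    response = session.get(f"{ZOMATO_API}/{endpoint}",
                           params=params,
                           headers={"user-key": key})
    response.raise_for_status()  # fail loudly on 4xx/5xx
    return response.json()
```

For example, Step 2 might call `call_zomato("cities", {"q": city}, zomato_key)` and pick the `id` out of the returned JSON.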
# Top-Down
## Step 1 - Trivial
## Step 2a: `getCityId()` Problem Analysis
This function should return a zomato city ID for the input CITY. Remember to write then refactor.
INPUTS:
PROMPT 1
OUTPUTS:
PROMPT 2
ALGORITHM:
Every API call is the same so let's focus on how to call this api.
https://developers.zomato.com/documentation#!/common/cities to get the city_id for the name of the city.
## Step 2b: `getCityId()` Write Code
```
import requests
zomato_key = ''
city ='Syracuse, NY'
# PROMPT 3
```
## Step 2c: `getCityId()` Refactor into function
```
# PROMPT 4
```
## Step 2d: `getCityId()` Test: Call the function
```
# PROMPT 5
```
## Step 3a: `getTrending()` Problem Analysis
This function should return the top 10 trending restaurants for a given zomato city ID. Remember to write then refactor.
INPUTS:
PROMPT 6
OUTPUTS:
PROMPT 7
ALGORITHM:
Every API call is the same so let's focus on how to call this api.
https://developers.zomato.com/documentation#!/restaurant/search
## Step 3b: `getTrending()` Write Code
```
# PROMPT 8
```
## Step 3c: `getTrending()` Refactor into function
```
# PROMPT 9
```
## Step 3d: `getTrending()` Test: Call the function and print the id, name and cuisines
```
# PROMPT 10
```
## Step 4: Trivial
Select one of the ID numbers for a restaurant.
## Step 5a: `getDetails()` Problem Analysis
This function should return the restaurant details for a given ID. Remember to write then refactor.
INPUTS:
PROMPT 11
OUTPUTS:
PROMPT 12
ALGORITHM:
Every API call is the same so let's focus on how to call this api.
https://developers.zomato.com/documentation#!/restaurant/restaurant_0
## Step 5b: `getDetails()` Write Code
```
rest_id = 17643666
# PROMPT 13
```
## Step 5c: `getDetails()` Refactor into function
```
# PROMPT 14
```
## Step 5d: `getDetails()` Test: Display Name, Address, Hours, Phone
```
# PROMPT 15
```
## Top-Down: Put it all together
Get it all working in this cell. No need to copy the function definitions.
```
# PROMPT 16
import requests
zomato_key = ''
print("Zomato Trending Restaurant Search")
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit_now()
```
# <img style="float: left; padding-right: 10px; width: 45px" src="https://github.com/Harvard-IACS/2018-CS109A/blob/master/content/styles/iacs.png?raw=true"> CS109A Introduction to Data Science
## Lab 2: Web Scraping with Beautiful Soup
**Harvard University**<br>
**Fall 2019**<br>
**Instructors:** Pavlos Protopapas, Kevin Rader, and Chris Tanner <br>
**Lab Instructors:** Chris Tanner and Eleni Kaxiras<br>
**Authors:** Rahul Dave, David Sondak, Will Claybaugh, Pavlos Protopapas, Chris Tanner, and Eleni Kaxiras
---
```
## RUN THIS CELL TO GET THE RIGHT FORMATTING
from IPython.core.display import HTML
def css_styling():
    styles = open("../../styles/cs109.css", "r").read()
    return HTML(styles)
css_styling()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns  # note: the seaborn.apionly module was deprecated and later removed
import time
```
# Table of Contents
<ol start="0">
<li> Learning Goals </li>
<li> Introduction to Web Servers and HTTP </li>
<li> Download webpages and get basic properties </li>
<li> Parse the page with Beautiful Soup</li>
<li> String formatting</li>
<li> Additional Python/Homework Comment</li>
<li> Walkthrough Example</li>
</ol>
# Learning Goals
- Understand the structure of a web page
- Understand how to use Beautiful soup to scrape content from web pages.
- Feel comfortable storing and manipulating the content in various formats.
- Understand how to convert structured format into a Pandas DataFrame
In this lab, we'll scrape Goodread's Best Books list:
https://www.goodreads.com/list/show/1.Best_Books_Ever?page=1 .
We'll walk through scraping the list pages for the book names/urls. First, we start with an even simpler example.
*This lab corresponds to lectures #2 and #3 and maps on to Homework #1 and further.*
# 1. Introduction to Web Servers and HTTP
A web server is just a computer -- usually a powerful one, but ultimately it's just another computer -- that runs a long/continuous process that listens for requests on a pre-specified (Internet) _port_ on your computer. It responds to those requests via a protocol called HTTP (HyperText Transfer Protocol). HTTPS is the secure version. When we use a web browser and navigate to a web page, our browser is actually sending a request on our behalf to a specific web server. The browser request is essentially saying "hey, please give me the web page contents", and it's up to the browser to correctly render that raw content into a coherent manner, dependent on the format of the file. For example, HTML is one format, XML is another format, and so on.
Ideally (and usually), the web server complies with the request and all is fine. As part of this communication exchange with web servers, the server also sends a status code.
- If the code starts with a **2**, it means the request was successful.
- If the code starts with a **4**, it means there was a client error (you, as the user, are the client). For example, ever receive a 404 File Not Found error because a web page doesn't exist? This is an example of a client error, because you are requesting a bogus item.
- If the code starts with a **5**, it means there was a server error (often that your request was incorrectly formed).
[Click here](https://www.restapitutorial.com/httpstatuscodes.html) for a full list of status codes.
As an analogy, you can think of a web server as being like a server at a restaurant; its goal is _serve_ you your requests. When you try to order something not on the menu (i.e., ask for a web page at a wrong location), the server says 'sorry, we don't have that' (i.e., 404, client error; your mistake).
**IMPORTANT:**
As humans, we visit pages at a sane, reasonable rate. However, when we start to scrape web pages, our code sends the requests, and thus we can make requests at an incredible rate. This is potentially dangerous because it's akin to going to a restaurant and bombarding the servers with thousands of food orders. Very often, the site will ban you (i.e., Harvard's network gets banned from the website, and you are potentially held responsible in some capacity). It is imperative to be responsible and careful. In fact, this act of flooding web pages with requests is the most popular, yet archaic, method for maliciously attacking websites and Internet-connected computers. In short, be respectful and careful with your decisions and code. It is better to err on the side of caution, which includes using the **``time.sleep()`` function** to pause your code's execution between subsequent requests. ``time.sleep(2)`` should be fine when making just a few dozen requests. Each site has its own rules, which are often visible via the site's ``robots.txt`` file.
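A minimal sketch of this polite pattern, pausing between consecutive requests (the `fetch` parameter defaults to `requests.get` and is exposed only so the loop can be exercised without generating real traffic; the URL list is hypothetical):

```python
import time
import requests

def polite_get(urls, delay=2.0, fetch=None):
    """Fetch each URL in turn, sleeping `delay` seconds between requests."""
    fetch = fetch or requests.get
    pages = []
    for url in urls:
        pages.append(fetch(url))
        time.sleep(delay)  # be kind to the server
    return pages
```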
### Additional Resources
**HTML:** if you are not familiar with HTML see https://www.w3schools.com/html/ or one of the many tutorials on the internet.
**Document Object Model (DOM):** for more on this programming interface for HTML and XML documents see https://www.w3schools.com/js/js_htmldom.asp.
# 2. Download webpages and get basic properties
``Requests`` is a highly useful Python library that allows us to fetch web pages.
``BeautifulSoup`` is a phenomenal Python library that allows us to easily parse web content and perform basic extraction.
If one wishes to scrape webpages, one usually uses ``requests`` to fetch the page and ``BeautifulSoup`` to parse the page's meaningful components. Webpages can be messy, despite having a structured format, which is why BeautifulSoup is so handy.
Let's get started:
```
from bs4 import BeautifulSoup
import requests
```
To fetch a webpage's content, we can simply use the ``get()`` function within the requests library:
```
url = "https://www.npr.org/2018/11/05/664395755/what-if-the-polls-are-wrong-again-4-scenarios-for-what-might-happen-in-the-elect"
response = requests.get(url) # you can use any URL that you wish
```
The response variable has many highly useful attributes, such as:
- status_code
- text
- content
Let's try each of them!
### response.status_code
```
response.status_code
```
You should have received a status code of 200, which means the page was successfully found on the server and sent to receiver (aka client/user/you). [Again, you can click here](https://www.restapitutorial.com/httpstatuscodes.html) for a full list of status codes.
### response.text
```
response.text
```
Holy moly! That looks awful. If we use our browser to visit the URL, then right-click the page and click 'View Page Source', we see that it is identical to this chunk of glorious text.
### response.content
```
response.content
```
What?! This seems identical to the ``.text`` field. However, the careful eye would notice that the very 1st characters differ; that is, ``.content`` has a *b'* character at the beginning, which in Python syntax denotes that the data type is bytes, whereas the ``.text`` field did not have it and is a regular String.
Ok, so that's great, but how do we make sense of this text? We could manually parse it, but that's tedious and difficult. As mentioned, BeautifulSoup is specifically designed to parse this exact content (any webpage content).
## BEAUTIFUL SOUP
 (property of NBC)
The [documentation for BeautifulSoup is found here](https://www.crummy.com/software/BeautifulSoup/bs4/doc/).
A BeautifulSoup object can be initialized with the ``.content`` from request and a flag denoting the type of parser that we should use. For example, we could specify ``html.parser``, ``lxml``, etc [documentation here](https://www.crummy.com/software/BeautifulSoup/bs4/doc/#differences-between-parsers). Since we are interested in standard webpages that use HTML, let's specify the html.parser:
```
soup = BeautifulSoup(response.content, "html.parser")
soup
```
Alright! That looks a little better; there's some whitespace formatting, adding some structure to our content! HTML code is structured by `<tags>`. Every tag has an opening and closing portion, denoted by ``< >`` and ``</ >``, respectively. If we want just the text (not the tags), we can use:
```
soup.get_text()
```
There's some tricky Javascript still nesting within it, but it definitely cleaned up a bit. On other websites, you may find even clearer text extraction.
As detailed in the [BeautifulSoup documentation](https://www.crummy.com/software/BeautifulSoup/bs4/doc/), the easiest way to navigate through the tags is to simply name the tag you're interested in. For example:
```
soup.head # fetches the head tag, which encompasses the title tag
```
Usually head tags are small and only contain the most important contents; however, here, there's some Javascript code. The ``title`` tag resides within the head tag.
```
soup.title # we can specifically call for the title tag
```
This result includes the tag itself. To get just the text within the tags, we can use the ``.string`` property.
```
soup.title.string
```
We can navigate to the parent tag (the tag that encompasses the current tag) via the ``.parent`` attribute:
```
soup.title.parent.name
```
# 3. Parse the page with Beautiful Soup
In HTML code, paragraphs are often denoted with a ``<p>`` tag.
```
soup.p
```
This returns the first paragraph, and we can access properties of the given tag with the same syntax we use for dictionaries and dataframes:
```
soup.p['class']
```
In addition to 'paragraph' (aka p) tags, link tags are also very common and are denoted by ``<a>`` tags
```
soup.a
```
It is called the a tag because links are also called 'anchors'. Nearly every page has multiple paragraphs and anchors, so how do we access the subsequent tags? There are two common functions, `.find()` and `.find_all()`.
```
soup.find('title')
soup.find_all('title')
```
Here, the results were seemingly the same, since there is only one title to a webpage. However, you'll notice that ``.find_all()`` returned a list, not a single item. Sure, there was only one item in the list, but it returned a list. As the name implies, find_all() returns all items that match the passed-in tag.
```
soup.find_all('a')
```
Look at all of those links! Amazing. It might be hard to read but the **href** portion of an *a* tag denotes the URL, and we can capture it via the ``.get()`` function.
```
for link in soup.find_all('a'): # we could optionally pass the href=True flag .find_all('a', href=True)
    print(link.get('href'))
```
Many of those links are relative to the current URL (e.g., /section/news/).
```
paragraphs = soup.find_all('p')
paragraphs
```
If we want just the paragraph text:
```
for pa in paragraphs:
    print(pa.get_text())
```
Since there are multiple tags and various attributes, it is useful to check the data type of BeautifulSoup objects:
```
type(soup.find('p'))
```
Since the ``.find()`` function returns a BeautifulSoup element, we can tack on multiple calls that continue to return elements:
```
soup.find('p')
soup.find('p').find('a')
soup.find('p').find('a').attrs['href'] # att
soup.find('p').find('a').text
```
That doesn't look pretty, but it makes sense because if you look at what ``.find('a')`` returned, there is plenty of whitespace. We can remove that with Python's built-in ``.strip()`` function.
```
soup.find('p').find('a').text.strip()
```
**NOTE:** above, we accessed the attributes of a link via the ``.attrs`` property, which behaves like a dictionary. In the example above, we only provided the _key_ (``href``), not a _value_: we only cared that the ``<a>`` tag had an attribute named ``href``, and we made no demands on what its value must be. Alternatively, if you inspect your HTML code and notice specific regions from which you'd like to extract text, you can pass an ``attrs`` dictionary to ``.find()`` to match on attribute values, too!
For example, in the full ``response.text``, we see the following line:
``<header class="npr-header" id="globalheader" aria-label="NPR header">``
Let's say that we know that the information we care about is within tags that match this template (i.e., **class** is an attribute, and its value is **'npr-header'**).
```
soup.find('header', attrs={'class':'npr-header'})
```
This matched it! We could then continue further processing by tacking on other commands:
```
soup.find('header', attrs={'class':'npr-header'}).find_all("li") # li stands for list items
```
This returns all of our list items, and since it's within a particular header section of the page, it appears they are links to menu items for navigating the webpage. If we wanted to grab just the links within these:
```
menu_links = set()
for list_item in soup.find('header', attrs={'class':'npr-header'}).find_all("li"):
    for link in list_item.find_all('a', href=True):
        menu_links.add(link)
menu_links # a unique set of all the seemingly important links in the header
```
## TAKEAWAY LESSON
The above tutorial isn't meant to be a study guide to memorize; its point is to show you the most important functionality that exists within BeautifulSoup, and to illustrate how one can access different pieces of content. No two web scraping tasks are identical, so it's useful to play around with code and try different things, while using the above as examples of how you may navigate between different tags and properties of a page. Don't worry; we are always here to help when you get stuck!
# String formatting
As we parse webpages, we may often want to further adjust and format the text to a certain way.
For example, say we wanted to scrape a political website that lists every US Senator's name and office phone number. We may want to store the information for each senator in a dictionary, and all senators' information in a list. Thus, we'd have a list of dictionaries. Below, we initialize such a list of dictionaries (it has only 3 senators, for illustrative purposes, but imagine it contains many more).
```
# this is a bit clumsy of an initialization, but we spell it out this way for clarity purposes
# NOTE: imagine the dictionary were constructed in a more organic manner
senator1 = {"name":"Lamar Alexander", "number":"555-229-2812"}
senator2 = {"name":"Tammy Baldwin", "number":"555-922-8393"}
senator3 = {"name":"John Barrasso", "number":"555-827-2281"}
senators = [senator1, senator2, senator3]
print(senators)
```
In the real-world, we may not want the final form of our information to be in a Python dictionary; rather, we may need to send an email to people in our mailing list, urging them to call their senators. If we have a templated format in mind, we can do the following:
```
email_template = """Please call {name} at {number}"""
for senator in senators:
    print(email_template.format(**senator))
```
**Please [visit here](https://docs.python.org/3/library/stdtypes.html#str.format)** for further documentation
Alternatively, one can also format their text via the ``f'-strings`` property. [See documentation here](https://docs.python.org/3/reference/lexical_analysis.html#f-strings). For example, using the above data structure and goal, one could yield identical results via:
```
for senator in senators:
    print(f"Please call {senator['name']} at {senator['number']}")
```
Additionally, sometimes we wish to search large strings of text. If we wish to find all occurrences within a given string, a very mechanical, procedural way of doing it would be to use the ``.find()`` function in Python and to repeatedly update the starting index from which we are looking.
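That mechanical approach reads like this sketch: call `str.find` repeatedly, advancing the start index one past each hit until `find` returns -1:

```python
def find_all_occurrences(text, pattern):
    """Return the start index of every (possibly overlapping) occurrence."""
    indices = []
    start = text.find(pattern)
    while start != -1:
        indices.append(start)
        start = text.find(pattern, start + 1)
    return indices

find_all_occurrences("abcabcab", "ab")  # [0, 3, 6]
```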
## Regular Expressions
A far more suitable and powerful way is to use Regular Expressions, a pattern-matching mechanism used throughout Computer Science and programming (it's not specific to Python). A tutorial on Regular Expressions (aka regex) is beyond this lab, but below are some great resources that we recommend if you are interested (they could be very useful for a homework problem):
- https://docs.python.org/3.3/library/re.html
- https://regexone.com
- https://docs.python.org/3/howto/regex.html.
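For contrast, the same every-occurrence search is a one-liner with the built-in `re` module (`re.finditer` yields non-overlapping match objects whose `.start()` gives the index):

```python
import re

text = "abcabcab"
starts = [m.start() for m in re.finditer("ab", text)]  # [0, 3, 6]
```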
# Additional Python/Homework Comment
In Homework #1, we ask you to complete functions that have signatures with a syntax you may not have seen before:
``def create_star_table(starlist: list) -> list:``
To be clear, this syntax merely means that the input parameter must be a list, and the output must be a list. It's no different from any other function; it just puts a requirement on the behavior of the function.
It is **typing** our function. Please [see this documention if you have more questions.](https://docs.python.org/3/library/typing.html)
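As a minimal, made-up illustration of the syntax (not the actual homework function):

```python
def double_all(numbers: list) -> list:
    """The annotations say: `numbers` should be a list, and a list comes back."""
    return [2 * n for n in numbers]

# Annotations are hints, not runtime checks: Python will not raise an error
# if you pass, say, a tuple instead of a list.
```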
# Walkthrough Example (of Web Scraping)
We're going to look at the structure of Goodreads' best-books list (**NOTE: Goodreads is described a little more within the other Lab2_More_Pandas.ipynb notebook**). We'll use the Developer Tools in Chrome; Safari and Firefox have similar tools available. To get this page we use the `requests` module. But first we should check whether the company's policy allows scraping. Check the [robots.txt](https://www.goodreads.com/robots.txt) to find what sites/elements are not accessible. Please read and verify.

```
url="https://www.npr.org/2018/11/05/664395755/what-if-the-polls-are-wrong-again-4-scenarios-for-what-might-happen-in-the-elect"
response = requests.get(url)
# response.status_code
# response.content
# Beautiful Soup (library) time!
soup = BeautifulSoup(response.content, "html.parser")
#print(soup)
# soup.prettify()
soup.find("title")
# Q1: how do we get the title's text?
# Q2: how do we get the webpage's entire content?
URLSTART="https://www.goodreads.com"
BESTBOOKS="/list/show/1.Best_Books_Ever?page="
url = URLSTART+BESTBOOKS+'1'
print(url)
page = requests.get(url)
```
We can see properties of the page. Most relevant are `status_code` and `text`. The former tells us whether the web page was found and, if so, that the request was OK. (See lecture notes.)
```
page.status_code # 200 is good
page.text[:5000]
```
Let us write a loop to fetch 2 pages of "best books" from Goodreads. Notice the use of a format string; this is an example of old-style Python format strings.
```
URLSTART="https://www.goodreads.com"
BESTBOOKS="/list/show/1.Best_Books_Ever?page="
for i in range(1,3):
    bookpage = str(i)
    stuff = requests.get(URLSTART+BESTBOOKS+bookpage)
    filetowrite = "files/page" + '%02d' % i + ".html"
    print("FTW", filetowrite)
    fd = open(filetowrite, "w")
    fd.write(stuff.text)
    fd.close()
    time.sleep(2)
```
## 2. Parse the page, extract book urls
Notice how we do file input-output and use Beautiful Soup in the code below. The `with` construct ensures that the file being read is closed; we do this explicitly for the file being written. We look for the elements with class `bookTitle`, extract the URLs, and write them into a file.
```
bookdict={}
for i in range(1, 3):
    books = []
    stri = '%02d' % i
    filetoread = "files/page" + stri + '.html'
    print("FTW", filetoread)
    with open(filetoread) as fdr:
        data = fdr.read()
    soup = BeautifulSoup(data, 'html.parser')
    for e in soup.select('.bookTitle'):
        books.append(e['href'])
    print(books[:10])
    bookdict[stri] = books
    fd = open("files/list" + stri + ".txt", "w")
    fd.write("\n".join(books))
    fd.close()
```
Here is George Orwell's 1984
```
bookdict['02'][0]
```
Let's go look at the first URLs on both pages.

## 3. Parse a book page, extract book properties
Ok, so now let's dive in, get one of these files, and parse it.
```
furl=URLSTART+bookdict['02'][0]
furl
```

```
fstuff=requests.get(furl)
print(fstuff.status_code)
#d=BeautifulSoup(fstuff.text, 'html.parser')
# try this to take care of arabic strings
d = BeautifulSoup(fstuff.text, 'html.parser', from_encoding="utf-8")
d.select("meta[property='og:title']")[0]['content']
```
Let's get everything we want...
```
#d=BeautifulSoup(fstuff.text, 'html.parser', from_encoding="utf-8")
print(
"title", d.select_one("meta[property='og:title']")['content'],"\n",
"isbn", d.select("meta[property='books:isbn']")[0]['content'],"\n",
"type", d.select("meta[property='og:type']")[0]['content'],"\n",
"author", d.select("meta[property='books:author']")[0]['content'],"\n",
#"average rating", d.select_one("span.average").text,"\n",
"ratingCount", d.select("meta[itemprop='ratingCount']")[0]["content"],"\n"
#"reviewCount", d.select_one("span.count")["title"]
)
```
Ok, now that we know what to do, let's wrap our fetching into a proper script. So that we don't overwhelm their servers, we will only fetch 5 books from each page, but you get the idea...
We'll segue off a bit to explore new-style format strings. See https://pyformat.info for more info.
```
"list{:0>2}.txt".format(3)
a = "4"
b = 4
class Four:
    def __str__(self):
        return "Fourteen"

c = Four()
"The hazy cat jumped over the {} and {} and {}".format(a, b, c)
```
## 4. Set up a pipeline for fetching and parsing
Ok, let's get back to the fetching...
```
fetched = []
for i in range(1, 3):
    with open("files/list{:0>2}.txt".format(i)) as fd:
        counter = 0
        for bookurl_line in fd:
            if counter > 4:
                break
            bookurl = bookurl_line.strip()
            stuff = requests.get(URLSTART + bookurl)
            filetowrite = bookurl.split('/')[-1]
            filetowrite = "files/" + str(i) + "_" + filetowrite + ".html"
            print("FTW", filetowrite)
            # use a separate handle for writing so we don't clobber the file being read
            fdw = open(filetowrite, "w")
            fdw.write(stuff.text)
            fdw.close()
            fetched.append(filetowrite)
            time.sleep(2)
            counter = counter + 1
print(fetched)
```
Ok, we are off to parse each of the HTML pages we fetched. We have provided the skeleton of the code, plus the code to parse the year, since it is a bit more complex; see the difference in the screenshots above.
```
import re
yearre = r'\d{4}'
def get_year(d):
    if d.select_one("nobr.greyText"):
        return d.select_one("nobr.greyText").text.strip().split()[-1][:-1]
    else:
        thetext = d.select("div#details div.row")[1].text.strip()
        rowmatch = re.findall(yearre, thetext)
        if len(rowmatch) > 0:
            rowtext = rowmatch[0].strip()
        else:
            rowtext = "NA"
        return rowtext
```
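To see what the `yearre` pattern matches, here is a quick check on made-up detail strings (illustrative only, not actual Goodreads rows):

```python
import re

yearre = r'\d{4}'
# a string shaped like a publication-details row, and one with no year
print(re.findall(yearre, "Published October 17th 2005 by Back Bay Books"))  # ['2005']
print(re.findall(yearre, "Kindle Edition"))  # []
```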
<div class="exercise"><b>Exercise</b></div>
Your job is to fill in the code to get the genres.
```
def get_genres(d):
    # your code here
    genres = d.select("div.elementList div.left a")
    glist = []
    for g in genres:
        glist.append(g['href'])
    return glist

listofdicts = []
for filetoread in fetched:
    print(filetoread)
    td = {}
    with open(filetoread) as fd:
        datext = fd.read()
    d = BeautifulSoup(datext, 'html.parser')
    td['title'] = d.select_one("meta[property='og:title']")['content']
    td['isbn'] = d.select_one("meta[property='books:isbn']")['content']
    td['booktype'] = d.select_one("meta[property='og:type']")['content']
    td['author'] = d.select_one("meta[property='books:author']")['content']
    #td['rating'] = d.select_one("span.average").text
    td['year'] = get_year(d)
    td['file'] = filetoread
    glist = get_genres(d)
    td['genres'] = "|".join(glist)
    listofdicts.append(td)
listofdicts[0]
```
Finally, let's write all of this into a CSV file, which we will use for analysis.
```
df = pd.DataFrame.from_records(listofdicts)
df
df.to_csv("files/meta_utf8_EK.csv", index=False, header=True)
```
# Plot Classification Accuracy as a function of Ensemble Weight
```
import sys
import numpy as np
from matplotlib import rc
import matplotlib.pyplot as plt
import sklearn.metrics
from collections import defaultdict
import os
import glob
from collections import OrderedDict, namedtuple
from tqdm import tqdm
import pandas as pd
import seaborn as sns
pd.options.display.float_format = '{:0.2f}'.format
rc('font', **{'family': 'serif'})
from data import data_celebahq
%matplotlib inline
```
# Define utility functions
```
### data evaluation utilities ###
def softmax_to_prediction(softmax_prediction):
    # converts softmax prediction to discrete class label
    if np.ndim(softmax_prediction) == 2:
        # N x ensembles binary prediction
        return (softmax_prediction > 0.5).astype(int)
    elif np.ndim(softmax_prediction) == 3:
        # N x ensembles x classes
        return np.argmax(softmax_prediction, axis=-1).squeeze()
    else:
        assert(False)
def get_accuracy_from_image_ensembles(data_file, key, resample=False, seed=0,
                                      n_resamples=20, ens_size=32, verbose=True):
    # helper function to extract ensembled accuracy from image augmentations
    # e.g. image_ensemble_imcolor.npz or image_ensemble_imcrop.npz
    encoded_data = np.load(data_file)
    preds_original = softmax_to_prediction(encoded_data['original'])
    acc_original = sklearn.metrics.accuracy_score(encoded_data['label'], preds_original) * 100
    jitters = np.concatenate([encoded_data['original'], encoded_data[key]], axis=1)
    jitters = np.mean(jitters, axis=1, keepdims=True)
    preds_ensembled = softmax_to_prediction(jitters)
    acc_ensembled = sklearn.metrics.accuracy_score(encoded_data['label'], preds_ensembled) * 100
    resamples = None
    if resample:
        # sample num_samples batches with replacement, compute accuracy
        resamples = []
        rng = np.random.RandomState(seed)
        jitters = np.concatenate([encoded_data['original'], encoded_data[key]], axis=1)
        assert(jitters.shape[1] == ens_size)  # sanity check
        for i in range(n_resamples):
            if verbose:
                print('*', end='')
            indices = rng.choice(jitters.shape[1], ens_size, replace=True)
            jitters_resampled = jitters[:, indices]
            jitters_resampled = np.mean(jitters_resampled, axis=1, keepdims=True)
            preds_ensembled = softmax_to_prediction(jitters_resampled)
            resamples.append(sklearn.metrics.accuracy_score(encoded_data['label'], preds_ensembled) * 100)
        if verbose:
            print("done")
    return {'acc_original': acc_original, 'acc_ensembled': acc_ensembled, 'resamples': resamples}
def sample_ensemble(raw_preds, ens_size=None, seed=None):
    # helper function to resample raw ensemble predictions
    # raw_preds = N x ens_size for binary classification, or N x ens_size x classes
    # ens_size = number of samples to take preds for ensembling, None takes all samples
    # seed = random seed to use when sampling with replacement, None takes samples in order
    if ens_size is None:
        ens_size = raw_preds.shape[1]  # take all samples
    if seed is None:
        ensemble_preds = raw_preds[:, range(ens_size)]  # take the samples in order
    else:  # sample the given preds with replacement
        rng = np.random.RandomState(seed)
        indices = rng.choice(raw_preds.shape[1], ens_size, replace=True)
        ensemble_preds = raw_preds[:, indices]
    return ensemble_preds
def get_accuracy_from_npz(data_file, expt_name, weight=None, ens_size=None, seed=None, return_preds=False,
                          add_aug=False, aug_name='image_ensemble_imcrop', aug_key='imcrop'):
    # compute weighted accuracies combining original image and GAN reconstructions from an npz file
    # option to use either a single original image, or multiple image augmentations for the image views
    # setup
    encoded_data = np.load(data_file)
    df = defaultdict(list)
    expt_settings = os.path.basename(data_file).split('.')[0]
    if weight is not None:
        weights = [weight]
    else:
        weights = np.linspace(0, 1, 21)
    # determine image classification accuracy
    if not add_aug:
        # basic case: just load the image predictions from the data file
        preds_original = softmax_to_prediction(encoded_data['original'])
        original = encoded_data['original']  # full softmax distribution
    else:
        # ensemble also with the image augmentations data
        print('.', end='')
        im_aug_data = np.load(os.path.join(data_file.rsplit('/', 1)[0], '%s.npz' % aug_name))
        im_aug_ens = np.concatenate([im_aug_data['original'], im_aug_data[aug_key]], axis=1)
        im_aug_ens = sample_ensemble(im_aug_ens, ens_size, seed)
        im_aug_ens = np.mean(im_aug_ens, axis=1, keepdims=True)
        preds_original = softmax_to_prediction(im_aug_ens)
        original = im_aug_ens  # full softmax distribution
    acc_original = sklearn.metrics.accuracy_score(encoded_data['label'], preds_original) * 100
    # determine GAN reconstruction accuracy
    preds_reconstructed = softmax_to_prediction(encoded_data['reconstructed'])
    acc_reconstructed = sklearn.metrics.accuracy_score(encoded_data['label'], preds_reconstructed) * 100
    # determine GAN ensemble accuracy
    perturbed = encoded_data[expt_name]  # N x ens_size x softmax distribution
    gan_ens = np.concatenate((encoded_data['reconstructed'], perturbed), axis=1)
    if ens_size == 0:
        gan_ens = original  # dummy case: don't use gan reconstructed images
    else:
        gan_ens = sample_ensemble(gan_ens, ens_size, seed)
    for weight in weights:  # alpha weighting hyperparameter
        # for binary classification: original.shape = N x 1, gan_ens.shape = N x ens_size
        # for multi-class classification: original.shape = N x 1 x classes; gan_ens.shape = N x ens_size x classes
        ensembled = (1 - weight) * original + weight * np.mean(gan_ens, axis=1, keepdims=True)
        preds_ensembled = softmax_to_prediction(ensembled)
        acc_ensembled = sklearn.metrics.accuracy_score(encoded_data['label'], preds_ensembled) * 100
        df['acc'].append(acc_ensembled)
        df['weight'].append(weight)
        df['expt_name'].append(expt_name)
    # table of expt_name x weight
    df = pd.DataFrame.from_dict(df)
    return_data = {'expt_settings': expt_settings,
                   'acc_original': acc_original,
                   'acc_reconstructed': acc_reconstructed,
                   'ensemble_table': df}
    if return_preds:
        assert(len(weights) == 1)
        return_preds = {
            'original': original,  # original softmax
            'reconstruction': gan_ens,  # softmax of all gan views
            'ensembled': ensembled,  # softmax of the weighted ensemble
            'pred_original': preds_original,
            'pred_reconstruction': preds_reconstructed,
            'pred_ensemble': preds_ensembled,
            'label': encoded_data['label'],
        }
        return return_data, return_preds
    return return_data
```
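The core step inside `get_accuracy_from_npz` is a convex combination of the image softmax and the mean of the GAN-view softmaxes. Stripped of the array machinery, the blend for a single example looks like this (a plain-Python sketch with made-up probabilities):

```python
def weighted_ensemble(p_image, p_gan_views, weight):
    # average the per-view softmax vectors, then blend with the image softmax
    p_gan = [sum(col) / len(p_gan_views) for col in zip(*p_gan_views)]
    return [(1 - weight) * a + weight * b for a, b in zip(p_image, p_gan)]

# weight=0 keeps the image prediction; weight=1 uses only the GAN views
print(weighted_ensemble([0.9, 0.1], [[0.2, 0.8], [0.4, 0.6]], 0.5))
```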
# Make plot
```
attr = 'Smiling'
val_expt = (f'results/precomputed_evaluations/celebahq/output/{attr}_val/gan_ensemble_stylemix_fine_tensortransform.npz',
('stylemix_fine',), 'Style-Mix Fine')
x, y, z = val_expt
test_expt = (x.replace('_val', '_test'), y, z)
val_res = get_accuracy_from_npz(val_expt[0], val_expt[1][0], add_aug=False, ens_size=31)
test_res = get_accuracy_from_npz(test_expt[0], test_expt[1][0], add_aug=False, ens_size=31)
f, ax = plt.subplots(1, 1, figsize=(6, 3)) # , sharey=True)
ax.plot(val_res['ensemble_table']['weight'], val_res['ensemble_table']['acc'], label='Validation')
ax.plot(test_res['ensemble_table']['weight'], test_res['ensemble_table']['acc'], label='Test')
# plot the ensemble weight
val_ensemble_table = val_res['ensemble_table']
best_val_setting = val_ensemble_table.iloc[val_ensemble_table['acc'].argsort().iloc[-1], :]
ax.axvline(best_val_setting.weight, color='k', linestyle=':', label='Selected Weight')
ax.set_ylabel('Accuracy')
ax.set_xlabel('Ensemble Weight')
ax.legend()
f.tight_layout()
print("Test Accuracy, Images: %0.4f" % test_res['acc_original'])
print("Test Accuracy, GAN Reconstructions: %0.4f" % test_res['acc_reconstructed'])
test_weighted = test_res['ensemble_table'].loc[test_res['ensemble_table']['weight'] == best_val_setting.weight]
print("Test Accuracy, Weighted GAN Reconstructions (weight from validation): %0.4f @ weight=%0.2f"
% (test_weighted.acc, test_weighted.weight))
test_oracle = test_res['ensemble_table'].iloc[test_res['ensemble_table']['acc'].argsort().iloc[-1], :]
print("Test Accuracy, Weighted GAN Reconstructions (Oracle): %0.4f @ weight=%0.2f"
% (test_oracle.acc, test_oracle.weight))
```
```
from keras import layers
from keras.models import Model, Sequential
from keras.utils import plot_model
from keras import backend as K
#from tqdm import tqdm
import matplotlib.pyplot as plt
from IPython import display
def res_block(y, nb_channels, _strides=(1, 1), _project_shortcut=False):
    shortcut = y
    y = layers.Conv2D(nb_channels, kernel_size=(3, 3), strides=_strides, padding='same')(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(nb_channels, kernel_size=(3, 3), strides=_strides, padding='same')(y)
    y = layers.BatchNormalization()(y)  # fixed: the second BatchNormalization was never applied to y
    if _project_shortcut or _strides != (1, 1):
        shortcut = layers.Conv2D(nb_channels, kernel_size=(1, 1), strides=_strides, padding='same')(shortcut)
        shortcut = layers.BatchNormalization()(shortcut)
    y = layers.add([shortcut, y])
    #y = layers.LeakyReLU()(y)
    return y
def res_net(x, nb_channels, _strides=(1, 1)):
    x = layers.Conv2D(64, kernel_size=(3, 3), strides=_strides, padding='same', activation='relu')(x)
    shortcut = x
    for _ in range(16):
        x = res_block(x, 64)
    x = layers.Conv2D(64, kernel_size=(3, 3), strides=_strides, padding='same', activation='relu')(x)
    x = layers.add([shortcut, x])
    return x

def conv_net(x, nb_channels, _strides=(1, 1)):
    x = layers.Conv2D(32, kernel_size=(3, 3), strides=_strides, padding='same', activation='relu')(x)
    #x = layers.Conv2D(64, kernel_size=(3, 3), strides=_strides, padding='same', activation='relu')(x)
    return x

def post_net(y, nb_channels, _strides=(1, 1)):
    #y = layers.Conv2D(64, kernel_size=(3, 3), strides=_strides, padding='same', activation='relu')(y)
    #y = layers.Conv2D(32, kernel_size=(3, 3), strides=_strides, padding='same', activation='relu')(y)
    y = layers.Conv2D(3, kernel_size=(3, 3), strides=_strides, padding='same', activation='linear')(y)
    return y
import cv2
import numpy as np
def load_imgs(path, number, train_type):
    result = np.empty((number, 64, 64, 3), dtype="float64")
    for i in range(number):
        I = cv2.imread(path + "{:04}_{}.jpeg".format(i + 1, train_type))
        result[i, :, :, :] = I
    return result / result.max()
# import training data
dataNum = 1000
x1_train = load_imgs("./blurImg/", dataNum, 1)
x2_train = load_imgs("./blurImg/", dataNum, 2)
y_train = load_imgs("./blurImg/", dataNum, 0)
def make_trainable(net, val):
    net.trainable = val
    for l in net.layers:
        l.trainable = val

def loss_wrapper(in_tensor1, in_tensor2):
    def gaussian_blur(in_tensor):
        # The original left this unimplemented ("use large kernel to blur pred and in_tensor").
        # A fixed 5x5 box filter per channel is used here as a simple stand-in for a Gaussian kernel.
        kernel = K.constant(np.ones((5, 5, 3, 1)) / 25.0)
        return K.depthwise_conv2d(in_tensor, kernel, padding='same')
    def custom_loss(y_true, y_pred):
        # or better implementation like fourier transformation
        # fixed: K.reduce_mean does not exist in the Keras backend; K.mean is the correct call
        return K.binary_crossentropy(y_true, y_pred) + K.mean(K.square(gaussian_blur(y_pred) - gaussian_blur(in_tensor1)))
    return custom_loss
img_a = layers.Input(shape=(64, 64, 3))
img_b = layers.Input(shape=(64, 64, 3))
#feature_a = conv_net(img_a, 3)
#feature_b = conv_net(img_b, 3)
feature_a = res_net(img_a, 3)
feature_b = res_net(img_b, 3)
merge = layers.concatenate([feature_a, feature_b])
aif = post_net(merge, 128)
gen = Model(inputs = [img_a, img_b], outputs = [aif])
#gen.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
gen.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
#gen.summary()
#plot_model(gen, to_file='generator.png')
image_fake = gen([img_a, img_b])
dis = Sequential()
dis.add(layers.Conv2D(64, kernel_size=(3, 3),strides=(2, 2), padding='same'))
dis.add(layers.LeakyReLU())
#dis.add(layers.Dropout(0.25))
dis.add(layers.Conv2D(128, kernel_size=(3, 3), strides=(2, 2),padding='same'))
dis.add(layers.LeakyReLU())
#dis.add(layers.Dropout(0.25))
dis.add(layers.Conv2D(256, kernel_size=(3, 3), strides=(2, 2),padding='same'))
dis.add(layers.LeakyReLU())
#dis.add(layers.Dropout(0.25))
#dis.add(layers.Conv2D(1, kernel_size=(3, 3), padding='same'))
dis.add(layers.Flatten())
dis.add(layers.Dense(256))
dis.add(layers.Dense(2))
dis.add(layers.Activation('softmax'))
pred_prob = dis(image_fake)
dis.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
#dis.summary()
#plot_model(dis, to_file='discriminator.png')
make_trainable(dis, False)
am = Model(inputs = [img_a, img_b], outputs = [pred_prob])
am.summary()
am.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
#plot_model(am, to_file='adversary.png')
def plot_loss(losses):
    display.clear_output(wait=True)
    display.display(plt.gcf())
    plt.figure(figsize=(10, 8))
    plt.plot(losses["d"], label='discriminative loss')
    plt.plot(losses["g"], label='generative loss')
    plt.legend()
    plt.show()
# Train discriminator on generated images
losses = {"d":[], "g":[]}
Batch_size = 16
nb_epoch = 100
for epoch in range(nb_epoch):
    rand_idx = np.random.randint(0, x1_train.shape[0], size=Batch_size)
    img_batch1 = x1_train[rand_idx, :, :, :]
    img_batch2 = x2_train[rand_idx, :, :, :]
    y_batch = y_train[np.random.randint(0, y_train.shape[0], size=Batch_size), :, :, :]
    #gen.fit([x1_train, x2_train], y_train)
    gen_img = gen.predict([img_batch1, img_batch2])
    X = np.concatenate((y_batch, gen_img))
    y = np.zeros([2 * Batch_size, 2])
    y[0:Batch_size, 1] = 1
    y[Batch_size:, 0] = 1
    make_trainable(dis, True)
    dis.fit(X, y, epochs=1, batch_size=Batch_size * 2)
    #losses["d"].append(d_loss)
    y2 = np.zeros([Batch_size, 2])
    y2[:, 1] = 1
    # train Generator-Discriminator stack on input noise to non-generated output class
    make_trainable(dis, False)
    am.fit([img_batch1, img_batch2], y2, epochs=1, batch_size=Batch_size)  # same batch or ???
    #losses["g"].append(g_loss)
    #if epoch % 25 == 25 - 1:
    #    plot_loss(losses)
Batch_size = 1000
rand_idx = np.random.randint(0, x1_train.shape[0], size = Batch_size)
img_batch1 = x1_train[rand_idx, :, :, :]
img_batch2 = x2_train[rand_idx, :, :, :]
y_batch = y_train[rand_idx, :, :, :]
gen.fit([img_batch1, img_batch2], y_batch)
gen_img[0]
gen_img.min()
gen_img = gen.predict([x1_train, x2_train])
cv2.imwrite("a.jpg", 255.0*(gen_img[4]-gen_img[4].min())/(gen_img[4].max()-gen_img[4].min()))
#pre-train discriminate network
make_trainable(dis,True)
Batch_size = 100
rand_idx = np.random.randint(0, x1_train.shape[0], size = Batch_size)
img_batch1 = x1_train[rand_idx, :, :, :]
img_batch2 = x2_train[rand_idx, :, :, :]
y_batch = y_train[np.random.randint(0, y_train.shape[0], size = Batch_size), :, :, :]
gen_img = gen.predict([img_batch1, img_batch2])
X = np.concatenate((y_batch, gen_img))
y = np.zeros([2*Batch_size,2])
y[0:Batch_size, 0] = 1
y[Batch_size:, 1] = 1
dis.fit(X, y)
```
# Regression
## Create the data
```
import random
import torch
from torch import nn, optim
import math
from IPython import display
from res.plot_lib import plot_data, plot_model, set_default
from matplotlib import pyplot as plt
set_default()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
seed = 1
random.seed(seed)
torch.manual_seed(seed)
N = 1000 # num_samples_per_class
D = 1 # dimensions
C = 1 # num_classes
H = 100 # num_hidden_units
X = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1).to(device)
y = X.pow(3) + 0.3 * torch.rand(X.size()).to(device)
print("Shapes:")
print("X:", tuple(X.size()))
print("y:", tuple(y.size()))
plt.scatter(X.cpu().numpy(), y.cpu().numpy())
plt.axis('equal');
```
## Linear model
```
learning_rate = 1e-3
lambda_l2 = 1e-5
# nn package to create our linear model
# each Linear module has a weight and bias
model = nn.Sequential(
nn.Linear(D, H),
nn.Linear(H, C)
)
model.to(device) # Convert to CUDA
# nn package also has different loss functions.
# we use MSE loss for our regression task
criterion = torch.nn.MSELoss()
# we use the optim package to apply
# stochastic gradient descent for our parameter updates
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=lambda_l2) # built-in L2
# Training
for t in range(1000):
    # Feed forward to get the logits
    y_pred = model(X)
    # Compute the loss (MSE)
    loss = criterion(y_pred, y)
    print("[EPOCH]: %i, [LOSS or MSE]: %.6f" % (t, loss.item()))
    display.clear_output(wait=True)
    # zero the gradients before running
    # the backward pass.
    optimizer.zero_grad()
    # Backward pass to compute the gradient
    # of loss w.r.t our learnable params.
    loss.backward()
    # Update params
    optimizer.step()
# Plot trained model
print(model)
plt.scatter(X.data.cpu().numpy(), y.data.cpu().numpy())
plt.plot(X.data.cpu().numpy(), y_pred.data.cpu().numpy(), 'r-', lw=5)
plt.axis('equal');
```
## Two-layered network
```
learning_rate = 1e-3
lambda_l2 = 1e-5
# Number of networks
n_networks = 10
models = list()
y_pretrain = list()
# nn package also has different loss functions.
# we use MSE for a regression task
criterion = torch.nn.MSELoss()
for mod in range(n_networks):
    # nn package to create our linear model
    # each Linear module has a weight and bias
    model = nn.Sequential(
        nn.Linear(D, H),
        nn.ReLU() if mod < n_networks // 2 else nn.Tanh(),
        nn.Linear(H, C)
    )
    model.to(device)
    # Append models
    models.append(model)
    # we use the optim package to apply
    # ADAM for our parameter updates
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, weight_decay=lambda_l2)  # built-in L2
    # e = 1. # plotting purpose
    # Training
    for t in range(1000):
        # Feed forward to get the logits
        y_pred = model(X)
        # Append pre-train output
        if t == 0:
            y_pretrain.append(y_pred.detach())
        # Compute the loss and accuracy
        loss = criterion(y_pred, y)
        print(f"[MODEL]: {mod + 1}, [EPOCH]: {t}, [LOSS]: {loss.item():.6f}")
        display.clear_output(wait=True)
        # zero the gradients before running
        # the backward pass.
        optimizer.zero_grad()
        # Backward pass to compute the gradient
        # of loss w.r.t our learnable params.
        loss.backward()
        # Update params
        optimizer.step()
print(models[0], models[-1])
```
## Predictions: Before Training
```
for y_pretrain_idx in y_pretrain:
    # New X that ranges from -2 to 2 instead of -1 to 1
    X_new = torch.unsqueeze(torch.linspace(-2, 2, 100), dim=1)
    plt.plot(X_new.numpy(), y_pretrain_idx.cpu().numpy(), 'r-', lw=1)
plt.scatter(X.cpu().numpy(), y.cpu().numpy(), label='data')
plt.axis('square')
plt.axis((-1.1, 1.1, -1.1, 1.1))
y_combo = torch.stack(y_pretrain)
plt.plot(X_new.numpy(), y_combo.var(dim=0).cpu().numpy(), 'g', label='variance')
plt.legend()
```
## Predictions: After Training
```
y_pred = list()
relu_models = models[:n_networks // 2]
tanh_models = models[n_networks // 2:]
plt.figure(figsize=(20, 10))

def dense_prediction(models, non_linearity, zoom):
    plt.subplot(1, 2, 1 if non_linearity == 'ReLU' else 2)
    for model in models:
        # New X that ranges from -4 to 4 instead of -1 to 1
        X_new = torch.unsqueeze(torch.linspace(-4, 4, 1001), dim=1).to(device)
        # Getting predictions from input
        with torch.no_grad():
            y_pred.append(model(X_new))
        plt.plot(X_new.cpu().numpy(), y_pred[-1].cpu().numpy(), 'r-', lw=1)
    plt.scatter(X.cpu().numpy(), y.cpu().numpy(), label='data')
    plt.axis('square')
    plt.axis(torch.tensor((-1.1, 1.1, -1.1, 1.1)) * zoom)
    y_combo = torch.stack(y_pred)
    plt.plot(X_new.cpu().numpy(), 10 * y_combo.var(dim=0).cpu().sqrt().numpy(), 'y', label='10 × std')
    plt.plot(X_new.cpu().numpy(), 10 * y_combo.var(dim=0).cpu().numpy(), 'g', label='10 × variance')
    plt.legend()
    plt.title(non_linearity + ' models')

z = 1  # try 1 or 4
dense_prediction(relu_models, 'ReLU', zoom=z)
dense_prediction(tanh_models, 'Tanh', zoom=z)
```
# PCAWG-QC Star Rating
Imports the QC measures from a TSV (Supplementary Table 1 in the PCAWG-QC paper), determines which samples pass each QC measure, and assigns a star rating. Graphs illustrating the various quality measures are also plotted.
INPUT: TSV files saved from Google Sheets containing the data, and a metadata file linking the projects to tissue types. Numerical calculations are done using the numpy and scipy.stats packages, graphs are plotted using the matplotlib.pyplot package, and collections.Counter is needed for manipulation of the data.
OUTPUT: A TSV file with the star rating, and a series of graphs used to illustrate the different QC measures and the star rating.
```
%matplotlib inline
```
### Imports the packages needed for this code
```
import matplotlib.pyplot as plt
import numpy as np
from collections import Counter
from scipy.stats import gaussian_kde
```
### First, calculate the thresholds for the Mean/Median Coverage ratio, which are the whiskers from the boxplots of the normal and tumour samples.
```
f = open('Supplementary_Table_1.tsv', 'r')
line = next(f)  # skip the header line
medmean_norm = []
medmean_tumo = []
norm_ids = []
for line in f:
    temp = line.split('\t')
    if (temp[9] != 'NA') and (temp[2] not in norm_ids):
        norm_ids.append(temp[2])
        medmean_norm.append(float(temp[9]))
    if temp[11] != 'NA':
        medmean_tumo.append(float(temp[11]))
f.close()
# Plot it
fig = plt.figure(1, figsize=(9, 6))
ax = fig.add_subplot(111)
bp = ax.boxplot([medmean_norm, medmean_tumo])
ax.set_xticklabels(['Normal', 'Tumour'])
ax.axhline(1, color='k', linestyle='dashed', linewidth=2)
fig_name = 'MeanMed_boxplot.pdf'
fig.savefig(fig_name, bbox_inches='tight')
whiskers = [item.get_ydata() for item in bp['whiskers']]
fig.clf()
print("The Mean/Median Coverage ratio thresholds in use for the normal and tumour samples")
for item in whiskers:
    print(item[1])
```
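matplotlib computes those whiskers with Tukey's rule: each whisker reaches the most extreme data point within 1.5 × IQR of its quartile. A stdlib-only sketch of the same calculation (the sample data below are made up):

```python
from statistics import quantiles

def boxplot_whiskers(data):
    # Tukey's rule: whiskers end at the most extreme points within 1.5*IQR of the quartiles
    q1, _, q3 = quantiles(data, n=4, method='inclusive')
    iqr = q3 - q1
    lo = min(x for x in data if x >= q1 - 1.5 * iqr)
    hi = max(x for x in data if x <= q3 + 1.5 * iqr)
    return lo, hi

print(boxplot_whiskers([0.9, 0.95, 1.0, 1.02, 1.05, 1.1, 2.5]))  # 2.5 falls outside as an outlier
```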
### Second, collect all the QC data and calculate the star rating for each normal-tumour sample pair.
```
## Grab the data
f = open('Supplementary_Table_1.tsv', 'r')
line = next(f)  # read the header line; it is reused below to write the output column names
# These lists are for comparing the two evenness-of-coverage methods
Med_Mean_size_norm = []
Med_Mean_size_tumo = []
fwhm_norm = []
fwhm_tumo = []
# Empty lists to record the individual qc measures for each list
FWHM_size_normal = []
FWHM_size_tumour = []
MedMean_size_normal = []
MedMean_size_tumour = []
CallPow_size_normal = []
CallPow_size_tumour = []
DiffChrom_size_normal = []
DiffChrom_size_tumour = []
BaseBias_size_normal = []
BaseBias_size_tumour = []
Mean_size_normal = []
Mean_size_tumour = []
FWHM_norm = {}
FWHM_tumo = {}
CallPow = {}
DiffChrom_norm = {}
DiffChrom_tumo = {}
BaseBias_norm ={}
BaseBias_tumo = {}
Mean_norm = {}
Mean_tumo = {}
# Dictionary to store the star ratings
starred = {}
all_dam = []
# Lists to store the samples which we already have the norm qc measure - so we don't count it twice for when we have samples with multiple tumours
norm_ids_mean = []
norm_ids_fwhm = []
norm_ids_diff = []
norm_ids_base = []
norm_ids_all = []
# Also open a tsv to record the results
g = open('PCAWG-QC_Star_Rating.tsv', 'w')
temp = line.split('\t')
g.write(temp[0] + '\t' + temp[1] + '\t' + temp[2] + '\t' + temp[3] + '\t' + temp[4] + '\t' + temp[5] + '\tStar_rating\n')
for line in f:
temp = line.split('\t')
add = True
stars = 0
# Mean
if temp[7] != 'NA' and temp[8] != 'NA':
if float(temp[7]) > 25:
stars += 0.5
if temp[2] not in norm_ids_mean:
norm_ids_mean.append(temp[2])
if temp[0] in Mean_norm:
passed = Mean_norm[temp[0]]['pass']
passed += 1
Mean_norm[temp[0]]['pass'] = passed
else:
passed = {'pass':1, 'fail':0}
Mean_norm[temp[0]] = passed
else:
if temp[2] not in norm_ids_mean:
norm_ids_mean.append(temp[2])
if temp[0] in Mean_norm:
failed = Mean_norm[temp[0]]['fail']
failed += 1
Mean_norm[temp[0]]['fail'] = failed
else:
failed = {'pass':0, 'fail':1}
Mean_norm[temp[0]] = failed
if float(temp[8]) > 30:
if float(temp[7]) > 25:
stars += 0.5
if temp[0] in Mean_tumo:
passed = Mean_tumo[temp[0]]['pass']
passed += 1
Mean_tumo[temp[0]]['pass'] = passed
else:
passed = {'pass':1, 'fail':0}
Mean_tumo[temp[0]] = passed
else:
if temp[0] in Mean_tumo:
failed = Mean_tumo[temp[0]]['fail']
failed += 1
Mean_tumo[temp[0]]['fail'] = failed
else:
failed = {'pass':0, 'fail':1}
Mean_tumo[temp[0]] = failed
else:
add = False
# FWHM
if temp[10] != 'NA' and temp[12] != 'NA' and temp[9] != 'NA' and temp[11] != 'NA':
if (float(temp[10]) < 0.205) and (whiskers[0][1] <= float(temp[9]) <= whiskers[1][1]):
stars += 0.5
norm_pass = True
if temp[2] not in norm_ids_fwhm:
norm_ids_fwhm.append(temp[2])
if temp[0] in FWHM_norm:
passed = FWHM_norm[temp[0]]['pass']
passed += 1
FWHM_norm[temp[0]]['pass'] = passed
else:
passed = {'pass':1, 'fail':0}
FWHM_norm[temp[0]] = passed
elif float(temp[10]) >= 0.205:
norm_pass = False
if temp[2] not in norm_ids_fwhm:
norm_ids_fwhm.append(temp[2])
if temp[0] in FWHM_norm:
failed = FWHM_norm[temp[0]]['fail']
failed += 1
FWHM_norm[temp[0]]['fail'] = failed
else:
failed = {'pass':0, 'fail':1}
FWHM_norm[temp[0]] = failed
else:
norm_pass = False
if temp[2] not in norm_ids_fwhm:
norm_ids_fwhm.append(temp[2])
if temp[0] in FWHM_norm:
failed = FWHM_norm[temp[0]]['fail']
failed += 1
FWHM_norm[temp[0]]['fail'] = failed
else:
failed = {'pass':0, 'fail':1}
FWHM_norm[temp[0]] = failed
if (float(temp[12]) < 0.34) and (whiskers[2][1] <= float(temp[11]) <= whiskers[3][1]):
if norm_pass:
stars += 0.5
if temp[0] in FWHM_tumo:
passed = FWHM_tumo[temp[0]]['pass']
passed += 1
FWHM_tumo[temp[0]]['pass'] = passed
else:
passed = {'pass':1, 'fail':0}
FWHM_tumo[temp[0]] = passed
elif float(temp[12]) >= 0.34: # >= 0.54
if temp[0] in FWHM_tumo:
failed = FWHM_tumo[temp[0]]['fail']
failed += 1
FWHM_tumo[temp[0]]['fail'] = failed
else:
failed = {'pass':0, 'fail':1}
FWHM_tumo[temp[0]] = failed
else:
if temp[0] in FWHM_tumo:
failed = FWHM_tumo[temp[0]]['fail']
failed += 1
FWHM_tumo[temp[0]]['fail'] = failed
else:
failed = {'pass':0, 'fail':1}
FWHM_tumo[temp[0]] = failed
else:
add = False
# Call_Pow
if temp[13] != 'NA':
if int(temp[13]) >= 2.6*10**9:
stars += 1.0
if temp[0] in CallPow:
passed = CallPow[temp[0]]['pass']
passed += 1
CallPow[temp[0]]['pass'] = passed
else:
passed = {'pass':1, 'fail':0}
CallPow[temp[0]] = passed
else:
if temp[0] in CallPow:
failed = CallPow[temp[0]]['fail']
failed += 1
CallPow[temp[0]]['fail'] = failed
else:
failed = {'pass':0, 'fail':1}
CallPow[temp[0]] = failed
else:
add = False
# Diff_Chrom
if temp[14] != 'NA' and temp[15] != 'NA':
if float(temp[14]) < 3:
stars += 0.5
if temp[2] not in norm_ids_diff:
norm_ids_diff.append(temp[2])
if temp[0] in DiffChrom_norm:
passed = DiffChrom_norm[temp[0]]['pass']
passed += 1
DiffChrom_norm[temp[0]]['pass'] = passed
else:
passed = {'pass':1, 'fail':0}
DiffChrom_norm[temp[0]] = passed
else:
if temp[2] not in norm_ids_diff:
norm_ids_diff.append(temp[2])
if temp[0] in DiffChrom_norm:
failed = DiffChrom_norm[temp[0]]['fail']
failed += 1
DiffChrom_norm[temp[0]]['fail'] = failed
else:
failed = {'pass':0, 'fail':1}
DiffChrom_norm[temp[0]] = failed
if float(temp[15]) < 3:
if float(temp[14]) < 3:
stars += 0.5
if temp[0] in DiffChrom_tumo:
passed = DiffChrom_tumo[temp[0]]['pass']
passed += 1
DiffChrom_tumo[temp[0]]['pass'] = passed
else:
passed = {'pass':1, 'fail':0}
DiffChrom_tumo[temp[0]] = passed
else:
if temp[0] in DiffChrom_tumo:
failed = DiffChrom_tumo[temp[0]]['fail']
failed += 1
DiffChrom_tumo[temp[0]]['fail'] = failed
else:
failed = {'pass':0, 'fail':1}
DiffChrom_tumo[temp[0]] = failed
else:
add = False
# Base_Bias
if temp[16] != 'NA' and temp[17].rstrip() != 'NA':
if float(temp[16]) < 2:
stars += 0.5
if temp[2] not in norm_ids_base:
norm_ids_base.append(temp[2])
if temp[0] in BaseBias_norm:
passed = BaseBias_norm[temp[0]]['pass']
passed += 1
BaseBias_norm[temp[0]]['pass'] = passed
else:
passed = {'pass':1, 'fail':0}
BaseBias_norm[temp[0]] = passed
else:
if temp[2] not in norm_ids_base:
norm_ids_base.append(temp[2])
if temp[0] in BaseBias_norm:
failed = BaseBias_norm[temp[0]]['fail']
failed += 1
BaseBias_norm[temp[0]]['fail'] = failed
else:
failed = {'pass':0, 'fail':1}
BaseBias_norm[temp[0]] = failed
if float(temp[17].rstrip()) < 2:
if float(temp[16]) < 2:
stars += 0.5
if temp[0] in BaseBias_tumo:
passed = BaseBias_tumo[temp[0]]['pass']
passed += 1
BaseBias_tumo[temp[0]]['pass'] = passed
else:
passed = {'pass':1, 'fail':0}
BaseBias_tumo[temp[0]] = passed
else:
if temp[0] in BaseBias_tumo:
failed = BaseBias_tumo[temp[0]]['fail']
failed += 1
BaseBias_tumo[temp[0]]['fail'] = failed
else:
failed = {'pass':0, 'fail':1}
BaseBias_tumo[temp[0]] = failed
else:
add = False
if add:
if temp[0] in starred:
star_temp = starred[temp[0]]
star_temp.append(stars)
starred[temp[0]] = star_temp
else:
starred[temp[0]] = [stars]
all_dam.append(stars)
if temp[2] not in norm_ids_all:
norm_ids_all.append(temp[2])
Med_Mean_size_norm.append(float(temp[9]))
fwhm_norm.append(float(temp[10]))
# if float(temp[14]) < 20:
Mean_size_normal.append(float(temp[7]))
MedMean_size_normal.append(abs(1-float(temp[9])))
FWHM_size_normal.append(float(temp[10]))
CallPow_size_normal.append(float(temp[13]))
DiffChrom_size_normal.append(float(temp[14]))
BaseBias_size_normal.append(float(temp[16]))
Med_Mean_size_tumo.append(float(temp[11]))
fwhm_tumo.append(float(temp[12]))
Mean_size_tumour.append(float(temp[8]))
MedMean_size_tumour.append(abs(1-float(temp[11])))
FWHM_size_tumour.append(float(temp[12]))
CallPow_size_tumour.append(float(temp[13]))
DiffChrom_size_tumour.append(float(temp[15]))
BaseBias_size_tumour.append(float(temp[17].rstrip()))
# Write out the star rating to a tsv file
g.write(temp[0] + '\t' + temp[1] + '\t' + temp[2] + '\t' + temp[3] + '\t' + temp[4] + '\t' + temp[5] + '\t' + str(stars) + '\n')
else:
print('We do not have complete QC data for this sample')
print(temp[0] + '\t' + temp[1] + '\t' + temp[2] + '\t' + temp[3] + '\t' + temp[4] + '\t' + temp[5])
f.close()
g.close()
# Get the tissue type linked to each project
f = open('Supplementary_Table_2.tsv', 'r')
tissues = {}
next(f)  # skip the header line
for line in f:
temp = line.split('\t')
if temp[1].strip() in tissues:
named = tissues[temp[1].strip()]
named.append(temp[0])
else:
named = [temp[0]]
tissues[temp[1].strip()] = named
f.close()
tissues_sorted = []
for key in tissues:
tissues_sorted.append(key)
tissues_sorted.sort()
```
### Third, density scatter plots for the normal and tumour samples, comparing the two evenness-of-coverage measures
```
#% Calculate the point density for the normal samples
x = np.array(Med_Mean_size_norm)
y = np.array(fwhm_norm)
xy = np.vstack([x,y])
z = gaussian_kde(xy)(xy)
# Sort the points by density, so that the densest points are plotted last
idx = z.argsort()
x, y, z = x[idx], y[idx], z[idx]
# Now the actual plot
fig, ax = plt.subplots()
ax.axvline(x=whiskers[0][1], color='k', linestyle='dashed', linewidth=2)
plt.text(.85,0.66, 'Fails for Med/Mean', color='red', rotation=90)
ax.axvline(x=whiskers[1][1], color='k', linestyle='dashed', linewidth=2)
plt.text(1.02,0.7,'Passes for Med/Mean', color='green',rotation=90)
plt.text(1.07,0.66, 'Fails for Med/Mean', color='red', rotation=90)
ax.axhline(y=0.205, color='k', linestyle='dashed', linewidth=2)
plt.text(0.71,0.17,'Passes for FWHM', color='green')
plt.text(0.71,0.215,'Fails for FWHM', color='red')
# ax.set_yscale('log')
# ax.set_xscale('log')
ax.set_xlim(.7,1.1)
ax.set_ylim(0,.8)
cax = ax.scatter(x, y, c=z, s=30, edgecolor='')
fig.colorbar(cax)
ax.set_xlabel('Median/Mean')
ax.set_ylabel('FWHM')
fig_name = 'Evenness_med-mean_fwhm_normal_scatterplot.pdf'
fig.savefig(fig_name)
plt.show()
plt.clf()
#% Calculate the point density for the tumour samples
x = np.array(Med_Mean_size_tumo)
y = np.array(fwhm_tumo)
xy = np.vstack([x,y])
z = gaussian_kde(xy)(xy)
# Sort the points by density, so that the densest points are plotted last
idx = z.argsort()
x, y, z = x[idx], y[idx], z[idx]
# Now the actual plot
fig, ax = plt.subplots()
ax.axvline(x=whiskers[2][1], color='k', linestyle='dashed', linewidth=2)
plt.text(whiskers[2][1]+.008,0.7,'Passes for Med/Mean', color='green',rotation=90)
plt.text(whiskers[2][1]-.018,0.66, 'Fails for Med/Mean', color='red', rotation=90)
ax.axvline(x=whiskers[3][1], color='k', linestyle='dashed', linewidth=2)
plt.text(whiskers[3][1]-.018,0.7,'Passes for Med/Mean', color='green',rotation=90)
plt.text(whiskers[3][1]+.008,0.66, 'Fails for Med/Mean', color='red', rotation=90)
ax.axhline(y=0.34, color='k', linestyle='dashed', linewidth=2)
plt.text(0.71,0.35,'Fails for FWHM', color='red')
plt.text(0.71,0.3,'Passes for FWHM', color='green')
ax.set_xlim(.7,1.1)
ax.set_ylim(0,.8)
cax = ax.scatter(x, y, c=z, s=30, edgecolor='')
fig.colorbar(cax)
ax.set_xlabel('Median/Mean')
ax.set_ylabel('FWHM')
fig_name = 'Evenness_med-mean_fwhm_tumour_scatterplot.pdf'
fig.savefig(fig_name)
plt.show()
plt.clf()
```
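Both plots above repeat the same density-ordering computation. As a hedged aside, that pattern could be factored into a small helper; `density_sorted` is a name introduced here, and the Gaussian-weighted neighbour count is a simplified, numpy-only stand-in for `scipy.stats.gaussian_kde`:

```python
import numpy as np

def density_sorted(x, y, bw=1.0):
    # Score each point by a Gaussian-weighted count of its neighbours,
    # then order points so the densest are drawn last (as above).
    pts = np.column_stack([x, y]).astype(float)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    z = np.exp(-d2 / (2 * bw * bw)).sum(1)
    idx = z.argsort()
    return pts[idx, 0], pts[idx, 1], z[idx]
```

With this helper, both the normal and tumour plots reduce to `x, y, z = density_sorted(values_x, values_y)` followed by the shared scatter/axvline calls.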
### Fourth, individual plots of the QC data, showing what proportion passed and failed for each project. These figures did not make it into the final paper, but are kept here for completeness' sake.
```
qcs = ['Mean_norm', 'Mean_tumo', 'FWHM_norm', 'FWHM_tumo', 'CallPow', 'DiffChrom_norm', 'DiffChrom_tumo', 'BaseBias_norm', 'BaseBias_tumo']
for k,qc in enumerate([Mean_norm, Mean_tumo, FWHM_norm, FWHM_tumo, CallPow, DiffChrom_norm, DiffChrom_tumo, BaseBias_norm, BaseBias_tumo]):
faill = 0
passs = 0
for key in qc:
passs += qc[key]['pass']
faill += qc[key]['fail']
percent = (faill / float(passs + faill)) * 100
qc['Total'] = {'fail': faill, 'pass': passs}
print('For ' + qcs[k] + ' we have ' + str(percent) + ' percent failing (total = ' + str(passs + faill) + ')')
labelled = []
tish = ['', 'Total', '']
organ = ['', 'Total', '']
passed = []
failed = []
total = []
for key in qc:
labelled.append(key)
labelled.sort()
for key in tissues_sorted:
c = True
for item in tissues[key]:
if item in labelled:
tish.append(item)
if c:
organ.append(key)
c = False
else:
organ.append(' ')
tish.append('')
organ.append('')
for key in tish:
if key == '':
passed.append(0)
failed.append(0)
total.append('')
else:
pass_temp = qc[key]['pass']
fail_temp = qc[key]['fail']
temp = float(pass_temp + fail_temp)
passed.append(pass_temp/temp * 100)
failed.append(fail_temp/temp * 100)
total.append(str(int(temp)))
N = len(tish)
ind = np.arange(N) # the x locations for the groups
width = 1 # the width of the bars: can also be len(x) sequence
p1 = plt.bar(ind, passed, width, color='blue')
p2 = plt.bar(ind, failed, width, color='red', bottom=passed)
plt.title(qcs[k])
locs, labels = plt.xticks(ind + width/2., (organ))
plt.setp(labels, rotation=90)
plt.tick_params(axis='both', which='major', labelsize=5)
plt.legend((p1[0], p2[0]), ('Pass', 'Fail'), bbox_to_anchor=(1.02, .55), fontsize='x-small')
plt.ylim(0,100)
plt.yticks(range(0, 101, 20), [str(x) + "%" for x in range(0, 101, 20)], fontsize=5)
for j,item in enumerate(ind+0.1):
plt.text(item,15, tish[j] +': '+ total[j], color='white', size=5, rotation=90, horizontalalignment='left')
fig_name = '' + qcs[k] + '_project_bias.pdf'
plt.savefig(fig_name)
plt.show()
plt.clf()
```
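The per-measure pass/fail tallying above reduces to a short aggregation. A minimal sketch (`fail_percentage` is a name introduced here, not in the original):

```python
def fail_percentage(qc):
    # qc maps project -> {'pass': n, 'fail': m}, as built above
    failed = sum(v['fail'] for v in qc.values())
    passed = sum(v['pass'] for v in qc.values())
    return failed / float(passed + failed) * 100
```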
### Fifth, plots of the star ratings for each project, as well as a bar summarising the star ratings for all the normal-tumour sample pairs in PCAWG.
```
# Get the star ratings into a usable form for plotting
one = []
onehalf = []
two = []
twohalf = []
three = []
threehalf = []
four = []
fourhalf = []
five = []
total = []
see_all = []
equal_add = True
for key in tish:
if key != '':
if key in starred:
temp = Counter(starred[key])
if equal_add:
see_all = temp
equal_add = False
else:
see_all = see_all + temp
if 1.0 in temp:
one.append((temp[1.0]/float(len(starred[key])))*100)
else:
one.append(0)
if 1.5 in temp:
onehalf.append((temp[1.5]/float(len(starred[key])))*100)
else:
onehalf.append(0)
if 2.0 in temp:
two.append((temp[2.0]/float(len(starred[key])))*100)
else:
two.append(0)
if 2.5 in temp:
twohalf.append((temp[2.5]/float(len(starred[key])))*100)
else:
twohalf.append(0)
if 3.0 in temp:
three.append((temp[3.0]/float(len(starred[key])))*100)
else:
three.append(0)
if 3.5 in temp:
threehalf.append((temp[3.5]/float(len(starred[key])))*100)
else:
threehalf.append(0)
if 4.0 in temp:
four.append((temp[4.0]/float(len(starred[key])))*100)
else:
four.append(0)
if 4.5 in temp:
fourhalf.append((temp[4.5]/float(len(starred[key])))*100)
else:
fourhalf.append(0)
if 5.0 in temp:
five.append((temp[5.0]/float(len(starred[key])))*100)
else:
five.append(0)
total.append(str(len(starred[key])))
else:
one.append(0)
onehalf.append(0)
two.append(0)
twohalf.append(0)
three.append(0)
threehalf.append(0)
four.append(0)
fourhalf.append(0)
five.append(0)
total.append('')
else:
one.append(0)
onehalf.append(0)
two.append(0)
twohalf.append(0)
three.append(0)
threehalf.append(0)
four.append(0)
fourhalf.append(0)
five.append(0)
total.append('')
vote_all = 0
for item in see_all:
vote_all += see_all[item]
one[1] = (see_all[1.0]/float(vote_all)) * 100
onehalf[1] = (see_all[1.5]/float(vote_all)) * 100
two[1] = (see_all[2.0]/float(vote_all)) * 100
twohalf[1] = (see_all[2.5]/float(vote_all)) * 100
three[1] = (see_all[3.0]/float(vote_all)) * 100
threehalf[1] = (see_all[3.5]/float(vote_all)) * 100
four[1] = (see_all[4.0]/float(vote_all)) * 100
fourhalf[1] = (see_all[4.5]/float(vote_all)) * 100
five[1] = (see_all[5.0]/float(vote_all)) * 100
total[1] = str(vote_all)
N = len(tish)
ind = np.arange(N) # the x locations for the groups
width = 1 # the width of the bars: can also be len(x) sequence
pq = plt.bar(ind, one, width, color ='gray')
pp = plt.bar(ind, onehalf, width, color ='red', bottom=one)
p0 = plt.bar(ind, two, width, color= 'blue', bottom =[one[h] + onehalf[h] for h in range(len(threehalf))])
p1 = plt.bar(ind, twohalf, width, color='brown', bottom=[one[h] + onehalf[h] + two[h] for h in range(len(threehalf))])
p2 = plt.bar(ind, three, width, color='purple', bottom=[one[h] + onehalf[h] + two[h] + twohalf[h] for h in range(len(threehalf))])
p3 = plt.bar(ind, threehalf, width, color='hotpink', bottom=[one[h] + onehalf[h] + two[h] + twohalf[h] + three[h] for h in range(len(threehalf))])
p4 = plt.bar(ind, four, width, color='orange', bottom=[one[h] + onehalf[h] + two[h] + twohalf[h] + three[h]+ threehalf[h] for h in range(len(threehalf))])
p5 = plt.bar(ind, fourhalf, width, color='gold', bottom=[one[h] + onehalf[h] + two[h] + twohalf[h] + three[h] + threehalf[h] + four[h] for h in range(len(threehalf))])
p6 = plt.bar(ind, five, width, color='green', bottom=[one[h] + onehalf[h] + two[h] + twohalf[h] + three[h] + threehalf[h] + four[h] + fourhalf[h] for h in range(len(threehalf))])
locs, labels = plt.xticks(ind + width/2., (organ))
plt.setp(labels, rotation=90)
plt.tick_params(axis='both', which='major', labelsize=8)
plt.legend((p6[0], p5[0], p4[0], p3[0], p2[0], p1[0], p0[0], pp[0], pq[0]), ('5', '4.5', '4', '3.5', '3', '2.5', '2', '1.5', '1'), bbox_to_anchor=(1, .7), fontsize='x-small')
plt.ylim(0,100)
plt.yticks(range(0, 101, 20), [str(x) + "%" for x in range(0, 101, 20)], fontsize=8)
for j,item in enumerate(ind+0.1):
plt.text(item,95, tish[j] +': '+ total[j], color='white', size=5, rotation=90, horizontalalignment='left')
plt.tight_layout()
fig_name = 'starred_project_bias.pdf'
plt.savefig(fig_name)
plt.show()
plt.clf()
#% Now plot the star-rating distribution over all samples as a single bar chart
one = []
onehalf = []
two = []
twohalf = []
three = []
threehalf = []
four = []
fourhalf = []
five = []
total =[]
temp = Counter(all_dam)
if 1.0 in temp:
one.append((temp[1.0]/float(len(all_dam)))*100)
else:
one.append(0)
if 1.5 in temp:
onehalf.append((temp[1.5]/float(len(all_dam)))*100)
else:
onehalf.append(0)
if 2.0 in temp:
two.append((temp[2.0]/float(len(all_dam)))*100)
else:
two.append(0)
if 2.5 in temp:
twohalf.append((temp[2.5]/float(len(all_dam)))*100)
else:
twohalf.append(0)
if 3.0 in temp:
three.append((temp[3.0]/float(len(all_dam)))*100)
else:
three.append(0)
if 3.5 in temp:
threehalf.append((temp[3.5]/float(len(all_dam)))*100)
else:
threehalf.append(0)
if 4.0 in temp:
four.append((temp[4.0]/float(len(all_dam)))*100)
else:
four.append(0)
if 4.5 in temp:
fourhalf.append((temp[4.5]/float(len(all_dam)))*100)
else:
fourhalf.append(0)
if 5.0 in temp:
five.append((temp[5.0]/float(len(all_dam)))*100)
else:
five.append(0)
total.append(len(all_dam))
N = 9
ind = np.arange(N) # the x locations for the groups
width = 1 # the width of the bars: can also be len(x) sequence
fig, ax = plt.subplots()
pq = ax.bar(ind[0], one, width, color='gray')
pp = ax.bar(ind[1], onehalf, width, color='red')
p0 = ax.bar(ind[2], two, width, color='blue')
p1 = ax.bar(ind[3], twohalf, width, color='brown')
p2 = ax.bar(ind[4], three, width, color='purple')
p3 = ax.bar(ind[5], threehalf, width, color='hotpink')
p4 = ax.bar(ind[6], four, width, color='orange')
p5 = ax.bar(ind[7], fourhalf, width, color='gold')
p6 = ax.bar(ind[8], five, width, color='green')
ax.set_ylabel('Percentage')
ax.set_xlabel('Star Rating')
locs, labels = plt.xticks(ind + width/2., (['1', '1.5', '2', '2.5', '3', '3.5', '4', '4.5', '5']))
plt.setp(labels)
plt.tick_params(axis='both', which='major', labelsize=10)
plt.yticks(range(0, 81, 20), [str(x) for x in range(0, 81, 20)], fontsize=10)
plt.ylim(0,80)
for y in range(10, 91, 10):
plt.plot(range(0, 10), [y] * len(range(0, 10)), "--", lw=0.5, color="black", alpha=0.2)
# Remove the tick marks; they are unnecessary with the tick lines we just plotted.
plt.tick_params(axis="both", which="both", bottom=False, top=False,
labelbottom=True, left=False, right=False, labelleft=True)
total = [one[0], onehalf[0], two[0], twohalf[0], three[0], threehalf[0], four[0], fourhalf[0], five[0]]
for j,item in enumerate(ind+0.1):
if total[j] < 1:
rounded = str(round(total[j],2)) + '%'
else:
rounded = str(float('%.3g' % total[j])) + '%'
plt.text(item,total[j]+0.8, rounded, color='black', size=10) #absolute[j]
fig_name = 'all_stars.pdf'
plt.savefig(fig_name)
plt.show()
plt.clf()
```
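The long runs of per-level if/else blocks above (one branch per half-star) can be condensed, since `Counter` returns 0 for absent keys. A hedged sketch with names introduced here (`star_percentages`, `STAR_LEVELS`):

```python
from collections import Counter

STAR_LEVELS = (1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0)

def star_percentages(ratings, levels=STAR_LEVELS):
    # Counter[missing] is 0, so no explicit membership test is needed
    counts = Counter(ratings)
    n = float(len(ratings))
    return [counts[level] / n * 100 for level in levels]
```

Each of the lists `one` through `five` above then corresponds to one position of the returned list.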
### Finally, histograms showing the distribution of each QC measure for normal and tumour samples.
```
miscellanea = {'Mean_size_normal': [['green', 25], ['Mean coverage for normal', 'mean coverage', 'Number of samples'], [0,140], [0,300]],
'Mean_size_tumour': [['lightgreen', 30], ['Mean coverage for tumour', 'mean coverage', 'Number of samples'], [0,140], [0,300]],
'Med_Mean_size_norm': [['indigo', whiskers[0][1],whiskers[1][1]], ['Ratio of the median coverage over the mean coverage for normal', 'Ratio', 'Number of samples'], [0.5,1.15], [0,700]],
'Med_Mean_size_tumo': [['violet', whiskers[2][1],whiskers[3][1]], ['Ratio of the median coverage over the mean coverage for tumour', 'Ratio', 'Number of samples'], [0.5,1.15], [0,700]],
'FWHM_size_normal': [['brown', 0.205], ['FWHM for normal', 'FWHM', 'Number of samples'], [0,0.8], [0,500]],
'FWHM_size_tumour': [['khaki', 0.34], ['FWHM for tumour', 'FWHM', 'Number of samples'], [0,0.8], [0,500]],
'CallPow_size_normal': [['grey', 2.6*10**9], ['Somatic mutation calling power', 'Number of bases', 'Number of samples'], [1.5*10**9,2.9*10**9], [0,1200]],
'DiffChrom_size_normal': [['blue', 3], ['Paired reads mapping to different chromosomes for normal', 'Percentage', 'Number of samples'], [0,18], [0,600]],
'DiffChrom_size_tumour': [['aqua', 3], ['Paired reads mapping to different chromosomes for tumour', 'Percentage', 'Number of samples'], [0,18], [0,600]],
'BaseBias_size_normal': [['red', 2], ['Ratio of difference in edits between paired reads for normal', 'Ratio', 'Number of samples'], [1,5.5], [0,300]],
'BaseBias_size_tumour': [['orange', 2], ['Ratio of difference in edits between paired reads for tumour', 'Ratio', 'Number of samples'], [1,5.5], [0,300]]}
# Histograms
qcs = ['Mean_size_normal', 'Mean_size_tumour', 'Med_Mean_size_norm', 'Med_Mean_size_tumo', 'FWHM_size_normal', 'FWHM_size_tumour', 'CallPow_size_normal', 'DiffChrom_size_normal', 'DiffChrom_size_tumour', 'BaseBias_size_normal', 'BaseBias_size_tumour']
for k,qc in enumerate([Mean_size_normal, Mean_size_tumour, Med_Mean_size_norm, Med_Mean_size_tumo, FWHM_size_normal, FWHM_size_tumour, CallPow_size_normal, DiffChrom_size_normal, DiffChrom_size_tumour, BaseBias_size_normal, BaseBias_size_tumour]):
to_del = []
if qcs[k] == 'DiffChrom_size_normal':
for j,item in enumerate(qc):
if item > 20:
to_del.append(j)
to_del.reverse()
for index in to_del:
del qc[index]
elif qcs[k] == 'FWHM_size_tumour':
for j,item in enumerate(qc):
if item > 1:
to_del.append(j)
to_del.reverse()
for index in to_del:
del qc[index]
if len(miscellanea[qcs[k]][0]) == 2:
fig = plt.figure()
ax = fig.add_subplot(111)
result = ax.hist(qc, bins=100, color=miscellanea[qcs[k]][0][0])
ax.axvline(miscellanea[qcs[k]][0][1], color='k', linestyle='dashed', linewidth=2)
# ax.set_title(miscellanea[qcs[k]][1][0])
ax.set_xlabel(miscellanea[qcs[k]][1][1])
ax.set_ylabel(miscellanea[qcs[k]][1][2])
ax.set_xlim(miscellanea[qcs[k]][2][0] ,miscellanea[qcs[k]][2][1])
ax.set_ylim(miscellanea[qcs[k]][3][0] ,miscellanea[qcs[k]][3][1])
fig.savefig(qcs[k] + '_histogram.pdf')
elif len(miscellanea[qcs[k]][0]) == 3:
fig = plt.figure()
ax = fig.add_subplot(111)
result = ax.hist(qc, bins=100, color=miscellanea[qcs[k]][0][0])
ax.axvline(miscellanea[qcs[k]][0][1], color='k', linestyle='dashed', linewidth=2)
ax.axvline(miscellanea[qcs[k]][0][2], color='k', linestyle='dashed', linewidth=2)
# ax.set_title(miscellanea[qcs[k]][1][0])
ax.set_xlabel(miscellanea[qcs[k]][1][1])
ax.set_ylabel(miscellanea[qcs[k]][1][2])
ax.set_xlim(miscellanea[qcs[k]][2][0] ,miscellanea[qcs[k]][2][1])
ax.set_ylim(miscellanea[qcs[k]][3][0] ,miscellanea[qcs[k]][3][1])
fig.savefig(qcs[k] + '_histogram.pdf')
```
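The outlier trimming above (collect indices into `to_del`, reverse, then delete in place) is equivalent to a simple filter. A sketch with a hypothetical helper name:

```python
def drop_above(values, threshold):
    # same result as building to_del, reversing it, and deleting each
    # index in place, but without mutating the original list
    return [v for v in values if v <= threshold]
```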
<a href="https://colab.research.google.com/github/chandlerbing65nm/Cassava-Leaf-Disease-Classification/blob/main/ViT_Training.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Check Resources
```
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Select the Runtime > "Change runtime type" menu to enable a GPU accelerator, ')
print('and then re-execute this cell.')
else:
print(gpu_info)
from psutil import virtual_memory
ram_gb = virtual_memory().total / 1e9
print('Your runtime has {:.1f} gigabytes of available RAM\n'.format(ram_gb))
if ram_gb < 20:
print('To enable a high-RAM runtime, select the Runtime > "Change runtime type"')
print('menu, and then select High-RAM in the Runtime shape dropdown. Then, ')
print('re-execute this cell.')
else:
print('You are using a high-RAM runtime!')
```
# Mount Your GDrive Here
```
from google.colab import drive
drive.mount("/content/drive", force_remount=True)
!cp -r /content/drive/MyDrive/Kaggle-Projects/Cassava-Leaf-Disease /content
%cd Cassava-Leaf-Disease
```
# Install Packages and Dependencies
```
!pip install rdkit-pypi
!pip install ipykernel
!pip install pydicom
!pip install catalyst
!wget -q https://github.com/albumentations-team/albumentations_examples/archive/master.zip -O /tmp/albumentations_examples.zip
!unzip -o -qq /tmp/albumentations_examples.zip -d /tmp/albumentations_examples
!cp -r /tmp/albumentations_examples/albumentations_examples-master/notebooks/images .
!echo "Images are successfully downloaded"
!pip install -q -U albumentations
!echo "$(pip freeze | grep albumentations) is successfully installed"
!pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
```
# Training and Data Augmentation
## Pre-setting
```
package_paths = [
'input/pytorch-image-models/pytorch-image-models-master', 'input/image-fmix/FMix-master'
]
import sys
for pth in package_paths:
sys.path.append(pth)
from fmix import sample_mask, make_low_freq_image, binarise_mask
from glob import glob
from sklearn.model_selection import GroupKFold, StratifiedKFold
import cv2
from skimage import io
import torch
from torch import nn
import os
from datetime import datetime
import time
import random
import cv2
import torchvision
from torchvision import transforms
import pandas as pd
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt
from torch.utils.data import Dataset,DataLoader
from torch.utils.data.sampler import SequentialSampler, RandomSampler
from torch.cuda.amp import autocast, GradScaler
from torch.nn.modules.loss import _WeightedLoss
import torch.nn.functional as F
import timm
import catalyst
import sklearn
import warnings
import joblib
from sklearn.metrics import roc_auc_score, log_loss
from sklearn import metrics
import warnings
import cv2
import pydicom
#from efficientnet_pytorch import EfficientNet
from scipy.ndimage import zoom
from albumentations import (
HorizontalFlip, VerticalFlip, IAAPerspective, ShiftScaleRotate, CLAHE, RandomRotate90,
Transpose, ShiftScaleRotate, Blur, OpticalDistortion, GridDistortion, HueSaturationValue,
IAAAdditiveGaussianNoise, GaussNoise, MotionBlur, MedianBlur, IAAPiecewiseAffine, RandomResizedCrop,
IAASharpen, IAAEmboss, RandomBrightnessContrast, Flip, OneOf, Compose, Normalize, Cutout, CoarseDropout, ShiftScaleRotate, CenterCrop, Resize
)
from albumentations.pytorch import ToTensorV2
CFG = {
'fold_num': 5,
'seed': 719,
'model_arch': 'vit_base_patch16_224',
'img_size': 224,
'epochs': 10,
'train_bs': 16,
'valid_bs': 32,
'T_0': 10,
'lr': 1e-4,
'min_lr': 1e-6,
'weight_decay':1e-6,
'num_workers': 4,
'accum_iter': 2, # support batch accumulation for backprop with an effectively larger batch size
'verbose_step': 1,
'device': 'cuda:0'
}
train = pd.read_csv('input/cassava-leaf-disease-classification/train.csv')
train.head()
train.label.value_counts()
submission = pd.read_csv('input/cassava-leaf-disease-classification/sample_submission.csv')
submission.head()
```
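The `accum_iter` setting above enables gradient accumulation: losses from several micro-batches are backpropagated before a single optimizer step, giving an effective batch size of `accum_iter * train_bs`. A framework-free sketch of the stepping schedule used in `train_one_epoch` below (`step_indices` is a name introduced here):

```python
def step_indices(num_batches, accum_iter):
    # step the optimizer every accum_iter batches, and always on the
    # final batch so no gradients are left unapplied
    return [step for step in range(num_batches)
            if (step + 1) % accum_iter == 0 or (step + 1) == num_batches]
```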
## Helper Functions (perform this twice)
```
def seed_everything(seed):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False  # benchmark must be off for deterministic runs
def get_img(pathw):
im_bgr = cv2.imread(pathw)
im_rgb = im_bgr[:, :, ::-1]
return im_rgb
img = get_img('input/cassava-leaf-disease-classification/train_images/1000015157.jpg')
plt.imshow(img)
plt.show()
```
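`get_img` above relies on `cv2.imread` returning channels in BGR order; reversing the last axis yields the RGB layout matplotlib expects. A tiny numpy-only illustration (function name introduced here):

```python
import numpy as np

def bgr_to_rgb(img):
    # reverse the channel axis: (B, G, R) -> (R, G, B)
    return img[:, :, ::-1]
```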
## Dataset
```
def rand_bbox(size, lam):
W = size[0]
H = size[1]
cut_rat = np.sqrt(1. - lam)
cut_w = int(W * cut_rat)
cut_h = int(H * cut_rat)
# uniform
cx = np.random.randint(W)
cy = np.random.randint(H)
bbx1 = np.clip(cx - cut_w // 2, 0, W)
bby1 = np.clip(cy - cut_h // 2, 0, H)
bbx2 = np.clip(cx + cut_w // 2, 0, W)
bby2 = np.clip(cy + cut_h // 2, 0, H)
return bbx1, bby1, bbx2, bby2
class CassavaDataset(Dataset):
def __init__(self, df, data_root,
transforms=None,
output_label=True,
one_hot_label=False,
do_fmix=False,
fmix_params={
'alpha': 1.,
'decay_power': 3.,
'shape': (CFG['img_size'], CFG['img_size']),
'max_soft': True,
'reformulate': False
},
do_cutmix=False,
cutmix_params={
'alpha': 1,
}
):
super().__init__()
self.df = df.reset_index(drop=True).copy()
self.transforms = transforms
self.data_root = data_root
self.do_fmix = do_fmix
self.fmix_params = fmix_params
self.do_cutmix = do_cutmix
self.cutmix_params = cutmix_params
self.output_label = output_label
self.one_hot_label = one_hot_label
if output_label == True:
self.labels = self.df['label'].values
#print(self.labels)
if one_hot_label is True:
self.labels = np.eye(self.df['label'].max()+1)[self.labels]
#print(self.labels)
def __len__(self):
return self.df.shape[0]
def __getitem__(self, index: int):
# get labels
if self.output_label:
target = self.labels[index]
img = get_img("{}/{}".format(self.data_root, self.df.loc[index]['image_id']))
if self.transforms:
img = self.transforms(image=img)['image']
if self.do_fmix and np.random.uniform(0., 1., size=1)[0] > 0.5:
with torch.no_grad():
#lam, mask = sample_mask(**self.fmix_params)
lam = np.clip(np.random.beta(self.fmix_params['alpha'], self.fmix_params['alpha']),0.6,0.7)
# Make mask, get mean / std
mask = make_low_freq_image(self.fmix_params['decay_power'], self.fmix_params['shape'])
mask = binarise_mask(mask, lam, self.fmix_params['shape'], self.fmix_params['max_soft'])
fmix_ix = np.random.choice(self.df.index, size=1)[0]
fmix_img = get_img("{}/{}".format(self.data_root, self.df.iloc[fmix_ix]['image_id']))
if self.transforms:
fmix_img = self.transforms(image=fmix_img)['image']
mask_torch = torch.from_numpy(mask)
# mix image
img = mask_torch*img+(1.-mask_torch)*fmix_img
#print(mask.shape)
#assert self.output_label==True and self.one_hot_label==True
# mix target
rate = mask.sum()/CFG['img_size']/CFG['img_size']
target = rate*target + (1.-rate)*self.labels[fmix_ix]
#print(target, mask, img)
#assert False
if self.do_cutmix and np.random.uniform(0., 1., size=1)[0] > 0.5:
#print(img.sum(), img.shape)
with torch.no_grad():
cmix_ix = np.random.choice(self.df.index, size=1)[0]
cmix_img = get_img("{}/{}".format(self.data_root, self.df.iloc[cmix_ix]['image_id']))
if self.transforms:
cmix_img = self.transforms(image=cmix_img)['image']
lam = np.clip(np.random.beta(self.cutmix_params['alpha'], self.cutmix_params['alpha']),0.3,0.4)
bbx1, bby1, bbx2, bby2 = rand_bbox((CFG['img_size'], CFG['img_size']), lam)
img[:, bbx1:bbx2, bby1:bby2] = cmix_img[:, bbx1:bbx2, bby1:bby2]
rate = 1 - ((bbx2 - bbx1) * (bby2 - bby1) / (CFG['img_size'] * CFG['img_size']))
target = rate*target + (1.-rate)*self.labels[cmix_ix]
#print('-', img.sum())
#print(target)
#assert False
# do label smoothing
#print(type(img), type(target))
if self.output_label == True:
return img, target
else:
return img
```
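In the CutMix branch above, the label is mixed in proportion to the area of the original image left visible after the patch is pasted in. A hedged sketch of that computation (`mixed_target` is a name introduced here):

```python
def mixed_target(target_a, target_b, bbox, img_size):
    # rate = fraction of image A still visible after the patch from B
    bbx1, bby1, bbx2, bby2 = bbox
    rate = 1 - ((bbx2 - bbx1) * (bby2 - bby1)) / float(img_size * img_size)
    return rate * target_a + (1 - rate) * target_b
```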
## Define Training and Validation Image Augmentations
```
def get_train_transforms():
return Compose([
RandomResizedCrop(CFG['img_size'], CFG['img_size']),
Transpose(p=0.5),
HorizontalFlip(p=0.5),
VerticalFlip(p=0.5),
ShiftScaleRotate(p=0.5),
HueSaturationValue(hue_shift_limit=0.2, sat_shift_limit=0.2, val_shift_limit=0.2, p=0.5),
RandomBrightnessContrast(brightness_limit=(-0.1,0.1), contrast_limit=(-0.1, 0.1), p=0.5),
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], max_pixel_value=255.0, p=1.0),
CoarseDropout(p=0.5),
Cutout(p=0.5),
ToTensorV2(p=1.0),
], p=1.)
def get_valid_transforms():
return Compose([
CenterCrop(CFG['img_size'], CFG['img_size'], p=1.),
Resize(CFG['img_size'], CFG['img_size']),
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], max_pixel_value=255.0, p=1.0),
ToTensorV2(p=1.0),
], p=1.)
```
## Model
```
class CassvaImgClassifier(nn.Module):
def __init__(self, model_arch, n_class, pretrained=False):
super().__init__()
self.model = timm.create_model(model_arch, pretrained=pretrained)
#n_features = self.model.classifier.in_features
#self.model.classifier = nn.Linear(n_features, n_class)
n_features = self.model.head.in_features
self.model.head = nn.Linear(n_features, n_class)
'''
self.model.classifier = nn.Sequential(
nn.Dropout(0.3),
#nn.Linear(n_features, hidden_size,bias=True), nn.ELU(),
nn.Linear(n_features, n_class, bias=True)
)
'''
def forward(self, x):
x = self.model(x)
return x
```
## Training APIs
```
def prepare_dataloader(df, trn_idx, val_idx, data_root='input/cassava-leaf-disease-classification/train_images/'):
from catalyst.data.sampler import BalanceClassSampler
train_ = df.loc[trn_idx,:].reset_index(drop=True)
valid_ = df.loc[val_idx,:].reset_index(drop=True)
train_ds = CassavaDataset(train_, data_root, transforms=get_train_transforms(), output_label=True, one_hot_label=False, do_fmix=False, do_cutmix=False)
valid_ds = CassavaDataset(valid_, data_root, transforms=get_valid_transforms(), output_label=True)
train_loader = torch.utils.data.DataLoader(
train_ds,
batch_size=CFG['train_bs'],
pin_memory=False,
drop_last=False,
shuffle=True,
num_workers=CFG['num_workers'],
#sampler=BalanceClassSampler(labels=train_['label'].values, mode="downsampling")
)
val_loader = torch.utils.data.DataLoader(
valid_ds,
batch_size=CFG['valid_bs'],
num_workers=CFG['num_workers'],
shuffle=False,
pin_memory=False,
)
return train_loader, val_loader
def train_one_epoch(epoch, model, loss_fn, optimizer, train_loader, device, scheduler=None, schd_batch_update=False):
model.train()
t = time.time()
running_loss = None
pbar = tqdm(enumerate(train_loader), total=len(train_loader))
for step, (imgs, image_labels) in pbar:
imgs = imgs.to(device).float()
image_labels = image_labels.to(device).long()
#print(image_labels.shape, exam_label.shape)
with autocast():
image_preds = model(imgs) #output = model(input)
#print(image_preds.shape, exam_pred.shape)
loss = loss_fn(image_preds, image_labels)
scaler.scale(loss).backward()
if running_loss is None:
running_loss = loss.item()
else:
running_loss = running_loss * .99 + loss.item() * .01
if ((step + 1) % CFG['accum_iter'] == 0) or ((step + 1) == len(train_loader)):
# may unscale_ here if desired (e.g., to allow clipping unscaled gradients)
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad()
if scheduler is not None and schd_batch_update:
scheduler.step()
if ((step + 1) % CFG['verbose_step'] == 0) or ((step + 1) == len(train_loader)):
description = f'epoch {epoch} loss: {running_loss:.4f}'
pbar.set_description(description)
if scheduler is not None and not schd_batch_update:
scheduler.step()
def valid_one_epoch(epoch, model, loss_fn, val_loader, device, scheduler=None, schd_loss_update=False):
model.eval()
t = time.time()
loss_sum = 0
sample_num = 0
image_preds_all = []
image_targets_all = []
pbar = tqdm(enumerate(val_loader), total=len(val_loader))
for step, (imgs, image_labels) in pbar:
imgs = imgs.to(device).float()
image_labels = image_labels.to(device).long()
image_preds = model(imgs) #output = model(input)
#print(image_preds.shape, exam_pred.shape)
image_preds_all += [torch.argmax(image_preds, 1).detach().cpu().numpy()]
image_targets_all += [image_labels.detach().cpu().numpy()]
loss = loss_fn(image_preds, image_labels)
loss_sum += loss.item()*image_labels.shape[0]
sample_num += image_labels.shape[0]
if ((step + 1) % CFG['verbose_step'] == 0) or ((step + 1) == len(val_loader)):
description = f'epoch {epoch} loss: {loss_sum/sample_num:.4f}'
pbar.set_description(description)
image_preds_all = np.concatenate(image_preds_all)
image_targets_all = np.concatenate(image_targets_all)
print('validation multi-class accuracy = {:.4f}'.format((image_preds_all==image_targets_all).mean()))
if scheduler is not None:
if schd_loss_update:
scheduler.step(loss_sum/sample_num)
else:
scheduler.step()
# reference: https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/173733
class MyCrossEntropyLoss(_WeightedLoss):
def __init__(self, weight=None, reduction='mean'):
super().__init__(weight=weight, reduction=reduction)
self.weight = weight
self.reduction = reduction
def forward(self, inputs, targets):
lsm = F.log_softmax(inputs, -1)
if self.weight is not None:
lsm = lsm * self.weight.unsqueeze(0)
loss = -(targets * lsm).sum(-1)
if self.reduction == 'sum':
loss = loss.sum()
elif self.reduction == 'mean':
loss = loss.mean()
return loss
```
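`train_one_epoch` above smooths the displayed loss with an exponential moving average (`running_loss * .99 + loss.item() * .01`). A small standalone sketch of the same smoothing (`smoothed_loss` is a name introduced here):

```python
def smoothed_loss(losses, beta=0.99):
    # exponential moving average of batch losses, seeded with the
    # first loss, as in train_one_epoch above
    running = None
    for loss in losses:
        running = loss if running is None else running * beta + loss * (1 - beta)
    return running
```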
## Main Training Loop
```
if __name__ == '__main__':
# for training only, need nightly build pytorch
seed_everything(CFG['seed'])
folds = StratifiedKFold(n_splits=CFG['fold_num'], shuffle=True, random_state=CFG['seed']).split(np.arange(train.shape[0]), train.label.values)
for fold, (trn_idx, val_idx) in enumerate(folds):
print('\nTraining with fold {} started'.format(fold))
print(len(trn_idx), len(val_idx))
train_loader, val_loader = prepare_dataloader(train, trn_idx, val_idx, data_root='input/cassava-leaf-disease-classification/train_images/')
device = torch.device(CFG['device'])
model = CassvaImgClassifier(CFG['model_arch'], train.label.nunique(), pretrained=True).to(device)
scaler = GradScaler()
optimizer = torch.optim.Adam(model.parameters(), lr=CFG['lr'], weight_decay=CFG['weight_decay'])
#scheduler = torch.optim.lr_scheduler.StepLR(optimizer, gamma=0.1, step_size=CFG['epochs']-1)
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=CFG['T_0'], T_mult=1, eta_min=CFG['min_lr'], last_epoch=-1)
#scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer=optimizer, pct_start=0.1, div_factor=25,
# max_lr=CFG['lr'], epochs=CFG['epochs'], steps_per_epoch=len(train_loader))
loss_tr = nn.CrossEntropyLoss().to(device) #MyCrossEntropyLoss().to(device)
loss_fn = nn.CrossEntropyLoss().to(device)
for epoch in range(CFG['epochs']):
train_one_epoch(epoch, model, loss_tr, optimizer, train_loader, device, scheduler=scheduler, schd_batch_update=False)
with torch.no_grad():
valid_one_epoch(epoch, model, loss_fn, val_loader, device, scheduler=None, schd_loss_update=False)
torch.save(model.state_dict(),'output/{}_fold_{}_{}'.format(CFG['model_arch'], fold, epoch))
#torch.save(model.cnn_model.state_dict(),'{}/cnn_model_fold_{}_{}'.format(CFG['model_path'], fold, CFG['tag']))
del model, optimizer, train_loader, val_loader, scaler, scheduler
torch.cuda.empty_cache()
```
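`CosineAnnealingWarmRestarts` sweeps the learning rate from the base `lr` down to `eta_min` over `T_0` epochs, then restarts (with `T_mult=1` every cycle has the same length). A sketch of the closed-form schedule; the `base_lr`, `eta_min` and `T_0` values below are illustrative, not the notebook's `CFG` settings:

```python
import math

def cosine_warm_restart_lr(base_lr, eta_min, T_0, epoch):
    """LR at a given epoch, restarting every T_0 epochs (T_mult=1)."""
    t_cur = epoch % T_0
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * t_cur / T_0))

base_lr, eta_min, T_0 = 1e-4, 1e-6, 10
schedule = [cosine_warm_restart_lr(base_lr, eta_min, T_0, e) for e in range(20)]
```

The rate starts at `base_lr`, decays along a cosine to near `eta_min` by the end of the cycle, and jumps back to `base_lr` at each restart.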
# Save Model Checkpoints to Drive
```
# All ten epoch checkpoints for fold 2 share a common prefix, so one glob copies them all
!cp -r /content/Cassava-Leaf-Disease/output/vit_base_patch16_224_fold_2_* drive/MyDrive/Kaggle-Projects/Cassava-Leaf-Disease/output/fold_2
```
```
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import cv2
import os
import scipy.signal
from htr import page_detection
from htr import word_detection
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 10.0)
def implt(img, cmp=None, t=''):
plt.imshow(img, cmap=cmp)
plt.title(t)
plt.show()
path = "./data/my-pages/t1.jpg"
image = page_detection.detect(path)
implt(image)
word_imgs = word_detection.detect(image)
#for w in word_imgs:
# implt(w)
img_copy = word_imgs[4].copy()
img = cv2.cvtColor(img_copy, cv2.COLOR_RGB2GRAY)
thresh = 255-histogram_norm(img)
thresh = cropp(thresh)
implt(thresh, 'gray', 'Binary OTSU + (Filter + TO_ZERO)')
hist = vertical_projection(thresh)
hist = (hist - np.amin(hist))/(np.amax(hist) - np.amin(hist))
h, w = thresh.shape
indexes = scipy.signal.find_peaks((np.array(hist))*h, prominence=40, width=1)
RGBmask = np.zeros((thresh.shape[0], thresh.shape[1], 3), np.uint8) + 255
count=0
for l in hist:
cv2.line(RGBmask, (count, int(h*(l))), (count, h), color=[0,255,0], thickness=1)
count += 1
for i in indexes[0]:
cv2.circle(RGBmask, (i, int(h*hist[i])), radius=2, color=[255,0,0], thickness=-1)
#cv2.line(RGBmask, (i, 0), (i, h), color=[255,0,0], thickness=1)
implt(RGBmask, cmp='gray', t='Final')
r = segment2(thresh, h, w, indexes[0])
norm = []
for rr in r:
cropped = cropp(rr)
resized = cv2.resize(255-cropped,(43,43))
norm.append(resized)
implt(resized, cmp='gray', t='Final-norm')
def segment(img, height, width, indexes):
rois = []
indexes = np.insert(indexes, 0, 0)
indexes = np.insert(indexes, len(indexes), width-1)
print("svee: ", indexes)
for i in range(len(indexes)-1):
width = indexes[i+1] - indexes[i]
if width < 40:
continue
print('W:', width)
print('H:', height)
print('B:', indexes[i])
print('\n')
roi = img[0:height, indexes[i]:indexes[i]+width]
rois.append(roi)
return rois
def segment2(img, height, width, indexes):
rois = []
indexes = np.insert(indexes, 0, 0)
indexes = np.insert(indexes, len(indexes), width-1)
print("svee: ", indexes)
first = 0
second = 1
while (first < len(indexes)) and (second < len(indexes)):
width = indexes[second] - indexes[first]
print("SS: ", width)
if width < 30:
second += 1
continue
print('W:', width)
print('H:', height)
print('B:', indexes[first])
print('\n')
roi = img[0:height, indexes[first]:indexes[first]+width]
rois.append(roi)
first = second
second += 1
return rois
def vertical_projection(img):
(h, w) = img.shape[:2]
sumCols = []
for j in range(w):
col = img[0:h, j:j+1] # y1:y2, x1:x2
sumCols.append(np.sum(col)/255)
return sumCols
def histogram_norm(img):
img = bilateral_norm(img)
add_img = 255 - cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
img = 255 - img
img = (img - np.min(img)) / (np.max(img) - np.min(img)) * 255
hist, bins = np.histogram(img.ravel(), 256, [0,256])
img = img.astype(np.uint8)
ret,thresh4 = cv2.threshold(img,np.argmax(hist)+10,255,cv2.THRESH_TOZERO)
    # NOTE: this early return short-circuits the function, so the cv2.add()
    # combination below is unreachable (likely a debugging leftover).
    return add_img
    # return cv2.add(add_img, thresh4, dtype=cv2.CV_8UC1)
def bilateral_norm(img):
img = cv2.bilateralFilter(img, 9, 15, 30)
return cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)
def cropp(img):
h,w = img.shape
top=0
down=0
left=0
right=0
halt = False
for i in range(h):
if halt:
break
for j in range(w):
if img[i,j] == 0:
halt = True
top = i-1
break
halt = False
for i in reversed(range(h)):
if halt:
break
for j in range(w):
if img[i,j] == 0:
halt = True
down = i+1
break
halt = False
for i in range(w):
if halt:
break
for j in range(h):
if img[j,i] == 0:
halt = True
left = i-1
break
halt = False
for i in reversed(range(w)):
if halt:
break
for j in range(h):
if img[j,i] == 0:
halt = True
right = i+1
break
if (top < 0): top = 0
if (down < 0): down = 0
if (left < 0): left = 0
if (right < 0): right = 0
#print('Top: ', top)
#print('Down: ', down)
#print('Left: ', left)
#print('Right: ', right)
return img[top:down, left:right]
```
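`vertical_projection` sums the ink in every pixel column and `segment2` cuts the word at peak positions, skipping cuts that would produce a slice narrower than its threshold. The same idea on a toy binary image, using plain lists and a simple empty-column cut instead of `scipy.signal.find_peaks`:

```python
def column_ink(img):
    """Sum of foreground pixels (1s) per column of a row-major binary image."""
    return [sum(row[j] for row in img) for j in range(len(img[0]))]

def cut_points(profile, min_width=2):
    """Cut wherever a column is empty, skipping cuts closer than min_width."""
    cuts, last = [0], 0
    for j, ink in enumerate(profile):
        if ink == 0 and j - last >= min_width:
            cuts.append(j)
            last = j
    cuts.append(len(profile))
    return cuts

# Two tiny "characters" separated by an empty column:
img = [[1, 1, 0, 1, 1],
       [1, 0, 0, 0, 1]]
cuts = cut_points(column_ink(img))
segments = [(cuts[i], cuts[i + 1]) for i in range(len(cuts) - 1)]
```

Each `(start, stop)` pair in `segments` plays the role of a `roi = img[0:height, start:stop]` slice in the notebook.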
```
import warnings
warnings.filterwarnings('ignore')
import sys
sys.path.insert(0,'/media/csivsw/crossOS/playground/friends_of_tracking/src/friends_of_tracking/LaurieOnTracking')
%matplotlib inline
import Metrica_IO as mio
import Metrica_Viz as mviz
import Metrica_Velocities as mvel
import Metrica_PitchControl as mpc
import numpy as np
import matplotlib.pyplot as plt
import modin.pandas as pd
pd.options.display.max_columns = None
# set up initial path to data
DATADIR = '/media/csivsw/crossOS/playground/friends_of_tracking/datahub/metrica_sports/sample-data/data'
game_id = 2 # let's look at sample match 2
# read in the event data
events = mio.read_event_data(DATADIR,game_id)
# read in tracking data
tracking_home = mio.tracking_data(DATADIR,game_id,'Home')
tracking_away = mio.tracking_data(DATADIR,game_id,'Away')
# Convert positions from metrica units to meters (note change in Metrica's coordinate system since the last lesson)
tracking_home = mio.to_metric_coordinates(tracking_home)
tracking_away = mio.to_metric_coordinates(tracking_away)
events = mio.to_metric_coordinates(events)
# reverse direction of play in the second half so that home team is always attacking from right->left
tracking_home,tracking_away,events = mio.to_single_playing_direction(tracking_home,tracking_away,events)
# Calculate player velocities
tracking_home = mvel.calc_player_velocities(tracking_home,smoothing=True)
tracking_away = mvel.calc_player_velocities(tracking_away,smoothing=True)
```
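The internals of `calc_player_velocities` are not shown here; conceptually it differentiates position with respect to time and smooths the result. A hand-rolled sketch under assumed settings (a 25 Hz frame rate, i.e. `dt=0.04`, and moving-average smoothing are illustrative choices, not necessarily Metrica's):

```python
def velocities(xs, dt=0.04, window=3):
    """Central-difference speed estimate over positions xs, then a moving average."""
    raw = [(xs[i + 1] - xs[i - 1]) / (2 * dt) for i in range(1, len(xs) - 1)]
    half = window // 2
    smoothed = []
    for i in range(len(raw)):
        chunk = raw[max(0, i - half):i + half + 1]  # window clipped at the edges
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# A player moving at a constant 2 m/s sampled every 0.04 s:
xs = [0.08 * i for i in range(10)]
v = velocities(xs)
```

Smoothing matters because frame-to-frame tracking noise is amplified by differentiation; the library exposes it as the `smoothing=True` flag used above.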
## pitch control for passes leading up to goal 2
```
# get all shots and goals in the match
shots = events[events['Type']=='SHOT']
goals = shots[shots['Subtype'].str.contains('-GOAL')].copy()
goals
# plot the 3 events leading up to the second goal
mviz.plot_events( events.loc[820:823], color='k', indicators = ['Marker','Arrow'], annotate=True )
# first get model parameters
params = mpc.default_model_params(3)
params
# evaluated pitch control surface for first pass
PPCF,xgrid,ygrid = mpc.generate_pitch_control_for_event(820, events, tracking_home, tracking_away,
params, field_dimen = (106.,68.,),
n_grid_cells_x = 50)
mviz.plot_pitchcontrol_for_event( 820, events, tracking_home, tracking_away,
PPCF, xgrid, ygrid, annotate=True )
# evaluated pitch control surface for second pass
PPCF,xgrid,ygrid = mpc.generate_pitch_control_for_event(821, events, tracking_home, tracking_away,
params, field_dimen = (106.,68.,),
n_grid_cells_x = 50)
mviz.plot_pitchcontrol_for_event( 821, events, tracking_home, tracking_away,
PPCF, xgrid, ygrid, annotate=True )
# evaluated pitch control surface for third pass
PPCF,xgrid,ygrid = mpc.generate_pitch_control_for_event(822, events, tracking_home, tracking_away,
params, field_dimen = (106.,68.,),
n_grid_cells_x = 50)
mviz.plot_pitchcontrol_for_event( 822, events, tracking_home, tracking_away,
PPCF, xgrid, ygrid, annotate=True )
```
## calculate pass probability for every successful home team pass
```
# get all home passes
home_passes = events[ (events['Type'].isin(['PASS'])) & (events['Team']=='Home') ]
home_passes.head()
# list for storing pass probabilities
pass_success_probability = []
for i,row in home_passes.iterrows():
pass_start_pos = np.array([row['Start X'],row['Start Y']])
pass_target_pos = np.array([row['End X'],row['End Y']])
pass_frame = row['Start Frame']
attacking_players = mpc.initialise_players(tracking_home.loc[pass_frame],'Home',params)
defending_players = mpc.initialise_players(tracking_away.loc[pass_frame],'Away',params)
Patt,Pdef = mpc.calculate_pitch_control_at_target(pass_target_pos,
attacking_players,
defending_players,
pass_start_pos,
params)
pass_success_probability.append( (i,Patt) )
pass_success_probability[:10]
fig,ax = plt.subplots()
ax.hist( [p[1] for p in pass_success_probability], np.arange(0,1.1,0.1))
ax.set_xlabel('Pass success probability');
ax.set_ylabel('Frequency');
# sort the passes by pitch control probability
pass_success_probability = sorted( pass_success_probability, key = lambda x: x[1] )
# identify the events corresponding to the most risky passes (pitch control < 0.5)
risky_passes = events.loc[ [p[0] for p in pass_success_probability if p[1]<0.5 ] ]
# plot the events
mviz.plot_events( risky_passes, color='k', indicators = ['Marker','Arrow'], annotate=True )
# Print events that followed those risky passes
print("Event following a risky (completed) pass")
for p in pass_success_probability[:20]:
outcome = events.loc[ p[0]+1 ].Type
print( p[1], outcome )
events.tail()
tracking_home.iloc[73600:73600+500].tail()
# Making a movie of the second home team goal
PLOTDIR = '/media/csivsw/crossOS/playground/friends_of_tracking/output/'
mviz.save_match_clip(tracking_home.iloc[73600:73600+500],
tracking_away.iloc[73600:73600+500],
PLOTDIR,fname='home_goal_2',include_player_velocities=False,
params=params,events=events, pitch_control=True,attacking='Home')
# from IPython.display import HTML
# HTML("""
# <video alt="test" controls>
# <source src="/media/csivsw/crossOS/playground/friends_of_tracking/output/home_goal_2.mp4" type="video/mp4">
# </video>
# """)
```
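The loop above scores every completed home pass with a pitch-control probability, then sorts the `(event index, probability)` pairs and keeps those under 0.5. The bookkeeping is a plain sort/filter; the values below are made up for illustration:

```python
# Hypothetical (event index, pitch-control probability) pairs
pass_success_probability = [(101, 0.92), (205, 0.31), (310, 0.77), (412, 0.48)]

# Sort ascending by probability, as in the notebook
ranked = sorted(pass_success_probability, key=lambda p: p[1])

# "Risky" completed passes: pitch control below 0.5
risky = [idx for idx, p in ranked if p < 0.5]
```

The interesting passes are exactly these: completed despite the model giving the attacking team less than even control at the target location.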
# 1. Project II - Foundations of Data Science I
## 1.1. Notes
All analyses presented here are based on the dataset provided by the Udacity Foundations of Data Science I course. They are purely descriptive, for study only, and should not be taken as definitive results; for that, further analyses would be needed, and a deep-learning-based method is recommended to properly impute the missing values, which may stem from human error.
- The dataset used was titanic-data-6.csv and its edited version titanic_edited.csv
- The [questions asked](#12-questions) are in the next section.
- Information about the data cleaning is in section [1.3](#13-data-cleaning).
- The analyses and final results are in section [1.4](#14-analysis).
- Section [1.5](#15-conclusions-and-results) contains the conclusions drawn from the dataset.
- Section [1.6](#16-limitations) contains information about the limitations of the data.
- The last section, [1.7](#17-useful-links), contains useful links to sites whose information helped me complete the project.
## 1.2. Questions
- How is the dataset composed? What are the types of the variables? Is there missing information?
  + To treat the data and carry out the analysis it is necessary to obtain this information and to know whether discrepant values exist.
- What are the descriptive measures? What is the total count for the two genders? What is the count of genders per class? How are passengers distributed by class and by age category? What is the mean passenger age per class? Are there differences between the mean passenger ages per age category in each class? Are there differences between the age categories? What was the mean fare paid per ticket, per class and per port of embarkation?
  + These questions matter for profiling the passengers. Significant differences between the classes are expected, especially between the extremes, first and third. The Titanic was famously hyped by the press as a luxury liner, and this fateful voyage was its first, so many first-class passengers were expected aboard.
- Did women and children have the highest survival rate in the shipwreck? Per class, what was the difference in age-category frequency among the survivors, and how does that relate to the total number of passengers, per class and overall?
  + As expected in accidents, women and children have priority during evacuation. Historically, many lifeboats are known to have been launched with very few people in them, so it is of interest whether the number of survivors differed between classes and how many people survived overall.
- How many adults were in each class?
  + Separating the three passenger classes and the age categories makes it possible to investigate the probability of being in any given class.
- The frequency of people of various ages on the Titanic, by class.
- The frequency of adults compared with the other age categories aboard.
  + That there were more adults than the other categories is clear, but what is their frequency relative to the rest?
- Which ports had the highest embarkation rates, and which the lowest?
  + Grouping the tickets, their fares and the port of embarkation gives the average for the ports with the highest and lowest embarkation rates, as well as the count of passengers each port contributed to the Titanic.
- Which social classes have the largest numbers of people per ticket?
  + Grouping the tickets and counting the names attached to each makes it possible to find which tickets carried the most people.
## 1.3. Data Cleaning
The data cleaning is handled by the `titanic_dataset_edit.py` script, which performs most of the modifications. A few modifications are not made by the script because the data had to be kept in its correct state.
- First, the script prints the columns, their types and how many values each column holds. This lets us spot columns with missing values and columns that will not be useful for the project;
- Next, the number of unique items per column is checked;
- The `PassengerId` and `Cabin` columns are dropped. The first holds only an index that is generated anyway when the CSV is loaded; the second holds only the code of the cabin where each passenger slept and is not useful for the analyses;
- The `Pclass` column is renamed to `passenger_class` for readability; the remaining names are kept;
- All columns are renamed to lowercase, with underscores between words. The names are then reviewed to confirm they are correct;
- Null ages are counted per passenger class, the mean is computed, and it is applied to those blank values as an age estimate;
- A new column is created with each passenger's age category: `Criança` (child) for people under 12, `Adolescente` (adolescent) for people over 12 and up to 18, and `Adulto` (adult) for everyone over 18;
- Finally, a new dataset is written containing the edited records.
- In the script that generates the tables, a temporary column was added representing the number of individuals per ticket.
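The age-category rule described above (child up to 12, adolescent up to 18, adult beyond, unknown when the age is missing) can be sketched as a plain function; the Portuguese labels are the actual values used in the dataset:

```python
def age_category(age):
    """Map an age (or None for a missing age) to the project's category labels."""
    if age is None:
        return 'Desconhecido'   # unknown
    if age <= 12:
        return 'Criança'        # child
    if age <= 18:
        return 'Adolescente'    # adolescent
    return 'Adulto'             # adult

labels = [age_category(a) for a in (4, 15, 40, None)]
```

This mirrors the `age <= 12`, `12 < age <= 18`, `age > 18` queries used later in the cleaning code.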
```
import numpy as np
import pandas as pd
""" Transformation of the Titanic dataset provided in the Udacity
Foundations of Data Science I course.
"""
df_titanic = pd.read_csv('titanic-data-6.csv')
```
How is the dataset composed? What are the types of the variables? Is there missing information?
```
print(df_titanic.head(1), '\n')
print(df_titanic.info(), '\n')
print(df_titanic.nunique(), '\n')
# Force the labels to plot
# import matplotlib.pyplot as plt
# %matplotlib inline
# N = 3
# f_class = df_titanic.query('Survived == 1').groupby('Pclass')['Survived'].count()[1]
# s_class = df_titanic.query('Survived == 1').groupby('Pclass')['Survived'].count()[2]
# t_class = df_titanic.query('Survived == 1').groupby('Pclass')['Survived'].count()[3]
# ind = np.arange(N)
# width = 0.5
# fig, ax = plt.subplots()
# p1 = ax.bar(ind, [f_class, s_class, t_class], width, color=['r', 'g', 'b'])
# ax.set_xticks(ind + width/15.)
# ax.set_yticks(np.arange(0, max((f_class, s_class, t_class)), 10))
# ax.set_xticklabels(('1', '2', '3'))
# plt.legend((p1[0], p1[1], p1[2]), ('1', '2', '3'));
# fig, ax = plt.subplots()
# p1 = ax.bar([1,2,3], df_titanic.query('Survived == 1').groupby('Pclass')['Survived'].count(), color=('r','g', 'b'))
# ax.set_title('texto')
# plt.legend((p1[0], p1[1], p1[2]), ('1', '2', '3'))
# import matplotlib.pyplot as plt
# %matplotlib inline
# val1 = df_titanic.query('Pclass == 1').Survived.value_counts()[1]
# val2 = df_titanic.query('Pclass == 1').Survived.value_counts()[0]
# val3 = df_titanic.query('Pclass == 2').Survived.value_counts()[1]
# val4 = df_titanic.query('Pclass == 2').Survived.value_counts()[0]
# val5 = df_titanic.query('Pclass == 3').Survived.value_counts()[1]
# val6 = df_titanic.query('Pclass == 3').Survived.value_counts()[0]
# loc = [0,1,2,3,4,5]
# heights = [val1,val2,val3,val4,val5,val6]
# labels = ['1st vivo','1st morto','2st vivo','2st morto','3st vivo','3st morto']
# colors = ['r','r','g','g','b','b']
# fig, ax = plt.subplots()
# p1 = ax.bar(loc, heights, tick_label=labels, color=colors)
# ax.set_title('titulo')
# ax.set_xlabel('rotulo x')
# ax.set_ylabel('rotulo y')
# ax.set_xticklabels(labels, rotation=90);
# temp = df_titanic.loc[(df_titanic['Sex'] == 'female')]
# temp.loc[df_titanic['Age'].isna()]['Age'].isna().sum()
# temp2 = temp.loc[df_titanic['Age'].isna()]
# temp.loc[df_titanic['Age'].isna(), 'Age'] = temp2.fillna(df_titanic.query('Sex == "female"')['Age'].mean().round())
# temp['Age'].isna().sum()
# df_titanic.loc[df_titanic['Age'].isna(), 'Age'] = temp
# print(df_titanic.loc[(df_titanic['Sex'] == 'female'), 'Age'].isna().sum())
# print(df_titanic.loc[(df_titanic['Sex'] == 'male'), 'Age'].isna().sum())
# df_titanic[(df_titanic['Sex'] == 'female') & (df_titanic['Age'].isna())] = df_titanic.query('Sex == "female"')['Age'].mean()
# print(df_titanic.query('Sex == "female"')['Age'].isna().sum())
# print(df_titanic.query('Sex == "male"')['Age'].isna().sum())
```
The dataset has 12 columns whose variables are integers, strings and floats. At least one variable has missing values.
```
# The passenger id column is dropped, since it duplicates the index.
# The Cabin column will not be used, so it is dropped as well.
# Name is kept only for studying kinship: when tickets are grouped later,
# returning the names of people who share a surname makes it possible to
# tell whether passengers were related or not.
# A few transformations are applied to prepare the data.
df_titanic.drop(columns=['PassengerId', 'Cabin'], inplace=True)
# Rename Pclass to Passenger Class.
df_titanic.rename(columns={'Pclass':'Passenger Class'}, inplace=True)
# Rename columns.
def rename_cols(df:pd.DataFrame, sep:str='_', lowcase:bool=True):
    """ Rename the columns of a DataFrame.
    Renames the columns of a DataFrame using a given separator.
    :param df: DataFrame whose columns will be renamed.
    :param sep: String used as the word separator in column names.
    :param lowcase: Boolean controlling whether names are lowercased.
    """
    if lowcase:
        df.rename(
            columns=lambda x: x.strip().lower().replace(' ', sep), inplace=True)
    else:
        df.rename(
            columns=lambda x: x.strip().replace(' ', sep), inplace=True)
rename_cols(df_titanic)
# Check that the columns were renamed correctly
df_titanic.columns
```
Identify and handle the columns with missing data.
```
# Count null values in the `embarked` column.
print('Embarkation ports not defined: {}'.format(df_titanic['embarked'].isnull().sum()))
# Count embarkations per port: C (Cherbourg), Q (Queenstown) and S (Southampton).
print('Total embarkations at S (Southampton), C (Cherbourg) and Q (Queenstown).')
print(df_titanic['embarked'].value_counts())
embark_local = df_titanic['embarked'].value_counts().index.tolist()[0]
print('Port with the most embarkations: {}'.format(embark_local))
# Null values in embarked receive the most frequent port.
df_titanic['embarked'].fillna(embark_local, inplace=True)
```
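Filling the missing `embarked` values with the most frequent port is mode imputation. With `collections.Counter` the same choice looks like this (the toy list below is illustrative, not the real column):

```python
from collections import Counter

embarked = ['S', 'C', 'S', 'Q', 'S', None, 'C', None]

# Most common non-missing value becomes the fill value
mode = Counter(v for v in embarked if v is not None).most_common(1)[0][0]
filled = [v if v is not None else mode for v in embarked]
```

Mode imputation is reasonable here because only a couple of values are missing; with many missing values it would bias the port distribution.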
Identify and handle the columns with missing data, per class.
```
# Count null ages in first class.
print('First-class passengers without a defined age: {}'
      .format(df_titanic.query(
          'passenger_class == 1')['age'].isnull().sum()))
df_mean_firstclass_age = df_titanic.query(
    'passenger_class == 1')['age'].dropna().mean()
# print(df_mean_firstclass_age)
# NOTE: this fillna has no class filter, so missing ages in *all* classes are
# replaced with the first-class mean; that is why the per-class fills for the
# second and third classes below are commented out (no NaNs remain).
df_titanic['age'].fillna(df_mean_firstclass_age, inplace=True)
# print(df_titanic.query('passenger_class == 1')['age'].isnull().sum())
# Count null ages in second class.
print('Second-class passengers without a defined age: {}'
      .format(df_titanic.query(
          'passenger_class == 2')['age'].isnull().sum()))
# df_mean_second_age = df_titanic.query(
#     'passenger_class == 2')['age'].dropna().mean()
# print(df_mean_second_age)
# df_titanic['age'].fillna(df_mean_second_age, inplace=True)
# print(df_titanic.query('passenger_class == 2')['age'].isnull().sum())
# Count null ages in third class.
print('Third-class passengers without a defined age: {}'
      .format(df_titanic.query(
          'passenger_class == 3')['age'].isnull().sum()))
# df_mean_third_age = df_titanic.query(
#     'passenger_class == 3')['age'].dropna().mean()
# print(df_mean_third_age)
# df_titanic['age'].fillna(df_mean_third_age, inplace=True)
# print(df_titanic.query('passenger_class == 3')['age'].isnull().sum())
```
Convert the values of the age column to integers. Children under one year old have their ages rounded down to zero.
```
df_titanic['age'] = df_titanic['age'].apply(lambda x: np.floor(x)).astype(int)
df_titanic.head()
```
What is the total number of passengers per ticket? An additional column with this information was added to the dataset.
```
# Create a column with the ticket frequencies
df_titanic['freq'] = df_titanic.groupby('ticket')['ticket'].transform('count').astype(int)
df_titanic.head()
```
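`groupby('ticket').transform('count')` attaches each ticket's group size back onto every row, keeping the original row order. The equivalent with a plain `Counter` (the toy ticket list is illustrative):

```python
from collections import Counter

tickets = ['A1', 'B2', 'A1', 'C3', 'A1', 'B2']
counts = Counter(tickets)

# One frequency per row, aligned with the original order, like transform('count')
freq = [counts[t] for t in tickets]
```

This is the key difference between `transform` and `agg`: `agg` would return one row per ticket, while `transform` broadcasts the result back to the original shape.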
It proved useful to bucket the passenger ages into categories to help understand the passenger profile. Passengers up to 12 years old were declared children, passengers over 12 and up to 18 were considered adolescents, and the rest were considered adults.
```
# Create a new column with the age category.
df_criancas = df_titanic.query('age <= 12').copy()
df_adolescente = df_titanic.query('12 < age <= 18').copy()
df_adulto = df_titanic.query('age > 18').copy()
df_sem_idade = df_titanic[df_titanic['age'].isnull()].copy()
idade_crianca = np.repeat('Criança', df_criancas.shape[0])
idade_adolescente = np.repeat('Adolescente', df_adolescente.shape[0])
idade_adulto = np.repeat('Adulto', df_adulto.shape[0])
idade_desconhecido = np.repeat('Desconhecido', df_sem_idade.shape[0])
df_criancas['age_category'] = idade_crianca
df_adolescente['age_category'] = idade_adolescente
df_adulto['age_category'] = idade_adulto
df_sem_idade['age_category'] = idade_desconhecido
# DataFrame.append was removed in pandas 2.0; pd.concat is the equivalent.
df_titanic_edited = pd.concat(
    [df_criancas, df_adolescente, df_adulto, df_sem_idade])
df_titanic_edited.sort_index(inplace=True)
```
A new file is written after the modifications.
```
df_titanic_edited.to_csv('titanic_edited.csv', index=False)
```
## 1.4. Analysis
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
""" Generation of tables, plots and other elements from the edited Titanic
dataset provided in the Udacity Foundations of Data Science I course.
"""
# Some analyses were not performed because of missing observations in the
# dataset: the SibSp and Parch columns indicate passengers travelling with
# companions, but those companions themselves are absent from the data.
# Format floats to four decimal places.
pd.set_option('display.float_format', '{:.4f}'.format)
df_titanic = pd.read_csv('titanic_edited.csv')
# Preview the first rows.
df_titanic.head(5)
# df_titanic.loc[df_titanic['sex'] == np.nan, 'age'] = df_titanic.query('sex == "female"')['age'].mean()
# df_titanic.query('sex == "female"')['age'].isna().sum()
print(df_titanic.query('sex == "male"')['age'].isna().sum())
print(df_titanic.query('sex == "female"')['age'].isna().sum())
# df_titanic.query('sex == "female"')['age'].fillna(df_titanic.query('sex == "female"')['age'].mean(), inplace=True)
# df_titanic.query('sex == "female"')['age'].isna().sum()
# df_titanic.query('sex == "male"')['age'].isna().sum()
# Column names.
print('Columns: ', *(x for x in df_titanic.columns[1:]), sep=' | ')
```
What is the total number of surviving passengers per age category and per class?
```
# Table 1: passengers by social class, with the total of individuals per age
# category and the survivors in each category.
df_1 = df_titanic.query('survived == 1').groupby(
    ['passenger_class', 'age_category'])['survived'].count().reset_index()
class_sum = df_1['survived'].sum()
df_1.loc['total'] = np.array(['-','-',class_sum])
df_1['survived'] = df_1['survived'].astype(int)
df_1.to_csv('tables/t_1.csv', index=False)
print('Table 1: passengers by social class, total individuals per age category and survivors per category.')
df_1
```
*Table 1* shows that the large majority of the Titanic's survivors, in every passenger class, were adults, most of them from first class. These figures assume that 30 unidentified first-class passengers belong to the adult category, via the replacement of the missing ages (NaNs) with the class mean. In total, the survivors number 342 passengers.
Graphically, what is the total of surviving passengers per age category and per class?
```
# Plot: survivors by class and age category.
# This plot raises a seaborn warning that was left unresolved.
g1 = sns.catplot(data=df_titanic, x='passenger_class', y='survived',
                 kind='bar', hue='age_category')
g1._legend.set_title('Age category')
plt.xlabel('Class')
plt.ylabel('Survivors')
plt.title('Survivors by class and age category')
sns.despine(offset=5, trim=True)
g1.savefig('imgs/g1-survived-class-by-age-cotegory.png')
```
*Figure 1* helps visualize the share of survivors in each age category within each passenger class. It shows clearly that most survivors came from first class, and that in second class nearly all children survived.
What is the total number of passengers per class and age category?
```
# Plot: passenger counts by class and age category.
g2 = sns.catplot(data=df_titanic, x='passenger_class',
                 kind='count', hue='age_category')
g2._legend.set_title('Age category')
plt.xlabel('Class')
plt.ylabel('Count')
plt.title('Count by class and age category')
sns.despine(offset=5, trim=True)
g2.savefig('imgs/g2-count-class-by-age-cotegory.png')
```
*Figure 2* shows that the passenger population consisted mostly of adults. For every age category, the largest discrepancy is in third class relative to the others. The population of children in first class was the smallest among the classes.
Graphically, how does age break down by category within each class?
```
# Box plot: age category by class.
g3 = sns.catplot(y="age_category", x="age", row="passenger_class",
                 orient="h", height=2, aspect=4, kind="box",
                 data=df_titanic.query('age_category != "Desconhecido"'))
g3.set_ylabels('Age category')
g3.set_xlabels('Age')
g3.set_titles("Class {row_name}")
sns.despine(offset=5, trim=True)
g3.savefig('imgs/g3-box-class-by-age-cotegory.png')
```
The same pattern appears in *Figure 3*, with the additional information that there are outliers in the adult category in every class (*Figure 4*). There is also little variability within the categories, except for children in the first and third classes, adolescents in the second class and adults in the third class.
Since adults make up most of the passenger population, how are adults distributed across the classes? Do they differ much?
```
# Box plot: age of adult passengers by social class.
f, g8 = plt.subplots(figsize=(6, 6))
sns.boxplot(x="passenger_class", y="age",
            data=df_titanic.query('age_category != "Desconhecido" & age > 18'))
plt.ylabel('Age')
plt.xlabel('Class')
plt.title("Age of adults by class")
sns.despine(offset=5, trim=True);
g8.figure.savefig('imgs/g8-box-adult-age-by-social.png')
```
*Figure 4* shows a box plot of the adult passengers in the different classes. There are outliers in every class. The median adult age in the third and first classes is similar, and both show greater variability. First class is made up of older people, while third-class adults skew younger. Second class appears more homogeneous than the others.
How are the ages distributed within each class?
```
# Histogram: age by social class.
# NOTE: this cell originally referenced df_titanic_edited, which is not defined
# in this notebook; df_titanic (loaded from titanic_edited.csv) is the intended frame.
g = sns.FacetGrid(df_titanic, col="passenger_class", height=5)
g.map(plt.hist, "age")
g.set_ylabels('Frequency')
g.set_xlabels('Age')
g.set(xticks=np.arange(0, 85, 5),
      xticklabels=['0', '', '10', '', '20', '', '30', '',
                   '40', '', '50', '', '60', '', '70', '', '80'])
sns.despine(offset=5, trim=True);
g.savefig('imgs/g11-hist-age-first-to-third-class.png')
```
The first frequency plot in *Figure 5*, representing first class, shows a symmetric distribution whose highest frequency is adults between 35 and 40 years old. The second and third plots show positive skew; in the third plot the most frequent ages range from 40 to 45 and belong to the adult category.
How are the genders distributed by class?
```
# Plot: passenger counts by class and gender.
g4 = sns.catplot(data=df_titanic, x='passenger_class', kind='count', hue='sex')
g4._legend.set_title('Gender')
plt.xlabel('Class')
plt.ylabel('Count')
plt.title('Count of people by class and gender')
sns.despine(offset=5, trim=True)
g4.savefig('imgs/g4-count-passengers-class.png')
```
*Figure 6* shows that the count of men aboard is higher than that of women in every class, regardless of age. The predominance of male passengers in third class stands out.
What are the totals of passengers and survivors by gender and class?
```
# Table 2: passengers by class, with totals per sex and survivors per group.
df_2 = df_titanic.groupby(['passenger_class', 'sex']).agg(
    {'name':'count', 'survived':'sum'}).rename(
        columns={'name':'total'})
df_2.to_csv('tables/t_2.csv')
print('Table 2: passengers by class with totals per sex and survivors.')
df_2
```
*Table 2* shows that both men and women are most numerous in the third class, as already observed in *Figure 6*. Most survivors were female (*Table 4*); in the first and second classes, nearly all female passengers survived. This is consistent with *Figure 6*, which shows the passenger counts on board, with a marked excess of male passengers in the third class.
What is the total number of passengers and survivors by class?
```
# Table 3: Passenger count and survivors by class.
df_3 = df_titanic.groupby('passenger_class').agg(
{'name':'count', 'survived': 'sum'}).rename(
columns={'name':'total'})
df_3.to_csv('tables/t_3.csv')
print('Table 3: Total passengers and survivors by class.')
df_3
```
*Table 3* shows that the large majority of passengers were in the third class, twice as many as in the first class; yet only about one quarter of them survived, while the first class had a survival rate above 50%.
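The class-level survival rates quoted above can be reproduced with a short `groupby`; this is a minimal sketch on a small synthetic frame (hypothetical values — the real analysis would use `df_titanic`, which has the same `passenger_class` and `survived` columns):

```python
import pandas as pd

# Tiny synthetic stand-in for df_titanic (made-up rows, not the real data).
df = pd.DataFrame({
    'passenger_class': [1, 1, 1, 3, 3, 3, 3],
    'survived':        [1, 1, 0, 1, 0, 0, 0],
})

# Survival rate per class: the mean of the 0/1 survived flag.
rates = df.groupby('passenger_class')['survived'].mean()
print(rates)
```

On the real dataset, the same one-liner yields the ~25% (third class) and >50% (first class) rates discussed above.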
Graphically, how does the total number of passengers differ between classes?
```
# Passenger count by class.
g5 = sns.catplot(data=df_titanic, x='passenger_class', kind='count')
plt.xlabel('Class')
plt.ylabel('Count')
plt.title('Passenger count by class.')
sns.despine(offset=5, trim=True)
g5.savefig('imgs/g5-count-passenger-class.png')
```
*Figure 7* shows the number of passengers per class. The third class had the largest number of passengers, and the second class the smallest.
Graphically, what is the numerical difference between genders?
```
# Count of people by gender.
g6 = sns.catplot(data=df_titanic, x='sex', kind='count')
plt.xlabel('Gender')
plt.ylabel('Count')
plt.title('Count of people by gender')
sns.despine(offset=5, trim=True)
g6.savefig('imgs/g6-count-people-by-gender.png')
```
*Figure 8* shows that the passenger population was mostly male, almost double the total number of female passengers.
What was the total number of survivors by gender?
```
# Table 4: Total individuals and survivors by gender.
df_4 = df_titanic.groupby('sex').agg(
{'name':'count', 'survived':'sum'}).rename(columns={'name':'total'})
df_4.to_csv('tables/t_4.csv')
print('Table 4: Total individuals and survivors by gender.')
df_4
```
*Table 4* confirms that passengers were mostly male, as also seen in *Figure 8*, while survival was higher among women. This indicates that women had a higher survival rate than men, which is expected in an accident of this scale.
What was the total number of survivors by age category?
```
# Table 5: Count of people and survivors by age category.
df_5 = df_titanic.groupby('age_category').agg(
{'name':'count', 'survived':'sum'}).rename(columns={'name':'total'})
df_5.to_csv('tables/t_5.csv')
print('Table 5: Count of people and survivors by age category.')
df_5
```
The large majority of passengers were adults, with children and adolescents in the minority (*Figure 9*). Deaths were most numerous among adults, 63.83%, and more than 50% of adolescents died in the sinking. In total, 57.97% of children, 37.52% of adults, and 42.86% of adolescents survived, as shown in Table 5. The highest survival rate therefore belongs not to adults but to children, at almost 58%.
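Percentages like those above come from a row-normalized cross-tabulation of age category against survival; a minimal sketch with synthetic rows (the real frame, `df_titanic`, has the same `age_category` and `survived` columns):

```python
import pandas as pd

# Synthetic stand-in rows; category labels here are hypothetical.
df = pd.DataFrame({
    'age_category': ['child', 'child', 'adult', 'adult', 'adult', 'teen'],
    'survived':     [1,        0,       0,       0,       1,       0],
})

# Row-normalized crosstab: share of survivors (column 1) within each category.
ct = pd.crosstab(df['age_category'], df['survived'], normalize='index')
print(ct)
```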
Graphically, what was the difference between age categories among the passengers?
```
# Count of people by age category.
g7 = sns.catplot(data=df_titanic, x='age_category', kind='count')
plt.xlabel('Age Category')
plt.ylabel('Count')
plt.title('Count of people by age category')
sns.despine(offset=5, trim=True)
g7.savefig('imgs/g7-count-pearson-by-age-category.png')
```
Most passengers were adults (*Figure 9*).
What did the summary of the passengers' age categories look like?
```
# Table 6: Descriptive statistics of passenger age categories.
df_6 = df_titanic.groupby('age_category')['age'].describe().dropna()
df_6.to_csv('tables/t_6.csv')
print('Table 6: Descriptive statistics of passenger age categories.')
df_6
```
*Table 6* presents descriptive statistics for the age categories. Overall, adolescents have a mean age of approximately 17 years (standard deviation 1.44), ranging from 13 to 18; adults average approximately 35 years (standard deviation 10.61), ranging from 19 to 80; and children average approximately 5 years (standard deviation 3.39), with a maximum of 12.
What did the summary of the passengers' age categories look like by class?
```
# Table 7: Descriptive statistics of passenger age categories by class.
df_7 = df_titanic.groupby(
['age_category', 'passenger_class'])['age'].describe().dropna()
df_7.to_csv('tables/t_7.csv')
print('Table 7: Descriptive statistics of passenger age categories by class.')
df_7
```
*Table 7* breaks down the descriptive statistics of the age categories by passenger class. It shows that first-class adults are the oldest, with a mean age of 40 years, while adolescents show little variation in mean age across classes, as does the child category.
How were passenger counts and mean fares distributed by class and port of embarkation?
```
# Table 8: Mean fare by port of embarkation and class.
df_8 = df_titanic.groupby(
['passenger_class', 'embarked']).agg(
{'fare':'mean', 'name':'count'}).rename(
columns={'fare':'fare_mean','name':'total'})
df_8.to_csv('tables/t_8.csv')
print('Table 8: Mean fare by port of embarkation and class:')
df_8
```
*Table 8* breaks down the mean fare at each of the Titanic's ports of embarkation: C (Cherbourg), Q (Queenstown), and S (Southampton). The most expensive first-class tickets were bought in Cherbourg (*Figure 13*), which saw 168 passengers in total, most of them in first class (Figure 12), while Southampton handled the largest passenger traffic. Very few first- and second-class passengers embarked at Queenstown, only five.
```
# Mean fare per class, counting each ticket's fare only once
# (a single ticket can cover several passengers).
fare_sum = df_titanic.iloc[df_titanic['ticket'].drop_duplicates().index].groupby('passenger_class')['fare'].sum()
print('Mean first-class fare: {:.2f}'.format(
fare_sum[1] / df_titanic.query('passenger_class == 1')['fare'].count()))
print('Mean second-class fare: {:.2f}'.format(
fare_sum[2] / df_titanic.query('passenger_class == 2')['fare'].count()))
print('Mean third-class fare: {:.2f}'.format(
fare_sum[3] / df_titanic.query('passenger_class == 3')['fare'].count()))
```
Are there discrepant fare values between the classes? Does the monetary distribution of ticket prices differ across classes?
```
# Box plot of ticket fare by passenger class.
f, g9 = plt.subplots(figsize=(8, 8))
sns.boxplot(x="passenger_class", y="fare", data=df_titanic)
plt.ylabel('Ticket Fare')
plt.xlabel('Class')
plt.title("Passenger class by ticket fare")
sns.despine(offset=5, trim=True)
g9.figure.savefig('imgs/g9-box-class-by-ticket-fare.png')
```
*Figure 10* shows that there are outliers in the fare values, since some expensive tickets admitted more than one passenger. Because the dataset has missing data, we suspect these outliers correspond to tickets covering additional passengers not included in the dataset.
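The outliers drawn by the box plot can also be counted directly with the usual 1.5×IQR rule; a hedged sketch on a handful of synthetic fares (the real analysis would apply this to `df_titanic['fare']`):

```python
import pandas as pd

# Synthetic fare values, chosen only to illustrate the rule.
fares = pd.Series([7.25, 8.05, 13.0, 26.0, 30.0, 512.33])

# 1.5*IQR upper fence, the same criterion seaborn's boxplot uses by default.
q1, q3 = fares.quantile(0.25), fares.quantile(0.75)
upper = q3 + 1.5 * (q3 - q1)
outliers = fares[fares > upper]
print(len(outliers))
```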
At which ports did the most passengers embark, by class?
```
# Bar chart of fare by port of embarkation and passenger class.
g10 = sns.catplot(data=df_titanic, x='passenger_class',
y='fare', kind='bar', hue='embarked')
g10._legend.set_title('Port of Embarkation')
plt.xlabel('Class')
plt.ylabel('Ticket Fare')
plt.title('Ticket fare by port of embarkation and class.')
sns.despine(offset=5, trim=True)
g10.savefig('imgs/g10-bar-embarked-by-class.png')
```
*Figure 11* visualizes the mean fare per class, with C (Cherbourg), Q (Queenstown), and S (Southampton). It shows that tickets bought in Cherbourg were the most expensive for both the first and second classes. For the third class, fares at Queenstown showed no large price differences.
Which ticket has the largest number of passengers per class?
```
# Tickets with the largest number of first-class passengers.
print('Table 9 - Tickets with the largest number of first-class passengers:')
df_titanic_most_pc = df_titanic.sort_values(by=['freq', 'ticket'], ascending=False).query(
'passenger_class == 1').head(10)
df_titanic_most_pc.to_csv('tables/t_9.csv', index=False)
df_titanic_most_pc
# Tickets with the largest number of second-class passengers.
print('Table 10 - Tickets with the largest number of second-class passengers:')
df_titanic_most_sc = df_titanic.sort_values(by=['freq', 'ticket'], ascending=False).query(
'passenger_class == 2').head(10)
df_titanic_most_sc.to_csv('tables/t_10.csv', index=False)
df_titanic_most_sc
# Tickets with the largest number of third-class passengers.
print('Table 11 - Tickets with the largest number of third-class passengers:')
df_titanic_most_tc = df_titanic.sort_values(by=['freq', 'ticket'], ascending=False).query(
'passenger_class == 3').head(10)
df_titanic_most_tc.to_csv('tables/t_11.csv', index=False)
df_titanic_most_tc
```
*Tables 9 to 11* show the tickets with the largest number of passengers per class. In the first class, the largest ticket covers four people, most of whom survived the sinking. The third class had more than one ticket with seven passengers, all of whom died; however, online research by family name shows that the dataset does not contain the full record of the Sage family, which in fact comprised 11 passengers on the same ticket.
Profile of the victims of the sinking
How many people died, by class and age category?
```
# Table 12: Victims by class, with totals by age category.
df_12 = df_titanic.query('survived == 0').groupby(
['passenger_class', 'age_category'])['survived'].count().reset_index()
class_sum = df_12['survived'].sum()
df_12.loc['Total'] = np.array(['-','-',class_sum])
df_12['survived'] = df_12['survived'].astype(int)
df_12.to_csv('tables/t_12.csv')
print('Table 12: Victims by class, with totals by age category.')
df_12
```
*Table 12* shows that only one child and one adolescent in the first class did not survive the sinking. All second-class children survived, while in the third class 33 adolescents and 28 children died. The largest number of deaths, 311, occurred among third-class adults.
How many people died, by gender?
```
# Table 13: Total victims by gender.
df_13 = df_titanic.query('survived == 0').groupby(['sex'])['survived'].count().reset_index()
df_13.to_csv('tables/t_13.csv')
print('Table 13: Total victims by gender.')
df_13
```
Males accounted for the largest share of deaths, 85.25%. This was expected, since males were the most numerous group on board and evacuation protocol gives priority to women and children in accidents.
What is the age profile of the people who died, by class?
```
# Table 14: Description of victims by class.
df_14 = df_titanic.query('survived == 0').groupby('passenger_class')['age'].describe()
df_14.to_csv('tables/t_14.csv')
print('Table 14: Description of victims by class.')
df_14
```
First-class victims were the oldest on average, approximately 43 years (standard deviation 13.83), with the youngest victim aged 2 and the oldest 71. Second-class victims averaged approximately 34 years (standard deviation 11.75), ranging from 16 to 70. In the third class, the typical victim averaged approximately 30 years (standard deviation 11.67), ranging from 1 to 74.
What is the mean age of the victims of the sinking?
```
# Table 15: Mean age of the deceased.
df_15 = df_titanic.query('survived == 0')['age'].describe()
df_15.to_csv('tables/t_15.csv')
print('Table 15: Mean age of the deceased.')
df_15
```
The mean victim age was approximately 32 years (standard deviation 12.82), within the adult population. The youngest victim was a one-year-old child and the oldest a 74-year-old adult, both third-class passengers.
What was the mean age of survivors, by class?
```
# Table 16: Mean age of survivors by class.
df_16 = df_titanic.query('survived == 1').groupby('passenger_class')['age'].describe()
df_16.to_csv('tables/t_16.csv')
print('Table 16: Mean age of survivors by class.')
df_16
```
There are differences in mean age between the first class and the others, which was expected since first-class passengers were older than those in the other classes. In the first class, the mean age was approximately 36 years (standard deviation 13); the youngest survivor was under one year old, as in the other classes, and the oldest was 80. In the second class, the mean age was approximately 26 years (standard deviation 15), and the oldest survivor was 62. In the third class, the mean age was approximately the same as in the second class, and the oldest survivor was 63.
What was the mean age of survivors overall?
```
# Table 17: Mean age of survivors.
df_17 = df_titanic.query('survived == 1')['age'].describe()
df_17.to_csv('tables/t_17.csv')
print('Table 17: Mean age of survivors.')
df_17
```
The mean survivor age was approximately 30 years (standard deviation 14), within the adult population. The youngest survivor was a child under one year old and the oldest an 80-year-old adult, the latter a first-class passenger.
## 1.5. Conclusions
Most of the Titanic's passengers died in the sinking. In total, only 57.97% of children, 37.52% of adults, and 42.86% of adolescents survived. Of the survivors, 39.77% were from the first class, evidence that this class was prioritized during the evacuation: 62.96% of all first-class passengers survived. In the second and third classes, only 47.28% and 24.24% survived, respectively, together accounting for 60.23% of the survivors.
Children had the highest survival rate, 57.97%, followed by adolescents with 42.86% and adults with 36.17%. By gender, 74.20% of women survived and only 25.80% of men, even though men made up the majority of the passenger population.
First-class tickets were the most expensive, averaging $\$43.65$, against averages of $\$13.32$ and $\$8.09$ for the second and third classes, respectively. As for embarkation, the Titanic called at three ports: Cherbourg, Queenstown, and Southampton. At Cherbourg, 168 people embarked in total; this port saw the highest mean ticket prices for the first and second classes, with 85 first-class passengers (mean ticket price $\$104.72$) and 17 second-class passengers (mean ticket price $\$25.36$). Southampton saw the largest number of embarkations, 646 people in total, 72.50% of the Titanic's passengers in the dataset: 91 in first class, 125 in second, and 279 in third, with a mean third-class fare of $\$14.64$.
The victim profile shows that adults made up the overwhelming majority, 87.43% of deaths, of which 64.79% were in the third class; adolescents follow with a low share of deaths at only 7.29%, and children suffered the fewest losses at 5.28%. Only one first-class child lost its life, against 28 third-class children; no second-class children died in the sinking.
Of all deaths, 468 passengers were male, representing 85.25% of fatalities, men being the majority of the total passenger population. The mean victim age was approximately 32 years, with a standard deviation of 13, placing the average victim in the adult category. The youngest victim was a one-year-old child and the oldest a 74-year-old adult, both from the third class. That class also had the largest overall number of deaths, 67.76% of the total.
On average, survivors were 30 years old and victims 32, indicating that the typical survivor was an adult, as expected given that there were more adults than children and adolescents on board. In the first class, survivors averaged 36 years and victims 43; in the second class, survivors averaged 26 and victims 34; in the third class, survivors averaged 23 and victims 30. This suggests that, per class, the average survivor profile is that of a young adult.
In short, the dataset analyzed indicates that the average survivor profile is a first-class, female passenger, regardless of age category.
## 1.6 Limitations
An initial inspection revealed some factors that may limit the analysis: Age, Cabin, and Embarked.
- Measures taken:
- Age: Missing values were filled with the mean age of the passenger's class.
- Cabin: Since this column is not essential to the analysis, it was removed from the dataset.
- Embarked: Very few missing values; these were filled with the port with the largest number of embarkations.
For Embarked, the single-letter codes were kept to simplify visualization. The Age column values were converted to integers.
From the ages, a new column `age_category` was created to distinguish age categories. A frequency column `freq` was also created, containing the total number of passengers per ticket.
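The imputation steps described above can be sketched as follows; the column names mirror `df_titanic`, but the frame here is synthetic and the exact values are illustrative only:

```python
import pandas as pd

# Synthetic stand-in for the raw Titanic frame (hypothetical rows).
df = pd.DataFrame({
    'passenger_class': [1, 1, 2, 2],
    'age': [40.0, None, 20.0, 30.0],
    'embarked': ['S', 'C', None, 'S'],
})

# Age: fill missing values with the mean age of the passenger's class,
# then keep ages as integers, as in the report.
df['age'] = df.groupby('passenger_class')['age'].transform(
    lambda s: s.fillna(s.mean()))
df['age'] = df['age'].astype(int)

# Embarked: fill the few missing values with the most frequent port.
df['embarked'] = df['embarked'].fillna(df['embarked'].mode()[0])
print(df)
```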
## 1.7. Useful Links
This is a list of links used during the project.
- [Change Figure Size](https://stackoverflow.com/questions/31594549/how-do-i-change-the-figure-size-for-a-seaborn-plot/31597278)
- [Save Figure in Seaborn](https://stackoverflow.com/questions/33616557/barplot-savefig-returning-an-attributeerror)
- [Ploting with Seaborn](https://www.kaggle.com/princeashburton/plotting-with-seaborn)
- [Histograms and Density Plots in Python](https://towardsdatascience.com/histograms-and-density-plots-in-python-f6bda88f5ac0)
- [Adjust Ticks in Seaborn](https://github.com/mwaskom/seaborn/issues/568)
- [CSV to Markdown](https://donatstudios.com/CsvToMarkdownTable)
- [Seaborn Tutorial](https://seaborn.pydata.org/tutorial.html)
- [Matplotlib Docs](https://matplotlib.org/contents.html)
- [Pandas Doc](https://pandas.pydata.org/pandas-docs/stable/)
- [Numpy Docs](https://docs.scipy.org/doc/numpy/)
- [Encyclopedia Titanica](https://www.encyclopedia-titanica.org/)
```
import os
import pickle
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm
from IPython.display import FileLink
!pip install --upgrade torchvision  # notebook shell escape; upgrades torchvision before importing it
import torch
from torch.utils.data import Dataset, DataLoader
import torchvision
from torchvision import transforms
from PIL import Image
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
img_path = '../input/photos-for-object-detection/photos'
existing_file = '../input/picklebackups/img_objects.pickle'
out_file = '../working/img_objects.pickle'
os.listdir()
detection_model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(
pretrained=True, progress=True, num_classes=91, pretrained_backbone=True, trainable_backbone_layers=None)
detection_model.to(device).eval()
print(1)
def get_prediction(model, image, threshold):
    """Given an image, uses a model to predict object boxes with confidence level above threshold"""
    preds = model(image)[0]
    # Non-maximum suppression: keep the highest-scoring box among overlapping ones.
    keep_boxes = torchvision.ops.nms(preds['boxes'], preds['scores'], 0.5)
    classes = list(preds['labels'].cpu().numpy())
    classes = [classes[idx] for idx in keep_boxes]
    boxes = [[(i[0], i[1]), (i[2], i[3])] for i in list(preds['boxes'].cpu().detach().numpy())]
    boxes = [boxes[idx] for idx in keep_boxes]
    scores = list(preds['scores'].cpu().detach().numpy())
    scores = [scores[idx] for idx in keep_boxes]
    # Scores are sorted in descending order, so keep everything above the threshold.
    valid_boxes = [scores.index(x) for x in scores if x > threshold]
    if not valid_boxes:
        return [()]
    p_thresh = valid_boxes[-1]
    pred_boxes = boxes[:p_thresh + 1]
    pred_classes = classes[:p_thresh + 1]
    pred_scores = scores[:p_thresh + 1]
    return list(zip(pred_boxes, pred_classes, pred_scores))
class ImgDataset(Dataset):
    def __init__(self, main_dir, transform):
        self.main_dir = main_dir
        self.transform = transform
        self.all_imgs = os.listdir(main_dir)

    def __len__(self):
        return len(self.all_imgs)

    def __getitem__(self, idx):
        img_loc = os.path.join(self.main_dir, self.all_imgs[idx])
        image = Image.open(img_loc).convert("RGB")
        tensor_image = self.transform(image)
        # Return the tensor, the file's base name (no extension), and the original size.
        return tensor_image, img_loc.split('/')[-1].split('.')[0], image.size
# Resume from a previous run if a pickle of detected objects exists.
if not os.path.isfile(existing_file):
    found_objects = {}
else:
    with open(existing_file, 'rb') as img_dict:
        found_objects = pickle.load(img_dict)
trsfm = transforms.Compose([transforms.ToTensor()])
detect_dataset = ImgDataset(img_path, transform=trsfm)
detect_loader = DataLoader(detect_dataset, batch_size=1, shuffle=False,
num_workers=0, drop_last=True)
count = len(found_objects)
for img, imgname, imgsize in tqdm(detect_loader):
    imgname = imgname[0]  # the DataLoader wraps the filename string in a batch tuple
    if imgname not in found_objects:
        count += 1
        img = img.to(device)
        found_objects[imgname] = get_prediction(detection_model, img, 0.5)
        if not count % 10000:
            # Periodically checkpoint the results to disk.
            with open(out_file, 'wb') as img_dict:
                pickle.dump(found_objects, img_dict, protocol=pickle.HIGHEST_PROTOCOL)
with open(out_file, 'wb') as img_dict:
pickle.dump(found_objects, img_dict, protocol=pickle.HIGHEST_PROTOCOL)
FileLink(r'img_objects.pickle')
```
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_04_3_regression.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 4: Training for Tabular Data**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 4 Material
* Part 4.1: Encoding a Feature Vector for Keras Deep Learning [[Video]](https://www.youtube.com/watch?v=Vxz-gfs9nMQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_1_feature_encode.ipynb)
* Part 4.2: Keras Multiclass Classification for Deep Neural Networks with ROC and AUC [[Video]](https://www.youtube.com/watch?v=-f3bg9dLMks&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_2_multi_class.ipynb)
* **Part 4.3: Keras Regression for Deep Neural Networks with RMSE** [[Video]](https://www.youtube.com/watch?v=wNhBUC6X5-E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_3_regression.ipynb)
* Part 4.4: Backpropagation, Nesterov Momentum, and ADAM Neural Network Training [[Video]](https://www.youtube.com/watch?v=VbDg8aBgpck&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_4_backprop.ipynb)
* Part 4.5: Neural Network RMSE and Log Loss Error Calculation from Scratch [[Video]](https://www.youtube.com/watch?v=wmQX1t2PHJc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_5_rmse_logloss.ipynb)
# Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.
```
try:
    %tensorflow_version 2.x
    COLAB = True
    print("Note: using Google CoLab")
except:
    print("Note: not using Google CoLab")
    COLAB = False
```
# Part 4.3: Keras Regression for Deep Neural Networks with RMSE
Regression results are evaluated differently than classification. Consider the following code that trains a neural network for regression on the data set **jh-simple-dataset.csv**.
```
import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
# Create train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=42)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.callbacks import EarlyStopping
# Build the neural network
model = Sequential()
model.add(Dense(25, input_dim=x.shape[1], activation='relu')) # Hidden 1
model.add(Dense(10, activation='relu')) # Hidden 2
model.add(Dense(1)) # Output
model.compile(loss='mean_squared_error', optimizer='adam')
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3,
patience=5, verbose=1, mode='auto',
restore_best_weights=True)
model.fit(x_train,y_train,validation_data=(x_test,y_test),
callbacks=[monitor],verbose=2,epochs=1000)
```
### Mean Square Error
The mean square error is the mean of the squared differences between the prediction ($\hat{y}$) and the expected value ($y$). MSE values are not in any particular unit. If a model's MSE has decreased, that is good, but beyond this there is not much more you can determine. Low MSE values are desired.
$ \mbox{MSE} = \frac{1}{n} \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2 $
```
from sklearn import metrics
# Predict
pred = model.predict(x_test)
# Measure MSE error.
score = metrics.mean_squared_error(pred,y_test)
print("Final score (MSE): {}".format(score))
```
### Root Mean Square Error
The root mean square (RMSE) is essentially the square root of the MSE. Because of this, the RMSE error is in the same units as the training data outcome. Low RMSE values are desired.
$ \mbox{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2} $
```
import numpy as np
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Final score (RMSE): {}".format(score))
```
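As a sanity check, RMSE can also be computed directly from the formula above without sklearn; here is a small sketch with made-up predictions (the notebook itself uses `pred` and `y_test`):

```python
import numpy as np

# Made-up targets and predictions, purely for illustration.
y_true = np.array([3.0, 5.0, 2.0])
y_pred = np.array([2.0, 5.0, 4.0])

# RMSE = sqrt(mean((y_hat - y)^2)), per the formula above.
rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
print(rmse)
```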
### Lift Chart
To generate a lift chart, perform the following activities:
* Sort the data by expected output. Plot the blue line above.
* For every point on the x-axis plot the predicted value for that same data point. This is the green line above.
* The x-axis is just 0 to 100% of the dataset. The expected always starts low and ends high.
* The y-axis is ranged according to the values predicted.
Reading a lift chart:
* The expected and predicted lines should be close. Notice where one is above the other.
* The chart below is most accurate for lower ages.
```
# Regression chart.
def chart_regression(pred, y, sort=True):
    t = pd.DataFrame({'pred': pred, 'y': y.flatten()})
    if sort:
        t.sort_values(by=['y'], inplace=True)
    plt.plot(t['y'].tolist(), label='expected')
    plt.plot(t['pred'].tolist(), label='prediction')
    plt.ylabel('output')
    plt.legend()
    plt.show()
# Plot the chart
chart_regression(pred.flatten(),y_test)
```
```
import cell2cell as c2c
%matplotlib inline
```
# Use toy data
**RNA-seq data**
In this case, it is a 6x5 matrix (6 genes/proteins and 5 samples/cell types). The values are arbitrary; for simplicity, we treat them as TPMs. The index of this DataFrame holds the gene/protein names, while the columns hold the sample/cell-type names.
```
rnaseq = c2c.datasets.generate_toy_rnaseq()
rnaseq
```
**Protein-Protein Interactions or Ligand-Receptor Pairs**
In this case, we use a list of LR pairs in which some proteins are also ***complexes of multiple subunits***. Each complex is represented by the joint name of ***all subunits*** composing it, ***separated by the separator '&'***. Ligands are in column 'A' and receptors in column 'B'. For these purposes, we do not use the 'score' column.
**The names of genes/proteins have to match those in the rnaseq dataset.**
```
ppi = c2c.datasets.generate_toy_ppi(prot_complex=True)
ppi
```
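Since names must match, it is worth verifying that every subunit referenced in the PPI table (splitting complexes on '&') appears in the expression matrix's index. A minimal sketch with synthetic stand-ins (the real objects are the `rnaseq` and `ppi` frames above):

```python
import pandas as pd

# Synthetic stand-ins for the toy rnaseq / ppi objects (names are hypothetical).
rnaseq = pd.DataFrame({'C1': [1, 2, 3]},
                      index=['Protein-A', 'Protein-B', 'Protein-C'])
ppi = pd.DataFrame({'A': ['Protein-A', 'Protein-B&Protein-C'],
                    'B': ['Protein-C', 'Protein-A']})

# Split complexes on '&' and collect every individual subunit name.
subunits = set()
for col in ('A', 'B'):
    for entry in ppi[col]:
        subunits.update(entry.split('&'))

# Names in the PPI table that are absent from the expression matrix.
missing = subunits - set(rnaseq.index)
print(sorted(missing))
```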
**Metadata**
The metadata contains extra information about the samples/cell types in the RNA-seq data. This dataframe must contain a column with the samples/cell types (#SampleID) and another with the new information, for example major groups for the samples/cell types (Groups).
```
meta = c2c.datasets.generate_toy_metadata()
meta
```
# Cell-cell Interactions and Communication Analysis
**Using an Interaction Pipeline**
The pipeline integrates the RNA-seq and PPI datasets using the analysis setups. It generates an interaction space containing an instance for each sample/cell type, holding the values assigned to each protein in the PPI list given the setups for computing the CCI and CCC scores.
Here we use the bulk pipeline, since the toy data is not single-cell.
```
interactions = c2c.analysis.BulkInteractions(rnaseq_data=rnaseq,
ppi_data=ppi,
metadata=meta, # Metadata about cells/samples
interaction_columns=('A', 'B'), # PPI columns
complex_sep='&', # For protein complexes
communication_score='expression_thresholding',
expression_threshold=10, # TPMs
cci_score='bray_curtis',
cci_type='undirected',
sample_col='#SampleID',
group_col='Groups',
verbose=False)
```
**Compute communication scores for each PPI or LR pair**
```
interactions.compute_pairwise_communication_scores()
```
**Compute CCI scores for each pair of cells**
It is computed according to the analysis setups. We computed an undirected Bray-Curtis-like score for each pair, meaning that score(C1, C2) = score(C2, C1).
**Notice that our score is undirected, so it was not necessary to compute both (C1, C2) and (C2, C1), as was the case for the communication scores.**
```
interactions.compute_pairwise_cci_scores()
```
# Visualizations
If we want to save the figure as a vector file, we can pass a path to the filename argument, e.g. ***filename='/Users/cell2cell/CommScores.svg'***.
**Generate colors for the groups in the metadata**
It returns a dictionary with the colors.
```
colors = c2c.plotting.get_colors_from_labels(labels=meta['Groups'].unique().tolist(),
cmap='tab10'
)
colors
```
**Visualize communication scores for each LR pair and each cell pair**
```
interaction_clustermap = c2c.plotting.clustermap_ccc(interactions,
metric='jaccard',
method='complete',
metadata=meta,
sample_col='#SampleID',
group_col='Groups',
colors=colors,
row_fontsize=14,
title='Active ligand-receptor pairs for interacting cells',
filename=None,
cell_labels=('SENDER-CELLS', 'RECEIVER-CELLS'),
**{'figsize' : (10,9)}
)
# Add a legend to know the groups of the sender and receiver cells:
l1 = c2c.plotting.generate_legend(color_dict=colors,
loc='center left',
bbox_to_anchor=(20, -2), # Indicated where to include it
ncol=1, fancybox=True,
shadow=True,
title='Groups',
fontsize=14,
)
```
**Circos plot**
It generates a circos plot showing the cells producing ligands (sender_cells), according to a list of specific ligands (ligands), and the cells producing receptors (receiver_cells), according to a list of specific receptors (receptors). The order of these lists is preserved in the visualization. Elements that are not used are omitted from the plot (in this case those with a communication score of 0, as specified by 'excluded_score').
```
sender_cells = ['C1', 'C2', 'C5']
receiver_cells = ['C2', 'C3', 'C4', 'C5']
ligands = ['Protein-C',
'Protein-E',
'Protein-C&Protein-E',
'Protein F'
]
receptors = ['Protein-B',
'Protein-C',
'Protein-F',
'Protein-A'
]
c2c.plotting.circos_plot(interaction_space=interactions,
sender_cells=sender_cells,
receiver_cells=receiver_cells,
ligands=ligands,
receptors=receptors,
excluded_score=0,
metadata=meta,
sample_col='#SampleID',
group_col='Groups',
colors=colors,
fontsize=15,
)
```
If we do not pass metadata, only the samples/cell types are plotted.
We can also change the label colors of the ligands and receptors:
```
c2c.plotting.circos_plot(interaction_space=interactions,
sender_cells=sender_cells,
receiver_cells=receiver_cells,
ligands=ligands,
receptors=receptors,
excluded_score=0,
fontsize=20,
ligand_label_color='orange',
receptor_label_color='brown',
)
```
**Visualize CCI scores**
Since this case is undirected, a triangular heatmap is plotted instead of the complete one.
```
cm = c2c.plotting.clustermap_cci(interactions,
method='complete',
metadata=meta,
sample_col="#SampleID",
group_col="Groups",
colors=colors,
title='CCI scores for cell-types',
cmap='Blues'
)
# Add a legend to know the groups of the sender and receiver cells:
l1 = c2c.plotting.generate_legend(color_dict=colors,
loc='center left',
bbox_to_anchor=(20, -2), # Indicated where to include it
ncol=1, fancybox=True,
shadow=True,
title='Groups',
fontsize=14,
)
```
**Project samples/cells with PCoA into a Euclidean space**
We can project the samples/cells given their CCI scores with other cells and see how close they are given their potential of interaction.
**THIS ONLY WORKS WITH UNDIRECTED CCI SCORES**
```
if interactions.analysis_setup['cci_type'] == 'undirected':
    pcoa = c2c.plotting.pcoa_3dplot(interactions,
                                    metadata=meta,
                                    sample_col="#SampleID",
                                    group_col="Groups",
                                    title='PCoA based on potential CCIs',
                                    colors=colors,
                                    )
```
# MIDS - w261 Machine Learning At Scale
__Course Lead:__ Dr James G. Shanahan (__email__ Jimi via James.Shanahan _AT_ gmail.com)
## Assignment - HW5 Phase 2
---
__Name:__ Leslie Teo; Stanimir Vichev; Seung Hun Ham
__Class:__ MIDS w261 Spring 2018 Section 1
__Email:__ lteo@iSchool.berkeley.edu; stassyvichev@berkeley.edu; seung.ham@ischool.berkeley.edu
__StudentId__ 303218617; 3032580461; 20957715 __End of StudentId__
__Week:__ 5.5
__Due Time:__ HW is due the Thursday of the following week by 8AM (West coast time).
* __HW5 Phase 1__
This can be done on a local machine (with a unit test on the cloud such as Altiscale's PaaS or on AWS) and is due Thursday, Week 6 by 8AM (West coast time). It will primarily focus on building a unit/systems test and a pairwise similarity calculation pipeline (for stripe documents).
* __HW5 Phase 2__
This will require the Altiscale cluster and will be due Thursday of the following week by 8AM (West coast time).
The focus of HW5 Phase 2 will be to scale up the unit/systems tests to the Google 5 gram corpus.
# Datasets
For Phase 2 you will first use the small datasets from phase 1 to systems test your code in the cloud. Then you will test your code on 1 file and then 20 files before running the full (191 file) Google n-gram dataset.
__Small data for systems tests__
```
%%writefile atlas-boon-systems-test.txt
atlas boon 50 50 50
boon cava dipped 10 10 10
atlas dipped 15 15 15
```
```
%%writefile googlebooks-eng-all-5gram-20090715-0-filtered-first-10-lines.txt
A BILL FOR ESTABLISHING RELIGIOUS 59 59 54
A Biography of General George 92 90 74
A Case Study in Government 102 102 78
A Case Study of Female 447 447 327
A Case Study of Limited 55 55 43
A Child's Christmas in Wales 1099 1061 866
A Circumstantial Narrative of the 62 62 50
A City by the Sea 62 60 49
A Collection of Fairy Tales 123 117 80
A Collection of Forms of 116 103 82
```
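Each record of the filtered n-gram file holds the 5-gram followed by three counts. As a small sketch (assuming tab-separated fields, the layout the mappers in this notebook split on):

```python
def parse_ngram_line(line):
    """Parse one record of the Google 5-gram file.

    Assumed layout: ngram \t count \t pages_count \t books_count.
    """
    ngram, count, pages, books = line.rstrip("\n").split("\t")
    return ngram, int(count), int(pages), int(books)

record = parse_ngram_line("A BILL FOR ESTABLISHING RELIGIOUS\t59\t59\t54")
assert record == ("A BILL FOR ESTABLISHING RELIGIOUS", 59, 59, 54)
```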
SETUP: __Paths to Main data in HDFS on Altiscale AND OTHER SETTINGS__
```
TEST_1 = "/user/winegarj/data/1_test"
TEST_20 = "/user/winegarj/data/20_test"
FULL_DATA = "/user/winegarj/data/full"
import os
USER = !whoami
USER = USER[0]
OUTPUT_PATH_BASE = '/user/{USER}'.format(USER=USER)
```
# Set - Up for Phase 2
Before you can run your similarity analysis on the full Google n-gram dataset you should confirm that the code you wrote in Phase 1 works on the cloud. In the space below, copy the code for your three jobs from Phase 1 (`buildStripes.py`, `invertedIndex.py`, `similarity.py`) and rerun your atlas-boon systems tests on Altiscale (i.e. **the cloud**). NOTE: _you may end up modifying this code when you get to 5.7, that's fine._
### `buildStripes.py` Note: changed to `buildStripes_v2.py`
```
%%writefile buildStripes_v2.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import division
import re
import mrjob
import json
from mrjob.protocol import RawProtocol
from mrjob.job import MRJob
from mrjob.step import MRStep
import itertools

class MRbuildStripes(MRJob):
    SORT_VALUES = True

    def mapper(self, _, line):
        fields = line.lower().strip("\n").split("\t")
        words = fields[0].split(" ")
        occurrence_count = int(fields[1])
        for subset in itertools.combinations(sorted(set(words)), 2):
            yield subset[0], (subset[1], occurrence_count)
            yield subset[1], (subset[0], occurrence_count)

    def reducer(self, word, occurrence_counts):
        stripe = {}
        for other_word, occurrence_count in occurrence_counts:
            stripe[other_word] = stripe.get(other_word, 0) + occurrence_count
        yield word, stripe

if __name__ == '__main__':
    MRbuildStripes.run()
```
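For the atlas-boon systems test, the expected stripes can be worked out by hand with the same pairing logic in plain Python, without Hadoop (a sketch of what the job above computes, assuming tab-separated input as in the mapper):

```python
import itertools
from collections import defaultdict

lines = ["atlas boon\t50\t50\t50",
         "boon cava dipped\t10\t10\t10",
         "atlas dipped\t15\t15\t15"]

# For every pair of distinct words in a 5-gram, add the line's count
# to both words' stripes, mirroring the mapper/reducer above.
stripes = defaultdict(dict)
for line in lines:
    fields = line.lower().split("\t")
    words, count = fields[0].split(" "), int(fields[1])
    for a, b in itertools.combinations(sorted(set(words)), 2):
        stripes[a][b] = stripes[a].get(b, 0) + count
        stripes[b][a] = stripes[b].get(a, 0) + count

# e.g. "boon" co-occurs with atlas (50), cava (10) and dipped (10)
assert stripes["boon"] == {"atlas": 50, "cava": 10, "dipped": 10}
assert stripes["atlas"] == {"boon": 50, "dipped": 15}
```

These hand-computed stripes are what the Hadoop run of `buildStripes_v2.py` on the atlas-boon file should reproduce.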
### `invertedIndex.py` Note: changed to `invertedIndex_v2.py`
```
%%writefile invertedIndex_v2.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import division
import collections
import sys
import re
import json
import math
import numpy as np
import itertools
import mrjob
from mrjob.protocol import RawProtocol
from mrjob.job import MRJob
from mrjob.step import MRStep

class MRinvertedIndex(MRJob):
    # START STUDENT CODE531_INV_INDEX
    SORT_VALUES = True

    def steps(self):
        JOBCONF_STEP = {
            'mapreduce.job.output.key.comparator.class': 'org.apache.hadoop.mapred.lib.KeyFieldBasedComparator',
            'mapreduce.partition.keycomparator.options': '-k1'
        }
        return [
            MRStep(jobconf=JOBCONF_STEP,
                   mapper=self.mapper,
                   reducer=self.reducer)
        ]

    def mapper(self, _, line):
        sys.stderr.write("reporter:counter:Mapper Counters,Calls,1\n")
        tokens = line.strip().split('\t')
        value_dict = json.loads(tokens[1])
        term_len = len(value_dict)
        for key in value_dict.keys():
            yield key, [tokens[0], term_len]

    def reducer(self, key, values):
        sys.stderr.write("reporter:counter:Reducer Counters,Calls,1\n")
        out = []
        for value_dict in values:
            value_dict[0] = value_dict[0].replace('"', '')
            out.append(value_dict)
        yield key, out
    # END STUDENT CODE531_INV_INDEX

if __name__ == '__main__':
    MRinvertedIndex.run()
```
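In the same spirit, the inversion step can be checked by hand: each word's stripe becomes a "document", and the index maps every co-occurring word to a list of (stripe owner, stripe length) postings. A plain-Python sketch over a toy pair of stripes:

```python
stripes = {
    "atlas": {"boon": 50, "dipped": 15},
    "boon":  {"atlas": 50, "cava": 10, "dipped": 10},
}

index = {}
for owner, stripe in stripes.items():
    for word in stripe:
        # posting: which stripe mentions this word, and that stripe's length
        index.setdefault(word, []).append((owner, len(stripe)))

# "dipped" appears in atlas's stripe (length 2) and boon's stripe (length 3)
assert sorted(index["dipped"]) == [("atlas", 2), ("boon", 3)]
```

The stripe lengths carried in the postings are exactly what the similarity job needs as the denominators of its measures.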
### `similarity.py` Note: changed to `similarity_v2.py` (the variant written below, `similarity_cosine_sort.py`, sorts by cosine instead of by average)
```
%%writefile similarity_cosine_sort.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import division
import sys
import collections
import re
import json
import math
import numpy as np
import itertools
import mrjob
from operator import itemgetter
from mrjob.protocol import RawProtocol
from mrjob.protocol import JSONProtocol
from mrjob.job import MRJob
from mrjob.step import MRStep
import logging

class MRsimilarity(MRJob):
    SORT_VALUES = True
    INTERNAL_PROTOCOL = RawProtocol
    OUTPUT_PROTOCOL = RawProtocol

    def __init__(self, *args, **kwargs):
        super(MRsimilarity, self).__init__(*args, **kwargs)
        self.N = 25
        self.NUM_REDUCERS = 25

    def steps(self):
        JOBCONF_STEP_1 = {
            "mapreduce.job.reduces": "128",
            "mapreduce.job.maps": "128",
        }
        JOBCONF_STEP_2 = {
            'stream.num.map.output.key.fields': 3,
            'mapreduce.job.output.key.comparator.class': 'org.apache.hadoop.mapred.lib.KeyFieldBasedComparator',
            'stream.map.output.field.separator': "\t",
            'mapreduce.partition.keypartitioner.options': '-k1,1',
            'mapreduce.partition.keycomparator.options': '-k2,2nr -k3,3',
            'mapred.reduce.tasks': self.NUM_REDUCERS,
            'partitioner': 'org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner',
            "mapreduce.job.reduces": str(self.NUM_REDUCERS),
            "SORT_VALUES": True,
            "INTERNAL_PROTOCOL": "RawProtocol",
            "OUTPUT_PROTOCOL": "RawProtocol"
        }
        return [
            MRStep(jobconf=JOBCONF_STEP_1,
                   mapper=self.mapper_pair_sim,
                   combiner=self.combiner_pair_sim,
                   reducer=self.reducer_pair_sim
                   ),
            MRStep(jobconf=JOBCONF_STEP_2,
                   mapper_init=self.mapper_sort_init,
                   mapper=self.mapper_sort,
                   reducer=self.reducer_sort
                   )
        ]

    def mapper_pair_sim(self, _, line):
        sys.stderr.write("reporter:counter:Mapper Counters,Calls,1\n")
        line = line.strip()
        index, posting = line.split('\t')
        posting = json.loads(posting)
        posting = dict(posting)
        logging.warning(line)
        logging.warning(posting)
        for docs in itertools.combinations(sorted(posting.keys()), 2):
            yield ",".join([docs[0], docs[1], str(posting[docs[0]]), str(posting[docs[1]])]), str(1)

    def combiner_pair_sim(self, key, values):
        yield key, str(sum([int(v) for v in values]))

    def reducer_pair_sim(self, key, values):
        sys.stderr.write("reporter:counter:Reducer Counters,Calls,1\n")
        total = sum([float(v) for v in values])
        key = key.split(",")
        key[2] = float(key[2])
        key[3] = float(key[3])
        cosine = total / (np.sqrt(key[2]) * np.sqrt(key[3]))
        jacard = total / (key[2] + key[3] - total)
        overlap = total / min(key[2], key[3])
        dice = 2 * total / (key[2] + key[3])
        yield None, str(cosine) + "\t" + "[" + ",".join(["\"" + key[0] + " - " + key[1] + "\"", str(np.mean([cosine, jacard, overlap, dice])), str(jacard), str(overlap), str(dice)]) + "]"

    def mapper_sort_init(self):
        def makeKeyHash(key, num_reducers):
            byteof = lambda char: int(format(ord(char), 'b'), 2)
            current_hash = 0
            for c in key:
                current_hash = (current_hash * 31 + byteof(c))
            return current_hash % num_reducers
        # printable ascii characters, starting with 'A'
        keys = [str(unichr(i)) for i in range(65, 65 + self.NUM_REDUCERS)]
        partitions = []
        for key in keys:
            partitions.append([key, makeKeyHash(key, self.NUM_REDUCERS)])
        parts = sorted(partitions, key=itemgetter(1))
        self.partition_keys = list(np.array(parts)[:, 0])
        self.partition_file = np.arange(0, self.N, self.N / (self.NUM_REDUCERS))[::-1]

    def mapper_sort(self, key, value):
        keyFloatScaled = np.floor(float(key) * self.N)
        # Prepend the appropriate key by finding the bucket, and using the index to fetch the key.
        for idx in xrange(self.NUM_REDUCERS):
            if keyFloatScaled > self.partition_file[idx]:
                yield str(self.partition_keys[idx]), key + " \t " + value
                break

    def reducer_sort(self, key, values):
        sys.stderr.write("reporter:counter:Intermediate Reducer Counters,Calls,1\n")
        for value in values:
            yield None, value

if __name__ == '__main__':
    MRsimilarity.run()
```
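The four measures in `reducer_pair_sim` reduce to simple formulas over the co-occurrence count and the two posting lengths. As a standalone hand-check (a sketch, not the MapReduce job itself):

```python
import math

def similarity_measures(common, len_a, len_b):
    """Similarity of two words, given `common` (the number of postings they
    share) and `len_a`, `len_b` (the lengths of their posting lists)."""
    common = float(common)
    cosine = common / (math.sqrt(len_a) * math.sqrt(len_b))
    jaccard = common / (len_a + len_b - common)
    overlap = common / min(len_a, len_b)
    dice = 2 * common / (len_a + len_b)
    return cosine, jaccard, overlap, dice

# two words sharing 2 of their 4 postings each
cosine, jaccard, overlap, dice = similarity_measures(2, 4, 4)
assert cosine == overlap == dice == 0.5
```

These are the same values the reducer emits, so a systems test can be verified against this function line by line.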
#### atlas-boon systems test
```
!python buildStripes_v2.py \
-r local atlas-boon-systems-test.txt
# Run in Hadoop
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests_atlas')
!hadoop fs -rm -r {OUTPUT_PATH}
!python buildStripes_v2.py \
-r hadoop atlas-boon-systems-test.txt \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/*
# Save into file for processing
!hadoop fs -cat {OUTPUT_PATH}/* > test_stripes_1
# Testing inverted index
!python invertedIndex_v2.py \
-r local test_stripes_1
# Run in Hadoop
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!python invertedIndex_v2.py \
-r hadoop test_stripes_1 \
--output-dir={OUTPUT_PATH} \
--no-output
# Save into file for processing
!hadoop fs -cat {OUTPUT_PATH}/* > test_index_1
!cat test_index_1
##########################################################
# Pretty print systems tests for generating Inverted Index
##########################################################
import json

for i in range(1, 2):
    print "—"*100
    print "Systems test ", i, " - Inverted Index"
    print "—"*100
    with open("test_index_" + str(i), "r") as f:
        lines = f.readlines()
    for line in lines:
        line = line.strip()
        word, stripe = line.split("\t")
        stripe = json.loads(stripe)
        stripe.extend([["", ""] for _ in xrange(3 - len(stripe))])
        print "{0:>16} |{1:>16} |{2:>16} |{3:>16}".format((word),
            stripe[0][0] + " " + str(stripe[0][1]), stripe[1][0] + " " + str(stripe[1][1]), stripe[2][0] + " " + str(stripe[2][1]))
# Testing similarity metrics
# SORTED BY AVG
!python similarity_v2.py \
-r local test_index_1
# Run in Hadoop
# SORTED BY cosine
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!python similarity_cosine_sort.py \
-r hadoop test_index_1 \
--output-dir={OUTPUT_PATH} \
--no-output
# Save into file for processing
!hadoop fs -cat {OUTPUT_PATH}/* > test_similarities_1
!cat test_similarities_1
############################################
# Pretty print systems tests
# Note: adjust print formatting if you need to
############################################
import json

for i in range(1, 2):
    print '—'*110
    print "Systems test ", i, " - Similarity measures"
    print '—'*110
    print "{0:>15} |{1:>15} |{2:>15} |{3:>15} |{4:>15} |{5:>15}".format(
        "cosine", "pair", "average", "jaccard", "overlap", "dice")
    print '-'*110
    with open("test_similarities_" + str(i), "r") as f:
        lines = f.readlines()
    for line in lines:
        line = line.strip()
        avg, stripe = line.split("\t")
        stripe = json.loads(stripe)
        print "{0:>15f} |{1:>15} |{2:>15f} |{3:>15f} |{4:>15f} |{5:>15f}".format(float(avg),
            stripe[0], float(stripe[1]), float(stripe[2]), float(stripe[3]), float(stripe[4]))
```
#### 10-line systems test
```
# Build Stripes
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python buildStripes_v2.py \
-r hadoop googlebooks-eng-all-5gram-20090715-0-filtered-first-10-lines.txt \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* > test_stripes_2
!cat test_stripes_2
# Build Inverted Index
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python invertedIndex_v2.py \
-r hadoop test_stripes_2 \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* > test_index_2
# Calculate Similarity
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python similarity_cosine_sort.py \
-r hadoop test_index_2 \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* > test_similarities_2
############################################
# Pretty print systems tests
# Note: adjust print formatting if you need to
############################################
import json

for i in range(1, 3):
    print '—'*110
    print "Systems test ", i, " - Similarity measures"
    print '—'*110
    print "{0:>15} |{1:>15} |{2:>15} |{3:>15} |{4:>15} |{5:>15}".format(
        "cosine", "pair", "average", "jaccard", "overlap", "dice")
    print '-'*110
    with open("test_similarities_" + str(i), "r") as f:
        lines = f.readlines()
    for line in lines:
        line = line.strip()
        avg, stripe = line.split("\t")
        stripe = json.loads(stripe)
        print "{0:>15f} |{1:>15} |{2:>15f} |{3:>15f} |{4:>15f} |{5:>15f}".format(float(avg),
            stripe[0], float(stripe[1]), float(stripe[2]), float(stripe[3]), float(stripe[4]))
```
# HW5.6 -Google n-grams EDA
Do some EDA on this dataset using mrjob, e.g.,
- A. Longest 5-gram (number of characters)
- B. Top 10 most frequent words (please use the count information), i.e., unigrams
- C. 20 Most/Least densely appearing words (count/pages_count) sorted in decreasing order of relative frequency
- D. Distribution of 5-gram sizes (character length). E.g., count (using the count field) up how many times a 5-gram of 50 characters shows up. Plot the data graphically using a histogram.
### HW5.6.1 - A. Longest 5-gram (number of characters)
```
%%writefile longest5gram.py
#!/opt/anaconda2/bin/python
# -*- coding: utf-8 -*-
import re
from datetime import datetime
import sys
import mrjob
from mrjob.protocol import RawProtocol
from mrjob.job import MRJob
from mrjob.step import MRStep

class longest5gram(MRJob):
    # SORT_VALUES = True

    def mapper(self, _, line):
        fields = line.strip("\n").split("\t")
        yield len(fields[0]), fields[0]

    def reducer_init(self):
        self.longest_ngrams = []
        self.longest_size = 0

    def reducer(self, key, values):
        if int(key) > self.longest_size:
            self.longest_size = int(key)
            self.longest_ngrams = list(values)
        elif int(key) == self.longest_size:
            self.longest_ngrams = list(self.longest_ngrams) + list(values)

    def reducer_final(self):
        yield self.longest_size, ";".join(list(self.longest_ngrams))

    def reducer_2_init(self):
        self.longest_2_ngrams = []
        self.longest_2_size = 0

    def reducer_2(self, key, values):
        if int(key) > self.longest_2_size:
            self.longest_2_size = int(key)
            self.longest_2_ngrams = list(values)
        elif int(key) == self.longest_2_size:
            self.longest_2_ngrams = list(self.longest_2_ngrams) + list(values)

    def reducer_2_final(self):
        yield self.longest_2_size, ";".join(list(self.longest_2_ngrams))

    def steps(self):
        return [
            MRStep(
                mapper=self.mapper,
                reducer_init=self.reducer_init,
                reducer_final=self.reducer_final,
                reducer=self.reducer,
                jobconf={
                    "mapreduce.job.reduces": "32",
                    "stream.num.map.output.key.fields": 1,
                    "mapreduce.job.output.key.comparator.class": "org.apache.hadoop.mapred.lib.KeyFieldBasedComparator",
                    "mapreduce.partition.keycomparator.options": "-k1,1nr",
                }
            ),
            MRStep(
                reducer_init=self.reducer_2_init,
                reducer_final=self.reducer_2_final,
                reducer=self.reducer_2,
                jobconf={
                    "mapreduce.job.reduces": "1"
                }
            )
        ]

if __name__ == '__main__':
    start_time = datetime.now()
    longest5gram.run()
    end_time = datetime.now()
    elapsed_time = end_time - start_time
    sys.stderr.write(str(elapsed_time))
```
__On test data set:__
```
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'longest_ngram_10lines')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python longest5gram.py \
-r hadoop googlebooks-eng-all-5gram-20090715-0-filtered-first-10-lines.txt \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/*
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'longest_ngram_test1')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python longest5gram.py \
-r hadoop hdfs://{TEST_1} \
--output-dir={OUTPUT_PATH} \
--no-output
!hdfs dfs -cat {OUTPUT_PATH}/*
```
__On the 20 files dataset:__
```
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'longest_ngram_test20')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python longest5gram.py \
-r hadoop hdfs://{TEST_20} \
--output-dir={OUTPUT_PATH} \
--no-output
!hdfs dfs -cat {OUTPUT_PATH}/*
```
__On full data set:__
```
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'longest_full')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python longest5gram.py \
-r hadoop hdfs://{FULL_DATA} \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/*
```
### Longest 5grams MR stats
ec2_instance_type: m3.xlarge
num_ec2_instances: 15
__Step 1:__
RUNNING for 107.0s ~= 2 minutes
Reduce tasks = 16
__Step 2:__
RUNNING for 108.8s ~= 2 minutes
Reduce tasks = 1
### HW5.6.1 - B. Top 10 most frequent words
```
%%writefile mostFrequentWords.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re
import mrjob
from mrjob.protocol import RawProtocol
from mrjob.job import MRJob
from mrjob.step import MRStep

class mostFrequentWords(MRJob):
    # START STUDENT CODE 5.6.1.B
    SORT_VALUES = True

    def steps(self):
        JOBCONF_STEP1 = {'mapreduce.job.reduces': '10'}
        JOBCONF_STEP2 = {
            'mapreduce.job.output.key.comparator.class': 'org.apache.hadoop.mapred.lib.KeyFieldBasedComparator',
            'stream.num.map.output.key.fields': '2',
            'stream.map.output.field.separator': '\t',
            'mapreduce.partition.keycomparator.options': '-k1,1nr',
            'mapreduce.job.reduces': '1',
        }
        return [
            MRStep(  # jobconf=JOBCONF_STEP1,
                mapper=self.mapper,
                combiner=self.combiner,
                reducer=self.reducer,
            ),
            MRStep(jobconf=JOBCONF_STEP2,
                   mapper=self.mapper2,
                   ),
        ]

    def mapper(self, _, line):
        words = re.findall(r"[a-z']+", line.lower())
        for word in words:
            yield word, 1

    def combiner(self, word, counts):
        yield word, sum(counts)

    def reducer(self, word, counts):
        yield word, sum(counts)

    def mapper2(self, word, counts):
        yield counts, word
    # END STUDENT CODE 5.6.1.B

if __name__ == '__main__':
    mostFrequentWords.run()
```
__On the test data set:__
```
!python mostFrequentWords.py \
-r local googlebooks-eng-all-5gram-20090715-0-filtered-first-10-lines.txt
# Find top 10 most frequent words
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!python mostFrequentWords.py \
-r hadoop hdfs://{TEST_1}/* \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* | head -n 10
```
__On the 20 files dataset:__
```
# Find top 10 most frequent word
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!python mostFrequentWords.py \
-r hadoop hdfs://{TEST_20}/* \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* | head -n 10
```
__On the full data set:__
```
# Find top 10 most frequent word
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!python mostFrequentWords.py \
-r hadoop hdfs://{FULL_DATA}/* \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* | head -n 10000 > test.output
!cat test.output | head -n 100
!cat test.output | tail -n 10
```
**Version that excludes stop words**
```
%%writefile mostFrequentWords_v2.py
#!/opt/anaconda2/bin/python
# -*- coding: utf-8 -*-
import re
import mrjob
from mrjob.protocol import RawProtocol
from mrjob.job import MRJob
from mrjob.step import MRStep

class mostFrequentWords(MRJob):
    # START STUDENT CODE 5.6.1.B
    SORT_VALUES = True

    def steps(self):
        JOBCONF_STEP1 = {'mapreduce.job.reduces': '10'}
        JOBCONF_STEP2 = {
            'mapreduce.job.output.key.comparator.class': 'org.apache.hadoop.mapred.lib.KeyFieldBasedComparator',
            'stream.num.map.output.key.fields': 2,
            'stream.map.output.field.separator': '\t',
            'mapreduce.partition.keycomparator.options': '-k1,1nr',
            'mapreduce.job.reduces': '1',
        }
        return [
            MRStep(jobconf=JOBCONF_STEP1,
                   mapper_init=self.mapper_init,
                   mapper=self.mapper,
                   combiner=self.combiner,
                   reducer=self.reducer,
                   ),
            MRStep(jobconf=JOBCONF_STEP2,
                   mapper=self.mapper2,
                   ),
        ]

    def mapper_init(self):
        self.stopwords = ['i', 'me', 'my', 'myself', 'we', 'our', 'ours',
                          'ourselves', 'you', 'your', 'yours', 'yourself',
                          'yourselves', 'he', 'him', 'his', 'himself', 'she',
                          'her', 'hers', 'herself', 'it', 'its', 'itself',
                          'they', 'them', 'their', 'theirs', 'themselves',
                          'what', 'which', 'who', 'whom', 'this', 'that',
                          'these', 'those', 'am', 'is', 'are', 'was', 'were',
                          'be', 'been', 'being', 'have', 'has', 'had', 'having',
                          'do', 'does', 'did', 'doing', 'a', 'an', 'the', 'and',
                          'but', 'if', 'or', 'because', 'as', 'until', 'while',
                          'of', 'at', 'by', 'for', 'with', 'about', 'against',
                          'between', 'into', 'through', 'during', 'before',
                          'after', 'above', 'below', 'to', 'from', 'up', 'down',
                          'in', 'out', 'on', 'off', 'over', 'under', 'again',
                          'further', 'then', 'once', 'here', 'there', 'when',
                          'where', 'why', 'how', 'all', 'any', 'both', 'each',
                          'few', 'more', 'most', 'other', 'some', 'such', 'no',
                          'nor', 'not', 'only', 'own', 'same', 'so', 'than',
                          'too', 'very', 's', 't', 'can', 'will', 'just',
                          'don', 'should', 'now']

    def mapper(self, _, line):
        words = re.findall(r"[a-z']+", line.lower())
        for word in words:
            if word not in self.stopwords:
                yield word, 1

    def combiner(self, word, counts):
        yield word, sum(counts)

    def reducer(self, word, counts):
        yield word, sum(counts)

    def mapper2(self, word, counts):
        yield counts, word
    # END STUDENT CODE 5.6.1.B

if __name__ == '__main__':
    mostFrequentWords.run()
```
```
!python mostFrequentWords_v2.py \
-r local googlebooks-eng-all-5gram-20090715-0-filtered-first-10-lines.txt

# Find top 10 most frequent words
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!python mostFrequentWords_v2.py \
-r hadoop hdfs://{TEST_1}/* \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* | head -n 100
```
### Most frequent words MR stats
ec2_instance_type: m3.xlarge
num_ec2_instances: 15
__Step 1:__
RUNNING for 590.7s ~= 10 minutes
Launched map tasks=191
Launched reduce tasks=57
__Step 2:__
RUNNING for 76.6s
Launched map tasks=110
Launched reduce tasks=16
### HW5.6.1 - C. 20 Most/Least densely appearing words
```
%%writefile mostLeastDenseWords.py
#!/opt/anaconda2/bin/python
# -*- coding: utf-8 -*-
from __future__ import division
import re
import numpy as np
import mrjob
import json
import sys
from mrjob.protocol import RawProtocol
from mrjob.job import MRJob
from mrjob.step import MRStep

class mostLeastDenseWords(MRJob):
    # START STUDENT CODE 5.6.1.C
    OUTPUT_PROTOCOL = RawProtocol
    SORT_VALUES = True
    total_page_count = 0

    def steps(self):
        JOBCONF_STEP1 = {'mapreduce.job.reduces': '10'}
        JOBCONF_STEP2 = {
            'mapreduce.job.output.key.comparator.class': 'org.apache.hadoop.mapred.lib.KeyFieldBasedComparator',
            'stream.num.map.output.key.fields': '2',
            'stream.map.output.field.separator': '\t',
            'mapreduce.partition.keycomparator.options': '-k2,2nr',
            'mapreduce.job.reduces': '1',
        }
        return [MRStep(jobconf=JOBCONF_STEP1,
                       mapper=self.mapper,
                       reducer=self.reducer
                       ),
                MRStep(jobconf=JOBCONF_STEP2,
                       reducer=self.reducer_output)
                ]

    def mapper(self, _, line):
        data = line.split("\t")
        words = data[0].lower().split()
        count = int(data[1])
        page_count = int(data[2])
        for w in words:
            yield w, count
        yield "!Total", page_count

    def reducer(self, key, data):
        yield key, sum(data)

    def reducer_output(self, key, data):
        if key == "!Total":
            self.total_page_count = sum(data)
        else:
            yield key, str(sum(data) / self.total_page_count)
    # END STUDENT CODE 5.6.1.C

if __name__ == '__main__':
    mostLeastDenseWords.run()
```
__On the test data set:__
```
# Density for 1 file
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python mostLeastDenseWords.py \
-r hadoop hdfs://{TEST_1}/* \
--output-dir={OUTPUT_PATH} \
--no-output
!hdfs dfs -cat {OUTPUT_PATH}/* | head -n 20
!hdfs dfs -cat {OUTPUT_PATH}/* | tail -n 20
```
__On the 20 files dataset:__
```
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python mostLeastDenseWords.py \
-r hadoop hdfs://{TEST_20}/* \
--output-dir={OUTPUT_PATH} \
--no-output
!hdfs dfs -cat {OUTPUT_PATH}/* | head -n 20
!hdfs dfs -cat {OUTPUT_PATH}/* | tail -n 20
```
__On the full data set:__
```
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python mostLeastDenseWords.py \
-r hadoop hdfs://{FULL_DATA}/* \
--output-dir={OUTPUT_PATH} \
--no-output
!hdfs dfs -cat {OUTPUT_PATH}/* | head -n 20
!hdfs dfs -cat {OUTPUT_PATH}/* | tail -n 20
```
### Word density MR stats
ec2_instance_type: m3.xlarge
num_ec2_instances: 15
__Step 1:__
RUNNING for 649.2s ~= 10 minutes
Launched map tasks=190
Launched reduce tasks=57
__Step 2:__
RUNNING for 74.4s ~= 1 minute
Launched map tasks=110
Launched reduce tasks=20
### HW5.6.1 - D. Distribution of 5-gram sizes (character length)
```
%%writefile distribution.py
#!/opt/anaconda2/bin/python
# -*- coding: utf-8 -*-
import mrjob
from mrjob.protocol import RawProtocol
from mrjob.job import MRJob
from mrjob.step import MRStep

class distribution(MRJob):
    #### TODO: divide the counts by 1000s to make the graph more readable
    #### TODO: split the lengths into buckets <10, <25, <50, <75, <100

    def mapper(self, _, line):
        fields = line.strip("\n").split("\t")
        yield len(fields[0]), int(fields[1])

    def combiner(self, length, counts):
        yield length, sum(counts)

    def reducer(self, length, counts):
        yield length, sum(counts)

    def reducer_sort(self, key, values):
        yield key, list(values)[0]

    def steps(self):
        return [
            MRStep(
                mapper=self.mapper,
                combiner=self.combiner,
                reducer=self.reducer,
                jobconf={
                    "mapreduce.job.reduces": "4",
                }
            ),
            MRStep(
                reducer=self.reducer_sort,
                jobconf={
                    "SORT_VALUES": True,
                    "mapreduce.job.reduces": "1",
                    "stream.num.map.output.key.fields": 1,
                    "mapreduce.job.output.key.comparator.class": "org.apache.hadoop.mapred.lib.KeyFieldBasedComparator",
                    "mapreduce.partition.keycomparator.options": "-k1,1nr",
                }
            )
        ]

if __name__ == '__main__':
    distribution.run()
```
__On the test data set:__
```
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'distributions_10lines')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python distribution.py \
-r hadoop googlebooks-eng-all-5gram-20090715-0-filtered-first-10-lines.txt \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* > distributions_10lines.txt
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'distributions_1file')
!hadoop fs -rm -r {OUTPUT_PATH}
!python distribution.py \
-r hadoop hdfs://{TEST_1} \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* > distributions_1file.txt
```
__On the 20 files dataset:__
```
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'distributions_20files')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python distribution.py \
-r hadoop hdfs://{TEST_20} \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* > distributions_20files.txt
```
__On the full data set:__
```
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'distributions_full')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python distribution.py \
-r hadoop hdfs://{FULL_DATA} \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* > distributions_full.txt
```
### Distribution MRJob stats
__Step 1:__
RUNNING for 157.8s ~= 2.6 minutes
Launched map tasks=191
Launched reduce tasks=16
__Step 2:__
RUNNING for 115.0s ~= 2 minutes
Launched map tasks=139
Launched reduce tasks=1
```
%matplotlib inline
import numpy as np
import pylab as pl

results_A = []
for line in open("distributions_10lines.txt").readlines():
    line = line.strip()
    X, Y = line.split("\t")
    results_A.append([int(X), int(Y)])

items = (np.array(results_A)[::-1].T)
fig = pl.figure(figsize=(17,7))
ax = pl.subplot(111)
width = 0.8
ax.bar(range(len(items[0])), items[1], width=width)
ax.set_xticks(np.arange(len(items[0])) + width/2)
ax.set_xticklabels(items[0], rotation=90)
pl.title("Distributions of 5 Gram lengths using 10-line sample")
pl.show()
```
```
%matplotlib inline
import numpy as np
import pylab as pl

results_A = []
for line in open("distributions_1file.txt").readlines():
    line = line.strip()
    X, Y = line.split("\t")
    results_A.append([int(X), int(Y)])

items = (np.array(results_A)[::-1].T)
fig = pl.figure(figsize=(17,7))
ax = pl.subplot(111)
width = 0.8
ax.bar(range(len(items[0])), items[1], width=width)
ax.set_xticks(np.arange(len(items[0])) + width/2)
ax.set_xticklabels(items[0], rotation=90)
pl.title("Distributions of 5 Gram lengths using 1 file")
pl.show()
```
```
%matplotlib inline
import numpy as np
import pylab as pl

results_A = []
for line in open("distributions_20files.txt").readlines():
    line = line.strip()
    X, Y = line.split("\t")
    results_A.append([int(X), int(Y)])

items = (np.array(results_A)[::-1].T)
fig = pl.figure(figsize=(17,7))
ax = pl.subplot(111)
width = 0.8
ax.bar(range(len(items[0])), items[1], width=width)
ax.set_xticks(np.arange(len(items[0])) + width/2)
ax.set_xticklabels(items[0], rotation=90)
pl.title("Distributions of 5 Gram lengths using 20 files")
pl.show()
```
```
%matplotlib inline
import numpy as np
import pylab as pl

results_A = []
for line in open("distributions_full.txt").readlines():
    line = line.strip()
    X, Y = line.split("\t")
    results_A.append([int(X), int(Y)])

items = (np.array(results_A)[::-1].T)
fig = pl.figure(figsize=(17,7))
ax = pl.subplot(111)
width = 0.8
ax.bar(range(len(items[0])), items[1], width=width)
ax.set_xticks(np.arange(len(items[0])) + width/2)
ax.set_xticklabels(items[0], rotation=90)
pl.title("Distributions of 5 Gram lengths using all files")
pl.show()
```
### HW5.6.2 - OPTIONAL: log-log plots (PHASE 2)
Plot the log-log plot of the frequency distribution of unigrams. Does it follow a power-law distribution?
For more background see:
- https://en.wikipedia.org/wiki/Log%E2%80%93log_plot
- https://en.wikipedia.org/wiki/Power_law
```
%matplotlib inline
import numpy as np
import pylab as pl
results_A = []
for line in open("distributions_full.txt").readlines():
line = line.strip()
X,Y = line.split("\t")
results_A.append([np.log(int(X)),np.log(int(Y))])
items = (np.array(results_A)[::-1].T)
fig = pl.figure(figsize=(17,7))
ax = pl.subplot(111)
width=1
ax.scatter(range(len(items[0])), items[1])
#ax.set_xticks(np.arange(len(items[0])) + width)
#ax.set_xticklabels(items[0], rotation=90)
pl.title("Log-Log Plot of 5 Gram lengths using all files")
pl.show()
```
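A quick way to answer the power-law question quantitatively: a power law appears as a straight line on a log-log plot, so the slope of a linear fit in log space estimates the exponent. A minimal sketch using synthetic Zipf-like counts (standing in for the real `distributions_full.txt` data):

```python
import numpy as np

# Synthetic rank/frequency data following freq ~ rank^(-1) (Zipf-like),
# standing in for the real counts read from distributions_full.txt.
ranks = np.arange(1, 101)
freqs = 1000.0 / ranks

# A power law y = c * x^k is a straight line in log-log space:
# log y = log c + k * log x, so the slope of a linear fit estimates k.
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(round(slope, 2))  # close to -1 for Zipf-like data
```

A slope that stays roughly constant across the plotted range is the signature of a power law; strong curvature suggests a different distribution.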
# HW5.7 - Synonym detection over 2 GB of data with extra preprocessing steps (HW5.3-5 plus some preprocessing)
For the remainder of this assignment, please feel free to eliminate stop words from your analysis (see `stopwords` in the cell below).
__A large subset of the Google n-grams dataset, as described above__
For each of HW 5.6 - 5.7.1, please unit test and system test your code against the
SYSTEMS TEST DATASET and show the results.
Please compute the expected answer by hand and show your hand calculations for the
SYSTEMS TEST DATASET. Then show the results you get with your system.
In this part of the assignment we will focus on developing methods for detecting synonyms, using the Google 5-grams dataset. At a high level:
1. remove stopwords
2. get the 10,000 most frequent words
3. use the words ranked 9,001-10,000 as the 1,000 features
4. build stripes
To accomplish this you must script two main tasks using MRJob:
__TASK (1)__ Build stripes for the 10,000 most frequent words using co-occurrence information, with
the words ranked 9,001-10,000 as the basis/vocabulary (dropping stopword-like terms),
and output to a file in your bucket on S3 (a bigram analysis, though the words are non-contiguous).
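The stripe-building logic of TASK (1) can be prototyped in plain Python before writing the MRJob. A minimal sketch with a toy vocabulary, mirroring the mapper/reducer below (the real job reads the 10,000-word list from a JSON file and counts come from the n-gram lines):

```python
from itertools import combinations
from collections import defaultdict

# Toy vocabulary: "top words" key the stripes, "features" are the basis terms.
top_words = {"cat", "dog", "tree", "house"}
features = {"tree", "house"}

# Tab-separated "ngram<TAB>count" lines, as in the Google n-grams files.
lines = [
    "cat dog tree\t10",
    "dog house tree\t5",
]

stripes = defaultdict(lambda: defaultdict(int))
for line in lines:
    ngram, count = line.split("\t")
    words = [w for w in ngram.split() if w in top_words]
    for a, b in combinations(sorted(set(words)), 2):
        # Keep a pair only when the co-occurring side is a feature term.
        if b in features:
            stripes[a][b] += int(count)
        if a in features:
            stripes[b][a] += int(count)

print(dict(stripes["cat"]))  # {'tree': 10}
```

In the distributed version, the mapper emits the `(word, (feature, count))` pairs and the reducer performs the per-word aggregation into a stripe.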
__TASK (2)__ Using two (symmetric) comparison methods of your choice
(e.g., correlations, distances, similarities), pairwise compare
all stripes (vectors), and output to a file in your bucket on s3.
For this task you will have to determine a method of comparison.
Here are a few that you might consider:
- Jaccard
- Cosine similarity
- Spearman correlation
- Euclidean distance
- Taxicab (Manhattan) distance
- Shortest path graph distance (a graph, because our data is symmetric!)
- Pearson correlation
- Kendall correlation
However, be cautioned that some comparison methods are more difficult to
parallelize than others. Also, do not perform more associations than necessary,
since your chosen association measure is symmetric.
Please report the size of the cluster used and the amount of time it takes to run the index construction task and the synonym calculation task. How many pairs need to be processed (HINT: use the posting-list lengths to calculate this directly)? Report your cluster configuration!
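For intuition, the set-based versions of several of these measures (cosine, Jaccard, overlap, and Dice, the four reported in the result tables later in this section) can be sketched on binarized stripes, treating each stripe as the set of its co-occurring features:

```python
import math

def similarities(a, b):
    """Set-based similarity measures between two stripes (feature sets)."""
    inter = len(a & b)
    union = len(a | b)
    cosine = inter / math.sqrt(len(a) * len(b))
    jaccard = inter / union
    overlap = inter / min(len(a), len(b))
    dice = 2 * inter / (len(a) + len(b))
    return cosine, jaccard, overlap, dice

# Two hypothetical stripes sharing three features.
s1 = {"tree", "house", "car", "road"}
s2 = {"tree", "house", "car"}
cos, jac, ov, dic = similarities(s1, s2)
print(round(cos, 3), round(jac, 3), round(ov, 3), round(dic, 3))
```

Note that overlap reaches 1.0 whenever one stripe is a subset of the other, which is why subset-like pairs score a perfect overlap in the result tables even when the other measures disagree.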
```
stopwords = ['i', 'me', 'my', 'myself', 'we', 'our', 'ours',
'ourselves', 'you', 'your', 'yours', 'yourself',
'yourselves', 'he', 'him', 'his', 'himself', 'she',
'her', 'hers', 'herself', 'it', 'its', 'itself',
'they', 'them', 'their', 'theirs', 'themselves',
'what', 'which', 'who', 'whom', 'this', 'that',
'these', 'those', 'am', 'is', 'are', 'was', 'were',
'be', 'been', 'being', 'have', 'has', 'had', 'having',
'do', 'does', 'did', 'doing', 'a', 'an', 'the', 'and',
'but', 'if', 'or', 'because', 'as', 'until', 'while',
'of', 'at', 'by', 'for', 'with', 'about', 'against',
'between', 'into', 'through', 'during', 'before',
'after', 'above', 'below', 'to', 'from', 'up', 'down',
'in', 'out', 'on', 'off', 'over', 'under', 'again',
'further', 'then', 'once', 'here', 'there', 'when',
'where', 'why', 'how', 'all', 'any', 'both', 'each',
'few', 'more', 'most', 'other', 'some', 'such', 'no',
'nor', 'not', 'only', 'own', 'same', 'so', 'than',
'too', 'very', 's', 't', 'can', 'will', 'just',
'don', 'should', 'now']
```
__STEP 1: Code and Steps for Preprocessing__
```
# Remove stop words, get 10,000 most frequent words
# Find top 10,000 most frequent words
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'ten_thousand_1')
!hadoop fs -rm -r {OUTPUT_PATH}
!python mostFrequentWords_v2.py \
-r hadoop hdfs://{TEST_1}/* \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* | head -n 10000 > ten_thousand_1.dat
# Find top 10,000 most frequent words
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'ten_thousand_20')
!hadoop fs -rm -r {OUTPUT_PATH}
!python mostFrequentWords_v2.py \
-r hadoop hdfs://{TEST_20}/* \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* | head -n 10000 > ten_thousand_20.dat
!cat ten_thousand_20.dat | head -n 10
!cat ten_thousand_20.dat | tail -n 10
# Find top 10,000 most frequent words
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'ten_thousand_FULL')
!hadoop fs -rm -r {OUTPUT_PATH}
!python mostFrequentWords_v2.py \
-r hadoop hdfs://{FULL_DATA}/* \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* | head -n 10000 > ten_thousand_FULL.dat
!cat ten_thousand_FULL.dat | head -n 10
!cat ten_thousand_FULL.dat | tail -n 10
# Also create features
!cat ten_thousand_1.dat |tail -n 999 > features_1.dat
!cat ten_thousand_20.dat |tail -n 999 > features_20.dat
!cat ten_thousand_FULL.dat |tail -n 999 > features_FULL.dat
```
__We now have files with the top 10K words => `ten_thousand_*.dat` AND
features with the words ranked 9,001-10,000 => `features_*.dat`__
```
### Some extra preprocessing to load files faster
import json
files = ['features_1','features_20', 'features_FULL', 'ten_thousand_1', 'ten_thousand_20','ten_thousand_FULL']
for fileName in files:
with open(fileName+'.dat') as f:
words = []
for line in f:
word = line.split("\t")[1].strip('"')
words.append(word)
with open(fileName+'.json', 'w') as outfile:
json.dump(words, outfile)
```
__STEP 2: MODIFY BUILD STRIPES TO ONLY INCLUDE 10K words with 1K as FEATURES__
```
%%writefile buildStripes_stopwords_1.py
#!~/opt/anaconda2/bin/python
# -*- coding: utf-8 -*-
from __future__ import division
import re
import mrjob
import json
from mrjob.protocol import RawProtocol
from mrjob.job import MRJob
from mrjob.step import MRStep
import itertools
class MRbuildStripes(MRJob):
SORT_VALUES = True
def steps(self):
return [
MRStep(
mapper_init=self.mapper_init,
mapper=self.mapper,
reducer=self.reducer,
jobconf = {
"mapreduce.job.reduces": "64",
"mapreduce.job.maps": "64",
# "SORT_VALUES":True
}
),
MRStep(
reducer=self.reducer_2,
jobconf = {
"mapreduce.job.reduces": "1",
"SORT_VALUES":True
}
)
]
def mapper_init(self):
self.idx = 9001 # To define when feature set starts
self.filename = 'ten_thousand_1.json'
self.top_words = []
self.features = []
with open(self.filename, 'r') as infile:
self.top_words = json.loads(infile.read())
self.features =self.top_words[self.idx:]
def mapper(self, _, line):
fields = line.lower().strip("\n").split("\t")
words = fields[0].split(" ")
occurrence_count = int(fields[1])
filtered_words = [word.decode('utf-8', 'ignore') for word in words if word.decode('utf-8', 'ignore') in self.top_words]
for subset in itertools.combinations(sorted(set(filtered_words)), 2):
if subset[0] in self.top_words and subset[1] in self.features:
yield subset[0], (subset[1], occurrence_count)
if subset[1] in self.top_words and subset[0] in self.features:
yield subset[1], (subset[0], occurrence_count)
def reducer(self, word, occurrence_counts):
stripe = {}
for other_word, occurrence_count in occurrence_counts:
stripe[other_word] = stripe.get(other_word,0)+occurrence_count
yield word, stripe
def reducer_2(self, key, values):
yield str(key), list(values)[0]
if __name__ == '__main__':
MRbuildStripes.run()
```
### HW5.7.1 Running on 1 file
```
# Run in Hadoop
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python buildStripes_stopwords_1.py \
-r hadoop hdfs://{TEST_1}/* \
--file ten_thousand_1.json \
--output-dir={OUTPUT_PATH} \
--no-output
####NOT USED, BUT KEEP JUST IN CASE
import os
from os import listdir
from os.path import isfile, join
def totalOrderSort(myPath, outFileName):
wordsFiles = {}
words = []
for f in listdir(myPath):
if isfile(join(myPath,f)) and "part" in f:
with open(join(myPath,f)) as openFile:
word = openFile.readline().split("\t")[0]
words.append(word)
wordsFiles[word] = join(myPath,f)
print wordsFiles
print words
# Look at data
# !hadoop fs -cat {OUTPUT_PATH}/*
# Save into file for processing
!hadoop fs -cat {OUTPUT_PATH}/* > google_stripes_1
!cat google_stripes_1 | head
!cat google_stripes_1 | tail
# Run in Hadoop
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python invertedIndex_v2.py \
-r hadoop google_stripes_1 \
--output-dir={OUTPUT_PATH} \
--no-output
# Save into file for processing
!hadoop fs -cat {OUTPUT_PATH}/* > google_index_1
!cat google_index_1 | head
##########################################################
# Pretty print systems tests for generating Inverted Index
##########################################################
import json
for i in range(1,2):
print "—"*100
print "Systems test ",i," - Inverted Index"
print "—"*100
with open("google_index_"+str(i)+"","r") as f:
lines = f.readlines()
for line in lines:
line = line.strip()
word,stripe = line.split("\t")
stripe = json.loads(stripe)
stripe.extend([["",""] for _ in xrange(3 - len(stripe))])
print "{0:>16} |{1:>16} |{2:>16} |{3:>16}".format((word),
stripe[0][0]+" "+str(stripe[0][1]), stripe[1][0]+" "+str(stripe[1][1]), stripe[2][0]+" "+str(stripe[2][1]))
# Run in Hadoop
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python similarity_cosine_sort.py \
-r hadoop google_index_1 \
--output-dir={OUTPUT_PATH} \
--no-output
# Save into file for processing
!hadoop fs -cat {OUTPUT_PATH}/* > google_similarities_1
!head google_similarities_1
############################################
# Pretty print systems tests
# Note: adjust print formatting if you need to
############################################
import json
for i in range(1,2):
print '—'*110
print "Systems test ",i," - Similarity measures"
print '—'*110
print "{0:>21} | {1:>15} |{2:>15} | {3:>15} | {4:>15} | {5:>15}".format("pair",
"cosine", "jaccard", "overlap", "dice", "average")
print '-'*110
with open("google_similarities_"+str(i),"r") as f:
lines = f.readlines()
for line in lines:
line = line.strip()
avg,stripe = line.split("\t")
stripe = json.loads(stripe)
print "{0:>21} | {1:>15f} |{2:>15f} |{3:>15f} | {4:>15f} | {5:>15f} ".format(stripe[0],
float(stripe[1]), float(stripe[2]), float(stripe[3]), float(stripe[4]), float(avg))
```
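The pair-count hint in HW5.7 can be applied directly to the inverted index: each posting list of length $n$ contributes $\binom{n}{2} = n(n-1)/2$ candidate pairs, so summing over all posting lists gives the number of comparisons the similarity job performs. A sketch with hypothetical posting-list lengths (standing in for the real `google_index_1` postings):

```python
# Each posting list of length n yields n*(n-1)//2 candidate word pairs;
# summing over all posting lists gives the total number of pairwise
# comparisons the similarity job must process.
posting_lengths = [3, 5, 2]  # hypothetical lengths, one per feature

total_pairs = sum(n * (n - 1) // 2 for n in posting_lengths)
print(total_pairs)  # 3 + 10 + 1 = 14
```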
### HW5.7.2 Running on 20 test files
```
%%writefile buildStripes_stopwords_20.py
#!~/opt/anaconda2/bin/python
# -*- coding: utf-8 -*-
from __future__ import division
import re
import mrjob
import json
from mrjob.protocol import RawProtocol
from mrjob.job import MRJob
from mrjob.step import MRStep
import itertools
class MRbuildStripes(MRJob):
SORT_VALUES = True
def steps(self):
return [
MRStep(
mapper_init=self.mapper_init,
mapper=self.mapper,
reducer=self.reducer,
jobconf = {
"mapreduce.job.reduces": "64",
"mapreduce.job.maps": "64",
# "SORT_VALUES":True
}
),
MRStep(
reducer=self.reducer_2,
jobconf = {
"mapreduce.job.reduces": "1",
"SORT_VALUES":True
}
)
]
def mapper_init(self):
self.idx = 9001 # To define when feature set starts
self.filename = 'ten_thousand_20.json'
self.top_words = []
self.features = []
#with open('features_20.json', 'r') as infile:
# self.features = json.loads(infile.read())
with open(self.filename, 'r') as infile:
self.top_words = json.loads(infile.read())
self.features =self.top_words[self.idx:]
def mapper(self, _, line):
fields = line.lower().strip("\n").split("\t")
words = fields[0].split(" ")
occurrence_count = int(fields[1])
filtered_words = [word.decode('utf-8', 'ignore') for word in words if word.decode('utf-8', 'ignore') in self.top_words]
for subset in itertools.combinations(sorted(set(filtered_words)), 2):
if subset[0] in self.top_words and subset[1] in self.features:
yield subset[0], (subset[1], occurrence_count)
if subset[1] in self.top_words and subset[0] in self.features:
yield subset[1], (subset[0], occurrence_count)
def reducer(self, word, occurrence_counts):
stripe = {}
for other_word, occurrence_count in occurrence_counts:
stripe[other_word] = stripe.get(other_word,0)+occurrence_count
yield word, stripe
def reducer_2(self, key, values):
yield str(key), list(values)[0]
if __name__ == '__main__':
MRbuildStripes.run()
# Run in Hadoop
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python buildStripes_stopwords_20.py \
-r hadoop hdfs://{TEST_20}/* \
--file ten_thousand_20.json \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* > google_stripes_20
# Run in Hadoop
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python invertedIndex_v2.py \
-r hadoop google_stripes_20 \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* > google_index_20
# Run in Hadoop
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python similarity_cosine_sort.py \
-r hadoop google_index_20 \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* > google_similarities_2
!head google_similarities_2
!tail google_similarities_2
############################################
# Pretty print systems tests
# Note: adjust print formatting if you need to
############################################
import json
for i in range(2,3):
print '—'*110
print "Systems test ",i," - Similarity measures"
print '—'*110
print "{0:>15} |{1:>15} |{2:>15} |{3:>15} |{4:>15} |{5:>15}".format(
"average", "pair", "cosine", "jaccard", "overlap", "dice")
print '-'*110
with open("google_similarities_"+str(i),"r") as f:
lines = f.readlines()
for line in lines:
line = line.strip()
avg,stripe = line.split("\t")
stripe = json.loads(stripe)
print "{0:>15f} |{1:>15} |{2:>15f} |{3:>15f} |{4:>15f} |{5:>15f}".format(float(avg),
stripe[0], float(stripe[1]), float(stripe[2]), float(stripe[3]), float(stripe[4]))
```
### HW5.7.3 Running the full dataset on Altiscale
Please contact the TAs for approval after obtaining results from 5.7.2. We have run into issues in the past where the clusters froze because people did not test their code on a smaller dataset.
```
%%writefile buildStripes_stopwords.py
#!~/opt/anaconda2/bin/python
# -*- coding: utf-8 -*-
from __future__ import division
import re
import mrjob
import json
from mrjob.protocol import RawProtocol
from mrjob.job import MRJob
from mrjob.step import MRStep
import itertools
class MRbuildStripes(MRJob):
SORT_VALUES = True
def steps(self):
return [
MRStep(
mapper_init=self.mapper_init,
mapper=self.mapper,
reducer=self.reducer,
jobconf = {
"mapreduce.job.reduces": "64",
"mapreduce.job.maps": "64",
# "SORT_VALUES":True
}
),
MRStep(
reducer=self.reducer_2,
jobconf = {
"mapreduce.job.reduces": "1",
"SORT_VALUES":True
}
)
]
def mapper_init(self):
self.idx = 9001 # To define when feature set starts
self.filename = 'ten_thousand_FULL.json'
self.top_words = []
self.features = []
#with open('features_20.json', 'r') as infile:
# self.features = json.loads(infile.read())
with open(self.filename, 'r') as infile:
self.top_words = json.loads(infile.read())
self.features =self.top_words[self.idx:]
def mapper(self, _, line):
fields = line.lower().strip("\n").split("\t")
words = fields[0].split(" ")
occurrence_count = int(fields[1])
filtered_words = [word.decode('utf-8', 'ignore') for word in words if word.decode('utf-8', 'ignore') in self.top_words]
for subset in itertools.combinations(sorted(set(filtered_words)), 2):
if subset[0] in self.top_words and subset[1] in self.features:
yield subset[0], (subset[1], occurrence_count)
if subset[1] in self.top_words and subset[0] in self.features:
yield subset[1], (subset[0], occurrence_count)
def reducer(self, word, occurrence_counts):
stripe = {}
for other_word, occurrence_count in occurrence_counts:
stripe[other_word] = stripe.get(other_word,0)+occurrence_count
yield word, stripe
def reducer_2(self, key, values):
yield str(key), list(values)[0]
if __name__ == '__main__':
MRbuildStripes.run()
# Run in Hadoop
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python buildStripes_stopwords.py \
-r hadoop hdfs://{FULL_DATA}/* \
--file ten_thousand_FULL.json \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* > google_stripes_FULL
```
#### Stats for building stripes
> Cluster size: 64 mappers & 64 reducers for step 1, 65 mappers and 1 reducer for step 2.
> Time taken: 30 mins
```
# Run in Hadoop
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python invertedIndex_v2.py \
-r hadoop google_stripes_FULL \
--output-dir={OUTPUT_PATH} \
--no-output
!hadoop fs -cat {OUTPUT_PATH}/* > google_index_FULL
```
#### Stats for inverted index
> Cluster size: 2 mappers & 1 reducer for step 1.
> Time taken: 1 min
```
# Run in Hadoop
OUTPUT_PATH = os.path.join(OUTPUT_PATH_BASE,'tests_similarities_full')
!hadoop fs -rm -r {OUTPUT_PATH}
!time python similarity_cosine_sort.py \
-r hadoop google_index_FULL \
--output-dir={OUTPUT_PATH} \
--no-output
# !hadoop fs -cat {OUTPUT_PATH}/* > google_similarities_3_unsorted
# !cat google_similarities_3_unsorted | head
!hadoop fs -cat {OUTPUT_PATH}/* > google_similarities_3
```
#### Stats for calculating similarities
> Cluster size: 128 mappers & 128 reducers for step 1, 130 mappers (not doing anything) and 25 reducer for step 2.
> Time taken: 13 mins
```
!head google_similarities_3
!tail google_similarities_3
```
#### Pretty print results
NOTE: depending on how you processed the stop words, your results may differ from the table provided.
```
print "\nTop/Bottom 20 results - Similarity measures - sorted by average"
print "(From the entire data set)"
print '—'*117
print "{0:>30} |{1:>15} |{2:>15} |{3:>15} |{4:>15} |{5:>15}".format(
"pair", "cosine", "jaccard", "overlap", "dice", "average")
print '-'*117
import json
with open("google_similarities_3","r") as f:
sortedSims = f.readlines()
for stripe in sortedSims[:20]:
cosine, other,_ = stripe.split("\t")
data = json.loads(other)
print "{0:>30} |{1:>15f} |{2:>15f} |{3:>15f} |{4:>15f} |{5:>15f}".format(
data[0], float(cosine.strip()), float(data[2]), float(data[3]), float(data[4]), float(data[1]) )
print '—'*117
for stripe in sortedSims[-20:]:
cosine, other,_ = stripe.split("\t")
data = json.loads(other)
print "{0:>30} |{1:>15f} |{2:>15f} |{3:>15f} |{4:>15f} |{5:>15f}".format(
data[0], float(cosine.strip()), float(data[2]), float(data[3]), float(data[4]), float(data[1]) )
Top/Bottom 20 results - Similarity measures - sorted by average
(From the entire data set)
—————————————————————————————————————————————————————————————————————————————————————————————————————————————————————
pair | cosine | jaccard | overlap | dice | average
---------------------------------------------------------------------------------------------------------------------
cons - pros | 0.894427 | 0.800000 | 1.000000 | 0.888889 | 0.895829
forties - twenties | 0.816497 | 0.666667 | 1.000000 | 0.800000 | 0.820791
own - time | 0.809510 | 0.670563 | 0.921168 | 0.802799 | 0.801010
little - time | 0.784197 | 0.630621 | 0.926101 | 0.773473 | 0.778598
found - time | 0.783434 | 0.636364 | 0.883788 | 0.777778 | 0.770341
nova - scotia | 0.774597 | 0.600000 | 1.000000 | 0.750000 | 0.781149
hong - kong | 0.769800 | 0.615385 | 0.888889 | 0.761905 | 0.758995
life - time | 0.769666 | 0.608789 | 0.925081 | 0.756829 | 0.765091
time - world | 0.755476 | 0.585049 | 0.937500 | 0.738209 | 0.754058
means - time | 0.752181 | 0.587117 | 0.902597 | 0.739854 | 0.745437
form - time | 0.749943 | 0.588418 | 0.876733 | 0.740885 | 0.738995
infarction - myocardial | 0.748331 | 0.560000 | 1.000000 | 0.717949 | 0.756570
people - time | 0.745788 | 0.573577 | 0.923875 | 0.729010 | 0.743063
angeles - los | 0.745499 | 0.586207 | 0.850000 | 0.739130 | 0.730209
little - own | 0.739343 | 0.585834 | 0.767296 | 0.738834 | 0.707827
life - own | 0.737053 | 0.582217 | 0.778502 | 0.735951 | 0.708430
anterior - posterior | 0.733388 | 0.576471 | 0.790323 | 0.731343 | 0.707881
power - time | 0.719611 | 0.533623 | 0.933586 | 0.695898 | 0.720680
dearly - install | 0.707107 | 0.500000 | 1.000000 | 0.666667 | 0.718443
found - own | 0.704802 | 0.544134 | 0.710949 | 0.704776 | 0.666165
—————————————————————————————————————————————————————————————————————————————————————————————————————————————————————
arrival - essential | 0.008258 | 0.004098 | 0.009615 | 0.008163 | 0.007534
governments - surface | 0.008251 | 0.003534 | 0.014706 | 0.007042 | 0.008383
king - lesions | 0.008178 | 0.003106 | 0.017857 | 0.006192 | 0.008833
clinical - stood | 0.008178 | 0.003831 | 0.011905 | 0.007634 | 0.007887
till - validity | 0.008172 | 0.003367 | 0.015625 | 0.006711 | 0.008469
evidence - started | 0.008159 | 0.003802 | 0.012048 | 0.007576 | 0.007896
forces - record | 0.008152 | 0.003876 | 0.011364 | 0.007722 | 0.007778
primary - stone | 0.008146 | 0.004065 | 0.009091 | 0.008097 | 0.007350
beneath - federal | 0.008134 | 0.004082 | 0.008403 | 0.008130 | 0.007187
factors - rose | 0.008113 | 0.004032 | 0.009346 | 0.008032 | 0.007381
evening - functions | 0.008069 | 0.004049 | 0.008333 | 0.008065 | 0.007129
bone - told | 0.008061 | 0.003704 | 0.012346 | 0.007380 | 0.007873
building - occurs | 0.008002 | 0.003891 | 0.010309 | 0.007752 | 0.007489
company - fig | 0.007913 | 0.003257 | 0.015152 | 0.006494 | 0.008204
chronic - north | 0.007803 | 0.003268 | 0.014493 | 0.006515 | 0.008020
evaluation - king | 0.007650 | 0.003030 | 0.015625 | 0.006042 | 0.008087
resulting - stood | 0.007650 | 0.003663 | 0.010417 | 0.007299 | 0.007257
agent - round | 0.007515 | 0.003289 | 0.012821 | 0.006557 | 0.007546
afterwards - analysis | 0.007387 | 0.003521 | 0.010204 | 0.007018 | 0.007032
posterior - spirit | 0.007156 | 0.002660 | 0.016129 | 0.005305 | 0.007812
```
# HW5.8 - Evaluation of synonyms that your discovered
In this part of the assignment you will evaluate the success of your synonym detector. Take the top 1,000 closest/most-similar/correlated pairs of words as determined by your measure in HW5.7, and use the `synonyms` function built on the WordNet synonym lists from the NLTK package (see the provided code below).
For each (word1, word2) pair, check whether word1 is in the list
synonyms(word2), and vice-versa. If either of the two is a synonym of the other,
consider the pair a 'hit', and then report the precision, recall, and F1 measure of
your detector across your 1,000 best guesses. Report the macro averages of these measures.
### Calculate performance measures:
$$\text{Precision}\ (P) = \frac{TP}{TP + FP}$$
$$\text{Recall}\ (R) = \frac{TP}{TP + FN}$$
$$F1 = \frac{2 \cdot P \cdot R}{P + R}$$
We calculate precision by counting the number of hits and dividing by the number of occurrences in our top 1,000 (opportunities).
We calculate recall by counting the number of hits and dividing by the number of synonyms in WordNet (syns).
Other diagnostic measures not implemented here: https://en.wikipedia.org/wiki/F1_score#Diagnostic_Testing
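A small worked check of the formulas above, using toy counts (not the actual assignment results):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from raw confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 31 hits among 1,000 guesses, 500 WordNet synonyms missed.
p, r, f = prf1(tp=31, fp=969, fn=500)
print(round(p, 3), round(r, 3), round(f, 3))
```

Note that F1 always lies between precision and recall, closer to the smaller of the two, which is why a detector with very low precision cannot be rescued by high recall.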
```
!head -n 1000 google_similarities_3 > google_similarities_3_top_1000
''' Performance measures '''
from __future__ import division
import numpy as np
import json
import nltk
from nltk.corpus import wordnet as wn
import sys
nltk.download('wordnet')
#print all the synset element of an element
def synonyms(string):
syndict = {}
for i,j in enumerate(wn.synsets(string)):
syns = j.lemma_names()
for syn in syns:
syndict.setdefault(syn,1)
return syndict.keys()
hits = []
TP = 0
FP = 0
TOTAL = 0
flag = False # so we don't double count, but at the same time don't miss hits
top1000sims = []
with open("google_similarities_3_top_1000","r") as f:
for line in f.readlines():
line = line.strip()
avg,lisst = line.split("\t")
lisst = json.loads(lisst)
lisst.append(avg)
top1000sims.append(lisst)
measures = {}
not_in_wordnet = []
for line in top1000sims:
TOTAL += 1
pair = line[0]
words = pair.split(" - ")
for word in words:
if word not in measures:
measures[word] = {"syns":0,"opps": 0,"hits":0}
measures[word]["opps"] += 1
syns0 = synonyms(words[0])
measures[words[1]]["syns"] = len(syns0)
if len(syns0) == 0:
not_in_wordnet.append(words[0])
if words[1] in syns0:
TP += 1
hits.append(line)
flag = True
measures[words[1]]["hits"] += 1
syns1 = synonyms(words[1])
measures[words[0]]["syns"] = len(syns1)
if len(syns1) == 0:
not_in_wordnet.append(words[1])
if words[0] in syns1:
if flag == False:
TP += 1
hits.append(line)
measures[words[0]]["hits"] += 1
flag = False
precision = []
recall = []
f1 = []
for key in measures:
p,r,f = 0,0,0
if measures[key]["hits"] > 0 and measures[key]["syns"] > 0:
p = measures[key]["hits"]/measures[key]["opps"]
r = measures[key]["hits"]/measures[key]["syns"]
f = 2 * (p*r)/(p+r)
# For calculating measures, only take into account words that have synonyms in wordnet
if measures[key]["syns"] > 0:
precision.append(p)
recall.append(r)
f1.append(f)
# Take the mean of each measure
print "—"*110
print "Number of Hits:",TP, "out of top",TOTAL
print "Number of words without synonyms:",len(not_in_wordnet)
print "—"*110
print "Precision\t", np.mean(precision)
print "Recall\t\t", np.mean(recall)
print "F1\t\t", np.mean(f1)
print "—"*110
print "Words without synonyms:"
print "-"*100
for word in not_in_wordnet:
print synonyms(word),word
```
### Sample output
```
——————————————————————————————————————————————————————————————————————————————————————————————————————————————
Number of Hits: 31 out of top 1000
Number of words without synonyms: 67
——————————————————————————————————————————————————————————————————————————————————————————————————————————————
Precision 0.0280214404967
Recall 0.0178598869579
F1 0.013965517619
——————————————————————————————————————————————————————————————————————————————————————————————————————————————
Words without synonyms:
----------------------------------------------------------------------------------------------------
[] scotia
[] hong
[] kong
[] angeles
[] los
[] nor
[] themselves
[]
.......
```
# HW5.9 - OPTIONAL: using different vocabulary subsets
Repeat HW5 using vocabulary words ranked 8,001-10,000; 7,001-10,000; 6,001-10,000; 5,001-10,000; 3,001-10,000; and 1,001-10,000.
Don't forget to report your cluster configuration.
Generate the following graphs:
- vocabulary size (X-axis) versus CPU time for indexing
- vocabulary size (X-axis) versus number of pairs processed
- vocabulary size (X-axis) versus F1 measure, precision, and recall
# HW5.10 - OPTIONAL
There are many good ways to build our synonym detectors, so for this optional homework,
measure co-occurrence by (left/right/all) consecutive words only,
or make stripes according to word co-occurrences with the accompanying
2-, 3-, or 4-grams (note here that your output will no longer
be interpretable as a network) inside of the 5-grams.
# HW5.11 - OPTIONAL
Once again, benchmark your top 10,000 associations (as in 5.7), this time for your
results from 5.8. Has your detector improved?
# Custom Settings-Set Point
```
import sys
import time
sys.path.append("..")
from simulation_evaluation.microgrid_simulator import ControllEnvironment
from simulation_evaluation.microgrid_ga_v2 import MicrogridGA
from simulation_evaluation.controller_offline import SpaceShareController
from simulation_evaluation.protection_controller import ProtectionController
day_nr=18
precision=1
battery_power=4.1
battery_max_discharge = 40.0
pv_scale=0.4
priorities=[1, 2, 3, 4, 5, 6, 7]
```
# Custom Utility
```
#Custom Utility Function
def evaluate_individual(individual):
SoC_max = 100.0
SoC_min = 40.0
P_rated = 3000
wb=we=0.125
wp=wd=0.25
for index, row in individual.iterrows():
util = 0.0
#Battery Health
SoC = row['Battery_SoC']
P_Cons = row['Consumed_Energy']
Fb_t = (SoC-SoC_min)/(SoC_max-SoC_min)+P_Cons/P_rated
#Efficiency
P_System_Load = row['System_Load']
P_PV_Gen = row['Generated_Energy']
Fe_t = (P_Cons-P_System_Load)/P_Cons+P_Cons/(P_Cons+P_PV_Gen)
#Penalty
Fp_t = 1.0 if row['Battery_SoC']>SoC_min else 0.0
#Dissatisfaction
priority_keys = [ x for x in list(individual.keys()) if "_Priority" in x]
Fd_t = 0.0
for x in priority_keys:
Fd_t+=(row[x.split("_Priority")[0]+"_Quota"]*1/row[x])/len(priority_keys)
#Combine with weights
individual.loc[index,"Util"] = wb*Fb_t+we*Fe_t+wp*Fp_t+wd*Fd_t
return sum(individual['Util']) / len(individual)
```
# Space Shared Controller
```
#Basic Controller Run
start_time = time.time()
space_shared_controller = SpaceShareController(day_nr=day_nr, precision=precision, battery_power=battery_power,
battery_max_discharge = battery_max_discharge,
pv_scale=pv_scale,priorities=priorities)
#Utility is only used to calculate the final result with this method
space_shared_controller.evaluate_individual = evaluate_individual
spaceshared_util = space_shared_controller.run()
space_time = (time.time()-start_time)
print("Space Shared Util: ",spaceshared_util)
print("Total time: ",space_time)
```
# Safety Controller Run
```
# Safety Controller Run
start_time = time.time()
protection_controller = ProtectionController(day_nr=day_nr, precision=precision, battery_power=battery_power,
battery_max_discharge = battery_max_discharge,
pv_scale=pv_scale,priorities=priorities)
#Utility is only used to calculate the final result with this method
protection_controller.evaluate_individual = evaluate_individual
protection_util = protection_controller.run()
print("Protection Util: ",protection_util)
prot_time = (time.time()-start_time)
print("Total time: ",prot_time)
```
# GA Controller
```
##Extending the GA with proprietary Utility Calculation
#Set Size and Generation Count
my_ga = MicrogridGA(200, 200,day_nr=day_nr, precision=precision, battery_power=battery_power,
battery_max_discharge = battery_max_discharge, pv_scale=pv_scale,
priorities=priorities,step_type="percentage", mutate_chance=0.2, elit_perc=0.05)
#Redefine to new function
my_ga.evaluate_individual = evaluate_individual
my_ga.run()
```
# Comparative Plots
```
import pandas as pd
import random
import datetime
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
ga_orig_indi = my_ga.get_original()
util_orig = my_ga.evaluate_individual(ga_orig_indi)
ga_best_indi = my_ga.get_deployed_best()
spaceshare_indi = space_shared_controller.get_deployment()
protection_indi = protection_controller.get_deployment()
f_size = (10,5)
fig,ax = plt.subplots(1,1,figsize=f_size)
plt.title("Utility")
#ax.plot([util_orig for x in my_ga.history],'r-^',label="Original")
ax.plot([my_ga.history[x][0] for x in my_ga.history],'g-^',label="Utility GA")
ax.plot([spaceshared_util for x in my_ga.history],'b-^',label="Utility SpaceShared")
ax.plot([protection_util for x in my_ga.history],'m-^',label="Utility Protection")
ax.legend(loc="lower right")
ax.grid()
plt.show()
parameters = ["Battery_SoC","Consumed_Energy"]
for p in parameters:
fig,ax = plt.subplots(1,1,figsize=f_size)
plt.title(p)
# ax.plot(ga_orig_indi.index,ga_orig_indi[p],'r-^', label="Original")
ax.plot(ga_best_indi.index,ga_best_indi[p],'g-*',label="Best GA")
ax.plot(spaceshare_indi.index,spaceshare_indi[p],'b-o',label="Best SpaceSharedC")
ax.plot(protection_indi.index,protection_indi[p],'m-+',label="Best ProtectionC")
ax.legend(loc="upper left")
ax.grid()
plt.show()
#Device Energy
fig,ax = plt.subplots(1,1,figsize=f_size)
plt.title("Energy Consumed by Devices")
#ax.plot(ga_orig_indi.index,ga_orig_indi["Consumed_Energy"]-ga_orig_indi["System_Load"],'r-^', label="Original")
ax.plot(ga_best_indi.index,ga_best_indi["Consumed_Energy"]-ga_best_indi["System_Load"],'g-*',label="Best GA")
ax.plot(spaceshare_indi.index,spaceshare_indi["Consumed_Energy"]-spaceshare_indi["System_Load"],'b-o',label="Best SpaceSharedC")
ax.plot(protection_indi.index,protection_indi["Consumed_Energy"]-protection_indi["System_Load"],'m-+',label="Best ProtectionC")
ax.legend(loc="upper left")
ax.grid()
plt.show()
```
## Algorithms -I
### Distance, Centrality, Community and Traversal
```
import matplotlib.pyplot as plt
import networkx as nx
import seaborn as sns
sns.set()
%matplotlib inline
import warnings
import matplotlib.cbook
warnings.filterwarnings("ignore",category=matplotlib.cbook.mplDeprecation)
G = nx.karate_club_graph()
nx.draw(G)
```
### 1. Distance
Graph diameter, radius, eccentricity and other properties.
- ```center(G[, e, usebounds])``` Returns the center of the graph G.
- ```diameter(G[, e, usebounds])``` Returns the diameter of the graph G.
- ```eccentricity(G[, v, sp])``` Returns the eccentricity of nodes in G.
- ```extrema_bounding(G[, compute])``` Compute the requested extreme distance metric of undirected graph G.
- ```periphery(G[, e, usebounds])``` Returns the periphery of the graph G.
- ```radius(G[, e, usebounds])``` Returns the radius of the graph G.
```
nx.radius(G), nx.diameter(G)
nx.center(G)
nx.eccentricity(G)
nx.extrema_bounding(G)
nx.periphery(G)
```
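All of these are eccentricity-based: a node's eccentricity is its greatest shortest-path distance to any other node, the radius is the minimum eccentricity, and the diameter the maximum. A small pure-Python sketch on an adjacency-dict graph (an illustration of the definitions, not NetworkX internals):

```python
from collections import deque

def eccentricities(adj):
    """Greatest BFS distance from each node, for an unweighted connected graph."""
    def bfs_depth(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values())
    return {v: bfs_depth(v) for v in adj}

# Path graph 0-1-2-3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
ecc = eccentricities(adj)                                # {0: 3, 1: 2, 2: 2, 3: 3}
radius, diameter = min(ecc.values()), max(ecc.values())  # 2, 3
```

The center and periphery fall out the same way: the center is the set of nodes whose eccentricity equals the radius, the periphery those equal to the diameter.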
-----------
### Centrality
#### Degree
- ```degree_centrality(G)``` Compute the degree centrality for nodes.
- ```in_degree_centrality(G)``` Compute the in-degree centrality for nodes.
- ```out_degree_centrality(G)``` Compute the out-degree centrality for nodes.
```
G = nx.balanced_tree(r=3, h=2)
nx.draw(G, node_size = 500,node_color = 'lightblue', with_labels =True)
nx.degree_centrality(G)
nx.in_degree_centrality(nx.to_directed(G))
nx.out_degree_centrality(nx.to_directed(G))
```
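Degree centrality is simply each node's degree divided by `n - 1`, the largest degree possible — a one-line sketch of the definition:

```python
def degree_centrality(adj):
    # Fraction of the other n-1 nodes that each node is connected to.
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

# Path graph 0-1-2: the middle node touches every other node.
print(degree_centrality({0: [1], 1: [0, 2], 2: [1]}))
# → {0: 0.5, 1: 1.0, 2: 0.5}
```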
-------
#### Eigenvector Centrality
- ```eigenvector_centrality(G[, max_iter, tol, …])``` Compute the eigenvector centrality for the graph G.
- ```eigenvector_centrality_numpy(G[, weight, …])``` Compute the eigenvector centrality for the graph G.
- ```katz_centrality(G[, alpha, beta, max_iter, …])``` Compute the Katz centrality for the nodes of the graph G.
- ```katz_centrality_numpy(G[, alpha, beta, …])``` Compute the Katz centrality for the graph G.
```
nx.eigenvector_centrality(G)
nx.katz_centrality(G)
```
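Eigenvector centrality scores each node by the scores of its neighbours, which is exactly the leading eigenvector of the adjacency matrix; NetworkX's default solver is power iteration. A minimal pure-Python sketch of that iteration (an illustration, not the library's implementation):

```python
import math

def eigenvector_centrality(adj, iters=100):
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        # Each node's new score is the sum of its neighbours' current scores.
        x_new = {v: sum(x[u] for u in adj[v]) for v in adj}
        # Normalize to unit Euclidean length so the scores don't blow up.
        norm = math.sqrt(sum(s * s for s in x_new.values()))
        x = {v: s / norm for v, s in x_new.items()}
    return x

# On a triangle every node is symmetric, so all scores come out equal.
c = eigenvector_centrality({0: [1, 2], 1: [0, 2], 2: [0, 1]})
```

Katz centrality follows the same recurrence but adds an attenuation factor `alpha` and a constant `beta` to every node, which lets it handle graphs where pure eigenvector centrality collapses to zero.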
-----------
#### Betweenness Centrality
- ```betweenness_centrality(G[, k, normalized, …])``` Compute the shortest-path betweenness centrality for nodes.
- ```edge_betweenness_centrality(G[, k, …])``` Compute betweenness centrality for edges.
- ```betweenness_centrality_subset(G, sources, …)``` Compute betweenness centrality for a subset of nodes.
- ```edge_betweenness_centrality_subset(G, …[, …])``` Compute betweenness centrality for edges for a subset of nodes.
```
nx.betweenness_centrality(G)
```
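Betweenness counts, for every pair of other nodes, the fraction of shortest paths that pass through a node. A brute-force sketch on a tiny graph makes the definition concrete (NetworkX uses Brandes' much faster algorithm instead):

```python
from itertools import combinations

def betweenness(adj):
    nodes = list(adj)
    score = {v: 0.0 for v in nodes}
    for s, t in combinations(nodes, 2):
        # Enumerate all simple paths s→t, then keep only the shortest ones.
        paths, stack = [], [[s]]
        while stack:
            p = stack.pop()
            if p[-1] == t:
                paths.append(p)
                continue
            for nb in adj[p[-1]]:
                if nb not in p:
                    stack.append(p + [nb])
        shortest = min(map(len, paths))
        sps = [p for p in paths if len(p) == shortest]
        for v in nodes:
            if v not in (s, t):
                score[v] += sum(v in p for p in sps) / len(sps)
    return score

# Path graph 0-1-2: every shortest 0→2 path runs through node 1.
print(betweenness({0: [1], 1: [0, 2], 2: [1]}))
# → {0: 0.0, 1: 1.0, 2: 0.0}
```

Enumerating all simple paths is exponential in general, which is why this only works on toy graphs.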
----------------
### 2. Community
Functions for computing and measuring community structure.
The functions in this class are not imported into the top-level networkx namespace. You can access these functions by importing the ```networkx.algorithms.community``` module, then accessing the functions as attributes of community. For example:
```
from networkx.algorithms import community
G = nx.barbell_graph(5, 1)
nx.draw(G)
communities_generator = community.girvan_newman(G)
top_level_communities = next(communities_generator)
next_level_communities = next(communities_generator)
sorted(map(sorted, next_level_communities))
```
--------
#### K-Clique Community
```
G = nx.relaxed_caveman_graph(5,10,0.1)
nx.draw(G, node_size = 40,node_color = 'blue')
from networkx.algorithms.community import k_clique_communities
c = list(k_clique_communities(G,k=4))
print(c)
from networkx.algorithms.community import greedy_modularity_communities
c = list(greedy_modularity_communities(G))
print(c)
from networkx.algorithms.community import asyn_lpa_communities
c = list(asyn_lpa_communities(G))
print(c)
from networkx.algorithms.community import label_propagation_communities
c = list(label_propagation_communities(G))
print(c)
from networkx.algorithms.community import asyn_fluidc
c = list(asyn_fluidc(G,k=5))
print(c)
```
---------
### 3 Traversal
#### 3.1 Depth First Search
Basic algorithms for depth-first searching the nodes of a graph.
- ```dfs_edges(G[, source, depth_limit])``` Iterate over edges in a depth-first-search (DFS).
- ```dfs_tree(G[, source, depth_limit])``` Returns oriented tree constructed from a depth-first-search from source.
- ```dfs_predecessors(G[, source, depth_limit])``` Returns dictionary of predecessors in depth-first-search from source.
- ```dfs_successors(G[, source, depth_limit])``` Returns dictionary of successors in depth-first-search from source.
- ```dfs_preorder_nodes(G[, source, depth_limit])``` Generate nodes in a depth-first-search pre-ordering starting at source.
- ```dfs_postorder_nodes(G[, source, depth_limit])``` Generate nodes in a depth-first-search post-ordering starting at source.
- ```dfs_labeled_edges(G[, source, depth_limit])``` Iterate over edges in a depth-first-search (DFS) labeled by type.
```
G = nx.random_tree(20)
nx.draw(G, node_size = 200,node_color = 'lightblue',with_labels=True)
L = list(nx.dfs_edges(G, source=0, depth_limit=5))
print(L)
TG = nx.dfs_tree(G)
nx.draw(TG, node_size = 200,node_color = 'lightblue',with_labels=True)  # draw the DFS tree, not G
nx.dfs_predecessors(G, source=0)
nx.dfs_successors(G, source=0)
list(nx.dfs_preorder_nodes(G))
list(nx.dfs_postorder_nodes(G))
list(nx.dfs_labeled_edges(G))
```
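A recursive pre-order DFS is only a few lines — a sketch on a plain adjacency dict (the NetworkX versions above add depth limits, edge labelling, and iterative traversal on top of this idea):

```python
def dfs_preorder(adj, source):
    seen, order = set(), []
    def visit(u):
        seen.add(u)
        order.append(u)          # record the node the first time we reach it
        for v in adj[u]:
            if v not in seen:
                visit(v)
    visit(source)
    return order

print(dfs_preorder({0: [1, 2], 1: [3], 2: [], 3: []}, 0))
# → [0, 1, 3, 2]
```

Moving `order.append(u)` to after the neighbour loop would give the post-ordering instead.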
-----------
#### 3.2 Breadth-first search
Basic algorithms for breadth-first searching the nodes of a graph.
- ```bfs_edges(G, source[, reverse, depth_limit])``` Iterate over edges in a breadth-first-search starting at source.
- ```bfs_tree(G, source[, reverse, depth_limit])``` Returns an oriented tree constructed from of a breadth-first-search starting at source.
- ```bfs_predecessors(G, source[, depth_limit])``` Returns an iterator of predecessors in breadth-first-search from source.
- ```bfs_successors(G, source[, depth_limit])``` Returns an iterator of successors in breadth-first-search from source.
```
print(list(nx.bfs_edges(G,0)))
nx.draw(nx.bfs_tree(G,0), node_color = 'lightblue', node_size = 200, with_labels = True)
print(list(nx.bfs_predecessors(G,0)))
print(list(nx.bfs_successors(G,0)))
```
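Breadth-first search swaps DFS's stack for a queue, so nodes come out in order of distance from the source — a sketch:

```python
from collections import deque

def bfs_order(adj, source):
    seen, order, q = {source}, [], deque([source])
    while q:
        u = q.popleft()          # FIFO queue: nearest unvisited node first
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return order

print(bfs_order({0: [1, 2], 1: [3], 2: [], 3: []}, 0))
# → [0, 1, 2, 3]
```

Compare with the DFS example on the same graph, which descends to node 3 before ever reaching node 2.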
---------
# Pandas
---
On macOS, set these environment variables first if you run into locale/encoding errors:

```
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
```
```
import pandas as pd
```
### Reading from csv file
```
%ls ../data
```
*let's use countries.csv file*
```
df = pd.read_csv('../data/countries.csv')
df
```
*If it gives encoding error, define encoding format*
```python
df = pd.read_csv('../data/countries.csv', encoding='utf-8')
```
```
df.head()
# select rows from 10 - 30
ndf = df[10: 30]
ndf.head()
# reset the index of our new dataframe "ndf" to start it from 0
ndf.reset_index()
```
*Every dataframe operation returns a new dataframe, i.e., it does not change the old one (unless you pass `inplace=True`)*
```
ndf.head()
# now drop the old index while resetting it instead of creating a new column
ndf.reset_index(drop=True)
# set a new index out of a column
ndf.set_index("Code")
ndf.head()
# index of dataframe
ndf.index
mdf = ndf.set_index("Code")
mdf.head()
mdf.index
mdf.columns
df.columns
mdf
mdf[3]             # raises KeyError: [] selects columns by label
mdf["Name"]
mdf["AR"]          # raises KeyError: "AR" is an index value, not a column
mdf[0]             # raises KeyError as well
mdf[2:5]           # slicing by row position does work
mdf.loc["AW"]      # .ix was removed from pandas; use .loc for label-based access
mdf.loc["AU"]
# mdf.loc[0] would raise KeyError here: the index holds strings, not integers
mdf.iloc[0]        # ...and .iloc for position-based access
ndf.head()
ndf.reindex(columns=["Pop", "Name", "Code"]).head()
ndf.reindex(index=[8, 11, 10, 14, 13, 12])
df2 = pd.read_csv('../data/nepal/API_NPL_DS2_en_csv_v2.csv',
skiprows=3)
df2.head()
df2['Country Name'].unique()
df2['Country Name'].describe()
ag_df = df2[df2["Indicator Code"] == 'AG.AGR.TRAC.NO']
ag_df
ag_df.loc[:, "1960": "2015"]
ag_df
ag_df.drop(["Country Name", "Country Code",
"Indicator Name", "Indicator Code", "Unnamed: 60"], axis=1)
ag_df
ag_df.drop(["Country Name", "Country Code",
"Indicator Name", "Indicator Code", "Unnamed: 60"],
axis=1, inplace=True)
ag_df
ag_df.dropna()
ag_df = ag_df.transpose()
ag_df
ag_df.columns = ["Values"]
ag_df
ag_df = ag_df.dropna()
ag_df
import matplotlib.pyplot as plt
%matplotlib inline
ag_df.plot()
df2.head()
df2[(df2['2009'] > 1000) & (df2['2010'] < 4000)]
df3 = df2.drop(["Country Name", "Country Code", "Indicator Name"],
axis=1)
df3.head()
df3 = df3.transpose()
df3.head()
df3.iloc[0]
df3.columns = df3.iloc[0]
df3.head()
df3 = df3.drop("Indicator Code")
df3.head()
df3[["AG.LND.ARBL.HA.PC", "AG.LND.AGRI.ZS"]].dropna()
df3[["AG.LND.ARBL.HA.PC", "AG.LND.AGRI.ZS"]].dropna().mean()
df3[["AG.LND.ARBL.HA.PC", "AG.LND.AGRI.ZS"]].sort_values("AG.LND.ARBL.HA.PC",
ascending=False)
mdf.sort_index(ascending=False)
mdf.sort_values("Name", ascending=False)
ag_df.plot(kind='bar', color='cyan')
mdf.head()
def lowercase(x):
return x.lower()
lambda x: x.upper()  # equivalent lambda form (the str method is .upper(), not .uppercase())
mdf.Name
mdf["Name"]
df5 = mdf.Name.apply(lowercase)
df5.head()
mdf['Name'] = mdf.Name.apply(lambda x: x.upper())
mdf
```
http://pandas.pydata.org/pandas-docs/stable/tutorials.html
http://nbviewer.jupyter.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/02%20-%20Lesson.ipynb
```
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
df_train = pd.read_excel('ecoli.train.xlsx')
df_test = pd.read_excel('ecoli.test.xlsx')
train = df_train
test = df_test
train.shape
test.shape
train.describe()
import seaborn
import matplotlib.pyplot as plt
def plot_df(df, name):
corr = df[df.columns].corr()
    mask = np.zeros_like(corr, dtype=bool)  # np.bool was removed in newer NumPy
mask[np.triu_indices_from(mask)] = True
plt.figure(figsize=(20, 15))
seaborn.set(font_scale=1.2)
seaborn.heatmap(corr, mask=mask, center=0, annot=True,
square=True, linewidths=3, alpha=0.7)
plt.title(name)
plot_df(train, 'Train')
print(train.columns)
class_name = input("Choose the class: ")
minmax_scaler = MinMaxScaler()
standard_scaler = StandardScaler()
temp_tr_ans = train[class_name]
temp_ts_ans = test[class_name]
class_count = len(temp_tr_ans.unique())
print(class_count)
tr_data = train.drop([class_name], axis=1)
ts_data = test.drop([class_name], axis=1)
# # Fill in missing values, if missing values are encoded as 0
# from sklearn.impute import SimpleImputer
# rep_0 = SimpleImputer(missing_values=0, strategy="mean")
# tr_data = rep_0.fit_transform(tr_data)
# ts_data = rep_0.fit_transform(ts_data)
temp_ts_ans
mm_tr_data = minmax_scaler.fit_transform(tr_data)
mm_ts_data = minmax_scaler.transform(ts_data)
std_tr_data = standard_scaler.fit_transform(tr_data)
std_ts_data = standard_scaler.transform(ts_data)
tr_ans, _ = pd.factorize(temp_tr_ans, sort=True)
ts_ans, _ = pd.factorize(temp_ts_ans, sort=True)
tr_ans
ts_ans
import tensorflow as tf
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import ParameterGrid
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier  # removed in newer TF; the scikeras package is the successor
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Dropout
from sklearn import metrics
from tensorflow.keras.regularizers import l2
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import confusion_matrix
# real Version
def create_model(hidden_layers = 1, neurons =1, init_mode = 'uniform',
activation = 'elu', kernel_regularizer=l2(0.001)):
model = Sequential()
model.add(Dense(neurons, input_dim=len(mm_tr_data.T), kernel_initializer=init_mode, activation=activation))
for i in range(hidden_layers):
model.add(Dense(neurons, kernel_initializer=init_mode, kernel_regularizer=kernel_regularizer))
model.add(BatchNormalization())
model.add(Activation(activation))
model.add(Dropout(0.2))
if class_count == 2:
model.add(Dense(1,activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
elif class_count != 2:
model.add(Dense(class_count, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
keras_model = KerasClassifier(build_fn=create_model, epochs=64, batch_size=16)
leaky_relu = tf.nn.leaky_relu
hidden_layers = [4,8,12]
neurons = [32, 64, 128]
activation = ['elu', leaky_relu]
init_mode = ['glorot_uniform', 'he_normal']
param_grid = dict(hidden_layers = hidden_layers, neurons = neurons, init_mode = init_mode, activation = activation)
minmax_grid = GridSearchCV(estimator=keras_model, param_grid=param_grid, n_jobs= -1, cv=3)
std_grid = GridSearchCV(estimator=keras_model, param_grid=param_grid, n_jobs= -1, cv=3)
import warnings
warnings.filterwarnings("ignore")
minmax_grid_result = minmax_grid.fit(mm_tr_data, tr_ans)
std_grid_result = std_grid.fit(std_tr_data, tr_ans)
print("Scaler = minmax")
print("Best: %f using %s" % (minmax_grid_result.best_score_, minmax_grid_result.best_params_))
means = minmax_grid_result.cv_results_['mean_test_score']
stds = minmax_grid_result.cv_results_['std_test_score']
params = minmax_grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
print("Scaler = standard")
print("Best: %f using %s" % (std_grid_result.best_score_, std_grid_result.best_params_))
means = std_grid_result.cv_results_['mean_test_score']
stds = std_grid_result.cv_results_['std_test_score']
params = std_grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
pred = minmax_grid.predict(mm_ts_data)
accuracy = accuracy_score(pred, ts_ans)
ts_ans = ts_ans.astype(float)
precision, recall, fbeta_score, support = precision_recall_fscore_support(ts_ans, pred)
conf_mat = confusion_matrix(ts_ans, pred)
print("Accuracy = ", accuracy)
print("Confusion Matrix")
print("{0}".format(metrics.confusion_matrix(ts_ans, pred)))
print("")
print("Classification Report")
print(metrics.classification_report(ts_ans, pred))
pred = std_grid.predict(std_ts_data)
accuracy = accuracy_score(pred, ts_ans)
ts_ans = ts_ans.astype(float)
precision, recall, fbeta_score, support = precision_recall_fscore_support(ts_ans, pred)
conf_mat = confusion_matrix(ts_ans, pred)
print("Accuracy = ", accuracy)
print("Confusion Matrix")
print("{0}".format(metrics.confusion_matrix(ts_ans, pred)))
print("")
print("Classification Report")
print(metrics.classification_report(ts_ans, pred))
# # testbed Version
# def create_model(hidden_layers = 1, neurons =1, init_mode = 'uniform', activation = 'elu'):
# model = Sequential()
# model.add(Dense(neurons, input_dim=len(tr_data.T), kernel_initializer=init_mode, activation=activation))
# for i in range(hidden_layers):
# model.add(Dense(neurons, kernel_initializer=init_mode))
# model.add(BatchNormalization())
# model.add(Activation(activation))
# model.add(Dropout(0.2))
# if class_count == 2:
# model.add(Dense(1,activation='sigmoid'))
# model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# elif class_count != 2:
# model.add(Dense(class_count-1, activation='softmax'))
# model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# return model
# hidden_layers = [5, 10]
# neurons = [32, 64]
# activation = ['elu']
# init_mode = ['he_uniform']
# keras_model = KerasClassifier(build_fn=create_model, epochs=4, batch_size=4)
# param_grid = dict(hidden_layers = hidden_layers, neurons = neurons, init_mode = init_mode, activation = activation)
# grid = GridSearchCV(estimator=keras_model, param_grid=param_grid, n_jobs= -1, cv=2)
```
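With 3 hidden-layer counts × 3 neuron counts × 2 activations × 2 initializers, each grid above covers 36 parameter combinations, and with `cv=3` that means 108 model fits per scaler. The enumeration GridSearchCV performs can be sketched with `itertools.product`:

```python
from itertools import product

param_grid = {
    "hidden_layers": [4, 8, 12],
    "neurons": [32, 64, 128],
    "activation": ["elu", "leaky_relu"],
    "init_mode": ["glorot_uniform", "he_normal"],
}

# One dict per point in the Cartesian product of all parameter lists.
keys = sorted(param_grid)
combos = [dict(zip(keys, vals)) for vals in product(*(param_grid[k] for k in keys))]
print(len(combos))      # → 36
print(len(combos) * 3)  # 36 combinations x 3 CV folds = 108 fits
```

This multiplicative growth is why the commented-out "testbed" version below trims the grid to 2 × 2 × 1 × 1 combinations with tiny epoch counts.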
# Interactive Widget: Back End Code
Throughout this workbook, we used steps from the following web pages to inform our widgets.
- https://ipywidgets.readthedocs.io/en/latest/examples/Widget%20Basics.html
- https://ipywidgets.readthedocs.io/en/latest/examples/Widget%20List.html
- https://ipywidgets.readthedocs.io/en/latest/examples/Using%20Interact.html
## Setting Up the Model for the Widget
### Set up the training and testing sets.
```
# Import necessary data libraries.
from collections import Counter
from imblearn.datasets import fetch_datasets
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from imblearn.pipeline import make_pipeline as make_pipeline_imb
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import NearMiss
from imblearn.metrics import classification_report_imbalanced
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score, accuracy_score, classification_report
import numpy as np
import pandas as pd
# Set up datasets.
master_data_url = 'https://raw.githubusercontent.com/georgetown-analytics/Formula1/main/data/processed/MasterData5.csv'
master_data = pd.read_csv(master_data_url, sep = ',', engine = 'python')
one_hot_url = 'https://raw.githubusercontent.com/georgetown-analytics/Formula1/main/data/processed/OneHot_MasterData5.csv'
one_hot = pd.read_csv(one_hot_url, sep = ',', engine = 'python')
# Drop any nulls.
data_df = one_hot.dropna(axis=0)
# Establish our X (independent) variables.
X = data_df[['grid', 'alt', 'average_lap_time',
'minimum_lap_time', 'PRCP', 'TAVG', 'TMAX', 'TMIN',
'country_CompletionStatus_1', 'nationality_CompletionStatus_1',
'binned_circuits_CompletionStatus_1', "trackType_CompletionStatus_1",
'country_CompletionStatus_2', 'nationality_CompletionStatus_2',
'binned_circuits_CompletionStatus_2', "trackType_CompletionStatus_2"]]
# Establish our y (dependent, target) variable.
y = data_df['CompletionStatus']
# Split our data into training and testing sets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Import SMOTE so we can deal with our class imbalance.
from imblearn.over_sampling import SMOTE, ADASYN
# Use SMOTE on our X_ and y_train to create X_ and y_resampled.
X_resampled, y_resampled = SMOTE().fit_resample(X_train, y_train)
# Check the balance of our resampled data.
print(sorted(Counter(y_resampled).items()))
```
Above we can see that we've fixed the class imbalance of our training sets.
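SMOTE balances the classes by synthesizing new minority-class rows along the line segment between a minority sample and one of its minority-class neighbours. A rough pure-Python sketch of that interpolation idea (a toy version using random pairs, not imblearn's k-nearest-neighbour implementation):

```python
import random

def oversample_minority(X_min, n_new):
    """Create n_new synthetic points by interpolating between random minority pairs."""
    synthetic = []
    for _ in range(n_new):
        a, b = random.sample(X_min, 2)
        lam = random.random()  # position along the segment a→b
        synthetic.append([ai + lam * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
new_pts = oversample_minority(minority, 5)  # 3 real + 5 synthetic minority rows
```

Because every synthetic point lies between two real minority samples, SMOTE stays inside the minority class's region of feature space rather than duplicating rows verbatim.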
### Create CSV Files
In order to not have a randomized training set every time someone uses the widget, we'll create CSV files of our training data that we can call back to.
```
# Use pandas.DataFrame.to_csv to create the CSV file.
X_resampled.to_csv("data/interim/X_resampled_forWidget.csv", index = False)
# Use pandas.DataFrame.to_csv to create the CSV file.
y_resampled.to_csv("data/interim/y_resampled_forWidget.csv", index = False)
```
Further down, upon running our model after bringing in the above CSV files, we got an error stating `"A column-vector y was passed when a 1d array was expected."` We know the model worked beforehand, so we need to revert our new y_resampled to the same type it used to be.
```
# What type was y_resampled?
type(y_resampled)
```
The result above says that `y_resampled` used to be pandas.core.series.Series.
### Set Up the Initial Model
Although our work involves several models, we're only using one for now: Logistic Regression. This model will run with the regular `X_test` and `y_test` data.
```
# Import the necessary data libraries that we'll need for our model.
from sklearn.metrics import f1_score
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split as tts
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from sklearn.linear_model import LogisticRegression
from yellowbrick.classifier import ClassificationReport
# Set up datasets.
X_resampled_url = 'https://raw.githubusercontent.com/georgetown-analytics/Formula1/main/data/interim/X_resampled_forWidget.csv'
X_resampled = pd.read_csv(X_resampled_url, sep = ',', engine = 'python')
y_resampled_url = 'https://raw.githubusercontent.com/georgetown-analytics/Formula1/main/data/interim/y_resampled_forWidget.csv'
y_resampled = pd.read_csv(y_resampled_url, sep = ',', engine = 'python')
# View X_resampled.
X_resampled.head()
```
We know from testing the type of `y_resampled` before we brought in the CSV files that `y_resampled` needs to be a series in order for our model to run correctly. We also know from this site (https://datatofish.com/pandas-dataframe-to-series/) how to change a dataframe into a series.
```
# Change the y_resampled dataframe into a y_resampled series.
y_resampled = y_resampled.squeeze()
# View y_resampled.
y_resampled.head()
# Create the function score_model.
def score_model(X_resampled, y_resampled, X_test, y_test, estimator, **kwargs):
"""
Test various estimators.
"""
# Instantiate the classification model and visualizer.
estimator.fit(X_resampled, y_resampled, **kwargs)
expected = y_test
predicted = estimator.predict(X_test)
# Compute and return F1 (harmonic mean of precision and recall).
print("{}: {}".format(estimator.__class__.__name__, f1_score(expected, predicted)))
# Run the Logistic Regression model.
score_model(X_resampled, y_resampled, X_test, y_test, LogisticRegression(solver='lbfgs'))
```
## Widget Experimentation
### Set Up
```
# Import necessary data libraries.
import pandas as pd
import os
import csv
import io
import requests
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import OneHotEncoder
import category_encoders as ce
# The following are for Jupyter Widgets.
import ipywidgets as widgets
from IPython.display import display
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from ipywidgets import FloatSlider
# What columns are in one_hot?
one_hot.columns
# Select the identifiable columns and the columns that are one-hot encoded. Put these into refined_one_hot.
refined_one_hot = one_hot[['raceId', 'driverId',
'country_CompletionStatus_1', 'trackType_CompletionStatus_1',
'nationality_CompletionStatus_1',
'binned_circuits_CompletionStatus_1',
'country_CompletionStatus_2', 'trackType_CompletionStatus_2',
'nationality_CompletionStatus_2',
'binned_circuits_CompletionStatus_2']]
# Check we have the correct columns in refined_one_hot.
refined_one_hot.columns
# What columns are in master_data?
master_data.columns
# Select the identifiable columns and the columns that will be one-hot encoded. Put these into refined_master.
refined_master = master_data[['raceId', 'driverId', 'country', 'trackType', 'nationality', 'binned_circuits']]
# Check we have the correct columns in refined_master.
refined_master.columns
# Merge refined_one_hot with refined_master by "raceId" and "driverId" to get refined_total.
refined_total = pd.merge(refined_master, refined_one_hot, on = ["raceId", "driverId"])
refined_total.head()
```
### Working with the Data in the Input Columns
```
# What features are in X_resampled and will therefore be required for our widget?
X_resampled.columns
```
As shown above, with slight changes to account for the one-hot encoding, we'll have to ask users to choose grid, altitude, average and minimum lap time, precipitation, temperatures (average, minimum, and maximum), country, nationality, circuit, and track type. We will convert the country, nationality, circuit, and track type selections inside the function to match their one-hot encodings. Because there are so many possible values, we will only offer a few choices for these. Track type will be the only one-hot encoded feature that shows all possible choices, as there are only two to begin with.
```
# What are the most popular nationalities?
refined_total[["nationality", "nationality_CompletionStatus_1", "nationality_CompletionStatus_2"]].value_counts()
```
The most popular nationalities of drivers are German, British, and Brazilian.
```
# What are the most popular countries?
refined_total[["country", "country_CompletionStatus_1", "country_CompletionStatus_2"]].value_counts()
```
The most popular countries are Italy, Germany, and Spain.
```
# What are the most popular binned circuits?
refined_total[["binned_circuits", "binned_circuits_CompletionStatus_1", "binned_circuits_CompletionStatus_2"]].value_counts()
```
The most popular binned circuits are Tier1, Tier2, and Tier3.
```
# How was trackType one-hot encoded?
refined_total[["trackType", "trackType_CompletionStatus_1", "trackType_CompletionStatus_2"]].value_counts()
# What minimum and maximum numbers will we have to allow for in our input columns?
X_resampled.describe()
```
- grid has a min of 0 and a max of 24.
- alt has a min of -7.0 and a max of 2227.0.
- average_lap_time has a min of 62932.344828 and a max of 216112.776119.
- minimum_lap_time has a min of 55404.000000 and a max of 122930.000000.
- PRCP has a min of 0.0 and a max of 6.3.
- TAVG has a min of 49.0 and a max of 94.2.
- TMAX has a min of 56.0 and a max of 102.0.
- TMIN has a min of 36.0 and a max of 88.4.
### Building the Widget
Because the final widget's function will have a lot of code in it, we're going to slowly build the function one step at a time. These steps include:
1. Building a dropdown widget connected to a function containing an elif statement. This statement will change the display depending on what the user selects in the dropdown menu.
2. Building four dropdown widgets that all connect to the same function. Each widget connects to a different elif or if-else statement within that function, and each elif or if-else statement changes its own display.
3. Using the build from the prior widget, each elif or if-else statement changes the one-hot encoding for the connected dropdown menu. Each one-hot encoding number is placed in a new dataframe, which is displayed with the dropdown menus.
4. Using the build from the prior widget, we add all of the numeric columns that did not have to be one-hot encoded. These are not based on dropdown menus, but are instead bounded text boxes (both int and float). These are also placed in the dataframe, as well as displayed separately.
5. Using the build from the prior widget, we stop displaying the numeric features. We also add a modeling function that predicts whether a car will finish the race or not, based on the features that users input through the widget. Finally, we use an if-else statement to print a car's predicted outcome.
These steps are enacted below.
```
"""
Establish function "nationality" which allows selection of three nationalities, then returns a country.
"""
def nationality(nationality):
# Use an elif statement to determine the output country name based on the input nationality.
if nationality == "German":
countryname = "Germany"
elif nationality == "British":
countryname = "England"
else:
countryname = "Brazil"
display(countryname)
# Create a widget that will interact with the nationality function.
interact(nationality, nationality = widgets.Dropdown(options = ["German", "British", "Brazilian"], value = "German"));
"""
Establish function "fourreturn" which allows selection of three nationalities,
countries, and circuit tiers, then returns a country, language, and number.
It also includes a selection of two track types, which returns a type number.
"""
def fourreturn(nationality, country, circuit, trackType):
# Use an elif statement to determine the output country name based on the input nationality.
if nationality == "German":
countryname = "Germany"
elif nationality == "British":
countryname = "Great Britain"
else:
countryname = "Brazil"
display(countryname)
# Use an elif statement to determine the output language based on the input country.
if country == "Italy":
language = "Italian"
elif country == "Germany":
language = "German"
else:
language = "Spanish"
display(language)
# Use an elif statement to determine the output number based on the input circuit.
if circuit == "Tier1":
number = "1"
elif circuit == "Tier2":
number = "2"
else:
number = "3"
display(number)
# Use an if-else statement to determine the output typetrack based on the input track.
if trackType == "race":
typetrack = "type0"
else:
typetrack = "type2"
display(typetrack)
# Create a widget that will interact with the nationality function.
interact(fourreturn, nationality = widgets.Dropdown(options = ["German", "British", "Brazilian"], value = "German", description = "Nationality"),
country = widgets.Dropdown(options = ["Italy", "Germany", "Spain"], value = "Italy", description = "Country"),
circuit = widgets.Dropdown(options = ["Tier1", "Tier2", "Tier3"], value = "Tier1", description = "Circuit"),
trackType = widgets.Dropdown(options = ["race", "street"], value = "race", description = "Track Type"));
```
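The elif chains above grow one branch per new option; the same mappings can be expressed as dictionary lookups, which keeps the data separate from the control flow. A sketch using this section's own nationality→country pairs (an alternative structure, not the widget code itself):

```python
NATIONALITY_TO_COUNTRY = {
    "German": "Germany",
    "British": "Great Britain",
    "Brazilian": "Brazil",
}

def nationality_to_country(nationality):
    # .get with a default plays the role of the final else branch.
    return NATIONALITY_TO_COUNTRY.get(nationality, "Brazil")

print(nationality_to_country("British"))  # → Great Britain
```

The one-hot encodings further down fit the same pattern: a dict mapping each dropdown value to its `(CompletionStatus_1, CompletionStatus_2)` pair would replace each four-branch block with a single lookup.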
In the function below we create a single row dataframe using this site (https://www.geeksforgeeks.org/different-ways-to-create-pandas-dataframe/).
```
"""
Establish function "onehot" which allows selection of three nationalities,
countries, and circuit tiers, then inputs them into dataframe input_df and returns the dataframe.
It also allows the selection of two track types, and inputs that selection into the dataframe as well.
"""
def onehot(nationality, country, circuit, trackType):
# Use an elif statement to determine the output one-hot encoding based on the input nationality.
if nationality == "German":
nationality_CompletionStatus_1 = 0.209566
nationality_CompletionStatus_2 = 0.790434
elif nationality == "British":
nationality_CompletionStatus_1 = 0.240838
nationality_CompletionStatus_2 = 0.759162
else:
nationality_CompletionStatus_1 = 0.292359
nationality_CompletionStatus_2 = 0.707641
# Use an elif statement to determine the output one-hot encoding based on the input country.
if country == "Italy":
country_CompletionStatus_1 = 0.279099
country_CompletionStatus_2 = 0.720901
elif country == "Germany":
country_CompletionStatus_1 = 0.291429
country_CompletionStatus_2 = 0.708571
else:
country_CompletionStatus_1 = 0.219697
country_CompletionStatus_2 = 0.780303
# Use an elif statement to determine the output one-hot encoding based on the input circuit.
if circuit == "Tier1":
binned_circuits_CompletionStatus_1 = 0.253451
binned_circuits_CompletionStatus_2 = 0.746549
elif circuit == "Tier2":
binned_circuits_CompletionStatus_1 = 0.277588
binned_circuits_CompletionStatus_2 = 0.722412
else:
binned_circuits_CompletionStatus_1 = 0.235686
binned_circuits_CompletionStatus_2 = 0.764314
# Use an if-else statement to determine the output one-hot encoding based on the input track.
if trackType == "race":
trackType_CompletionStatus_1 = 0.237243
trackType_CompletionStatus_2 = 0.762757
else:
trackType_CompletionStatus_1 = 0.287045
trackType_CompletionStatus_2 = 0.712955
# Establish the data of our input_df dataframe.
inputdata = [[nationality_CompletionStatus_1, nationality_CompletionStatus_2,
country_CompletionStatus_1, country_CompletionStatus_2,
binned_circuits_CompletionStatus_1, binned_circuits_CompletionStatus_2,
trackType_CompletionStatus_1, trackType_CompletionStatus_2]]
# Establish the dataframe input_df itself with pd.DataFrame.
input_df = pd.DataFrame(inputdata, columns = ["nationality_CompletionStatus_1", "nationality_CompletionStatus_2",
"country_CompletionStatus_1", "country_CompletionStatus_2",
"binned_circuits_CompletionStatus_1", "binned_circuits_CompletionStatus_2",
"trackType_CompletionStatus_1", "trackType_CompletionStatus_2"])
return(input_df)
# Create a widget that will interact with the onehot function.
interact(onehot, nationality = widgets.Dropdown(options = ["German", "British", "Brazilian"], value = "German", description = "Nationality"),
country = widgets.Dropdown(options = ["Italy", "Germany", "Spain"], value = "Italy", description = "Country"),
circuit = widgets.Dropdown(options = ["Tier1", "Tier2", "Tier3"], value = "Tier1", description = "Circuit"),
trackType = widgets.Dropdown(options = ["race", "street"], value = "race", description = "Track Type"));
"""
Establish function "showvalues" which allows selection of three nationalities,
countries, and circuit tiers, as well as a selection of two track types and
input of one of each of the following values:
grid, alt, average_lap_time, minimum_lap_time, PRCP, TAVG, TMAX, TMIN. Display the values.
Place these values in the dataframe input_df and display the dataframe.
"""
def showvalues(nationality, country, circuit, trackType, grid, alt, average_lap_time, minimum_lap_time, PRCP, TAVG, TMAX, TMIN):
    # Use an elif statement to determine the output one-hot encoding based on the input nationality.
    if nationality == "German":
        nationality_CompletionStatus_1 = 0.209566
        nationality_CompletionStatus_2 = 0.790434
    elif nationality == "British":
        nationality_CompletionStatus_1 = 0.240838
        nationality_CompletionStatus_2 = 0.759162
    else:
        nationality_CompletionStatus_1 = 0.292359
        nationality_CompletionStatus_2 = 0.707641
    # Use an elif statement to determine the output one-hot encoding based on the input country.
    if country == "Italy":
        country_CompletionStatus_1 = 0.279099
        country_CompletionStatus_2 = 0.720901
    elif country == "Germany":
        country_CompletionStatus_1 = 0.291429
        country_CompletionStatus_2 = 0.708571
    else:
        country_CompletionStatus_1 = 0.219697
        country_CompletionStatus_2 = 0.780303
    # Use an elif statement to determine the output one-hot encoding based on the input circuit.
    if circuit == "Tier1":
        binned_circuits_CompletionStatus_1 = 0.253451
        binned_circuits_CompletionStatus_2 = 0.746549
    elif circuit == "Tier2":
        binned_circuits_CompletionStatus_1 = 0.277588
        binned_circuits_CompletionStatus_2 = 0.722412
    else:
        binned_circuits_CompletionStatus_1 = 0.235686
        binned_circuits_CompletionStatus_2 = 0.764314
    # Use an if-else statement to determine the output one-hot encoding based on the input track.
    if trackType == "race":
        trackType_CompletionStatus_1 = 0.237243
        trackType_CompletionStatus_2 = 0.762757
    else:
        trackType_CompletionStatus_1 = 0.287045
        trackType_CompletionStatus_2 = 0.712955
    # Establish the data of our input_df dataframe.
    inputdata = [[nationality_CompletionStatus_1, nationality_CompletionStatus_2,
                  country_CompletionStatus_1, country_CompletionStatus_2,
                  binned_circuits_CompletionStatus_1, binned_circuits_CompletionStatus_2,
                  trackType_CompletionStatus_1, trackType_CompletionStatus_2,
                  grid, alt, average_lap_time, minimum_lap_time, PRCP, TAVG, TMAX, TMIN]]
    # Establish the dataframe input_df itself with pd.DataFrame.
    input_df = pd.DataFrame(inputdata, columns =
                            ["nationality_CompletionStatus_1", "nationality_CompletionStatus_2",
                             "country_CompletionStatus_1", "country_CompletionStatus_2",
                             "binned_circuits_CompletionStatus_1", "binned_circuits_CompletionStatus_2",
                             "trackType_CompletionStatus_1", "trackType_CompletionStatus_2",
                             "grid", "alt", "average_lap_time", "minimum_lap_time", "PRCP", "TAVG", "TMAX", "TMIN"])
    display(grid, alt, average_lap_time, minimum_lap_time, PRCP, TAVG, TMAX, TMIN)
    display(input_df)
# Create a widget that will interact with the showvalues function.
interact(showvalues, nationality = widgets.Dropdown(options = ["German", "British", "Brazilian"], value = "German", description = 'Nationality'),
country = widgets.Dropdown(options = ["Italy", "Germany", "Spain"], value = "Italy", description = 'Country'),
circuit = widgets.Dropdown(options = ["Tier1", "Tier2", "Tier3"], value = "Tier1", description = 'Circuit'),
trackType = widgets.Dropdown(options = ["race", "street"], value = "race", description = 'Track Type'),
grid = widgets.BoundedIntText(min = 0, max = 30, description = 'Grid', disabled = False, continuous_update = False),
alt = widgets.BoundedFloatText(min = -100, max = 2500, description = 'Altitude', disabled = False, continuous_update = False),
average_lap_time = widgets.BoundedFloatText(min = 0, max = 300000, description = 'Avg Lap Time', disabled = False, continuous_update = False),
minimum_lap_time = widgets.BoundedFloatText(min = 0, max = 300000, description = 'Min Lap Time', disabled = False, continuous_update = False),
PRCP = widgets.BoundedFloatText(min = 0, max = 20, description = 'Precipitation', disabled = False, continuous_update = False),
TAVG = widgets.BoundedFloatText(min = 0, max = 120, description = 'Avg Temp (F)', disabled = False, continuous_update = False),
TMAX = widgets.BoundedFloatText(min = 0, max = 120, description = 'Max Temp (F)', disabled = False, continuous_update = False),
TMIN = widgets.BoundedFloatText(min = 0, max = 120, description = 'Min Temp (F)', disabled = False, continuous_update = False));
# Create the function widgetpred. We'll use this in the function predict.
def widgetpred(X_resampled, y_resampled, X_test, estimator, **kwargs):
    """
    Fit the given estimator on the resampled training data and predict on X_test.
    """
    # Fit the classification model on the resampled training data
    estimator.fit(X_resampled, y_resampled, **kwargs)
    # Return the predicted class labels for X_test
    predicted = estimator.predict(X_test)
    return predicted
"""
Establish function "predict" which allows selection of three nationalities,
countries, and circuit tiers, as well as a selection of two track types and
input of one of each of the following values:
grid, alt, average_lap_time, minimum_lap_time, PRCP, TAVG, TMAX, TMIN.
Place these values in the dataframe input_df and display the dataframe.
Create prediction based on widgetpred function and display the prediction:
0 for did not finish, 1 for did finish.
"""
def predictfinish(nationality, country, circuit, trackType, grid, alt, average_lap_time, minimum_lap_time, PRCP, TAVG, TMAX, TMIN):
    # Use an elif statement to determine the output one-hot encoding based on the input nationality.
    if nationality == "German":
        nationality_CompletionStatus_1 = 0.209566
        nationality_CompletionStatus_2 = 0.790434
    elif nationality == "British":
        nationality_CompletionStatus_1 = 0.240838
        nationality_CompletionStatus_2 = 0.759162
    else:
        nationality_CompletionStatus_1 = 0.292359
        nationality_CompletionStatus_2 = 0.707641
    # Use an elif statement to determine the output one-hot encoding based on the input country.
    if country == "Italy":
        country_CompletionStatus_1 = 0.279099
        country_CompletionStatus_2 = 0.720901
    elif country == "Germany":
        country_CompletionStatus_1 = 0.291429
        country_CompletionStatus_2 = 0.708571
    else:
        country_CompletionStatus_1 = 0.219697
        country_CompletionStatus_2 = 0.780303
    # Use an elif statement to determine the output one-hot encoding based on the input circuit.
    if circuit == "Tier1":
        binned_circuits_CompletionStatus_1 = 0.253451
        binned_circuits_CompletionStatus_2 = 0.746549
    elif circuit == "Tier2":
        binned_circuits_CompletionStatus_1 = 0.277588
        binned_circuits_CompletionStatus_2 = 0.722412
    else:
        binned_circuits_CompletionStatus_1 = 0.235686
        binned_circuits_CompletionStatus_2 = 0.764314
    # Use an if-else statement to determine the output one-hot encoding based on the input track.
    if trackType == "race":
        trackType_CompletionStatus_1 = 0.237243
        trackType_CompletionStatus_2 = 0.762757
    else:
        trackType_CompletionStatus_1 = 0.287045
        trackType_CompletionStatus_2 = 0.712955
    # Establish the data of our input_df dataframe.
    inputdata = [[nationality_CompletionStatus_1, nationality_CompletionStatus_2,
                  country_CompletionStatus_1, country_CompletionStatus_2,
                  binned_circuits_CompletionStatus_1, binned_circuits_CompletionStatus_2,
                  trackType_CompletionStatus_1, trackType_CompletionStatus_2,
                  grid, alt, average_lap_time, minimum_lap_time, PRCP, TAVG, TMAX, TMIN]]
    # Establish the dataframe input_df itself with pd.DataFrame.
    input_df = pd.DataFrame(inputdata, columns =
                            ["nationality_CompletionStatus_1", "nationality_CompletionStatus_2",
                             "country_CompletionStatus_1", "country_CompletionStatus_2",
                             "binned_circuits_CompletionStatus_1", "binned_circuits_CompletionStatus_2",
                             "trackType_CompletionStatus_1", "trackType_CompletionStatus_2",
                             "grid", "alt", "average_lap_time", "minimum_lap_time", "PRCP", "TAVG", "TMAX", "TMIN"])
    display(input_df)
    # Using the widgetpred function, predict whether the car will finish the race or not given input_df.
    pred = widgetpred(X_resampled, y_resampled, input_df, LogisticRegression(solver='lbfgs'))
    # Using an if-else statement, determine what interactors will see given the data they input.
    if pred[0] == 1:
        writtenpred = "finish the race."
    else:
        writtenpred = "not finish the race."
    print("According to our Logistic Regression model, your car is predicted to", writtenpred)
# Create a widget that will interact with the predictfinish function.
interact(predictfinish, nationality = widgets.Dropdown(options = ["German", "British", "Brazilian"], value = "German", description = 'Nationality'),
country = widgets.Dropdown(options = ["Italy", "Germany", "Spain"], value = "Italy", description = 'Country'),
circuit = widgets.Dropdown(options = ["Tier1", "Tier2", "Tier3"], value = "Tier1", description = 'Circuit'),
trackType = widgets.Dropdown(options = ["race", "street"], value = "race", description = 'Track Type'),
grid = widgets.BoundedIntText(min = 0, max = 30, description = 'Grid', disabled = False, continuous_update = False),
alt = widgets.BoundedFloatText(min = -100, max = 2500, description = 'Altitude', disabled = False, continuous_update = False),
average_lap_time = widgets.BoundedFloatText(min = 0, max = 300000, description = 'Avg Lap Time', disabled = False, continuous_update = False),
minimum_lap_time = widgets.BoundedFloatText(min = 0, max = 300000, description = 'Min Lap Time', disabled = False, continuous_update = False),
PRCP = widgets.BoundedFloatText(min = 0, max = 20, description = 'Precipitation', disabled = False, continuous_update = False),
TAVG = widgets.BoundedFloatText(min = 0, max = 120, description = 'Avg Temp (F)', disabled = False, continuous_update = False),
TMAX = widgets.BoundedFloatText(min = 0, max = 120, description = 'Max Temp (F)', disabled = False, continuous_update = False),
TMIN = widgets.BoundedFloatText(min = 0, max = 120, description = 'Min Temp (F)', disabled = False, continuous_update = False));
```
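The three widget callbacks above duplicate the same hard-coded category-to-encoding branches. One way to collapse that repetition (a sketch reusing the same encoding values; the `ENCODINGS` table and `encode` helper are hypothetical names, not part of the notebook) is a dictionary lookup with a fallback entry standing in for each `else` branch:

```python
# Hypothetical refactoring: map each categorical input to its pair of
# CompletionStatus encodings. The numeric values are the same hard-coded
# encodings used in the functions above; "other" plays the role of `else`.
ENCODINGS = {
    "nationality": {"German": (0.209566, 0.790434),
                    "British": (0.240838, 0.759162),
                    "other":   (0.292359, 0.707641)},
    "country": {"Italy":   (0.279099, 0.720901),
                "Germany": (0.291429, 0.708571),
                "other":   (0.219697, 0.780303)},
    "circuit": {"Tier1": (0.253451, 0.746549),
                "Tier2": (0.277588, 0.722412),
                "other": (0.235686, 0.764314)},
    "trackType": {"race":  (0.237243, 0.762757),
                  "other": (0.287045, 0.712955)},
}

def encode(feature, value):
    """Return the (CompletionStatus_1, CompletionStatus_2) pair for a category."""
    table = ENCODINGS[feature]
    return table.get(value, table["other"])
```

Each callback could then build its `inputdata` row from four `encode(...)` calls instead of four if/elif blocks.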
# Homework 7
Use this notebook to work on your answers and check solutions. You can then submit your functions using "HW7_submission.ipynb" or directly write your functions in a file named "hw7_answers.py". Note that "hw7_answers.py" will be the only file collected and graded for this assignment.
You will use the cereal dataset from last week.
```
from __future__ import division  # a __future__ import must precede all other statements; a no-op on Python 3
import pandas as pd
import numpy as np
```
```
%%sh
## RUN BUT DO NOT EDIT THIS CELL
## run this cell to download the cereal dataset into your current directory
cp /home/data/cereal/cereal.csv .
```
```
## RUN BUT DO NOT EDIT THIS CELL
# load the data, define ratingID
cer = pd.read_csv('cereal.csv', skiprows=[1], delimiter=';')
cer['ratingID'] = cer['rating'].apply(lambda x: 0 if x < 55 else 1)
```
## Question 1
Write a function called "get_corrs" which takes one argument:
* df, which is a pandas data frame
and returns:
* m, a correlation matrix for the numerical variables in df.
```
def get_corrs(df):
    # numeric_only=True mirrors the old pandas default of silently dropping
    # non-numeric columns such as 'name' (required in pandas >= 2.0)
    return df.corr(numeric_only=True)

get_corrs(cer[['name','calories','carbo','sugars']])
```
### Sample output:
```
In [1]: get_corrs(cer[['name','calories','carbo','sugars']])
Out[1]: calories carbo sugars
calories 1.000000 0.250681 0.562340
carbo 0.250681 1.000000 -0.331665
sugars 0.562340 -0.331665 1.000000
```
## Question 2
Write a function called "get_corr_pairs" which takes one argument:
* df, which is a pandas data frame
and returns:
* corr_pairs, a dictionary where keys are names of columns of df corresponding to numerical features, and values are arrays of names of columns whose correlation coefficient with the key has magnitude 0.3 or greater.
You can use your function from question 1 to get the correlation values.
```
def get_corr_pairs(df):
    cmat = get_corrs(df)
    d = dict.fromkeys(cmat.columns.values)
    for key in d:  # dict.iterkeys() was Python 2 only
        d[key] = cmat.loc[key][cmat.loc[key].abs() >= 0.3].index.values.tolist()
        d[key].remove(key)  # a column always correlates perfectly with itself
    return d

get_corr_pairs(cer[['name','fat','sugars','rating']])
```
### Sample output:
```
In [1]: get_corr_pairs(cer[['name','fat','sugars','rating']])
Out[1]: {'fat': ['rating'], 'rating': ['fat', 'sugars'], 'sugars': ['rating']}
```
Short explanation: the correlation between 'fat' and 'rating' is -0.409, 'sugars' and 'rating' is -0.760; the remaining correlations have magnitude < 0.3.
## Question 3
Write a function called "sample_cereal" which takes two arguments:
* df, which is a pandas data frame
* kind, which is a string that can take value 'up' or 'down'
and returns:
* a pandas data frame with balanced target class 'ratingID', using up sampling if kind='up' and downsampling if kind='down'.
```
def sample_cereal(df, kind):
    id_counts = df.groupby('ratingID').ratingID.count()
    o1 = df[df.ratingID == id_counts.idxmax()]   # majority class
    o2 = df[df.ratingID == id_counts.idxmin()]   # minority class
    if kind == 'up':
        # draw len(o1) rows from the minority class with replacement
        return pd.concat([o1, o2.iloc[np.random.choice(len(o2), len(o1), replace=True)]])
    elif kind == 'down':
        # draw len(o2) rows from the majority class without replacement
        return pd.concat([o2, o1.iloc[np.random.choice(len(o1), len(o2), replace=False)]])
    else:
        print('This kind of sampling is not recognized! Returning original DataFrame!')
        return df

sample_cereal(cer.loc[3:5, ['name','mfr','type','calories','protein','ratingID']], 'up')
sample_cereal(cer.loc[3:5, ['name','mfr','type','calories','protein','ratingID']], 'down')
```
### Sample output:
```
In [1]: sample_cereal(cer.loc[3:5,['name','mfr','type','calories','protein','ratingID']], 'up')
Out[1]: name mfr type calories protein ratingID
3 All-Bran with Extra Fiber K C 50 4 1
3 All-Bran with Extra Fiber K C 50 4 1
4 Almond Delight R C 110 2 0
5 Apple Cinnamon Cheerios G C 110 2 0
```
Short explanation: The input has only one positive sample and two negative samples; random sampling from a distribution of 1 can only return one possible result, so our up-sampling of the smaller class merely replicates the row for "All-Bran with Extra Fiber".
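This can be seen with a minimal sketch (toy row indices, not the cereal data): sampling with replacement from a one-element class can only ever return that element.

```python
import numpy as np

rng = np.random.default_rng(0)
minority_rows = np.array([3])     # one positive sample (row index 3)
majority_rows = np.array([4, 5])  # two negative samples

# Up-sampling draws len(majority_rows) rows from the minority class with
# replacement; with only one row available, every draw replicates that row.
upsampled = rng.choice(minority_rows, size=len(majority_rows), replace=True)
```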
## Question 4
Write a function called "find_H" which takes two arguments:
* df, which is a pandas data frame
* cname, which is the name of the target column (that should correspond to a categorical variable)
and returns:
* H, the entropy in the column cname (use logarithm base 2)
```
def find_H(df, cname):
    if cname not in df.columns.values:
        print('Column name not recognized!')
        return 0
    p = df.groupby(cname)[cname].count() / len(df)
    return -sum(p * np.log2(p))

find_H(cer.iloc[:20], 'ratingID')
```
### Sample output:
```
In [1]: find_H(cer.iloc[:20], 'ratingID')
Out[1]: 0.60984030471640038
```
## Question 5
Write a function called "info_gain" which takes four arguments:
* df, which is a pandas data frame
* cname, which is the name of the target column (that should correspond to a categorical variable)
* csplit, which is the name of a numeric column in df
* threshold, which is a numeric value
and returns:
* info_gain, the information gain you get in column cname by splitting the dataset on the threshold value in column csplit.
```
#### play with code here #####
def info_gain(df, cname, csplit, thr):
H0 = find_H(df, cname)
o1 = df[df[csplit]<thr]
o2 = df[df[csplit]>=thr]
R1 = find_H(o1, cname)
R2 = find_H(o2, cname)
return H0 - len(o1)/len(df)*R1 - len(o2)/len(df)*R2
info_gain(cer.iloc[:20], 'ratingID', 'sugars', 7.0)
```
### Sample output:
```
In [1]: info_gain(cer.iloc[:20], 'ratingID', 'sugars', 7.0)
Out[1]: 0.2280667035464144
```
Note: for a probability of 0, use the fact that lim_(p->0+) p log(p) = 0.
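One way to honor that limit in code (a sketch; `safe_entropy` is an illustrative helper, not part of the assignment) is to drop zero probabilities before taking the logarithm, which avoids the `0 * -inf = nan` that `np.log2(0)` would otherwise produce:

```python
import numpy as np

def safe_entropy(p):
    """Entropy in bits, treating p * log2(p) as 0 when p == 0."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]  # lim_(p->0+) p*log2(p) = 0, so zero entries contribute nothing
    return -np.sum(nz * np.log2(nz))
```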
<a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
# Introduction to the Lithology and LithoLayers objects
Lithology and LithoLayers are two Landlab components meant to make it easier to work with spatially variable lithology that produces spatially variable parameter values (e.g. stream power erodability or diffusivity).
This tutorial is meant for users who have some experience using Landlab components.
In this tutorial we will explore the creation of spatially variable lithology and its impact on the evolution of topography. After an introductory example that will let you see how LithoLayers works, we will work through two more complicated examples. In the first example, we use LithoLayers to erode either dipping layers or an anticline. Then we will use Lithology to create inverted topography.
We will use [xarray](https://xarray.pydata.org/en/stable/) to store and annotate our model output. While we won't extensively discuss the use of xarray, some background will be provided.
To start, we will import the necessary modules. A note: this tutorial uses the [HoloViews package](http://holoviews.org) for visualization. This package is a great tool for dealing with multidimensional annotated data (e.g. an xarray dataset). If you get an error on import, consider updating dask (this is what the author needed to do in April 2018). You will also need to have the [Bokeh](https://bokeh.pydata.org/en/latest/) and [Matplotlib](https://matplotlib.org) packages installed.
In testing we've seen some users have a warning raised related to the Matplotlib backend. In our testing it was OK to ignore these warnings.
```
import warnings
warnings.filterwarnings('ignore')
import os
import numpy as np
import xarray as xr
import dask
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
%matplotlib inline
import holoviews as hv
hv.notebook_extension('matplotlib')
from landlab import RasterModelGrid
from landlab.components import FlowAccumulator, FastscapeEroder, LinearDiffuser, Lithology, LithoLayers
from landlab.plot import imshow_grid
```
## Part 1: Creating layered rock
First we will create an instance of a LithoLayers to learn how this component works. Both LithoLayers and Lithology work closely with a Landlab ModelGrid, storing information about rock type at each grid node.
To create LithoLayers you need the following information:
1. A model grid that has the field `'topographic__elevation'` already created.
2. A list of elevations, called `'layer_elevations'`, that the bottoms of your layers pass through at a specified plan-view anchor point (the default anchor point is (x, y) = (0, 0)), and a list of rock type IDs indicating the rock type of each layer. A negative value in `'layer_elevations'` means that the layer passes through the anchor point above the topographic surface. Layers are only created where they extend below the topographic surface.
3. A dictionary of rock property attributes that maps a rock ID type to property values.
4. A functional form in x and y that defines the shape of your surface.
The use of this functional form makes it possible for any function of x and y to be passed to LithoLayers.
Both the Lithology and LithoLayers components then know the rock type ID of all the material in the 'block of rock' you have specified. This can be used to continuously know the value of specified rock properties at the topographic surface, even as the rock is eroded, uplifted, or new rock is deposited.
In this tutorial we will first make an example to help build intuition and then do two more complex examples. Most of the functionality of Lithology and LithoLayers is shown in this tutorial, but if you want to read the full component documentation for LithoLayers, it can be found [here](https://landlab.readthedocs.io/en/release/landlab.components.lithology.html). Links to both components documentation can be found at the bottom of the tutorial.
First, we create a small RasterModelGrid with topography.
```
mg = RasterModelGrid((10, 15))
z = mg.add_zeros('topographic__elevation', at='node')
```
Next we make our layer elevations. We will make 20 layers that are 5 meters thick. Note that here, as with most Landlab components, there are no default units. At the anchor point, half of the layers will be above the ground (`'layer_elevations'` will have negative values) and half will be below the ground (`'layer_elevations'` have positive values).
We will make this with the [`np.arange`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.arange.html) function. We will also make the bottom layer really thick so that we won't be able to erode through it.
```
layer_elevations = 5. * np.arange(-10, 10)
# we create a bottom layer that is very thick.
layer_elevations[-1] = layer_elevations[-2] + 100
```
Next we create an array that represents our rock type ID values. We will create alternating layers of four types of rock by making an array with alternating `0`s, `1`s, `2`s, and `3`s with the [np.tile](https://docs.scipy.org/doc/numpy/reference/generated/numpy.tile.html) function.
```
layer_ids = np.tile([0, 1, 2, 3], 5)
```
Our dictionary containing rock property attributes has the following form:
```
attrs = {'K_sp': {0: 0.0003, 1: 0.0001, 2: 0.0002, 3: 0.0004}}
```
`'K_sp'` is the property that we want to track through the layered rock; `0`, `1`, `2`, and `3` are the rock type IDs; and `0.0003`, `0.0001`, `0.0002`, and `0.0004` are the corresponding values of `'K_sp'` for each rock type.
The rock type IDs are unique identifiers for each type of rock. A particular rock type may have many properties (e.g. `'K_sp'`, `'diffusivity'`, and more). You can either specify all the possible rock types and attributes when you instantiate the LithoLayers component, or you can add new ones with the [`lith.add_rock_type`](https://landlab.readthedocs.io/en/release/landlab.components.lithology.html#landlab.components.lithology.lithology.Lithology.add_rock_type) or [`lith.add_property`](https://landlab.readthedocs.io/en/release/landlab.components.lithology.html#landlab.components.lithology.lithology.Lithology.add_property) built in functions.
Finally, we define our function. Here we will use a [lambda expression](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) to create a small anonymous function. In this case we define a function of `x` and `y` that returns the value `x + (2. * y)`. The LithoLayers component will check that this function is a function of two variables and that when passed two arrays of size number-of-nodes it returns an array of size number-of-nodes.
This means that planar rock layers will dip into the ground to the North-North-East. By changing this functional form, we can make more complicated rock layers.
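As a quick sanity check (a sketch, assuming the dip direction follows the gradient of the layer function), the gradient of f(x, y) = x + 2y points toward a compass bearing of roughly 27 degrees east of north, consistent with layers dipping NNE:

```python
import numpy as np

# Gradient of f(x, y) = x + 2*y is (df/dx, df/dy) = (1, 2).
# Compass bearing of that direction, measured clockwise from north (+y axis):
bearing = np.degrees(np.arctan2(1.0, 2.0))  # ~26.6 degrees, roughly NNE
```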
```
func = lambda x, y: x + (2. * y)
```
Finally we construct our LithoLayers component by passing the correct arguments.
```
lith = LithoLayers(mg, layer_elevations, layer_ids, function=func, attrs=attrs)
```
LithoLayers will make sure that the model grid has at-node grid fields with the layer attribute names. In this case, this means that the model grid will now include a grid field called `'K_sp'` and a field called `'rock_type__id'`. We can plot these with the Landlab [imshow_grid](http://landlab.readthedocs.io/en/release/landlab.plot.html#landlab.plot.imshow.imshow_grid) function.
```
imshow_grid(mg, 'rock_type__id', cmap='viridis')
```
As you can see, we have layers that strike East-South-East. Since we can only see the surface expression of the layers, we can't infer the dip direction or magnitude from the plot alone.
If the topographic surface erodes, then you will want to update LithoLayers. Like most Landlab components, LithoLayers uses a `run_one_step` method to update.
Next we will erode the topography by decrementing the variable `z`, which points to the topographic elevation of our model grid, by an amount 1. In a landscape evolution model, this would typically be done by running the `run_one_step` method for each of the process components in the model. If the rock mass is being advected up or down by an external force (e.g. tectonic rock uplift), then the advection must be specified. The `dz_advection` argument can be a single value or an array of size number-of-nodes.
```
z -= 1.
dz_ad = 0.
lith.dz_advection=dz_ad
lith.run_one_step()
```
We can re-plot the value of `'K_sp'`. We will see that the location of the surface expression of the rock layers has changed. As we expect, the location has changed in a way that is consistent with layers dipping to the NNE.
```
imshow_grid(mg, 'rock_type__id', cmap='viridis')
```
Anytime material is added, LithoLayers or Lithology needs to know the type of rock that has been added. LithoLayers and Lithology cannot guess the correct rock type ID and thus require that the user specify it with the `rock_id` keyword argument. In the `run_one_step` function, both components will check to see if any deposition has occurred. If deposition occurs **and** this argument is not passed, then an error will be raised.
For example here we add 1 m of topographic elevation and do not advect the block of rock up or down. When we run `lith.run_one_step` we specify that the type of rock has id `0`.
```
z += 1.
dz_ad = 0.
lith.dz_advection=dz_ad
lith.rock_id=0
lith.run_one_step()
```
When we plot the value of the rock type ID at the surface, we find that it is now all purple, the color of rock type zero.
```
imshow_grid(mg, 'rock_type__id', cmap='viridis', vmin=0, vmax=3)
```
The value passed to the `rock_id` keyword argument can be either a single value (as in the second to last example) or an array of length number-of-nodes. This option permits a user to indicate that more than one type of rock is deposited in a single time step.
Next we will add a 2 m thick layer that is type `1` for x values less than or equal to 6 and type `2` for all other locations.
```
z += 2.
dz_ad = 0.
spatially_variable_rock_id = mg.ones('node')
spatially_variable_rock_id[mg.x_of_node > 6] = 2
lith.dz_advection=dz_ad
lith.rock_id=spatially_variable_rock_id
lith.run_one_step()
imshow_grid(mg, 'rock_type__id', cmap='viridis', vmin=0, vmax=3)
```
As you can see this results in the value of rock type at the surface being about half rock type `1` and about half rock type `2`. Next we will create an xarray dataset that has 3D information about our Lithology to help visualize the layers in space. We will use the `rock_cube_to_xarray` method of the LithoLayers component.
We will then convert this xarray dataset into a HoloViews dataset so we can visualize the result.
As you can see the LithoLayers has a value of rock types `1` and `2` at the surface, then a layer of `0` below, and finally changes to alternating layers.
```
ds = lith.rock_cube_to_xarray(np.arange(30))
hvds_rock = hv.Dataset(ds.rock_type__id)
%opts Image style(cmap='viridis') plot[colorbar=True]
hvds_rock.to(hv.Image, ['x', 'y'])
```
The slider allows us to change the depth below the topographic surface.
We can also plot the cube of rock created with LithoLayers as a cross section. In the cross section we can see the top two layers we made by depositing rock and then dipping layers of alternating rock types.
```
%opts Image style(cmap='viridis') plot[colorbar=True, invert_yaxis=True]
hvds_rock.to(hv.Image, ['x', 'z'])
```
Hopefully this gives you a sense of how LithoLayers works. The next two blocks of code have all the steps we just worked through in one place.
Try modifying the layer thicknesses, the size of the grid, the function used to create the form of the layers, the layers deposited and eroded, and the location of the anchor point to gain intuition for how you can use LithoLayers to create different types of layered rock.
```
# Parameters that control the size and shape of the model grid
number_of_rows = 50
number_of_columns = 50
dx = 1
# Parameters that control the LithoLayers
# the layer shape function
func = lambda x, y: (0.5 * x)**2 + (0.5 * y)**2
# the layer thicknesses
layer_thickness = 50.
# the location of the anchor point
x0 = 25
y0 = 25
# the resolution at which you sample to create the plan view and cross-section view figures.
sample_depths = np.arange(0, 30, 1)
# create the model grid
mg = RasterModelGrid((number_of_rows, number_of_columns), dx)
z = mg.add_zeros('topographic__elevation', at='node')
# set up LithoLayers inputs
layer_ids = np.tile([0, 1, 2, 3], 5)
layer_elevations = layer_thickness * np.arange(-10, 10)
layer_elevations[-1] = layer_elevations[-2] + 100
attrs = {'K_sp': {0: 0.0003, 1: 0.0001, 2: 0.0002, 3: 0.0004}}
# create LithoLayers
lith = LithoLayers(mg,
layer_elevations,
layer_ids,
x0=x0,
y0=y0,
function=func,
attrs=attrs)
# deposit and erode
dz_ad = 0.
z -= 1.
lith.dz_advection=dz_ad
lith.run_one_step()
z += 1.
lith.dz_advection=dz_ad
lith.rock_id=0
lith.run_one_step()
z += 2.
spatially_variable_rock_id = mg.ones('node')
spatially_variable_rock_id[mg.x_of_node > 6] = 2
lith.dz_advection=dz_ad
lith.rock_id=spatially_variable_rock_id
lith.run_one_step()
# get the rock-cube data structure and plot
ds = lith.rock_cube_to_xarray(sample_depths)
hvds_rock = hv.Dataset(ds.rock_type__id)
# make a plan view image
%opts Image style(cmap='viridis') plot[colorbar=True]
hvds_rock.to(hv.Image, ['x', 'y'])
```
You can also make a cross section of this new LithoLayers component.
```
%opts Image style(cmap='viridis') plot[colorbar=True, invert_yaxis=True]
hvds_rock.to(hv.Image, ['x', 'z'])
```
## Part 2: Creation of a landscape evolution model with LithoLayers
In this next section, we will run LithoLayers with components used for a simple Landscape Evolution Model.
We will start by creating the grid.
```
mg = RasterModelGrid((50, 30), 400)
z = mg.add_zeros('topographic__elevation', at='node')
random_field = 0.01 * np.random.randn(mg.size('node'))
z += random_field - random_field.min()
```
Next we set all the parameters for LithoLayers. Here we have two types of rock with different erodabilities.
```
attrs = {'K_sp': {0: 0.0003, 1: 0.0001}}
z0s = 50 * np.arange(-20, 20)
z0s[-1] = z0s[-2] + 10000
ids = np.tile([0, 1], 20)
```
There are three functional forms that you can choose between. Here we define each of them.
```
# Anticline
anticline_func = lambda x, y: ((0.002 * x)**2 + (0.001 * y)**2)
# Shallow dips
shallow_func = lambda x, y: ((0.001 * x) + (0.003 * y))
# Steeper dips
steep_func = lambda x, y: ((0.01 * x) + (0.01 * y))
```
The default option is to make an anticline, but you can comment/uncomment lines to choose a different functional form.
```
# Anticline
lith = LithoLayers(mg,
z0s,
ids,
x0=6000,
y0=10000,
function=anticline_func,
attrs=attrs)
# Shallow dips
#lith = LithoLayers(mg, z0s, ids, function=shallow_func, attrs=attrs)
# Steeper dips
#lith = LithoLayers(mg, z0s, ids, function=steep_func, attrs=attrs)
```
Now that we've created LithoLayers, model grid fields for each of the LithoLayers attributes exist and have been set to the values of the rock exposed at the surface.
Here we plot the value of `'K_sp'` as a function of the model grid.
```
imshow_grid(mg, 'K_sp')
```
As you can see (in the default anticline option) we have concentric ellipses of stronger and weaker rock.
Next, let's instantiate a FlowAccumulator and a FastscapeEroder to create a simple landscape evolution model.
We will point the FastscapeEroder to the model grid field `'K_sp'` so that it will respond to the spatially variable erodabilities created by LithoLayers.
```
nts = 300
U = 0.001
dt = 1000
fa = FlowAccumulator(mg)
sp = FastscapeEroder(mg, K_sp='K_sp')
```
Before we run the model we will also instantiate an xarray dataset used to store the output of our model through time for visualization.
The next block may look intimidating, but I'll try and walk you through what it does.
[xarray](https://xarray.pydata.org/en/stable/) allows us to create a container for our data and label it with information like units, dimensions, short and long names, etc. xarray gives us the tools for dealing with N-dimensional data provided by python packages such as [numpy](http://www.numpy.org), the labeling and named indexing power of the [pandas](https://pandas.pydata.org) package, and the data-model of the [NetCDF file](https://www.unidata.ucar.edu/software/netcdf/).
This means that we can use xarray to make a "self-referential" dataset that contains all of the variables and attributes that describe what each part is and how it was made. In this application, we won't make a fully self-referential dataset, but if you are interested in this, check out the [NetCDF best practices](https://www.unidata.ucar.edu/software/netcdf/docs/BestPractices.html).
Important for our application is that later on we will use the [HoloViews package](http://holoviews.org) for visualization. This package is a great tool for dealing with multidimensional annotated data and will do things like automatically create nice axis labels with units. However, in order for it to work, we must first annotate our data to include this information.
Here we create an xarray Dataset with two variables `'topographic__elevation'` and `'rock_type__id'` and three dimensions `'x'`, `'y'`, and `'time'`.
We pass xarray two dictionaries, one with information about the data variables (`data_vars`) and one with information about the coordinate system (`coords`). For each data variable or coordinate, we pass a tuple of three items: `(dims, data, attrs)`. The first element is a tuple of the names of the dimensions, the second element is the data, and the third is a dictionary of attributes.
```
ds = xr.Dataset(
data_vars={
'topographic__elevation': (
('time', 'y', 'x'), # tuple of dimensions
np.empty((nts, mg.shape[0], mg.shape[1])), # n-d array of data
{
'units': 'meters', # dictionary with data attributes
'long_name': 'Topographic Elevation'
}),
'rock_type__id':
(('time', 'y', 'x'), np.empty((nts, mg.shape[0], mg.shape[1])), {
'units': '-',
'long_name': 'Rock Type ID Code'
})
},
coords={
'x': (
('x'), # tuple of dimensions
mg.x_of_node.reshape(
mg.shape)[0, :], # 1-d array of coordinate data
{
'units': 'meters'
}), # dictionary with data attributes
'y': (('y'), mg.y_of_node.reshape(mg.shape)[:, 1], {
'units': 'meters'
}),
'time': (('time'), dt * np.arange(nts) / 1e6, {
'units': 'millions of years since model start',
'standard_name': 'time'
})
})
```
We can print the data set to get some basic information about it.
```
print(ds)
```
We can also print a single variable to get more detailed information about it.
Since we initialized the dataset with empty arrays for the two data variables, we just see zeros for the data values.
```
ds.topographic__elevation
```
Next, we run the model. In each time step we first run the FlowAccumulator to direct flow and accumulate drainage area. Then the FastscapeEroder erodes the topography based on the stream power equation, using the erodibility values in the field `'K_sp'`. We create an uplift field that uplifts only the model grid's core nodes. After uplifting these core nodes, we update LithoLayers. Importantly, we must tell LithoLayers how much the rock column has been advected upward by uplift by setting `dz_advection`.
As we discussed in the introductory example, the built-in function [`lith.run_one_step`](https://landlab.readthedocs.io/en/release/landlab.components.litholayers.html#landlab.components.lithology.litholayers.LithoLayers.run_one_step) has an optional keyword argument `rock_id` to use when some material may be deposited. The LithoLayers component needs to know what type of rock exists everywhere and it will raise an error if material is deposited **and** no rock type is specified. However, here we are using the FastscapeEroder which is fully detachment limited, and thus we know that no material will be deposited at any time. Thus we can ignore this keyword argument. Later in the tutorial we will use the LinearDiffuser which can deposit sediment and we will need to set this keyword argument correctly.
Within each timestep we save information about the model for plotting.
```
out_fields = ['topographic__elevation', 'rock_type__id']
for i in range(nts):
fa.run_one_step()
sp.run_one_step(dt=dt)
dz_ad = np.zeros(mg.size('node'))
dz_ad[mg.core_nodes] = U * dt
z += dz_ad
lith.dz_advection=dz_ad
lith.run_one_step()
for of in out_fields:
ds[of][i, :, :] = mg['node'][of].reshape(mg.shape)
```
Now that the model has run, let's start by plotting the resulting topography.
```
imshow_grid(mg, 'topographic__elevation', cmap='viridis')
```
The layers of rock clearly influence the form of topography.
Next we will use HoloViews to visualize the topography and rock type together.
To start, we create a HoloViews Dataset from our xarray data structure.
```
hvds_topo = hv.Dataset(ds.topographic__elevation)
hvds_rock = hv.Dataset(ds.rock_type__id)
hvds_topo
```
Next we specify that we want two images, one showing rock type and one showing topographic elevation. A slider bar shows us model time in millions of years.
Be patient. Running this next block may take a moment. HoloViews is rendering an image of all time slices so you can see an animated slider. This is pretty magical (but not instantaneous).
```
%opts Image style(interpolation='bilinear', cmap='viridis') plot[colorbar=True]
topo = hvds_topo.to(hv.Image, ['x', 'y'])
rock = hvds_rock.to(hv.Image, ['x', 'y'])
topo + rock
```
We can see the form of the anticline advecting through the topography. Cool!
## Part 3: Creation of Inverted Topography
Here we will explore making inverted topography by eroding Lithology with constant properties for half of the model evaluation time, and then filling Lithology in with resistant material only where the drainage area is large. This is meant as a simple example of filling in valleys with volcanic material.
All of the details of the options for creating a [Lithology](https://landlab.readthedocs.io/en/release/landlab.components.lithology.html) can be found in its documentation.
In the next code block we make a new model and run it. There are a few important differences between this next example and the one we just worked through in Part 2.
Here we will have two rock types. Type `0` represents non-volcanic material; it will have a higher diffusivity and erodibility than the volcanic material, which is type `1`.
Recall that in Part 2 we did not specify a `rock_id` keyword argument to the `lith.run_one_step` method. This was because we used only the FastscapeEroder component, which is fully detachment limited and thus never deposits material. In this example we will also use the LinearDiffuser component, which may deposit material. The `Lithology` component needs to know the rock type everywhere, so we must indicate the rock type of the newly deposited rock. This is done by passing a single value or a number-of-nodes sized array of rock type values to the `run_one_step` method.
We also are handling the model grid boundary conditions differently than in the last example, setting the boundaries on the top and bottom to closed.
```
mg2 = RasterModelGrid((30, 30), 200)
mg2.set_closed_boundaries_at_grid_edges(False, True, False, True)
z2 = mg2.add_zeros('topographic__elevation', at='node')
random_field = 0.01 * np.random.randn(mg2.size('node'))
z2 += random_field - random_field.min()
thicknesses2 = [10000]
ids2 = [0]
attrs2 = {'K_sp': {0: 0.0001, 1: 0.00001}, 'D': {0: 0.4, 1: 0.001}}
lith2 = Lithology(mg2, thicknesses2, ids2, attrs=attrs2)
fa2 = FlowAccumulator(mg2)
sp2 = FastscapeEroder(mg2, K_sp='K_sp')
ld2 = LinearDiffuser(mg2, linear_diffusivity='D')
out_fields = ['topographic__elevation', 'rock_type__id']
nts = 200
U = 0.001
dt = 1000
ds2 = xr.Dataset(data_vars={
'topographic__elevation':
(('time', 'y', 'x'), np.empty((nts, mg2.shape[0], mg2.shape[1])), {
'units': 'meters',
'long_name': 'Topographic Elevation'
}),
'rock_type__id':
(('time', 'y', 'x'), np.empty((nts, mg2.shape[0], mg2.shape[1])), {
'units': '-',
'long_name': 'Rock Type ID Code'
})
},
coords={
'x': (('x'), mg2.x_of_node.reshape(mg2.shape)[0, :], {
'units': 'meters'
}),
'y': (('y'), mg2.y_of_node.reshape(mg2.shape)[:, 1], {
'units': 'meters'
}),
'time': (('time'), dt * np.arange(nts) / 1e6, {
'units': 'millions of years since model start',
'standard_name': 'time'
})
})
half_nts = int(nts / 2)
dz_ad2 = np.zeros(mg2.size('node'))
dz_ad2[mg2.core_nodes] = U * dt
lith2.dz_advection=dz_ad2
lith2.rock_id=0
for i in range(half_nts):
fa2.run_one_step()
sp2.run_one_step(dt=dt)
ld2.run_one_step(dt=dt)
z2 += dz_ad2
lith2.run_one_step()
for of in out_fields:
ds2[of][i, :, :] = mg2['node'][of].reshape(mg2.shape)
```
After the first half of run time, let's look at the topography.
```
imshow_grid(mg2, 'topographic__elevation', cmap='viridis')
```
We can see that we have developed ridges and valleys as we'd expect from a model with stream power erosion and linear diffusion.
Next we will create some volcanic deposits that fill the channels in our model.
```
volcanic_deposits = np.zeros(mg2.size('node'))
da_big_enough = mg2['node']['drainage_area'] > 5e4
topo_difference_from_top = mg2['node']['topographic__elevation'].max(
) - mg2['node']['topographic__elevation']
volcanic_deposits[
da_big_enough] = 0.25 * topo_difference_from_top[da_big_enough]
volcanic_deposits[mg2.boundary_nodes] = 0.0
z2 += volcanic_deposits
lith2.rock_id=1
lith2.run_one_step()
imshow_grid(mg2, volcanic_deposits)
```
We should expect that the locations of our valleys and ridges change as the river system encounters the much stronger volcanic rock.
```
for i in range(half_nts, nts):
fa2.run_one_step()
sp2.run_one_step(dt=dt)
ld2.run_one_step(dt=dt)
dz_ad2 = np.zeros(mg2.size('node'))
dz_ad2[mg2.core_nodes] = U * dt
z2 += dz_ad2
lith2.dz_advection=dz_ad2
lith2.rock_id=0
lith2.run_one_step()
for of in out_fields:
ds2[of][i, :, :] = mg2['node'][of].reshape(mg2.shape)
```
Now that the model has run, let's plot the final elevation.
```
imshow_grid(mg2, 'topographic__elevation', cmap='viridis')
```
And now a HoloViews plot that lets us explore the time evolution of the topography.
```
hvds_topo2 = hv.Dataset(ds2.topographic__elevation)
hvds_rock2 = hv.Dataset(ds2.rock_type__id)
%opts Image style(interpolation='bilinear', cmap='viridis') plot[colorbar=True]
topo2 = hvds_topo2.to(hv.Image, ['x', 'y'])
rock2 = hvds_rock2.to(hv.Image, ['x', 'y'])
topo2 + rock2
# if you wanted to output to visualize in something like ParaView, the following commands can be used
#ds.to_netcdf('anticline.nc')
#ds2.to_netcdf('inversion.nc')
```
Sure enough, the volcanic deposits impact the location of the ridges and valleys. The old valleys become ridges because it takes so much time for them to be eroded.
You can explore how this changes as the thickness of the deposit changes and as the relative erodibilities change.
## The end.
Nice work getting to the end of the tutorial!
For more detailed information about the [Lithology](https://landlab.readthedocs.io/en/release/landlab.components.lithology.html) and [LithoLayers](https://landlab.readthedocs.io/en/release/landlab.components.litholayers.html) objects, check out their detailed documentation.
# **Click [here](https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html) for more Landlab tutorials**
## Pandas Practice Notebook
Pandas provides numerous tools to work with tabular data like you'd find in spreadsheets or databases. It is widely used for data preparation, cleaning, and analysis. It can work with a wide variety of data and provides many visualization options. It is built on top of NumPy.
### Series
```
#import the necessary packages
import numpy as np
import pandas as pd
# Pandas uses something called a dataframe. It is a
# 2D data structure that can hold multiple data types.
# Columns have labels.
# Series are built on top of NumPy arrays.
# Create a series by first creating a list
list_1 = ['a', 'b', 'c', 'd']
# I can define that I want the series indexes to be the
# provided labels
labels = [1, 2, 3, 4]
ser_1 = pd.Series(data=list_1, index=labels)
# You can also add a NumPy array
arr_1 = np.array([1, 2, 3, 4])
ser_2 = pd.Series(arr_1)
# You can quickly add labels and values with a dictionary
dict_1 = {"f_name": "Derek",
"l_name": "Banas",
"age": 44}
ser_3 = pd.Series(dict_1)
# Get data by label
ser_3["f_name"]
# You can get the datatype
ser_2.dtype
# You can perform math operations on series
ser_2 + ser_2
ser_2 - ser_2
ser_2 * ser_2
ser_2 / ser_2
# You can pass them into NumPy methods
# See NumPy tutorial for more math methods
np.exp(ser_2)
# The difference between Series and ndarray is that operations
# align by labels
# Create a series from a dictionary
ser_4 = pd.Series({4: 5, 5: 6, 6: 7, 7: 8})
# If labels don't align you will get NaN
ser_2 + ser_4
# You can assign names to series
ser_4 = pd.Series({8: 9, 9: 10}, name='rand_nums')
ser_4.name
```
### DataFrames
DataFrames are the most commonly used data structure with Pandas. They are made up of multiple series that share the same index / label. They can contain multiple data types. They can be created from dicts, series, lists or other dataframes.
### Creating DataFrames
```
from numpy import random
# Create random matrix 2x3 with values between 10 and 50
arr_2 = np.random.randint(10, 50, size=(2, 3))
# Create DF with data, row labels & column labels
df_1 = pd.DataFrame(arr_2, ['A', 'B'], ['C', 'D', 'E'])
# Create a DF from multiple series in a dict
# If series are of different lengths, missing entries are NaN
dict_3 = {'one': pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
'two': pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
df_2 = pd.DataFrame(dict_3)
df_2
# from_dict accepts column labels and lists
pd.DataFrame.from_dict(dict([('A', [1,2,3]), ('B', [4,5,6])]))
# You can assign the keys as row labels and supply column labels
# separately with orient='index'
pd.DataFrame.from_dict(dict([('A', [1,2,3]), ('B', [4,5,6])]),
orient='index', columns=['one','two','three'])
# Get number of rows and columns as tuple
print(df_1.shape)
```
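The text above notes that DataFrames can also be created from lists, which the block doesn't demonstrate. A small sketch (names are illustrative):

```python
import pandas as pd

# From a list of lists: each inner list is one row
rows = [[1, 'a'], [2, 'b'], [3, 'c']]
df_from_lists = pd.DataFrame(rows, columns=['num', 'letter'])

# From a list of dicts: one dict per row; missing keys become NaN
records = [{'num': 1, 'letter': 'a'}, {'num': 2}]
df_from_dicts = pd.DataFrame(records)

print(df_from_lists.shape)  # (3, 2)
```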
### Editing & Retrieving Data
```
# Grab a column
df_1['C']
# Get multiple columns
df_1[['C', 'E']]
# Grab a row as a series
df_1.loc['A']
# Grab row by index position
df_1.iloc[1]
# Grab cell with Row & Column
df_1.loc['A', 'C']
# Grab multiple cells by defining rows wanted & the
# columns from those rows
print(df_1.loc[['A', 'B'], ['D', 'E']])
# Make new column
df_1['Total'] = df_1['C'] + df_1['D'] + df_1['E']
df_1
# You can perform multiple calculations
df_2['mult'] = df_2['one'] * df_2['two']
df_2
# Make a new row by appending (DataFrame.append was removed in
# pandas 2.0; pd.concat is the current idiom)
dict_2 = {'C': 44, 'D': 45, 'E': 46}
new_row = pd.Series(dict_2, name='F')
df_1 = pd.concat([df_1, new_row.to_frame().T])
# Delete column and set inplace to True which is required
# because Pandas tries to help you not delete data
# by accident
df_1.drop('Total', axis=1, inplace=True)
df_1
# Delete a row
df_1.drop('B', axis=0, inplace=True)
df_1
# Create a new column and make it the index
df_1['Sex'] = ['Men', 'Women']
df_1.set_index('Sex', inplace=True)
# You can reset index values to numbers
#df_1.reset_index(inplace=True)
df_1
# Assign can be used to create a column while leaving the
# original DF untouched
df_2.assign(div=df_2['one'] / df_2['two'])
# You can pass in a function as well
df_2.assign(div=lambda x: (x['one'] / x['two']))
# Combine DataFrames while keeping df_3 data unless
# there is a NaN value
df_3 = pd.DataFrame({'A': [1., np.nan, 3., np.nan]})
df_4 = pd.DataFrame({'A': [8., 9., 2., 4.]})
df_3.combine_first(df_4)
```
### Conditional Selection
```
arr_2 = np.random.randint(10, 50, size=(2, 3))
df_1 = pd.DataFrame(arr_2, ['A', 'B'], ['C', 'D', 'E'])
print(df_1)
# You can use conditional operators to retrieve a table
# based on the condition
print("Greater than 40\n", df_1 > 40.0)
# You can use comparison operator functions as well, like
# gt, lt, ge, le, eq, ne
print("Greater than 45\n", df_1.gt(45.0))
# You can place conditions in brackets as well
bool_1 = df_1 >= 45.0
df_1[bool_1]
# Get bools for a column
df_1['E'] > 40
# Return a row if cell value in column matches a condition
df_1[df_1['E']>30]
# You can focus on a column based on resulting dataframe
df_2 = df_1[df_1['E']>30]
df_2['C']
# You can stack these commands
print(df_1[df_1['E']>20]['C'])
print()
# You can also grab multiple columns
print(df_1[df_1['E']>20][['C', 'D']])
print()
# You can use multiple conditions
arr_3 = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
df_2 = pd.DataFrame(arr_3, ['A', 'B', 'C'], ['X', 'Y', 'Z'])
print(df_2, "\n")
# Combine conditions with & (and) or | (or)
df_2[(df_2['X']>3) & (df_2['X']<7)]
```
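The comments above mention `|`, but the block only demonstrates `&`. A quick sketch of `|` (or) and `~` (not) on the same kind of small frame:

```python
import numpy as np
import pandas as pd

arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
df = pd.DataFrame(arr, ['A', 'B', 'C'], ['X', 'Y', 'Z'])

# | is element-wise "or": rows where X is small OR large
low_or_high = df[(df['X'] < 2) | (df['X'] > 6)]   # rows A and C

# ~ negates a condition: rows where X is NOT greater than 3
not_big = df[~(df['X'] > 3)]                      # row A only
```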
### File Input / Output
Pandas can work with the following types of data: CSV, plain text, JSON, XML, PDF, SQL, HTML, XLSX, DOCX, ZIP, images, Hierarchical Data Format (HDF), MP3, and MP4.
```
import pymysql
# Read a CSV file
# Type pd.read_ [TAB] to see the file types you can read
cs_df = pd.read_csv('ComputerSales.csv')
# Save a CSV file, but don't save the index as a column
cs_df.to_csv('ComputerSalesBU.csv', index=False)
# You can read data from Excel, but not formulas and macros
pd.read_excel('Financial Sample.xlsx',0)
# Write to Excel
cs_df.to_excel('ComputerSales.xlsx')
# Check if written
pd.read_excel('ComputerSales.xlsx',0)
# Read from MySQL Database
try:
db_connection = pymysql.connect(db='students', user='studentadmin', passwd='TurtleDove', host='localhost', port=3306)
stud_df = pd.read_sql('SELECT * FROM students', con=db_connection)
# print(stud_df)
except Exception as e:
print("Exception : {}".format(e))
finally:
db_connection.close()
# Write to table
try:
db_connection = pymysql.connect(db='students', user='studentadmin', passwd='TurtleDove', host='localhost', port=3306)
# Used to issue queries
cursor = db_connection.cursor()
# Query to enter new student
insert_stmt = "INSERT INTO students VALUES(NULL, 'Frank', 'Silva', 'fsilva@aol.com', '666 Hell St', 'Yakima', 'WA', 98901, '792-223-8966', '1959-2-22', 'M', NOW(), 3.50)"
# Execute query
cursor.execute(insert_stmt)
# Commit changes to DB
db_connection.commit()
stud_df = pd.read_sql('SELECT * FROM students', con=db_connection)
print(stud_df)
except Exception as e:
print("Exception : {}".format(e))
finally:
db_connection.close()
# Just get 1 column of data as a Series (the squeeze keyword
# was removed in pandas 2.0; use the .squeeze method instead)
cs_df_st = pd.read_csv('ComputerSales.csv', usecols=["State"]).squeeze("columns")
cs_df_st
```
### Basics & Math
```
# Display 1st 5 rows
cs_df.head()
# Display last 5 rows
cs_df.tail()
# Get 1st 2
cs_df[:2]
# Get 1st through 5 with a 2 step
cs_df[:5:2]
# Get indexes
cs_df.index.array
# Get NumPy array
cs_df.to_numpy()
# Get array from series
ser_1.array
dict_3 = {'one': pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
'two': pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
df_2 = pd.DataFrame(dict_3)
# You can replace NaN values with 0 or anything else
print(df_2.fillna(0))
# Get values in row 2
row = df_2.iloc[1]
# Add items in row 2 to all rows including row 2
# You can do the same with sub, mul, and div
df_2.add(row, axis='columns')
# Get column 2
col = df_2['two']
# Subtract from other columns
df_2.sub(col, axis=0)
# Check if empty
df_2.empty
# Transform executes a function on a dataframe
df_5 = pd.DataFrame({'A': range(3), 'B': range(1, 4)})
df_5.transform(lambda x: x+1)
df_5.transform(lambda x: x**2)
df_5.transform(lambda x: np.sqrt(x))
# You can transform using multiple functions
df_5.transform([lambda x: x**2, lambda x: x**3])
# Passing a dictionary allows you to perform different calculations
# on different columns
df_5.transform({'A': lambda x: x**2, 'B': lambda x: x**3})
# map performs a function on a series
df_5['A'].map(lambda x: x**2)
# applymap does the same on a dataframe
df_5.applymap(lambda x: x**2)
# Get unique values in column 2 of DF
df_2['two'].unique()
# Get number of uniques
df_2['two'].nunique()
# Get the number of times each value showed in column 2
df_2['two'].value_counts()
# Get column names
df_2.columns
# Get index info
df_2.index
# Return a DF that lists null values as True
df_2.isnull()
```
### Group Data
```
# Groupby allows you to group rows based on a column and perform a function
# that combines those values (Aggregate Function)
dict_5 = {'Store': [1,2,1,2], 'Flavor': ['Choc', 'Van', 'Straw', 'Choc'],
'Sales': [26, 12, 18, 22]}
df_11 = pd.DataFrame(dict_5)
# Group data by the store number
by_store = df_11.groupby('Store')
# Get mean sales by store
by_store.mean()
# Get sales total just for store 1
by_store.sum().loc[1]
# You can use describe to get a bunch of stats at once
by_store.describe()
```
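Besides `describe`, the `agg` method lets you pick specific aggregate functions after grouping. A self-contained sketch reusing the same kind of sales data:

```python
import pandas as pd

df_11 = pd.DataFrame({'Store': [1, 2, 1, 2],
                      'Flavor': ['Choc', 'Van', 'Straw', 'Choc'],
                      'Sales': [26, 12, 18, 22]})

# agg applies one or more named functions to each group
summary = df_11.groupby('Store')['Sales'].agg(['sum', 'mean', 'max'])
print(summary.loc[1, 'sum'])  # 44 (26 + 18)
```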
### Concatenate Merge & Join Data
```
# You can concatenate DFs in the order DFs are provided
df_12 = pd.DataFrame({'A': [1,2,3],
'B': [4,5,6]},
index=[1,2,3])
df_13 = pd.DataFrame({'A': [7,8,9],
'B': [10,11,12]},
index=[4,5,6])
pd.concat([df_12, df_13])
# Merge 2 DFs using their shared key column
df_12 = pd.DataFrame({'A': [1,2,3],
'B': [4,5,6],
'key': [1,2,3]})
df_13 = pd.DataFrame({'A': [7,8,9],
'B': [10,11,12],
'key': [1,2,3]})
# inner merges at the intersection of keys
pd.merge(df_12, df_13, how='inner', on='key')
# how='left' or 'right' : Use keys from left or right frame
# how='outer' : Use union of keys
# You can join DFs with non-identical indexes; join matches
# rows on the index rather than a key column
df_12 = pd.DataFrame({'A': [1,2,3],
'B': [4,5,6]},
index=[1,2,3])
df_13 = pd.DataFrame({'C': [7,8,9],
'D': [10,11,12]},
index=[1,4,5])
df_12.join(df_13, how='outer')
```
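The comments above list `how='left'`, `'right'`, and `'outer'` without demonstrating them. A small sketch of an outer merge, where non-matching keys survive with NaN:

```python
import pandas as pd

left = pd.DataFrame({'A': [1, 2, 3], 'key': [1, 2, 3]})
right = pd.DataFrame({'B': [10, 11, 12], 'key': [2, 3, 4]})

# outer keeps the union of keys; cells with no match become NaN
outer = pd.merge(left, right, how='outer', on='key')
print(len(outer))  # 4 rows: keys 1, 2, 3, 4
```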
### Statistics
```
# Get ice cream sales data
ics_df = pd.read_csv('icecreamsales.csv')
ics_df
# Get total count of both columns
ics_df.count()
# skipna skips null / NaN values
ics_df.sum(skipna=True)
# Get mean for named column
ics_df["Sales"].mean()
ics_df["Sales"].median()
ics_df["Sales"].mode()
ics_df["Sales"].min()
ics_df["Sales"].max()
ics_df["Sales"].prod() # Product of values
ics_df["Sales"].std() # Standard deviation
ics_df["Sales"].var() # Variance
ics_df["Sales"].sem() # Standard error
# Negative : Left long tail, Positive : Right long tail
ics_df["Sales"].skew()
# Kurtosis : < 3 fewer outliers, 3 normal distribution,
# > 3 more outliers
ics_df["Sales"].kurt()
ics_df["Sales"].quantile(.5)
ics_df["Sales"].cumsum()
ics_df["Sales"].cumprod()
ics_df["Sales"].cummax()
ics_df["Sales"].cummin()
# Multiple stats at once
ics_df.describe()
ser_dice = pd.Series(data=[2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6,
6, 6, 6, 7, 7, 7, 7, 7, 7, 8, 8, 8,
8, 8, 9, 9, 9, 9, 10, 10, 10, 11, 11, 12])
# Count for each value in series
ser_dice.value_counts()
# You can perform calculations on multiple columns using
# aggregate
print(df_2)
df_2.agg(np.mean)
# You can do this with multiple functions
df_2.agg(['mean', 'std'])
```
### Iteration
```
# Iterating over series
ser_7 = pd.Series(range(5), index=['a', 'b', 'c', 'd', 'e'])
for col in ser_7:
print(col)
print()
# Iterating over DFs
arr_4 = np.random.randint(10, 50, size=(2, 3))
df_8 = pd.DataFrame(arr_4, ['B', 'C'], ['C', 'D', 'E'])
print(df_8)
# items allows you to iterate through key value pairs to make
# calculations 1 column at a time
for label, ser in df_8.items():
print(label)
print(ser)
print()
# You can also iterate through rows
for index, row in df_8.iterrows():
print(f"{index}\n{row}")
print()
# Get a tuple that contains row data
for row in df_8.itertuples():
print(row)
```
### Sorting
```
df_8
# Sorting by index will return the same results if indexes
# are in order, to reverse indexes mark ascending as False
df_8.sort_index(ascending=False)
# Sort by value for column D (Use the same function for series)
df_8.sort_values(by='D')
```
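`sort_values` also accepts multiple columns, with a per-column sort direction — a small sketch with illustrative data:

```python
import pandas as pd

df = pd.DataFrame({'D': [2, 1, 2], 'E': [5, 9, 3]})

# Sort by D ascending, then break ties with E descending
df_sorted = df.sort_values(by=['D', 'E'], ascending=[True, False])
print(list(df_sorted.index))  # [1, 0, 2]
```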
### Passing Data to Functions
```
import sys
# You can pass DataFrames and Series into functions
def get_profit_total(df):
prof_ser = df['Profit']
print(f"Total Profit : {prof_ser.sum()}")
get_profit_total(cs_df)
# Receives a DataFrame, splits the contact into new columns
# being first and last name
def split_name(df):
def get_names(full_name):
# Split contact at space
f_name, l_name = full_name.split()
# Create a series with first & last names in columns
# with those labels
return pd.Series(
(f_name, l_name),
index=['First Name', 'Last Name']
)
# apply() executes the function on all names in Contact column
names = df['Contact'].apply(get_names)
df[names.columns] = names
return df
# Run function and display top 5 results
split_name(cs_df).head()
# Will assign people to different age groups based on age
def create_age_groups(df):
# Must have 1 more bins than labels
bins = [0, 30, 50, sys.maxsize]
# Group labels
labels = ['<30', '30-50', '>50']
# cut puts values into certain groups based on intervals
# The group assigned to <30 has an age between 0 and 30
# between 30 & 50 is assigned 30-50 and so on
age_group = pd.cut(df['Age'], bins=bins, labels=labels)
# Create new column and return new dataframe info
df['Age Group'] = age_group
return df
create_age_groups(cs_df)
# You can use a pipe to pass a dataframe to multiple functions
cs_df.pipe(split_name).pipe(create_age_groups).head()
```
### Aligning, Reindexing and Renaming Labels
```
ser_6 = pd.Series(range(5), index=['a', 'b', 'c', 'd', 'e'])
sl_1 = ser_6[:4]
sl_2 = ser_6[1:]
print(sl_1)
print(sl_2)
# Align both series by the union of their indexes
sl_1.align(sl_2)
# Align by calling series
sl_1.align(sl_2, join='left')
# Use passed series indexes
sl_1.align(sl_2, join='right')
# Get where indexes intersect
sl_1.align(sl_2, join='inner')
# You can use align with DFs as well
arr_3 = np.random.randint(10, 50, size=(2, 3))
df_6 = pd.DataFrame(arr_3, ['A', 'B'], ['C', 'D', 'E'])
arr_3 = np.random.randint(10, 50, size=(2, 3))
df_7 = pd.DataFrame(arr_3, ['B', 'C'], ['C', 'D', 'E'])
df_6
df_6.align(df_7)
# reindex allows you to align data by index
ser_6.reindex(['c','b','a'])
# Do the same with DFs
df_6.reindex(['B','A'])
# Drop is very similar to reindex except it receives labels
# you don't want to include
df_6.drop(['A'], axis=0)
df_6.drop(['D'], axis=1)
# You can rename labels
df_6.rename(columns={'C': 'Men', 'D': 'Women', 'E': 'Pets'},
index={'A': 1, 'B': 2})
```
### MultiIndex
```
# Multi-level indexing allows you to store data on multiple
# dimensions
days = ['Day 1', 'Day 1', 'Day 1', 'Day 2', 'Day 2', 'Day 2']
meals = [1,2,3,1,2,3]
# zip pairs the days and meals arrays
# Then we create a list of those paired tuples
hier_index = list(zip(days, meals))
print(hier_index)
# Converts list of tuples into each row and column
hier_index = pd.MultiIndex.from_tuples(hier_index)
# Generate random array representing calories eaten per meal
arr_5 = np.random.randint(500, 700, size=(6, 2))
df_9 = pd.DataFrame(arr_5, hier_index, ['M', 'F'])
print(df_9)
# Grab the day 1 DF
df_9.loc['Day 1']
# Grab 1st row as a series
df_9.loc['Day 1'].loc[1]
# Grab calories eaten by the female on day 2 for the 2nd meal
df_9.loc['Day 2'].loc[2]['F']
# We can assign names to the Day and Meals Column
df_9.index.names = ['Day', 'Meal']
df_9
# Get a cross section
# This gets me the Day 2 DF
df_9.xs('Day 2')
# Get calories for the 1st meal for both days by saying what
# meal index you want and the Meal column name
df_9.xs(1, level='Meal')
# Create a MultiIndex out of a DF using a pivot table
dict_6 = {'A':['Day 1', 'Day 1', 'Day 1', 'Day 2', 'Day 2', 'Day 2'],
'B': [1,2,3,1,2,3],
'C': ['M', 'F', 'M', 'F', 'M', 'F'],
'D': [1,2,3,4,5,6]}
df_14 = pd.DataFrame(dict_6)
# Designate the D column is the data
# Make A & B a multilevel index
# Define column names come from column C
# You will have NaNs where data was missing
df_14.pivot_table(values='D', index=['A','B'], columns=['C'])
```
### Handling Missing Data
```
dict_4 = {'A': [1,2,np.nan], 'B': [4, np.nan, np.nan], 'C': [7.,8.,9.]}
df_10 = pd.DataFrame(dict_4)
print(df_10)
# Drop missing data from DF (Drops any row with missing values)
df_10.dropna()
# Drop all columns with any missing data
df_10.dropna(axis=1)
# Drop row unless it has at least 2 non-NaN values
df_10.dropna(thresh=2)
# Fill NaN values with 0
df_10.fillna(value=0.0)
# Fill A column with the mean of column
df_10['A'].fillna(value=df_10['A'].mean())
# Fill with previous value (fillna(method=...) is deprecated in
# newer pandas; ffill/bfill are the current idioms)
df_10.ffill()
# Fill with next value (only works if there is a next value)
df_10.bfill()
```
### Experimenting with Data
```
cs_df.head() # Get 1st 5
print(cs_df.columns) # Get column names
cs_df['Profit'].mean() # Average profit per item
# Get the product with the highest profit
cs_df.loc[cs_df['Profit'].idxmax(), ['Product ID', 'Profit']]
# Number of people who purchased from WV
cs_df[cs_df['State']=='WV']['State'].count()
# Number of purchases in 2019
len(cs_df[cs_df['Year']==2019].index)
# Get number of sales for each product type
cs_df['Product ID'].value_counts()
# Get list of customers that bought a specific product
cs_df[cs_df['Product ID']=='M01-F0024']['Contact']
# How many made a website purchase for a profit over $150
cs_df[(cs_df['Lead']=='Website') & (cs_df['Profit']>150)]['Lead'].count()
# Find out how many product profit amounts include .89 in cents
cs_df['Profit'].apply(lambda cents: str(cents).split('.')[1]=='89').value_counts()
```
### Visualization
```
# Library used to create advanced static, animated and
# interactive visualizations
import matplotlib.pyplot as plt
# Displays matplotlib plots in the Notebook
%matplotlib inline
# Histograms provide an approximation of the distribution of
# results. You create them by dividing the range of values into
# bins or buckets. Then you count how many of the results fall
# into each bin.
# Roll dice 5000 times and chart the frequencies as a histogram.
# The single-die column is roughly uniform, while the two-dice
# sum peaks near 7 and tapers toward 2 and 12
# (1 way to roll a 2 vs. 6 ways to roll a 7)
df_dice = pd.DataFrame(
np.random.randint(1,7,5000),
columns = ['Hist'])
df_dice['Odds'] = df_dice['Hist'] + np.random.randint(1,7,5000)
# Alpha decreases the opacity in the chart
ax = df_dice.plot.hist(bins=12, alpha=0.5)
# Basic plot using 1000 random values that create cumulative sums
# over an increasing date range
ser_5 = pd.Series(np.random.randn(1000),
index=pd.date_range('11/15/2017', periods=1000))
ser_5 = ser_5.cumsum()
# ser_5.plot()
# Display 3 random plots
df_15 = pd.DataFrame(np.random.randn(1000, 3),
index=pd.date_range('11/15/2017', periods=1000),
columns=list('ABC'))
df_15 = df_15.cumsum()
# df_15.plot()
# Make bar chart from 5 random values
# pd.DataFrame(np.random.randn(5)).plot.bar()
# Make MultiBar Charts
vals = ['A', 'B', 'C', 'D']
df_15 = pd.DataFrame(np.random.rand(10,4), columns=vals)
# df_15.plot.bar()
# Area plot
# Define x range and y values
x_rng = range(1,15)
y_vals = [1,5,4,7,6,9,5,7,10,14,10,12,9,8]
# Change fill color and opacity
# plt.fill_between(x_rng, y_vals, color="skyblue", alpha=0.5)
# plt.show()
# Area plot with multiple areas
# pd.DataFrame(np.random.rand(10,3), columns=['A','B','C']).plot.area()
# Create a scatterplot with 100 random values
# pd.DataFrame(np.random.rand(100,2),
# columns=['A','B']).plot.scatter(x='A', y='B')
# Multiple column scatter plots
df_15 = pd.DataFrame(np.random.rand(50,4), columns=['A','B','C','D'])
# ax = df_15.plot.scatter(x='A', y='B', color='DarkBlue', label='Grp 1')
# df_15.plot.scatter(x='C', y='D', color='Orange', label='Grp 2', ax=ax)
# Pie Charts with 4 random values
# pd.Series(np.random.rand(4),
# index=['a','b','c','d'],
# name='Pie').plot.pie(figsize=(6,6))
```
# Tutorial: Linear Programming, (CPLEX Part 1)
This notebook gives an overview of Linear Programming (or LP). After completing this unit, you should be able to
- describe the characteristics of an LP in terms of the objective, decision variables and constraints,
- formulate a simple LP model on paper,
- conceptually explain some standard terms related to LP, such as dual, feasible region, infeasible, unbounded, slack, reduced cost, and degenerate.
You should also be able to describe some of the algorithms used to solve LPs, explain what presolve does, and recognize the elements of an LP in a basic DOcplex model.
>This notebook is part of **[Prescriptive Analytics for Python](http://ibmdecisionoptimization.github.io/docplex-doc/)**
>
>It requires either an [installation of CPLEX Optimizers](http://ibmdecisionoptimization.github.io/docplex-doc/getting_started.html) or it can be run on [IBM Cloud Pak for Data as a Service](https://www.ibm.com/products/cloud-pak-for-data/as-a-service/) (Sign up for a [free IBM Cloud account](https://dataplatform.cloud.ibm.com/registration/stepone?context=wdp&apps=all)
and you can start using `IBM Cloud Pak for Data as a Service` right away).
>
> CPLEX is available on <i>IBM Cloud Pak for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:
> - <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:
> - <i>Python 3.x</i> runtime: Community edition
> - <i>Python 3.x + DO</i> runtime: full edition
> - <i>Cloud Pak for Data</i>: Community edition is installed by default. Please install the `DO` addon in `Watson Studio Premium` for the full edition
Table of contents:
* [Introduction to Linear Programming](#Introduction-to-Linear-Programming)
* [Example: a production problem](#Example:-a-production-problem)
* [CPLEX Modeling for Python](#Use-IBM-Decision-Optimization-CPLEX-Modeling-for-Python)
* [Algorithms for solving LPs](#Algorithms-for-solving-LPs)
* [Summary](#Summary)
* [References](#References)
# Introduction to Linear Programming
In this topic, you’ll learn what the basic characteristics of a linear program are.
## What is Linear Programming?
Linear programming deals with the maximization (or minimization) of a linear objective function, subject to linear constraints, where all the decision variables are continuous. That is, no discrete variables are allowed. The linear objective and constraints must consist of linear expressions.
## What is a linear expression?
A linear expression is a scalar product, for example, the expression:
$$
\sum{a_i x_i}
$$
where $a_i$ represents constants (that is, data) and $x_i$ represents variables or unknowns.
Such an expression can also be written in short form as a vector product:
$$
A^{t} X
$$
where $A$ is the vector of constants and $X$ is the vector of variables.
*Note*: Nonlinear terms that involve variables (such as x and y) are not allowed in linear expressions.
Terms that are not allowed in linear expressions include
- multiplication of two or more variables (such as x times y),
- quadratic and higher order terms (such as x squared or x cubed),
- exponents,
- logarithms,
- absolute values.
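Evaluated at a particular point, a linear expression is simply the dot product of the constant vector with the variable-value vector. A minimal sketch, using NumPy purely for illustration:

```
import numpy as np

# coefficients a_i (the data) and one candidate assignment of the variables x_i
a = np.array([2.0, 3.0, -1.0])
x = np.array([1.0, 4.0, 5.0])

# the linear expression sum_i a_i * x_i, written as a vector product
value = a.dot(x)
print(value)  # 2*1 + 3*4 + (-1)*5 = 9.0
```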
## What is a linear constraint?
A linear constraint is expressed by an equality or inequality as follows:
- $linear\_expression = linear\_expression$
- $linear\_expression \le linear\_expression$
- $linear\_expression \ge linear\_expression$
Any linear constraint can be rewritten as one or two constraints of the form $linear\_expression \le 0$.
Note that *strict* inequality operators (that is, $>$ and $<$) are not allowed in linear constraints.
## What is a continuous variable?
A variable (or _decision_ variable) is an unknown of the problem. Continuous variables are variables whose values range over the set of real numbers (or an interval of it).
Restrictions on their values that create discontinuities, for example a restriction that a variable should take integer values, are not allowed.
## Symbolic representation of an LP
A typical symbolic representation of a Linear Program is as follows:
$
minimize \sum c_{i} x_{i}\\
\\
subject\ to:\\
\ a_{11}x_{1} + a_{12} x_{2} ... + a_{1n} x_{n} \ge b_{1}\\
\ a_{21}x_{1} + a_{22} x_{2} ... + a_{2n} x_{n} \ge b_{2}\\
...\\
\ a_{m1}x_{1} + a_{m2} x_{2} ... + a_{mn} x_{n} \ge b_{m}\\
x_{1}, x_{2}...x_{n} \ge 0
$
This can be written in a concise form using matrices and vectors as:
$
min\ C^{t}x\\
s.\ t.\ Ax \ge B\\
x \ge 0
$
Where $x$ denotes the vector of variables with size $n$, $A$ denotes the matrix of constraint coefficients, with $m$ rows and $n$ columns and $B$ is a vector of numbers with size $m$.
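As a sketch of this matrix form, here is a tiny instance with $m = 2$ constraints and $n = 2$ variables, checking a candidate point for feasibility and evaluating its objective (NumPy is assumed here, for illustration only):

```
import numpy as np

# tiny instance of:  min C^t x   s.t.  A x >= B,  x >= 0   (m = 2, n = 2)
C = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0],
              [2.0, 0.5]])
B = np.array([4.0, 3.0])

x = np.array([3.0, 1.0])                     # a candidate point
objective = C.dot(x)                         # C^t x = 3 + 2 = 5
feasible = bool(np.all(A.dot(x) >= B)) and bool(np.all(x >= 0))
print(objective, feasible)
```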
## Characteristics of a linear program
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/1.png?raw=true" >
</ul>
# Example: a production problem
In this topic, you’ll analyze a simple production problem in terms of decision variables, the objective function, and constraints.
You’ll learn how to write an LP formulation of this problem, and how to construct a graphical representation of the model. You’ll also learn what feasible, optimal, infeasible, and unbounded mean in the context of LP.
## Problem description: telephone production
A telephone company produces and sells two kinds of telephones, namely desk phones and cellular phones.
Each type of phone is assembled and painted by the company. The objective is to maximize profit, and the company has to produce at least 100 of each type of phone.
There are limits in terms of the company’s production capacity, and the company has to calculate the optimal number of each type of phone to produce, while not exceeding the capacity of the plant.
## Writing a descriptive model
It is good practice to start with a descriptive model before attempting to write a mathematical model. In order to come up with a descriptive model, you should consider what the decision variables, objectives, and constraints for the business problem are, and write these down in words.
In order to come up with a descriptive model, consider the following questions:
- What are the decision variables?
- What is the objective?
- What are the constraints?
## Telephone production: a descriptive model
A possible descriptive model of the telephone production problem is as follows:
- Decision variables:
- Number of desk phones produced (DeskProduction)
- Number of cellular phones produced (CellProduction)
- Objective: Maximize profit
- Constraints:
1. The DeskProduction should be greater than or equal to 100.
2. The CellProduction should be greater than or equal to 100.
3. The assembly time for DeskProduction plus the assembly time for CellProduction should not exceed 400 hours.
4. The painting time for DeskProduction plus the painting time for CellProduction should not exceed 490 hours.
## Writing a mathematical model
Convert the descriptive model into a mathematical model:
- Use the two decision variables DeskProduction and CellProduction
- Use the data given in the problem description (remember to convert minutes to hours where appropriate)
- Write the objective as a mathematical expression
- Write the constraints as mathematical expressions (use “=”, “<=”, or “>=”, and name the constraints to describe their purpose)
- Define the domain for the decision variables
### Telephone production: a mathematical model
To express the last two constraints, we model assembly time and painting time as linear combinations of the two productions, resulting in the following mathematical model:
$
maximize:\\
\ \ 12\ desk\_production + 20\ cell\_production\\
subject\ to: \\
\ \ desk\_production >= 100 \\
\ \ cell\_production >= 100 \\
\ \ 0.2\ desk\_production + 0.4\ cell\_production <= 400 \\
\ \ 0.5\ desk\_production + 0.4\ cell\_production <= 490 \\
$
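Before building the DOcplex model, the mathematical model above can be cross-checked with any LP solver. As an independent sketch (assuming SciPy is available; this is not part of the CPLEX tutorial), `scipy.optimize.linprog` minimizes by convention, so we negate the profit coefficients:

```
from scipy.optimize import linprog

# linprog minimizes by default, so negate the profit coefficients to maximize
c = [-12, -20]                        # maximize 12*desk + 20*cell
A_ub = [[0.2, 0.4],                   # assembly time (hours)
        [0.5, 0.4]]                   # painting time (hours)
b_ub = [400, 490]
bounds = [(100, None), (100, None)]   # produce at least 100 of each phone

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, -res.fun)                # optimum: desk=300, cell=850, profit 20600
```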
### Using DOcplex to formulate the mathematical model in Python
Use the [DOcplex](http://ibmdecisionoptimization.github.io/docplex-doc/) Python library to write the mathematical model in Python.
This is done in four steps:
- create an instance of docplex.mp.Model to hold all model objects
- create decision variables,
- create linear constraints,
- finally, define the objective.
But first, we have to import the class `Model` from the docplex module.
## Use IBM Decision Optimization CPLEX Modeling for Python
Let's use the DOcplex Python library to write the mathematical model in Python.
### Step 1: Download the library
Install `CPLEX` (Community Edition) and `docplex` if they are not installed.
In `IBM Cloud Pak for Data as a Service` notebooks, `CPLEX` and `docplex` are preinstalled.
```
import sys
try:
import cplex
except ImportError:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install cplex
else:
!pip install --user cplex
```
Install `docplex` if needed
```
import sys
try:
import docplex.mp
except ImportError:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install docplex
else:
!pip install --user docplex
```
If either `CPLEX` or `docplex` was installed in the steps above, you will need to restart your Jupyter kernel for the changes to take effect.
### Step 2: Set up the prescriptive model
#### Create the model
All objects of the model belong to one model instance.
```
# first import the Model class from docplex.mp
from docplex.mp.model import Model
# create one model instance, with a name
m = Model(name='telephone_production')
```
#### Define the decision variables
- The continuous variable `desk` represents the production of desk telephones.
- The continuous variable `cell` represents the production of cell phones.
```
# by default, all variables in Docplex have a lower bound of 0 and infinite upper bound
desk = m.continuous_var(name='desk')
cell = m.continuous_var(name='cell')
```
#### Set up the constraints
- Desk and cell phone must both be greater than 100
- Assembly time is limited
- Painting time is limited.
```
# write constraints
# constraint #1: desk production is greater than 100
m.add_constraint(desk >= 100)
# constraint #2: cell production is greater than 100
m.add_constraint(cell >= 100)
# constraint #3: assembly time limit
ct_assembly = m.add_constraint( 0.2 * desk + 0.4 * cell <= 400)
# constraint #4: painting time limit
ct_painting = m.add_constraint( 0.5 * desk + 0.4 * cell <= 490)
```
#### Express the objective
We want to maximize the expected revenue.
```
m.maximize(12 * desk + 20 * cell)
```
A few remarks about how we formulated the mathematical model in Python using DOcplex:
- all arithmetic operations (+, \*, \-) are done using Python operators
- comparison operators used in writing linear constraint use Python comparison operators too.
#### Print information about the model
We can print information about the model to see how many objects of each type it holds:
```
m.print_information()
```
### Graphical representation of a Linear Problem
A simple 2-dimensional LP (with 2 decision variables) can be represented graphically using an x- and y-axis.
This is often done to demonstrate optimization concepts.
To do this, follow these steps:
- Assign one variable to the x-axis and the other to the y-axis.
- Draw each of the constraints as you would draw any line in 2 dimensions.
- Use the signs of the constraints (=, <= or >=) to determine which side of each line falls within the feasible region (allowable solutions).
- Draw the objective function as you would draw any line in 2 dimensions, by substituting any value for the objective (for example, 12 * DeskProduction + 20 * CellProduction = 4000)
#### Feasible set of solutions
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/19.png?raw=true" >
</ul>
This graphic shows the feasible region for the telephone problem.
Recall that the feasible region of an LP is the region delimited by the constraints, and it represents all feasible solutions. In this graphic, the variables DeskProduction and CellProduction are abbreviated to be desk and cell instead. Look at this diagram and search intuitively for the optimal solution. That is, which combination of desk and cell phones will yield the highest profit.
#### The optimal solution
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/20.png?raw=true" >
</ul>
To find the optimal solution to the LP, you must find values for the decision variables, within the feasible region, that maximize profit as defined by the objective function. In this problem, the objective function is to maximize
$$12 * desk + 20 * cell
$$
To do this, first draw a line representing the objective by substituting a value for the objective.
Next move the line up (because this is a maximization problem) to find the point where the line last touches the feasible region. Note that all the solutions on one objective line, such as AB, yield the same objective value. Other values of the objective will be found along parallel lines (such as line CD).
In a profit maximizing problem such as this one, these parallel lines are often called isoprofit lines, because all the points along such a line represent the same profit. In a cost minimization problem, they are known as isocost lines. Since all isoprofit lines have the same slope, you can find all other isoprofit lines by pushing the objective value further out, moving in parallel, until the isoprofit lines no longer intersect the feasible region. The last isoprofit line that touches the feasible region defines the largest (therefore maximum) possible value of the objective function. In the case of the telephone production problem, this is found along line EF.
The optimal solution of a linear program always belongs to an extreme point of the feasible region (that is, at a vertex or an edge).
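The extreme-point property can be illustrated by brute force: enumerate the intersections of pairs of constraint boundary lines, keep the feasible ones, and evaluate the objective at each. This is only a two-variable sketch, not how real solvers work:

```
import itertools
import numpy as np

# constraint boundaries of the telephone problem, written as a . x = b
boundaries = [([1.0, 0.0], 100.0),   # desk >= 100
              ([0.0, 1.0], 100.0),   # cell >= 100
              ([0.2, 0.4], 400.0),   # assembly time limit
              ([0.5, 0.4], 490.0)]   # painting time limit

def feasible(pt, tol=1e-9):
    d, c = pt
    return (d >= 100 - tol and c >= 100 - tol
            and 0.2 * d + 0.4 * c <= 400 + tol
            and 0.5 * d + 0.4 * c <= 490 + tol)

vertices = []
for (a1, b1), (a2, b2) in itertools.combinations(boundaries, 2):
    M = np.array([a1, a2])
    if abs(np.linalg.det(M)) < 1e-12:
        continue                      # parallel boundary lines: no vertex
    pt = np.linalg.solve(M, np.array([b1, b2]))
    if feasible(pt):
        vertices.append(pt)

best = max(vertices, key=lambda v: 12 * v[0] + 20 * v[1])
print(best)                           # the maximal-profit vertex
```

The best feasible vertex found this way is (300, 850), the same point CPLEX reports.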
### Solve with the model
If you're using a Community Edition of CPLEX runtimes, the solve stage may fail depending on the size of the problem, and will require a paying subscription or product installation.
In any case, `Model.solve()` returns a solution object in Python, containing the optimal values of decision variables, if the solve succeeds, or else it returns `None`.
```
s = m.solve()
m.print_solution()
```
In this case, CPLEX has found an optimal solution at (300, 850). You can check that this point is indeed an extreme point of the feasible region.
### Multiple Optimal Solutions
It is possible that an LP has multiple optimal solutions.
At least one optimal solution will be at a vertex.
By default, the CPLEX® Optimizer reports the first optimal solution found.
#### Example of multiple optimal solutions
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/22.png?raw=true" >
</ul>
This graphic shows an example of an LP with multiple optimal solutions. This can happen when the slope of the objective function is the same as the slope of one of the constraints, in this case line AB. All the points on line AB are optimal solutions, with the same objective value, because they are all extreme points within the feasible region.
### Binding and nonbinding constraints
A constraint is binding if the constraint becomes an equality when the solution values are substituted.
Graphically, binding constraints are constraints where the optimal solution lies exactly on the line representing that constraint.
In the telephone production problem, the constraint limiting time on the assembly machine is binding:
$$
0.2\ desk + 0.4\ cell \le 400\\
desk = 300,\ cell = 850: \quad 0.2(300) + 0.4(850) = 60 + 340 = 400
$$
The same is true for the time limit on the painting machine:
$$
0.5\ desk + 0.4\ cell \le 490\\
0.5(300) + 0.4(850) = 150 + 340 = 490
$$
On the other hand, the requirement that at least 100 of each telephone type be produced is nonbinding because the left and right hand sides are not equal:
$$
desk >= 100\\
300 \neq 100
$$
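These binding and nonbinding checks are easy to reproduce numerically by substituting the optimal solution into each constraint:

```
# substitute the optimal solution (desk=300, cell=850) into each constraint
desk, cell = 300, 850

assembly_slack = 400 - (0.2 * desk + 0.4 * cell)   # ~0  -> binding
painting_slack = 490 - (0.5 * desk + 0.4 * cell)   # ~0  -> binding
desk_min_slack = desk - 100                        # 200 -> nonbinding
cell_min_slack = cell - 100                        # 750 -> nonbinding

print(assembly_slack, painting_slack, desk_min_slack, cell_min_slack)
```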
### Infeasibility
A model is infeasible when no solution exists that satisfies all the constraints. This may be because:
- The model formulation is incorrect.
- The data is incorrect.
- The model and data are correct, but represent a real-world conflict in the system being modeled.
When faced with an infeasible model, it's not always easy to identify the source of the infeasibility.
DOcplex helps you identify potential causes of infeasibilities, and it will also suggest changes to make the model feasible.
#### An example of infeasible problem
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/26.png?raw=true" >
</ul>
This graphic shows an example of an infeasible constraint set for the telephone production problem. Assume in this case that the person entering data had accidentally entered lower bounds on the production of 1100 instead of 100. The arrows show the direction of the feasible region with respect to each constraint. This data entry error moves the lower bounds on production higher than the upper bounds from the assembly and painting constraints, meaning that the feasible region is empty and there are no possible solutions.
#### Infeasible models in DOcplex
Calling `solve()` on an infeasible model returns None. Let's experiment with this in DOcplex. First, we take a copy of our model and add an extra infeasible constraint stating that desk telephone production must be at least 1100.
```
# create a new model, copy of m
im = m.copy()
# get the 'desk' variable of the new model from its name
idesk = im.get_var_by_name('desk')
# add a new (infeasible) constraint
im.add_constraint(idesk >= 1100);
# solve the new problem; we expect None as the model is now infeasible
ims = im.solve()
if ims is None:
print('- model is infeasible')
```
### Correcting infeasible models
To correct an infeasible model, you must use your knowledge of the real-world situation you are modeling.
If you know that the model is realizable, you can usually manually construct an example of a feasible solution and use it to determine where your model or data is incorrect. For example, the telephone production manager may input the previous month's production figures as a solution to the model and discover that they violate the erroneously entered bounds of 1100.
DOcplex can help perform infeasibility analysis, which can get very complicated in large models. In this analysis, DOcplex may suggest relaxing one or more constraints.
### Relaxing constraints by changing the model
In the case of LP models, the term “relaxation” refers to changing the right hand side of the constraint to allow some violation of the original constraint.
For example, a relaxation of the assembly time constraint is as follows:
$$
0.2 \ desk + 0.4\ cell <= 440
$$
Here, the right hand side has been relaxed from 400 to 440, meaning that you allow more time for assembly than originally planned.
#### Relaxing model by converting hard constraints to soft constraints
- A _soft_ constraint is a constraint that can be violated in some circumstances.
- A _hard_ constraint cannot be violated under any circumstances. So far, all constraints we have encountered are hard constraints.
Converting hard constraints to soft is one way to resolve infeasibilities.
The original hard constraint on assembly time is as follows:
$$
0.2 \ desk + 0.4 \ cell <= 400
$$
You can turn this into a soft constraint if you know that, for example, an additional 40 hours of overtime are available at an additional cost. First add an overtime term to the right-hand side:
$$
0.2 \ desk + 0.4 \ cell <= 400 + overtime
$$
Next, add a hard limit to the amount of overtime available:
$$
overtime <= 40
$$
Finally, add an additional cost to the objective to penalize use of overtime.
Assume that in this case overtime costs an additional $2/hour, then the new objective becomes:
$$
maximize\ 12 * desk + 20 * cell - 2 * overtime
$$
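As a sanity-check sketch (again using SciPy as an assumption, with overtime modeled as a third decision variable), the soft-constraint model can be solved directly:

```
from scipy.optimize import linprog

# variables: desk, cell, overtime (overtime capped at 40 hours by its bound)
c = [-12, -20, 2]                 # maximize 12*desk + 20*cell - 2*overtime
A_ub = [[0.2, 0.4, -1.0],         # 0.2*desk + 0.4*cell - overtime <= 400
        [0.5, 0.4,  0.0]]         # painting limit is unchanged
b_ub = [400, 490]
bounds = [(100, None), (100, None), (0, 40)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, -res.fun)   # all 40 overtime hours are used; profit exceeds 20600
```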
#### Implement the soft constraint model using DOcplex
First add an extra variable for overtime, with an upper bound of 40. This suffices to express the hard limit on overtime.
```
overtime = m.continuous_var(name='overtime', ub=40)
```
Modify the assembly time constraint by changing its right-hand side by adding overtime.
*Note*: this operation modifies the model by performing a _side-effect_ on the constraint object. DOcplex allows dynamic editing of model elements.
```
ct_assembly.rhs = 400 + overtime
```
Last, modify the objective expression to add the penalization term.
Note that we use the Python subtraction operator.
```
m.maximize(12*desk + 20 * cell - 2 * overtime)
```
And solve again using DOcplex:
```
s2 = m.solve()
m.print_solution()
```
### Unbounded Variable vs. Unbounded model
A variable is unbounded when one or both of its bounds is infinite.
A model is unbounded when its objective value can be increased or decreased without limit.
The fact that a variable is unbounded does not necessarily influence the solvability of the model and should not be confused with a model being unbounded.
An unbounded model is almost certainly not correctly formulated.
While infeasibility implies a model where constraints are too limiting, unboundedness implies a model where an important constraint is either missing or not restrictive enough.
By default, DOcplex variables are unbounded: their upper bound is infinite (but their lower bound is zero).
#### Unbounded feasible region
The telephone production problem would become unbounded if, for example, the constraints on the assembly and painting time were neglected. The feasible region would then look as in this diagram where the objective value can increase without limit, up to infinity, because there is no upper boundary to the region.
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/32.png?raw=true" >
</ul>
## Algorithms for solving LPs
The IBM® CPLEX® algorithms for solving LP problems include:
- Simplex Optimizer
- Dual-simplex Optimizer
- Barrier Optimizer
### The Simplex algorithm
The Simplex algorithm, developed by George Dantzig in 1947, was the first generalized algorithm for solving LP problems. It is the basis of many optimization algorithms. The simplex method is an iterative method. It starts with an initial feasible solution, and then tests to see if it can improve the result of the objective function. It continues until the objective function cannot be further improved.
The following diagram illustrates how the simplex algorithm traverses the boundary of the feasible region for the telephone production problem. The algorithm starts somewhere along the edge of the shaded feasible region and advances vertex by vertex until arriving at the vertex that also intersects the optimal objective line. Assume it starts at the red dot indicated on the diagram.
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/36.png?raw=true" >
</ul>
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/37.png?raw=true" >
</ul>
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/38.png?raw=true" >
</ul>
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/39.png?raw=true" >
</ul>
### The Revised Simplex algorithm
To improve the efficiency of the Simplex algorithm, George Dantzig and W. Orchard-Hays revised it in 1953. CPLEX uses the Revised Simplex algorithm, with a number of improvements. The CPLEX Optimizers are particularly efficient and can solve very large problems rapidly. You can tune some CPLEX Optimizer parameters to change the algorithmic behavior according to your needs.
### The Dual Simplex algorithm
#### The dual of a LP
The concept of duality is important in Linear Programming (LP). Every LP problem has an associated LP problem known as its _dual_. The dual of this associated problem is the original LP problem (known as the primal problem). If the primal problem is a minimization problem, then the dual problem is a maximization problem and vice versa.
#### A primal-dual pair
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/42.png?raw=true" >
</ul>
*Primal (P)*
--------------------
$max\ z=\sum_{i} c_{i}x_{i}$
*Dual (D)*
-------------------------------
$min\ w= \sum_{j}b_{j}y_{j}$
- Each constraint in the primal has an associated dual variable, $y_i$.
- Any feasible solution to D is an upper bound to P, and any feasible solution to P is a lower bound to D.
- In LP, the optimal objective values of D and P are equal, and occur where these bounds meet.
- The dual can help solve difficult primal problems by providing a bound that in the best case equals the optimal solution to the primal problem.
#### Dual prices
In any solution to the dual, the values of the dual variables are known as the dual prices, also called shadow prices.
For each constraint in the primal problem, its associated dual price indicates how much the dual objective will change with a unit change in the right hand side of the constraint.
The dual price of a non-binding constraint is zero. That is, changing the right hand side of the constraint will not affect the objective value.
The dual price of a binding constraint can help you make decisions regarding the constraint.
For example, the dual price of a binding resource constraint can be used to determine whether more of the resource should be purchased or not.
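For the telephone problem, both machine constraints are binding at the optimum (300, 850), so the dual prices can be computed by solving the small linear system $A^{t}y = c$ built from the binding rows. A NumPy sketch, for illustration:

```
import numpy as np

# binding constraints at the optimum (300, 850): assembly and painting
A = np.array([[0.2, 0.4],    # assembly coefficients
              [0.5, 0.4]])   # painting coefficients
c = np.array([12.0, 20.0])   # objective (profit) coefficients

# the dual prices y of the binding constraints satisfy A^t y = c
y = np.linalg.solve(A.T, c)
print(y)                     # dual prices of assembly and painting hours
```

Under this sketch, an extra assembly hour is worth about 43.33 in profit, which is why, in the earlier soft-constraint example, buying overtime at 2 per hour was worthwhile.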
#### The Dual Simplex algorithm
The Simplex algorithm works by finding a feasible solution and moving progressively toward optimality.
The Dual Simplex algorithm implicitly uses the dual to try and find an optimal solution to the primal as early as it can, and regardless of whether the solution is feasible or not.
It then moves from one vertex to another, gradually decreasing the infeasibility while maintaining optimality, until an optimal feasible solution to the primal problem is found.
In CPLEX, the Dual-Simplex Optimizer is the first choice for most LP problems.
### Basic solutions and basic variables
You learned earlier that the Simplex algorithm travels from vertex to vertex to search for the optimal solution.
A solution at a vertex is known as a _basic_ solution. Without getting into too much detail, it's worth knowing that part of the Simplex algorithm involves setting a subset of variables to zero at each iteration.
These variables are known as non-basic variables. The remaining variables are the _basic_ variables. The concepts of basic solutions and variables are relevant in the definition of reduced costs that follows next.
### Reduced Costs
The reduced cost of a variable gives an indication of the amount the objective will change with a unit increase in the variable value.
Consider the simplest form of an LP:
$
minimize\ c^{t}x\\
s.t. \\
Ax = b \\
x \ge 0
$
If $y$ represents the dual variables for a given basic solution, then the reduced costs are defined as:
$$
c - y^{t}A
$$
Such a basic solution is optimal if:
$$
c - y^{t}A \ge 0
$$
If all reduced costs for this LP are non-negative, it follows that the objective value can only increase with a change in the variable value, and therefore the solution (when minimizing) is optimal.
#### Getting reduced cost values with DOcplex
DOcplex lets you access the reduced costs of variables after a successful solve. Let's experiment with the two decision variables of our problem:
```
print('* desk variable has reduced cost: {0}'.format(desk.reduced_cost))
print('* cell variable has reduced cost: {0}'.format(cell.reduced_cost))
```
### Default optimality criteria for CPLEX optimizer
Because CPLEX Optimizer operates on finite precision computers, it uses an optimality tolerance to test the reduced costs.
The default optimality tolerance is 1e-6, with the optimality criterion for the simplest form of an LP then being:
$$
c - y^{t}A > -10^{-6}
$$
You can adjust this optimality tolerance, for example if the algorithm takes very long to converge and has already achieved a solution sufficiently close to optimality.
### Reduced Costs and multiple optimal solutions
In the earlier example you saw how one can visualize multiple optimal solutions for an LP with two variables.
For larger LPs, the reduced costs can be used to determine whether multiple optimal solutions exist. Multiple optimal solutions exist when one or more non-basic variables with a zero reduced cost exist in an optimal solution (that is, variable values that can change without affecting the objective value).
In order to determine whether multiple optimal solutions exist, you can examine the values of the reduced costs with DOcplex.
### Slack values
For any solution, the difference between the left and right hand sides of a constraint is known as the _slack_ value for that constraint.
For example, if a constraint states that f(x) <= 100, and in the solution f(x) = 80, then the slack value of this constraint is 20.
In the earlier example, you learned about binding and non-binding constraints. For example, f(x) <= 100 is binding if f(x) = 100, and non-binding if f(x) = 80.
The slack value for a binding constraint is always zero, that is, the constraint is met exactly.
You can determine which constraints are binding in a solution by examining the slack values with DOcplex.
This might help to better interpret the solution and help suggest which constraints may benefit from a change in bounds or a change into a soft constraint.
#### Accessing slack values with DOcplex
As an example, let's examine the slack values of some constraints in our problem, after we revert the soft-constraint change.
```
# revert soft constraints
ct_assembly.rhs = 440
s3 = m.solve()
# now get slack value for assembly constraint: expected value is 40
print('* slack value for assembly time constraint is: {0}'.format(ct_assembly.slack_value))
# get slack value for painting time constraint, expected value is 0.
print('* slack value for painting time constraint is: {0}'.format(ct_painting.slack_value))
```
### Degeneracy
It is possible that multiple non-optimal solutions with the same objective value exist.
As the simplex algorithm attempts to move in the direction of an improved objective value, it might happen that the algorithm starts cycling between non-optimal solutions with equivalent objective values. This is known as _degeneracy_.
Modern LP solvers, such as CPLEX Simplex Optimizer, have built-in mechanisms to help escape such cycling by using perturbation techniques involving the variable bounds.
If the default algorithm does not break the degenerate cycle, it's a good idea to try some other algorithms, for example the Dual-simplex Optimizer. Problems that are primal degenerate are often not dual degenerate, and vice versa.
#### Setting a LP algorithm with DOcplex
Users can change the algorithm by editing the `lpmethod` parameter of the model.
We won't go into details here; it suffices to know that this parameter accepts an integer from 0 to 6, where 0 denotes an automatic choice of the algorithm, 1 is primal simplex, 2 is dual simplex, and 4 is barrier.
For example, choosing the barrier algorithm is done by setting this parameter to 4. We access the `parameters` property of the model and from there, assign the `lpmethod` parameter.
```
m.parameters.lpmethod = 4
m.solve(log_output=True)
```
### Barrier methods
Most of the CPLEX Optimizers for MP call upon the basic simplex method or some variation of it.
Some, such as the Barrier Optimizer, use alternative methods.
In graphical terms, the Simplex Algorithm starts along the edge of the feasible region and searches for an optimal vertex.
The barrier method starts somewhere inside the feasible region – in other words, it avoids the “barrier” that is created by the constraints, and burrows through the feasible region to find the optimal solution.
In its search, the method uses what is known as a predictor-corrector algorithm that constantly adjusts its path through the center of the feasible region (the central path).
This diagram shows how the barrier method works compared to the simplex method. As you can see, the simplex method traverses the edge of the feasible region, while the barrier method moves through the interior, with a predictor-corrector determining the path. In general, it’s a good idea to experiment with different algorithms in CPLEX when trying to improve performance.
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/52.png?raw=true" >
</ul>
### Presolve
CPLEX Optimizer provides a _presolve_ procedure.
Presolve evaluates the model formulation before solving it, and attempts to reduce the size of the problem that is sent to the solver engine.
A reduction in problem size typically translates to a reduction in total run time.
For example, a real problem presented to CPLEX Optimizer with approximately 160,000 constraints and 596,000 decision variables, was reduced by presolve to a problem with 27,000 constraints and 150,000 decision variables.
The presolve time was only 1.32 seconds and reduced the solution time from nearly half an hour to under 25 seconds.
#### An example of presolve operations
Let's consider the following Linear problem:
$
maximize:\\
[1]\ 2x_{1} + 3x_{2} - x_{3} - x_{4}\\
subject\ to:\\
[2]\ x_{1} + x_{2} + x_{3} - 2x_{4} <= 4\\
[3]\ -x_{1} - x_{2} + x_{3} - x_{4} <= 1\\
[4]\ x_{1} + x_{4} <= 3\\
[5]\ x_{1}, x_{2}, x_{3}, x_{4} >= 0
$
- Because $x_{3}$ has a negative coefficient in the maximization objective, the solver will drive $x_{3}$ as low as possible.
- In constraints [2] and [3], $x_{3}$ has positive coefficients and the constraints are $\le$, so decreasing $x_{3}$ only loosens them. Thus $x_{3}$ can be fixed at 0 and removed from the problem.
- With $x_{3}=0$, all the coefficients in constraint [3] are negative. Because its left-hand side can then never be positive for nonnegative variables, any assignment of values satisfies the constraint. The constraint is redundant and can be removed.
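These reductions are easy to sanity-check numerically. The sketch below (plain Python, independent of CPLEX) fixes $x_{3}=0$ and samples random nonnegative points to confirm that constraint [3] can never be violated:

```python
import random

random.seed(0)

def constraint3_lhs(x1, x2, x3, x4):
    # left-hand side of [3]: -x1 - x2 + x3 - x4
    return -x1 - x2 + x3 - x4

# With x3 fixed at 0, every coefficient on the LHS is negative,
# so the LHS is <= 0 <= 1 for any nonnegative point.
for _ in range(10_000):
    x1, x2, x4 = (random.uniform(0, 100) for _ in range(3))
    assert constraint3_lhs(x1, x2, 0.0, x4) <= 1

print("constraint [3] is redundant once x3 = 0")
```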
# Summary
Having completed this notebook, you should be able to:
- Describe the characteristics of an LP in terms of the objective, decision variables and constraints
- Formulate a simple LP model on paper
- Conceptually explain the following terms in the context of LP:
- dual
- feasible region
- infeasible
- unbounded
- slacks
- reduced costs
- degeneracy
- Describe some of the algorithms used to solve LPs
- Explain what presolve does
- Write a simple LP model with DOcplex
## References
* [CPLEX Modeling for Python documentation](http://ibmdecisionoptimization.github.io/docplex-doc/)
* [IBM Decision Optimization](https://www.ibm.com/analytics/decision-optimization)
* Need help with DOcplex or to report a bug? Please go [here](https://stackoverflow.com/questions/tagged/docplex).
* Contact us at dofeedback@wwpdl.vnet.ibm.com.
# Introduction to Deep Learning with PyTorch
In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
## Neural Networks
Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.
<img src="assets/simple_neuron.png" width=400px>
Mathematically this looks like:
$$
\begin{align}
y &= f(w_1 x_1 + w_2 x_2 + b) \\
y &= f\left(\sum_i w_i x_i +b \right)
\end{align}
$$
With vectors this is the dot/inner product of two vectors:
$$
h = \begin{bmatrix}
x_1 \, x_2 \cdots x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{bmatrix}
$$
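Before reaching for PyTorch, the same single-unit computation can be sketched in plain Python. The inputs, weights, and bias below are made-up illustration values, not the notebook's generated data:

```python
import math

def sigmoid(z):
    # logistic activation: 1 / (1 + e^{-z})
    return 1 / (1 + math.exp(-z))

# y = f(sum_i w_i x_i + b) for a single unit
x = [0.5, -1.0, 2.0]   # inputs (hypothetical)
w = [0.1, 0.4, -0.3]   # weights (hypothetical)
b = 0.2                # bias (hypothetical)

z = sum(w_i * x_i for w_i, x_i in zip(w, x)) + b
y = sigmoid(z)
print(round(y, 4))
```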
## Tensors
It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.
<img src="assets/tensor_examples.svg" width=600px>
With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
```
# First, import PyTorch
import torch
def activation(x):
    """ Sigmoid activation function

        Arguments
        ---------
        x: torch.Tensor
    """
    return 1/(1+torch.exp(-x))
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 3 random normal variables
features = torch.randn((1, 5))
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
# and a true bias term
bias = torch.randn((1, 1))
```
Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using real data. Going through each relevant line:
`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.
Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.
PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
```
## Calculate the output of this network using the weights and bias tensors
output = activation(torch.sum(features * weights) + bias)
y_hat = activation((features * weights).sum() + bias)
print(output)
```
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.
Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error
```python
>> torch.mm(features, weights)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-15d592eb5279> in <module>()
----> 1 torch.mm(features, weights)
RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```
As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.
**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.
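The shape rule itself has nothing to do with PyTorch. A plain-Python matrix multiply makes it concrete: a `(1, 5)` matrix times a `(5, 1)` matrix yields a `(1, 1)` result, while `(1, 5)` times `(1, 5)` is undefined (the numbers below are illustrative):

```python
def matmul(A, B):
    # naive matrix multiply on nested lists; raises if inner dims disagree
    rows_a, cols_a = len(A), len(A[0])
    rows_b, cols_b = len(B), len(B[0])
    if cols_a != rows_b:
        raise ValueError(f"size mismatch: ({rows_a} x {cols_a}) @ ({rows_b} x {cols_b})")
    return [[sum(A[i][k] * B[k][j] for k in range(cols_a)) for j in range(cols_b)]
            for i in range(rows_a)]

features = [[1.0, 2.0, 3.0, 4.0, 5.0]]                   # shape (1, 5)
weights_col = [[w] for w in [0.1, 0.2, 0.3, 0.4, 0.5]]   # reshaped to (5, 1)

print(matmul(features, weights_col))  # a (1, 1) result

try:
    matmul(features, [[0.1, 0.2, 0.3, 0.4, 0.5]])  # (1, 5) @ (1, 5) fails
except ValueError as e:
    print(e)
```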
There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).
* `weights.reshape(a, b)` will return a tensor with the same data as `weights` and size `(a, b)`; sometimes it shares memory with the original, and sometimes it is a clone, as in it copies the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.
I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.
> **Exercise**: Calculate the output of our little network using matrix multiplication.
```
## Calculate the output of this network using matrix multiplication
output = activation(torch.matmul(features, weights.view(5, 1)) + bias)
print(output)
```
### Stack them up!
That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.
<img src='assets/multilayer_diagram_weights.png' width=450px>
The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated
$$
\vec{h} = [h_1 \, h_2] =
\begin{bmatrix}
x_1 \, x_2 \cdots \, x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_{11} & w_{12} \\
w_{21} &w_{22} \\
\vdots &\vdots \\
w_{n1} &w_{n2}
\end{bmatrix}
$$
The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply
$$
y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)
$$
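A framework-free sketch of this two-layer forward pass makes the matrix shapes explicit. The input, `W1`, and `W2` values here are hypothetical, chosen only to exercise the shapes `(1, 3) @ (3, 2) @ (2, 1)`:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def matvec_row(x, W):
    # x: row vector of length n; W: n x m matrix -> row vector of length m
    n, m = len(W), len(W[0])
    return [sum(x[i] * W[i][j] for i in range(n)) for j in range(m)]

x = [1.0, -2.0, 0.5]                          # 1 x 3 input (hypothetical)
W1 = [[0.2, -0.1], [0.4, 0.3], [-0.5, 0.6]]   # 3 x 2 hidden weights (hypothetical)
W2 = [[0.7], [-0.2]]                          # 2 x 1 output weights (hypothetical)

h = [sigmoid(z) for z in matvec_row(x, W1)]   # hidden activations, length 2
y = [sigmoid(z) for z in matvec_row(h, W2)]   # network output, length 1
print(y)
```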
```
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 3 random normal variables
features = torch.randn((1, 3))
# Define the size of each layer in our network
n_input = features.shape[1] # Number of input units, must match number of input features
n_hidden = 2 # Number of hidden units
n_output = 1 # Number of output units
# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)
# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
```
> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
```
## Your solution here
output_W1 = activation(torch.matmul(features , W1) + B1)
output = activation(torch.matmul(output_W1, W2) + B2)
print(output)
```
If you did this correctly, you should see the output `tensor([[ 0.3171]])`.
The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units and the more layers a network has, the better able it is to learn from data and make accurate predictions.
## Numpy to Torch and back
Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
```
import numpy as np
a = np.random.rand(4,3)
a
b = torch.from_numpy(a)
b
b.numpy()
```
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
```
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)
# Numpy array matches new values from Tensor
a
```
# CIFAR10 Federated Mobilenet Server Side
This code is the server part of a CIFAR10 federated MobileNet setup with **multiple** clients and a server.
## Setting variables
```
rounds = 100
local_epoch = 1
users = 1 # number of clients
import os
import h5py
import socket
import struct
import pickle
import sys
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
from threading import Thread
from threading import Lock
import time
from tqdm import tqdm
```
## Cuda
```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
```
## PyTorch layer modules for the MobileNet network
### `Conv2d` layer
- `torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, groups=1)`
- Setting `groups=in_channels` gives a depthwise convolution.
### `BatchNorm2d` layer
- `torch.nn.BatchNorm2d(num_features)`
### `ReLU` layer
- `torch.nn.ReLU()`
### `Linear` layer
- `torch.nn.Linear(in_features, out_features, bias=True)`
### `Dropout` layer
- `torch.nn.Dropout(p=0.5)`
```
# -*- coding: utf-8 -*-
"""
Created on Thu Nov 1 14:23:31 2018
@author: tshzzz
"""
import torch
import torch.nn as nn
def conv_dw(inplane, outplane, stride=1):
    # depthwise 3x3 convolution followed by a pointwise 1x1 convolution
    return nn.Sequential(
        nn.Conv2d(inplane, inplane, kernel_size=3, groups=inplane, stride=stride, padding=1),
        nn.BatchNorm2d(inplane),
        nn.ReLU(),
        nn.Conv2d(inplane, outplane, kernel_size=1, groups=1, stride=1),
        nn.BatchNorm2d(outplane),
        nn.ReLU()
    )

def conv_bw(inplane, outplane, kernel_size=3, stride=1):
    # standard convolution block
    return nn.Sequential(
        nn.Conv2d(inplane, outplane, kernel_size=kernel_size, groups=1, stride=stride, padding=1),
        nn.BatchNorm2d(outplane),
        nn.ReLU()
    )

class MobileNet(nn.Module):
    def __init__(self, num_class=10):
        super(MobileNet, self).__init__()

        layers = []
        layers.append(conv_bw(3, 32, 3, 1))
        layers.append(conv_dw(32, 64, 1))
        layers.append(conv_dw(64, 128, 2))
        layers.append(conv_dw(128, 128, 1))
        layers.append(conv_dw(128, 256, 2))
        layers.append(conv_dw(256, 256, 1))
        layers.append(conv_dw(256, 512, 2))

        for i in range(5):
            layers.append(conv_dw(512, 512, 1))
        layers.append(conv_dw(512, 1024, 2))
        layers.append(conv_dw(1024, 1024, 1))

        self.classifer = nn.Sequential(
            nn.Dropout(0.5),
            nn.Linear(1024, num_class)
        )
        self.feature = nn.Sequential(*layers)

    def forward(self, x):
        out = self.feature(x)
        out = out.mean(3).mean(2)  # global average pooling over spatial dims
        out = out.view(-1, 1024)
        out = self.classifer(out)
        return out
mobile_net = MobileNet()
mobile_net.to('cpu')
```
## Variables
```
import copy
clientsoclist = [0]*users
start_time = 0
weight_count = 0
global_weights = copy.deepcopy(mobile_net.state_dict())
datasetsize = [0]*users
weights_list = [0]*users
lock = Lock()
```
## Communication overhead
```
total_sendsize_list = []
total_receivesize_list = []
client_sendsize_list = [[] for i in range(users)]
client_receivesize_list = [[] for i in range(users)]
train_sendsize_list = []
train_receivesize_list = []
```
## Socket initialization
### Set host address and port number
### Required socket functions
```
def send_msg(sock, msg):
    # prefix each message with a 4-byte length in network byte order
    msg = pickle.dumps(msg)
    l_send = len(msg)
    msg = struct.pack('>I', l_send) + msg
    sock.sendall(msg)
    return l_send

def recv_msg(sock):
    # read message length and unpack it into an integer
    raw_msglen = recvall(sock, 4)
    if not raw_msglen:
        return None
    msglen = struct.unpack('>I', raw_msglen)[0]
    # read the message data
    msg = recvall(sock, msglen)
    msg = pickle.loads(msg)
    return msg, msglen

def recvall(sock, n):
    # helper function to receive n bytes or return None if EOF is hit
    data = b''
    while len(data) < n:
        packet = sock.recv(n - len(data))
        if not packet:
            return None
        data += packet
    return data

import copy

def average_weights(w, datasize):
    """
    Returns the average of the weights, weighted by each client's dataset size.
    """
    for i, data in enumerate(datasize):
        for key in w[i].keys():
            w[i][key] *= float(data)

    w_avg = copy.deepcopy(w[0])

    # when every client uses the same kind of device
    for key in w_avg.keys():
        for i in range(1, len(w)):
            w_avg[key] += w[i][key]
        w_avg[key] = torch.div(w_avg[key], float(sum(datasize)))

    # when clients use various devices (cpu, gpu) use this instead
    #
    # for key, val in w_avg.items():
    #     common_device = val.device
    #     break
    # for key in w_avg.keys():
    #     for i in range(1, len(w)):
    #         if common_device == 'cpu':
    #             w_avg[key] += w[i][key].cpu()
    #         else:
    #             w_avg[key] += w[i][key].cuda()
    #     w_avg[key] = torch.div(w_avg[key], float(sum(datasize)))

    return w_avg
```
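The weighted averaging in `average_weights` is the FedAvg rule: each client's weights are scaled by its dataset size, summed, and divided by the total number of samples. A toy check with plain dicts of floats (standing in for state dicts, with a hypothetical key name) illustrates the arithmetic:

```python
import copy

def average_weights_plain(w, datasize):
    # same arithmetic as average_weights, on dicts of floats
    w = copy.deepcopy(w)
    for i, n in enumerate(datasize):
        for key in w[i]:
            w[i][key] *= float(n)
    w_avg = copy.deepcopy(w[0])
    for key in w_avg:
        for i in range(1, len(w)):
            w_avg[key] += w[i][key]
        w_avg[key] /= float(sum(datasize))
    return w_avg

client_a = {'layer.weight': 1.0}   # hypothetical client weights
client_b = {'layer.weight': 3.0}

# client_b holds 3x as much data, so it should dominate the average:
avg = average_weights_plain([client_a, client_b], datasize=[1, 3])
print(avg)  # {'layer.weight': 2.5}
```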
## Thread define
## Receive users before training
```
def run_thread(func, num_user):
    global clientsoclist
    global start_time

    thrs = []
    for i in range(num_user):
        conn, addr = s.accept()
        print('Connected with', addr)
        # append client socket on list
        clientsoclist[i] = conn
        args = (i, num_user, conn)
        thread = Thread(target=func, args=args)
        thrs.append(thread)
        thread.start()
    print("timer start!")
    start_time = time.time()  # store start time
    for thread in thrs:
        thread.join()
    end_time = time.time()  # store end time
    print("TrainingTime: {} sec".format(end_time - start_time))

def receive(userid, num_users, conn):  # thread for receiving from clients
    global weight_count
    global datasetsize

    msg = {
        'rounds': rounds,
        'client_id': userid,
        'local_epoch': local_epoch
    }

    datasize = send_msg(conn, msg)  # send rounds, client id and local epoch
    total_sendsize_list.append(datasize)
    client_sendsize_list[userid].append(datasize)

    train_dataset_size, datasize = recv_msg(conn)  # get size of the client's train dataset
    total_receivesize_list.append(datasize)
    client_receivesize_list[userid].append(datasize)

    with lock:
        datasetsize[userid] = train_dataset_size
        weight_count += 1

    train(userid, train_dataset_size, num_users, conn)
```
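The length-prefixed framing used by `send_msg`/`recv_msg` can be exercised locally with an in-process socket pair, no clients or GPUs required. This is a self-contained sketch that re-implements the same framing (4-byte big-endian length, then a pickled payload):

```python
import pickle
import socket
import struct

def send_msg(sock, msg):
    # prefix each pickled message with a 4-byte big-endian length
    payload = pickle.dumps(msg)
    sock.sendall(struct.pack('>I', len(payload)) + payload)

def recvall(sock, n):
    # receive exactly n bytes, or None on EOF
    data = b''
    while len(data) < n:
        packet = sock.recv(n - len(data))
        if not packet:
            return None
        data += packet
    return data

def recv_msg(sock):
    msglen = struct.unpack('>I', recvall(sock, 4))[0]
    return pickle.loads(recvall(sock, msglen))

a, b = socket.socketpair()  # in-process stand-in for server/client sockets
send_msg(a, {'rounds': 100, 'client_id': 0, 'local_epoch': 1})
print(recv_msg(b))
```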
## Train
```
def train(userid, train_dataset_size, num_users, client_conn):
    global weights_list
    global global_weights
    global weight_count
    global mobile_net
    global val_acc

    for r in range(rounds):
        with lock:
            if weight_count == num_users:
                for i, conn in enumerate(clientsoclist):
                    datasize = send_msg(conn, global_weights)
                    total_sendsize_list.append(datasize)
                    client_sendsize_list[i].append(datasize)
                    train_sendsize_list.append(datasize)
                    weight_count = 0

        client_weights, datasize = recv_msg(client_conn)
        total_receivesize_list.append(datasize)
        client_receivesize_list[userid].append(datasize)
        train_receivesize_list.append(datasize)

        weights_list[userid] = client_weights
        print("User" + str(userid) + "'s Round " + str(r + 1) + " is done")

        with lock:
            weight_count += 1
            if weight_count == num_users:
                # average
                global_weights = average_weights(weights_list, datasetsize)

host = socket.gethostbyname(socket.gethostname())
port = 10080
print(host)

s = socket.socket()
s.bind((host, port))
s.listen(5)
```
### Open the server socket
```
run_thread(receive, users)
end_time = time.time() # store end time
print("TrainingTime: {} sec".format(end_time - start_time))
```
## Print all of communication overhead
```
# def communication_overhead():
print('\n')
print('---total_sendsize_list---')
total_size = 0
for size in total_sendsize_list:
    # print(size)
    total_size += size
print("total_sendsize size: {} bytes".format(total_size))
print("number of total_send: ", len(total_sendsize_list))
print('\n')

print('---total_receivesize_list---')
total_size = 0
for size in total_receivesize_list:
    # print(size)
    total_size += size
print("total receive sizes: {} bytes".format(total_size))
print("number of total receive: ", len(total_receivesize_list))
print('\n')

for i in range(users):
    print('---client_sendsize_list(user{})---'.format(i))
    total_size = 0
    for size in client_sendsize_list[i]:
        # print(size)
        total_size += size
    print("total client_sendsizes(user{}): {} bytes".format(i, total_size))
    print("number of client_send(user{}): ".format(i), len(client_sendsize_list[i]))
    print('\n')

    print('---client_receivesize_list(user{})---'.format(i))
    total_size = 0
    for size in client_receivesize_list[i]:
        # print(size)
        total_size += size
    print("total client_receive sizes(user{}): {} bytes".format(i, total_size))
    print("number of client_receive(user{}): ".format(i), len(client_receivesize_list[i]))
    print('\n')

print('---train_sendsize_list---')
total_size = 0
for size in train_sendsize_list:
    # print(size)
    total_size += size
print("total train_sendsizes: {} bytes".format(total_size))
print("number of train_send: ", len(train_sendsize_list))
print('\n')

print('---train_receivesize_list---')
total_size = 0
for size in train_receivesize_list:
    # print(size)
    total_size += size
print("total train_receivesizes: {} bytes".format(total_size))
print("number of train_receive: ", len(train_receivesize_list))
print('\n')

root_path = '../../models/cifar10_data'

from torch.utils.data import Dataset, DataLoader

transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616))])
```
## Making Batch Generator
```
trainset = torchvision.datasets.CIFAR10 (root=root_path, train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10 (root=root_path, train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```
### `DataLoader` for batch generating
`torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False)`
### Number of total batches
```
train_total_batch = len(trainloader)
print(train_total_batch)
test_batch = len(testloader)
print(test_batch)
mobile_net.load_state_dict(global_weights)
mobile_net.eval()
mobile_net = mobile_net.to(device)
lr = 0.001
optimizer = optim.SGD(mobile_net.parameters(), lr=lr, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```
## Accuracy of train and each of classes
```
# train acc
with torch.no_grad():
    corr_num = 0
    total_num = 0
    train_loss = 0.0
    for j, trn in enumerate(trainloader):
        trn_x, trn_label = trn
        trn_x = trn_x.to(device)
        trn_label = trn_label.clone().detach().long().to(device)

        trn_output = mobile_net(trn_x)
        loss = criterion(trn_output, trn_label)
        train_loss += loss.item()
        model_label = trn_output.argmax(dim=1)
        corr = trn_label[trn_label == model_label].size(0)
        corr_num += corr
        total_num += trn_label.size(0)
    print("train_acc: {:.2f}%, train_loss: {:.4f}".format(corr_num / total_num * 100, train_loss / len(trainloader)))

# test acc
with torch.no_grad():
    corr_num = 0
    total_num = 0
    val_loss = 0.0
    for j, val in enumerate(testloader):
        val_x, val_label = val
        val_x = val_x.to(device)
        val_label = val_label.clone().detach().long().to(device)

        val_output = mobile_net(val_x)
        loss = criterion(val_output, val_label)
        val_loss += loss.item()
        model_label = val_output.argmax(dim=1)
        corr = val_label[val_label == model_label].size(0)
        corr_num += corr
        total_num += val_label.size(0)
    accuracy = corr_num / total_num * 100
    test_loss = val_loss / len(testloader)
    print("test_acc: {:.2f}%, test_loss: {:.4f}".format(accuracy, test_loss))

# accuracy of each class
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))

with torch.no_grad():
    for data in testloader:
        x, labels = data
        x = x.to(device)
        labels = labels.to(device)
        outputs = mobile_net(x)
        labels = labels.long()
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(len(labels)):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1

for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))
# Let's quickly save our trained model:
PATH = './cifar10_fd_mobile.pth'
torch.save(mobile_net.state_dict(), PATH)
end_time = time.time() # store end time
print("WorkingTime: {} sec".format(end_time - start_time))
```
Before running this notebook, run `pip install pylandtemp` in your terminal, or simply copy `!pip install pylandtemp` into a cell and run it.
# 1. Import python dependencies
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
import rasterio.plot
import rasterio
```
# 2. Location
For this tutorial, we’ll use the NIR and Red bands from a Landsat-8 scene above part of the central valley and the Sierra Nevada in California. We’ll be using Level 1TP datasets, orthorectified, map-projected images containing radiometrically calibrated data.
## Load RGB images for true color visualization
```
# Amazon S3 cloud folder holding the Landsat scene
url = 'http://landsat-pds.s3.amazonaws.com/c1/L8/042/034/LC08_L1TP_042034_20170616_20170629_01_T1/'
# RGB images file names with file extensions
redband = 'LC08_L1TP_042034_20170616_20170629_01_T1_B{}.TIF'.format(4)
greenband = 'LC08_L1TP_042034_20170616_20170629_01_T1_B{}.TIF'.format(3)
blueband = 'LC08_L1TP_042034_20170616_20170629_01_T1_B{}.TIF'.format(2)
with rasterio.open(url + redband) as src:
    redImage = src.read(1).astype('f4')
with rasterio.open(url + greenband) as src:
    greenImage = src.read(1).astype('f4')
with rasterio.open(url + blueband) as src:
    blueImage = src.read(1).astype('f4')

# stack the three bands into a (3, rows, cols) array for plotting
rgb_image = np.concatenate(
    (np.expand_dims(redImage, axis=0),
     np.expand_dims(greenImage, axis=0),
     np.expand_dims(blueImage, axis=0)),
    axis=0
)

# scale to [0, 1] for display
normed_rgb = rgb_image / np.max(rgb_image)
```
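The `normed_rgb` step is a simple max-scaling so that pixel values fall in [0, 1] for display. A minimal check on a toy array (hypothetical values) shows the effect:

```python
import numpy as np

toy = np.array([[0.0, 50.0], [100.0, 200.0]])
normed = toy / np.max(toy)  # divide by the global maximum

print(normed.min(), normed.max())  # values now lie in [0, 1]
```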
## Get a map view of the location
```
from rasterio.plot import show
figure(figsize = (20, 6), dpi = 80)
show(normed_rgb)
```
We can see here that a snowy mountain range crosses from the top left to the bottom right of the image. We expect the temperature to be low on this snowy surface.
# 3. Bands needed for land surface temperature computation (split window)
- Red: Band 4
- Near-Infrared (NIR): Band 5
- Thermal infrared 1: Band 10
- Thermal infrared 2: Band 11
The Red band has already been loaded above.
Here, I have used `rasterio` to load the images/bands needed.
```
nirband = 'LC08_L1TP_042034_20170616_20170629_01_T1_B{}.TIF'.format(5)
tempband10 = 'LC08_L1TP_042034_20170616_20170629_01_T1_B{}.TIF'.format(10)
tempband11 = 'LC08_L1TP_042034_20170616_20170629_01_T1_B{}.TIF'.format(11)
with rasterio.open(url + nirband) as src:
    nirImage = src.read(1).astype('f4')
with rasterio.open(url + tempband10) as src:
    tempImage10 = src.read(1).astype('f4')
with rasterio.open(url + tempband11) as src:
    tempImage11 = src.read(1).astype('f4')
```
# 8. Compute land surface temperature
```
from pylandtemp import split_window
```
### Split window
Available methods to choose from include:
`'jiminez-munoz'`: Jiminez-Munoz et al, 2008
`'kerr'`: Kerr Y et al, 2004
`'mc-clain'`: McMillin L. M. , 1975
`'price'`: Price J. C., 1984
`'sobrino-1993'`: Sobrino J. A. et al, 1993
```
method = 'jiminez-munoz'
lst_image_split_window = split_window(
    tempImage10,
    tempImage11,
    redImage,
    nirImage,
    lst_method=method,
    emissivity_method='avdan',
    unit='celcius'
)
```
##### Visualize the Land Surface Temperature obtained
```
plt.imshow(lst_image_split_window, cmap='viridis')
plt.colorbar()
plt.title('{}\n LST in Celsius {}'.format(f'{method} method', lst_image_split_window.shape))
plt.xlabel('Column #')
plt.ylabel('Row #')
plt.show()
```
# Part 2: Administrating Portal and Server
## Managing Online and Portal
<img src="./img/portal.png" width=50%/>
- ArcGIS Online (AGOL)
- Portal for ArcGIS
### Administration Overview
<img src="./img/gis.admin.png"/>
### Portal vs AGOL
<img src="./img/org_differences.jpg" />
```
from arcgis.gis import GIS
gis_agol = GIS(profile='agol_profile')
```
### Licensing
- Supports assigning licenses for Esri premium apps
- The licenses include:
+ ArcGIS Pro, Navigator for ArcGIS, AppStudio for ArcGIS Standard, Drone2Map for ArcGIS, ArcGIS Business Analyst web app, ArcGIS Community Analyst, GeoPlanner for ArcGIS
**Listing Licenses**
```
license = gis_agol.admin.license
license.all()
```
**Getting a Single License**
```
pro_license = license.get('ArcGIS Pro')
pro_license
```
**Viewing Licensing Reports**
- provides both visual and tabular reports
**Tabular Reports**
```
pro_license.report
```
**Demo: Assigning Entitlements**
```
user = gis_agol.users.create(firstname="Demo", lastname="Account2",
                             user_type="creatorUT",
                             username="entitleduser",
                             password="flimFLAM2!",
                             email="fakeperson@esri.com")
r = pro_license.report
print(r[r['Entitlement'] == 'desktopStdN'])
pro_license.assign(username=user.username, entitlements="desktopStdN")
r = pro_license.report
r[r['Entitlement'] == 'desktopStdN']
```
**Revoking Entitlements**
- `revoke` removes a license for the product assigned to a user.
```
pro_license.revoke(username="entitleduser", entitlements="*")
r = pro_license.report
r[r['Entitlement'] == 'desktopStdN']
```
#### Releasing ArcGIS Pro License for Offline Users
- Users can check out licenses for offline use, but a checked-out license is unavailable to anyone else
- Administrators can force the release of such a license on ArcGIS Enterprise
```python
licenses.release_license('username')
{'status' : 'success'}
```
```
user = gis_agol.users.get("entitleduser")
for l in gis_agol.admin.license.all():
    entitle = l.user_entitlement(username=user.username)
    if 'entitlements' in entitle:
        print(entitle['entitlements'])
        l.revoke(username=user.username, entitlements=entitle['entitlements'], suppress_email=True)
user.delete()
```
### Customizing the UX
<img src="http://esri.github.io/arcgis-python-api/notebooks/nbimages/guide_gis_ux_default_install_portal.PNG" />
- An organization's look and feel can be fully customized
- When migrating from one version to another, script! Don't click!
```
gis = GIS(profile='geodevagol')
#%load reset_site.py
gis.admin.ux.name = "ArcGIS Enterprise"
gis.admin.ux.description = "<br/>"
```
**Get the UX Manager**
```
ux = gis.admin.ux
```
**Modifying the Site Name**
- Get/Set operation
```
ux.name
ux.name = "Dev Site for GeoSaurus"
ux.name
```
**Set the Banner**
```
ux.set_banner(banner_file='./img/ux/banner.jpg')
```
##### Results:
<img src="./img/banner_result.jpg" />
#### Site Description
```
ux.description_visibility = True
ux.description
ux.description = "This site is used in assisting the ArcGIS API for Python develop new functionality.<div><br /></div>"
ux.description
```
#### Results:
<img src="./img/desc_result.jpg" />
#### Setting featured content
- designate the contents of a group to show on the featured content home page
```
grp = gis.groups.search("DevGroup")[0]
grp
ux.featured_content = {'group' : grp}
```
#### Result:
<img src="./img/fc_result.jpg" />
### Credit Management
- The currency of ArcGIS Online
- Allows users to do analysis, enrich data, and much more
- Hosting services consumes credits as well.
**Access the CreditManager**
```
cm = gis_agol.admin.credits
cm
```
**View Available Credits**
- Get the total number of credits on an Organization
```
cm.credits
```
**Managing Credits**
- Setup budgeting rules by enabling management
```
if not cm.is_enabled:
    print(cm.enable())
cm.is_enabled
```
**Checking the Default Limit**
- The default credit setting is unlimited (-1)
```
cm.default_limit
```
**Demo: Setting the default user credits**
```
# %load ./solutions/setting_credits.py
if cm.default_limit == -1:
    cm.default_limit = 500
cm.default_limit
```
**Allocating Credits to a User**
- Assignment of credits beyond the default is sometimes necessary
```
cm.allocate(username=gis_agol.users.me.username,
credits=10000)
```
**Give Specific User Unlimited Credits**
```
cm.allocate(username=gis_agol.users.me.username,
credits=-1)
```
**Disabling Credit Management**
- `disable()` on credit manager will disable any credit monitoring
- **THIS IS A BAD IDEA**
### Metadata Management
- Information about information
- Allows users to provide robust information about datasets beyond the description, title, tags, etc...
#### Enabling Metadata
```
meta = gis_agol.admin.metadata
meta
if not meta.is_enabled:
    meta.enable()
meta.is_enabled
```
#### Disabling Metadata
```
meta.disable()
```
### User History
- Examine what users are doing with the organization
- Ensure specific content does not change or get modified
```
%matplotlib inline
import datetime
import pandas as pd
then = datetime.datetime.now() - datetime.timedelta(days=4)
df = pd.read_csv(gis_agol.admin.history(start_date=then, num=100000))
df.action.value_counts().plot.bar()
```
### Portal Logs
- A record of events that occurred
- Used for monitoring and troubleshooting the portal
#### Examples of Logs Incidents:
+ Installation and upgrade events, such as authorizing the software and creating the portal website
+ Publishing of services and items, such as hosted services, web maps, and data items
+ Content management events, such as sharing items, changing item ownership, and adding, updating, moving, and deleting items
+ Security events, such as users logging in to the portal, creating, deleting, and disabling users, creating and changing user roles, updating HTTP and HTTPS settings, import and export of security certificates, and updating the portal's identity store
+ Organization management events, such as adding and configuring groups, adding or removing users from a group, configuration of the gallery, basemaps, utility services, and federated servers, and configuring log settings and deleting logs
+ General events, such as updating the portal's search index and restarting the portal
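The categories above map to the `type` field on each returned log message. As a quick sketch, tallying events by type takes only a few lines; the `log_messages` list below is invented for illustration and only mirrors the shape of the `logMessages` entries a query returns:

```python
from collections import Counter

# Hypothetical log messages shaped like the 'logMessages' entries
# returned by a log query; only the 'type' field is used here.
log_messages = [
    {"type": "SEVERE", "message": "Failed to publish service."},
    {"type": "WARNING", "message": "Certificate expires soon."},
    {"type": "SEVERE", "message": "User store sync failed."},
    {"type": "INFO", "message": "Portal index updated."},
]

counts = Counter(msg["type"] for msg in log_messages)
print(counts.most_common())  # → [('SEVERE', 2), ('WARNING', 1), ('INFO', 1)]
```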
```
gis = GIS(verify_cert=False, profile="python_playground_admin")
logs = gis.admin.logs
logs
```
#### Log Settings
- Modify and update basic storage and save settings
```
logs.settings
```
#### Query Portal Logs
```
import datetime
import pandas as pd
results = logs.query(start_time=datetime.datetime.now() - datetime.timedelta(days=10))
%matplotlib inline
df = pd.DataFrame(results['logMessages'])
df.type.value_counts().plot.bar()
df.head()
```
### Deep Dive into PortalAdmin API
#### Inspecting the Portal's Machines
- query the machines that power your portal
```
machines = gis.admin.machines
machines
machines.list()
```
**Check if Machine is Running**
```
machine = machines.list()[0]
machine.status()
```
#### System Directories
- inspect the physical paths where resources are stored
```
directories = gis.admin.system.directories
directories
d = directories[0]
d.properties
```
#### The System
```
system = gis.admin.system
system
system.properties
```
#### Re-indexing
- Sometimes artifacts remain after deleting items
- Forcing re-indexing can solve that problem
```
system.index_status
system.reindex(mode="SEARCH_MODE")
system.index_status
```
## Managing ArcGIS Server
### Managing Federated Servers
- The `admin` property provides useful tools to manage ArcGIS Server instances
```
gis = GIS(verify_cert=False, profile="achap_profile")
servers = gis.admin.servers
servers
```
#### Listing the Federated Servers
```
s = servers.list()
s
```
#### Check if Servers are Working
- `validate()` ensures everything is federated and running correctly
```
servers.validate()
```
### Connecting to a Server
#### Accessing Single Server
```
server = s[0]
server
server.properties
```
#### Accessing Server Logs
- Like the portal logs, servers provide a wealth of information
```
logs = server.logs
logs
logs.settings
```
**Demo: Querying Logs**
```
msgs = logs.query(
start_time=datetime.datetime.now() - datetime.timedelta(days=10))['logMessages']
msgs
server.services
server.properties
```
### Managing service folders
**Creating a Folder**
- use `create_folder`
```
server.services.create_folder("crime_analysis")
```
**Delete a Folder**
- use `delete_folder`
```
server.services.delete_folder('crime_analysis')
```
### Managing Services
- Access service management from `services` property
- Provides the ability to start, stop, delete, and modify services
```
services = server.services
services
```
#### Checking if Service Exists
To check if a service exists on your server, call the `exists` method and specify the folder name, service name and type. You can also use this method to verify if a folder exists on the server.
```
services.exists(folder_name='Hosted', name='Ports', service_type='FeatureServer')
```
#### Demo: Listing all Services
```
for folder in services.folders:
for s in services.list(folder):
print(s)
```
#### Control a Service's State
- `start`, `stop` and `restart` services
```
for service in services.list():
if service.properties.serviceName == 'SampleWorldCities':
break
service
```
**Check the Service Status**
- Shows whether the service is running
```
service.status
service.stop()
service.status
service.start()
service.status
```
### Modifying a Service
- modify extensions, pooling, etc...
```
for service in services.list():
if service.properties.serviceName == 'SampleWorldCities':
break
service
for ext in service.extensions:
if ext.typeName == "KmlServer":
ext.enabled = True
[(ext.typeName, ext.enabled) for ext in service.extensions]
```
### Publishing an SD File
- directly publish an SD file to the server
```
fp = r"./data/dino_AttachmentManager_basic.sd"
services.publish_sd(sd_file=fp)
s = services.list()[0]
s.properties
s.delete()
```
### Server Logs
ArcGIS Server records events that occur, and any errors associated with those events, to logs. Logs are an important tool for monitoring and troubleshooting problems with your site. Information in the logs will help you identify errors and provide context on how to address problems.
```
logs = server.logs
logs
logs.settings
```
#### Filtering and querying server logs
```
import datetime
import pandas as pd
now = datetime.datetime.now()
start_time = now - datetime.timedelta(days=10)
start_time
recent_logs = logs.query(start_time = start_time)
#print a message as a sample
recent_logs['logMessages']
```
### Monitoring Server Usage
ArcGIS Server records various service statistics, such as total requests, average response time, and timeouts. Administrators and publishers can use this information to monitor service activity to better understand how clients are using services. For example, monitoring server statistics helps you answer questions such as:
- What is the total number of requests that my ArcGIS Server site handled during the past week?
- How was the service request load distributed during the past month?
- How are my services performing on an hourly basis?
- What was the maximum number of service instances used at any given time for a particular service?
```
usage = server.usage
usage
```
#### Using built-in report
```
reports = usage.list()
reports
for r in reports:
print(r.properties['reportname'])
```
#### Querying maximum response times for the last 7 days
```
data = reports[0].query()
from datetime import datetime
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
%matplotlib inline
#store response times in Y axis
data_y = data['report']['report-data'][0][0]['data']
#convert dates to readable dates and store in X axis
data_x = [pd.to_datetime(datetime.fromtimestamp(d//1000)) \
for d in data['report']['time-slices']]
df = pd.DataFrame(list(zip(data_x, data_y)), columns=["date", "count"])
q = df['count'].isnull() # change NaN values to 0
df.loc[q, 'count'] = 0
df.index = df['date']
df['count'] = df['count']
ax = df['count'].plot(kind='bar', x=df['date'])
ticklabels = ['']*len(df.index)
ticklabels[::4] = [item.strftime('%b %d') for item in df.index[::4]]
ax.xaxis.set_major_formatter(ticker.FixedFormatter(ticklabels))
ax.set_title('Maximum response time in the last 7 days')
ax.set_ylabel('Time in seconds')
plt.gcf().autofmt_xdate()
plt.show()
```
#### Creating Quick Reports
- On-the-fly reporting
- Data is not saved
**Metrics Available**
- RequestCount - the number of requests received
- RequestsFailed - the number of requests that failed
- RequestsTimedOut - the number of requests that timed out
- RequestMaxResponseTime - the maximum response time
- RequestAvgResponseTime - the average response time
- ServiceActiveInstances - the maximum number of active (running) service instances sampled at 1 minute intervals, for a specified service
```
data = usage.quick_report(since="LAST_MONTH", metrics="RequestCount")
data.keys()
type(data['report']['report-data']), len(data['report']['time-slices'])
import pandas as pd
data_flat = {
#'report_data' : data['report']['report-data'],
'time_slices' : data['report']['time-slices']
}
for d in data['report']['report-data'][0]:
data_flat[d['metric-type']] = d['data']
pd.DataFrame(data_flat).tail()
```
## Questions?
## Setting Up a Jupyter Machine Learning Environment
- Mac: [Installing Python 3.6.8 and IPython Notebook on macOS 10.13](https://dlonng.com/posts/ipynb-install)
- Windows: to be added
- Ubuntu: to be added
- Other Linux: to be added
## 1. Reading the Raw Data
```
# pandas: Python Data Analysis Library, with fast and convenient data-handling functions
import pandas as pd
# seaborn: statistical data visualization, a high-level plotting library
import seaborn as sns
sns.set(context="notebook", style="whitegrid", palette="dark")
# Matplotlib: a 2D Python plotting package
import matplotlib.pyplot as plt
# TensorFlow: an end-to-end open-source machine learning platform
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
# NumPy: a fast numerical library, mainly for large multi-dimensional array and matrix operations
import numpy as np
# Read the data; name the first column 'population' and the second 'profit'
df = pd.read_csv('ex1data1.txt', names=['population', 'profit'])
# Show the first 5 rows
df.head()
# Show the data's attributes
df.info()
```
## 2. Visualizing the Raw Data
```
sns.lmplot('population', 'profit', df, height=6, fit_reg=False)
plt.show()
```
## 3. Defining Helper Functions
```
# Extract the feature matrix from the data
def get_X(df):
    # Create a DataFrame with m rows and 1 column of ones
    ones = pd.DataFrame({'ones': np.ones(len(df))})
    # Prepend the column of ones to the raw data; axis=0 concatenates by rows, axis=1 by columns
    data = pd.concat([ones, df], axis=1)
    return data.iloc[:, :-1].values
# Extract the target values, i.e. the labels
def get_y(df):
    # Return the last column of the raw data as an array; df.iloc[:, -1] is the last column
    return np.array(df.iloc[:, -1])
```
## 4. Computing the Cost Function
```
# Read the data
data = pd.read_csv('ex1data1.txt', names=['population', 'profit'])
# Show the first 5 rows
data
# Inspect the features
X = get_X(data)
print(X.shape, type(X))
print(X)
# Inspect the labels
y = get_y(data)
print(y.shape, type(y))
# X.shape[1] = 2, i.e. n = 2 features, theta = [theta_0, theta_1]
theta = np.zeros(X.shape[1])
# Cost function
# X: R(m * n), feature matrix
# y: R(m * 1), label vector
# theta: R(n), linear regression parameters
def lr_cost(theta, X, y):
    # m is the number of samples
    m = X.shape[0]
    # error = X * theta - y
    inner = X @ theta - y
    # Compute the sum of squares as: row vector * column vector
    square_sum = inner.T @ inner
    # Scale the cost
    cost = square_sum / (2 * m)
    return cost
# With theta initialized to zeros, compute the current cost
lr_cost(theta, X, y)
```
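Before running it on the real data, the cost formula can be verified by hand on a tiny made-up dataset (a standalone NumPy check that does not rely on the helpers above):

```python
import numpy as np

# Toy data: three points on the line y = x, with a bias column of ones.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])
theta = np.zeros(2)

# J(theta) = (1 / 2m) * sum((X @ theta - y)^2)
m = X.shape[0]
inner = X @ theta - y
cost = (inner @ inner) / (2 * m)
print(cost)  # → (1 + 4 + 9) / 6 ≈ 2.3333
```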
## 5. Batch Gradient Descent
```
# Compute the gradient
def gradient(theta, X, y):
    m = X.shape[0]
    inner = X.T @ (X @ theta - y)
    return inner / m
# Batch gradient descent
# epoch: number of iterations
# alpha: initial learning rate
def batch_gradient_decent(theta, X, y, epoch, alpha=0.01):
    # Compute the initial cost (theta all zeros)
    cost_data = [lr_cost(theta, X, y)]
    # Copy theta so we don't modify the original
    _theta = theta.copy()
    for _ in range(epoch):
        _theta = _theta - alpha * gradient(_theta, X, y)
        cost_data.append(lr_cost(_theta, X, y))
    return _theta, cost_data
epoch = 500
final_theta, cost_data = batch_gradient_decent(theta, X, y, epoch)
# Show the fitted parameters
final_theta
# Show how the cost evolved over the iterations
cost_data
# Compute the final cost
lr_cost(final_theta, X, y)
# Visualize the cost data
# (sns.tsplot is deprecated in newer seaborn; sns.lineplot is the replacement)
ax = sns.tsplot(cost_data, time=np.arange(epoch + 1))
ax.set_xlabel('epoch')
ax.set_ylabel('cost')
# The cost drops sharply in the first iterations, then flattens out
plt.show()
# Visualize the fitted line; b is the y-intercept, m is the slope
b = final_theta[0]
m = final_theta[1]
plt.scatter(data.population, data.profit, label='Training data')
plt.plot(data.population, b + m * data.population, label='Prediction')
# loc sets the legend position: 0 = best, 2 = upper left
plt.legend(loc=0)
plt.show()
```
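The update rule can likewise be checked on made-up data with a known answer: for points on the line y = x, gradient descent should drive theta toward [0, 1]. This standalone NumPy sketch mirrors `gradient` and `batch_gradient_decent`:

```python
import numpy as np

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])
theta = np.zeros(2)
alpha, epochs = 0.1, 500
m = X.shape[0]

costs = []
for _ in range(epochs):
    grad = X.T @ (X @ theta - y) / m      # same update as gradient() above
    theta = theta - alpha * grad
    costs.append(((X @ theta - y) ** 2).sum() / (2 * m))

print(theta)  # → approximately [0, 1], since the data lie on y = x
```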
## 6. Normalizing the Data (Feature Scaling)
```
# Feature scaling
def normalize_feature(df):
    # Apply a lambda to each column: mean() is the column mean, std() the standard deviation
    return df.apply(lambda column: (column - column.mean()) / column.std())
# Show the raw data
raw_data = pd.read_csv('ex1data2.txt', names=['square', 'bedrooms', 'price'])
raw_data.head()
# Apply feature scaling to the raw data
data = normalize_feature(raw_data)
data.head()
```
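A quick standalone NumPy check that z-score scaling behaves as expected; `ddof=1` is the sample standard deviation, which matches pandas' default `std()` used above:

```python
import numpy as np

# Made-up square footages for illustration.
col = np.array([2100.0, 1600.0, 2400.0, 1416.0, 3000.0])

# Z-score scaling: after (x - mean) / std, the column has mean ≈ 0 and std ≈ 1.
scaled = (col - col.mean()) / col.std(ddof=1)

print(scaled.mean(), scaled.std(ddof=1))  # ≈ 0 and ≈ 1
```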
## 7. Multivariate Gradient Descent
```
# Show the shape and type of the feature data
X = get_X(data)
print(X.shape, type(X))
# Show the shape and type of the label data
y = get_y(data)
print(y.shape, type(y))
# Initialize all theta parameters to 0
theta = np.zeros(X.shape[1])
# Set the number of iterations
epoch = 500
# Set the learning rate
alpha = 0.01
# Multivariate batch gradient descent
final_theta, cost_data = batch_gradient_decent(theta, X, y, epoch, alpha=alpha)
# Visualize cost vs. iterations
sns.tsplot(time=np.arange(len(cost_data)), data=cost_data)
plt.xlabel('epoch', fontsize=18)
plt.ylabel('cost', fontsize=18)
plt.show()
# Show the fitted parameters theta
final_theta
```
## 8. Learning Rates
```
# Create a geometric sequence of learning rates as powers of 10,
# from 10^-1 down to 10^-5, with 4 elements
base = np.logspace(-1, -5, num=4)
print(base)
# Add 3x the base sequence, then sort to get the candidate learning rates
candidate = np.sort(np.concatenate((base, base * 3)))
print(candidate)
epoch = 50
# Create a figure, 16 x 9 inches
fig, ax = plt.subplots(figsize=(16, 9))
# Plot the cost vs. iteration curve for each candidate learning rate
for alpha in candidate:
    _, cost_data = batch_gradient_decent(theta, X, y, epoch, alpha=alpha)
    # np.arange(51) = array([0, 1, 2, ..., 50])
    ax.plot(np.arange(epoch + 1), cost_data, label=alpha)
ax.set_xlabel('epoch', fontsize=18)
ax.set_ylabel('cost', fontsize=18)
# bbox_to_anchor(num1, num2) sets the legend's anchor point
# (the full axes height is 1): a larger num1 moves the legend right,
# a larger num2 moves it up
# borderaxespad: padding between the axes and the legend
ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set_title('learning rate', fontsize=18)
# A learning rate that is too small makes convergence very slow
plt.show()
```
## 9. The Normal Equation
```
# Solve the normal equation
def normalEqn(X, y):
    theta = np.linalg.inv(X.T @ X) @ X.T @ y
    return theta
final_theta_normal = normalEqn(X, y)
final_theta_normal
```
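On data with a known exact fit, the closed-form solution can be verified directly: for points on y = x the answer is an intercept of 0 and a slope of 1. (In production code, `np.linalg.solve` or `np.linalg.lstsq` is preferred over explicitly inverting `X.T @ X`.)

```python
import numpy as np

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])

# theta = (X^T X)^-1 X^T y
theta = np.linalg.inv(X.T @ X) @ X.T @ y
print(theta)  # → approximately [0. 1.]
```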
## 10. Comparing Optimizers in TensorFlow
```
# Read the data and show its shape and type
X_data = get_X(data)
print(X_data.shape, type(X_data))
# Reshape the labels for TensorFlow
y_data = get_y(data).reshape(len(X_data), 1)
print(y_data.shape, type(y_data))
def linear_regression(X_data, y_data, alpha, epoch, optimizer=tf.train.GradientDescentOptimizer):  # this function was written by Lucas Shen
    # placeholder for graph input
    X = tf.placeholder(tf.float32, shape=X_data.shape)
    y = tf.placeholder(tf.float32, shape=y_data.shape)
    # construct the graph
    with tf.variable_scope('linear-regression'):
        W = tf.get_variable("weights",
                            (X_data.shape[1], 1),
                            initializer=tf.constant_initializer())  # n*1
        y_pred = tf.matmul(X, W)  # m*n @ n*1 -> m*1
        loss = 1 / (2 * len(X_data)) * tf.matmul((y_pred - y), (y_pred - y), transpose_a=True)  # (m*1).T @ m*1 = 1*1
    opt = optimizer(learning_rate=alpha)
    opt_operation = opt.minimize(loss)
    # run the session
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        loss_data = []
        for i in range(epoch):
            _, loss_val, W_val = sess.run([opt_operation, loss, W], feed_dict={X: X_data, y: y_data})
            loss_data.append(loss_val[0, 0])  # because every loss_val is a 1*1 ndarray
            if len(loss_data) > 1 and np.abs(loss_data[-1] - loss_data[-2]) < 10 ** -9:  # early break when it has converged
                # print('Converged at epoch {}'.format(i))
                break
    # clear the graph
    tf.reset_default_graph()
    return {'loss': loss_data, 'parameters': W_val}  # just want to return in row vector format
# Set the number of iterations and the learning rate
epoch = 2000
alpha = 0.01
# Build a dictionary of optimizers
optimizer_dict = {'GD': tf.train.GradientDescentOptimizer,
                  'Adagrad': tf.train.AdagradOptimizer,
                  'Adam': tf.train.AdamOptimizer,
                  'Ftrl': tf.train.FtrlOptimizer,
                  'RMS': tf.train.RMSPropOptimizer
                  }
# Store the linear regression results
results = []
# Run one linear regression per optimizer
for name in optimizer_dict:
    res = linear_regression(X_data, y_data, alpha, epoch, optimizer=optimizer_dict[name])
    res['name'] = name
    results.append(res)
fig, ax = plt.subplots(figsize=(16, 9))
# Plot the loss curve for each optimizer
for res in results:
    loss_data = res['loss']
    ax.plot(np.arange(len(loss_data)), loss_data, label=res['name'])
ax.set_xlabel('epoch', fontsize=18)
ax.set_ylabel('cost', fontsize=18)
ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set_title('different optimizer', fontsize=18)
plt.show()
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Convolutional Variational Autoencoder
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/generative/cvae">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/cvae.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/cvae.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/generative/cvae.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>

This notebook demonstrates how to generate images of handwritten digits by training a Variational Autoencoder ([1](https://arxiv.org/abs/1312.6114), [2](https://arxiv.org/abs/1401.4082)).
```
# to generate gifs
!pip install imageio
```
## Import TensorFlow and other libraries
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
import tensorflow.compat.v2 as tf
except Exception:
pass
tf.enable_v2_behavior()
import os
import time
import numpy as np
import glob
import matplotlib.pyplot as plt
import PIL
import imageio
from IPython import display
```
## Load the MNIST dataset
Each MNIST image is originally a vector of 784 integers, each of which is between 0-255 and represents the intensity of a pixel. We model each pixel with a Bernoulli distribution in our model, and we statically binarize the dataset.
```
(train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1).astype('float32')
# Normalizing the images to the range of [0., 1.]
train_images /= 255.
test_images /= 255.
# Binarization
train_images[train_images >= .5] = 1.
train_images[train_images < .5] = 0.
test_images[test_images >= .5] = 1.
test_images[test_images < .5] = 0.
TRAIN_BUF = 60000
BATCH_SIZE = 100
TEST_BUF = 10000
```
## Use *tf.data* to create batches and shuffle the dataset
```
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(TRAIN_BUF).batch(BATCH_SIZE)
test_dataset = tf.data.Dataset.from_tensor_slices(test_images).shuffle(TEST_BUF).batch(BATCH_SIZE)
```
## Wire up the generative and inference network with *tf.keras.Sequential*
In our VAE example, we use two small ConvNets for the generative and inference network. Since these neural nets are small, we use `tf.keras.Sequential` to simplify our code. Let $x$ and $z$ denote the observation and latent variable respectively in the following descriptions.
### Generative Network
This defines the generative model which takes a latent encoding as input, and outputs the parameters for a conditional distribution of the observation, i.e. $p(x|z)$. Additionally, we use a unit Gaussian prior $p(z)$ for the latent variable.
### Inference Network
This defines an approximate posterior distribution $q(z|x)$, which takes as input an observation and outputs a set of parameters for the conditional distribution of the latent representation. In this example, we simply model this distribution as a diagonal Gaussian. In this case, the inference network outputs the mean and log-variance parameters of a factorized Gaussian (log-variance instead of the variance directly is for numerical stability).
### Reparameterization Trick
During optimization, we can sample from $q(z|x)$ by first sampling from a unit Gaussian, and then multiplying by the standard deviation and adding the mean. This ensures the gradients could pass through the sample to the inference network parameters.
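A minimal NumPy sketch of the trick (the names here are illustrative, not the model's API): sampling `z ~ N(mean, exp(logvar))` is rewritten as a deterministic function of `mean`, `logvar`, and an externally sampled `eps ~ N(0, 1)`.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mean, logvar, eps=None):
    # z = mean + sigma * eps, with sigma = exp(0.5 * logvar)
    if eps is None:
        eps = rng.standard_normal(mean.shape)
    return mean + np.exp(0.5 * logvar) * eps

mean = np.array([1.0, -2.0])
logvar = np.zeros(2)            # variance 1, so sigma = 1
eps = np.array([0.5, 0.5])
print(reparameterize(mean, logvar, eps))  # → [ 1.5 -1.5]
```

Because the randomness enters only through `eps`, gradients flow through `mean` and `logvar` unobstructed.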
### Network architecture
For the inference network, we use two convolutional layers followed by a fully-connected layer. In the generative network, we mirror this architecture by using a fully-connected layer followed by three convolution transpose layers (a.k.a. deconvolutional layers in some contexts). Note, it's common practice to avoid using batch normalization when training VAEs, since the additional stochasticity due to using mini-batches may aggravate instability on top of the stochasticity from sampling.
```
class CVAE(tf.keras.Model):
def __init__(self, latent_dim):
super(CVAE, self).__init__()
self.latent_dim = latent_dim
self.inference_net = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(
filters=32, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Conv2D(
filters=64, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Flatten(),
# No activation
tf.keras.layers.Dense(latent_dim + latent_dim),
]
)
self.generative_net = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(latent_dim,)),
tf.keras.layers.Dense(units=7*7*32, activation=tf.nn.relu),
tf.keras.layers.Reshape(target_shape=(7, 7, 32)),
tf.keras.layers.Conv2DTranspose(
filters=64,
kernel_size=3,
strides=(2, 2),
padding="SAME",
activation='relu'),
tf.keras.layers.Conv2DTranspose(
filters=32,
kernel_size=3,
strides=(2, 2),
padding="SAME",
activation='relu'),
# No activation
tf.keras.layers.Conv2DTranspose(
filters=1, kernel_size=3, strides=(1, 1), padding="SAME"),
]
)
@tf.function
def sample(self, eps=None):
if eps is None:
eps = tf.random.normal(shape=(100, self.latent_dim))
return self.decode(eps, apply_sigmoid=True)
def encode(self, x):
mean, logvar = tf.split(self.inference_net(x), num_or_size_splits=2, axis=1)
return mean, logvar
def reparameterize(self, mean, logvar):
eps = tf.random.normal(shape=mean.shape)
return eps * tf.exp(logvar * .5) + mean
def decode(self, z, apply_sigmoid=False):
logits = self.generative_net(z)
if apply_sigmoid:
probs = tf.sigmoid(logits)
return probs
return logits
```
## Define the loss function and the optimizer
VAEs train by maximizing the evidence lower bound (ELBO) on the marginal log-likelihood:
$$\log p(x) \ge \text{ELBO} = \mathbb{E}_{q(z|x)}\left[\log \frac{p(x, z)}{q(z|x)}\right].$$
In practice, we optimize the single sample Monte Carlo estimate of this expectation:
$$\log p(x| z) + \log p(z) - \log q(z|x),$$
where $z$ is sampled from $q(z|x)$.
**Note**: we could also analytically compute the KL term, but here we incorporate all three terms in the Monte Carlo estimator for simplicity.
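A quick standalone NumPy check of the diagonal-Gaussian log-density that appears in this estimate: for a standard normal, the log-density at 0 is $-\tfrac{1}{2}\log(2\pi) \approx -0.9189$.

```python
import numpy as np

def log_normal_pdf(sample, mean, logvar):
    # Log-density of a diagonal Gaussian, summed over the last axis.
    log2pi = np.log(2.0 * np.pi)
    return np.sum(
        -0.5 * ((sample - mean) ** 2 * np.exp(-logvar) + logvar + log2pi),
        axis=-1)

# Standard normal density at 0 is 1/sqrt(2*pi), so its log is ≈ -0.9189.
print(log_normal_pdf(np.zeros(1), np.zeros(1), np.zeros(1)))
```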
```
optimizer = tf.keras.optimizers.Adam(1e-4)
def log_normal_pdf(sample, mean, logvar, raxis=1):
log2pi = tf.math.log(2. * np.pi)
return tf.reduce_sum(
-.5 * ((sample - mean) ** 2. * tf.exp(-logvar) + logvar + log2pi),
axis=raxis)
@tf.function
def compute_loss(model, x):
mean, logvar = model.encode(x)
z = model.reparameterize(mean, logvar)
x_logit = model.decode(z)
cross_ent = tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=x)
logpx_z = -tf.reduce_sum(cross_ent, axis=[1, 2, 3])
logpz = log_normal_pdf(z, 0., 0.)
logqz_x = log_normal_pdf(z, mean, logvar)
return -tf.reduce_mean(logpx_z + logpz - logqz_x)
@tf.function
def compute_apply_gradients(model, x, optimizer):
with tf.GradientTape() as tape:
loss = compute_loss(model, x)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
```
## Training
* We start by iterating over the dataset
* During each iteration, we pass the image to the encoder to obtain a set of mean and log-variance parameters of the approximate posterior $q(z|x)$
* We then apply the *reparameterization trick* to sample from $q(z|x)$
* Finally, we pass the reparameterized samples to the decoder to obtain the logits of the generative distribution $p(x|z)$
* **Note:** Since we use the dataset loaded by keras with 60k datapoints in the training set and 10k datapoints in the test set, our resulting ELBO on the test set is slightly higher than reported results in the literature which uses dynamic binarization of Larochelle's MNIST.
## Generate Images
* After training, it is time to generate some images
* We start by sampling a set of latent vectors from the unit Gaussian prior distribution $p(z)$
* The generator will then convert the latent sample $z$ to logits of the observation, giving a distribution $p(x|z)$
* Here we plot the probabilities of Bernoulli distributions
```
epochs = 100
latent_dim = 50
num_examples_to_generate = 16
# keeping the random vector constant for generation (prediction) so
# it will be easier to see the improvement.
random_vector_for_generation = tf.random.normal(
shape=[num_examples_to_generate, latent_dim])
model = CVAE(latent_dim)
def generate_and_save_images(model, epoch, test_input):
predictions = model.sample(test_input)
fig = plt.figure(figsize=(4,4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i+1)
plt.imshow(predictions[i, :, :, 0], cmap='gray')
plt.axis('off')
# tight_layout minimizes the overlap between 2 sub-plots
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
generate_and_save_images(model, 0, random_vector_for_generation)
for epoch in range(1, epochs + 1):
start_time = time.time()
for train_x in train_dataset:
compute_apply_gradients(model, train_x, optimizer)
end_time = time.time()
if epoch % 1 == 0:
loss = tf.keras.metrics.Mean()
for test_x in test_dataset:
loss(compute_loss(model, test_x))
elbo = -loss.result()
display.clear_output(wait=False)
print('Epoch: {}, Test set ELBO: {}, '
'time elapse for current epoch {}'.format(epoch,
elbo,
end_time - start_time))
generate_and_save_images(
model, epoch, random_vector_for_generation)
```
### Display an image using the epoch number
```
def display_image(epoch_no):
return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))
plt.imshow(display_image(epochs))
plt.axis('off')  # Display images
```
### Generate a GIF of all the saved images.
```
anim_file = 'cvae.gif'
with imageio.get_writer(anim_file, mode='I') as writer:
filenames = glob.glob('image*.png')
filenames = sorted(filenames)
last = -1
for i,filename in enumerate(filenames):
frame = 2*(i**0.5)
if round(frame) > round(last):
last = frame
else:
continue
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
import IPython
if IPython.version_info >= (6,2,0,''):
display.Image(filename=anim_file)
```
If you're working in Colab you can download the animation with the code below:
```
try:
from google.colab import files
except ImportError:
pass
else:
files.download(anim_file)
```
# Text Mining DocSouth Slave Narrative Archive
---
*Note:* This is one in [a series of documents and notebooks](https://jeddobson.github.io/textmining-docsouth/) that will document and evaluate various machine learning and text mining tools for use in literary studies. These notebooks form the practical and critical archive of my book-in-progress, _Digital Humanities and the Search for a Method_. I have published a critique of some existing methods (Dobson 2016) that takes up some of these concerns and provides some theoretical background for my account of computational methods as used within the humanities.
### Revision Date and Notes:
10/12/2017: Initial version (james.e.dobson@dartmouth.edu)
### Producing Topic Models from DocSouth North American Slave Narrative Texts
```
# local Natural Language Toolkit
import nltk
print("nltk version: ",nltk.__version__)
# load scikit-learn
import sklearn
from sklearn.feature_extraction import text
from sklearn import decomposition
from sklearn import datasets
print("sklearn version: ",sklearn.__version__)
# load all library and all the texts
import sys
sys.path.append("lib")
import docsouth_utils
neh_slave_archive = docsouth_utils.load_narratives()
#
# create input list of documents for the topic model
# and perform additional preprocessing (stopword removal,
# lowercase, dropping non-alpha characters, etc.)
#
topic_model_source=list()
for i in neh_slave_archive:
preprocessed=docsouth_utils.preprocess(i['text'])
topic_model_source.append(' '.join(preprocessed))
# topics to model
num_topics = 20
# features to extract
num_features = 50
print('reading files and loading into vectorizer')
print('generating',num_topics,'topics with',num_features,'features')
# for LDA (Just TF)
lda_vectorizer = text.CountVectorizer(max_df=0.95, min_df=2,
max_features=num_features,
lowercase='true',
ngram_range=(2,4),
strip_accents='unicode',
stop_words='english')
lda_vectorizer.decode_error='replace'
lda_tf = lda_vectorizer.fit_transform(topic_model_source)
# fit to model
lda_model = decomposition.LatentDirichletAllocation(n_components=num_topics, max_iter=5,
learning_method='online',
learning_offset=50.,
batch_size=128,
max_doc_update_iter=100,
random_state=None)
lda_model.fit(lda_tf)
print("LDA Model:")
feature_names = lda_vectorizer.get_feature_names_out()
for topic_idx, topic in enumerate(lda_model.components_):
print("Topic #%d:" % topic_idx)
print(", ".join([feature_names[i] for i in topic.argsort()[:-num_features - 1:-1]]))
print()
```
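The same pipeline can be exercised end-to-end on a toy corpus; the four "documents" below are invented for illustration, and `n_components` is the current scikit-learn name for the number of topics:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Four invented "documents" standing in for the preprocessed narratives.
docs = ["freedom north escape night",
        "master field labor field",
        "freedom escape north star",
        "labor master plantation field"]

dtm = CountVectorizer().fit_transform(docs)  # document-term matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
print(lda.components_.shape)  # one row of word weights per topic
```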
In this notebook, we'll be implementing the Transformer-XL architecture from scratch.
To run this notebook in full, make sure to use the `download_data.sh` script to download the Penn Treebank data.
```
from typing import *
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
%matplotlib inline
device = torch.device("cpu") if not torch.cuda.is_available() else torch.device("cuda:0")
```
# Review of the Transformer
Let's start off with a quick overview of the Transformer architecture which will be the basis of the Transformer XL.
Overall, the Transformer architecture is composed of multiple MultiHeadAttention layer stacked on top of each other, followed by feedforward layers, residual connections, and layer normalization layers.

The MultiHeadAttention layer is composed of multiple attention heads. Each attention head applies a linear transformation to its inputs and computes attention over its input values using keys and queries.

This approach is incapable of handling position, so the Transformer adds embeddings representing the position of the input to the word embeddings.
For more details, please refer to [this tutorial](https://github.com/keitakurita/Practical_NLP_in_PyTorch/blob/master/deep_dives/transformer_from_scratch.ipynb) I've written on just the Transformer.
We'll be building the Transformer XL by first implementing a single attention head, scaling it to compose the MultiHeadAttention layer for the Transformer XL, then building the DecoderBlock and stacking them to create the full Transformer XL.
# Implementing the Transformer XL
### A Single Attention Head
We'll start off by implementing a single attention head in a MultiHeadAttention layer. To make things concrete, let's consider the first layer and assume we receive an input of word embeddings of shape `(seq=7, batch_size=3, embedding_dim=32)`. Note that the Transformer XL does not add positional embeddings to the input.
```
seq, batch_size, embedding_dim = 7, 3, 32
word_embs = torch.rand(seq, batch_size, embedding_dim)
```
In the Transformer XL, we also feed the cached outputs of the model for the previous sequence. In this case, we would be feeding the word embeddings from the previous sequence as an additional input to our model.

To make things concrete, let's imagine our previous sequence was of length `prev_seq=6`
```
prev_seq = 6
memory = torch.rand(prev_seq, batch_size, embedding_dim) # hidden states from the previous sequence
```
Each attention head takes keys, queries, and values as input. The processing goes like this:
1. Apply a separate linear transformation to each of the keys, queries, and values.
2. Compute attention scores for each of the values.
3. For each query, compute an attention-weighted sum of the values.
4. Apply a residual connection and layer normalization.
We'll start off with the linear transformation.
```
inner_dim = 17 # this will be the internal dimension
linear_k = nn.Linear(embedding_dim, inner_dim)
linear_v = nn.Linear(embedding_dim, inner_dim)
linear_q = nn.Linear(embedding_dim, inner_dim)
```
The memory is concatenated across the sequence dimension and fed as keys/values. Be careful, as it's not concatenated with the queries. This is because each query represents one word we want to predict, so we can't modify the number of queries.
```
word_embs_w_memory = torch.cat([memory, word_embs], dim=0)
k_tfmd = linear_k(word_embs_w_memory)
v_tfmd = linear_v(word_embs_w_memory)
q_tfmd = linear_q(word_embs) # No memory for the queries
```
Now, we compute scaled dot product attention as per the usual Transformer. Scaled dot product attention computes the attention score as the dot product between the query and key vectors. To prevent the values from exploding as the dimensionality of the vectors increases, we divide the raw attention score by the sqrt of the embedding size.
$$ \textrm{Attention}(Q, K, V) = \textrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V $$

We'll be using einsum notation here to make the code easy to read: if you're not familiar with einsum, check out [this awesome tutorial](https://rockt.github.io/2018/04/30/einsum). In short, einsum denotes the shape of the inputs and outputs using one letter to represent each dimension. Below, the inputs are shaped `(i, b, d)` and `(j, b, d)` and the output is shaped `(i, j, b)` where the same letter represents the same size. Einsum is computed by taking the dot product across dimensions with the same character.
```
content_attn = torch.einsum("ibd,jbd->ijb", q_tfmd, k_tfmd) / (embedding_dim ** 0.5) # scale
```
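If the einsum notation is still unfamiliar, it helps to check it against an explicit loop. Below is a small self-contained sanity check (my addition, using toy shapes) showing that the einsum above is just a dot product between every query/key pair for each batch element:

```python
import torch

i, j, b, d = 2, 3, 4, 5
q = torch.rand(i, b, d)
k = torch.rand(j, b, d)

# the einsum from the text: out[i, j, b] = sum_d q[i, b, d] * k[j, b, d]
out = torch.einsum("ibd,jbd->ijb", q, k)

# the explicit equivalent: a dot product for every (query, key) pair per batch element
manual = torch.stack([
    torch.stack([(q[qi] * k[kj]).sum(-1) for kj in range(j)])  # (j, b)
    for qi in range(i)
])  # (i, j, b)

assert torch.allclose(out, manual)
```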
Notice we're not yet applying the softmax activation. This is because we need a couple more pieces to get the full attention score. The first of these is the relative positional embeddings.
### Relative positional encodings
One of the key ideas in the Transformer XL is the idea of relative positional encodings. Instead of having a single embedding represent each **absolute** position, the Transformer XL computes an embedding that represents the **distance** between any two tokens. This is used to compute the attention between the two words.
The authors use the following equation to compute the attention between a query vector $ q_i $ and key vector $k_j$:
\begin{align}
A^{rel}_{i,j} =
\underbrace{E_{x_i}^TW_q^TW_{k,E}E_{x_j}}_{(a)}
+ \underbrace{E_{x_i}^TW_q^TW_{k,R} \color{green}R_\color{green}{i-j} }_{(b)}
\\
+ \underbrace{ \color{red}u^\color{red}T W_{k,E}E_{x_j}}_{(c)}
+ \underbrace{ \color{red}v^\color{red}T W_{k,R} \color{green}R_\color{green}{i-j}}_{(d)}
\end{align}
Here, $E_{x}$ is the embedding for $x$ and the $W$ matrices are all learned transformations. Term (a) is the content-based attention term that we already computed above. Terms (b) and (d) are based on the relative positional embeddings and depend on the distance between $q_i$ and $k_j$. $u$ and $v$ are global bias terms that represent biases towards certain content and certain positions.
Let's move on to the detailed implementation of terms (b) to (d). We'll first add the content bias (term (c) in the above equation) since it is the simplest to compute.
```
u = torch.rand(17).expand_as(q_tfmd)
content_attn = content_attn + torch.einsum("ibd,jbd->ijb", u, k_tfmd) / (embedding_dim ** 0.5)
```
Next, we compute the relative positional embeddings necessary for the positional attention terms. For the relative positional embeddings, the Transformer XL uses fixed sinusoidal embeddings.
```
pos_idxs = torch.arange(seq + prev_seq - 1, -1, -1.0, dtype=torch.float)
pos_idxs
inv_freq = 1 / (10000 ** (torch.arange(0.0, embedding_dim, 2.0) / embedding_dim))
sinusoid_inp = torch.einsum("i,j->ij", pos_idxs, inv_freq)
plt.plot(sinusoid_inp[0, :].detach().numpy());
plt.plot(sinusoid_inp[6, :].detach().numpy());
relative_positional_embeddings = torch.cat([sinusoid_inp.sin(), sinusoid_inp.cos()], dim=-1)[:,None,:]
relative_positional_embeddings.shape
```
Now, we can gather this into its own class
```
class PositionalEmbedding(nn.Module):
def __init__(self, d):
super().__init__()
self.d = d
inv_freq = 1 / (10000 ** (torch.arange(0.0, d, 2.0) / d))
# register_buffer tells PyTorch that this tensor is part of the model
# this means that it will be saved in the state_dict and moved to the GPU
# along with the model
self.register_buffer("inv_freq", inv_freq)
def forward(self, positions: torch.LongTensor, # (seq, )
):
# outer product
sinusoid_inp = torch.einsum("i,j->ij", positions.float(), self.inv_freq)
pos_emb = torch.cat([sinusoid_inp.sin(), sinusoid_inp.cos()], dim=-1)
return pos_emb[:,None,:]
```
We also apply a linear transformation to the positional embeddings, separately from the values/keys.
```
linear_p = nn.Linear(embedding_dim, inner_dim)
pos_tfmd = linear_p(relative_positional_embeddings)
```
This time, we'll be adding the positional bias during attention computation.
```
v = torch.rand(17) # positional bias
pos_attn = torch.einsum("ibd,jd->ijb", q_tfmd + v, pos_tfmd[:,0,:]) / (embedding_dim ** 0.5) # scale
pos_attn.shape
```
Since we compute a relative positional embedding for each key-query pair, a naive implementation of attention using relative positional embeddings would be $O(n^2)$ in computational complexity. Luckily, the authors proposed a trick to reduce this to $O(n)$ time by computing the attention for one query and then shifting the embeddings for the other query positions.
```
zero_pad = torch.zeros((seq, 1, batch_size), dtype=torch.float)
# this padding + shifting efficiently computes the attention for all
pos_attn = (torch.cat([zero_pad, pos_attn], dim=1)
.view(seq + prev_seq + 1, seq, batch_size)[1:]
.view_as(pos_attn))
```
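To see what the padding + shifting above is actually doing, here is the same trick applied to a tiny 2-D tensor (a simplified sketch of my own, with the batch dimension dropped):

```python
import torch

def rel_shift(x):
    # same padding + reshaping trick as above, for a 2-D (query, key) tensor
    zero_pad = torch.zeros((x.size(0), 1), dtype=x.dtype)
    return (torch.cat([zero_pad, x], dim=1)
            .view(x.size(1) + 1, x.size(0))[1:]
            .view_as(x))

x = torch.arange(1., 7.).view(2, 3)
# x = [[1., 2., 3.],
#      [4., 5., 6.]]
shifted = rel_shift(x)
# earlier query rows are shifted left (with a trailing zero), the last row is untouched:
# [[2., 3., 0.],
#  [4., 5., 6.]]
```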
The attention is computed as the sum of content and positional attention.
```
raw_attn = content_attn + pos_attn
```
When we do language modeling, we need to prevent the model from looking at the words it is supposed to predict. In the Transformer, we achieve this by setting the attention scores of those positions to negative infinity before the softmax, which effectively sets their attention weights to zero. This masks out words that we don't want the model to be able to see.
```
mask = torch.triu(
torch.ones((seq, seq + prev_seq)),
diagonal=1 + prev_seq,
).byte()[...,None]
raw_attn = raw_attn.masked_fill(mask, -float('inf'))
```
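To visualize this mask, here is the same construction on a toy example with `seq=3` and `prev_seq=2`. A `1` marks a position the model may not attend to: each query row can see the entire memory plus the current and earlier positions of the current segment.

```python
import torch

seq, prev_seq = 3, 2
mask = torch.triu(
    torch.ones((seq, seq + prev_seq)),
    diagonal=1 + prev_seq,
).byte()
# 1 = masked out; row i sees all `prev_seq` memory positions
# plus positions 0..i of the current segment:
# [[0, 0, 0, 1, 1],
#  [0, 0, 0, 0, 1],
#  [0, 0, 0, 0, 0]]
```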
We can now compute the outputs as the weighted sum of the value vectors using the attention scores.
```
attn = torch.softmax(raw_attn, dim=1)
attn_weighted_sum = torch.einsum("ijb,jbd->ibd", attn, v_tfmd)
attn_weighted_sum.shape
```
Finally, we project the attention weighted sums back to their original dimension and apply a residual connection and layer normalization. We apply layer normalization after the residual connection.
```
linear_out = nn.Linear(inner_dim, embedding_dim)
layer_norm = nn.LayerNorm(embedding_dim)
output = layer_norm(word_embs + linear_out(attn_weighted_sum))
output.shape
```
### MultiHeadAttention (MHA): The core component
Aggregating all the above and applying a couple of optimizations by grouping some computations together as well as adding dropout, we get the following MultiHeadAttention module.
```
from typing import *
class MultiHeadAttention(nn.Module):
def __init__(self, d_input: int, d_inner: int, n_heads: int=4,
dropout: float=0.1, dropouta: float=0.):
super().__init__()
self.d_input = d_input
self.d_inner = d_inner
self.n_heads = n_heads
# this layer applies the linear transformation required
# for the keys and values for all heads at once for efficiency
self.linear_kv = nn.Linear(
d_input,
(d_inner * n_heads * 2), # 2 is for keys and values
bias=False, # we don't apply bias, making this a simple matrix multiplication
)
# for queries (will not be concatenated with memorized states so separate)
self.linear_q = nn.Linear(
d_input, d_inner * n_heads,
bias=False
)
# for positional embeddings
self.linear_p = nn.Linear(
d_input, d_inner * n_heads,
bias=False
)
self.scale = 1 / (d_inner ** 0.5) # for scaled dot product attention
self.dropa = nn.Dropout(dropouta)
# we will use this to project back to the input dimension
self.lout = nn.Linear(self.d_inner * self.n_heads, self.d_input, bias=False)
self.norm = nn.LayerNorm(self.d_input)
self.dropo = nn.Dropout(dropout)
def _rel_shift(self, x):
zero_pad = torch.zeros((x.size(0), 1, *x.size()[2:]),
device=x.device, dtype=x.dtype)
return (torch.cat([zero_pad, x], dim=1)
.view(x.size(1) + 1, x.size(0), *x.size()[2:])[1:]
.view_as(x))
def forward(self, input_: torch.FloatTensor, # (cur_seq, b, d_in)
pos_embs: torch.FloatTensor, # (cur_seq + prev_seq, d_in)
memory: torch.FloatTensor, # (prev_seq, b, d_in)
u: torch.FloatTensor, # (H, d)
v: torch.FloatTensor, # (H, d)
mask: Optional[torch.FloatTensor]=None,
):
"""
pos_embs: we pass the positional embeddings in separately
because we need to handle relative positions
input shape: (seq, bs, self.d_input)
pos_embs shape: (seq + prev_seq, bs, self.d_input)
output shape: (seq, bs, self.d_input)
"""
cur_seq = input_.shape[0] # sequence length of current segment
prev_seq = memory.shape[0] # sequence length of previous segment
H, d = self.n_heads, self.d_inner
input_with_memory = torch.cat([memory, input_], dim=0) # concatenate recurrent memory
# across sequence dimension
# we will use the following symbols to represent the shape of the tensors
# cs: current sequence length, b: batch, H: number of heads
# d: inner dimension, ps: previous sequence length
# The key and value are now conditioned on the preceding context
k_tfmd, v_tfmd = \
torch.chunk(self.linear_kv(input_with_memory), 2, dim=-1) # (cs + ps, b, H * d)
q_tfmd = self.linear_q(input_) # (cs, b, H * d)
# apply scaled dot product attention
# look at the following dimensions carefully, since this is the key operation
# in the Transformer/Transformer XL architecture
_, bs, _ = q_tfmd.shape
assert bs == k_tfmd.shape[1]
# content-based attention term ((a) + (c) in the paper)
# this is the standard attention term in the original Transformer, except without positional embeddings
# which are handled separately in the Transformer XL (see below)
# here, i corresponds to the number of queries = number of current inputs/targets (seq-wise)
# j corresponds to the number of key/values = number of vectors that we can use to compute the
# vector for each query
content_attn = torch.einsum("ibhd,jbhd->ijbh", (
(q_tfmd.view(cur_seq, bs, H, d) + # (a)
u), # (c): u represents the global (independent of the query)
# bias towards certain key/values = words
# Note: maybe this could be a per-attention head parameter?
k_tfmd.view(cur_seq + prev_seq, bs, H, d) # There is no positional information to be found here
)) # (cs, cs + ps, b, H)
# position-based attention term ((b) + (d) in the paper)
# this attention is solely based on the position of the key/values
# (i.e. it does not take the content of the key/values into account)
p_tfmd = self.linear_p(pos_embs) # (cs + ps, b, H * d)
position_attn = torch.einsum("ibhd,jhd->ijbh", (
(q_tfmd.view(cur_seq, bs, H, d) + # (b)
v), # (d): v represents the global (independent of the query)
# bias towards certain positions
p_tfmd.view(cur_seq + prev_seq, H, d) # Notice there is no content information
# regarding keys and values here!
)) # (cs, cs + ps, b, H)
# Compute positional attention efficiently
position_attn = self._rel_shift(position_attn)
# the attention is the sum of content-based and position-based attention
attn = content_attn + position_attn
if mask is not None and mask.any().item():
attn = attn.masked_fill(
mask[...,None], -float('inf'))
attn = torch.softmax(attn * self.scale, # rescale to prevent values from exploding
dim=1) # normalize across the value sequence dimension
attn = self.dropa(attn)
attn_weighted_values = (torch.einsum("ijbh,jbhd->ibhd",
(attn, # (cs, cs + ps, b, H)
v_tfmd.view(cur_seq + prev_seq, bs, H, d), # (cs + ps, b, H, d)
)) # (cs, b, H, d)
.contiguous() # we need to change the memory layout to make `view` work
.view(cur_seq, bs, H * d)) # (cs, b, H * d)
# Project back to input dimension and add residual connection
output = input_ + self.dropo(self.lout(attn_weighted_values))
output = self.norm(output)
return output
```
Let's test it out to see if it runs successfully
```
mha = MultiHeadAttention(32, 17, n_heads=4)
inpt = torch.rand(7, 3, 32)
pos = torch.rand(13, 32)
mem = torch.rand(6, 3, 32)
u, v = torch.rand(4, 17), torch.rand(4, 17)
x1 = mha(inpt, pos, mem, u, v)
```
Looks good
```
x1.shape
x1[0]
```
### Building the decoder
To construct the decoder block, all we need in addition to the MultiHeadAttention layer is the Positionwise Feed Forward layer.

```
class PositionwiseFF(nn.Module):
def __init__(self, d_input, d_inner, dropout):
super().__init__()
self.d_input = d_input
self.d_inner = d_inner
self.dropout = dropout
self.ff = nn.Sequential(
nn.Linear(d_input, d_inner), nn.ReLU(inplace=True),
nn.Dropout(dropout),
nn.Linear(d_inner, d_input),
nn.Dropout(dropout),
)
self.layer_norm = nn.LayerNorm(d_input)
def forward(self, input_: torch.FloatTensor, # (cur_seq, bs, d_input)
) -> torch.FloatTensor: # (cur_seq, bs, d_input)
ff_out = self.ff(input_)
output = self.layer_norm(input_ + ff_out)
return output
```
Now we can implement the decoder block.
```
class DecoderBlock(nn.Module):
def __init__(self, n_heads, d_input,
d_head_inner, d_ff_inner,
dropout, dropouta=0.):
super().__init__()
self.mha = MultiHeadAttention(d_input, d_head_inner, n_heads=n_heads,
dropout=dropout, dropouta=dropouta)
self.ff = PositionwiseFF(d_input, d_ff_inner, dropout)
def forward(self, input_: torch.FloatTensor, # (cur_seq, bs, d_input)
pos_embs: torch.FloatTensor, # (cur_seq + prev_seq, d_input),
u: torch.FloatTensor, # (H, d_input),
v: torch.FloatTensor, # (H, d_input),
mask=None,
mems=None,
):
return self.ff(self.mha(input_, pos_embs, mems, u, v, mask=mask))
```
### The full Transformer XL
```
import torch.nn.functional as F
```
Now with these components in place, we can build the full Transformer XL model.
Aside from what we mentioned above, one common trick in language modeling that we haven't covered yet is tying the input embedding matrix $ E $ and output projection matrix $ P $. Remember, a language model predicts the next token in a sequence, so its output dimension is $\mathbb{R}^{|V|}$ where $|V|$ is the vocab size. If we constrain the penultimate layer output to be the same dimension as the embeddings $d$, the embedding matrix $ E $ will be of shape $ \mathbb{R}^{|V| \times d}$ and the output projection matrix $ P $ will be of shape $ \mathbb{R}^{d \times |V|} $.
In [this paper](https://arxiv.org/abs/1608.05859), the authors found that constraining the matrices such that $ P = E^T $ improved performance while greatly reducing the total parameter count (and thus memory usage!) of the model.
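Note that in PyTorch the transpose comes for free: `nn.Linear` stores its weight as `(out_features, in_features)`, which is exactly the shape of the embedding matrix, so $P = E^T$ falls out of sharing the parameter directly. A minimal sketch:

```python
import torch.nn as nn

vocab_size, d_model = 1000, 32
embedding = nn.Embedding(vocab_size, d_model)   # weight: (|V|, d)
projection = nn.Linear(d_model, vocab_size)     # weight stored as (out, in) = (|V|, d)

# same shape, so tying is a simple parameter assignment
projection.weight = embedding.weight
```

This is the same assignment we make in `TransformerXL.__init__` below.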
Instead of simply using the exact same weights, we scale the embeddings by the square root of the embedding dimension. This trick is included in the codebase but not mentioned in the paper as far as I can tell. If you're aware of a paper where this trick was originally introduced, please let me know!
```
class StandardWordEmbedding(nn.Module):
def __init__(self, num_embeddings, embedding_dim,
div_val=1, sample_softmax=False):
super().__init__()
self.num_embeddings = num_embeddings
self.embedding_dim = embedding_dim
self.embedding = nn.Embedding(num_embeddings, embedding_dim)
self.scale = embedding_dim ** 0.5
def forward(self, input_: torch.LongTensor):
return self.embedding(input_) * self.scale
```
Now, all we need to do is to put everything we have implemented above together.
```
class TransformerXL(nn.Module):
def __init__(self, num_embeddings, n_layers, n_heads,
d_model, d_head_inner, d_ff_inner,
dropout=0.1, dropouta=0.,
seq_len: int=0, mem_len: int=0):
super().__init__()
self.n_layers,self.n_heads,self.d_model,self.d_head_inner,self.d_ff_inner = \
n_layers,n_heads,d_model,d_head_inner,d_ff_inner
# Embedding layers
self.word_embs = StandardWordEmbedding(num_embeddings, d_model)
self.pos_embs = PositionalEmbedding(d_model)
# Core transformer
self.drop = nn.Dropout(dropout)
self.layers = nn.ModuleList([DecoderBlock(n_heads, d_model, d_head_inner=d_head_inner,
d_ff_inner=d_ff_inner,
dropout=dropout, dropouta=dropouta)
for _ in range(n_layers)])
# tie weights
self.output_projection = nn.Linear(d_model, num_embeddings)
self.output_projection.weight = self.word_embs.embedding.weight
self.loss_fn = nn.CrossEntropyLoss()
self.seq_len, self.mem_len = seq_len, mem_len
# u and v are global parameters: maybe changing these to per-head parameters
# might help performance?
self.u, self.v = (nn.Parameter(torch.Tensor(self.n_heads, self.d_head_inner)),
nn.Parameter(torch.Tensor(self.n_heads, self.d_head_inner)))
def init_memory(self, device=torch.device("cpu")) -> List[torch.FloatTensor]:
return [torch.empty(0, dtype=torch.float).to(device) for _ in range(self.n_layers+1)]
def update_memory(self,
previous_memory: List[torch.FloatTensor],
hidden_states: List[torch.FloatTensor],
):
assert len(hidden_states) == len(previous_memory)
mem_len, seq_len = previous_memory[0].size(0), hidden_states[0].size(0)
# For the updated memory, we use the most recent `self.mem_len`
# states, including the previous memory
# In other words, if `seq_len` < `self.mem_len` some of the previous memory
# will carry over to the next memory
with torch.no_grad():
new_memory = []
end_idx = mem_len + seq_len
beg_idx = max(0, end_idx - self.mem_len)
for m, h in zip(previous_memory, hidden_states):
cat = torch.cat([m, h], dim=0) # (mem_len + seq_len, bs, d)
new_memory.append(cat[beg_idx:end_idx].detach()) # (self.mem_len, bs, d)
return new_memory
def reset_length(self, seq_len, ext_len, mem_len):
self.seq_len = seq_len
self.mem_len = mem_len
def forward(self, idxs: torch.LongTensor, # (cs, bs)
target: torch.LongTensor, # (cs, bs)
memory: Optional[List[torch.FloatTensor]]=None,
) -> Dict[str, torch.Tensor]:
if memory is None:
memory: List[torch.FloatTensor] = self.init_memory(idxs.device)
assert len(memory) == len(self.layers) + 1
cur_seq, bs = idxs.size()
prev_seq = memory[0].size(0)
# Construct attention mask
dec_attn_mask = torch.triu(
torch.ones((cur_seq, cur_seq + prev_seq)),
diagonal=1 + prev_seq,
).byte()[...,None].to(idxs.device)
word_embs = self.drop(self.word_embs(idxs))
pos_idxs = torch.arange(cur_seq + prev_seq - 1, -1, -1.0, dtype=torch.float).to(word_embs.device)
pos_embs = self.drop(self.pos_embs(pos_idxs))
# Main part of forward pass
hidden_states = [word_embs]
layer_out = word_embs
for mem, layer in zip(memory, self.layers):
layer_out = layer(layer_out, pos_embs, self.u, self.v,
mask=dec_attn_mask, mems=mem)
hidden_states.append(layer_out)
logits = self.output_projection(self.drop(layer_out))
loss = self.loss_fn(logits.view(-1, logits.size(-1)), target.view(-1))
# Update memory
# Ensure the memory is treated as a constant
# and we do not back propagate through them
new_memory = self.update_memory(memory, hidden_states)
return {"loss": loss, "logits": logits, "memory": new_memory}
transformer = TransformerXL(1000, 4, 3, 32, 17, 71, mem_len=5)
```
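To make the indexing in `update_memory` concrete, here is a standalone sketch (toy values of my own, not the actual class) of the first update step with `mem_len=5` and a 7-step segment:

```python
import torch

mem_len = 5                                   # the configured self.mem_len
prev = torch.empty(0, 1)                      # empty memory before the first segment
hidden = torch.arange(7, dtype=torch.float).view(7, 1)  # states for a 7-step segment

end_idx = prev.size(0) + hidden.size(0)       # 0 + 7 = 7
beg_idx = max(0, end_idx - mem_len)           # max(0, 2) = 2
new_mem = torch.cat([prev, hidden], dim=0)[beg_idx:end_idx]
# keeps the 5 most recent states (rows 2..6)
```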
Again, let's feed some random inputs to confirm the model is working.
```
idxs = torch.randint(1000, (5, 9))
tgts = torch.randint(1000, (5, 9))
transformer(idxs, tgts)
```
# Training the Transformer XL
Now, let's move on to actually training the Transformer XL.
```
TESTING = True
```
We'll be using the following configurations.
```
class Config(dict):
def __init__(self, **kwargs):
super().__init__(**kwargs)
for k, v in kwargs.items():
setattr(self, k, v)
def set(self, key, val):
self[key] = val
setattr(self, key, val)
def update(self, dct):
for k, v in dct.items():
self.set(k, v)
# We will use prime numbers to ensure our implementation
# is actually correct
config = Config(
seed=101,
debug=False,
warmup_step=0,
# Check default params
min_lr=0.,
dropouta=0.,
clip=0.25,
log_interval=200,
eval_interval=100,
)
if TESTING:
config.update(dict(
debug=True,
lr=0.00025,
bs=8,
epochs=2,
max_step=10000, # shorten for testing
n_layers=4,
n_heads=3,
d_model=32,
d_head_inner=17,
d_ff_inner=71,
dropout=0.1,
train_bptt=33,
eval_bptt=41,
mem_len=41,
eval_mem_len=63,
))
else:
config.update(dict(
lr=0.0025,
bs=22,
epochs=2,
max_step=400000,
n_layers=12,
n_heads=8,
d_model=512,
d_head_inner=64,
d_ff_inner=2048,
dropout=0.1,
train_bptt=512,
eval_bptt=128,
mem_len=512,
eval_mem_len=2100,
))
```
### Preparing the Data Loader
Data loading for the Transformer XL is similar to data loading for an RNN-based language model but is quite different from standard data loading, so we'll go over it in detail.
Suppose we chunked the input into sequences of 4 words to feed into the model. Remember that the Transformer XL is stateful, meaning the computations of each minibatch are carried over to the next minibatch. For a minibatch size of 1, handling this is simple. We just chunk the input and feed it into the model like this:

Now, what happens if the `batch_size` is 2? We can't split the sentence like this, otherwise, we would be breaking the dependencies between segments.

The correct way to handle the corpus with a `batch_size`of 2 is to feed it like this.

Generalizing this, we first divide the corpus into `batch_size` length segments, then feed each segment piece by piece into the model. Let's go through an example. Suppose `batch_size` is 4 and our entire corpus looks like this:
pytorch is an amazing deep learning framework that makes nlp really easy
We want to make sure that the previous batch contains the previous segment at the same position.
In other words, assuming we fed the model one word at a time, we want to iterate over this sentence like this
Batch 1: pytorch amazing framework nlp
Batch 2: is deep that really
Batch 3: an learning makes easy
Notice that you can reconstruct the original sentence by reading from top to bottom, left to right
instead of left to right, top to bottom.
In reality, we feed the model a sequence of words for each batch. The length of this sequence is commonly referred to as the bptt (back propagation through time) length, since this is the maximum length the gradients propagate through in the sequence direction. With a bptt length of 2, for example, each
minibatch would be of shape `(bptt, batch_size)` and would look like
Batch 1: pytorch amazing framework nlp
is deep that really
Batch 2: an learning makes easy
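The layout described above can be sketched in a few lines of plain Python (my illustration, using the example sentence with `batch_size=4` and one word per step):

```python
# sketch of the corpus -> (batch_size, n_steps) layout described above
corpus = "pytorch is an amazing deep learning framework that makes nlp really easy".split()
batch_size = 4
n_steps = len(corpus) // batch_size  # 3

# divide the corpus into `batch_size` contiguous segments...
segments = [corpus[i * n_steps:(i + 1) * n_steps] for i in range(batch_size)]
# ...then read one position from every segment per batch
batches = [[seg[t] for seg in segments] for t in range(n_steps)]
# batches[0] == ['pytorch', 'amazing', 'framework', 'nlp']
# batches[1] == ['is', 'deep', 'that', 'really']
# batches[2] == ['an', 'learning', 'makes', 'easy']
```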
We can implement this in a dataloader like this:
```
from torch.utils import data
import math
class LMDataLoader(data.DataLoader):
def __init__(self, data: torch.LongTensor, batch_size: int, bptt: int,
device=torch.device("cpu")):
self.batch_size = batch_size
self.bptt = bptt
self.n_steps = data.size(0) // batch_size
# we reshape the data here so that we can index
# efficiently into it while training
self.data = (data[:self.n_steps * batch_size] # trim off any elements that don't fit cleanly
.view(batch_size, self.n_steps) #
.transpose(0, 1) #
.contiguous().to(device) # put on device as contiguous tensor
)
def __iter__(self):
for batch_start_idx in range(0, self.data.size(0) - 1, self.bptt):
batch_end_idx = min(batch_start_idx + self.bptt, self.data.size(0) - 1)
# TODO: What is `self.ext_len` in the original code?
batch_data = self.data[batch_start_idx:batch_end_idx]
target = self.data[batch_start_idx+1:batch_end_idx+1]
# we generate the sequence length as well for loss calculation later
yield batch_data, target, batch_end_idx - batch_start_idx
def __len__(self):
return math.ceil(self.data.size(0) / self.bptt)
```
Let's test this out
```
test_corpus = torch.arange(1000)
BS = 16
BPTT = 10
test_corpus[:BPTT]
loader = LMDataLoader(test_corpus, BS, BPTT)
b1, *_ = next(iter(loader))
b1.shape
b1
b1, b2, sl = next(iter(loader))
```
### Loading the actual data
We'll be using the Penn Treebank dataset to benchmark our model.
```
from pathlib import Path
DATASET = "penn"
DATA_DIR = Path("../data") / DATASET
```
We'll be using a utility vocabulary class borrowed directly from the Transformer XL repo to numericalize our inputs.
```
import sys; sys.path.append("../utils")
from vocabulary import Vocab
vocab = Vocab(special=["<eos>"], lower_case=True)
vocab.count_file(DATA_DIR / "train.txt")
vocab.count_file(DATA_DIR / "valid.txt")
vocab.count_file(DATA_DIR / "test.txt")
None
vocab.build_vocab()
train_dataset = vocab.encode_file(DATA_DIR / "train.txt", ordered=True, add_eos=True)
valid_dataset = vocab.encode_file(DATA_DIR / "valid.txt", ordered=True, add_eos=True)
test_dataset = vocab.encode_file(DATA_DIR / "test.txt", ordered=True, add_eos=True)
train_dataset[:50]
```
Now we can prepare the data loaders
```
train_iter = LMDataLoader(train_dataset, config.bs, config.train_bptt, device=device)
valid_iter = LMDataLoader(valid_dataset, config.bs, config.eval_bptt, device=device)
test_iter = LMDataLoader(test_dataset, config.bs, config.eval_bptt, device=device)
next(iter(train_iter))
```
### Initialization
We borrow the following initialization from the Transformer XL repo
```
def init_weight(weight):
nn.init.normal_(weight, 0.0, 0.02)
def init_bias(bias):
nn.init.constant_(bias, 0.0)
# Borrowed from the transformer XL repo
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Linear') != -1:
if hasattr(m, 'weight') and m.weight is not None:
init_weight(m.weight)
if hasattr(m, 'bias') and m.bias is not None:
init_bias(m.bias)
elif classname.find('Embedding') != -1:
if hasattr(m, 'weight'):
init_weight(m.weight)
elif classname.find('LayerNorm') != -1:
if hasattr(m, 'weight'):
nn.init.normal_(m.weight, 1.0, 0.02)
if hasattr(m, 'bias') and m.bias is not None:
init_bias(m.bias)
else:
if hasattr(m, 'u'):
init_weight(m.u)
if hasattr(m, 'v'):
init_weight(m.v)
```
No fancy initialization here. Since we have multiple Layer Normalization layers, we can get away with initializing everything using a simple normal distribution.
### Training Loop
The training loop is fairly standard. You can use any framework you like here including [ignite](https://github.com/pytorch/ignite), [allennlp](https://github.com/allenai/allennlp), and [fastai](https://github.com/fastai/fastai). We'll be writing our own loop to simplify things.
```
import torch.optim as optim
import math
import time
import os
from tqdm import tqdm
loss_change = []
val_loss_change = []
def train_epoch(
epoch: int,
model: nn.Module, train_loader: data.DataLoader,
val_loader: data.DataLoader,
optimizer: optim.Optimizer,
scheduler,
train_step_start=0.,
):
# Turn on training mode which enables dropout.
model.train()
mems = None
train_step = train_step_start
train_loss = 0
log_start_time = time.time()
best_val_loss = float("inf")
pbar = tqdm(train_loader, total=min(config.max_step - train_step_start, len(train_loader)))
for batch_idx, (data, target, seq_len) in enumerate(pbar):
model.zero_grad()
out_dict = model(data, target, memory=mems)
loss, mems = out_dict["loss"], out_dict["memory"]
loss.backward()
train_loss += loss.item()
loss_change.append(loss.item())
torch.nn.utils.clip_grad_norm_(model.parameters(), config.clip)
optimizer.step()
# step-wise learning rate annealing
train_step += 1
# linear warmup stage
if train_step < config.warmup_step:
curr_lr = config.lr * train_step / config.warmup_step
optimizer.param_groups[0]['lr'] = curr_lr
else:
scheduler.step(train_step)
if train_step % config.log_interval == 0:
cur_loss = train_loss / config.log_interval
elapsed = time.time() - log_start_time
log_str = '| epoch {:3d} step {:>8d} | lr {:.3g} ' \
'| loss {:5.2f}'.format(
epoch, train_step, optimizer.param_groups[0]['lr'], cur_loss)
log_str += ' | ppl {:9.3f}'.format(math.exp(cur_loss))
pbar.set_description(log_str)
train_loss = 0
log_start_time = time.time()
if train_step % config.eval_interval == 0:
val_loss = evaluate(model, val_loader)
val_loss_change.append(val_loss)
eval_start_time = time.time()
if train_step == config.max_step:
return train_step
return train_step
def train(model, train_loader, valid_loader):
optimizer = optim.Adam(model.parameters(), lr=config.lr)
total_steps = min(config.max_step, len(train_loader) * config.epochs)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer,
total_steps, eta_min=config.min_lr)
train_step_start = 0
for epoch in range(config.epochs):
if train_step_start >= config.max_step:
break
train_step_start = train_epoch(
epoch,
model,
train_iter,
valid_iter,
optimizer,
scheduler,
train_step_start,
)
```
Language models are normally evaluated by perplexity. Perplexity is the exponential of the cross-entropy loss, which makes it the reciprocal of the geometric mean of the per-word probabilities. If the language model assigns a probability of 0.01 to each word in each input sentence on average, it would receive a perplexity of 100.
Intuitively, perplexity represents how uncertain the model is about each word: a perplexity of 100 signifies that the model is, on average, as uncertain as if it were guessing uniformly among 100 candidate words.
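The relationship between cross-entropy and perplexity can be checked in a couple of lines:

```python
import math

# perplexity = exp(cross-entropy); for an average per-word probability p,
# the per-word cross-entropy is -log(p), so the perplexity is 1/p
p = 0.01
cross_entropy = -math.log(p)
perplexity = math.exp(cross_entropy)  # 1 / 0.01 = 100
```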
Keeping this in mind, we can write the evaluation code like this.
```
def evaluate(model: nn.Module, val_loader: data.DataLoader):
# Turn on evaluation mode which disables dropout.
model.eval()
model.reset_length(config.eval_bptt,
0, config.eval_mem_len+config.train_bptt-config.eval_bptt)
# Evaluation
total_len, total_loss = 0, 0.
with torch.no_grad():
mems = None
for i, (data, target, seq_len) in enumerate(val_loader):
out_dict = model(data, target, memory=mems)
loss, mems = out_dict["loss"], out_dict["memory"]
total_loss += seq_len * loss.float().item()
total_len += seq_len
# Switch back to the training mode
model.reset_length(config.train_bptt, 0, config.mem_len)
model.train()
return total_loss / total_len
def evaluate_final(model, val_loader):
model.eval()
total_len, total_loss = 0, 0.
start_time = time.time()
model.reset_length(config.eval_bptt, 0, config.eval_mem_len + config.train_bptt - config.eval_bptt)
with torch.no_grad():
mems = None
for i, (data, target, seq_len) in enumerate(val_loader):
out_dict = model(data, target, memory=mems)
loss, mems = out_dict["loss"], out_dict["memory"]
total_loss += seq_len * loss.item()
total_len += seq_len
total_time = time.time() - start_time
model.reset_length(config.train_bptt, 0, config.mem_len)
loss_val = total_loss / total_len
return {"loss": loss_val, "ppl": math.exp(loss_val)}
```
Now all we have to do is initialize the model and start training it!
```
transformer_xl = TransformerXL(
num_embeddings=len(vocab), n_layers=config.n_layers,
n_heads=config.n_heads, d_model=config.d_model,
d_head_inner=config.d_head_inner,
d_ff_inner=config.d_ff_inner,
dropout=config.dropout,
dropouta=config.dropouta,
seq_len=config.train_bptt,
mem_len=config.mem_len,
)
if torch.cuda.is_available(): transformer_xl.cuda()
transformer_xl.apply(weights_init);
train(
transformer_xl,
train_iter,
valid_iter,
)
evaluate_final(transformer_xl, valid_iter)
```
Let's observe the loss change.
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(loss_change)
plt.plot(val_loss_change)
```
Overall the loss is decreasing and everything looks nice!
```
from zipline.pipeline import Pipeline
from zipline.component.research import run_pipeline
from zipline.pipeline.factors import SimpleMovingAverage, AverageDollarVolume
```
## Classifiers
A classifier is a function from an asset and a moment in time to a [categorical output](https://en.wikipedia.org/wiki/Categorical_variable) such as a `string` or `integer` label:
```
F(asset, timestamp) -> category
```
An example of a classifier producing a string output is the region of a security (the original tutorial uses the exchange ID). To create this classifier, we'll have to import `Fundamentals.region` and use the [latest](https://www.zipline.com/tutorials/pipeline#lesson3) attribute to instantiate our classifier:
```
from zipline.pipeline.data import Fundamentals
# we use region here in place of the original exchange ID example
# is of type string, .latest returns a Classifier
region = Fundamentals.region.latest
```
Previously, we saw that the `latest` attribute produced an instance of a `Factor`. In this case, since the underlying data is of type `string`, `latest` produces a `Classifier`.
Similarly, a computation producing the latest Morningstar sector code of a security is a `Classifier`. In this case, the underlying type is an `int`, but the integer doesn't represent a numerical value (it's a category) so it produces a classifier. To get the latest sector code, we can use the built-in `Sector` classifier.
```
#from zipline.pipeline.classifiers.fundamentals import Sector
# sector lives in Fundamentals as its own bound column
cninfo_sector = Fundamentals.cninfo.sector.latest
```
When you want something like the original built-in `Sector` custom factor, use `Fundamentals.cninfo.sector.latest` directly.
### Building Filters from Classifiers
Classifiers can also be used to produce filters with methods like `isnull`, `eq`, and `startswith`. The full list of `Classifier` methods producing `Filters` can be found [here](https://www.zipline.com/help#zipline_pipeline_classifiers_Classifier).
As an example, if we wanted a filter to select for securities registered in Hunan (`'湖南'`), we can use the `eq` method of our `region` classifier.
```
region_filter = region.eq('湖南')
```
This filter will return `True` for securities having `'湖南'` as their most recent `region`.
When filtering on strings, the `has_substring` method is recommended over `eq`; it matches anywhere in the string, so it casts a wider net.
### Quantiles
Classifiers can also be produced from various `Factor` methods. The most general of these is the `quantiles` method which accepts a bin count as an argument. The `quantiles` method assigns a label from 0 to (bins - 1) to every non-NaN data point in the factor output and returns a `Classifier` with these labels. `NaN`s are labeled with -1. Aliases are available for [quartiles](https://www.zipline.com/help/#zipline_pipeline_factors_Factor_quartiles) (`quantiles(4)`), [quintiles](https://www.zipline.com/help/#zipline_pipeline_factors_Factor_quintiles) (`quantiles(5)`), and [deciles](https://www.zipline.com/help/#zipline_pipeline_factors_Factor_deciles) (`quantiles(10)`). As an example, this is what a filter for the top decile of a factor might look like:
```
dollar_volume_decile = AverageDollarVolume(window_length=10).deciles()
top_decile = (dollar_volume_decile.eq(9))
```
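Under the hood, `quantiles(bins)` is just rank-based binning of a factor's non-NaN values. A rough stdlib-only sketch of the labeling rule (the helper name `quantile_labels` is ours, not the pipeline implementation):

```python
import math

def quantile_labels(values, bins):
    # Rank the non-NaN values, then assign each a bin label from
    # 0 to bins-1 by rank; NaNs are labeled -1, as in the pipeline API.
    valid = sorted((v, i) for i, v in enumerate(values) if not math.isnan(v))
    labels = [-1] * len(values)
    for rank, (_, i) in enumerate(valid):
        labels[i] = min(rank * bins // len(valid), bins - 1)
    return labels
```

A "top decile" filter then corresponds to keeping the assets whose label equals `bins - 1`.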
Let's put each of our classifiers into a pipeline and run it to see what they look like.
```
def make_pipeline():
region = Fundamentals.region.latest
region_filter = region.eq('湖南')
cninfo_sector = Fundamentals.cninfo.sector.latest
dollar_volume_decile = AverageDollarVolume(window_length=10).deciles()
top_decile = (dollar_volume_decile.eq(9))
return Pipeline(
columns={
'region': region,
'cninfo_sector': cninfo_sector,
'dollar_volume_decile': dollar_volume_decile
},
screen=(region_filter & top_decile)
)
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
print('Number of securities that passed the filter: %d' % len(result))
result.head(5)
```
Classifiers are also useful for describing grouping keys for complex transformations on Factor outputs. Grouping operations such as [demean](https://www.zipline.com/help#zipline_pipeline_factors_Factor_demean) and [groupby](https://www.zipline.com/help#zipline_pipeline_factors_Factor_groupby) are outside the scope of this tutorial. A future tutorial will cover more advanced uses for classifiers.
In the next lesson, we'll look at the different datasets that we can use in pipeline.
# Classes and Objects Exercises
**Exercises by Orion Buske with thanks to Jon Pipitone.**
Orion is excited at the prospect of having sushi for lunch tomorrow, so this
seems like a perfect opportunity to practice object-oriented programming. My
apologies in advance for the abundant over-simplifications and high likelihood
of other mistakes.
Let's start with a little bit of background. Sushi is, to quote Wikipedia, "a
Japanese dish consisting of cooked vinegared rice which is commonly topped with
other ingredients, such as fish or other seafood, or put into rolls." The two
most popular forms of sushi, with a few sub-types, are:
1. **Nigiri**- a hand-formed ball of rice topped with something tasty
* Gunkanmaki - with a loose or soft topping that is held in place with a
strip of seaweed
* Temarizushi - where the topping is just pressed into a ball of rice
2. **Maki**- one or more tasty things rolled up in seaweed and rice
* Futomaki - seaweed on the outside, usually vegetarian
* Temaki - cone-shaped seaweed filled with rice and tasty things
* Uramaki - rice on the outside
Luckily, this sort of hierarchical structure lends itself nicely to Classes and
inheritance. If you have been in a sushi restaurant before, you know how often
there are typos in the English descriptions. We are going to write a simple
program that a sushi restaurant owner could (theoretically) use to create a
menu, complete with English translation and Japanese transliteration (but not
actual Japanese, forgive me).
## Exercise 1.1
To start, create an `Ingredient` class that inherits from `object`. The
constructor should accept two strings as arguments, `japanese` and `english`,
that correspond to the Japanese transliteration and English translation. The
`english` argument should be optional, and should default to the value of
`japanese` if not supplied (just like on menus, where some ingredients aren't
translated and you're left to wonder hopelessly). The value of both should be
saved as members of the Ingredient class.
```
class Ingredient():
def __init__(self, japanese, english = None):
assert japanese # check that japanese name is not None
# if english is not given, set self.english to the japanese name
# set self.japanese
```
## Exercise 1.2
Add to the `Ingredient` class two methods: `__str__(self)` and
`to_english(self)`. Both methods must return a string, and the `__str__` method
is what gets called when you print an object or "cast" it to a string. We will
have `__str__` return the Japanese name of the ingredient, and `to_english` will
return the English translation.
```
class Ingredient():
#### COPY & PASTE __init__() HERE ###
# returns japanese name
def __str__(self):
return
# returns english translation
def to_english(self):
return
```
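If you get stuck, one possible solution covering Exercises 1.1 and 1.2 might look like the sketch below (try it yourself first):

```python
class Ingredient:
    def __init__(self, japanese, english=None):
        assert japanese  # the Japanese name is required
        self.japanese = japanese
        # default to the Japanese name when no translation is given
        self.english = english if english is not None else japanese

    def __str__(self):
        return self.japanese

    def to_english(self):
        return self.english
```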
I have created (with data from http://www.bento.com/sushivoc.html) a file of
common sushi ingredients, [sushi.txt](OtherFiles/sushi.txt). The first column is the
transliteration, the second column is the translation, if available (I
selectively removed a few). `OtherFiles/small_sushi.txt` contains the first 7 lines.
## Exercise 1.3
Write a function, `read_ingredients`, that accepts an opened file object as its
argument and returns a list of `Ingredient` objects.
Try calling this function on `OtherFiles/small_sushi.txt` in an `if __name__ == '__main__':` block and printing the first few ingredients to make sure it works.
```
def read_ingredients(file):
return
# this code will only run if we are running this code cell
if __name__=='__main__':
in_file = open("OtherFiles/small_sushi.txt","r")
ingredients = read_ingredients(in_file)
for i in ingredients:
# compare output to file
print("Japanese: %s English: %s" % (i, i.to_english()))
```
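If you get stuck, a sketch of `read_ingredients` might look like this. It assumes whitespace-separated columns, as described above, and includes a minimal `Ingredient` stand-in so the sketch is self-contained:

```python
class Ingredient:
    # minimal stand-in for the Exercise 1 class
    def __init__(self, japanese, english=None):
        self.japanese = japanese
        self.english = english if english is not None else japanese

def read_ingredients(file):
    # one ingredient per line: "japanese [english]"
    ingredients = []
    for line in file:
        parts = line.split(None, 1)  # split off the first column only
        if not parts:
            continue  # skip blank lines
        english = parts[1].strip() if len(parts) > 1 else None
        ingredients.append(Ingredient(parts[0], english))
    return ingredients
```

Because the function only iterates over its argument, it works on an open file object or any iterable of lines.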
## Exercise 4
Now, create a `Sushi` class that inherits from `object`. It should have
a constructor that accepts a list of `Ingredient` objects.
```
# your code here
```
## Exercise 5
Next, add a `__str__(self)` method. This method must return a string. The
string should contain the Japanese representation of all the ingredients, but
the string itself should be in proper English so, for example, "buri", "buri and
tsubugai", and "buri, tsubugai, and kanpachi" are the correct way to print one,
two, or three ingredients, respectively. Do not just join the ingredients with
commas.
**Hint:**
Since all the ingredients are `Ingredient` objects, you can just turn them
into strings to get their Japanese representation.
**Hint:**
There are three cases: 1, 2, or 3+ items. It's okay to handle them separately.
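Such a grammatical-list helper might be sketched like this (the name `format_items` is ours):

```python
def format_items(items):
    # join strings grammatically: "a", "a and b", "a, b, and c"
    items = [str(i) for i in items]
    if len(items) == 1:
        return items[0]
    if len(items) == 2:
        return items[0] + " and " + items[1]
    return ", ".join(items[:-1]) + ", and " + items[-1]
```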
```
#### COPY & PASTE SUSHI CLASS HERE ###
# Add __str__() method for Sushi.
# test solution
if __name__ == '__main__':
ingredients1 = [Ingredient("buri")]
test_sushi1 = Sushi(ingredients1)
print(test_sushi1)
assert(str(test_sushi1) == "buri")
ingredients2 = [Ingredient("buri"), Ingredient("tsubugai")]
test_sushi2 = Sushi(ingredients2)
print(test_sushi2)
assert(str(test_sushi2) == "buri and tsubugai")
ingredients3 = [Ingredient("buri"), Ingredient("tsubugai"), Ingredient("kanpachi")]
test_sushi3 = Sushi(ingredients3)
print(test_sushi3)
assert(str(test_sushi3) == "buri, tsubugai, and kanpachi")
print("All tests passed.")
```
## Exercise 6
Next, add a loop to your `__main__` block that prompts the user for a menu item
and reads a line from `sys.stdin`. Provide a command for the user to quit (and
tell them what it is). For now, expect the user to just type one or more
ingredients on a line. You can use the built-in function `input()` for this.
You should then parse the ingredients, find the appropriate `Ingredient`
objects, create a `Sushi` object, and print it in response. For example:
Enter your sushi ('QUIT' to exit): unagi fugu ika sake
unagi, fugu, ika, and sake
You may need to review dictionaries for this exercise.
```
if __name__ == "__main__":
exit = False
while not exit:
# your code here
# remember to set exit to true to close the loop when the user says 'QUIT'
```
## Exercise 7
Now, add another method to the Sushi class, `to_english(self)`, which should
return the English translation for the Sushi object. Thus, it should return a
similar string as the `__str__` method, but with English ingredients instead of
Japanese ones. Do not call `__str__` and translate its string. Since you were
given the ingredients initially, just use their `to_english` methods, format
them correctly with commas and "and"s, and return that. Since both `to_english`
and `__str__` have to format their ingredients in the same way, you might want
to create a helper method that formats a list of ingredients (regardless of
their language).
You should now also print the result of calling `to_english` on the `Sushi`
objects you make at the user's request. Thus:
Enter your sushi ('QUIT' to exit): <strong>unagi fugu ika sake</strong>
unagi, fugu, ika, and sake
eel, fugu, squid, and salmon
```
#### COPY & PASTE SUSHI CLASS HERE ###
# Add to_english() method for Sushi.
if __name__ == "__main__":
exit = False
while not exit:
#### COPY & PASTE EXERCISE 6 HERE ###
```
## Exercise 8
Now let's add a `Maki` class that inherits from `Sushi`. Everything will be the
same, except instead of just printing the ingredients, we want to print
something more descriptive. Let's have its `__str__` and `to_english` methods
return a string of the form: `[ingredients] rolled in [rice] and [seaweed]`,
where `[ingredients]` is our grammatical list of ingredients, and `[rice]` and
`[seaweed]` are two other ingredients that will be consistent across all sushi
types, but you should be sure to use the correct language at the correct time,
like other ingredients. However, these ingredients won't be specified in the
list of ingredients; they are implied by the type of sushi! You can create
constants for these ingredients or handle them in some other way. I did the
following:
```
class Maki(Sushi):
RICE = Ingredient('su-meshi', 'sushi rice')
SEAWEED = Ingredient('nori', 'seaweed')
#### COPY & PASTE SUSHI CLASS HERE ###
```
## Exercise 9
Now, revise the `__main__` block so that if someone enters "unagi fugu" or
"unagi fugu sushi" we consider it to be general sushi and create an appropriate
`Sushi` object. However, if the last word was "maki", we should create a `Maki`
object instead. You should do this in a way that is very easy to extend, because
there are going to be many more of these. As a general rule, we'll expect the
user to enter a number of Japanese ingredients, possibly following by a sushi
type. If no sushi type is specified, we should default to the base class,
otherwise we should use the type the user specified.
**Hint:**
```
#### COPY & PASTE EXERCISE 7 MAIN HERE ###
# Some code to help:
# key is type name, values is class
# you can create a new object by using types[type]()
types = {'sushi': Sushi,
'maki': Maki}
if words[-1] not in types: # make a general sushi
Sushi(words)
else: # make a new object of the given class
```
## Exercise 10
Wonderful! We have a few more kinds of sushi to add, though. Futomaki, Temaki,
and Uramaki are all types of Maki, and all should inherit from it. Their
respective format strings should be of the following sort:
- **Futomaki:** "[ingredients] rolled in [rice] and [seaweed], with [seaweed]
facing out"
- **Temaki:** "cone of [seaweed] filled with [rice] and [ingredients]"
- **Uramaki:** "[ingredients] rolled in [seaweed] and [rice], with [rice]
facing out"
You may find the notion of a format string useful in this endeavor. For
instance, if you have the following string:
```
my_str = "{ingredients} rolled in {rice} and {seaweed}, with {seaweed} facing out"
```
Then you can do the following:
```
ingredient_str = "yummy things"
# RICE and SEAWEED are the Ingredient constants defined on the sushi class (e.g. Maki.RICE)
vals = {'rice': RICE, 'seaweed': SEAWEED, 'ingredients': ingredient_str}
print(my_str.format(**vals))
```
This is in fact quite powerful, because you can include rice and seaweed even if
they don't occur in the format string! Given this knowledge, you should try to
rewrite the `Sushi` base class so that it formats a member variable with a
dictionary of 'rice', 'seaweed', and 'ingredients'. Then, any child class need
only change their value of this member and everything works. For example:
```
class Sushi():
#### COPY & PASTE SUSHI METHODS HERE ####
# Add a field in __init__ that creates a dict with:
# {'rice': RICE, 'seaweed': SEAWEED, 'ingredients': ingredient_str}
# Add a description field:
# "{ingredients}"
# change __str__() method to format ingredients using the dictionary
class Maki(Sushi):
#### COPY & PASTE MAKI METHODS HERE ####
class Futomaki(Maki):
description = "{ingredients} rolled in {rice} and {seaweed}, with {seaweed} facing out"
class Temaki(Maki):
description = "cone of {seaweed} filled with {rice} and {ingredients}"
#### COPY & PASTE MAIN BLOCK HERE ####
```
Make sure this works for both Japanese and English strings, and make sure you've
added these new Maki types to your `__main__` block so that the following work:
Enter your sushi ('QUIT' to exit): <strong>unagi ohyo uramaki</strong>
unagi and ohyo rolled in nori and su-meshi, with su-meshi facing out
eel and halibut rolled in seaweed and sushi rice, with sushi rice facing out
Enter your sushi ('QUIT' to exit): <strong>ikura temaki</strong>
cone of nori filled with su-meshi and ikura
cone of seaweed filled with sushi rice and salmon roe
## Exercise 11
Almost done. One last set of sushi classes to add. Add a Nigiri class that
inherits from Sushi, and Gunkanmaki and Temarizushi classes that inherit from
Nigiri. Since Nigiri usually only has one topping, you should take advantage of
inheritance to make sure this is true for all such sushi by checking that this
is the case in Nigiri's `__init__` method. If you run into an error, raise an
InvalidSushiError (you will have to define one; Python's libraries aren't quite
that complete). Don't forget to call its parent's init method as well. Their
descriptions are as follows:
- **Nigiri:** "hand-formed [rice] topped with [ingredients]"
- **Gunkanmaki:** "[ingredient] on [rice] wrapped in a strip of [seaweed]"
- **Temarizushi:** "[ingredients] pressed into a ball of [rice]"
**Hint:**
```
# Nigiri class here that inherits from Sushi
# Gunkanmaki class here that inherits from Nigiri
# Temarizushi class here that inherits from Nigiri
class InvalidSushiError(Exception):
# Use the internet if you are unsure how to do this
pass
#### COPY & PASTE MAIN BLOCK HERE ####
```
As a final test, the following example should work:
Enter your sushi ('QUIT' to exit): <strong>fugu ohyo ika unagi</strong>
fugu, ohyo, ika, and unagi
fugu, halibut, squid, and eel
Enter your sushi ('QUIT' to exit): <strong>fugu ohyo ika unagi sushi</strong>
fugu, ohyo, ika, and unagi
fugu, halibut, squid, and eel
Enter your sushi ('QUIT' to exit): <strong>ika sake gunkanmaki</strong>
Traceback (most recent call last):
...
__main__.InvalidSushiError: Nigiri has only one topping
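A minimal sketch of the custom exception and the one-topping check is below; the class names follow the exercise, but the bodies are illustrative, not a full solution:

```python
class InvalidSushiError(Exception):
    """Raised when the ingredients don't form a valid sushi type."""

class Nigiri:
    def __init__(self, ingredients):
        # Nigiri usually has exactly one topping; enforce that here
        if len(ingredients) != 1:
            raise InvalidSushiError("Nigiri has only one topping")
        self.ingredients = ingredients
```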
# Python Crash Course Exercises
This is an optional exercise to test your understanding of Python Basics. If you find this extremely challenging, then you probably are not ready for the rest of this course yet and don't have enough programming experience to continue. I would suggest you take another course more geared towards complete beginners, such as [Complete Python Bootcamp]()
## Exercises
Answer the questions or complete the tasks outlined in bold below, use the specific method described if applicable.
**What is 7 to the power of 4?**
```
7**4
```
**Split this string:**
s = "Hi there Sam!"
**into a list.**
```
s = "Hi there Sam!"
s.split()
```
**Given the variables:**
planet = "Earth"
diameter = 12742
**Use `.format()` to print the following string:**
The diameter of Earth is 12742 kilometers.
```
planet = "Earth"
diameter = 12742
"The diameter of {0} is {1} kilometers.".format(planet, diameter)
```
**Given this nested list, use indexing to grab the word "hello":**
```
lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7]
print(lst[3][1][2])
```
**Given this nested dictionary, grab the word "hello". Be prepared, this will be annoying/tricky.**
```
d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}
print(d['k1'][3]['tricky'][3]['target'][3])
```
**What is the main difference between a tuple and a list?**
```
# Tuples are immutable; lists are mutable.
```
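To see the difference concretely:

```python
lst = [1, 2, 3]
lst[0] = 99            # lists are mutable: this works

tup = (1, 2, 3)
try:
    tup[0] = 99        # tuples are immutable: this raises TypeError
    mutated = True
except TypeError:
    mutated = False
```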
**Create a function that grabs the email website domain from a string in the form:**
user@domain.com
**So for example, passing "user@domain.com" would return: domain.com**
```
def get_domain(email):
    return email.split("@")[-1]

get_domain("user@domain.com")
```
**Create a basic function that returns True if the word 'dog' is contained in the input string. Don't worry about edge cases like a punctuation being attached to the word dog, but do account for capitalization.**
```
def contains_dog(sentence):
    return 'dog' in sentence.lower().split()

contains_dog("Dog dot dog is the word")
```
**Create a function that counts the number of times the word "dog" occurs in a string. Again ignore edge cases.**
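No answer cell appears here, so here is one possible sketch (the name `count_dog` is ours):

```python
def count_dog(sentence):
    # case-insensitive count of the standalone word "dog"
    return sentence.lower().split().count("dog")
```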
### Final Problem
**You are driving a little too fast, and a police officer stops you. Write a function
to return one of 3 possible results: "No Ticket", "Small Ticket", or "Big Ticket".
If your speed is 60 or less, the result is "No Ticket". If speed is between 61
and 80 inclusive, the result is "Small Ticket". If speed is 81 or more, the result is "Big Ticket". Unless it is your birthday (encoded as a boolean value in the parameters of the function) -- on your birthday, your speed can be 5 higher in all
cases.**
```
def ticket(speed, is_birthday=False):
    speed -= 5 if is_birthday else 0  # birthday allowance
    return "No Ticket" if speed <= 60 else "Small Ticket" if speed <= 80 else "Big Ticket"

ticket(81)
```
# Great job!
```
# reload packages
%load_ext autoreload
%autoreload 2
```
### Choose GPU (this may not be needed on your computer)
```
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=0
```
### load packages
```
from tfumap.umap import tfUMAP
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tqdm.autonotebook import tqdm
import umap
import pandas as pd
```
### Load dataset
```
from tfumap.paths import ensure_dir, MODEL_DIR, DATA_DIR
syllable_df = pd.read_pickle(DATA_DIR/'cassins'/ 'cassins.pickle')
#syllable_df= syllable_df[:1000]
syllable_df[:3]
top_labels = (
pd.DataFrame(
{i: [np.sum(syllable_df.labels.values == i)] for i in syllable_df.labels.unique()}
)
.T.sort_values(by=0, ascending=False)[:20]
.T
)
top_labels
syllable_df = syllable_df[syllable_df.labels.isin(top_labels.columns)]
syllable_df[:3]
syllable_df = syllable_df.reset_index()
syllable_df['subset'] = 'train'
syllable_df.loc[:1000, 'subset'] = 'valid'
syllable_df.loc[1000:1999, 'subset'] = 'test'
#syllable_df.loc[:100, 'subset'] = 'valid'
#syllable_df.loc[100:199, 'subset'] = 'test'
specs = np.array(list(syllable_df.spectrogram.values))
specs = np.array([np.concatenate([np.zeros((32,1)), i], axis=1) for i in tqdm(specs)])
specs.shape
syllable_df['spectrogram'] = syllable_df['spectrogram'].astype('object')
syllable_df['spectrogram'] = list(specs)
np.shape(syllable_df['spectrogram'].values[0])
len(syllable_df)
Y_train = np.array(list(syllable_df.labels.values[syllable_df.subset == 'train']))
Y_valid = np.array(list(syllable_df.labels.values[syllable_df.subset == 'valid']))
Y_test = np.array(list(syllable_df.labels.values[syllable_df.subset == 'test']))
X_train = np.array(list(syllable_df.spectrogram.values[syllable_df.subset == 'train'])) #/ 255.
X_valid = np.array(list(syllable_df.spectrogram.values[syllable_df.subset == 'valid']))# / 255.
X_test = np.array(list(syllable_df.spectrogram.values[syllable_df.subset == 'test'])) #/ 255.
X_train_flat = X_train.reshape((len(X_train), np.prod(np.shape(X_train)[1:])))
from sklearn.preprocessing import OrdinalEncoder
enc = OrdinalEncoder()
Y_train = enc.fit_transform([[i] for i in Y_train]).astype('int').flatten()
plt.matshow(X_train[10])
X_train[0].shape
```
### create models
```
dims = (32,32,1)
from tensorflow.keras.layers import (
Conv2D,
Reshape,
Bidirectional,
Dense,
RepeatVector,
TimeDistributed,
LSTM
)
n_components=64
#shape_final = (8,2,128)
encoder = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=dims),
Conv2D(
filters=32, kernel_size=3, strides=(2, 2), activation=tf.nn.leaky_relu, padding="same"
),
Conv2D(
filters=64, kernel_size=3, strides=(2, 2), activation=tf.nn.leaky_relu, padding="same"
),
Conv2D(
filters=128, kernel_size=3, strides=(2, 1), activation=tf.nn.leaky_relu, padding="same"
),
Conv2D(
filters=128, kernel_size=3, strides=(2, 1), activation=tf.nn.leaky_relu, padding="same"
),
Reshape(target_shape=(8, 2*128)),
Bidirectional(LSTM(units=100, activation="relu")),
Dense(units=512),
Dense(units=n_components),
])
decoder = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(n_components)),
Dense(units=512),
RepeatVector(8),
Bidirectional(LSTM(units=100, activation="relu", return_sequences=True)),
TimeDistributed(Dense(2*128)),
Reshape(target_shape=(8,2,128)),
tf.keras.layers.Conv2DTranspose(
filters=128, kernel_size=3, strides=(1, 2), padding="SAME", activation=tf.nn.leaky_relu
),
tf.keras.layers.Conv2DTranspose(
filters=128, kernel_size=3, strides=(1, 2), padding="SAME", activation=tf.nn.leaky_relu
),
tf.keras.layers.Conv2DTranspose(
filters=64, kernel_size=3, strides=(2, 2), padding="SAME", activation=tf.nn.leaky_relu
),
tf.keras.layers.Conv2DTranspose(
filters=32, kernel_size=3, strides=(2, 2), padding="SAME", activation=tf.nn.leaky_relu
),
tf.keras.layers.Conv2DTranspose(
filters=1, kernel_size=3, strides=(1, 1), padding="SAME", activation="tanh"
),
Reshape(target_shape=(32, 32, 1)),
])
```
### Prepare metric
```
from tfumap.dtw_mse import build_dtw_mse
dtw_metric = build_dtw_mse(X_train[0].shape)
```
### Create model and train
```
embedder = tfUMAP(
metric = dtw_metric,
direct_embedding=False,
verbose=True,
negative_sample_rate=5,
training_epochs=5,
encoder=encoder,
dims=dims,
n_components=n_components,
)
z = embedder.fit_transform(X_train_flat)
z = embedder.transform(X_train_flat)
```
### Plot model output
```
fig, ax = plt.subplots( figsize=(8, 8))
sc = ax.scatter(
z[:, 0],
z[:, 1],
c=Y_train[:len(z)],
cmap="tab20",
s=0.1,
alpha=0.5,
rasterized=True,
)
ax.axis('equal')
ax.set_title("UMAP in Tensorflow embedding", fontsize=20)
plt.colorbar(sc, ax=ax);
```
### View loss
```
from tfumap.umap import retrieve_tensors
import seaborn as sns
loss_df = retrieve_tensors(embedder.tensorboard_logdir)
loss_df[:3]
ax = sns.lineplot(x="step", y="val", hue="group", data=loss_df[loss_df.variable=='umap_loss'])
ax.set_xscale('log')
ax.set_yscale('log')
```
### Save output
```
from tfumap.paths import ensure_dir, MODEL_DIR
output_dir = MODEL_DIR/'projections'/ 'cassins' / '64'/ 'seq2seq'
ensure_dir(output_dir)
embedder.save(output_dir)
loss_df.to_pickle(output_dir / 'loss_df.pickle')
np.save(output_dir / 'z.npy', z)
```
# cuDF Cheat Sheets sample code
(c) 2020 NVIDIA, Blazing SQL
Distributed under Apache License 2.0
### Imports
```
import cudf
import numpy as np
import pandas as pd
```
### Sample DataFrame
```
# pandas
pandas_df = pd.DataFrame(
[
(39, 6.88, np.datetime64('2020-10-08T12:12:01'), np.timedelta64(14378,'s'), 'C', 'D', 'data'
, 'RAPIDS.ai is a suite of open-source libraries that allow you to run your end to end data science and analytics pipelines on GPUs.')
, (11, 4.21, None, None , 'A', 'D', 'cuDF'
, 'cuDF is a Python GPU DataFrame (built on the Apache Arrow columnar memory format)')
, (31, 4.71, np.datetime64('2020-10-10T09:26:43'), np.timedelta64(12909,'s'), 'U', 'D', 'memory'
, 'cuDF allows for loading, joining, aggregating, filtering, and otherwise manipulating tabular data using a DataFrame style API.')
, (40, 0.93, np.datetime64('2020-10-11T17:10:00'), np.timedelta64(10466,'s'), 'P', 'B', 'tabular'
, '''If your workflow is fast enough on a single GPU or your data comfortably fits in memory on
a single GPU, you would want to use cuDF.''')
, (33, 9.26, np.datetime64('2020-10-15T10:58:02'), np.timedelta64(35558,'s'), 'O', 'D', 'parallel'
, '''If you want to distribute your workflow across multiple GPUs or have more data than you can fit
in memory on a single GPU you would want to use Dask-cuDF''')
, (42, 4.21, np.datetime64('2020-10-01T10:02:23'), np.timedelta64(20480,'s'), 'U', 'C', 'GPUs'
, 'BlazingSQL provides a high-performance distributed SQL engine in Python')
, (36, 3.01, np.datetime64('2020-09-30T14:36:26'), np.timedelta64(24409,'s'), 'T', 'D', None
, 'BlazingSQL is built on the RAPIDS GPU data science ecosystem')
, (38, 6.44, np.datetime64('2020-10-10T08:34:36'), np.timedelta64(90171,'s'), 'X', 'B', 'csv'
, 'BlazingSQL lets you ETL raw data directly into GPU memory as a GPU DataFrame (GDF)')
, (17, 5.28, np.datetime64('2020-10-09T08:34:40'), np.timedelta64(30532,'s'), 'P', 'D', 'dataframes'
, 'Dask is a flexible library for parallel computing in Python')
, (10, 8.28, np.datetime64('2020-10-03T03:31:21'), np.timedelta64(23552,'s'), 'W', 'B', 'python'
, None)
]
, columns = ['num', 'float', 'datetime', 'timedelta', 'char', 'category', 'word', 'string']
)
pandas_df['category'] = pandas_df['category'].astype('category')
# cudf
cudf_df = cudf.DataFrame(
[
(39, 6.88, np.datetime64('2020-10-08T12:12:01'), np.timedelta64(14378,'s'), 'C', 'D', 'data'
, 'RAPIDS.ai is a suite of open-source libraries that allow you to run your end to end data science and analytics pipelines on GPUs.')
, (11, 4.21, None, None , 'A', 'D', 'cuDF'
, 'cuDF is a Python GPU DataFrame (built on the Apache Arrow columnar memory format)')
, (31, 4.71, np.datetime64('2020-10-10T09:26:43'), np.timedelta64(12909,'s'), 'U', 'D', 'memory'
, 'cuDF allows for loading, joining, aggregating, filtering, and otherwise manipulating tabular data using a DataFrame style API.')
, (40, 0.93, np.datetime64('2020-10-11T17:10:00'), np.timedelta64(10466,'s'), 'P', 'B', 'tabular'
, '''If your workflow is fast enough on a single GPU or your data comfortably fits in memory on
a single GPU, you would want to use cuDF.''')
, (33, 9.26, np.datetime64('2020-10-15T10:58:02'), np.timedelta64(35558,'s'), 'O', 'D', 'parallel'
, '''If you want to distribute your workflow across multiple GPUs or have more data than you can fit
in memory on a single GPU you would want to use Dask-cuDF''')
, (42, 4.21, np.datetime64('2020-10-01T10:02:23'), np.timedelta64(20480,'s'), 'U', 'C', 'GPUs'
, 'BlazingSQL provides a high-performance distributed SQL engine in Python')
, (36, 3.01, np.datetime64('2020-09-30T14:36:26'), np.timedelta64(24409,'s'), 'T', 'D', None
, 'BlazingSQL is built on the RAPIDS GPU data science ecosystem')
, (38, 6.44, np.datetime64('2020-10-10T08:34:36'), np.timedelta64(90171,'s'), 'X', 'B', 'csv'
, 'BlazingSQL lets you ETL raw data directly into GPU memory as a GPU DataFrame (GDF)')
, (17, 5.28, np.datetime64('2020-10-09T08:34:40'), np.timedelta64(30532,'s'), 'P', 'D', 'dataframes'
, 'Dask is a flexible library for parallel computing in Python')
, (10, 8.28, np.datetime64('2020-10-03T03:31:21'), np.timedelta64(23552,'s'), 'W', 'B', 'python'
, None)
]
, columns = ['num', 'float', 'datetime', 'timedelta', 'char', 'category', 'word', 'string']
)
cudf_df['category'] = cudf_df['category'].astype('category')
```
---
# Object creation
---
```
# pandas
pd.DataFrame([1,2,3,4], columns=['ints'])
# cudf
cudf.DataFrame([1,2,3,4], columns=['ints'])
# pandas
pd.DataFrame({'ints': [1,2,3,4], 'strings': ['a','b','c',None]})
# cudf
cudf.DataFrame({'ints': [1,2,3,4], 'strings': ['a','b','c',None]})
# pandas
pandas_df_sample = pd.DataFrame()
pandas_df_sample['ints'] = [1,2,3,4]
pandas_df_sample['strings'] = ['a','b','c',None]
pandas_df_sample
# cudf
cudf_df_sample = cudf.DataFrame()
cudf_df_sample['ints'] = [1,2,3,4]
cudf_df_sample['strings'] = ['a','b','c',None]
cudf_df_sample
# pandas
pd.DataFrame([
(1, 'a')
, (2, 'b')
, (3, 'c')
, (4, None)
], columns=['ints', 'strings'])
# cudf
cudf.DataFrame([
(1, 'a')
, (2, 'b')
, (3, 'c')
, (4, None)
], columns=['ints', 'strings'])
```
## <span style="color:blue">DataFrame</span>
#### cudf.core.dataframe.DataFrame.from_pandas()
```
cudf.DataFrame.from_pandas(pandas_df)
```
#### cudf.core.dataframe.DataFrame.from_records()
```
# pandas
pd.DataFrame.from_records(pandas_df[['num', 'float']].to_records())
# cudf
cudf.DataFrame.from_records(cudf_df[['num', 'float']].to_records())
```
#### cudf.core.dataframe.DataFrame.to_csv()
```
# pandas
pandas_df.to_csv('../results/pandas_df_with_index.csv')
# cudf
cudf_df.to_csv('../results/cudf_df_with_index.csv')
# pandas
pandas_df.to_csv('../results/pandas_df_no_index_no_header.csv', index=False, header=False)
# cudf
cudf_df.to_csv('../results/cudf_df_no_index_no_header.csv', index=False, header=False)
# pandas
pandas_df.to_csv('../results/pandas_df_tab_sep.tsv', sep='\t')
# cudf
cudf_df.to_csv('../results/cudf_df_tab_sep.tsv', sep='\t')
# pandas
with open('../results/pandas_df_buffer.csv', 'w') as f:
pandas_df.to_csv(f)
# cudf
with open('../results/cudf_df_buffer.csv', 'w') as f:
cudf_df.to_csv(f)
```
#### cudf.core.dataframe.DataFrame.to_json()
```
# pandas
pandas_df.to_json('../results/pandas_df_default.json')
# cudf
cudf_df.to_json('../results/cudf_df_default.json')
# pandas
pandas_df.to_json('../results/pandas_df_records.json', orient='records', lines=True)
# cudf
cudf_df.to_json('../results/cudf_df_records.json', orient='records', lines=True)
# pandas
pandas_df.to_json('../results/pandas_df_iso_dttm.json', date_format='iso')
# cudf
cudf_df.to_json('../results/cudf_df_iso_dttm.json', date_format='iso')
```
#### cudf.core.dataframe.DataFrame.to_pandas()
```
cudf_df.to_pandas()
```
#### cudf.io.csv.read_csv()
```
# pandas
pandas_df_csv_read = pd.read_csv('../results/pandas_df_with_index.csv')
pandas_df_csv_read.head()
# cudf
cudf_df_csv_read = cudf.read_csv('../results/cudf_df_with_index.csv')
cudf_df_csv_read.head()
# pandas
pandas_df_csv_read = pd.read_csv('../results/pandas_df_with_index.csv', nrows=2)
pandas_df_csv_read.head()
# cudf
cudf_df_csv_read = cudf.read_csv('../results/cudf_df_with_index.csv', nrows=2)
cudf_df_csv_read.head()
# pandas
pandas_df_csv_read = pd.read_csv(
'../results/pandas_df_with_index.csv'
, skiprows=1
, names=['Index', 'num', 'float', 'datetime', 'timedelta', 'char',
'category', 'word', 'string'])
pandas_df_csv_read.head()
# cudf
cudf_df_csv_read = cudf.read_csv(
'../results/cudf_df_with_index.csv'
, skiprows=1
, names=['Index', 'num', 'float', 'datetime', 'timedelta', 'char',
'category', 'word', 'string'])
cudf_df_csv_read.head()
# pandas
pandas_df_csv_read = pd.read_csv('../results/pandas_df_tab_sep.tsv', delimiter='\t', usecols=['num', 'float'])
pandas_df_csv_read.head()
# cudf
cudf_df_csv_read = cudf.read_csv('../results/cudf_df_tab_sep.tsv', delimiter='\t', usecols=['num', 'float'])
cudf_df_csv_read.head()
```
#### cudf.io.json.read_json()
```
# pandas
pandas_df_json_read = pd.read_json('../results/pandas_df_default.json')
pandas_df_json_read['timedelta'] = pd.to_timedelta(pandas_df_json_read['timedelta'])
pandas_df_json_read.head()
# cudf
cudf_df_json_read = cudf.read_json('../results/cudf_df_default.json')
cudf_df_json_read['timedelta'] = cudf_df_json_read['timedelta'].astype('timedelta64[ms]')
cudf_df_json_read.head()
# pandas
pandas_df_json_read = pd.read_json('../results/pandas_df_records.json', lines=True)
pandas_df_json_read.head()
# cudf
cudf_df_json_read = cudf.read_json('../results/cudf_df_records.json', lines=True, engine='cudf')
cudf_df_json_read.head()
# pandas
import pandas as fmwrk
df = fmwrk.read_csv('../results/pandas_df_with_index.csv')
desc_stats = df.describe()
(
    df
    .groupby(by='category')
    .agg({'num': 'sum', 'char': 'count'})
    .reset_index()
)
df[['category', 'num', 'char']]
# cudf
import cudf as fmwrk
df = fmwrk.read_csv('../results/pandas_df_with_index.csv')
desc_stats = df.describe()
(
    df
    .groupby(by='category')
    .agg({'num': 'sum', 'char': 'count'})
    .reset_index()
)
```
| github_jupyter |
# Multiclass Reductions
#### by Chiyuan Zhang and Sören Sonnenburg
This notebook demonstrates the reduction of a <a href="http://en.wikipedia.org/wiki/Multiclass_classification">multiclass problem</a> into binary ones using Shogun. Here, we will describe the built-in <a href="http://en.wikipedia.org/wiki/Multiclass_classification#one_vs_all">One-vs-Rest</a>, One-vs-One and Error Correcting Output Codes strategies.
In `SHOGUN`, the strategies for reducing a multiclass problem to binary
classification problems are described by an instance of
`CMulticlassStrategy`. A multiclass strategy describes
* How to train the multiclass machine as a number of binary machines?
* How many binary machines are needed?
* For each binary machine, what subset of the training samples is used, and how are they colored? In multiclass problems, we use *coloring* to refer to partitioning the classes into two groups: $+1$ and $-1$, or black and white, or any other meaningful names.
* How to combine the prediction results of binary machines into the final multiclass prediction?
The user can derive from the virtual class [CMulticlassStrategy](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMulticlassStrategy.html) to
implement a customized multiclass strategy. But usually the built-in strategies
are enough for general problems. We will describe the built-in *One-vs-Rest*,
*One-vs-One* and *Error-Correcting Output Codes* strategies in this tutorial.
The basic routine to use a multiclass machine with reduction to binary problems
in shogun is to create a generic multiclass machine and then assign a particular
multiclass strategy and a base binary machine.
## One-vs-Rest and One-vs-One
The *One-vs-Rest* strategy is implemented in
`CMulticlassOneVsRestStrategy`. As indicated by the name, this
strategy reduces a $K$-class problem to $K$ binary sub-problems. For the $k$-th
problem, where $k\in\{1,\ldots,K\}$, the samples from class $k$ are colored as
$+1$, and the samples from other classes are colored as $-1$. The multiclass
prediction is given as
$$
f(x) = \operatorname*{argmax}_{k\in\{1,\ldots,K\}}\; f_k(x)
$$
where $f_k(x)$ is the prediction of the $k$-th binary machine.
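The argmax rule above can be sketched in plain NumPy (a hypothetical illustration with made-up decision values, not Shogun code):

```python
import numpy as np

# Hypothetical decision values f_k(x) of K=3 binary machines
# for 4 samples (rows: samples, columns: classes).
scores = np.array([[ 0.9, -0.2, -1.1],
                   [-0.5,  1.3, -0.7],
                   [-1.0, -0.3,  0.4],
                   [ 0.2,  0.1, -0.9]])

# One-vs-Rest prediction: argmax over the K per-class outputs.
pred = scores.argmax(axis=1)
print(pred)  # → [0 1 2 0]
```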
The One-vs-Rest strategy is easy to implement yet produces excellent performance
in many cases. In one interesting paper, [Rifkin, R. M. and Klautau, A. (2004). *In defense of one-vs-all classification*. Journal of Machine
Learning Research, 5:101–141](http://jmlr.org/papers/v5/rifkin04a.html), it was shown that the
One-vs-Rest strategy can be
> as accurate as any other approach, assuming that the underlying binary
classifiers are well-tuned regularized classifiers such as support vector
machines.
Implemented in [CMulticlassOneVsOneStrategy](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMulticlassOneVsOneStrategy.html), the
*One-vs-One* strategy is another simple and intuitive
strategy: it basically produces one binary problem for each pair of classes. So there will be $\binom{K}{2}$ binary problems. At prediction time, the
outputs of all the binary classifiers are collected to vote for the $K$
classes. The class with the highest vote becomes the final prediction.
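The voting step can be sketched as follows (a NumPy illustration assuming each pairwise machine directly reports which of its two classes won; the winners below are made up):

```python
import numpy as np
from itertools import combinations

K = 4

# Hypothetical winners of the binom(K, 2) = 6 pairwise machines for one sample;
# machine (i, j) can only predict class i or class j.
pairwise_winners = {(0, 1): 1, (0, 2): 2, (0, 3): 0,
                    (1, 2): 2, (1, 3): 1, (2, 3): 2}

votes = np.zeros(K, dtype=int)
for pair in combinations(range(K), 2):
    votes[pairwise_winners[pair]] += 1

prediction = votes.argmax()   # class with the highest vote wins
print(votes, prediction)      # → [1 2 3 0] 2
```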
Compared with the One-vs-Rest strategy, the One-vs-One strategy is usually more
costly to train and evaluate because more binary machines are used.
In the following, we demonstrate how to use `SHOGUN`'s One-vs-Rest and
One-vs-One multiclass learning strategy on the USPS dataset. For
demonstration, we randomly select 200 samples from each class for training and 200
samples from each class for testing.
The [CLibLinear](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLibLinear.html) is used as the base binary classifier in a [CLinearMulticlassMachine](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearMulticlassMachine.html), with One-vs-Rest and One-vs-One strategies. The running time and performance (on my machine) are reported below:
First we load the data and initialize random splitting:
```
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
import numpy as np
from numpy import random
from scipy.io import loadmat
mat = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat'))
Xall = mat['data']
#normalize examples to have norm one
Xall = Xall / np.sqrt(sum(Xall**2,0))
Yall = mat['label'].squeeze()
# map from 1..10 to 0..9, since shogun
# requires multiclass labels to be
# 0, 1, ..., K-1
Yall = Yall - 1
N_train_per_class = 200
N_test_per_class = 200
N_class = 10
# to make the results reproducable
random.seed(0)
# index for subsampling
index = np.zeros((N_train_per_class+N_test_per_class, N_class), 'i')
for k in range(N_class):
    Ik = (Yall == k).nonzero()[0]  # index for samples of class k
    I_subsample = random.permutation(len(Ik))[:N_train_per_class+N_test_per_class]
    index[:, k] = Ik[I_subsample]
idx_train = index[:N_train_per_class, :].reshape(N_train_per_class*N_class)
idx_test = index[N_train_per_class:, :].reshape(N_test_per_class*N_class)
random.shuffle(idx_train)
random.shuffle(idx_test)
```
import SHOGUN components and convert features into SHOGUN format:
```
import shogun as sg
from shogun import features, MulticlassLabels
from shogun import LibLinear, L2R_L2LOSS_SVC, LinearMulticlassMachine
from shogun import MulticlassOneVsOneStrategy, MulticlassOneVsRestStrategy
from shogun import MulticlassAccuracy
import time
feats_train = features(Xall[:, idx_train])
feats_test = features(Xall[:, idx_test])
lab_train = MulticlassLabels(Yall[idx_train].astype('d'))
lab_test = MulticlassLabels(Yall[idx_test].astype('d'))
```
define a helper function to train and evaluate multiclass machine given a strategy:
```
def evaluate(strategy, C):
    bin_machine = LibLinear(L2R_L2LOSS_SVC)
    bin_machine.put('use_bias', True)
    bin_machine.put('C1', C)
    bin_machine.put('C2', C)
    mc_machine = LinearMulticlassMachine(strategy, feats_train, bin_machine, lab_train)
    t_begin = time.clock()
    mc_machine.train()
    t_train = time.clock() - t_begin
    t_begin = time.clock()
    pred_test = mc_machine.apply_multiclass(feats_test)
    t_test = time.clock() - t_begin
    evaluator = MulticlassAccuracy()
    acc = evaluator.evaluate(pred_test, lab_test)
    print("training time: %.4f" % t_train)
    print("testing time: %.4f" % t_test)
    print("accuracy: %.4f" % acc)
```
Test on One-vs-Rest and One-vs-One strategies.
```
print("\nOne-vs-Rest")
print("="*60)
evaluate(MulticlassOneVsRestStrategy(), 5.0)
print("\nOne-vs-One")
print("="*60)
evaluate(MulticlassOneVsOneStrategy(), 2.0)
```
LibLinear also has a true multiclass SVM implementation, so it is worthwhile to compare training time and accuracy with the above reduction schemes:
```
from shogun import MulticlassLibLinear
mcsvm = MulticlassLibLinear(5.0, feats_train, lab_train)
mcsvm.put('m_use_bias', True)
t_begin = time.clock()
mcsvm.train(feats_train)
t_train = time.clock() - t_begin
t_begin = time.clock()
pred_test = mcsvm.apply_multiclass(feats_test)
t_test = time.clock() - t_begin
evaluator = MulticlassAccuracy()
acc = evaluator.evaluate(pred_test, lab_test)
print("training time: %.4f" % t_train)
print("testing time: %.4f" % t_test)
print("accuracy: %.4f" % acc)
```
As you can see, the performance of all three is very much the same, though the multiclass SVM is a bit faster to train here. Usually, training the true multiclass SVM is much slower than the one-vs-rest approach. It should be noted that the classification performance of one-vs-one is known to be slightly superior to one-vs-rest, since the machines do not have to be properly scaled against each other as in the one-vs-rest approach. However, with a larger number of classes one-vs-one quickly becomes prohibitive, so one-vs-rest (or one of the other schemes presented below) is the only suitable approach.
## Error-Correcting Output Codes
*Error-Correcting Output Codes* (ECOC) is a
generalization of the One-vs-Rest and One-vs-One strategies. For example, we
can represent the One-vs-Rest strategy with the following $K\times K$ coding
matrix, or a codebook:
$$
\begin{bmatrix}
+1 & -1 & -1 & \ldots & -1 & -1 \\\\
-1 & +1 & -1 & \ldots & -1 & -1\\\\
-1 & -1 & +1 & \ldots & -1 & -1\\\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\\\
-1 & -1 & -1 & \ldots & +1 & -1 \\\\
-1 & -1 & -1 & \ldots & -1 & +1
\end{bmatrix}
$$
Denoting the codebook by $B$, there is one column of the codebook associated with
each of the $K$ classes. For example, the code for class $1$ is
$[+1,-1,-1,\ldots,-1]$. Each row of the codebook corresponds to a binary
coloring of all the $K$ classes. For example, in the first row, the class $1$
is colored as $+1$, while the rest of the classes are all colored as $-1$.
Associated with each row, there is a binary classifier trained according to the
coloring. For example, the binary classifier associated with the first row is
trained by treating all the examples of class $1$ as positive examples, and all
the examples of the rest of the classes as negative examples.
In this special case, there are $K$ rows in the codebook. The number of rows in
the codebook is usually called the *code length*. As we can see, this
codebook exactly describes how the One-vs-Rest strategy trains the binary
sub-machines.
```
OvR=-np.ones((10,10))
fill_diagonal(OvR, +1)
_=gray()
_=imshow(OvR, interpolation='nearest')
_=gca().set_xticks([])
_=gca().set_yticks([])
```
A further generalization is to allow $0$-values in the codebook. A $0$ for a
class $k$ in a row means we ignore (the examples of) class $k$ when training
the binary classifiers associated with this row. With this generalization, we
can also easily describe the One-vs-One strategy with a $\binom{K}{2}\times K$
codebook:
$$
\begin{bmatrix}
+1 & -1 & 0 & \ldots & 0 & 0 \\\\
+1 & 0 & -1 & \ldots & 0 & 0 \\\\
\vdots & \vdots & \vdots & \ddots & \vdots & 0 \\\\
+1 & 0 & 0 & \ldots & -1 & 0 \\\\
0 & +1 & -1 & \ldots & 0 & 0 \\\\
\vdots & \vdots & \vdots & & \vdots & \vdots \\\\
0 & 0 & 0 & \ldots & +1 & -1
\end{bmatrix}
$$
Here each of the $\binom{K}{2}$ rows describes a binary classifier trained with
a pair of classes. The resultant binary classifiers will be identical to those
described by the One-vs-One strategy.
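The $\binom{K}{2}\times K$ codebook above can be generated mechanically; a plain-NumPy sketch (not a Shogun encoder):

```python
import numpy as np
from itertools import combinations

K = 4
pairs = list(combinations(range(K), 2))  # binom(K, 2) = 6 rows
B = np.zeros((len(pairs), K), dtype=int)
for row, (i, j) in enumerate(pairs):
    B[row, i] = +1   # class i is colored +1 in this row
    B[row, j] = -1   # class j is colored -1; all other classes stay 0
print(B)
```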
Since $0$ is allowed in the codebook to ignore some classes, such
codebooks are usually called *sparse codebooks*, while codebooks with
only $+1$ and $-1$ are usually called *dense codebooks*.
In the general case, we can specify any code length and fill the codebook
arbitrarily. However, some rules should be followed:
* Each row must describe a *valid* binary coloring. In other words, both $+1$ and $-1$ should appear at least once in each row. Otherwise a binary classifier cannot be obtained for this row.
* It is good to avoid duplicated rows. There is generally no harm in having duplicated rows, but the resultant binary classifiers are completely identical provided the training algorithm for the binary classifiers is deterministic, so this is a waste of computational resources.
* Negated rows also count as duplicates. Simply inverting the sign of a code row does not produce a "new" code row, because the resultant binary classifier is simply the negation of the classifier associated with the original row.
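These rules can be checked programmatically. A minimal validator sketch (illustrative only, not part of Shogun):

```python
import numpy as np

def check_codebook(B):
    """Flag rows that violate the coloring rules above."""
    B = np.asarray(B)
    problems = []
    for r, row in enumerate(B):
        # Rule 1: each row needs both +1 and -1.
        if not ((row == +1).any() and (row == -1).any()):
            problems.append((r, 'row is not a valid binary coloring'))
    seen = {}
    for r, row in enumerate(B):
        nz = row[row != 0]
        if nz.size == 0:
            continue  # already flagged as an invalid coloring
        # Rules 2-3: a row and its negation define the same binary problem,
        # so canonicalize the sign before checking for duplicates.
        key = tuple(row) if nz[0] > 0 else tuple(-row)
        if key in seen:
            problems.append((r, 'duplicates row %d (possibly negated)' % seen[key]))
        else:
            seen[key] = r
    return problems

ovr = -np.ones((3, 3), dtype=int)
np.fill_diagonal(ovr, +1)
print(check_codebook(ovr))  # → []  (a valid One-vs-Rest codebook)
print(check_codebook([[1, 1, 1], [1, -1, 0], [-1, 1, 0]]))
```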
Though you can certainly generate your own codebook, it is usually easier to
use the `SHOGUN` built-in procedures to generate codebook automatically. There
are various codebook generators (called *encoders*) in `SHOGUN`. However,
before describing those encoders in detail, let us note that a codebook
only describes how the sub-machines are trained. But we still need a
way to specify how the binary classification results of the sub-machines can be
combined to get a multiclass classification result.
Review the codebook again: corresponding to each class, there is a column. We
call the codebook column the (binary) *code* for that class. For a new
sample $x$, by applying the binary classifiers associated with each row
successively, we get a prediction vector of the same length as the
*code*. Deciding the multiclass label from the prediction vector (called
*decoding*) can be done by minimizing the *distance* between the
codes and the prediction vector. Different *decoders* define different
choices of distance functions. For this reason, it is usually good to make the
mutual distance between codes of different classes large. In this way, even
though several binary classifiers make wrong predictions, the distance of
the resultant prediction vector to the code of the *true* class is likely
to be still smaller than the distance to other classes. So correct results can
still be obtained even when some of the binary classifiers make mistakes. This
is the reason for the name *Error-Correcting Output Codes*.
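Distance-based decoding can be sketched with a Hamming-style decoder over a hypothetical length-6 codebook whose class codes have a minimum mutual distance of 3, so any single binary mistake is corrected (Shogun's decoder classes implement more refined distance functions):

```python
import numpy as np

# Hypothetical length-6 codebook for K=4 classes (columns are class codes).
B = np.array([[+1, -1, +1, -1],
              [+1, -1, +1, -1],
              [+1, -1, +1, -1],
              [+1, -1, -1, +1],
              [+1, -1, -1, +1],
              [+1, -1, -1, +1]])

# Sign predictions of the 6 binary machines for a sample of class 2;
# the first machine makes a mistake.
pred = np.array([-1, +1, +1, -1, -1, -1])

# Hamming-style decoding: pick the class whose code is closest
# to the prediction vector.
dist = (B != pred[:, None]).sum(axis=0)
print(dist, dist.argmin())  # → [4 2 1 5] 2  (the error is corrected)
```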
In `SHOGUN`, encoding schemes are described by subclasses of
[CECOCEncoder](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CECOCEncoder.html), while decoding schemes are described by subclasses
of [CECOCDecoder](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CECOCDecoder.html). Theoretically, any combinations of
encoder-decoder pairs can be used. Here we will introduce several common
encoder/decoders in shogun.
* [CECOCRandomDenseEncoder](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CECOCRandomDenseEncoder.html): This encoder generates random dense ($+1$/$-1$) codebooks and chooses the one with the largest *minimum mutual distance* among the classes. The recommended code length for this encoder is $10\log K$.
* [CECOCRandomSparseEncoder](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CECOCRandomSparseEncoder.html): This is similar to the random dense encoder, except that sparse ($+1$/$-1$/$0$) codebooks are generated. The recommended code length for this encoder is $15\log K$.
* [CECOCOVREncoder](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CECOCOVREncoder.html), [CECOCOVOEncoder](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CECOCOVOEncoder.html): These two encoders mimic the One-vs-Rest and One-vs-One strategies respectively. They are implemented mainly for demonstrative purposes. When suitable decoders are used, the results will be equivalent to the corresponding strategies, respectively.
Using an ECOC strategy in `SHOGUN` is similar to ordinary one-vs-rest or one-vs-one. You need to choose an encoder and a decoder, and then construct an `ECOCStrategy`, as demonstrated below:
```
from shogun import ECOCStrategy, ECOCRandomDenseEncoder, ECOCLLBDecoder
print("\nRandom Dense Encoder + Margin Loss based Decoder")
print("="*60)
evaluate(ECOCStrategy(ECOCRandomDenseEncoder(), ECOCLLBDecoder()), 2.0)
```
### Using a kernel multiclass machine
Expanding on the idea of creating a generic multiclass machine and then assigning a particular multiclass strategy and a base binary machine, one can also use the [KernelMulticlassMachine](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKernelMulticlassMachine.html) with a kernel of choice.
Here we will use a [GaussianKernel](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianKernel.html) with [LibSVM](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLibSVM.html) as the classifier.
All we have to do is define a new helper evaluate function with the features defined as in the above examples.
```
def evaluate_multiclass_kernel(strategy):
    from shogun import KernelMulticlassMachine, LibSVM
    width = 2.1
    epsilon = 1e-5
    kernel = sg.kernel("GaussianKernel", log_width=np.log(width))
    kernel.init(feats_train, feats_train)
    classifier = LibSVM()
    classifier.put('epsilon', epsilon)
    mc_machine = KernelMulticlassMachine(strategy, kernel, classifier, lab_train)
    t_begin = time.clock()
    mc_machine.train()
    t_train = time.clock() - t_begin
    t_begin = time.clock()
    pred_test = mc_machine.apply_multiclass(feats_test)
    t_test = time.clock() - t_begin
    evaluator = MulticlassAccuracy()
    acc = evaluator.evaluate(pred_test, lab_test)
    print("training time: %.4f" % t_train)
    print("testing time: %.4f" % t_test)
    print("accuracy: %.4f" % acc)

print("\nOne-vs-Rest")
print("="*60)
evaluate_multiclass_kernel(MulticlassOneVsRestStrategy())
```
So we have seen that we can classify multiclass samples using a base binary machine. If we dwell on this a bit more, we can easily spot the intuition behind this.
The [MulticlassOneVsRestStrategy](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMulticlassOneVsRestStrategy.html) classifies one class against the rest of the classes. This is done for each and every class by training a separate classifier for it. So we will have a total of $k$ classifiers, where $k$ is the number of classes.
Just to see this in action, let's create some data using the Gaussian mixture model class ([GMM](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMM.html)) from which we sample the data points. Four different classes are created and plotted.
```
from shogun import *
from numpy import *
num=1000;
dist=1.0;
gmm=GMM(4)
gmm.set_nth_mean(array([-dist*4,-dist]),0)
gmm.set_nth_mean(array([-dist*4,dist*4]),1)
gmm.set_nth_mean(array([dist*4,dist*4]),2)
gmm.set_nth_mean(array([dist*4,-dist]),3)
gmm.set_nth_cov(array([[1.0,0.0],[0.0,1.0]]),0)
gmm.set_nth_cov(array([[1.0,0.0],[0.0,1.0]]),1)
gmm.set_nth_cov(array([[1.0,0.0],[0.0,1.0]]),2)
gmm.set_nth_cov(array([[1.0,0.0],[0.0,1.0]]),3)
gmm.put('m_coefficients', array([1.0,0.0,0.0,0.0]))
x0=array([gmm.sample() for i in range(num)]).T
x0t=array([gmm.sample() for i in range(num)]).T
gmm.put('m_coefficients', array([0.0,1.0,0.0,0.0]))
x1=array([gmm.sample() for i in range(num)]).T
x1t=array([gmm.sample() for i in range(num)]).T
gmm.put('m_coefficients', array([0.0,0.0,1.0,0.0]))
x2=array([gmm.sample() for i in range(num)]).T
x2t=array([gmm.sample() for i in range(num)]).T
gmm.put('m_coefficients', array([0.0,0.0,0.0,1.0]))
x3=array([gmm.sample() for i in range(num)]).T
x3t=array([gmm.sample() for i in range(num)]).T
traindata=concatenate((x0,x1,x2,x3), axis=1)
testdata=concatenate((x0t,x1t,x2t,x3t), axis=1)
l0 = array([0.0 for i in range(num)])
l1 = array([1.0 for i in range(num)])
l2 = array([2.0 for i in range(num)])
l3 = array([3.0 for i in range(num)])
trainlab=concatenate((l0,l1,l2,l3))
testlab=concatenate((l0,l1,l2,l3))
_=jet()
_=scatter(traindata[0,:], traindata[1,:], c=trainlab, s=100)
```
Now that we have the data ready, let's convert it to shogun format features.
```
feats_tr=features(traindata)
labels=MulticlassLabels(trainlab)
```
The [KernelMulticlassMachine](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKernelMulticlassMachine.html) is used with [LibSVM](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLibSVM.html) as the classifier, just as in the above example.
Now we have four different classes, so as explained above we will have four classifiers, which in shogun terms are submachines.
We can see the outputs of two of the four individual submachines (specified by the index) and of the main machine. The plots clearly show how the submachines classify each class as if it were a binary classification problem, and this provides the base for the whole multiclass classification.
```
from shogun import KernelMulticlassMachine, LibSVM
width=2.1
epsilon=1e-5
kernel=sg.kernel("GaussianKernel", log_width=np.log(width))
kernel.init(feats_tr, feats_tr)
classifier=LibSVM()
classifier.put('epsilon', epsilon)
mc_machine=KernelMulticlassMachine(MulticlassOneVsRestStrategy(), kernel, classifier, labels)
mc_machine.train()
size=100
x1=linspace(-10, 10, size)
x2=linspace(-10, 10, size)
x, y=meshgrid(x1, x2)
grid=features(array((ravel(x), ravel(y)))) #test features
out=mc_machine.apply_multiclass(grid) #main output
z=out.get_labels().reshape((size, size))
sub_out0=mc_machine.get_submachine_outputs(0) #first submachine
sub_out1=mc_machine.get_submachine_outputs(1) #second submachine
z0=sub_out0.get_labels().reshape((size, size))
z1=sub_out1.get_labels().reshape((size, size))
figure(figsize=(20,5))
subplot(131, title="Submachine 1")
c0=pcolor(x, y, z0)
_=contour(x, y, z0, linewidths=1, colors='black', hold=True)
_=colorbar(c0)
subplot(132, title="Submachine 2")
c1=pcolor(x, y, z1)
_=contour(x, y, z1, linewidths=1, colors='black', hold=True)
_=colorbar(c1)
subplot(133, title="Multiclass output")
c2=pcolor(x, y, z)
_=contour(x, y, z, linewidths=1, colors='black', hold=True)
_=colorbar(c2)
```
The `MulticlassOneVsOneStrategy` is a bit different, using a larger number of machines.
Since it trains a classifier for each pair of classes, we will have a total of $\frac{k(k-1)}{2}$ submachines for $k$ classes. Binary classification then takes place on each pair.
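As a quick check of that count for the four classes used above (`math.comb` requires Python 3.8+):

```python
from math import comb

k = 4                        # number of classes in the GMM example
n_submachines = comb(k, 2)   # k*(k-1)/2 pairwise machines
print(n_submachines)         # → 6
```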
Let's visualize this in a plot.
```
C=2.0
bin_machine = LibLinear(L2R_L2LOSS_SVC)
bin_machine.put('use_bias', True)
bin_machine.put('C1', C)
bin_machine.put('C2', C)
mc_machine1 = LinearMulticlassMachine(MulticlassOneVsOneStrategy(), feats_tr, bin_machine, labels)
mc_machine1.train()
out1=mc_machine1.apply_multiclass(grid) #main output
z1=out1.get_labels().reshape((size, size))
sub_out10=mc_machine1.get_submachine_outputs(0) #first submachine
sub_out11=mc_machine1.get_submachine_outputs(1) #second submachine
z10=sub_out10.get_labels().reshape((size, size))
z11=sub_out11.get_labels().reshape((size, size))
no_color=array([5.0 for i in range(num)])
figure(figsize=(20,5))
subplot(131, title="Submachine 1") #plot submachine and traindata
c10=pcolor(x, y, z10)
_=contour(x, y, z10, linewidths=1, colors='black', hold=True)
lab1=concatenate((l0,l1,no_color,no_color))
_=scatter(traindata[0,:], traindata[1,:], c=lab1, cmap='gray', s=100)
_=colorbar(c10)
subplot(132, title="Submachine 2")
c11=pcolor(x, y, z11)
_=contour(x, y, z11, linewidths=1, colors='black', hold=True)
lab2=concatenate((l0, no_color, l2, no_color))
_=scatter(traindata[0,:], traindata[1,:], c=lab2, cmap="gray", s=100)
_=colorbar(c11)
subplot(133, title="Multiclass output")
c12=pcolor(x, y, z1)
_=contour(x, y, z1, linewidths=1, colors='black', hold=True)
_=colorbar(c12)
```
The first two plots help us visualize how the submachines do binary classification for each pair. The class with maximum votes is chosen for test samples, leading to a refined multiclass output as in the last plot.
| github_jupyter |
# Sequence to Sequence Classification by RNN
- Creating the **data pipeline** with `tf.data`
- Preprocessing word sequences (variable input sequence length) with padding via a user-defined function (`pad_seq`)
- Using `tf.nn.embedding_lookup` to get the vectors of tokens (e.g. word, character)
- Training **many-to-many classification** with `tf.contrib.seq2seq.sequence_loss`
- Masking invalid tokens with `tf.sequence_mask`
- Creating the model as **Class**
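The masking step listed above (`tf.sequence_mask`) zeroes the loss at padded positions. Its semantics can be emulated in NumPy (an illustrative sketch, not TensorFlow code):

```python
import numpy as np

def sequence_mask(lengths, maxlen):
    """NumPy emulation of tf.sequence_mask: True at valid (non-pad) positions."""
    return np.arange(maxlen)[None, :] < np.asarray(lengths)[:, None]

lengths = [3, 4, 7]   # true lengths of three padded sequences
mask = sequence_mask(lengths, maxlen=10).astype(np.float32)
print(mask[0])  # → [1. 1. 1. 0. 0. 0. 0. 0. 0. 0.]
```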
```
import os
import sys
import time
import string
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
slim = tf.contrib.slim
rnn = tf.contrib.rnn
sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
```
## Prepare example data
```
sentences = [['I', 'feel', 'hungry'],
['You', 'are', 'a', 'genius'],
['tensorflow', 'is', 'very', 'difficult'],
['tensorflow', 'is', 'a', 'framework', 'for', 'deep', 'learning'],
['tensorflow', 'is', 'very', 'fast', 'changing']]
pos = [['pronoun', 'verb', 'adjective'],
['pronoun', 'verb', 'preposition', 'noun'],
['noun', 'verb', 'adverb', 'adjective'],
['noun', 'verb', 'determiner', 'noun', 'preposition', 'adjective', 'noun'],
['noun', 'verb', 'adverb', 'adjective', 'verb']]
# word dictionary
bag_of_words = []
for sentence in sentences:
    bag_of_words += sentence
bag_of_words = list(set(bag_of_words))
bag_of_words.sort()
bag_of_words = ['<pad>'] + bag_of_words
word2idx = {word : idx for idx, word in enumerate(bag_of_words)} # word to index
idx2word = [word for word in bag_of_words] # index to word
#print("word2idx: {}".format(word2idx))
word2idx
#print("idx2word: {}".format(idx2word))
idx2word
# pos dictionary
bag_of_pos = []
for item in pos:
    bag_of_pos += item
bag_of_pos = list(set(bag_of_pos))
bag_of_pos.sort()
bag_of_pos = ['<pad>'] + bag_of_pos
print("bag_of_pos: {}".format(bag_of_pos))
pos2idx = {pos : idx for idx, pos in enumerate(bag_of_pos)} # pos to index
idx2pos = [pos for pos in bag_of_pos] # index to pos
#print("pos2idx: {}".format(pos2idx))
pos2idx
#print("idx2pos: {}".format(idx2pos))
idx2pos
```
### Create pad_seq function
```
def pad_seq(sequences, max_length, dic):
    """Pad sequences
    Pads the special character '<pad>' onto the end of each sentence up to max_length
    Args:
        sequences (list of characters): input data
        max_length (int): max length for padding
        dic (dictionary): char to index
    Returns:
        seq_indices (2-rank np.array): padded index sequences
        seq_length (1-rank np.array): sequence lengths of all data
    """
    seq_length, seq_indices = [], []
    for sequence in sequences:
        seq_length.append(len(sequence))
        seq_idx = [dic.get(char) for char in sequence]
        seq_idx += (max_length - len(seq_idx)) * [dic.get('<pad>')]  # 0 is the idx of the meaningless token "<pad>"
        seq_indices.append(seq_idx)
    return np.array(seq_indices), np.array(seq_length)
```
### Pre-process data
```
max_length = 10
X_indices, X_length = pad_seq(sequences=sentences, max_length=max_length, dic=word2idx)
print("X_indices")
print(X_indices)
print("X_length")
print(X_length)
y_string = np.array([item + ['<pad>'] * (max_length - len(item)) for item in pos])
print(y_string)
y = np.array([list(map(lambda el : pos2idx.get(el), item)) for item in y_string])
print(y)
```
### Define PosRNN
```
class PosRNN:
    def __init__(self, seq_indices, seq_length, labels, num_classes, hidden_dim, max_length, word2idx):
        # Data pipeline
        with tf.variable_scope('input_layer'):
            self._seq_indices = seq_indices
            self._seq_length = seq_length
            self._labels = labels
            one_hot = tf.eye(len(word2idx), dtype=tf.float32)
            self._one_hot = tf.get_variable(name='one_hot_embedding',
                                            initializer=one_hot,
                                            trainable=False)  # because the embedding vectors will not be trained
            self._seq_embeddings = tf.nn.embedding_lookup(params=self._one_hot,
                                                          ids=self._seq_indices)
        # RNN cell (many to many)
        with tf.variable_scope('rnn_cell'):
            cell = rnn.BasicRNNCell(num_units=hidden_dim)
            score_cell = rnn.OutputProjectionWrapper(cell=cell,
                                                     output_size=num_classes)
            self._outputs, _ = tf.nn.dynamic_rnn(cell=score_cell, inputs=self._seq_embeddings,
                                                 sequence_length=self._seq_length,
                                                 dtype=tf.float32)
        with tf.variable_scope('seq2seq_loss'):
            masks = tf.sequence_mask(lengths=self._seq_length, maxlen=max_length, dtype=tf.float32)
            self.seq2seq_loss = tf.contrib.seq2seq.sequence_loss(logits=self._outputs,
                                                                 targets=self._labels,
                                                                 weights=masks)
        with tf.variable_scope('prediction'):
            self._prediction = tf.argmax(input=self._outputs,
                                         axis=2, output_type=tf.int32)

    def predict(self, sess, seq_indices, seq_length):
        feed_dict = {self._seq_indices: seq_indices, self._seq_length: seq_length}
        return sess.run(self._prediction, feed_dict=feed_dict)
```
### Create a model of PosRNN
```
# hyper-parameter
num_classes = len(idx2pos)
learning_rate = .003
batch_size = 2
max_epochs = 100
```
### Set up dataset with `tf.data`
#### create input pipeline with `tf.data.Dataset`
```
## create data pipeline with tf.data
train_dataset = tf.data.Dataset.from_tensor_slices((X_indices, X_length, y))
train_dataset = train_dataset.shuffle(buffer_size = 100)
train_dataset = train_dataset.batch(batch_size = batch_size)
print(train_dataset)
```
#### Define Iterator
```
train_iterator = train_dataset.make_initializable_iterator()
seq_indices, seq_length, labels = train_iterator.get_next()
pos_rnn = PosRNN(seq_indices=seq_indices, seq_length=seq_length,
labels=labels, num_classes=num_classes,
hidden_dim=16, max_length=max_length,
word2idx=word2idx)
```
### Create training op and train model
```
## create training op
optimizer = tf.train.AdamOptimizer(learning_rate)
train_op = optimizer.minimize(pos_rnn.seq2seq_loss)
```
### `tf.Session()` and train
```
sess = tf.Session()
sess.run(tf.global_variables_initializer())
loss_history = []
step = 0
for epochs in range(max_epochs):
    start_time = time.time()
    sess.run(train_iterator.initializer)
    avg_loss = []
    while True:
        try:
            _, loss_ = sess.run([train_op, pos_rnn.seq2seq_loss])
            avg_loss.append(loss_)
            step += 1
        except tf.errors.OutOfRangeError:
            # print("End of dataset")  # ==> "End of dataset"
            break
    avg_loss_ = np.mean(avg_loss)
    loss_history.append(avg_loss_)
    duration = time.time() - start_time
    examples_per_sec = batch_size / float(duration)
    print("epochs: {}, step: {}, loss: {:g}, ({:.2f} examples/sec; {:.3f} sec/batch)".format(epochs+1, step, avg_loss_, examples_per_sec, duration))
plt.plot(loss_history, label='train')
y_pred = pos_rnn.predict(sess=sess, seq_indices=X_indices, seq_length=X_length)
print(y_pred)
result_str = []
for example in y_pred:
    result_str.append([idx2pos[idx] for idx in example])
for examples in zip(y_string, result_str):
    print("     Label: ", ' '.join(examples[0]))
    print("Prediction: ", ' '.join(examples[1]))
```
| github_jupyter |
## 1.- Imports, setup and configure
### 1.1.- Imports
Bring in the different dependencies from installed standard modules
```
import sys
import time
import glob
import os
import numpy as np
import pandas as pd
from scipy.spatial import distance
from scipy import signal
```
Now the ad-hoc modules created for this project. We use the Jupyter magics `%load_ext autoreload` and `%autoreload` set to 2. The imported classes are located in the `../scripts` folder of our volume
```
import sys
sys.path.insert(0, '../../scripts/asset_processor/')
# load the autoreload extension
%load_ext autoreload
# Set extension to reload modules every time before executing code
%autoreload 2
from video_asset_processor import VideoAssetProcessor
# Configure pandas to display enough information
pd.options.display.max_rows = 999
pd.options.display.max_columns = 999
```
### 1.2.- Custom functions
Add the necessary custom functions for the notebook
```
def read_metric_log(path, metric):
    if metric == 'vmaf':
        with open(path) as f:
            for line in f:
                if '= ' in line:
                    return float(line.split('= ')[-1])
    if metric == 'ms-ssim':
        ms_ssim_df = pd.read_csv(path)
        return ms_ssim_df['ms-ssim'].mean()
```
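A quick sanity check of the VMAF branch of `read_metric_log`, run against a synthetic log file (the log contents here are a hypothetical example; the parser only assumes that the aggregate score sits on a `= <value>` line):

```python
import os
import tempfile

def read_vmaf_log(path):
    # same parsing logic as the 'vmaf' branch of read_metric_log above
    with open(path) as f:
        for line in f:
            if '= ' in line:
                return float(line.split('= ')[-1])

# Hypothetical VMAF log: the aggregate score appears on a "= <value>" line.
log_path = os.path.join(tempfile.mkdtemp(), 'vmaf.log')
with open(log_path, 'w') as f:
    f.write('Exec FPS: 24.0\nVMAF score = 93.712034\n')

print(read_vmaf_log(log_path))  # → 93.712034
```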
### 1.3.- Configure the inputs
Set up the needed parameters to pass to the functions
```
# Enumerate the list of metrics to extract
# -hash_euclidean
# -hash_cosine
# -hash_hamming
# -temporal_difference (this creates two output columns):
# -temporal_difference_euclidean
# -temporal_difference_cosine
metrics_list = ['temporal_difference', 'temporal_canny', 'temporal_histogram_distance', 'temporal_cross_correlation', 'temporal_dct']
renditions_folders = [
'1080p',
'1080p_watermark',
'1080p_flip_vertical',
'1080p_rotate_90_clockwise',
'1080p_vignette',
'1080p_black_and_white',
'1080p_low_bitrate_4',
'720p',
'720p_vignette',
'720p_black_and_white',
'720p_low_bitrate_4',
'720p_watermark',
'720p_flip_vertical',
'720p_rotate_90_clockwise',
'480p',
'480p_watermark',
'480p_vignette',
'480p_black_and_white',
'480p_low_bitrate_4',
'480p_flip_vertical',
'480p_rotate_90_clockwise',
'360p',
'360p_watermark',
'360p_vignette',
'360p_black_and_white',
'360p_low_bitrate_4',
'360p_flip_vertical',
'360p_rotate_90_clockwise',
'240p',
'240p_watermark',
'240p_vignette',
'240p_black_and_white',
'240p_low_bitrate_4',
'240p_flip_vertical',
'240p_rotate_90_clockwise',
'144p',
'144p_watermark',
'144p_vignette',
'144p_black_and_white',
'144p_low_bitrate_4',
'144p_flip_vertical',
'144p_rotate_90_clockwise',
]
originals_path = '../../data/{}/'
```
## 2.- Iterate all assets in the data set and extract their metrics
```
metrics_df = pd.DataFrame()
asset_files = os.listdir(originals_path.format('1080p'))  # original assets live in the 1080p folder
number_assets = len(asset_files)
print('Number of assets: {}'.format(number_assets))
count = 0
for original_asset in glob.iglob(originals_path.format('1080p') + '**', recursive=False):
    count += 1
    if os.path.isfile(original_asset):  # skip directories
        print('Processing asset {} of {}: {}'.format(count, number_assets, original_asset))
        start_time = time.time()
        # Build the list of rendition paths that share this asset's file name
        renditions_list = []
        for folder in renditions_folders:
            rendition_folder = originals_path.format(folder)
            renditions_list.append(rendition_folder + os.path.basename(original_asset))
        asset_processor = VideoAssetProcessor(original_asset, renditions_list, metrics_list, False)
        asset_metrics_df = asset_processor.process()
        metrics_df = pd.concat([metrics_df, asset_metrics_df], axis=0, sort=False).reset_index(inplace=False)
        if 'level_0' in metrics_df.columns:
            metrics_df = metrics_df.drop(['level_0'], axis=1)
        # Persist intermediate results in case the iteration is interrupted
        metrics_df.to_csv('../output/metrics-tmp.csv')
        elapsed_time = time.time() - start_time
        print('Elapsed time:', elapsed_time)
        print('***************************')
```
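The `level_0` handling above comes from how `reset_index` flattens a row index: unnamed `MultiIndex` levels become `level_0`, `level_1`, … columns. A small sketch with dummy data (the index shape is an assumption about what `VideoAssetProcessor.process` returns, based on how the columns are used later):

```
import pandas as pd

# Two frames indexed by (original path, rendition path) tuples
idx = pd.MultiIndex.from_tuples([('../../data/1080p/a.mp4', '../../data/720p/a.mp4')])
df1 = pd.DataFrame({'temporal_dct': [0.1]}, index=idx)
df2 = pd.DataFrame({'temporal_dct': [0.2]}, index=idx)

combined = pd.concat([df1, df2], axis=0, sort=False).reset_index()
# The unnamed index levels surface as 'level_0' and 'level_1' columns
print(combined.columns.tolist())
```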
## 3.- Extract aggregated metrics values to a pandas DataFrame
Once we have iterated through every asset in the dataset, it is time to dump the collected metrics to a pandas DataFrame.
But first, the metrics computed by external scripts (namely VMAF and MS-SSIM) need to be collected. Check out the Readme.md to see how to extract those metrics.
```
metrics_path = '../data-analysis/output'
real_path = os.path.realpath(metrics_path)
extra_metrics = ['vmaf', 'ms-ssim']
for index, row in metrics_df.iterrows():
    for metric in extra_metrics:
        # Reconstruct the log file path from the asset name and the rendition
        asset_name = row['level_0'].split('/')[-1].split('.')[0]
        attack = row['level_1'].split('/')[3]
        dimension = attack.split('_')[0].replace('p', '')
        attack_name = attack.replace('{}p'.format(dimension), dimension)
        log_path = '{}/{}/{}/{}/{}_{}.log'.format(metrics_path, metric, attack_name, asset_name, asset_name, dimension)
        if os.path.isfile(log_path):
            print('ADDING:', log_path)
            metric_value = read_metric_log(log_path, metric)
            metrics_df.at[index, metric] = metric_value
        else:
            print('Path not found:', log_path)
metrics_df.head()
metrics_df.to_csv('../output/metrics.csv')
```
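The path reconstruction in the cell above can be traced with a hypothetical row (the file names below are illustrative, not from the dataset):

```
# Hypothetical row values, shaped like the level_0/level_1 paths produced earlier
level_0 = '../../data/1080p/big_buck_bunny.mp4'   # original asset
level_1 = '../../data/720p_watermark/big_buck_bunny.mp4'  # rendition

asset_name = level_0.split('/')[-1].split('.')[0]           # 'big_buck_bunny'
attack = level_1.split('/')[3]                              # '720p_watermark'
dimension = attack.split('_')[0].replace('p', '')           # '720'
attack_name = attack.replace('{}p'.format(dimension), dimension)  # '720_watermark'

log_path = '{}/{}/{}/{}/{}_{}.log'.format(
    '../data-analysis/output', 'vmaf', attack_name, asset_name, asset_name, dimension)
print(log_path)
```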