# Node classification with Node2Vec using Stellargraph components
<table><tr><td>Run the latest release of this notebook:</td><td><a href="https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/node-classification/keras-node2vec-node-classification.ipynb" alt="Open In Binder" target="_parent"><img src="https://mybinder.org/badge_logo.svg"/></a></td><td><a href="https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/node-classification/keras-node2vec-node-classification.ipynb" alt="Open In Colab" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg"/></a></td></tr></table>
This example demonstrates how to perform node classification with Node2Vec using StellarGraph components. It uses the Keras implementation of Node2Vec available in StellarGraph instead of the reference implementation provided by ``gensim``.
<a name="refs"></a>
**References**
[1] Node2Vec: Scalable Feature Learning for Networks. A. Grover, J. Leskovec. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2016. ([link](https://snap.stanford.edu/node2vec/))
[2] Distributed representations of words and phrases and their compositionality. T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. In Advances in Neural Information Processing Systems (NIPS), pp. 3111-3119, 2013. ([link](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf))
[3] word2vec Parameter Learning Explained. X. Rong. arXiv preprint arXiv:1411.2738. 2014 Nov 11. ([link](https://arxiv.org/pdf/1411.2738.pdf))
## Introduction
Following word2vec [2,3], for each (``target``,``context``) node pair $(v_i,v_j)$ collected from random walks, we learn the representation for the target node $v_i$ by using it to predict the existence of context node $v_j$, with the following three-layer neural network.

Node $v_i$'s representation in the hidden layer is obtained by multiplying $v_i$'s one-hot representation in the input layer with the input-to-hidden weight matrix $W_{in}$, which is equivalent to looking up the $i$th row of $W_{in}$. The output layer produces the existence probability of each node conditioned on $v_i$: $v_i$'s hidden-layer representation is multiplied with the hidden-to-output weight matrix $W_{out}$, followed by a softmax activation. To capture the ``target-context`` relation between $v_i$ and $v_j$, we need to maximize the probability $\mathrm{P}(v_j|v_i)$. However, computing $\mathrm{P}(v_j|v_i)$ is time consuming, as it involves multiplying $v_i$'s hidden-layer representation with the entire hidden-to-output weight matrix $W_{out}$.
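The lookup-then-softmax forward pass can be sketched in NumPy; the sizes and names below are illustrative, not the StellarGraph implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, emb_size = 6, 4                       # illustrative sizes
W_in = rng.normal(size=(n_nodes, emb_size))    # input-to-hidden weights
W_out = rng.normal(size=(emb_size, n_nodes))   # hidden-to-output weights

i = 2                        # target node v_i
hidden = W_in[i]             # one-hot @ W_in is the same as a row lookup
logits = hidden @ W_out      # one score per candidate context node
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: P(v_j | v_i)
```

Note that the softmax denominator sums over every node in the graph, which is exactly the expensive step that negative sampling avoids.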
To speed up the computation, we adopt the negative sampling strategy [2,3]. For each (``target``, ``context``) node pair, we sample a negative node $v_k$ that is not a context of $v_i$. Instead of multiplying $v_i$'s hidden-layer representation with the full hidden-to-output weight matrix $W_{out}$ and applying a softmax activation, we only calculate the dot product between $v_i$'s hidden-layer representation and the $j$th and $k$th columns of $W_{out}$, each followed by a sigmoid activation. According to [3], the original objective of maximizing $\mathrm{P}(v_j|v_i)$ can be approximated by minimizing the cross entropy between $v_j$'s and $v_k$'s outputs and their ground-truth labels (1 for $v_j$ and 0 for $v_k$).
Following [2,3], we denote the rows of the input-to-hidden weight matrix $W_{in}$ as ``input_embeddings`` and the columns of the hidden-to-output weight matrix $W_{out}$ as ``output_embeddings``. To build the Node2Vec model, we need to look up ``input_embeddings`` for target nodes and ``output_embeddings`` for context nodes, and calculate their inner product followed by a sigmoid activation.
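The negative-sampling objective for a single (target, context, negative) triple can be written out directly. A minimal NumPy sketch (the function and variable names are ours, not StellarGraph's):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def negative_sampling_loss(input_emb_i, output_emb_j, output_emb_k):
    """Binary cross-entropy for one positive pair (i, j) and one negative pair (i, k)."""
    p_pos = sigmoid(input_emb_i @ output_emb_j)  # should be pushed towards 1
    p_neg = sigmoid(input_emb_i @ output_emb_k)  # should be pushed towards 0
    return -np.log(p_pos) - np.log(1.0 - p_neg)

rng = np.random.default_rng(0)
v_i, v_j, v_k = rng.normal(size=(3, 8))
loss = negative_sampling_loss(v_i, v_j, v_k)
```

Minimizing this loss pulls the target's input embedding towards its context's output embedding and pushes it away from the negative sample's.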
```
# install StellarGraph if running on Google Colab
import sys
if 'google.colab' in sys.modules:
    %pip install -q stellargraph[demos]==1.1.0b

# verify that we're using the correct version of StellarGraph for this notebook
import stellargraph as sg

try:
    sg.utils.validate_notebook_version("1.1.0b")
except AttributeError:
    raise ValueError(
        f"This notebook requires StellarGraph version 1.1.0b, but a different version {sg.__version__} is installed. Please see <https://github.com/stellargraph/stellargraph/issues/1172>."
    ) from None

import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import accuracy_score
import os
import networkx as nx
import numpy as np
import pandas as pd
from tensorflow import keras

from stellargraph import StellarGraph
from stellargraph.data import BiasedRandomWalk, UnsupervisedSampler
from stellargraph.mapper import Node2VecLinkGenerator, Node2VecNodeGenerator
from stellargraph.layer import Node2Vec, link_classification
from stellargraph import datasets
from IPython.display import display, HTML

%matplotlib inline
```
### Dataset
For clarity, we use only the largest connected component, ignoring isolated nodes and subgraphs; having these in the data does not prevent the algorithm from running and producing valid results.
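`load(largest_connected_component_only=True)` handles this for us. For intuition only, extracting the largest component of an arbitrary undirected graph can be sketched with a plain breadth-first search (a toy illustration, not the StellarGraph implementation):

```python
from collections import deque

def largest_connected_component(adj):
    """adj: dict mapping node -> iterable of neighbours (undirected graph)."""
    seen, best = set(), set()
    for start in adj:
        if start in seen:
            continue
        # BFS to collect the component containing `start`
        component, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for nbr in adj[node]:
                if nbr not in component:
                    component.add(nbr)
                    queue.append(nbr)
        seen |= component
        if len(component) > len(best):
            best = component
    return best

# two components: {0, 1, 2} and {3, 4}
adj = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
```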
```
dataset = datasets.Cora()
display(HTML(dataset.description))
G, subjects = dataset.load(largest_connected_component_only=True)
print(G.info())
```
### The Node2Vec algorithm
The Node2Vec algorithm introduced in [[1]](#refs) is a 2-step representation learning algorithm. The two steps are:
1. Use random walks to generate sentences from a graph. A sentence is a list of node ids. The set of all sentences makes a corpus.
2. The corpus is then used to learn an embedding vector for each node in the graph. Each node id is considered a unique word/token in a dictionary that has size equal to the number of nodes in the graph. The Word2Vec algorithm [[2]](#refs) is used for calculating the embedding vectors.
In this implementation, we train the Node2Vec algorithm in the following two steps:
1. Generate a set of (`target`, `context`) node pairs by starting biased random walks of a fixed length from each node. The starting node of each walk is taken as the target node and the nodes that follow it in the walk are taken as context nodes. For each (`target`, `context`) node pair, we generate one negative node pair.
2. Train the Node2Vec algorithm through minimizing cross-entropy loss for `target-context` pair prediction, with the predictive value obtained by performing the dot product of the 'input embedding' of the target node and the 'output embedding' of the context node, followed by a sigmoid activation.
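The pair-generation step can be illustrated on toy walks. This is a simplified sketch with hypothetical names; `UnsupervisedSampler` additionally handles batching and walk generation internally:

```python
import random

def walks_to_samples(walks, num_nodes, seed=0):
    """Turn walks into (target, context, label) samples, one negative per positive."""
    rng = random.Random(seed)
    samples = []
    for walk in walks:
        target = walk[0]                 # the walk's starting node is the target
        for context in walk[1:]:         # every following node is a context
            samples.append((target, context, 1))
            negative = rng.randrange(num_nodes)  # naive uniform negative sample
            samples.append((target, negative, 0))
    return samples

pairs = walks_to_samples([[0, 3, 1], [2, 4]], num_nodes=5)
```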
Specify the optional parameter values: the number of walks per node and the length of each walk. Here, to keep the running time modest, we set `walk_number` to 100 and `walk_length` to 5. Larger values may yield better performance.
```
walk_number = 100
walk_length = 5
```
Create the biased random walker to perform context node sampling, with the specified parameters.
```
walker = BiasedRandomWalk(
    G,
    n=walk_number,
    length=walk_length,
    p=0.5,  # defines probability, 1/p, of returning to source node
    q=2.0,  # defines probability, 1/q, for moving to a node away from the source node
)
```
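The role of `p` and `q` can be made concrete: for a walk that just moved from `prev` to `cur`, Node2Vec weights each candidate next node by 1/p if it returns to `prev`, 1 if it is a common neighbour of `prev` and `cur`, and 1/q otherwise. A toy sketch of these unnormalised weights (not the StellarGraph implementation):

```python
def node2vec_weights(prev, cur, neighbours, p, q):
    """Unnormalised transition weights for each neighbour of `cur`.

    neighbours: dict mapping node -> set of adjacent nodes.
    """
    weights = {}
    for nxt in neighbours[cur]:
        if nxt == prev:
            weights[nxt] = 1.0 / p  # return to the previous node
        elif nxt in neighbours[prev]:
            weights[nxt] = 1.0      # stay at distance 1 from prev
        else:
            weights[nxt] = 1.0 / q  # move outward, to distance 2 from prev
    return weights

nbrs = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
w = node2vec_weights(prev=0, cur=1, neighbours=nbrs, p=0.5, q=2.0)
```

With `p=0.5` and `q=2.0` as above, returning is favoured (weight 2.0) and outward moves are discouraged (weight 0.5), giving a more breadth-first, local walk.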
Create the UnsupervisedSampler instance with the biased random walker.
```
unsupervised_samples = UnsupervisedSampler(G, nodes=list(G.nodes()), walker=walker)
```
Set the batch size and the number of epochs.
```
batch_size = 50
epochs = 2
```
Define a Node2Vec training generator, which generates a batch of (index of target node, index of context node, label of node pair) samples per iteration.
```
generator = Node2VecLinkGenerator(G, batch_size)
```
Build the Node2Vec model, with the dimension of learned node representations set to 128.
```
emb_size = 128
node2vec = Node2Vec(emb_size, generator=generator)
x_inp, x_out = node2vec.in_out_tensors()
```
Use the `link_classification` function to generate the prediction, with the 'dot' edge embedding method and the 'sigmoid' activation: this performs the dot product of the 'input embedding' of the target node and the 'output embedding' of the context node, followed by a sigmoid activation.
```
prediction = link_classification(
    output_dim=1, output_act="sigmoid", edge_embedding_method="dot"
)(x_out)
```
Stack the Node2Vec encoder and prediction layer into a Keras model. Our generator will produce batches of positive and negative context pairs as inputs to the model. Minimizing the binary cross-entropy between the outputs and the provided ground truth makes this much like a regular binary classification task.
```
model = keras.Model(inputs=x_inp, outputs=prediction)
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),
    loss=keras.losses.binary_crossentropy,
    metrics=[keras.metrics.binary_accuracy],
)
```
Train the model.
```
history = model.fit(
    generator.flow(unsupervised_samples),
    epochs=epochs,
    verbose=1,
    use_multiprocessing=False,
    workers=4,
    shuffle=True,
)
```
## Visualise Node Embeddings
Build the node-based model for predicting node representations from node ids and the learned parameters. Below, a Keras model is constructed with `x_inp[0]` as input and `x_out[0]` as output. Note that this model's weights are the same as those of the corresponding node encoder in the previously trained node pair classifier.
```
x_inp_src = x_inp[0]
x_out_src = x_out[0]
embedding_model = keras.Model(inputs=x_inp_src, outputs=x_out_src)
```
Get the node embeddings from node ids.
```
node_gen = Node2VecNodeGenerator(G, batch_size).flow(G.nodes())
node_embeddings = embedding_model.predict(node_gen, workers=4, verbose=1)
```
Transform the embeddings to 2d space for visualisation.
```
transform = TSNE  # or PCA

trans = transform(n_components=2)
node_embeddings_2d = trans.fit_transform(node_embeddings)

# draw the embedding points, coloring them by the target label (paper subject)
alpha = 0.7
label_map = {l: i for i, l in enumerate(np.unique(subjects))}
node_colours = [label_map[target] for target in subjects]

plt.figure(figsize=(7, 7))
plt.axes().set(aspect="equal")
plt.scatter(
    node_embeddings_2d[:, 0],
    node_embeddings_2d[:, 1],
    c=node_colours,
    cmap="jet",
    alpha=alpha,
)
plt.title("{} visualization of node embeddings".format(transform.__name__))
plt.show()
```
### Node Classification
In this task, we will use the `Node2Vec` node embeddings to train a classifier to predict the subject of a paper in Cora.
```
# X will hold the 128-dimensional input features
X = node_embeddings
# y holds the corresponding target values
y = np.array(subjects)
```
### Data Splitting
We split the data into train and test sets.
We use 10% of the data for training and the remaining 90% for testing as a hold-out test set.
```
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.1, test_size=None)
print(
    "Array shapes:\n X_train = {}\n y_train = {}\n X_test = {}\n y_test = {}".format(
        X_train.shape, y_train.shape, X_test.shape, y_test.shape
    )
)
```
### Classifier Training
We train a Logistic Regression classifier on the training data.
```
clf = LogisticRegressionCV(
    Cs=10, cv=10, scoring="accuracy", verbose=False, multi_class="ovr", max_iter=300
)
clf.fit(X_train, y_train)
```
Predict the hold-out test set.
```
y_pred = clf.predict(X_test)
```
Calculate the accuracy of the classifier on the test set.
```
accuracy_score(y_test, y_pred)
```
```
cd /content/drive/My Drive/Dava with ML
!unzip chronic-kidney-disease.zip

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
import sklearn.metrics as m

ls
dataset = pd.read_csv('new_model.csv')
dataset
dataset.columns
dataset.corr()
dataset.isnull().values.any()

sns.set(style="ticks", color_codes=True)
plt.figure(figsize=(30, 15))
sns.heatmap(dataset.corr(), annot=True)
plt.show()

dataset.describe()
```
## Splitting of Data
```
features = dataset.iloc[:, :-1]
labels = dataset.iloc[:, [-1]]
features
labels
feature_train, feature_test, label_train, label_test = train_test_split(
    features, labels, test_size=0.2, random_state=42
)
```
## Logistic Regression
```
model = LogisticRegression(max_iter=1000)
model.fit(feature_train, label_train)
label_pred = model.predict(feature_test)
m.accuracy_score(label_test, label_pred)
label_pred
label_test
print(m.classification_report(label_test, label_pred))
print(m.confusion_matrix(label_test, label_pred))
```
## KNN
```
from sklearn.model_selection import GridSearchCV

knn = KNeighborsClassifier()
param = {'n_neighbors': list(np.arange(1, 20))}
model = GridSearchCV(knn, param_grid=param)
model.fit(feature_train, label_train)
model.best_params_

model = KNeighborsClassifier(n_neighbors=10)
model.fit(feature_train, label_train)
label_pred = model.predict(feature_test)
m.accuracy_score(label_test, label_pred)
print(m.classification_report(label_test, label_pred))
print(m.confusion_matrix(label_test, label_pred))
label_pred
label_test
```
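The same `GridSearchCV` pattern (search, inspect `best_params_`, refit) is reused for every model below. A self-contained sketch on synthetic data, with illustrative sizes and names:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

# synthetic binary classification problem standing in for the kidney data
X, y = make_classification(n_samples=200, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# exhaustive search over the hyper-parameter grid with (default 5-fold) cross validation
search = GridSearchCV(KNeighborsClassifier(), param_grid={'n_neighbors': [1, 3, 5, 7, 9]})
search.fit(X_tr, y_tr)

best_k = search.best_params_['n_neighbors']  # the notebook then refits with this value
test_acc = search.score(X_te, y_te)          # GridSearchCV refits on the full training set
```

Note that `GridSearchCV` already refits the best estimator on the whole training set (`refit=True` by default), so the manual refit step in the notebook is a stylistic choice rather than a requirement.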
## Decision Tree
```
from sklearn.model_selection import GridSearchCV

dt = DecisionTreeClassifier()
param = {'max_depth': list(np.arange(1, 20))}
model = GridSearchCV(dt, param_grid=param)
model.fit(feature_train, label_train)
model.best_params_

model = DecisionTreeClassifier(max_depth=10)
model.fit(feature_train, label_train)
label_pred = model.predict(feature_test)
m.accuracy_score(label_test, label_pred)
print(m.classification_report(label_test, label_pred))
print(m.confusion_matrix(label_test, label_pred))
label_pred
label_test
```
## SVM
```
model = SVC(kernel='linear')
model.fit(feature_train, label_train)
label_pred = model.predict(feature_test)
m.accuracy_score(label_test, label_pred)
print(m.classification_report(label_test, label_pred))
print(m.confusion_matrix(label_test, label_pred))
label_pred
label_test
```
## Random Forest Classifier
```
from sklearn.model_selection import GridSearchCV

rfc = RandomForestClassifier()
param = {
    'n_estimators': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200],
    'max_depth': list(np.arange(1, 20)),
}
model = GridSearchCV(rfc, param_grid=param)
model.fit(feature_train, label_train)
model.best_params_

model = model.best_estimator_
label_pred = model.predict(feature_test)
m.accuracy_score(label_test, label_pred)
print(m.classification_report(label_test, label_pred))
print(m.confusion_matrix(label_test, label_pred))
label_pred
label_test
```
# Neural Networks
```
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.utils import to_categorical

label_train = to_categorical(label_train)
label_test = to_categorical(label_test)
feature_train.shape

model = Sequential()
model.add(Dense(300, input_shape=[13], activation='relu'))
model.add(Dense(2, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
model.fit(feature_train, label_train, epochs=100)

label_pred = model.predict(feature_test)
label_pred
label_pred = np.argmax(label_pred, axis=1)
label_pred
label_test = np.argmax(label_test, axis=1)
label_train = np.argmax(label_train, axis=1)
m.accuracy_score(label_test, label_pred)
print(m.classification_report(label_test, label_pred))
print(m.confusion_matrix(label_test, label_pred))
label_pred
label_test

import tensorflow

model.save('model_kidney.h5')
# create a TFLite converter object from the trained model
converter = tensorflow.lite.TFLiteConverter.from_keras_model(model=model)
# convert to a TFLite model object
tfmodel = converter.convert()
# save the TFLite model into a .tflite file
open("degree_kidney.tflite", "wb").write(tfmodel)
```
## Linear Regression
```
import numpy as np
import pandas as pd
### Initialize model parameters
def initialize_params(dims):
    '''
    Input:
        dims: dimensionality of the training data
    Output:
        w: initialized weight parameters
        b: initialized bias parameter
    '''
    # initialize the weights as a zero matrix
    w = np.zeros((dims, 1))
    # initialize the bias as zero
    b = 0
    return w, b
### Define the main body of the model:
### the linear regression formula, the mean squared loss, and the parameter gradients
def linear_loss(X, y, w, b):
    '''
    Input:
        X: input feature matrix
        y: output label vector
        w: weight matrix
        b: bias term
    Output:
        y_hat: linear model predictions
        loss: mean squared loss
        dw: first-order gradient of the weights
        db: first-order gradient of the bias
    '''
    # number of training examples
    num_train = X.shape[0]
    # number of features
    num_feature = X.shape[1]
    # linear regression predictions
    y_hat = np.dot(X, w) + b
    # mean squared loss between predictions and labels
    loss = np.sum((y_hat - y)**2) / num_train
    # gradient of the loss with respect to the weights
    dw = np.dot(X.T, (y_hat - y)) / num_train
    # gradient of the loss with respect to the bias
    db = np.sum((y_hat - y)) / num_train
    return y_hat, loss, dw, db
### Define the linear regression training procedure
def linear_train(X, y, learning_rate=0.01, epochs=10000):
    '''
    Input:
        X: input feature matrix
        y: output label vector
        learning_rate: learning rate
        epochs: number of training iterations
    Output:
        loss_his: mean squared loss at each iteration
        params: dictionary of optimized parameters
        grads: dictionary of parameter gradients
    '''
    # list for recording the training loss
    loss_his = []
    # initialize the model parameters
    w, b = initialize_params(X.shape[1])
    # iterative training
    for i in range(1, epochs):
        # compute the predictions, loss and gradients for the current iteration
        y_hat, loss, dw, db = linear_loss(X, y, w, b)
        # gradient descent parameter update
        w += -learning_rate * dw
        b += -learning_rate * db
        # record the loss for the current iteration
        loss_his.append(loss)
        # print the loss every 10000 iterations
        if i % 10000 == 0:
            print('epoch %d loss %f' % (i, loss))
        # save the current optimized parameters to a dictionary
        params = {
            'w': w,
            'b': b
        }
        # save the current gradients to a dictionary
        grads = {
            'dw': dw,
            'db': db
        }
    return loss_his, params, grads
X = np.ones(shape=(353,10))
X.shape
w, b = initialize_params(X.shape[1])
w.shape
y=np.ones(shape=(353,))
y.shape
from sklearn.datasets import load_diabetes
diabetes = load_diabetes()
data = diabetes.data
target = diabetes.target
print(data.shape)
print(target.shape)
print(data[:5])
print(target[:5])
# import the sklearn diabetes dataset interface
from sklearn.datasets import load_diabetes
# import the sklearn shuffle helper
from sklearn.utils import shuffle

# load the diabetes dataset
diabetes = load_diabetes()
# get the inputs and labels
data, target = diabetes.data, diabetes.target
# shuffle the dataset
X, y = shuffle(data, target, random_state=13)
# 80/20 train/test split
offset = int(X.shape[0] * 0.8)
# training set
X_train, y_train = X[:offset], y[:offset]
# test set
X_test, y_test = X[offset:], y[offset:]
# reshape the training labels to a column vector
y_train = y_train.reshape((-1, 1))
# reshape the test labels to a column vector
y_test = y_test.reshape((-1, 1))
# print the dimensions of the train and test sets
print("X_train's shape: ", X_train.shape)
print("X_test's shape: ", X_test.shape)
print("y_train's shape: ", y_train.shape)
print("y_test's shape: ", y_test.shape)
# train the linear regression model
loss_his, params, grads = linear_train(X_train, y_train, 0.01, 200000)
# print the learned model parameters
print(params)
### Define the linear regression prediction function
def predict(X, params):
    '''
    Input:
        X: test dataset
        params: trained model parameters
    Output:
        y_pred: model predictions
    '''
    # get the model parameters
    w = params['w']
    b = params['b']
    # predict
    y_pred = np.dot(X, w) + b
    return y_pred
# predict on the test set
y_pred = predict(X_test, params)
# print the first five predictions
y_pred[:5]
print(y_test[:5])
### Define the R2 coefficient function
def r2_score(y_test, y_pred):
    '''
    Input:
        y_test: test set labels
        y_pred: test set predictions
    Output:
        r2: R2 coefficient
    '''
    # mean of the test labels
    y_avg = np.mean(y_test)
    # total sum of squares
    ss_tot = np.sum((y_test - y_avg)**2)
    # residual sum of squares
    ss_res = np.sum((y_test - y_pred)**2)
    # compute R2
    r2 = 1 - (ss_res / ss_tot)
    return r2

print(r2_score(y_test, y_pred))
import matplotlib.pyplot as plt
f = X_test.dot(params['w']) + params['b']
plt.scatter(range(X_test.shape[0]), y_test)
plt.plot(f, color = 'darkorange')
plt.xlabel('X_test')
plt.ylabel('y_test')
plt.show();
plt.plot(loss_his, color = 'blue')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.show()
from sklearn.utils import shuffle
X, y = shuffle(data, target, random_state=13)
X = X.astype(np.float32)
data = np.concatenate((X, y.reshape((-1,1))), axis=1)
data.shape
from random import shuffle

def k_fold_cross_validation(items, k, randomize=True):
    if randomize:
        items = list(items)
        shuffle(items)
    slices = [items[i::k] for i in range(k)]
    for i in range(k):
        validation = slices[i]
        training = [item
                    for s in slices if s is not validation
                    for item in s]
        training = np.array(training)
        validation = np.array(validation)
        yield training, validation
loss5 = []
for training, validation in k_fold_cross_validation(data, 5):
    X_train = training[:, :10]
    y_train = training[:, -1].reshape((-1, 1))
    X_valid = validation[:, :10]
    y_valid = validation[:, -1].reshape((-1, 1))
    # print(X_train.shape, y_train.shape, X_valid.shape, y_valid.shape)
    loss, params, grads = linear_train(X_train, y_train, 0.001, 100000)
    loss5.append(loss)

score = np.mean(loss5)
print('five-fold cross validation score is', score)

y_pred = predict(X_valid, params)
valid_score = np.sum(((y_pred - y_valid)**2)) / len(X_valid)
print('valid score is', valid_score)
from sklearn.datasets import load_diabetes
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
diabetes = load_diabetes()
data = diabetes.data
target = diabetes.target
X, y = shuffle(data, target, random_state=13)
X = X.astype(np.float32)
y = y.reshape((-1, 1))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
import matplotlib.pyplot as plt
import numpy as np
from sklearn import linear_model
from sklearn.metrics import mean_squared_error, r2_score
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)
y_pred = regr.predict(X_test)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% mean_squared_error(y_test, y_pred))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(y_test, y_pred))
print(r2_score(y_test, y_pred))
# Plot outputs
plt.scatter(range(X_test.shape[0]), y_test, color='red')
plt.plot(range(X_test.shape[0]), y_pred, color='blue', linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show();
import numpy as np
import pandas as pd
from sklearn.utils import shuffle
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
### Cross validation
def cross_validate(model, x, y, folds=5, repeats=5):
    ypred = np.zeros((len(y), repeats))
    score = np.zeros(repeats)
    for r in range(repeats):
        i = 0
        print('Cross Validating - Run', str(r + 1), 'out of', str(repeats))
        x, y = shuffle(x, y, random_state=r)  # shuffle data before each repeat
        kf = KFold(n_splits=folds, shuffle=True, random_state=i + 1000)  # random split, different each time
        for train_ind, test_ind in kf.split(x):
            print('Fold', i + 1, 'out of', folds)
            xtrain, ytrain = x[train_ind, :], y[train_ind]
            xtest, ytest = x[test_ind, :], y[test_ind]
            model.fit(xtrain, ytrain)
            # print(xtrain.shape, ytrain.shape, xtest.shape, ytest.shape)
            ypred[test_ind, r] = model.predict(xtest).ravel()
            i += 1
        score[r] = r2_score(y.ravel(), ypred[:, r])
    print('\nOverall R2:', str(score))
    print('Mean:', str(np.mean(score)))
    print('Deviation:', str(np.std(score)))
cross_validate(regr, X, y, folds=5, repeats=5)
```
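The training loop above can be sanity-checked end to end on synthetic data where the true parameters are known. A compact, self-contained version of the same full-batch gradient-descent recipe (illustrative names and sizes):

```python
import numpy as np

def fit_linear(X, y, lr=0.1, epochs=2000):
    """Minimise mean squared error by full-batch gradient descent."""
    w = np.zeros((X.shape[1], 1))
    b = 0.0
    n = X.shape[0]
    for _ in range(epochs):
        y_hat = X @ w + b
        w -= lr * (X.T @ (y_hat - y)) / n  # gradient w.r.t. the weights
        b -= lr * np.sum(y_hat - y) / n    # gradient w.r.t. the bias
    return w, b

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([[1.5], [-2.0], [0.5]])
y = X @ true_w + 3.0 + rng.normal(scale=0.1, size=(200, 1))
w, b = fit_linear(X, y)
```

With well-scaled inputs, the recovered `w` and `b` land very close to the generating parameters, which is a quick way to confirm that the gradient formulas are right.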
# Introduction
- Try out ElasticNet
- Add permutation importance
# Import everything I need :)
```
import warnings
warnings.filterwarnings('ignore')
import time
import multiprocessing
import glob
import gc
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
from plotly.offline import init_notebook_mode, iplot
import plotly.graph_objs as go
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.model_selection import KFold, train_test_split, GridSearchCV
from sklearn.metrics import mean_absolute_error
from sklearn import linear_model
from functools import partial
from hyperopt import fmin, hp, tpe, Trials, space_eval, STATUS_OK, STATUS_RUNNING
from fastprogress import progress_bar
```
# Preparation
```
nb = 60
isSmallSet = False
length = 50000
model_name = 'elastic_net'
pd.set_option('display.max_columns', 200)
# use atomic numbers to recode atomic names
ATOMIC_NUMBERS = {
    'H': 1,
    'C': 6,
    'N': 7,
    'O': 8,
    'F': 9,
}
file_path = '../input/champs-scalar-coupling/'
glob.glob(file_path + '*')

# train
path = file_path + 'train.csv'
if isSmallSet:
    train = pd.read_csv(path)[:length]
else:
    train = pd.read_csv(path)

# test
path = file_path + 'test.csv'
if isSmallSet:
    test = pd.read_csv(path)[:length]
else:
    test = pd.read_csv(path)

# structure
path = file_path + 'structures.csv'
structures = pd.read_csv(path)

# fc_train
path = file_path + 'nb47_fc_train.csv'
if isSmallSet:
    fc_train = pd.read_csv(path)[:length]
else:
    fc_train = pd.read_csv(path)

# fc_test
path = file_path + 'nb47_fc_test.csv'
if isSmallSet:
    fc_test = pd.read_csv(path)[:length]
else:
    fc_test = pd.read_csv(path)

# train dist-interact
path = file_path + 'nb33_train_dist-interaction.csv'
if isSmallSet:
    dist_interact_train = pd.read_csv(path)[:length]
else:
    dist_interact_train = pd.read_csv(path)

# test dist-interact
path = file_path + 'nb33_test_dist-interaction.csv'
if isSmallSet:
    dist_interact_test = pd.read_csv(path)[:length]
else:
    dist_interact_test = pd.read_csv(path)

# ob charge train
path = file_path + 'train_ob_charges_V7EstimatioofMullikenChargeswithOpenBabel.csv'
if isSmallSet:
    ob_charge_train = pd.read_csv(path)[:length].drop(['Unnamed: 0', 'error'], axis=1)
else:
    ob_charge_train = pd.read_csv(path).drop(['Unnamed: 0', 'error'], axis=1)

# ob charge test
path = file_path + 'test_ob_charges_V7EstimatioofMullikenChargeswithOpenBabel.csv'
if isSmallSet:
    ob_charge_test = pd.read_csv(path)[:length].drop(['Unnamed: 0', 'error'], axis=1)
else:
    ob_charge_test = pd.read_csv(path).drop(['Unnamed: 0', 'error'], axis=1)
len(test), len(fc_test)
len(train), len(fc_train)
if isSmallSet:
    print('using SmallSet !!')

print('-------------------')
print(f'There are {train.shape[0]} rows in train data.')
print(f'There are {test.shape[0]} rows in test data.')
print(f"There are {train['molecule_name'].nunique()} distinct molecules in train data.")
print(f"There are {test['molecule_name'].nunique()} distinct molecules in test data.")
print(f"There are {train['atom_index_0'].nunique()} unique atoms.")
print(f"There are {train['type'].nunique()} unique types.")
```
---
## myFunc
**metrics**
```
def kaggle_metric(df, preds):
    df["prediction"] = preds
    maes = []
    for t in df.type.unique():
        y_true = df[df.type == t].scalar_coupling_constant.values
        y_pred = df[df.type == t].prediction.values
        mae = np.log(mean_absolute_error(y_true, y_pred))
        maes.append(mae)
    return np.mean(maes)
```
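The metric above is the mean over coupling types of the log of each type's MAE. A dependency-free check of the same formula on toy numbers (not competition data):

```python
import numpy as np

def log_mae_by_type(types, y_true, y_pred):
    """Mean over types of log(MAE within that type) -- the per-type log-MAE metric."""
    scores = []
    for t in np.unique(types):
        mask = types == t
        mae = np.mean(np.abs(y_true[mask] - y_pred[mask]))
        scores.append(np.log(mae))
    return np.mean(scores)

types = np.array(['1JHC', '1JHC', '2JHH', '2JHH'])
y_true = np.array([10.0, 12.0, 3.0, 5.0])
y_pred = np.array([11.0, 11.0, 3.5, 4.5])  # MAE = 1.0 for 1JHC, 0.5 for 2JHH
score = log_mae_by_type(types, y_true, y_pred)
```

Because of the log, halving the MAE of an already-accurate type improves the score as much as halving the MAE of a poorly predicted one.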
---
**memory**
```
def reduce_mem_usage(df, verbose=True):
    numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
    start_mem = df.memory_usage().sum() / 1024**2
    for col in df.columns:
        col_type = df[col].dtypes
        if col_type in numerics:
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
                    df[col] = df[col].astype(np.int64)
            else:
                c_prec = df[col].apply(lambda x: np.finfo(x).precision).max()
                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max and c_prec == np.finfo(np.float16).precision:
                    df[col] = df[col].astype(np.float16)
                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max and c_prec == np.finfo(np.float32).precision:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
    end_mem = df.memory_usage().sum() / 1024**2
    if verbose:
        print('Mem. usage decreased to {:5.2f} Mb ({:.1f}% reduction)'.format(end_mem, 100 * (start_mem - end_mem) / start_mem))
    return df
class permutation_importance():
    def __init__(self, model, metric):
        self.is_computed = False
        self.n_feat = 0
        self.base_score = 0
        self.model = model
        self.metric = metric
        self.df_result = []

    def compute(self, X_valid, y_valid):
        self.n_feat = len(X_valid.columns)
        self.base_score = self.metric(y_valid, self.model.predict(X_valid))
        self.df_result = pd.DataFrame({'feat': X_valid.columns,
                                       'score': np.zeros(self.n_feat),
                                       'score_diff': np.zeros(self.n_feat)})
        # predict with each feature permuted in turn
        for i, col in enumerate(X_valid.columns):
            df_perm = X_valid.copy()
            np.random.seed(1)
            df_perm[col] = np.random.permutation(df_perm[col])
            y_valid_pred = self.model.predict(df_perm)
            score = self.metric(y_valid, y_valid_pred)
            self.df_result['score'][self.df_result['feat'] == col] = score
            self.df_result['score_diff'][self.df_result['feat'] == col] = self.base_score - score
        self.is_computed = True

    def get_negative_feature(self):
        assert self.is_computed != False, 'the compute method has not been run'
        idx = self.df_result['score_diff'] < 0
        return self.df_result.loc[idx, 'feat'].values.tolist()

    def get_positive_feature(self):
        assert self.is_computed != False, 'the compute method has not been run'
        idx = self.df_result['score_diff'] > 0
        return self.df_result.loc[idx, 'feat'].values.tolist()

    def show_permutation_importance(self, score_type='loss'):
        assert self.is_computed != False, 'the compute method has not been run'
        if score_type == 'loss':
            ascending = True
        elif score_type == 'accuracy':
            ascending = False
        else:
            ascending = ''
        plt.figure(figsize=(15, int(0.25 * self.n_feat)))
        sns.barplot(x="score_diff", y="feat", data=self.df_result.sort_values(by="score_diff", ascending=ascending))
        plt.title('base_score - permutation_score')
```
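The idea behind the class above, in miniature: permute one feature column at a time and measure how much the validation error grows. A dependency-light sketch with a least-squares model (an illustration of the technique, not the notebook's class):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
# column 0 drives the target strongly, column 2 weakly, column 1 is pure noise
y = 5.0 * X[:, 0] + 0.1 * X[:, 2] + rng.normal(scale=0.1, size=300)

# fit ordinary least squares on the first half, validate on the second
w, *_ = np.linalg.lstsq(X[:150], y[:150], rcond=None)
X_val, y_val = X[150:], y[150:]

def mse(X_, y_):
    return np.mean((X_ @ w - y_)**2)

base = mse(X_val, y_val)
importance = []
for col in range(X_val.shape[1]):
    X_perm = X_val.copy()
    X_perm[:, col] = rng.permutation(X_perm[:, col])  # break the feature-target link
    importance.append(mse(X_perm, y_val) - base)      # loss increase = importance
```

Features whose permutation barely changes the loss (like the noise column here) are candidates for removal, which is what `get_negative_feature` automates.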
# Feature Engineering
Build Distance Dataset
```
def build_type_dataframes(base, structures, coupling_type):
    base = base[base['type'] == coupling_type].drop('type', axis=1).copy()
    base = base.reset_index()
    base['id'] = base['id'].astype('int32')
    structures = structures[structures['molecule_name'].isin(base['molecule_name'])]
    return base, structures

# a, b = build_type_dataframes(train, structures, '1JHN')

def add_coordinates(base, structures, index):
    df = pd.merge(base, structures, how='inner',
                  left_on=['molecule_name', f'atom_index_{index}'],
                  right_on=['molecule_name', 'atom_index']).drop(['atom_index'], axis=1)
    df = df.rename(columns={
        'atom': f'atom_{index}',
        'x': f'x_{index}',
        'y': f'y_{index}',
        'z': f'z_{index}'
    })
    return df

def add_atoms(base, atoms):
    df = pd.merge(base, atoms, how='inner',
                  on=['molecule_name', 'atom_index_0', 'atom_index_1'])
    return df

def merge_all_atoms(base, structures):
    df = pd.merge(base, structures, how='left',
                  left_on=['molecule_name'],
                  right_on=['molecule_name'])
    df = df[(df.atom_index_0 != df.atom_index) & (df.atom_index_1 != df.atom_index)]
    return df
def add_center(df):
    df['x_c'] = ((df['x_1'] + df['x_0']) * np.float32(0.5))
    df['y_c'] = ((df['y_1'] + df['y_0']) * np.float32(0.5))
    df['z_c'] = ((df['z_1'] + df['z_0']) * np.float32(0.5))

def add_distance_to_center(df):
    df['d_c'] = ((
        (df['x_c'] - df['x'])**np.float32(2) +
        (df['y_c'] - df['y'])**np.float32(2) +
        (df['z_c'] - df['z'])**np.float32(2)
    )**np.float32(0.5))

def add_distance_between(df, suffix1, suffix2):
    df[f'd_{suffix1}_{suffix2}'] = ((
        (df[f'x_{suffix1}'] - df[f'x_{suffix2}'])**np.float32(2) +
        (df[f'y_{suffix1}'] - df[f'y_{suffix2}'])**np.float32(2) +
        (df[f'z_{suffix1}'] - df[f'z_{suffix2}'])**np.float32(2)
    )**np.float32(0.5))

def add_distances(df):
    n_atoms = 1 + max([int(c.split('_')[1]) for c in df.columns if c.startswith('x_')])
    for i in range(1, n_atoms):
        for vi in range(min(4, i)):
            add_distance_between(df, i, vi)
def add_n_atoms(base, structures):
dfs = structures['molecule_name'].value_counts().rename('n_atoms').to_frame()
return pd.merge(base, dfs, left_on='molecule_name', right_index=True)
def build_couple_dataframe(some_csv, structures_csv, coupling_type, n_atoms=10):
base, structures = build_type_dataframes(some_csv, structures_csv, coupling_type)
base = add_coordinates(base, structures, 0)
base = add_coordinates(base, structures, 1)
base = base.drop(['atom_0', 'atom_1'], axis=1)
atoms = base.drop('id', axis=1).copy()
if 'scalar_coupling_constant' in some_csv:
atoms = atoms.drop(['scalar_coupling_constant'], axis=1)
add_center(atoms)
atoms = atoms.drop(['x_0', 'y_0', 'z_0', 'x_1', 'y_1', 'z_1'], axis=1)
atoms = merge_all_atoms(atoms, structures)
add_distance_to_center(atoms)
atoms = atoms.drop(['x_c', 'y_c', 'z_c', 'atom_index'], axis=1)
atoms.sort_values(['molecule_name', 'atom_index_0', 'atom_index_1', 'd_c'], inplace=True)
atom_groups = atoms.groupby(['molecule_name', 'atom_index_0', 'atom_index_1'])
atoms['num'] = atom_groups.cumcount() + 2
atoms = atoms.drop(['d_c'], axis=1)
atoms = atoms[atoms['num'] < n_atoms]
atoms = atoms.set_index(['molecule_name', 'atom_index_0', 'atom_index_1', 'num']).unstack()
atoms.columns = [f'{col[0]}_{col[1]}' for col in atoms.columns]
atoms = atoms.reset_index()
# downcast back to int8
for col in atoms.columns:
if col.startswith('atom_'):
atoms[col] = atoms[col].fillna(0).astype('int8')
# atoms['molecule_name'] = atoms['molecule_name'].astype('int32')
full = add_atoms(base, atoms)
add_distances(full)
full.sort_values('id', inplace=True)
return full
def take_n_atoms(df, n_atoms, four_start=4):
labels = ['id', 'molecule_name', 'atom_index_1', 'atom_index_0']
for i in range(2, n_atoms):
label = f'atom_{i}'
labels.append(label)
for i in range(n_atoms):
num = min(i, 4) if i < four_start else 4
for j in range(num):
labels.append(f'd_{i}_{j}')
if 'scalar_coupling_constant' in df:
labels.append('scalar_coupling_constant')
return df[labels]
atoms = structures['atom'].values
types_train = train['type'].values
types_test = test['type'].values
structures['atom'] = structures['atom'].replace(ATOMIC_NUMBERS).astype('int8')
fulls_train = []
fulls_test = []
for type_ in progress_bar(train['type'].unique()):
full_train = build_couple_dataframe(train, structures, type_, n_atoms=10)
full_test = build_couple_dataframe(test, structures, type_, n_atoms=10)
full_train = take_n_atoms(full_train, 10)
full_test = take_n_atoms(full_test, 10)
fulls_train.append(full_train)
fulls_test.append(full_test)
structures['atom'] = atoms
train = pd.concat(fulls_train).sort_values(by=['id'])
test = pd.concat(fulls_test).sort_values(by=['id'])
train['type'] = types_train
test['type'] = types_test
train = train.fillna(0)
test = test.fillna(0)
```
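The heart of `build_couple_dataframe` is the sort → `cumcount` → `unstack` pivot that turns each coupling pair's nearest atoms into fixed-width columns. A minimal sketch with made-up pair and distance values:

```python
import pandas as pd

# Toy version of the pivot above: sort each pair's neighbouring atoms by
# distance, rank them with cumcount, then unstack so each rank becomes
# its own column (the 'pair'/'d' names here are made up).
atoms = pd.DataFrame({
    'pair': ['p1', 'p1', 'p2'],
    'd': [2.0, 1.0, 3.0],
})
atoms = atoms.sort_values(['pair', 'd'])
atoms['num'] = atoms.groupby('pair').cumcount() + 2  # ranks start at 2; 0/1 are the pair itself
wide = atoms.set_index(['pair', 'num']).unstack()
wide.columns = [f'{col[0]}_{col[1]}' for col in wide.columns]
print(wide)  # columns d_2, d_3; p2 has NaN for d_3
```

Pairs with fewer neighbours than `n_atoms` simply get NaN in the higher-rank columns, which the original code later fills with 0.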
<br>
<br>
dist-interact
```
train['dist_interact'] = dist_interact_train.values
test['dist_interact'] = dist_interact_test.values
```
<br>
<br>
basic
```
def map_atom_info(df_1,df_2, atom_idx):
df = pd.merge(df_1, df_2, how = 'left',
left_on = ['molecule_name', f'atom_index_{atom_idx}'],
right_on = ['molecule_name', 'atom_index'])
df = df.drop('atom_index', axis=1)
return df
# structure and ob_charges
ob_charge = pd.concat([ob_charge_train, ob_charge_test])
merge = pd.merge(ob_charge, structures, how='left',
left_on = ['molecule_name', 'atom_index'],
right_on = ['molecule_name', 'atom_index'])
for atom_idx in [0,1]:
train = map_atom_info(train, merge, atom_idx)
test = map_atom_info(test, merge, atom_idx)
train = train.rename(columns={
'atom': f'atom_{atom_idx}',
'x': f'x_{atom_idx}',
'y': f'y_{atom_idx}',
'z': f'z_{atom_idx}',
'eem': f'eem_{atom_idx}',
'mmff94': f'mmff94_{atom_idx}',
'gasteiger': f'gasteiger_{atom_idx}',
'qeq': f'qeq_{atom_idx}',
'qtpie': f'qtpie_{atom_idx}',
'eem2015ha': f'eem2015ha_{atom_idx}',
'eem2015hm': f'eem2015hm_{atom_idx}',
'eem2015hn': f'eem2015hn_{atom_idx}',
'eem2015ba': f'eem2015ba_{atom_idx}',
'eem2015bm': f'eem2015bm_{atom_idx}',
'eem2015bn': f'eem2015bn_{atom_idx}',})
test = test.rename(columns={
'atom': f'atom_{atom_idx}',
'x': f'x_{atom_idx}',
'y': f'y_{atom_idx}',
'z': f'z_{atom_idx}',
'eem': f'eem_{atom_idx}',
'mmff94': f'mmff94_{atom_idx}',
'gasteiger': f'gasteiger_{atom_idx}',
'qeq': f'qeq_{atom_idx}',
'qtpie': f'qtpie_{atom_idx}',
'eem2015ha': f'eem2015ha_{atom_idx}',
'eem2015hm': f'eem2015hm_{atom_idx}',
'eem2015hn': f'eem2015hn_{atom_idx}',
'eem2015ba': f'eem2015ba_{atom_idx}',
'eem2015bm': f'eem2015bm_{atom_idx}',
'eem2015bn': f'eem2015bn_{atom_idx}'})
# test = test.rename(columns={'atom': f'atom_{atom_idx}',
# 'x': f'x_{atom_idx}',
# 'y': f'y_{atom_idx}',
# 'z': f'z_{atom_idx}'})
# ob_charges
# train = map_atom_info(train, ob_charge_train, 0)
# test = map_atom_info(test, ob_charge_test, 0)
# train = map_atom_info(train, ob_charge_train, 1)
# test = map_atom_info(test, ob_charge_test, 1)
```
<br>
<br>
type0
```
def create_type0(df):
df['type_0'] = df['type'].apply(lambda x : x[0])
return df
# train['type_0'] = train['type'].apply(lambda x: x[0])
# test['type_0'] = test['type'].apply(lambda x: x[0])
```
<br>
<br>
distances
```
def distances(df):
df_p_0 = df[['x_0', 'y_0', 'z_0']].values
df_p_1 = df[['x_1', 'y_1', 'z_1']].values
df['dist'] = np.linalg.norm(df_p_0 - df_p_1, axis=1)
df['dist_x'] = (df['x_0'] - df['x_1']) ** 2
df['dist_y'] = (df['y_0'] - df['y_1']) ** 2
df['dist_z'] = (df['z_0'] - df['z_1']) ** 2
return df
# train = distances(train)
# test = distances(test)
```
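As a quick sanity check of the `dist` computation in `distances`, toy 3-4-5 coordinates in the x-y plane should give a pair distance of exactly 5:

```python
import numpy as np
import pandas as pd

# Toy coordinates forming a 3-4-5 right triangle: the pair distance is 5.
toy = pd.DataFrame({'x_0': [0.0], 'y_0': [0.0], 'z_0': [0.0],
                    'x_1': [3.0], 'y_1': [4.0], 'z_1': [0.0]})
p0 = toy[['x_0', 'y_0', 'z_0']].values
p1 = toy[['x_1', 'y_1', 'z_1']].values
toy['dist'] = np.linalg.norm(p0 - p1, axis=1)
print(toy['dist'].iloc[0])  # → 5.0
```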
<br>
<br>
aggregate statistics
```
def create_features(df):
df['molecule_couples'] = df.groupby('molecule_name')['id'].transform('count')
df['molecule_dist_mean'] = df.groupby('molecule_name')['dist'].transform('mean')
df['molecule_dist_min'] = df.groupby('molecule_name')['dist'].transform('min')
df['molecule_dist_max'] = df.groupby('molecule_name')['dist'].transform('max')
df['atom_0_couples_count'] = df.groupby(['molecule_name', 'atom_index_0'])['id'].transform('count')
df['atom_1_couples_count'] = df.groupby(['molecule_name', 'atom_index_1'])['id'].transform('count')
df[f'molecule_atom_index_0_x_1_std'] = df.groupby(['molecule_name', 'atom_index_0'])['x_1'].transform('std')
df[f'molecule_atom_index_0_y_1_mean'] = df.groupby(['molecule_name', 'atom_index_0'])['y_1'].transform('mean')
df[f'molecule_atom_index_0_y_1_mean_diff'] = df[f'molecule_atom_index_0_y_1_mean'] - df['y_1']
df[f'molecule_atom_index_0_y_1_mean_div'] = df[f'molecule_atom_index_0_y_1_mean'] / df['y_1']
df[f'molecule_atom_index_0_y_1_max'] = df.groupby(['molecule_name', 'atom_index_0'])['y_1'].transform('max')
df[f'molecule_atom_index_0_y_1_max_diff'] = df[f'molecule_atom_index_0_y_1_max'] - df['y_1']
df[f'molecule_atom_index_0_y_1_std'] = df.groupby(['molecule_name', 'atom_index_0'])['y_1'].transform('std')
df[f'molecule_atom_index_0_z_1_std'] = df.groupby(['molecule_name', 'atom_index_0'])['z_1'].transform('std')
df[f'molecule_atom_index_0_dist_mean'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('mean')
df[f'molecule_atom_index_0_dist_mean_diff'] = df[f'molecule_atom_index_0_dist_mean'] - df['dist']
df[f'molecule_atom_index_0_dist_mean_div'] = df[f'molecule_atom_index_0_dist_mean'] / df['dist']
df[f'molecule_atom_index_0_dist_max'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('max')
df[f'molecule_atom_index_0_dist_max_diff'] = df[f'molecule_atom_index_0_dist_max'] - df['dist']
df[f'molecule_atom_index_0_dist_max_div'] = df[f'molecule_atom_index_0_dist_max'] / df['dist']
df[f'molecule_atom_index_0_dist_min'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('min')
df[f'molecule_atom_index_0_dist_min_diff'] = df[f'molecule_atom_index_0_dist_min'] - df['dist']
df[f'molecule_atom_index_0_dist_min_div'] = df[f'molecule_atom_index_0_dist_min'] / df['dist']
df[f'molecule_atom_index_0_dist_std'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('std')
df[f'molecule_atom_index_0_dist_std_diff'] = df[f'molecule_atom_index_0_dist_std'] - df['dist']
df[f'molecule_atom_index_0_dist_std_div'] = df[f'molecule_atom_index_0_dist_std'] / df['dist']
df[f'molecule_atom_index_1_dist_mean'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('mean')
df[f'molecule_atom_index_1_dist_mean_diff'] = df[f'molecule_atom_index_1_dist_mean'] - df['dist']
df[f'molecule_atom_index_1_dist_mean_div'] = df[f'molecule_atom_index_1_dist_mean'] / df['dist']
df[f'molecule_atom_index_1_dist_max'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('max')
df[f'molecule_atom_index_1_dist_max_diff'] = df[f'molecule_atom_index_1_dist_max'] - df['dist']
df[f'molecule_atom_index_1_dist_max_div'] = df[f'molecule_atom_index_1_dist_max'] / df['dist']
df[f'molecule_atom_index_1_dist_min'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('min')
df[f'molecule_atom_index_1_dist_min_diff'] = df[f'molecule_atom_index_1_dist_min'] - df['dist']
df[f'molecule_atom_index_1_dist_min_div'] = df[f'molecule_atom_index_1_dist_min'] / df['dist']
df[f'molecule_atom_index_1_dist_std'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('std')
df[f'molecule_atom_index_1_dist_std_diff'] = df[f'molecule_atom_index_1_dist_std'] - df['dist']
df[f'molecule_atom_index_1_dist_std_div'] = df[f'molecule_atom_index_1_dist_std'] / df['dist']
df[f'molecule_atom_1_dist_mean'] = df.groupby(['molecule_name', 'atom_1'])['dist'].transform('mean')
df[f'molecule_atom_1_dist_min'] = df.groupby(['molecule_name', 'atom_1'])['dist'].transform('min')
df[f'molecule_atom_1_dist_min_diff'] = df[f'molecule_atom_1_dist_min'] - df['dist']
df[f'molecule_atom_1_dist_min_div'] = df[f'molecule_atom_1_dist_min'] / df['dist']
df[f'molecule_atom_1_dist_std'] = df.groupby(['molecule_name', 'atom_1'])['dist'].transform('std')
df[f'molecule_atom_1_dist_std_diff'] = df[f'molecule_atom_1_dist_std'] - df['dist']
df[f'molecule_type_0_dist_std'] = df.groupby(['molecule_name', 'type_0'])['dist'].transform('std')
df[f'molecule_type_0_dist_std_diff'] = df[f'molecule_type_0_dist_std'] - df['dist']
df[f'molecule_type_dist_mean'] = df.groupby(['molecule_name', 'type'])['dist'].transform('mean')
df[f'molecule_type_dist_mean_diff'] = df[f'molecule_type_dist_mean'] - df['dist']
df[f'molecule_type_dist_mean_div'] = df[f'molecule_type_dist_mean'] / df['dist']
df[f'molecule_type_dist_max'] = df.groupby(['molecule_name', 'type'])['dist'].transform('max')
df[f'molecule_type_dist_min'] = df.groupby(['molecule_name', 'type'])['dist'].transform('min')
df[f'molecule_type_dist_std'] = df.groupby(['molecule_name', 'type'])['dist'].transform('std')
df[f'molecule_type_dist_std_diff'] = df[f'molecule_type_dist_std'] - df['dist']
# fc
df[f'molecule_type_fc_max'] = df.groupby(['molecule_name', 'type'])['fc'].transform('max')
df[f'molecule_type_fc_min'] = df.groupby(['molecule_name', 'type'])['fc'].transform('min')
df[f'molecule_type_fc_std'] = df.groupby(['molecule_name', 'type'])['fc'].transform('std')
df[f'molecule_type_fc_std_diff'] = df[f'molecule_type_fc_std'] - df['fc']
df[f'molecule_atom_index_0_fc_mean'] = df.groupby(['molecule_name', 'atom_index_0'])['fc'].transform('mean')
df[f'molecule_atom_index_0_fc_mean_diff'] = df[f'molecule_atom_index_0_fc_mean'] - df['fc']
df[f'molecule_atom_index_0_fc_mean_div'] = df[f'molecule_atom_index_0_fc_mean'] / df['dist']
df[f'molecule_atom_index_0_fc_max'] = df.groupby(['molecule_name', 'atom_index_0'])['fc'].transform('max')
df[f'molecule_atom_index_0_fc_max_diff'] = df[f'molecule_atom_index_0_fc_max'] - df['fc']
df[f'molecule_atom_index_0_fc_max_div'] = df[f'molecule_atom_index_0_fc_max'] / df['fc']
df[f'molecule_atom_index_0_fc_min'] = df.groupby(['molecule_name', 'atom_index_0'])['fc'].transform('min')
df[f'molecule_atom_index_0_fc_min_diff'] = df[f'molecule_atom_index_0_fc_min'] - df['fc']
df[f'molecule_atom_index_0_fc_min_div'] = df[f'molecule_atom_index_0_fc_min'] / df['fc']
df[f'molecule_atom_index_0_fc_std'] = df.groupby(['molecule_name', 'atom_index_0'])['fc'].transform('std')
df[f'molecule_atom_index_0_fc_std_diff'] = df[f'molecule_atom_index_0_fc_std'] - df['fc']
df[f'molecule_atom_index_0_fc_std_div'] = df[f'molecule_atom_index_0_fc_std'] / df['fc']
df[f'molecule_atom_index_1_fc_mean'] = df.groupby(['molecule_name', 'atom_index_1'])['fc'].transform('mean')
df[f'molecule_atom_index_1_fc_mean_diff'] = df[f'molecule_atom_index_1_fc_mean'] - df['fc']
df[f'molecule_atom_index_1_fc_mean_div'] = df[f'molecule_atom_index_1_fc_mean'] / df['fc']
df[f'molecule_atom_index_1_fc_max'] = df.groupby(['molecule_name', 'atom_index_1'])['fc'].transform('max')
df[f'molecule_atom_index_1_fc_max_diff'] = df[f'molecule_atom_index_1_fc_max'] - df['fc']
df[f'molecule_atom_index_1_fc_max_div'] = df[f'molecule_atom_index_1_fc_max'] / df['fc']
df[f'molecule_atom_index_1_fc_min'] = df.groupby(['molecule_name', 'atom_index_1'])['fc'].transform('min')
df[f'molecule_atom_index_1_fc_min_diff'] = df[f'molecule_atom_index_1_fc_min'] - df['fc']
df[f'molecule_atom_index_1_fc_min_div'] = df[f'molecule_atom_index_1_fc_min'] / df['fc']
df[f'molecule_atom_index_1_fc_std'] = df.groupby(['molecule_name', 'atom_index_1'])['fc'].transform('std')
df[f'molecule_atom_index_1_fc_std_diff'] = df[f'molecule_atom_index_1_fc_std'] - df['fc']
df[f'molecule_atom_index_1_fc_std_div'] = df[f'molecule_atom_index_1_fc_std'] / df['fc']
return df
```
angle features
```
def map_atom_info(df_1,df_2, atom_idx):
df = pd.merge(df_1, df_2, how = 'left',
left_on = ['molecule_name', f'atom_index_{atom_idx}'],
right_on = ['molecule_name', 'atom_index'])
df = df.drop('atom_index', axis=1)
return df
def create_closest(df):
df_temp=df.loc[:,["molecule_name","atom_index_0","atom_index_1","dist","x_0","y_0","z_0","x_1","y_1","z_1"]].copy()
df_temp_=df_temp.copy()
df_temp_= df_temp_.rename(columns={'atom_index_0': 'atom_index_1',
'atom_index_1': 'atom_index_0',
'x_0': 'x_1',
'y_0': 'y_1',
'z_0': 'z_1',
'x_1': 'x_0',
'y_1': 'y_0',
'z_1': 'z_0'})
df_temp=pd.concat(objs=[df_temp,df_temp_],axis=0)
df_temp["min_distance"]=df_temp.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('min')
df_temp= df_temp[df_temp["min_distance"]==df_temp["dist"]]
df_temp=df_temp.drop(['x_0','y_0','z_0','min_distance', 'dist'], axis=1)
df_temp= df_temp.rename(columns={'atom_index_0': 'atom_index',
'atom_index_1': 'atom_index_closest',
'distance': 'distance_closest',
'x_1': 'x_closest',
'y_1': 'y_closest',
'z_1': 'z_closest'})
for atom_idx in [0,1]:
df = map_atom_info(df,df_temp, atom_idx)
df = df.rename(columns={'atom_index_closest': f'atom_index_closest_{atom_idx}',
'distance_closest': f'distance_closest_{atom_idx}',
'x_closest': f'x_closest_{atom_idx}',
'y_closest': f'y_closest_{atom_idx}',
'z_closest': f'z_closest_{atom_idx}'})
return df
def add_cos_features(df):
df["distance_0"]=((df['x_0']-df['x_closest_0'])**2+(df['y_0']-df['y_closest_0'])**2+(df['z_0']-df['z_closest_0'])**2)**(1/2)
df["distance_1"]=((df['x_1']-df['x_closest_1'])**2+(df['y_1']-df['y_closest_1'])**2+(df['z_1']-df['z_closest_1'])**2)**(1/2)
df["vec_0_x"]=(df['x_0']-df['x_closest_0'])/df["distance_0"]
df["vec_0_y"]=(df['y_0']-df['y_closest_0'])/df["distance_0"]
df["vec_0_z"]=(df['z_0']-df['z_closest_0'])/df["distance_0"]
df["vec_1_x"]=(df['x_1']-df['x_closest_1'])/df["distance_1"]
df["vec_1_y"]=(df['y_1']-df['y_closest_1'])/df["distance_1"]
df["vec_1_z"]=(df['z_1']-df['z_closest_1'])/df["distance_1"]
df["vec_x"]=(df['x_1']-df['x_0'])/df["dist"]
df["vec_y"]=(df['y_1']-df['y_0'])/df["dist"]
df["vec_z"]=(df['z_1']-df['z_0'])/df["dist"]
df["cos_0_1"]=df["vec_0_x"]*df["vec_1_x"]+df["vec_0_y"]*df["vec_1_y"]+df["vec_0_z"]*df["vec_1_z"]
df["cos_0"]=df["vec_0_x"]*df["vec_x"]+df["vec_0_y"]*df["vec_y"]+df["vec_0_z"]*df["vec_z"]
df["cos_1"]=df["vec_1_x"]*df["vec_x"]+df["vec_1_y"]*df["vec_y"]+df["vec_1_z"]*df["vec_z"]
df=df.drop(['vec_0_x','vec_0_y','vec_0_z','vec_1_x','vec_1_y','vec_1_z','vec_x','vec_y','vec_z'], axis=1)
return df
%%time
print('add fc')
print(len(train), len(test))
train['fc'] = fc_train.values
test['fc'] = fc_test.values
print('type0')
print(len(train), len(test))
train = create_type0(train)
test = create_type0(test)
print('distances')
print(len(train), len(test))
train = distances(train)
test = distances(test)
print('create_features')
print(len(train), len(test))
train = create_features(train)
test = create_features(test)
print('create_closest')
print(len(train), len(test))
train = create_closest(train)
test = create_closest(test)
train.drop_duplicates(inplace=True, subset=['id']) # workaround: create_closest increases the number of train rows for some reason (bug)
train = train.reset_index(drop=True)
print('add_cos_features')
print(len(train), len(test))
train = add_cos_features(train)
test = add_cos_features(test)
```
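The `cos_*` features above are dot products of unit vectors, so perpendicular directions give 0 and parallel directions give 1. A toy sketch with made-up vectors:

```python
import numpy as np

# cos features are dot products of unit vectors:
# perpendicular -> 0, parallel -> 1 (toy bond/closest-atom directions).
def unit(v):
    return v / np.linalg.norm(v)

v_bond = unit(np.array([2.0, 0.0, 0.0]))     # bond direction
v_closest = unit(np.array([0.0, 3.0, 0.0]))  # direction to the closest atom
print(float(np.dot(v_bond, v_closest)))  # → 0.0
print(float(np.dot(v_bond, v_bond)))     # → 1.0
```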
---
<br>
<br>
<br>
drop features containing NaN
```
drop_feats = train.columns[train.isnull().sum(axis=0) != 0].values
drop_feats
train = train.drop(drop_feats, axis=1)
test = test.drop(drop_feats, axis=1)
assert sum(train.isnull().sum(axis=0)) == 0, 'train contains NaN values.'
assert sum(test.isnull().sum(axis=0)) == 0, 'test contains NaN values.'
```
<br>
<br>
<br>
encoding
```
cat_cols = ['atom_1']
num_cols = list(set(train.columns) - set(cat_cols) - set(['type', "scalar_coupling_constant", 'molecule_name', 'id',
'atom_0', 'atom_1','atom_2', 'atom_3', 'atom_4', 'atom_5', 'atom_6', 'atom_7', 'atom_8', 'atom_9']))
print(f'categorical: {cat_cols}')
print(f'numerical: {num_cols}')
```
<br>
<br>
LabelEncode
- `atom_1` = {H, C, N}
- `type_0` = {1, 2, 3}
- `type` = {2JHC, ...}
```
for f in ['type_0', 'type']:
if f in train.columns:
lbl = LabelEncoder()
lbl.fit(list(train[f].values) + list(test[f].values))
train[f] = lbl.transform(list(train[f].values))
test[f] = lbl.transform(list(test[f].values))
```
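Fitting the encoder on `train + test` guarantees one consistent mapping even when a label only appears in the test set. A small sketch with made-up type labels:

```python
from sklearn.preprocessing import LabelEncoder

train_types = ['2JHC', '1JHN', '3JHH']
test_types = ['1JHN', '3JHC']  # '3JHC' never appears in train

lbl = LabelEncoder()
lbl.fit(train_types + test_types)  # fit on the union so test-only labels are covered
print(list(lbl.classes_))               # → ['1JHN', '2JHC', '3JHC', '3JHH']
print(list(lbl.transform(test_types)))  # → [0, 2]
```

Fitting on train alone would make `transform` raise on the unseen `'3JHC'` label.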
<br>
<br>
<br>
one hot encoding
```
train = pd.get_dummies(train, columns=cat_cols)
test = pd.get_dummies(test, columns=cat_cols)
```
<br>
<br>
<br>
standardization
```
scaler = StandardScaler()
train[num_cols] = scaler.fit_transform(train[num_cols])
test[num_cols] = scaler.transform(test[num_cols])
```
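Note that the scaler is fit on train only and reused for test, so test rows are standardized with train statistics. A toy sketch with made-up values:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

train_col = np.array([[1.0], [2.0], [3.0]])  # mean 2, population std sqrt(2/3)
test_col = np.array([[4.0]])

scaler = StandardScaler().fit(train_col)   # statistics come from train only
z = scaler.transform(test_col).ravel()[0]  # (4 - 2) / sqrt(2/3) = sqrt(6)
print(round(z, 3))  # ≈ 2.449
```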
<br>
<br>
---
**show features**
```
train.head(2)
print(train.columns)
```
# create train, test data
```
y = train['scalar_coupling_constant']
train = train.drop(['id', 'molecule_name', 'atom_0', 'scalar_coupling_constant'], axis=1)
test = test.drop(['id', 'molecule_name', 'atom_0'], axis=1)
# train = reduce_mem_usage(train)
# test = reduce_mem_usage(test)
X = train.copy()
X_test = test.copy()
assert len(X.columns) == len(X_test.columns), f'X and X_test have different column counts; X: {len(X.columns)}, X_test: {len(X_test.columns)}'
del train, test, full_train, full_test
gc.collect()
```
# Hyperopt
```
X_train, X_valid, y_train, y_valid = train_test_split(X,
y,
test_size = 0.30,
random_state = 0)
# Define searched space
hyper_space = {'alpha': hp.choice('alpha', [0.01, 0.05, 0.1, 0.5, 1, 2]),
'l1_ratio': hp.choice('l1_ratio', [0, 0.1, 0.3, 0.5, 0.7, 0.9, 1])}
# Setting the number of evals
MAX_EVALS = 30
%%time
# train a separate model per coupling type
best_params_list = []
for t in sorted(X_train['type'].unique()):
print('*'*80)
print(f'- Training of type {t}')
print('*'*80)
X_t_train = X_train.loc[X_train['type'] == t]
X_t_valid = X_valid.loc[X_valid['type'] == t]
y_t_train = y_train[X_train['type'] == t]
y_t_valid = y_valid[X_valid['type'] == t]
# evaluate_metric
def evaluate_metric(params):
model = linear_model.ElasticNet(**params, random_state=42, max_iter=3000) # <=======================
model.fit(X_t_train, y_t_train)
pred = model.predict(X_t_valid)
y_t_train_pred = model.predict(X_t_train)
_X_t_valid = X_t_valid.copy()
_X_t_valid['scalar_coupling_constant'] = y_t_valid
cv_score = kaggle_metric(_X_t_valid, pred)
_X_t_valid = _X_t_valid.drop(['scalar_coupling_constant'], axis=1)
# print(f'mae(valid): {mean_absolute_error(y_t_valid, pred)}')
print(params)
print(f'training l1: {mean_absolute_error(y_t_train, y_t_train_pred) :.5f} \t valid l1: {mean_absolute_error(y_t_valid, pred) :.5f} ')
print(f'cv_score: {cv_score}')
print('-'*80)
print('\n')
return {
'loss': cv_score,
'status': STATUS_OK,
'stats_running': STATUS_RUNNING
}
# hyperopt
# Trials
trials = Trials()
# Set algorithm parameters
algo = partial(tpe.suggest,
n_startup_jobs=-1)
# Setting the number of evals
MAX_EVALS = 20
# Fit Tree Parzen Estimator
best_vals = fmin(evaluate_metric, space=hyper_space, verbose=1,
algo=algo, max_evals=MAX_EVALS, trials=trials)
# Print best parameters
best_params = space_eval(hyper_space, best_vals)
best_params_list.append(best_params)
print("BEST PARAMETERS: " + str(best_params))
print('')
best_params_list
```
```
import pandas as pd
import numpy as np
from datetime import datetime
import os
```
# Define Which Input Files to Use
The default settings will use the input files recently produced in Step 1) using the notebook `get_eia_demand_data.ipynb`. For those interested in reproducing the exact results included in the repository, you will need to point to the files containing the original `raw` EIA demand data that we queried on 10 Sept 2019.
```
merge_with_step1_files = False # used to run step 2 on the most recent files
merge_with_10sept2019_files = True # used to reproduce the documented results
assert(merge_with_step1_files != merge_with_10sept2019_files), \
    "Exactly one of 'merge_with_step1_files' and 'merge_with_10sept2019_files' must be True"
if merge_with_step1_files:
input_path = './data'
if merge_with_10sept2019_files:
# input_path is the path to the downloaded data from Zenodo: https://zenodo.org/record/3517197
input_path = '/BASE/PATH/TO/ZENODO'
input_path += '/data/release_2019_Oct/original_eia_files'
assert(os.path.exists(input_path)), f"You must set the base directory for the Zenodo data; {input_path} does not exist"
# If you did not run step 1, make the /data directory
if not os.path.exists('./data'):
os.mkdir('./data')
```
# Make the output directories
```
# Make output directories
out_base = './data/final_results'
if not os.path.exists(out_base):
os.mkdir(out_base)
for subdir in ['balancing_authorities', 'regions', 'interconnects', 'contiguous_US']:
os.mkdir(f"{out_base}/{subdir}")
print(f"Final results files will be located here: {out_base}/{subdir}")
```
# Useful functions
```
# All 56 balancing authorities that have demand (BA)
def return_all_regions():
return [
'AEC', 'AECI', 'CPLE', 'CPLW',
'DUK', 'FMPP', 'FPC',
'FPL', 'GVL', 'HST', 'ISNE',
'JEA', 'LGEE', 'MISO', 'NSB',
'NYIS', 'PJM', 'SC',
'SCEG', 'SOCO',
'SPA', 'SWPP', 'TAL', 'TEC',
'TVA', 'ERCO',
'AVA', 'AZPS', 'BANC', 'BPAT',
'CHPD', 'CISO', 'DOPD',
'EPE', 'GCPD', 'IID',
'IPCO', 'LDWP', 'NEVP', 'NWMT',
'PACE', 'PACW', 'PGE', 'PNM',
'PSCO', 'PSEI', 'SCL', 'SRP',
'TEPC', 'TIDC', 'TPWR', 'WACM',
'WALC', 'WAUW',
'OVEC', 'SEC',
]
# All 54 "usable" balancing authorities (BAs), excluding OVEC and SEC.
# Those two have reporting problems significant enough that we do not
# impute cleaned data for them.
def return_usable_BAs():
return [
'AEC', 'AECI', 'CPLE', 'CPLW',
'DUK', 'FMPP', 'FPC',
'FPL', 'GVL', 'HST', 'ISNE',
'JEA', 'LGEE', 'MISO', 'NSB',
'NYIS', 'PJM', 'SC',
'SCEG', 'SOCO',
'SPA', 'SWPP', 'TAL', 'TEC',
'TVA', 'ERCO',
'AVA', 'AZPS', 'BANC', 'BPAT',
'CHPD', 'CISO', 'DOPD',
'EPE', 'GCPD', 'IID',
'IPCO', 'LDWP', 'NEVP', 'NWMT',
'PACE', 'PACW', 'PGE', 'PNM',
'PSCO', 'PSEI', 'SCL', 'SRP',
'TEPC', 'TIDC', 'TPWR', 'WACM',
'WALC', 'WAUW',
# 'OVEC', 'SEC',
]
# mapping of each balancing authority (BA) to its associated
# U.S. interconnect (IC).
def return_ICs_from_BAs():
return {
'EASTERN_IC' : [
'AEC', 'AECI', 'CPLE', 'CPLW',
'DUK', 'FMPP', 'FPC',
'FPL', 'GVL', 'HST', 'ISNE',
'JEA', 'LGEE', 'MISO', 'NSB',
'NYIS', 'PJM', 'SC',
'SCEG', 'SOCO',
'SPA', 'SWPP', 'TAL', 'TEC',
'TVA',
'OVEC', 'SEC',
],
'TEXAS_IC' : [
'ERCO',
],
'WESTERN_IC' : [
'AVA', 'AZPS', 'BANC', 'BPAT',
'CHPD', 'CISO', 'DOPD',
'EPE', 'GCPD',
'IID',
'IPCO', 'LDWP', 'NEVP', 'NWMT',
'PACE', 'PACW', 'PGE', 'PNM',
'PSCO', 'PSEI', 'SCL', 'SRP',
'TEPC', 'TIDC', 'TPWR', 'WACM',
'WALC', 'WAUW',
]
}
# Defines a mapping between the balancing authorities (BAs)
# and their locally defined region based on EIA naming.
# This uses a CSV file (EIA's acronym table) defining the mapping.
def return_BAs_per_region_map():
regions = {
'CENT' : 'Central',
'MIDW' : 'Midwest',
'TEN' : 'Tennessee',
'SE' : 'Southeast',
'FLA' : 'Florida',
'CAR' : 'Carolinas',
'MIDA' : 'Mid-Atlantic',
'NY' : 'New York',
'NE' : 'New England',
'TEX' : 'Texas',
'CAL' : 'California',
'NW' : 'Northwest',
'SW' : 'Southwest'
}
rtn_map = {}
for k, v in regions.items():
rtn_map[k] = []
# Load EIA's Balancing Authority Acronym table
# https://www.eia.gov/realtime_grid/
df = pd.read_csv('data/balancing_authority_acronyms.csv',
skiprows=1) # skip first row as it is source info
# Loop over all rows and fill map
for idx in df.index:
# Skip Canada and Mexico
if df.loc[idx, 'Region'] in ['Canada', 'Mexico']:
continue
reg_acronym = ''
# Get region to acronym
for k, v in regions.items():
if v == df.loc[idx, 'Region']:
reg_acronym = k
break
assert(reg_acronym != '')
rtn_map[reg_acronym].append(df.loc[idx, 'Code'])
tot = 0
for k, v in rtn_map.items():
tot += len(v)
print(f"Total US48 BAs mapped {tot}. Recall 11 are generation only.")
return rtn_map
# Assume the MICE results file is a subset of the original hours
def trim_rows_to_match_length(mice, df):
mice_start = mice.loc[0, 'date_time']
mice_end = mice.loc[len(mice.index)-1, 'date_time']
to_drop = []
for idx in df.index:
if df.loc[idx, 'date_time'] != mice_start:
to_drop.append(idx)
else: # stop once equal
break
for idx in reversed(df.index):
if df.loc[idx, 'date_time'] != mice_end:
to_drop.append(idx)
else: # stop once equal
break
df = df.drop(to_drop, axis=0)
df = df.reset_index()
assert(len(mice.index) == len(df.index))
return df
# Load balancing authority files already containing the full MICE results.
# Aggregate associated regions into regional, interconnect, or CONUS files.
# Treat 'MISSING' and 'EMPTY' values as zeros when aggregating.
def merge_BAs(region, bas, out_base, folder):
print(region, bas)
# Remove BAs which are generation only as well as SEC and OVEC.
# See main README regarding SEC and OVEC.
usable_BAs = return_usable_BAs()
good_bas = []
for ba in bas:
if ba in usable_BAs:
good_bas.append(ba)
first_ba = good_bas.pop()
master = pd.read_csv(f'{out_base}/balancing_authorities/{first_ba}.csv', na_values=['MISSING', 'EMPTY'])
master = master.fillna(0)
master = master.drop(['category', 'forecast demand (MW)'], axis=1)
for ba in good_bas:
df = pd.read_csv(f'{out_base}/balancing_authorities/{ba}.csv', na_values=['MISSING', 'EMPTY'])
df = df.fillna(0)
master['raw demand (MW)'] += df['raw demand (MW)']
master['cleaned demand (MW)'] += df['cleaned demand (MW)']
master.to_csv(f'{out_base}/{folder}/{region}.csv', index=False)
# Do both the distribution of balancing authority level results to new BA files
# and generate regional, interconnect, and CONUS aggregate files.
def distribute_MICE_results(raw_demand_file_loc, screening_file, mice_results_csv, out_base):
# Load screening results
screening = pd.read_csv(screening_file)
# Load MICE results
mice = pd.read_csv(mice_results_csv)
screening = trim_rows_to_match_length(mice, screening)
# Distribute to single BA results files first
print("Distribute MICE results per-balancing authority:")
for ba in return_usable_BAs():
print(ba)
df = pd.read_csv(f"{raw_demand_file_loc}/{ba}.csv")
df = trim_rows_to_match_length(mice, df)
df_out = pd.DataFrame({
'date_time': df['date_time'],
'raw demand (MW)': df['demand (MW)'],
'category': screening[f'{ba}_category'],
'cleaned demand (MW)': mice[ba],
'forecast demand (MW)': df['forecast demand (MW)']
})
df_out.to_csv(f'./{out_base}/balancing_authorities/{ba}.csv', index=False)
# Aggregate balancing authority level results into EIA regions
print("\nEIA regional aggregation:")
for region, bas in return_BAs_per_region_map().items():
merge_BAs(region, bas, out_base, 'regions')
# Aggregate balancing authority level results into CONUS interconnects
print("\nCONUS interconnect aggregation:")
for region, bas in return_ICs_from_BAs().items():
merge_BAs(region, bas, out_base, 'interconnects')
# Aggregate balancing authority level results into CONUS total
print("\nCONUS total aggregation:")
merge_BAs('CONUS', return_usable_BAs(), out_base, 'contiguous_US')
```
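`trim_rows_to_match_length` only drops rows off both ends, so (assuming sorted, hourly-contiguous rows, as the original function does) it is equivalent to masking on the MICE file's first and last timestamps. A toy sketch with made-up hours:

```python
import pandas as pd

# Keep only the hours of `df` inside the MICE window [start, end];
# assumes both frames are sorted and contiguous (toy data).
mice = pd.DataFrame({'date_time': ['2016-01-01 01:00', '2016-01-01 02:00']})
df = pd.DataFrame({'date_time': ['2016-01-01 00:00', '2016-01-01 01:00',
                                 '2016-01-01 02:00', '2016-01-01 03:00'],
                   'demand (MW)': [10, 20, 30, 40]})

start = mice['date_time'].iloc[0]
end = mice['date_time'].iloc[-1]
trimmed = df[(df['date_time'] >= start) & (df['date_time'] <= end)].reset_index(drop=True)
print(trimmed['demand (MW)'].tolist())  # → [20, 30]
```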
# Run the distribution and aggregation
```
# The output file generated by Step 2 listing the categories for each time step
screening_file = './data/csv_MASTER.csv'
# The output file generated by Step 3 which runs the MICE algo and has the cleaned demand values
mice_file = 'MICE_output/mean_impute_csv_MASTER.csv'
distribute_MICE_results(input_path, screening_file, mice_file, out_base)
```
# Test distribution and aggregation
This cell simply checks that the results all add up.
```
# Compare each value in the vectors
def compare(vect1, vect2):
cnt = 0
clean = True
for v1, v2 in zip(vect1, vect2):
if v1 != v2:
print(f"Error at idx {cnt} {v1} != {v2}")
clean = False
cnt += 1
return clean
def test_aggregation(raw_demand_file_loc, screening_file, mice_results_csv, out_base):
# Load MICE results
usable_BAs = return_usable_BAs()
mice = pd.read_csv(mice_results_csv)
# Sum all result BAs
tot_imp = np.zeros(len(mice.index))
for col in mice.columns:
if col not in usable_BAs:
continue
tot_imp += mice[col]
# Sum Raw
tot_raw = np.zeros(len(mice.index))
for ba in return_usable_BAs():
df = pd.read_csv(f"{raw_demand_file_loc}/{ba}.csv", na_values=['MISSING', 'EMPTY'])
df = trim_rows_to_match_length(mice, df)
df = df.fillna(0)
tot_raw += df['demand (MW)']
# Check BA results distribution
print("\nBA Distribution:")
new_tot_raw = np.zeros(len(mice.index))
new_tot_clean = np.zeros(len(mice.index))
for ba in return_usable_BAs():
df = pd.read_csv(f"{out_base}/balancing_authorities/{ba}.csv", na_values=['MISSING', 'EMPTY'])
df = df.fillna(0)
new_tot_raw += df['raw demand (MW)']
new_tot_clean += df['cleaned demand (MW)']
assert(compare(tot_raw, new_tot_raw)), "Error in raw sums."
assert(compare(tot_imp, new_tot_clean)), "Error in imputed values."
print("BA Distribution okay!")
# Check aggregate balancing authority level results into EIA regions
print("\nEIA regional aggregation:")
new_tot_raw = np.zeros(len(mice.index))
new_tot_clean = np.zeros(len(mice.index))
for region, bas in return_BAs_per_region_map().items():
df = pd.read_csv(f"{out_base}/regions/{region}.csv")
new_tot_raw += df['raw demand (MW)']
new_tot_clean += df['cleaned demand (MW)']
assert(compare(tot_raw, new_tot_raw)), "Error in raw sums."
assert(compare(tot_imp, new_tot_clean)), "Error in imputed values."
print("Regional sums okay!")
# Aggregate balancing authority level results into CONUS interconnects
print("\nCONUS interconnect aggregation:")
new_tot_raw = np.zeros(len(mice.index))
new_tot_clean = np.zeros(len(mice.index))
for region, bas in return_ICs_from_BAs().items():
df = pd.read_csv(f"{out_base}/interconnects/{region}.csv")
new_tot_raw += df['raw demand (MW)']
new_tot_clean += df['cleaned demand (MW)']
assert(compare(tot_raw, new_tot_raw)), "Error in raw sums."
assert(compare(tot_imp, new_tot_clean)), "Error in imputed values."
print("Interconnect sums okay!")
# Aggregate balancing authority level results into CONUS total
print("\nCONUS total aggregation:")
new_tot_raw = np.zeros(len(mice.index))
new_tot_clean = np.zeros(len(mice.index))
df = pd.read_csv(f"{out_base}/contiguous_US/CONUS.csv")
new_tot_raw += df['raw demand (MW)']
new_tot_clean += df['cleaned demand (MW)']
assert(compare(tot_raw, new_tot_raw)), "Error in raw sums."
assert(compare(tot_imp, new_tot_clean)), "Error in imputed values."
print("CONUS sums okay!")
test_aggregation(input_path, screening_file, mice_file, out_base)
```
# Dataset
```
import sys
sys.path.append('../../datasets/')
from prepare_individuals import prepare, germanBats
import matplotlib.pyplot as plt
import torch
import numpy as np
import tqdm
import pickle
classes = germanBats
patch_len = 44 # 88 at 44100 Hz, 44 at 22050 Hz = 250ms ~ 25ms
X_train, Y_train, X_test, Y_test, X_val, Y_val = prepare("../../datasets/prepared.h5", classes, patch_len)
with open('../call_nocall.indices', 'rb') as file:
indices, labels = pickle.load(file)
train_indices = indices[0][:len(X_train)]
test_indices = indices[1][:len(X_test)]
val_indices = indices[2][:len(X_val)]
X_train = X_train[train_indices]
X_test = X_test[test_indices]
X_val = X_val[val_indices]
Y_train = Y_train[train_indices]
Y_test = Y_test[test_indices]
Y_val = Y_val[val_indices]
print("Total calls:", len(X_train) + len(X_test) + len(X_val))
print(X_train.shape, Y_train.shape)
'''species = [0, 1]
def filterSpecies(s, X, Y):
idx = np.in1d(Y, s)
return X[idx], Y[idx]
X_train, Y_train = filterSpecies(species, X_train, Y_train)
X_test, Y_test = filterSpecies(species, X_test, Y_test)
X_val, Y_val = filterSpecies(species, X_val, Y_val)
classes = {
"Rhinolophus ferrumequinum": 0,
"Rhinolophus hipposideros": 1,
}'''
species = np.asarray([0, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])
Y_train = species[Y_train]
Y_test = species[Y_test]
Y_val = species[Y_val]
classes = {
"Rhinolophus ferrumequinum": 0,
"Rhinolophus hipposideros": 1,
"Other": 2,
}
print("Total calls:", len(X_train) + len(X_test) + len(X_val))
print(X_train.shape, Y_train.shape)
```
# Model
```
import time
import datetime
import tqdm
import torch.nn as nn
import torchvision
from torch.cuda.amp import autocast
from torch.utils.data import TensorDataset, DataLoader
from timm.data.mixup import Mixup
use_stochdepth = False
use_mixedprecision = False
use_imbalancedsampler = False
use_sampler = True
use_cosinescheduler = False
use_reduceonplateu = False
use_nadam = False
use_mixup = False
mixup_args = {
'mixup_alpha': 1.,
'cutmix_alpha': 0.,
'cutmix_minmax': None,
'prob': 1.0,
'switch_prob': 0.,
'mode': 'batch',
'label_smoothing': 0,
'num_classes': len(list(classes))}
mixup_fn = Mixup(**mixup_args)
class Block(nn.Module):
def __init__(self, num_layers, in_channels, out_channels, identity_downsample=None, stride=1):
        assert num_layers in [18, 34, 50, 101, 152], "should be a valid architecture"
super(Block, self).__init__()
self.num_layers = num_layers
if self.num_layers > 34:
self.expansion = 4
else:
self.expansion = 1
# ResNet50, 101, and 152 include additional layer of 1x1 kernels
self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0)
self.bn1 = nn.BatchNorm2d(out_channels)
if self.num_layers > 34:
self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=stride, padding=1)
else:
# for ResNet18 and 34, connect input directly to (3x3) kernel (skip first (1x1))
self.conv2 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1)
self.bn2 = nn.BatchNorm2d(out_channels)
self.conv3 = nn.Conv2d(out_channels, out_channels * self.expansion, kernel_size=1, stride=1, padding=0)
self.bn3 = nn.BatchNorm2d(out_channels * self.expansion)
self.relu = nn.ReLU()
self.identity_downsample = identity_downsample
def forward(self, x):
identity = x
if self.num_layers > 34:
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.conv2(x)
x = self.bn2(x)
x = self.relu(x)
x = self.conv3(x)
x = self.bn3(x)
if self.identity_downsample is not None:
identity = self.identity_downsample(identity)
        if use_stochdepth:
            x = torchvision.ops.stochastic_depth(input=x, p=0.25, mode='batch', training=self.training) # randomly zero the residual branch (stochastic depth)
x += identity
x = self.relu(x)
return x
class ResNet(nn.Module):
def __init__(self, num_layers, block, image_channels, num_classes):
assert num_layers in [18, 34, 50, 101, 152], f'ResNet{num_layers}: Unknown architecture! Number of layers has ' \
f'to be 18, 34, 50, 101, or 152 '
super(ResNet, self).__init__()
if num_layers < 50:
self.expansion = 1
else:
self.expansion = 4
if num_layers == 18:
layers = [2, 2, 2, 2]
elif num_layers == 34 or num_layers == 50:
layers = [3, 4, 6, 3]
elif num_layers == 101:
layers = [3, 4, 23, 3]
else:
layers = [3, 8, 36, 3]
self.in_channels = 64
self.conv1 = nn.Conv2d(image_channels, 64, kernel_size=7, stride=2, padding=3)
self.bn1 = nn.BatchNorm2d(64)
self.relu = nn.ReLU()
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
# ResNetLayers
self.layer1 = self.make_layers(num_layers, block, layers[0], intermediate_channels=64, stride=1)
self.layer2 = self.make_layers(num_layers, block, layers[1], intermediate_channels=128, stride=2)
self.layer3 = self.make_layers(num_layers, block, layers[2], intermediate_channels=256, stride=2)
self.layer4 = self.make_layers(num_layers, block, layers[3], intermediate_channels=512, stride=2)
self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
self.fc = nn.Linear(512 * self.expansion, num_classes)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.avgpool(x)
x = x.reshape(x.shape[0], -1)
x = self.fc(x)
return x
def make_layers(self, num_layers, block, num_residual_blocks, intermediate_channels, stride):
layers = []
identity_downsample = nn.Sequential(nn.Conv2d(self.in_channels, intermediate_channels*self.expansion, kernel_size=1, stride=stride),
nn.BatchNorm2d(intermediate_channels*self.expansion))
layers.append(block(num_layers, self.in_channels, intermediate_channels, identity_downsample, stride))
self.in_channels = intermediate_channels * self.expansion # 256
for i in range(num_residual_blocks - 1):
layers.append(block(num_layers, self.in_channels, intermediate_channels)) # 256 -> 64, 64*4 (256) again
return nn.Sequential(*layers)
def train_epoch(model, epoch, criterion, optimizer, scheduler, dataloader, device):
model.train()
running_loss = 0.0
running_corrects = 0
num_batches = len(dataloader)
num_samples = len(dataloader.dataset)
for batch, (inputs, labels) in enumerate(tqdm.tqdm(dataloader)):
# Transfer Data to GPU if available
inputs, labels = inputs.to(device), labels.to(device)
if use_mixup:
inputs, labels = mixup_fn(inputs, labels)
# Clear the gradients
optimizer.zero_grad()
with autocast(enabled=use_mixedprecision):
# Forward Pass
outputs = model(inputs)
_, predictions = torch.max(outputs, 1)
# Compute Loss
loss = criterion(outputs, labels)
# Calculate gradients
loss.backward()
# Update Weights
optimizer.step()
# Calculate Loss
running_loss += loss.item() * inputs.size(0)
if use_mixup:
running_corrects += (predictions == torch.max(labels, 1)[1]).sum().item()
else:
running_corrects += (predictions == labels).sum().item()
# Perform learning rate step
if use_cosinescheduler:
scheduler.step(epoch + batch / num_batches)
epoch_loss = running_loss / num_samples
epoch_acc = running_corrects / num_samples
return epoch_loss, epoch_acc
def test_epoch(model, epoch, criterion, optimizer, dataloader, device):
model.eval()
num_batches = len(dataloader)
num_samples = len(dataloader.dataset)
with torch.no_grad():
running_loss = 0.0
running_corrects = 0
for batch, (inputs, labels) in enumerate(tqdm.tqdm(dataloader)):
# Transfer Data to GPU if available
inputs, labels = inputs.to(device), labels.to(device)
if use_mixup:
labels = torch.nn.functional.one_hot(labels.to(torch.int64), num_classes=len(list(classes))).float()
# Clear the gradients
optimizer.zero_grad()
# Forward Pass
outputs = model(inputs)
_, predictions = torch.max(outputs, 1)
# Compute Loss
loss = criterion(outputs, labels)
# Update Weights
# optimizer.step()
# Calculate Loss
running_loss += loss.item() * inputs.size(0)
if use_mixup:
running_corrects += (predictions == torch.max(labels, 1)[1]).sum().item()
else:
running_corrects += (predictions == labels).sum().item()
epoch_loss = running_loss / num_samples
epoch_acc = running_corrects / num_samples
return epoch_loss, epoch_acc
from torchsampler import ImbalancedDatasetSampler
from torch.utils.data import WeightedRandomSampler
batch_size = 64
epochs = 40
lr = 0.01
warmup_epochs = 5
wd = 0.01
'''# Experiment: wrong sampling
X = np.concatenate([X_train, X_test, X_val])
Y = np.concatenate([Y_train, Y_test, Y_val])
full_data = TensorDataset(torch.Tensor(np.expand_dims(X, axis=1)), torch.from_numpy(Y))
train_size = int(0.75 * len(full_data))
test_size = len(full_data) - train_size
val_size = int(0.2 * test_size)
test_size -= val_size
train_data, test_data, val_data = torch.utils.data.random_split(full_data, [train_size, test_size, val_size],
generator=torch.Generator().manual_seed(42))'''
train_data = TensorDataset(torch.Tensor(np.expand_dims(X_train, axis=1)), torch.from_numpy(Y_train))
test_data = TensorDataset(torch.Tensor(np.expand_dims(X_test, axis=1)), torch.from_numpy(Y_test))
val_data = TensorDataset(torch.Tensor(np.expand_dims(X_val, axis=1)), torch.from_numpy(Y_val))
if use_imbalancedsampler:
train_loader = DataLoader(train_data, sampler=ImbalancedDatasetSampler(train_data), batch_size=batch_size)
test_loader = DataLoader(test_data, sampler=ImbalancedDatasetSampler(test_data), batch_size=batch_size)
val_loader = DataLoader(val_data, sampler=ImbalancedDatasetSampler(val_data), batch_size=batch_size)
elif use_sampler:
def getSampler(y):
_, counts = np.unique(y, return_counts=True)
weights = [len(y)/c for c in counts]
samples_weights = [weights[t] for t in y]
return WeightedRandomSampler(samples_weights, len(y))
train_loader = DataLoader(train_data, sampler=getSampler(Y_train), batch_size=batch_size)
test_loader = DataLoader(test_data, sampler=getSampler(Y_test), batch_size=batch_size)
val_loader = DataLoader(val_data, sampler=getSampler(Y_val), batch_size=batch_size)
else:
train_loader = DataLoader(train_data, batch_size=batch_size)
test_loader = DataLoader(test_data, batch_size=batch_size)
val_loader = DataLoader(val_data, batch_size=batch_size)
model = ResNet(18, Block, image_channels=1, num_classes=len(list(classes)))
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
model = nn.DataParallel(model, device_ids=[0, 1])
model.to(device)
print(device)
import wandb
wandb.init(project="BAT-baseline-hierarchical", entity="frankfundel")
wandb.config = {
"learning_rate": lr,
"epochs": epochs,
"batch_size": batch_size
}
criterion = nn.CrossEntropyLoss()
if use_mixup:
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=lr)
if use_nadam:
optimizer = torch.optim.NAdam(model.parameters(), lr=lr, weight_decay=wd)
scheduler = None
if use_cosinescheduler:
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer=optimizer, T_0=warmup_epochs, T_mult=1)
if use_reduceonplateu:
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
min_val_loss = np.inf
torch.autograd.set_detect_anomaly(True)
for epoch in range(epochs):
end = time.time()
print(f"==================== Starting at epoch {epoch} ====================", flush=True)
train_loss, train_acc = train_epoch(model, epoch, criterion, optimizer, scheduler, train_loader, device)
print('Training loss: {:.4f} Acc: {:.4f}'.format(train_loss, train_acc), flush=True)
val_loss, val_acc = test_epoch(model, epoch, criterion, optimizer, val_loader, device)
print('Validation loss: {:.4f} Acc: {:.4f}'.format(val_loss, val_acc), flush=True)
if use_reduceonplateu:
scheduler.step(val_loss)
wandb.log({
"train_loss": train_loss,
"train_acc": train_acc,
"val_loss": val_loss,
"val_acc": val_acc,
})
if min_val_loss > val_loss:
print('val_loss decreased, saving model', flush=True)
min_val_loss = val_loss
# Saving State Dict
torch.save(model.state_dict(), 'baseline_rhinolophus.pth')
wandb.finish()
model.load_state_dict(torch.load('baseline_rhinolophus.pth'))
compiled_model = torch.jit.script(model)
torch.jit.save(compiled_model, 'baseline_rhinolophus.pt')
from sklearn.metrics import confusion_matrix
import seaborn as sn
import pandas as pd
Y_pred = []
Y_true = []
corrects = 0
model.eval()
# iterate over test data
for inputs, labels in tqdm.tqdm(test_loader):
output = model(inputs.cuda()) # Feed Network
output = (torch.max(output, 1)[1]).data.cpu().numpy()
Y_pred.extend(output) # Save Prediction
labels = labels.data.cpu().numpy()
Y_true.extend(labels) # Save Truth
# Build confusion matrix
cf_matrix = confusion_matrix(Y_true, Y_pred)
df_cm = pd.DataFrame(cf_matrix / np.sum(cf_matrix, axis=1, keepdims=True), index = [i for i in classes],
                     columns = [i for i in classes])
plt.figure(figsize = (12,7))
sn.heatmap(df_cm, annot=True)
plt.savefig('baseline_rhinolophus_cf.png')
from sklearn.metrics import f1_score
corrects = np.equal(Y_pred, Y_true).sum()
print("Test accuracy:", corrects/len(Y_pred))
print("F1-score:", f1_score(Y_true, Y_pred, average=None).mean())
```
```
import pandas as pd
import numpy as np
from scipy.stats import ks_2samp, chi2
import scipy
from astropy.table import Table
import astropy
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator
from matplotlib.colors import colorConverter
import matplotlib
%matplotlib notebook
print('numpy version: {}'.format(np.__version__))
print('pandas version: {}'.format(pd.__version__))
print('matplotlib version: {}'.format(matplotlib.__version__))
print('scipy version: {}'.format(scipy.__version__))
```
# Figure 7
Create Figure 7 (the host-galaxy offset of ASAS-SN SNe relative to SNe in the ZTF BTS) in [Fremling et al. 2020](https://ui.adsabs.harvard.edu/abs/2019arXiv191012973F/abstract).
Data for ASAS-SN are from [Holoien et al. 2019](https://ui.adsabs.harvard.edu/abs/2019MNRAS.484.1899H/abstract).
```
# BTS data
bts_df = pd.read_hdf('../data/final_rcf_table.h5')
z_sn = bts_df.z_sn.values
z_host = bts_df.z_host.values
norm_Ia = np.where( (bts_df.sn_type == 'Ia-norm') |
                    (bts_df.sn_type == 'Ia') |
                    (bts_df.sn_type == 'Ia-91bg') |
                    (bts_df.sn_type == 'Ia-91T') |
                    (bts_df.sn_type == 'Ia-99aa') |
                    (bts_df.sn_type == 'ia') |
                    (bts_df.sn_type == 'Ia-norm*') |
                    (bts_df.sn_type == 'Ia-91T*') |
                    (bts_df.sn_type == 'Ia-91T**') |
                    (bts_df.sn_type == 'SN Ia')
                  )
norm_cc = np.where( (bts_df.sn_type == 'IIb') |
(bts_df.sn_type == 'Ib') |
(bts_df.sn_type == 'IIP') |
(bts_df.sn_type == 'Ib/c') |
(bts_df.sn_type == 'Ic-norm') |
(bts_df.sn_type == 'IIn') |
(bts_df.sn_type == 'IIL') |
(bts_df.sn_type == 'Ic-broad') |
(bts_df.sn_type == 'II') |
(bts_df.sn_type == 'II-pec') |
(bts_df.sn_type == 'Ib-pec') |
(bts_df.sn_type == 'Ic') |
(bts_df.sn_type == 'Ic-BL') |
(bts_df.sn_type == 'IIP*') |
(bts_df.sn_type == 'II*') |
(bts_df.sn_type == 'Ibn') |
(bts_df.sn_type == 'II**') |
(bts_df.sn_type == 'Ib-norm') |
(bts_df.sn_type == 'IIn*')
)
has_host_z = np.where((z_host > 0) & np.isfinite(z_host))
no_host = np.where((z_host < 0) | np.isnan(z_host))
has_host_cc = np.intersect1d(has_host_z, norm_cc)
has_host_ia = np.intersect1d(has_host_z, norm_Ia)
no_host_cc = np.intersect1d(no_host, norm_cc)
no_host_ia = np.intersect1d(no_host, norm_Ia)
z_mix = z_sn.copy()
z_mix[has_host_z] = z_host[has_host_z]
```
#### Read in SN data from ASAS-SN
```
n_asas_ia = 0
n_asas_91T = 0
n_asas_91bg = 0
n_asas_ii = 0
n_asas_ibc = 0
n_asas_slsn = 0
asas_offset = np.array([])
for release in ['1','2','3','4']:
tab1 = '../data/ASAS_SN/bright_sn_catalog_{}/table1.txt'.format(release)
tab2 = '../data/ASAS_SN/bright_sn_catalog_{}/table2.txt'.format(release)
asassn_tab1 = Table.read(tab1, format='cds')
asassn_tab2 = Table.read(tab2, format='cds')
n_asas_ia += len(np.where( (asassn_tab1['Type'] == 'Ia') |
(asassn_tab1['Type'] == 'Ia-91T') |
(asassn_tab1['Type'] == 'Ia-91bg') |
(asassn_tab1['Type'] == 'Ia+CSM') |
(asassn_tab1['Type'] == 'Ia-pec') |
(asassn_tab1['Type'] == 'Ia-00cx') |
(asassn_tab1['Type'] == 'Ia-06bt') |
(asassn_tab1['Type'] == 'Ia-07if') |
(asassn_tab1['Type'] == 'Ia-09dc') |
(asassn_tab1['Type'] == 'Ia-02cx')
)[0])
n_asas_91T += len(np.where( (asassn_tab1['Type'] == 'Ia-91T') )[0])
n_asas_91bg += len(np.where( (asassn_tab1['Type'] == 'Ia-91bg') )[0])
n_asas_ii += len(np.where( (asassn_tab1['Type'] == 'II') |
(asassn_tab1['Type'] == 'IIP') |
(asassn_tab1['Type'] == 'IIb') |
(asassn_tab1['Type'] == 'II-pec') |
(asassn_tab1['Type'] == 'IIn') |
(asassn_tab1['Type'] == 'IIn-pec') |
(asassn_tab1['Type'] == 'IIn/LBV') |
(asassn_tab1['Type'] == 'IIn-09ip')
)[0])
n_asas_ibc += len(np.where( (asassn_tab1['Type'] == 'Ib') |
(asassn_tab1['Type'] == 'Ib/c') |
(asassn_tab1['Type'] == 'Ibn') |
(asassn_tab1['Type'] == 'Ic') |
(asassn_tab1['Type'] == 'Ic-pec') |
(asassn_tab1['Type'] == 'Ib/c-BL') |
(asassn_tab1['Type'] == 'Ic-BL')
)[0])
n_asas_slsn += len(np.where( (asassn_tab1['Type'] == 'SLSN-II') |
(asassn_tab1['Type'] == 'SLSN-I')
)[0])
n_asas_ia += len(np.where( ( (asassn_tab2['Type'] == 'Ia') |
(asassn_tab2['Type'] == 'Ia-91T') |
(asassn_tab2['Type'] == 'Ia-91bg') |
(asassn_tab2['Type'] == 'Ia+CSM') |
(asassn_tab2['Type'] == 'Ia-pec') |
(asassn_tab2['Type'] == 'Ia-00cx') |
(asassn_tab2['Type'] == 'Ia-06bt') |
(asassn_tab2['Type'] == 'Ia-07if') |
(asassn_tab2['Type'] == 'Ia-09dc') |
(asassn_tab2['Type'] == 'Ia-02cx')
) &
(asassn_tab2['Recovered'] == 'Yes')
)[0])
n_asas_91T += len(np.where( (asassn_tab2['Type'] == 'Ia-91T') &
(asassn_tab2['Recovered'] == 'Yes')
)[0])
n_asas_91bg += len(np.where( (asassn_tab2['Type'] == 'Ia-91bg') &
(asassn_tab2['Recovered'] == 'Yes')
)[0])
n_asas_ii += len(np.where( ( (asassn_tab2['Type'] == 'II') |
(asassn_tab2['Type'] == 'IIP') |
(asassn_tab2['Type'] == 'IIb') |
(asassn_tab2['Type'] == 'II-pec') |
(asassn_tab2['Type'] == 'IIn') |
(asassn_tab2['Type'] == 'IIn-pec') |
(asassn_tab2['Type'] == 'IIn/LBV') |
(asassn_tab2['Type'] == 'IIn-09ip')
) &
(asassn_tab2['Recovered'] == 'Yes')
)[0])
n_asas_ibc += len(np.where( ( (asassn_tab2['Type'] == 'Ib') |
(asassn_tab2['Type'] == 'Ib/c') |
(asassn_tab2['Type'] == 'Ibn') |
(asassn_tab2['Type'] == 'Ic') |
(asassn_tab2['Type'] == 'Ic-pec') |
(asassn_tab2['Type'] == 'Ib/c-BL') |
(asassn_tab2['Type'] == 'Ic-BL')
) &
(asassn_tab2['Recovered'] == 'Yes')
)[0])
n_asas_slsn += len(np.where( ( (asassn_tab2['Type'] == 'SLSN-II') |
(asassn_tab2['Type'] == 'SLSN-I')
) &
(asassn_tab2['Recovered'] == 'Yes')
)[0])
asas_offset = np.append(asas_offset, np.array(asassn_tab1['Offset'][asassn_tab1['HostName'] != 'None'], dtype=float))
asas_offset = np.append(asas_offset,
np.array(asassn_tab2['Offset'][np.where((asassn_tab2['Recovered'] == 'Yes') &
(asassn_tab2['SNName'] != 'PS16dtm'))], dtype=float))
tot_asas = n_asas_ia + n_asas_ii + n_asas_ibc + n_asas_slsn
bts_df.columns
not_ambiguous = np.where(np.isfinite(bts_df.sep))
brighter_than_17 = np.where((bts_df.g_max < 17) | (bts_df.r_max < 17))
bright_bts = np.intersect1d(not_ambiguous, brighter_than_17)
print(len(bright_bts))
color_dict = {'blue': '#2C5361',
'orange': '#DB6515',
'yellow': '#CA974C',
'maroon': '#3B2525',
'purple': '#A588AC',
'beige': '#D2A176'}
fig, ax1 = plt.subplots(1, 1, figsize=(6,8/3))
ax1.plot(np.sort(bts_df.sep.iloc[bright_bts]),
np.arange(len(bts_df.sep.iloc[bright_bts]))/float(len(bts_df.sep.iloc[bright_bts])),
label = 'ZTF BTS',
lw=3, color=color_dict['orange'])
ax1.plot(np.sort(asas_offset),
np.arange(len(asas_offset))/float(len(asas_offset)),
label = 'ASAS-SN',
lw=2, dashes=[6, 1],
color=color_dict['blue'])
ax1.set_xlabel('SN offset (arcsec)',fontsize=14)
ax1.legend(loc=4, fontsize=13)
ax1.set_xlim(-1, 24)
ax1.set_ylim(0,1)
ax1.xaxis.set_minor_locator(MultipleLocator(1))
ax1.yaxis.set_minor_locator(MultipleLocator(.1))
ax1.set_ylabel('cumulative $f_\mathrm{SN}$',fontsize=14)
ax1.tick_params(top=True,right=True,labelsize=11,which='both')
fig.subplots_adjust(left=0.105,bottom=0.2,top=0.97,right=0.98, hspace=0.3)
fig.savefig('ZTF_ASASSN_offset.pdf')
```
#### KS test
```
ks_2samp(bts_df.sep.iloc[bright_bts], asas_offset)
```
#### $\chi^2$ test
```
logbins = np.logspace(-2,1.57,11)
ztf_cnts, _ = np.histogram(bts_df.sep.iloc[bright_bts],
range=(0,25), bins=50)
# bins=logbins)
asas_cnts, _ = np.histogram(asas_offset,
range=(0,25), bins=50)
# bins=logbins)
not_empty = np.where((ztf_cnts > 0) & (asas_cnts > 0))
k1 = np.sqrt(np.sum(asas_cnts[not_empty])/np.sum(ztf_cnts[not_empty]))
k2 = np.sqrt(np.sum(ztf_cnts[not_empty])/np.sum(asas_cnts[not_empty]))
chisq_test = np.sum((k1*ztf_cnts[not_empty] - k2*asas_cnts[not_empty])**2 / (ztf_cnts[not_empty] + asas_cnts[not_empty]))
dof = len(not_empty[0])
chisq = scipy.stats.chi2(dof)
print(chisq_test, dof, chisq.sf(chisq_test))
```
```
import cv2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import glob
import time
import os
from utils import calibrate_cam, weighted_img, warp
print("ready")
def warpTest(img, img_name):
imshape = img.shape
bot_x = 0.13*imshape[1] # offset from bottom corner
top_x = 0.04*imshape[1] # offset from centre of image
top_y = 0.63*imshape[0]
bot_y = imshape[0]
vertices = np.array([[(bot_x,bot_y),((imshape[1]/2) - top_x, top_y), ((imshape[1]/2) + top_x, top_y), (imshape[1] - bot_x,bot_y)]], dtype=np.int32)
x = [vertices[0][0][0], vertices[0][1][0], vertices[0][2][0], vertices[0][3][0]]
y = [vertices[0][0][1], vertices[0][1][1], vertices[0][2][1], vertices[0][3][1]]
roi_lines = np.copy(img)*0
for i in range(0, len(x)-1):
cv2.line(roi_lines,(x[i],y[i]),(x[i+1],y[i+1]),(0,0,255),3)
roi_img = weighted_img(img, roi_lines, α=0.8, β=1., γ=0.)
cv2.imwrite("./test_images_output/" + img_name[:-4] +"/02_" + img_name[:-4] + "_roi.jpg" , cv2.cvtColor(roi_img, cv2.COLOR_BGR2RGB))
print("x:\n", x)
print("________\ny:\n", y)
src = np.float32([[x[0],y[0]], [x[1],y[1]], [x[2],y[2]], [x[3],y[3]]])
    dst = np.float32([[x[0],y[0]], [x[0],0], [x[3],0], [x[3],y[3]]])  # map the trapezoid to a full-height rectangle
print("________\nsrc:\n", src)
print("________\ndst:\n", dst)
roi_mask = np.zeros_like(img)
ignore_mask_color = (255,255,255)
cv2.fillPoly(roi_mask, vertices, ignore_mask_color)
masked_img = cv2.bitwise_and(img, roi_mask)
cv2.imwrite("./test_images_output/" + img_name[:-4] +"/03_" + img_name[:-4] + "_masked.jpg" , cv2.cvtColor(masked_img, cv2.COLOR_BGR2RGB))
M = cv2.getPerspectiveTransform(src, dst)
warped_img = cv2.warpPerspective(img, M, (imshape[1],imshape[0]))
cv2.imwrite("./test_images_output/" + img_name[:-4] +"/04_" + img_name[:-4] + "_warped.jpg" , cv2.cvtColor(warped_img, cv2.COLOR_BGR2RGB))
return warped_img
calibration_imgs = glob.glob("camera_cal/calibration*.jpg")
ret, mtx, dist = calibrate_cam(calibration_imgs)
print(ret)
test_imgs = glob.glob("test_images/*.jpg")
for img_name in test_imgs:
if not os.path.exists("./test_images_output/" + img_name[12:-4]):
os.makedirs("./test_images_output/" + img_name[12:-4])
print(img_name[12:])
img = mpimg.imread(img_name)
cv2.imwrite("./test_images_output/" + img_name[12:-4] +"/00_" + img_name[12:-4] + "_original.jpg" , cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
undistorted_img = cv2.undistort(img, mtx, dist, None, mtx)
cv2.imwrite("./test_images_output/" + img_name[12:-4] +"/01_" + img_name[12:-4] + "_undistorted.jpg" , cv2.cvtColor(undistorted_img, cv2.COLOR_BGR2RGB))
warped_img = warp(undistorted_img, img_name[12:])
test_imgs = glob.glob("test_images/*.jpg")
for img_name in test_imgs:
if not os.path.exists("./test_images_output/" + img_name[12:-4]):
os.makedirs("./test_images_output/" + img_name[12:-4])
print(img_name[12:])
img = mpimg.imread(img_name)
undistorted_img = cv2.undistort(img, mtx, dist, None, mtx)
warped_img = warp(undistorted_img, img_name[12:])
cv2.imwrite("./test_images_output/" + img_name[12:-4] +"/05_" + img_name[12:-4] + "_warpFunction.jpg" , cv2.cvtColor(warped_img, cv2.COLOR_BGR2RGB))
```
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Using side features: feature preprocessing
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/recommenders/examples/movielens"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/recommenders/blob/main/docs/examples/featurization.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/recommenders/blob/main/docs/examples/featurization.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/recommenders/docs/examples/featurization.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
One of the great advantages of using a deep learning framework to build recommender models is the freedom to build rich, flexible feature representations.
The first step in doing so is preparing the features, as raw features will usually not be immediately usable in a model.
For example:
- User and item ids may be strings (titles, usernames) or large, noncontiguous integers (database IDs).
- Item descriptions could be raw text.
- Interaction timestamps could be raw Unix timestamps.
These need to be appropriately transformed in order to be useful in building models:
- User and item ids have to be translated into embedding vectors: high-dimensional numerical representations that are adjusted during training to help the model predict its objective better.
- Raw text needs to be tokenized (split into smaller parts such as individual words) and translated into embeddings.
- Numerical features need to be normalized so that their values lie in a small interval around 0.
Fortunately, by using TensorFlow we can make such preprocessing part of our model rather than a separate preprocessing step. This is not only convenient, but also ensures that our pre-processing is exactly the same during training and during serving. This makes it safe and easy to deploy models that include even very sophisticated pre-processing.
In this tutorial, we are going to focus on recommenders and the preprocessing we need to do on the [MovieLens dataset](https://grouplens.org/datasets/movielens/). If you're interested in a larger tutorial without a recommender system focus, have a look at the full [Keras preprocessing guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).
## The MovieLens dataset
Let's first have a look at what features we can use from the MovieLens dataset:
```
#!pip install -q --upgrade tensorflow-datasets
import pprint
import tensorflow_datasets as tfds
ratings = tfds.load("movielens/100k-ratings", split="train")
for x in ratings.take(1).as_numpy_iterator():
pprint.pprint(x)
```
There are a couple of key features here:
- Movie title is useful as a movie identifier.
- User id is useful as a user identifier.
- Timestamps will allow us to model the effect of time.
The first two are categorical features; timestamps are a continuous feature.
## Turning categorical features into embeddings
A [categorical feature](https://en.wikipedia.org/wiki/Categorical_variable) is a feature that does not express a continuous quantity, but rather takes on one of a set of fixed values.
Most deep learning models express these features by turning them into high-dimensional vectors. During model training, the value of that vector is adjusted to help the model predict its objective better.
For example, suppose that our goal is to predict which user is going to watch which movie. To do that, we represent each user and each movie by an embedding vector. Initially, these embeddings will take on random values - but during training, we will adjust them so that embeddings of users and the movies they watch end up closer together.
Taking raw categorical features and turning them into embeddings is normally a two-step process:
1. Firstly, we need to translate the raw values into a range of contiguous integers, normally by building a mapping (called a "vocabulary") that maps raw values ("Star Wars") to integers (say, 15).
2. Secondly, we need to take these integers and turn them into embeddings.
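To make the two steps concrete, here is a minimal pure-Python/NumPy sketch (the titles, the reserved OOV slot, and the embedding size are illustrative assumptions, and the random table stands in for trainable weights):

```python
import numpy as np

# Hypothetical raw categorical values (movie titles).
raw_titles = ["Star Wars (1977)", "Toy Story (1995)", "Star Wars (1977)"]

# Step 1: build a vocabulary mapping raw values to contiguous integers.
# Index 0 is reserved for out-of-vocabulary (OOV) tokens.
vocab = {"[OOV]": 0}
for title in raw_titles:
    if title not in vocab:
        vocab[title] = len(vocab)

def lookup(title):
    """Translate a raw value into an integer id, falling back to OOV."""
    return vocab.get(title, 0)

# Step 2: turn the integer ids into embeddings by indexing a matrix
# (in a real model this matrix would be trained, not random).
embedding_dim = 4
rng = np.random.default_rng(42)
embedding_table = rng.normal(size=(len(vocab), embedding_dim))

ids = [lookup(t) for t in ["Star Wars (1977)", "Unseen Movie"]]
embeddings = embedding_table[ids]  # shape: (2, embedding_dim)
```

The Keras preprocessing layers used below perform exactly these two roles, but with training-time weight updates and batch-friendly execution.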
### Defining the vocabulary
The first step is to define a vocabulary. We can do this easily using Keras preprocessing layers.
```
import numpy as np
import tensorflow as tf
movie_title_lookup = tf.keras.layers.experimental.preprocessing.StringLookup()
```
The layer itself does not have a vocabulary yet, but we can build it using our data.
```
movie_title_lookup.adapt(ratings.map(lambda x: x["movie_title"]))
print(f"Vocabulary: {movie_title_lookup.get_vocabulary()[:3]}")
```
Once we have this we can use the layer to translate raw tokens to embedding ids:
```
movie_title_lookup(["Star Wars (1977)", "One Flew Over the Cuckoo's Nest (1975)"])
```
Note that the layer's vocabulary includes one (or more!) unknown (or "out of vocabulary", OOV) tokens. This is really handy: it means that the layer can handle categorical values that are not in the vocabulary. In practical terms, this means that the model can continue to learn about and make recommendations even using features that have not been seen during vocabulary construction.
### Using feature hashing
In fact, the `StringLookup` layer allows us to configure multiple OOV indices. If we do that, any raw value that is not in the vocabulary will be deterministically hashed to one of the OOV indices. The more such indices we have, the less likely it is that two different raw feature values will hash to the same OOV index. Consequently, if we have enough such indices the model should be able to train about as well as a model with an explicit vocabulary, without the disadvantage of having to maintain the token list.
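A rough sketch of what multi-OOV hashing does, using `zlib.crc32` as a stand-in for the layer's internal hash (the vocabulary, the index layout, and the number of OOV slots here are assumptions for illustration, not the layer's actual implementation):

```python
import zlib

num_oov_indices = 3
# In-vocabulary ids start after the reserved OOV slots [0, num_oov_indices).
vocab = {"Star Wars (1977)": 3, "Toy Story (1995)": 4}

def lookup_with_hashed_oov(token):
    # Known tokens map to their assigned id; anything else is
    # deterministically hashed into one of the OOV indices.
    if token in vocab:
        return vocab[token]
    return zlib.crc32(token.encode("utf-8")) % num_oov_indices
```

Because the hash is deterministic, the same unseen token always lands in the same OOV slot, so the model can still learn something about it across batches.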
We can take this to its logical extreme and rely entirely on feature hashing, with no vocabulary at all. This is implemented in the `tf.keras.layers.experimental.preprocessing.Hashing` layer.
```
# We set up a large number of bins to reduce the chance of hash collisions.
num_hashing_bins = 200_000
movie_title_hashing = tf.keras.layers.experimental.preprocessing.Hashing(
num_bins=num_hashing_bins
)
```
We can do the lookup as before without the need to build vocabularies:
```
movie_title_hashing(["Star Wars (1977)", "One Flew Over the Cuckoo's Nest (1975)"])
```
### Defining the embeddings
Now that we have integer ids, we can use the [`Embedding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer to turn those into embeddings.
An embedding layer has two dimensions: the first dimension tells us how many distinct categories we can embed; the second tells us how large the vector representing each of them can be.
When creating the embedding layer for movie titles, we are going to set the first value to the size of our title vocabulary (or the number of hashing bins). The second is up to us: the larger it is, the higher the capacity of the model, but the slower it is to fit and serve.
```
movie_title_embedding = tf.keras.layers.Embedding(
# Let's use the explicit vocabulary lookup.
input_dim=movie_title_lookup.vocab_size(),
output_dim=32
)
```
We can put the two together into a single layer which takes raw text in and yields embeddings.
```
movie_title_model = tf.keras.Sequential([movie_title_lookup, movie_title_embedding])
```
Just like that, we can directly get the embeddings for our movie titles:
```
movie_title_model(["Star Wars (1977)"])
```
We can do the same with user embeddings:
```
user_id_lookup = tf.keras.layers.experimental.preprocessing.StringLookup()
user_id_lookup.adapt(ratings.map(lambda x: x["user_id"]))
user_id_embedding = tf.keras.layers.Embedding(user_id_lookup.vocab_size(), 32)
user_id_model = tf.keras.Sequential([user_id_lookup, user_id_embedding])
```
## Normalizing continuous features
Continuous features also need normalization. For example, the `timestamp` feature is far too large to be used directly in a deep model:
```
for x in ratings.take(3).as_numpy_iterator():
print(f"Timestamp: {x['timestamp']}.")
```
We need to process it before we can use it. While there are many ways in which we can do this, discretization and standardization are two common ones.
### Standardization
[Standardization](https://en.wikipedia.org/wiki/Feature_scaling#Standardization_(Z-score_Normalization)) rescales features to normalize their range by subtracting the feature's mean and dividing by its standard deviation. It is a common preprocessing transformation.
This can be easily accomplished using the [`tf.keras.layers.experimental.preprocessing.Normalization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization) layer:
```
timestamp_normalization = tf.keras.layers.experimental.preprocessing.Normalization()
timestamp_normalization.adapt(ratings.map(lambda x: x["timestamp"]).batch(1024))
for x in ratings.take(3).as_numpy_iterator():
print(f"Normalized timestamp: {timestamp_normalization(x['timestamp'])}.")
```
### Discretization
Another common transformation is to turn a continuous feature into a number of categorical features. This makes good sense if we have reasons to suspect that a feature's effect is non-continuous.
To do this, we first need to establish the boundaries of the buckets we will use for discretization. The easiest way is to identify the minimum and maximum value of the feature, and divide the resulting interval equally:
```
max_timestamp = ratings.map(lambda x: x["timestamp"]).reduce(
tf.cast(0, tf.int64), tf.maximum).numpy().max()
min_timestamp = ratings.map(lambda x: x["timestamp"]).reduce(
np.int64(1e9), tf.minimum).numpy().min()
timestamp_buckets = np.linspace(
min_timestamp, max_timestamp, num=1000)
print(f"Buckets: {timestamp_buckets[:3]}")
```
Given the bucket boundaries we can transform timestamps into embeddings:
```
timestamp_embedding_model = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),
tf.keras.layers.Embedding(len(timestamp_buckets) + 1, 32)
])
for timestamp in ratings.take(1).map(lambda x: x["timestamp"]).batch(1).as_numpy_iterator():
print(f"Timestamp embedding: {timestamp_embedding_model(timestamp)}.")
```
## Processing text features
We may also want to add text features to our model. Usually, things like product descriptions are free form text, and we can hope that our model can learn to use the information they contain to make better recommendations, especially in a cold-start or long tail scenario.
While the MovieLens dataset does not give us rich textual features, we can still use movie titles. This may help us capture the fact that movies with very similar titles are likely to belong to the same series.
The first transformation we need to apply to text is tokenization (splitting into constituent words or word-pieces), followed by vocabulary learning, followed by an embedding.
The Keras [`tf.keras.layers.experimental.preprocessing.TextVectorization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/TextVectorization) layer can do the first two steps for us:
```
title_text = tf.keras.layers.experimental.preprocessing.TextVectorization()
title_text.adapt(ratings.map(lambda x: x["movie_title"]))
```
Let's try it out:
```
for row in ratings.batch(1).map(lambda x: x["movie_title"]).take(1):
print(title_text(row))
```
Each title is translated into a sequence of tokens, one for each piece we've tokenized.
We can check the learned vocabulary to verify that the layer is using the correct tokenization:
```
title_text.get_vocabulary()[40:45]
```
This looks correct: the layer is tokenizing titles into individual words.
To finish the processing, we now need to embed the text. Because each title contains multiple words, we will get multiple embeddings for each title. For use in a downstream model these are usually compressed into a single embedding. Models like RNNs or Transformers are useful here, but averaging all the words' embeddings together is a good starting point.
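The averaging idea can be sketched with plain NumPy (a toy illustration with made-up token ids and a random embedding table, not the Keras layers used below; note that padding tokens are masked out before averaging):

```python
import numpy as np

# Hypothetical token-id matrix for two titles, padded with 0 (the mask value).
token_ids = np.array([
    [12, 7, 0, 0],   # short title: 2 real tokens, 2 padding
    [3, 44, 9, 21],  # longer title: 4 real tokens
])
vocab_size, dim = 50, 4
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(vocab_size, dim))  # made-up embedding table

vectors = embeddings[token_ids]      # (2, 4, dim): one vector per token
mask = (token_ids != 0)[..., None]   # ignore padding, like mask_zero=True
title_vectors = (vectors * mask).sum(axis=1) / mask.sum(axis=1)
print(title_vectors.shape)           # one dim-sized vector per title
```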
## Putting it all together
With these components in place, we can build a model that does all the preprocessing together.
### User model
The full user model may look like the following:
```
class UserModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.user_embedding = tf.keras.Sequential([
user_id_lookup,
tf.keras.layers.Embedding(user_id_lookup.vocab_size(), 32),
])
self.timestamp_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),
tf.keras.layers.Embedding(len(timestamp_buckets) + 2, 32)
])
self.normalized_timestamp = tf.keras.layers.experimental.preprocessing.Normalization()
def call(self, inputs):
# Take the input dictionary, pass it through each input layer,
# and concatenate the result.
return tf.concat([
self.user_embedding(inputs["user_id"]),
self.timestamp_embedding(inputs["timestamp"]),
self.normalized_timestamp(inputs["timestamp"])
], axis=1)
```
Let's try it out:
```
user_model = UserModel()
user_model.normalized_timestamp.adapt(
ratings.map(lambda x: x["timestamp"]).batch(128))
for row in ratings.batch(1).take(1):
print(f"Computed representations: {user_model(row)[0, :3]}")
```
### Movie model
We can do the same for the movie model:
```
class MovieModel(tf.keras.Model):
def __init__(self):
super().__init__()
max_tokens = 10_000
self.title_embedding = tf.keras.Sequential([
movie_title_lookup,
tf.keras.layers.Embedding(movie_title_lookup.vocab_size(), 32)
])
self.title_text_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.TextVectorization(max_tokens=max_tokens),
tf.keras.layers.Embedding(max_tokens, 32, mask_zero=True),
# We average the embedding of individual words to get one embedding vector
# per title.
tf.keras.layers.GlobalAveragePooling1D(),
])
def call(self, inputs):
return tf.concat([
self.title_embedding(inputs["movie_title"]),
self.title_text_embedding(inputs["movie_title"]),
], axis=1)
```
Let's try it out:
```
movie_model = MovieModel()
movie_model.title_text_embedding.layers[0].adapt(
ratings.map(lambda x: x["movie_title"]))
for row in ratings.batch(1).take(1):
print(f"Computed representations: {movie_model(row)[0, :3]}")
```
## Next steps
With the two models above we've taken the first steps to representing rich features in a recommender model: to take this further and explore how these can be used to build an effective deep recommender model, take a look at our Deep Recommenders tutorial.
```
%matplotlib inline
from pyvista import set_plot_theme
set_plot_theme('document')
```
Displaying eigenmodes of vibration using `warp_by_vector`
=========================================================
This example applies the `warp_by_vector` filter to a cube whose
eigenmodes have been computed using the Ritz method, as outlined in
Visscher, William M., Albert Migliori, Thomas M. Bell, and Robert A.
Reinert. "On the normal modes of free vibration of inhomogeneous and
anisotropic elastic objects". The Journal of the Acoustical Society of
America 90, no. 4 (October 1991): 2154-62.
<https://asa.scitation.org/doi/10.1121/1.401643>
First, let's solve the eigenvalue problem for a vibrating cube. We use
a crude approximation (by choosing a low max polynomial order) to get a
fast computation.
```
import numpy as np
from scipy.linalg import eigh
import pyvista as pv
def analytical_integral_rppd(p, q, r, a, b, c):
"""Returns the analytical value of the RPPD integral, i.e. the integral
of x**p * y**q * z**r for (x, -a, a), (y, -b, b), (z, -c, c)."""
if p < 0:
return 0.
elif q < 0:
return 0.
elif r < 0.:
return 0.
else:
return a ** (p + 1) * b ** (q + 1) * c ** (r + 1) * \
((-1) ** p + 1) * ((-1) ** q + 1) * ((-1) ** r + 1) \
/ ((p + 1) * (q + 1) * (r + 1))
def make_cijkl_E_nu(E=200, nu=0.3):
"""Makes cijkl from E and nu.
Default values for steel are: E=200 GPa, nu=0.3."""
lambd = E * nu / (1 + nu) / (1 - 2 * nu)
mu = E / 2 / (1 + nu)
cij = np.zeros((6, 6))
cij[(0, 1, 2), (0, 1, 2)] = lambd + 2 * mu
cij[(0, 0, 1, 1, 2, 2), (1, 2, 0, 2, 0, 1)] = lambd
cij[(3, 4, 5), (3, 4, 5)] = mu
# check symmetry
assert np.allclose(cij, cij.T)
# convert to order 4 tensor
coord_mapping = {(1, 1): 1,
(2, 2): 2,
(3, 3): 3,
(2, 3): 4,
(1, 3): 5,
(1, 2): 6,
(2, 1): 6,
(3, 1): 5,
(3, 2): 4}
cijkl = np.zeros((3, 3, 3, 3))
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
u = coord_mapping[(i + 1, j + 1)]
v = coord_mapping[(k + 1, l + 1)]
cijkl[i, j, k, l] = cij[u - 1, v - 1]
return cijkl, cij
def get_first_N_above_thresh(N, freqs, thresh, decimals=3):
"""Returns first N unique frequencies with amplitude above threshold based
on first decimals."""
unique_freqs, unique_indices = np.unique(
np.round(freqs, decimals=decimals), return_index=True)
nonzero = unique_freqs > thresh
unique_freqs, unique_indices = unique_freqs[nonzero], unique_indices[
nonzero]
return unique_freqs[:N], unique_indices[:N]
def assemble_mass_and_stiffness(N, F, geom_params, cijkl):
"""This routine assembles the mass and stiffness matrix.
It first builds an index of basis functions as a quadruplet of
component and polynomial order for (x^p, y^q, z^r) of maximum order N.
This routine only builds the symmetric part of the matrix to speed
things up.
"""
# building coordinates
triplets = []
for p in range(N + 1):
for q in range(N - p + 1):
for r in range(N - p - q + 1):
triplets.append((p, q, r))
assert len(triplets) == (N + 1) * (N + 2) * (N + 3) // 6
quadruplets = []
for i in range(3):
for triplet in triplets:
quadruplets.append((i, *triplet))
assert len(quadruplets) == 3 * (N + 1) * (N + 2) * (N + 3) // 6
# assembling the mass and stiffness matrix in a single loop
R = len(triplets)
E = np.zeros((3 * R, 3 * R)) # the mass matrix
G = np.zeros((3 * R, 3 * R)) # the stiffness matrix
for index1, quad1 in enumerate(quadruplets):
I, p1, q1, r1 = quad1
for index2, quad2 in enumerate(quadruplets[index1:]):
index2 = index2 + index1
J, p2, q2, r2 = quad2
G[index1, index2] = cijkl[I, 1 - 1, J, 1 - 1] * p1 * p2 * F(
p1 + p2 - 2, q1 + q2, r1 + r2, **geom_params) + \
cijkl[I, 1 - 1, J, 2 - 1] * p1 * q2 * F(
p1 + p2 - 1, q1 + q2 - 1, r1 + r2,
**geom_params) + \
cijkl[I, 1 - 1, J, 3 - 1] * p1 * r2 * F(
p1 + p2 - 1, q1 + q2, r1 + r2 - 1,
**geom_params) + \
cijkl[I, 2 - 1, J, 1 - 1] * q1 * p2 * F(
p1 + p2 - 1, q1 + q2 - 1, r1 + r2,
**geom_params) + \
cijkl[I, 2 - 1, J, 2 - 1] * q1 * q2 * F(
p1 + p2, q1 + q2 - 2, r1 + r2, **geom_params) + \
cijkl[I, 2 - 1, J, 3 - 1] * q1 * r2 * F(
p1 + p2, q1 + q2 - 1, r1 + r2 - 1,
**geom_params) + \
cijkl[I, 3 - 1, J, 1 - 1] * r1 * p2 * F(
p1 + p2 - 1, q1 + q2, r1 + r2 - 1,
**geom_params) + \
cijkl[I, 3 - 1, J, 2 - 1] * r1 * q2 * F(
p1 + p2, q1 + q2 - 1, r1 + r2 - 1,
**geom_params) + \
cijkl[I, 3 - 1, J, 3 - 1] * r1 * r2 * F(
p1 + p2, q1 + q2, r1 + r2 - 2, **geom_params)
G[index2, index1] = G[
index1, index2] # since stiffness matrix is symmetric
if I == J:
E[index1, index2] = F(p1 + p2, q1 + q2, r1 + r2, **geom_params)
E[index2, index1] = E[
index1, index2] # since mass matrix is symmetric
return E, G, quadruplets
N = 8 # maximum order of x^p y^q z^r polynomials
rho = 8.0 # g/cm^3
l1, l2, l3 = .2, .2, .2 # all in cm
geometry_parameters = {'a': l1 / 2., 'b': l2 / 2., 'c': l3 / 2.}
cijkl, cij = make_cijkl_E_nu(200, 0.3) # Gpa, without unit
E, G, quadruplets = assemble_mass_and_stiffness(N, analytical_integral_rppd,
geometry_parameters, cijkl)
# solving the eigenvalue problem using symmetric solver
w, vr = eigh(a=G, b=E)
omegas = np.sqrt(np.abs(w) / rho) * 1e5 # convert back to Hz
freqs = omegas / (2 * np.pi)
# expected values from (Bernard 2014, p.14),
# error depends on polynomial order ``N``
expected_freqs_kHz = np.array(
[704.8, 949., 965.2, 1096.3, 1128.4, 1182.8, 1338.9, 1360.9])
computed_freqs_kHz, mode_indices = get_first_N_above_thresh(8, freqs / 1e3,
thresh=1,
decimals=1)
print('found the following first unique eigenfrequencies:')
for ind, (freq1, freq2) in enumerate(
zip(computed_freqs_kHz, expected_freqs_kHz)):
error = np.abs(freq2 - freq1) / freq1 * 100.
print(
f"freq. {ind + 1:1}: {freq1:8.1f} kHz," + \
f" expected: {freq2:8.1f} kHz, error: {error:.2f} %")
```
Now, let's display a mode on a mesh of the cube.
```
# Create the 3D NumPy array of spatially referenced data
# (nx by ny by nz)
nx, ny, nz = 30, 31, 32
x = np.linspace(-l1 / 2., l1 / 2., nx)
y = np.linspace(-l2 / 2., l2 / 2., ny)
x, y = np.meshgrid(x, y)
z = np.zeros_like(x) + l3 / 2.
grid = pv.StructuredGrid(x, y, z)
slices = []
for zz in np.linspace(-l3 / 2., l3 / 2., nz)[::-1]:
slice = grid.points.copy()
slice[:, -1] = zz
slices.append(slice)
vol = pv.StructuredGrid()
vol.points = np.vstack(slices)
vol.dimensions = [*grid.dimensions[0:2], nz]
for i, mode_index in enumerate(mode_indices):
eigenvector = vr[:, mode_index]
displacement_points = np.zeros_like(vol.points)
for weight, (component, p, q, r) in zip(eigenvector, quadruplets):
displacement_points[:, component] += weight * vol.points[:, 0] ** p * \
vol.points[:, 1] ** q * \
vol.points[:, 2] ** r
if displacement_points.max() > 0.:
displacement_points /= displacement_points.max()
vol[f'eigenmode_{i:02}'] = displacement_points
warpby = 'eigenmode_00'
warped = vol.warp_by_vector(warpby, factor=0.04)
warped.translate([-1.5 * l1, 0., 0.], inplace=True)
p = pv.Plotter()
p.add_mesh(vol, style='wireframe', scalars=warpby)
p.add_mesh(warped, scalars=warpby)
p.show()
```
Finally, let's make a gallery of the first 8 unique eigenmodes.
```
p = pv.Plotter(shape=(2, 4))
for i in range(2):
for j in range(4):
p.subplot(i, j)
current_index = 4 * i + j
vector = f"eigenmode_{current_index:02}"
p.add_text(
f"mode {current_index}," + \
f" freq. {computed_freqs_kHz[current_index]:.1f} kHz",
font_size=10)
p.add_mesh(vol.warp_by_vector(vector, factor=0.03), scalars=vector)
p.show()
```
<center>
<img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%203/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# K-Nearest Neighbors
Estimated time needed: **25** minutes
## Objectives
After completing this lab you will be able to:
- Use K Nearest neighbors to classify data
In this Lab you will load a customer dataset, fit the data, and use K-Nearest Neighbors to predict a data point. But what is **K-Nearest Neighbors**?
**K-Nearest Neighbors** is an algorithm for supervised learning, where the model is 'trained' with data points labelled with their classification. Once a point is to be predicted, the algorithm takes into account the 'K' points nearest to it to determine its classification.
### Here's a visualization of the K-Nearest Neighbors algorithm.
<img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%203/images/KNN_Diagram.png">
In this case, we have data points of Class A and B. We want to predict what the star (test data point) is. If we consider a k value of 3 (3 nearest data points) we will obtain a prediction of Class B. Yet if we consider a k value of 6, we will obtain a prediction of Class A.
In this sense, it is important to consider the value of k. But hopefully from this diagram, you should get a sense of what the K-Nearest Neighbors algorithm is. It considers the 'K' Nearest Neighbors (points) when it predicts the classification of the test point.
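The voting procedure described above can be sketched in a few lines of plain NumPy (a toy illustration, not the scikit-learn implementation used later in this lab):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k):
    """Classify x by majority vote among its k nearest training points."""
    distances = np.linalg.norm(X_train - x, axis=1)  # Euclidean distances
    nearest = np.argsort(distances)[:k]              # indices of k closest
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Tiny made-up example: class A clusters near the origin, class B near (5, 5).
X = np.array([[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6]])
y = np.array(['A', 'A', 'A', 'B', 'B', 'B'])
print(knn_predict(X, y, np.array([0.5, 0.5]), k=3))  # 'A'
```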
<h1>Table of contents</h1>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ol>
<li><a href="#about_dataset">About the dataset</a></li>
<li><a href="#visualization_analysis">Data Visualization and Analysis</a></li>
<li><a href="#classification">Classification</a></li>
</ol>
</div>
<br>
<hr>
```
!pip install scikit-learn==0.23.1
```
Let's load the required libraries
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import preprocessing
%matplotlib inline
```
<div id="about_dataset">
<h2>About the dataset</h2>
</div>
Imagine a telecommunications provider has segmented its customer base by service usage patterns, categorizing the customers into four groups. If demographic data can be used to predict group membership, the company can customize offers for individual prospective customers. This is a classification problem: given the dataset with its predefined labels, we need to build a model to predict the class of a new or unknown case.
The example focuses on using demographic data, such as region, age, and marital status, to predict usage patterns.
The target field, called **custcat**, has four possible values that correspond to the four customer groups, as follows:
1- Basic Service
2- E-Service
3- Plus Service
4- Total Service
Our objective is to build a classifier to predict the class of unknown cases. We will use a specific type of classification called K-nearest neighbors.
Let's download the dataset. We will use `!wget` to download it from IBM Object Storage.
```
!wget -O teleCust1000t.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%203/data/teleCust1000t.csv
```
**Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
### Load Data From CSV File
```
df = pd.read_csv('teleCust1000t.csv')
df.head()
```
<div id="visualization_analysis">
<h2>Data Visualization and Analysis</h2>
</div>
#### Let’s see how many of each class is in our data set
```
df['custcat'].value_counts()
```
#### 281 Plus Service, 266 Basic-service, 236 Total Service, and 217 E-Service customers
You can easily explore your data using visualization techniques:
```
df.hist(column='income', bins=50)
```
### Feature set
Let's define our feature set, X:
```
df.columns
```
To use scikit-learn library, we have to convert the Pandas data frame to a Numpy array:
```
X = df[['region', 'tenure','age', 'marital', 'address', 'income', 'ed', 'employ','retire', 'gender', 'reside']] .values #.astype(float)
X[0:5]
```
What are our labels?
```
y = df['custcat'].values
y[0:5]
```
## Normalize Data
Data standardization gives the data zero mean and unit variance. It is good practice, especially for algorithms such as KNN, which are based on the distances between cases:
```
X = preprocessing.StandardScaler().fit(X).transform(X.astype(float))
X[0:5]
```
### Train Test Split
Out-of-sample accuracy is the percentage of correct predictions that the model makes on data that the model has NOT been trained on. Training and testing on the same dataset will most likely yield low out-of-sample accuracy, due to the likelihood of over-fitting.
It is important that our models have a high, out-of-sample accuracy, because the purpose of any model, of course, is to make correct predictions on unknown data. So how can we improve out-of-sample accuracy? One way is to use an evaluation approach called Train/Test Split.
Train/Test Split involves splitting the dataset into training and testing sets respectively, which are mutually exclusive. After which, you train with the training set and test with the testing set.
This will provide a more accurate evaluation of out-of-sample accuracy because the testing dataset is not part of the dataset that was used to train the model. It is more realistic for real-world problems.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4)
print ('Train set:', X_train.shape, y_train.shape)
print ('Test set:', X_test.shape, y_test.shape)
```
<div id="classification">
<h2>Classification</h2>
</div>
<h3>K nearest neighbor (KNN)</h3>
#### Import library
Classifier implementing the k-nearest neighbors vote.
```
from sklearn.neighbors import KNeighborsClassifier
```
### Training
Let's start the algorithm with k=4 for now:
```
k = 4
#Train Model and Predict
neigh = KNeighborsClassifier(n_neighbors = k).fit(X_train,y_train)
neigh
```
### Predicting
We can use the model to predict the test set:
```
yhat = neigh.predict(X_test)
yhat[0:5]
```
### Accuracy evaluation
In multi-label classification, the **accuracy classification score** is a function that computes subset accuracy: the set of labels predicted for a sample must exactly match the true set of labels. Here, with a single label per sample, it simply measures the fraction of predicted labels that match the actual labels in the test set.
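For single-label data like ours, this boils down to the mean of exact matches; a quick sketch with made-up labels:

```python
import numpy as np

# Made-up true and predicted class labels for four samples.
y_true = np.array([1, 2, 2, 3])
y_pred = np.array([1, 2, 3, 3])

# Fraction of predictions that exactly match the true labels.
accuracy = np.mean(y_true == y_pred)
print(accuracy)  # 0.75
```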
```
from sklearn import metrics
print("Train set Accuracy: ", metrics.accuracy_score(y_train, neigh.predict(X_train)))
print("Test set Accuracy: ", metrics.accuracy_score(y_test, yhat))
```
## Practice
Can you build the model again, but this time with k=6?
```
# write your code here
```
<details><summary>Click here for the solution</summary>
```python
k = 6
neigh6 = KNeighborsClassifier(n_neighbors = k).fit(X_train,y_train)
yhat6 = neigh6.predict(X_test)
print("Train set Accuracy: ", metrics.accuracy_score(y_train, neigh6.predict(X_train)))
print("Test set Accuracy: ", metrics.accuracy_score(y_test, yhat6))
```
</details>
#### What about other K?
K in KNN is the number of nearest neighbors to examine. It is supposed to be specified by the user. So, how can we choose the right value for K?
The general solution is to reserve a part of your data for testing the accuracy of the model. Then choose k=1, use the training part for modeling, and calculate the accuracy of prediction using all samples in your test set. Repeat this process, increasing k, and see which k is best for your model.
We can calculate the accuracy of KNN for different Ks.
```
Ks = 10
mean_acc = np.zeros((Ks-1))
std_acc = np.zeros((Ks-1))
for n in range(1,Ks):
#Train Model and Predict
neigh = KNeighborsClassifier(n_neighbors = n).fit(X_train,y_train)
yhat=neigh.predict(X_test)
mean_acc[n-1] = metrics.accuracy_score(y_test, yhat)
std_acc[n-1]=np.std(yhat==y_test)/np.sqrt(yhat.shape[0])
mean_acc
```
#### Plot model accuracy for Different number of Neighbors
```
plt.plot(range(1,Ks),mean_acc,'g')
plt.fill_between(range(1,Ks),mean_acc - 1 * std_acc,mean_acc + 1 * std_acc, alpha=0.10)
plt.fill_between(range(1,Ks),mean_acc - 3 * std_acc,mean_acc + 3 * std_acc, alpha=0.10,color="green")
plt.legend(('Accuracy ', '+/- 1xstd','+/- 3xstd'))
plt.ylabel('Accuracy ')
plt.xlabel('Number of Neighbors (K)')
plt.tight_layout()
plt.show()
print( "The best accuracy was with", mean_acc.max(), "with k=", mean_acc.argmax()+1)
```
<h2>Want to learn more?</h2>
IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: <a href="https://www.ibm.com/analytics/spss-statistics-software">SPSS Modeler</a>
Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href="https://www.ibm.com/cloud/watson-studio">Watson Studio</a>
### Thank you for completing this lab!
## Author
Saeed Aghabozorgi
### Other Contributors
<a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a>
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ---------- | ---------------------------------- |
| 2021-01-21 | 2.4 | Lakshmi | Updated sklearn library |
| 2020-11-20 | 2.3 | Lakshmi | Removed unused imports |
| 2020-11-17 | 2.2 | Lakshmi | Changed plot function of KNN |
| 2020-11-03 | 2.1 | Lakshmi | Changed URL of csv |
| 2020-08-27 | 2.0 | Lavanya | Moved lab to course repo in GitLab |
<h3 align="center"> © IBM Corporation 2020. All rights reserved. </h3>
# Welcome to Kijang Emas analysis!

I found out around last week (18th March 2019) that our Bank Negara had opened public APIs for certain data. It was really cool, and I want to help people get around the data and see what they can actually do with it!
We are going to cover 2 things here,
1. Data Analytics
2. Predictive Modelling (Linear regression, ARIMA, LSTM)
Hell, I know nothing about Kijang Emas.
**Again, do not use this code to buy something on the real world (if got positive return, please donate some to me)**
```
import requests
```
## Data gathering
To get the data is really simple, use this link to get kijang emas data, https://api.bnm.gov.my/public/kijang-emas/year/{year}/month/{month}
Now, I want to get data from January 2018 to March 2019.
#### 2018 data
```
data_2018 = []
for i in range(12):
data_2018.append(requests.get(
'https://api.bnm.gov.my/public/kijang-emas/year/2018/month/%d'%(i + 1),
headers = {'Accept': 'application/vnd.BNM.API.v1+json'},
).json())
```
#### 2019 data
```
data_2019 = []
for i in range(3):
data_2019.append(requests.get(
'https://api.bnm.gov.my/public/kijang-emas/year/2019/month/%d'%(i + 1),
headers = {'Accept': 'application/vnd.BNM.API.v1+json'},
).json())
```
#### Take a peek at our data
```
data_2018[0]['data'][:5]
```
Again, I have zero knowledge of Kijang Emas; I don't really care about the value, and I don't know what the value represents.
Now I want to parse `effective_date` and `selling` from `one_oz`.
```
timestamp, selling = [], []
for month in data_2018 + data_2019:
for day in month['data']:
timestamp.append(day['effective_date'])
selling.append(day['one_oz']['selling'])
len(timestamp), len(selling)
```
Going to import matplotlib and seaborn for visualization. I really like seaborn because of the font and colors, that's all, hah!
```
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set()
plt.figure(figsize = (15, 5))
plt.plot(selling)
plt.xticks(np.arange(len(timestamp))[::15], timestamp[::15], rotation = '45')
plt.show()
```
## Perfect!
So now let's start our data analytics.
#### Distribution study
```
plt.figure(figsize = (15, 5))
sns.distplot(selling)
plt.show()
```
Look at this, already a normal distribution, coincidence? (I really wanted to show off unit scaling skills, too bad :/ )
Now let's convert ours into a Pandas DataFrame for lagging analysis.
```
import pandas as pd
df = pd.DataFrame({'timestamp':timestamp, 'selling':selling})
df.head()
def df_shift(df, lag = 0, start = 1, skip = 1, rejected_columns = []):
df = df.copy()
if not lag:
return df
cols = {}
for i in range(start, lag + 1, skip):
for x in list(df.columns):
if x not in rejected_columns:
if not x in cols:
cols[x] = ['{}_{}'.format(x, i)]
else:
cols[x].append('{}_{}'.format(x, i))
for k, v in cols.items():
columns = v
dfn = pd.DataFrame(data = None, columns = columns, index = df.index)
i = start - 1
for c in columns:
dfn[c] = df[k].shift(periods = i)
i += skip
        df = pd.concat([df, dfn], axis = 1)  # join_axes was removed in newer pandas; dfn already shares df's index
return df
```
**Shifted values and moving averages are not the same.**
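A tiny Pandas example makes the difference concrete (a sketch on a made-up series): `shift` moves the same values later in time, while `rolling(...).mean()` averages a window of them.

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
shifted = s.shift(2)       # the same values, moved 2 rows later; no averaging
ma2 = s.rolling(2).mean()  # the average of each value and the one before it
print(pd.DataFrame({'selling': s, 'shift_2': shifted, 'ma2': ma2}))
```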
```
df_crosscorrelated = df_shift(
df, lag = 12, start = 4, skip = 2, rejected_columns = ['timestamp']
)
df_crosscorrelated['ma7'] = df_crosscorrelated['selling'].rolling(7).mean()
df_crosscorrelated['ma14'] = df_crosscorrelated['selling'].rolling(14).mean()
df_crosscorrelated['ma21'] = df_crosscorrelated['selling'].rolling(21).mean()
```
## Why do we lag or shift by certain units?
Virality takes time, impacts take time, and the same goes for prices.
Now I want to `lag` up to 12 units, `start` at 4 units shifted, and `skip` every 2 units.
```
df_crosscorrelated.head(10)
plt.figure(figsize = (20, 4))
plt.subplot(1, 3, 1)
plt.scatter(df_crosscorrelated['selling'], df_crosscorrelated['selling_4'])
mse = (
(df_crosscorrelated['selling_4'] - df_crosscorrelated['selling']) ** 2
).mean()
plt.title('close vs shifted 4, average change: %f'%(mse))
plt.subplot(1, 3, 2)
plt.scatter(df_crosscorrelated['selling'], df_crosscorrelated['selling_8'])
mse = (
(df_crosscorrelated['selling_8'] - df_crosscorrelated['selling']) ** 2
).mean()
plt.title('close vs shifted 8, average change: %f'%(mse))
plt.subplot(1, 3, 3)
plt.scatter(df_crosscorrelated['selling'], df_crosscorrelated['selling_12'])
mse = (
(df_crosscorrelated['selling_12'] - df_crosscorrelated['selling']) ** 2
).mean()
plt.title('close vs shifted 12, average change: %f'%(mse))
plt.show()
```
Keep increasing and increasing!
```
plt.figure(figsize = (10, 5))
plt.scatter(
df_crosscorrelated['selling'],
df_crosscorrelated['selling_4'],
label = 'close vs shifted 4',
)
plt.scatter(
df_crosscorrelated['selling'],
df_crosscorrelated['selling_8'],
label = 'close vs shifted 8',
)
plt.scatter(
df_crosscorrelated['selling'],
df_crosscorrelated['selling_12'],
label = 'close vs shifted 12',
)
plt.legend()
plt.show()
fig, ax = plt.subplots(figsize = (15, 5))
df_crosscorrelated.plot(
x = 'timestamp', y = ['selling', 'ma7', 'ma14', 'ma21'], ax = ax
)
plt.xticks(np.arange(len(timestamp))[::10], timestamp[::10], rotation = '45')
plt.show()
```
As you can see, even the 7-day moving average already fails to follow sudden trend changes (blue line), which means **the reaction takes less than 7 days, so fast!**
#### How about correlation?
We want to study the linear relationship: how many days does it take for today's price to impact future prices?
```
colormap = plt.cm.RdBu
plt.figure(figsize = (15, 5))
plt.title('cross correlation', y = 1.05, size = 16)
sns.heatmap(
df_crosscorrelated.iloc[:, 1:].corr(),
linewidths = 0.1,
vmax = 1.0,
cmap = colormap,
linecolor = 'white',
annot = True,
)
plt.show()
```
Based on this correlation map, look at selling vs selling_X:
**the correlation drops as the shift grows from 4 to 12 days, so today's price is still a strong linear predictor of the price 4 days ahead, and the relationship weakens the further ahead we look.**
#### Outliers
Simple, we can use Z-score to detect outliers, which timestamps gave very uncertain high and low value.
```
std_selling = (selling - np.mean(selling)) / np.std(selling)
def detect(signal, treshold = 2.0):
detected = []
for i in range(len(signal)):
if np.abs(signal[i]) > treshold:
detected.append(i)
return detected
```
Based on the z-score table, 2.0 already covers 97.725% of the population.
https://d2jmvrsizmvf4x.cloudfront.net/6iEAaVSaT3aGP52HMzo3_z-score-02.png
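We can check that figure without a table, using only the standard library's error function (a quick sketch):

```python
from math import erf, sqrt

def standard_normal_cdf(z):
    # Phi(z) via the error function, so no SciPy is needed.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

print(f'P(Z < 2.0) = {standard_normal_cdf(2.0):.5f}')  # ~0.97725
```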
```
outliers = detect(std_selling)
plt.figure(figsize = (15, 7))
plt.plot(selling)
plt.plot(
np.arange(len(selling)),
selling,
'X',
label = 'outliers',
markevery = outliers,
c = 'r',
)
plt.legend()
plt.show()
```
We can see that, **we have positive and negative outliers**. What happened to our local market on that days? So we should study sentiment from local news to do risk analysis.
# Give us predictive modelling!
Okay okay.
## Predictive modelling
Like I said, I want to compare with 3 models,
1. Linear regression
2. ARIMA
3. LSTM Tensorflow (sorry Pytorch, not used to it)
Which models give the best accuracy and lowest error rate?
**I want to split first timestamp 80% for train, another 20% timestamp for test.**
```
from sklearn.linear_model import LinearRegression
train_selling = selling[: int(0.8 * len(selling))]
test_selling = selling[int(0.8 * len(selling)) :]
```
Beware of `:`!
```
future_count = len(test_selling)
future_count
```
Our model should forecast 61 future days ahead.
#### Linear regression
```
%%time
linear_regression = LinearRegression().fit(
np.arange(len(train_selling)).reshape((-1, 1)), train_selling
)
linear_future = linear_regression.predict(
np.arange(len(train_selling) + future_count).reshape((-1, 1))
)
```
Took me 594 us to train linear regression from sklearn. Very quick!
```
fig, ax = plt.subplots(figsize = (15, 5))
ax.plot(selling, label = '20% test trend')
ax.plot(train_selling, label = '80% train trend')
ax.plot(linear_future, label = 'forecast linear regression')
plt.xticks(
np.arange(len(timestamp))[::10],
np.arange(len(timestamp))[::10],
rotation = '45',
)
plt.legend()
plt.show()
```
Oh no, if based on linear relationship, the trend is going down!
#### ARIMA
Stands for Auto-Regressive Integrated Moving Average.
3 important parameters you need to know about ARIMA, ARIMA(p, d, q). You will able to see what is `p`, `d`, `q` from wikipedia, https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average.
`p` for the order (number of time lags).
`d` for degree of differencing.
`q` for the order of the moving-average.
Or,
`p` is how many past periods we look back at.
`d` is how many times we difference the series.
`q` is how many past forecast errors the moving-average part uses.
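To make `d` and `q` concrete, here is a tiny illustrative sketch (standard library only, not part of this notebook's pipeline) of first-order differencing and a trailing moving average:

```python
def difference(series, d=1):
    """Apply first-order differencing d times (the 'd' in ARIMA)."""
    for _ in range(d):
        series = [b - a for a, b in zip(series, series[1:])]
    return series

def moving_average(series, q):
    """Trailing moving average over windows of length q."""
    return [sum(series[i - q + 1:i + 1]) / q for i in range(q - 1, len(series))]

trend = [10, 12, 15, 19, 24]        # toy upward trend
print(difference(trend, d=1))       # [2, 3, 4, 5] -- differencing removes the level
print(moving_average(trend, q=2))   # [11.0, 13.5, 17.0, 21.5]
```

Differencing turns a trending series into one that fluctuates around a constant, which is what the AR and MA parts of ARIMA expect.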
```
import statsmodels.api as sm
from sklearn.preprocessing import MinMaxScaler
from itertools import product
Qs = range(0, 2)
qs = range(0, 2)
Ps = range(0, 2)
ps = range(0, 2)
D = 1
parameters = product(ps, qs, Ps, Qs)
parameters_list = list(parameters)
```
A problem with ARIMA: feeding it large raw values can cause numerical trouble, so we need to scale first. The simplest option is min-max scaling.
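As a side note, what `MinMaxScaler` does under the hood is a simple formula; here is a minimal hand-rolled sketch (illustrative only) mapping each value to `(x - min) / (max - min)` so everything lands in [0, 1]:

```python
def minmax_scale(values):
    # Map every value into [0, 1] based on the observed min and max.
    lo, hi = min(values), max(values)
    return [(x - lo) / (hi - lo) for x in values]

def minmax_inverse(scaled, lo, hi):
    # Undo the scaling, as inverse_transform does later in this notebook.
    return [x * (hi - lo) + lo for x in scaled]

prices = [120.0, 150.0, 135.0, 180.0]
print(minmax_scale(prices))  # [0.0, 0.5, 0.25, 1.0]
```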
```
minmax = MinMaxScaler().fit(np.array([train_selling]).T)
minmax_values = minmax.transform(np.array([train_selling]).T)
```
Now, using a naive grid search over the parameter combinations, let's find which pair of parameters is best based on AIC. **Lower is better!**
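For context, the score being minimised is the Akaike Information Criterion, AIC = 2k - 2 ln L, where k is the number of estimated parameters and L the maximised likelihood. A small hand-rolled sketch (illustrative only, assuming Gaussian residuals with a known sigma):

```python
import math

def gaussian_log_likelihood(residuals, sigma):
    # Log-likelihood of residuals under a zero-mean Gaussian with std sigma.
    n = len(residuals)
    return (-0.5 * n * math.log(2 * math.pi * sigma ** 2)
            - sum(r ** 2 for r in residuals) / (2 * sigma ** 2))

def aic(log_likelihood, k):
    # Penalises model complexity (k parameters) against goodness of fit.
    return 2 * k - 2 * log_likelihood

resid = [0.1, -0.2, 0.05, -0.1]
ll = gaussian_log_likelihood(resid, sigma=0.15)
print(aic(ll, k=3))
```

Two models with the same fit but different parameter counts get different AICs, which is why the grid search keeps the minimum.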
```
best_aic = float('inf')
for param in parameters_list:
    try:
        model = sm.tsa.statespace.SARIMAX(
            minmax_values[:, 0],
            order = (param[0], D, param[1]),
            seasonal_order = (param[2], D, param[3], future_count),
        ).fit(disp = -1)
    except Exception as e:
        print(e)
        continue
    aic = model.aic
    print(aic)
    if aic < best_aic:
        best_model = model
        best_aic = aic
arima_future = best_model.get_prediction(
start = 0, end = len(train_selling) + (future_count - 1)
)
arima_future = minmax.inverse_transform(
np.expand_dims(arima_future.predicted_mean, axis = 1)
)[:, 0]
fig, ax = plt.subplots(figsize = (15, 5))
ax.plot(selling, label = '20% test trend')
ax.plot(train_selling, label = '80% train trend')
ax.plot(linear_future, label = 'forecast linear regression')
ax.plot(arima_future, label = 'forecast ARIMA')
plt.xticks(
    np.arange(len(timestamp))[::10],
    np.arange(len(timestamp))[::10],
    rotation = '45',
)
plt.legend()
plt.show()
```
Perfect!
Now we are left with,
#### RNN + LSTM
```
import tensorflow as tf
class Model:
    def __init__(
        self,
        learning_rate,
        num_layers,
        size,
        size_layer,
        output_size,
        forget_bias = 0.1,
    ):
        def lstm_cell(size_layer):
            return tf.nn.rnn_cell.LSTMCell(size_layer, state_is_tuple = False)

        rnn_cells = tf.nn.rnn_cell.MultiRNNCell(
            [lstm_cell(size_layer) for _ in range(num_layers)],
            state_is_tuple = False,
        )
        self.X = tf.placeholder(tf.float32, (None, None, size))
        self.Y = tf.placeholder(tf.float32, (None, output_size))
        drop = tf.contrib.rnn.DropoutWrapper(
            rnn_cells, output_keep_prob = forget_bias
        )
        self.hidden_layer = tf.placeholder(
            tf.float32, (None, num_layers * 2 * size_layer)
        )
        self.outputs, self.last_state = tf.nn.dynamic_rnn(
            drop, self.X, initial_state = self.hidden_layer, dtype = tf.float32
        )
        self.logits = tf.layers.dense(self.outputs[-1], output_size)
        self.cost = tf.reduce_mean(tf.square(self.Y - self.logits))
        self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(
            self.cost
        )
```
**Naively defined neural network parameters, no grid search here. These parameters came from my dream, believe me :)**
```
num_layers = 1
size_layer = 128
epoch = 500
dropout_rate = 0.6
skip = 10
```
The same goes for LSTM: we need to scale our values because LSTM uses sigmoid and tanh functions during the feed-forward pass, and we don't want any vanishing gradients during backpropagation.
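A quick standard-library illustration of the saturation problem (not part of the pipeline): for large inputs the sigmoid flattens out and its derivative collapses toward zero, so unscaled inputs in the hundreds would barely produce any gradient.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # Derivative of the sigmoid: s * (1 - s).
    s = sigmoid(x)
    return s * (1.0 - s)

print(sigmoid(0.5), sigmoid_grad(0.5))    # healthy gradient near zero input
print(sigmoid(20.0), sigmoid_grad(20.0))  # saturated: gradient ~ 2e-9
```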
```
df = pd.DataFrame({'values': train_selling})
minmax = MinMaxScaler().fit(df)
df_log = minmax.transform(df)
df_log = pd.DataFrame(df_log)
df_log.head()
tf.reset_default_graph()
modelnn = Model(
learning_rate = 0.001,
num_layers = num_layers,
size = df_log.shape[1],
size_layer = size_layer,
output_size = df_log.shape[1],
forget_bias = dropout_rate
)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
%%time
for i in range(epoch):
    init_value = np.zeros((1, num_layers * 2 * size_layer))
    total_loss = 0
    for k in range(0, df_log.shape[0] - 1, skip):
        index = min(k + skip, df_log.shape[0] - 1)
        batch_x = np.expand_dims(
            df_log.iloc[k : index, :].values, axis = 0
        )
        batch_y = df_log.iloc[k + 1 : index + 1, :].values
        last_state, _, loss = sess.run(
            [modelnn.last_state, modelnn.optimizer, modelnn.cost],
            feed_dict = {
                modelnn.X: batch_x,
                modelnn.Y: batch_y,
                modelnn.hidden_layer: init_value,
            },
        )
        init_value = last_state
        total_loss += loss
    total_loss /= (df_log.shape[0] - 1) / skip
    if (i + 1) % 100 == 0:
        print('epoch:', i + 1, 'avg loss:', total_loss)
df = pd.DataFrame({'values': train_selling})
minmax = MinMaxScaler().fit(df)
df_log = minmax.transform(df)
df_log = pd.DataFrame(df_log)
future_day = future_count
output_predict = np.zeros((df_log.shape[0] + future_day, df_log.shape[1]))
output_predict[0] = df_log.iloc[0]
upper_b = (df_log.shape[0] // skip) * skip
init_value = np.zeros((1, num_layers * 2 * size_layer))
for k in range(0, (df_log.shape[0] // skip) * skip, skip):
    out_logits, last_state = sess.run(
        [modelnn.logits, modelnn.last_state],
        feed_dict = {
            modelnn.X: np.expand_dims(
                df_log.iloc[k : k + skip], axis = 0
            ),
            modelnn.hidden_layer: init_value,
        },
    )
    init_value = last_state
    output_predict[k + 1 : k + skip + 1] = out_logits

if upper_b < df_log.shape[0]:
    out_logits, last_state = sess.run(
        [modelnn.logits, modelnn.last_state],
        feed_dict = {
            modelnn.X: np.expand_dims(df_log.iloc[upper_b:], axis = 0),
            modelnn.hidden_layer: init_value,
        },
    )
    init_value = last_state
    output_predict[upper_b + 1 : df_log.shape[0] + 1] = out_logits
    df_log.loc[df_log.shape[0]] = out_logits[-1]
    future_day = future_day - 1

for i in range(future_day):
    out_logits, last_state = sess.run(
        [modelnn.logits, modelnn.last_state],
        feed_dict = {
            modelnn.X: np.expand_dims(df_log.iloc[-skip:], axis = 0),
            modelnn.hidden_layer: init_value,
        },
    )
    init_value = last_state
    output_predict[df_log.shape[0]] = out_logits[-1]
    df_log.loc[df_log.shape[0]] = out_logits[-1]
df_log = minmax.inverse_transform(output_predict)
lstm_future = df_log[:,0]
fig, ax = plt.subplots(figsize = (15, 5))
ax.plot(selling, label = '20% test trend')
ax.plot(train_selling, label = '80% train trend')
ax.plot(linear_future, label = 'forecast linear regression')
ax.plot(arima_future, label = 'forecast ARIMA')
ax.plot(lstm_future, label = 'forecast lstm')
plt.xticks(
    np.arange(len(timestamp))[::10],
    np.arange(len(timestamp))[::10],
    rotation = '45',
)
plt.legend()
plt.show()
from sklearn.metrics import r2_score
from scipy.stats import pearsonr, spearmanr
```
Accuracy based on correlation coefficient, **higher is better!**
```
def calculate_accuracy(real, predict):
    r2 = r2_score(real, predict)
    if r2 < 0:
        r2 = 0

    def change_percentage(val):
        # minmax, we know that correlation is between -1 and 1
        if val > 0:
            return val
        else:
            return val + 1

    pearson = pearsonr(real, predict)[0]
    spearman = spearmanr(real, predict)[0]
    pearson = change_percentage(pearson)
    spearman = change_percentage(spearman)
    return {
        'r2': r2 * 100,
        'pearson': pearson * 100,
        'spearman': spearman * 100,
    }
```
Distance error for mse and rmse, **lower is better!**
```
def calculate_distance(real, predict):
    mse = ((real - predict) ** 2).mean()
    rmse = np.sqrt(mse)
    return {'mse': mse, 'rmse': rmse}
```
#### Now let's check distance error using Mean Square Error and Root Mean Square Error
Validating based on 80% training timestamps
```
linear_cut = linear_future[: len(train_selling)]
arima_cut = arima_future[: len(train_selling)]
lstm_cut = lstm_future[: len(train_selling)]
```
Linear regression
```
calculate_distance(train_selling, linear_cut)
calculate_accuracy(train_selling, linear_cut)
```
ARIMA
```
calculate_distance(train_selling, arima_cut)
calculate_accuracy(train_selling, arima_cut)
```
LSTM
```
calculate_distance(train_selling, lstm_cut)
calculate_accuracy(train_selling, lstm_cut)
```
**LSTM learns better during the training session!**
How about another 20%?
```
linear_cut = linear_future[len(train_selling) :]
arima_cut = arima_future[len(train_selling) :]
lstm_cut = lstm_future[len(train_selling) :]
```
Linear regression
```
calculate_distance(test_selling, linear_cut)
calculate_accuracy(test_selling, linear_cut)
```
ARIMA
```
calculate_distance(test_selling, arima_cut)
calculate_accuracy(test_selling, arima_cut)
```
LSTM
```
calculate_distance(test_selling, lstm_cut)
calculate_accuracy(test_selling, lstm_cut)
```
**LSTM is the best model based on testing!**
Deep learning won again!
I guess that's all for now. **Again, do not use these models to actually trade any stocks!**
| github_jupyter |
```
import torch
from torch.distributions import Normal
import math
```
Let us revisit the problem of predicting whether a resident of Statsville is female based on height. For this purpose, we have collected a set of height samples from adult female residents in Statsville. Unfortunately, due to unforeseen circumstances we have collected a very small sample from the residents. Armed with our knowledge of Bayesian inference, we do not want to let this deter us from trying to build a model.
From physical considerations, we can assume that the distribution of heights is Gaussian. Our goal is to estimate the parameters ($\mu$, $\sigma$) of this Gaussian.
Let us first create the dataset by sampling 5 points from a Gaussian distribution with $\mu$=152 and $\sigma$=8. In real life scenarios, we do not know the mean and standard deviation of the true distribution. But for the sake of this example, let's assume that the mean height is 152cm and standard deviation is 8cm.
```
torch.random.manual_seed(0)
num_samples = 5
true_dist = Normal(152, 8)
X = true_dist.sample((num_samples, 1))
print('Dataset shape: {}'.format(X.shape))
```
### Maximum Likelihood Estimate
If we relied on Maximum Likelihood estimation, our approach would be simply to compute the mean and standard deviation of the dataset, and use this normal distribution as our model.
$$\mu_{MLE} = \frac{1}{N}\sum_{i=1}^{N}x_i$$
$$\sigma^2_{MLE} = \frac{1}{N}\sum_{i=1}^{N}(x_i - \mu_{MLE})^2$$
Once we estimate the parameters, we can find out the probability that a sample lies in the range using the following formula
$$ p(a < X <= b) = \int_{a}^b p(X) dX $$
However, when the amount of data is low, the MLE estimates are not as reliable.
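As a side note, $p(a < X \le b)$ can be computed with nothing beyond the standard library, since the normal CDF is expressible via the error function, $\Phi(z) = \tfrac{1}{2}(1 + \mathrm{erf}(z/\sqrt{2}))$. A small sketch using the true parameters above:

```python
import math

def normal_cdf(x, mu, sigma):
    # Standard normal CDF via the error function, shifted and scaled.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

mu, sigma = 152.0, 8.0
p = normal_cdf(155, mu, sigma) - normal_cdf(150, mu, sigma)
print(round(p, 4))  # probability mass between 150 and 155 under the true model
```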
```
mle_mu, mle_std = X.mean(), X.std()
mle_dist = Normal(mle_mu, mle_std)
print(f"MLE: mu {mle_mu:0.2f} std {mle_std:0.2f}")
```
## Bayesian Inference
Can we do better than MLE?
One potential method is to use Bayesian inference with a good prior. How does one go about selecting a good prior? Well, let's say that from another survey we know the average and the standard deviation of the heights of adult female residents in Neighborville, the neighboring town. Additionally, we have no reason to believe that the distribution of heights at Statsville is significantly different. So we can use this information to "initialize" our prior.
Let's say the mean height of an adult female resident in Neighborville is 150 cm with a standard deviation of 9 cm.
We can use this information as our prior. The prior distribution encodes our beliefs on the parameter values.
Given that we are dealing with an unknown mean, and unknown variance, we will model the prior as a Normal Gamma distribution.
$$p\left( \theta \middle\vert X \right) \propto p \left( X \middle\vert \theta \right) p \left( \theta \right)\\
p\left( \theta \middle\vert X \right) = Normal-Gamma\left( \mu_{n}, \lambda_{n}, \alpha_{n}, \beta_{n} \right) \\
p \left( X \middle\vert \theta \right) = \mathbb{N}\left( \mu, \lambda^{ -\frac{1}{2} } \right) \\
p \left( \theta \right) = Normal-Gamma\left( \mu_{0}, \lambda_{0}, \alpha_{0}, \beta_{0} \right)$$
We will choose a prior, $p \left(\theta \right)$, such that
$$ \mu_{0} = 150 \\
\lambda_{0} = 100 \\
\alpha_{0} = 10.5 \\
\beta_{0} = 810 $$
$$p \left( \theta \right) = Normal-Gamma\left( 150, 100, 10.5, 810 \right)$$
We will compute the posterior, $p\left( \theta \middle\vert X \right)$, using Bayesian inference.
$$\mu_{n} = \frac{ \left( n \bar{x} + \mu_{0} \lambda_{0} \right) }{ n + \lambda_{0} } \\
\lambda_{n} = n + \lambda_{0} \\
\alpha_{n} = \frac{n}{2} + \alpha_{0} \\
\beta_{n} = \frac{ n s^{2} }{ 2 } + \beta_{ 0 } + \frac{ n \lambda_{0} } { 2 \left( n + \lambda_{0} \right) } \left( \bar{x} - \mu_{0} \right)^{ 2 }$$
Here $s^{2}$ is the sample variance.
$$p\left( \theta \middle\vert X \right) = Normal-Gamma\left( \mu_{n}, \lambda_{n}, \alpha_{n}, \beta_{n} \right)$$
```
class NormalGamma():
    def __init__(self, mu_, lambda_, alpha_, beta_):
        self.mu_ = mu_
        self.lambda_ = lambda_
        self.alpha_ = alpha_
        self.beta_ = beta_

    @property
    def mean(self):
        return self.mu_, self.alpha_ / self.beta_

    @property
    def mode(self):
        return self.mu_, (self.alpha_ - 0.5) / self.beta_


def inference_unknown_mean_variance(X, prior_dist):
    mu_mle = X.mean()
    sigma_mle = X.std()
    n = X.shape[0]
    # Parameters of the prior
    mu_0 = prior_dist.mu_
    lambda_0 = prior_dist.lambda_
    alpha_0 = prior_dist.alpha_
    beta_0 = prior_dist.beta_
    # Parameters of the posterior
    mu_n = (n * mu_mle + mu_0 * lambda_0) / (lambda_0 + n)
    lambda_n = n + lambda_0
    alpha_n = n / 2 + alpha_0
    beta_n = (
        n / 2 * sigma_mle ** 2
        + beta_0
        + 0.5 * n * lambda_0 * (mu_mle - mu_0) ** 2 / (n + lambda_0)
    )
    posterior_dist = NormalGamma(mu_n, lambda_n, alpha_n, beta_n)
    return posterior_dist
# Let us initialize the prior based on our beliefs
prior_dist = NormalGamma(150, 100, 10.5, 810)
# We compute the posterior distribution
posterior_dist = inference_unknown_mean_variance(X, prior_dist)
```
How do we use the posterior distribution?
Note that the posterior distribution is a distribution on the parameters $\mu$ and $\lambda$. It is important to note that the posterior and prior are distributions in the parameter space. The likelihood is a distribution on the data space.
Once we learn the posterior distribution, one way to use the distribution is to look at the mode of the distribution i.e the parameter values which have the highest probability density. Using these point estimates leads us to Maximum A Posteriori / MAP estimation.
As usual, we will find the maximum of the posterior probability density function $p\left( \mu, \sigma \middle\vert X \right) = Normal-Gamma\left( \mu, \sigma ; \;\; \mu_{n}, \lambda_{n}, \alpha_{n}, \beta_{n} \right) $.
This function attains its maximum when
$$\mu = \mu_{n} \\
\lambda = \frac{ \alpha_{n} - \frac{1}{2} } { \beta_{n} }$$
We notice that the MAP estimates for $\mu$ and $\sigma$ are better than the MLE estimates.
```
# With the Normal Gamma formulation, the unknown parameters are mu and precision
map_mu, map_precision = posterior_dist.mode
# We can compute the standard deviation using precision.
map_std = math.sqrt(1 / map_precision)
map_dist = Normal(map_mu, map_std)
print(f"MAP: mu {map_mu:0.2f} std {map_std:0.2f}")
```
How did we arrive at the values of the parameters for the prior distribution?
Let us consider the case when we have 0 data points. In this case, posterior will become equal to the prior. If we use the mode of this posterior for our MAP estimate, we see that the mu and std parameters are the same as the $\mu$ and $\sigma$ of adult female residents in Neighborville.
```
prior_mu, prior_precision = prior_dist.mode
prior_std = math.sqrt(1 / prior_precision)
print(f"Prior: mu {prior_mu:0.2f} std {prior_std:0.2f}")
```
## Inference
Let us say we want to find out the probability that a height between 150 and 155 cm belongs to an adult female resident. We can now use the MAP estimates for $\mu$ and $\sigma$ to compute this value.
Since our prior was good, we notice that the MAP serves as a better estimator than the MLE at low values of $n$.
```
a, b = torch.Tensor([150]), torch.Tensor([155])
true_prob = true_dist.cdf(b) - true_dist.cdf(a)
print(f'True probability: {true_prob}')
map_prob = map_dist.cdf(b) - map_dist.cdf(a)
print(f'MAP probability: {map_prob}')
mle_prob = mle_dist.cdf(b) - mle_dist.cdf(a)
print('MLE probability: {}'.format(mle_prob))
```
Let us say we receive more samples; how do we incorporate this information into our model? We can set the prior to our current posterior and run inference again to obtain the new posterior. This process can be done iteratively.
$$ p \left( \theta \right)_{n} = p\left( \theta \middle\vert X \right)_{n-1}$$
$$ p\left( \theta \middle\vert X \right)_{n}=inference\_unknown\_mean\_variance(X_{n}, p \left( \theta \right)_{n})$$
We also notice that as the number of data points increases, the MAP starts to converge towards the true values of $\mu$ and $\sigma$ respectively
```
num_batches, batch_size = 20, 10
for i in range(num_batches):
    X_i = true_dist.sample((batch_size, 1))
    prior_i = posterior_dist
    posterior_dist = inference_unknown_mean_variance(X_i, prior_i)
    map_mu, map_precision = posterior_dist.mode
    # We can compute the standard deviation using precision.
    map_std = math.sqrt(1 / map_precision)
    map_dist = Normal(map_mu, map_std)
    if i % 5 == 0:
        print(f"MAP at batch {i}: mu {map_mu:0.2f} std {map_std:0.2f}")
print(f"MAP at batch {i}: mu {map_mu:0.2f} std {map_std:0.2f}")
```
| github_jupyter |
# Labs - Biopython and data formats
## Outline
- Managing dependencies in Python with environments
- Biopython
- Sequences (parsing, representation, manipulation)
- Structures (parsing, representation, manipulation)
### 1. Python environments
- handles issues with conflicting dependency versions
- ensures reproducibility
- does not clutter users' global site-packages directory
`python3 -m venv venv/ # Creates an environment called venv/`
`source venv/bin/activate`
`pip install biopython`
`pip freeze > requirements.txt`
`(venv) % deactivate`
On a different machine, the environment can be replicated by creating a new environment and running
`pip install -r requirements.txt`
### 2. Biopython
Biopython is a library consisting of tools for both sequence and structure bioinformatics. Among other things it enables parsing, handling and storing molecular data present in common formats such as FASTA, PDB or mmCIF.
Install biopython using `pip install biopython`
Functionality is divided into packages, a list of which is available in the [docs](https://biopython.org/docs/1.75/api/Bio.html).
Main sequence and structure packages:
- [Bio.Seq](https://biopython.org/docs/latest/api/Bio.Seq.html)
- [Bio.Align](https://biopython.org/docs/latest/api/Bio.Align.html)
- [Bio.SeqIO](https://biopython.org/docs/latest/api/Bio.SeqIO.html)
- [Bio.PDB](https://biopython.org/docs/latest/api/Bio.PDB.html)
#### Sequences
Loading a sequence from a string:
```
from Bio.Seq import Seq
seq = Seq("AGTACACTG")
print(seq)
```
This creates a [sequence object](https://biopython.org/docs/latest/api/Bio.Seq.html) with a couple of fancy methods, especially when it comes to nucleotide sequences, such as `reverse_complement` or `translate`.
```
print(seq.translate())
print(seq.reverse_complement().transcribe())
print(seq.reverse_complement().transcribe().translate())
coding_dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG")
print(coding_dna.translate())
print(coding_dna.translate(to_stop=True))
print(coding_dna.translate(table=2))
print(coding_dna.translate(table=2, to_stop=True))
```
Notice that in the example above we used different genetic code tables. Check the [NCBI genetic codes](https://www.ncbi.nlm.nih.gov/Taxonomy/Utils/wprintgc.cgi) for details.
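As a toy illustration of why the table matters (a hand-rolled lookup, not the Biopython API): in the vertebrate mitochondrial code (table 2), AGA/AGG become stop codons and TGA codes for tryptophan, whereas the standard code (table 1) reads AGA as arginine and TGA as stop.

```python
# Minimal codon lookups for two codons that differ between the tables.
standard = {"AGA": "R", "TGA": "*"}        # NCBI table 1 (standard code)
vertebrate_mito = {"AGA": "*", "TGA": "W"} # NCBI table 2 (vertebrate mitochondrial)

for codon in ("AGA", "TGA"):
    print(codon, "standard:", standard[codon], "mito:", vertebrate_mito[codon])
```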
To list all the methods, run, e.g., one of the following:
```
print(dir(seq))
print(help(seq))
```
Methods for accessing by position are available as well.
```
print(seq[3])
print(seq[3:5])
print(seq[::-1])
```
If needed, the Seq object can be converted into a string.
```
print(str(seq))
print(str(seq).translate({65: 88}))
```
To parse sequence from a file, you can use [Bio.SeqIO](https://biopython.org/docs/latest/api/Bio.SeqIO.html). [Here](https://biopython.org/wiki/SeqIO#file-formats) is the list of supported formats. The format name is passed into the `parse` method.
```
from Bio import SeqIO
sars2_it = SeqIO.parse("R1A-B_SARS2.fasta", "fasta")
for seq_record in sars2_it:
print(seq_record.id)
print(repr(seq_record.seq))
print(len(seq_record))
sars2_seq_recs = list(sars2_it)
```
The result is an iterator of [SeqRecord](https://biopython.org/docs/latest/api/Bio.SeqRecord.html)s. Other attributes of `SeqRecord`, such as features or annotations, are more relevant for other formats, such as GenBank. The underlying gene for the two isoforms (R1A_SARS2/P0DTC1 and R1AB_SARS2/P0DTD1) is ORF1ab, and the two isoforms are caused by ribosomal slippage during translation (see, e.g., [here](https://www.science.org/doi/full/10.1126/science.abf3546)). Both R1A_SARS2 and R1AB_SARS2 are polyproteins and are encoded by the same [gene](https://www.ncbi.nlm.nih.gov/gene/43740578). Let's explore this.
```
gb_rec = list(SeqIO.parse("NC_045512.gb", "genbank"))[0]
print(gb_rec.id)
print(gb_rec.annotations)
print(gb_rec.features)
gb_rec.features
```
Let's obtain all CDS (coding sequence) features.
```
print(gb_rec.features)
cds = [seq_feature for seq_feature in gb_rec.features if seq_feature.type == 'CDS']
cds[0].extract(gb_rec.seq).translate()
```
Now, let's get the DNA sequence for the polyprotein 1ab.
```
aa_seq = cds[0].extract(gb_rec.seq).translate()
print(aa_seq[-10:])
print(gb_rec.seq.translate()[-10:])
```
To write a sequence into a file, use `SeqIO.write`.
```
SeqIO.write([gb_rec, SeqIO.SeqRecord(aa_seq, id="id", description="aa")], "fasta_from_gb.fasta", "fasta")
```
Now, carry out the following tasks by yourselves:
- Obtain the protein sequence for polyprotein 1ab and check with UniProt that it matches (just by eyeballing).
- Obtain the protein sequence for the polyprotein 1a.
- Obtain protein sequences for all the proteins and list them together with their names.
```
print(cds[1].extract(gb_rec.seq).translate())
#print(cds[1].extract(gb_rec.seq).translate())
peptides = [print("{}: {}".format(ft.qualifiers['protein_id'], ft.extract(gb_rec.seq).translate())) for ft in gb_rec.features if ft.type == 'mat_peptide']
```
#### Structures
Structure processing is managed by the [Bio.PDB](https://biopython.org/docs/latest/api/Bio.PDB.html) package.
To read a structure from a PDB file, use the `PDBParser`. We will be using the 3C-like proteinase protein, which is one of the processed proteins present in the ORF1a discussed above. One of its structures is [7ALH](https://www.ebi.ac.uk/pdbe/entry/pdb/7alh). To see all the proteins, I suggest checking out the PDBe-KB page for [P0DTD1](https://www.ebi.ac.uk/pdbe/pdbe-kb/protein/P0DTD1).
```
from Bio.PDB.PDBParser import PDBParser
parser = PDBParser(PERMISSIVE=1)
structure = parser.get_structure("7alh", "7alh.ent")
```
As the PDB format is considered deprecated, one should use the mmCIF file instead. Parsing is done the same way as in the case of PDB files.
```
from Bio.PDB.MMCIFParser import MMCIFParser
parser = MMCIFParser()
structure = parser.get_structure("7alh", "7alh.cif")
```
To retrieve the individual CIF dictionary fields, one can use the `MMCIF2Dict` module.
```
from Bio.PDB.MMCIF2Dict import MMCIF2Dict
mmcif_dict = MMCIF2Dict("7alh.cif")
print(mmcif_dict["_citation.title"])
```
The structure record has the structure->model->chain->residue->atom architecture.

Each of the levels in the hierarchy is represented by a submodule in Bio.PDB, namely [Bio.PDB.Structure](https://biopython.org/docs/latest/api/Bio.PDB.Structure.html), [Bio.PDB.Model](https://biopython.org/docs/latest/api/Bio.PDB.Model.html), [Bio.PDB.Chain](https://biopython.org/docs/latest/api/Bio.PDB.Chain.html), [Bio.PDB.Residue](https://biopython.org/docs/latest/api/Bio.PDB.Residue.html) and [Bio.PDB.Atom](https://biopython.org/docs/latest/api/Bio.PDB.Atom.html). For details regarding IDs, check the [section on ID](https://biopython.org/docs/1.75/api/Bio.PDB.Entity.html#Bio.PDB.Entity.Entity.get_full_id) of the Entity class, which is the superclass of the Model/Chain/Residue/Atom classes.
```
print(structure.get_list())
print('---------- MODEL INFO ----------')
model = structure[0]
print("Full ID: {}\nID: {}".format(model.get_full_id(), model.get_id()))
print(model.get_list())
print('---------- CHAIN INFO ----------')
chain = model['A']
print("Full ID: {}\nID: {}".format(chain.get_full_id(), chain.get_id()))
print(chain.get_list())
print('---------- RESIDUE INFO ----------')
res = chain[(' ',1,' ')]
print("Full ID: {}\nID: {}".format(res.get_full_id(), res.get_id()))
print(res.get_resname())
res = chain[1]
print(res.get_resname())
print(res.get_list())
print('---------- ATOM INFO ----------')
atom=res['CA']
print("Full ID: {}\nID: {}".format(atom.get_full_id(), atom.get_id()))
print("{}\n{}\n{}\n{}".format(atom.get_name(), atom.get_id(), atom.get_coord(), atom.get_fullname()))
print(atom.get_vector())
```
To download a file from PDB, one can use the PDBList module.
```
from Bio.PDB.PDBList import PDBList
pdbl = PDBList()
pbl_7lkr=pdbl.retrieve_pdb_file("7LKR", file_format="mmCif", pdir=".")
from Bio.PDB.MMCIFParser import MMCIFParser
parser = MMCIFParser()
structure = parser.get_structure("7lkr", "7lkr.cif")
```
Tasks:
- Iterate over all atoms of the structure
- List all water residues (the first field of the residue id is 'W')
- How many water molecules are in the record?
- How many heteroatoms are there in the record (the first field of the residue id starts with 'H')?
- Find a structure in PDB with at least one ligand (different from water) and write a code which lists all the ligands.
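As a hint for the water/heteroatom tasks, the filtering logic only depends on the residue-id convention; here is a sketch over hypothetical ids (plain Python, no Biopython). Each residue id is a tuple `(hetero_flag, sequence_number, insertion_code)`, where the hetero flag is `'W'` for waters, `'H_<name>'` for other heteroatoms and `' '` for standard residues.

```python
# Hypothetical residue ids, in the same shape Bio.PDB returns from get_id().
residue_ids = [(" ", 1, " "), ("W", 101, " "), ("W", 102, " "), ("H_ZN", 201, " ")]

waters = [rid for rid in residue_ids if rid[0] == "W"]
hetero = [rid for rid in residue_ids if rid[0].startswith("H")]
print(len(waters), len(hetero))  # 2 1
```

In the real tasks, you would build `residue_ids` by iterating over the parsed structure's residues instead of using a hand-written list.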
| github_jupyter |
# Throughput Benchmarking Seldon-Core on GCP Kubernetes
The notebook will provide a benchmark of seldon-core for a maximum-throughput test. We will run a stub model and test using REST and gRPC predictions. This will provide a maximum theoretical throughput for model deployment in the given infrastructure scenario:
* 1 replica of the model running on n1-standard-16 GCP node
For a real model the throughput would be less. Future benchmarks will test realistic model scenarios.
## Create Cluster
Create a cluster of 4 nodes of machine type n1-standard-16
```bash
PROJECT=seldon-core-benchmarking
ZONE=europe-west1-b
gcloud beta container --project "${PROJECT}" clusters create "loadtest" \
--zone "${ZONE}" \
--username "admin" \
--cluster-version "1.9.3-gke.0" \
--machine-type "n1-standard-16" \
--image-type "COS" \
--disk-size "100" \
--num-nodes "4" \
--network "default" \
--enable-cloud-logging \
--enable-cloud-monitoring \
--subnetwork "default"
```
## Install helm
```
!kubectl -n kube-system create sa tiller
!kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
!helm init --service-account tiller
```
## Start Seldon-Core CRD
```
!helm install ../helm-charts/seldon-core-crd --name seldon-core-crd
```
## Cordon off loadtest nodes
```
!kubectl get nodes
```
We cordon off the first 3 nodes so that seldon-core and the model will be scheduled only on the 1 remaining node.
```
!kubectl cordon $(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
!kubectl cordon $(kubectl get nodes -o jsonpath='{.items[1].metadata.name}')
!kubectl cordon $(kubectl get nodes -o jsonpath='{.items[2].metadata.name}')
```
Label the nodes so they can be used by locust.
```
!kubectl label nodes $(kubectl get nodes -o jsonpath='{.items[0].metadata.name}') role=locust
!kubectl label nodes $(kubectl get nodes -o jsonpath='{.items[1].metadata.name}') role=locust
!kubectl label nodes $(kubectl get nodes -o jsonpath='{.items[2].metadata.name}') role=locust
```
## Start seldon-core
```
!helm install ../helm-charts/seldon-core --name seldon-core \
--set cluster_manager.rbac=true \
--set apife.enabled=true \
--set engine.image.tag=0.1.6_SNAPSHOT_loadtest \
--set cluster_manager.image.tag=0.1.6_SNAPSHOT_loadtest
```
Wait for seldon-core to start
```
!kubectl get pods -o wide
```
## Create Stub Deployment
```
!pygmentize resources/loadtest_simple_model.json
!kubectl apply -f resources/loadtest_simple_model.json
```
Wait for deployment to be running.
```
!kubectl get seldondeployments seldon-core-loadtest -o jsonpath='{.status}'
```
## Run benchmark
Uncordon the first 3 nodes so they can be used to schedule locust
```
!kubectl uncordon $(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
!kubectl uncordon $(kubectl get nodes -o jsonpath='{.items[1].metadata.name}')
!kubectl uncordon $(kubectl get nodes -o jsonpath='{.items[2].metadata.name}')
```
## gRPC
Start locust load test for gRPC
```
!helm install ../helm-charts/seldon-core-loadtesting --name loadtest \
--set locust.host=loadtest:5001 \
--set locust.script=predict_grpc_locust.py \
--set oauth.enabled=false \
--set oauth.key=oauth-key \
--set oauth.secret=oauth-secret \
--set locust.hatchRate=1 \
--set locust.clients=256 \
--set loadtest.sendFeedback=0 \
--set locust.minWait=0 \
--set locust.maxWait=0 \
--set replicaCount=64
```
To download stats use
```bash
if [ "$#" -ne 2 ]; then
    echo "Illegal number of parameters: <experiment> <rest|grpc>"
    exit 1
fi
EXPERIMENT=$1
TYPE=$2
MASTER=`kubectl get pod -l name=locust-master-1 -o jsonpath='{.items[0].metadata.name}'`
kubectl cp ${MASTER}:stats_distribution.csv ${EXPERIMENT}_${TYPE}_stats_distribution.csv
kubectl cp ${MASTER}:stats_requests.csv ${EXPERIMENT}_${TYPE}_stats_requests.csv
```
You can get live stats by viewing the logs of the locust master
```
!kubectl logs $(kubectl get pod -l name=locust-master-1 -o jsonpath='{.items[0].metadata.name}') --tail=10
!helm delete loadtest --purge
```
## REST
Run REST benchmark
```
!helm install ../helm-charts/seldon-core-loadtesting --name loadtest \
--set locust.host=http://loadtest:8000 \
--set oauth.enabled=false \
--set oauth.key=oauth-key \
--set oauth.secret=oauth-secret \
--set locust.hatchRate=1 \
--set locust.clients=256 \
--set loadtest.sendFeedback=0 \
--set locust.minWait=0 \
--set locust.maxWait=0 \
--set replicaCount=64
```
Get stats as per gRPC and/or monitor
```
!kubectl logs $(kubectl get pod -l name=locust-master-1 -o jsonpath='{.items[0].metadata.name}') --tail=10
!helm delete loadtest --purge
!kubectl cordon $(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
!kubectl cordon $(kubectl get nodes -o jsonpath='{.items[1].metadata.name}')
!kubectl cordon $(kubectl get nodes -o jsonpath='{.items[2].metadata.name}')
```
## Tear Down
```
!kubectl delete -f resources/loadtest_simple_model.json
!helm delete seldon-core --purge
!helm delete seldon-core-crd --purge
```
| github_jupyter |
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Distributed Training in TensorFlow
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/alpha/guide/distribute_strategy"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/guide/distribute_strategy.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/guide/distribute_strategy.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
## Overview
`tf.distribute.Strategy` is a TensorFlow API to distribute training
across multiple GPUs, multiple machines or TPUs. Using this API, users can distribute their existing models and training code with minimal code changes.
`tf.distribute.Strategy` has been designed with these key goals in mind:
* Easy to use and support multiple user segments, including researchers, ML engineers, etc.
* Provide good performance out of the box.
* Easy switching between strategies.
`tf.distribute.Strategy` can be used with TensorFlow's high level APIs, [tf.keras](https://www.tensorflow.org/guide/keras) and [tf.estimator](https://www.tensorflow.org/guide/estimators), with just a couple of lines of code change. It also provides an API that can be used to distribute custom training loops (and in general any computation using TensorFlow).
In TensorFlow 2.0, users can execute their programs eagerly, or in a graph using [`tf.function`](../tutorials/eager/tf_function.ipynb). `tf.distribute.Strategy` intends to support both these modes of execution. Note that we may talk about training most of the time in this guide, but this API can also be used for distributing evaluation and prediction on different platforms.
As you will see in a bit, very few changes are needed to use `tf.distribute.Strategy` with your code. This is because we have changed the underlying components of TensorFlow to become strategy-aware. This includes variables, layers, models, optimizers, metrics, summaries, and checkpoints.
In this guide, we will talk about the various types of strategies and how one can use them in different situations.
```
# Import TensorFlow
from __future__ import absolute_import, division, print_function
import tensorflow as tf
```
## Types of strategies
`tf.distribute.Strategy` intends to cover a number of use cases along different axes. Some of these combinations are currently supported and others will be added in the future. Some of these axes are:
* Synchronous vs asynchronous training: These are two common ways of distributing training with data parallelism. In sync training, all workers train over different slices of the input data in sync, aggregating gradients at each step. In async training, all workers train independently over the input data and update variables asynchronously. Typically, sync training is supported via all-reduce and async training via a parameter server architecture.
* Hardware platform: Users may want to scale their training onto multiple GPUs on one machine, or multiple machines in a network (with 0 or more GPUs each), or on Cloud TPUs.
In order to support these use cases, we have 4 strategies available. In the next section we will talk about which of these are supported in which scenarios in TF nightly at this time.
### MirroredStrategy
`tf.distribute.MirroredStrategy` supports synchronous distributed training on multiple GPUs on one machine. It creates one replica per GPU device. Each variable in the model is mirrored across all the replicas. Together, these variables form a single conceptual variable called `MirroredVariable`. These variables are kept in sync with each other by applying identical updates.
Efficient all-reduce algorithms are used to communicate the variable updates across the devices.
All-reduce aggregates tensors across all the devices by adding them up, and makes them available on each device.
It’s a fused algorithm that is very efficient and can reduce the overhead of synchronization significantly. There are many all-reduce algorithms and implementations available, depending on the type of communication available between devices. By default, it uses NVIDIA NCCL as the all-reduce implementation. The user can also choose between a few other options we provide, or write their own.
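Conceptually (setting aside the fused ring implementations such as NCCL's), a sum all-reduce over a few devices can be sketched in plain Python; the device count and tensor values here are made up for illustration:

```python
# Toy sum all-reduce over three "devices", each holding a local gradient
# vector (made-up values; plain Python lists stand in for tensors).
device_tensors = [
    [1.0, 2.0],
    [0.5, 0.5],
    [2.5, 1.5],
]

# All-reduce adds the tensors elementwise...
reduced = [sum(col) for col in zip(*device_tensors)]

# ...and makes the same aggregated result available on every device.
all_reduced = [list(reduced) for _ in device_tensors]

assert all(t == [4.0, 4.0] for t in all_reduced)
```

Real implementations avoid a central gather step, but the result on each device is the same elementwise sum shown here.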
Here is the simplest way of creating `MirroredStrategy`:
```
mirrored_strategy = tf.distribute.MirroredStrategy()
```
This will create a `MirroredStrategy` instance which will use all the GPUs that are visible to TensorFlow, and use NCCL as the cross device communication.
If you wish to use only some of the GPUs on your machine, you can do so like this:
```
mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
```
If you wish to override the cross device communication, you can do so using the `cross_device_ops` argument by supplying an instance of `tf.distribute.CrossDeviceOps`. Currently we provide `tf.distribute.HierarchicalCopyAllReduce` and `tf.distribute.ReductionToOneDevice` as 2 other options other than `tf.distribute.NcclAllReduce` which is the default.
```
mirrored_strategy = tf.distribute.MirroredStrategy(
cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
```
### MultiWorkerMirroredStrategy
`tf.distribute.experimental.MultiWorkerMirroredStrategy` is very similar to `MirroredStrategy`. It implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to `MirroredStrategy`, it creates copies of all variables in the model on each device across all workers.
It uses [CollectiveOps](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/collective_ops.py) as the multi-worker all-reduce communication method used to keep variables in sync. A collective op is a single op in the TensorFlow graph which can automatically choose an all-reduce algorithm in the TensorFlow runtime according to hardware, network topology and tensor sizes.
It also implements additional performance optimizations. For example, it includes a static optimization that converts multiple all-reductions on small tensors into fewer all-reductions on larger tensors. In addition, we are designing it to have a plugin architecture - so that in the future, users will be able to plug in algorithms that are better tuned for their hardware. Note that collective ops also implement other collective operations such as broadcast and all-gather.
Here is the simplest way of creating `MultiWorkerMirroredStrategy`:
```
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
```
`MultiWorkerMirroredStrategy` currently allows you to choose between two different implementations of collective ops. `CollectiveCommunication.RING` implements ring-based collectives using gRPC as the communication layer. `CollectiveCommunication.NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `CollectiveCommunication.AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. You can specify them like so:
```
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
tf.distribute.experimental.CollectiveCommunication.NCCL)
```
One of the key differences to get multi-worker training going, as compared to multi-GPU training, is the multi-worker setup. The "TF_CONFIG" environment variable is the standard way in TensorFlow to specify the cluster configuration to each worker that is part of the cluster. See the section on ["TF_CONFIG" below](#TF_CONFIG) for more details on how this can be done.
Note: This strategy is [`experimental`](https://www.tensorflow.org/guide/version_compat#what_is_not_covered) as we are currently improving it and making it work for more scenarios. As part of this, please expect the APIs to change in the future.
### TPUStrategy
`tf.distribute.experimental.TPUStrategy` lets users run their TensorFlow training on Tensor Processing Units (TPUs). TPUs are Google's specialized ASICs designed to dramatically accelerate machine learning workloads. They are available on Google Colab, the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) and [Google Compute Engine](https://cloud.google.com/tpu).
In terms of distributed training architecture, TPUStrategy is the same as `MirroredStrategy` - it implements synchronous distributed training. TPUs provide their own implementation of efficient all-reduce and other collective operations across multiple TPU cores, which are used in `TPUStrategy`.
Here is how you would instantiate `TPUStrategy`.
Note: To run this code in Colab, you should select TPU as the Colab runtime. See the [Using TPUs](tpu.ipynb) guide for a runnable version.
```
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.tpu.experimental.initialize_tpu_system(resolver)
tpu_strategy = tf.distribute.experimental.TPUStrategy(resolver)
```
The `TPUClusterResolver` instance helps locate the TPUs. In Colab, you don't need to specify any arguments to it. If you want to use this for Cloud TPUs, you will need to specify the name of your TPU resource in the `tpu` argument. We also need to initialize the TPU system explicitly at the start of the program. This is required before TPUs can be used for computation, and should ideally be done at the beginning because it also wipes out the TPU memory, so all state will be lost.
Note: This strategy is [`experimental`](https://www.tensorflow.org/guide/version_compat#what_is_not_covered) as we are currently improving it and making it work for more scenarios. As part of this, please expect the APIs to change in the future.
### ParameterServerStrategy
`tf.distribute.experimental.ParameterServerStrategy` supports parameter server training. It can be used either for multi-GPU synchronous local training or asynchronous multi-machine training. When used to train locally on one machine, variables are not mirrored; instead, they are placed on the CPU and operations are replicated across all local GPUs. In a multi-machine setting, some machines are designated as workers and some as parameter servers. Each variable of the model is placed on one parameter server. Computation is replicated across all GPUs of all the workers.
In terms of code, it looks similar to other strategies:
```
ps_strategy = tf.distribute.experimental.ParameterServerStrategy()
```
For multi worker training, "TF_CONFIG" needs to specify the configuration of parameter servers and workers in your cluster, which you can read more about in ["TF_CONFIG" below](#TF_CONFIG) below.
So far we've talked about the different strategies available and how you can instantiate them. In the next few sections, we will talk about the different ways in which you can use them to distribute your training. We will show short code snippets in this guide and link off to full tutorials which you can run end to end.
## Using `tf.distribute.Strategy` with Keras
We've integrated `tf.distribute.Strategy` into `tf.keras` which is TensorFlow's implementation of the
[Keras API specification](https://keras.io). `tf.keras` is a high-level API to build and train models. By integrating into the `tf.keras` backend, we've made it seamless for Keras users to distribute training written in the Keras training framework. The only things that need to change in a user's program are: (1) Create an instance of the appropriate `tf.distribute.Strategy` and (2) Move the creation and compiling of the Keras model inside `strategy.scope`.
Here is a snippet of code to do this for a very simple Keras model with one dense layer:
```
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(loss='mse', optimizer='sgd')
```
In this example we used `MirroredStrategy`, so we can run this on a machine with multiple GPUs. `strategy.scope()` indicates which parts of the code to run distributed. Creating a model inside this scope allows us to create mirrored variables instead of regular variables. Compiling under the scope lets us know that the user intends to train this model using this strategy. Once this is set up, you can fit your model like you would normally. `MirroredStrategy` takes care of replicating the model's training on the available GPUs, aggregating gradients, etc.
```
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(10)
model.fit(dataset, epochs=2)
model.evaluate(dataset)
```
Here we used a `tf.data.Dataset` to provide the training and eval input. You can also use numpy arrays:
```
import numpy as np
inputs, targets = np.ones((100, 1)), np.ones((100, 1))
model.fit(inputs, targets, epochs=2, batch_size=10)
```
In both cases (dataset or numpy), each batch of the given input is divided equally among the multiple replicas. For instance, if using `MirroredStrategy` with 2 GPUs, each batch of size 10 will get divided among the 2 GPUs, with each receiving 5 input examples in each step. Each epoch will then train faster as you add more GPUs. Typically, you would want to increase your batch size as you add more accelerators so as to make effective use of the extra computing power. You will also need to re-tune your learning rate, depending on the model. You can use `strategy.num_replicas_in_sync` to get the number of replicas.
```
# Compute global batch size using number of replicas.
BATCH_SIZE_PER_REPLICA = 5
global_batch_size = (BATCH_SIZE_PER_REPLICA *
mirrored_strategy.num_replicas_in_sync)
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100)
dataset = dataset.batch(global_batch_size)
LEARNING_RATES_BY_BATCH_SIZE = {5: 0.1, 10: 0.15}
learning_rate = LEARNING_RATES_BY_BATCH_SIZE[global_batch_size]
```
### What's supported now?
In [TF nightly release](https://pypi.org/project/tf-nightly-gpu/), we now support training with Keras using all strategies.
Note: When using `MultiWorkerMirroredStrategy` for multiple workers or `TPUStrategy` with more than one host with Keras, currently the user will have to explicitly shard or shuffle the data for different workers, but we will change this in the future to automatically shard the input data intelligently.
### Examples and Tutorials
Here is a list of tutorials and examples that illustrate the above integration end to end with Keras:
1. [Tutorial](../tutorials/distribute/keras.ipynb) to train MNIST with `MirroredStrategy`.
2. Official [ResNet50](https://github.com/tensorflow/models/blob/master/official/resnet/keras/keras_imagenet_main.py) training with ImageNet data using `MirroredStrategy`.
3. [ResNet50](https://github.com/tensorflow/tpu/blob/master/models/experimental/resnet50_keras/resnet50.py) trained with Imagenet data on Cloud TPus with `TPUStrategy`.
## Using `tf.distribute.Strategy` with Estimator
`tf.estimator` is a distributed training TensorFlow API that originally supported the async parameter server approach. Like with Keras, we've integrated `tf.distribute.Strategy` into `tf.estimator` so that a user who is using Estimator for their training can easily distribute their training with very few changes to their code. With this, Estimator users can now do synchronous distributed training on multiple GPUs and multiple workers, as well as use TPUs.
The usage of `tf.distribute.Strategy` with Estimator is slightly different than the Keras case. Instead of using `strategy.scope`, now we pass the strategy object into the [`RunConfig`](https://www.tensorflow.org/api_docs/python/tf/estimator/RunConfig) for the Estimator.
Here is a snippet of code that shows this with a premade estimator `LinearRegressor` and `MirroredStrategy`:
```
mirrored_strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(
train_distribute=mirrored_strategy, eval_distribute=mirrored_strategy)
regressor = tf.estimator.LinearRegressor(
feature_columns=[tf.feature_column.numeric_column('feats')],
optimizer='SGD',
config=config)
```
We use a premade Estimator here, but the same code works with a custom Estimator as well. `train_distribute` determines how training will be distributed, and `eval_distribute` determines how evaluation will be distributed. This is another difference from Keras, where we use the same strategy for both training and eval.
Now we can train and evaluate this Estimator with an input function:
```
def input_fn():
dataset = tf.data.Dataset.from_tensors(({"feats":[1.]}, [1.]))
return dataset.repeat(1000).batch(10)
regressor.train(input_fn=input_fn, steps=10)
regressor.evaluate(input_fn=input_fn, steps=10)
```
Another difference to highlight here between Estimator and Keras is the input handling. In Keras, we mentioned that each batch of the dataset is split across the multiple replicas. In Estimator, however, the user provides an `input_fn` and has full control over how the data is distributed across workers and devices. We do not split batches automatically, nor automatically shard the data across different workers. The provided `input_fn` is called once per worker, thus giving one dataset per worker. Then one batch from that dataset is fed to one replica on that worker, thereby consuming N batches for N replicas on 1 worker. In other words, the dataset returned by the `input_fn` should provide batches of size `PER_REPLICA_BATCH_SIZE`. The global batch size for a step can then be obtained as `PER_REPLICA_BATCH_SIZE * strategy.num_replicas_in_sync`. When doing multi-worker training, users will also want to either split their data across the workers, or shuffle with a random seed on each. You can see an example of how to do this in the [multi-worker tutorial](../tutorials/distribute/multi_worker.ipynb).
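To make this batch arithmetic concrete, here is a small sketch with made-up numbers (4 replicas on one worker, a per-replica batch of 8); in real code the replica count would come from `strategy.num_replicas_in_sync`:

```python
# Assumed numbers, for illustration only.
PER_REPLICA_BATCH_SIZE = 8
num_replicas_in_sync = 4  # stand-in for strategy.num_replicas_in_sync

# The input_fn should return batches of PER_REPLICA_BATCH_SIZE; each replica
# consumes one such batch, so one training step processes the global batch:
global_batch_size = PER_REPLICA_BATCH_SIZE * num_replicas_in_sync

assert global_batch_size == 32
```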
We showed an example of using `MirroredStrategy` with Estimator. You can use `TPUStrategy` with Estimator as well, in exactly the same way:
```
config = tf.estimator.RunConfig(
train_distribute=tpu_strategy, eval_distribute=tpu_strategy)
```
And similarly, you can use multi-worker and parameter server strategies as well. The code remains the same, but you need to use `tf.estimator.train_and_evaluate`, and set the "TF_CONFIG" environment variable for each binary running in your cluster.
### What's supported now?
In TF nightly release, we support training with Estimator using all strategies.
### Examples and Tutorials
Here are some examples that show end to end usage of various strategies with Estimator:
1. [End to end example](https://github.com/tensorflow/ecosystem/tree/master/distribution_strategy) for multi-worker training in tensorflow/ecosystem using Kubernetes templates. This example starts with a Keras model and converts it to an Estimator using the `tf.keras.estimator.model_to_estimator` API.
2. Official [ResNet50](https://github.com/tensorflow/models/blob/master/official/resnet/imagenet_main.py) model, which can be trained using either `MirroredStrategy` or `MultiWorkerMirroredStrategy`.
3. [ResNet50](https://github.com/tensorflow/tpu/blob/master/models/experimental/distribution_strategy/resnet_estimator.py) example with TPUStrategy.
## Using `tf.distribute.Strategy` with custom training loops
As you've seen, using `tf.distribute.Strategy` with high-level APIs takes only a couple of lines of code change. With a little more effort, `tf.distribute.Strategy` can also be used by users who are not using these frameworks.
TensorFlow is used for a wide variety of use cases and some users (such as researchers) require more flexibility and control over their training loops. This makes it hard for them to use the high level frameworks such as Estimator or Keras. For instance, someone using a GAN may want to take a different number of generator or discriminator steps each round. Similarly, the high level frameworks are not very suitable for Reinforcement Learning training. So these users will usually write their own training loops.
For these users, we provide a core set of methods through the `tf.distribute.Strategy` classes. Using these may require minor restructuring of the code initially, but once that is done, the user should be able to switch between GPUs / TPUs / multiple machines by just changing the strategy instance.
Here we will show a brief snippet illustrating this use case for a simple training example using the same Keras model as before.
Note: These APIs are still experimental and we are improving them to make them more user friendly.
First, we create the model and optimizer inside the strategy's scope. This ensures that any variables created with the model and optimizer are mirrored variables.
```
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
optimizer = tf.train.GradientDescentOptimizer(0.1)
```
Next, we create the input dataset and call `make_dataset_iterator` to distribute the dataset based on the strategy. This API is expected to change in the near future.
```
with mirrored_strategy.scope():
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(1000).batch(
global_batch_size)
input_iterator = mirrored_strategy.make_dataset_iterator(dataset)
```
Then, we define one step of the training. We use the optimizer to compute and apply the gradients that update our model's variables. To distribute this training step, we put it in a function `step_fn` and pass it to `strategy.experimental_run` along with the iterator created before:
```
def train_step():
def step_fn(inputs):
features, labels = inputs
logits = model(features)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
logits=logits, labels=labels)
loss = tf.reduce_sum(cross_entropy) * (1.0 / global_batch_size)
train_op = optimizer.minimize(loss)
with tf.control_dependencies([train_op]):
return tf.identity(loss)
per_replica_losses = mirrored_strategy.experimental_run(
step_fn, input_iterator)
mean_loss = mirrored_strategy.reduce(
tf.distribute.ReduceOp.MEAN, per_replica_losses)
return mean_loss
```
A few other things to note in the code above:
1. We used `tf.nn.softmax_cross_entropy_with_logits` to compute the loss, and then scaled the total loss by the global batch size. This is important because all the replicas train in sync, and the number of examples in each step of training is the global batch. If you're using TensorFlow's standard losses from `tf.losses` or `tf.keras.losses`, they are distribution aware and will take care of the scaling by number of replicas whenever a strategy is in scope.
2. We used the `strategy.reduce` API to aggregate the results returned by `experimental_run`. `experimental_run` returns results from each local replica in the strategy, and there are multiple ways to consume this result. You can `reduce` them to get an aggregated value. You can also do `strategy.unwrap(results)`* to get the list of values contained in the result, one per local replica.
*expected to change
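The loss-scaling point in (1) can be checked with plain NumPy — a sketch with made-up per-example losses for a global batch of 8 split across 2 replicas:

```python
import numpy as np

# Made-up per-example losses for a global batch of 8, split across 2 replicas.
per_example = np.array([0.5, 1.0, 2.0, 0.25, 1.5, 0.75, 3.0, 0.0])
replica_a, replica_b = per_example[:4], per_example[4:]
global_batch_size = per_example.size  # 8

# Each replica scales the SUM of its local losses by the GLOBAL batch size...
loss_a = replica_a.sum() / global_batch_size
loss_b = replica_b.sum() / global_batch_size

# ...so that summing across replicas (as all-reduce does for gradients)
# recovers the mean loss over the full global batch.
assert np.isclose(loss_a + loss_b, per_example.mean())
```

Scaling by the per-replica batch size instead would over-count each replica's contribution by the number of replicas.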
Finally, once we have defined the training step, we can initialize the iterator and variables and run the training in a loop:
```
with mirrored_strategy.scope():
iterator_init = input_iterator.initialize()
var_init = tf.global_variables_initializer()
loss = train_step()
with tf.Session() as sess:
sess.run([iterator_init, var_init])
for _ in range(10):
print(sess.run(loss))
```
In the example above, we used `make_dataset_iterator` to provide input to your training. We also provide two additional APIs, `make_input_fn_iterator` and `make_experimental_numpy_iterator`, to support other kinds of inputs. See their documentation in `tf.distribute.Strategy` for how they differ from `make_dataset_iterator`.
This covers the simplest case of using the `tf.distribute.Strategy` API to distribute custom training loops. We are in the process of improving these APIs. Since this use case requires more work on the part of the user, we will be publishing a separate detailed guide for it in the future.
### What's supported now?
In the TF nightly release, we support training with custom training loops using `MirroredStrategy` and `TPUStrategy`, as shown above. Support for other strategies will be coming soon; `MultiWorkerMirroredStrategy` support will be coming in the future.
### Examples and Tutorials
Here are some examples for using distribution strategy with custom training loops:
1. [Example](https://github.com/tensorflow/tensorflow/blob/5456cc28f3f8d9c17c645d9a409e495969e584ae/tensorflow/contrib/distribute/python/examples/mnist_tf1_tpu.py) to train MNIST using `TPUStrategy`.
## Other topics
In this section, we will cover some topics that are relevant to multiple use cases.
<a id="TF_CONFIG">
### Setting up TF\_CONFIG environment variable
</a>
For multi-worker training, as mentioned before, you need to set the "TF\_CONFIG" environment variable for each
binary running in your cluster. The "TF\_CONFIG" environment variable is a JSON string which specifies what
tasks constitute a cluster, their addresses and each task's role in the cluster. We provide a Kubernetes template in the
[tensorflow/ecosystem](https://github.com/tensorflow/ecosystem) repo which sets
"TF\_CONFIG" for your training tasks.
One example of "TF\_CONFIG" is:
```
os.environ["TF_CONFIG"] = json.dumps({
"cluster": {
"worker": ["host1:port", "host2:port", "host3:port"],
"ps": ["host4:port", "host5:port"]
},
"task": {"type": "worker", "index": 1}
})
```
This "TF\_CONFIG" specifies that there are three workers and two ps tasks in the
cluster, along with their hosts and ports. The "task" part specifies the
role of the current task in the cluster: worker 1 (the second worker). Valid roles in a cluster are
"chief", "worker", "ps" and "evaluator". There should be no "ps" job except when using `tf.distribute.experimental.ParameterServerStrategy`.
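Since "TF\_CONFIG" is just JSON, it can be inspected with the standard `json` module. This sketch builds the example configuration above (with hypothetical ports) and recovers the cluster shape:

```python
import json
import os

# The example "TF_CONFIG" from above; hosts and ports are hypothetical.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "worker": ["host1:2222", "host2:2222", "host3:2222"],
        "ps": ["host4:2222", "host5:2222"],
    },
    "task": {"type": "worker", "index": 1},
})

# Parse it back, as a process in the cluster would.
config = json.loads(os.environ["TF_CONFIG"])
num_workers = len(config["cluster"]["worker"])
num_ps = len(config["cluster"]["ps"])
task = config["task"]

assert (num_workers, num_ps) == (3, 2)
assert task == {"type": "worker", "index": 1}  # this process is the second worker
```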
## What's next?
`tf.distribute.Strategy` is actively under development. We welcome you to try it out and provide your feedback via [issues on GitHub](https://github.com/tensorflow/tensorflow/issues/new).
# Streamlines tutorial
In this tutorial you will learn how to download and render streamline data to display connectivity data. In brief, injections of anterogradely transported viruses are performed in wild type and CRE-driver mouse lines. The viruses express fluorescent proteins so that efferent projections from the injection locations can be traced everywhere in the brain. The images with the fluorescence data are acquired and registered to the Allen Coordinates reference frame. The traces of the streamlines are then extracted using a fast marching algorithm (by [https://neuroinformatics.nl](https://neuroinformatics.nl)).
<img src="https://raw.githubusercontent.com/BrancoLab/BrainRender/master/Docs/Media/streamlines2.png" width="600" height="350">
The connectivity data are produced as part of the Allen Brain Atlas [Mouse Connectivity project](http://connectivity.brain-map.org).
The first step towards being able to render streamlines data is to identify the set of experiments you are interested in (i.e. injections in the primary visual cortex of wild type mice). To do so you can use the experiments explorer at [http://connectivity.brain-map.org](http://connectivity.brain-map.org).
Once you have selected the experiments, you can download metadata about them using the 'download data as csv' option at the bottom of the page. This metadata .csv is what we can then use to get a link to the data to download.
First we do the usual set up steps to get brainrender up and running
### Setup
```
# We begin by adding the current path to sys.path to make sure that the imports work correctly
import sys
sys.path.append('../')
import os
# Set up VTKPLOTTER to work in Jupyter notebooks
from vtkplotter import *
# Import variables
from brainrender import * # <- these can be changed to personalize the look of your renders
# Import brainrender classes and useful functions
from brainrender.scene import Scene
from brainrender.Utils.parsers.streamlines import StreamlinesAPI
from brainrender.Utils.data_io import listdir
streamlines_api = StreamlinesAPI()
```
## Downloading data
If you have streamlines data already saved somewhere, you can skip this section.
### Manual download
To download streamlines data, you have two options (see the [user guide](Docs/UserGuide.md) for more details).
If you head to [http://connectivity.brain-map.org](http://connectivity.brain-map.org) you can download a .csv file with the experiment IDs of interest. Then you can use the following function to download the streamline data:
```
# parse .csv file
# Make sure to put the path to your downloaded file here
filepaths, data = streamlines_api.extract_ids_from_csv("Examples/example_files/experiments_injections.csv",
download=True)
```
The `filepaths` variable stores the paths to the .json files that have been saved by the `streamlines_api`, the `data` variable already contains the streamlines data. You can pass either `filepaths` or `data` to `scene.add_streamlines` (see below) to render your streamlines data.
### Automatic download
If you know that you simply want to download the data to a specific target structure, then you can let brainrender take care of downloading the data for you. This is how:
```
filepaths, data = streamlines_api.download_streamlines_for_region("CA1") # <- get the streamlines for CA1
```
Once you have downloaded the streamlines data, it's time to render it in your scene.
## Rendering streamlines data
You can pass either `data` or `filepaths` to `scene.add_streamlines`, just make sure to use the correct keyword argument (unimaginatively called `data` and `filepath`).
```
# Start by creating a scene
scene = Scene(jupyter=True)
# you can then pass this list of filepaths to add_streamlines.
scene.add_streamlines(data, color="green")
# alternative you can pass a string with the path to a single file or a list of paths to the .json files that you
# created in some other way.
# then you can just render your scene
scene.render()
vp = Plotter(axes=0)
vp.show(scene.get_actors(), viewup=(10, 0.7, 0))
```
`add_streamlines` takes a few arguments that let you personalize the look of the streamlines:
* `colorby`: you can pass the acronym of a brain region, then the default color of that region will be used for the streamlines
* `color`: alternatively you can specify the color of the streamlines directly.
* `alpha`, `radius`: you can change the transparency and the thickness of the actors used to render the streamlines.
* `show_injection_site`: if set to True, a sphere will be rendered at the locations that correspond to the injection sites.
Don't forget to check the other examples to learn more about how to use brainrender to make amazing 3D renderings!
Also, you can find a list of variables you can play around with in brainrender.variables.py
Playing around with these variables will allow you to make the rendering look exactly how you want them to be.
```
!pip install plotly -U
import numpy as np
import matplotlib.pyplot as plt
from plotly import graph_objs as go
import plotly as py
from scipy import optimize
print("hello")
```
Generate the data
```
m = np.random.rand()
n = np.random.rand()
num_of_points = 100
x = np.random.random(num_of_points)
y = x*m + n + 0.15*np.random.random(num_of_points)
fig = go.Figure(data=[go.Scatter(x=x, y=y, mode='markers', name='all points')],
layout=go.Layout(
xaxis=dict(range=[np.min(x), np.max(x)], autorange=False),
yaxis=dict(range=[np.min(y), np.max(y)], autorange=False)
)
)
fig.show()
print("m=" + str(m) + " n=" + str(n) )
# fmin
def stright_line_fmin(x,y):
dist_func = lambda p: (((y-x*p[0]-p[1])**2).mean())
p_opt = optimize.fmin(dist_func, np.array([0,0]))
return p_opt
stright_line_fmin(x,y)
# PCA
def straight_line_pca(x,y):
X = np.append(x-x.mean(),y-y.mean(), axis=1)
# Data matrix X, assumes 0-centered
n, m = X.shape
# Compute covariance matrix
C = np.dot(X.T, X) / (n-1)
# Eigen decomposition
eigen_vals, eigen_vecs = np.linalg.eig(C)
# Project X onto PC space
X_pca_inv = np.dot(np.array([[1,0],[-1,0]]), np.linalg.inv(eigen_vecs))
X_pca = np.dot(X, eigen_vecs)
x_min = (x-x.mean()).min()
x_max = (x-x.mean()).max()
fig = go.Figure(data=[
go.Scatter(x=x.ravel(), y=y.ravel(), mode='markers', name='all points'),
go.Scatter(x=X_pca_inv[:, 0]+x.mean(), y=X_pca_inv[:,1]+y.mean(), mode='lines', name='pca estimation')])
fig.show()
return X_pca_inv[1, 1]/X_pca_inv[1, 0], y.mean() - x.mean()*X_pca_inv[1, 1]/X_pca_inv[1, 0]
c = straight_line_pca(x[:, np.newaxis],y[:, np.newaxis])
c
# least squares
def least_square_fit(x, y):
# model: y_i = h*x_i
# cost: (Y-h*X)^T * (Y-h*X)
# solution: h = (X^t *X)^-1 * X^t * Y
return np.dot(np.linalg.inv(np.dot(x.transpose(), x)), np.dot(x.transpose() , y))
least_square_fit(np.append(x[:, np.newaxis], np.ones_like(x[:, np.newaxis]), axis=1), y)
# SVD
def svd_fit(x, y):
# model: y_i = h*x_i
# minimize: [x_0, 1, -y_0; x1, 1, -y_1; ...]*[h, 1] = Xh = 0
# do so by: take the right singular vector corresponding to the smallest singular value of X
X = np.append(x, -y, axis=1)
u, s, vh = np.linalg.svd(X)
return vh[-1, :2]/vh[-1,-1]
m_, n_ = svd_fit(np.append(x[:, np.newaxis], np.ones_like(x[:, np.newaxis]), axis=1), y[:, np.newaxis])
print(m_, n_)
#Ransac
def ransac(src_pnts, distance_func, model_func, num_of_points_to_determine_model,
dist_th, inliers_ratio=0.7, p=0.95):
"""Fit a model to noisy data with RANSAC.
Parameters:
src_pnts : data points used by RANSAC to find the model
distance_func : a function pointer to a distance function.
The distance function takes a model and the points and returns the per-point cost
model_func : builds a model from a minimal sample of points
num_of_points_to_determine_model : minimal number of points needed to fit the model
dist_th : distance threshold below which a point counts as an inlier
inliers_ratio : assumed fraction of inliers in the data
p : success probability
Returns:
best_model : the model with the most inliers
proposed_line : per-iteration records of (model, x, y, sample indices, distances, inlier count)
"""
min_x = src_pnts[:, 0].min()
max_x = src_pnts[:, 0].max()
print(min_x, max_x)
num_of_points = src_pnts.shape[0]
num_of_iter = int(np.ceil(np.log(1-p)/np.log(1-inliers_ratio**num_of_points_to_determine_model)))
proposed_line = []
max_num_of_inliers = 0
for i in range(num_of_iter):
indx = np.random.permutation(num_of_points)[:num_of_points_to_determine_model]
curr_model = model_func(src_pnts[indx, :])
x=np.array([min_x, max_x])
y=curr_model(x)
print(y)
d = distance_func(curr_model, src_pnts)
num_of_inliers = np.sum(d<dist_th)
proposed_line.append((curr_model, x, y, indx, d, num_of_inliers))
if num_of_inliers > max_num_of_inliers:
max_num_of_inliers = num_of_inliers
best_model = curr_model
return best_model, proposed_line
def straight_line_from_two_points(pnts):
    # Build a line model y = m*x + n from two points
    m = (pnts[1, 1] - pnts[0, 1]) / (pnts[1, 0] - pnts[0, 0])
    n = (pnts[1, 0]*pnts[0, 1] - pnts[0, 0]*pnts[1, 1]) / (pnts[1, 0] - pnts[0, 0])
    mod_func = lambda x: x*m + n
    return mod_func
src_pnts = np.array([x, y]).transpose()
distance_func = lambda model, pnts: (model(pnts[:, 0]) - pnts[:, 1])**2
model_func = straight_line_from_two_points
num_of_points_to_determine_model = 2
dist_th = 0.2
best_model, ransac_run = ransac(src_pnts, distance_func, model_func, num_of_points_to_determine_model, dist_th)
print(x.min())
print(x.max())
x_ransac = np.array([x.min(), x.max()])
y_ransac = best_model(x_ransac)
print(y_ransac)
scatter_xy = go.Scatter(x=x, y=y, mode='markers', name="all points")
frames=[go.Frame(
data=[scatter_xy,
go.Scatter(x=x[item[3]], y=y[item[3]], mode='markers', line=dict(width=2, color="red"), name="selected points"),
go.Scatter(x=item[1], y=item[2], mode='lines', name='current line')]) for item in ransac_run]
fig = go.Figure(
data=[go.Scatter(x=x, y=y, mode='markers', name='all points'),
go.Scatter(x=x, y=y, mode='markers', name="selected points"),
go.Scatter(x=x, y=y, mode='markers', name="current line"),
go.Scatter(x=x_ransac, y=y_ransac, mode='lines', name="best selection")],
layout=go.Layout(
xaxis=dict(range=[np.min(x), np.max(x)], autorange=False),
yaxis=dict(range=[np.min(y), np.max(y)], autorange=False),
title="Ransac guesses",
updatemenus=[dict(
type="buttons",
buttons=[dict(label="Play",
method="animate",
args=[None])])]
),
frames=frames
)
fig.show()
```
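The `num_of_iter` expression inside `ransac` above is the standard RANSAC trial-count bound: to reach success probability `p` when a fraction `inliers_ratio` of the data are inliers and each candidate model needs `s` points, one needs `N = ceil(log(1-p) / log(1 - inliers_ratio**s))` trials. A small standalone check, using the same defaults as the function above:

```python
import numpy as np

def ransac_iterations(p=0.95, inlier_ratio=0.7, s=2):
    """Number of RANSAC trials needed so that, with probability p,
    at least one minimal sample of s points is all inliers."""
    return int(np.ceil(np.log(1 - p) / np.log(1 - inlier_ratio**s)))

print(ransac_iterations())                           # 5 trials for the defaults above
print(ransac_iterations(p=0.99, inlier_ratio=0.5))   # 17: a lower inlier ratio needs more trials
```

Note how cheap this is: the trial count depends only on the inlier ratio and the minimal sample size, not on the dataset size.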
| github_jupyter |
```
%pylab inline
wvl = 488 # wavelength [nm]
NA = 1.2 # numerical aperture
n = 1.33 # refractive index of propagating medium
pixel_size = 50 # effective camera pixel size [nm]
chip_size = 128 # pixels
def widefield_psf_2d(wvl, NA, n, pixel_size, chip_size, z=0.0):
"""
Construct the electric field for a widefield PSF in 2d.
Parameters
----------
wvl : float
Wavelength of emitted light in nm.
NA : float
Numerical aperture of the optical system
n : float
Refractive index surrounding point source
pixel_size : float
Effective pixel size of camera chip in nm
chip_size : int
How many pixels on the camera chip?
z : float
Depth from focus
Returns
-------
psf : np.array
Array of np.complex values describing electric field of the PSF.
"""
# Create frequency space
# f = np.arange(-chip_size//2,chip_size//2)/(pixel_size*chip_size) # <cycles per chip>*<cycle size [nm^-1]>
# If f above is used, we need an additional ifftshift
f = np.fft.fftfreq(chip_size, pixel_size)*wvl/n
X, Y = np.meshgrid(f,f)
# Create an aperture in frequency space
# Clip on 1/<spatial resolution of the system> (spatial frequency)
# Note the "missing" factor of 2 since we are thresholding on radius
# rescale by refractive index
aperture = (X*X+Y*Y) <= (NA/n)**2
# The pupil can also contain aberrations, but they must
# be clipped by aperture
k = 2.0*np.pi/(n*wvl)
pf = np.exp(1j*k*z*np.sqrt(1-np.minimum(X*X+Y*Y,1)))
pupil = aperture*pf
# Take the inverse fourier transform of the pupil
# to get the point spread function
psf = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(pupil)))
return psf
def amplitude_psf(psf):
""" Return the amplitude of a PSF, described by its electric field. """
return np.abs(psf)
def intensity_psf(psf):
""" Return the intensity of a PSF, described by its electric field. """
return np.abs(psf*np.conj(psf))
# psf = widefield_psf_2d(wvl, NA, n, pixel_size, chip_size)
res_z = 2*wvl/(NA*NA)
psf_z = np.array([widefield_psf_2d(wvl, NA, n, pixel_size, chip_size,z=i) for i in np.arange(-chip_size//2,chip_size//2)*2*pixel_size])
fig, axs = plt.subplots(1,2,figsize=(12,6))
n_steps=chip_size
offset = chip_size//2+n_steps//8
fs = 24
sl = slice(chip_size//2-n_steps//4,chip_size//2+n_steps//4)
psf_im = intensity_psf(psf_z[sl,psf_z.shape[1]//2,sl]).T
axs[0].imshow(psf_im, cmap='gray',vmax=1.5e-3)
axs[0].annotate("", xy=(20, offset-25), xytext=(10, offset-25),
arrowprops=dict(arrowstyle="->",color='white',linewidth=2),
color='white')
axs[0].annotate("", xy=(10, offset-34.6), xytext=(10, offset-24.6),
arrowprops=dict(arrowstyle="->",color='white',linewidth=2),
color='white')
axs[0].annotate("x", xy=(5, offset-27.5),color="white",fontsize=fs)
axs[0].annotate("z", xy=(12.5, offset-20),color="white",fontsize=fs)
axs[0].set_xticks([])
axs[0].set_yticks([])
otf_im = intensity_psf(np.fft.ifftshift(np.fft.fft2(intensity_psf(psf_z[:,psf_z.shape[1]//2,:]))))[sl,sl].T
axs[1].imshow(otf_im,cmap='gray',vmax=3e-2)
axs[1].annotate("", xy=(20, offset-25), xytext=(10, offset-25),
arrowprops=dict(arrowstyle="->",color='white',linewidth=2),
color='white')
axs[1].annotate("", xy=(10, offset-34.6), xytext=(10, offset-24.6),
arrowprops=dict(arrowstyle="->",color='white',linewidth=2),
color='white')
axs[1].annotate("k$_x$", xy=(5, offset-27.5),color="white",fontsize=fs)
axs[1].annotate("k$_z$", xy=(12.5, offset-20),color="white",fontsize=fs)
axs[1].set_xticks([])
axs[1].set_yticks([])
fig.tight_layout()
plt.savefig('introduction-simple-psf.png', dpi=600)
```
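As a rough sanity check on the imaging parameters above (a back-of-the-envelope sketch, using the Abbe convention): the lateral diffraction limit is λ/(2·NA), so 50 nm pixels comfortably oversample the PSF, while the axial extent 2λ/NA² (the same expression as `res_z` above) is several times larger:

```python
wvl, NA, pixel_size = 488.0, 1.2, 50.0   # nm, same values as above

d_lateral = wvl / (2 * NA)   # Abbe lateral diffraction limit
d_axial = 2 * wvl / NA**2    # axial extent, same expression as res_z above

print(round(d_lateral, 1))   # 203.3 nm -> roughly 4 pixels across, well sampled
print(round(d_axial, 1))     # 677.8 nm -> the PSF is much more elongated axially
```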
# DB acute analysis
By Stephen Karl Larroque @ Coma Science Group, GIGA Research, University of Liege
Creation date: 2018-05-27
License: MIT
v1.0.3
DESCRIPTION:
Calculate whether patients were acute at the time of MRI acquisition (28 days included by default).
This expects as input a csv file with both the accident date and dicom date (see other scripts). The result will be saved as a new csv file.
INSTALL NOTE:
You need to pip install pandas before launching this script.
Tested on Python 2.7.13
USAGE:
Input the csv demographics file that is the output of the notebook stats_analysis_fmp_dicoms_db.ipynb
TODO:
```
# Forcefully autoreload all python modules
%load_ext autoreload
%autoreload 2
# AUX FUNCTIONS
import os, sys
cur_path = os.path.realpath('.')
sys.path.append(os.path.join(cur_path, 'csg_fileutil_libs'))
import re
from csg_fileutil_libs.aux_funcs import save_df_as_csv, _tqdm, reorder_cols_df, find_columns_matching, convert_to_datetype
# Nice plots!
import matplotlib.pyplot as plt
plt.style.use('ggplot')
# PARAMETERS
# FileMakerPro (FMP) database, cleaned with the provided script
fmp_agg_csv = r'databases_output\fmp_db_subjects_aggregated.csv_etiosedatfixed_dicomsdatediag_dicompathsedat.csv'
# Import the csv dbs as dataframes
import pandas as pd
import ast
cf_agg = pd.read_csv(fmp_agg_csv, sep=';', low_memory=False).dropna(axis=0, how='all') # drop empty lines
cf_agg.set_index('Name', inplace=True)
cf_agg
def df_extract_first_date(x):
if not pd.isnull(x):
try:
x2 = ast.literal_eval(x)
except SyntaxError as exc:
x2 = ast.literal_eval("['"+x+"']")
return x2[0].split(':')[0]
else:
return x
first_crsr_date = cf_agg['CRSr::Date and subscores'].apply(df_extract_first_date)
cf_agg['CRSr first date'] = first_crsr_date
cf_agg
# Convert to datetime the columns we need, to ease date calculations
cf_agg2 = convert_to_datetype(cf_agg, 'Date of Accident', '%d/%m/%Y', errors='coerce')
cf_agg2 = convert_to_datetype(cf_agg2, 'CRSr first date', '%d/%m/%Y', errors='coerce')
cf_agg2 = convert_to_datetype(cf_agg2, 'Dicom Date Sync With CRS-R', '%Y-%m-%d', errors='coerce')
cf_agg2
# Acute from a random CRS-R date
cf_agg2['Days random CRSr since accident'] = cf_agg2['CRSr first date'] - cf_agg2['Date of Accident']
cf_agg2['Days random CRSr since accident']
# Acute from dicom date
cf_agg2['Days scan since accident'] = cf_agg2['Dicom Date Sync With CRS-R'] - cf_agg2['Date of Accident']
cf_agg2.loc[:, ['Name', 'CRSr::Best Computed Outcome', 'CRSr::Best Diagnosis', 'Final diagnosis', 'Days scan since accident']]
cf_agg2['AcuteDicom'] = (cf_agg2['Days scan since accident'] <= pd.Timedelta('28 days'))
# Nullify if no dicom date available (then cannot know if acute or not)
cf_agg2.loc[cf_agg2['Dicom Date Sync With CRS-R'].isnull(), ['Days scan since accident', 'Days random CRSr since accident']] = None
cf_agg2.loc[cf_agg2['Dicom Date Sync With CRS-R'].isnull() | cf_agg2['Date of Accident'].isnull(), 'AcuteDicom'] = ''
# Save as csv
save_df_as_csv(cf_agg2, fmp_agg_csv+'_acute.csv', fields_order=False, keep_index=False)
cf_agg2
```
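The core of the acute-window logic above reduces to a few lines of pandas date arithmetic; a self-contained sketch with hypothetical dates:

```python
import pandas as pd

df = pd.DataFrame({
    'Date of Accident': ['01/01/2018', '01/01/2018'],
    'Dicom Date': ['2018-01-15', '2018-03-01'],   # hypothetical scan dates
})
df['Date of Accident'] = pd.to_datetime(df['Date of Accident'], format='%d/%m/%Y')
df['Dicom Date'] = pd.to_datetime(df['Dicom Date'], format='%Y-%m-%d')
df['Days scan since accident'] = df['Dicom Date'] - df['Date of Accident']
# Acute = scanned within 28 days of the accident (inclusive), as in the script above
df['AcuteDicom'] = df['Days scan since accident'] <= pd.Timedelta('28 days')
print(df['AcuteDicom'].tolist())  # [True, False]
```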
This notebook reads the Food Atlas datasets and the CDC health indicator datasets from a GitHub repository, integrates them, and cleans the data.
```
#merge food atlas datasets into one
import pandas as pd
Overall_folder='C:/Users/cathy/Capstone_project_1/'
dfs=list()
url_folder='https://raw.githubusercontent.com/cathyxinxyz/Capstone_Project_1/master/Datasets/Food_atlas/'
filenames=['ACCESS','ASSISTANCE','HEALTH','INSECURITY','LOCAL','PRICES_TAXES','RESTAURANTS','SOCIOECONOMIC','STORES']
for i,filename in enumerate(filenames):
filepath=url_folder+filename+".csv"
d=pd.read_csv(filepath,index_col='FIPS',encoding="ISO-8859-1")
#append datasets to the list and drop the redundant columns: 'State' and 'County'
if i!=0:
dfs.append(d.drop(['State', 'County'], axis=1))
else:
dfs.append(d)
#merge datasets
df_merge=pd.concat(dfs, join='outer', axis=1)
print (df_merge.head(5))
```
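The merge above relies on `pd.concat(dfs, join='outer', axis=1)` aligning the frames on their shared `FIPS` index and keeping the union of indices; a toy illustration with made-up values:

```python
import pandas as pd

a = pd.DataFrame({'State': ['AL', 'AK'], 'pct_a': [1.0, 2.0]}, index=[1001, 2013])
b = pd.DataFrame({'pct_b': [3.0, 4.0]}, index=[1001, 4001])
merged = pd.concat([a, b], join='outer', axis=1)
print(merged.shape)               # (3, 3): union of the two FIPS indices
print(merged.loc[1001].tolist())  # ['AL', 1.0, 3.0]
```

Counties present in only one frame get `NaN` in the other frame's columns, which is why the missing-value check below matters.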
Check columns for missing values
```
df_merge.describe()
number_null_values_percol=df_merge.isnull().sum(axis=0)
#columns with over 5% missing values
cols_with_over_5_percent_null_values=number_null_values_percol[number_null_values_percol>0.05*df_merge.shape[0]]
print (cols_with_over_5_percent_null_values.index)
#drop these columns first
df_merge=df_merge.drop(list(cols_with_over_5_percent_null_values.index), axis=1)
df_merge.shape
#check number of remaining columns
print (df_merge.columns)
```
Categorize the columns into groups: category data ('State' and 'County'), count data, percent data, # per 1,000 pop, and percent change.
Columns to keep: category data ('State' and 'County'), percent data, # per 1,000 pop, and percent change; remove the count data because it is not adjusted for population size.
Each column name is highly abstract and unreadable, so we extract the descriptions from the variable information file provided by Food Atlas.
```
url='https://raw.githubusercontent.com/cathyxinxyz/Capstone_Project_1/master/Datasets/Food_atlas/variable_info.csv'
var_info_df=pd.read_csv(url,encoding="ISO-8859-1", index_col='Variable Code')
```
Further filter variables based on the following principles:
i. keep variables that are adjusted for population size: '% change', 'Percent', '# per 1,000 pop', 'Percentage points';
ii. keep variables that are most valuable for the analysis;
iii. keep variables whose values are valid: e.g. no negative values for variables with units 'Percent' or '# per 1,000 pop'.
```
#units to keep: '% change', 'Percent', '# per 1,000 pop','Percentage points'
#var_info_df['Units'].isin(['Percent', '# per 1,000 pop','Dollars'])
var_info_df_subset=var_info_df[var_info_df['Units'].isin(['Percent', '# per 1,000 pop','Dollars'])]
var_subset=list(var_info_df_subset.index)
var_subset.extend(['State', 'County'])
#print (var_subset)
df_subset=df_merge.loc[:, var_subset]
#print (df_merge.shape)
print (df_subset.shape)
#check whether each column has valid values:
# - columns with unit 'Percent' should have values between 0 and 100
# - columns with unit '# per 1,000 pop' should have values between 0 and 1000
# - columns with unit 'Dollars' should be non-negative
#Replace invalid values with np.nan
import numpy as np
for c in df_subset.columns:
    if c in var_info_df.index:
        if var_info_df.loc[c]['Units'] == 'Percent':
            df_subset.loc[(df_subset[c] < 0) | (df_subset[c] > 100), c] = np.nan
        elif var_info_df.loc[c]['Units'] == '# per 1,000 pop':
            df_subset.loc[(df_subset[c] < 0) | (df_subset[c] > 1000), c] = np.nan
        elif var_info_df.loc[c]['Units'] == 'Dollars':
            df_subset.loc[df_subset[c] < 0, c] = np.nan
df_subset.shape
```
get the average of variables measured at two time points
```
var_tup_dict={}
for c in df_subset.columns:
if c in var_info_df.index:
k=(var_info_df.loc[c]['Category Name'],var_info_df.loc[c]['Sub_subcategory Name'],var_info_df.loc[c]['Units'])
if k not in var_tup_dict.keys():
var_tup_dict[k]=list()
var_tup_dict[k].append(c)
print (var_tup_dict)
n=1
var_name_cat_subcat=list()
for k in var_tup_dict.keys():
df_subset['var'+str(n)]=(df_subset[var_tup_dict[k][0]]+df_subset[var_tup_dict[k][-1]])/2
var_name_cat_subcat.append(['var'+str(n), k[0], k[1]])
df_subset=df_subset.drop(var_tup_dict[k], axis=1)
n+=1
df_subset.columns
df_subset.shape
# further drop variables that have redundant information
dropped=['var'+str(n) for n in [24,25, 42]]
dropped.extend(['var'+str(n) for n in range(45,54)])
dropped.extend(['var'+str(n) for n in [55,56]])
df_subset=df_subset.drop(dropped, axis=1)
df_subset.shape
df_subset=df_subset.drop(['var28','var29','var43','var54','var57'],axis=1)
var_name_info_df=pd.DataFrame(var_name_cat_subcat, columns=['variable','category', 'sub_category'])
var_name_info_df.to_csv('C:/Users/cathy/Capstone_project_1/Datasets/Food_atlas/Var_name_info.csv',index=False)
df_subset.to_csv(Overall_folder+'Datasets/food_environment.csv')
```
Integrate the CDC datasets into one
```
import pandas as pd
dfs=list()
sub_folder=Overall_folder+'/Datasets/CDC/'
filenames=['Diabetes_prevalence',
'Obesity_prevalence',
'Physical_inactive_prevalence']
for filename in filenames:
filepath=sub_folder+filename+".csv"
df=pd.read_csv(filepath,index_col='FIPS')
if 'Diabetes' in filename:
df.columns=df.columns.astype(str)+'_db'
elif 'Obesity' in filename:
df.columns=df.columns.astype(str)+'_ob'
elif 'Physical' in filename:
df.columns=df.columns.astype(str)+'_phy'
dfs.append(df)
#merge datasets
CDC_merge=pd.concat(dfs, join='outer', axis=1)
CDC_merge.info()
#Find out the non-numeric entries in CDC_merge
for c in CDC_merge.columns:
    num_non_numeric = (~CDC_merge[c].apply(lambda x: isinstance(x, (int, float)))).sum()
    if num_non_numeric > 0:
        print(c, num_non_numeric, CDC_merge[pd.to_numeric(CDC_merge[c], errors='coerce').isnull()])
#It turns out that some entries are 'No Data' or NaN, so I replace the 'No Data' with NaN values
CDC_merge=CDC_merge.replace('No Data', np.nan)
CDC_merge=CDC_merge.astype(float)
#now check the CDC_merge
CDC_merge.info()
#choose the latest prevalence of diabetes, obesity and physical inactivity to merge with df_tp
CDC_subset = CDC_merge[['2013_db', '2013_ob', '2011_phy', '2012_phy', '2013_phy']].copy()
CDC_subset['prevalence of physical inactivity'] = (CDC_subset['2011_phy'] + CDC_subset['2012_phy'] + CDC_subset['2013_phy'])/3
CDC_subset.head(5)
CDC_subset.rename(columns={'2013_db': 'prevalence of diabetes', '2013_ob': 'prevalence of obesity'}, inplace=True)
CDC_subset[['prevalence of diabetes', 'prevalence of obesity', 'prevalence of physical inactivity']].to_csv(Overall_folder+'Datasets/Db_ob_phy.csv')
```
Integrate the geography dataset
```
df=pd.read_excel(Overall_folder+'Datasets/geography/ruralurbancodes2013.xls')
df.head(5)
df=df.set_index('FIPS')
df_RUCC_info=pd.DataFrame()
df_RUCC_info['RUCC_2013']=df['RUCC_2013'].unique()
df[df['RUCC_2013']==1]
df[df['RUCC_2013']==4]['Description'].unique()[0]
description_dict = {code: df[df['RUCC_2013'] == code]['Description'].unique()[0]
                    for code in range(1, 10)}
description_dict
df_RUCC_info['RUCC_2013']
df_RUCC_info['categories']=df_RUCC_info['RUCC_2013'].map(description_dict)
df_RUCC_info
df_RUCC_info.to_csv(Overall_folder+'Datasets/rural_urban_category.csv', index=False)
df.to_csv(Overall_folder+'Datasets/rural_urban_codes.csv')
df[['RUCC_2013']].to_csv(Overall_folder+'Datasets/RUCC_codes.csv')
```
Integrate information on the uninsured population from 2011 to 2013
```
def Guess_skiprows(filename, firstcol):
    """Find how many leading rows to skip so that `firstcol` appears in the header."""
    skiprows = None
    for n in range(100):
        try:
            df = pd.read_csv(filename, skiprows=n)
            if firstcol in df.columns[0]:
                print(n, df.columns)
                skiprows = n
                break
        except Exception:
            continue
    return skiprows
import pandas as pd
def Extract_number(x):
import re
if type(x)==str:
num_string=''.join(re.findall('\d+', x ))
if num_string !='':
return float(num_string)
else:
return None
elif type(x) in [int, float]:
return x
def Choose_Subset(df):
df=df[df['agecat']==0]
df=df[df['sexcat']==0]
df=df[df['racecat']==0]
df=df[df['iprcat']==0]
return df
df_dicts={}
years=[2011, 2012, 2013]
for year in years:
filename='C:/Users/cathy/Capstone_Project_1/Datasets/SAHIE/sahie_{}.csv'.format(year)
firstcol='year'
skiprows=Guess_skiprows(filename, firstcol)
df=pd.read_csv(filename, skiprows=skiprows)
df=Choose_Subset(df)
df['FIPS'] = df['statefips'].astype(str).str.zfill(2) + df['countyfips'].astype(str).str.zfill(3)
df['FIPS']=df['FIPS'].astype(int)
df=df.set_index('FIPS')
df['NUI']=df['NUI'].apply(Extract_number)
df_dicts[year]=df[['NUI']]
df_dem=pd.read_csv('C:/Users/cathy/Capstone_Project_1/Datasets/Food_atlas/Supplemental_data_county.csv', encoding="ISO-8859-1", index_col='FIPS')
for year in years:
df_dem['Population Estimate, {}'.format(year)]=df_dem['Population Estimate, {}'.format(year)].apply(lambda x:float(''.join(x.split(','))))
df_combineds=list()
for year in years:
df_combined=pd.concat([df_dicts[year], df_dem['Population Estimate, {}'.format(year)]],axis=1, join='inner')
df_combined['frac_uninsured_{}'.format(year)]=df_combined['NUI']/df_combined['Population Estimate, {}'.format(year)]
df_combineds.append(df_combined['frac_uninsured_{}'.format(year)])
df_frac_nui=pd.concat(df_combineds, axis=1)
df_frac_nui
import numpy as np
df_frac_nui['frac_uninsured']=(df_frac_nui['frac_uninsured_2011']+df_frac_nui['frac_uninsured_2012']+df_frac_nui['frac_uninsured_2013'])/3
df_frac_nui['frac_uninsured']
df_frac_nui[['frac_uninsured']].to_csv('C:/Users/cathy/Capstone_Project_1/Datasets/Uninsured.csv')
```
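The `FIPS` construction above concatenates a zero-padded 2-digit state code with a zero-padded 3-digit county code, following the standard 5-digit county FIPS convention; a minimal check:

```python
def county_fips(statefips, countyfips):
    # 2-digit state code + 3-digit county code, both zero-padded on the left
    return str(statefips).zfill(2) + str(countyfips).zfill(3)

print(county_fips(1, 1))    # '01001'
print(county_fips(56, 45))  # '56045'
```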
Integrate all datasets
```
filenames=['food_environment', 'Db_ob_phy', 'Uninsured', 'RUCC_codes']
Overall_folder='C:/Users/cathy/Capstone_Project_1/'
dfs=list()
for filename in filenames:
df=pd.read_csv(Overall_folder+'Datasets/'+filename+'.csv', index_col='FIPS', encoding="ISO-8859-1")
dfs.append(df)
df_merge=pd.concat(dfs, axis=1, join='inner')
df_merge.info()
df_merge.to_csv(Overall_folder+'Datasets/combined.csv')
```
Combine state, county, and FIPS code into one file for the map
```
df=pd.read_csv(Overall_folder+'Datasets/Food_atlas/Supplemental_data_county.csv',encoding="ISO-8859-1", index_col='FIPS')
df.info()
df['State']=df['State'].apply((lambda x:x.lower()))
df['County']=df['County'].apply((lambda x:x.lower()))
df['State']=df['State'].apply((lambda x:("").join(x.split(' '))))
df['County']=df['County'].apply((lambda x:("").join(x.split(' '))))
df['County']
df[['State', 'County']].to_csv(Overall_folder+'Datasets/state_county_name.csv')
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Gradient Boosted Trees: Model understanding
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/estimators/boosted_trees_model_understanding"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/estimators/boosted_trees_model_understanding.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/tree/master/site/en/tutorials/estimators/boosted_trees_model_understanding.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
For an end-to-end walkthrough of training a Gradient Boosting model check out the [boosted trees tutorial](https://www.tensorflow.org/tutorials/estimators/boosted_trees). In this tutorial you will:
* Learn how to interpret a Boosted Trees model both *locally* and *globally*
* Gain intuition for how a Boosted Trees model fits a dataset
## How to interpret Boosted Trees models both locally and globally
Local interpretability refers to an understanding of a model’s predictions at the individual example level, while global interpretability refers to an understanding of the model as a whole. Such techniques can help machine learning (ML) practitioners detect bias and bugs during the model development stage.
For local interpretability, you will learn how to create and visualize per-instance contributions. To distinguish this from feature importances, we refer to these values as directional feature contributions (DFCs).
For global interpretability you will retrieve and visualize gain-based feature importances, [permutation feature importances](https://www.stat.berkeley.edu/~breiman/randomforest2001.pdf) and also show aggregated DFCs.
## Load the titanic dataset
You will be using the titanic dataset, where the (rather morbid) goal is to predict passenger survival, given characteristics such as gender, age, class, etc.
```
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import pandas as pd
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
tf.set_random_seed(123)
# Load dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tfbt/titanic_train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tfbt/titanic_eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
```
For a description of the features, please review the prior tutorial.
## Create feature columns, input_fn, and the train the estimator
### Preprocess the data
Create the feature columns, using the original numeric columns as is and one-hot-encoding categorical variables.
```
fc = tf.feature_column
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
def one_hot_cat_column(feature_name, vocab):
return fc.indicator_column(
fc.categorical_column_with_vocabulary_list(feature_name,
vocab))
feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
# Need to one-hot encode categorical features.
vocabulary = dftrain[feature_name].unique()
feature_columns.append(one_hot_cat_column(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
feature_columns.append(fc.numeric_column(feature_name,
dtype=tf.float32))
```
### Build the input pipeline
Create the input functions using the `from_tensor_slices` method in the [`tf.data`](https://www.tensorflow.org/api_docs/python/tf/data) API to read in data directly from Pandas.
```
# Use entire batch since this is such a small dataset.
NUM_EXAMPLES = len(y_train)
def make_input_fn(X, y, n_epochs=None, shuffle=True):
y = np.expand_dims(y, axis=1)
def input_fn():
dataset = tf.data.Dataset.from_tensor_slices((X.to_dict(orient='list'), y))
if shuffle:
dataset = dataset.shuffle(NUM_EXAMPLES)
# For training, cycle thru dataset as many times as need (n_epochs=None).
dataset = (dataset
.repeat(n_epochs)
.batch(NUM_EXAMPLES))
return dataset
return input_fn
# Training and evaluation input functions.
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, shuffle=False, n_epochs=1)
```
### Train the model
```
params = {
'n_trees': 50,
'max_depth': 3,
'n_batches_per_layer': 1,
# You must enable center_bias = True to get DFCs. This will force the model to
# make an initial prediction before using any features (e.g. use the mean of
# the training labels for regression or log odds for classification when
# using cross entropy loss).
'center_bias': True
}
est = tf.estimator.BoostedTreesClassifier(feature_columns, **params)
est.train(train_input_fn, max_steps=100)
results = est.evaluate(eval_input_fn)
pd.Series(results).to_frame()
```
For performance reasons, when your data fits in memory, we recommend using the `boosted_trees_classifier_train_in_memory` function. However, if training time is not a concern, or if you have a very large dataset and want to do distributed training, use the `tf.estimator.BoostedTrees` API shown above.
When using this method, you should not batch your input data, as the method operates on the entire dataset.
```
in_memory_params = dict(params)
del in_memory_params['n_batches_per_layer']
# In-memory input_fn does not use batching.
def make_inmemory_train_input_fn(X, y):
y = np.expand_dims(y, axis=1)
def input_fn():
return dict(X), y
return input_fn
train_input_fn = make_inmemory_train_input_fn(dftrain, y_train)
# Train the model.
est = tf.contrib.estimator.boosted_trees_classifier_train_in_memory(
train_input_fn,
feature_columns,
**in_memory_params)
print(est.evaluate(eval_input_fn))
```
## Model interpretation and plotting
```
import matplotlib.pyplot as plt
import seaborn as sns
sns_colors = sns.color_palette('colorblind')
```
## Local interpretability
Next you will output the directional feature contributions (DFCs) to explain individual predictions using the approach outlined in [Palczewska et al](https://arxiv.org/pdf/1312.1121.pdf) and by Saabas in [Interpreting Random Forests](http://blog.datadive.net/interpreting-random-forests/) (this method is also available in scikit-learn for Random Forests in the [`treeinterpreter`](https://github.com/andosa/treeinterpreter) package). The DFCs are generated with:
`pred_dicts = list(est.experimental_predict_with_explanations(pred_input_fn))`
(Note: The method is named experimental as we may modify the API before dropping the experimental prefix.)
```
pred_dicts = list(est.experimental_predict_with_explanations(eval_input_fn))
# Create DFC Pandas dataframe.
labels = y_eval.values
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
df_dfc = pd.DataFrame([pred['dfc'] for pred in pred_dicts])
df_dfc.describe().T
```
A nice property of DFCs is that the sum of the contributions + the bias is equal to the prediction for a given example.
```
# Sum of DFCs + bias == probability.
bias = pred_dicts[0]['bias']
dfc_prob = df_dfc.sum(axis=1) + bias
np.testing.assert_almost_equal(dfc_prob.values,
probs.values)
```
Plot DFCs for an individual passenger.
```
# Plot results.
ID = 182
example = df_dfc.iloc[ID] # Choose ith example from evaluation set.
TOP_N = 8 # View top 8 features.
sorted_ix = example.abs().sort_values()[-TOP_N:].index
ax = example[sorted_ix].plot(kind='barh', color=sns_colors[3])
ax.grid(False, axis='y')
ax.set_title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(ID, probs[ID], labels[ID]))
ax.set_xlabel('Contribution to predicted probability')
plt.show()
```
The larger magnitude contributions have a larger impact on the model's prediction. Negative contributions indicate that the feature value for this given example reduced the model's prediction, while positive values contribute to an increase in the prediction.
### Improved plotting
Let's make the plot nicer by color coding based on the contributions' directionality and adding the feature values to the figure.
```
# Boilerplate code for plotting :)
def _get_color(value):
"""To make positive DFCs plot green, negative DFCs plot red."""
green, red = sns.color_palette()[2:4]
if value >= 0: return green
return red
def _add_feature_values(feature_values, ax):
"""Display feature's values on left of plot."""
x_coord = ax.get_xlim()[0]
OFFSET = 0.15
for y_coord, (feat_name, feat_val) in enumerate(feature_values.items()):
t = plt.text(x_coord, y_coord - OFFSET, '{}'.format(feat_val), size=12)
t.set_bbox(dict(facecolor='white', alpha=0.5))
from matplotlib.font_manager import FontProperties
font = FontProperties()
font.set_weight('bold')
t = plt.text(x_coord, y_coord + 1 - OFFSET, 'feature\nvalue',
fontproperties=font, size=12)
def plot_example(example):
TOP_N = 8 # View top 8 features.
sorted_ix = example.abs().sort_values()[-TOP_N:].index # Sort by magnitude.
example = example[sorted_ix]
colors = example.map(_get_color).tolist()
ax = example.to_frame().plot(kind='barh',
color=[colors],
legend=None,
alpha=0.75,
figsize=(10,6))
ax.grid(False, axis='y')
ax.set_yticklabels(ax.get_yticklabels(), size=14)
# Add feature values.
_add_feature_values(dfeval.iloc[ID][sorted_ix], ax)
return ax
```
Plot example.
```
example = df_dfc.iloc[ID] # Choose IDth example from evaluation set.
ax = plot_example(example)
ax.set_title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(ID, probs[ID], labels[ID]))
ax.set_xlabel('Contribution to predicted probability', size=14)
plt.show()
```
You can also plot the example's DFCs compared with the entire distribution using a violin plot.
```
# Boilerplate plotting code.
def dist_violin_plot(df_dfc, ID):
# Initialize plot.
fig, ax = plt.subplots(1, 1, figsize=(10, 6))
# Create example dataframe.
TOP_N = 8 # View top 8 features.
example = df_dfc.iloc[ID]
ix = example.abs().sort_values()[-TOP_N:].index
example = example[ix]
example_df = example.to_frame(name='dfc')
# Add contributions of entire distribution.
parts=ax.violinplot([df_dfc[w] for w in ix],
vert=False,
showextrema=False,
widths=0.7,
positions=np.arange(len(ix)))
face_color = sns_colors[0]
alpha = 0.15
for pc in parts['bodies']:
pc.set_facecolor(face_color)
pc.set_alpha(alpha)
# Add feature values.
_add_feature_values(dfeval.iloc[ID][ix], ax)
# Add local contributions.
ax.scatter(example,
np.arange(example.shape[0]),
color=sns.color_palette()[2],
s=100,
marker="s",
label='contributions for example')
# Legend
# Proxy plot, to show violinplot dist on legend.
ax.plot([0,0], [1,1], label='eval set contributions\ndistributions',
color=face_color, alpha=alpha, linewidth=10)
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large',
frameon=True)
legend.get_frame().set_facecolor('white')
# Format plot.
ax.set_yticks(np.arange(example.shape[0]))
ax.set_yticklabels(example.index)
ax.grid(False, axis='y')
ax.set_xlabel('Contribution to predicted probability', size=14)
```
Plot this example.
```
dist_violin_plot(df_dfc, ID)
plt.title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(ID, probs[ID], labels[ID]))
plt.show()
```
Finally, third-party tools, such as [LIME](https://github.com/marcotcr/lime) and [shap](https://github.com/slundberg/shap), can also help understand individual predictions for a model.
## Global feature importances
Additionally, you might want to understand the model as a whole, rather than studying individual predictions. Below, you will compute and use:
* Gain-based feature importances using `est.experimental_feature_importances`
* Permutation importances
* Aggregate DFCs using `est.experimental_predict_with_explanations`
Gain-based feature importances measure the loss change when splitting on a particular feature, while permutation feature importances are computed by evaluating model performance on the evaluation set by shuffling each feature one-by-one and attributing the change in model performance to the shuffled feature.
In general, permutation feature importances are preferred to gain-based feature importances, though both methods can be unreliable in situations where potential predictor variables vary in their scale of measurement or their number of categories, and when features are correlated ([source](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-9-307)). Check out [this article](http://explained.ai/rf-importance/index.html) for an in-depth overview and great discussion on different feature importance types.
### Gain-based feature importances
Gain-based feature importances are built into the TensorFlow Boosted Trees estimators using `est.experimental_feature_importances`.
```
importances = est.experimental_feature_importances(normalize=True)
df_imp = pd.Series(importances)
# Visualize importances.
N = 8
ax = (df_imp.iloc[0:N][::-1]
.plot(kind='barh',
color=sns_colors[0],
title='Gain feature importances',
figsize=(10, 6)))
ax.grid(False, axis='y')
```
### Average absolute DFCs
You can also average the absolute values of DFCs to understand impact at a global level.
```
# Plot.
dfc_mean = df_dfc.abs().mean()
N = 8
sorted_ix = dfc_mean.abs().sort_values()[-N:].index # Average and sort by absolute.
ax = dfc_mean[sorted_ix].plot(kind='barh',
color=sns_colors[1],
title='Mean |directional feature contributions|',
figsize=(10, 6))
ax.grid(False, axis='y')
```
You can also see how DFCs vary as a feature value varies.
```
FEATURE = 'fare'
feature = pd.Series(df_dfc[FEATURE].values, index=dfeval[FEATURE].values).sort_index()
ax = sns.regplot(feature.index.values, feature.values, lowess=True)
ax.set_ylabel('contribution')
ax.set_xlabel(FEATURE)
ax.set_xlim(0, 100)
plt.show()
```
### Permutation feature importance
```
def permutation_importances(est, X_eval, y_eval, metric, features):
"""Column by column, shuffle values and observe effect on eval set.
source: http://explained.ai/rf-importance/index.html
A similar approach can be done during training. See "Drop-column importance"
in the above article."""
baseline = metric(est, X_eval, y_eval)
imp = []
for col in features:
save = X_eval[col].copy()
X_eval[col] = np.random.permutation(X_eval[col])
m = metric(est, X_eval, y_eval)
X_eval[col] = save
imp.append(baseline - m)
return np.array(imp)
def accuracy_metric(est, X, y):
"""TensorFlow estimator accuracy."""
eval_input_fn = make_input_fn(X,
y=y,
shuffle=False,
n_epochs=1)
return est.evaluate(input_fn=eval_input_fn)['accuracy']
features = CATEGORICAL_COLUMNS + NUMERIC_COLUMNS
importances = permutation_importances(est, dfeval, y_eval, accuracy_metric,
features)
df_imp = pd.Series(importances, index=features)
sorted_ix = df_imp.abs().sort_values().index
ax = df_imp[sorted_ix][-5:].plot(kind='barh', color=sns_colors[2], figsize=(10, 6))
ax.grid(False, axis='y')
ax.set_title('Permutation feature importance')
plt.show()
```
## Visualizing model fitting
Let's first simulate/create training data using the following formula:
$$z = x \cdot e^{-x^2 - y^2}$$
where \\(z\\) is the dependent variable you are trying to predict and \\(x\\) and \\(y\\) are the features.
```
from numpy.random import uniform, seed
from scipy.interpolate import griddata  # matplotlib.mlab.griddata was removed in newer matplotlib
# Create fake data
seed(0)
npts = 5000
x = uniform(-2, 2, npts)
y = uniform(-2, 2, npts)
z = x*np.exp(-x**2 - y**2)
# Prep data for training.
df = pd.DataFrame({'x': x, 'y': y, 'z': z})
xi = np.linspace(-2.0, 2.0, 200)
yi = np.linspace(-2.1, 2.1, 210)
xi,yi = np.meshgrid(xi, yi)
df_predict = pd.DataFrame({
'x' : xi.flatten(),
'y' : yi.flatten(),
})
predict_shape = xi.shape
def plot_contour(x, y, z, **kwargs):
# Grid the data.
plt.figure(figsize=(10, 8))
# Contour the gridded data, plotting dots at the nonuniform data points.
CS = plt.contour(x, y, z, 15, linewidths=0.5, colors='k')
CS = plt.contourf(x, y, z, 15,
vmax=abs(zi).max(), vmin=-abs(zi).max(), cmap='RdBu_r')
plt.colorbar() # Draw colorbar.
# Plot data points.
plt.xlim(-2, 2)
plt.ylim(-2, 2)
```
You can visualize the function. Redder colors correspond to larger function values.
```
zi = griddata((x, y), z, (xi, yi), method='linear')
plot_contour(xi, yi, zi)
plt.scatter(df.x, df.y, marker='.')
plt.title('Contour on training data')
plt.show()
fc = [tf.feature_column.numeric_column('x'),
tf.feature_column.numeric_column('y')]
def predict(est):
"""Predictions from a given estimator."""
predict_input_fn = lambda: tf.data.Dataset.from_tensors(dict(df_predict))
preds = np.array([p['predictions'][0] for p in est.predict(predict_input_fn)])
return preds.reshape(predict_shape)
```
First let's try to fit a linear model to the data.
```
train_input_fn = make_input_fn(df, df.z)
est = tf.estimator.LinearRegressor(fc)
est.train(train_input_fn, max_steps=500);
plot_contour(xi, yi, predict(est))
```
It's not a very good fit. Next let's try to fit a GBDT model to it and try to understand how the model fits the function.
```
def create_bt_est(n_trees):
return tf.estimator.BoostedTreesRegressor(fc,
n_batches_per_layer=1,
n_trees=n_trees)
N_TREES = [1,2,3,4,10,20,50,100]
for n in N_TREES:
est = create_bt_est(n)
est.train(train_input_fn, max_steps=500)
plot_contour(xi, yi, predict(est))
plt.text(-1.8, 2.1, '# trees: {}'.format(n), color='w', backgroundcolor='black', size=20)
plt.show()
```
As you increase the number of trees, the model's predictions better approximate the underlying function.
## Conclusion
In this tutorial you learned how to interpret Boosted Trees models using directional feature contributions and feature importance techniques. These techniques provide insight into how the features impact a model's predictions. Finally, you also gained intuition for how a Boosted Tree model fits a complex function by viewing the decision surface for several models.
Files and Printing
------------------
** See also Examples 15, 16, and 17 from Learn Python the Hard Way**
You'll often be reading data from a file, or writing the output of your Python scripts back into a file. Python makes this very easy. You need to open a file in the appropriate mode, using the `open` function; then you can read or write to accomplish your task. The `open` function takes two arguments: the name of the file and the mode. The mode is a single-letter string that specifies whether you're going to be reading from a file, writing to a file, or appending to the end of an existing file. The function returns a file object that performs the various tasks you'll be performing: `a_file = open(filename, mode)`. The modes are:
+ `'r'`: open a file for reading
+ `'w'`: open a file for writing. Caution: this will overwrite any previously existing file
+ `'a'`: append. Write to the end of a file.
When reading, you typically want to iterate through the lines in a file using a for loop, as above. Some other common methods for dealing with files are:
+ `file.read()`: read the entire contents of a file into a string
+ `file.write(some_string)`: writes to the file; note this doesn't automatically include any new lines. Also note that sometimes writes are buffered: Python will wait until several writes are pending and perform them all at once
+ `file.flush()`: write out any buffered writes
+ `file.close()`: close the open file. This will free up some computer resources occupied by keeping a file open.
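One more detail worth knowing: instead of calling `close()` yourself, you can use a `with` block, which closes the file automatically even if an error occurs. A minimal sketch (the filename `demo.txt` is just an example):

```python
# "with" opens the file and guarantees it is closed at the end of the block
with open("demo.txt", "w") as f:
    f.write("first line\n")
    f.write("second line\n")

with open("demo.txt", "r") as f:
    print(f.read())
```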
Here is an example using files:
#### Writing a file to disk
```
# Create the file temp.txt, and get it ready for writing
f = open("temp.txt", "w")
f.write("This is my first file! The end!\n")
f.write("Oh wait, I wanted to say something else.")
f.close()
# Let's check that we did everything as expected
!cat temp.txt
# Create a file numbers.txt and write the numbers from 0 to 24 there
f = open("numbers.txt", "w")
for num in range(25):
f.write(str(num)+'\n')
f.close()
# Let's check that we did everything as expected
!cat numbers.txt
```
#### Reading a file from disk
```
# We now open the file for reading
f = open("temp.txt", "r")
# And we read the full content of the file in memory, as a big string
content = f.read()
f.close()
content
```
Once we read the file, we have the lines in a big string. Let's process that big string a little bit:
```
# Read the file in the cell above; the content is in the variable `content`
# Split the content of the file using the newline character \n
lines = content.split("\n")
# Iterate through the line variable (it is a list of strings)
# and then print the length of each line
for line in lines:
print(line, " ===> ", len(line))
# We now open the file for reading
f = open("numbers.txt", "r")
# And we read the full content of the file in memory, as a big string
content = f.read()
f.close()
content
```
Once we read the file, we have the lines in a big string. Let's process that big string a little bit:
```
lines = content.split("\n") # we get back a list of strings
print(lines)
# here we convert the strings into integers, using a list comprehension
# we have the conditional to avoid trying to parse the string '' that
# is at the end of the list
numbers = [int(line) for line in lines if len(line)>0]
print(numbers)
# Let's clean up
!rm temp.txt
!rm numbers.txt
```
#### Exercise 1
* Write a function that reads a file and returns its content as a list of strings (one string per line). Read the file with filename `data/restaurant-names.txt`. If you stored your notebook under `Student_Notebooks` the full filename is `/home/ubuntu/jupyter/NYU_Notes/2-Introduction_to_Python/data/restaurant-names.txt`
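A possible solution sketch for this exercise (the filename is a parameter, so you can pass `data/restaurant-names.txt` or the full path):

```python
def read_lines(filename):
    # Read the whole file and return a list of lines, without trailing newlines
    with open(filename, "r") as f:
        return f.read().splitlines()
```

For example: `names = read_lines("data/restaurant-names.txt")`.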
#### Exercise 2
* Write a function that reads the n-th column of a CSV file and returns its contents. (Reuse the function that you wrote above.) Then reads the file `data/baseball.csv` and return the content of the 5th column (`team`).
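One way to sketch this exercise is with the standard `csv` module, so that quoted fields containing commas are handled correctly (column indices are assumed 0-based here, so the 5th column is index 4):

```python
import csv

def read_column(filename, n):
    # Return the n-th column (0-indexed) of a CSV file as a list of strings
    with open(filename, "r") as f:
        return [row[n] for row in csv.reader(f) if row]
```

For example: `teams = read_column("data/baseball.csv", 4)`.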
#### Exercise 3
The command below will create a file called `phonetest.txt`. Write code that:
* Reads the file `phonetest.txt`
* Write a function that takes as input a string, and removes any non-digit characters
* Print out the "clean" string, without any non-digit characters
```
%%file phonetest.txt
679-397-5255
2126660921
212-998-0902
888-888-2222
800-555-1211
800 555 1212
800.555.1213
(800) 555-1214
1-800-555-1215
1(800)555-1216
800-555-1212-1234
800-555-1212x1234
800-555-1212 ext. 1234
work 1-(800) 555.1212 #1234
# your code here
```
#### Solution for exercise 3 (with a lot of comments)
```
# this function takes as input a phone (string variable)
# and prints only its digits
def clean(phone):
# We initialize the result variable to be empty.
# We will append to this variable the digit characters
result = ""
# This is a set of digits (as **strings**) that will
# allow us to filter the characters
digits = {"0","1","2","3","4","5","6","7","8","9"}
# We iterate over all the characters in the string "phone"
# which is a parameter of the function clean
for c in phone:
# We check if the character c is a digit
if c in digits:
# if it is, we append it to the result
result = result + c
# once we are done we return a string variable with the result
return result
# This is an alternative, one-line solution that uses a list
# comprehension to create the list of acceptable characters,
# and then uses the join command to concatenate all the
# characters in the list into a string. Notice that we use
# the empty string "" as the connector
def clean_oneline(phone):
digits = {"0","1","2","3","4","5","6","7","8","9"}
return "".join([c for c in phone if c in digits])
# your code here
# We open the file
f = open("phonetest.txt", "r")
# We read the content using the f.read() command
content = f.read()
# Close the file
f.close()
# We split the file into lines
lines = content.split("\n")
# We iterate over the lines, and we clean each one of them
for line in lines:
print(line, "==>", clean(line))
# Let's clean up
!rm phonetest.txt
```
# Algorithmic Complexity and Asymptotic Notation
<img src="../img/konplexutasuna.jpg" alt="Konplexutasuna" style="width: 600px;"/>
# Algorithmic Complexity and Asymptotic Notation
* The same problem can be solved with different algorithms
* What criteria do we use to choose one?
    * Understandability
    * Ease of implementation
    * Time needed to execute
        * **Time Complexity**
    * Memory needed to execute
        * **Space Complexity**
→ In this chapter we will study **Time Complexity**.
## Empirical analysis of Time Complexity
* Measure the execution times of different algorithms
<center><img src="../img/cronometro.jpg" alt="Konplexutasuna" style="width: 300px;"/></center>
### An example: a sorting algorithm
* Basic idea: a list is sorted if every pair of consecutive elements is in order
```
def isOrdered(z):
return all(z[i]<=z[i+1] for i in range(len(z)-1))
isOrdered([1,2,3,4,5,6,7,8,9,10])
isOrdered([1,2,3,4,6,5,7,8,9,10])
```
### A crazy algorithm: Shuffle-Sort
1. Shuffle the elements of the list.
1. If the list is sorted, **STOP**
1. Go back to step **1**
```
from random import shuffle
def shuffleSort(z):
while not isOrdered(z):
shuffle(z)
```
The algorithm does work...
```
z = [2,1,4,3,5,7,6]
shuffleSort(z)
print(z)
```
### Measuring execution time I - the notebook's `%timeit`
* https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-timeit
* `%timeit statement` → measure the time needed to execute the statement
* By default, it performs many executions and prints the mean and standard deviation of the times
```
print('Measurement about to start')
%timeit sum(range(100000))
print('Finished:')
```
Measuring the execution time of the crazy algorithm...
```
z = [2,1,4,3,5,7,6]
%timeit shuffleSort(z)
```
→ it seems quite fast....
→ **too fast**
* `%timeit -n int -r int statement` → choose the number of _loops_ and _runs_
```
z = [2,1,4,3,5,7,6]
%timeit -n 1 -r 1 shuffleSort(z)
```
* x µs → x ms ????
* something is wrong...
What if we perform 2 *runs*?
```
z = [2,1,4,3,5,7,6]
%timeit -n 1 -r 2 shuffleSort(z)
```
* The first execution done by `%timeit` sorts the list
* From the second execution on, the list is already sorted
```python
z = [2,1,4,3,5,7,6]
%timeit shuffleSort(z)
```
* We do not measure the time correctly unless we pass `-n 1 -r 1`
### Measuring execution time II - the notebook's `%%timeit`
* https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-timeit
* `%%timeit` as the first command of a cell → measure the time needed to execute the whole cell
```
%%timeit -n 2 -r 3
print('Measurement about to start')
sum(range(100000))
print('Finished:')
```
Now we do not need to worry about choosing _loops_ and _runs_
```
%%timeit
z = [2,1,4,3,5,7,6]
shuffleSort(z)
```
→ The time we are measuring includes the creation of the list
```
%%timeit
z = [2,1,4,3,5,7,6]
```
→ which is completely negligible
### Measuring execution time III - the `timeit` module
* https://docs.python.org/3.8/library/timeit.html
* `timeit.timeit(stmt='pass', setup='pass', timer=<default timer>, number=1000000, globals=None)`
    * time needed for `number` executions of the statement `stmt`
* `timeit.repeat(stmt='pass', setup='pass', timer=<default timer>, repeat=5, number=1000000, globals=None)`
    * time needed for `number` executions of the statement `stmt`, measured `repeat` times
```
import timeit
timeit.timeit('sum(range(100000))',number=1000)
timeit.repeat('sum(range(100000))',number=100, repeat=10)
```
→ We could try to build something like `%%timeit` that can be used anywhere.
```
# prints a timing message such as:
# 66.2 ns ± 0.104 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
def mytimeit(stmt='pass',loops=100,runs=7,setup='pass',globals=None):
z = timeit.repeat(stmt=stmt,number=loops,repeat=runs,setup=setup,globals=globals)
z = [x/loops for x in z]
mean = sum(z)/runs
std = (sum((x-mean)**2 for x in z)/(runs-1))**0.5 if runs>1 else 0.0
if mean >= 1.0 :
unit = 's'
elif mean >= 1e-3 :
unit = 'ms'
mean *= 1e3
std *= 1e3
elif mean >= 1e-6 :
unit = 'µs'
mean *= 1e6
std *= 1e6
else :
unit = 'ns'
mean *= 1e9
std *= 1e9
print(f'{mean:.2f} {unit} ± {std:.2f} {unit} per loop (mean ± std. dev. of {runs} runs, {loops} loops each)')
mytimeit('sum(range(100000))')
%timeit sum(range(100000))
```
To measure multi-line code:
```
stmt='''
b = 0
for i in range(100000):
b += i
'''
mytimeit(stmt=stmt)
```
Let's try to measure the crazy algorithm...
```
stmt='''
z = [2,1,4,3,5,7,6]
shuffleSort(z)
'''
# An error would occur, because the timeit module executes the code in a
# different environment, where the shuffleSort function is not defined
#mytimeit(stmt=stmt)
```
* Pass the result of the builtin function `globals()` as the `globals` argument.
* https://docs.python.org/3/library/functions.html#globals
```
%%timeit
z = [2,1,4,3,5,7,6]
shuffleSort(z)
stmt='''
z = [2,1,4,3,5,7,6]
shuffleSort(z)
'''
mytimeit(stmt=stmt,loops=20,globals=globals())
```
And what if we compare it against the `sort` method of lists?
```
%%timeit
z = [2,1,4,3,5,7,6]
z.sort()
stmt='''
z = [2,1,4,3,5,7,6]
z.sort()
'''
mytimeit(stmt=stmt,globals=globals())
stmt='''
z = [2,1,4,3,5,7,6]
z.sort()
'''
mytimeit(stmt=stmt,loops=1000000,runs=7,globals=globals())
```
The difference between our crazy sorting algorithm and Python's `sort` will grow enormously **as the size of the list increases**...
```
for i in range(11):
print('---',i,'---')
z = list(range(i))
shuffle(z)
mytimeit('shuffleSort(z)',loops=1,runs=1,globals=globals())
for i in range(11):
print('---',i,'---')
z = list(range(i))
shuffle(z)
mytimeit('z.sort()',loops=1,runs=1,globals=globals())
def f1(h):
b = 0
for k in h:
b += k*h[k]
return b
def f2(h):
b = 0
for k,v in h.items():
b += k*v
return b
def f3(h):
return sum(k*v for k,v in h.items())
h = {i:i for i in range(10000)}
print(f1(h),f2(h),f3(h))
%timeit f1(h)
%timeit f2(h)
%timeit f3(h)
```
## Theoretical analysis of Time Complexity
* **Estimate** the execution times of different algorithms
<center><img src="../img/guessing.gif" alt="Konplexutasuna" style="width: 300px;"/></center>
### An example: computing $n^2$
* Imagine the power operator did not exist...
* We will analyze three different algorithms:
<center><img src="../img/Konplexutasuna-taula-1.png" alt="Konplexutasuna"/></center>
<!--
<table>
<thead><tr><th><center>Biderkadura</center></th><th><center>Batura</center></th><th><center>Inkrementua</center></th></tr></thead>
<tbody><tr>
<td><code>result=n*n</code></td>
<td><code>result = 0
for i in range(n):
result += n</code></td>
<td><code>result = 0
for i in range(n):
for j in range(n):
result += 1</code></td>
</tr></tbody>
</table>
-->
* For now, to simplify the analysis:
<center><img src="../img/Konplexutasuna-taula-2.png" alt="Konplexutasuna"/></center>
<!--
<table>
<thead><tr><th><center>Biderkadura</center></th><th><center>Batura</center></th><th><center>Inkrementua</center></th></tr></thead>
<tbody><tr>
<td><code>result=n*n</code></td>
<td><code>result = 0
i = 0
while i < n :
result += n
i += 1</code></td>
<td><code>result = 0
i = 0
while i < n :
j = 0
while j < n :
result += 1
j += 1
i += 1</code></td>
</tr></tbody>
</table>
-->
* The algorithm based on **multiplication**
<center><img src="../img/Konplexutasuna-taula-3.png" alt="Konplexutasuna"/></center>
<!--
<table>
<thead><tr><th><center>Kodea</center></th><th><center>Eragiketa kopurua</center></tr></thead>
<tbody><tr>
<td><code>result=n*n</code></td>
<td><code>→ 1 biderkaketa + 1 esleipen</code></td>
</tr></tbody>
</table>
-->
* The algorithm based on **addition**
<center><img src="../img/Konplexutasuna-taula-4.png" alt="Konplexutasuna"/></center>
<!--
<table>
<thead><tr><th><center>Kodea</center></th><th><center>Eragiketa kopurua</center></tr></thead>
<tbody><tr>
<td><code>result = 0
i = 0
while i < n :
result += n
i += 1</code></td>
<td><code>→ 1 esleipen
→ 1 esleipen
→ (n+1) • (1 konparaketa)
→ n • (1 batura + 1 esleipen)
→ n • (1 inkrementu)</code></td>
</tr></tbody>
</table>
-->
* The algorithm based on **increments**
<center><img src="../img/Konplexutasuna-taula-5.png" alt="Konplexutasuna"/></center>
<!--
<table>
<thead><tr><th><center>Kodea</center></th><th><center>Eragiketa kopurua</center></tr></thead>
<tbody><tr>
<td><code>result = 0
i = 0
while i < n :
j = 0
while j < n :
result += 1
j += 1
i += 1</code></td>
<td><code>→ 1 esleipen
→ 1 esleipen
→ (n+1) • (1 konparaketa)
→ n • (1 esleipen)
→ n • (n+1) • (1 konparaketa)
→ n • n • (1 inkrementu)
→ n • n • (1 inkrementu)
→ n • (1 inkrementu)</code></td>
</tr></tbody>
</table>
-->
Suppose each basic operation takes the following time:
| Multiplication | Addition | Increment | Assignment | Comparison |
|:--------:|:--------:|:--------:|:---------:|:--------:|
| 342$\mu s$ | 31$\mu s$ | 1$\mu s$ | 1$\mu s$ | 1$\mu s$ |
Then,
| Algorithm | Mult. | Add. | Incr. | Assign. | Comp. | Time $\mu s$ |
| :------------ | :------: | :------: | :------: | :-------: | :------: | :--------------: |
| Multiplication | $\tiny 1$ | | | $\tiny 1$ | | $\tiny 343$ |
| Addition | | $\tiny n$ | $\tiny n$ | $\tiny n+2$ | $\tiny n+1$ | $\tiny 34n+3$ |
| Increment | | | $\tiny 2n^2+n$ | $\tiny n+2$ | $\tiny n^2+2n+1$ | $\tiny 3n^2+4n+3$ |
<center><img src="../img/Berreketa.png" alt="Konplexutasuna"/></center>
Whatever the execution times of the basic operations are:
| Multiplication | Addition | Increment | Assignment | Comparison |
|:--------:|:--------:|:--------:|:---------:|:--------:|
| $c_1$ | $c_2$ | $c_3$ | $c_4$ | $c_5$ |
| Algorithm | Mult. | Add. | Incr. | Assign. | Comp. |
|:--------------|:--------:|:--------:|:--------:|:---------:|:--------:|
| Multiplication | $\tiny 1$ | | | $\tiny 1$ | |
| Addition | | $\tiny n$ | $\tiny n$ | $\tiny n+2$ | $\tiny n+1$ |
| Increment | | | $\tiny 2n^2+n$ | $\tiny n+2$ | $\tiny n^2+2n+1$ |
* Multiplication: $c_1 + c_4$
* Addition: $(c_2 + c_3 + c_4 +c_5) \cdot n + (2c_4 + c_5)$
* Increment: $(2c_3+c_5) \cdot n^2 + (c_3 + c_4 + 2c_5) \cdot n + (2c_4+c_5)$
Defining new constants:
* Multiplication: $k_1$
* Addition: $k_2 n + k_3 $
* Increment: $k_4 n^2 + k_5 n + k_6$
No matter what the values of the constants $k_1 \dots k_6$ are, as n grows:
* The multiplication algorithm has a **constant** cost $k_1$
    * even if n grows, the time will not change.
* The addition algorithm has a **linear** cost $k_2 n + k_3$
    * when n doubles, the time also doubles.
* The increment algorithm has a **quadratic** cost $k_4 n^2 + k_5 n + k_6$
    * n doubled → time quadrupled
    * n x 10 → t x 100
    * n x 100 → t x 10,000
    * n x 1000 → t x 1,000,000
    * ...
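The linear vs. quadratic behavior can be made concrete by counting loop iterations in the addition- and increment-based algorithms; a minimal sketch (the function names are illustrative, not from the text):

```python
def square_by_addition(n):
    # linear: the loop body runs n times
    result, steps = 0, 0
    for _ in range(n):
        result += n
        steps += 1
    return result, steps

def square_by_increment(n):
    # quadratic: the inner body runs n*n times
    result, steps = 0, 0
    for _ in range(n):
        for _ in range(n):
            result += 1
            steps += 1
    return result, steps

print(square_by_addition(10))   # → (100, 10)
print(square_by_increment(10))  # → (100, 100)
```

Doubling `n` doubles the step count of the first function but quadruples that of the second.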
### Theoretical analysis of Time Complexity:
<p><center><em>The asymptotic behavior of an algorithm's execution time with respect to the size of the problem</em></center></p>
* Problem size:
    * When computing $n^2$: n
    * When sorting a list: the length of the list
    * ...
* Sometimes there may be more than one size
    * For the maximum element of a matrix: the number of rows and columns
    * ...
## From operations to steps: simplifying the theoretical analysis
<center><img src="../img/Pausuak.png" alt="Konplexutasuna" style="width: 600px;"/></center>
* In the previous examples, we measured execution time in basic operations
    * Assignment, Addition, Comparison, Increment...
* At the end, we combined the time coefficients of the different operations:
    * $(2c_3+c_5) \cdot n^2 + (c_3 + c_4 + 2c_5) \cdot n + (2c_4+c_5)$ → $k_4 n^2 + k_5 n + k_6$
* We can do this kind of combination from the very beginning, to simplify the notation:
    * $k$ → steps
### Step
* A set of operations that executes in constant time
    * one addition → 1 step
    * 2 additions → 1 step
    * 10,000 additions → 1 step
    * ...
    * addition + assignment → 1 step
    * 2 x (addition + assignment) → 1 step
    * 10,000 x (addition + assignment) → 1 step
    * ...
→ **A set of operations with no dependence on the size of the problem**
<center><img src="../img/Pausuak2.png" alt="Konplexutasuna" /></center>
<!--
<table>
<thead><tr><th><center>Kodea</center></th><th><center>Pausu kopurua</center></tr></thead>
<tbody><tr>
<td><code>result = 0
i = 0
while i < n :
j = 0
while j < n :
result += 1
j += 1
i += 1</code></td>
<td><code>
</tr></tbody>
</table>
-->
<center><img src="../img/Pausuak3.png" alt="Konplexutasuna" /></center>
→ **Step count:** $t(n) = n^2+n+1$
**Returning to the original algorithms:**
<img src="../img/Konplexutasuna-taula-1.png" alt="Konplexutasuna"/>
* `range(n)` → 1 step
* `for i in range(n)` → n x 1 step
* Multiplication: $t(n) = 1$
* Addition: $t(n) = n + 1$
* Increment: $t(n) = n^2+n+1$
### Step counts of the three algorithms:
* Multiplication: $t(n) = 1$
* Addition: $t(n) = n+1$
* Increment: $t(n) = n^2+n+1$
* **Whatever** the step costs are:
    * $\exists \; n_a , \forall n \ge n_a$ such that Addition is faster than Increment.
    * $\exists \; n_b , \forall n \ge n_b$ such that Multiplication is faster than Addition.
## Best, Worst and Average Cases
<br/>
<br/>
<center><img src="../img/GoodUglyBad.jpg" alt="GoodUglyBasd" /></center>
Even keeping the problem size constant, the number of steps an algorithm takes may depend on the **specific instance** it is solving:
* the `if` control structure
    * we do not know in advance whether the condition will be true
    * sometimes the body is executed, sometimes it is not.
* the `while` control structure
    * we do not know in advance how many times it will be executed
    * sometimes many iterations, sometimes few
#### An example: count the number of occurrences of a value in a list
```
def kontatu(z,x):
k = 0
for y in z:
if x == y :
k += 1
return k
```
* Problem size: $n = len(z)$
* `x` $\ne$ `y` → 1 step
* `x` $=$ `y` → 2 steps → 1 step
* $t(n) = n + 1$
#### An example: the position of the first occurrence of a value in a list, or `None`
```
def topatu(z,x):
for i in range(len(z)):
if x == z[i] :
return i
return None
```
* Problem size: $n = len(z)$
* `x` $\ne$ `z[i]` → 1 step
* `x` $=$ `z[i]` → 1 step and *STOP*
* $t(n) = ???$
    * It depends on the **specific list** the function receives
#### I - The Best Case (*El Bueno*)
```python
def topatu(z,x):
for i in range(len(z)):
if x == z[i] :
return i
return None
```
* **Whatever** the problem size, the fastest instance we can possibly have.
    * Claiming the list has length 0 does not count.
* Finding the element in the first position of the list.
* $t(n) = 1$
#### II - The Worst Case (*El Malo*)
```python
def topatu(z,x):
for i in range(len(z)):
if x == z[i] :
return i
return None
```
* **Whatever** the problem size, the slowest instance we can possibly have.
* The element is not in the list.
* $t(n) = n+1$
#### III - The Average Case (*El Feo*)
```python
def topatu(z,x):
for i in range(len(z)):
if x == z[i] :
return i
return None
```
* **Whatever** the problem size, the number of steps we take *on average*.
* To compute the average, we need to define a probability distribution over the possible cases, then weight each case's step count by its probability and add them up.
    * Or integrate, if the space of cases were continuous
* That is why we gave this case the *El Feo* character...
#### III - Computing the Average Case (*El Feo*)...
```python
def topatu(z,x):
for i in range(len(z)):
if x == z[i] :
return i
return None
```
* Suppose that, for a list of length $n$, the probability of finding the element at any given position, or of it not being in the list at all, is the same, i.e. $1/(n+1)$.
* element at position $j$ → $t_j(n)=j+1$ steps
* element not in the list → $t_{None}(n)=n+1$ steps
* at position $j$ → $prob(j)=1/(n+1) \;\; , \;\;t_j(n)=j+1$
* not present → $prob(None)=1/(n+1) \;\; , \;\;t_{None}(n)=n+1$
$$t(n) = \sum_{k \in cases}{prob(k) \cdot t_k(n)} = \left(\sum_{j=0}^{j=n-1}{\frac{1}{n+1} \cdot (j+1)} \right) + \frac{1}{n+1} \cdot (n+1)$$
$$= \left(\frac{1}{n+1} \sum_{i=1}^{i=n}{i}\right) + 1 = \frac{n}{2}+1$$
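The $\frac{n}{2}+1$ result can be checked empirically by counting the steps of the search for all $n+1$ equally likely cases; a sketch (same loop as `topatu`, but returning the step count):

```python
def topatu_steps(z, x):
    # Count the steps of the linear search: one per comparison,
    # plus one extra for the final "not found" return
    steps = 0
    for i in range(len(z)):
        steps += 1
        if x == z[i]:
            return steps
    return steps + 1

n = 10
z = list(range(n))
cases = [topatu_steps(z, x) for x in range(n)] + [topatu_steps(z, -1)]
print(sum(cases) / len(cases))  # → 6.0, i.e. n/2 + 1
```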
## Polynomial and Non-Polynomial Complexities
<br/>
<br/>
<center><img src="../img/Konplexutasuna-polinomioak.png" alt="Konplexutasun ez polinomikoak" /></center>
* `for` control structures often lead to polynomial step counts
* $t(n)=n$ :
```python
for i in range(n):
pausu 1
```
* $t(n)=n^2$ :
```python
for i in range(n):
for j in range(n):
pausu 1
```
* $t(n)=n^3$ :
```python
for i in range(n):
for j in range(n):
for k in range(n):
pausu 1
```
* In *clean* index-based `for` loops (those without `return/break`), the step count can be expressed quite easily using summations
    * `for i in range(n)` $\equiv$ `for i in range(0,n)` → $\sum_{i=0}^{n-1}$
    * `for j in range(i,n)` → $\sum_{j=i}^{n-1}$
* Also keep in mind:
$$\sum_{i=a}^{b} 1 = \sum_{i=b}^{a} 1 = \max{(a,b)}-\min{(a,b)}+1$$
$$\sum_{i=1}^{n} i = \sum_{i=n}^{1} i = \frac{n \cdot (n+1)}{2}$$
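Both summation identities are easy to sanity-check in Python; a quick sketch:

```python
n, a, b = 37, 5, 12
# sum of 1 for i = a..b has max(a,b) - min(a,b) + 1 terms
assert sum(1 for i in range(a, b + 1)) == max(a, b) - min(a, b) + 1
# sum of i for i = 1..n equals n*(n+1)/2
assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("both identities hold")
```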
<span style="display:block; margin-top:-20px;">
```python
for i in range(n):
pausu 1
```
   →   $t(n) = \sum_{i=0}^{n-1} 1 = n$
<span style="display:block; margin-top:-20px;">
```python
for i in range(n):
for j in range(n):
pausu 1
```
   →   $t(n) = \sum_{i=0}^{n-1} \left( \sum_{j=0}^{n-1} 1 \right) = \sum_{i=0}^{n-1} n = n^2$
<span style="display:block; margin-top:-20px;">
```python
for i in range(n):
for j in range(n):
for k in range(n):
pausu 1
```
   →   $t(n) = \sum_{i=0}^{n-1} \left( \sum_{j=0}^{n-1} \left( \sum_{k=0}^{n-1} 1 \right) \right) = \sum_{i=0}^{n-1} \left( \sum_{j=0}^{n-1} n \right) = \sum_{i=0}^{n-1} n^2 = n^3$
**An example:** the maximum product of any two elements of a list
```python
def max_product(z):
    n = len(z)
    m = z[0]*z[1]
    for i in range(n-1):
        for j in range(i+1,n):
            x = z[i]*z[j]
            if x > m :
                m = x
    return m
```
$$t(n) = 1 + \sum_{i=0}^{n-2} \left( \sum_{j=i+1}^{n-1} 1 \right) = 1 + \sum_{i=0}^{n-2} (n-1-i)$$
$$ \overset{k=n-1-i}{=\mathrel{\mkern-3mu}=} \;\; 1 + \sum_{k=n-1}^{1} k = 1 + \frac{(n-1) \cdot n}{2} = \frac{n^2}{2} - \frac{n}{2} + 1$$
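A counting version of the function confirms $t(n) = \frac{n^2}{2} - \frac{n}{2} + 1$: one step for the initial assignment, plus one step per inner-loop iteration (a sketch):

```python
def max_product_steps(z):
    # Same algorithm, also returning the number of steps
    n = len(z)
    m, steps = z[0] * z[1], 1   # the initial assignment is one step
    for i in range(n - 1):
        for j in range(i + 1, n):
            steps += 1          # one step per inner iteration
            if z[i] * z[j] > m:
                m = z[i] * z[j]
    return m, steps

print(max_product_steps(list(range(1, 9))))  # → (56, 29); 8²/2 - 8/2 + 1 = 29
```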
* `while` control structures often give rise to best and worst cases.
    * The step counts need not be polynomial.
**An example:** guessing a natural number in $[1,n]$. Suppose we have a function, `galdera(k)`, which asks the person who picked the number. It returns `0` if we guessed the number, `1` if the target number is larger than `k`, and `-1` if it is smaller.
* Problem size: $n$
* `galdera(k)` : 1 step
* A very young child might do something like this:
```python
from random import randrange
def asmatu(n):
x = galdera(randrange(1,n+1))
while x:
x = galdera(randrange(1,n+1))
    print('I guessed your number!')
```
* Best case (the first randomly chosen number): $t(n)=1$
* Worst case (it might never find it?): $t(n)\overset{?}{=}\infty$
* Average case: $t(n) = \sum_{k \in cases}{prob(k) \cdot t_k(n)} = ??$
* Empirical estimate:
```
from random import random
n = 17
th = 1/n
N = 100000
b = 0
for i in range(N):
k = 1
while random()>th :
k+= 1
b += k
print(b/N)
```
* Best case (the first randomly chosen number): $t(n)=1$
* Worst case (it might never find it?): $t(n)\overset{?}{=}\infty$
* Average case: $t(n) = \sum_{k \in cases}{prob(k) \cdot t_k(n)} \overset{emp}{=} n$
<center><img src="../img/Ugly.jpg" alt="GoodUglyBasd" /></center>
* The child can learn to do it better:
```python
def asmatu(n):
i = 1
x = galdera(i)
while x:
i += 1
x = galdera(i)
    print('I guessed your number!')
```
* Best case (the first number): $t(n)=1$
* Worst case (the last number): $t(n)=n$
* Average case: $t(n) = \sum_{i=1}^{n} (\frac{1}{n} \cdot i)= \frac{n+1}{2}$
* From a certain age on, we should do something like this:
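Averaging the number of questions of the sequential strategy over every possible secret confirms $(n+1)/2$; a sketch, with `galdera` replaced by a direct comparison:

```python
def sequential_questions(n, secret):
    # Ask 1, 2, 3, ... until the secret is found; count the questions
    questions = 0
    for i in range(1, n + 1):
        questions += 1
        if i == secret:
            return questions

n = 9
avg = sum(sequential_questions(n, s) for s in range(1, n + 1)) / n
print(avg)  # → 5.0, which equals (n + 1) / 2
```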
```
def asmatu(n):
i,j = 1,n
e = (i+j)//2
x = galdera(e)
while x :
if x == 1 :
i = e+1
else :
j = e-1
e = (i+j)//2
x = galdera(e)
    print('I guessed your number!')
```
* Best case (right in the middle!): $t(n)=1$
* Worst case (reaching the state `i==j`): $t(n) = \; ???$
* Average case: $t(n) = \; ???$
→ Curious... we assume it is faster, but we are not able to directly express how fast it is
* Each iteration: 1 step → $t(n) = number\ of\ iterations$
* On each iteration, the search interval is (a bit less than) halved:
    * one iteration: $[i,j] \; \approx \frac{1}{2} [1,n]$
    * 2 iterations: $[i,j] \; \approx \frac{1}{4} [1,n]$
    * $k$ iterations: $[i,j] \; \approx \frac{1}{2^k} [1,n]$
* $i = j \iff 2^k = n $
    * there will be $k=\log_2 n\;$ iterations
* Kasu Ona (erdian): $t(n)=1$
* Kasu Txarra (`i==j` egoerara iristean): $t(n) = \; \log_2 n$
* Batazbesteko kasua: $t(n) = \; \sum_{k \in kasuak}{prob(k) \cdot t_k(n)}$
* Batazbestekoa kalkulatzeko, kasu bakoitzaren probabilitatea aukeratu behar dugu.
* Demagun zenbaki guztiek probabilitate berdina dutela, $prob(k)=\frac{1}{n}$
* 1 pausu: 1 kasu (erdian egotea)
* 2 pausu: 2 kasu (erdi bakoitzetako erdian egotea)
* 3 pausu: 4 kasu (laurden bakoitzetako erdian egotea)
* ...
* $k$ pausu: $2^{k-1}$ kasu
* ...
* $k = \log_2 n$ pausu : $2^{k-1} = \frac{n}{2}$ kasu.
$$\small t(n) = \; \sum_{k \in \text{cases}}{prob(k) \cdot t_k(n)} = \frac{1}{n} \cdot \sum_{k \in \text{cases}}{t_k(n)} = \frac{1}{n} \cdot \left( \sum_{k=1}^{\log_2 n}{ 2^{k-1} \cdot k } \right) \overset{?}{\approx} \log_2 n$$
$$ \frac{1}{2} \cdot \log_2 n \lt t(n) \lt \log_2 n \;\;\; \to \;\;\; t(n) = \log_2 n$$
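We can check this bound numerically by computing the exact average directly from the case counts above (a quick sketch; `binary_search_average` is an illustrative helper name, not part of the notes):

```python
from math import log2

def binary_search_average(m):
    # exact average number of questions for n = 2**m - 1 numbers:
    # 2**(k-1) cases need k questions, each case with probability 1/n
    n = 2**m - 1
    return sum(2**(k - 1) * k for k in range(1, m + 1)) / n

print(binary_search_average(10), log2(1023))
```

For $n=1023$ the exact average is about $9.01$, which indeed lies between $\frac{1}{2}\log_2 n$ and $\log_2 n$.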
<center><img src="../img/Ugly.jpg" alt="GoodUglyBasd" /></center>
## Recursive Algorithms
<br/>
<br/>
<center><img src="../img/recursion.png" alt="Recursive Algorithms" /></center>
The step count of a recursive algorithm can be expressed with a recursive expression (a recurrence).
```python
def faktoriala(n):
if n < 2 :
return 1
else :
return n * faktoriala(n-1)
```
$$
t(n) =
\begin{cases}
1 & , & n<2\\
1+t(n-1) & , & n \ge 2\\
\end{cases}
$$
We can expand the recursive expression:
$$ t(n) = 1 + t(n-1) = 2 + t(n-2) = 3 + t(n-3) = \ldots $$
$$= k + t(n-k)$$
We need the constant $k$ required to reach the base case:
$$ n-k = 1 \iff k = n-1$$
And substitute:
$$\boxed{\small t(n) = n - 1 + t(1) = n}$$
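We can sanity-check the closed form by evaluating the recurrence itself (a sketch; `t_fakt` is an illustrative helper name):

```python
def t_fakt(n):
    # the recurrence above: 1 step for the base case, 1 + t(n-1) otherwise
    if n < 2:
        return 1
    return 1 + t_fakt(n - 1)

print([t_fakt(n) for n in (1, 5, 10)])  # [1, 5, 10], matching t(n) = n
```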
```python
def hanoi(a,b,n):
if n == 1 :
print(a,'-->',b)
else :
c = 6-a-b
hanoi(a,c,n-1)
print(a,'-->',b)
hanoi(c,b,n-1)
```
$$
t(n) =
\begin{cases}
1 & , & n=1\\
1 + 2 \cdot t(n-1) & , & n > 1\\
\end{cases}
$$
$$t(n) = 1 + 2 \cdot t(n-1) = 3 + 4 \cdot t(n-2) = 7 + 8 \cdot t(n-3) = \ldots $$
$$= (2^k-1) + 2^k \cdot t(n-k)$$
$$n-k = 1 \iff k=n-1$$
$$t(n) = 2^{n-1} - 1 + 2^{n-1} \cdot 1$$
$$\boxed{t(n) = 2^n - 1}$$
→ We already knew it: 2 disks take 3 moves, 3 disks 7, 4 disks 15, 5 disks 31...
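The same check works here: counting the moves of `hanoi` instead of printing them reproduces the closed form (a sketch; `hanoi_moves` is an illustrative helper name):

```python
def hanoi_moves(n):
    # one move for a single disk; otherwise one move plus two recursive calls
    if n == 1:
        return 1
    return 1 + 2 * hanoi_moves(n - 1)

print([hanoi_moves(n) for n in range(1, 6)])  # [1, 3, 7, 15, 31]
```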
```python
def merge_sort(z):
n = len(z)
if n == 1 :
return z
else :
a = merge_sort(z[:n//2])
b = merge_sort(z[n//2:])
return merge(a,b)
```
* `z[:n//2]` → $\frac{n}{2}$ steps
* `z[n//2:]` → $\frac{n}{2}$ steps
* `merge(a,b)` → $len(a)+len(b)=n$ steps
$$
t(n) =
\begin{cases}
1 & , & n=1\\
1 + 2 n + 2 \cdot t\left(\frac{n}{2}\right) & , & n > 1\\
\end{cases}
$$
```python
def merge_sort(z):
n = len(z)
if n > 1 :
a = z[:n//2]
b = z[n//2:]
merge_sort(a)
merge_sort(b)
z.clear()
z.extend(merge(a,b))
```
$$\small{ t(n) = 1 + 2 n + 2 \cdot t\left(\frac{n}{2}\right) = 3 + 4n + 4 \cdot t\left(\frac{n}{4}\right) = 7 + 6n + 8 \cdot t\left(\frac{n}{8}\right) = \ldots }$$
$$\small{= (2^k-1) + k \cdot 2n+ 2^k \cdot t\left(\frac{n}{2^k}\right)}$$
$$\small \frac{n}{2^k} = 1 \iff k=\log_2 n$$
$$t(n) = (n-1) + (\log_2 n) \cdot 2n + n \cdot 1 $$
$$\boxed{t(n) = 2n \cdot \log_2 n + 2n -1}$$
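For powers of two, unrolling the recurrence numerically agrees with the boxed closed form (a sketch; `t_merge` is an illustrative helper name):

```python
from math import log2

def t_merge(n):
    # the merge sort recurrence above, valid for n a power of two
    if n == 1:
        return 1
    return 1 + 2 * n + 2 * t_merge(n // 2)

for n in (2, 8, 64, 1024):
    print(n, t_merge(n), 2 * n * log2(n) + 2 * n - 1)
```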
```python
def fib(n):
    if n < 2 :
        return n
    else :
        return fib(n-1) + fib(n-2)
```
$$
t(n) =
\begin{cases}
1 & , & n < 2\\
1 + t(n-1) + t(n-2) & , & n \ge 2\\
\end{cases}
$$
$$\small{ t(n) = 1 + t(n-1) + t(n-2) = (1 + 1) + 2 \cdot t(n-2) + t(n-3)}$$
$$\small{= (1+1+2) + 3 \cdot t(n-3) + 2 \cdot t(n-4) = (1+1+2+3) + 5 \cdot t(n-4) + 3 \cdot t(n-5) }$$
$$\small{= (1+1+2+3+5) + 8 \cdot t(n-5) + 5 \cdot t(n-6)}$$
$$\small{ = \ldots = \left(1 + \sum_{i=1}^{k}{fib(i)}\right) + fib(k+1) \cdot t(n-k) + fib(k) \cdot t(n-(k+1))}$$
It will be easier to establish upper and lower bounds:
$$
g(n) =
\begin{cases}
1 & , & n < 2\\
1 + 2 \cdot g(n-2) & , & n \ge 2\\
\end{cases}
$$
$$
h(n) =
\begin{cases}
1 & , & n < 2\\
1 + 2 \cdot h(n-1) & , & n \ge 2\\
\end{cases}
$$
$$g(n) < t(n) < h(n)$$
$$g(n) = 1 + 2 \cdot g(n-2) = 3 + 4 \cdot g(n-4) = \ldots = (2^k-1) + 2^k \cdot g(n-2k)$$
$$n-2k = 0 \iff k=\frac{n}{2}$$
$$g(n) = (2^{n/2}-1) + 2^{n/2} \cdot 1 = 2 \cdot \left(\sqrt{2}\right)^n - 1$$
$$h(n) = t_{hanoi}(n) = 2^n - 1$$
$$\boxed{ 2 \cdot \left(\sqrt{2}\right)^n - 1 \;<\; t(n) \;<\; 2^n - 1}$$
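The two bounds can be verified numerically against the exact recurrence (a sketch; `t_fib` is an illustrative helper name):

```python
def t_fib(n):
    # exact step count of the naive fib: t(n) = 1 + t(n-1) + t(n-2)
    if n < 2:
        return 1
    return 1 + t_fib(n - 1) + t_fib(n - 2)

# prints True for each n: t(n) sits strictly between the two bounds
for n in (5, 10, 15, 20):
    print(n, 2 * (2**0.5)**n - 1 < t_fib(n) < 2**n - 1)
```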
## Asymptotic Notation
* Notation to express, in a compact way, the number of steps $t(n)$ an algorithm takes (best case and worst case)
<center><img src="../img/konplexutasuna.jpg" alt="Complexity" style="width: 600px;"/></center>
* **Upper Bound** : *Worst Case*
$$\small{O\left( f(n) \right) = \{ t : \mathbb{N} \to \mathbb{R}^+ \;\;:\;\; \exists c \in \mathbb{R}^+ \land \exists n_0 \in \mathbb{N} \;\;:\;\; \forall n \ge n_0 \;\; t(n) \le c \cdot f(n) \}}$$
$$t(n)=an+b \quad \to \quad t(n) \in O(n)$$
* **Lower Bound** : *Best Case*
$$\small{\Omega \left( f(n) \right) = \{ t : \mathbb{N} \to \mathbb{R}^+ \;\;:\;\; \exists c \in \mathbb{R}^+ \land \exists n_0 \in \mathbb{N} \;\;:\;\; \forall n \ge n_0 \;\; t(n) \ge c \cdot f(n) \}}$$
$$t(n)=an+b \quad \to \quad t(n) \in \Omega(n)$$
* **Exact Order of Magnitude** : *Worst Case* $\equiv$ *Best Case*
$$\small{\Theta \left( f(n) \right) = \{ t : \mathbb{N} \to \mathbb{R}^+ \;:\; \exists c,d \in \mathbb{R}^+ \land \exists n_0 \in \mathbb{N} \;:\; \forall n \ge n_0 \; c \cdot f(n) \ge t(n) \ge d \cdot f(n) \}}$$
→ We will use the simplest possible functions $f(n)$: $O(1) \;,\; O(n) \;,\; O(\log n) \;,\; O(n^2) \ldots$
### Some examples
* $t(n) = 3n^2 - 4n + 17$ → $\Theta(n^2)$
* $t_{worst}(n) = 4n + 2 \quad t_{best}(n) = 117 $ → $O(n) \quad \Omega(1)$
* $t_{worst}(n) = n^2 + n + 1 \quad t_{best}(n) = n \cdot \log_2 n+ 1 $ → $O(n^2) \quad \Omega(n \cdot \log n)$
### Complexity classes
$$\small{O(1) < O(\log n) < O(n) < O(n \cdot \log n) < O(n^2) < O(n^3) < O(2^n) < O(n!) }$$
## Step counts of Python's built-in functions and data structures
### Built-in functions `n = len(it)`
* `min(it)` , `max(it)` , `sum(it)` , `reversed(it)` : n
* `all(it)` , `any(it)` : [1,n]
* `sorted(it)` : n log n
* `range()` , `zip(it)` , `enumerate(it)`: 1
### Lists `n = len(z)`
* `list()` , `[]` , `z[i]` , `z[i] = x` , `len(z)` : 1
* `z.clear()` : 1
* `z.append(x)` : 1
* `z.extend(x)` , `list(x)` : len(x)
* `z.pop(-i)` , `del z[-i]` , `z.insert(-i,x)` : i
* `z[i:j]` : j-i
* `z.copy()` , `z.reverse()` : n
* `z1 == z2` , `z1 != z2` , `z1 < z2` , ... : [1,n]
* `z.count(x)` : n
* `z.index(x)` , `x in z` : [1,n]
* `z.remove(x)` : n
* `z.sort()` : n log n
### Dictionaries `n = len(h)`
* `dict()` , `{}` , `h[k]` , `h[k] = v` , `len(h)` , `h.get(k)` , `h.setdefault(k)` : 1
* `del h[k]` , `h.popitem()` , `h.pop(x)` : 1
* `h.keys()` , `h.values()` , `h.items()` : 1
* `x in h` : 1
* `dict.fromkeys(x)` , `h.update(x)` : len(x)
* `h.copy()` : n
* `h.clear()` : 1? n?
### Sets `n = len(s)`
* `set()` , `len(s)` , `s.add(x)` : 1
* `s.pop()` , `s.remove(x)` : 1
* `x in s` : 1
* `s.update(x)` : len(x)
* `s.copy()` : n
* `s.clear()` : 1? n?
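The gap between the `x in z` (list) and `x in s` (set) costs above is easy to observe directly (a sketch; absolute timings depend on the machine):

```python
import timeit

n = 100_000
z = list(range(n))
s = set(z)

# membership in a list scans up to n elements; in a set it is a hash lookup
t_list = timeit.timeit(lambda: (n - 1) in z, number=100)
t_set = timeit.timeit(lambda: (n - 1) in s, number=100)
print(t_list > t_set)  # the list lookup is far slower
```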
# Cycle-GAN
## Model Schema Definition
The purpose of this notebook is to present, in a simple format, the schema of the solution proposed to colorize pictures with a Cycle-GAN accelerated with FFT convolutions. To keep the schema simple, this notebook presents the code for a Cycle-GAN built as an MVP (Minimum Viable Product) that works on the problem proposed.
```
import re
import os
import urllib.request
import numpy as np
import random
import pickle
from PIL import Image
from skimage import color
import matplotlib.pyplot as plt
from glob import glob
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Model
from keras.layers import Conv2D, MaxPooling2D, Activation, BatchNormalization, UpSampling2D, Dropout, Flatten, Dense, Input, LeakyReLU, Conv2DTranspose,AveragePooling2D, Concatenate
from keras.models import load_model
from keras.optimizers import Adam
from keras.models import Sequential
from tensorflow.compat.v1 import set_random_seed
import keras.backend as K
import boto3
import time
from copy import deepcopy
%matplotlib inline
#import tqdm separately and silence its import output with %%capture
%%capture
from tqdm import tqdm_notebook as tqdm
#enter your bucket name and use boto3 to identify your region if you don't know it
bucket = None  # customize to your bucket
region = boto3.Session().region_name
#create the containers used to download files and send them to the bucket
from sagemaker import get_execution_role  # get_execution_role comes from the sagemaker package
role = get_execution_role()
containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/image-classification:latest',
'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/image-classification:latest',
'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/image-classification:latest',
'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/image-classification:latest'}
training_image = containers[boto3.Session().region_name]
def download(url):
'''
Downloads the file of a given url
'''
filename = url.split("/")[-1]
if not os.path.exists(filename):
urllib.request.urlretrieve(url, filename)
def upload_to_s3(channel, file):
'''
Save file in a given folder in the S3 bucket
'''
s3 = boto3.resource('s3')
data = open(file, "rb")
key = channel + '/' + file
s3.Bucket(bucket).put_object(Key=key, Body=data)
# MPII Human Pose
download('https://datasets.d2.mpi-inf.mpg.de/andriluka14cvpr/mpii_human_pose_v1.tar.gz')
upload_to_s3('people', 'mpii_human_pose_v1.tar.gz')
#untar the file
!tar xvzf mpii_human_pose_v1.tar.gz
#MIT coastal
download('http://cvcl.mit.edu/scenedatabase/coast.zip')
upload_to_s3('coast', 'coast.zip')
#unzip the file
!unzip coast.zip -d ./data
def image_read(file, size=(256,256)):
    '''
    This function loads and resizes the image to the passed size.
    Default image size is set to be 256x256
    '''
    # use a distinct local name so the keras `image` module is not shadowed
    img = image.load_img(file, target_size=size)
    return image.img_to_array(img)
def image_convert(file_paths,size=256,channels=3):
'''
Redimensions images to Numpy arrays of a certain size and channels. Default values are set to 256x256x3 for coloured
images.
Parameters:
file_paths: a path to the image files
size: an int or a 2x2 tuple to define the size of an image
channels: number of channels to define in the numpy array
'''
    # Normalize size to a (height, width) tuple so both cases share one path
    if isinstance(size, int):
        size = (size, size)
    # build a zeros array to hold all the images
    all_images_to_array = np.zeros((len(file_paths), size[0], size[1], channels), dtype='int64')
    for ind, i in enumerate(file_paths):
        # reads and resizes each image
        img = image_read(i, size=size)
        all_images_to_array[ind] = img.astype('int64')
    print('All Images shape: {} size: {:,}'.format(all_images_to_array.shape, all_images_to_array.size))
return all_images_to_array
file_paths = glob(r'./images/*.jpg')
X_train = image_convert(file_paths)
def rgb_to_lab(img, l=False, ab=False):
    """
    Takes in RGB channels in range 0-255 and outputs L or AB channels in range -1 to 1
    """
    img = img / 255
    lab = color.rgb2lab(img)
    if l:
        # L channel: [0, 100] -> [-1, 1]
        l_chan = lab[:, :, 0] / 50 - 1
        return l_chan[..., np.newaxis]
    else:
        # a/b channels: roughly [-128, 127] -> [-1, 1]
        return (lab[:, :, 1:] + 128) / 255 * 2 - 1
def lab_to_rgb(img):
"""
Takes in LAB channels in range -1 to 1 and out puts RGB chanels in range 0-255
"""
new_img = np.zeros((256,256,3))
for i in range(len(img)):
for j in range(len(img[i])):
pix = img[i,j]
new_img[i,j] = [(pix[0] + 1) * 50,(pix[1] +1) / 2 * 255 - 128,(pix[2] +1) / 2 * 255 - 128]
new_img = color.lab2rgb(new_img) * 255
new_img = new_img.astype('uint8')
return new_img
L = np.array([rgb_to_lab(image,l=True)for image in X_train])
AB = np.array([rgb_to_lab(image,ab=True)for image in X_train])
L_AB_channels = (L,AB)
with open('l_ab_channels.p','wb') as f:
pickle.dump(L_AB_channels,f)
# InstanceNormalization lives in the tensorflow_addons package
from tensorflow_addons.layers import InstanceNormalization

def resnet_block(x, num_conv=2, num_filters=512, kernel_size=(3,3), padding='same', strides=1):
    '''
    This function defines a ResNet block composed of `num_conv` convolution layers;
    it returns the sum of the block's input and the convolution output.
    Parameters
    x: the tensor used as input to the convolution layers
    num_conv: the number of convolutions inside the block
    num_filters: an int giving the number of output filters of each convolution
    kernel_size: an int or tuple giving the size of the convolution window
    padding: 'same' pads with zeros so the output keeps the input size, 'valid' does not
    strides: the pixel shift of the convolution window; it must stay 1 here so the
             output shape matches x and the residual sum is valid
    '''
    out = x
    for i in range(num_conv):
        out = Conv2D(num_filters, kernel_size=kernel_size, padding=padding, strides=strides)(out)
        out = InstanceNormalization()(out)
        out = LeakyReLU(0.2)(out)
    return out + x
```
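The channel scaling used inside `rgb_to_lab` and `lab_to_rgb` can be checked on its own, without `skimage`: the L channel in $[0,100]$ is mapped to $[-1,1]$ as $L/50-1$, and the a/b channels in roughly $[-128,127]$ as $(c+128)/255\cdot2-1$. A minimal sketch of the round trip (the `scale_*`/`unscale_*` helpers are illustrative names, not part of the notebook):

```python
import numpy as np

def scale_l(l):    # L channel: [0, 100] -> [-1, 1]
    return l / 50 - 1

def unscale_l(l):  # inverse mapping back to [0, 100]
    return (l + 1) * 50

def scale_ab(c):   # a/b channels: [-128, 127] -> [-1, 1]
    return (c + 128) / 255 * 2 - 1

def unscale_ab(c): # inverse mapping back to [-128, 127]
    return (c + 1) / 2 * 255 - 128

l = np.array([0.0, 50.0, 100.0])
ab = np.array([-128.0, 0.0, 127.0])
print(unscale_l(scale_l(l)))    # round trip recovers the original values
print(unscale_ab(scale_ab(ab)))
```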
### Generator
```
def generator(filters=64, num_enc_layers=4, num_resblock=4, name="Generator"):
    '''
    The generator is an autoencoder: a series of convolution layers first
    extracts features of the input image (the encoder), a stack of ResNet
    blocks processes the latent space, and transposed convolutions upsample
    back to the image size (the decoder).
    '''
    # defining input: the single luminosity (L) channel
    input = Input(shape=(256,256,1))
    x = input
    '''
    Encoder: num_enc_layers convolutions that double the number of filters each
    time (64 -> 128 -> 256 -> 512) and halve the spatial size (strides=2); the
    first layer uses a 5x5 kernel and the rest 3x3. We use InstanceNormalization
    throughout the model and LeakyReLU with an alpha of 0.2 as the activation
    function for the encoder, and ReLU as the activation for the decoder.
    Between both of them, in the latent space, we insert num_resblock ResNet blocks.
    '''
    for lay in range(num_enc_layers):
        kernel = (5,5) if lay == 0 else (3,3)
        x = Conv2D(filters * 2**lay, kernel, padding='same', strides=2)(x)
        x = InstanceNormalization()(x)
        x = LeakyReLU(0.2)(x)
'''
----------------------------------LATENT SPACE---------------------------------------------
'''
    for r in range(num_resblock):
x=resnet_block(x)
'''
----------------------------------LATENT SPACE---------------------------------------------
'''
x=Conv2DTranspose(256,(3,3),padding='same',strides=2)(x)
x=InstanceNormalization()(x)
x=Activation('relu')(x)
x=Conv2DTranspose(128,(3,3),padding='same',strides=2)(x)
x=InstanceNormalization()(x)
x=Activation('relu')(x)
x=Conv2DTranspose(64,(3,3),padding='same',strides=2)(x)
x=InstanceNormalization()(x)
x=Activation('relu')(x)
x=Conv2DTranspose(32,(5,5),padding='same',strides=2)(x)
x=InstanceNormalization()(x)
x=Activation('relu')(x)
x=Conv2D(2,(3,3),padding='same')(x)
output=Activation('tanh')(x)
model=Model(input,output,name=name)
return model
```
## Discriminator
```
def discriminator(name="Discriminator"):
    # the layers used here are already imported at the top of the notebook
    # defining input: the two colour (AB) channels
    input = Input(shape=(256,256,2))
    x = input
    x = Conv2D(32,(3,3), padding='same', strides=2)(x)
    x = LeakyReLU(0.2)(x)
    x = Dropout(0.25)(x)
    x = Conv2D(64,(3,3), padding='same', strides=2)(x)
    x = BatchNormalization()(x)
x=LeakyReLU(0.2)(x)
x=Dropout(0.25)(x)
x=Conv2D(128,(3,3), padding='same', strides=2)(x)
x=BatchNormalization()(x)
x=LeakyReLU(0.2)(x)
x=Dropout(0.25)(x)
x=Conv2D(256,(3,3), padding='same',strides=2)(x)
x=BatchNormalization()(x)
x=LeakyReLU(0.2)(x)
x=Dropout(0.25)(x)
x=Flatten()(x)
x=Dense(1)(x)
output=Activation('sigmoid')(x)
model=Model(input,output,name=name)
return model
```
## Building GAN Model
```
# Build the two discriminators and freeze them while the generators train
discriminator_A = discriminator(name="Discriminator_A")
discriminator_B = discriminator(name="Discriminator_B")
discriminator_A.trainable = False
discriminator_B.trainable = False
# Build the two generators (A->B and B->A)
generator_B = generator(name="Generator_A_B")
generator_A = generator(name="Generator_B_A")
# Chain the models: each generated image is judged by a discriminator and is
# mapped back by the opposite generator to close the cycle.
# NOTE: in this colorization setting the generators map 1 channel (L) to 2
# channels (AB), so the cycle calls below are schematic; a full implementation
# needs matching channel counts between the two generators.
input_a = Input(shape=(256,256,1))
input_b = Input(shape=(256,256,1))
decision_A = discriminator_A(generator_A(input_b))
decision_B = discriminator_B(generator_B(input_a))
cycle_A = generator_A(generator_B(input_a))
cycle_B = generator_B(generator_A(input_b))
#creates lists to log the losses and accuracy
gen_losses = []
disc_real_losses = []
disc_fake_losses=[]
disc_acc = []
#train the generator on a full set of 320 and the discriminator on a half set of 160 for each epoch
#discriminator is given real and fake y's while generator is always given real y's
n = 320
y_train_fake = np.zeros([160,1])
y_train_real = np.ones([160,1])
y_gen = np.ones([n,1])
#Optional label smoothing
#y_train_real -= .1
#Pick batch size and number of epochs, number of epochs depends on the number of photos per epoch set above
num_epochs=1500
batch_size=32
#run and train until photos meet expectations (stop & restart model with tweaks if loss goes to 0 in discriminator)
for epoch in tqdm(range(1,num_epochs+1)):
    #shuffle L and AB channels with the SAME permutation so the (L, AB) pairs
    #stay aligned, then take a subset for each network's training size
    perm = np.random.permutation(len(X_train_L))
    l = X_train_L[perm[:n]]
    ab = X_train_AB[perm[:160]]
fake_images = generator.predict(l[:160], verbose=1)
#Train on Real AB channels
d_loss_real = discriminator.fit(x=ab, y= y_train_real,batch_size=32,epochs=1,verbose=1)
disc_real_losses.append(d_loss_real.history['loss'][-1])
#Train on fake AB channels
d_loss_fake = discriminator.fit(x=fake_images,y=y_train_fake,batch_size=32,epochs=1,verbose=1)
disc_fake_losses.append(d_loss_fake.history['loss'][-1])
#append the loss and accuracy and print loss
disc_acc.append(d_loss_fake.history['acc'][-1])
#Train the gan by producing AB channels from L
g_loss = combined_network.fit(x=l, y=y_gen,batch_size=32,epochs=1,verbose=1)
#append and print generator loss
gen_losses.append(g_loss.history['loss'][-1])
#every 50 epochs it prints a generated photo and every 100 it saves the model under that epoch
if epoch % 50 == 0:
print('Reached epoch:',epoch)
pred = generator.predict(X_test_L[2].reshape(1,256,256,1))
img = lab_to_rgb(np.dstack((X_test_L[2],pred.reshape(256,256,2))))
plt.imshow(img)
plt.show()
if epoch % 100 == 0:
generator.save('generator_' + str(epoch)+ '_v3.h5')

# The cells below use the TensorFlow 1.x graph API, which was never imported
# above; scipy.misc.imsave was removed in SciPy 1.2, so imageio's imwrite is a
# common stand-in (assumption: the imageio package is installed)
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
from imageio import imwrite as imsave
img_height = 256
img_width = 256
img_layer = 3
img_size = img_height * img_width
to_train = True
to_test = False
to_restore = False
output_path = "./output"
check_dir = "./output/checkpoints/"
temp_check = 0
max_epoch = 1
max_images = 100
h1_size = 150
h2_size = 300
z_size = 100
batch_size = 1
pool_size = 50
sample_size = 10
save_training_images = True
ngf = 32
ndf = 64
class CycleGAN():
def input_setup(self):
'''
This function basically setup variables for taking image input.
filenames_A/filenames_B -> takes the list of all training images
self.image_A/self.image_B -> Input image with each values ranging from [-1,1]
'''
filenames_A = tf.train.match_filenames_once("./input/horse2zebra/trainA/*.jpg")
self.queue_length_A = tf.size(filenames_A)
filenames_B = tf.train.match_filenames_once("./input/horse2zebra/trainB/*.jpg")
self.queue_length_B = tf.size(filenames_B)
filename_queue_A = tf.train.string_input_producer(filenames_A)
filename_queue_B = tf.train.string_input_producer(filenames_B)
image_reader = tf.WholeFileReader()
_, image_file_A = image_reader.read(filename_queue_A)
_, image_file_B = image_reader.read(filename_queue_B)
self.image_A = tf.subtract(tf.div(tf.image.resize_images(tf.image.decode_jpeg(image_file_A),[256,256]),127.5),1)
self.image_B = tf.subtract(tf.div(tf.image.resize_images(tf.image.decode_jpeg(image_file_B),[256,256]),127.5),1)
def input_read(self, sess):
'''
        It reads the input images from the image folder.
self.fake_images_A/self.fake_images_B -> List of generated images used for calculation of loss function of Discriminator
self.A_input/self.B_input -> Stores all the training images in python list
'''
# Loading images into the tensors
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
num_files_A = sess.run(self.queue_length_A)
num_files_B = sess.run(self.queue_length_B)
self.fake_images_A = np.zeros((pool_size,1,img_height, img_width, img_layer))
self.fake_images_B = np.zeros((pool_size,1,img_height, img_width, img_layer))
self.A_input = np.zeros((max_images, batch_size, img_height, img_width, img_layer))
self.B_input = np.zeros((max_images, batch_size, img_height, img_width, img_layer))
for i in range(max_images):
image_tensor = sess.run(self.image_A)
            if(image_tensor.size == img_size*batch_size*img_layer):
self.A_input[i] = image_tensor.reshape((batch_size,img_height, img_width, img_layer))
for i in range(max_images):
image_tensor = sess.run(self.image_B)
            if(image_tensor.size == img_size*batch_size*img_layer):
self.B_input[i] = image_tensor.reshape((batch_size,img_height, img_width, img_layer))
coord.request_stop()
coord.join(threads)
def model_setup(self):
''' This function sets up the model to train
self.input_A/self.input_B -> Set of training images.
self.fake_A/self.fake_B -> Generated images by corresponding generator of input_A and input_B
self.lr -> Learning rate variable
        self.cyc_A/ self.cyc_B -> Images generated after feeding self.fake_A/self.fake_B to the corresponding generator. These are used to calculate the cyclic loss
'''
self.input_A = tf.placeholder(tf.float32, [batch_size, img_width, img_height, img_layer], name="input_A")
self.input_B = tf.placeholder(tf.float32, [batch_size, img_width, img_height, img_layer], name="input_B")
self.fake_pool_A = tf.placeholder(tf.float32, [None, img_width, img_height, img_layer], name="fake_pool_A")
self.fake_pool_B = tf.placeholder(tf.float32, [None, img_width, img_height, img_layer], name="fake_pool_B")
self.global_step = tf.Variable(0, name="global_step", trainable=False)
self.num_fake_inputs = 0
self.lr = tf.placeholder(tf.float32, shape=[], name="lr")
with tf.variable_scope("Model") as scope:
self.fake_B = build_generator_resnet_9blocks(self.input_A, name="g_A")
self.fake_A = build_generator_resnet_9blocks(self.input_B, name="g_B")
self.rec_A = build_gen_discriminator(self.input_A, "d_A")
self.rec_B = build_gen_discriminator(self.input_B, "d_B")
scope.reuse_variables()
self.fake_rec_A = build_gen_discriminator(self.fake_A, "d_A")
self.fake_rec_B = build_gen_discriminator(self.fake_B, "d_B")
self.cyc_A = build_generator_resnet_9blocks(self.fake_B, "g_B")
self.cyc_B = build_generator_resnet_9blocks(self.fake_A, "g_A")
scope.reuse_variables()
self.fake_pool_rec_A = build_gen_discriminator(self.fake_pool_A, "d_A")
self.fake_pool_rec_B = build_gen_discriminator(self.fake_pool_B, "d_B")
def loss_calc(self):
        ''' In this function we define the variables for the loss calculations and the training model
        d_loss_A/d_loss_B -> loss for discriminator A/B
        g_loss_A/g_loss_B -> loss for generator A/B
        *_trainer -> the various trainers for the above loss functions
        *_summ -> summary variables for the above loss functions'''
cyc_loss = tf.reduce_mean(tf.abs(self.input_A-self.cyc_A)) + tf.reduce_mean(tf.abs(self.input_B-self.cyc_B))
disc_loss_A = tf.reduce_mean(tf.squared_difference(self.fake_rec_A,1))
disc_loss_B = tf.reduce_mean(tf.squared_difference(self.fake_rec_B,1))
g_loss_A = cyc_loss*10 + disc_loss_B
g_loss_B = cyc_loss*10 + disc_loss_A
d_loss_A = (tf.reduce_mean(tf.square(self.fake_pool_rec_A)) + tf.reduce_mean(tf.squared_difference(self.rec_A,1)))/2.0
d_loss_B = (tf.reduce_mean(tf.square(self.fake_pool_rec_B)) + tf.reduce_mean(tf.squared_difference(self.rec_B,1)))/2.0
optimizer = tf.train.AdamOptimizer(self.lr, beta1=0.5)
self.model_vars = tf.trainable_variables()
d_A_vars = [var for var in self.model_vars if 'd_A' in var.name]
g_A_vars = [var for var in self.model_vars if 'g_A' in var.name]
d_B_vars = [var for var in self.model_vars if 'd_B' in var.name]
g_B_vars = [var for var in self.model_vars if 'g_B' in var.name]
self.d_A_trainer = optimizer.minimize(d_loss_A, var_list=d_A_vars)
self.d_B_trainer = optimizer.minimize(d_loss_B, var_list=d_B_vars)
self.g_A_trainer = optimizer.minimize(g_loss_A, var_list=g_A_vars)
self.g_B_trainer = optimizer.minimize(g_loss_B, var_list=g_B_vars)
for var in self.model_vars: print(var.name)
#Summary variables for tensorboard
self.g_A_loss_summ = tf.summary.scalar("g_A_loss", g_loss_A)
self.g_B_loss_summ = tf.summary.scalar("g_B_loss", g_loss_B)
self.d_A_loss_summ = tf.summary.scalar("d_A_loss", d_loss_A)
self.d_B_loss_summ = tf.summary.scalar("d_B_loss", d_loss_B)
def save_training_images(self, sess, epoch):
if not os.path.exists("./output/imgs"):
os.makedirs("./output/imgs")
for i in range(0,10):
fake_A_temp, fake_B_temp, cyc_A_temp, cyc_B_temp = sess.run([self.fake_A, self.fake_B, self.cyc_A, self.cyc_B],feed_dict={self.input_A:self.A_input[i], self.input_B:self.B_input[i]})
imsave("./output/imgs/fakeB_"+ str(epoch) + "_" + str(i)+".jpg",((fake_A_temp[0]+1)*127.5).astype(np.uint8))
imsave("./output/imgs/fakeA_"+ str(epoch) + "_" + str(i)+".jpg",((fake_B_temp[0]+1)*127.5).astype(np.uint8))
imsave("./output/imgs/cycA_"+ str(epoch) + "_" + str(i)+".jpg",((cyc_A_temp[0]+1)*127.5).astype(np.uint8))
imsave("./output/imgs/cycB_"+ str(epoch) + "_" + str(i)+".jpg",((cyc_B_temp[0]+1)*127.5).astype(np.uint8))
imsave("./output/imgs/inputA_"+ str(epoch) + "_" + str(i)+".jpg",((self.A_input[i][0]+1)*127.5).astype(np.uint8))
imsave("./output/imgs/inputB_"+ str(epoch) + "_" + str(i)+".jpg",((self.B_input[i][0]+1)*127.5).astype(np.uint8))
def fake_image_pool(self, num_fakes, fake, fake_pool):
        ''' This function saves a generated image to the corresponding pool of images.
        At the start it keeps filling the pool until it is full; after that, with
        probability one half, it randomly selects an already stored image, replaces
        it with the new one, and returns the old image instead.'''
if(num_fakes < pool_size):
fake_pool[num_fakes] = fake
return fake
else :
p = random.random()
if p > 0.5:
random_id = random.randint(0,pool_size-1)
temp = fake_pool[random_id]
fake_pool[random_id] = fake
return temp
else :
return fake
def train(self):
''' Training Function '''
# Load Dataset from the dataset folder
self.input_setup()
#Build the network
self.model_setup()
#Loss function calculations
self.loss_calc()
# Initializing the global variables
init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(init)
#Read input to nd array
self.input_read(sess)
#Restore the model to run the model from last checkpoint
if to_restore:
chkpt_fname = tf.train.latest_checkpoint(check_dir)
saver.restore(sess, chkpt_fname)
writer = tf.summary.FileWriter("./output/2")
if not os.path.exists(check_dir):
os.makedirs(check_dir)
# Training Loop
for epoch in range(sess.run(self.global_step),100):
print ("In the epoch ", epoch)
saver.save(sess,os.path.join(check_dir,"cyclegan"),global_step=epoch)
# Dealing with the learning rate as per the epoch number
if(epoch < 100) :
curr_lr = 0.0002
else:
curr_lr = 0.0002 - 0.0002*(epoch-100)/100
if(save_training_images):
self.save_training_images(sess, epoch)
# sys.exit()
for ptr in range(0,max_images):
print("In the iteration ",ptr)
print("Starting",time.time()*1000.0)
# Optimizing the G_A network
_, fake_B_temp, summary_str = sess.run([self.g_A_trainer, self.fake_B, self.g_A_loss_summ],feed_dict={self.input_A:self.A_input[ptr], self.input_B:self.B_input[ptr], self.lr:curr_lr})
writer.add_summary(summary_str, epoch*max_images + ptr)
fake_B_temp1 = self.fake_image_pool(self.num_fake_inputs, fake_B_temp, self.fake_images_B)
# Optimizing the D_B network
_, summary_str = sess.run([self.d_B_trainer, self.d_B_loss_summ],feed_dict={self.input_A:self.A_input[ptr], self.input_B:self.B_input[ptr], self.lr:curr_lr, self.fake_pool_B:fake_B_temp1})
writer.add_summary(summary_str, epoch*max_images + ptr)
# Optimizing the G_B network
_, fake_A_temp, summary_str = sess.run([self.g_B_trainer, self.fake_A, self.g_B_loss_summ],feed_dict={self.input_A:self.A_input[ptr], self.input_B:self.B_input[ptr], self.lr:curr_lr})
writer.add_summary(summary_str, epoch*max_images + ptr)
fake_A_temp1 = self.fake_image_pool(self.num_fake_inputs, fake_A_temp, self.fake_images_A)
# Optimizing the D_A network
_, summary_str = sess.run([self.d_A_trainer, self.d_A_loss_summ],feed_dict={self.input_A:self.A_input[ptr], self.input_B:self.B_input[ptr], self.lr:curr_lr, self.fake_pool_A:fake_A_temp1})
writer.add_summary(summary_str, epoch*max_images + ptr)
self.num_fake_inputs+=1
sess.run(tf.assign(self.global_step, epoch + 1))
writer.add_graph(sess.graph)
def test(self):
''' Testing Function'''
print("Testing the results")
self.input_setup()
self.model_setup()
saver = tf.train.Saver()
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
self.input_read(sess)
chkpt_fname = tf.train.latest_checkpoint(check_dir)
saver.restore(sess, chkpt_fname)
if not os.path.exists("./output/imgs/test/"):
os.makedirs("./output/imgs/test/")
for i in range(0,100):
fake_A_temp, fake_B_temp = sess.run([self.fake_A, self.fake_B],feed_dict={self.input_A:self.A_input[i], self.input_B:self.B_input[i]})
imsave("./output/imgs/test/fakeB_"+str(i)+".jpg",((fake_A_temp[0]+1)*127.5).astype(np.uint8))
imsave("./output/imgs/test/fakeA_"+str(i)+".jpg",((fake_B_temp[0]+1)*127.5).astype(np.uint8))
imsave("./output/imgs/test/inputA_"+str(i)+".jpg",((self.A_input[i][0]+1)*127.5).astype(np.uint8))
imsave("./output/imgs/test/inputB_"+str(i)+".jpg",((self.B_input[i][0]+1)*127.5).astype(np.uint8))
def main():
model = CycleGAN()
if to_train:
model.train()
elif to_test:
model.test()
if __name__ == '__main__':
main()
```
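The `fake_image_pool` logic above (keep a history of generated images and, once the pool is full, sometimes hand the discriminator an old image instead of the newest one) can be isolated into a small standalone class. A sketch, with plain Python objects standing in for image tensors; `ImagePool` is an illustrative name, not part of the original code:

```python
import random

class ImagePool:
    """History buffer of generated images, used to stabilise discriminator training."""

    def __init__(self, pool_size=50):
        self.pool_size = pool_size
        self.images = []

    def query(self, image):
        # While the pool is filling, store and return the new image unchanged.
        if len(self.images) < self.pool_size:
            self.images.append(image)
            return image
        # Once full: with probability 0.5 swap the new image with a stored one
        # and return the old image; otherwise return the new image directly.
        if random.random() > 0.5:
            idx = random.randrange(self.pool_size)
            old = self.images[idx]
            self.images[idx] = image
            return old
        return image

pool = ImagePool(pool_size=2)
print(pool.query('a'), pool.query('b'))  # a b  (pool still filling)
```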
<a href="https://qworld.net" target="_blank" align="left"><img src="../qworld/images/header.jpg" align="left"></a>
$$
\newcommand{\set}[1]{\left\{#1\right\}}
\newcommand{\abs}[1]{\left\lvert#1\right\rvert}
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
\newcommand{\inner}[2]{\left\langle#1,#2\right\rangle}
\newcommand{\bra}[1]{\left\langle#1\right|}
\newcommand{\ket}[1]{\left|#1\right\rangle}
\newcommand{\braket}[2]{\left\langle#1|#2\right\rangle}
\newcommand{\ketbra}[2]{\left|#1\right\rangle\left\langle#2\right|}
\newcommand{\angleset}[1]{\left\langle#1\right\rangle}
\newcommand{\expected}[1]{\left\langle#1\right\rangle}
\newcommand{\dv}[2]{\frac{d#1}{d#2}}
\newcommand{\real}[0]{\mathfrak{Re}}
$$
# Projective Measurement
_prepared by Israel Gelover_
### <a name="definition_3_6">Definition 3.6</a> Projector
Given a subset of vectors $\set{\ket{f_i}}_{i=1}^n \subset \mathcal{H}$, we define the _Projector_ over the subspace $\mathcal{F}$ generated by them as:
\begin{equation*}
\begin{split}
\hat{P}:\mathcal{H} &\to \mathcal{F} \\
\ket{\psi} &\to \sum_{i=1}^n \ket{f_i}\braket{f_i}{\psi}
\end{split}
\end{equation*}
It is clear that what we obtain from this operator is a linear combination of $\set{\ket{f_i}}_{i=1}^n$, and therefore the resulting vector is an element of the subspace generated by these vectors. It is precisely this definition that we used to calculate the <a href="./WorkedExample.ipynb#5">Wave function collapse</a>.
### <a name="definition_3_7">Definition 3.7</a> Projective Measurement
A _Projective Measurement_ is described with a self-adjoint operator
\begin{equation*}
\hat{M} = \sum_m m\hat{P}_m
\end{equation*}
Where $\hat{P}_m$ is a projector on the subspace corresponding to the eigenvalue $m$ of $\hat{M}$.
This is known as the spectral decomposition of the $\hat{M}$ operator, and any self-adjoint operator can be expressed in terms of its spectral decomposition. We emphasize that this way of decomposing a projective measurement is very useful to us since it involves the eigenvalues and the projectors associated with these eigenvalues.
### Example
Let
\begin{equation}\label{op_h}
\hat{H} = \ketbra{0}{0} + i\ketbra{1}{2} - i\ketbra{2}{1}
\end{equation}
Let us recall that in the example of <a href="./WorkedExample.ipynb#3">Time evolution</a> we saw that this is a self-adjoint operator, so we can use it as a projective measurement; the way to do so is by obtaining its spectral decomposition through the eigenvalues and eigenvectors that we already calculated. That is
\begin{equation*}
\begin{split}
\varepsilon_1 = 1 \qquad&\qquad \ket{\varepsilon_1} = \ket{0} \\
\varepsilon_2 = 1 \qquad&\qquad \ket{\varepsilon_2} = \frac{1}{\sqrt{2}}(\ket{1} - i\ket{2}) \\
\varepsilon_3 = -1 \qquad&\qquad \ket{\varepsilon_3} = \frac{1}{\sqrt{2}}(\ket{1} + i\ket{2})
\end{split}
\end{equation*}
Note that we only have two different eigenvalues: $1$ and $-1$. The eigenvalue $1$ has multiplicity $2$ and therefore has associated a subspace of dimension $2$, while the eigenvalue $-1$ has multiplicity $1$ and therefore has associated a subspace of dimension $1$.
Thus
\begin{equation*}
\hat{H} = 1\cdot\hat{P_1} + (-1)\cdot\hat{P_{-1}}
\end{equation*}
Where, from <a href="#definition_3_6">Definition 3.6</a>
\begin{equation*}
\begin{split}
\hat{P_1} &= \ketbra{\varepsilon_1}{\varepsilon_1} + \ketbra{\varepsilon_2}{\varepsilon_2} \\
\hat{P_{-1}} &= \ketbra{\varepsilon_3}{\varepsilon_3}
\end{split}
\end{equation*}
Therefore
\begin{equation*}
\hat{H} = \ketbra{\varepsilon_1}{\varepsilon_1} + \ketbra{\varepsilon_2}{\varepsilon_2} - \ketbra{\varepsilon_3}{\varepsilon_3}
\end{equation*}
Something that may not be so clear from this result is that in $\hat{H} = \ketbra{0}{0} + i\ketbra{1}{2} - i\ketbra{2}{1}$ we have $\hat{H}$ expressed in terms of the basis $\set{\ket{0}, \ket{1}, \ket{2}}$, whereas the spectral decomposition diagonalizes the operator $\hat{H}$: expressed in terms of its eigenvectors, it turns out to be a diagonal matrix, in this case
\begin{equation*}
\hat{H} = \begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & -1
\end{pmatrix}
\end{equation*}
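As a quick numerical sanity check (a sketch using `numpy`, which is not part of this notebook), we can build $\hat{H}$ as a matrix, diagonalize it, and verify that summing eigenvalue times rank-one projector recovers the operator:

```python
import numpy as np

# Sketch: build H = |0><0| + i|1><2| - i|2><1| and check its
# spectral decomposition numerically.
ket = np.eye(3, dtype=complex)  # ket[k] represents |k>

H = (np.outer(ket[0], ket[0].conj())
     + 1j * np.outer(ket[1], ket[2].conj())
     - 1j * np.outer(ket[2], ket[1].conj()))

vals, vecs = np.linalg.eigh(H)  # H is self-adjoint, so eigh applies
# One rank-1 projector per eigenvector; projectors of a repeated
# eigenvalue simply add up, as in P_1 above.
H_rebuilt = sum(v * np.outer(vecs[:, k], vecs[:, k].conj())
                for k, v in enumerate(vals))

assert np.allclose(H, H_rebuilt)
assert np.allclose(np.sort(vals), [-1, 1, 1])  # eigenvalues -1, 1, 1
```

`np.linalg.eigh` returns an orthonormal eigenbasis for any Hermitian matrix, which is exactly the ingredient the spectral decomposition needs.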
## Measurement Related Postulates
This formalism of projective measurements allows us on the one hand to group the postulates of quantum mechanics related to measurement in a single formalism, but on the other hand, it also allows us to focus on the state that we want to measure and on the state of the system after the measurement. Let us recall that the postulates of quantum mechanics related to measurement focus on the value that we can measure, that is, on the eigenvalue of an observable that is related to a measurable physical quantity. This formalism allows us to focus on the state in which the vector (that we originally had) ends up after measurement, and so to speak, to put aside for a bit what we are measuring.
In the following proposition we are going to describe the postulates related to the measurement that we already mentioned, but in a more condensed way in two quantities.
### <a name="proposition_3_8">Proposition 3.8</a>
Let $\hat{M} = \sum_m m\hat{P_m}$ be a projective measurement expressed in terms of its spectral decomposition. ($\hat{M}$ can be an observable)
1. If the system is in the state $\ket{\psi}$, the probability of measuring the eigenvalue $m$ is given by
\begin{equation*}
P_\psi(m) = \bra{\psi}\hat{P_m}\ket{\psi}
\end{equation*}
2. The state of the system immediately after measuring the eigenvalue $m$ is given by
\begin{equation*}
\ket{\psi} \to \frac{\hat{P_m}\ket{\psi}}{\sqrt{P_\psi(m)}}
\end{equation*}
**Proof:**
1. Let's verify the first statement by calculating the expected value. Recall that by <a href="#definition_3_6">Definition 3.6</a>, the $m$ projector applied to $\ket{\psi}$ is given by
\begin{equation*}
\hat{P_m}\ket{\psi} = \sum_{i=1}^{g_m} \ket{m_i}\braket{m_i}{\psi}
\end{equation*}
where $g_m$ is the multiplicity of the eigenvalue $m$. Thus
\begin{equation*}
\begin{split}
\bra{\psi}\hat{P_m}\ket{\psi} &= \bra{\psi} \sum_{i=1}^{g_m} \ket{m_i}\braket{m_i}{\psi} = \sum_{i=1}^{g_m} \braket{\psi}{m_i}\braket{m_i}{\psi} \\
&= \sum_{i=1}^{g_m} \braket{m_i}{\psi}^*\braket{m_i}{\psi} = \sum_{i=1}^{g_m} \abs{\braket{m_i}{\psi}}^2 \\
&= P_\psi(m)
\end{split}
\end{equation*}
This last equality is given by <a href="./Postulates.ipynb#definition_3_1">Postulate V</a>.
2. Let's remember that projecting a vector can change its norm, and therefore we need to renormalize it. Let us then calculate the magnitude of the projection by taking the inner product of the projection with itself. In the previous section we gave the expression for the projector $\hat{P_m}$ applied to $\ket{\psi}$; let us now see that
\begin{equation*}
\bra{\psi}\hat{P_m}^* = \sum_{i=1}^{g_m} \braket{\psi}{m_i}\bra{m_i}
\end{equation*}
Thus
\begin{equation*}
\begin{split}
\abs{\hat{P_m}\ket{\psi}}^2 &= \bra{\psi}\hat{P_m}^* \hat{P_m}\ket{\psi} \\
&= \sum_{j=1}^{g_m} \braket{\psi}{m_j}\bra{m_j} \sum_{i=1}^{g_m} \ket{m_i}\braket{m_i}{\psi} \\
&= \sum_{i,j=1}^{g_m} \braket{\psi}{m_j}\braket{m_j}{m_i}\braket{m_i}{\psi} \\
&= \sum_{i=1}^{g_m} \braket{\psi}{m_i}\braket{m_i}{\psi} \\
&= \sum_{i=1}^{g_m} \braket{m_i}{\psi}^*\braket{m_i}{\psi} \\
&= \sum_{i=1}^{g_m} \abs{\braket{m_i}{\psi}}^2 \\
&= P_\psi(m) \\
\implies \\
\abs{\hat{P_m}\ket{\psi}} &= \sqrt{P_\psi(m)}
\end{split}
\end{equation*}
### <a name="remark">Remark</a>
In summary, with this projector formalism, we can express the measurement-related postulates in two simpler expressions:
**1. The probability of measuring an eigenvalue is the expected value of the projector associated with the eigenvalue.**
**2. The state of the system after measurement is obtained by applying the projector to the state and renormalizing. The normalization constant is precisely the square root of the probability, calculated in the previous section.**
As we have already mentioned, this formalism is useful when we are not so interested in what we are going to measure, but rather in the state of the system after the measurement. That is, instead of identifying the observable, calculating its eigenvalues, and calculating the probability from the eigenvectors associated with those eigenvalues, everything the measurement postulates require in order to find a probability is already implicit in the projector formalism.
On the other hand, when we talk about measurements in quantum computing, we usually refer to measurements in the computational basis, and the computational basis is the basis of the Pauli operator $\hat{\sigma_z}$. So the observable that is almost always used in quantum computing is $\hat{\sigma_z}$, that is, when talking about measuring a qubit, we are talking about measuring the observable $\hat{\sigma_z}$ and calculating the probability to find the eigenvalue $+1$ or the eigenvalue $-1$ of $\hat{\sigma_z}$. The eigenvalue $+1$ is associated with the qubit $\ket{0}$ and the eigenvalue $-1$ is associated with the qubit $\ket{1}$.
Observables are very useful when we are interested in measuring magnitudes that have a physical interpretation such as momentum, position or energy, on the other hand, in quantum computing we are interested in knowing if when the measurement is made the system will be in a state $\ket{0}$ or in a state $\ket{1}$, beyond the measured eigenvalue or the observable with which you are working. It is for this reason that this formalism of projective measurements is particularly useful in this area.
Let's see how we can apply it in a concrete example.
### <a name="example">Example</a>
Let $\ket{\psi}$ be the state
\begin{equation*}
\ket{\psi} = \sqrt{\frac{3}{8}}\ket{00} + \frac{1}{2}\ket{01} + \frac{1}{2}\ket{10} + \frac{1}{\sqrt{8}}\ket{11} \in \mathbb{B}^{\otimes2}
\end{equation*}
We note that the state is normalized, that is, it is a valid state in $\mathbb{B}^{\otimes2}$ and thus we can answer the following questions.
**1. What is the probability of finding the first qubit in $\ket{0}$?**
To emphasize the previous comments, let's start by considering the following projective measurement, which corresponds to the expression in terms of outer products of the Pauli operator $\hat{\sigma_z}$
\begin{equation*}
\hat{M} = (1)\hat{P_0} + (-1)\hat{P_1} \enspace \text{ where } \enspace \hat{P_0} = \ketbra{0}{0}, \enspace \hat{P_1} = \ketbra{1}{1}
\end{equation*}
We know that to answer this question we need to find the probability of measuring the eigenvalue of $\hat{M}$ associated with the qubit $\ket{0}$, but note that according to section **1.** of the previous remark, the only thing relevant to this calculation is $\hat{P_0}$. That is, we are not interested in the eigenvalue that is measured, nor in the observable. To accentuate this fact, we could even have considered any other projective measurement such as
\begin{equation*}
\hat{M} = \alpha\hat{P_0} + \beta\hat{P_1}
\end{equation*}
and this would still be a self-adjoint operator, and therefore a valid projective measurement, for all $\alpha,\beta \in \mathbb{R}$ (with $\alpha \neq \beta$ so that the two outcomes remain distinguishable).
For all this to agree with the formalism of the postulates of quantum mechanics, we usually take $\hat{M} = \hat{\sigma_z}$ as we did initially, that is, formally from a physical point of view, what we will do is measure the observable $\hat{\sigma_z}$. However, from a mathematical point of view we can measure any projective measurement (self-adjoint operator) that distinguishes with a certain eigenvalue the qubit $\ket{0}$ and with a different eigenvalue the qubit $\ket{1}$.
In summary, what is really relevant for this calculation is the projector of the eigenvalue associated with the state we want to measure, in this case what we want to calculate is $\bra{\psi}\hat{P_0}\ket{\psi}$, except that we are working on $\mathbb{B}^{\otimes2}$, but that detail will be clarified below.
According to section **1.** of the previous remark, to calculate this probability, we must calculate the expected value of a projector, but we cannot simply consider $\hat{P_0}$ because of the fact that we just mentioned, that we are working on $\mathbb{B}^{\otimes2}$. Since in this case the state of the second qubit is not relevant, what we need is the following
\begin{equation*}
\begin{split}
&\bra{\psi}\hat{P_0}\otimes\hat{I}\ket{\psi} = \\
&= \bra{\psi} \left[\ketbra{0}{0}\otimes\hat{I}\left(\sqrt{\frac{3}{8}}\ket{00} + \frac{1}{2}\ket{01} + \frac{1}{2}\ket{10} + \frac{1}{\sqrt{8}}\ket{11} \right) \right] \\
&= \bra{\psi} \left[ \sqrt{\frac{3}{8}} \ketbra{0}{0}\otimes\hat{I}\ket{00} + \frac{1}{2} \ketbra{0}{0}\otimes\hat{I}\ket{01} + \frac{1}{2} \ketbra{0}{0}\otimes\hat{I}\ket{10} + \frac{1}{\sqrt{8}} \ketbra{0}{0}\otimes\hat{I}\ket{11} \right] \\
\end{split}
\end{equation*}
Let us recall from <a href="../2-math/TensorProduct.ipynb#definition_2_11">Definition 2.11</a>, that $\hat{A} \otimes \hat{B}(\ket{a} \otimes \ket{b}) = (\hat{A}\ket{a})\otimes(\hat{B}\ket{b})$. This means that we must apply the projector $\ketbra{0}{0}$ to the first qubit and the operator $\hat{I}$ to the second qubit, since its state is not relevant to us. Thus
\begin{equation*}
\begin{split}
\bra{\psi}\hat{P_0}\otimes\hat{I}\ket{\psi} &= \bra{\psi} \left( \sqrt{\frac{3}{8}} \ket{00} + \frac{1}{2} \ket{01} \right) \\
&= \left(\sqrt{\frac{3}{8}}\bra{00} + \frac{1}{2}\bra{01} + \frac{1}{2}\bra{10} + \frac{1}{\sqrt{8}}\bra{11}\right) \left(\sqrt{\frac{3}{8}} \ket{00} + \frac{1}{2} \ket{01}\right) \\
&= \left(\sqrt{\frac{3}{8}}\right)\left(\sqrt{\frac{3}{8}}\right)\braket{00}{00} + \left(\frac{1}{2}\right)\left(\frac{1}{2}\right)\braket{01}{01} \\
&= \frac{3}{8} + \frac{1}{4} = \frac{5}{8}
\end{split}
\end{equation*}
This is consistent with the intuition given by the fact that the amplitudes associated with the states where the first qubit is $\ket{0}$ are $\sqrt{\frac{3}{8}}$ and $\frac{1}{2}$, and to calculate the probability of measuring these states we take the modulus squared of the amplitudes, which is known as _Born's rule_.
In summary, what we formally did was calculate the probability of measuring the eigenvalue $+1$ of the observable $\hat{\sigma_z}\otimes\hat{I}$, which is completely consistent with what the postulates tell us. But as we highlighted earlier, the only thing that was relevant for carrying out this calculation was the projector associated with the state we wanted to measure; we did not need to know the observable or the eigenvalue being measured. This allows us to set aside some of the formalism that the postulates entail.
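This calculation can be checked numerically. The following `numpy` sketch (not part of the original notebook) builds $\ket{\psi}$ with Kronecker products and evaluates $\bra{\psi}\hat{P_0}\otimes\hat{I}\ket{\psi}$:

```python
import numpy as np

# Sketch: <psi| (P0 ⊗ I) |psi> = 5/8 for the two-qubit state above.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Basis order |00>, |01>, |10>, |11> via Kronecker products
psi = (np.sqrt(3/8) * np.kron(ket0, ket0)
       + 0.5 * np.kron(ket0, ket1)
       + 0.5 * np.kron(ket1, ket0)
       + (1/np.sqrt(8)) * np.kron(ket1, ket1))

P0 = np.outer(ket0, ket0)        # |0><0|
M = np.kron(P0, np.eye(2))       # P0 ⊗ I acts only on the first qubit
prob = psi @ M @ psi             # real amplitudes, so no conjugation needed
assert np.isclose(prob, 5/8)
```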
**2. What is the state of the system immediately after measurement?**
Section **2.** of the previous remark tells us that
\begin{equation*}
\begin{split}
\ket{\psi} \longrightarrow \frac{\hat{P_0}\otimes\hat{I}\ket{\psi}}{\sqrt{P(\ket{0})}} &= \frac{\hat{P_0}\otimes\hat{I}\ket{\psi}}{\sqrt{\frac{5}{8}}} \\
&= \sqrt{\frac{8}{5}}\hat{P_0}\otimes\hat{I}\ket{\psi} \\
&= \sqrt{\frac{8}{5}}\left(\sqrt{\frac{3}{8}}\ket{00} + \frac{1}{2}\ket{01}\right) \\
&= \sqrt{\frac{3}{5}}\ket{00} + \sqrt{\frac{2}{5}}\ket{01}
\end{split}
\end{equation*}
Where $P(\ket{0})$ is the probability that we just calculated in the first question. Technically it would have to be the probability of measuring the eigenvalue $+1$, but from what we explained previously, we allow ourselves to use this notation.
Note that this new state is the projection of $\ket{\psi}$ onto the subspace generated by all the states that have the first qubit in $\ket{0}$, namely $\set{\ket{00}, \ket{01}}$, as expected. On the other hand, we note that the normalization is correct, since $\abs{\sqrt{\frac{3}{5}}}^2 + \abs{\sqrt{\frac{2}{5}}}^2 = \frac{3}{5} + \frac{2}{5} = 1$.
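We can also verify the collapsed state numerically; this `numpy` sketch (not from the original notebook) applies $\hat{P_0}\otimes\hat{I}$ and renormalizes:

```python
import numpy as np

# Sketch of the collapse: apply P0 ⊗ I to |psi> and renormalize.
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.sqrt(3/8) * np.kron(ket0, ket0) + 0.5 * np.kron(ket0, ket1)
       + 0.5 * np.kron(ket1, ket0) + (1/np.sqrt(8)) * np.kron(ket1, ket1))

M = np.kron(np.outer(ket0, ket0), np.eye(2))   # P0 ⊗ I
projected = M @ psi
post = projected / np.linalg.norm(projected)   # divide by sqrt(P(|0>))

# Expected collapsed state: sqrt(3/5)|00> + sqrt(2/5)|01>
expected = np.array([np.sqrt(3/5), np.sqrt(2/5), 0.0, 0.0])
assert np.allclose(post, expected)
assert np.isclose(np.linalg.norm(post), 1.0)
```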
**3. What is the probability of measuring some qubit in $\ket{1}$?**
Let us consider the following events
\begin{equation*}
\begin{split}
A &= \text{Measure the first qubit in } \ket{1} \\
B &= \text{Measure the second qubit in } \ket{1}
\end{split}
\end{equation*}
Recall from probability theory that
\begin{equation*}
P(A \cup B) = P(A) + P(B) - P(A \cap B)
\end{equation*}
So what we are looking for is
\begin{equation*}
\begin{split}
P(A \cup B) &= \bra{\psi}\hat{P_1}\otimes\hat{I}\ket{\psi} + \bra{\psi}\hat{I}\otimes\hat{P_1}\ket{\psi} - \bra{\psi}\hat{P_1}\otimes\hat{P_1}\ket{\psi} \\
&= \bra{\psi}\left(\frac{1}{2}\ket{10} + \frac{1}{\sqrt{8}}\ket{11}\right) + \bra{\psi}\left(\frac{1}{2}\ket{01} + \frac{1}{\sqrt{8}}\ket{11}\right) - \bra{\psi}\left(\frac{1}{\sqrt{8}}\ket{11}\right) \\
&= \frac{1}{2}\braket{\psi}{10} + \frac{1}{\sqrt{8}}\braket{\psi}{11} + \frac{1}{2}\braket{\psi}{01} + \frac{1}{\sqrt{8}}\braket{\psi}{11} - \frac{1}{\sqrt{8}}\braket{\psi}{11} \\
&= \left(\frac{1}{2}\right)\left(\frac{1}{2}\right) + \left(\frac{1}{\sqrt{8}}\right)\left(\frac{1}{\sqrt{8}}\right) + \left(\frac{1}{2}\right)\left(\frac{1}{2}\right) \\
&= \frac{1}{4} + \frac{1}{8} + \frac{1}{4} = \frac{5}{8}
\end{split}
\end{equation*}
Note that the amplitudes of the terms of $\ket{\psi}$ that have some qubit in $\ket{1}$ are precisely $\frac{1}{2}$, $\frac{1}{2}$ and $\frac{1}{\sqrt{8}}$; if we calculate the sum of their squared moduli we get exactly
\begin{equation*}
\abs{\frac{1}{2}}^2 + \abs{\frac{1}{2}}^2 + \abs{\frac{1}{\sqrt{8}}}^2 = \frac{1}{4} + \frac{1}{4} + \frac{1}{8} = \frac{5}{8} = P(A \cup B)
\end{equation*}
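The inclusion-exclusion computation above can be checked with the same projector machinery; the following `numpy` sketch (not part of the original notebook) evaluates the three expectation values:

```python
import numpy as np

# Sketch: P(A ∪ B) = P(A) + P(B) - P(A ∩ B) = 5/8 via projectors.
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.sqrt(3/8) * np.kron(ket0, ket0) + 0.5 * np.kron(ket0, ket1)
       + 0.5 * np.kron(ket1, ket0) + (1/np.sqrt(8)) * np.kron(ket1, ket1))

P1 = np.outer(ket1, ket1)          # |1><1|
I2 = np.eye(2)
pA = psi @ np.kron(P1, I2) @ psi   # first qubit in |1>
pB = psi @ np.kron(I2, P1) @ psi   # second qubit in |1>
pAB = psi @ np.kron(P1, P1) @ psi  # both qubits in |1>
assert np.isclose(pA + pB - pAB, 5/8)
```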
The goal of this section on projective measurement is to highlight that in quantum computing, when we talk about measuring, it is much more practical to ask about the state of the system than about the value to be measured, which might not be so relevant in this context. For example, if we have a three-qubit state $\ket{\psi}$, it is easier to think of calculating the probability of measuring $\ket{010}$ than of calculating the probability of measuring a certain eigenvalue of $\hat{\sigma_z}\otimes\hat{\sigma_z}\otimes\hat{\sigma_z}$, which is what we are actually doing behind the scenes, just without the formalism of the postulates of quantum mechanics. We can say that in quantum computing the state of the system (the qubits themselves) is more relevant than the eigenvalues obtained from measuring $\hat{\sigma_z}$.
It is important to note that in quantum computing, measurement can also be part of an algorithm. When this topic is addressed, it will be clear that many times a measurement is made to project to a certain state that is being sought and continue with the algorithm from that new state. Therefore, being able to know the state of a system after a certain measurement turns out to be very relevant.
### <a name="remark_3_9">Remark 3.9</a>
1. Non-orthogonal states cannot be reliably distinguished by a projective measurement.
Let us remember that if we have a certain state $\ket{\psi}$ and we measure an observable (self-adjoint operator), obtaining one of its eigenvalues, the postulates of quantum mechanics regarding measurement tell us that the state $\ket{\psi}$ will be projected onto the subspace associated with the measured eigenvalue. In terms of the previous section, this means applying a projector to the state $\ket{\psi}$.
What do we mean by reliably distinguish them? By using a projective measurement we can measure one of them with probability $1$, and measure the other with probability $0$. For example, if we wanted to distinguish if we have the state $\ket{\varphi}$ and not state $\ket{\psi}$, we would simply want to measure the expected value of the projector $\hat{P_\varphi}$ in $\ket{\varphi}$ and get $1$ and in turn get $0$ by measuring it in $\ket{\psi}$. Let's see why we can't make this reliable distinction with two states that are not orthogonal using an example.
Let us consider the following non-orthogonal states
\begin{equation*}
\ket{\psi} = \ket{0} \enspace \text{ and } \enspace \ket{\varphi} = \frac{1}{\sqrt{2}}(\ket{0} + \ket{1})
\end{equation*}
And the projector
\begin{equation*}
\hat{P_\psi} = \ketbra{\psi}{\psi}
\end{equation*}
Thus we have
\begin{equation*}
\begin{split}
P(\ket{\psi}) &= \bra{\psi}\hat{P_\psi}\ket{\psi} = \braket{\psi}{\psi}\braket{\psi}{\psi} = 1 \\
P(\ket{\varphi}) &= \bra{\varphi}\hat{P_\psi}\ket{\varphi} = \braket{\varphi}{\psi}\braket{\psi}{\varphi} = \frac{1}{\sqrt{2}}\frac{1}{\sqrt{2}} = \frac{1}{2}
\end{split}
\end{equation*}
It should be clear from this particular example that we cannot have a projector that allows us to reliably distinguish two non-orthogonal states.
2. Orthogonal states can be reliably distinguished by a projective measurement.
Let's consider the states
\begin{equation*}
\ket{\psi} = \ket{0} \enspace \text{ and } \enspace \ket{\varphi} = \ket{1}
\end{equation*}
\begin{equation*}
\begin{split}
\hat{P_\psi} &= \ketbra{\psi}{\psi} \\
\implies \\
P(\ket{\psi}) &= \bra{\psi}\hat{P_\psi}\ket{\psi} = \braket{\psi}{\psi}\braket{\psi}{\psi} = 1 \\
P(\ket{\varphi}) &= \bra{\varphi}\hat{P_\psi}\ket{\varphi} = \braket{\varphi}{\psi}\braket{\psi}{\varphi} = 0
\end{split}
\end{equation*}
It should be clear from this particular example that we can reliably distinguish orthogonal states.
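Both remarks can be confirmed with a tiny `numpy` sketch (not part of the original notebook): the projector $\hat{P_\psi}=\ketbra{\psi}{\psi}$ answers with probability $1$ on $\ket{\psi}$, $\frac{1}{2}$ on the non-orthogonal state, and $0$ on the orthogonal one.

```python
import numpy as np

# Sketch: reliability of distinguishing states with the projector |psi><psi|.
psi = np.array([1.0, 0.0])               # |0>
phi = np.array([1.0, 1.0]) / np.sqrt(2)  # (|0> + |1>)/sqrt(2), non-orthogonal
one = np.array([0.0, 1.0])               # |1>, orthogonal to |psi>

P_psi = np.outer(psi, psi)
assert np.isclose(psi @ P_psi @ psi, 1.0)  # reliably "yes" on |psi>
assert np.isclose(phi @ P_psi @ phi, 0.5)  # ambiguous on the non-orthogonal state
assert np.isclose(one @ P_psi @ one, 0.0)  # reliably "no" on the orthogonal state
```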
# Neural Networks and Deep Learning for Life Sciences and Health Applications - An introductory course about theoretical fundamentals, case studies and implementations in python and tensorflow
(C) Umberto Michelucci 2018 - umberto.michelucci@gmail.com
github repository: https://github.com/michelucci/dlcourse2018_students
Fall Semester 2018
```
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
# Solutions to exercises
## Exercise 1 (Difficulty: easy)
Draw and develop in tensorflow with ```tf.constant``` the computational graphs for the following operations
A) ```w1*x1+w2*x2+x1*x1```
B) ```A*x1+3+x2/2```
Use as input values ```x1 = 5``` and ```x2 = 6```
## A)
There are several ways of solving this exercise. This is one possible
```
# Building Phase
x1 = tf.constant(5.)
x2 = tf.constant(6.)
w1 = 10.
w2 = 20.
z1 = tf.multiply(w1, x1)
z2 = tf.multiply(w2, x2)
z3 = tf.multiply(x1, x1)
result = z1 + z2 + z3
# Evaluation Phase
with tf.Session() as sess:
print(result.eval())
```
A second way of doing that is the following
```
# Building Phase
x1 = tf.constant(5.)
x2 = tf.constant(6.)
w1 = 10.
w2 = 20.
z1 = tf.multiply(w1, x1)
z2 = tf.multiply(w2, x2)
z3 = tf.multiply(x1, x1)
result = z1 + z2 + z3
# Evaluation Phase
sess = tf.Session()
print(sess.run(result))
sess.close()
```
But you can also define ```w1``` and ```w2``` as constants too
```
# Building Phase
x1 = tf.constant(5.)
x2 = tf.constant(6.)
w1 = tf.constant(10.)
w2 = tf.constant(20.)
z1 = tf.multiply(w1, x1)
z2 = tf.multiply(w2, x2)
z3 = tf.multiply(x1, x1)
result = z1 + z2 + z3
# Evaluation Phase
sess = tf.Session()
print(sess.run(result))
sess.close()
```
### B)
```
# Building Phase
x1 = tf.constant(5.)
x2 = tf.constant(6.)
A = tf.constant(10.)
result = tf.multiply(A, x1) + tf.constant(3.) + tf.divide(x2, 2.)
# Evaluation Phase
sess = tf.Session()
print(sess.run(result))
sess.close()
```
or you can define the ```result``` in multiple steps
```
# Building Phase
z1 = tf.multiply(A, x1)
z2 = tf.add(z1, 3.)
z3 = tf.add(z2, tf.divide(x2,2.))
# Evaluation Phase
sess = tf.Session()
print(sess.run(z3))
sess.close()
```
## Exercise 2 (Difficulty: medium)
Draw and develop in tensorflow with ```tf.Variable``` the computational graph for the following operation ```A*(w1*x1+w2*x2)```
build the computational graph and then evaluate it two times (without re-building it) with the initial values in the same session
A) ```x1 = 3, x2 = 4```
B) ```x1 = 5, x2 = 7```
```
# Building Phase
x1 = tf.Variable(3.)
x2 = tf.Variable(4.)
w1 = tf.constant(10.)
w2 = tf.constant(20.)
A = tf.constant(30.)
init = tf.global_variables_initializer()
z1 = tf.multiply(w1,x1)
z2 = tf.multiply(w2,x2)
z3 = tf.add(z1, z2)
result = tf.multiply(A, z3)
```
To run the same graph twice in the same session you can do the following
```
sess = tf.Session()
print(sess.run(result, feed_dict = {x1: 3, x2: 4}))
print(sess.run(result, feed_dict = {x1: 5, x2: 7}))
sess.close()
```
Or you can write a function that creates a session, evaluates a node, and then closes it.
```
def run_evaluation(x1_, x2_):
sess = tf.Session()
print(sess.run(result, feed_dict = {x1: x1_, x2: x2_}))
sess.close()
```
And then you can evaluate the node with a call to your function.
```
run_evaluation(3,4)
run_evaluation(5,7)
```
## Exercise 3 (Difficulty: FUN)
Consider two vectors
``` x1 = [1,2,3,4,5], x2 = [6,7,8,9,10]```
draw and build in tensorflow the computational graph for the dot-product operation between the two vectors. If you don't know what a dot-product is, you can check it [here](https://en.wikipedia.org/wiki/Dot_product) (we covered that in our introductory week).
Build it in two different ways:
A) Do it with loops. Build a computational graph that takes scalars as input, and in the session/evaluation phase build a loop to go over all the inputs and then sum the results.
B) Do it in one shot with tensorflow. Build a computational graph that takes vectors as input and does the entire operation directly in tensorflow.
Hint: you can use in tensorflow two methods: ```tf.reduce_sum(tf.multiply(x1, x2))``` or ```tf.matmul(tf.reshape(x1,[1,5]), tf.reshape(x2, [-1, 1]))```. Try to understand why they work checking the official documentation.
## a)
```
first = tf.Variable(0.)
second = tf.Variable(0.)
mult = tf.multiply(first, second)
x1 = [1,2,3,4,5]
x2 = [6,7,8,9,10]
sess = tf.Session()
total = 0
for i in range(0,len(x1)):
total = total + sess.run(mult, feed_dict = {first: x1[i], second: x2[i]})
print(total)
```
Note that you can do that easily in numpy
```
np.dot(x1, x2)
```
## b)
Another way, and much more efficient, is the following
```
x1 = tf.placeholder(tf.int32, None) # Let's assume we work with integers
x2 = tf.placeholder(tf.int32, None) # Let's assume we work with integers
result = tf.reduce_sum(tf.multiply(x1, x2))
sess = tf.Session()
print(sess.run(result, feed_dict = {x1: [1,2,3,4,5], x2:[6,7,8,9,10]}))
sess.close()
```
Or with matrices
```
x1 = tf.placeholder(tf.int32, None) # Let's assume we work with integers
x2 = tf.placeholder(tf.int32, None) # Let's assume we work with integers
result = tf.matmul(tf.reshape(x1,[1,5]), tf.reshape(x2, [-1, 1]))
sess = tf.Session()
print(sess.run(result, feed_dict = {x1: [1,2,3,4,5], x2:[6,7,8,9,10]}))
sess.close()
```
Note that the result is different in the two cases! In the first we get a scalar, in the second a matrix that has dimensions ```1x1```, because the second method is a matrix multiplication function that will return a matrix (or better a tensor).
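If it helps to see this shape difference outside tensorflow, here is a small `numpy` sketch of the same two computations (an illustration, not part of the original exercises):

```python
import numpy as np

# An element-wise multiply-and-sum yields a scalar, while a
# (1,5) x (5,1) matrix product yields a 1x1 matrix.
x1 = np.array([1, 2, 3, 4, 5])
x2 = np.array([6, 7, 8, 9, 10])

scalar = np.sum(x1 * x2)                      # like tf.reduce_sum(tf.multiply(...))
matrix = x1.reshape(1, 5) @ x2.reshape(5, 1)  # like tf.matmul(tf.reshape(...))

assert scalar == 130 and scalar.shape == ()
assert matrix.shape == (1, 1) and matrix[0, 0] == 130
```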
## c) (even another way) (BONUS Solution)
There is actually another way. Tensorflow can perform the dot product directly
```
x1 = tf.placeholder(tf.int32, None) # Let's assume we work with integers
x2 = tf.placeholder(tf.int32, None) # Let's assume we work with integers
result = tf.tensordot(x1, x2, axes = 1)
sess = tf.Session()
print(sess.run(result, feed_dict = {x1: [1,2,3,4,5], x2:[6,7,8,9,10]}))
sess.close()
```
## Exercise 4 (Difficulty: medium)
Write a function that build a computational graph for the operation ```x1+x2``` where the input ```x1``` and ```x2``` are input with given dimensions. Your ```x1``` and ```x2``` should be declared as ```tf.placeholder```.
Your functions should accept as input:
- dimensions of ```x1``` as list, for example ```[3]```
- dimensions of ```x2``` as list, for example ```[3]```
The function should return a tensor ```z = x1 + x2```.
Then open a session and evaluate ```z``` with the following inputs:
- ```x1 = [4,6,7], x2 = [1,2,9]```
- ```x1 = [1,2,....., 1000], x2 = [10001, 10002, ...., 11000]```
and print the result.
```
def build_graph(dim1, dim2):
tf.reset_default_graph()
x1 = tf.placeholder(tf.float32, dim1)
x2 = tf.placeholder(tf.float32, dim2)
z = tf.add(x1, x2)
return z, x1, x2
x1list = [4,6,7]
x2list = [1,2,9]
# Building Phase
z, x1, x2 = build_graph([len(x1list)], [len(x2list)])
sess = tf.Session()
print(sess.run(z, feed_dict = {x1: x1list, x2: x2list}))
sess.close()
```
**Note that since you refer to the tensors ```x1``` and ```x2``` in the ```feed_dict``` dictionary, you need to have the tensors visible; otherwise you will get an error. Therefore your function needs to return not only ```z``` but also ```x1``` and ```x2```.**
```
x1list = np.arange(1, 1001, 1)
x2list = np.arange(10001, 11001, 1)
# Building Phase
z, x1, x2 = build_graph([len(x1list)], [len(x2list)])
sess = tf.Session()
print(sess.run(z, feed_dict = {x1: x1list, x2: x2list}))
sess.close()
```
## Exercise 5 (Difficult: FUN)
### Linear Regression with tensorflow
https://onlinecourses.science.psu.edu/stat501/node/382/
Consider the following dataset
```
x = [4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0]
y = [33, 42, 45, 51, 53, 61, 62]
```
We want to find the best parameters $p_0$ and $p_1$ that minimise the MSE (mean squared error) for the data given, in other words we want to do a linear regression on the data $(x,y)$. Given that a matrix solution to find the best parameter is
$$
{\bf p} =(X^TX)^{-1} X^T Y
$$
where $X^T$ is the transpose of the matrix $X$. The matrix $X$ is defined as
$$
X =
\begin{bmatrix}
1 & x_1 \\
... & ... \\
1 & x_n
\end{bmatrix}
$$
The matrix $Y$ is simply an $n\times 1$ matrix containing the values $y_i$.
The dimensions are:
- $X$ has dimensions $n\times 2$
- $Y$ has dimensions $n\times 1$
- ${\bf p}$ has dimensions $2\times 1$
Build a computational graph that evaluates $\bf p$ as given above, given the matrices $X$ and $Y$. Note that you will have to build the matrices from the data given at the beginning. If you need more information, a beautifully long explanation can be found here: https://onlinecourses.science.psu.edu/stat501/node/382/
Let's convert ```y``` to a list of floats... **Remember tensorflow is really strict with datatypes**.
```
y = [float(i) for i in y]
y
x = pd.DataFrame(x)
y = pd.DataFrame(y)
x['b'] = 1
x.head()
cols = x.columns.tolist()
cols = cols[-1:] + cols[:-1]
print(cols)
x = x[cols]
x.head()
```
Let's build the computational graph:
**NOTE: if you use tf.float32 you will get results that are slightly different than numpy. So be aware. To be safe you can use ```float64```.**
Always try to be as specific as you can with dimensions. The first dimension is defined as ```None``` so that we can use the graph, if necessary, with a different number of observations without rebuilding it.
```
tf.reset_default_graph()
xinput = tf.placeholder(tf.float64, [None,2])
yinput = tf.placeholder(tf.float64, [None,1])
```
Multiplication between tensors is somewhat complicated, especially when dealing with higher-dimensional tensors, so we use ```tf.einsum```; see https://www.tensorflow.org/api_docs/python/tf/einsum for more information.
```
tmp = tf.einsum('ij,jk->ik',tf.transpose(xinput) , xinput)
part1 = tf.linalg.inv(tmp)
part2 = tf.einsum('ij,jk->ik',tf.transpose(xinput), yinput)
pout = tf.einsum('ij,jk->ik', part1, part2)
# Reference: https://www.tensorflow.org/api_docs/python/tf/einsum
sess = tf.Session()
print("The best parameters p are:")
print(sess.run(pout, feed_dict = {xinput: x, yinput: y}))
sess.close()
```
If you remember the first week (check https://github.com/michelucci/dlcourse2018_students/blob/master/Week%201%20-%20Mathematic%20introduction/Week%201%20-%20Solution%20to%20exercises.ipynb) you can do the same with ```numpy```
```
part1np = np.linalg.inv(np.matmul(x.transpose() , x))
part2np = np.matmul(x.transpose(), y)
pnp = np.matmul(part1np, part2np)
print(pnp)
```
## Computational Graph for predictions
We get the same result with tensorflow. Now we can build a graph that uses the ```p``` we have found to make predictions
```
p = tf.placeholder(tf.float32, [2,1])
xnode = tf.placeholder(tf.float32, [None, 2]) # This time let's be specific with dimensions
pred = tf.tensordot(xnode, p, axes = 1)
sess = tf.Session()
pred_y = sess.run(pred, feed_dict = {p: pnp, xnode: x})
pred_y
```
And these are the **true** values
```
y
```
## Plot of the results
```
plt.rc('font', family='arial')
plt.rc('xtick', labelsize='x-small')
plt.rc('ytick', labelsize='x-small')
plt.tight_layout()
fig = plt.figure(figsize=(8, 5))
ax = fig.add_subplot(1, 1, 1)
ax.scatter(y, pred_y, lw = 0.3, s = 80)
ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw = 3)
ax.set_xlabel('Measured Target Value', fontsize = 16);
ax.set_ylabel('Predicted Target Value', fontsize = 16);
plt.tick_params(labelsize=16)
```
# **Imbalanced Data**
Imbalanced data is encountered in classification problems in which the number of observations per class is disproportionately distributed.
## **How to treat for Imbalanced Data?**<br>
Introducing the `imbalanced-learn` (imblearn) package.
### Data
```
import pandas as pd
import seaborn as sns
from sklearn.datasets import make_classification
# make dummy data
X, y = make_classification(n_samples=5000, n_features=2, n_informative=2,
n_redundant=0, n_repeated=0, n_classes=3,
n_clusters_per_class=1,
weights=[0.01, 0.05, 0.94],
class_sep=0.8, random_state=0)
df = pd.DataFrame(X)
df.columns = ['feature1', 'feature2']
df['target'] = y
df.head()
# visualize the data
sns.countplot(data=df, x=df['target']);
```
We can see that the data are very heavily imbalanced.
--------
# 1) Over-Sampling Approach
## 1.1) naive approach known as Random Over-Sampling
+ We will upsample our minority classes, that is, sample with replacement until the number of observations is uniform across all classes.
+ As you can imagine, this approach should give us pause depending on the scale of upsampling we'll be doing.
+ `from imblearn.over_sampling import RandomOverSampler`
## 1.2) another approach is SMOTE (Synthetic Minority Oversampling Technique)
+ in this case, we generate new observations within the existing feature space of our minority classes.
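Before using the library, it can help to see the idea in plain `numpy`. The following is a rough sketch of the SMOTE interpolation step on made-up data (an illustration, not imblearn's actual implementation):

```python
import numpy as np

# Sketch of SMOTE: each synthetic point lies on the segment between a
# minority sample and one of its nearest minority-class neighbours.
rng = np.random.default_rng(0)

def smote_sketch(X_min, n_new, k=3):
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # a point is not its own neighbour
    neighbours = np.argsort(d, axis=1)[:, :k]
    new_points = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))         # pick a minority sample
        j = rng.choice(neighbours[i])        # pick one of its k neighbours
        lam = rng.random()                   # interpolation factor in [0, 1)
        new_points.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(new_points)

X_min = rng.normal(size=(10, 2))             # toy minority-class samples
synthetic = smote_sketch(X_min, n_new=20)
assert synthetic.shape == (20, 2)
```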
### Now, let's apply an over-sampling approach. For this we'll use **a naive approach known as random over-sampling.**
```
from imblearn.over_sampling import RandomOverSampler
ros = RandomOverSampler(random_state=0)
X_resampled, y_resampled = ros.fit_resample(X, y)
```
### Let's visualize again after random over-sampling
```
df = pd.DataFrame(y_resampled, columns=['target'])
sns.countplot(data=df, x=df['target']);
```
We have increased the size of each of our minority classes to be uniform with that of our majority class through random sampling.
# 2) Under-Sampling Technique
## 2.1) Naive approach to randomly under-sample our majority class
+ this time we are actually throwing out data from our majority class until the number of observations is uniform.
+ `from imblearn.under_sampling import RandomUnderSampler`
### Let's now try an under-sampling technique. Again, we'll start with a naive approach to randomly under-sample our majority class.
```
from imblearn.under_sampling import RandomUnderSampler
rus = RandomUnderSampler(random_state=0)
X_resampled, y_resampled = rus.fit_resample(X, y)
```
### Visualize the resampled data
```
df = pd.DataFrame(y_resampled, columns=['target'])
sns.countplot(data=df, x='target');
```
The data are now balanced. However, note that there are only about 60 observations per class.
**Because of the infrequency of our smallest minority class, we threw out a huge percentage of the data**.
So you might want to consider other under-sampling methods for this data (like `k-means` and `near-miss`)
```
# ~145MB
!wget -x --load-cookies cookies.txt -O business.zip 'https://www.kaggle.com/yelp-dataset/yelp-dataset/download/py6LEr6zxQNWjebkCW8B%2Fversions%2FlVP0fduiJJo8YKt2vKKr%2Ffiles%2Fyelp_academic_dataset_business.json?datasetVersionNumber=2'
!unzip business.zip
!wget -x --load-cookies cookies.txt -O review.zip 'https://www.kaggle.com/yelp-dataset/yelp-dataset/download/py6LEr6zxQNWjebkCW8B%2Fversions%2FlVP0fduiJJo8YKt2vKKr%2Ffiles%2Fyelp_academic_dataset_review.json?datasetVersionNumber=2'
!unzip review.zip
import pandas as pd
from six.moves import cPickle
import numpy as np
import json
from scipy.sparse import csr_matrix
from sklearn.decomposition import TruncatedSVD
from scipy.sparse.linalg import svds
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error
business = []
with open('/content/yelp_academic_dataset_business.json') as fl:
for line in fl:
business.append(json.loads(line))
business = pd.DataFrame(business)
business.head()
review = []
with open('/content/yelp_academic_dataset_review.json') as fl:
for line in fl:
review.append(json.loads(line))
review = pd.DataFrame(review)
review.head()
bcols = ['business_id', 'city', 'categories']
ucols = ['business_id', 'user_id', 'review_id', 'stars']
df = review[ucols].merge(business[bcols], how = 'outer', on= 'business_id')
df = df.dropna()
df.head()
#selecting subset: Phoenix city restaurants
dfx = df[(df.city == 'Phoenix') & (df.categories.str.contains('Restaurant', case=False))]
dfx.shape
def get_clean_df(df, min_user_review = 30, min_res_review = 0, cols = ['user_id', 'business_id', 'stars']):
'''Cleans the df and gets rid of the unwanted cols and also allows to filter the user and business based on the min number of reviews received'''
df_new = df[cols].copy()  # .copy() avoids SettingWithCopyWarning on the assignments below
df_new.dropna(axis = 0, how = 'any', inplace = True)
df_new[cols[1]+'_freq'] = df_new.groupby(cols[1])[cols[1]].transform('count')
df_clean = df_new[df_new[cols[1]+'_freq']>=min_res_review].copy()
df_clean[cols[0]+'_freq'] = df_clean.groupby(cols[0])[cols[0]].transform('count')
df_clean_2 = df_clean[df_clean[cols[0]+'_freq']>=min_user_review]
return df_clean_2
from pandas.api.types import CategoricalDtype
def get_sparse_matrix(df):
'''Converts the df into a sparse ratings matrix'''
unique_users = list(df['user_id'].unique())
unique_bus = list(df['business_id'].unique())
data = df['stars'].tolist()
row = df['user_id'].astype(CategoricalDtype(categories=unique_users)).cat.codes
col = df['business_id'].astype(CategoricalDtype(categories=unique_bus)).cat.codes
sparse_matrix = csr_matrix((data, (row, col)), shape=(len(unique_users), len(unique_bus)))
return sparse_matrix
def get_sparsity(sparse_matrix):
return 1 - sparse_matrix.nnz/(sparse_matrix.shape[0]*sparse_matrix.shape[1])
data = get_sparse_matrix(get_clean_df(dfx, min_user_review=10))
print(get_sparsity(data))
print(data.shape)
def train_val_test_split(sparse_matrix, num_review_val = 2, num_review_test = 2):
'''Split the rating matrix into train ,val, and test marix that are disjoint matrices'''
nzrows, nzcols = sparse_matrix.nonzero()
sparse_matrix_test = csr_matrix(sparse_matrix.shape)
sparse_matrix_val = csr_matrix(sparse_matrix.shape)
sparse_matrix_train = sparse_matrix.copy()
n_users = sparse_matrix.shape[0]
for u in range(n_users):
idx = nzcols[np.where(nzrows == u)]
np.random.shuffle(idx)
test_idx = idx[-num_review_test:]
val_idx = idx[-(num_review_val+num_review_test):-num_review_test]
train_idx = idx[:-(num_review_val+num_review_test)]
sparse_matrix_test[u,test_idx] = sparse_matrix[u,test_idx]
sparse_matrix_val[u,val_idx] = sparse_matrix[u,val_idx]
sparse_matrix_train[u,test_idx] = 0
sparse_matrix_train[u,val_idx] = 0
data = np.array(sparse_matrix_train[sparse_matrix_train.nonzero()])[0]
row = sparse_matrix_train.nonzero()[0]
col = sparse_matrix_train.nonzero()[1]
size = sparse_matrix_train.shape
sparse_matrix_train = csr_matrix((data,(row,col)),shape = size)
mult = sparse_matrix_train.multiply(sparse_matrix_val)
mmult = mult.multiply(sparse_matrix_test)
assert(mmult.nnz == 0)
return sparse_matrix_train, sparse_matrix_val, sparse_matrix_test
train, val, test = train_val_test_split(data)
print(train.nnz, val.nnz, test.nnz)
```
## Model Building
```
def approx_err(k, A, U, S, Vt):
rec_A = np.dot(U[:, :k], np.dot(S[:k,:k], Vt[:k, :]))
idx = np.where(A>0);
diff = A[idx] - rec_A[idx]
return np.linalg.norm(diff)**2/diff.shape[1]
# # svd
# U, S, Vt = np.linalg.svd(train.todense())
# k = np.linspace(2,40,20, dtype = int)
# errors_svd_val = {}
# errors_svd_train = {}
# for i in k:
# errors_svd_val[i] = approx_err(i, val.todense(), U, S, Vt)
# errors_svd_train[i] = approx_err(i, train.todense(), U, S, Vt)
# plt.plot(errors_svd_val.keys(),errors_svd_val.values(), label = 'Validation')
# plt.plot(errors_svd_train.keys(),errors_svd_train.values(), label = 'Train')
# plt.xlabel('k')
# plt.ylabel('MSE')
# plt.legend()
```
### ALS (Alternating Least Squares)
```
def get_mse(pred, actual):
# Ignore zero terms.
pred = pred[actual.nonzero()].flatten()
actual = actual[actual.nonzero()].flatten()
return mean_squared_error(pred, actual)
def als(ratings_matrix, k=40, user_reg=0, res_reg=0, iters=10):
'''Performs ALS for a given ratings_matrix and returns predictions using the latent vector representation User (U x K) and Restaurant (R x K)'''
ratings_matrix = ratings_matrix.T
user_vec = np.random.rand(ratings_matrix.shape[1],k).T
res_vec = np.random.rand(ratings_matrix.shape[0],k).T
for i in range(iters):
for u in range(ratings_matrix.shape[1]):
user_vec[:,u] = np.linalg.solve(np.dot(res_vec,res_vec.T) + user_reg * np.eye(res_vec.shape[0]), np.dot(res_vec,ratings_matrix[:,u]))
for r in range(ratings_matrix.shape[0]):
res_vec[:,r] = np.linalg.solve(np.dot(user_vec,user_vec.T) + res_reg * np.eye(user_vec.shape[0]), np.dot(user_vec,ratings_matrix[r,:].T))
prediction = np.dot(res_vec.T, user_vec)
# error = np.mean((ratings_matrix - prediction)**2)
return np.dot(res_vec.T, user_vec).T
num_features = np.linspace(1,20,5,dtype=int)
test_error_als = []
train_error_als = []
for i in num_features:
preds_als = als(np.array(train.todense()), k=i, iters = 5)
test_err = get_mse(preds_als, np.array(val.todense()))
train_err = get_mse(preds_als, np.array(train.todense()))
test_error_als.append(test_err)
train_error_als.append(train_err)
fig = plt.figure(figsize=(8,5))
plt.plot(num_features,test_error_als,'b-',label = 'validation')
plt.plot(num_features,train_error_als,'r-', label = 'training')
plt.title('MSE vs num_features (for ALS)')
plt.xlabel('Number of features in a feature vector')
plt.ylabel('MSE')
plt.legend()
```
### Refer to [this](https://colab.research.google.com/github/HegdeChaitra/Yelp-Recommendation-System/blob/master/Yelp_Reco_System.ipynb#scrollTo=kAoMx5IHUpsi) for further info
```
import matplotlib.pyplot as plt
import networkx as nx
import pandas as pd
import numpy as np
from scipy import stats
import scipy as sp
import datetime as dt
from ei_net import *
# import cmocean as cmo
%matplotlib inline
##########################################
############ PLOTTING SETUP ##############
EI_cmap = "Greys"
where_to_save_pngs = "../figs/pngs/"
where_to_save_pdfs = "../figs/pdfs/"
save = True
plt.rc('axes', axisbelow=True)
plt.rc('axes', linewidth=2)
##########################################
##########################################
```
# The emergence of informative higher scales in complex networks
# Chapter 04: Effective Information in Real Networks
$EI$ often grows with network size. To compare networks of different sizes, we examine their *effectiveness*, which is the $EI$ normalized by the size of the network to a value between $0.0$ and $1.0$:
$$ \text{effectiveness} = \frac{EI}{\log_2(N)} $$
As the noise and/or the degeneracy of a network increases toward their upper possible bounds, the effectiveness of that network will trend to $0.0$. Regardless of its size, a network wherein each node has a deterministic output to a unique target has an effectiveness of $1.0$.
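A minimal sketch of this normalization (the `effectiveness` helper below is ours, not part of the `ei_net` module imported above):

```
import math

def effectiveness(ei, n_nodes):
    """Normalize effective information by log2 of the network size."""
    return ei / math.log2(n_nodes)

# a deterministic, non-degenerate network on N nodes attains EI = log2(N),
# so its effectiveness is exactly 1.0
print(effectiveness(math.log2(64), 64))  # 1.0
```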
Here, we examine the effectiveness of 84 different networks corresponding to data from real systems. These networks were selected primarily from the [Konect Network Database](http://konect.cc/), which was used because its networks are publicly available, range in size from dozens to tens of thousands of nodes, often have a reasonable interpretation as a causal structure, and they are diverse, ranging from social networks, to power networks, to metabolic networks. We defined four categories of interest: biological, social, informational, and technological. We selected our networks by using all the available networks (under 40,000 nodes) in the domains corresponding to each category within the Konect database, and where it was appropriate, the [Network Repository](http://networkrepository.com/) as well.
Lower effectiveness values correspond to structures that either have high degeneracy, low determinism, or a combination of both. In the networks we measured, biological networks on average have lower effectiveness values, whereas technological networks on average have the highest effectiveness. This finding aligns intuitively with what we know about the relationship between $EI$ and network structure, and it also supports long-standing hypotheses about the role of redundancy, degeneracy, and noise in biological systems. On the other hand, technological networks such as power grids, autonomous systems, or airline networks are associated with higher effectiveness values on average. One explanation for this difference is that efficiency in human-made technological networks tends to create sparser, non-degenerate networks with higher effectiveness on average.
It might be surprising to find that evolved networks have such low effectiveness. But, as we demonstrate in the following section, a low effectiveness can actually indicate that there are informative higher-scale (macroscale) dependencies in the system; in particular, biological systems often contain higher-scale causal structure.
________________________
## 4.1 Effectiveness of Real World Networks
```
import json
json_data = open('../data/real_network_ei.json',"r").read()
out_dict = json.loads(json_data)
list1 = out_dict['Eff']
list1 = list(enumerate(list1))
list2 = sorted(list1, key=lambda x:x[1])
ordering = list(list(zip(*list2))[0])
eff_vals = list(list(zip(*list2))[1])
newcos = ["#ed4f44","#fdcb12","#7f61c3","#00c6c5","#333333"]
cols = ['#88002c',"#ba4b57","#cc5134","#daaa32","#b8ab51","#698b4a","#69d07d","#50c9b5",
"#64b6ff","#786bdb","#573689","#b55083","#c65abb","#bfbfbf","#666666","#333333"]
plt.figure(figsize=(13,20))
for idx,i in enumerate(ordering):
co = out_dict['color'][i]
ef = out_dict['Eff'][i]
plt.hlines(idx,0,ef,color=co,linewidth=4.5)
plt.scatter(eff_vals, list(range(len(eff_vals))),
edgecolors='w',linewidths=1.5,
marker='o', s=130, alpha=0.98,
facecolor=np.array(out_dict['color'])[ordering], zorder=20)
plt.scatter([0]*len(eff_vals), list(range(len(eff_vals))),
marker='s', s=65, alpha=0.98,
edgecolors=np.array(out_dict['newco'])[ordering],
linewidths=3.5, facecolor='w', zorder=20)
domainz = ['Biological','Information','Social','Technological']
for ii, lab in enumerate(domainz):
plt.scatter([-1], [-1], marker='s', s=125,
alpha=0.98,edgecolors=newcos[ii],
linewidths=4.5, facecolor='w', label=lab)
for ii, lab in enumerate(sorted(np.unique(out_dict['Category']))):
plt.plot([-10,-9], [-10,-9], marker='',
alpha=0.98, linewidth=4.0,
color=cols[ii], label=lab)
plt.legend(loc=4, fontsize=19, framealpha=0.85)
plt.yticks(list(range(len(eff_vals))),
np.array(out_dict['Name'])[ordering],
fontsize=14)
plt.xticks(np.linspace(0,1,11),
["%.1f"%i for i in np.linspace(0,1,11)],
size=18)
plt.grid(alpha=0.3, color='#999999',
linestyle='-', linewidth=2.5)
plt.xlabel('Effectiveness', size=20)
plt.xlim(-0.01,1.01)
plt.ylim(-1,len(eff_vals))
if save:
plt.savefig(where_to_save_pngs+"Konect_SortedEffectiveness_withLabels.png", dpi=425, bbox_inches='tight')
plt.savefig(where_to_save_pdfs+"Konect_SortedEffectiveness_withLabels.pdf", bbox_inches='tight')
plt.show()
```
## 4.2 Statistical Comparison of Effectiveness, by Domain
```
rn_bio = np.array([out_dict['Eff'][i] for i in range(len(out_dict['Eff'])) \
if out_dict['Category_EI'][i]=='Biological'])
rn_inf = np.array([out_dict['Eff'][i] for i in range(len(out_dict['Eff'])) \
if out_dict['Category_EI'][i]=='Information'])
rn_soc = np.array([out_dict['Eff'][i] for i in range(len(out_dict['Eff'])) \
if out_dict['Category_EI'][i]=='Social'])
rn_tec = np.array([out_dict['Eff'][i] for i in range(len(out_dict['Eff'])) \
if out_dict['Category_EI'][i]=='Technological'])
labs = {'biological':0,'social':2,"information":1,'technological':3}
a = labs['biological']
b = labs['social']
all_data = [rn_bio,rn_inf,rn_soc,rn_tec]
for lab1 in labs.keys():
a = labs[lab1]
for lab2 in labs.keys():
b = labs[lab2]
if a!=b:
t,p = sp.stats.ttest_ind(all_data[a], all_data[b], equal_var=False)
print("comparing",lab1," \t",
"to \t ",lab2," \t t-statistic = %.7f, \t p < %.8f"%(t,p))
plt.rc('axes', linewidth=1.5)
mult = 0.8
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(15*mult, 15*mult))
noise0 = np.random.uniform(-0.035,0.035,len(all_data[0]))
noise1 = np.random.uniform(-0.035,0.035,len(all_data[1]))
noise2 = np.random.uniform(-0.035,0.035,len(all_data[2]))
noise3 = np.random.uniform(-0.035,0.035,len(all_data[3]))
plt.plot([1]*len(all_data[0]) + noise0, all_data[0],
marker='o',linestyle='', markeredgecolor='k',
markersize=6, color=newcos[0])
plt.plot([3]*len(all_data[1]) + noise1, all_data[1],
marker='o',linestyle='',markeredgecolor='k',
markersize=6, color=newcos[1])
plt.plot([2]*len(all_data[2]) + noise2, all_data[2],
marker='o',linestyle='',markeredgecolor='k',
markersize=6, color=newcos[2])
plt.plot([4]*len(all_data[3]) + noise3, all_data[3],
marker='o',linestyle='',markeredgecolor='k',
markersize=6, color=newcos[3])
parts = ax.violinplot(all_data, positions=[1,3,2,4],
showmeans=False, showmedians=False,
showextrema=False, widths=0.75)
for i in range(len(parts['bodies'])):
pc = parts['bodies'][i]
pc.set_edgecolor(newcos[i])
pc.set_facecolor(newcos[i])
pc.set_alpha(0.85)
pc.set_linewidth(4.0)
parts = ax.violinplot(all_data, positions=[1,3,2,4],
showmeans=False, showmedians=False,
showextrema=False, widths=0.55)
for i in range(len(parts['bodies'])):
pc = parts['bodies'][i]
pc.set_edgecolor(newcos[i])
pc.set_facecolor('w')
pc.set_alpha(0.5)
pc.set_linewidth(0.0)
plt.hlines([np.mean(data) for data in all_data],
[0.67, 2.6925, 1.695, 3.74],
[1.33, 3.3075, 2.305, 4.26],
linestyles='-', colors=newcos,
zorder=1, linewidth=4.5)
plt.plot(np.linspace(-10,-20,5), np.linspace(-10,-20,5),
linestyle='-', marker='>', markersize=18,
markerfacecolor='w', color='#333333',
linewidth=3.5, markeredgecolor='k',
markeredgewidth=2.5, label='Mean', alpha=0.98)
plt.scatter([1,3,2,4],
[np.mean(data) for data in all_data],
zorder=20, marker='>', s=450, facecolor='w',
edgecolors=newcos, linewidths=3.5, alpha=0.98)
ax.set_ylabel('Effectiveness', fontsize=22)
ax.set_xticks([y+1 for y in range(len(all_data))])
ax.set_xticklabels(['biological', 'social',
'information', 'technological'],
fontsize=19, rotation=353)
ax.set_yticks(np.linspace(0,1,6))
ax.set_yticklabels(["%.1f"%i for i in np.linspace(0,1,6)], fontsize=18)
ax.grid(True, linestyle='-', linewidth=3.0, color='#999999', alpha=0.4)
ax.text(1.28,0.07,"n=%i"%len(all_data[0]),
fontsize=22, color=newcos[0])
ax.text(3.20,0.33,"n=%i"%len(all_data[1]),
fontsize=22, color='k')
ax.text(3.20,0.33,"n=%i"%len(all_data[1]),
fontsize=22, color=newcos[1],alpha=0.95)
ax.text(2.26,0.25,"n=%i"%len(all_data[2]),
fontsize=22, color=newcos[2])
ax.text(4.21,0.55,"n=%i"%len(all_data[3]),
fontsize=22, color=newcos[3])
ax.text(2.35,1.065,"**", fontsize=22)
ax.hlines(1.07, labs['biological']+1+0.025,
labs['technological']+1-0.025, linewidth=2.0)
ax.vlines(labs['biological']+1+0.025, 1.045, 1.07, linewidth=2.0)
ax.vlines(labs['technological']+1-0.025, 1.045, 1.07, linewidth=2.0)
ax.text(3.01,1.012,"***", fontsize=22)
ax.hlines(1.015, labs['social']+0.025,
labs['technological']+1-0.025, linewidth=2.0)
ax.vlines(labs['social']+0.025, 0.995, 1.015, linewidth=2.0)
ax.vlines(labs['technological']+1-0.025, 0.995, 1.015, linewidth=2.0)
ax.text(3.47,0.962,"*", fontsize=22)
ax.hlines(0.965, labs['information']+2+0.025,
labs['technological']+1-0.025, linewidth=2.0)
ax.vlines(labs['information']+2+0.025, 0.945, 0.965, linewidth=2.0)
ax.vlines(labs['technological']+1-0.025, 0.945, 0.965, linewidth=2.0)
x1 = ax.plot([], [], marker='.', linestyle='', c='w')
x2 = ax.plot([], [], marker='.', linestyle='', c='w')
x3 = ax.plot([], [], marker='.', linestyle='', c='w')
legs=[x1,x2,x3]
leg1 = ax.legend(bbox_to_anchor=(1.009,0.22), fontsize=23,
ncol=1, columnspacing=2, framealpha=0.95)
ax.legend([l[0] for l in legs],
["p < 1e-06 ***","p < 1e-05 **","p < 1e-03 *"],
handletextpad=-1.50,
bbox_to_anchor=(1.0055,0.16), fontsize=18, ncol=1,
columnspacing=-3.75, framealpha=0.95)
ax.add_artist(leg1)
ax.set_ylim(-0.015, 1.1)
ax.set_xlim(0.25, 4.75)
if save:
plt.savefig(
where_to_save_pngs+\
"Konect_Effectiveness_Violinplots.png",
dpi=425, bbox_inches='tight')
plt.savefig(
where_to_save_pdfs+\
"Konect_Effectiveness_Violinplots.pdf",
bbox_inches='tight')
plt.show()
```
## End of Chapter 04. In [Chapter 05](https://nbviewer.jupyter.org/github/jkbren/einet/blob/master/code/Chapter%2005%20-%20Causal%20Emergence%20in%20Preferential%20Attachment%20and%20SBMs.ipynb) we'll start to look at *causal emergence* in networks.
_______________
---
**Export of unprocessed features**
---
```
import pandas as pd
import numpy as np
import os
import re
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.feature_extraction.text import CountVectorizer
import random
import pickle
from scipy import sparse
import math
import pprint
import sklearn as sk
import torch
from IPython.display import display
from toolbox import *
# from myMLtoolbox import *
%matplotlib inline
sns.set()
sns.set_context("notebook")
sns.set(rc={'figure.figsize':(14,6)})
cfg = load_cfg()
logVersions = load_LogVersions()
```
---
**For figures**
```
from figures_toolbox import *
mpl.rcParams.update(mpl.rcParamsDefault)
sns.set(
context='paper',
style='ticks',
)
%matplotlib inline
mpl.rcParams.update(performancePlot_style)
```
# Get uniprot list of proteins
```
uniprotIDs = pd.read_csv(
os.path.join(cfg['rawDataUniProt'],
"uniprot_allProteins_Human_v{}.pkl".format(logVersions['UniProt']['rawData'])),
header=None,
names=['uniprotID']
)
glance(uniprotIDs)
```
## Hubs
```
path0 = os.path.join(
cfg['outputPreprocessingIntAct'],
"listHubs_20p_v{}.pkl".format(logVersions['IntAct']['preprocessed']['all'])
)
with open(path0, 'rb') as f:
list_hubs20 = pickle.load(f)
glance(list_hubs20)
```
# Load feature datasets
```
featuresDict = {
'bioProcessUniprot': {
'path': os.path.join(
cfg['outputPreprocessingUniprot'],
"bioProcessUniprot_v{}--{}.pkl".format(logVersions['UniProt']['rawData'], logVersions['UniProt']['preprocessed'])
),
'imputeNA': '0', # '0', 'mean', 'none'
'normalise':False,
'isBinary': True,
},
'cellCompUniprot': {
'path': os.path.join(
cfg['outputPreprocessingUniprot'],
"cellCompUniprot_v{}--{}.pkl".format(logVersions['UniProt']['rawData'], logVersions['UniProt']['preprocessed'])
),
'imputeNA': '0',
'normalise':False,
'isBinary': True,
},
'molFuncUniprot': {
'path': os.path.join(
cfg['outputPreprocessingUniprot'],
"molFuncUniprot_v{}--{}.pkl".format(logVersions['UniProt']['rawData'], logVersions['UniProt']['preprocessed'])
),
'imputeNA': '0',
'normalise':False,
'isBinary': True,
},
'domainUniprot': {
'path': os.path.join(
cfg['outputPreprocessingUniprot'],
"domainFT_v{}--{}.pkl".format(logVersions['UniProt']['rawData'], logVersions['UniProt']['preprocessed'])
),
'imputeNA': '0',
'normalise':False,
'isBinary': True,
},
'motifUniprot': {
'path': os.path.join(
cfg['outputPreprocessingUniprot'],
"motif_v{}--{}.pkl".format(logVersions['UniProt']['rawData'], logVersions['UniProt']['preprocessed'])
),
'imputeNA': '0',
'normalise':False,
'isBinary': True,
},
'Bgee': {
'path': os.path.join(
cfg['outputPreprocessingBgee'],
"Bgee_processed_v{}.pkl".format(logVersions['Bgee']['preprocessed'])
),
'imputeNA': '0',
'normalise':True,
'isBinary': False,
},
'tissueCellHPA': {
'path': os.path.join(
cfg['outputPreprocessingHPA'],
"tissueIHC_tissueCell_v{}.pkl".format(logVersions['HPA']['preprocessed']['tissueIHC_tissueCell'])
),
'imputeNA': '0',
'normalise':True,
'isBinary': False,
},
'tissueHPA': {
'path': os.path.join(
cfg['outputPreprocessingHPA'],
"tissueIHC_tissueOnly_v{}.pkl".format(logVersions['HPA']['preprocessed']['tissueIHC_tissueOnly'])
),
'imputeNA': '0',
'normalise':True,
'isBinary': False,
},
'RNAseqHPA': {
'path': os.path.join(
cfg['outputPreprocessingHPA'],
"consensusRNAseq_v{}.pkl".format(logVersions['HPA']['preprocessed']['consensusRNAseq'])
),
'imputeNA': 'mean',
'normalise':True,
'isBinary': False,
},
'subcellularLocationHPA': {
'path': os.path.join(
cfg['outputPreprocessingHPA'],
"subcellularLocation_v{}.pkl".format(logVersions['HPA']['preprocessed']['subcellularLocation'])
),
'imputeNA': '0',
'normalise':False,
'isBinary': True,
},
'sequence': {
'path': os.path.join(
cfg['outputPreprocessingUniprot'],
"sequenceData_v{}--{}.pkl".format(logVersions['UniProt']['rawData'], logVersions['UniProt']['preprocessed'])
),
'imputeNA':'none',
'normalise':False,
'isBinary': False,
}
}
def sneakPeak(featuresDict):
for feature, details in featuresDict.items():
df = pd.read_pickle(details['path'])
print('## ',feature)
glance(df)
print()
sneakPeak(featuresDict)
```
# EDA
**Number of GO terms for hubs and lone proteins**
```
def count_GOterms():
countGO = uniprotIDs.copy()
for feature, details in featuresDict.items():
print(feature)
if feature != 'sequence':
df = pd.read_pickle(details['path'])
foo = df.set_index('uniprotID').ne(0).sum(axis=1)
foo2 = pd.DataFrame(foo)
foo2.columns = [feature]
countGO = countGO.join(foo2, on='uniprotID', how='left')
return countGO
countGO = count_GOterms()
glance(countGO)
countGO.info()
countGO['isHub'] = countGO.uniprotID.isin(list_hubs20)
glance(countGO)
sns.displot(countGO, x="bioProcessUniprot", hue="isHub", kind='kde', common_norm=False);
doPlot=False
for feature in featuresDict.keys():
if feature != 'sequence':
foo = countGO.loc[countGO.isHub][feature]
bar = countGO.loc[~countGO.isHub][feature]
print(f"{feature}: on average, hubs have {foo.mean():.2f} GO terms, non-hubs have {bar.mean():.2f} (medians {foo.median():.2f} vs {bar.median():.2f})")
if doPlot:
sns.displot(countGO, x=feature, hue="isHub", kind='kde', common_norm=False)
plt.show();
```
# Export vectors lengths
```
def getVectorsLengths(featuresDict):
vectorsLengths = dict()
for feature, details in featuresDict.items():
df = pd.read_pickle(details['path'])
assert 'uniprotID' in df.columns
vectorsLengths[feature] = df.shape[1]-1 # -1 to remove uniprotID
return vectorsLengths
vectorsLengths = getVectorsLengths(featuresDict)
print(vectorsLengths)
versionRawImpute_overall = '6-0'
logVersions['featuresEngineering']['longVectors']['overall'] = versionRawImpute_overall
dump_LogVersions(logVersions)
with open(os.path.join(
cfg['outputFeaturesEngineering'],
"longVectors_lengths_v{}.pkl".format(versionRawImpute_overall)
), 'wb') as f:
pickle.dump(vectorsLengths, f)
```
# Format long vectors
```
def formatRawData(featuresDict, uniprotIDs, vectorsLengths):
out = dict()
out['uniprotID'] = uniprotIDs.uniprotID.to_list()
for feature, details in featuresDict.items():
print(feature)
df = pd.read_pickle(details['path'])
print(' - initial dim:', df.shape)
print(' - merge with reference index list')
df = uniprotIDs.merge(
df,
on = 'uniprotID',
how='left',
validate='1:1'
)
df.set_index('uniprotID', inplace=True)
print(' - new dim:', df.shape)
assert details['imputeNA'] in ['0','mean','none']
if details['imputeNA'] == 'mean':
print(' - mean imputation')
meanValues = df.mean(axis = 0, skipna = True)
meanValues[np.isnan(meanValues)] = 0
df.fillna(meanValues, inplace=True)
# sanity check
assert df.isna().sum().sum() == 0
elif details['imputeNA'] == '0':
print(' - impute with 0')
df.fillna(0, inplace=True)
# sanity check
assert df.isna().sum().sum() == 0
else:
print(' - no imputation: {:,} NAs'.format(df.isna().sum().sum()))
if details['normalise']:
print(' - normalise')
scal = sk.preprocessing.StandardScaler(copy = False)
df = scal.fit_transform(df)
elif feature == 'sequence':
df = df.sequence.to_list()
else:
df = df.values
# compare shape to vectorsLengths
if feature == 'sequence':
assert isinstance(df, list)
else:
assert df.shape[1] == vectorsLengths[feature]
out[feature] = df.copy()
return out
def sneakPeak2(featuresDict, n=5):
for feature, df in featuresDict.items():
print('## ',feature)
glance(df, n=n)
print()
```
## Without normalising binary features
```
for feature in featuresDict:
if featuresDict[feature]['isBinary']:
featuresDict[feature]['normalise'] = False
featuresDict
outDict = formatRawData(featuresDict=featuresDict, uniprotIDs=uniprotIDs, vectorsLengths=vectorsLengths)
sneakPeak2(outDict)
sneakPeak2(outDict, n=0)
```
---
**Export**
- v6.1 09/11/2021
```
versionRawLimitedImpute = '6-1'
# logVersions['featuresEngineering'] = dict()
# logVersions['featuresEngineering']['longVectors']=dict()
logVersions['featuresEngineering']['longVectors']['keepBinary'] = versionRawLimitedImpute
dump_LogVersions(logVersions)
with open(os.path.join(
cfg['outputFeaturesEngineering'],
"longVectors_keepBinary_v{}.pkl".format(versionRawLimitedImpute)
), 'wb') as f:
pickle.dump(outDict, f)
```
## WITH normalising binary features
```
for feature in featuresDict:
if featuresDict[feature]['isBinary']:
featuresDict[feature]['normalise'] = True
featuresDict
outDict2 = formatRawData(featuresDict=featuresDict, uniprotIDs=uniprotIDs, vectorsLengths=vectorsLengths)
sneakPeak2(outDict2)
```
---
**Export**
- v6.1 09/11/2021
```
versionRawImputeAll = '6-1'
logVersions['featuresEngineering']['longVectors']['imputeAll'] = versionRawImputeAll
dump_LogVersions(logVersions)
with open(os.path.join(
cfg['outputFeaturesEngineering'],
"longVectors_imputeAll_v{}.pkl".format(versionRawImputeAll)
), 'wb') as f:
pickle.dump(outDict2, f)
```
# Case 2.2
## How do users engage with a mobile app for automobiles?
_"It is important to understand what you can do before you learn how to measure how well you seem to have done it."_ – J. Tukey
As we saw in the previous case, careful data visualization (DV) can guide or even replace formal statistical analysis and model building. Here, we'll continue with more complex and computationally intensive visualizations.
## Introduction
**Business Context.** A recent trend among car manufacturers is to provide continued support through mobile applications. Features of these apps include services like remote ignition, GPS location, anti-theft mechanisms, maintenance reminders, and promotion pushes. Manufacturers are keen to maximize engagement with their app because they believe this increases relationship depth and brand loyalty with the customer. However, app usage is often limited, with many customers abandoning the app after only a short time period or never even opening it in the first place.
You are a data scientist for a large luxury automobile company. Your company wants you to uncover behavioral patterns of the users who engage with the app. They believe that if you can find discernible patterns, your company can leverage those insights to give users incentives to use the app more frequently.
**Business Problem.** Your employer would like you to answer the following: **"How do users currently engage with your mobile app and how has that engagement changed over time?"**
**Analytical Context.** In this case, we will look at data on a subset of 105 customers (out of 1,000 total app users) for the first four weeks after installing the app. This small subset of the data is chosen as a representative sample. Data were collected as part of a beta version of the app.
We will not just present a catalog of different visualizations; rather, we will look at how domain questions can guide visualizations and how carefully constructed visualizations can generate new questions and insights.
## First look at the data
As always, let's begin by having a look at the data and computing a few summary statistics. The data set contains
105 rows and 116 columns. Most of the columns represent app data collected on day $j$ ($1 \le j \le 28$):
| Variable name| Description | Values |
|--------------|--------------|------------|
| age | Ordinal age, coded: 1 (<= 25), 2 (26-34), 3 (35-50), 4 (50+)| Int: 1-4 |
| sex | Categorical sex | Char: F, M|
| device_type | Android or OS X | String: Andr, X|
| vehicle_class| Luxury or standard vehicle| String: Lx, Std|
| p_views_j, j=1,...,28| Ordinal page views on day j| Int: 1-5 |
| major_p_type_j, j=1,...,28| Majority page type| String: Main, Prom, Serv|
| engagement_time_j, j=1,...,28| Ordinal engagement time per day | Int: 0-5|
| drive_j, j=1,...,28| Indicator that user drove| Int: 0, 1|
We see that a lot of the data are **ordinal variables**. An ordinal variable is a categorical variable where the categories are numbers and the relative values of those numbers matter; however, the absolute values of those numbers do not. In other words, for a given ordinal variable $x$, a larger numbered category means "more of $x$" than a smaller numbered category; however, the category number does not indicate the actual amount of $x$. For example, here `age` is coded as an ordinal variable; the categorical value of `3` clearly indicates "more age" than the categorical value of `1` (35 - 50 years of age vs. under 25 years of age), but the specific category value `3` or `1` is meaningless.
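In pandas, such a variable is naturally represented as an ordered categorical, so comparisons respect the category order without implying anything about magnitudes (the `age` values below are invented for illustration):

```
import pandas as pd

age = pd.Series([1, 3, 2, 4, 1], name="age")
age_ord = age.astype(pd.CategoricalDtype(categories=[1, 2, 3, 4], ordered=True))

# order-aware comparison: users coded below category 3, i.e. under age 35
print((age_ord < 3).sum())  # 3
```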
Below is some more information about some of the other variables:
1. The only allowable mobile platforms are Android (coded `Andr`) or OS X (coded `X`) and this is collected automatically when the app is installed; thus, we expect this variable to have no missing values.
2. The vehicle identification number was required to sign in and from this `vehicle_class` was automatically populated; thus, we also expect this variable to have no missing values.
3. The variable `major_p_type_j` is the majority page type for the user on day j. In other words, it's the type of page which is viewed most often. It's coded as a categorical variable taking the values `Main` for maintenance, `Prom` for promotions, and `Serv` for services. Here, services means the app's services (e.g. automatic start, GPS location, etc.), rather than, say, scheduling an appointment to get the car serviced (which would be categorized as maintenance).
Furthermore, a lot of the data here is "opt-in" only; that is, it is only recorded if the user was active on the app that day, and missing otherwise. For example, `p_views_j`, `major_p_type_j`, `engagement_time_j`, and `drive_j` are all "opt-in" variables.
### Exercise 1:
What is the significance of the variables mentioned above being opt-in? What insights can we derive from this?
Given this realization about the "opt-in" data, it makes sense for us to first understand patterns surrounding what data is missing.
## Understanding and visualizing patterns in the missing data
As you saw in the Python cases, missing data is a staple of almost any dataset we will encounter. This one is no different. This dataset has substantial missing data, with nearly 60% of subjects missing a value for at least one column.
A useful tool to look at the structure of missing data is a **missingness plot**, which is a grid where the rows correspond to individuals and the columns correspond to the variables (so in our case, this will be a 106 x 115 grid).
The $(i,j)$-th square of the grid is colored white if variable $j$ was missing for subject $i$. A first pass at a missingness plot gives us:
<img src="img/missingnessPlotOne.png" width="1200">
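A plot like the one above can be sketched with matplotlib's `imshow` over the boolean missingness matrix. This is a minimal sketch on toy data standing in for the (real) 106 × 115 user DataFrame:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# toy stand-in for the real 106 x 115 DataFrame
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(106, 115)))
df = df.mask(rng.random(df.shape) < 0.2)  # knock out ~20% of entries

missing = df.isna().to_numpy()

# white = missing, dark = observed, matching the plots in this case
plt.imshow(~missing, aspect='auto', cmap='gray', interpolation='none')
plt.xlabel('Variable index')
plt.ylabel('Subject index')
plt.title('Missingness plot (toy data)')
plt.show()
```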
### Question:
Do you spot any patterns in the missing values here?
### Exercise 2:
What are some things you can do with the dataset to visualize the missing data better?
In light of this, let's remake the missingness plot with the similar variables grouped together:
<img src="img/missingnessPlotTwo.png" width="1200">
### Exercise 3:
What patterns do you notice here? Do these patterns make sense based on your understanding of the problem?
We can make the pattern from Exercise 3 even more apparent by not just grouping the "opt-in" data together by type of information conveyed, but by grouping them all together, regardless of type. In this case the missingness plot looks like:
<img src="img/missingnessPlotThree.png" width="1200">
### Exercise 4:
A natural question to ask is 'what percentage of users were still engaged as of a certain day?'. How can we modify the above plot to better visualize this?
<img src="img/missingnessPlotFour.png" width="1200">
From this plot it is immediately apparent that some subjects are dropping off and not returning; the data shows a **nearly monotone missingness pattern** which is useful for weighting and multiple imputation schemes (such methods are discussed in future cases on data wrangling). Furthermore, a significant proportion of users were engaged with the app throughout the entire 4-week period.
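The reordering that exposes the monotone pattern can be sketched as follows: sort subjects by their last observed day before plotting. The column names and values here are hypothetical toy data:

```python
import numpy as np
import pandas as pd

# toy stand-in: 6 subjects, 5 daily columns, NaN from the dropout day onward
day_cols = [f'p_views_{j}' for j in range(1, 6)]
df = pd.DataFrame([[3, 2, np.nan, np.nan, np.nan],
                   [1, 2, 3, 4, 5],
                   [2, np.nan, np.nan, np.nan, np.nan],
                   [4, 4, 4, np.nan, np.nan],
                   [5, 5, 5, 5, np.nan],
                   [1, 1, 1, 1, 1]], columns=day_cols)

# with monotone dropout, the count of observed days equals the last active day
last_day = df.notna().sum(axis=1)

# reorder subjects from longest- to shortest-engaged before plotting
order = last_day.sort_values(ascending=False, kind='stable').index
df_sorted = df.loc[order]
print(order.tolist())  # longest-engaged subjects first
```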
We now see the power of using contextual knowledge of the problem and dataset itself in the data visualization process. **The preceding four plots all contained the same underlying information, yet the later plots were clearly much easier to draw insights from than the earlier ones.**
## Investigating in-app behavior
Now that we've gleaned basic insights into whether or not users engage with the app at all, it's time to do a more detailed analysis of their behavior within the app. We'll start by looking at page views.
### Evaluating patterns in page views
To stakeholders, page views are a key measure of engagement. Let's identify patterns in the number of page views per day. Recall that page views is an ordinal variable (ordered categorical variable) coded 1-5, where category 1 corresponds to 0-1 actual page views, i.e., the app was opened and then closed without navigating past the splash page. For each person, we have a sequence of up to 28 observations. Let's first create a parallel coordinates plot with one line per subject:
<img src="img/matplotOne.png" width="1200">
The preceding plot is extremely difficult to read. But we don't care so much about patterns for any individual user as much as the aggregate set of users. Thus, let's graph a line representing the average page views per person. The following plot shows this in black:
<img src="img/matplotTwo.png" width="1200">
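The black average line can be computed directly from the wide-format data, skipping missing days. A minimal sketch on toy data (column names are hypothetical):

```python
import numpy as np
import pandas as pd

# toy wide-format data: 20 subjects x 28 days of ordinal page views (1-5)
day_cols = [f'p_views_{j}' for j in range(1, 29)]
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.integers(1, 6, size=(20, 28)).astype(float),
                  columns=day_cols)
df.iloc[::3, 14:] = np.nan  # every third subject drops out after day 14

# average page views per day, over only the subjects active that day
daily_mean = df.mean(axis=0, skipna=True)
```

Note that averaging ordinal codes is a pragmatic simplification: it treats the categories as if they were evenly spaced.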
### Exercise 5:
There seems to be some kind of periodicity in the above smoothed plot. What might explain this pattern?
#### Clustering by user cohorts
Domain experts who have run qualitative studies of user behavior believe that there are different groups, or **cohorts**, of users, where the users within a single cohort behave similarly. They believe that page view behavior would be more homogeneous within any given cohort. However, these cohorts are not directly observable.
Using clustering methods (which you will learn about in future cases), we have segregated the users into three groups based on their similarities:
<img src="img/matplotG1.png" width="1200">
<img src="img/matplotG2.png" width="1200">
<img src="img/matplotG3.png" width="1200">
### Exercise 6:
Describe the page view behaviors within each cohort.
### Exercise 7:
Which cohort of users do you think are more likely to look at promotional pages (major page type category `Prom`)?
### Analyzing patterns in major page type
Let's have a look at the major page type over time across our three user cohorts:
<img src="img/pagetypeG1.png" width="1200">
<img src="img/pagetypeG2.png" width="1200">
<img src="img/pagetypeG3.png" width="1200">
From this, we can see that the third group is indeed the most engaged with the promotional pages.
### Exercise 8:
What are some potential next steps if you wanted to do a deep dive into user page view behavior? What additional data might you want to collect on users?
## Predicting dropout from page view behavior
Because page view behavior is believed to be strongly related to engagement with the app and likelihood of discontinuation, we would like to see if we can predict the point of disengagement by analyzing the page view behavior within each cohort. We start by simply labeling the last observation (i.e. day of usage) for each subject with a large red dot:
<img src="img/matplotMissingG1.png" width="1100">
<img src="img/matplotMissingG2.png" width="1100">
<img src="img/matplotMissingG3.png" width="1100">
### Exercise 9:
Do you notice any patterns in page views preceding dropout?
### Exercise 10:
Work with a partner. Based on the preceding visualizations, propose an adaptive intervention strategy that monitors a user's page views and then offers them an incentive to continue using the app right when we believe that the incentive would have the most impact. Assume that you can offer at most one such incentive during the first four weeks of app use.
## Conclusions
We explored usage and disengagement patterns among users of a mobile app for a car manufacturer. We saw that most users still remained engaged with the app even after 28 days, and that there were three significantly distinct cohorts of users. We used these patterns to generate ideas for intervention strategies that might be used to increase app usage and reduce disengagement. These visualizations are an excellent starting point for building statistical models or designing experiments to test theories about drivers of disengagement.
## Takeaways
In this case, you looked at more types of plots and how to draw conclusions from them. You also learned how these conclusions can drive further questions and plotting. Some key insights include:
1. Sometimes it is important to reorder the data according to some variable in order to derive insights (as we saw with the missingness plot).
2. Sometimes additional computation or data manipulation is required in order to tease a meaningful pattern from a data visualization (as we saw with the clustering & averaging for the parallel coordinates plots with the three cohorts).
3. Domain knowledge and understanding the context of the problem and data at hand is crucial. Without this, we would never have been able to create the visualizations we did and draw the conclusions we did from the missingness plot and the parallel coordinates plots.
```
import import_ipynb
import matplotlib.pyplot as plt
from FULL_DATA import final_df
import nltk
nltk.download('punkt')
from nltk.tokenize import sent_tokenize
from nltk.tokenize import word_tokenize
from nltk.probability import FreqDist
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from nltk.corpus import stopwords
#Making Labels
final_df['SENTIMENT'] = [0 if 1 < x < 3.5 else 1 if 3.5 <= x < 4.5 else -1 for x in final_df['ratings']]  # -1 otherwise (labels are {-1, 0, 1})
print(final_df['SENTIMENT'].value_counts())
print(final_df.shape)
final_df.head()
final_df['SENTIMENT'].value_counts()
# iterate through each sentence in the file
data = []
for i in final_df['TRANSCRIPTS']:
temp = []
# tokenize the sentence into words
# print(i)
    for j in word_tokenize(i):
        j = j.lower()  # lowercase first so the duplicate and stopword checks match
        if j in temp:
            pass
        elif j in stopwords.words('english'):
            pass
        else:
            temp.append(j)
data.append(temp)
#data
tf=TfidfVectorizer(lowercase=True,max_df = .9,min_df=.1,ngram_range = (1,1))
text_tf= tf.fit_transform(final_df['TRANSCRIPTS'])
tfidf = dict(zip(tf.get_feature_names_out(), tf.idf_))  # use tf.get_feature_names() on scikit-learn < 1.0
# tfidf
#Word2Vec
max_len = 500
from gensim.models import Word2Vec
word2vec = Word2Vec(data, min_count=2, vector_size=max_len, window=5)  # 'size=' on gensim < 4
vocabulary = word2vec.wv.key_to_index  # 'word2vec.wv.vocab' on gensim < 4
avg_list = []
import numpy as np
for i in final_df['TRANSCRIPTS']:
vec = np.zeros(max_len).reshape((1, max_len))
count = 0
# print("iiiiiiiiiiiiiiiiiiiiii",i)
for j in word_tokenize(i):
# print(j)
try:
            vec += word2vec.wv[j].reshape((1, max_len)) * tfidf[j]
count += 1.
except KeyError:
continue
if count != 0:
vec /= count
avg_list.append(vec[-1])
import pandas as pd
X_train, X_test, y_train, y_test = train_test_split(
pd.DataFrame(avg_list), final_df['SENTIMENT'], test_size=0.15, random_state=1)
from keras.utils import to_categorical
num_classes = 3
y_train_adjusted = to_categorical(np.array(y_train), num_classes = num_classes)
y_test_adjusted = to_categorical(np.array(y_test), num_classes = num_classes)
#Simple Feed-Forward
import tensorflow
from tensorflow import keras
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Embedding
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Activation
from sklearn.preprocessing import StandardScaler
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.optimizers import SGD
epochs = 1500
#lr =.1
num_classes = 3
model = Sequential()
model.add(Dense(100, activation='sigmoid', input_dim=max_len))
model.add(Dense(100, activation='relu'))
model.add(Dense(num_classes, activation = 'softmax'))
#sgd = SGD(lr=0.05, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer = 'adam', metrics=['acc'])
model.fit(np.array(X_train), y_train_adjusted, epochs=epochs, batch_size=32, verbose=0,validation_data=(np.array(X_test), y_test_adjusted), shuffle=False)
from sklearn import metrics
score = model.evaluate(X_test, y_test_adjusted, batch_size=32, verbose=2)
y_prob = model.predict(X_test)
predicted = np.argmax(y_prob, axis = 1)
sklearn_y_test = np.argmax(y_test_adjusted, axis = 1)
print("Making Sure Accuracy is the same:",metrics.accuracy_score(sklearn_y_test, predicted))
print("Feed-Forward Precision:",metrics.precision_score(sklearn_y_test, predicted, average = 'weighted'))
print("Feed-Forward Recall:",metrics.recall_score(sklearn_y_test, predicted, average = 'weighted'))
print("Feed-Forward F1:",metrics.f1_score(sklearn_y_test, predicted, average = 'weighted'))
#For Report - Baseline
'''
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(final_df['TRANSCRIPTS'], final_df['SENTIMENT'], test_size=0.15, random_state=1)
print(y_train)
print()
'''
predicted_baseline = np.full(y_test.size, 1)
print("Baseline Accuracy",metrics.accuracy_score(y_test, predicted_baseline))
print("Baseline Precision:",metrics.precision_score(y_test, predicted_baseline, average = 'weighted'))
print("Baseline Recall:",metrics.recall_score(y_test, predicted_baseline, average = 'weighted'))
print("Baseline F1:",metrics.f1_score(y_test, predicted_baseline, average = 'weighted'))
#Printing Confusion Matrix - For Report
print("Confusion Matrix DL:")
print(metrics.multilabel_confusion_matrix(sklearn_y_test, predicted))
```
#### Arbitrary Value Imputation
This technique was derived from a Kaggle competition. It consists of replacing NaN with an arbitrary value.
```
import pandas as pd
df=pd.read_csv("titanic.csv", usecols=["Age","Fare","Survived"])
df.head()
def impute_nan(df,variable):
    df[variable+'_zero']=df[variable].fillna(0)
    df[variable+'_hundred']=df[variable].fillna(100)
impute_nan(df,'Age')  # create the imputed columns before plotting
df['Age'].hist(bins=50)
```
### Advantages
- Easy to implement
- Captures the importance of missingness, if there is one
### Disadvantages
- Distorts the original distribution of the variable
- If missingness is not important, it may mask the predictive power of the original variable by distorting its distribution
- Hard to decide which value to use
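The distortion is easy to see directly. The sketch below uses synthetic data standing in for a column like the Titanic `Age` variable (names and values are illustrative):

```python
import numpy as np
import pandas as pd

# toy stand-in for an 'Age'-like column, with ~20% missing
rng = np.random.default_rng(0)
age = pd.Series(rng.normal(30, 10, size=1000))
age[rng.random(1000) < 0.2] = np.nan

filled = age.fillna(100)  # arbitrary-value imputation

# the arbitrary 100s inflate both the mean and the spread
print(age.mean(), age.std())        # computed on observed values only
print(filled.mean(), filled.std())
```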
## How To Handle Categorical Missing Values
##### Frequent Category Imputation
```
df=pd.read_csv('loan.csv', usecols=['BsmtQual','FireplaceQu','GarageType','SalePrice'])
df.columns
df.shape
df.isnull().sum()
df.isnull().mean().sort_values(ascending=True)
```
### Compute the frequency with every feature
```
df['BsmtQual'].value_counts().plot.bar()
df.groupby(['BsmtQual'])['BsmtQual'].count().sort_values(ascending=False).plot.bar()
df['GarageType'].value_counts().plot.bar()
df['FireplaceQu'].value_counts().plot.bar()
df['GarageType'].value_counts().index[0]
df['GarageType'].mode()[0]
def impute_nan(df,variable):
most_frequent_category=df[variable].mode()[0]
df[variable].fillna(most_frequent_category,inplace=True)
for feature in ['BsmtQual','FireplaceQu','GarageType']:
impute_nan(df,feature)
df.isnull().mean()
```
#### Advantages
1. Easy to implement
2. Fast to implement
#### Disadvantages
1. Since we are using the most frequent label, it may become over-represented if there are many NaNs
2. It distorts the relation of the most frequent label with the other variables
##### Adding a variable to capture NAN
```
df=pd.read_csv('loan.csv', usecols=['BsmtQual','FireplaceQu','GarageType','SalePrice'])
df.head()
import numpy as np
df['BsmtQual_Var']=np.where(df['BsmtQual'].isnull(),1,0)
df.head()
frequent=df['BsmtQual'].mode()[0]
df['BsmtQual'].fillna(frequent,inplace=True)
df.head()
df['FireplaceQu_Var']=np.where(df['FireplaceQu'].isnull(),1,0)
frequent=df['FireplaceQu'].mode()[0]
df['FireplaceQu'].fillna(frequent,inplace=True)
df.head()
```
#### If the missing values are frequent, we can replace NaN with a new category
```
df=pd.read_csv('loan.csv', usecols=['BsmtQual','FireplaceQu','GarageType','SalePrice'])
df.head()
def impute_nan(df,variable):
df[variable+"newvar"]=np.where(df[variable].isnull(),"Missing",df[variable])
for feature in ['BsmtQual','FireplaceQu','GarageType']:
impute_nan(df,feature)
df.head()
df=df.drop(['BsmtQual','FireplaceQu','GarageType'],axis=1)
df.head()
```
# Muscle modeling
> Marcos Duarte
> Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/))
> Federal University of ABC, Brazil
There are two major classes of muscle models that have been used in biomechanics and motor control: the Hill-type and Huxley-type models. They differ mainly on how the contractile element is modeled. In Hill-type models, the modeling of the contractile element is phenomenological; arbitrary mathematical functions are used to reproduce experimental observations relating muscle characteristics (such as excitation/activation, muscle length and velocity) with the muscle force. In Huxley-type models, the modeling of the contractile element is mechanistic; the mathematical functions used represent the hypothesized mechanisms for the cross-bridge dynamics (Tsianos and Loeb, 2013). Huxley-type models tend to produce more realistic results than Hill-type models for certain conditions but they have a higher computational demand. For this reason, Hill-type models are more often employed in musculoskeletal modeling and simulation.
Hill-type muscle models are presented in several texts (e.g., Erdermir et al. 2007; He et al., 1991; McMahon, 1984; Nigg and Herzog, 2007; Robertson et al., 2013, Thelen, 2003; Tsianos and Loeb, 2013, Winters, 1990; Zajac, 1989; Zatsiorsky and Prilutsky, 2012) and implemented in many software for modeling and simulation of the musculoskeletal dynamics of human movement (e.g., the free and open source software [OpenSim](https://simtk.org/home/opensim)).
Next, let's see a brief overview of a Hill-type muscle model and a basic implementation in Python.
## Hill-type muscle model
Hill-type models are developed to reproduce the dependence of force with the length and velocity of the muscle-tendon unit and parameters are lumped and made dimensionless in order to represent different muscles with few changes in these parameters. A Hill-type model is complemented with the modeling of the activation dynamics (i.e., the temporal pattern of muscle activation and deactivation as a function of the neural excitation) to produce more realistic results. As a result, the force generated will be a function of three factors: the length and velocity of the muscle-tendon unit and its activation level $a$.
A Hill-type muscle model has three components (see figure below): two for the muscle, an active contractile element (CE) and a passive elastic element (PE) in parallel with the CE, and one component for the tendon, an elastic element (SE) in series with the muscle. In some variations, a damping component is added parallel to the CE as a fourth element. A [pennation angle](http://en.wikipedia.org/wiki/Muscle_architecture) (angle of the pennate fibers with respect to the force-generating axis) is also included in the model. In a simpler approach, the muscle and tendon are assumed massless.
<figure><img src="./../images/muscle_hill.png" width=400 alt="Hill-type muscle model."/><figcaption><center><i>Figure. A Hill-type muscle model with three components: two for the muscle, an active contractile element, $\mathsf{CE}$, and a passive elastic element in parallel, $\mathsf{PE}$, with the $\mathsf{CE}$, and one component for the tendon, an elastic element in series, $\mathsf{SE}$, with the muscle. $\mathsf{L_{MT}}$: muscle–tendon length, $\mathsf{L_T}$: tendon length, $\mathsf{L_M}$: muscle fiber length, $\mathsf{F_T}$: tendon force, $\mathsf{F_M}$: muscle force, and $α$: pennation angle.</i></center></figcaption>
Let's now revise the models of a Hill-type muscle with three components and activation dynamics by two references:
1. [Thelen (2003)](http://simtk-confluence.stanford.edu:8080/display/OpenSim/Thelen+2003+Muscle+Model) with some of the adjustments described in Millard et al. (2013). Hereafter, Thelen2003Muscle or T03.
2. [McLean, Su, van den Bogert (2003)](http://www.ncbi.nlm.nih.gov/pubmed/14986412). Hereafter, McLean2003Muscle or M03.
First, let's import the necessary Python libraries and customize the environment:
```
import numpy as np
from scipy.integrate import ode, odeint
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rcParams['lines.linewidth'] = 3
matplotlib.rcParams['font.size'] = 13
matplotlib.rcParams['lines.markersize'] = 5
matplotlib.rc('axes', grid=True, labelsize=14, titlesize=16, ymargin=0.05)
matplotlib.rc('legend', numpoints=1, fontsize=11)
```
### Force-length relationship
In a Hill-type model, the force a muscle can generate depends on its length due to two factors:
1. The active force of the contractile element (CE), which in turn depends on the spatial superposition of the actin and myosin molecules to form cross-bridges at the sarcomere. A maximum number of cross-bridges will be formed at an optimal fiber length, generating a maximum force. When a fiber is too stretched or too shortened, fewer cross-bridges will be formed, decreasing the force generated.
2. The passive and parallel elastic element (PE), which behaves as a nonlinear spring where no force is generated below a certain length (the slack length) and force increases with the muscle elongation.
#### Force-length relationship of the contractile element
Thelen2003Muscle represented the normalized force-length relationship of the contractile element by a Gaussian function:
\begin{equation}
\bar{f}_{l,CE} = exp\left[-(\bar{L}_M-1)^2/\gamma\right]
\label{}
\end{equation}
where $\gamma$ is a shape factor and $\bar{L}_M$ is the muscle fiber length normalized by the optimal muscle fiber length at which maximal force can be produced, $L_{Mopt}$:
\begin{equation}
\bar{L}_M=\dfrac{L_M}{L_{Mopt}}
\label{}
\end{equation}
Thelen2003Muscle adopted $\gamma=0.45$. The actual force produced is obtained multiplying $\bar{f}_{l,CE}$ by the maximum isometric muscle force, $F_{M0}$. Thelen2003Muscle assumed that the maximum isometric muscle forces for old adults were 30% lower than those used for young adults.
McLean2003Muscle represented the force-length relationship of the contractile element (not normalized) as a function of muscle length (not normalized) by a quadratic function:
\begin{equation}
f_{l,CE} = max \left\{
\begin{array}{l l}
F_{Mmin} \\
F_{M0}\left[1 - \left(\dfrac{L_M-L_{Mopt}}{WL_{Mopt}}\right)^2\right]
\end{array} \right.
\label{}
\end{equation}
where $W$ is a dimensionless parameter describing the width of the force-length relationship. A minimum force level $F_{Mmin}$ is employed for numerical stability.
McLean2003Muscle adopted $W=1$ and $F_{Mmin}=10 N$.
The corresponding Python functions are:
```
def flce_T03(lm=1, gammal=0.45):
"""Thelen (2003) force of the contractile element as function of muscle length.
Parameters
----------
lm : float, optional (default=1)
normalized muscle fiber length
gammal : float, optional (default=0.45)
shape factor
Returns
-------
fl : float
normalized force of the muscle contractile element
"""
fl = np.exp(-(lm-1)**2/gammal)
return fl
def flce_M03(lm=1, lmopt=1, fm0=1, fmmin=0.001, wl=1):
"""McLean (2003) force of the contractile element as function of muscle length.
Parameters
----------
lm : float, optional (default=1)
muscle (fiber) length
lmopt : float, optional (default=1)
optimal muscle fiber length
fm0 : float, optional (default=1)
maximum isometric muscle force
fmmin : float, optional (default=0.001)
minimum muscle force
wl : float, optional (default=1)
shape factor of the contractile element force-length curve
Returns
-------
fl : float
force of the muscle contractile element
"""
fl = np.max([fmmin, fm0*(1 - ((lm - lmopt)/(wl*lmopt))**2)])
return fl
```
And plots of these functions:
```
lm = np.arange(0, 2.02, .02)
fce_T03 = np.zeros(lm.size)
fce_M03 = np.zeros(lm.size)
for i in range(len(lm)):
fce_T03[i] = flce_T03(lm[i])
fce_M03[i] = flce_M03(lm[i])
plt.figure(figsize=(7, 4))
plt.plot(lm, fce_T03, 'b', label='T03')
plt.plot(lm, fce_M03, 'g', label='M03')
plt.xlabel('Normalized length')
plt.ylabel('Normalized force')
plt.legend(loc='best')
plt.suptitle('Force-length relationship of the contractile element', y=1, fontsize=16)
plt.show()
```
Similar results when the same parameters are used.
#### Force-length relationship of the parallel element
Thelen2003Muscle represents the normalized force of the parallel (passive) element of the muscle as a function of muscle length (normalized by the optimal muscle fiber length) by an exponential function:
\begin{equation}
\bar{F}_{PE}(\bar{L}_M) = \dfrac{exp\left[k_{PE}(\bar{L}_M-1)/\epsilon_{M0}\right]-1}{exp(k_{PE})-1}
\label{}
\end{equation}
where $k_{PE}$ is an exponential shape factor and $\epsilon_{M0}$ is the passive muscle strain due to maximum isometric force:
\begin{equation}
\epsilon_{M0}=\dfrac{L_M(F_{M0})-L_{Mslack}}{L_{Mslack}}
\label{}
\end{equation}
where $L_{Mslack}$ is the muscle slack length. Thelen2003Muscle adopted $L_{Mslack} = L_{Mopt}$.
Thelen2003Muscle adopted $k_{PE}=5$ and $\epsilon_{M0}=0.6$ for young adults ($\epsilon_{M0}=0.5$ for old adults). The actual force produced is obtained multiplying $\bar{F}_{PE}$ by the maximum isometric muscle force, $F_{M0}$.
McLean2003Muscle represents the force of the parallel (passive) element of the muscle (not normalized) as a function of muscle length (not normalized) by a quadratic function:
\begin{equation}
F_{PE}(L_M) = \left\{
\begin{array}{l l}
0 \quad & \text{if} \quad L_M \leq L_{Mslack} \\
k_{PE}(L_M - L_{Mslack})^2 \quad & \text{if} \quad L_M > L_{Mslack}
\end{array} \right.
\label{}
\end{equation}
where $k_{PE}$ is a stiffness parameter of the parallel element such that the passive muscle force is equal to the normalized maximum isometric force of the muscle when the CE is stretched to its maximal length for active force production:
\begin{equation}
k_{PE} = \dfrac{F_{M0}}{(WL_{Mopt})^2}
\label{}
\end{equation}
McLean2003Muscle adopted $L_{Mslack} = L_{Mopt}$.
The corresponding Python functions are:
```
def fpelm_T03(lm=1, kpe=5, epsm0=0.6):
"""Thelen (2003) force of the muscle parallel element as function of muscle length.
Parameters
----------
lm : float, optional (default=1)
normalized muscle fiber length
kpe : float, optional (default=5)
exponential shape factor
epsm0 : float, optional (default=0.6)
passive muscle strain due to maximum isometric force
Returns
-------
fpe : float
normalized force of the muscle parallel (passive) element
"""
if lm < 1:
fpe = 0
else:
fpe = (np.exp(kpe*(lm-1)/epsm0)-1)/(np.exp(kpe)-1)
return fpe
def fpelm_M03(lm=1, lmopt=1, fm0=1, lmslack=1, wp=1):
"""McLean (2003) force of the muscle parallel element as function of muscle length.
Parameters
----------
lm : float, optional (default=1)
muscle fiber length
lmopt : float, optional (default=1)
optimal muscle (fiber) length
fm0 : float, optional (default=1)
maximum isometric muscle force
lmslack : float, optional (default=1)
muscle slack length
wp : float, optional (default=1)
shape factor of the parallel element force-length curve
Returns
-------
fpe : float
force of the muscle parallel (passive) element
"""
kpe = fm0/(wp*lmopt)**2
if lm <= lmslack:
fpe = 0
else:
fpe = kpe*(lm-lmslack)**2
return fpe
```
And plots of these functions:
```
lm = np.arange(0, 2.02, .02)
fpe_T03 = np.zeros(lm.size)
fpe_M03 = np.zeros(lm.size)
for i in range(len(lm)):
fpe_T03[i] = fpelm_T03(lm[i])
fpe_M03[i] = fpelm_M03(lm[i])
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True, figsize=(10, 4))
ax1.plot(lm[:86], fce_T03[:86], 'b', label='Active')
ax1.plot(lm[:86], fpe_T03[:86], 'r', label='Passive')
ax1.plot(lm[:86], fce_T03[:86] + fpe_T03[:86], 'g', label='Total')
ax1.text(0.1, 2.6, 'T03')
ax1.set_xlim([0, 1.7])
ax1.set_xlabel('Normalized length')
ax1.set_ylabel('Normalized force')
#ax1.legend(loc='best')
ax2.plot(lm[:86], fce_M03[:86], 'b', label='Active')
ax2.plot(lm[:86], fpe_M03[:86], 'r', label='Passive')
ax2.plot(lm[:86], fce_M03[:86] + fpe_M03[:86], 'g', label='Total')
ax2.text(0.1, 2.6, 'M03')
ax2.set_xlim([0, 1.7])
ax2.set_xlabel('Normalized length')
ax2.legend(loc='best')
plt.suptitle('Muscle force-length relationship', y=1, fontsize=16)
plt.tight_layout()
plt.show()
```
The results are different at the maximum stretching because Thelen2003Muscle and McLean2003Muscle model differently the passive component.
These results were simulated for a maximum muscle activation (an activation level, $a$, of 1, where 0 is no activation). The effect of different activation levels on the total muscle force (but only the active force is affected) is shown in the next figure:
```
lm = np.arange(0, 2.02, .02)
fce_T03_als = np.zeros((lm.size, 5))
als = [0, 0.25, 0.50, 0.75, 1.0]
for j, al in enumerate(als):
for i in range(len(lm)):
fce_T03_als[i, j] = flce_T03(lm[i])*al
fig, ax = plt.subplots(nrows=1, ncols=1, sharex=True, sharey=True, figsize=(6, 5))
for j, al in enumerate(als):
ax.plot(lm[:86], fce_T03_als[:86, j] + fpe_T03[:86], label='%.2f'%al)
ax.text(0.1, 2.6, 'T03')
ax.set_xlim([0, 1.7])
ax.set_xlabel('Normalized length')
ax.set_ylabel('Normalized force')
ax.legend(loc='best', title='Activation level')
ax.set_title('Muscle force-length relationship', y=1, fontsize=16)
plt.tight_layout()
plt.show()
```
#### Force-length relationship of the series element (tendon)
Thelen2003Muscle represented the tendon force of the series element as a function of the normalized tendon length (in fact, tendon strain) by an exponential function during an initial nonlinear toe region and by a linear function thereafter:
\begin{equation}
\bar{F}_{SE}(\bar{L}_T) = \left\{
\begin{array}{l l}
\dfrac{\bar{F}_{Ttoe}}{exp(k_{Ttoe})-1}\left[exp(k_{Ttoe}\epsilon_T/\epsilon_{Ttoe})-1\right] \quad & \text{if} \quad \epsilon_T \leq \epsilon_{Ttoe} \\
k_{Tlin}(\epsilon_T - \epsilon_{Ttoe}) + \bar{F}_{Ttoe} \quad & \text{if} \quad \epsilon_T > \epsilon_{Ttoe}
\end{array} \right.
\label{}
\end{equation}
where $\epsilon_{T}$ is the tendon strain:
\begin{equation}
\epsilon_{T} = \dfrac{L_T-L_{Tslack}}{L_{Tslack}}
\label{}
\end{equation}
$L_{Tslack}$ is the tendon slack length, $\epsilon_{Ttoe}$ is the tendon strain above which the tendon exhibits linear behavior, $k_{Ttoe}$ is an exponential shape factor, and $k_{Tlin}$ is a linear scale factor. The parameters are chosen such that the tendon elongation at the normalized maximal isometric force of the muscle is 4% of the tendon length ($\epsilon_{T0}=0.04$).
Thelen2003Muscle adopted $k_{Ttoe}=3$ and the transition from nonlinear to linear behavior occurs for normalized tendon forces greater than $\bar{F}_{Ttoe}=0.33$. For continuity of slopes at the transition, $\epsilon_{Ttoe}=0.609\epsilon_{T0}$ and $k_{Tlin}=1.712/\epsilon_{T0}$. The actual force produced is obtained multiplying $\bar{F}_{SE}$ by the maximum isometric muscle force, $F_{M0}$.
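The two derived constants can be checked numerically from the continuity conditions (equal value and slope of the toe and linear regions at $\epsilon_{Ttoe}$, and $\bar{F}_{SE}=1$ at $\epsilon_{T0}$). This is a verification sketch, not part of either reference model:

```python
import numpy as np

kttoe, fttoe, epst0 = 3, 0.33, 0.04

# slope of the exponential toe region at eps_toe:
#   fttoe * kttoe * exp(kttoe) / ((exp(kttoe) - 1) * eps_toe)
# equating it to the linear slope ktlin = (1 - fttoe) / (epst0 - eps_toe)
# and solving for eps_toe:
A = fttoe * kttoe * np.exp(kttoe) / (np.exp(kttoe) - 1)
epsttoe = A * epst0 / (A + (1 - fttoe))
ktlin = (1 - fttoe) / (epst0 - epsttoe)

print(epsttoe / epst0)   # ~0.609, matching eps_Ttoe = 0.609 * eps_T0
print(ktlin * epst0)     # ~1.712, matching k_Tlin = 1.712 / eps_T0
```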
McLean2003Muscle represented the tendon force (not normalized) of the series element as a function of the tendon length (not normalized) by the same quadratic function used for the force of the muscle passive element:
\begin{equation}
F_{SE}(L_T) = \left\{
\begin{array}{l l}
0 \quad & \text{if} \quad L_T \leq L_{Tslack} \\
k_T(L_T - L_{Tslack})^2 \quad & \text{if} \quad L_T > L_{Tslack}
\end{array} \right.
\label{}
\end{equation}
where $k_T$ is the tendon stiffness. The stiffness parameter $k_T$ is chosen such that the tendon elongation is 4% at the maximum isometric force, $k_T=(1/\epsilon_{T0})^2=625$ for $F_{M0}=1$.
The corresponding Python functions are:
```
def fselt_T03(lt=1, ltslack=1, epst0=0.04, kttoe=3):
"""Thelen (2003) force-length relationship of tendon as function of tendon length.
Parameters
----------
lt : float, optional (default=1)
normalized tendon length
ltslack : float, optional (default=1)
normalized tendon slack length
epst0 : float, optional (default=0.04)
tendon strain at the maximal isometric muscle force
kttoe : float, optional (default=3)
exponential shape factor
Returns
-------
fse : float
normalized force of the tendon series element
"""
epst = (lt-ltslack)/ltslack
fttoe = 0.33
# values from OpenSim Thelen2003Muscle
epsttoe = .99*epst0*np.e**3/(1.66*np.e**3 - .67)
ktlin = .67/(epst0 - epsttoe)
#
if epst <= 0:
fse = 0
elif epst <= epsttoe:
fse = fttoe/(np.exp(kttoe)-1)*(np.exp(kttoe*epst/epsttoe)-1)
else:
fse = ktlin*(epst-epsttoe) + fttoe
return fse
def fselt_M03(lt=1, ltslack=1, fm0=1, epst0=0.04):
"""McLean (2003) force-length relationship of tendon as function of tendon length.
Parameters
----------
lt : float, optional (default=1)
tendon length
ltslack : float, optional (default=1)
tendon slack length
fm0 : float, optional (default=1)
maximum isometric muscle force
epst0 : float, optional (default=0.04)
tendon strain at the maximal isometric muscle force
Returns
-------
fse : float
force of the tendon series element
"""
kt = fm0/epst0**2
if lt <= ltslack:
fse = 0
else:
fse = kt*(lt-ltslack)**2
return fse
```
And plots of these functions:
```
lt = np.arange(1, 1.051, .001)
fse_T03 = np.zeros(lt.size)
fse_M03 = np.zeros(lt.size)
for i in range(len(lt)):
fse_T03[i] = fselt_T03(lt[i])
fse_M03[i] = fselt_M03(lt[i])
plt.figure(figsize=(7, 4))
plt.plot(lt-1, fse_T03, 'b', label='T03')
plt.plot(lt-1, fse_M03, 'g', label='M03')
plt.plot(0.04, 1, 'ro', markersize=8)
plt.text(0.04, 0.7, '$\epsilon_{T0}$', fontsize=22)
plt.xlabel('Tendon strain')
plt.ylabel('Normalized force')
plt.legend(loc='upper left')
plt.suptitle('Tendon force-length relationship (series element)', y=1, fontsize=16)
plt.show()
```
The two models give similar results when the same parameters are used.
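This similarity can also be checked numerically. The sketch below is self-contained: it restates both curves as functions of tendon strain (with the same parameter values used above) and confirms that both models reach the maximal isometric force at the reference strain $\epsilon_{T0} = 0.04$:

```
import numpy as np

def fse_T03(epst, epst0=0.04, kttoe=3):
    """Thelen (2003) normalized tendon force as a function of tendon strain."""
    fttoe = 0.33
    epsttoe = 0.99*epst0*np.e**3/(1.66*np.e**3 - 0.67)
    ktlin = 0.67/(epst0 - epsttoe)
    if epst <= 0:
        return 0.0
    elif epst <= epsttoe:
        return fttoe/(np.exp(kttoe) - 1)*(np.exp(kttoe*epst/epsttoe) - 1)
    else:
        return ktlin*(epst - epsttoe) + fttoe

def fse_M03(epst, epst0=0.04):
    """McLean (2003) quadratic tendon force, with kt = 1/epst0**2."""
    return 0.0 if epst <= 0 else (epst/epst0)**2

# both models are calibrated to reach the maximal isometric force (1) at 4% strain
print(fse_T03(0.04), fse_M03(0.04))
```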
### Force-velocity relationship of the contractile element
The force-velocity relation of the contractile element for shortening (concentric activation) is based on the well-known Hill equation, a hyperbola relating the force $F$ and velocity $V$ of the contractile element, in which the product $(F+a')(V+b')$ is constant (Winters, 1990; Winters, 1995):
\begin{equation}
(F+a')(V+b') = (F_{0}+a')b'
\label{}
\end{equation}
where $a'$, $b'$, and $F_{0}$ are constants.
We can rewrite the equation above with constants more meaningful to our modeling:
\begin{equation}
(F_{M}+A_f F_{Mlen})(V_M+A_f V_{Mmax}) = A_f F_{Mlen}V_{Mmax}(1+A_f)
\label{}
\end{equation}
where $F_{M}$ and $V_M$ are the contractile element force and velocity, respectively, and the three constants are: $V_{Mmax}$, the maximum unloaded velocity (when $F_{M}=0$), $F_{Mlen}$, the maximum isometric force (when $V_M=0$), and $A_f$, a shape factor which specifies the concavity of the hyperbola.
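The interpretation of the constants can be verified symbolically. A short SymPy sketch checks the two endpoints of the hyperbola above (zero velocity and zero force):

```
import sympy as sp

FM, VM = sp.symbols('F_M V_M', real=True)
Af, Fmlen, Vmmax = sp.symbols('A_f F_Mlen V_Mmax', positive=True)
hyperbola = sp.Eq((FM + Af*Fmlen)*(VM + Af*Vmmax), Af*Fmlen*Vmmax*(1 + Af))

# at zero velocity the force equals the maximum isometric force F_Mlen
f0 = sp.solve(hyperbola.subs(VM, 0), FM)[0]
# at zero force the velocity equals the maximum unloaded velocity V_Mmax
v0 = sp.solve(hyperbola.subs(FM, 0), VM)[0]
print(sp.simplify(f0), sp.simplify(v0))
```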
Based on the equation above for the shortening phase and in Winters (1990, 1995) for the lengthening phase, Thelen2003Muscle employed the following force-velocity equation:
\begin{equation}
V_M = (0.25+0.75a)\,V_{Mmax}\dfrac{\bar{F}_M-a\bar{f}_{l,CE}}{b}
\label{}
\end{equation}
where
\begin{equation}
b = \left\{
\begin{array}{l l l}
a\bar{f}_{l,CE} + \bar{F}_M/A_f \quad & \text{if} \quad \bar{F}_M \leq a\bar{f}_{l,CE} & \text{(shortening)} \\
\\
\dfrac{(2+2/A_f)(a\bar{f}_{l,CE}\bar{f}_{Mlen} - \bar{F}_M)}{\bar{f}_{Mlen}-1} \quad & \text{if} \quad \bar{F}_M > a\bar{f}_{l,CE} & \text{(lengthening)}
\end{array} \right.
\label{}
\end{equation}
where $a$ is the activation level and $\bar{f}_{Mlen}$ is a constant for the maximum force generated at the lengthening phase (normalized by the maximum isometric force).
Thelen2003Muscle adopted $A_f=0.25$, $V_{Mmax}=10L_{Mopt}/s$, $\bar{f}_{Mlen}=1.4$ for young adults ($V_{Mmax}=8L_{Mopt}/s$ and $\bar{f}_{Mlen}=1.8$ for old adults). Note that the dependence of the force on the activation level and on the muscle length is already incorporated in the expression above.
McLean2003Muscle employed:
\begin{equation}
\bar{f}_{v,CE} = \left\{
\begin{array}{l l l}
\dfrac{\lambda(a)V_{Mmax} + V_M}{\lambda(a)V_{Mmax} - V_M/A_f} \quad & \text{if} \quad V_M \leq 0 & \text{(shortening)} \\
\\
\dfrac{\bar{f}_{Mlen}V_M + d_1}{V_M + d_1} \quad & \text{if} \quad 0 < V_M \leq \gamma d_1 & \text{(slow lengthening)} \\
\\
d_3 + d_2V_M \quad & \text{if} \quad V_M > \gamma d_1 & \text{(fast lengthening)}
\end{array} \right.
\label{}
\end{equation}
where
\begin{equation}
\begin{array}{l l}
\lambda(a) = 1-e^{-3.82a} + a\:e^{-3.82} \\
\\
d_1 = \dfrac{V_{Mmax}A_f(\bar{f}_{Mlen}-1)}{S(A_f+1)} \\
\\
d_2 = \dfrac{S(A_f+1)}{V_{Mmax}A_f(\gamma+1)^2} \\
\\
d_3 = \dfrac{(\bar{f}_{Mlen}-1)\gamma^2}{(\gamma+1)^2} + 1
\end{array}
\label{}
\end{equation}
where $\lambda(a)$ is a scaling factor to account for the influence of the activation level $a$ on the force-velocity relationship, $\bar{f}_{Mlen}$ is the asymptotic (maximum) value of $\bar{F}_M$, $S$ is a parameter to double the slope of the force-velocity curve at zero velocity, and $\gamma$ is a dimensionless parameter to ensure the transition between the hyperbolic and linear parts of the lengthening phase.
McLean2003Muscle adopted $A_f=0.25$, $V_{Mmax}=10L_{Mopt}/s$, $\bar{f}_{Mlen}=1.5$, $S=2.0$, and $\gamma=5.67$.
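The constants $d_1$, $d_2$, and $d_3$ are constructed so that the piecewise force-velocity expression is continuous across its branch boundaries. A quick numerical check with the parameter values above:

```
vmmax, af, fmlen, s, gammav = 10.0, 0.25, 1.5, 2.0, 5.67

d1 = vmmax*af*(fmlen - 1)/(s*(af + 1))
d2 = s*(af + 1)/(vmmax*af*(gammav + 1)**2)
d3 = (fmlen - 1)*gammav**2/(gammav + 1)**2 + 1

# at vm = gammav*d1 the slow and fast lengthening branches must match
vt = gammav*d1
slow = (fmlen*vt + d1)/(vt + d1)
fast = d3 + d2*vt
print(abs(slow - fast) < 1e-12)  # True: the branches are continuous
```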
Let's write these expressions as Python code and visualize them:
```
def vmfce_T03(fm, flce=1, lmopt=1, a=1, vmmax=1, fmlen=1.4, af=0.25):
"""Thelen (2003) velocity of the force-velocity relationship as function of CE force.
Parameters
----------
fm : float
normalized muscle force
flce : float, optional (default=1)
normalized muscle force due to the force-length relationship
lmopt : float, optional (default=1)
optimal muscle fiber length
a : float, optional (default=1)
muscle activation level
vmmax : float, optional (default=1)
maximum muscle velocity for concentric activation
fmlen : float, optional (default=1.4)
normalized maximum force generated at the lengthening phase
af : float, optional (default=0.25)
shape factor
Returns
-------
vm : float
velocity of the muscle
"""
vmmax = vmmax*lmopt
if fm <= a*flce: # isometric and concentric activation
b = a*flce + fm/af
else: # eccentric activation
b = (2 + 2/af)*(a*flce*fmlen - fm)/(fmlen - 1)
vm = (0.25 + 0.75*a)*vmmax*(fm - a*flce)/b
return vm
```
Let's find an expression for contractile element force as function of muscle velocity given the equation above, i.e. we want to invert the equation. For that, let's use [Sympy](http://www.sympy.org/):
```
def fvce_T03_symb():
# Thelen (2003) velocity of the force-velocity relationship as function of CE force
from sympy import symbols, solve, collect, Eq
a, flce, fm, af, fmlen, vmmax = symbols('a, flce, fm, af, fmlen, vmmax', positive=True)
vm = symbols('vm', real=True)
b = a*flce + fm/af
    vm_eq = Eq(vm, (0.25 + 0.75*a)*vmmax*(fm - a*flce)/b)
sol = solve(vm_eq, fm)
print('fm <= a*flce:\n', collect(sol[0], vmmax),'\n')
b = (2 + 2/af)*(a*flce*fmlen - fm)/(fmlen - 1)
    vm_eq = Eq(vm, (0.25 + 0.75*a)*vmmax*(fm - a*flce)/b)
sol = solve(vm_eq, fm)
print('fm > a*flce:\n', collect(sol[0], (vmmax*af, fmlen, vm)))
fvce_T03_symb()
```
And here is the function we need to compute contractile element force as function of muscle velocity:
```
def fvce_T03(vm=0, flce=1, lmopt=1, a=1, vmmax=1, fmlen=1.4, af=0.25):
"""Thelen (2003) force of the contractile element as function of muscle velocity.
Parameters
----------
vm : float, optional (default=0)
muscle velocity
flce : float, optional (default=1)
normalized muscle force due to the force-length relationship
lmopt : float, optional (default=1)
optimal muscle fiber length
a : float, optional (default=1)
muscle activation level
vmmax : float, optional (default=1)
maximum muscle velocity for concentric activation
fmlen : float, optional (default=1.4)
normalized maximum force generated at the lengthening phase
af : float, optional (default=0.25)
shape factor
Returns
-------
fvce : float
normalized force of the muscle contractile element
"""
vmmax = vmmax*lmopt
if vm <= 0: # isometric and concentric activation
fvce = af*a*flce*(4*vm + vmmax*(3*a + 1))/(-4*vm + vmmax*af*(3*a + 1))
else: # eccentric activation
fvce = a*flce*(af*vmmax*(3*a*fmlen - 3*a + fmlen - 1) + 8*vm*fmlen*(af + 1))/\
(af*vmmax*(3*a*fmlen - 3*a + fmlen - 1) + 8*vm*(af + 1))
return fvce
```
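Before moving on, we can sanity-check the inversion with a round trip. The self-contained sketch below restates both Thelen (2003) expressions with their default parameters and verifies that force-to-velocity followed by velocity-to-force recovers the original force:

```
def vmfce(fm, flce=1, a=1, vmmax=1, fmlen=1.4, af=0.25):
    # velocity as a function of CE force (Thelen, 2003)
    if fm <= a*flce:  # isometric and concentric activation
        b = a*flce + fm/af
    else:             # eccentric activation
        b = (2 + 2/af)*(a*flce*fmlen - fm)/(fmlen - 1)
    return (0.25 + 0.75*a)*vmmax*(fm - a*flce)/b

def fvce(vm, flce=1, a=1, vmmax=1, fmlen=1.4, af=0.25):
    # CE force as a function of velocity (the inverted expression)
    if vm <= 0:
        return af*a*flce*(4*vm + vmmax*(3*a + 1))/(-4*vm + vmmax*af*(3*a + 1))
    return (a*flce*(af*vmmax*(3*a*fmlen - 3*a + fmlen - 1) + 8*vm*fmlen*(af + 1))
            /(af*vmmax*(3*a*fmlen - 3*a + fmlen - 1) + 8*vm*(af + 1)))

# shortening (fm < 1), isometric (fm = 1), and lengthening (fm > 1) cases
for fm in (0.2, 0.8, 1.0, 1.2):
    assert abs(fvce(vmfce(fm)) - fm) < 1e-12
print('round trip OK')
```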
Here is the Python function for the McLean (2003) model:
```
def fvce_M03(vm=0, lmopt=1, a=1, vmmax=1, fmlen=1.5, af=0.25, s=2, gammav=5.67):
"""McLean (2003) contractile element force as function of muscle velocity.
Parameters
----------
vm : float, optional (default=0)
muscle velocity
lmopt : float, optional (default=1)
optimal muscle fiber length
a : float, optional (default=1)
muscle activation level
vmmax : float, optional (default=1)
maximum muscle velocity for concentric activation
fmlen : float, optional (default=1.5)
normalized maximum force generated at the lengthening phase
af : float, optional (default=0.25)
shape factor
s : float, optional (default=2)
to double the slope of the force-velocity curve at zero velocity
gammav : float, optional (default=5.67)
to ensure the smooth transition of the lengthening phase
Returns
-------
fvce : float
normalized force of the muscle contractile element
"""
vmmax = vmmax*lmopt
d1 = vmmax*af*(fmlen - 1)/(s*(af + 1))
d2 = s*(af + 1)/(vmmax*af*(gammav + 1)**2)
d3 = (fmlen - 1)*gammav**2/(gammav + 1)**2 + 1
lbd = 1 - np.exp(-3.82*a) + a*np.exp(-3.82)
if vm <= 0: # isometric and concentric activation
fvce = (lbd*vmmax + vm)/(lbd*vmmax - vm/af)
elif 0 < vm <= gammav*d1: # slow lengthening
fvce = (fmlen*vm + d1)/(vm + d1)
elif vm > gammav*d1: # fast lengthening
fvce = d3 + d2*vm
return fvce
```
We can invert this equation to get an expression for muscle velocity as function of the contractile element force:
```
def vmfce_M03(fvce=1, lmopt=1, a=1, vmmax=1, fmlen=1.5, af=0.25, s=2, gammav=5.67):
"""McLean (2003) contractile element velocity as function of CE force.
Parameters
----------
fvce : float, optional (default=1)
normalized muscle force
lmopt : float, optional (default=1)
optimal muscle fiber length
a : float, optional (default=1)
muscle activation level
vmmax : float, optional (default=1)
maximum muscle velocity for concentric activation
fmlen : float, optional (default=1.5)
normalized maximum force generated at the lengthening phase
af : float, optional (default=0.25)
shape factor
s : float, optional (default=2)
to double the slope of the force-velocity curve at zero velocity
gammav : float, optional (default=5.67)
to ensure the smooth transition of the lengthening phase
Returns
-------
    vm : float
        muscle velocity
"""
vmmax = vmmax*lmopt
d1 = vmmax*af*(fmlen - 1)/(s*(af + 1))
d2 = s*(af + 1)/(vmmax*af*(gammav + 1)**2)
d3 = (fmlen - 1)*gammav**2/(gammav + 1)**2 + 1
lbd = 1 - np.exp(-3.82*a) + a*np.exp(-3.82)
if 0 <= fvce <= 1: # isometric and concentric activation
        vm = lbd*vmmax*(fvce - 1)/(1 + fvce/af)  # negative (shortening), consistent with fvce_M03
elif 1 < fvce <= gammav*d1*d2 + d3: # slow lengthening
vm = d1*(fvce - 1)/(fmlen - fvce)
elif fvce > gammav*d1*d2 + d3: # fast lengthening
vm = (fvce - d3)/d2
return vm
```
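A round-trip check is a good way to validate this inversion as well. The sketch below is self-contained, restating both McLean (2003) expressions compactly; note that the shortening branch of the inverse must return a negative velocity to match the convention used in the force-velocity function:

```
import numpy as np

def fvce(vm, a=1, vmmax=1, fmlen=1.5, af=0.25, s=2, gammav=5.67):
    # McLean (2003) CE force as a function of velocity
    d1 = vmmax*af*(fmlen - 1)/(s*(af + 1))
    d2 = s*(af + 1)/(vmmax*af*(gammav + 1)**2)
    d3 = (fmlen - 1)*gammav**2/(gammav + 1)**2 + 1
    lbd = 1 - np.exp(-3.82*a) + a*np.exp(-3.82)
    if vm <= 0:              # isometric and concentric activation
        return (lbd*vmmax + vm)/(lbd*vmmax - vm/af)
    elif vm <= gammav*d1:    # slow lengthening
        return (fmlen*vm + d1)/(vm + d1)
    return d3 + d2*vm        # fast lengthening

def vmfce(fvce, a=1, vmmax=1, fmlen=1.5, af=0.25, s=2, gammav=5.67):
    # inverted expression; the shortening branch returns vm <= 0
    d1 = vmmax*af*(fmlen - 1)/(s*(af + 1))
    d2 = s*(af + 1)/(vmmax*af*(gammav + 1)**2)
    d3 = (fmlen - 1)*gammav**2/(gammav + 1)**2 + 1
    lbd = 1 - np.exp(-3.82*a) + a*np.exp(-3.82)
    if fvce <= 1:
        return lbd*vmmax*(fvce - 1)/(1 + fvce/af)
    elif fvce <= gammav*d1*d2 + d3:
        return d1*(fvce - 1)/(fmlen - fvce)
    return (fvce - d3)/d2

# shortening, isometric, slow lengthening, and fast lengthening cases
for f in (0.2, 0.9, 1.0, 1.3, 1.49):
    assert abs(fvce(vmfce(f)) - f) < 1e-9
print('round trip OK')
```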
Let's use these functions to compute muscle force as a function of the muscle velocity considering two levels of activation:
```
vm1_T03 = np.linspace(-1, 1, 201)
fce1_T03 = np.zeros(vm1_T03.size)
vm2_T03 = np.linspace(-.63, .63, 201)
fce2_T03 = np.zeros(vm2_T03.size)
for i in range(len(vm1_T03)):
fce1_T03[i] = fvce_T03(vm=vm1_T03[i])
fce2_T03[i] = fvce_T03(vm=vm2_T03[i], a=0.5)
vm1_M03 = np.linspace(-1, 1, 201)
fce1_M03 = np.zeros(vm1_M03.size)
vm2_M03 = np.linspace(-.63, .63, 201)
fce2_M03 = np.zeros(vm2_M03.size)
for i in range(len(vm1_M03)):
fce1_M03[i] = fvce_M03(vm=vm1_M03[i])
fce2_M03[i] = fvce_M03(vm=vm2_M03[i], a=0.5)
fce2_M03 = fce2_M03*0.5
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True, figsize=(10, 4))
ax1.plot(vm1_T03, fce1_T03, 'b', label='T03')
ax1.plot(vm1_M03, fce1_M03, 'g', label='M03')
ax1.set_ylabel('Normalized force')
ax1.set_xlabel('Normalized velocity')
ax1.text(-.9, 1.5, 'Activation = 1.0')
ax2.plot(vm2_T03, fce2_T03, 'b', label='T03')
ax2.plot(vm2_M03, fce2_M03, 'g', label='M03')
ax2.text(-.9, 1.5, 'Activation = 0.5')
ax2.set_xlabel('Normalized velocity')
ax2.legend(loc='best')
plt.suptitle('Force-velocity relationship of the contractile element', y=1.05, fontsize=16)
plt.tight_layout()
plt.show()
```
The models give identical results for the shortening phase when $a=1$ and similar results for the lengthening phase when the same parameters are used.
#### Muscle power
The muscle power is the product between force and velocity:
```
P_T03 = np.abs(fce1_T03*vm1_T03)
```
Let's visualize the muscle power only for the concentric phase (muscle shortening):
```
plt.figure(figsize=(7, 4))
plt.plot(vm1_T03[:101], fce1_T03[:101], 'b', label='Force')
plt.xlabel('Normalized velocity')
plt.ylabel('Normalized force', color='b')
#plt.legend(loc='upper left')
plt.gca().invert_xaxis()
plt.gca().twinx()
plt.plot(vm1_T03[:101], P_T03[:101], 'g', label='Power')
plt.ylabel('Normalized power', color='g')
#plt.legend(loc='upper right')
plt.suptitle('Muscle power', y=1, fontsize=16)
plt.show()
```
#### Force-length-velocity relationship
Let's visualize the effects of the length and velocity on the total (active plus passive) muscle force:
```
lms = np.linspace(0, 1.65, 101)
vms = np.linspace(-1, .76, 101)
fce_T03 = np.zeros(lms.size)
fpe_T03 = np.zeros(lms.size)
fm_T03 = np.zeros((lms.size, vms.size))
for i in range(len(lms)):
fce_T03[i] = flce_T03(lm=lms[i])
fpe_T03[i] = fpelm_T03(lm=lms[i])
for j in range(len(vms)):
fm_T03[j, i] = fvce_T03(vm=vms[j], flce=fce_T03[i]) + fpe_T03[i]
lms = np.linspace(0, 1.65, 101)
vms = np.linspace(-1, .76, 101)
fce_M03 = np.zeros(lms.size)
fpe_M03 = np.zeros(lms.size)
fm_M03 = np.zeros((lms.size, vms.size))
for i in range(len(lms)):
fce_M03[i] = flce_M03(lm=lms[i])
fpe_M03[i] = fpelm_M03(lm=lms[i])
for j in range(len(vms)):
fm_M03[j, i] = fvce_M03(vm=vms[j])*fce_M03[i] + fpe_M03[i]
from mpl_toolkits.mplot3d import Axes3D
def flv3dplot(ax, lm, vm, fm, model):
# 3d plot
lm2, vm2 = np.meshgrid(lm, vm)
ax.plot_surface(lm2, vm2, fm, rstride=2, cstride=2, cmap=plt.cm.coolwarm,
linewidth=.5, antialiased=True)
ax.plot(np.ones(vms.size), vms, fm[:, np.argmax(lm>=1)], 'w', linewidth=4)
ax.plot(lm, np.zeros(lm.size), fm[np.argmax(vm>=0),:], 'w', linewidth=4)
ax.set_xlim3d(lm[0], lm[-1])
ax.set_ylim3d(vm[0], vm[-1])
#ax.set_zlim3d(np.min(fm), np.max(fm))
ax.set_zlim3d(0, 2)
ax.set_xlabel('Normalized length')
ax.set_ylabel('Normalized velocity')
ax.set_zlabel('Normalized force')
ax.view_init(20, 225)
ax.locator_params(nbins=6)
ax.text(-0.4, 0.7, 2.5, model, fontsize=14)
fig = plt.figure(figsize=(12, 6))
ax1 = fig.add_subplot(1, 2, 1, projection='3d')
flv3dplot(ax1, lms, vms, fm_T03, 'T03')
ax2 = fig.add_subplot(1, 2, 2, projection='3d')
flv3dplot(ax2, lms, vms, fm_M03, 'M03')
plt.suptitle('Force-length-velocity relationship', y=1, fontsize=16)
plt.tight_layout()
plt.show()
```
### Activation dynamics
Activation dynamics represents the fact that a muscle cannot activate or deactivate instantaneously because of the electrical and chemical processes involved; it is usually integrated with a Hill-type model. In its simplest form, activation dynamics is represented as a first-order ODE.
Thelen2003Muscle employed the following first-order [ordinary differential equation (ODE)](http://en.wikipedia.org/wiki/Ordinary_differential_equation):
\begin{equation}
\frac{\mathrm{d}a}{\mathrm{d}t} = \dfrac{u-a}{\tau(a, u)}
\label{}
\end{equation}
where $u$ and $a$ are the muscle excitation and activation, respectively (both are functions of time), a lower bound is applied to both excitation and activation, and $\tau$ is a variable time constant representing the activation and deactivation times, given by:
\begin{equation}
\tau(a, u) = \left\{
\begin{array}{l l}
t_{act}(0.5+1.5a) \quad & \text{if} \quad u > a\\
\dfrac{t_{deact}}{(0.5+1.5a)} \quad & \text{if} \quad u \leq a
\end{array} \right.
\label{}
\end{equation}
Thelen2003Muscle adopted activation, $t_{act}$, and deactivation, $t_{deact}$, time constants for young adults equal to 15 and 50 ms, respectively (for old adults, Thelen2003Muscle adopted 15 and 60 ms, respectively).
McLean2003Muscle expressed the activation dynamics as the following first-order ODE:
\begin{equation}
\dfrac{\mathrm{d}a}{\mathrm{d}t} = (u - a)(c_1u + c_2)
\label{}
\end{equation}
where $c_1 + c_2$ is the activation rate constant (when $u = 1$), the inverse of $t_{act}$, and $c_2$ is the deactivation rate constant (when $u = 0$), the inverse of $t_{deact}$; a lower bound is applied to both excitation and activation.
McLean2003Muscle adopted $c_1=3.3 s^{-1}$ and $c_2=16.7 s^{-1}$, resulting in time constants of 50 ms and 60 ms for activation and deactivation, respectively.
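The correspondence between the rate constants and the time constants can be checked with simple arithmetic, using the values above:

```
c1, c2 = 3.3, 16.7  # rate constants adopted by McLean (2003), in 1/s

t_act = 1/(c1 + c2)  # activation time constant, when u = 1
t_deact = 1/c2       # deactivation time constant, when u = 0
print(round(t_act, 3), round(t_deact, 3))  # 0.05 0.06
```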
In Python, the numeric first-order ODE for the activation dynamics presented in Thelen2003Muscle can be expressed as:
```
def actdyn_T03(t, a, t_act, t_deact, u_max=1, u_min=0.01, t0=0, t1=1):
"""Thelen (2003) activation dynamics, the derivative of `a` at `t`.
Parameters
----------
t : float
time instant [s]
a : float (0 <= a <= 1)
muscle activation
t_act : float
activation time constant [s]
t_deact : float
deactivation time constant [s]
u_max : float (0 < u_max <= 1), optional (default=1)
maximum value for muscle excitation
u_min : float (0 < u_min < 1), optional (default=0.01)
minimum value for muscle excitation
t0 : float [s], optional (default=0)
initial time instant for muscle excitation equals to u_max
t1 : float [s], optional (default=1)
final time instant for muscle excitation equals to u_max
Returns
-------
adot : float
derivative of `a` at `t`
"""
u = excitation(t, u_max, u_min)
if u > a:
adot = (u - a)/(t_act*(0.5 + 1.5*a))
else:
adot = (u - a)/(t_deact/(0.5 + 1.5*a))
return adot
```
In Python, the numeric first-order ODE for the activation dynamics presented in McLean2003Muscle can be expressed as:
```
def actdyn_M03(t, a, t_act, t_deact, u_max=1, u_min=0.01, t0=0, t1=1):
"""McLean (2003) activation dynamics, the derivative of `a` at `t`.
Parameters
----------
t : float
time instant [s]
a : float (0 <= a <= 1)
muscle activation
t_act : float
activation time constant [s]
t_deact : float
deactivation time constant [s]
u_max : float (0 < u_max <= 1), optional (default=1)
maximum value for muscle excitation
u_min : float (0 < u_min < 1), optional (default=0.01)
minimum value for muscle excitation
t0 : float [s], optional (default=0)
initial time instant for muscle excitation equals to u_max
t1 : float [s], optional (default=1)
final time instant for muscle excitation equals to u_max
Returns
-------
adot : float
derivative of `a` at `t`
"""
c2 = 1/t_deact
c1 = 1/t_act - c2
u = excitation(t, u_max, u_min)
adot = (u - a)*(c1*u + c2)
return adot
```
Let's simulate the activation signal for a rectangular function as excitation signal:
```
def excitation(t, u_max=1, u_min=0.01, t0=0.1, t1=0.4):
"""Excitation signal, a square wave.
Parameters
----------
t : float
time instant [s]
u_max : float (0 < u_max <= 1), optional (default=1)
maximum value for muscle excitation
u_min : float (0 < u_min < 1), optional (default=0.01)
minimum value for muscle excitation
t0 : float [s], optional (default=0.1)
initial time instant for muscle excitation equals to u_max
t1 : float [s], optional (default=0.4)
final time instant for muscle excitation equals to u_max
Returns
-------
u : float (0 < u <= 1)
excitation signal
"""
u = u_min
if t >= t0 and t <= t1:
u = u_max
return u
```
We will solve the equation for $a$ by numerical integration using the [`scipy.integrate.ode`](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.integrate.ode.html) class of numeric integrators, particularly `dopri5`, an explicit Runge-Kutta method of order (4)5 due to Dormand and Prince (a.k.a. ode45 in MATLAB):
```
import warnings
from scipy.integrate import ode
def actdyn_ode45(fun, t0=0, t1=1, a0=0, t_act=0.015, t_deact=0.050, u_max=1, u_min=0.01):
# Runge-Kutta (4)5 due to Dormand & Prince with variable stepsize ODE solver
f = ode(fun).set_integrator('dopri5', nsteps=1, max_step=0.01, atol=1e-8)
f.set_initial_value(a0, t0).set_f_params(t_act, t_deact, u_max, u_min)
# suppress Fortran warning
warnings.filterwarnings("ignore", category=UserWarning)
data = []
while f.t < t1:
f.integrate(t1, step=True)
data.append([f.t, excitation(f.t, u_max, u_min), np.max([f.y, u_min])])
warnings.resetwarnings()
data = np.array(data)
return data
```
Solving the problem for two different maximum excitation levels:
```
# using the values for t_act and t_deact from Thelen2003Muscle for both models
act1_T03 = actdyn_ode45(fun=actdyn_T03, u_max=1.0)
act2_T03 = actdyn_ode45(fun=actdyn_T03, u_max=0.5)
act1_M03 = actdyn_ode45(fun=actdyn_M03, u_max=1.0)
act2_M03 = actdyn_ode45(fun=actdyn_M03, u_max=0.5)
# using the values for t_act and t_deact from McLean2003Muscle
act3_M03 = actdyn_ode45(fun=actdyn_M03, u_max=1.0, t_act=0.050, t_deact=0.060)
act4_M03 = actdyn_ode45(fun=actdyn_M03, u_max=0.5, t_act=0.050, t_deact=0.060)
```
And the results:
```
fig, axs = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True, figsize=(10, 6))
axs[0, 0].plot(act1_T03[:, 0], act1_T03[:, 1], 'r:', label='Excitation')
axs[0, 0].plot(act1_T03[:, 0], act1_T03[:, 2], 'b', label='T03 [15, 50] ms')
axs[0, 0].plot(act1_M03[:, 0], act1_M03[:, 2], 'g', label='M03 [15, 50] ms')
axs[0, 0].set_ylabel('Level')
axs[0, 1].plot(act2_T03[:, 0], act2_T03[:, 1], 'r:', label='Excitation')
axs[0, 1].plot(act2_T03[:, 0], act2_T03[:, 2], 'b', label='T03 [15, 50] ms')
axs[0, 1].plot(act2_M03[:, 0], act2_M03[:, 2], 'g', label='M03 [15, 50] ms')
axs[1, 1].set_xlabel('Time (s)')
axs[0, 1].legend()
axs[1, 0].plot(act1_T03[:, 0], act1_T03[:, 1], 'r:', label='Excitation')
axs[1, 0].plot(act1_T03[:, 0], act1_T03[:, 2], 'b', label='T03 [15, 50] ms')
axs[1, 0].plot(act3_M03[:, 0], act3_M03[:, 2], 'g', label='M03 [50, 60] ms')
axs[1, 0].set_xlabel('Time (s)')
axs[1, 0].set_ylabel('Level')
axs[1, 1].plot(act2_T03[:, 0], act2_T03[:, 1], 'r:', label='Excitation')
axs[1, 1].plot(act2_T03[:, 0], act2_T03[:, 2], 'b', label='T03 [15, 50] ms')
axs[1, 1].plot(act4_M03[:, 0], act4_M03[:, 2], 'g', label='M03 [50, 60] ms')
axs[1, 1].set_xlabel('Time (s)')
axs[1, 1].legend()
plt.suptitle('Activation dynamics', y=1, fontsize=16)
plt.tight_layout()
plt.show()
```
Similar results when the same parameters are used (first row), but different behavior when the typical values of each study are compared (second row).
### Muscle modeling parameters
We have seen two types of parameters in muscle modeling: parameters related to the mathematical functions used to model the muscle and tendon behavior, and parameters related to the properties of specific muscles and tendons (e.g., maximum isometric force, optimal fiber length, pennation angle, and tendon slack length). In general, the first type of parameter is independent of the muscle-tendon unit being modeled (but dependent on the model!), while the second type changes for each muscle-tendon unit (for instance, see http://isbweb.org/data/delp/ for some of these parameters).
### Limitations of Hill-type muscle models
As with any model, Hill-type muscle models are a simplification of reality. For instance, a typical Hill-type muscle model (as implemented here) does not capture time-dependent muscle behavior, such as force depression after quick muscle shortening, force enhancement after quick muscle lengthening, viscoelastic properties (creep and relaxation), and muscle fatigue (Zatsiorsky and Prilutsky, 2012). There are enhanced models that capture these properties, but their added complexity does not seem worthwhile for the most common applications of human movement simulation.
## Exercises
1. The results presented in this text depend on the parameters used in the model. These parameters may vary because of different properties of the muscle and tendon but also because different mathematical functions may be used.
a. Change some of the parameters and reproduce the plots shown here and discuss these results (e.g., use the parameters for different muscles from OpenSim or the data from [http://isbweb.org/data/delp/](http://isbweb.org/data/delp/)).
b. Select another reference (e.g., Anderson, 2007) about muscle modeling that uses different mathematical functions and repeat the previous item.
## References
- Anderson C (2007) [Equations for Modeling the Forces Generated by Muscles and Tendons](https://docs.google.com/viewer?url=https%3A%2F%2Fsimtk.org%2Fdocman%2Fview.php%2F124%2F604%2FMuscleAndTendonForcesClayAnderson20070521.doc) ([PDF](https://drive.google.com/open?id=0BxbW72zV7WmUVUh0MldGOGZ6aHc&authuser=0)). BioE215 Physics-based Simulation of Biological Structures.
- Erdemir A, McLean S, Herzog W, van den Bogert AJ (2007) [Model-based estimation of muscle forces exerted during movements](http://www.ncbi.nlm.nih.gov/pubmed/17070969). Clinical Biomechanics, 22, 131–154.
- He J, Levine WS, Loeb GE (1991) [Feedback gains for correcting small perturbations to standing posture](https://drive.google.com/open?id=0BxbW72zV7WmUekRXY09GSEhUVlE&authuser=0). IEEE Transactions on Automatic Control, 36, 322–332.
- McLean SG, Su A, van den Bogert AJ (2003) [Development and validation of a 3-D model to predict knee joint loading during dynamic movement](http://www.ncbi.nlm.nih.gov/pubmed/14986412). Journal of Biomechanical Engineering, 125, 864-74.
- McMahon TA (1984) [Muscles, Reflexes, and Locomotion](https://archive.org/details/McMahonTAMusclesReflexesAndLocomotionPrincetonUniversityPress1984). Princeton University Press, Princeton, New Jersey.
- Millard M, Uchida T, Seth A, Delp SL (2013) [Flexing computational muscle: modeling and simulation of musculotendon dynamics](http://www.ncbi.nlm.nih.gov/pubmed/23445050). Journal of Biomechanical Engineering, 135, 021005.
- Nigg BM and Herzog W (2006) [Biomechanics of the Musculo-skeletal System](https://books.google.com.br/books?id=hOIeAQAAIAAJ&dq=editions:ISBN0470017678). 3rd Edition. Wiley.
- Robertson G, Caldwell G, Hamill J, Kamen G (2013) [Research Methods in Biomechanics](http://books.google.com.br/books?id=gRn8AAAAQBAJ). 2nd Edition. Human Kinetics.
- Thelen DG (2003) [Adjustment of muscle mechanics model parameters to simulate dynamic contractions in older adults](http://homepages.cae.wisc.edu/~thelen/pubs/jbme03.pdf). Journal of Biomechanical Engineering, 125(1):70–77.
- Tsianos GA and Loeb GE (2013) [Muscle Physiology and Modeling](http://www.scholarpedia.org/article/Muscle_Physiology_and_Modeling). Scholarpedia, 8(10):12388.
- Winters JM (1990) [Hill-based muscle models: a systems engineering perspective](http://link.springer.com/chapter/10.1007%2F978-1-4613-9030-5_5). In [Multiple Muscle Systems: Biomechanics and Movement Organization](http://link.springer.com/book/10.1007/978-1-4613-9030-5), edited by JM Winters and SL Woo, Springer-Verlag, New York.
- Winters JM (1995) [An Improved Muscle-Reflex Actuator for Use in Large-Scale Neuromusculoskeletal Models](http://www.ncbi.nlm.nih.gov/pubmed/7486344). Annals of Biomedical Engineering, 23, 359–374.
- Zajac FE (1989) [Muscle and tendon: properties, models, scaling and application to biomechanics and motor control](http://www.ncbi.nlm.nih.gov/pubmed/2676342). Critical Reviews in Biomedical Engineering 17:359-411.
- Zatsiorsky V and Prilutsky B (2012) [Biomechanics of Skeletal Muscles](http://books.google.com.br/books?id=THXfHT8L5MEC). Human Kinetics.
```
!wget https://resources.lendingclub.com/LoanStats_2019Q1.csv.zip
!wget https://resources.lendingclub.com/LoanStats_2019Q2.csv.zip
!wget https://resources.lendingclub.com/LoanStats_2019Q3.csv.zip
!wget https://resources.lendingclub.com/LoanStats_2019Q4.csv.zip
!wget https://resources.lendingclub.com/LoanStats_2020Q1.csv.zip
import numpy as np
import pandas as pd
from pathlib import Path
from collections import Counter
from sklearn.model_selection import train_test_split
columns = [
"loan_amnt", "int_rate", "installment", "home_ownership", "annual_inc",
"verification_status", "pymnt_plan", "dti", "delinq_2yrs",
"inq_last_6mths", "open_acc", "pub_rec", "revol_bal", "total_acc",
"initial_list_status", "out_prncp", "out_prncp_inv", "total_pymnt",
"total_pymnt_inv", "total_rec_prncp", "total_rec_int",
"total_rec_late_fee", "recoveries", "collection_recovery_fee",
"last_pymnt_amnt", "collections_12_mths_ex_med", "policy_code",
"application_type", "acc_now_delinq", "tot_coll_amt", "tot_cur_bal",
"open_acc_6m", "open_act_il", "open_il_12m", "open_il_24m",
"mths_since_rcnt_il", "total_bal_il", "il_util", "open_rv_12m",
"open_rv_24m", "max_bal_bc", "all_util", "total_rev_hi_lim", "inq_fi",
"total_cu_tl", "inq_last_12m", "acc_open_past_24mths", "avg_cur_bal",
"bc_open_to_buy", "bc_util", "chargeoff_within_12_mths", "delinq_amnt",
"mo_sin_old_il_acct", "mo_sin_old_rev_tl_op", "mo_sin_rcnt_rev_tl_op",
"mo_sin_rcnt_tl", "mort_acc", "mths_since_recent_bc",
"mths_since_recent_inq", "num_accts_ever_120_pd", "num_actv_bc_tl",
"num_actv_rev_tl", "num_bc_sats", "num_bc_tl", "num_il_tl",
"num_op_rev_tl", "num_rev_accts", "num_rev_tl_bal_gt_0", "num_sats",
"num_tl_120dpd_2m", "num_tl_30dpd", "num_tl_90g_dpd_24m",
"num_tl_op_past_12m", "pct_tl_nvr_dlq", "percent_bc_gt_75",
"pub_rec_bankruptcies", "tax_liens", "tot_hi_cred_lim",
"total_bal_ex_mort", "total_bc_limit", "total_il_high_credit_limit",
"hardship_flag", "debt_settlement_flag",
"loan_status"
]
target = "loan_status"
# Load the data
df1 = pd.read_csv(Path('../Resources/LoanStats_2019Q1.csv.zip'), skiprows=1)[:-2]
df2 = pd.read_csv(Path('../Resources/LoanStats_2019Q2.csv.zip'), skiprows=1)[:-2]
df3 = pd.read_csv(Path('../Resources/LoanStats_2019Q3.csv.zip'), skiprows=1)[:-2]
df4 = pd.read_csv(Path('../Resources/LoanStats_2019Q4.csv.zip'), skiprows=1)[:-2]
df = pd.concat([df1, df2, df3, df4]).loc[:, columns].copy()
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
# Remove the `Issued` loan status
issued_mask = df['loan_status'] != 'Issued'
df = df.loc[issued_mask]
# convert interest rate to numerical
df['int_rate'] = df['int_rate'].str.replace('%', '')
df['int_rate'] = df['int_rate'].astype('float') / 100
# Convert the target column values to low_risk and high_risk based on their values
x = {'Current': 'low_risk'}
df = df.replace(x)
x = dict.fromkeys(['Late (31-120 days)', 'Late (16-30 days)', 'Default', 'In Grace Period'], 'high_risk')
df = df.replace(x)
low_risk_rows = df[df[target] == 'low_risk']
high_risk_rows = df[df[target] == 'high_risk']
#df = pd.concat([low_risk_rows, high_risk_rows.sample(n=len(low_risk_rows), replace=True)])
df = pd.concat([low_risk_rows.sample(n=len(high_risk_rows), random_state=42), high_risk_rows])
df = df.reset_index(drop=True)
df = df.rename({target:'target'}, axis="columns")
df
df.to_csv('2019loans.csv', index=False)
# Load the data
validate_df = pd.read_csv(Path('../Resources/LoanStats_2020Q1.csv.zip'), skiprows=1)[:-2]
validate_df = validate_df.loc[:, columns].copy()
# Drop the null columns where all values are null
validate_df = validate_df.dropna(axis='columns', how='all')
# Drop the null rows
validate_df = validate_df.dropna()
# Remove the `Issued` loan status
issued_mask = validate_df[target] != 'Issued'
validate_df = validate_df.loc[issued_mask]
# convert interest rate to numerical
validate_df['int_rate'] = validate_df['int_rate'].str.replace('%', '')
validate_df['int_rate'] = validate_df['int_rate'].astype('float') / 100
# Convert the target column values to low_risk and high_risk based on their values
x = dict.fromkeys(['Current', 'Fully Paid'], 'low_risk')
validate_df = validate_df.replace(x)
x = dict.fromkeys(['Late (31-120 days)', 'Late (16-30 days)', 'Default', 'In Grace Period', 'Charged Off'], 'high_risk')
validate_df = validate_df.replace(x)
low_risk_rows = validate_df[validate_df[target] == 'low_risk']
high_risk_rows = validate_df[validate_df[target] == 'high_risk']
validate_df = pd.concat([low_risk_rows.sample(n=len(high_risk_rows), random_state=37), high_risk_rows])
validate_df = validate_df.reset_index(drop=True)
validate_df = validate_df.rename({target:'target'}, axis="columns")
validate_df
validate_df.to_csv('2020Q1loans.csv', index=False)
```
##### Internal Document
# DAND Project WeRateDogs: Wrangling report
WeRateDogs is a Twitter account that rates people's dogs with a humorous comment about the dog. We wrangle the WeRateDogs Tweets. We provide a cleaned dataset for further analysis.
We combine three datasets:
1. WeRateDogs Twitter archive download by WeRateDogs.
2. Data downloaded using the Twitter API.
3. Predictions of the pictures' content.
Concerning 1: WeRateDogs downloaded their Twitter archive and sent it to Udacity. The format is CSV. Some enhancement of the data is included: the dogs' names and dog stages are extracted from the Tweet texts. Dog stages are dog categories coined by WeRateDogs.
Concerning 2: Using the Twitter API, all information from WeRateDogs can be downloaded. The exceptions are the enhancements mentioned under (1) and the predictions of the pictures.
Concerning 3: The images in the WeRateDogs Twitter archive were run through a neural network to classify breeds of dogs. The data is provided as a link to a TSV file.
Much Tweet information can be obtained through both (1) and (2). Because the ETL steps obtaining the data directly from the source are more transparent, we prefer to collect the data from the Twitter API. We access the API through the Tweepy library.
Our approach is to collect all the data which is available via the API. We select the relevant columns later. Thus we can easily expand the analysis to data not considered before.
Our goal is to produce a high-quality dataset. We prefer a complete dataset over collecting as much data as possible. This means we combine the information from all sources and drop observations with missing data. Most notably, the predictions in (3) are only available up to August 2017. WeRateDogs started in November 2015. What we accept are missing dog names and dog stages.
Based on the Tweet IDs in (1) we collect the data from the API. We do not filter the IDs we request over the API based on the date given in (1). Instead, we filter the date directly at the source: for each Tweet ID data is collected using the Twitter API. The data returned is only accepted when the tweet is from before 02.08.2017. Also, retweets are filtered here.
The Twitter API run takes about 30 minutes to check the 2,356 IDs in (1). Of those IDs, 2,325 are considered relevant. In addition to the acceptance criteria mentioned, a small number of Tweets had been deleted from Twitter as of 12.05.2019. Unexpectedly, the Twitter output contains a small two-digit number of duplicates; for each duplicate, one instance is dropped. Because of the runtime of the data collection, the data is stored on disk.
Data completeness and tidiness are reached by combining all three datasets, dropping columns which are duplicated or not relevant for the analysis, and melting the dog stages. From this, we end up with 2,065 observations: the number of lines in (3) minus the Tweets no longer available.
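The melting of the dog stages can be sketched as follows. This is a toy illustration, not the project's actual code; the indicator column names (`doggo`, `pupper`) are assumptions mirroring WeRateDogs' dog categories.

```python
import pandas as pd

# Toy archive: one indicator column per dog stage, "None" when absent
# (column names are assumptions based on WeRateDogs' dog categories)
archive = pd.DataFrame({
    "tweet_id": [1, 2, 3],
    "doggo": ["doggo", "None", "None"],
    "pupper": ["None", "pupper", "None"],
})

# Melt the stage columns into a single tidy 'stage' column
melted = archive.melt(id_vars="tweet_id", value_vars=["doggo", "pupper"],
                      value_name="stage")
melted = melted[melted["stage"] != "None"].drop(columns="variable")

# Re-attach tweets without any stage, keeping one row per tweet
tidy = archive[["tweet_id"]].merge(melted, on="tweet_id", how="left")
print(tidy)
```

Tweets without a stage keep a missing value, matching the decision above to accept missing dog stages.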
Assessing data quality, we decide to correct the data type of the Tweet creation timestamp and to improve the rating information. The rating information is wrong e.g. when similar patterns occur in the Tweet. An example is "3 1/2 legged (...) 9/10", where 9/10 is the rating. The rating denominator is not 10 when multiple dogs in a single picture are rated together. Instead of scaling, those observations are dropped due to their small number. After this, we are down to 2,045 observations.
The dogs' names are wrongly extracted in (1) in certain cases, where "a", "an", or "the" is given as the name. We correct these cases.
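The two corrections above can be sketched as follows. The regular expression and the replacement strategy are illustrative assumptions, not the project's actual code.

```python
import re
import pandas as pd

# Take the LAST 'N/10' pattern in the text as the rating, so fractions
# like '3 1/2' earlier in the Tweet are not mistaken for it (assumption:
# the true rating out of 10 comes last in the Tweet text)
def extract_rating(text):
    matches = re.findall(r"(\d+(?:\.\d+)?)/10", text)
    return float(matches[-1]) if matches else None

print(extract_rating("3 1/2 legged dog (...) 9/10"))  # 9.0

# Replace wrongly extracted articles with a missing value
names = pd.Series(["Charlie", "a", "an", "the", "Luna"])
names = names.where(~names.isin(["a", "an", "the"]))
print(names.tolist())
```

The `where` call keeps proper names and masks the articles as `NaN`, consistent with accepting missing dog names.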
The cleaned dataset is stored in CSV format.
```
# words:
text = "WeRateDogs is a Twitter account that rates people's dogs with a humorous comment about the dog. We wrangle the WeRateDogs Tweets. We provide a cleaned dataset for further analysis. We combine three datasets: 1. WeRateDogs Twitter archive download by WeRateDogs. 2. Data downloaded using the Twitter API. 3. Predictions of the pictures content. CMuch Tweet information can be obtained through both (1) and (2). Because the ETL steps obtaining the data directly from the source are more transparent, we prefer to collect the data from the Twitter API. The API is called Tweepy. Our approach is to collect all the data which is available via the API. We select the relevant columns later. Thus we can easily expand the analysis to data not considered before. Our goal is to produce a high quality dataset. We prefer a complete dataset over collecting as much data as possible. This means we combine the information from all sources and drop observations with missing data. Most notably, the predictions in (3) are only available up to August 2017. WeRateDogs was started November 2015. What we accept is missing dog names and dog stages.Based on the Tweet IDs in (1) we collect the data from the API. We do not filter the IDs we request over the API based on the date given in (1). Instead, we filter the date directly at the source: for each Tweet ID data is collected using the Twitter API. The data returned is only accepted when the tweet is from before 02.08.2017. Also retweets are filtered here.The Twitter API runs about 30 minutes to check 2,356 IDs in (1). Of those IDs 2,325 are considered relevant. Additionally to the acceptance criteria mentioned, a small amount of Tweets were deleted from Twitter as of 12.05.2019. Unexpectedly, the Twitter output contains a small 2-digit number of duplicates. One instance of the duplicates is dropped. 
Because of the runtime of data collection using the API, the data is stored on disc.Data completeness and tidiness is reached combining all three datasets, dropping columns which are duplicated or not relevant for the analysis, and melting the dog stages. From this we end up with 2,065 observations. That is the number of lines in (3) minus Tweets no more available. Assessing data quality, we decide to correct the data type for the Tweet creation timestamp and improve the rating information. The rating information is wrong e.g. when similar patterns are in the Tweet. An example is '3 1/2 legged (...) 9/10', where 9/10 is the rating. Ratings denominator is not 10 when multiple dogs on a single picture are rated. Instead of scaling, due to the small amount of such pictures those observations are dropped. After this we are down to 2,045 observations. The dogs names are wrongly extracted in (1) for certain cases. Then wrongly 'a', 'an', and 'the' are given as names. We correct these ceses. The cleaned dataset is stored in csv format."
print('This report has', len(text.split(' ')), 'words.')
```
| github_jupyter |
# Getting Started With Xarray
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Getting-Started-With-Xarray" data-toc-modified-id="Getting-Started-With-Xarray-1"><span class="toc-item-num">1 </span>Getting Started With Xarray</a></span><ul class="toc-item"><li><span><a href="#Learning-Objectives" data-toc-modified-id="Learning-Objectives-1.1"><span class="toc-item-num">1.1 </span>Learning Objectives</a></span></li><li><span><a href="#What-Is-Xarray?" data-toc-modified-id="What-Is-Xarray?-1.2"><span class="toc-item-num">1.2 </span>What Is Xarray?</a></span></li><li><span><a href="#Core-Data-Structures" data-toc-modified-id="Core-Data-Structures-1.3"><span class="toc-item-num">1.3 </span>Core Data Structures</a></span><ul class="toc-item"><li><span><a href="#DataArray" data-toc-modified-id="DataArray-1.3.1"><span class="toc-item-num">1.3.1 </span><code>DataArray</code></a></span></li><li><span><a href="#Dataset" data-toc-modified-id="Dataset-1.3.2"><span class="toc-item-num">1.3.2 </span><code>Dataset</code></a></span></li></ul></li><li><span><a href="#Going-Further" data-toc-modified-id="Going-Further-1.4"><span class="toc-item-num">1.4 </span>Going Further</a></span></li></ul></li></ul></div>
## Learning Objectives
- Provide an overview of xarray
- Describe the core xarray data structures, the DataArray and the Dataset, and the components that make them up
- Create xarray DataArrays/Datasets out of raw numpy arrays
- Create xarray objects with and without indexes
- View and set attributes
## What Is Xarray?
Unlabeled, N-dimensional arrays of numbers (e.g., NumPy's ndarray) are the most widely used data structure in scientific computing. However, they lack a meaningful representation of the metadata associated with their data. Implementing such functionality is left to individual users and domain-specific packages. Xarray is a useful tool for parallelizing and working with large datasets in the geosciences. Xarray expands on the capabilities of NumPy arrays, providing streamlined data manipulation.
Xarray's interface is based largely on the netCDF data model (variables, attributes, and dimensions), but it goes beyond the traditional netCDF interfaces to provide functionality similar to netCDF-java's Common Data Model (CDM).
## Core Data Structures
- xarray has 2 fundamental data structures:
    - `DataArray`, which holds a single multi-dimensional variable and its coordinates
- `Dataset`, which holds multiple variables that potentially share the same coordinates

### `DataArray`
The DataArray is xarray's implementation of a labeled, multi-dimensional array. It has several key properties:
| Attribute | Description |
|----------- |------------------------------------------------------------------------------------------------------------------------------------------ |
| `data` | `numpy.ndarray` or `dask.array` holding the array's values. |
| `dims` | dimension names for each axis. For example: (`x`, `y`, `z`) or (`lat`, `lon`, `time`). |
| `coords` | a dict-like container of arrays (coordinates) that label each point (e.g., 1-dimensional arrays of numbers, datetime objects or strings) |
| `attrs` | an `OrderedDict` to hold arbitrary attributes/metadata (such as units) |
| `name` | an arbitrary name of the array |
```
# Import packages
import numpy as np
import xarray as xr
# Create some sample data
data = 2 + 6 * np.random.exponential(size=(5, 3, 4))
data
```
To create a basic `DataArray`, you can pass this numpy array of random data to `xr.DataArray`
```
prec = xr.DataArray(data)
prec
```
<div class="alert alert-block alert-warning">
Xarray automatically generates some basic dimension names for us.
</div>
You can also pass in your own dimension names and coordinate values:
```
# Use pandas to create an array of datetimes
import pandas as pd
times = pd.date_range('2019-04-01', periods=5)
times
# Use numpy to create array of longitude and latitude values
lons = np.linspace(-150, -60, 4)
lats = np.linspace(10, 80, 3)
lons, lats
coords = {'time': times, 'lat': lats, 'lon': lons}
dims = ['time', 'lat', 'lon']
# Add name, coords, dims to our data
prec = xr.DataArray(data, dims=dims, coords=coords, name='prec')
prec
```
This is already an improvement over the original numpy array, because we have names for each of the dimensions (or axes, in NumPy parlance).
We can also add attributes to an existing `DataArray`:
```
prec.attrs['units'] = 'mm'
prec.attrs['standard_name'] = 'precipitation'
prec
```
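Named dimensions also pay off in computations: reductions can refer to a dimension by name rather than by positional axis. A minimal sketch, independent of the `prec` array above:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(6.0).reshape(2, 3), dims=["time", "lat"])

# With plain NumPy you must remember which positional axis is which
np_mean = da.values.mean(axis=0)

# With xarray you name the dimension instead
xr_mean = da.mean(dim="time")

print(xr_mean.values)  # same numbers, but the intent is self-documenting
```

Both calls compute the same values; the xarray version stays correct even if the dimension order of the underlying array changes.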
### `Dataset`
- Xarray's `Dataset` is a dict-like container of labeled arrays (`DataArray` objects) with aligned dimensions.
- It is designed as an in-memory representation of a netCDF dataset.
- In addition to the dict-like interface of the dataset itself, which can be used to access any `DataArray` in a `Dataset`, Datasets have the following key properties:
| Attribute | Description |
|------------- |------------------------------------------------------------------------------------------------------------------------------------------ |
| `data_vars` | OrderedDict of `DataArray` objects corresponding to data variables. |
| `dims` | dictionary mapping from dimension names to the fixed length of each dimension (e.g., {`lat`: 6, `lon`: 6, `time`: 8}). |
| `coords` | a dict-like container of arrays (coordinates) that label each point (e.g., 1-dimensional arrays of numbers, datetime objects or strings) |
| `attrs` | OrderedDict to hold arbitrary metadata pertaining to the dataset. |
| `name` | an arbitrary name of the dataset |
- DataArray objects inside a Dataset may have any number of dimensions but are presumed to share a common coordinate system.
- Coordinates can also have any number of dimensions but denote constant/independent quantities, unlike the varying/dependent quantities that belong in data.
To create a `Dataset` from scratch, we need to supply dictionaries for any variables (`data_vars`), coordinates (`coords`) and attributes (`attrs`):
```
dset = xr.Dataset({'precipitation' : prec})
dset
```
Let's add a toy `temperature` data array to this existing dataset:
```
temp_data = 283 + 5 * np.random.randn(5, 3, 4)
temp = xr.DataArray(data=temp_data, dims=['time', 'lat', 'lon'],
coords={'time': times, 'lat': lats, 'lon': lons},
name='temp',
attrs={'standard_name': 'air_temperature', 'units': 'kelvin'})
temp
# Now add this data array to our existing dataset
dset['temperature'] = temp
dset.attrs['history'] = 'Created for the xarray tutorial'
dset.attrs['author'] = 'foo and bar'
dset
```
## Going Further
Xarray Documentation on Data Structures: http://xarray.pydata.org/en/latest/data-structures.html
<div class="alert alert-block alert-success">
<p>Next: <a href="02_io.ipynb">I/O</a></p>
</div>
| github_jupyter |
# Reporting on user journeys to a GOV.UK page
Calculate the count and proportion of sessions that have the same journey behaviour.
This script finds sessions that visit a specific page (`DESIRED_PAGE`) in their journey. From the first or last visit to
`DESIRED_PAGE` in the session, the journey is subsetted to include the last N pages including `DESIRED_PAGE`
(`NUMBER_OF_STAGES`).
The count and proportion of sessions visiting each distinct, subsetted journey are compiled together, and returned as a
list sorted in descending order, split by whether the subsetted journey includes the entrance page.
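The subsetting logic described above can be illustrated with a toy Python sketch (this is only an illustration of the idea, not the BigQuery implementation below):

```python
def subset_journey(pages, desired_page, number_of_stages, first_hit=False):
    """Return the last `number_of_stages` pages up to the first/last visit
    to `desired_page`; journeys shorter than that are kept whole."""
    hits = [i for i, p in enumerate(pages) if p == desired_page]
    if not hits:
        return None  # session never visited the desired page
    cut = hits[0] if first_hit else hits[-1]
    return pages[max(0, cut - number_of_stages + 1): cut + 1]

journey = ["/", "/browse", "/coronavirus", "/search", "/coronavirus"]
print(subset_journey(journey, "/coronavirus", 3))                  # last hit
print(subset_journey(journey, "/coronavirus", 3, first_hit=True))  # first hit
```

With `NUMBER_OF_STAGES = 3`, the last-hit subset keeps the final three pages ending at the last `/coronavirus` visit, while the first-hit subset ends at the first visit.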
## Arguments
- `START_DATE`: String in YYYYMMDD format defining the start date of your query.
- `END_DATE`: String in YYYYMMDD format defining the end date of your query.
- `DESIRED_PAGE`: String of the desired GOV.UK page path of interest.
- `FIRST_HIT`: Boolean flag indicating that the `FIRST` hit to the `DESIRED_PAGE` in the session is used for the subsetted journey. If this option is selected, `LAST_HIT` cannot be selected.
- `LAST_HIT`: Boolean flag indicating that the `LAST` hit to the `DESIRED_PAGE` in the session is used for the subsetted journey. If this option is selected, `FIRST_HIT` cannot be selected.
- `NUMBER_OF_STAGES`: Integer defining how many pages in the past (including `DESIRED_PAGE`) should be considered when subsetting the user journeys. Note that journeys with fewer pages than `NUMBER_OF_STAGES` will always be included.
- `PAGE_TYPE`: Boolean flag indicating that `PAGE` page paths are required. One of `PAGE_TYPE` or `EVENT_TYPE` must be selected.
- `EVENT_TYPE`: Boolean flag indicating that `EVENT` page paths are required. One of `PAGE_TYPE` or `EVENT_TYPE` must be selected.
- `DEVICE_DESKTOP`: Boolean flag indicating that desktop devices should be included in this query. One of `DEVICE_DESKTOP`, `DEVICE_MOBILE`, `DEVICE_TABLET`, or `DEVICE_ALL` must be selected. However, `DEVICE_TABLET` cannot be selected if `DEVICE_ALL` is selected.
- `DEVICE_MOBILE`: Boolean flag indicating that mobile devices should be included in this query. One of `DEVICE_DESKTOP`, `DEVICE_MOBILE`, `DEVICE_TABLET`, or `DEVICE_ALL` must be selected. However, `DEVICE_MOBILE` cannot be selected if `DEVICE_ALL` is selected.
- `DEVICE_TABLET`: Boolean flag indicating that tablet devices should be included in this query. One of `DEVICE_DESKTOP`, `DEVICE_MOBILE`, `DEVICE_TABLET`, or `DEVICE_ALL` must be selected. However, `DEVICE_TABLET` cannot be selected if `DEVICE_ALL` is selected.
- `DEVICE_ALL`: Boolean flag indicating that all devices should be segmented but included in this query. One of `DEVICE_DESKTOP`, `DEVICE_MOBILE`, `DEVICE_TABLET`, or `DEVICE_ALL` must be selected. However, `DEVICE_ALL` cannot be selected if `DEVICE_DESKTOP`, `DEVICE_MOBILE`, or `DEVICE_TABLET` is selected.
### Optional arguments
- `QUERY_STRING`: Boolean flag. If `TRUE`, remove query strings from all page paths. If `FALSE`, keep query strings in all page paths.
- `FLAG_EVENTS`: Boolean flag. If `TRUE`, all `EVENT` page paths will have a ` [E]` suffix. This is useful if both `PAGE_TYPE` and `EVENT_TYPE` are selected, so you can differentiate between the same page path with different types. If `FALSE`, no suffix is appended to `EVENT` page paths.
- `EVENT_CATEGORY`: Boolean flag. If `TRUE`, all event categories will be displayed.
- `EVENT_ACTION`: Boolean flag. If `TRUE`, all event actions will be displayed.
- `EVENT_LABEL`: Boolean flag. If `TRUE`, all event labels will be displayed.
- `ENTRANCE_PAGE`: Boolean flag. If `TRUE`, if the subsetted journey contains the entrance page this is flagged.
- `EXIT_PAGE`: Boolean flag. If `TRUE`, if the subsetted journey contains the exit page this is flagged.
- `REMOVE_DESIRED_PAGE_REFRESHES`: Boolean flag. If `TRUE`, sequential page paths of the same type are removed when the query calculates the first/last visit to the desired page. In other words, only the first visit in a series of sequential visits to the desired page is used if they have the same type. Other visits to the desired page remain, as do any other desired-page refreshes.
- `TRUNCATE_SEARCHES`: Boolean flag. If `TRUE`, all GOV.UK search page paths are truncated to `Sitesearch ({TYPE}): {KEYWORDS}`, where `{TYPE}` is the GOV.UK search content type, and `{KEYWORDS}` are the search keywords. If there are no keywords, this is set to `none`. If `FALSE`, GOV.UK search page paths are not truncated.
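The `TRUNCATE_SEARCHES` transformation can be sketched in Python as follows. This is a simplified illustration of the documented format, mirroring (but not identical to) the regular expressions used in the BigQuery query below:

```python
import re

def truncate_search(page_path):
    """Rewrite '/search/{TYPE}?keywords={KEYWORDS}...' as
    'Sitesearch ({TYPE}): {KEYWORDS}', with '+'-separated keywords."""
    match = re.match(r"^/search/([^ ?#/]+)", page_path)
    if not match:
        return page_path  # not a search page: leave untouched
    content_type = match.group(1)
    kw = re.search(r"\?keywords=([^&]+)", page_path)
    keywords = kw.group(1).replace("+", " ") if kw else "none"
    return f"Sitesearch ({content_type}): {keywords}"

print(truncate_search("/search/all?keywords=self+assessment&order=relevance"))
# Sitesearch (all): self assessment
print(truncate_search("/coronavirus"))  # unchanged
```

Search pages with no keywords fall back to `none`, and non-search page paths pass through unchanged.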
## Returns
A csv file containing a Google BigQuery result showing the subsetted user journey containing `PAGE_TYPE` and/or `EVENT_TYPE` page paths in order from the first or last visit to `DESIRED_PAGE` with a maximum length `NUMBER_OF_STAGES`. The results are presented in descending order, with the most popular subsetted user journey first.
Results show:
- `flagEntrance`: Subsetted journeys that incorporate the first page visited during a session are flagged if selected
- `flagExit`: Subsetted journeys that incorporate the last page visited during a session are flagged if selected
- `deviceCategories`: The device category/ies of the subsetted journeys
- `totalSessions`: The total number of sessions
- `countSessions`: The total number of sessions per subsetted journey
- `proportionSessions`: The proportion of sessions per subsetted journey
- `goalPreviousStepX`: The X previous page path following the `DESIRED_PAGE`; X corresponding to `NUMBER_OF_STAGES`
- `goal`: The `DESIRED_PAGE`
A second csv file showing the count for each previous step page path, regardless of the overall subsetted journey. The results are presented in descending order, with the most popular previous step first.
Results show:
- `goalPreviousStepX`: The X previous step page path; X corresponding to `NUMBER_OF_STAGES`
- `countsGoalPreviousStepX`: The number of sessions that visited the page path at step X
- `goal`: The `DESIRED_PAGE`
- `countsGoal`: The number of unique subsetted journeys
## Assumptions
- Only exact matches to `DESIRED_PAGE` are currently supported.
- Other visits to `DESIRED_PAGE` are ignored; only the first or last visit is used.
- If `REMOVE_DESIRED_PAGE_REFRESHES` is `TRUE`, and there is more than one page type (`PAGE_TYPE` and `EVENT_TYPE` are both selected), only the first visit in page refreshes to the same `DESIRED_PAGE` and page type are used to determine which is the first/last visit.
- Journeys shorter than the number of desired stages (`NUMBER_OF_STAGES`) are always included.
- GOV.UK search page paths are assumed to have the format `/search/{TYPE}?keywords={KEYWORDS}{...}`, where `{TYPE}` is the GOV.UK search content type, `{KEYWORDS}` are the search keywords, where each keyword is
separated by `+`, and `{...}` are any other parts of the search query that are not keyword-related (if they exist).
- GOV.UK search page titles are assumed to have the format `{KEYWORDS} - {TYPE} - GOV.UK`, where `{TYPE}` is the GOV.UK search content type, and `{KEYWORDS}` are the search keywords.
- If `ENTRANCE_PAGE` is `FALSE`, each journey (row) contains both instances where the entrance page is included, and the entrance page is not included. Therefore, if there are more page paths than `NUMBER_OF_STAGES`, this will not be flagged.
- If `EXIT_PAGE` is `FALSE`, each journey (row) contains both instances where the exit page is included, and the exit page is not included. Therefore, if there are more page paths than `NUMBER_OF_STAGES`, this will not be flagged.
- If `DEVICE_ALL` is selected in combination with either `DEVICE_DESKTOP`, `DEVICE_MOBILE`, and/or `DEVICE_TABLET`, then the analysis will use `DEVICE_ALL` and ignore all other arguments.
```
from datetime import datetime
import numpy as np
import pandas as pd
import plotly.graph_objects as go
from google.cloud import bigquery
from google.colab import auth, files
from IPython.core.interactiveshell import InteractiveShell
from oauth2client.client import GoogleCredentials
!pip install --upgrade gspread -q
import gspread
!pip install gspread_formatting -q
import gspread_formatting as gsf
# Allow multiline outputs
InteractiveShell.ast_node_interactivity = "all"
# Authenticate the user - follow the link and the prompts to get an authentication token
auth.authenticate_user()
# @markdown ## Set query parameters
# @markdown Define the start and end dates
START_DATE = "2022-01-04" # @param {type:"date"}
END_DATE = "2022-01-04" # @param {type:"date"}
# @markdown Set the desired page path - must start with '/'
DESIRED_PAGE = "/coronavirus" # @param {type:"string"}
# @markdown Set the hit to the desired page in the session; select one option only
FIRST_HIT = False # @param {type:"boolean"}
LAST_HIT = True # @param {type:"boolean"}
# @markdown Set the number of pages, including `DESIRED_PAGE` to include in the subsetted journeys
NUMBER_OF_STAGES = 4 # @param {type:"integer"}
# @markdown Set the page types; at least one must be checked
PAGE_TYPE = True # @param {type:"boolean"}
EVENT_TYPE = False # @param {type:"boolean"}
# @markdown Set the device categories; select one or more devices `[DEVICE_DESKTOP, DEVICE_MOBILE, DEVICE_TABLET]`, OR select all device categories divided up but included in the same analysis `[DEVICE_ALL]`
DEVICE_DESKTOP = True # @param {type:"boolean"}
DEVICE_MOBILE = True # @param {type:"boolean"}
DEVICE_TABLET = True # @param {type:"boolean"}
DEVICE_ALL = False # @param {type:"boolean"}
# @markdown ### Other options
# @markdown Remove query strings from all page paths
QUERY_STRING = False # @param {type:"boolean"}
# @markdown Add a ` [E]` suffix to EVENT page paths - easier to differentiate between PAGE and
# @markdown EVENT types for the same page path
FLAG_EVENTS = False # @param {type:"boolean"}
# @markdown Add event information suffix to EVENT page paths
EVENT_CATEGORY = False # @param {type:"boolean"}
EVENT_ACTION = False # @param {type:"boolean"}
EVENT_LABEL = False # @param {type:"boolean"}
# @markdown Include entrance page flag
ENTRANCE_PAGE = True # @param {type:"boolean"}
# @markdown Include exit page flag
EXIT_PAGE = True # @param {type:"boolean"}
# @markdown Remove page refreshes when determining the last visit to `DESIRED_PAGE`
REMOVE_DESIRED_PAGE_REFRESHES = True # @param {type:"boolean"}
# @markdown Truncate search pages to only show the search content type, and search keywords
TRUNCATE_SEARCHES = True # @param {type:"boolean"}
# Convert the inputted start and end date into `YYYYMMDD` formats
QUERY_START_DATE = datetime.strptime(START_DATE, "%Y-%m-%d").strftime("%Y%m%d")
QUERY_END_DATE = datetime.strptime(END_DATE, "%Y-%m-%d").strftime("%Y%m%d")
# Check that `DESIRED_PAGE` starts with '/'
assert DESIRED_PAGE.startswith(
"/"
), f"`DESIRED_PAGE` must start with '/': {DESIRED_PAGE}"
# Check that only one of `FIRST_HIT` or `LAST_HIT` is selected
if FIRST_HIT and LAST_HIT:
raise AssertionError("Only one of `FIRST_HIT` or `LAST_HIT` can be checked!")
# Compile the query page types
if PAGE_TYPE and EVENT_TYPE:
QUERY_PAGE_TYPES = ["PAGE", "EVENT"]
elif PAGE_TYPE:
QUERY_PAGE_TYPES = ["PAGE"]
elif EVENT_TYPE:
QUERY_PAGE_TYPES = ["EVENT"]
else:
raise AssertionError("At least one of `PAGE_TYPE` or `EVENT_TYPE` must be checked!")
# Compile the device categories
QUERY_DEVICE_CATEGORIES = [
"desktop" if DEVICE_DESKTOP else "",
"mobile" if DEVICE_MOBILE else "",
"tablet" if DEVICE_TABLET else "",
]
QUERY_DEVICE_CATEGORIES = [d for d in QUERY_DEVICE_CATEGORIES if d]
assert (bool(QUERY_DEVICE_CATEGORIES)) | (DEVICE_ALL), (
f"At least one of `DEVICE_DESKTOP`, `DEVICE_MOBILE`, `DEVICE_TABLET`"
+ f" or `DEVICE_ALL` must be checked!"
)
# Set the notebook execution date
NOTEBOOK_EXECUTION_DATE = datetime.now().strftime("%Y%m%d")
# Define the output file names
OUTPUT_FILE = (
f"{NOTEBOOK_EXECUTION_DATE}_user_journeys_{QUERY_START_DATE}_{QUERY_END_DATE}_"
+ f"{'_'.join(QUERY_DEVICE_CATEGORIES)}.csv"
)
query = """
WITH
get_session_data AS (
-- Get all the session data between `start_date` and `end_date`, subsetting for specific `page_type`s. As
-- some pages might be dropped by the subsetting, recalculate `hitNumber` as `journeyNumber` so the values
-- are sequential.
SELECT
CONCAT(fullVisitorId, "-", visitId) AS sessionId,
ROW_NUMBER() OVER (PARTITION BY fullVisitorId, visitId ORDER BY hits.hitNumber) AS journeyNumber,
ROW_NUMBER() OVER (PARTITION BY fullVisitorId, visitId ORDER BY hits.hitNumber DESC) AS revJourneyNumber,
hits.type,
device.deviceCategory,
hits.page.pagePath,
CONCAT(
IF(@queryString, REGEXP_REPLACE(hits.page.pagePath, r'[?#].*', ''), hits.page.pagePath), -- modify this line to `hits.page.pageTitle` if required
IF(hits.type = "EVENT" AND @flagEvents, IF ((@eventCategory OR @eventAction OR @eventLabel), " [E", "[E]"), ""),
IF(hits.type = "EVENT" AND @eventCategory, CONCAT(IF ((@flagEvents), ", ", " ["), hits.eventInfo.eventCategory, IF ((@eventAction OR @eventLabel), "", "]")), ""),
IF(hits.type = "EVENT" AND @eventAction, CONCAT(IF ((@flagEvents OR @eventCategory), ", ", " ["), hits.eventInfo.eventAction, IF ((@eventLabel), "", "]")), ""),
IF(hits.type = "EVENT" AND @eventLabel, CONCAT(IF ((@flagEvents OR @eventCategory OR @eventAction), ", ", " ["), hits.eventInfo.eventLabel, "]"), "")
) AS pageId
FROM `govuk-bigquery-analytics.87773428.ga_sessions_*`
CROSS JOIN UNNEST(hits) AS hits
WHERE _TABLE_SUFFIX BETWEEN @startDate AND @endDate
AND hits.type IN UNNEST(@pageType)
AND (CASE WHEN @deviceAll THEN device.deviceCategory in UNNEST(["mobile", "desktop", "tablet"]) END
OR CASE WHEN @deviceCategories IS NOT NULL THEN device.deviceCategory in UNNEST(@deviceCategories) END )
),
get_search_content_type_and_keywords AS (
-- Extract the content type and keywords (if any) for GOV.UK search pages.
SELECT
*,
IFNULL(
REGEXP_EXTRACT(pagePath, r"^/search/([^ ?#/]+)"),
REGEXP_EXTRACT(pagePath, r"^.+ - ([^-]+) - GOV.UK$")
) AS searchContentType,
IFNULL(
REPLACE(REGEXP_EXTRACT(pagePath, r"^/search/[^ ?#/]+\?keywords=([^&]+)"), "+", " "),
REGEXP_EXTRACT(pagePath, r"^(.+)- [^-]+ - GOV.UK$")
) AS searchKeywords
FROM get_session_data
),
compile_search_entry AS (
-- Truncate the search page into an entry of the search content type and keywords (if any).
SELECT
* EXCEPT (searchContentType, searchKeywords),
CONCAT(
"Sitesearch (",
searchContentType,
"): ",
COALESCE(searchKeywords, "none")
) AS search_entry
FROM get_search_content_type_and_keywords
),
replace_escape_characters AS (
-- Replace \ with / as otherwise following REGEXP_REPLACE will not execute
SELECT
*,
REGEXP_REPLACE(search_entry, r"\\\\", "/") AS searchEntryEscapeRemoved
FROM compile_search_entry
),
revise_search_pageids AS (
-- Replace `pageId` for search pages with the compiled entries if selected by the user.
SELECT
* REPLACE (
IFNULL(IF(@truncatedSearches, REGEXP_REPLACE(pageId, r"^/search/.*", searchEntryEscapeRemoved), pageId), pageId) AS pageId
)
FROM replace_escape_characters
),
identify_page_refreshes AS (
-- Lag the page `type` and `pageId` columns. This helps identify page refreshes that can be removed in the
-- next CTE
SELECT
*,
LAG(type) OVER (PARTITION BY sessionId ORDER BY journeyNumber) AS lagType,
LAG(pageId) OVER (PARTITION BY sessionId ORDER BY journeyNumber) AS lagPageId
FROM revise_search_pageids
),
identify_hit_to_desired_page AS (
-- Get the first/last hit to the desired page. Ignores previous visits to the desired page. Page refreshes of the
-- desired page are also ignored if the correct option is declared.
SELECT
sessionId,
deviceCategory,
CASE
WHEN @firstHit THEN MIN(journeyNumber)
WHEN @lastHit THEN MAX(journeyNumber)
END AS desiredPageJourneyNumber
FROM identify_page_refreshes
WHERE pageId = @desiredPage
AND IF(
@desiredPageRemoveRefreshes,
(
lagPageId IS NULL
OR pageId != lagPageId
OR IF(ARRAY_LENGTH(@pageType) > 1, pageId = lagPageId AND type != lagType, FALSE)
),
TRUE
)
GROUP BY sessionId, deviceCategory
),
subset_journey_to_hit_of_desired_page AS (
-- Subset all user journeys to the first/last hit of the desired page.
SELECT revise_search_pageids.*
FROM revise_search_pageids
INNER JOIN identify_hit_to_desired_page
ON revise_search_pageids.sessionId = identify_hit_to_desired_page.sessionId
AND revise_search_pageids.deviceCategory = identify_hit_to_desired_page.deviceCategory
AND revise_search_pageids.journeyNumber <= identify_hit_to_desired_page.desiredPageJourneyNumber
),
calculate_stages AS (
-- Calculate the number of stages from the first/last hit to the desired page, where the first/last hit to the desired
-- page is '1'.
SELECT
*,
ROW_NUMBER() OVER (PARTITION BY sessionId ORDER BY journeyNumber DESC) AS reverseDesiredPageJourneyNumber
FROM subset_journey_to_hit_of_desired_page
),
subset_journey_to_number_of_stages AS (
-- Compile the subsetted user journeys together for each session in reverse order (first/last hit to the desired
-- page first), delimited by " <<< ".
SELECT DISTINCT
sessionId,
deviceCategory,
MIN(journeyNumber) OVER (PARTITION BY sessionId) = 1 AS flagEntrance,
MIN(revJourneyNumber) OVER (PARTITION BY sessionId) = 1 AS flagExit,
STRING_AGG(pageId, " <<< ") OVER (
PARTITION BY sessionId
ORDER BY reverseDesiredPageJourneyNumber ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
) AS userJourney
FROM calculate_stages
WHERE reverseDesiredPageJourneyNumber <= @numberOfStages
),
count_distinct_journeys AS (
-- Count the number of sessions for each distinct subsetted user journey, split by whether the sessions
-- entered on the first page of the subsetted journey or not
SELECT
CASE WHEN @entrancePage THEN CAST(flagEntrance AS STRING) ELSE 'no flag' END AS flagEntrance,
CASE WHEN @exitPage THEN CAST(flagExit AS STRING) ELSE 'no flag' END AS flagExit,
CASE WHEN @deviceAll THEN CAST(deviceCategory AS STRING) ELSE ARRAY_TO_STRING(@deviceCategories, ", ") END AS deviceCategory,
userJourney,
(SELECT COUNT(sessionId) FROM subset_journey_to_number_of_stages) AS totalSessions,
COUNT(sessionId) AS countSessions
FROM subset_journey_to_number_of_stages
GROUP BY
flagEntrance, flagExit, deviceCategory, userJourney
)
SELECT
*,
countSessions / totalSessions AS proportionSessions
FROM count_distinct_journeys
ORDER BY countSessions DESC;
"""
# Initialise a Google BigQuery client, and define the query parameters
client = bigquery.Client(project="govuk-bigquery-analytics", location="EU")
query_parameters = [
bigquery.ScalarQueryParameter("startDate", "STRING", QUERY_START_DATE),
bigquery.ScalarQueryParameter("endDate", "STRING", QUERY_END_DATE),
bigquery.ArrayQueryParameter("pageType", "STRING", QUERY_PAGE_TYPES),
bigquery.ScalarQueryParameter("firstHit", "BOOL", FIRST_HIT),
bigquery.ScalarQueryParameter("lastHit", "BOOL", LAST_HIT),
bigquery.ArrayQueryParameter("deviceCategories", "STRING", QUERY_DEVICE_CATEGORIES),
bigquery.ScalarQueryParameter("deviceAll", "BOOL", DEVICE_ALL),
bigquery.ScalarQueryParameter("flagEvents", "BOOL", FLAG_EVENTS),
bigquery.ScalarQueryParameter("eventCategory", "BOOL", EVENT_CATEGORY),
bigquery.ScalarQueryParameter("eventAction", "BOOL", EVENT_ACTION),
bigquery.ScalarQueryParameter("eventLabel", "BOOL", EVENT_LABEL),
bigquery.ScalarQueryParameter("truncatedSearches", "BOOL", TRUNCATE_SEARCHES),
bigquery.ScalarQueryParameter("desiredPage", "STRING", DESIRED_PAGE),
bigquery.ScalarQueryParameter("queryString", "BOOL", QUERY_STRING),
bigquery.ScalarQueryParameter("entrancePage", "BOOL", ENTRANCE_PAGE),
bigquery.ScalarQueryParameter("exitPage", "BOOL", EXIT_PAGE),
bigquery.ScalarQueryParameter(
"desiredPageRemoveRefreshes", "BOOL", REMOVE_DESIRED_PAGE_REFRESHES
),
bigquery.ScalarQueryParameter("numberOfStages", "INT64", NUMBER_OF_STAGES),
]
# Dry run the query, asking for user input to confirm the query execution size is okay
bytes_processed = client.query(
query,
job_config=bigquery.QueryJobConfig(query_parameters=query_parameters, dry_run=True),
).total_bytes_processed
# Compile a message, and flag to the user for a response; if not "yes", terminate execution
user_message = (
f"This query will process {bytes_processed / (1024 ** 3):.1f} GB when run, "
+ f"which is approximately ${bytes_processed / (1024 ** 4)*5:.3f}. Continue ([yes])? "
)
if input(user_message).lower() != "yes":
raise RuntimeError("Stopped execution!")
# Execute the query, and return as a pandas DataFrame
df_raw = client.query(
query, job_config=bigquery.QueryJobConfig(query_parameters=query_parameters)
).to_dataframe()
df_raw.head()
df_stages = (
df_raw.set_index(
["flagEntrance", "flagExit", "deviceCategory", "userJourney"], drop=False
)["userJourney"]
.str.split(" <<< ", expand=True)
.iloc[:, ::-1]
)
df_stages.columns = [
*[f"goalPreviousStep{c+1}" for c in df_stages.columns[1:]],
"goalCompletionLocation",
]
df = df_raw.merge(
df_stages,
how="left",
left_on=["flagEntrance", "flagExit", "deviceCategory", "userJourney"],
right_index=True,
validate="1:1",
)
df.head()
```
# Outputs
```
# Output the results to a CSV file, and download it
df.to_csv(OUTPUT_FILE)
files.download(OUTPUT_FILE)
# Amalgamate the previous steps to provide a summary of the most popular pages (regardless of order of steps)
all_data = []
for c in df.columns[6:]:
df_amal = (
df.groupby([c])
.size()
.reset_index(name=f"counts{c}")
.sort_values([f"counts{c}"], ascending=False)
)
all_data.append(df_amal)
df2 = pd.concat(all_data, axis=0, ignore_index=True)
df2 = df2.apply(lambda x: pd.Series(x.dropna().values))
df2.head()
# Save amalgamation of previous steps to file
filename = "previous_steps_amalgamated.csv"
output = df2.to_csv(filename, index=False)
files.download(filename)
```
# Presenting results as a Sankey diagram
Run this code to create a pseudo-Sankey diagram summarising the top 10 journeys and the remainder.
Notes:
* If you want to view `EVENT` hit information, consider using the Google Sheets template instead. The Sankey diagram can only present a limited number of characters, so `EVENT` hit information is likely to be lost
* The plot is best when `NUMBER_OF_STAGES` <= 4. More characters are truncated the greater the number of stages, which will impact the coherence and quality of the diagram
* Because of the above, the Sankey plot cannot be created when `NUMBER_OF_STAGES` is equal to or greater than 8
* If, for example, `NUMBER_OF_STAGES` = 5, but the max journey length is 4, then re-do the analysis with `NUMBER_OF_STAGES` = 4. Fewer characters will be truncated
* When the plot is created, it is possible to drag the nodes to a different position. This is particularly useful when you have wide nodes, such as nodes with a proportion greater than 70%, as sometimes these nodes will overlap
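The stage-dependent truncation described in the notes above can be captured as a standalone helper (an illustrative sketch only — `truncate_labels` and the cut-off value shown here are not part of the notebook):

```python
import pandas as pd

def truncate_labels(series: pd.Series, max_len: int) -> pd.Series:
    # Keep strings shorter than max_len as-is; otherwise cut and append '...'
    return series.where(series.str.len() < max_len, series.str[:max_len] + "...")

labels = pd.Series(["/short", "/a-very-long-page-path-example"])
print(truncate_labels(labels, 10).tolist())  # ['/short', '/a-very-lo...']
```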
```
# Raise an error if `NUMBER_OF_STAGES` >= 8
assert NUMBER_OF_STAGES < 8, "`NUMBER_OF_STAGES` must be equal to or less than 7"
# Filter the data to show the top 10 journeys only and order columns
df_top = df.iloc[:, np.r_[5, 4, 6 : len(df.columns)]].iloc[:, ::-1].head(10)
# Transpose df, and replace the first NaN value in each journey with 'Entrance'
for column in df_top.transpose():
df_top.loc[column] = df_top.loc[column].fillna("Entrance", limit=1)
# Sum count and proportion for top 10 journeys
top_10_count = df_top["countSessions"].sum()
top_10_count = f"{top_10_count:,}"
top_10_prop = df_top["proportionSessions"].sum() * 100
top_10_prop = top_10_prop.round(decimals=1)
# Create an 11th journey, `Other journeys`, which amalgamates the remaining journeys
journey_remainder = [
[df[10:]["countSessions"].sum(axis=0)],
[df[10:]["proportionSessions"].sum(axis=0)],
[DESIRED_PAGE],
["Other journeys"],
]
journey_remainder = pd.DataFrame(data=journey_remainder).transpose()
journey_remainder.columns = [
"countSessions",
"proportionSessions",
"goalCompletionLocation",
"goalPreviousStep1",
]
df_top = pd.concat([df_top, journey_remainder], ignore_index=True)  # `DataFrame.append` is removed in pandas 2.0
df_top["proportionSessions"] = df_top["proportionSessions"].astype("float")
df_prop = df_top["proportionSessions"] * 100
df_prop = df_prop.round(decimals=1)
# Amalgamate countSessions and proportionSessions
df_top["proportionSessions"] = df_top["proportionSessions"] * 100
df_top["proportionSessions"] = df_top["proportionSessions"].round(decimals=1)
df_top["countSessions"] = [f"{val:,}" for val in df_top["countSessions"]]
df_top["sessions"] = (
" ["
+ df_top["countSessions"].astype(str)
+ ": "
+ df_top["proportionSessions"].astype(str)
+ "%]"
)
# Get total number of sessions
total_sessions = df_raw["totalSessions"][0]
total_sessions = f"{total_sessions:,}"
# Drop redundant columns
df_top = df_top.drop(
["countSessions", "totalSessions", "proportionSessions"], axis=1
).dropna(axis=1, how="all")
# Create a title for the figure
figure_title = (
f"<b>Reverse Path Tool: `{DESIRED_PAGE}`</b><br>[{START_DATE} to {END_DATE}]"
)
# Define node colours
desired_page_node_colour = ["rgb(136,34,85)"]
node_colour = [
"rgb(222,29,29)",
"rgb(82,188,163)",
"rgb(153,201,69)",
"rgb(204,97,196)",
"rgb(36,121,108)",
"rgb(218,165,27)",
"rgb(47,138,196)",
"rgb(118,78,115)",
"rgb(237,100,90)",
"rgb(229,134,6)",
"rgb(136,34,85)",
]
white_colour = ["rgb(255,255,255)"]
grey_colour = ["rgb(192,192,192)"]
# Create `x_coord` parameter, and truncate page path characters depending on `NUMBER_OF_STAGES`
df_top = df_top.astype(str)
if NUMBER_OF_STAGES <= 2:
# create `x_coord`
x_coord = list(np.linspace(1.05, 0.01, 2))
for column in df_top:
# truncate characters and add '...' where string lengths are more than 92
df_top[column] = df_top[column].where(
df_top[column].str.len() < 92, df_top[column].str[:92] + "..."
)
# for the last `goal`, truncate characters and add '...' where string lengths are more than 55
df_top.iloc[:, 0] = df_top.iloc[:, 0].where(
df_top.iloc[:, 0].str.len() < 55, df_top.iloc[:, 0].str[:55] + "..."
)
elif NUMBER_OF_STAGES == 3:
x_coord = [1.05, 0.40, 0.01]
for column in df_top:
df_top[column] = df_top[column].where(
df_top[column].str.len() < 55, df_top[column].str[:55] + "..."
)
df_top.iloc[:, 0] = df_top.iloc[:, 0].where(
df_top.iloc[:, 0].str.len() < 35, df_top.iloc[:, 0].str[:35] + "..."
)
elif NUMBER_OF_STAGES == 4:
x_coord = [1.05, 0.54, 0.29, 0.01]
for column in df_top:
df_top[column] = df_top[column].where(
df_top[column].str.len() < 36, df_top[column].str[:36] + "..."
)
df_top.iloc[:, 0] = df_top.iloc[:, 0].where(
df_top.iloc[:, 0].str.len() < 30, df_top.iloc[:, 0].str[:30] + "..."
)
elif NUMBER_OF_STAGES == 5:
x_coord = [1.05, 0.63, 0.45, 0.25, 0.001]
for column in df_top.iloc[:, 1:]:
df_top[column] = df_top[column].where(
df_top[column].str.len() < 27, df_top[column].str[:27] + "..."
)
df_top.iloc[:, 0] = df_top.iloc[:, 0].where(
df_top.iloc[:, 0].str.len() < 22, df_top.iloc[:, 0].str[:22] + "..."
)
elif NUMBER_OF_STAGES == 6:
x_coord = [1.05, 0.68, 0.55, 0.40, 0.25, 0.01]
for column in df_top:
df_top[column] = df_top[column].where(
df_top[column].str.len() < 22, df_top[column].str[:22] + "..."
)
else:
x_coord = [1.05, 0.75, 0.6, 0.45, 0.30, 0.15, 0.01]
for column in df_top:
df_top[column] = df_top[column].where(
df_top[column].str.len() < 15, df_top[column].str[:15] + "..."
)
# Remove `None` or 'nan' values
label_list = [
[x for x in y if str(x) != "None" and str(x) != "nan"]
for y in df_top.values.tolist()
]
# Concatenate count and proportion in the last `goalPreviousStep` field
label_list_concatanated = []
for lists in label_list:
temp = []
temp = lists[:-2] + [(" ".join(lists[-2:]))]
label_list_concatanated.append(temp)
# Get length for each journey
journey_lengths = [len(n) for n in label_list_concatanated]
# Create `x_coord` parameter
x_coord_list = [x_coord[1 : journey_lengths[x]] for x in range(11)]
x_coord_unnested = [item for sublist in x_coord_list for item in sublist]
x_coord_unnested.insert(0, 0.97)
# Create `y_coord` parameter
y_coord = [0.1]
for index in range(0, 10):
if index == 0 and df_prop[index] <= 30:
prev_elem = y_coord[0]
y_coord.append(prev_elem + 0.1)
elif index == 0 and df_prop[index] >= 30 and df_prop[index] <= 50:
prev_elem = y_coord[0]
y_coord.append(prev_elem + 0.2)
elif index == 0 and df_prop[index] >= 50 and df_prop[index] <= 70:
prev_elem = y_coord[0]
y_coord.append(prev_elem + 0.25)
elif index == 0 and df_prop[index] >= 70 and df_prop[index] <= 90:
prev_elem = y_coord[0]
y_coord.append(prev_elem + 0.3)
elif index == 0 and df_prop[index] >= 90 and df_prop[index] <= 100:
prev_elem = y_coord[0]
y_coord.append(prev_elem + 0.4)
elif index >= 1 and index <= 8 and df_prop[index] <= 10:
prev_elem = y_coord[index]
y_coord.append(prev_elem + 0.05)
elif index >= 1 and index <= 8 and df_prop[index] >= 10 and df_prop[index] <= 30:
prev_elem = y_coord[index]
y_coord.append(prev_elem + 0.1)
elif index >= 1 and index <= 8 and df_prop[index] >= 30 and df_prop[index] <= 50:
prev_elem = y_coord[index]
y_coord.append(prev_elem + 0.2)
elif index >= 1 and index <= 8 and df_prop[index] >= 50 and df_prop[index] <= 70:
prev_elem = y_coord[index]
y_coord.append(prev_elem + 0.3)
elif index >= 1 and index <= 8 and df_prop[index] >= 70 and df_prop[index] <= 100:
prev_elem = y_coord[index]
y_coord.append(prev_elem + 0.5)
elif index == 9:
y_coord.append(0.9)
y_coord_list = [[y_coord[y]] * (journey_lengths[y] - 1) for y in range(0, 11)]
y_coord_unnested = [item for sublist in y_coord_list for item in sublist]
y_coord_unnested.insert(0, 0.5)
# Helper that pairs each item with the item before it
from itertools import chain, tee
def previous(some_iterable):
prevs, items = tee(some_iterable, 2)
prevs = chain([None], prevs)
return zip(prevs, items)
# Create new list of lists with node number
node_no_list = []
for prevlength, length in previous(journey_lengths):
if prevlength is None:
temp1 = list(range(0, length))
node_no_list.append(temp1)
elif temp1 != [] and len(node_no_list) == 1:
temp2 = list(range(temp1[-1] + 1, temp1[-1] + length + 1))
node_no_list.append(temp2)
else:
node_no_list.append(
list(range(node_no_list[-1][-1] + 1, node_no_list[-1][-1] + length + 1))
)
# Replace every first value with '0'
for journey in node_no_list:
journey[0] = 0
# Within `node_no_list`, combine the source and target values
source_target_list = []
for journey in node_no_list:
    for prev_elem, elem in previous(journey):
        if prev_elem is not None:
            source_target_list.append([prev_elem, elem])
# Create `source` and `target` parameter
source = [item[0] for item in source_target_list]
target = [item[1] for item in source_target_list]
# Unnest `label_list_concatanated` to create `label` parameter
label_list_unnested = [item for sublist in label_list_concatanated for item in sublist]
# Create `color` parameter
colours = [
desired_page_node_colour + [node_colour[colour]] * (journey_lengths[colour] - 1)
for colour in range(11)
]
colours_unnested = [item for sublist in colours for item in sublist]
# Create `link_color` parameter
link_colour = [
grey_colour * (journey_lengths[colour] - 1) + white_colour for colour in range(11)
]
link_colour_unnested = [item for sublist in link_colour for item in sublist]
# Create `value` parameter based on proportion
amin, amax = min(df_prop), max(df_prop)
val = [(v - amin) / (amax - amin) for v in df_prop]
val_list = [[val[y]] * (journey_lengths[y] - 1) for y in range(0, 11)]
val_list_unnested = [item for sublist in val_list for item in sublist]
# Replace `0.0` with the second lowest number, as otherwise journeys with value `0.0` will not display
val_list_unnested = [
sorted(set(val_list_unnested))[1] if item == 0.0 else item
for item in val_list_unnested
]
# Create figure
fig = go.Figure(
data=[
go.Sankey(
node=dict(
x=x_coord_unnested,
y=y_coord_unnested,
pad=35,
thickness=35,
line=dict(color="white", width=0.5),
label=label_list_unnested,
color=colours_unnested,
),
arrangement="freeform", # 'fixed' 'snap' 'freeform' 'perpendicular'
link=dict(source=source, target=target, value=val_list_unnested),
)
]
)
# Add annotations
fig = fig.add_annotation(
x=1.05,
y=1.1,
text=f"<br>Total visits and proportion for the top 10 journeys: {top_10_count} [{top_10_prop}%]",
showarrow=False,
font=dict(family="Arial", size=22),
align="right",
)
fig = fig.add_annotation(
x=1.05,
y=0.485,
text=f"<br>Total visits:<br>{total_sessions}",
showarrow=False,
font=dict(family="Arial", size=19),
align="center",
)
# Update layout
fig.update_layout(
title_text=figure_title,
font=dict(family="Arial", size=19, color="black"),
title_font_size=30,
width=1700,
height=900,
hovermode=False,
xaxis={
"showgrid": False,
"zeroline": False,
"visible": False,
},
yaxis={
"showgrid": False,
"zeroline": False,
"visible": False,
},
plot_bgcolor="rgba(0,0,0,0)",
)
```
# Presenting results in Google sheets
Here's an [example of how you could present the results](https://docs.google.com/spreadsheets/d/1vSFXnPE8XozpRhI1G3x4tl5oro3pUIgZnoFmJ_AjPbY/edit#gid=1115034830) to facilitate sharing with colleagues. To do this, run the code below.
This code uses a [template google sheet](https://docs.google.com/spreadsheets/d/1E54VgFepSCxNfNKNtxp8eQXme7wGOAEauTqgzEuz3iM/edit?usp=drive_web&ouid=114104082491527752510) to create a new google sheet in `GOV.UK teams/2021-2022/Data labs/Requests/User journey tools/Path tools: google sheets result tables`, with the title: `{START_DATE} to {END_DATE} - Reverse path tool - {DESIRED_PAGE}`. This template can present up to 6 `NUMBER_OF_STAGES`. Copy or delete the formatting on the newly created google sheet if more or less stages are required.
It is advisable to present the results like this when the page paths are long, or when you want to visualise `EVENT` hits as well as `PAGE` hits.
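The cell addressing used below for the `Previous step N` columns (column H onwards, skipping every other column to match the template's layout) can be sketched in isolation — `previous_step_cells` is an illustrative name, not part of the notebook:

```python
def previous_step_cells(number_of_stages, first_row=4, last_row=13):
    # Columns H, J, L, ... (every second letter), one per previous step
    letters = [chr(c) for c in range(ord("H"), ord("Z") + 1, 2)][:number_of_stages]
    # One cell per top-10 journey row
    return [f"{letter}{row}" for letter in letters for row in range(first_row, last_row + 1)]

print(previous_step_cells(2)[:3])  # ['H4', 'H5', 'H6']
```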
```
# Authentication
gc = gspread.authorize(GoogleCredentials.get_application_default())
## Set up data
df_top = (
df.iloc[:, np.r_[6 : len(df.columns), 4, 5]].iloc[:, ::-1].head(10)
) # Filter the data to show the top 10 journeys only and order columns
df_top["proportionSessions"] = (
df_top["proportionSessions"] * 100
) # Convert proportion to %
df_top["proportionSessions"] = df_top["proportionSessions"].round(
decimals=2
) # Round % 2 decimal places
# Transpose df, and replace the first NaN value in each journey with 'Entrance'
for column in df_top.transpose():
df_top.loc[column] = df_top.loc[column].fillna("Entrance", limit=1)
# Create google sheet in `GOV.UK teams/2021-2022/Data labs/Requests/Path tools`
gc.copy(
"1E54VgFepSCxNfNKNtxp8eQXme7wGOAEauTqgzEuz3iM", # pragma: allowlist secret
title=f"{START_DATE} to {END_DATE} - Reverse path tool - {DESIRED_PAGE}",
copy_permissions=True,
)
sheet = gc.open(f"{START_DATE} to {END_DATE} - Reverse path tool - {DESIRED_PAGE}")
worksheet = sheet.worksheet("reverse_path_tool")
print("\n", sheet.url)
## Fill spreadsheet
# Replace df nan values with ''
df_top = df_top.fillna("")
# Update title header cells
title = f"Reverse path tool: `{DESIRED_PAGE}`"
worksheet.update("B1", f"{title}")
worksheet.update("B2", f"{START_DATE} to {END_DATE}")
# Update `% of sessions` cells
cell_range = list(map("C{}".format, range(4, 14)))
sessions = list(map("{}%".format, list(df_top["proportionSessions"])))
[worksheet.update(cell, sessionProp) for cell, sessionProp in zip(cell_range, sessions)]
# Update `No of. sessions` cells
cell_range = list(map("D{}".format, range(4, 14)))
sessions = list(df_top["countSessions"])
[
worksheet.update(cell, sessionCount)
for cell, sessionCount in zip(cell_range, sessions)
]
# Update `Goal page` cells
cell_range = list(map("F{}".format, range(4, 14)))
goal = list(df_top["goalCompletionLocation"])
[worksheet.update(cell, goalPage) for cell, goalPage in zip(cell_range, goal)]
## Update `Previous step N` cells
# Get cell ID letter for all `Previous step N` cells (start from cell `H`, skip 1, until cell `Z`)
cell_letters = [chr(c) for c in range(ord("h"), ord("z") + 1, 2)]
cell_letters = cell_letters[
:NUMBER_OF_STAGES
] # only keep the number of elements that match NUMBER_OF_STAGES
# Get cell ID number for all `Previous step N` cells
cell_numbers = list(range(4, 14))
cell_numbers = [str(x) for x in cell_numbers]
# Combine cell ID letter and number to create a list of cell IDs for `Previous step N` cells
goal_previous_step_cells = []
for letter in cell_letters:
for number in cell_numbers:
goal_previous_step_cells.append(letter + number)
# Create a list of the `Previous step N` paths
goal_previous_step = []
for step in range(1, NUMBER_OF_STAGES):
goal_previous_step.extend(df_top[f"goalPreviousStep{step}"])
# Update `Previous step N` cells
[
worksheet.update(cell, goalPage)
for cell, goalPage in zip(goal_previous_step_cells, goal_previous_step)
]
```
# Original SQL query
```sql
/*
Calculate the count and proportion of sessions that have the same journey behaviour.
This script finds sessions that visit a specific page (`desiredPage`) in their journey. From the first/last visit to
`desiredPage` in the session, the journey is subsetted to include the last N pages including `desiredPage`
(`numberOfStages`).
The count and proportion of sessions visiting distinct, subsetted journeys are compiled together, and returned as a
sorted list in descending order split by subsetted journeys including the entrance page.
Arguments:
startDate: String in YYYYMMDD format defining the start date of your query.
endDate: String in YYYYMMDD format defining the end date of your query.
pageType: String array containing comma-separated strings of page types. Must contain one or more of "PAGE" and
"EVENT".
firstHit: Boolean flag. If TRUE the first hit to the desired page is used for the subsetted journey. If set to TRUE,
`lastHit` must be set to FALSE.
lastHit: Boolean flag. If TRUE the last hit to the desired page is used for the subsetted journey. If set to TRUE,
`firstHit` must be set to FALSE.
deviceCategories: String array containing comma-separated strings of device categories. Can contain one or more
of "mobile", "desktop", and "tablet".
deviceAll: Boolean flag. If TRUE all device categories are included in the query but divided into their respective
categories. This must be set to TRUE if deviceCategories is left blank. If deviceCategories is not left blank,
this must be set to FALSE.
flagEvents: Boolean flag. If TRUE, all "EVENT" page paths will have a " [E]" suffix. This is useful if `pageType`
contains both "PAGE" and "EVENT" so you can differentiate between the same page path with different types. If
FALSE, no suffix is appended to "EVENT" page paths.
eventCategory: Boolean flag. If TRUE, all "EVENT" page paths will be followed by the " [eventCategory]". If FALSE,
no " [eventCategory]" suffix is appended to "EVENT" page paths.
eventAction: Boolean flag. If TRUE, all "EVENT" page paths will be followed by the " [eventAction]". If FALSE, no
" [eventAction]" suffix is appended to "EVENT" page paths.
eventLabel: Boolean flag. If TRUE, all "EVENT" page paths will be followed by the " [eventLabel]". If FALSE, no
" [eventLabel]" suffix is appended to "EVENT" page paths.
truncatedSearches: Boolean flag. If TRUE, all GOV.UK search page paths are truncated to
"Sitesearch ({TYPE}): {KEYWORDS}", where `{TYPE}` is the GOV.UK search content type, and `{KEYWORDS}` are the
search keywords. If there are no keywords, this is set to `none`. If FALSE, GOV.UK search page paths are
not truncated.
desiredPage: String of the desired GOV.UK page path of interest.
queryString: If TRUE, remove query string from all page paths. If FALSE, keep query strings for all page paths.
desiredPageRemoveRefreshes: Boolean flag. If TRUE sequential page paths of the same type are removed when the query
calculates the first/last visit to the desired page. In other words, it will only use the first visit in a series
of sequential visits to desired page if they have the same type. Other earlier visits to the desired page will
remain, as will any earlier desired page refreshes.
numberOfStages: Integer defining how many pages in the past (including `desiredPage`) should be considered when
subsetting the user journeys. Note that journeys with fewer pages than `numberOfStages` will always be
included.
entrancePage: Boolean flag. If TRUE, if the subsetted journey contains the entrance page this is flagged. If FALSE
no flag is used (e.g. the journey contains both instances where the entrance page is included, and the
entrance page is not included).
exitPage: Boolean flag. If TRUE, if the subsetted journey contains the exit page this is flagged. If FALSE
no flag is used (e.g. the journey contains both instances where the exit page is included, and the
exit page is not included).
Returns:
A Google BigQuery result containing the subsetted user journey containing `pageType` page paths in reverse from
the first/last visit to `desiredPage` with a maximum length `numberOfStages`. Counts and the proportion of sessions
that have this subsetted journey are also shown. Subsetted journeys that incorporate the first page or last page visited by a
session are flagged if selected. The device category/ies of the subsetted journeys are also included. The results
are presented in descending order, with the most popular subsetted user journey first.
Assumptions:
- Only exact matches to `desiredPage` are currently supported.
- Previous visits to `desiredPage` are ignored, only the last visit is used.
- If `desiredPageRemoveRefreshes` is TRUE, and there is more than one page type (`pageType`), only the first visit
in page refreshes to the same `desiredPage` and page type are used to determine which is the first/last visit.
- Journeys shorter than the number of desired stages (`numberOfStages`) are always included.
- GOV.UK search page paths are assumed to have the format `/search/{TYPE}?keywords={KEYWORDS}{...}`, where
`{TYPE}` is the GOV.UK search content type, `{KEYWORDS}` are the search keywords, where each keyword is
separated by `+`, and `{...}` are any other parts of the search query that are not keyword-related (if they
exist).
- GOV.UK search page titles are assumed to have the format `{KEYWORDS} - {TYPE} - GOV.UK`, where `{TYPE}` is the
GOV.UK search content type, and `{KEYWORDS}` are the search keywords.
- If `entrancePage` is FALSE, each journey (row) contains both instances where the entrance page is included,
and the entrance page is not included. Therefore, if there are more page paths than `numberOfStages`, this
will not be flagged.
- If `deviceAll` is set to TRUE, and `deviceCategories` set to 'desktop', 'mobile', and/or 'tablet', the
query will use `deviceAll` and ignore all other arguments.
*/
-- Declare query variables
DECLARE startDate DEFAULT "20210628";
DECLARE endDate DEFAULT "20210628";
DECLARE pageType DEFAULT ["PAGE", "EVENT"];
DECLARE firstHit DEFAULT TRUE;
DECLARE lastHit DEFAULT FALSE;
DECLARE deviceCategories DEFAULT ["desktop", "mobile", "tablet"];
DECLARE deviceAll DEFAULT FALSE;
DECLARE flagEvents DEFAULT TRUE;
DECLARE eventCategory DEFAULT TRUE;
DECLARE eventAction DEFAULT TRUE;
DECLARE eventLabel DEFAULT TRUE;
DECLARE truncatedSearches DEFAULT TRUE;
DECLARE desiredPage DEFAULT "/trade-tariff";
DECLARE queryString DEFAULT TRUE;
DECLARE desiredPageRemoveRefreshes DEFAULT TRUE;
DECLARE numberOfStages DEFAULT 3;
DECLARE entrancePage DEFAULT TRUE;
DECLARE exitPage DEFAULT TRUE;
WITH
get_session_data AS (
-- Get all the session data between `start_date` and `end_date`, subsetting for specific `page_type`s. As
-- some pages might be dropped by the subsetting, recalculate `hitNumber` as `journeyNumber` so the values
-- are sequential.
SELECT
CONCAT(fullVisitorId, "-", visitId) AS sessionId,
ROW_NUMBER() OVER (PARTITION BY fullVisitorId, visitId ORDER BY hits.hitNumber) AS journeyNumber,
ROW_NUMBER() OVER (PARTITION BY fullVisitorId, visitId ORDER BY hits.hitNumber DESC) AS revJourneyNumber,
hits.type,
device.deviceCategory,
hits.page.pagePath,
CONCAT(
IF(queryString, REGEXP_REPLACE(hits.page.pagePath, r'[?#].*', ''), hits.page.pagePath), -- modify this line to `hits.page.pageTitle` if required
IF(hits.type = "EVENT" AND flagEvents, IF ((eventCategory OR eventAction OR eventLabel), " [E", "[E]"), ""),
IF(hits.type = "EVENT" AND eventCategory, CONCAT(IF ((flagEvents), ", ", " ["), hits.eventInfo.eventCategory, IF ((eventAction OR eventLabel), "", "]")), ""),
IF(hits.type = "EVENT" AND eventAction, CONCAT(IF ((flagEvents OR eventCategory), ", ", " ["), hits.eventInfo.eventAction, IF ((eventLabel), "", "]")), ""),
IF(hits.type = "EVENT" AND eventLabel, CONCAT(IF ((flagEvents OR eventCategory OR eventAction), ", ", " ["), hits.eventInfo.eventLabel, "]"), "")
) AS pageId
FROM `govuk-bigquery-analytics.87773428.ga_sessions_*`
CROSS JOIN UNNEST(hits) AS hits
WHERE _TABLE_SUFFIX BETWEEN startDate AND endDate
AND hits.type IN UNNEST(pageType)
AND (CASE WHEN deviceAll THEN device.deviceCategory in UNNEST(["mobile", "desktop", "tablet"]) END
OR CASE WHEN deviceCategories IS NOT NULL THEN device.deviceCategory in UNNEST(deviceCategories) END )
),
get_search_content_type_and_keywords AS (
-- Extract the content type and keywords (if any) for GOV.UK search pages.
SELECT
*,
IFNULL(
REGEXP_EXTRACT(pagePath, r"^/search/([^ ?#/]+)"),
REGEXP_EXTRACT(pagePath, r"^.+ - ([^-]+) - GOV.UK$")
) AS searchContentType,
IFNULL(
REPLACE(REGEXP_EXTRACT(pagePath, r"^/search/[^ ?#/]+\?keywords=([^&]+)"), "+", " "),
REGEXP_EXTRACT(pagePath, r"^(.+)- [^-]+ - GOV.UK$")
) AS searchKeywords
FROM get_session_data
),
compile_search_entry AS (
-- Truncate the search page into an entry of the search content type and keywords (if any).
SELECT
* EXCEPT (searchContentType, searchKeywords),
CONCAT(
"Sitesearch (",
searchContentType,
"):",
COALESCE(searchKeywords, "none")
) AS search_entry
FROM get_search_content_type_and_keywords
),
replace_escape_characters AS (
-- Replace \ with / as otherwise following REGEXP_REPLACE will not execute
SELECT
*,
REGEXP_REPLACE(search_entry, r"\\", "/") AS searchEntryEscapeRemoved
FROM compile_search_entry
),
revise_search_pageids AS (
-- Replace `pageId` for search pages with the compiled entries if selected by the user.
SELECT
* REPLACE (
IFNULL(IF(truncatedSearches, (REGEXP_REPLACE(pageId, r"^/search/.*", searchEntryEscapeRemoved)), pageId), pageId) AS pageId
)
FROM replace_escape_characters
),
identify_page_refreshes AS (
-- Lag the page `type` and `pageId` columns. This helps identify page refreshes that can be removed in the
-- next CTE
SELECT
*,
LAG(type) OVER (PARTITION BY sessionId ORDER BY journeyNumber) AS lagType,
LAG(pageId) OVER (PARTITION BY sessionId ORDER BY journeyNumber) AS lagPageId
FROM revise_search_pageids
),
identify_hit_to_desired_page AS (
-- Get the first/last hit to the desired page. Ignores previous visits to the desired page. Page refreshes of the
-- desired page are also ignored if the correct option is declared.
SELECT
sessionId,
deviceCategory,
CASE
WHEN firstHit THEN MIN(journeyNumber)
WHEN lastHit THEN MAX(journeyNumber)
END AS desiredPageJourneyNumber
FROM identify_page_refreshes
WHERE pageId = desiredPage
AND IF(
desiredPageRemoveRefreshes,
(
lagPageId IS NULL
OR pageId != lagPageId
OR IF(ARRAY_LENGTH(pageType) > 1, pageId = lagPageId AND type != lagType, FALSE)
),
TRUE
)
GROUP BY sessionId, deviceCategory
),
subset_journey_to_hit_of_desired_page AS (
-- Subset all user journeys to the first/last hit of the desired page.
SELECT revise_search_pageids.*
FROM revise_search_pageids
INNER JOIN identify_hit_to_desired_page
ON revise_search_pageids.sessionId = identify_hit_to_desired_page.sessionId
AND revise_search_pageids.deviceCategory = identify_hit_to_desired_page.deviceCategory
AND revise_search_pageids.journeyNumber <= identify_hit_to_desired_page.desiredPageJourneyNumber
),
calculate_stages AS (
-- Calculate the number of stages from the first/last hit to the desired page, where the first/last hit to the desired
-- page is '1'.
SELECT
*,
ROW_NUMBER() OVER (PARTITION BY sessionId ORDER BY journeyNumber DESC) AS reverseDesiredPageJourneyNumber
FROM subset_journey_to_hit_of_desired_page
),
subset_journey_to_number_of_stages AS (
-- Compile the subsetted user journeys together for each session in reverse order (first/last hit to the desired
-- page first), delimited by " <<< ".
SELECT DISTINCT
sessionId,
deviceCategory,
MIN(journeyNumber) OVER (PARTITION BY sessionId) = 1 AS flagEntrance,
MIN(revJourneyNumber) OVER (PARTITION BY sessionId) = 1 AS flagExit,
STRING_AGG(pageId, " <<< ") OVER (
PARTITION BY sessionId
ORDER BY reverseDesiredPageJourneyNumber ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
) AS userJourney
FROM calculate_stages
WHERE reverseDesiredPageJourneyNumber <= numberOfStages
),
count_distinct_journeys AS (
-- Count the number of sessions for each distinct subsetted user journey, split by whether the sessions
-- entered on the first page of the subsetted journey or not
SELECT
CASE WHEN entrancePage
THEN CAST(flagEntrance AS STRING)
ELSE 'no flag'
END AS flagEntrance,
CASE WHEN exitPage
THEN CAST(flagExit AS STRING)
ELSE 'no flag'
END AS flagExit,
CASE WHEN deviceAll
THEN CAST(deviceCategory AS STRING)
ELSE ARRAY_TO_STRING(deviceCategories, ", ")
END AS deviceCategory,
userJourney,
(SELECT COUNT(sessionId) FROM subset_journey_to_number_of_stages) AS totalSessions,
COUNT(sessionId) AS countSessions
FROM subset_journey_to_number_of_stages
GROUP BY
flagEntrance, flagExit, deviceCategory, userJourney
)
SELECT
*,
countSessions / totalSessions AS proportionSessions
FROM count_distinct_journeys
ORDER BY countSessions DESC;
```
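The core subsetting the query performs — take the last `numberOfStages` pages up to the first/last visit to the desired page, then emit them in reverse order delimited by `" <<< "` — can be mirrored in a few lines of Python (an illustrative sketch only; it ignores refresh removal, device categories, and event suffixes):

```python
def subset_journey(pages, desired_page, number_of_stages, use_first_hit=True):
    # Indices of every visit to the desired page within the session
    hits = [i for i, p in enumerate(pages) if p == desired_page]
    if not hits:
        return None
    cut = hits[0] if use_first_hit else hits[-1]
    # Keep at most `number_of_stages` pages up to and including that visit,
    # then join them in reverse order (desired page first), as the SQL's STRING_AGG does
    window = pages[: cut + 1][-number_of_stages:]
    return " <<< ".join(reversed(window))

journey = ["/", "/search", "/trade-tariff", "/foo", "/trade-tariff"]
print(subset_journey(journey, "/trade-tariff", 3, use_first_hit=False))
# /trade-tariff <<< /foo <<< /trade-tariff
```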
# Input and Output
```
from __future__ import print_function
import numpy as np
author = "kyubyong. https://github.com/Kyubyong/numpy_exercises"
np.__version__
from datetime import date
print(date.today())
```
## NumPy binary files (NPY, NPZ)
Q1. Save x into `temp.npy` and load it.
```
x = np.arange(10)
np.save('temp.npy', x) # Actually you can omit the extension. If so, it will be added automatically.
# Check that the 'temp.npy' file exists.
import os
if os.path.exists('temp.npy'):
x2 = np.load('temp.npy')
print(np.array_equal(x, x2))
```
Q2. Save x and y into a single file 'temp.npz' and load it.
```
x = np.arange(10)
y = np.arange(11, 20)
np.savez('temp.npz', x=x, y=y)
# np.savez_compressed('temp.npz', x=x, y=y) # If you want to save x and y into a single file in compressed .npz format.
with np.load('temp.npz') as data:
x2 = data['x']
y2 = data['y']
print(np.array_equal(x, x2))
print(np.array_equal(y, y2))
```
## Text files
Q3. Save x to 'temp.txt' in string format and load it.
```
x = np.arange(10).reshape(2, 5)
header = 'num1 num2 num3 num4 num5'
np.savetxt('temp.txt', x, fmt="%d", header=header)
np.loadtxt('temp.txt')
```
Q4. Save `x`, `y`, and `z` to 'temp.txt' in string format line by line, then load it.
```
x = np.arange(10)
y = np.arange(11, 21)
z = np.arange(22, 32)
np.savetxt('temp.txt', (x, y, z), fmt='%d')
np.loadtxt('temp.txt')
```
Q5. Convert `x` into bytes, and load it as array.
```
x = np.array([1, 2, 3, 4])
x_bytes = x.tobytes()  # `tostring` is a deprecated alias of `tobytes`; it returns bytes, not str.
x2 = np.frombuffer(x_bytes, dtype=x.dtype)  # returns a 1-D array even if x is not.
print(np.array_equal(x, x2))
```
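Because `frombuffer` always yields a 1-D array, a multi-dimensional shape must be restored explicitly after the round trip — a quick sketch:

```python
import numpy as np

x = np.arange(6, dtype=np.int64).reshape(2, 3)
# tobytes() serialises the raw data; frombuffer recovers a flat 1-D array,
# so the original shape has to be reapplied
x2 = np.frombuffer(x.tobytes(), dtype=x.dtype).reshape(x.shape)
print(np.array_equal(x, x2))  # True
```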
Q6. Convert `a` into an ndarray and then convert it into a list again.
```
a = [[1, 2], [3, 4]]
x = np.array(a)
a2 = x.tolist()
print(a == a2)
```
## String formatting¶
Q7. Convert `x` to a string, and revert it.
```
x = np.arange(10).reshape(2,5)
x_str = np.array_str(x)
print(x_str, "\n", type(x_str))
x_str = x_str.replace("[", "") # [] must be stripped
x_str = x_str.replace("]", "")
x2 = np.array(x_str.split(), dtype=x.dtype).reshape(x.shape)  # `np.fromstring` with a separator is deprecated
assert np.array_equal(x, x2)
```
## Text formatting options
Q8. Print `x` such that all elements are displayed with precision=1, no suppress.
```
x = np.random.uniform(size=[10,100])
import sys
np.set_printoptions(precision=1, threshold=sys.maxsize, suppress=True)  # `threshold=np.nan` is no longer accepted
print(x)
```
## Base-n representations
Q9. Convert 12 into a binary number in string format.
```
out1 = np.binary_repr(12)
out2 = np.base_repr(12, base=2)
assert out1 == out2 # But out1 is better because it's much faster.
print(out1)
```
Q10. Convert 12 into a hexadecimal number in string format.
```
np.base_repr(12, base=16)
```
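Going the other way, Python's built-in `int` with an explicit base recovers the number from any of these string representations, no NumPy needed. A small stdlib-only sketch:

```python
# Round-trip between integers and base-n strings using only the stdlib.
n = 12

b = format(n, 'b')   # binary: '1100'
h = format(n, 'x')   # hexadecimal: 'c'

# int(s, base) parses the string back.
assert int(b, 2) == n
assert int(h, 16) == n

# int(s, 0) auto-detects '0b'/'0x' prefixes.
assert int('0b1100', 0) == 12
assert int('0xc', 0) == 12
print(b, h)  # 1100 c
```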
```
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
```
# Reflect Tables into SQLAlchemy ORM
```
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
```
# Exploratory Climate Analysis
```
# Design a query to retrieve the last 12 months of precipitation data and plot the results
# Calculate the date 1 year ago from the last data point in the database
# Perform a query to retrieve the data and precipitation scores
# Save the query results as a Pandas DataFrame and set the index to the date column
# Sort the dataframe by date
# Use Pandas Plotting with Matplotlib to plot the data
last_date = session.query(func.max(Measurement.date)).first()
print(last_date)
# Parse the date string instead of slicing characters by position
maxdate = dt.datetime.strptime(last_date[0], '%Y-%m-%d').date()
query_date = maxdate - dt.timedelta(days=365)
print("Query Date: ", query_date)
prcp_list = []
prcp_list = session.query(Measurement.date, Measurement.prcp).\
filter(Measurement.date > query_date).filter(Measurement.date <= maxdate).\
all()
print(prcp_list)
prcpdf = pd.DataFrame(prcp_list)
prcpdf['date'] = pd.to_datetime(prcpdf['date'])
prcpdf.head()
prcpdf2 = prcpdf.set_index('date')
prcpdf2.rename(columns = {'prcp': 'Precipitaion'}, inplace=True)
prcpdf2.head()
prcpdf2.plot(figsize=(15, 8),sort_columns=True,rot=50,use_index=True,legend=True)
plt.xlabel('Date')
plt.ylabel("Precipitation")
plt.title("Precipitation ", fontsize=20)
plt.savefig('barplot1')
plt.show()
# Use Pandas to calculate the summary statistics for the precipitation data
prcpdf2.describe()
# Design a query to show how many stations are available in this dataset?
stationcount = []
stationcount = session.query(Station.station).count()
print(stationcount)
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
s_results = session.query(Measurement.station, func.count(Measurement.station)).\
group_by(Measurement.station).\
order_by(func.count(Measurement.station).desc()).all()
s_results
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
best_station = s_results[0][0]
session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.station == best_station).all()
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
best_station
hist_list = []
hist_list = session.query(Measurement.station, Measurement.date, Measurement.tobs).\
filter(Measurement.station == best_station).filter(Measurement.date > query_date).\
filter(Measurement.date <= maxdate).\
all()
hist_df = pd.DataFrame(hist_list)
hist_df.head()
hist_df['date'] = pd.to_datetime(hist_df['date'])
hist_df.head()
hist_temps=hist_df['tobs']
plt.hist(hist_temps, bins=12)
plt.title("Temperature Observations ", fontsize=20)
plt.ylabel('Frequency', fontsize=16)
labels = ['tobs']
plt.legend(labels)
plt.savefig('histogram1')
plt.show()
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVG, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
def calc_temps(start_date, end_date):
c_results = session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).\
filter(Measurement.date <= end_date).all()
return c_results
calc_temps('2017-01-01', '2017-12-31')
results = calc_temps('2017-07-02', '2017-07-08')
results
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
fig, ax = plt.subplots(figsize=plt.figaspect(2.))
peak = results[0][2] - results[0][0]
bar = ax.bar(1, results[0][1], yerr = peak, color = "coral")
plt.title("Trip Avg Temp")
plt.ylabel("Temperature (F)")
fig.show()
```
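The date-window logic above can be sanity-checked against a toy in-memory SQLite database using only the stdlib. The table and column names below mirror the notebook, but the rows are made up for illustration:

```python
import sqlite3
import datetime as dt

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE measurement (date TEXT, prcp REAL)")
conn.executemany(
    "INSERT INTO measurement VALUES (?, ?)",
    [('2016-08-01', 0.1), ('2017-01-15', 0.5), ('2017-08-23', 0.2)],
)

# Find the most recent date, then go back exactly one year.
(last_date,) = conn.execute("SELECT MAX(date) FROM measurement").fetchone()
maxdate = dt.datetime.strptime(last_date, '%Y-%m-%d').date()
query_date = maxdate - dt.timedelta(days=365)

# ISO-formatted date strings compare correctly as plain text in SQLite.
rows = conn.execute(
    "SELECT date, prcp FROM measurement WHERE date > ? AND date <= ?",
    (query_date.isoformat(), maxdate.isoformat()),
).fetchall()
print(rows)  # only the two 2017 rows fall inside the 12-month window
```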
## Optional Challenge Assignment
```
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
# Use the start and end date to create a range of dates
# Strip off the year and save a list of %m-%d strings
# Loop through the list of %m-%d strings and calculate the normals for each date
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
# Plot the daily normals as an area plot with `stacked=False`
```
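The commented steps in the cell above (set trip dates, build a date range, strip the year, keep `%m-%d` strings) can be sketched with the stdlib alone. The trip dates here are hypothetical:

```python
import datetime as dt

# Hypothetical trip dates for illustration.
start = dt.date(2018, 1, 1)
end = dt.date(2018, 1, 7)

# Build the inclusive range of dates, then strip off the year.
trip_dates = [start + dt.timedelta(days=i) for i in range((end - start).days + 1)]
month_day = [d.strftime('%m-%d') for d in trip_dates]

print(month_day)  # ['01-01', '01-02', ..., '01-07']
# Each '%m-%d' string would then be passed to daily_normals(...) and the
# results collected into a list called `normals`.
```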
[](http://colab.research.google.com/github/ai2es/WAF_ML_Tutorial_Part1/blob/main/colab_notebooks/Notebook10_AHyperparameterSearch.ipynb)
# Notebook 10: A hyperparameter search
### Goal: Show an example of hyperparameter tuning
#### Background
If you look at any of the ML method documentation, you will find there are a lot of switches and knobs you can play with. See this page on ```RandomForestRegressor``` [click](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html). Most of these switches and knobs are considered *hyperparameters*. In other words, these are parameters you can change that may or may not influence the machine learning model performance. Since every machine learning task is different, you often can *tune* your choice of these hyperparameters to get a better performing model. This notebook sets out to show you one way to do a hyperparameter search with random forest.
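Mechanically, tuning just means enumerating combinations of candidate values, scoring each one, and keeping the best. A stdlib-only sketch of that loop (the grid values and the scoring function here are made up stand-ins for "train and evaluate a model"):

```python
import itertools

# Candidate hyperparameter values (hypothetical, for illustration).
param_grid = {'max_depth': [1, 2, 4, 8], 'n_estimators': [10, 50, 100]}

def score(params):
    # Stand-in for "train a model and evaluate on validation data";
    # a made-up function that peaks at depth 4 with 50 trees.
    return -(params['max_depth'] - 4) ** 2 - (params['n_estimators'] - 50) ** 2 / 100

best = None
for values in itertools.product(*param_grid.values()):
    params = dict(zip(param_grid.keys(), values))
    s = score(params)
    if best is None or s > best[0]:
        best = (s, params)

print(best[1])  # {'max_depth': 4, 'n_estimators': 50}
```

scikit-learn packages this same pattern (plus cross-validation) as `GridSearchCV`, but doing it by hand, as this notebook does, makes the bookkeeping explicit.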
#### Step 0: Get the github repo (we need some of the functions there)
The first step with all of these Google Colab notebooks will be to grab the github repo and cd into the notebooks directory.
To run things from the command line, put a ```!``` before your code
```
#get the github repo
!git clone https://github.com/ai2es/WAF_ML_Tutorial_Part1.git
#cd into the repo so the paths work
import os
os.chdir('/content/WAF_ML_Tutorial_Part1/jupyter_notebooks/')
```
# Import packages and load data for Regression
In the paper we do this hyperparameter tuning with the random forest regression example. So let's load in the regression dataset.
```
###################################### Load training data ######################################
#import some helper functions for our other directory.
import sys
sys.path.insert(1, '../scripts/')
from aux_functions import load_n_combine_df
import numpy as np
(X_train,y_train),(X_validate,y_validate),_ = load_n_combine_df(path_to_data='../datasets/sevir/',features_to_keep=np.arange(0,36,1),class_labels=False,dropzeros=True)
#remember since we have all 36 predictors we need to scale the inputs
from sklearn.preprocessing import StandardScaler
#create scaling object
scaler = StandardScaler()
#fit scaler to training data
scaler.fit(X_train)
#transform feature data into scaled space
X_train = scaler.transform(X_train)
X_validate = scaler.transform(X_validate)
################################################################################################
#import other packages we will need
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patheffects as path_effects
pe1 = [path_effects.withStroke(linewidth=2,
foreground="k")]
pe2 = [path_effects.withStroke(linewidth=2,
foreground="w")]
```
# Determine what parameter 'sweeps' you wish to do
Right now I would like you to go check out the random forest document page: [here](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html).
We will go ahead and systematically vary some of these hyperparameters. More specifically, we will play with
1. Tree depth (i.e., number of branches).
2. Number of Trees
While one could always do more hyperparameter tests, for now this will get you started on how to do it in general. Let's vary the depth of trees from 1 to 10, incrementing by one. For the number of trees, let's use 1, 5, 10, 25, 50 and 100. Note that we used 1000 in the paper, but the deeper and more numerous the trees, the longer the models take to train. So that this tutorial doesn't take forever, we cap it at 100.
```
#vary depth of trees from 1 to 10.
depth = np.arange(1,11,1)
#vary the number of trees in the forest from 1 to 100
n_tree = [1,5,10,25,50,100]
#build out the parameter sets we will test.
sets = []
#for each number of trees, set their depth
for n in n_tree:
for d in depth:
sets.append([n,d])
print(sets,len(sets))
```
As you can see, we have built out 60 different parameter sets to try out!
To make the code a bit more concise, let's define a function that will calculate all our metrics for us.
```
from gewitter_functions import get_bias,get_mae,get_rmse,get_r2
#define a function to calculate all the metrics, and return a vector with all 4.
def get_metrics(model,X,y):
yhat = model.predict(X)
mae = get_mae(y,yhat)
rmse = get_rmse(y,yhat)
bias = get_bias(y,yhat)
r2 = get_r2(y,yhat)
return np.array([bias,mae,rmse,r2])
```
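`gewitter_functions` ships with the tutorial repo; for reference, the four metrics follow their textbook definitions and could be sketched from scratch like this (assuming the bias convention is mean of prediction minus truth, which may differ from the repo's implementation):

```python
import math

def get_metrics_sketch(y, yhat):
    """Bias, MAE, RMSE and R^2 from their textbook definitions."""
    n = len(y)
    errors = [p - t for t, p in zip(y, yhat)]      # prediction minus truth (assumed convention)
    bias = sum(errors) / n
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    y_mean = sum(y) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((t - y_mean) ** 2 for t in y)
    r2 = 1 - ss_res / ss_tot
    return bias, mae, rmse, r2

# A perfect prediction has zero error and R^2 of exactly 1.
print(get_metrics_sketch([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # (0.0, 0.0, 0.0, 1.0)
```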
Okay, now we are ready to do the hyperparameter sweep.
WARNING, this took 60 mins on Google Colab with n_jobs set to 4. So if you don't have that kind of time, go ahead and jump past this cell and just load the pre-computed metrics I have.
```
# import the progress bar so we can see how long this will take
import tqdm
#import RandomForest
from sklearn.ensemble import RandomForestRegressor
#do our hyperparameter search!
# for each set of parameters,train a new model and evaluate it
for i,s in enumerate(tqdm.tqdm(sets)):
#initialize the model
reg = RandomForestRegressor(n_estimators=s[0],max_depth=s[1],n_jobs=4)
#train the model
reg.fit(X_train,y_train)
#get the metrics on both the training dataset and the validation dataset.
met_train = get_metrics(reg,X_train,y_train)
met_val = get_metrics(reg,X_validate,y_validate)
#this if statement lets us stack the observations up.
if i ==0:
#if the first loop, rename things
all_scores_val = met_val
all_scores_train = met_train
else:
#otherwise, stack it on
all_scores_val = np.vstack([all_scores_val,met_val])
all_scores_train = np.vstack([all_scores_train,met_train])
del reg
#import pandas for easy writing and reading functions
import pandas as pd
#this takes a hot min to run, so lets save them when we are done.
df_val = pd.DataFrame(all_scores_val,columns=['Bias','MeanAbsoluteError','RootMeanSquaredError','Rsquared'])
df_val.to_csv('../datasets/hyperparametersearch/validation_metrics.csv',index=False)
df_train = pd.DataFrame(all_scores_train,columns=['Bias','MeanAbsoluteError','RootMeanSquaredError','Rsquared'])
df_train.to_csv('../datasets/hyperparametersearch/train_metrics.csv',index=False)
```
Since that took a long time, we don't want to do it EVERY time we load this notebook, so let's save the results out to a file (i.e., a comma-separated values file) to allow for quick and easy reading later on. (This dataset is already made for you in the ```datasets``` folder in the repo.)
```
df_val = pd.read_csv('../datasets/hyperparametersearch/validation_metrics.csv').to_numpy()
df_train = pd.read_csv('../datasets/hyperparametersearch/train_metrics.csv').to_numpy()
```
And there we go: we have successfully trained 60 models with various configurations to see if any particular configuration does better than another. So, how do we check which one is doing best? We can look at the validation dataset results, here named ```all_scores_val```. This will show us the general performance on the validation data, so let's take a look at that now. The next cell contains some code to plot the matrix we saved out in the last cell.
The figure I want to make has a metric on each panel. The x-axis will be the tree depth, and each color will be the number of trees.
```
#matplotlib things
import matplotlib.pyplot as plt
import matplotlib
from matplotlib.ticker import (MultipleLocator, FormatStrFormatter,
AutoMinorLocator)
#make default resolution of figures much higher (i.e., High definition)
%config InlineBackend.figure_format = 'retina'
#plot parameters that I personally like, feel free to make these your own.
matplotlib.rcParams['axes.facecolor'] = [0.9,0.9,0.9] #makes a grey background to the axis face
matplotlib.rcParams['axes.labelsize'] = 14 #fontsize in pts
matplotlib.rcParams['axes.titlesize'] = 14
matplotlib.rcParams['xtick.labelsize'] = 12
matplotlib.rcParams['ytick.labelsize'] = 12
matplotlib.rcParams['legend.fontsize'] = 12
matplotlib.rcParams['legend.facecolor'] = 'w'
matplotlib.rcParams['savefig.transparent'] = False
#make a 2 row, 2 column plot (4 total subplots)
fig,axes = plt.subplots(2,2,figsize=(5,5))
#set background color to white, otherwise its transparent when you copy paste it out of this notebook.
fig.set_facecolor('w')
#lets ravel the 2x2 axes matrix to a 4x1 shape. This makes looping easier (in my opinion).
axes = axes.ravel()
########### colormap stuff ###########
# I want to color each line by the number of trees. So this bit of code does that for us.
#get func to plot it
from aux_functions import make_colorbar
#grab colormap
cmap = matplotlib.cm.cividis
#set up the boundaries to each color
norm = matplotlib.colors.BoundaryNorm(n_tree, cmap.N)
#make a mappable so we can get the color based on the number of trees.
scalarMap = matplotlib.cm.ScalarMappable(norm=norm, cmap=cmap)
######################################
titles = ['Bias','MAE','RMSE','$R^{2}$']
for j,ax in enumerate(axes):
for i,ii in enumerate(n_tree):
color_choice =scalarMap.to_rgba(ii)
ax.plot(np.arange(1,11,1),df_val[(i*10):(i+1)*10,j],'o-',color=color_choice,ms=3)
# ax.plot(np.arange(1,11,1),df_train[(i*10):(i+1)*10,j],'o--',color=color_choice,ms=3)
ax.set_title(titles[j])
ax.set_xlim([0,12])
ax.grid('on')
# ax.xaxis.grid(True, which='minor')
# For the minor ticks, use no labels; default NullFormatter.
ax.xaxis.set_minor_locator(MultipleLocator(1))
ax.yaxis.set_minor_locator(MultipleLocator(2.5))
axes[2].set_xlabel('Tree Depth')
axes[3].set_xlabel('Tree Depth')
########### draw and fill the colorbar ###########
ax_cbar = fig.add_axes([1.025, 0.4, 0.015,0.33])
cbar = make_colorbar(ax_cbar,1,100,cmap)
cbar.set_label('# of trees')
##################################################
plt.tight_layout()
```
A reminder that each x-axis is the tree depth (i.e., number of decisions; branches) and the y-axis is the metric indicated in the title. The color corresponds to the number of trees used in the random forest, with 1 being the darkest color and 100 being the lightest.
As we can see, and as noted in the paper, beyond using just 1 tree (which would make it a plain decision tree) the number of trees doesn't have a large effect on overall performance, while the tree depth has a more appreciable effect. While this is helpful for finding which models generally perform better than others, there is a group of models with tree depth greater than 5 that all seem to have similar performance.
In order to truly assess which one is best to use, we need to include the training data metrics. This is how we will diagnose when a model becomes *overfit*. Overfitting is when performance on the training data is very good but performance on the validation data is not. To diagnose it, you look for the point on the plot where training performance continues to improve while validation performance starts to degrade. So let's add the training curves to the same plot as above.
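That diagnostic can be stated as a tiny rule: pick the depth where validation performance peaks, even though training performance keeps climbing past it. A sketch with made-up score curves (the numbers below are illustrative, not from this dataset):

```python
# Hypothetical train/validation R^2 curves versus tree depth, illustrating
# the overfitting diagnostic: training keeps improving while validation
# plateaus and then degrades.
depths   = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
train_r2 = [0.30, 0.45, 0.55, 0.62, 0.68, 0.73, 0.78, 0.83, 0.88, 0.93]
val_r2   = [0.28, 0.42, 0.50, 0.55, 0.58, 0.59, 0.60, 0.61, 0.60, 0.57]

# Choose the depth where the validation score peaks.
best_idx = val_r2.index(max(val_r2))
print('pick depth', depths[best_idx])  # pick depth 8
```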
```
#make a 2 row, 2 column plot (4 total subplots)
fig,axes = plt.subplots(2,2,figsize=(5,5))
#set background color to white, otherwise its transparent when you copy paste it out of this notebook.
fig.set_facecolor('w')
#lets ravel the 2x2 axes matrix to a 4x1 shape. This makes looping easier (in my opinion).
axes = axes.ravel()
########### colormap stuff ###########
# I want to color each line by the number of trees. So this bit of code does that for us.
#get func to plot it
from aux_functions import make_colorbar
#grab colormap
cmap = matplotlib.cm.cividis
#set up the boundaries to each color
norm = matplotlib.colors.BoundaryNorm(n_tree, cmap.N)
#make a mappable so we can get the color based on the number of trees.
scalarMap = matplotlib.cm.ScalarMappable(norm=norm, cmap=cmap)
######################################
titles = ['Bias','MAE','RMSE','$R^{2}$']
for j,ax in enumerate(axes):
for i,ii in enumerate(n_tree):
color_choice =scalarMap.to_rgba(ii)
ax.plot(np.arange(1,11,1),df_val[(i*10):(i+1)*10,j],'o-',color=color_choice,ms=3)
ax.plot(np.arange(1,11,1),df_train[(i*10):(i+1)*10,j],'^--',color=color_choice,ms=3)
ax.axvline(8,ls='--',color='k')
ax.set_title(titles[j])
ax.set_xlim([0,12])
ax.grid('on')
# ax.xaxis.grid(True, which='minor')
# For the minor ticks, use no labels; default NullFormatter.
ax.xaxis.set_minor_locator(MultipleLocator(1))
ax.yaxis.set_minor_locator(MultipleLocator(2.5))
axes[2].set_xlabel('Tree Depth')
axes[3].set_xlabel('Tree Depth')
########### draw and fill the colorbar ###########
ax_cbar = fig.add_axes([1.025, 0.4, 0.015,0.33])
cbar = make_colorbar(ax_cbar,1,100,cmap)
cbar.set_label('# of trees')
##################################################
plt.tight_layout()
```
Okay, now we have the training curves as dashed lines with triangle markers. I have also drawn a vertical dashed black line at the tree depth that seems to maximize performance before overfitting. We can see that at a tree depth of 8, the $R^{2}$ value for the training data now outperforms the validation data, while the other metrics are effectively the same as at depths 5, 6, and 7. That's why we suggest a random forest with more than 1 tree and a depth of 8 is likely a good model to continue with.
I hope this was enough to give you an example of how to do hyperparameter tuning. I encourage you to go ahead and try it with the other models!
```
# Load the tensorboard notebook extension
%load_ext tensorboard
cd /tf/src/data/gpt-2/
! pip3 install -r requirements.txt
! python3 download_model.py 117M
import fire
import json
import os
import numpy as np
import tensorflow as tf
import regex as re
from functools import lru_cache
from statistics import median
import argparse
import time
import tqdm
from tensorflow.core.protobuf import rewriter_config_pb2
import glob
import pickle
tf.__version__
```
# Encoding
```
"""Byte pair encoding utilities"""
@lru_cache()
def bytes_to_unicode():
"""
Returns a list of utf-8 bytes and a corresponding list of unicode strings.
The reversible bpe codes work on unicode strings.
This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
This is a significant percentage of your normal, say, 32K bpe vocab.
To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
And avoids mapping to whitespace/control characters the bpe code barfs on.
"""
bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
cs = bs[:]
n = 0
for b in range(2**8):
if b not in bs:
bs.append(b)
cs.append(2**8+n)
n += 1
cs = [chr(n) for n in cs]
return dict(zip(bs, cs))
def get_pairs(word):
"""Return set of symbol pairs in a word.
Word is represented as tuple of symbols (symbols being variable-length strings).
"""
pairs = set()
prev_char = word[0]
for char in word[1:]:
pairs.add((prev_char, char))
prev_char = char
return pairs
class Encoder:
def __init__(self, encoder, bpe_merges, errors='replace'):
self.encoder = encoder
self.decoder = {v:k for k,v in self.encoder.items()}
self.errors = errors # how to handle errors in decoding
self.byte_encoder = bytes_to_unicode()
self.byte_decoder = {v:k for k, v in self.byte_encoder.items()}
self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges))))
self.cache = {}
# Should have added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions
self.pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""")
def bpe(self, token):
if token in self.cache:
return self.cache[token]
word = tuple(token)
pairs = get_pairs(word)
if not pairs:
return token
while True:
bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf')))
if bigram not in self.bpe_ranks:
break
first, second = bigram
new_word = []
i = 0
while i < len(word):
try:
j = word.index(first, i)
new_word.extend(word[i:j])
i = j
except ValueError: # `first` does not occur in word[i:]
new_word.extend(word[i:])
break
if word[i] == first and i < len(word)-1 and word[i+1] == second:
new_word.append(first+second)
i += 2
else:
new_word.append(word[i])
i += 1
new_word = tuple(new_word)
word = new_word
if len(word) == 1:
break
else:
pairs = get_pairs(word)
word = ' '.join(word)
self.cache[token] = word
return word
def encode(self, text):
bpe_tokens = []
for token in re.findall(self.pat, text):
token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
return bpe_tokens
def decode(self, tokens):
text = ''.join([self.decoder[token] for token in tokens])
text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors=self.errors)
return text
def get_encoder(model_name, models_dir):
with open(os.path.join(models_dir, model_name, 'encoder.json'), 'r') as f:
encoder = json.load(f)
with open(os.path.join(models_dir, model_name, 'vocab.bpe'), 'r', encoding="utf-8") as f:
bpe_data = f.read()
bpe_merges = [tuple(merge_str.split()) for merge_str in bpe_data.split('\n')[1:-1]]
return Encoder(
encoder=encoder,
bpe_merges=bpe_merges,
)
```
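The two small helpers above are easy to sanity-check in isolation. A self-contained sketch (restating them so the cell runs on its own, without the `regex` dependency of the full `Encoder`):

```python
from functools import lru_cache

@lru_cache()
def bytes_to_unicode():
    # Bijection between the 256 byte values and printable unicode characters,
    # as in the cell above.
    bs = list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1))
    cs = bs[:]
    n = 0
    for b in range(2 ** 8):
        if b not in bs:
            bs.append(b)
            cs.append(2 ** 8 + n)
            n += 1
    return dict(zip(bs, [chr(c) for c in cs]))

def get_pairs(word):
    # All adjacent symbol pairs in a word (a tuple of symbols).
    pairs = set()
    prev = word[0]
    for ch in word[1:]:
        pairs.add((prev, ch))
        prev = ch
    return pairs

mapping = bytes_to_unicode()
assert len(mapping) == 256                   # every byte value is covered
assert len(set(mapping.values())) == 256     # ...and the mapping is invertible
print(get_pairs(('h', 'e', 'l', 'l', 'o')))  # {('h','e'), ('e','l'), ('l','l'), ('l','o')}
```

The invertibility check matters because `Encoder.decode` relies on `byte_decoder`, the inverse of this mapping, to reconstruct the original bytes.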
# Model
```
class HParams():
n_vocab=50257
n_ctx=1024
n_embd=768
n_head=12
n_layer=12
def __init__(self, n_vocab, n_ctx, n_embd, n_head, n_layer):
self.n_vocab = n_vocab
self.n_ctx = n_ctx
self.n_embd = n_embd
self.n_head = n_head
self.n_layer = n_layer
def default_hparams():
return HParams(
n_vocab=50257,
n_ctx=1024,
n_embd=768,
n_head=12,
n_layer=12,
)
def shape_list(x):
"""Deal with dynamic shape in tensorflow cleanly."""
static = x.shape.as_list()
dynamic = tf.shape(input=x)
return [dynamic[i] if s is None else s for i, s in enumerate(static)]
def gelu(x):
return 0.5 * x * (1 + tf.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * tf.pow(x, 3))))
def norm(x, scope, *, axis=-1, epsilon=1e-5):
"""Normalize to mean = 0, std = 1, then do a diagonal affine transform."""
with tf.compat.v1.variable_scope(scope):
n_state = x.shape[-1]
g = tf.compat.v1.get_variable('g', [n_state], initializer=tf.compat.v1.constant_initializer(1), use_resource=False)
b = tf.compat.v1.get_variable('b', [n_state], initializer=tf.compat.v1.constant_initializer(0), use_resource=False)
u = tf.reduce_mean(input_tensor=x, axis=axis, keepdims=True)
s = tf.reduce_mean(input_tensor=tf.square(x-u), axis=axis, keepdims=True)
x = (x - u) * tf.math.rsqrt(s + epsilon)
x = x*g + b
return x
def split_states(x, n):
"""Reshape the last dimension of x into [n, x.shape[-1]/n]."""
*start, m = shape_list(x)
return tf.reshape(x, start + [n, m//n])
def merge_states(x):
"""Smash the last two dimensions of x into a single dimension."""
*start, a, b = shape_list(x)
return tf.reshape(x, start + [a*b])
def conv1d(x, scope, nf, *, w_init_stdev=0.02):
with tf.compat.v1.variable_scope(scope):
*start, nx = shape_list(x)
w = tf.compat.v1.get_variable('w', [1, nx, nf], initializer=tf.compat.v1.random_normal_initializer(stddev=w_init_stdev), use_resource=False)
b = tf.compat.v1.get_variable('b', [nf], initializer=tf.compat.v1.constant_initializer(0), use_resource=False)
c = tf.reshape(tf.matmul(tf.reshape(x, [-1, nx]), tf.reshape(w, [-1, nf]))+b, start+[nf])
return c
def attention_mask(nd, ns, *, dtype):
"""1's in the lower triangle, counting from the lower right corner.
Same as tf.matrix_band_part(tf.ones([nd, ns]), -1, ns-nd), but doesn't produce garbage on TPUs.
"""
i = tf.range(nd)[:,None]
j = tf.range(ns)
m = i >= j - ns + nd
return tf.cast(m, dtype)
def attn(x, scope, n_state, *, past, hparams):
assert x.shape.ndims == 3 # Should be [batch, sequence, features]
assert n_state % hparams.n_head == 0
if past is not None:
assert past.shape.ndims == 5 # Should be [batch, 2, heads, sequence, features], where 2 is [k, v]
def split_heads(x):
# From [batch, sequence, features] to [batch, heads, sequence, features]
return tf.transpose(a=split_states(x, hparams.n_head), perm=[0, 2, 1, 3])
def merge_heads(x):
# Reverse of split_heads
return merge_states(tf.transpose(a=x, perm=[0, 2, 1, 3]))
def mask_attn_weights(w):
# w has shape [batch, heads, dst_sequence, src_sequence], where information flows from src to dst.
_, _, nd, ns = shape_list(w)
b = attention_mask(nd, ns, dtype=w.dtype)
b = tf.reshape(b, [1, 1, nd, ns])
w = w*b - tf.cast(1e10, w.dtype)*(1-b)
return w
def multihead_attn(q, k, v):
# q, k, v have shape [batch, heads, sequence, features]
w = tf.matmul(q, k, transpose_b=True)
w = w * tf.math.rsqrt(tf.cast(v.shape[-1], w.dtype))
w = mask_attn_weights(w)
w = tf.nn.softmax(w, axis=-1)
a = tf.matmul(w, v)
return a
with tf.compat.v1.variable_scope(scope):
c = conv1d(x, 'c_attn', n_state*3)
q, k, v = map(split_heads, tf.split(c, 3, axis=2))
present = tf.stack([k, v], axis=1)
if past is not None:
pk, pv = tf.unstack(past, axis=1)
k = tf.concat([pk, k], axis=-2)
v = tf.concat([pv, v], axis=-2)
a = multihead_attn(q, k, v)
a = merge_heads(a)
a = conv1d(a, 'c_proj', n_state)
return a, present
def mlp(x, scope, n_state, *, hparams):
with tf.compat.v1.variable_scope(scope):
nx = x.shape[-1]
h = gelu(conv1d(x, 'c_fc', n_state))
h2 = conv1d(h, 'c_proj', nx)
return h2
def block(x, scope, *, past, hparams):
with tf.compat.v1.variable_scope(scope):
nx = x.shape[-1]
a, present = attn(norm(x, 'ln_1'), 'attn', nx, past=past, hparams=hparams)
x = x + a
m = mlp(norm(x, 'ln_2'), 'mlp', nx*4, hparams=hparams)
x = x + m
return x, present
def past_shape(*, hparams, batch_size=None, sequence=None):
return [batch_size, hparams.n_layer, 2, hparams.n_head, sequence, hparams.n_embd // hparams.n_head]
def expand_tile(value, size):
"""Add a new axis of given size."""
value = tf.convert_to_tensor(value=value, name='value')
ndims = value.shape.ndims
return tf.tile(tf.expand_dims(value, axis=0), [size] + [1]*ndims)
def positions_for(tokens, past_length):
batch_size = tf.shape(input=tokens)[0]
nsteps = tf.shape(input=tokens)[1]
return expand_tile(past_length + tf.range(nsteps), batch_size)
def clf(x, ny, w_init=tf.compat.v1.random_normal_initializer(stddev=0.02), b_init=tf.compat.v1.constant_initializer(0), train=False):
with tf.compat.v1.variable_scope('clf'):
nx = shape_list(x)[-1]
w = tf.compat.v1.get_variable("w", [nx, ny], initializer=w_init)
b = tf.compat.v1.get_variable("b", [ny], initializer=b_init)
return tf.matmul(x, w)+b
def model(hparams, X, past=None, scope='model', reuse=tf.compat.v1.AUTO_REUSE):
with tf.compat.v1.variable_scope(scope, reuse=reuse):
results = {}
batch, sequence = shape_list(X)
wpe = tf.compat.v1.get_variable('wpe', [hparams.n_ctx, hparams.n_embd],
initializer=tf.compat.v1.random_normal_initializer(stddev=0.01), use_resource=False)
wte = tf.compat.v1.get_variable('wte', [hparams.n_vocab, hparams.n_embd],
initializer=tf.compat.v1.random_normal_initializer(stddev=0.02), use_resource=False)
past_length = 0 if past is None else tf.shape(input=past)[-2]
h = tf.gather(wte, X) + tf.gather(wpe, positions_for(X, past_length))
# Transformer
presents = []
pasts = tf.unstack(past, axis=1) if past is not None else [None] * hparams.n_layer
assert len(pasts) == hparams.n_layer
for layer, past in enumerate(pasts):
h, present = block(h, 'h%d' % layer, past=past, hparams=hparams)
presents.append(present)
results['present'] = tf.stack(presents, axis=1)
h = norm(h, 'ln_f')
# Classification on h vector (from paper https://openai.com/blog/language-unsupervised/)
clf_h = tf.reshape(h, [-1, hparams.n_embd])
pool_idx = tf.cast(tf.argmax(tf.cast(tf.equal(X[:, :, 0], hparams.n_vocab), tf.float32), 1), tf.int32)
clf_h = tf.gather(clf_h, tf.range(shape_list(X)[0], dtype=tf.int32)*n_ctx+pool_idx)
clf_h = tf.reshape(clf_h, [-1, 2, hparams.n_embd])
if train and clf_pdrop > 0:
shape = shape_list(clf_h)
shape[1] = 1
clf_h = tf.nn.dropout(clf_h, 1-clf_pdrop, shape)
clf_h = tf.reshape(clf_h, [-1, n_embd])
clf_logits = clf(clf_h, 1, train=train)
clf_logits = tf.reshape(clf_logits, [-1, 2])
results['clf_logits'] = clf_logits
# Language model loss. Do tokens <n predict token n?
h_flat = tf.reshape(h, [batch*sequence, hparams.n_embd])
logits = tf.matmul(h_flat, wte, transpose_b=True)
logits = tf.reshape(logits, [batch, sequence, hparams.n_vocab])
results['logits'] = logits
return results
def clf_model(X, M, Y, train=False, reuse=False): # GPT-1-style classification variant; named separately so it does not shadow model() above, which sample_sequence depends on
with tf.variable_scope('model', reuse=reuse):
we = tf.get_variable("we", [n_vocab+n_special+n_ctx, n_embd], initializer=tf.random_normal_initializer(stddev=0.02))
we = dropout(we, embd_pdrop, train)
X = tf.reshape(X, [-1, n_ctx, 2])
M = tf.reshape(M, [-1, n_ctx])
h = embed(X, we)
for layer in range(n_layer):
h = block(h, 'h%d'%layer, train=train, scale=True)
lm_h = tf.reshape(h[:, :-1], [-1, n_embd])
lm_logits = tf.matmul(lm_h, we, transpose_b=True)
lm_losses = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=lm_logits, labels=tf.reshape(X[:, 1:, 0], [-1]))
lm_losses = tf.reshape(lm_losses, [shape_list(X)[0], shape_list(X)[1]-1])
lm_losses = tf.reduce_sum(lm_losses*M[:, 1:], 1)/tf.reduce_sum(M[:, 1:], 1)
clf_h = tf.reshape(h, [-1, n_embd])
pool_idx = tf.cast(tf.argmax(tf.cast(tf.equal(X[:, :, 0], clf_token), tf.float32), 1), tf.int32)
clf_h = tf.gather(clf_h, tf.range(shape_list(X)[0], dtype=tf.int32)*n_ctx+pool_idx)
clf_h = tf.reshape(clf_h, [-1, 2, n_embd])
if train and clf_pdrop > 0:
shape = shape_list(clf_h)
shape[1] = 1
clf_h = tf.nn.dropout(clf_h, 1-clf_pdrop, shape)
clf_h = tf.reshape(clf_h, [-1, n_embd])
clf_logits = clf(clf_h, 1, train=train)
clf_logits = tf.reshape(clf_logits, [-1, 2])
clf_losses = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=clf_logits, labels=Y)
return clf_logits, clf_losses, lm_losses
```
# Sample from Model
```
def top_k_logits(logits, k):
if k == 0:
# no truncation
return logits
def _top_k():
values, _ = tf.nn.top_k(logits, k=k)
min_values = values[:, -1, tf.newaxis]
return tf.compat.v1.where(
logits < min_values,
tf.ones_like(logits, dtype=logits.dtype) * -1e10,
logits,
)
return tf.cond(
pred=tf.equal(k, 0),
true_fn=lambda: logits,
false_fn=lambda: _top_k(),
)
def sample_sequence(*, hparams, length, start_token=None, batch_size=None, context=None, temperature=1, top_k=0):
if start_token is None:
assert context is not None, 'Specify exactly one of start_token and context!'
else:
assert context is None, 'Specify exactly one of start_token and context!'
context = tf.fill([batch_size, 1], start_token)
def step(hparams, tokens, past=None):
lm_output = model(hparams=hparams, X=tokens, past=past, reuse=tf.compat.v1.AUTO_REUSE)
logits = lm_output['logits'][:, :, :hparams.n_vocab]
presents = lm_output['present']
presents.set_shape(past_shape(hparams=hparams, batch_size=batch_size))
return {
'logits': logits,
'presents': presents,
}
def body(past, prev, output):
next_outputs = step(hparams, prev, past=past)
logits = next_outputs['logits'][:, -1, :] / tf.cast(temperature, dtype=tf.float32)
logits = top_k_logits(logits, k=top_k)
samples = tf.random.categorical(logits=logits, num_samples=1, dtype=tf.int32)
return [
next_outputs['presents'] if past is None else tf.concat([past, next_outputs['presents']], axis=-2),
samples,
tf.concat([output, samples], axis=1)
]
past, prev, output = body(None, context, context)
def cond(*args):
return True
_, _, tokens = tf.while_loop(
cond=cond, body=body,
maximum_iterations=length - 1,
loop_vars=[
past,
prev,
output
],
shape_invariants=[
tf.TensorShape(past_shape(hparams=hparams, batch_size=batch_size)),
tf.TensorShape([batch_size, None]),
tf.TensorShape([batch_size, None]),
],
back_prop=False,
)
return tokens
from pathlib import Path
def load_dataset(enc, path):
paths = []
if os.path.isfile(path):
# Simple file
paths.append(path)
elif os.path.isdir(path):
# Directory
for i, (dirpath, _, fnames) in enumerate(os.walk(path)):
for fname in fnames:
paths.append(os.path.join(dirpath, fname))
else:
# Assume glob
paths = glob.glob(path)
token_chunks = []
raw_text = ''
for i, path in enumerate(tqdm.tqdm(paths)):
# if i >= 10000: break
try:
with open(path, 'r') as fp:
raw_text += fp.read()
raw_text += '<|endoftext|>'
tokens = np.stack(enc.encode(raw_text))
token_chunks.append(tokens)
raw_text = ''
except Exception as e:
print(e)
return token_chunks
def binary_search(f, lo, hi):
if f(lo) or not f(hi):
return None
while hi > lo + 1:
mid = (lo + hi) // 2
if f(mid):
hi = mid
else:
lo = mid
return hi
class Sampler(object):
"""Fairly samples a slice from a set of variable sized chunks.
'Fairly' means that the distribution is the same as sampling from one concatenated chunk,
but without crossing chunk boundaries."""
def __init__(self, chunks, seed=None):
self.chunks = chunks
self.total_size = sum(chunk.shape[0] for chunk in chunks)
self.boundaries = [0]
for i in range(len(chunks)):
self.boundaries.append(self.boundaries[-1] + chunks[i].shape[0])
self.rs = np.random.RandomState(seed=seed)
def sample(self, length):
assert length < self.total_size // len(
self.chunks
), "Dataset files are too small to sample {} tokens at a time".format(
length)
while True:
index = self.rs.randint(0, self.total_size - length - 1)
i = binary_search(lambda j: self.boundaries[j] > index, 0,
len(self.boundaries) - 1) - 1
if self.boundaries[i + 1] > index + length:
within_chunk = index - self.boundaries[i]
return self.chunks[i][within_chunk:within_chunk + length]
class Args():
def __init__(self, trn_dataset, model_name, combine, batch_size, learning_rate, optimizer, noise, top_k, top_p, run_name, sample_every, sample_length, sample_num, save_every, val_dataset, val_batch_size, val_batch_count, val_every, pretrained, iterations):
self.trn_dataset = trn_dataset
self.model_name = model_name
self.combine = combine
self.batch_size = batch_size
self.learning_rate = learning_rate
self.optimizer = optimizer
self.noise = noise
self.top_k = top_k
self.top_p = top_p
self.run_name = run_name
self.sample_every = sample_every
self.sample_length = sample_length
self.sample_num = sample_num
self.save_every = save_every
self.val_dataset = val_dataset
self.val_batch_size = val_batch_size
self.val_batch_count = val_batch_count
self.val_every = val_every
self.pretrained = pretrained
self.iterations = iterations
args = Args(
trn_dataset="/tf/src/data/methods/DATA00M_[god-r]/train",
model_name="117M",
combine=50000,
batch_size=1, # DO NOT TOUCH. INCREASING THIS WILL RAIN DOWN HELL FIRE ONTO YOUR COMPUTER.
learning_rate=0.00002,
optimizer="sgd",
noise=0.0,
top_k=40,
top_p=0.0,
run_name="m4",
sample_every=100,
sample_length=1023,
sample_num=1,
save_every=1000,
val_dataset="/tf/src/data/methods/DATA00M_[god-r]/valid",
val_batch_size=1,
val_batch_count=40,
val_every=100,
pretrained=True,
iterations=493000
)
enc = get_encoder(args.model_name, "models")
trn_set = load_dataset(enc, args.trn_dataset)
val_set = load_dataset(enc, args.val_dataset)
len(trn_set), len(val_set)
# DATASET_SIZE = len(dataset)
# TRN_SET_SIZE = int(DATASET_SIZE * 0.8)
# VAL_SET_SIZE = int(DATASET_SIZE * 0.1)
# TST_SET_SIZE = int(DATASET_SIZE * 0.1)
# trn_set = dataset[:TRN_SET_SIZE]
# val_set = dataset[TRN_SET_SIZE:TRN_SET_SIZE + VAL_SET_SIZE]
# tst_set = dataset[-TST_SET_SIZE:]
# DATASET_SIZE, len(trn_set), len(val_set), len(tst_set)
CHECKPOINT_DIR = 'checkpoint'
SAMPLE_DIR = 'samples'
trn_losses = []
trn_avgs = []
val_losses = []
# Restore previous metrics
with open(os.path.join(CHECKPOINT_DIR, args.run_name, 'metrics.pickle'), 'rb') as f:
loss_dict = pickle.load(f)
trn_losses = loss_dict["trn_losses"]
trn_avgs = loss_dict["avg_trn_losses"]
val_losses = loss_dict["val_losses"]
len(trn_losses), len(trn_avgs), len(val_losses)
def maketree(path):
try:
os.makedirs(path)
except:
pass
def randomize(context, hparams, p):
if p > 0:
mask = tf.random.uniform(shape=tf.shape(input=context)) < p
noise = tf.random.uniform(shape=tf.shape(input=context), minval=0, maxval=hparams.n_vocab, dtype=tf.int32)
return tf.compat.v1.where(mask, noise, context)
else:
return context
def main():
enc = get_encoder(args.model_name, "models")
hparams = default_hparams()
if args.sample_length > hparams.n_ctx:
raise ValueError(
"Can't get samples longer than window size: %s" % hparams.n_ctx)
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
config.graph_options.rewrite_options.layout_optimizer = rewriter_config_pb2.RewriterConfig.OFF
with tf.compat.v1.Session(config=config) as sess:
context = tf.compat.v1.placeholder(tf.int32, [args.batch_size, None])
context_in = randomize(context, hparams, args.noise)
output = model(hparams=hparams, X=context_in)
val_context = tf.compat.v1.placeholder(tf.int32, [args.val_batch_size, None])
val_output = model(hparams=hparams, X=val_context)
tf_sample = sample_sequence(
hparams=hparams,
length=args.sample_length,
context=context,
batch_size=args.batch_size,
temperature=1.0,
top_k=args.top_k)
all_vars = [v for v in tf.compat.v1.trainable_variables() if 'model' in v.name]
train_vars = all_vars
if args.optimizer == 'adam':
opt = tf.compat.v1.train.AdamOptimizer(learning_rate=args.learning_rate)
elif args.optimizer == 'sgd':
opt = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=args.learning_rate)
else:
exit('Bad optimizer: ' + args.optimizer)
## Collect Metrics for Tensorboard
with tf.compat.v1.name_scope('metrics'):
with tf.compat.v1.name_scope('train'):
trn_loss = tf.reduce_mean(
input_tensor=tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=context[:, 1:], logits=output['logits'][:, :-1]))
trn_loss_summ = tf.compat.v1.summary.scalar('loss', trn_loss)
trn_med_ph = tf.compat.v1.placeholder(tf.float32,shape=None,name='median')
trn_med_summ = tf.compat.v1.summary.scalar('median', trn_med_ph)
trn_mean_ph = tf.compat.v1.placeholder(tf.float32,shape=None,name='mean')
trn_mean_summ = tf.compat.v1.summary.scalar('mean', trn_mean_ph)
with tf.compat.v1.name_scope('valid'):
val_loss = tf.reduce_mean(
input_tensor=tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=val_context[:, 1:], logits=val_output['logits'][:, :-1]))
val_loss_summ = tf.compat.v1.summary.scalar('loss', val_loss)
val_med_ph = tf.compat.v1.placeholder(tf.float32,shape=None,name='median')
val_med_summ = tf.compat.v1.summary.scalar('median', val_med_ph)
trn_summaries = tf.compat.v1.summary.merge([trn_loss_summ, trn_med_summ, trn_mean_summ])
val_summaries = tf.compat.v1.summary.merge([val_loss_summ, val_med_summ])
opt_grads = tf.gradients(ys=trn_loss, xs=train_vars)
opt_grads = list(zip(opt_grads, train_vars))
opt_apply = opt.apply_gradients(opt_grads)
trn_summ_log = tf.compat.v1.summary.FileWriter(os.path.join(CHECKPOINT_DIR, args.run_name, 'train'))
val_summ_log = tf.compat.v1.summary.FileWriter(os.path.join(CHECKPOINT_DIR, args.run_name, 'valid'))
saver = tf.compat.v1.train.Saver(
var_list=all_vars,
max_to_keep=5,
keep_checkpoint_every_n_hours=2)
sess.run(tf.compat.v1.global_variables_initializer())
ckpt = tf.train.latest_checkpoint(
os.path.join(CHECKPOINT_DIR, args.run_name))
if ckpt is None:
# Get fresh GPT weights if new run.
ckpt = tf.train.latest_checkpoint(
os.path.join('models', args.model_name))
if args.pretrained == True:
print('Loading checkpoint', ckpt)
saver.restore(sess, ckpt)
print('Loading dataset...')
data_sampler = Sampler(trn_set)
if args.val_every > 0:
val_chunks = val_set
print('dataset has', data_sampler.total_size, 'tokens')
print('Training...')
if args.val_every > 0:
# Sample from validation set once with fixed seed to make
# it deterministic during training as well as across runs.
val_data_sampler = Sampler(val_chunks, seed=1)
val_batches = [[val_data_sampler.sample(512) for _ in range(args.val_batch_size)]
for _ in range(args.val_batch_count)]
counter = 1
counter_path = os.path.join(CHECKPOINT_DIR, args.run_name, 'counter')
if os.path.exists(counter_path):
# Load the step number if we're resuming a run
# Add 1 so we don't immediately try to save again
with open(counter_path, 'r') as fp:
counter = int(fp.read()) + 1
def save():
maketree(os.path.join(CHECKPOINT_DIR, args.run_name))
print(
'Saving',
os.path.join(CHECKPOINT_DIR, args.run_name,
'model-{}').format(counter))
saver.save(
sess,
os.path.join(CHECKPOINT_DIR, args.run_name, 'model'),
global_step=counter)
with open(counter_path, 'w') as fp:
fp.write(str(counter) + '\n')
# Save metrics such as losses
metrics = {
"trn_losses": trn_losses,
"avg_trn_losses": trn_avgs,
"val_losses": val_losses
}
with open(os.path.join(CHECKPOINT_DIR, args.run_name, 'metrics.pickle'), 'wb') as f:
pickle.dump(metrics, f, protocol=pickle.HIGHEST_PROTOCOL)
def generate_samples():
print('Generating samples...')
context_tokens = data_sampler.sample(1)
all_text = []
index = 0
while index < args.sample_num:
out = sess.run(
tf_sample,
feed_dict={context: args.batch_size * [context_tokens]})
for i in range(min(args.sample_num - index, args.batch_size)):
text = enc.decode(out[i])
text = '======== SAMPLE {} ========\n{}\n'.format(
index + 1, text)
all_text.append(text)
index += 1
print(text)
maketree(os.path.join(SAMPLE_DIR, args.run_name))
with open(
os.path.join(SAMPLE_DIR, args.run_name,
'samples-{}').format(counter), 'w') as fp:
fp.write('\n'.join(all_text))
def validation():
print('Calculating validation loss...')
losses = []
for batch in tqdm.tqdm(val_batches):
losses.append(sess.run(val_loss, feed_dict={val_context: batch}))
v_val_loss = np.mean(losses)
val_losses.append(v_val_loss)
v_summary = sess.run(val_summaries, feed_dict={val_loss: v_val_loss, val_med_ph: median(losses)})
val_summ_log.add_summary(v_summary, counter)
val_summ_log.flush()
print(
'[{counter} | {time:2.2f}] validation loss = {loss:2.2f}'
.format(
counter=counter,
time=time.time() - start_time,
loss=v_val_loss))
def sample_batch():
return [data_sampler.sample(256) for _ in range(args.batch_size)]
avg_trn_loss = (0.0, 0.1)
# trn_losses = [0.0]
# val_losses = []
start_time = time.time()
# trn_avgs = []
try:
for _ in range(args.iterations):
if counter % args.save_every == 0:
save()
if counter % args.sample_every == 0:
generate_samples()
if args.val_every > 0 and (counter % args.val_every == 0 or counter == 1):
validation()
if _ == 0:
avg = 0
else:
avg = avg_trn_loss[0] / avg_trn_loss[1]
(_, v_loss, v_summary) = sess.run(
(opt_apply, trn_loss, trn_summaries),
feed_dict={context: sample_batch(), trn_med_ph: median(trn_losses), trn_mean_ph: avg})
trn_losses.append(v_loss)
trn_summ_log.add_summary(v_summary, counter)
avg_trn_loss = (avg_trn_loss[0] * 0.99 + v_loss,
avg_trn_loss[1] * 0.99 + 1.0)
trn_avgs.append(avg)
print(
'[{counter} | {time:2.2f}] loss={loss:2.2f} avg={avg:2.2f}'
.format(
counter=counter,
time=time.time() - start_time,
loss=v_loss,
avg=avg_trn_loss[0] / avg_trn_loss[1]))
counter += 1
except KeyboardInterrupt:
print('interrupted')
save()
save()
if __name__ == '__main__':
main()
%tensorboard --logdir ./checkpoint/unconditional_experiment/
! curl -X POST -H 'Content-type: application/json' --data '{"text":"from: semeru tower 1\nstatus: model 4 finished training"}' https://hooks.slack.com/services/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
# Reading in the data
with open(os.path.join(CHECKPOINT_DIR, args.run_name, 'metrics.pickle'), 'rb') as f:
loss_dict = pickle.load(f)
loss_dict
```
**KNN model of 10k dataset**
_using data found on kaggle from Goodreads_
_books.csv contains information for 10,000 books, such as ISBN, authors, title, year_
_ratings.csv is a collection of user ratings on these books, from 1 to 5 stars_
```
# imports
import numpy as np
import pandas as pd
import pickle
from sklearn.neighbors import NearestNeighbors
from scipy.sparse import csr_matrix
import re
```
**Books dataset**
```
books = pd.read_csv('https://raw.githubusercontent.com/zygmuntz/goodbooks-10k/master/books.csv')
print(books.shape)
books.head()
```
**Ratings dataset**
```
ratings = pd.read_csv('https://raw.githubusercontent.com/zygmuntz/goodbooks-10k/master/ratings.csv')
print(ratings.shape)
ratings.head()
```
**Trim down the data**
_In order to make a user rating matrix we will only need book_id and title._
```
cols = ['book_id', 'title']
books = books[cols]
books.head()
```
**Clean up book titles**
_Book titles are messy: special characters, extra spaces, and bracketed text clutter them up._
```
def clean_book_titles(title):
title = re.sub(r'\([^)]*\)', '', title) # removes parenthesized text, e.g. series info
title = re.sub(r'\s+', ' ', title) # compresses runs of whitespace into a single space
title = title.strip() # trims leading/trailing whitespace
return title
books['title'] = books['title'].apply(clean_book_titles)
books.head()
```
**neat-o**
**Create feature matrix**
_Combine datasets to get a new dataset of user ratings for each book_
```
books_ratings = pd.merge(ratings, books, on='book_id')
print(books_ratings.shape)
books_ratings.head()
```
**Remove rows with same user_id and book title**
```
user_ratings = books_ratings.drop_duplicates(['user_id', 'title'])
print(user_ratings.shape)
user_ratings.head()
```
**Pivot table to create user_ratings matrix**
_Each column is a user and each row is a book. The entries in the matrix are the user's rating for that book._
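In miniature, `pivot` turns the long ratings table into the wide user matrix like so (the tiny DataFrame here is made-up demo data, not the Goodreads set):

```python
import pandas as pd

# Toy version of the pivot below: three users rating two books.
df = pd.DataFrame({
    'user_id': [1, 1, 2, 3],
    'title': ['Matilda', 'Dune', 'Matilda', 'Dune'],
    'rating': [5, 3, 4, 2],
})

# Rows become books, columns become users; missing ratings become 0.
matrix = df.pivot(index='title', columns='user_id', values='rating').fillna(0)
print(matrix)
```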
```
user_matrix = user_ratings.pivot(index='title', columns='user_id', values='rating').fillna(0)
user_matrix.head()
user_matrix.shape
```
**Compress the matrix since it is extremely sparse**
_Whole lotta zeros_
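As a quick self-contained illustration of what `csr_matrix` buys us (the 3×4 matrix below is made up):

```python
import numpy as np
from scipy.sparse import csr_matrix

# A toy 3x4 ratings matrix that is mostly zeros.
dense = np.array([[5, 0, 0, 3],
                  [0, 0, 4, 0],
                  [0, 1, 0, 0]])
sparse = csr_matrix(dense)

# Only the 4 non-zero ratings are stored, instead of all 12 cells.
print(sparse.nnz)  # 4
```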
```
compressed = csr_matrix(user_matrix.values)
# build and train knn
# unsupervised learning
# using cosine to measure space/distance
knn = NearestNeighbors(algorithm='brute', metric='cosine')
knn.fit(compressed)
def get_recommendations(book_title, matrix=user_matrix, model=knn, topn=2):
book_index = list(matrix.index).index(book_title)
distances, indices = model.kneighbors(matrix.iloc[book_index,:].values.reshape(1,-1), n_neighbors=topn+1)
print('Recommendations for {}:'.format(matrix.index[book_index]))
for i in range(1, len(distances.flatten())):
print('{}. {}, distance = {}'.format(i, matrix.index[indices.flatten()[i]], "%.3f"%distances.flatten()[i]))
print()
get_recommendations("Harry Potter and the Sorcerer's Stone")
get_recommendations("Pride and Prejudice")
get_recommendations("Matilda")
pickle.dump(knn, open('knn_model.pkl','wb'))
```
# Importing NLTK
Importing a module or library means telling the program you are creating/running that it needs that specific library.
An analogy: imagine you need to study for Mathematics and Portuguese exams. You pick up your books to study. In this analogy, the books are the "external libraries" in which you want to study the subject.
```
import nltk
```
# Downloading NLTK's complementary data
The NLTK developers decided to keep the installation package (pip install nltk) with as few files as possible, to make downloading and installing it easier. The complementary files can therefore be downloaded on demand.
To do so, simply run the code below and follow the instructions presented.
```
nltk.download()
```
# What do we find in NLTK?
The cells below show an example of one of the Portuguese-language corpora we can access with NLTK.
MACMORPHO - http://nilc.icmc.usp.br/macmorpho/
```
# Show the words in MACMorpho
# Note that they are arranged in a list structure
# Also note the structure used to access the corpus and its tokens: imagine
# you are accessing a tree structure, with a root and several child branches.
nltk.corpus.mac_morpho.words()
nltk.corpus.mac_morpho.sents()[1]
nltk.corpus.mac_morpho.tagged_words()
nltk.corpus.mac_morpho.tagged_sents()
```
# First task with NLTK - Tokenization
Note that this is the simplest way to tokenize a text using NLTK.
The function (a piece of pre-written code that performs an action) *word_tokenize()* receives a text and returns a list of tokens.
```
nltk.word_tokenize("Com um passe de Eli Manning para Plaxico Burress a 39 segundos do fim, o New York Giants anotou o touchdown decisivo e derrubou o favorito New England Patriots por 17 a 14 neste domingo, em Glendale, no Super Bowl XLII.")
```
# Additional, more advanced ways to tokenize a text
The concept used in the following cells is Regular Expressions.
Regular expressions (called REs, or regexes, or regex patterns) are essentially a tiny, highly specialized programming language embedded inside Python.
Using this little language, you specify the rules for the set of possible strings you want to match; this set may contain English sentences, e-mail addresses, TeX commands, or anything else you like. You can then ask questions such as "Does this string match the pattern?" or "Is there a part of the string that matches the pattern?". You can also use REs to modify a string or split it apart in various ways.
https://docs.python.org/pt-br/3.8/howto/regex.html
https://www.w3schools.com/python/python_regex.asp
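Before applying them through NLTK, here is a quick stdlib-only look at what the two patterns used in the cells below actually match (the sample string is arbitrary):

```python
import re

texto = "New York Giants won 17 a 14 in the Super Bowl XLII!"

# r'\w+' keeps every alphanumeric run, including the numbers.
print(re.findall(r'\w+', texto))
# ['New', 'York', 'Giants', 'won', '17', 'a', '14', 'in', 'the', 'Super', 'Bowl', 'XLII']

# r'[a-zA-Z]\w+' keeps only tokens that start with a letter and have at
# least two characters, so the numbers and the single letter 'a' drop out.
print(re.findall(r'[a-zA-Z]\w+', texto))
# ['New', 'York', 'Giants', 'won', 'in', 'the', 'Super', 'Bowl', 'XLII']
```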
```
# Telling the program that we will use the RegexpTokenizer class
# note that this is another way to 'import' a module
from nltk.tokenize import RegexpTokenizer
# Our text
texto = "Com um passe de Eli Manning para Plaxico Burress a 39 segundos do fim, o New York Giants anotou o touchdown decisivo e derrubou o favorito New England Patriots por 17 a 14 neste domingo, em Glendale, no Super Bowl XLII."
# Creating the "object" that will tokenize our text.
# Here we use a regular expression that returns all the word
# tokens (letters of the alphabet, digits and underscore).
# We do not want the symbols.
tokenizer = RegexpTokenizer(r'\w+')
# Running the tokenizer object's method
tokens = tokenizer.tokenize(texto)
# Our tokens :)
tokens
# Telling the program that we will use the RegexpTokenizer class
# note that this is another way to 'import' a module
from nltk.tokenize import RegexpTokenizer
# Our text
texto = "Com um passe de Eli Manning para Plaxico Burress a 39 segundos do fim, o New York Giants anotou o touchdown decisivo e derrubou o favorito New England Patriots por 17 a 14 neste domingo, em Glendale, no Super Bowl XLII."
# Creating the "object" that will tokenize our text.
# Here we use a regular expression that returns only the tokens
# that start with a letter. We do not want symbols and numbers.
tokenizer = RegexpTokenizer(r'[a-zA-Z]\w+')
tokens = tokenizer.tokenize(texto)
tokens
```
# Token frequency
It is often interesting to know how frequently tokens appear in a text. With the *FreqDist* class we can compute this easily.
**In this first example, what will the frequency look like using all tokens?**
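`FreqDist` behaves much like the standard library's `collections.Counter`; this stdlib-only sketch mirrors the NLTK call used below:

```python
from collections import Counter

# A handful of toy tokens to count.
tokens = ["o", "New", "York", "o", "a", "o", "a"]
frequencia = Counter(tokens)

# most_common() sorts by count, descending, just like FreqDist.
print(frequencia.most_common())
# [('o', 3), ('a', 2), ('New', 1), ('York', 1)]
```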
```
# Our text
texto = "Com um passe de Eli Manning para Plaxico Burress a 39 segundos do fim, o New York Giants anotou o touchdown decisivo e derrubou o favorito New England Patriots por 17 a 14 neste domingo, em Glendale, no Super Bowl XLII."
# We tokenize our text using word_tokenize
tokens = nltk.word_tokenize(texto)
# Computing our word frequencies
frequencia = nltk.FreqDist(tokens)
# We retrieve the frequency list using the most_common() function
frequencia.most_common()
```
**And what if we exclude punctuation?**
```
from nltk.tokenize import RegexpTokenizer
texto = "Com um passe de Eli Manning para Plaxico Burress a 39 segundos do fim, o New York Giants anotou o touchdown decisivo e derrubou o favorito New England Patriots por 17 a 14 neste domingo, em Glendale, no Super Bowl XLII."
tokenizer = RegexpTokenizer(r'\w+')
tokens = tokenizer.tokenize(texto)
frequencia = nltk.FreqDist(tokens)
frequencia.most_common()
```
# Accessing external corpora
As already shown, we can access our files on Google Drive simply by "mounting" the drive via the icon in the left-hand bar.
To access the file's content, we use the built-in Python function *open()*. This function returns the file in a format Python understands. To read its content, we use the *read()* function.
```
# Opening our corpus
# In this code we chain the open function with the read function
# Without chaining we would have the following construction
# infile = open('/content/drive/MyDrive/recursos/corpus_teste.txt')
# corpus = infile.read()
corpus = open('/content/drive/MyDrive/recursos/corpus_teste.txt').read()
print(corpus)
```
**Now let's tokenize and compute the frequency of our whole corpus :)**
```
from nltk.tokenize import RegexpTokenizer
# We don't want symbols
tokenizer = RegexpTokenizer(r'\w+')
tokens = tokenizer.tokenize(corpus)
frequencia = nltk.FreqDist(tokens)
frequencia.most_common()
```
# Grouping lowercase and uppercase
In the previous cells we noticed that some tokens are uppercase and others lowercase. Python considers them different tokens merely because their letters differ in case. We therefore need to group together all the words that we know are the same. The simplest way is to convert everything to lowercase or to uppercase.
We saw that we can convert a string to lowercase or uppercase simply by using the *.lower()* or *.upper()* functions, respectively.
```
# Let's use the Regex tokenizer
from nltk.tokenize import RegexpTokenizer
# We'll only consider tokens that start with a letter
tokenizer = RegexpTokenizer(r'[a-zA-Z]\w*')
# We tokenize the corpus
tokens = tokenizer.tokenize(corpus)
# Here we want to build a new list with all tokens converted to
# lowercase. To do that we "walk" through our token list, call the
# .lower() function on each one, and append the converted token to the new list.
nova_lista = []
for token in tokens:
nova_lista.append(token.lower())
# With all tokens converted to lowercase, we compute their frequencies :)
frequencia = nltk.FreqDist(nova_lista)
frequencia.most_common()
```
# Tokens we are not interested in
Some tokens that are very frequent do not help when analysing a text.
Take the previous token list as an example: at the top of the list are articles, prepositions, and so on. In our case they are not interesting.
NLTK has a list of tokens considered uninteresting, which can safely be removed from a token list. In NLP we call them *stopwords*.
To remove them from our token list, we compare each token against the *stopwords* list. If a token is a *stopword*, we remove it from the token list.
```
# We access NLTK's stopword list for the Portuguese language
stopwords = nltk.corpus.stopwords.words('portuguese')
# Once again we use the Regex tokenizer
from nltk.tokenize import RegexpTokenizer
# Words only
tokenizer = RegexpTokenizer(r'[a-zA-Z]\w*')
tokens = tokenizer.tokenize(corpus)
# now, besides converting the token list to lowercase, we compare
# each token against the stopword list. We only add to the new list
# the tokens that are not stopwords
nova_lista = []
for token in tokens:
if token.lower() not in stopwords:
nova_lista.append(token.lower())
# And now we compute the frequency again
frequencia = nltk.FreqDist(nova_lista)
frequencia.most_common()
```
# List Comprehension
The *list comprehension* technique is a different, more advanced way of creating a list. You are not required to use it, but it is very worthwhile to know the construction.
Python understands that it is a *list comprehension* when we create a loop between square brackets: [i for i in range(10)]. This construction creates the following list: [0,1,2,3,4,5,6,7,8,9]. Note that it is possible to do the same without this construction.
A generic way to think of a *list comprehension* is the following structure:
<*final_list* = **[** *element* **for** *element* **in** *list_of_elements* **]**>
Remember that you can also add a condition for the element to be included in the list:
<*final_list* = **[** *element* **for** *element* **in** *list_of_elements* **if** *condition* **]**>
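Made concrete, the two generic forms above look like this:

```python
# A comprehension that transforms each element, and one that filters.
squares = [i * i for i in range(10)]
evens = [i for i in range(10) if i % 2 == 0]

print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
print(evens)    # [0, 2, 4, 6, 8]
```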
```
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'[a-zA-Z]\w*')
tokens = tokenizer.tokenize(corpus)
nova_lista = []
#for token in tokens:
# if token.lower() not in stopwords:
# nova_lista.append(token.lower())
nova_lista = [token.lower() for token in tokens if token.lower() not in stopwords]
frequencia = nltk.FreqDist(nova_lista)
frequencia.most_common()
```
# Using ngrams
```
# Opening our corpus
# In this code we chain the open function with the read function
# Without chaining we would have the following construction
# infile = open('/content/drive/MyDrive/recursos/corpus_teste.txt')
# corpus = infile.read()
corpus = open('/content/drive/MyDrive/recursos/corpus_teste.txt').read()
print(corpus)
from nltk import bigrams
from nltk import trigrams
from nltk import ngrams
tokens = nltk.word_tokenize(corpus)
tokens_bigrams = list(bigrams(tokens))
tokens_bigrams
tokens_trigrams = list(trigrams(tokens))
tokens_trigrams
tokens_ngrams = list(ngrams(tokens, 4))
tokens_ngrams
```
# Recognizing named entities
```
from nltk import bigrams
from nltk import trigrams
bigramas = list(bigrams(tokens))
trigramas = list(trigrams(tokens))
for bigrama in bigramas:
if bigrama[0][0].isupper() and bigrama[1][0].isupper():
print(bigrama)
for trigrama in trigramas:
if trigrama[0][0].isupper() and trigrama[1][0].isupper() and trigrama[2][0].isupper():
print(trigrama)
```
# Stemming and Lemmatization
```
import nltk
stemmer = nltk.RSLPStemmer()
print(stemmer.stem("Amigão"))
print(stemmer.stem("amigo"))
print(stemmer.stem("amigos"))
print(stemmer.stem("propuseram"))
print(stemmer.stem("propõem"))
print(stemmer.stem("propondo"))
```
# Tagger
```
from nltk.corpus import mac_morpho
from nltk.tag import UnigramTagger
tokens = nltk.word_tokenize(corpus)
sentencas_treino = mac_morpho.tagged_sents()
etiquetador = UnigramTagger(sentencas_treino)
etiquetado = etiquetador.tag(tokens)
print(etiquetado)
from nltk.corpus import mac_morpho
from nltk.tag import UnigramTagger
from nltk.tag import DefaultTagger
tokens = nltk.word_tokenize(corpus)
# This time we use DefaultTagger to set a default tag
etiq_padrao = DefaultTagger('N')
sentencas_treino = mac_morpho.tagged_sents()
etiquetador = UnigramTagger(sentencas_treino, backoff=etiq_padrao)
etiquetado = etiquetador.tag(tokens)
etiquetado
from nltk.chunk import RegexpParser
pattern = 'NP: {<NPROP><NPROP> | <N><N>}'
analise_gramatical = RegexpParser(pattern)
arvore = analise_gramatical.parse(etiquetado)
print(arvore)
```
# Inaugural Project
**Team:** M&M
**Members:** Markus Gorgone Larsen (hbk716) & Matias Bjørn Frydensberg Hall (pkt593)
**Imports and set magics:**
```
import numpy as np
import copy
from types import SimpleNamespace
from scipy import optimize
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
# Autoreload modules when code is run
%load_ext autoreload
%autoreload 2
# local modules
import inauguralproject
```
# Question 1
We consider a household solving the following maximisation problem when looking to buy a home:
$$
\begin{aligned}
c^*, h^* & = \text{arg}\max_{c,h}c^{1-\phi}h^\phi\\
& \text{s.t.}\\
\tilde{p}_h & = p_h\epsilon\\
m & = \tau(p_h, \tilde{p}_h) + c\\
\tau(p_h, \tilde{p}_h) & = rp_h +\tau^g\tilde{p}_h + \tau^p max\{\tilde{p}_h - \bar{p}, 0\}
\end{aligned}
$$
Where $c$ is consumption, $h$ is housing quality, $p_h$ is the price of housing, $\epsilon$ is the public housing assement factor, $\phi$ is the Cobb-Douglas weights, $m$ is cash-on-hand, $r$ is the mortgage interest rate, $\tau^g$ is the base housing tax, $\tau^p$ is the progressive housing tax and $\bar{p}$ is the cutoff price for the progressive tax.
As utility is monotonically increasing in consumption and housing quality, and $\tau$ is a function of $h$, we can define consumption as:
$$
c = m - \tau(p_h, \tilde{p}_h)
$$
Plugging c into the utility function we get the following:
$$
h^* = \text{arg}\max_{h}\left(m - rh - \tau^g h\epsilon - \tau^p max\{h\epsilon - \bar{p}, 0\}\right)^{1-\phi}h^\phi
$$
The utility function and the optimisation routine are defined in the module and used to solve the household's problem.
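The `inauguralproject` module itself is not shown here. The following is only a sketch of how such a solver might look, assuming (as the derivation above does) that the house price equals its quality, $p_h = h$; names like `u_sketch` are illustrative and not the module's actual API.

```python
from scipy import optimize
from types import SimpleNamespace

def u_sketch(h, par):
    # Tax bill tau(p_h, p_h * epsilon), under the assumption p_h = h.
    p_tilde = h * par.epsilon
    tau = par.r * h + par.tau_g * p_tilde + par.tau_p * max(p_tilde - par.p_bar, 0.0)
    c = par.m - tau  # consumption is the cash-on-hand left after housing costs
    return c ** (1 - par.phi) * h ** par.phi

par = SimpleNamespace(phi=0.3, epsilon=0.5, r=0.03,
                      tau_g=0.012, tau_p=0.004, p_bar=3.0, m=0.5)

# Maximize utility over h on a range where consumption stays positive.
res = optimize.minimize_scalar(lambda h: -u_sketch(h, par),
                               bounds=(1e-8, 10.0), method='bounded')
h_star = res.x
tau_star = (par.r * h_star + par.tau_g * h_star * par.epsilon
            + par.tau_p * max(h_star * par.epsilon - par.p_bar, 0.0))
c_star = par.m - tau_star
print(f"h* = {h_star:.2f}, c* = {c_star:.2f}")  # h* = 4.17, c* = 0.35
```

Analytically, below the progressive-tax cutoff the optimum is $h^* = \phi m / (r + \tau^g \epsilon)$, which the numeric solver reproduces.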
```
# a. Create simplenamespace and set parameter values
par = SimpleNamespace()
par.phi = 0.3
par.epsilon = 0.5
par.r = 0.03
par.tau_g = 0.012
par.tau_p = 0.004
par.p_bar = 3
par.m = 0.5
par.seed = 1
# b. Compute optimal housing quality, consumption and utility
h_star, c_star, u_star = inauguralproject.u_optimize(par)
# c. Print solution
print(f'The household will choose optimal housing = {h_star:.2f}, which implies optimal consumption = {c_star:.2f} and utility = {u_star:.2f}')
```
# Question 2
First we create an array of equally spaced values of m using np.linspace, for values between 0.4 and 1.5. We also create arrays as containers for the h, c and u values. We then find the optimal values by looping over the values of m. Finally we plot the two graphs. We observe that when m is roughly in the range 0.72 to 0.75, optimal housing is unchanged at 6 while consumption increases more rapidly. This is due to the cutoff price: the progressive tax kicks in once $h\epsilon > \bar{p}$, i.e. at $h = \bar{p}/\epsilon = 6$, so in this range it is more beneficial for the household to spend the extra cash-on-hand on consumption, because higher housing quality would trigger the progressive tax and offset the utility gain from better housing.
```
# a. Create array of m's and container for h*, c* and u*
N = 1000
m_vec = np.linspace(0.4, 1.5, N)
h_vec = np.zeros(N)
c_vec = np.zeros(N)
u_vec = np.zeros(N)
# b. Loop the optimise function over the m_vec array
for i in range(N):
    par.m = m_vec[i]
    h_vec[i], c_vec[i], u_vec[i] = inauguralproject.u_optimize(par)
# c. Create graph and plot
inauguralproject.two_figures(m_vec, c_vec, "Consumption", "$m$", "$c$", m_vec, h_vec, "House Quality", "$m$", "$h$")
```
# Question 3
In the module we define a function to calculate the total tax burden given the utility function.
```
# a. Adding population size, mean and standard deviation to namespace of parameters
par.pop = 10000
par.mu = -0.4
par.sigma = 0.35
# b. Compute the total tax burden
T = inauguralproject.tax_total(par)
# c. Print the answer
print(f'The average tax burden per household is {T/par.pop:.3f}')
```
## Bonus
Using the parameters, an array of lognormally distributed m's is created. We also create containers for the h, c and u values. We then find the optimal values by looping over the values of m. Finally we plot the findings as histograms. <br>
Both the distributions of m and h resemble right-skewed normal distributions, which is not surprising given m's log-normal distribution. There is nothing odd about m's distribution, but the distribution of h stands out: it has a large concentration around a value of 6. This is due to the effect of the progressive tax, as described in Question 2.
```
# a. Resetting seed and create array of m's and container for h*, c* and u* in our population
np.random.seed(par.seed)
m_pop = np.random.lognormal(par.mu, par.sigma, par.pop)
h_pop = np.zeros(par.pop)
c_pop = np.zeros(par.pop)
u_pop = np.zeros(par.pop)
# b. Compute optimal housing quality, consumption and utility for whole population
for i in range(par.pop):
    par.m = m_pop[i]
    h_pop[i], c_pop[i], u_pop[i] = inauguralproject.u_optimize(par)
# c. Create histograms to plot distributions
bonus1 = plt.figure(dpi=100)
ax_left = bonus1.add_subplot(1,1,1)
ax_left.hist(m_pop,bins=100,density=True,alpha=0.5,label='cash-on-hand')
ax_left.set_xbound(0, 2.5)
ax_left.set_xlabel('Cash-on-hand')
ax_left.set_ylabel('Probability density')
ax_left.set_title('Distribution of cash-on-hand')
bonus2 = plt.figure(dpi=100)
ax_right = bonus2.add_subplot(1,1,1)
ax_right.hist(h_pop,bins=100,density=True,alpha=0.5,label='housing')
ax_right.set_xbound(1,20)
ax_right.set_xlabel('$h^*$')
ax_right.set_ylabel('Probability density')
ax_right.set_title('Distribution of housing quality');
```
# Question 4
We create a new namespace and change parameter values. Then we use our tax function to find the total tax burden. We find that the average tax burden increases after the reform.
```
# a. Create a new namespace of parameters by copy and change parameter values
par2 = copy.copy(par)
par2.epsilon = 0.8
par2.tau_g = 0.01
par2.tau_p = 0.009
par2.p_bar = 8
# b. Compute the total tax after the reform
T_reform = inauguralproject.tax_total(par2)
# c. Print the answer
print(f'The average tax burden per household after the reform is {T_reform/par2.pop:.3f}')
```
# Question 5
We add the tax burden found in Q3 as the policy maker's tax-burden goal. We then compute the new $\tau^g$ using the root-finding function defined in the module. Lastly we check that the tax burden is indeed the same as before the reform.
```
# a. Add the tax burden goal as a parameter
par2.T_goal = T
# b. Calculate the new tau_g and tax burden hereof and add to parameters
tau_g = inauguralproject.base_tax_pct(par2)
par2.tau_g = tau_g
T_reform2 = inauguralproject.tax_total(par2)
# c. Print solution
print(f'The base tax level that leaves the average tax burden unchanged at {T_reform2/par2.pop:.3f} is tau_g = {tau_g:.4f}')
```
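The module's `base_tax_pct` is not shown here. The root-finding idea — pick $\tau^g$ so that total revenue hits the goal — can be illustrated on a toy revenue function (purely illustrative; the real `tax_total` re-solves every household's problem at each candidate rate):

```python
from scipy import optimize

# Toy stand-in for tax_total: assume revenue rises linearly in the base rate.
def toy_revenue(tau_g, slope=25.0, intercept=0.01):
    return slope * tau_g + intercept

T_goal = 0.3
# Find tau_g such that toy_revenue(tau_g) == T_goal
tau_g_new = optimize.brentq(lambda t: toy_revenue(t) - T_goal, 0.0, 1.0)
print(f'{tau_g_new:.4f}')  # → 0.0116
```

`brentq` only needs the revenue function to change sign on the bracket, which is why the same pattern works when revenue is computed numerically household by household.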
# Conclusion
In this assignment we have solved a household's utility maximisation problem with respect to housing quality and other consumption. When plotting the optimal housing quality and other consumption for cash-on-hand in the range 0.4 to 1.5, we observe a flat housing-quality curve at a value of 6 in the interval from 0.72 to about 0.75, while consumption increases at a higher rate in that interval. As described earlier in the assignment, this is a consequence of the progressive housing tax, where the extra cost of housing offsets the utility gain from better housing quality, so increasing only consumption gives the household the highest utility.
In Q3 we calculate the average tax burden per household in a population with lognormally distributed cash-on-hand. We also plot the distributions of cash-on-hand and housing quality, and notice that cash-on-hand looks as expected, but there is a cluster of households who choose a housing quality of 6. This is of course due to the progressive housing tax as described above. In Q4 we find that the average tax burden per household increases after the tax reform.
Lastly, in Q5 we find that in order to keep the tax burden per household the same as before the reform, the policy maker should set the base housing tax to 0.77%. This change would redistribute wealth from households with more cash-on-hand to households with less, as households paying the progressive tax would finance the decrease in the base housing tax.
# Wine Quality
### _Quality ratings of Portuguese white wines (classification task)._
## Table of Contents
## Part 0: Introduction
### Overview
The dataset contains 12 columns and 4898 entries describing Portuguese white wines.
**Metadata:**
* **fixed acidity**
* **volatile acidity**
* **citric acid**
* **residual sugar**
* **chlorides**
* **free sulfur dioxide**
* **total sulfur dioxide**
* **density**
* **pH**
* **sulphates**
* **alcohol**
* **quality** - score between 3 and 9
### Questions:
Predict which wines are 'Good/1' and 'Not Good/0' (use binary classification; check balance of classes; calculate predictions; choose the best model)
## [Part 1: Import, Load Data](#Part-1:-Import,-Load-Data.)
* ### Import libraries, Read data from ‘.csv’ file
## [Part 2: Exploratory Data Analysis](#Part-2:-Exploratory-Data-Analysis.)
* ### Info, Head, Describe
* ### Encoding 'quality' attribute
* ### 'quality' attribute value counts and visualisation
* ### Resampling of an imbalanced dataset
* ### Random under-sampling of an imbalanced dataset
* ### Random over-sampling of an imbalanced dataset
## [Part 3: Data Wrangling and Transformation](#Part-3:-Data-Wrangling-and-Transformation.)
* ### Creating datasets for ML part
* ### 'Train\Test' splitting method
* ### StandardScaler
## [Part 4: Machine Learning](#Part-4:-Machine-Learning.)
* ### Build, train and evaluate models without hyperparameters
* #### Logistic Regression, K-Nearest Neighbors, Decision Trees
* #### Classification report
* #### Confusion Matrix
* #### ROC-AUC score
* ### Build, train and evaluate models with hyperparameters
* #### Logistic Regression, K-Nearest Neighbors, Decision Trees
* #### Classification report
* #### Confusion Matrix
* #### ROC-AUC score
## [Conclusion](#Conclusion.)
## Part 1: Import, Load Data.
* ### Import libraries
```
# import standard libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
import warnings
warnings.filterwarnings('ignore')
```
* ### Read data from ‘.csv’ file
```
# read data from '.csv' file
data = pd.read_csv("winequality.csv")
```
## Part 2: Exploratory Data Analysis.
* ### Info
```
# print the full summary of the dataset
data.info()
```
* ### Head
```
# preview of the first 5 lines of the loaded data
data.head()
```
* ### Describe
```
data.describe()
```
* ### Encoding 'quality' attribute
```
# lambda function; wine quality from 3-6 == 0, from 7-9 == 1.
data["quality"] = data["quality"].apply(lambda x: 0 if x < 7 else 1)
# preview of the first 5 lines of the loaded data
data.head()
```
* ### 'quality' attribute value counts and visualisation
```
data["quality"].value_counts()
# visualisation plot
sns.countplot(x="quality", data=data);
```
* ### Resampling of an imbalanced dataset
```
# class count
count_class_0, count_class_1 = data['quality'].value_counts()
# divide by class
class_0 = data[data["quality"] == 0]
class_1 = data[data["quality"] == 1]
```
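As a self-contained reference, the two rebalancing strategies used below can be demonstrated on synthetic data (a toy frame, not the wine dataset):

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
toy = pd.DataFrame({'x': rng.randn(100), 'quality': [0] * 80 + [1] * 20})
n0, n1 = toy['quality'].value_counts()  # 80 majority, 20 minority
class_0 = toy[toy['quality'] == 0]
class_1 = toy[toy['quality'] == 1]

# under-sampling: shrink the majority class to the minority size
under = pd.concat([class_0.sample(n1, random_state=0), class_1])
# over-sampling: resample the minority class (with replacement) up to the majority size
over = pd.concat([class_0, class_1.sample(n0, replace=True, random_state=0)])
print(under['quality'].value_counts().tolist())  # [20, 20]
print(over['quality'].value_counts().tolist())   # [80, 80]
```

Under-sampling discards majority-class rows, while over-sampling duplicates minority-class rows; both yield perfectly balanced classes at the cost of either lost information or repeated observations.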
* ### Random under-sampling of an imbalanced dataset
```
#class_0_under = class_0.sample(count_class_1)
#data_under = pd.concat([class_0_under, class_1], axis=0)
#sns.countplot(x="quality", data=data_under);
```
* ### Random over-sampling of an imbalanced dataset
```
class_1_over = class_1.sample(count_class_0, replace=True)
data_over = pd.concat([class_0, class_1_over], axis=0)
sns.countplot(x="quality", data=data_over);
```
## Part 3: Data Wrangling and Transformation.
* ### Creating datasets for ML part
```
# set 'X' for features' and y' for the target ('quality').
#X = data.drop('quality', axis=1)
#y = data['quality']
# for under-sampling dataset
#X = data_under.drop('quality', axis=1)
#y = data_under['quality']
# for over-sampling dataset
X = data_over.drop('quality', axis=1)
y = data_over['quality']
# preview of the first 5 lines of the loaded data
X.head()
```
* ### 'Train\Test' split
```
# apply 'Train\Test' splitting method
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# print shape of X_train and y_train
X_train.shape, y_train.shape
# print shape of X_test and y_test
X_test.shape, y_test.shape
```
* ### StandardScaler
```
# StandardScaler
sc = StandardScaler()
data_sc_train = pd.DataFrame(sc.fit_transform(X_train), columns=X.columns)
data_sc_test = pd.DataFrame(sc.transform(X_test), columns=X.columns)
data_sc_train.head()
data_sc_test.head()
```
## Part 4: Machine Learning.
* ### Build, train and evaluate models without hyperparameters
* Logistic Regression
* K-Nearest Neighbors
* Decision Trees
```
# Logistic Regression
LR = LogisticRegression()
LR.fit(data_sc_train, y_train)
LR_pred = LR.predict(data_sc_test)
# K-Nearest Neighbors
KNN = KNeighborsClassifier()
KNN.fit(data_sc_train, y_train)
KNN_pred = KNN.predict(data_sc_test)
# Decision Tree
DT = DecisionTreeClassifier(random_state=0)
DT.fit(data_sc_train, y_train)
DT_pred = DT.predict(data_sc_test)
```
* ### Classification report
```
print(f"LogisticRegression: \n {classification_report(y_test, LR_pred, digits=6)} ")
print(f"KNeighborsClassifier: \n {classification_report(y_test, KNN_pred, digits=6)} ")
print(f"DecisionTreeClassifier: \n {classification_report(y_test, DT_pred, digits=6)} ")
```
* ### Confusion matrix
```
sns.heatmap(confusion_matrix(y_test, LR_pred), annot=True);
sns.heatmap(confusion_matrix(y_test, KNN_pred), annot=True);
sns.heatmap(confusion_matrix(y_test, DT_pred), annot=True);
```
* ### ROC-AUC score
```
roc_auc_score(y_test, DT_pred)
```
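Note that `roc_auc_score` is given the thresholded predictions here, which evaluates a single operating point. Passing class probabilities from `predict_proba` scores the full ranking instead; a toy sketch (synthetic data, not the wine dataset):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(0)
X = rng.randn(200, 3)
y = (X[:, 0] + 0.5 * rng.randn(200) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
auc_from_labels = roc_auc_score(y, clf.predict(X))              # one threshold (0.5)
auc_from_scores = roc_auc_score(y, clf.predict_proba(X)[:, 1])  # full ranking
print(auc_from_labels, auc_from_scores)
```

With hard labels the ROC "curve" collapses to a single point, so the score typically understates how well the model ranks positives above negatives.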
* ### Build, train and evaluate models with hyperparameters
```
# Logistic Regression
LR = LogisticRegression()
LR_params = {'C':[1,2,3,4,5,6,7,8,9,10], 'penalty':['l1', 'l2', 'elasticnet', 'none'], 'solver':['lbfgs', 'newton-cg', 'liblinear', 'sag', 'saga'], 'random_state':[0]}
LR1 = GridSearchCV(LR, param_grid = LR_params)
LR1.fit(X_train, y_train)
LR1_pred = LR1.predict(X_test)
# K-Nearest Neighbors
KNN = KNeighborsClassifier()
KNN_params = {'n_neighbors':[5,7,9,11]}
KNN1 = GridSearchCV(KNN, param_grid = KNN_params)
KNN1.fit(X_train, y_train)
KNN1_pred = KNN1.predict(X_test)
# Decision Tree
DT = DecisionTreeClassifier()
DT_params = {'max_depth':[2,10,15,20], 'criterion':['gini', 'entropy'], 'random_state':[0]}
DT1 = GridSearchCV(DT, param_grid = DT_params)
DT1.fit(X_train, y_train)
DT1_pred = DT1.predict(X_test)
# print the best hyper parameters set
print(f"LogisticRegression: {LR1.best_params_}")
print(f"KNeighborsClassifier: {KNN1.best_params_}")
print(f"DecisionTreeClassifier: {DT1.best_params_}")
```
* ### Classification report
```
print(f"LogisticRegression: \n {classification_report(y_test, LR1_pred, digits=6)} ")
print(f"KNeighborsClassifier: \n {classification_report(y_test, KNN1_pred, digits=6)} ")
print(f"DecisionTreeClassifier: \n {classification_report(y_test, DT1_pred, digits=6)} ")
```
* ### Confusion matrix
```
# confusion matrix of DT model
conf_mat_DT1 = confusion_matrix(y_test, DT1_pred)
# visualisation
sns.heatmap(conf_mat_DT1, annot=True);
```
* ### ROC-AUC score
```
roc_auc_score(y_test, DT1_pred)
```
## Conclusion.
```
# submission of .csv file with predictions
sub = pd.DataFrame()
sub['ID'] = X_test.index
sub['quality'] = DT1_pred
sub.to_csv('WinePredictionsTest.csv', index=False)
```
**Question**: Predict which wines are 'Good/1' and 'Not Good/0' (use binary classification; check balance of classes; calculate predictions; choose the best model).
**Answers**:
1. Binary classification was applied.
2. Classes were highly imbalanced.
3. Three options were applied in order to calculate the best predictions:
* Calculate predictions with imbalanced dataset
* Calculate predictions with random under-sampling technique of an imbalanced dataset
* Calculate predictions with random over-sampling technique of an imbalanced dataset (the best solution)
4. Three ML models were used: Logistic Regression, KNN, Decision Tree (without and with hyperparameters).
5. The best result was chosen:
* Random over-sampling dataset with 3838 entities in class '0' and 3838 entities in class '1', 7676 entities in total.
* Train/Test split: test_size=0.2, random_state=0
* Decision Tree model with hyperparameter tuning, with an accuracy score of 0.921875 and a ROC-AUC score of 0.921773.
<a href="https://colab.research.google.com/github/RihaChri/PureNumpyBinaryClassification/blob/main/NeuronalNetworkPureNumpy_binaryClassification.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import scipy.io
import numpy as np
import matplotlib.pyplot as plt
#------------Activations-------------------------------------------------------
def sigmoid_kroko(Z):
    A = 1/(1+np.exp(-Z))
    cache = Z
    # np.exp instead of math.exp because it also works on vectors
    # + outputs numbers between 0 and 1
    # - prone to vanishing gradients
    # better suited for the last layer
    return A, cache
def relu_kroko(Z):
    A = np.maximum(0, Z)
    cache = Z
    return A, cache
def relu_backward_kroko(dA, cache):
    Z = cache
    dZ = np.array(dA, copy=True)  # just converting dz to a correct object.
    dZ[Z <= 0] = 0
    return dZ
def sigmoid_backward_kroko(dA, cache):
    Z = cache
    s = 1/(1+np.exp(-Z))
    dZ = dA * s * (1-s)
    return dZ
#------------------------------------------------------------------------------
def initialize_parameters(layer_dims):  # He-style initialization --> the scaling factor keeps W small, which yields larger gradients
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims)  # number of layers in the network
    for l in range(1, L):
        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) / np.sqrt(layer_dims[l-1])
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
        assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
        assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
    return parameters
#------------------------------------------------------------------------------
def model(X, Y, layers_dims, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
    grads = {}
    costs = []  # to keep track of the cost
    parameters = initialize_parameters(layers_dims)
    for i in range(0, num_iterations):
        AL, caches, Dropouts = forward_propagation(X, parameters, keep_prob)
        cost = compute_cost(AL, Y, caches, lambd)
        gradients = backward_propagation(AL, X, Y, caches, keep_prob, Dropouts, lambd)
        parameters = update_parameters(parameters, gradients, learning_rate)
        if print_cost and i % 10000 == 0: print("Cost after iteration {}: {}".format(i, cost))
        if print_cost and i % 1000 == 0: costs.append(cost)
    plt.figure("""first figure""")
    plt.plot(costs); plt.ylabel('cost'); plt.xlabel('iterations (x1,000)'); plt.title("Learning rate =" + str(learning_rate)); plt.show()
    return parameters
#------------------------------------------------------------------------------
def linear_forward(A, W, b):  # A.shape=(n_l-1,m), i.e. X.shape=(n_1,m); W.shape=(n_l,n_l-1); b.shape=(n_l,1)
    Z = np.dot(W, A)+b  # (n_l,n_l-1) * (n_l-1,m) = (n_l,m)
    cache = (A, W, b)
    return Z, cache
def linear_activation_forward(A_prev, W, b, activation):  # A.shape=(n_l,m), W.shape=(n_l,n_l-1), b.shape=(n_l,1)
    if activation == "sigmoid":
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = sigmoid_kroko(Z)
    elif activation == "relu":
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = relu_kroko(Z)
    cache = (linear_cache, activation_cache)
    return A, cache
def forward_propagation(X, parameters, keep_prob):
    caches = []
    Dropouts = []
    A = X
    L = len(parameters) // 2
    np.random.seed(1)
    for l in range(1, L):  # 1 to L-1
        A_prev = A
        A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], activation = "relu")
        D = np.random.rand(A.shape[0], A.shape[1])  # Dropout
        D = D < keep_prob                           # Dropout
        A = A * D                                   # Dropout
        A = A / keep_prob                           # Dropout (inverted dropout scaling)
        Dropouts.append(D)
        caches.append(cache)  # linear_cache, activation_cache = cache
    AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], activation = "sigmoid")
    caches.append(cache)
    return AL, caches, Dropouts
def linear_backward(dZ, cache, lambd):  # dZ=(n_l,m)
    A_prev, W, b = cache  # A_prev.shape=(n_l-1,m), W.shape=(n_l,n_l-1), b.shape=(n_l,1)
    m = A_prev.shape[1]
    dW = np.dot(dZ, A_prev.T)/m + lambd/m * W  # (n_l,m) * (m,n_l-1) = (n_l,n_l-1), plus the (n_l,n_l-1) L2 term
    db = np.sum(dZ, axis=1, keepdims=True)/m  # b.shape=(n_l,1)
    # keepdims=True, otherwise the column vector collapses to a 1-D array (the singleton dimension is dropped)
    dA_prev = np.dot(W.T, dZ)  # (n_l-1,n_l) * (n_l,m) = (n_l-1,m)
    return dA_prev, dW, db
def linear_activation_backward(dA, cache, activation, lambd):
    linear_cache, activation_cache = cache
    if activation == "relu":
        dZ = relu_backward_kroko(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache, lambd)
    elif activation == "sigmoid":
        dZ = sigmoid_backward_kroko(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache, lambd)
    return dA_prev, dW, db
#------------------------------------------------------------------------------
def backward_propagation(AL, X, Y, caches, keep_prob, Dropouts, lambd):
    L = len(caches)
    gradients = {}
    gradients["dZ" + str(L)] = AL - Y  # the alternative dAL = -(np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) could divide by zero
    linear_cache, activation = caches[L-1]
    gradients["dA" + str(L-1)], gradients["dW" + str(L)], gradients["db" + str(L)] = linear_backward(gradients["dZ" + str(L)], linear_cache, lambd)
    gradients["dA" + str(L-1)] = gradients["dA" + str(L-1)] * Dropouts[L-2]/keep_prob
    for l in reversed(range(L-1)):
        current_cache = caches[l]
        gradients["dA" + str(l)], gradients["dW" + str(l+1)], gradients["db" + str(l+1)] = linear_activation_backward(gradients["dA" + str(l+1)], current_cache, "relu", lambd)
        if l > 0: gradients["dA" + str(l)] = gradients["dA" + str(l)] * Dropouts[l-1]/keep_prob  # dA0 gets no dropout
    return gradients
#------------------------------------------------------------------------------
def compute_cost(AL, Y, caches, lambd):  # AL.shape=(n_L,m), Y.shape=(n_L,m)
    m = Y.shape[1]
    L = len(caches)
    cross_entropy = np.nansum(-(Y*np.log(AL)+(1-Y)*np.log(1-AL)), axis=1)/m  # cost function for classification between 0 and 1
    L2_regularization = 0
    for l in range(0, L):
        (linear_cache, activation_cache) = caches[l]
        A, W, b = linear_cache
        L2_regularization += np.nansum(np.square(W)) * 1/m * lambd/2
    cost = cross_entropy + L2_regularization
    cost = np.squeeze(cost)  # dimensions with a single entry are removed, i.e. [[17]] becomes 17
    return cost
def update_parameters(parameters, grads, learning_rate):
    n = len(parameters) // 2  # number of layers in the neural networks
    for k in range(n):
        parameters["W" + str(k+1)] = parameters["W" + str(k+1)] - learning_rate * grads["dW" + str(k+1)]
        parameters["b" + str(k+1)] = parameters["b" + str(k+1)] - learning_rate * grads["db" + str(k+1)]
    return parameters
def predict(X, y, parameters):
    m = X.shape[1]
    p = np.zeros((1, m), dtype=int)  # np.int is deprecated; use the builtin int
    AL, caches, _ = forward_propagation(X, parameters, keep_prob=1.0)
    for i in range(0, AL.shape[1]):
        if AL[0,i] > 0.5: p[0,i] = 1
        else: p[0,i] = 0
    print("Accuracy: " + str(np.mean((p[0,:] == y[0,:]))))
    return p
#------------------------------------------------------------------------------
data = scipy.io.loadmat('/content/drive/MyDrive/Colab Notebooks/PureNumpy/NeuronalNetwork-binaryClassification/data.mat')
train_X = data['X'].T
train_Y = data['y'].T
test_X = data['Xval'].T
test_Y = data['yval'].T
plt.figure("""second figure""")
plt.scatter(train_X[0, :], train_X[1, :], c=train_Y[0,:], s=40, cmap=plt.cm.Spectral);
layers_dims = [train_X.shape[0], 40, 5, 3, 1]
print("train_X.shape: "+str(train_X.shape))
print("train_Y.shape: "+str(train_Y.shape))
parameters = model(train_X, train_Y, layers_dims, keep_prob = 1, learning_rate = 0.09, lambd=0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
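The analytic shortcut `dZ = AL - Y` used in `backward_propagation` (sigmoid output with cross-entropy cost) can be sanity-checked numerically on a one-layer toy model; a minimal, self-contained sketch:

```python
import numpy as np

# Tiny logistic-regression "network" to check the gradient dW = (A - Y) @ X.T / m
def forward(w, b, X):
    return 1 / (1 + np.exp(-(w @ X + b)))

def cost(w, b, X, Y):
    A = forward(w, b, X)
    return -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A))

rng = np.random.RandomState(0)
X = rng.randn(3, 50)
Y = (rng.rand(1, 50) > 0.5).astype(float)
w, b = rng.randn(1, 3), 0.1

# Analytic gradient, as used in the network above
A = forward(w, b, X)
dW = (A - Y) @ X.T / X.shape[1]

# Central-difference numerical gradient for the first weight
eps = 1e-6
w_plus, w_minus = w.copy(), w.copy()
w_plus[0, 0] += eps
w_minus[0, 0] -= eps
num = (cost(w_plus, b, X, Y) - cost(w_minus, b, X, Y)) / (2 * eps)
print(abs(num - dW[0, 0]))  # should be tiny (~1e-9 or smaller)
```

The same check extends to the deep network by perturbing one entry of any `W` or `b` and comparing against the corresponding entry of `grads`.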
# Perturb-seq K562 co-expression
```
import scanpy as sc
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
import itertools
from pybedtools import BedTool
import pickle as pkl
%matplotlib inline
pd.set_option('display.max_columns', None)
import sys
sys.path.append('/home/ssm-user/Github/scrna-parameter-estimation/dist/memento-0.0.6-py3.8.egg')
sys.path.append('/home/ssm-user/Github/misc-seq/miscseq/')
import encode
import memento
data_path = '/data_volume/memento/k562/'
```
### Read the guide-labeled K562 data
From the Perturb-seq paper.
```
adata = sc.read(data_path + 'h5ad/filtered-cellcycle.h5ad')
guides = adata.obs.guides.drop_duplicates().tolist()
guides = [g for g in guides if ('INTER' not in g and 'nan' not in g)]
ko_genes = adata.obs.query('KO == 1')['KO_GENE'].drop_duplicates().tolist()
adata.X = adata.X.tocsr()
```
### Setup memento
```
adata.obs['q'] = 0.07
memento.setup_memento(adata, q_column='q', filter_mean_thresh=0.07)
```
### Get moments from all groups
```
adata_moments = adata.copy().copy()
memento.create_groups(adata_moments, label_columns=['phase'])
memento.compute_1d_moments(adata_moments, min_perc_group=.9)
moment_df = memento.get_1d_moments(adata_moments)
moment_df = moment_df[0].merge(moment_df[1], on='gene', suffixes=('_m', '_v'))
moment_df = moment_df[['gene','sg^G1_m', 'sg^S_m', 'sg^G2M_m', 'sg^G1_v', 'sg^S_v', 'sg^G2M_v']]
```
### Cell cycle 1D moments
```
adata.obs['s_phase'] = (adata.obs.phase == 'S').astype(int)
adata.obs['g1_phase'] = (adata.obs.phase == 'G1').astype(int)
adata.obs['g2m_phase'] = (adata.obs.phase == 'G2M').astype(int)
g1_s = adata[adata.obs.phase.isin(['S', 'G1'])].copy().copy()
s_g2 = adata[adata.obs.phase.isin(['S', 'G2M'])].copy().copy()
g2_g1 = adata[adata.obs.phase.isin(['G1', 'G2M'])].copy().copy()
memento.create_groups(g1_s, label_columns=['s_phase', 'leiden'])
memento.compute_1d_moments(g1_s, min_perc_group=.9)
memento.create_groups(s_g2, label_columns=['g2m_phase', 'leiden'])
memento.compute_1d_moments(s_g2, min_perc_group=.9)
memento.create_groups(g2_g1, label_columns=['g1_phase', 'leiden'])
memento.compute_1d_moments(g2_g1, min_perc_group=.9)
memento.ht_1d_moments(
    g1_s,
    formula_like='1 + s_phase',
    cov_column='s_phase',
    num_boot=20000,
    verbose=1,
    num_cpus=70)
memento.ht_1d_moments(
    s_g2,
    formula_like='1 + g2m_phase',
    cov_column='g2m_phase',
    num_boot=20000,
    verbose=1,
    num_cpus=70)
memento.ht_1d_moments(
    g2_g1,
    formula_like='1 + g1_phase',
    cov_column='g1_phase',
    num_boot=20000,
    verbose=1,
    num_cpus=70)
g1_s.write(data_path + 'cell_cycle/g1_s.h5ad')
s_g2.write(data_path + 'cell_cycle/s_g2.h5ad')
g2_g1.write(data_path + 'cell_cycle/g2_g1.h5ad')
def get_1d_dfs(subset):
    df = memento.get_1d_ht_result(subset)
    df['dv_fdr'] = memento.util._fdrcorrect(df['dv_pval'])
    df['de_fdr'] = memento.util._fdrcorrect(df['de_pval'])
    return df
g1_s_1d = get_1d_dfs(g1_s)
s_g2_1d = get_1d_dfs(s_g2)
g2_g1_1d = get_1d_dfs(g2_g1)
plt.figure(figsize=(10,3))
plt.subplot(1,3,1)
plt.scatter(g1_s_1d['de_coef'], g1_s_1d['dv_coef'], s=1)
plt.subplot(1,3,2)
plt.scatter(s_g2_1d['de_coef'], s_g2_1d['dv_coef'], s=1)
plt.subplot(1,3,3)
plt.scatter(g2_g1_1d['de_coef'], g2_g1_1d['dv_coef'], s=1)
sig_genes = set(
g1_s_1d.query('dv_fdr < 0.01 & (dv_coef < -1 | dv_coef > 1)').gene.tolist() +\
s_g2_1d.query('dv_fdr < 0.01 & (dv_coef < -1 | dv_coef > 1)').gene.tolist() + \
g2_g1_1d.query('dv_fdr < 0.01 & (dv_coef < -1 | dv_coef > 1)').gene.tolist())
```
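`memento.util._fdrcorrect` is used above to turn p-values into FDR-adjusted values. As a reference point, a plain-NumPy sketch of the Benjamini–Hochberg procedure (the helper is assumed, not verified, to implement something equivalent):

```python
import numpy as np

def fdr_bh(pvals):
    """Benjamini-Hochberg adjusted p-values (q-values)."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)  # p_(i) * n / i
    # enforce monotone q-values by taking running minima from the right
    q = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(q, 0, 1)
    return out

print(fdr_bh([0.01, 0.02, 0.03, 0.5]))
```

`statsmodels.stats.multitest.multipletests(..., method='fdr_bh')` is the standard library implementation of the same procedure.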
### GSEA + scatterplots
```
def plot_scatters(gene_set, name, c='k'):
    plt.figure(figsize=(10,3))
    plt.subplot(1,3,1)
    plt.scatter(g1_s_1d['de_coef'], g1_s_1d['dv_coef'], s=1, color='gray')
    plt.scatter(g1_s_1d.query('gene in @gene_set')['de_coef'], g1_s_1d.query('gene in @gene_set')['dv_coef'], s=15, color=c)
    plt.xlabel('G1->S')
    # plt.xlim(-1.2,1.2); plt.ylim(-1.2,1.2);
    plt.subplot(1,3,2)
    plt.scatter(s_g2_1d['de_coef'], s_g2_1d['dv_coef'], s=1, color='gray')
    plt.scatter(s_g2_1d.query('gene in @gene_set')['de_coef'], s_g2_1d.query('gene in @gene_set')['dv_coef'], s=15, color=c)
    plt.title(name)
    plt.xlabel('S->G2M')
    # plt.xlim(-1.2,1.2); plt.ylim(-1.2,1.2);
    plt.subplot(1,3,3)
    plt.scatter(g2_g1_1d['de_coef'], g2_g1_1d['dv_coef'], s=1, color='gray')
    plt.scatter(g2_g1_1d.query('gene in @gene_set')['de_coef'], g2_g1_1d.query('gene in @gene_set')['dv_coef'], s=15, color=c)
    plt.xlabel('G2M->G1')
    # plt.xlim(-1.2,1.2); plt.ylim(-1.2,1.2);
import gseapy as gp
from gseapy.plot import gseaplot
pre_res = gp.prerank(
    rnk=s_g2_1d.query('de_coef > 0 & de_fdr < 0.01')[['gene','dv_coef']].sort_values('dv_coef'),
    gene_sets='GO_Biological_Process_2018',
    processes=4,
    permutation_num=100,  # reduce number to speed up testing
    outdir=None, seed=6)
terms = pre_res.res2d.index
gsea_table = pre_res.res2d.sort_index().sort_values('fdr')
gsea_table.head(5)
terms = gsea_table.index
idx=0
gseaplot(rank_metric=pre_res.ranking, term=terms[idx], **pre_res.results[terms[idx]])
stress_genes = gsea_table['ledge_genes'].iloc[0].split(';')
stress_genes
plot_scatters(stress_genes, 'chaperones')
cell_cycle_genes = [x.strip() for x in open('./regev_lab_cell_cycle_genes.txt')]
plot_scatters(cell_cycle_genes, 'cell cycle')
manual_gene_set = g1_s_1d.query('dv_coef < -1 & de_coef < -0.5').gene.tolist()
plot_scatters(manual_gene_set, 'G1 genes')
manual_gene_set
```
### Get any hits for KOs
```
guides = adata.obs.guides.drop_duplicates().tolist()
guides = [g for g in guides if ('INTER' not in g and 'nan' not in g)]
ko_genes = adata.obs.query('KO == 1')['KO_GENE'].drop_duplicates().tolist()
```
### Get moments for the gene classes
```
for g in ko_genes:
    print(g)
    subset = adata[adata.obs.WT | (adata.obs.KO_GENE == g)].copy().copy()
    memento.create_groups(subset, label_columns=['KO', 'leiden'])
    memento.compute_1d_moments(subset, min_perc_group=.9)
    target_genes = list(set(subset.var.index)-set(ko_genes))
    # memento.compute_2d_moments(subset, gene_pairs=list(itertools.product([g], target_genes)))
    memento.ht_1d_moments(
        subset,
        formula_like='1 + KO',
        cov_column='KO',
        num_boot=10000,
        verbose=1,
        num_cpus=70)
    # subset.write(data_path + '2d_self_h5ad/{}.h5ad'.format(g))
    break
df = memento.get_1d_ht_result(subset)
df['de_fdr'] = memento.util._fdrcorrect(df['de_pval'])
df.query('de_fdr < 0.1')
plt.hist(df['dv_pval'])
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(moment_df.query('gene in @stress_genes').iloc[:, 1:4].values.T)
plt.xticks([0,1,2],['G1', 'S', 'G2M'])
plt.title('Mean')
plt.subplot(1, 2, 2)
plt.plot(moment_df.query('gene in @stress_genes').iloc[:, 4:].values.T)
plt.xticks([0,1,2],['G1', 'S', 'G2M'])
plt.title('Variability')
plt.plot(moment_df.query('gene in @stress_genes').iloc[:, 4:].values.T)
df['dv_pval'].hist(bins=50)
```
### Find self-DC genes
```
for g in ko_genes:
    subset = adata[adata.obs.WT | (adata.obs.KO_GENE == g)].copy().copy()
    memento.create_groups(subset, label_columns=['KO'])
    memento.compute_1d_moments(subset, min_perc_group=.9)
    if g not in subset.var.index:
        continue
    target_genes = list(set(subset.var.index)-set(ko_genes))
    # memento.compute_2d_moments(subset, gene_pairs=list(itertools.product([g], target_genes)))
    memento.ht_1d_moments(
        subset,
        formula_like='1 + KO',
        cov_column='KO',
        num_boot=10000,
        verbose=1,
        num_cpus=70)
    # subset.write(data_path + '2d_self_h5ad/{}.h5ad'.format(g))
    break
df = memento.get_1d_ht_result(subset)
df['de_pval'].hist(bins=50)
for g, result in result_1d_dict.items():
    result.to_csv(data_path + '/result_1d/{}.csv'.format(g), index=False)
```
### Get 1D results
```
result_1d_dict = {g:pd.read_csv(data_path + '/result_1d/{}.csv'.format(g)) for g in guides if ('INTER' not in g and 'nan' not in g)}
g = 'p_sgGABPA_9'
df = result_1d_dict[g]
df.query('de_fdr < 0.1 | dv_fdr < 0.1')
for g in guides:
    df = result_1d_dict[g]
    df['de_fdr'] = memento.util._fdrcorrect(df['de_pval'])
    df['dv_fdr'] = memento.util._fdrcorrect(df['dv_pval'])
    print(g, df.query('de_fdr < 0.15').shape[0], df.query('dv_fdr < 0.15').shape[0])
```
### DV shift plots
```
for g in guides:
    df = result_1d_dict[g]
    plt.figure()
    sns.kdeplot(df['dv_coef']);
    plt.plot([0, 0], [0, 2])
    plt.title(g)
    plt.xlim(-2, 2)
```
### within WT
```
adata[adata.obs.WT].obs.guides.value_counts()
subset = adata[(adata.obs.guides=='p_INTERGENIC393453') | (adata.obs.guides=='p_INTERGENIC216151') ].copy().copy()
memento.create_groups(subset, label_columns=['guides'])
memento.compute_1d_moments(subset, min_perc_group=.9)
memento.ht_1d_moments(
    subset,
    formula_like='1 + guides',
    cov_column='guides',
    num_boot=10000,
    verbose=1,
    num_cpus=14)
wt_result = memento.get_1d_ht_result(subset)
sns.kdeplot(wt_result.dv_coef)
plt.title('WT')
plt.plot([0, 0], [0, 2])
```
### Get the change in magnitude for each guide
```
coef_mag = []
for g, df in result_1d_dict.items():
    coef_mag.append((g, df['de_coef'].abs().median()))
coef_mag = pd.DataFrame(coef_mag, columns=['guide', 'de_mag'])
coef_mag['gene'] = coef_mag['guide'].str.split('_').str[1].str[2:]
```
### Get WT variability of each TF
```
wt_adata = adata[adata.obs['WT']].copy().copy()
tfs = adata.obs.query('KO==1').KO_GENE.drop_duplicates().tolist()
memento.create_groups(wt_adata, label_columns=['KO'])
memento.compute_1d_moments(wt_adata, min_perc_group=.9,)
tf_moments = memento.get_1d_moments(wt_adata, groupby='KO')
```
### Compare WT variability to DE magnitude
```
merged = coef_mag.merge(tf_moments[1], on='gene')
stats.spearmanr(merged['de_mag'], merged['KO_0'])
plt.scatter(merged['de_mag'], merged['KO_0'])
```
### Number of TF binding sites within 5k(?) KB
```
enc = encode.Encode('/home/ssm-user/Github/misc-seq/miscseq/GRCh38Genes.bed')
encode_links = {
'ELK1':'https://www.encodeproject.org/files/ENCFF119SCQ/@@download/ENCFF119SCQ.bed.gz',
'ELF1':'https://www.encodeproject.org/files/ENCFF133TSU/@@download/ENCFF133TSU.bed.gz',
'IRF1':'https://www.encodeproject.org/files/ENCFF203LRV/@@download/ENCFF203LRV.bed.gz',
'ETS1':'https://www.encodeproject.org/files/ENCFF461PRP/@@download/ENCFF461PRP.bed.gz',
'EGR1':'https://www.encodeproject.org/files/ENCFF375RDB/@@download/ENCFF375RDB.bed.gz',
'YY1':'https://www.encodeproject.org/files/ENCFF635XCI/@@download/ENCFF635XCI.bed.gz',
'GABPA':'https://www.encodeproject.org/files/ENCFF173GUD/@@download/ENCFF173GUD.bed.gz',
'E2F4':'https://www.encodeproject.org/files/ENCFF225TLP/@@download/ENCFF225TLP.bed.gz',
'NR2C2':'https://www.encodeproject.org/files/ENCFF263VIC/@@download/ENCFF263VIC.bed.gz',
'CREB1':'https://www.encodeproject.org/files/ENCFF193LLN/@@download/ENCFF193LLN.bed.gz'
}
bed_objs = {tf:enc.get_encode_peaks(link) for tf,link in encode_links.items()}
target_genes = {tf:enc.get_peak_genes_bed(bed_obj, 0).query('distance==0').gene.tolist() for tf, bed_obj in bed_objs.items()}
x = wt_adata[:, 'EGR1'].X.todense().A1
np.bincount(x.astype(int))
x.mean()
plt.hist(x, bins=20)
target_numbers = []
for tf in encode_links.keys():
    target_numbers.append((tf, len(target_genes[tf])))
target_numbers = pd.DataFrame(target_numbers, columns=['gene', 'num_targets'])
merged = target_numbers.merge(tf_moments[1], on='gene')
stats.pearsonr(merged.query('gene != "EGR1"')['num_targets'], merged.query('gene != "EGR1"')['KO_0'])
plt.scatter(merged['num_targets'], merged['KO_0'])
```
### Try with all ENCODE
```
merged
all_encode = pd.read_csv('gene_attribute_matrix.txt', sep='\t', index_col=0, low_memory=False).iloc[2:, 2:].astype(float)
target_counts = pd.DataFrame(all_encode.sum(axis=0), columns=['num_targets']).reset_index().rename(columns={'index':'gene'})
x = target_counts.query('gene in @tfs').sort_values('gene')['num_targets']
y = merged.sort_values('gene')['num_targets']
merged2 = target_counts.merge(tf_moments[1], on='gene')
plt.scatter(merged2['num_targets'], merged2['KO_0'])
merged2
```
### Get gene list
```
wt_adata = adata[adata.obs['WT']].copy()
memento.create_groups(wt_adata, label_columns=['KO'])
memento.compute_1d_moments(wt_adata, min_perc_group=.9)
plt.hist(np.log(wt_adata.uns['memento']['1d_moments']['sg^0'][0]))
wt_high_genes = wt_adata.var.index[np.log(wt_adata.uns['memento']['1d_moments']['sg^0'][0]) > -1].tolist()
```
### Create labels for X genes
```
chr_locations = pd.read_csv('chr_locations.bed', sep='\t').rename(columns={'#chrom':'chr'}).drop_duplicates('geneName')
chr_locations.index=chr_locations.geneName
adata.var = adata.var.join(chr_locations, how='left')
```
### Filter X-chromosomal genes
```
adata_X = adata[:, (adata.var.chr=='chrX') | adata.var.chr.isin(['chr1', 'chr2', 'chr3'])].copy()
adata_X
```
### Escape genes
```
par_genes = """PLCXD1 GTPBP6 PPP2R3B SHOX CRLF2 CSF2RA IL3RA SLC25A6 ASMTL P2RY8 ASMT DHRSXY ZBED1 CD99 XG IL9R SPRY3 VAMP7""".split()
escape_genes = """EIF1AX
USP9X
EIF2S3
CTPS2
TRAPPC2
HDHD1
ZFX
DDX3X
RAB9A
AP1S2
GEMIN8
RPS4X
SMC1A
ZRSR2
STS
FUNDC1
PNPLA4
UBA1
ARSD
NLGN4X
GPM6B
MED14
CD99
RBBP7
SYAP1
PRKX
OFD1
CXorf38
TXLNG
KDM5C
GYG2
TBL1X
CA5B
XIST
RENBP
HCFC1
USP11
PLCXD1
SLC25A6
ASMTL
DHRSX
XG
TMEM27
ARHGAP4
GAB3
PIR
TMEM187
DOCK11
EFHC2
RIBC1
NAP1L3
CA5BP1
MXRA5
KAL1
PCDH11X
KDM6A
PLS3
CITED1
L1CAM
ALG13
BCOR""".split()
```
### Run 1d memento
```
adata_X.obs['is_female'] = (adata_X.obs['Sex'] == 'Female').astype(int)
adata_X.obs.is_female.value_counts()
memento.create_groups(adata_X, label_columns=['is_female', 'ind_cov'])
memento.compute_1d_moments(adata_X, min_perc_group=.9)
memento.ht_1d_moments(
adata_X,
formula_like='1 + is_female',
cov_column='is_female',
num_boot=20000,
verbose=1,
num_cpus=13)
result_1d = memento.get_1d_ht_result(adata_X)
result_1d['dv_fdr'] = memento.util._fdrcorrect(result_1d['dv_pval'])
sns.distplot(result_1d.dv_coef)
x_chr_genes = adata.var.index[adata.var.chr=='chrX'].tolist()
result_1d['escape'] = result_1d['gene'].isin(escape_genes)
result_1d['par'] = result_1d['gene'].isin(par_genes)
result_1d['x_chr'] = result_1d['gene'].isin(x_chr_genes)
sns.distplot(result_1d.query('~x_chr').dv_coef)
sns.distplot(result_1d.query('x_chr').dv_coef)
sns.boxplot(x='x_chr', y='dv_coef', data=result_1d)
dv_genes = result_1d.query('dv_fdr < 0.1').gene.tolist()
result_1d['dv'] = result_1d.gene.isin(dv_genes)
result_1d.query('~dv & ~x_chr & dv_coef > 0').shape
a = [[193, 14],
[23,5]]
stats.chi2_contingency(a)
result_1d.query('dv_fdr < 0.1').x_chr.mean()
result_1d.x_chr.mean()
```
### Run memento for each subset, comparing to control
```
cts = [['ciliated'], ['bc','basal']]
tps = ['3', '6', '9', '24', '48']
stims = ['alpha', 'beta', 'gamma', 'lambda']
import os
done_files = os.listdir('/data_volume/ifn_hbec/binary_test_deep/')
for ct in cts:
    for tp in tps:
        for stim in stims:
            fname = '{}_{}_{}_20200320.h5ad'.format('-'.join(ct), stim, tp)
            if fname in done_files:
                print('Skipping', fname)
                continue
            print('starting', ct, tp, stim)
            adata_stim = adata.copy()[
                adata.obs.cell_type.isin(ct) &
                adata.obs.stim.isin(['control', stim]) &
                adata.obs.time.isin(['0', tp]), :].copy()
            time_converter = {0: 0, int(tp): 1}
            adata_stim.obs['time_step'] = adata_stim.obs['time'].astype(int).apply(lambda x: time_converter[x])
            memento.create_groups(adata_stim, label_columns=['time_step', 'donor'])
            memento.compute_1d_moments(adata_stim, min_perc_group=.9)
            memento.ht_1d_moments(
                adata_stim,
                formula_like='1 + time_step',
                cov_column='time_step',
                num_boot=10000,
                verbose=1,
                num_cpus=13)
            del adata_stim.uns['memento']['mv_regressor']
            adata_stim.write('/data_volume/ifn_hbec/binary_test_deep/{}_{}_{}_20200320.h5ad'.format(
                '-'.join(ct), stim, tp))
```
---
```
#initialization
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
# importing Qiskit
from qiskit import IBMQ, BasicAer
from qiskit.providers.ibmq import least_busy
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.tools.visualization import plot_histogram
def phase_oracle(circuit, register):
    # use the register argument rather than the global qr
    circuit.cz(register[2], register[0])
    circuit.cz(register[2], register[1])

def n_controlled_Z(circuit, controls, target):
    """Implement a Z gate with multiple controls"""
    if (len(controls) > 2):
        raise ValueError('The controlled Z with more than 2 controls is not implemented')
    elif (len(controls) == 1):
        circuit.h(target)
        circuit.cx(controls[0], target)
        circuit.h(target)
    elif (len(controls) == 2):
        circuit.h(target)
        circuit.ccx(controls[0], controls[1], target)
        circuit.h(target)
def inversion_about_average(circuit, register, n, barriers):
    """Apply inversion about the average step of Grover's algorithm."""
    circuit.h(register)
    circuit.x(register)
    if barriers:
        circuit.barrier()
    n_controlled_Z(circuit, [register[j] for j in range(n-1)], register[n-1])
    if barriers:
        circuit.barrier()
    circuit.x(register)
    circuit.h(register)
barriers = True
qr = QuantumRegister(3)
cr = ClassicalRegister(3)
groverCircuit = QuantumCircuit(qr,cr)
groverCircuit.h(qr)
if barriers:
    groverCircuit.barrier()
phase_oracle(groverCircuit, qr)
if barriers:
    groverCircuit.barrier()
inversion_about_average(groverCircuit, qr, 3, barriers)
if barriers:
    groverCircuit.barrier()
groverCircuit.measure(qr,cr)
groverCircuit.draw(output="mpl")
backend = BasicAer.get_backend('qasm_simulator')
shots = 1024
results = execute(groverCircuit, backend=backend, shots=shots).result()
answer = results.get_counts()
answer
#from qiskit import IBMQ
#IBMQ.save_account('YOUR_API_TOKEN')  # paste your IBM Quantum API token here; never commit a real token
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q')
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits <= 5 and
not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)
# Run our circuit on the least busy backend. Monitor the execution of the job in the queue
from qiskit.tools.monitor import job_monitor
shots = 1024
job = execute(groverCircuit, backend=backend, shots=shots)
job_monitor(job, interval = 2)
# Get the results from the computation
results = job.result()
answer = results.get_counts(groverCircuit)
plot_histogram(answer)
```
---
# Final Project Required Coding Activity
Introduction to Python (Unit 2) Fundamentals
All course .ipynb Jupyter Notebooks are available from the project files download topic in Module 1, Section 1.
This activity is based on modules 1 - 4 and is similar to exercises in the Jupyter Notebooks **`Practice_MOD03_IntroPy.ipynb`** and **`Practice_MOD04_IntroPy.ipynb`** which you may have completed as practice.
| **Assignment Requirements** |
|:-------------------------------|
|This program requires the use of **`print`** output and use of **`input`**, **`for`**/**`in`** loop, **`if`**, file **`open`**, **`.readline`**, **`.append`**, **`.strip`**, **`len`**, and function **`def`** and **`return`**. The code should also consider using most of the following (`.upper()` or `.lower()`, `.title()`, `print("hello",end="")`, `else`, `elif`, `range()`, `while`, `.close()`) |
## Program: Element_Quiz
In this program the user enters the name of any 5 of the first 20 Atomic Elements and is given a grade and test report for items correct and incorrect.
### Sample input and output:
```
list any 5 of the first 20 elements in the Periodic table
Enter the name of an element: argon
Enter the name of an element: chlorine
Enter the name of an element: sodium
Enter the name of an element: argon
argon was already entered <--no duplicates allowed
Enter the name of an element: helium
Enter the name of an element: gold
80 % correct
Found: Argon Chlorine Sodium Helium
Not Found: Gold
```
### Create get_names() Function to collect input of 5 unique element names
- The function accepts no arguments and returns a list of 5 input strings (element names)
- define a list to hold the input
- collect input of an element name
- if the input is **not** already in the list, add it to the list
- don't allow empty strings as input
- once 5 unique inputs have been collected, **return** the list
### Create the Program flow
#### import the file into the Jupyter Notebook environment
- use `!curl` to download https://raw.githubusercontent.com/MicrosoftLearning/intropython/master/elements1_20.txt as `elements1_20.txt`
- open the file with the first 20 elements
- read one line at a time to get element names, remove any whitespace (spaces, newlines) and save each element name, as lowercase, into a list
#### Call the get_names() function
- the return value will be the quiz responses list
#### check if responses are in the list of elements
Iterate through 5 responses
- compare each response to the list of 20 elements
- any response that is in the list of 20 elements is correct and should be added to a list of correct responses
- if not in the list of 20 elements then add to a list of incorrect responses
#### calculate the % correct
- find the number of items in the correct responses and divide by 5, this will result in answers like 1.0, .8, .6,...
- to get the % multiply the calculated answer above by 100, this will result in answers like 100, 80, 60...
- *hint: instead of dividing by 5 and then multiplying by 100, the number of correct responses can be multiplied by 20*
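As a quick illustration of the hint (sample values, not part of the assignment):

```
# Hypothetical quiz result with 4 of 5 answers correct
correct = ["argon", "chlorine", "sodium", "helium"]

# Dividing by 5 and then multiplying by 100 ...
score_long = (len(correct) / 5) * 100

# ... gives the same percentage as multiplying the count by 20 directly
score_short = len(correct) * 20

print(score_long, score_short)  # 80.0 80
```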
#### Print output
- print the Score % right
- print each of the correct responses
- print each of the incorrect responses
### create Element_Quiz
```
# [] create Element_Quiz
# [] copy and paste in edX assignment page
!curl https://raw.githubusercontent.com/MicrosoftLearning/intropython/master/elements1_20.txt -o elements.txt
guesses = []
correct = []
incorrect = []
elements = []
elements_file = open("elements.txt","r")
def get_names():
    guesses = []
    i = 0
    while i < 5:
        temp_guess = input("Name one of the first 20 elements of the Periodic Table of Elements: ").lower()
        if temp_guess in guesses:
            print("You have already guessed that element.")
        elif temp_guess == "":
            print("Empty answers are not allowed")
        else:
            guesses.append(temp_guess)
            i += 1
    return guesses

while True:
    temp = elements_file.readline().strip("\n ").lower()
    if temp == "":
        break
    else:
        elements.append(temp)

guess = get_names()
for index in guess:
    if index in elements:
        correct.append(index)
    else:
        incorrect.append(index)

print("Found: ", end="")
for x in correct:
    print(x, end=" ")
print("")
print("Not Found: ", end="")
for x in incorrect:
    print(x, end=" ")
print("")
print("Score: " + str((len(correct)/5)*100) + "%")
elements_file.close()
```
Submit this by creating a Python file (.py) and submitting it in D2L. Be sure to test that it works. Note that for this to run correctly in plain Python rather than Jupyter, the `!curl` shell magic must be replaced; one approach is to import the `os` module and call `os.system(cmd)` with your shell command in the `cmd` variable.
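One possible sketch of that conversion (an illustration, not the required solution; the `curl` fallback assumes it is installed, while `urllib.request` is the portable pure-Python route):

```
import os
import urllib.request

URL = ('https://raw.githubusercontent.com/MicrosoftLearning/'
       'intropython/master/elements1_20.txt')

def download(url, filename):
    """Fetch url into filename without Jupyter's !curl magic."""
    try:
        # Pure-Python, works in any interpreter
        urllib.request.urlretrieve(url, filename)
    except OSError:
        # Fallback: shell out to curl, mirroring the notebook's !curl line
        os.system('curl -sS {} -o {}'.format(url, filename))
```

Calling `download(URL, 'elements.txt')` before opening the file reproduces the notebook's behavior.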
---
# Wind Statistics
### Introduction:
The data have been modified to contain some missing values, identified by NaN.
Using pandas should make this exercise
easier, in particular for the bonus question.
You should be able to perform all of these operations without using
a for loop or other looping construct.
1. The data in 'wind.data' has the following format:
```
"""
Yr Mo Dy RPT VAL ROS KIL SHA BIR DUB CLA MUL CLO BEL MAL
61 1 1 15.04 14.96 13.17 9.29 NaN 9.87 13.67 10.25 10.83 12.58 18.50 15.04
61 1 2 14.71 NaN 10.83 6.50 12.62 7.67 11.50 10.04 9.79 9.67 17.54 13.83
61 1 3 18.50 16.88 12.33 10.13 11.17 6.17 11.25 NaN 8.50 7.67 12.75 12.71
"""
```
The first three columns are year, month and day. The
remaining 12 columns are average windspeeds in knots at 12
locations in Ireland on that day.
For more information about the dataset, go [here](wind.desc).
### Step 1. Import the necessary libraries
```
import pandas as pd
import datetime
```
### Step 2. Import the dataset from this [address](https://github.com/guipsamora/pandas_exercises/blob/master/06_Stats/Wind_Stats/wind.data)
### Step 3. Assign it to a variable called data and replace the first 3 columns by a proper datetime index.
```
# parse_dates gets 0, 1, 2 columns and parses them as the index
data_url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/06_Stats/Wind_Stats/wind.data'
data = pd.read_csv(data_url, sep = r"\s+", parse_dates = [[0,1,2]])
data.head()
```
### Step 4. Year 2061? Do we really have data from this year? Create a function to fix it and apply it.
```
# The problem is that the dates are 2061 and so on...
# function that uses datetime
def fix_century(x):
    year = x.year - 100 if x.year > 1989 else x.year
    return datetime.date(year, x.month, x.day)
# apply the function fix_century on the column and replace the values to the right ones
data['Yr_Mo_Dy'] = data['Yr_Mo_Dy'].apply(fix_century)
# data.info()
data.head()
```
### Step 5. Set the right dates as the index. Pay attention at the data type, it should be datetime64[ns].
```
# transform Yr_Mo_Dy it to date type datetime64
data["Yr_Mo_Dy"] = pd.to_datetime(data["Yr_Mo_Dy"])
# set 'Yr_Mo_Dy' as the index
data = data.set_index('Yr_Mo_Dy')
data.head()
# data.info()
```
### Step 6. Compute how many values are missing for each location over the entire record.
#### They should be ignored in all calculations below.
```
# number of missing values for each location
data.isnull().sum()
```
### Step 7. Compute how many non-missing values there are in total.
```
# number of rows minus the number of missing values for each location
data.shape[0] - data.isnull().sum()
#or
data.notnull().sum()
```
### Step 8. Calculate the mean windspeeds of the windspeeds over all the locations and all the times.
#### A single number for the entire dataset.
```
data.sum().sum() / data.notna().sum().sum()
```
### Step 9. Create a DataFrame called loc_stats and calculate the min, max and mean windspeeds and standard deviations of the windspeeds at each location over all the days
#### A different set of numbers for each location.
```
data.describe(percentiles=[])
```
### Step 10. Create a DataFrame called day_stats and calculate the min, max and mean windspeed and standard deviations of the windspeeds across all the locations at each day.
#### A different set of numbers for each day.
```
# create the dataframe
day_stats = pd.DataFrame()
# this time we determine axis equals to one so it gets each row.
day_stats['min'] = data.min(axis = 1) # min
day_stats['max'] = data.max(axis = 1) # max
day_stats['mean'] = data.mean(axis = 1) # mean
day_stats['std'] = data.std(axis = 1) # standard deviations
day_stats.head()
```
### Step 11. Find the average windspeed in January for each location.
#### Treat January 1961 and January 1962 both as January.
```
data.loc[data.index.month == 1].mean()
```
### Step 12. Downsample the record to a yearly frequency for each location.
```
data.groupby(data.index.to_period('A')).mean()
```
### Step 13. Downsample the record to a monthly frequency for each location.
```
data.groupby(data.index.to_period('M')).mean()
```
### Step 14. Downsample the record to a weekly frequency for each location.
```
data.groupby(data.index.to_period('W')).mean()
```
### Step 15. Calculate the min, max and mean windspeeds and standard deviations of the windspeeds across all locations for each week (assume that the first week starts on January 2 1961) for the first 52 weeks.
```
# resample data to 'W' week and use the functions
weekly = data.resample('W').agg(['min','max','mean','std'])
# slice it for the first 52 weeks and locations
weekly.loc[weekly.index[1:53], "RPT":"MAL"].head(10)
```
---
# Mark and Recapture
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
```
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
    !pip install empiricaldist

# Get utils.py
import os
if not os.path.exists('utils.py'):
    !wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
from utils import set_pyplot_params
set_pyplot_params()
```
This chapter introduces "mark and recapture" experiments, in which we sample individuals from a population, mark them somehow, and then take a second sample from the same population. Seeing how many individuals in the second sample are marked, we can estimate the size of the population.
Experiments like this were originally used in ecology, but turn out to be useful in many other fields. Examples in this chapter include software engineering and epidemiology.
Also, in this chapter we'll work with models that have three parameters, so we'll extend the joint distributions we've been using to three dimensions.
But first, grizzly bears.
## The Grizzly Bear Problem
In 1996 and 1997 researchers deployed bear traps in locations in British Columbia and Alberta, Canada, in an effort to estimate the population of grizzly bears. They describe the experiment in [this article](https://www.researchgate.net/publication/229195465_Estimating_Population_Size_of_Grizzly_Bears_Using_Hair_Capture_DNA_Profiling_and_Mark-Recapture_Analysis).
The "trap" consists of a lure and several strands of barbed wire intended to capture samples of hair from bears that visit the lure. Using the hair samples, the researchers use DNA analysis to identify individual bears.
During the first session, the researchers deployed traps at 76 sites. Returning 10 days later, they obtained 1043 hair samples and identified 23 different bears. During a second 10-day session they obtained 1191 samples from 19 different bears, where 4 of the 19 were from bears they had identified in the first batch.
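As a quick aside (not part of the chapter's argument), the classic Lincoln–Petersen estimator gives a back-of-envelope point estimate from these two counts:

```
# Lincoln-Petersen point estimate: N_hat = K * n / k
K = 23   # bears identified in the first session
n = 19   # bears observed in the second session
k = 4    # bears seen in both sessions

N_hat = K * n / k
print(N_hat)  # 109.25
```

This agrees with the posterior mode of about 109 that the Bayesian analysis finds below.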
To estimate the population of bears from this data, we need a model for the probability that each bear will be observed during each session. As a starting place, we'll make the simplest assumption, that every bear in the population has the same (unknown) probability of being sampled during each session.
With these assumptions we can compute the probability of the data for a range of possible populations.
As an example, let's suppose that the actual population of bears is 100.
After the first session, 23 of the 100 bears have been identified.
During the second session, if we choose 19 bears at random, what is the probability that 4 of them were previously identified?
I'll define
* $N$: actual population size, 100.
* $K$: number of bears identified in the first session, 23.
* $n$: number of bears observed in the second session, 19 in the example.
* $k$: number of bears in the second session that were previously identified, 4.
For given values of $N$, $K$, and $n$, the probability of finding $k$ previously-identified bears is given by the [hypergeometric distribution](https://en.wikipedia.org/wiki/Hypergeometric_distribution):
$$\binom{K}{k} \binom{N-K}{n-k}/ \binom{N}{n}$$
where the [binomial coefficient](https://en.wikipedia.org/wiki/Binomial_coefficient), $\binom{K}{k}$, is the number of subsets of size $k$ we can choose from a population of size $K$.
To understand why, consider:
* The denominator, $\binom{N}{n}$, is the number of subsets of $n$ we could choose from a population of $N$ bears.
* The numerator is the number of subsets that contain $k$ bears from the previously identified $K$ and $n-k$ from the previously unseen $N-K$.
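Before reaching for SciPy, the formula can be checked directly with `math.comb` (a standard-library sketch, not from the text):

```
from math import comb

def hypergeom_pmf(k, N, K, n):
    """Probability of k previously-marked individuals in a sample of n,
    drawn from a population of N containing K marked individuals."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# The example values from the text: N=100, K=23, n=19, k=4
p4 = hypergeom_pmf(4, 100, 23, 19)
```

Summing this over all possible `k` gives 1, as a PMF must.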
SciPy provides `hypergeom`, which we can use to compute this probability for a range of values of $k$.
```
import numpy as np
from scipy.stats import hypergeom
N = 100
K = 23
n = 19
ks = np.arange(12)
ps = hypergeom(N, K, n).pmf(ks)
```
The result is the distribution of $k$ with given parameters $N$, $K$, and $n$.
Here's what it looks like.
```
import matplotlib.pyplot as plt
from utils import decorate
plt.bar(ks, ps)
decorate(xlabel='Number of bears observed twice',
ylabel='PMF',
title='Hypergeometric distribution of k (known population 100)')
```
The most likely value of $k$ is 4, which is the value actually observed in the experiment.
That suggests that $N=100$ is a reasonable estimate of the population, given this data.
We've computed the distribution of $k$ given $N$, $K$, and $n$.
Now let's go the other way: given $K$, $n$, and $k$, how can we estimate the total population, $N$?
## The Update
As a starting place, let's suppose that, prior to this study, an expert estimates that the local bear population is between 50 and 500, and equally likely to be any value in that range.
I'll use `make_uniform` to make a uniform distribution of integers in this range.
```
import numpy as np
from utils import make_uniform
qs = np.arange(50, 501)
prior_N = make_uniform(qs, name='N')
prior_N.shape
```
So that's our prior.
To compute the likelihood of the data, we can use `hypergeom` with constants `K` and `n`, and a range of values of `N`.
```
Ns = prior_N.qs
K = 23
n = 19
k = 4
likelihood = hypergeom(Ns, K, n).pmf(k)
```
We can compute the posterior in the usual way.
```
posterior_N = prior_N * likelihood
posterior_N.normalize()
```
And here's what it looks like.
```
posterior_N.plot(color='C4')
decorate(xlabel='Population of bears (N)',
ylabel='PDF',
title='Posterior distribution of N')
```
The most likely value is 109.
```
posterior_N.max_prob()
```
But the distribution is skewed to the right, so the posterior mean is substantially higher.
```
posterior_N.mean()
```
And the credible interval is quite wide.
```
posterior_N.credible_interval(0.9)
```
This solution is relatively simple, but it turns out we can do a little better if we model the unknown probability of observing a bear explicitly.
## Two Parameter Model
Next we'll try a model with two parameters: the number of bears, `N`, and the probability of observing a bear, `p`.
We'll assume that the probability is the same in both rounds, which is probably reasonable in this case because it is the same kind of trap in the same place.
We'll also assume that the probabilities are independent; that is, the probability a bear is observed in the second round does not depend on whether it was observed in the first round. This assumption might be less reasonable, but for now it is a necessary simplification.
Here are the counts again:
```
K = 23
n = 19
k = 4
```
For this model, I'll express the data in a notation that will make it easier to generalize to more than two rounds:
* `k10` is the number of bears observed in the first round but not the second,
* `k01` is the number of bears observed in the second round but not the first, and
* `k11` is the number of bears observed in both rounds.
Here are their values.
```
k10 = 23 - 4
k01 = 19 - 4
k11 = 4
```
Suppose we know the actual values of `N` and `p`. We can use them to compute the likelihood of this data.
For example, suppose we know that `N=100` and `p=0.2`.
We can use `N` to compute `k00`, which is the number of unobserved bears.
```
N = 100
observed = k01 + k10 + k11
k00 = N - observed
k00
```
For the update, it will be convenient to store the data as a list that represents the number of bears in each category.
```
x = [k00, k01, k10, k11]
x
```
Now, if we know `p=0.2`, we can compute the probability a bear falls in each category. For example, the probability of being observed in both rounds is `p*p`, and the probability of being unobserved in both rounds is `q*q` (where `q=1-p`).
```
p = 0.2
q = 1-p
y = [q*q, q*p, p*q, p*p]
y
```
Now the probability of the data is given by the [multinomial distribution](https://en.wikipedia.org/wiki/Multinomial_distribution):
$$\frac{N!}{\prod x_i!} \prod y_i^{x_i}$$
where $N$ is actual population, $x$ is a sequence with the counts in each category, and $y$ is a sequence of probabilities for each category.
SciPy provides `multinomial`, which provides `pmf`, which computes this probability.
Here is the probability of the data for these values of `N` and `p`.
```
from scipy.stats import multinomial
likelihood = multinomial.pmf(x, N, y)
likelihood
```
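The same quantity can be computed from the formula with the standard library alone (a cross-check sketch, not part of the text):

```
from math import factorial, prod

def multinomial_pmf(x, y):
    """N! / prod(x_i!) * prod(y_i ** x_i), where N = sum(x)."""
    coef = factorial(sum(x))
    for xi in x:
        coef //= factorial(xi)
    return coef * prod(yi ** xi for xi, yi in zip(x, y))

# Counts and category probabilities for N=100, p=0.2
p = 0.2
q = 1 - p
x = [62, 15, 19, 4]          # [k00, k01, k10, k11]
y = [q * q, q * p, p * q, p * p]
likelihood = multinomial_pmf(x, y)
```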
That's the likelihood if we know `N` and `p`, but of course we don't. So we'll choose prior distributions for `N` and `p`, and use the likelihoods to update it.
## The Prior
We'll use `prior_N` again for the prior distribution of `N`, and a uniform prior for the probability of observing a bear, `p`:
```
qs = np.linspace(0, 0.99, num=100)
prior_p = make_uniform(qs, name='p')
```
We can make a joint distribution in the usual way.
```
from utils import make_joint
joint_prior = make_joint(prior_p, prior_N)
joint_prior.shape
```
The result is a Pandas `DataFrame` with values of `N` down the rows and values of `p` across the columns.
However, for this problem it will be convenient to represent the prior distribution as a 1-D `Series` rather than a 2-D `DataFrame`.
We can convert from one format to the other using `stack`.
```
from empiricaldist import Pmf
joint_pmf = Pmf(joint_prior.stack())
joint_pmf.head(3)
type(joint_pmf)
type(joint_pmf.index)
joint_pmf.shape
```
The result is a `Pmf` whose index is a `MultiIndex`.
A `MultiIndex` can have more than one column; in this example, the first column contains values of `N` and the second column contains values of `p`.
The `Pmf` has one row (and one prior probability) for each possible pair of parameters `N` and `p`.
So the total number of rows is the product of the lengths of `prior_N` and `prior_p`.
Now we have to compute the likelihood of the data for each pair of parameters.
## The Update
To allocate space for the likelihoods, it is convenient to make a copy of `joint_pmf`:
```
likelihood = joint_pmf.copy()
```
As we loop through the pairs of parameters, we compute the likelihood of the data as in the previous section, and then store the result as an element of `likelihood`.
```
observed = k01 + k10 + k11
for N, p in joint_pmf.index:
    k00 = N - observed
    x = [k00, k01, k10, k11]
    q = 1-p
    y = [q*q, q*p, p*q, p*p]
    likelihood[N, p] = multinomial.pmf(x, N, y)
```
Now we can compute the posterior in the usual way.
```
posterior_pmf = joint_pmf * likelihood
posterior_pmf.normalize()
```
We'll use `plot_contour` again to visualize the joint posterior distribution.
But remember that the posterior distribution we just computed is represented as a `Pmf`, which is a `Series`, and `plot_contour` expects a `DataFrame`.
Since we used `stack` to convert from a `DataFrame` to a `Series`, we can use `unstack` to go the other way.
```
joint_posterior = posterior_pmf.unstack()
```
And here's what the result looks like.
```
from utils import plot_contour
plot_contour(joint_posterior)
decorate(title='Joint posterior distribution of N and p')
```
The most likely values of `N` are near 100, as in the previous model. The most likely values of `p` are near 0.2.
The shape of this contour indicates that these parameters are correlated. If `p` is near the low end of the range, the most likely values of `N` are higher; if `p` is near the high end of the range, `N` is lower.
Now that we have a posterior `DataFrame`, we can extract the marginal distributions in the usual way.
```
from utils import marginal
posterior2_p = marginal(joint_posterior, 0)
posterior2_N = marginal(joint_posterior, 1)
```
Here's the posterior distribution for `p`:
```
posterior2_p.plot(color='C1')
decorate(xlabel='Probability of observing a bear',
ylabel='PDF',
title='Posterior marginal distribution of p')
```
The most likely values are near 0.2.
Here's the posterior distribution for `N` based on the two-parameter model, along with the posterior we got using the one-parameter (hypergeometric) model.
```
posterior_N.plot(label='one-parameter model', color='C4')
posterior2_N.plot(label='two-parameter model', color='C1')
decorate(xlabel='Population of bears (N)',
ylabel='PDF',
title='Posterior marginal distribution of N')
```
With the two-parameter model, the mean is a little lower and the 90% credible interval is a little narrower.
```
print(posterior_N.mean(),
posterior_N.credible_interval(0.9))
print(posterior2_N.mean(),
posterior2_N.credible_interval(0.9))
```
The two-parameter model yields a narrower posterior distribution for `N`, compared to the one-parameter model, because it takes advantage of an additional source of information: the consistency of the two observations.
To see how this helps, consider a scenario where `N` is relatively low, like 138 (the posterior mean of the two-parameter model).
```
N1 = 138
```
Given that we saw 23 bears during the first trial and 19 during the second, we can estimate the corresponding value of `p`.
```
mean = (23 + 19) / 2
p = mean/N1
p
```
With these parameters, how much variability do you expect in the number of bears from one trial to the next? We can quantify that by computing the standard deviation of the binomial distribution with these parameters.
```
from scipy.stats import binom
binom(N1, p).std()
```
Now let's consider a second scenario where `N` is 173, the posterior mean of the one-parameter model. The corresponding value of `p` is lower.
```
N2 = 173
p = mean/N2
p
```
In this scenario, the variation we expect to see from one trial to the next is higher.
```
binom(N2, p).std()
```
So if the number of bears we observe is the same in both trials, that would be evidence for lower values of `N`, where we expect more consistency.
If the number of bears is substantially different between the two trials, that would be evidence for higher values of `N`.
In the actual data, the difference between the two trials is low, which is why the posterior mean of the two-parameter model is lower.
The two-parameter model takes advantage of additional information, which is why the credible interval is narrower.
## Joint and Marginal Distributions
Marginal distributions are called "marginal" because in a common visualization they appear in the margins of the plot.
Seaborn provides a class called `JointGrid` that creates this visualization.
The following function uses it to show the joint and marginal distributions in a single plot.
```
import pandas as pd
from seaborn import JointGrid
def joint_plot(joint, **options):
    """Show joint and marginal distributions.

    joint: DataFrame that represents a joint distribution
    options: passed to JointGrid
    """
    # get the names of the parameters
    x = joint.columns.name
    x = 'x' if x is None else x
    y = joint.index.name
    y = 'y' if y is None else y

    # make a JointGrid with minimal data
    data = pd.DataFrame({x: [0], y: [0]})
    g = JointGrid(x=x, y=y, data=data, **options)

    # replace the contour plot
    g.ax_joint.contour(joint.columns,
                       joint.index,
                       joint,
                       cmap='viridis')

    # replace the marginals
    marginal_x = marginal(joint, 0)
    g.ax_marg_x.plot(marginal_x.qs, marginal_x.ps)
    marginal_y = marginal(joint, 1)
    g.ax_marg_y.plot(marginal_y.ps, marginal_y.qs)

joint_plot(joint_posterior)
```
A `JointGrid` is a concise way to represent the joint and marginal distributions visually.
## The Lincoln Index Problem
In [an excellent blog post](http://www.johndcook.com/blog/2010/07/13/lincoln-index/), John D. Cook wrote about the Lincoln index, which is a way to estimate the
number of errors in a document (or program) by comparing results from
two independent testers.
Here's his presentation of the problem:
> "Suppose you have a tester who finds 20 bugs in your program. You
want to estimate how many bugs are really in the program. You know
there are at least 20 bugs, and if you have supreme confidence in your
tester, you may suppose there are around 20 bugs. But maybe your
tester isn't very good. Maybe there are hundreds of bugs. How can you
have any idea how many bugs there are? There's no way to know with one
tester. But if you have two testers, you can get a good idea, even if
you don't know how skilled the testers are."
Suppose the first tester finds 20 bugs, the second finds 15, and they
find 3 in common; how can we estimate the number of bugs?
This problem is similar to the Grizzly Bear problem, so I'll represent the data in the same way.
```
k10 = 20 - 3
k01 = 15 - 3
k11 = 3
```
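Before the Bayesian treatment, it's worth noting the classical Lincoln index point estimate, $\hat{N} = n_1 n_2 / k_{11}$, where $n_1$ and $n_2$ are the totals found by each tester:

```python
# Totals found by each tester, reconstructed from the overlap counts.
n1 = 17 + 3   # first tester: 20 bugs
n2 = 12 + 3   # second tester: 15 bugs
k11 = 3       # bugs found by both

# Classical Lincoln index: estimated total number of bugs.
lincoln_estimate = n1 * n2 / k11
print(lincoln_estimate)  # → 100.0
```

Unlike the posterior distributions we compute below, this point estimate comes with no quantification of uncertainty.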
But in this case it is probably not reasonable to assume that the testers have the same probability of finding a bug.
So I'll define two parameters, `p0` for the probability that the first tester finds a bug, and `p1` for the probability that the second tester finds a bug.
I will continue to assume that the probabilities are independent, which is like assuming that all bugs are equally easy to find. That might not be a good assumption, but let's stick with it for now.
As an example, suppose we know that the probabilities are 0.2 and 0.15.
```
p0, p1 = 0.2, 0.15
```
We can compute the array of probabilities, `y`, like this:
```
def compute_probs(p0, p1):
    """Computes the probability for each of 4 categories."""
    q0 = 1 - p0
    q1 = 1 - p1
    return [q0*q1, q0*p1, p0*q1, p0*p1]

y = compute_probs(p0, p1)
y
```
With these probabilities, there is a
68% chance that neither tester finds the bug and a
3% chance that both do.
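We can confirm those two numbers directly from the definition:

```python
p0, p1 = 0.2, 0.15

# Probability that neither tester finds a given bug.
p_neither = (1 - p0) * (1 - p1)   # 0.8 * 0.85 = 0.68

# Probability that both testers find it.
p_both = p0 * p1                  # 0.2 * 0.15 = 0.03
print(p_neither, p_both)
```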
Pretending that these probabilities are known, we can compute the posterior distribution for `N`.
Here's a prior distribution that's uniform from 32 to 350 bugs.
```
qs = np.arange(32, 350, step=5)
prior_N = make_uniform(qs, name='N')
prior_N.head(3)
```
I'll put the data in an array, with 0 as a place-keeper for the unknown value `k00`.
```
data = np.array([0, k01, k10, k11])
```
And here are the likelihoods for each value of `N`, with `y` held constant.
```
likelihood = prior_N.copy()
observed = data.sum()
x = data.copy()

for N in prior_N.qs:
    x[0] = N - observed
    likelihood[N] = multinomial.pmf(x, N, y)
```
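To see what `multinomial.pmf` is doing at a single value of `N`, here's a self-contained example using the bug counts and known probabilities from above (`N=100` is just an illustrative value, not a result):

```python
from scipy.stats import multinomial

# Category probabilities for p0=0.2, p1=0.15: neither, second only, first only, both.
y = [0.68, 0.12, 0.17, 0.03]

# Counts: the unknown k00 filled in as N minus the 32 observed bugs,
# followed by k01, k10, k11.
N = 100
counts = [N - 32, 12, 17, 3]

likelihood_at_N = multinomial.pmf(counts, N, y)
print(likelihood_at_N)
```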
We can compute the posterior in the usual way.
```
posterior_N = prior_N * likelihood
posterior_N.normalize()
```
And here's what it looks like.
```
posterior_N.plot(color='C4')

decorate(xlabel='Number of bugs (N)',
         ylabel='PMF',
         title='Posterior marginal distribution of N with known p0, p1')

print(posterior_N.mean(),
      posterior_N.credible_interval(0.9))
```
With the assumption that `p0` and `p1` are known to be `0.2` and `0.15`, the posterior mean is 102 with 90% credible interval (77, 127).
But this result is based on the assumption that we know the probabilities, and we don't.
## Three-parameter Model
What we need is a model with three parameters: `N`, `p0`, and `p1`.
We'll use `prior_N` again for the prior distribution of `N`, and here are the priors for `p0` and `p1`:
```
qs = np.linspace(0, 1, num=51)
prior_p0 = make_uniform(qs, name='p0')
prior_p1 = make_uniform(qs, name='p1')
```
Now we have to assemble them into a joint prior with three dimensions.
I'll start by putting the first two into a `DataFrame`.
```
joint2 = make_joint(prior_p0, prior_N)
joint2.shape
```
Now I'll stack them, as in the previous example, and put the result in a `Pmf`.
```
joint2_pmf = Pmf(joint2.stack())
joint2_pmf.head(3)
```
We can use `make_joint` again to add in the third parameter.
```
joint3 = make_joint(prior_p1, joint2_pmf)
joint3.shape
```
The result is a `DataFrame` with values of `N` and `p0` in a `MultiIndex` that goes down the rows and values of `p1` in an index that goes across the columns.
```
joint3.head(3)
```
Now I'll apply `stack` again:
```
joint3_pmf = Pmf(joint3.stack())
joint3_pmf.head(3)
```
The result is a `Pmf` with a three-column `MultiIndex` containing all possible triplets of parameters.
The number of rows is the product of the number of values in all three priors, which is almost 170,000.
```
joint3_pmf.shape
```
That's still small enough to be practical, but it will take longer to compute the likelihoods than in the previous examples.
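The size of the grid is easy to check: `prior_N` has 64 values (32 to 350 in steps of 5) and each probability prior has 51:

```python
import numpy as np

n_N = len(np.arange(32, 350, step=5))   # 64 values of N
n_p = 51                                # values in each probability prior

grid_size = n_N * n_p * n_p
print(grid_size)  # → 166464
```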
Here's the loop that computes the likelihoods; it's similar to the one in the previous section:
```
likelihood = joint3_pmf.copy()
observed = data.sum()
x = data.copy()

for N, p0, p1 in joint3_pmf.index:
    x[0] = N - observed
    y = compute_probs(p0, p1)
    likelihood[N, p0, p1] = multinomial.pmf(x, N, y)
```
We can compute the posterior in the usual way.
```
posterior_pmf = joint3_pmf * likelihood
posterior_pmf.normalize()
```
Now, to extract the marginal distributions, we could unstack the joint posterior as we did in the previous section.
But `Pmf` provides a version of `marginal` that works with a `Pmf` rather than a `DataFrame`.
Here's how we use it to get the posterior distribution for `N`.
```
posterior_N = posterior_pmf.marginal(0)
```
And here's what it looks like.
```
posterior_N.plot(color='C4')

decorate(xlabel='Number of bugs (N)',
         ylabel='PDF',
         title='Posterior marginal distribution of N')

posterior_N.mean()
The posterior mean is 105 bugs, which suggests that there are still many bugs the testers have not found.
Here are the posteriors for `p0` and `p1`.
```
posterior_p0 = posterior_pmf.marginal(1)
posterior_p1 = posterior_pmf.marginal(2)

posterior_p0.plot(label='p0')
posterior_p1.plot(label='p1')
decorate(xlabel='Probability of finding a bug',
         ylabel='PDF',
         title='Posterior marginal distributions of p0 and p1')

posterior_p0.mean(), posterior_p0.credible_interval(0.9)
posterior_p1.mean(), posterior_p1.credible_interval(0.9)
```
Comparing the posterior distributions, the tester who found more bugs probably has a higher probability of finding bugs. The posterior means are about 23% and 18%. But the distributions overlap, so we should not be too sure.
This is the first example we've seen with three parameters.
As the number of parameters increases, the number of combinations increases quickly.
The method we've been using so far, enumerating all possible combinations, becomes impractical if the number of parameters is more than 3 or 4.
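To see how quickly the enumeration grows, here's a quick back-of-the-envelope computation (assuming, as above, 64 grid values for `N` and 51 for each probability parameter):

```python
# Grid size as a function of the number of probability parameters,
# with 64 values of N and 51 values per probability parameter.
for n_params in range(1, 6):
    print(n_params, 64 * 51 ** n_params)
```

With three probability parameters the grid already has more than 8 million cells, and with five it exceeds 10 billion.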
However there are other methods that can handle models with many more parameters, as we'll see in <<_MCMC>>.
## Summary
The problems in this chapter are examples of [mark and recapture](https://en.wikipedia.org/wiki/Mark_and_recapture) experiments, which are used in ecology to estimate animal populations. They also have applications in engineering, as in the Lincoln index problem. And in the exercises you'll see that they are used in epidemiology, too.
This chapter introduces two new probability distributions:
* The hypergeometric distribution is a variation of the binomial distribution in which samples are drawn from the population without replacement.
* The multinomial distribution is a generalization of the binomial distribution where there are more than two possible outcomes.
Also in this chapter, we saw the first example of a model with three parameters. We'll see more in subsequent chapters.
## Exercises
**Exercise:** [In an excellent paper](http://chao.stat.nthu.edu.tw/wordpress/paper/110.pdf), Anne Chao explains how mark and recapture experiments are used in epidemiology to estimate the prevalence of a disease in a human population based on multiple incomplete lists of cases.
One of the examples in that paper is a study "to estimate the number of people who were infected by hepatitis in an outbreak that occurred in and around a college in northern Taiwan from April to July 1995."
Three lists of cases were available:
1. 135 cases identified using a serum test.
2. 122 cases reported by local hospitals.
3. 126 cases reported on questionnaires collected by epidemiologists.
In this exercise, we'll use only the first two lists; in the next exercise we'll bring in the third list.
Make a joint prior and update it using this data, then compute the posterior mean of `N` and a 90% credible interval.
The following array contains 0 as a place-holder for the unknown value of `k00`, followed by known values of `k01`, `k10`, and `k11`.
```
data2 = np.array([0, 73, 86, 49])
```
These data indicate that there are 73 cases on the second list that are not on the first, 86 cases on the first list that are not on the second, and 49 cases on both lists.
To keep things simple, we'll assume that each case has the same probability of appearing on each list. So we'll use a two-parameter model where `N` is the total number of cases and `p` is the probability that any case appears on any list.
Here are priors you can start with (but feel free to modify them).
```
qs = np.arange(200, 500, step=5)
prior_N = make_uniform(qs, name='N')
prior_N.head(3)
qs = np.linspace(0, 0.98, num=50)
prior_p = make_uniform(qs, name='p')
prior_p.head(3)
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
```
**Exercise:** Now let's do the version of the problem with all three lists. Here's the data from Chao's paper:
```
Hepatitis A virus list
P Q E Data
1 1 1 k111 =28
1 1 0 k110 =21
1 0 1 k101 =17
1 0 0 k100 =69
0 1 1 k011 =18
0 1 0 k010 =55
0 0 1 k001 =63
0 0 0 k000 =??
```
Write a loop that computes the likelihood of the data for each pair of parameters, then update the prior and compute the posterior mean of `N`. How does it compare to the results using only the first two lists?
Here's the data in a NumPy array (in reverse order).
```
data3 = np.array([0, 63, 55, 18, 69, 17, 21, 28])
```
Again, the first value is a place-keeper for the unknown `k000`. The second value is `k001`, which means there are 63 cases that appear on the third list but not the first two. And the last value is `k111`, which means there are 28 cases that appear on all three lists.
In the two-list version of the problem we computed `ps` by enumerating the combinations of `p` and `q`.
```
q = 1-p
ps = [q*q, q*p, p*q, p*p]
```
We could do the same thing for the three-list version, computing the probability for each of the eight categories. But we can generalize it by recognizing that we are computing the Cartesian product of `p` and `q`, repeated once for each list.
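Using only the standard library, the eight category probabilities can be computed with `itertools.product`; the pandas-based function below does the same thing in tabular form:

```python
from itertools import product
from math import prod

p = 0.2
q = 1 - p

# One factor of (q, p) per list; multiplying across each triple
# gives the probability of the corresponding category.
probs = [prod(triple) for triple in product((q, p), repeat=3)]
print(len(probs), sum(probs))  # 8 categories; probabilities sum to 1
```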
And we can use the following function (based on [this StackOverflow answer](https://stackoverflow.com/questions/58242078/cartesian-product-of-arbitrary-lists-in-pandas/58242079#58242079)) to compute Cartesian products:
```
def cartesian_product(*args, **options):
    """Cartesian product of sequences.

    args: any number of sequences
    options: passed to `MultiIndex.from_product`

    returns: DataFrame with one column per sequence
    """
    index = pd.MultiIndex.from_product(args, **options)
    return pd.DataFrame(index=index).reset_index()
```
Here's an example with `p=0.2`:
```
p = 0.2
t = (1-p, p)
df = cartesian_product(t, t, t)
df
```
To compute the probability for each category, we take the product across the columns:
```
y = df.prod(axis=1)
y
```
Now you finish it off from there.
```
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
```
# UniProtClient
Python classes in this package allow convenient access to [UniProt](https://www.uniprot.org/) for protein ID mapping and information retrieval.
## Installation in Conda
If not already installed, install **pip** and **git**:
```
conda install git
conda install pip
```
Then install via pip:
```
pip install git+https://github.com/c-feldmann/UniProtClient
```
## Usage
### Mapping
Protein IDs differ from database to database. The class *UniProtMapper* can be utilized for mapping of protein IDs from one database to corresponding IDs of another database, specified by [letter codes](https://www.uniprot.org/help/api_idmapping).
```
from UniProtClient import UniProtMapper
origin_database = 'P_GI' # NCBI GI number
target_database = 'ACC' # UniProt Accession
gi_2_acc_mappig = UniProtMapper(origin_database, target_database)
```
The obtained object has a function called `map_protein_ids`, which takes a list of strings with protein IDs as input, returning a pandas DataFrame. The DataFrame has two columns: "From" and "To" referring to the origin and target ID, respectively.
```
gi_numbers = ['224586929', '224586929', '4758208'] # IDs should be represented as a list of strings
# a pandas DataFrame is returned containing the columns "From" and "To"
mapping_df = gi_2_acc_mappig.map_protein_ids(gi_numbers)
uniprot_accessions = mapping_df['To'].tolist()
mapping_df
```
### Protein information
UniProt provides a variety of protein-specific information, such as protein family, organism, function, EC number, and many more.
The class *UniProtProteinInfo* is initialized with [column identifiers](https://www.uniprot.org/help/uniprotkb%5Fcolumn%5Fnames) specifying the requested information. Spaces in column names should be substituted with underscores.
If no columns are specified the default is used:
| Column-ID |
|:------:|
| id |
| entry_name |
| protein_names |
| families |
| organism |
| ec |
| genes(PREFERRED) |
| go(molecular_function) |
The column "protein_names" contains all protein names, where secondary names are given in brackets or parenthesis. If this column is requested, the primary name is extracted and added as a new column, called "primary_name".
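The extraction itself is not shown here, but a minimal sketch of the idea is to split off everything from the first parenthesized or bracketed secondary name onward (this is an illustration, not the package's actual implementation):

```python
import re

def extract_primary_name(protein_names):
    """Return the part of a UniProt protein-name string before the first
    parenthesized or bracketed secondary name (illustrative only)."""
    return re.split(r"\s*[\(\[]", protein_names, maxsplit=1)[0].strip()

print(extract_primary_name("Epidermal growth factor receptor (EGFR) [EC 2.7.10.1]"))
# → Epidermal growth factor receptor
```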
```
from UniProtClient import UniProtProteinInfo
info = UniProtProteinInfo()
info.load_protein_info(["B4DZW8", "Q9Y2R2", "P51452"])
```
#### Protein Families
If downloaded, the 'protein_families' string is parsed automatically. It is split into the categories subfamily, family, and superfamily.
Some proteins belong to multiple families. The default behaviour is to extract the individual categories and merge them into a `; `-separated string.
```
# Widen the displayed column width; not important for the extraction itself.
import pandas as pd
pd.set_option('max_colwidth', 400)
info = UniProtProteinInfo(merge_multi_fam_associations="string") # Default behaviour
info.load_protein_info(["Q923J1"])[["organism", "subfamily", "family", "superfamily"]]
```
Setting `merge_multi_fam_associations` to `'list'` will arrange the family associations in a list. To keep types consistent, this applies to proteins with only one family as well.
```
info = UniProtProteinInfo(merge_multi_fam_associations="list")
info.load_protein_info(["Q923J1", "Q9Y2R2"])[["organism", "subfamily", "family", "superfamily"]]
```
Setting `merge_multi_fam_associations` to `None` will create an individual row for each family association, with the remaining protein information repeated.
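This row-per-association layout is the same shape that pandas' `explode` produces from a list-valued column; here's a minimal illustration with made-up data:

```python
import pandas as pd

# Hypothetical protein with two family associations.
df = pd.DataFrame({
    "id": ["Q923J1"],
    "family": [["Ser/Thr protein kinase family", "STE Ser/Thr protein kinase family"]],
})

# One row per family association; the other columns are repeated.
exploded = df.explode("family").reset_index(drop=True)
print(exploded)
```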
```
info = UniProtProteinInfo(merge_multi_fam_associations=None)
info.load_protein_info(["Q923J1"])[["organism", "subfamily", "family", "superfamily"]]
```
# Writing Low-Level TensorFlow Code
**Learning Objectives**
1. Practice defining and performing basic operations on constant Tensors
2. Use Tensorflow's automatic differentiation capability
3. Learn how to train a linear regression from scratch with TensorFlow
## Introduction
In this notebook, we will start by reviewing the main operations on Tensors in TensorFlow and understand how to manipulate TensorFlow Variables. We explain how these are compatible with Python built-in lists and numpy arrays.
Then we will jump to the problem of training a linear regression from scratch with gradient descent. The first order of business will be to understand how to compute the gradients of a function (the loss here) with respect to some of its arguments (the model weights here). The TensorFlow construct allowing us to do that is `tf.GradientTape`, which we will describe.
Finally, we will create a simple training loop to learn the weights of a 1-dimensional linear regression using synthetic data generated from a linear model.
As a bonus exercise, we will do the same for data generated from a non-linear model, forcing us to manually engineer non-linear features to improve our linear model's performance.
Each learning objective will correspond to a __#TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/write_low_level_code.ipynb)
```
import numpy as np
from matplotlib import pyplot as plt
import tensorflow as tf
print(tf.__version__)
```
## Operations on Tensors
### Variables and Constants
Tensors in TensorFlow are either constants (`tf.constant`) or variables (`tf.Variable`).
Constant values cannot be changed, while variable values can be.
The main difference is that instances of `tf.Variable` have methods allowing us to change
their values while tensors constructed with `tf.constant` don't have these methods, and
therefore their values can not be changed. When you want to change the value of a `tf.Variable`
`x`, use one of the following methods:
* `x.assign(new_value)`
* `x.assign_add(value_to_be_added)`
* `x.assign_sub(value_to_be_subtracted)`
```
x = tf.constant([2, 3, 4])
x
x = tf.Variable(2.0, dtype=tf.float32, name='my_variable')
x.assign(45.8)
x
x.assign_add(4)
x
x.assign_sub(3)
x
```
### Point-wise operations
Tensorflow offers similar point-wise tensor operations as numpy does:
* `tf.add` allows us to add the components of a tensor
* `tf.multiply` allows us to multiply the components of a tensor
* `tf.subtract` allows us to subtract the components of a tensor
* `tf.math.*` contains the usual math operations to be applied on the components of a tensor
* and many more...
Most of the standard arithmetic operations (`tf.add`, `tf.subtract`, etc.) are overloaded by the usual corresponding arithmetic symbols (`+`, `-`, etc.).
**Lab Task #1:** Performing basic operations on Tensors
1. In the first cell, define two constants `a` and `b` and compute their sum in c and d respectively, below using `tf.add` and `+` and verify both operations produce the same values.
2. In the second cell, compute the product of the constants `a` and `b` below using `tf.multiply` and `*` and verify both operations produce the same values.
3. In the third cell, compute the exponential of the constant `a` using `tf.math.exp`. Note, you'll need to specify the type for this operation.
```
# TODO 1a
a = # TODO -- Your code here.
b = # TODO -- Your code here.
c = # TODO -- Your code here.
d = # TODO -- Your code here.
print("c:", c)
print("d:", d)
# TODO 1b
a = # TODO -- Your code here.
b = # TODO -- Your code here.
c = # TODO -- Your code here.
d = # TODO -- Your code here.
print("c:", c)
print("d:", d)
# TODO 1c
# tf.math.exp expects floats so we need to explicitly give the type
a = # TODO -- Your code here.
b = # TODO -- Your code here.
print("b:", b)
```
### NumPy Interoperability
In addition to native TF tensors, tensorflow operations can take native python types and NumPy arrays as operands.
```
# native python list
a_py = [1, 2]
b_py = [3, 4]
tf.add(a_py, b_py)
# numpy arrays
a_np = np.array([1, 2])
b_np = np.array([3, 4])
tf.add(a_np, b_np)
# native TF tensor
a_tf = tf.constant([1, 2])
b_tf = tf.constant([3, 4])
tf.add(a_tf, b_tf)
```
You can convert a native TF tensor to a NumPy array using .numpy()
```
a_tf.numpy()
```
## Linear Regression
Now let's use low level tensorflow operations to implement linear regression.
Later in the course you'll see abstracted ways to do this using high level TensorFlow.
### Toy Dataset
We'll model the following function:
\begin{equation}
y= 2x + 10
\end{equation}
```
X = tf.constant(range(10), dtype=tf.float32)
Y = 2 * X + 10
print("X:{}".format(X))
print("Y:{}".format(Y))
```
Let's also create a test dataset to evaluate our models:
```
X_test = tf.constant(range(10, 20), dtype=tf.float32)
Y_test = 2 * X_test + 10
print("X_test:{}".format(X_test))
print("Y_test:{}".format(Y_test))
```
#### Loss Function
The simplest model we can build is a model that for each value of x returns the sample mean of the training set:
```
y_mean = Y.numpy().mean()

def predict_mean(X):
    y_hat = [y_mean] * len(X)
    return y_hat

Y_hat = predict_mean(X_test)
```
Using mean squared error, our loss is:
\begin{equation}
MSE = \frac{1}{m}\sum_{i=1}^{m}(\hat{Y}_i-Y_i)^2
\end{equation}
For this simple model the loss is then:
```
errors = (Y_hat - Y)**2
loss = tf.reduce_mean(errors)
loss.numpy()
```
This value for the MSE loss above will give us a baseline to compare how a more complex model is doing.
Now, if $\hat{Y}$ represents the vector containing our model's predictions when we use a linear regression model
\begin{equation}
\hat{Y} = w_0X + w_1
\end{equation}
we can write a loss function taking as arguments the coefficients of the model:
```
def loss_mse(X, Y, w0, w1):
    Y_hat = w0 * X + w1
    errors = (Y_hat - Y)**2
    return tf.reduce_mean(errors)
```
### Gradient Function
To use gradient descent we need to take the partial derivatives of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to!
During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables.
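For this particular loss, the partial derivatives are simple enough to write down by hand, which gives a useful cross-check for the automatic version later: $\partial L/\partial w_0 = \frac{2}{m}\sum_i (\hat{Y}_i - Y_i) X_i$ and $\partial L/\partial w_1 = \frac{2}{m}\sum_i (\hat{Y}_i - Y_i)$. Here's a numpy sketch (independent of TensorFlow) comparing them against a finite-difference approximation:

```python
import numpy as np

X = np.arange(10, dtype=np.float64)
Y = 2 * X + 10

def loss(w0, w1):
    # Mean squared error of the linear model w0 * X + w1.
    return np.mean((w0 * X + w1 - Y) ** 2)

def manual_gradients(w0, w1):
    # Hand-derived partial derivatives of the MSE loss.
    err = w0 * X + w1 - Y
    return 2 * np.mean(err * X), 2 * np.mean(err)

# Central finite-difference check at (w0, w1) = (0, 0).
eps = 1e-6
dw0, dw1 = manual_gradients(0.0, 0.0)
fd_w0 = (loss(eps, 0.0) - loss(-eps, 0.0)) / (2 * eps)
fd_w1 = (loss(0.0, eps) - loss(0.0, -eps)) / (2 * eps)
print(dw0, fd_w0, dw1, fd_w1)
```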
For that we need to wrap our loss computation within the context of `tf.GradientTape` instance which will record gradient information:
```python
with tf.GradientTape() as tape:
    loss = # computation
```
This will allow us to later compute the gradients of any tensor computed within the `tf.GradientTape` context with respect to instances of `tf.Variable`:
```python
gradients = tape.gradient(loss, [w0, w1])
```
We illustrate this procedure by computing the loss gradients with respect to the model weights:
**Lab Task #2:** Complete the function below to compute the loss gradients with respect to the model weights `w0` and `w1`.
```
# TODO 2
def compute_gradients(X, Y, w0, w1):
    # TODO -- Your code here.

w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
dw0, dw1 = compute_gradients(X, Y, w0, w1)
print("dw0:", dw0.numpy())
print("dw1", dw1.numpy())
```
### Training Loop
Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity.
**Lab Task #3:** Complete the `for` loop below to train a linear regression.
1. Use `compute_gradients` to compute `dw0` and `dw1`.
2. Then, re-assign the value of `w0` and `w1` using the `.assign_sub(...)` method with the computed gradient values and the `LEARNING_RATE`.
3. Finally, for every 100th step , we'll compute and print the `loss`. Use the `loss_mse` function we created above to compute the `loss`.
```
# TODO 3
STEPS = 1000
LEARNING_RATE = .02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
for step in range(0, STEPS + 1):
    dw0, dw1 = # TODO -- Your code here.

    if step % 100 == 0:
        loss = # TODO -- Your code here.
        print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))
```
Now let's compare the test loss for this linear regression to the test loss from the baseline model that outputs always the mean of the training set:
```
loss = loss_mse(X_test, Y_test, w0, w1)
loss.numpy()
```
This is indeed much better!
## Bonus
Try modeling a non-linear function such as: $y=xe^{-x^2}$
```
X = tf.constant(np.linspace(0, 2, 1000), dtype=tf.float32)
Y = X * tf.exp(-X**2)
%matplotlib inline
plt.plot(X, Y)
def make_features(X):
    f1 = tf.ones_like(X)  # Bias.
    f2 = X
    f3 = tf.square(X)
    f4 = tf.sqrt(X)
    f5 = tf.exp(X)
    return tf.stack([f1, f2, f3, f4, f5], axis=1)

def predict(X, W):
    return tf.squeeze(X @ W, -1)

def loss_mse(X, Y, W):
    Y_hat = predict(X, W)
    errors = (Y_hat - Y)**2
    return tf.reduce_mean(errors)
def compute_gradients(X, Y, W):
    with tf.GradientTape() as tape:
        loss = loss_mse(X, Y, W)
    return tape.gradient(loss, W)

STEPS = 2000
LEARNING_RATE = .02

Xf = make_features(X)
n_weights = Xf.shape[1]
W = tf.Variable(np.zeros((n_weights, 1)), dtype=tf.float32)

# For plotting
steps, losses = [], []
plt.figure()

for step in range(1, STEPS + 1):
    dW = compute_gradients(Xf, Y, W)
    W.assign_sub(dW * LEARNING_RATE)

    if step % 100 == 0:
        loss = loss_mse(Xf, Y, W)
        steps.append(step)
        losses.append(loss)
        plt.clf()
        plt.plot(steps, losses)

print("STEP: {} MSE: {}".format(STEPS, loss_mse(Xf, Y, W)))
# The .figure() method will create a new figure, or activate an existing figure.
plt.figure()
# The .plot() is a versatile function, and will take an arbitrary number of arguments. For example, to plot x versus y.
plt.plot(X, Y, label='actual')
plt.plot(X, predict(Xf, W), label='predicted')
# The .legend() method will place a legend on the axes.
plt.legend()
```
Copyright 2021 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# An analysis of the State of the Union speeches - Part 2
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from string import punctuation
from nltk import word_tokenize, sent_tokenize
from nltk.corpus import stopwords
from nltk.stem import SnowballStemmer
from collections import Counter
import shelve
plt.style.use('seaborn-dark')
plt.rcParams['figure.figsize'] = (10, 6)
```
Let's start by loading some of the data created in the previous part, so we can continue where we left off:
```
addresses = pd.read_hdf('results/df1.h5', 'addresses')
with shelve.open('results/vars1') as db:
speeches = db['speeches']
```
Let's double-check that we're getting the full set of speeches:
```
print(addresses.shape)
print(len(speeches))
```
## Basic text analysis
Let's ask a few basic questions about this text, by populating our `addresses` dataframe with some extra information. As a reminder, so far we have:
```
addresses.head()
```
Now, let's add the following information to this DF:
* `n_words`: number of words in the speech
* `n_uwords`: number of *unique* words in the speech
* `n_swords`: number of *unique, stemmed* words in the speech
* `n_chars`: number of letters in the speech
* `n_sent`: number of sentences in the speech
For this level of complexity, it's probably best if we go with NLTK. Remember, that `speeches` is our list with all the speeches, indexed in the same way as the `addresses` dataframe:
```
def tokenize_word(doc):
    """Word tokenizer.

    Parameters
    ----------
    doc : string
        A document to be tokenized

    Returns
    -------
    tokens
    """
    tokens = [token.lower() for token in word_tokenize(doc)]
    return tokens

def clean_word_tokenize(doc):
    """Custom word tokenizer which removes stop words and punctuation.

    Parameters
    ----------
    doc : string
        A document to be tokenized

    Returns
    -------
    tokens
    """
    stop = stopwords.words("english") + list(punctuation)
    tokens = [token.lower() for token in word_tokenize(doc)
              if token.lower() not in stop]
    return tokens
```
Now we compute these quantities for each speech, as well as saving the set of unique, stemmed words for each speech, which we'll need later to construct the complete term-document matrix across all speeches.
```
n_sent = []
n_words_all=[]
n_uwords=[]
n_chars = []
n_words=[]
n_swords=[]
stemmer = SnowballStemmer('english')
speeches_cleaned = []
speech_words = []
# go through our list of speeches and compute these metrics for each speech
for speech in speeches:
    stemmed = []

    # all characters in speech
    n_chars.append(len(speech))

    # unique words before removing stop words and punctuation
    tokens_all = tokenize_word(speech)
    tokens_all_counter = Counter(tokens_all)

    # number of sentences
    sent_tokens = sent_tokenize(speech)
    n_sent.append(len(sent_tokens))

    # all words before removing stop words and punctuation
    n_words_all.append(len(tokens_all))

    # words with stop words and punctuation removed
    tokens = clean_word_tokenize(speech)
    tokens_counter = Counter(tokens)
    n_words.append(len(tokens))

    # unique words with stop words and punctuation removed
    n_uwords.append(len(tokens_counter))

    # stemmed words
    for token in tokens:
        s = stemmer.stem(token.lower())
        stemmed.append(s)

    # unique, stemmed words
    stemmed_counter = Counter(stemmed)

    # save our unique stemmed words into speech_words for later use
    speech_words.append(list(stemmed_counter.keys()))

    # save our stemmed (non-unique) words into speeches_cleaned for later use
    speeches_cleaned.append(stemmed)

    # number of unique stemmed words
    n_swords.append(len(stemmed_counter))
#save these values into our addresses dataframe
addresses['n_sent'] = pd.Series(n_sent)
addresses['n_words_all'] = pd.Series(n_words_all)
addresses['n_words'] = pd.Series(n_words)
addresses['n_uwords'] = pd.Series(n_uwords)
addresses['n_swords'] = pd.Series(n_swords)
addresses['n_chars'] = pd.Series(n_chars)
#a look at our updated dataframe
pd.options.display.precision = 0
addresses.head()
```
Let's look at a summary of these statistics:
```
pd.options.display.precision = 2
addresses.describe()
```
## Visualizing characteristics of the speeches
Now we explore some of the relationships between the speeches, their authors, and time.
First, let's look at how properties of the speeches change over time.
```
# plot of how speech characteristics change over time
changeintime=pd.DataFrame(addresses['date'])
changeintime['n_sent']=np.log(addresses.n_sent)
changeintime['n_words']=np.log(addresses.n_words)
changeintime['n_words_pervocb']= (addresses.n_uwords) / (addresses.n_words)
changeintime['avgsent_length']= (addresses.n_words) / (addresses.n_sent)
changeintime['avgword_length']= (addresses.n_chars) / (addresses.n_words)
changeintime['fra_stopword']= (addresses.n_words_all - addresses.n_words) / addresses.n_words
changeintime.index= changeintime.date
changeintime = changeintime.drop('date',axis=1)
fig,axes= plt.subplots(3,2,figsize=(25,20), sharex= True)
fig.suptitle('Change in speech characteristics over time')
axes[0,0].plot_date(x=changeintime.index, y= changeintime.n_sent, linestyle='solid', marker='None')
axes[0,0].set_title('Log number of sentences')
axes[0,0].set_ylabel("Log number of sentences")
axes[0,0].set_xlabel("Date")
axes[0,1].plot_date(x=changeintime.index, y= changeintime.n_words, linestyle='solid', marker='None')
axes[0,1].set_title('Log number of words')
axes[0,1].set_ylabel("Log number of words")
axes[0,1].set_xlabel("Date")
axes[1,0].plot_date(x=changeintime.index, y= changeintime.n_words_pervocb, linestyle='solid', marker='None')
axes[1,0].set_title('Vocabulary size per word')
axes[1,0].set_ylabel("Vocabulary size per word")
axes[1,0].set_xlabel("Date")
axes[1,1].plot_date(x=changeintime.index, y= changeintime.avgsent_length, linestyle='solid', marker='None')
axes[1,1].set_title('Average sentence length')
axes[1,1].set_ylabel("Average sentence length")
axes[1,1].set_xlabel("Date")
axes[2,0].plot_date(x=changeintime.index, y= changeintime.avgword_length, linestyle='solid', marker='None')
axes[2,0].set_title('Average word length')
axes[2,0].set_ylabel("Average word length")
axes[2,0].set_xlabel("Date")
axes[2,1].plot_date(x=changeintime.index, y= changeintime.fra_stopword, linestyle='solid', marker='None')
axes[2,1].set_title('Fraction of stop words')
axes[2,1].set_ylabel("Fraction of stop words")
axes[2,1].set_xlabel("Date")
plt.savefig("fig/speech_changes.png")
```
These charts clearly suggest that the average word and average sentence lengths for the State of the Union speeches have decreased over time, as evidenced by the steady drop in their respective plots. This drop is consistent with historical trends in the English language. Interestingly, the fraction of stop words has decreased on average as well. Taking the log of the number of words and sentences in each speech, we can see a substantial increase for roughly the first 30 years, while the vocabulary-per-word ratio moved in the opposite direction. After this period there is a great deal of variation, so we are unable to discern a clear pattern in the data.
Now for the distributions by president:
```
# violin plots by president instead of over time
presidentdis= pd.DataFrame(addresses.president)
presidentdis['n_sent']=np.log(addresses.n_sent)
presidentdis['n_words']=np.log(addresses.n_words)
presidentdis['n_words_pervocb']= (addresses.n_uwords) / (addresses.n_words)
presidentdis['avgsent_length']= (addresses.n_words) / (addresses.n_sent)
presidentdis['avgword_length']= (addresses.n_chars) / (addresses.n_words)
presidentdis['fra_stopword']= (addresses.n_words_all - addresses.n_words) / addresses.n_words
fig,axes= plt.subplots(3,2,figsize=(25,20), sharex= True)
fig.suptitle('Speech characteristics by President')
sns.violinplot(x='president', y='n_sent', data= presidentdis , ax=axes[0,0])
axes[0,0].set_title('Log number of sentences')
axes[0,0].set_ylabel("Log number of sentences")
axes[0,0].set_xlabel("president")
plt.setp( axes[0,0].xaxis.get_majorticklabels(), rotation=90)
sns.violinplot(x='president', y='n_words', data= presidentdis , ax=axes[0,1])
axes[0,1].set_title('Log number of words')
axes[0,1].set_ylabel("Log number of words")
axes[0,1].set_xlabel("president")
plt.setp( axes[0,1].xaxis.get_majorticklabels(), rotation=90)
sns.violinplot(x='president', y='n_words_pervocb', data= presidentdis , ax=axes[1,0])
axes[1,0].set_title('Vocabulary size per word')
axes[1,0].set_ylabel("Vocabulary size per word")
axes[1,0].set_xlabel("president")
plt.setp( axes[1,0].xaxis.get_majorticklabels(), rotation=90)
sns.violinplot(x='president', y='avgsent_length', data= presidentdis , ax=axes[1,1])
axes[1,1].set_title('Average sentence length')
axes[1,1].set_ylabel("Average sentence length")
axes[1,1].set_xlabel("president")
plt.setp( axes[1,1].xaxis.get_majorticklabels(), rotation=90)
sns.violinplot(x='president', y='avgword_length', data= presidentdis , ax=axes[2,0])
axes[2,0].set_title('Average word length')
axes[2,0].set_ylabel("Average word length")
axes[2,0].set_xlabel("president")
plt.setp( axes[2,0].xaxis.get_majorticklabels(), rotation=90)
sns.violinplot(x='president', y='fra_stopword', data= presidentdis , ax=axes[2,1])
axes[2,1].set_title('Fraction of stop words')
axes[2,1].set_ylabel("Fraction of stop words")
axes[2,1].set_xlabel("president")
plt.setp( axes[2,1].xaxis.get_majorticklabels(), rotation=90)
plt.savefig("fig/speech_characteristics.png");
```
By changing the x-axis from time to presidents, we can view the data more discretely: it is easier to see the data partitioned by each president's individual speeches, frozen in time. Displaying the previous plots as violin plots also revealed one particular president's speeches as an outlier: Herbert Hoover. Digging into the text of his speeches, we noticed that he tended to reference numbers and figures far more often than other presidents, which led to the glaring distinction in the average length and number of characters per word. The violin plots also show flat lines for Zachary Taylor and Donald Trump because the dataset contains only one speech for each of them, whereas the other presidents have multiple speeches.
## Intermediate results storage
Since this may have taken a while, we now serialize the results we have for further use. Note that we don't overwrite our original dataframe file, so we can load both (even though in this notebook we reused the name `addresses`):
```
addresses.to_hdf('results/df2.h5', 'addresses')
with shelve.open('results/vars2') as db:
db['speech_words'] = speech_words # will contain the set of unique, stemmed words for each speech
db['speeches_cleaned'] = speeches_cleaned # stemmed/cleaned versions of each speech, without collapsing into unique word sets
```
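For later sessions, the shelved variables can be read back the same way they were written. Here is a minimal round-trip sketch (using a temporary path and toy data rather than our real `results/vars2` file):

```
import os
import shelve
import tempfile

# write a toy value, then reload it: mirrors how speech_words above can be recovered
path = os.path.join(tempfile.mkdtemp(), 'vars_demo')
with shelve.open(path) as db:
    db['speech_words'] = {'1790-Washington': {'fellow', 'citizens'}}
with shelve.open(path) as db:
    restored = db['speech_words']
print(restored['1790-Washington'] == {'fellow', 'citizens'})
```

The same pattern applies to `pd.read_hdf('results/df2.h5', 'addresses')` for the dataframe.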
## Modeling the musical difficulty
```
import ipywidgets as widgets
from IPython.display import Audio, display, clear_output
from ipywidgets import interactive
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
distributions = {
"krumhansl_kessler": [
0.15195022732711172, 0.0533620483369227, 0.08327351040918879,
0.05575496530270399, 0.10480976310122037, 0.09787030390045463,
0.06030150753768843, 0.1241923905240488, 0.05719071548217276,
0.08758076094759511, 0.05479779851639147, 0.06891600861450106,
0.14221523253201526, 0.06021118849696697, 0.07908335205571781,
0.12087171422152324, 0.05841383958660975, 0.07930802066951245,
0.05706582790384183, 0.1067175915524601, 0.08941810829027184,
0.06043585711076162, 0.07503931700741405, 0.07121995057290496
],
"sapp": [
0.2222222222222222, 0.0, 0.1111111111111111, 0.0,
0.1111111111111111, 0.1111111111111111, 0.0, 0.2222222222222222,
0.0, 0.1111111111111111, 0.0, 0.1111111111111111,
0.2222222222222222, 0.0, 0.1111111111111111, 0.1111111111111111,
0.0, 0.1111111111111111, 0.0, 0.2222222222222222,
0.1111111111111111, 0.0, 0.05555555555555555, 0.05555555555555555
],
"aarden_essen": [
0.17766092893562843, 0.001456239417504233, 0.1492649402940239,
0.0016018593592562562, 0.19804892078043168, 0.11358695456521818,
0.002912478835008466, 0.2206199117520353, 0.001456239417504233,
0.08154936738025305, 0.002329979068008373, 0.049512180195127924,
0.18264800547944018, 0.007376190221285707, 0.14049900421497014,
0.16859900505797015, 0.0070249402107482066, 0.14436200433086013,
0.0070249402107482066, 0.18616100558483017, 0.04566210136986304,
0.019318600579558018, 0.07376190221285707, 0.017562300526869017
],
"bellman_budge": [
0.168, 0.0086, 0.1295, 0.0141, 0.1349, 0.1193,
0.0125, 0.2028, 0.018000000000000002, 0.0804, 0.0062, 0.1057,
0.1816, 0.0069, 0.12990000000000002,
0.1334, 0.010700000000000001, 0.1115,
0.0138, 0.2107, 0.07490000000000001,
0.015300000000000001, 0.0092, 0.10210000000000001
],
"temperley": [
0.17616580310880825, 0.014130946773433817, 0.11493170042392838,
0.019312293923692884, 0.15779557230334432, 0.10833725859632594,
0.02260951483749411, 0.16839378238341965, 0.02449364107395195,
0.08619877531794629, 0.013424399434762127, 0.09420631182289213,
0.1702127659574468, 0.020081281377002155, 0.1133158020559407,
0.14774085584508725, 0.011714080803251255, 0.10996892182644036,
0.02510160172125269, 0.1785799665311977, 0.09658140090843893,
0.016017212526894576, 0.03179536218025341, 0.07889074826679417
],
'albrecht_shanahan1': [
0.238, 0.006, 0.111, 0.006, 0.137, 0.094,
0.016, 0.214, 0.009, 0.080, 0.008, 0.081,
0.220, 0.006, 0.104, 0.123, 0.019, 0.103,
0.012, 0.214, 0.062, 0.022, 0.061, 0.052
],
'albrecht_shanahan2': [
0.21169, 0.00892766, 0.120448, 0.0100265, 0.131444, 0.0911768, 0.0215947, 0.204703, 0.012894, 0.0900445, 0.012617, 0.0844338,
0.201933, 0.009335, 0.107284, 0.124169, 0.0199224, 0.108324,
0.014314, 0.202699, 0.0653907, 0.0252515, 0.071959, 0.049419
]
}
def compute_threshold(dist_max, dist_min, d, cutoff):
if d < cutoff:
thresh = dist_max - d * ((dist_max - dist_min) / cutoff)
else:
thresh = 0.0
return thresh
def clipped_distribution(orig_dist, d, cutoff):
# make a copy of the original distribution
copy = np.array(orig_dist)
# compute the threshold to get rid of difficult notes at initial difficulties
threshold = compute_threshold(max(copy), min(copy), d, cutoff)
# remove the most difficult notes for low difficulties
copy[copy < threshold] = 0.0
# norm-1 of the distribution
copy = copy / sum(copy)
return copy, threshold
def scaled_distribution(clipped_dist, h, d):
# make a copy of the original distribution
copy = np.array(clipped_dist)
# compute the scaling factor based on handicap parameter and difficulty (user input)
scaling = h - (h * d)
# scale the distribution
copy = copy ** scaling
# norm-1 of the distribution
copy = copy / sum(copy)
return copy
def f(dist_name, clipping, handicap, difficulty):
# create the figures
f, (axmaj, axmin) = plt.subplots(2, 3, sharex=True, sharey=True)
# get the original distributions for major and minor keys
dist = np.array(distributions[dist_name])
major = dist[:12]
minor = dist[12:]
# clip the distributions for lower difficulties
clipped_major, major_threshold = clipped_distribution(major, difficulty, clipping)
clipped_minor, minor_threshold = clipped_distribution(minor, difficulty, clipping)
# get the scaled distribution according to difficulty, handicap, and initial clipping
scaled_major = scaled_distribution(clipped_major, handicap, difficulty)
scaled_minor = scaled_distribution(clipped_minor, handicap, difficulty)
ylim_major = max(max(np.amax(major), np.amax(clipped_major)), np.amax(scaled_major))
ylim_minor = max(max(np.amax(minor), np.amax(clipped_minor)), np.amax(scaled_minor))
# prepare to plot
x = np.array(['C', 'C#', 'D', 'Eb', 'E', 'F',
'F#', 'G', 'Ab', 'A', 'Bb', 'B'])
sns.barplot(x=x, y=major, ax=axmaj[0])
axmaj[0].set_title("Original Major")
axmaj[0].axhline(major_threshold, color="k", clip_on=True)
axmaj[0].set_ylim(0, ylim_major)
sns.barplot(x=x, y=clipped_major, ax=axmaj[1])
axmaj[1].set_title("Clipped Major")
axmaj[1].set_ylim(0, ylim_major)
sns.barplot(x=x, y=scaled_major, ax=axmaj[2])
axmaj[2].set_title("Scaled Major")
axmaj[2].set_ylim(0, ylim_major)
sns.barplot(x=x, y=minor, ax=axmin[0])
axmin[0].set_title("Original Minor")
axmin[0].axhline(minor_threshold, color="k", clip_on=True)
axmin[0].set_ylim(0, ylim_minor)
sns.barplot(x=x, y=clipped_minor, ax=axmin[1])
axmin[1].set_title("Clipped Minor")
axmin[1].set_ylim(0, ylim_minor)
sns.barplot(x=x, y=scaled_minor, ax=axmin[2])
axmin[2].set_title("Scaled Minor")
axmin[2].set_ylim(0, ylim_minor)
plt.tight_layout(h_pad=2)
return scaled_major, scaled_minor
distribution_name = list(distributions.keys())
handicap = widgets.IntSlider(min=1, max=10, value=2, continuous_update=False)
difficulty = widgets.FloatSlider(min=0.0, max=1.0, value=0.5, step=0.01, continuous_update=False)
clipping = widgets.FloatSlider(min=0.2, max=0.8, step=0.1, value=0.2, continuous_update=False)
w = interactive(f, dist_name=distribution_name, handicap=handicap, difficulty=difficulty, clipping=clipping)
rate = 16000.
duration = .1
t = np.linspace(0., duration, int(rate * duration))
notes = range(12)
freqs = 220. * 2**(np.arange(3, 3 + len(notes)) / 12.)
def synth(f):
x = np.sin(f * 2. * np.pi * t) * np.sin(t * np.pi / duration)
display(Audio(x, rate=rate, autoplay=True))
def sample_major_distribution(b):
with output_major:
major = w.result[0]
note = np.random.choice(np.arange(12), p=major)
synth(freqs[note])
clear_output(wait=duration)
def sample_minor_distribution(b):
with output_minor:
minor = w.result[1]
note = np.random.choice(np.arange(12), p=minor)
synth(freqs[note])
clear_output(wait=duration)
display(w)
sample_major = widgets.Button(description="C Major")
output_major = widgets.Output()
display(sample_major, output_major)
sample_minor = widgets.Button(description="C Minor")
output_minor = widgets.Output()
display(sample_minor, output_minor)
sample_major.on_click(sample_major_distribution)
sample_minor.on_click(sample_minor_distribution)
```
Implementation Task 1
We implement the 1D least-squares example for incremental gradient descent (IGD).
```
# generate a vector of random numbers which obeys the given distribution.
#
# n: length of the vector
# mu: mean value
# sigma: standard deviation.
# dist: choices for the distribution, you need to implement at least normal
# distribution and uniform distribution.
#
# For normal distribution, you can use ``numpy.random.normal`` to generate.
# For uniform distribution, the interval to sample will be [mu - sigma/sqrt(3), mu + sigma/sqrt(3)].
def generate_random_numbers(n, mu, sigma, dist="normal"):
# write your code here.
if dist == "normal":
return np.random.normal(mu, sigma, n)
elif dist == "uniform":
return np.random.uniform(mu - sigma/np.sqrt(3),mu + sigma/np.sqrt(3),n)
else:
raise Exception("The distribution {unknown_dist} is not implemented".format(unknown_dist=dist))
# test your code:
y_test = generate_random_numbers(5, 0, 0.1, "normal")
y_test
y1 = generate_random_numbers(105, 0.5, 1.0, "normal")
y2 = generate_random_numbers(105, 0.5, 1.0, "uniform")
# IGD, the ordering is permitted to have replacement.
#
#
def IGD_wr_task1(y): # repeat
x = 0
n = len(y)
ordering = np.random.choice(n, n, replace=True)
# implement the algorithm's iteration of IGD. Your result should return the final xk
# at the last iteration and also the history of objective function at each xk.
f = np.empty(n) # empty array for histories
X = np.empty(n) # empty array for xk
for k in range(n):
gamma = 1/(k+1)
x = x - gamma*(x - y[ordering[k]])
f[k] = 0.5*np.sum((x - y)**2)
X[k] = x
return x, f, X
# IGD, the ordering is not permitted to have replacement.
#
#
def IGD_wo_task1(y): # no repeat
x = 0
n = len(y)
ordering = np.random.choice(n, n, replace=False)
# implement the algorithm's iteration of IGD. Your result should return the final xk
# at the last iteration and also the history of objective function at each xk.
f = np.empty(n)
X = np.empty(n)
for k in range(n):
gamma = 1/(k+1)
x = x - gamma*(x - y[ordering[k]])
f[k] = 0.5*np.sum((x - y)**2)
X[k] = x
return x, f, X
# Using y1
x_wr, wr_solu, X1 = IGD_wr_task1(y1)
print("Final x with replacement:", x_wr)
x_wo, wo_solu, X2 = IGD_wo_task1(y1)
print("Final x without replacement:", x_wo)
X = np.linspace(0,105,105)
plt.plot(X,wr_solu)
plt.plot(X,wo_solu)
plt.legend(["With Replacement","Without Replacement"])
plt.xlabel("# of iterations")
plt.ylabel("Objective value")
plt.show()
# Average of x with replacement
print(np.sum(X1[:5])/5) # first 5
print(np.sum(X1[5:10])/5) # next 5
print(np.sum(X1[10:15])/5) # next 5
print()
# Average of x without replacement
print(np.sum(X2[:5])/5)
print(np.sum(X2[5:10])/5)
print(np.sum(X2[10:15])/5)
# Using y2
x_wr, wr_solu, X1 = IGD_wr_task1(y2)
print("Final x with replacement:", x_wr)
x_wo, wo_solu, X2 = IGD_wo_task1(y2)
print("Final x without replacement:", x_wo)
X = np.linspace(0,105,105)
plt.plot(X,wr_solu)
plt.plot(X,wo_solu)
plt.legend(["With Replacement","Without Replacement"])
plt.xlabel("# of iterations")
plt.ylabel("Objective value")
plt.show()
```
We calculate the average of x with replacement and x without replacement to see the behavior more clearly:
```
# Average of x with replacement
print(np.sum(X1[:5])/5) # first 5
print(np.sum(X1[5:10])/5) # next 5
print(np.sum(X1[10:15])/5) # next 5
print()
# Average of x without replacement
print(np.sum(X2[:5])/5)
print(np.sum(X2[5:10])/5)
print(np.sum(X2[10:15])/5)
print(np.sum(X2[70:75])/5) #average of x70 to x75
```
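With step size gamma = 1/(k+1), the update x &lt;- x - gamma*(x - y_k) rewrites as x = (k*x + y_k)/(k+1), i.e. a running average of the sampled values; this explains why the iterates settle near the mean 0.5. A quick check of that identity (a standalone sketch with its own fixed seed, not the notebook's y1):

```
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(0.5, 1.0, 105)
order = rng.choice(len(y), size=len(y), replace=False)

# x_{k+1} = x_k - (x_k - y_k)/(k+1) = (k*x_k + y_k)/(k+1): a running mean
x = 0.0
for k, idx in enumerate(order):
    x = x - (x - y[idx]) / (k + 1)

# without replacement every sample is visited exactly once, so x equals mean(y)
print(np.isclose(x, y.mean()))
```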
Ordering without replacement is better because it is steadier; in both cases xk converges to 0.5, the mean used to generate the data.
Implementation Task 2
```
# IGD, the ordering is permitted to have replacement.
#
#
def IGD_wr_task2(y, beta):
x = 0
n = len(beta)
ordering = np.random.choice(n, n, replace=True)
f = np.empty(n)
gamma = 0.05*np.amin(1/beta)
for k in range(n):
x = x - gamma*beta[ordering[k]]*(x - y)
f[k] = 0.5*np.sum(beta*(x - y)**2)
return x, f
# IGD, the ordering is not permitted to have replacement.
#
#
def IGD_wo_task2(y, beta):
x = 0
n = len(beta)
ordering = np.random.choice(n, n, replace=False)
f = np.empty(n)
gamma = 0.05*np.amin(1/beta)
for k in range(n):
x = x - gamma*beta[ordering[k]]*(x - y)
f[k] = 0.5*np.sum(beta*(x - y)**2)
return x, f
N = 30
beta = np.random.uniform(1,2,N)
y = 2
x_wr, wr_solu = IGD_wr_task2(y, beta)
print("Final x with replacement:", x_wr)
x_wo, wo_solu = IGD_wo_task2(y, beta)
print("Final x without replacement:", x_wo)
X = np.linspace(0,N,N)
plt.plot(X,wr_solu)
plt.plot(X,wo_solu)
plt.legend(["With Replacement","Without Replacement"])
plt.xlabel("# of iterations")
plt.ylabel("Objective value")
plt.show()
N = 80
beta = np.random.uniform(1,2,N)
y = 2
x_wr, wr_solu = IGD_wr_task2(y, beta)
print("Final x with replacement:", x_wr)
x_wo, wo_solu = IGD_wo_task2(y, beta)
print("Final x without replacement:", x_wo)
X = np.linspace(0,N,N)
plt.plot(X,wr_solu)
plt.plot(X,wo_solu)
plt.legend(["With Replacement","Without Replacement"])
plt.xlabel("# of iterations")
plt.ylabel("Objective value")
plt.show()
```
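Each task-2 update rewrites the error as x - y -&gt; (1 - gamma*beta_i)*(x - y), so with gamma = 0.05*min(1/beta) every contraction factor lies strictly between 0 and 1 and the error shrinks at every step. A standalone check of that claim (own fixed seed, not the notebook's beta):

```
import numpy as np

rng = np.random.default_rng(1)
beta = rng.uniform(1, 2, 30)
gamma = 0.05 * np.amin(1 / beta)

# x - gamma*beta_i*(x - y) = y + (1 - gamma*beta_i)*(x - y): the error contracts
factors = 1 - gamma * beta
print(factors.min() > 0 and factors.max() < 1)
```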
After enough iterations, both methods approach the final result. However, sampling without replacement works better since it approaches the result faster.
Implementation Task 3
```
# generation of exact solution and data y and matrix A.
def generate_problem_task3(m, n, rho):
A = np.random.normal(0., 1.0, (m, n))
x = np.random.random(n) # uniform in (0,1)
w = np.random.normal(0., rho, m)
y = A@x + w
return A, x, y
A, xstar, y = generate_problem_task3(200, 100, 0.01)
# In these two functions we focus on the first n steps only and compare on that data.
# In practice more iterations are needed to converge, since the matrix may be ill-conditioned.
# You can wrap the ordering loop in an outer loop: here we simply run the IGD sweep several rounds.
#
#
#
# IGD, the ordering is permitted to have replacement.
#
#
def IGD_wr_task3(y, A, xstar):
n = A.shape[1]
m = A.shape[0]
x = np.zeros(n)
f = np.empty(n)
conv = np.empty(n)
gamma = 1e-3
for i in range(3): # performing IGD for three rounds
ordering = np.random.choice(n, n, replace=True)
for k in range(n):
x = x - gamma*A[ordering[k]]*(A[ordering[k]]@x - y[ordering[k]])
f[k] = np.sum((A[k]@x - y[k])**2)
conv[k] = LA.norm(x - xstar)
return x, f, conv
# IGD, the ordering is not permitted to have replacement.
#
#
def IGD_wo_task3(y, A, xstar):
n = A.shape[1]
x = np.zeros(n)
f = np.empty(n)
conv = np.empty(n)
gamma = 1e-3
for i in range(3): # performing IGD for three rounds
ordering = np.random.choice(n, n, replace=False)
for k in range(n):
x = x - gamma*A[ordering[k]]*(A[ordering[k]]@x - y[ordering[k]])
f[k] = np.sum((A[k]@x - y[k])**2)
conv[k] = LA.norm(x - xstar)
return x, f, conv
N = A.shape[1]
x_wr, wr_solu, wr_conv = IGD_wr_task3(y, A, xstar)
x_wo, wo_solu, wo_conv = IGD_wo_task3(y, A, xstar)
X = np.linspace(0,N,N)
plt.plot(X,wr_solu)
plt.plot(X,wo_solu)
plt.legend(["With Replacement","Without Replacement"])
plt.xlabel("# of iterations")
plt.ylabel("Objective value")
plt.show()
# Histories of norm(xk - xstar)
X = np.linspace(0,N,N)
plt.plot(X,wr_conv)
plt.plot(X,wo_conv)
plt.legend(["With Replacement","Without Replacement"])
plt.xlabel("# of iterations")
plt.ylabel("||xk - xstar||")
plt.show()
```
Ordering without replacement is still better: its curve lies below the with-replacement curve, which means the second method converges faster to the true solution than the first.
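As a sanity check on task 3, the least-squares solution of min ||Ax - y||^2 can also be computed in closed form and compared to xstar; with noise level rho = 0.01 the recovered vector is very close to the true one. A standalone sketch that regenerates the problem with its own fixed seed:

```
import numpy as np

# regenerate a problem of the same shape as generate_problem_task3(200, 100, 0.01)
rng = np.random.default_rng(2)
A = rng.normal(0.0, 1.0, (200, 100))
xstar = rng.random(100)
y = A @ xstar + rng.normal(0.0, 0.01, 200)

# closed-form least-squares solution: the baseline IGD should approach
xhat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.linalg.norm(xhat - xstar) < 0.1)
```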
```
import numpy as np
import pandas as pd
from datetime import datetime as dt
import itertools
season_1=pd.read_csv("2015-16.csv")[['Date','HomeTeam','AwayTeam','FTHG','FTAG','FTR']]
season_2=pd.read_csv("2014-15.csv")[['Date','HomeTeam','AwayTeam','FTHG','FTAG','FTR']]
season_3=pd.read_csv("2013-14.csv")[['Date','HomeTeam','AwayTeam','FTHG','FTAG','FTR']]
season_4=pd.read_csv("2012-13.csv")[['Date','HomeTeam','AwayTeam','FTHG','FTAG','FTR']]
season_5=pd.read_csv("2011-12.csv")[['Date','HomeTeam','AwayTeam','FTHG','FTAG','FTR']]
season_6=pd.read_csv("2010-11.csv")[['Date','HomeTeam','AwayTeam','FTHG','FTAG','FTR']]
season_7=pd.read_csv("2009-10.csv")[['Date','HomeTeam','AwayTeam','FTHG','FTAG','FTR']]
season_8=pd.read_csv("2008-09.csv")[['Date','HomeTeam','AwayTeam','FTHG','FTAG','FTR']]
season_9=pd.read_csv("2007-08.csv")[['Date','HomeTeam','AwayTeam','FTHG','FTAG','FTR']]
season_1.shape
def parse_date(date):
# parse dates of the form dd/mm/yy; empty strings become None
date = str(date)
if date == "":
return None
else:
return dt.strptime(date, "%d/%m/%y").date()
seasons=[season_1,season_2,season_3,season_4,season_5,season_6,season_7,season_8,season_9]
#apply the above functions
for season in seasons:
season.Date=season.Date.apply(parse_date)
season_1.head(5)
#functions adopted from Tewari and Krishna https://github.com/krishnakartik1/LSTM-footballMatchWinner
def get_goals_scored(season):
# Create a dictionary with team names as keys
teams = {}
for i in season.groupby('HomeTeam').mean().T.columns:
teams[i] = []
# the value for each key is a list of goals scored in each match
for i in range(len(season)):
HTGS = season.iloc[i]['FTHG']
ATGS = season.iloc[i]['FTAG']
teams[season.iloc[i].HomeTeam].append(HTGS)
teams[season.iloc[i].AwayTeam].append(ATGS)
# Create a dataframe for goals scored where rows are teams and cols are matchweek.
GoalsScored = pd.DataFrame(data=teams, index = [i for i in range(1,39)]).T
GoalsScored[0] = 0
# Aggregate to get the cumulative total up to that matchweek
for i in range(2,39):
GoalsScored[i] = GoalsScored[i] + GoalsScored[i-1]
return GoalsScored
# Gets the goals conceded agg arranged by teams and matchweek
def get_goals_conceded(season):
# Create a dictionary with team names as keys
teams = {}
for i in season.groupby('HomeTeam').mean().T.columns:
teams[i] = []
# the value for each key is a list of goals conceded in each match
for i in range(len(season)):
ATGC = season.iloc[i]['FTHG']
HTGC = season.iloc[i]['FTAG']
teams[season.iloc[i].HomeTeam].append(HTGC)
teams[season.iloc[i].AwayTeam].append(ATGC)
# Create a dataframe for goals conceded where rows are teams and cols are matchweek.
GoalsConceded = pd.DataFrame(data=teams, index = [i for i in range(1,39)]).T
GoalsConceded[0] = 0
# Aggregate to get the cumulative total up to that matchweek
for i in range(2,39):
GoalsConceded[i] = GoalsConceded[i] + GoalsConceded[i-1]
return GoalsConceded
def get_gss(season):
GC = get_goals_conceded(season)
GS = get_goals_scored(season)
j = 0
HTGS = []
ATGS = []
HTGC = []
ATGC = []
for i in range(season.shape[0]):
ht = season.iloc[i].HomeTeam
at = season.iloc[i].AwayTeam
HTGS.append(GS.loc[ht][j])
ATGS.append(GS.loc[at][j])
HTGC.append(GC.loc[ht][j])
ATGC.append(GC.loc[at][j])
if ((i + 1)% 10) == 0:
j = j + 1
# print("check line 87")
# print(season.shape,len(HTGS))
season['HTGS'] = HTGS
season['ATGS'] = ATGS
season['HTGC'] = HTGC
season['ATGC'] = ATGC
return season
#apply the above functions
for season in seasons:
season = get_gss(season)
season_1.head(5)
season_1
#functions adopted from Tewari and Krishna https://github.com/krishnakartik1/LSTM-footballMatchWinner
def get_points(result):
if result == 'W':
return 3
elif result == 'D':
return 1
else:
return 0
def get_cuml_points(matchres):
matchres_points = matchres.applymap(get_points)
for i in range(2,38):
matchres_points[i] = matchres_points[i] + matchres_points[i-1]
matchres_points.insert(column =0, loc = 0, value = [0*i for i in range(20)])
return matchres_points
def get_matchres(season):
# Create a dictionary with team names as keys
teams = {}
for i in season.groupby('HomeTeam').mean().T.columns:
teams[i] = []
# the value corresponding to keys is a list containing the match result
for i in range(len(season)):
if season.iloc[i].FTR == 'H':
teams[season.iloc[i].HomeTeam].append('W')
teams[season.iloc[i].AwayTeam].append('L')
elif season.iloc[i].FTR == 'A':
teams[season.iloc[i].AwayTeam].append('W')
teams[season.iloc[i].HomeTeam].append('L')
else:
teams[season.iloc[i].AwayTeam].append('D')
teams[season.iloc[i].HomeTeam].append('D')
return pd.DataFrame(data=teams, index = [i for i in range(1,39)]).T
def get_agg_points(season):
matchres = get_matchres(season)
cum_pts = get_cuml_points(matchres)
HTP = []
ATP = []
j = 0
for i in range(season.shape[0]):
ht = season.iloc[i].HomeTeam
at = season.iloc[i].AwayTeam
HTP.append(cum_pts.loc[ht][j])
ATP.append(cum_pts.loc[at][j])
if ((i + 1)% 10) == 0:
j = j + 1
season['HTP'] = HTP
season['ATP'] = ATP
return season
#apply the above functions
for season in seasons:
season = get_agg_points(season)
season_1.head(40)
la_liga = pd.concat(seasons)
la_liga
la_liga.to_csv('la_liga_stats.csv')
```
# WeatherPy
----
#### Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from datetime import datetime
from scipy.stats import linregress
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
```
## Generate Cities List
```
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(lat_range[0], lat_range[1], size=1500)
lngs = np.random.uniform(lng_range[0], lng_range[1], size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
# If the city is unique, then add it to our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
```
### Perform API Calls
* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).
```
#Lists and counters
city_list = []
cloud_list = []
country_list = []
date_list = []
humidity_list = []
lats_list = []
lngs_list = []
temp_max_list = []
wind_speed_list = []
index_counter = 0
set_counter = 1
print("Beginning Data Retrieval ")
print("-------------------------------")
base_url = "http://api.openweathermap.org/data/2.5/weather?"
units = "imperial"
query_url = f"{base_url}appid={weather_api_key}&units={units}&q="
#For loop matching city names with city_list
for index, city in enumerate(cities, start = 1):
try:
response = requests.get(query_url + city).json()
city_list.append(response["name"])
cloud_list.append(response["clouds"]["all"])
country_list.append(response["sys"]["country"])
date_list.append(response["dt"])
humidity_list.append(response["main"]["humidity"])
lats_list.append(response["coord"]["lat"])
lngs_list.append(response["coord"]["lon"])
temp_max_list.append(response['main']['temp_max'])
wind_speed_list.append(response["wind"]["speed"])
if index_counter > 49:
index_counter = 0
set_counter = set_counter + 1
else:
index_counter = index_counter + 1
print(f"Processing Record {index_counter} of Set {set_counter} | {city}")
except(KeyError, IndexError):
print("City not found. Skipping...")
print("-------------------------------")
print("Data Retrieval Complete")
print("-------------------------------")
```
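The free OpenWeatherMap tier limits the number of requests per minute, so it is common to pause between batches of calls (the `time` import above is otherwise unused). A minimal sketch of that pattern; the `throttled` generator below is a hypothetical helper, not part of the original notebook:

```
import time

def throttled(items, batch_size=50, pause=1.0):
    """Yield items, sleeping `pause` seconds after every `batch_size` of them."""
    for i, item in enumerate(items, start=1):
        yield item
        if i % batch_size == 0:
            time.sleep(pause)

# e.g. wrap the cities list: for city in throttled(cities): requests.get(query_url + city)
batches = list(throttled(range(5), batch_size=2, pause=0.0))
print(batches)
```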
### Convert Raw Data to DataFrame
* Export the city data into a .csv.
* Display the DataFrame
```
#Create a dataframe using information from data retrieval
weather_data = pd.DataFrame({
"City" : city_list,
"Lat" : lats_list,
"Lng" : lngs_list,
"Max Temp" : temp_max_list,
"Humidity" : humidity_list,
"Clouds" : cloud_list,
"Wind Speed" : wind_speed_list,
"Country" : country_list,
"Date" : date_list
})
#Save weather data to a cities csv file
weather_data.to_csv("../output_data/cities.csv", index=False)
#Display dataframe
weather_data.head()
```
## Inspect the data and remove the cities where the humidity > 100%.
----
Skip this step if there are no cities that have humidity > 100%.
```
#check if there are any cities with Humidity >100%
weather_data["Humidity"].describe()
# Get the indices of cities that have humidity over 100%.
humidity_101 = weather_data[(weather_data["Humidity"] > 100)].index
humidity_101
# Make a new DataFrame equal to the city data to drop all humidity outliers by index.
# Passing "inplace=False" will make a copy of the weather_data DataFrame, which we call "clean_city_data".
clean_city_data = weather_data.drop(humidity_101, inplace=False)
clean_city_data.head()
# Export the filtered city data into a csv
clean_city_data.to_csv("../output_data/clean_city_data.csv", index_label="City_ID")
```
## Plotting the Data
* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
* Save the plotted figures as .pngs.
## Latitude vs. Temperature Plot
```
date_now = datetime.date(datetime.now())
# Create a scatter plot for latitude vs max temperature.
x_values = clean_city_data["Lat"]
y_values = clean_city_data["Max Temp"]
fig1, ax1 = plt.subplots(figsize=(7,4))
plt.scatter(x_values, y_values, edgecolor="black", linewidth=1, marker="o", alpha=0.8)
plt.title(f"City Latitude vs Max Temperature {date_now}")
plt.xlabel("Latitude")
plt.ylabel("Max Temperature (F)")
plt.grid()
# Save the figure
plt.savefig("../output_data/latitude_vs_max_temp.png", bbox_inches="tight")
plt.show()
```
## Latitude vs. Humidity Plot
```
x_values = clean_city_data["Lat"]
y_values = clean_city_data["Humidity"]
fig1, ax1 = plt.subplots(figsize=(7, 4))
plt.scatter(x_values, y_values, edgecolor="black", linewidth=1, marker="o", alpha=0.8)
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
plt.title(f"City Latitude vs Humidity {date_now}")
plt.grid()
# Save the figure
plt.savefig("../output_data/latitude_vs_humidity.png", bbox_inches="tight")
plt.show()
```
## Latitude vs. Cloudiness Plot
```
# Create a scatter plot for latitude vs cloudiness.
x_values = clean_city_data["Lat"]
y_values = clean_city_data["Clouds"]
fig1, ax1 = plt.subplots(figsize=(7,4))
markersize=12
plt.scatter(x_values, y_values, edgecolor="black", linewidth=1, marker="o", alpha=0.8)
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
plt.title(f"City Latitude vs Cloudiness {date_now}")
plt.grid()
# Save the figure
plt.savefig("../output_data/latitude_vs_cloudiness.png", bbox_inches="tight")
plt.show()
```
## Latitude vs. Wind Speed Plot
```
# Create a scatter plot for latitude vs wind speed.
x_values = clean_city_data["Lat"]
y_values = clean_city_data["Wind Speed"]
fig1, ax1 = plt.subplots(figsize=(7,4))
markersize=12
plt.scatter(x_values, y_values, edgecolor="black", linewidth=1, marker="o", alpha=0.8)
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
plt.title(f"City Latitude vs Wind Speed {date_now}")
plt.grid()
# Save the figure
plt.savefig("../output_data/latitude_vs_wind_speed.png", bbox_inches="tight")
plt.show()
```
## Linear Regression
```
# Create a function to create Linear Regression plots for remaining activities
def plot_linear_regression(x_values, y_values, x_label, y_label, hemisphere, text_coordinates, ylim=None):
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
# Get regression values
regress_values = x_values * slope + intercept
# Create line equation string
line_eq = "y = " + str(round(slope,2)) + "x +" + str(round(intercept,2))
# Generate plots
fig1, ax1 = plt.subplots(figsize=(7,4))
plt.scatter(x_values, y_values, edgecolor="black", linewidth=1, marker="o", alpha=0.8)
plt.plot(x_values,regress_values,"r-")
date_now = datetime.date(datetime.now())
plt.title(f"{hemisphere} Hemisphere - {x_label} vs {y_label} {date_now}",fontsize = 15)
plt.xlabel(x_label,fontsize=14)
plt.ylabel(y_label,fontsize=14)
if ylim is not None:
plt.ylim(0, ylim)
plt.annotate(line_eq, text_coordinates, fontsize=20, color="red")
# Print r square value
print(f"The r-squared is: {rvalue**2}")
# Create Northern and Southern Hemisphere DataFrames
northern_hemi_weather_df = clean_city_data.loc[clean_city_data["Lat"] >= 0]
southern_hemi_weather_df = clean_city_data.loc[clean_city_data["Lat"] < 0]
```
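As a quick standalone check of the `linregress` call used in `plot_linear_regression` (the data here is synthetic and perfectly linear, so it recovers the slope, intercept, and an r² of 1 exactly):

```python
import numpy as np
from scipy.stats import linregress

x = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
y = 2.0 * x + 1.0  # synthetic, perfectly linear data
slope, intercept, rvalue, pvalue, stderr = linregress(x, y)
print(round(slope, 2), round(intercept, 2), round(rvalue ** 2, 2))  # 2.0 1.0 1.0
```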
#### Northern Hemisphere - Max Temp vs. Latitude Linear Regression
```
# Create a scatter plot for latitude vs max temp (northern hemisphere)
x_values = northern_hemi_weather_df["Lat"]
y_values = northern_hemi_weather_df["Max Temp"]
plot_linear_regression(x_values, y_values, "Latitude", "Max Temp (F)", "Northern", (10, 10))
# Save the figure
plt.savefig("../output_data/northern_hem_linear_lat_vs_max_temp.png", bbox_inches="tight")
plt.show()
```
#### Southern Hemisphere - Max Temp vs. Latitude Linear Regression
```
# Create a scatter plot for latitude vs max temp (southern hemisphere)
x_values = southern_hemi_weather_df["Lat"]
y_values = southern_hemi_weather_df["Max Temp"]
plot_linear_regression(x_values, y_values, "Latitude", "Max Temp (F)", "Southern", (-52, 75))
# Save the figure
plt.savefig("../output_data/southern_hem_linear_lat_vs_max_temp.png", bbox_inches="tight")
plt.show()
```
#### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
```
# Create a scatter plot for latitude vs humidity (northern hemisphere)
x_values = northern_hemi_weather_df['Lat']
y_values = northern_hemi_weather_df['Humidity']
plot_linear_regression(x_values, y_values, "Latitude", "Humidity (%)", "Northern",(50,50))
plt.savefig("../output_data/northern_hem_linear_lat_vs_humidity.png", bbox_inches="tight")
plt.show()
```
#### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
```
# Create a scatter plot for latitude vs humidity (southern hemisphere)
x_values = southern_hemi_weather_df['Lat']
y_values = southern_hemi_weather_df['Humidity']
plot_linear_regression(x_values, y_values, "Latitude", "Humidity (%)", "Southern",(50, 50), 100)
plt.savefig("../output_data/southern_hem_linear_lat_vs_humidity.png", bbox_inches="tight")
plt.show()
```
#### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
```
# Create a scatter plot for latitude vs cloudiness (northern hemisphere)
x_values = northern_hemi_weather_df['Lat']
y_values = northern_hemi_weather_df['Clouds']
plot_linear_regression(x_values, y_values, "Latitude", "Cloudiness (%)", "Northern", (20, 60))
plt.savefig("../output_data/northern_hem_linear_lat_vs_cloudiness.png", bbox_inches="tight")
plt.show()
```
#### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
```
# Create a scatter plot for latitude vs cloudiness (southern hemisphere)
x_values = southern_hemi_weather_df['Lat']
y_values = southern_hemi_weather_df['Clouds']
plot_linear_regression(x_values, y_values, "Latitude", "Cloudiness (%)", "Southern",(-45, 60))
plt.savefig("../output_data/southern_hem_linear_lat_vs_cloudiness.png", bbox_inches="tight")
plt.show()
```
#### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
```
# Create a scatter plot for latitude vs wind speed (northern hemisphere)
x_values = northern_hemi_weather_df['Lat']
y_values = northern_hemi_weather_df['Wind Speed']
plot_linear_regression(x_values, y_values, "Latitude", "Wind Speed (mph)", "Northern",(20, 25))
plt.savefig("../output_data/northern_hem_linear_lat_vs_wind_speed.png", bbox_inches="tight")
plt.show()
```
#### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
```
# Create a scatter plot for latitude vs wind speed (southern hemisphere)
x_values = southern_hemi_weather_df['Lat']
y_values = southern_hemi_weather_df['Wind Speed']
plot_linear_regression(x_values, y_values, "Latitude", "Wind Speed (mph)", "Southern",(-40, 25), ylim=40)
plt.savefig("../output_data/southern_hem_linear_lat_vs_wind_speed.png", bbox_inches="tight")
plt.show()
#Reference: https://github.com/poonam-ux/Python_API_WeatherPy_VacationPy
```
```
#hide
#skip
! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab
# default_exp losses
# default_cls_lvl 3
#export
from fastai.imports import *
from fastai.torch_imports import *
from fastai.torch_core import *
from fastai.layers import *
#hide
from nbdev.showdoc import *
```
# Loss Functions
> Custom fastai loss functions
```
# export
class BaseLoss():
"Same as `loss_cls`, but flattens input and target."
activation=decodes=noops
def __init__(self, loss_cls, *args, axis=-1, flatten=True, floatify=False, is_2d=True, **kwargs):
store_attr("axis,flatten,floatify,is_2d")
self.func = loss_cls(*args,**kwargs)
functools.update_wrapper(self, self.func)
def __repr__(self): return f"FlattenedLoss of {self.func}"
@property
def reduction(self): return self.func.reduction
@reduction.setter
def reduction(self, v): self.func.reduction = v
def _contiguous(self,x):
return TensorBase(x.transpose(self.axis,-1).contiguous()) if isinstance(x,torch.Tensor) else x
def __call__(self, inp, targ, **kwargs):
inp,targ = map(self._contiguous, (inp,targ))
if self.floatify and targ.dtype!=torch.float16: targ = targ.float()
if targ.dtype in [torch.int8, torch.int16, torch.int32]: targ = targ.long()
if self.flatten: inp = inp.view(-1,inp.shape[-1]) if self.is_2d else inp.view(-1)
return self.func.__call__(inp, targ.view(-1) if self.flatten else targ, **kwargs)
```
Wrapping a general loss function inside of `BaseLoss` provides extra functionalities to your loss functions:
- flattens the tensors before trying to take the losses since it's more convenient (with a potential transpose to put `axis` at the end)
- a potential `activation` method that tells the library if there is an activation fused in the loss (useful for inference and methods such as `Learner.get_preds` or `Learner.predict`)
- a potential <code>decodes</code> method that is used on predictions in inference (for instance, an argmax in classification)
The `args` and `kwargs` will be passed to `loss_cls` during the initialization to instantiate a loss function. `axis` is put at the end for losses like softmax that are often performed on the last axis. If `floatify=True`, the `targs` will be converted to floats (useful for losses that only accept float targets like `BCEWithLogitsLoss`), and `is_2d` determines if we flatten while keeping the first dimension (batch size) or completely flatten the input. We want the first for losses like Cross Entropy, and the second for pretty much anything else.
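For intuition, here is a self-contained sketch (plain PyTorch, not the fastai class itself) of the transpose-then-flatten step that `BaseLoss.__call__` performs before handing the tensors to the wrapped loss:

```python
import torch

def flatten_for_loss(inp, targ, axis=-1, is_2d=True):
    # Put `axis` last, make memory contiguous, then flatten
    # (keeping the class dimension when is_2d is True).
    inp = inp.transpose(axis, -1).contiguous()
    inp = inp.view(-1, inp.shape[-1]) if is_2d else inp.view(-1)
    return inp, targ.view(-1)

out = torch.randn(32, 5, 10)              # (batch, sequence, classes)
tgt = torch.randint(0, 10, (32, 5))
flat_out, flat_tgt = flatten_for_loss(out, tgt)
print(flat_out.shape, flat_tgt.shape)     # torch.Size([160, 10]) torch.Size([160])
```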
```
# export
@delegates()
class CrossEntropyLossFlat(BaseLoss):
"Same as `nn.CrossEntropyLoss`, but flattens input and target."
y_int = True
@use_kwargs_dict(keep=True, weight=None, ignore_index=-100, reduction='mean')
def __init__(self, *args, axis=-1, **kwargs): super().__init__(nn.CrossEntropyLoss, *args, axis=axis, **kwargs)
def decodes(self, x): return x.argmax(dim=self.axis)
def activation(self, x): return F.softmax(x, dim=self.axis)
tst = CrossEntropyLossFlat()
output = torch.randn(32, 5, 10)
target = torch.randint(0, 10, (32,5))
#nn.CrossEntropyLoss would fail with those two tensors, but not our flattened version.
_ = tst(output, target)
test_fail(lambda x: nn.CrossEntropyLoss()(output,target))
#Associated activation is softmax
test_eq(tst.activation(output), F.softmax(output, dim=-1))
#This loss function has a decodes which is argmax
test_eq(tst.decodes(output), output.argmax(dim=-1))
#In a segmentation task, we want to take the softmax over the channel dimension
tst = CrossEntropyLossFlat(axis=1)
output = torch.randn(32, 5, 128, 128)
target = torch.randint(0, 5, (32, 128, 128))
_ = tst(output, target)
test_eq(tst.activation(output), F.softmax(output, dim=1))
test_eq(tst.decodes(output), output.argmax(dim=1))
```
[Focal Loss](https://arxiv.org/pdf/1708.02002.pdf) is the same as cross entropy except easy-to-classify observations are down-weighted in the loss calculation. The strength of down-weighting is proportional to the size of the `gamma` parameter. Put another way, the larger `gamma` the less the easy-to-classify observations contribute to the loss.
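Numerically, the down-weighting factor for a single example is `(1 - pt) ** gamma`, where `pt = exp(-ce)` is the predicted probability of the true class. A small illustrative sketch (not the fastai implementation):

```python
import math

def focal_from_ce(ce_loss, gamma=2.0):
    # pt: the model's probability for the true class, recovered from the CE loss.
    pt = math.exp(-ce_loss)
    return (1 - pt) ** gamma * ce_loss

easy = -math.log(0.9)   # well-classified example (pt = 0.9)
hard = -math.log(0.1)   # badly-classified example (pt = 0.1)
print(focal_from_ce(easy) / easy)  # ~0.01: easy examples are strongly down-weighted
print(focal_from_ce(hard) / hard)  # ~0.81: hard examples keep most of their weight
```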
```
# export
class FocalLossFlat(CrossEntropyLossFlat):
"""
Same as CrossEntropyLossFlat but with a focal parameter, `gamma`. Focal loss was introduced by Lin et al.
https://arxiv.org/pdf/1708.02002.pdf. Note that the class weighting factor from the paper, alpha, can be
implemented through the PyTorch `weight` argument of nn.CrossEntropyLoss.
"""
y_int = True
@use_kwargs_dict(keep=True, weight=None, ignore_index=-100, reduction='mean')
def __init__(self, *args, gamma=2, axis=-1, **kwargs):
self.gamma = gamma
self.reduce = kwargs.pop('reduction') if 'reduction' in kwargs else 'mean'
super().__init__(*args, reduction='none', axis=axis, **kwargs)
def __call__(self, inp, targ, **kwargs):
ce_loss = super().__call__(inp, targ, **kwargs)
pt = torch.exp(-ce_loss)
fl_loss = (1-pt)**self.gamma * ce_loss
return fl_loss.mean() if self.reduce == 'mean' else fl_loss.sum() if self.reduce == 'sum' else fl_loss
#Compare focal loss with gamma = 0 to cross entropy
fl = FocalLossFlat(gamma=0)
ce = CrossEntropyLossFlat()
output = torch.randn(32, 5, 10)
target = torch.randint(0, 10, (32,5))
test_close(fl(output, target), ce(output, target))
#Test focal loss with gamma > 0 is different than cross entropy
fl = FocalLossFlat(gamma=2)
test_ne(fl(output, target), ce(output, target))
#In a segmentation task, we want to take the softmax over the channel dimension
fl = FocalLossFlat(gamma=0, axis=1)
ce = CrossEntropyLossFlat(axis=1)
output = torch.randn(32, 5, 128, 128)
target = torch.randint(0, 5, (32, 128, 128))
test_close(fl(output, target), ce(output, target), eps=1e-4)
test_eq(fl.activation(output), F.softmax(output, dim=1))
test_eq(fl.decodes(output), output.argmax(dim=1))
# export
@delegates()
class BCEWithLogitsLossFlat(BaseLoss):
"Same as `nn.BCEWithLogitsLoss`, but flattens input and target."
@use_kwargs_dict(keep=True, weight=None, reduction='mean', pos_weight=None)
def __init__(self, *args, axis=-1, floatify=True, thresh=0.5, **kwargs):
if kwargs.get('pos_weight', None) is not None and kwargs.get('flatten', None) is True:
raise ValueError("`flatten` must be False when using `pos_weight` to avoid a RuntimeError due to shape mismatch")
if kwargs.get('pos_weight', None) is not None: kwargs['flatten'] = False
super().__init__(nn.BCEWithLogitsLoss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)
self.thresh = thresh
def decodes(self, x): return x>self.thresh
def activation(self, x): return torch.sigmoid(x)
tst = BCEWithLogitsLossFlat()
output = torch.randn(32, 5, 10)
target = torch.randn(32, 5, 10)
#nn.BCEWithLogitsLoss would fail with those two tensors, but not our flattened version.
_ = tst(output, target)
test_fail(lambda x: nn.BCEWithLogitsLoss()(output,target))
output = torch.randn(32, 5)
target = torch.randint(0,2,(32, 5))
#nn.BCEWithLogitsLoss would fail with int targets but not our flattened version.
_ = tst(output, target)
test_fail(lambda x: nn.BCEWithLogitsLoss()(output,target))
tst = BCEWithLogitsLossFlat(pos_weight=torch.ones(10))
output = torch.randn(32, 5, 10)
target = torch.randn(32, 5, 10)
_ = tst(output, target)
test_fail(lambda x: nn.BCEWithLogitsLoss()(output,target))
#Associated activation is sigmoid
test_eq(tst.activation(output), torch.sigmoid(output))
# export
@use_kwargs_dict(weight=None, reduction='mean')
def BCELossFlat(*args, axis=-1, floatify=True, **kwargs):
"Same as `nn.BCELoss`, but flattens input and target."
return BaseLoss(nn.BCELoss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)
tst = BCELossFlat()
output = torch.sigmoid(torch.randn(32, 5, 10))
target = torch.randint(0,2,(32, 5, 10))
_ = tst(output, target)
test_fail(lambda x: nn.BCELoss()(output,target))
# export
@use_kwargs_dict(reduction='mean')
def MSELossFlat(*args, axis=-1, floatify=True, **kwargs):
"Same as `nn.MSELoss`, but flattens input and target."
return BaseLoss(nn.MSELoss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)
tst = MSELossFlat()
output = torch.sigmoid(torch.randn(32, 5, 10))
target = torch.randint(0,2,(32, 5, 10))
_ = tst(output, target)
test_fail(lambda x: nn.MSELoss()(output,target))
#hide
#cuda
#Test losses work in half precision
if torch.cuda.is_available():
output = torch.sigmoid(torch.randn(32, 5, 10)).half().cuda()
target = torch.randint(0,2,(32, 5, 10)).half().cuda()
for tst in [BCELossFlat(), MSELossFlat()]: _ = tst(output, target)
# export
@use_kwargs_dict(reduction='mean')
def L1LossFlat(*args, axis=-1, floatify=True, **kwargs):
"Same as `nn.L1Loss`, but flattens input and target."
return BaseLoss(nn.L1Loss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)
#export
class LabelSmoothingCrossEntropy(Module):
y_int = True
def __init__(self, eps:float=0.1, weight=None, reduction='mean'):
store_attr()
def forward(self, output, target):
c = output.size()[1]
log_preds = F.log_softmax(output, dim=1)
if self.reduction=='sum': loss = -log_preds.sum()
else:
loss = -log_preds.sum(dim=1) #We divide by that size at the return line so sum and not mean
if self.reduction=='mean': loss = loss.mean()
return loss*self.eps/c + (1-self.eps) * F.nll_loss(log_preds, target.long(), weight=self.weight, reduction=self.reduction)
def activation(self, out): return F.softmax(out, dim=-1)
def decodes(self, out): return out.argmax(dim=-1)
lmce = LabelSmoothingCrossEntropy()
output = torch.randn(32, 5, 10)
target = torch.randint(0, 10, (32,5))
test_eq(lmce(output.flatten(0,1), target.flatten()), lmce(output.transpose(-1,-2), target))
```
On top of the formula we define:
- a `reduction` attribute, that will be used when we call `Learner.get_preds`
- a `weight` attribute that is passed through to the underlying `F.nll_loss`
- an `activation` function that represents the activation fused in the loss (since we use cross entropy behind the scenes). It will be applied to the output of the model when calling `Learner.get_preds` or `Learner.predict`
- a <code>decodes</code> function that converts the output of the model to a format similar to the target (here indices). This is used in `Learner.predict` and `Learner.show_results` to decode the predictions
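The smoothing itself mixes a uniform term over all classes with the usual NLL term. A minimal standalone version, mirroring the `forward` above for 2-d inputs only (a sketch, not the fastai module):

```python
import torch
import torch.nn.functional as F

def label_smoothing_ce(output, target, eps=0.1):
    c = output.size(1)
    log_preds = F.log_softmax(output, dim=1)
    # eps of the probability mass is spread uniformly over the c classes.
    uniform = -log_preds.sum(dim=1).mean() / c
    return eps * uniform + (1 - eps) * F.nll_loss(log_preds, target)

output = torch.randn(8, 5)
target = torch.randint(0, 5, (8,))
# With eps=0 this reduces to plain cross entropy.
assert torch.isclose(label_smoothing_ce(output, target, eps=0.0),
                     F.cross_entropy(output, target))
```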
```
#export
@delegates()
class LabelSmoothingCrossEntropyFlat(BaseLoss):
"Same as `LabelSmoothingCrossEntropy`, but flattens input and target."
y_int = True
@use_kwargs_dict(keep=True, eps=0.1, reduction='mean')
def __init__(self, *args, axis=-1, **kwargs): super().__init__(LabelSmoothingCrossEntropy, *args, axis=axis, **kwargs)
def activation(self, out): return F.softmax(out, dim=-1)
def decodes(self, out): return out.argmax(dim=-1)
```
## Export -
```
#hide
from nbdev.export import *
notebook2script()
```
# Module 1: Dataset
## Import
```
# not all libraries are used
!pip install imdbpy
from bs4 import BeautifulSoup
import urllib.request
import urllib.parse
import re
import csv
import time
import datetime
import imdb
import ast
from tqdm import tnrange, tqdm_notebook
import sys
from urllib.error import HTTPError
import warnings
import html
import json
import os
import math
import inspect
```
## Setup google drive
```
root_dir = '/root/aml/'
drive_dir = root_dir + 'My Drive/AML/'
git_rep = 'Git'
git_dir = drive_dir + git_rep+'/'
dataset_dir = git_dir + 'datasets'
# Run this cell to mount your Google Drive.
from google.colab import drive
drive.mount(root_dir, force_remount=True) # run this line every time you have changed something in you drive
os.chdir(drive_dir)
```
## Utility functions
```
_WARNINGS = False
def urlopen(url, mobile = False):
try:
if mobile:
urlheader = {'User-Agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X) AppleWebKit/534.46' ,
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
'Accept-Encoding': 'none',
'Accept-Language': 'en-US,en;q=0.8',
'Connection': 'keep-alive'}
else:
urlheader = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) '
'AppleWebKit/537.11 (KHTML, like Gecko) '
'Chrome/23.0.1271.64 Safari/537.11',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
'Accept-Encoding': 'none',
'Accept-Language': 'en-US,en;q=0.8',
'Connection': 'keep-alive'
}
#header2 = 'Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.27 Safari/537.17'
return urllib.request.urlopen(urllib.request.Request(url=url, data=None, headers=urlheader)).read().decode('utf-8')
except HTTPError as e:
if (_WARNINGS):
time.sleep(5)
warnings.warn(str(e))
return urlopen(url, mobile)  # retry, preserving the mobile flag
else:
raise e
def wrap_error(func):
def func_wrapper(*args, **kwargs):
if (_WARNINGS):
try:
return func(*args, **kwargs)
except Exception as e:
warnings.warn(datetime.datetime.now().strftime("%d-%m-%Y %H:%M")+" - "+"Function "+ func.__name__ + " "+str(e))
else:
return func(*args, **kwargs)
return func_wrapper
```
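The `wrap_error` guard is worth seeing in isolation: during a long scrape, it turns exceptions into timestamped warnings plus a `None` result instead of crashing the whole run. A self-contained sketch of the same pattern (using a hypothetical `flaky` function for illustration):

```python
import datetime
import functools
import warnings

def wrap_error(func):
    """Turn exceptions into timestamped warnings, returning None instead of crashing."""
    @functools.wraps(func)
    def func_wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            stamp = datetime.datetime.now().strftime("%d-%m-%Y %H:%M")
            warnings.warn(f"{stamp} - Function {func.__name__} {e}")
            return None
    return func_wrapper

@wrap_error
def flaky(x):
    if x < 0:
        raise ValueError("negative input")
    return x * 2

print(flaky(3))   # 6
```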
## MoviesDataset class
### Definition of the class
```
class MoviesDataset:
def __init__(self):
self.Youtube_urlroot = "https://www.youtube.com"
self.Imdb_urlroot = "https://www.imdb.com"
self.TheNumbers_urlroot = "https://www.the-numbers.com"
self.max_number_movies = 16439
self.number_movies_per_page = 100
self.IMDb = imdb.IMDb()
self.data = {}
self.filename = "MoviesDataset"
def NewJSON(self):
with open(self.filename+'.json', 'w') as jsonFile:
load = {}
json.dump(load, jsonFile)
jsonFile.close()
self.estimatedSize = 0
# append to the created json file
@wrap_error
def AppendJSON(self, movie):
with open(self.filename+'.json', "r+") as jsonFile:
load = json.load(jsonFile)
load[movie['boxOffice']['id']-1] = movie
jsonFile.seek(0) # rewind
json.dump(load, jsonFile)
jsonFile.truncate()
return os.path.getsize(self.filename+'.json')/1000000, len(load)
#jsonFile.close()
# load dataset
def Load(self, filename = None):
if not filename:
filename = self.filename
else:
self.filename = filename
with open(filename+'.json', "r+") as jsonFile:
self.data = json.load(jsonFile)
jsonFile.seek(0) # rewind
json.dump(self.data, jsonFile)
jsonFile.truncate()
#retrieve data from TheNumbers
@wrap_error
def BoxOfficeRetrieve(self, item):
data = {}
data['id']=int(item[0].text.replace(',',''))
data['name']=item[2].text
data['year']=int(item[1].text)
data['url']=item[2].find_all('a')[0]['href']
data['revenue_total']= int(item[3].text.replace(',','').replace('$',''))
# retrieve first week revenue
url = urllib.parse.urljoin("http://www.the-numbers.com", data['url'])
html = urlopen(url, False)
soup = BeautifulSoup(html, 'html.parser')
div = soup.findAll(attrs={'id':'box_office_chart'})
if len(div) >0:
div = div[0]
tables = div.findAll("table")
data['revenue_week1']= int(div.findAll("td")[2].text.replace(',','').replace('$',''))
else:
return None, data['name']
# retrieve country
url = url[:url.index('#')]+'#tab=summary'
html = urlopen(url, False)
soup = BeautifulSoup(html, 'html.parser')
table = soup.findAll("table")[3]
data['country'] = [i.text for i in table.findAll("tr")[-1].findAll("td")[1].findAll("a")]
return data, data['name']
#search imdb id
@wrap_error
def IMDbSearch(self, movie_name, movie_year):
try:
result = self.IMDb.search_movie(movie_name)
#print(result)
score = 0
for item in result:
try:
if (item['kind'] == 'movie' and item['year'] == movie_year):
if (len(set(list(str(item).lower().split(" "))).intersection(list(movie_name.lower().split(" "))))>score):
return item.movieID
except KeyError:
if (item['kind'] == 'movie'):
if (len(set(list(str(item).lower().split(" "))).intersection(list(movie_name.lower().split(" "))))>score):
return item.movieID
for item in result:
try:
if (item['kind'] == 'episode' and item['year'] == movie_year):
if (len(set(list(str(item).lower().split(" "))).intersection(list(movie_name.lower().split(" "))))>score):
return item.movieID
except KeyError:
if (item['kind'] == 'episode'):
if (len(set(list(str(item).lower().split(" "))).intersection(list(movie_name.lower().split(" "))))>score):
return item.movieID
return None
except:
print('Movie:' + movie_name + ' - year:' + str(movie_year) + ' could not be found in IMDb')
return None
@wrap_error
def IMDbRetrieve(self, movie_name, movie_year):
id = self.IMDbSearch(movie_name, movie_year)
data = {}
if id:
url = 'https://www.imdb.com/title/tt'+str(id)
html = urlopen(url)
soup = BeautifulSoup(html, 'html.parser')
load = json.loads(soup.find('script', type='application/ld+json').text)
data.update(load)
if 'embedUrl' in urlopen(urllib.parse.urljoin(self.Imdb_urlroot, data['url'])):
url = urllib.parse.urljoin(self.Imdb_urlroot, data['trailer']['embedUrl'])
html = urlopen(url)
script = BeautifulSoup(html, 'html.parser').find_all('script')[-3].text
load = json.loads(script[script.index('push(')+len('push('):script.index(');')])
data['video'] = load
return data
#retrieve data from Youtube (also for Mobile device, defined by the url header information)
@wrap_error
def YoutubeRetrieve(self, movie_name, movie_year):
data = {}
query = urllib.parse.quote(movie_name+' '+str(movie_year)+' official trailer')
url = 'https://www.google.com/search?biw=1620&bih=889&tbs=srcf%3AH4sIAAAAAAAAANOuzC8tKU1K1UvOz1UrSM0vyIEwSzJSy4sSC8DsssSizNSSSoiSnMTK5NS8kqLEHL2UVLX0zPREEA0AcHJbJEcAAAA&tbm=vid&q='+query
#print(url)
html = urlopen(url, mobile=True)
soup = BeautifulSoup(html, 'html.parser')
div = soup.findAll(attrs={'class':'mnr-c Tyz4ad'})
if len(div):
try:
pos = 0
while not('watch?v=' in str(div[pos])):
pos += 1
div = div[pos]
href = div.find_all('a')[0]['href']
#print(href)
data['name'] = soup.findAll(attrs={'class':'lCT7Rc Rqb6rf'})[0].text
data['url'] = '/watch?v='+str(href[href.index('watch?v=')+len('watch?v='):])
return data
except IndexError:
return None
else:
return None
#retrieve is based on the list of TheNumbers
def Generate(self, movies_id, filename = None, save = True, new = True):
def getMoviesIDList(movies_id):
if isinstance(movies_id, list):
if len(movies_id) >= 2:
if isinstance(movies_id[0], str) or isinstance(movies_id[-1], str):
if (movies_id[0] == 'start'):
start_id = 1
movies_id = list(range(start_id, movies_id[-1]+1))
if (movies_id[-1] == 'end'):
end_id = self.max_number_movies
movies_id = list(range(movies_id[0], end_id+1))
else:
movies_id = list(set(movies_id) & set(list(range(1,self.max_number_movies+1))))
movies_id.sort()
else:
raise Exception("movies_id arg must be a list of the at least 2 ids")
return list(movies_id)
def id2page(id):
return math.floor((id-1)/self.number_movies_per_page)*100+1
def getMoviesList(page, ids):
url = urllib.parse.urljoin(self.TheNumbers_urlroot, "/box-office-records/domestic/all-movies/cumulative/all-time/") + str(page)
html = urlopen(url)
soup = BeautifulSoup(html, 'html.parser')
tables = soup.findAll("table")
if tables:
first_table = tables[0]
first_table = first_table.find_all('tr')[1:]
return [i for i in first_table if int(i.find_all('td')[0].text.replace(',', '')) in ids]
else:
return None
def getOneMovie (page_movies, id):
for movie in page_movies:
if int(movie.find_all('td')[0].text.replace(',', '')) == id:
return movie.find_all('td')
return None
if filename:
self.filename = filename
if save:
if new:
self.NewJSON()
else:
self.data = []
#regroup
movies_id = getMoviesIDList(movies_id)
pbar_pages = tqdm_notebook(list(set([id2page(id) for id in movies_id])), file=sys.stdout, ncols = 800)
pbar_movies = tqdm_notebook(movies_id, file=sys.stdout, ncols = 800)
current_page = id2page(movies_id[0])
page_movies = getMoviesList(current_page, list(set(movies_id) & set(range(current_page, current_page+100))))
one_movie = getOneMovie(page_movies, movies_id[0])
pbar_pages.set_description(('Page %d: ') % (current_page))
for id in pbar_movies:
if id2page(id) != current_page:
current_page = id2page(id)
page_movies = getMoviesList(current_page, list(set(movies_id) & set(range(current_page, current_page+100))))
pbar_pages.update()
boxoffice_data, imdb_data, youtube_data = None, None, None
#get the movie line
one_movie = getOneMovie(page_movies, id)
if (one_movie):
#retrieve box office
boxoffice_data, movie_name = self.BoxOfficeRetrieve(item = one_movie)
if boxoffice_data:
#retrieve IMDb
imdb_data = self.IMDbRetrieve(movie_name = boxoffice_data['name'], movie_year = boxoffice_data['year'])
if imdb_data:
#retrieve youtube
#print(boxoffice_data['name'])
youtube_data = self.YoutubeRetrieve(movie_name = imdb_data['name'], movie_year = boxoffice_data['year'])
if youtube_data:
#all data retrieved and ready to be stored
movie = {'boxOffice' : boxoffice_data}
movie.update(imdb_data)
if not 'video' in movie: #trailer in imdb was not found
movie['video'] = {'videos': {'videoMetadata':{}}}
movie['video']['videos']['videoMetadata'].update({'youtube': youtube_data})
print(str(id)+': ', movie['name'], ' stored')
#save in json file and update bar
if save:
current_size, json_length = self.AppendJSON(movie)
self.estimatedSize = current_size + current_size/json_length*(len(movies_id))
pbar_pages.set_description(('Page %d estimated total size %d.3MB: ') % (current_page, self.estimatedSize))
else:
self.data.append(movie)
pbar_movies.set_description(str(id)+': '+movie_name)
```
### creation of movies object
```
movies = MoviesDataset()
```
### dataset generation
**movies.Generate**:
* **movies_id** : list(range(#id_start, #id_end+1)), [4, 32, 501], [#id_start, 'end'], ['start', #id_end]
* **filename**: "MoviesDataset" (the name of the generated json file)
* **save**: True, False (if True, save the retrieved movies in the json file, if False, the data will be stored in the variable movies.data)
* **new**: True, False (create a new json file, be careful, this will overwrite the existing json file named $filename)
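Internally, `Generate` groups the requested ids by listing page on The Numbers (100 movies per page); the page lookup is the same arithmetic as the class's `id2page` helper:

```python
import math

def id2page(movie_id, per_page=100):
    """Map a movie rank to the first rank shown on its listing page."""
    return math.floor((movie_id - 1) / per_page) * per_page + 1

print(id2page(1), id2page(100), id2page(101))  # 1 1 101
```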
```
os.chdir(dataset_dir)
movies.Generate(movies_id = [16364, 'end'], filename='Dataset', save = True, new = False)
```
### load dataset
**movies.Load**:
* **filename**: "MoviesDataset" (the name of the json file to be loaded; the result is stored in movies.data). If no value is given, movies.filename is used, which defaults to "MoviesDataset".
```
movies.Load(filename = 'Dataset')
print(len(movies.data))
```
### filter
Download the software HugeDataViewer and open your dataset to see how the dataset is structured.
Here are the most important dict operations we use in this project (for more, see the Python docs):
* Get the movie of id=#id
```
movies.data[str(#id)]
```
* Get list of movies given a condition
```
[item for id, item in movies.data.items() if item['boxOffice']['country'][0] == 'United States']
```
```
movies.data.get(str(4)).get('actor')[0].get('name')
[item for id, item in movies.data.items() if item.get('boxOffice').get('name') == 'Minions']
```
The returned data is stored in movies.data as a dict and has the following structure:
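For reference, here is a hypothetical miniature of that layout (the field names follow the snippets above; the values themselves are made up for illustration):

```python
# A tiny, made-up stand-in for movies.data: a dict keyed by string ids.
movies_data = {
    "0": {"name": "Minions",
          "boxOffice": {"id": 1, "country": ["United States"], "revenue_total": 336045770}},
    "1": {"name": "Example",
          "boxOffice": {"id": 2, "country": ["France"], "revenue_total": 1000000}},
}
# The same filtering idiom as above: keep movies whose first country is the US.
us_movies = [m for m in movies_data.values() if m["boxOffice"]["country"][0] == "United States"]
print([m["name"] for m in us_movies])  # ['Minions']
```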

| github_jupyter |
# Load MXNet model
In this tutorial, you learn how to load an existing MXNet model and use it to run a prediction task.
## Preparation
This tutorial requires the installation of Java Kernel. For more information on installing the Java Kernel, see the [README](https://github.com/deepjavalibrary/djl/blob/master/jupyter/README.md).
```
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.16.0
%maven ai.djl:model-zoo:0.16.0
%maven ai.djl.mxnet:mxnet-engine:0.16.0
%maven ai.djl.mxnet:mxnet-model-zoo:0.16.0
%maven org.slf4j:slf4j-simple:1.7.32
import java.awt.image.*;
import java.nio.file.*;
import ai.djl.*;
import ai.djl.inference.*;
import ai.djl.ndarray.*;
import ai.djl.modality.*;
import ai.djl.modality.cv.*;
import ai.djl.modality.cv.util.*;
import ai.djl.modality.cv.transform.*;
import ai.djl.modality.cv.translator.*;
import ai.djl.translate.*;
import ai.djl.training.util.*;
import ai.djl.util.*;
```
## Step 1: Prepare your MXNet model
This tutorial assumes that you have an MXNet model trained using Python. An MXNet symbolic model usually contains the following files:
* Symbol file: {MODEL_NAME}-symbol.json - a json file that contains network information about the model
* Parameters file: {MODEL_NAME}-{EPOCH}.params - a binary file that stores the parameter weight and bias
* Synset file: synset.txt - an optional text file that stores classification classes labels
This tutorial uses a pre-trained MXNet `resnet18_v1` model.
We use `DownloadUtils` to download files from the internet.
```
DownloadUtils.download("https://mlrepo.djl.ai/model/cv/image_classification/ai/djl/mxnet/resnet/0.0.1/resnet18_v1-symbol.json", "build/resnet/resnet18_v1-symbol.json", new ProgressBar());
DownloadUtils.download("https://mlrepo.djl.ai/model/cv/image_classification/ai/djl/mxnet/resnet/0.0.1/resnet18_v1-0000.params.gz", "build/resnet/resnet18_v1-0000.params", new ProgressBar());
DownloadUtils.download("https://mlrepo.djl.ai/model/cv/image_classification/ai/djl/mxnet/synset.txt", "build/resnet/synset.txt", new ProgressBar());
```
## Step 2: Load your model
```
Path modelDir = Paths.get("build/resnet");
Model model = Model.newInstance("resnet");
model.load(modelDir, "resnet18_v1");
```
## Step 3: Create a `Translator`
```
Pipeline pipeline = new Pipeline();
pipeline.add(new CenterCrop()).add(new Resize(224, 224)).add(new ToTensor());
Translator<Image, Classifications> translator = ImageClassificationTranslator.builder()
.setPipeline(pipeline)
.optSynsetArtifactName("synset.txt")
.optApplySoftmax(true)
.build();
```
## Step 4: Load image for classification
```
var img = ImageFactory.getInstance().fromUrl("https://resources.djl.ai/images/kitten.jpg");
img.getWrappedImage()
```
## Step 5: Run inference
```
Predictor<Image, Classifications> predictor = model.newPredictor(translator);
Classifications classifications = predictor.predict(img);
classifications
```
## Summary
Now, you can load any MXNet symbolic model and run inference.
You might also want to check out [load_pytorch_model.ipynb](https://github.com/deepjavalibrary/djl/blob/master/jupyter/load_pytorch_model.ipynb) which demonstrates loading a local model using the ModelZoo API.
# Tau_p effects
```
import pprint
import subprocess
import sys
sys.path.append('../')
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
%matplotlib inline
np.set_printoptions(suppress=True, precision=2)
sns.set(font_scale=2.0)
```
#### Git machinery
```
run_old_version = False
if run_old_version:
hash_when_file_was_written = 'beb606918461c91b007f25a007b71466d94cf516'
hash_at_the_moment = subprocess.check_output(["git", 'rev-parse', 'HEAD']).strip()
print('Actual hash', hash_at_the_moment)
print('Hash of the commit used to run the simulation', hash_when_file_was_written)
subprocess.call(['git', 'checkout', hash_when_file_was_written])
from network import Protocol, BCPNNFast, NetworkManager
from analysis_functions import calculate_recall_success_sequences, calculate_recall_success
from analysis_functions import calculate_recall_time_quantities, calculate_excitation_inhibition_ratio
from analysis_functions import calculate_total_connections
from plotting_functions import plot_weight_matrix, plot_winning_pattern
```
## How do the probabilities evolve in time depending on tau_p
#### An example
```
# Patterns parameters
hypercolumns = 4
minicolumns = 20
n_patterns = 10
# Manager properties
dt = 0.001
T_recalling = 5.0
values_to_save = ['o', 'p_pre', 'p_post', 'p_co', 'w']
# Protocol
training_time = 0.1
inter_sequence_interval = 1.0
inter_pulse_interval = 0.0
epochs = 3
# Network parameters
tau_z_pre = 0.150
tau_p = 500.0
# Build the network
nn = BCPNNFast(hypercolumns, minicolumns, tau_p=tau_p, tau_z_pre=tau_z_pre)
# Build the manager
manager = NetworkManager(nn=nn, dt=dt, values_to_save=values_to_save)
# Build protocol
protocol = Protocol()
sequences = [[i for i in range(n_patterns)]]
protocol.cross_protocol(sequences, training_time=training_time,
inter_sequence_interval=inter_sequence_interval, epochs=epochs)
manager.run_network_protocol(protocol=protocol, verbose=True)
manager
plot_weight_matrix(manager.nn)
fig = plt.figure(figsize=(16, 12))
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222)
ax3 = fig.add_subplot(223)
ax4 = fig.add_subplot(224)
o = manager.history['o']
p_pre = manager.history['p_pre']
p_co = manager.history['p_co']
w = manager.history['w']
pattern_1 = 3
pattern_2 = 4
time = np.arange(0, manager.T_total, dt)
ax1.plot(time, o[:, pattern_1])
ax1.plot(time, o[:, pattern_2])
ax1.set_ylabel('activity')
ax1.set_xlabel('Time')
ax2.plot(time, p_pre[:, pattern_1])
ax2.plot(time, p_pre[:, pattern_2])
ax2.set_ylabel('p')
ax2.set_xlabel('Time')
ax3.plot(time, p_co[:, pattern_2, pattern_1])
ax3.set_ylabel('p_co')
ax3.set_xlabel('Time')
ax4.plot(time, w[:, pattern_2, pattern_1])
ax4.set_ylabel('w')
ax4.set_xlabel('Time');
nn.g_w = 15.0
nn.g_w_ampa = 15.0
T_recall = T_recalling  # recall window defined above
T_cue = 0.100
n = 1
total, mean, std, success = calculate_recall_time_quantities(manager, T_recall, T_cue, n, sequences)
print('success', success)
plot_winning_pattern(manager)
fig = plt.figure(figsize=(16, 12))
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222)
ax3 = fig.add_subplot(223)
ax4 = fig.add_subplot(224)
o = manager.history['o']
p_pre = manager.history['p_pre']
p_co = manager.history['p_co']
w = manager.history['w']
pattern_1 = 3
pattern_2 = 4
time = np.arange(0, manager.T_total, dt)
ax1.plot(time, o[:, pattern_1])
ax1.plot(time, o[:, pattern_2])
ax1.set_ylabel('activity')
ax1.set_xlabel('Time')
ax2.plot(time, p_pre[:, pattern_1])
ax2.plot(time, p_pre[:, pattern_2])
ax2.set_ylabel('p')
ax2.set_xlabel('Time')
ax3.plot(time, p_co[:, pattern_2, pattern_1])
ax3.set_ylabel('p_co')
ax3.set_xlabel('Time')
ax4.plot(time, w[:, pattern_2, pattern_1])
ax4.set_ylabel('w')
ax4.set_xlabel('Time');
```
#### Multiple values of tau_p
```
tau_p_list = [1, 10, 100, 1000]
fig = plt.figure(figsize=(16, 12))
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222)
ax3 = fig.add_subplot(223)
ax4 = fig.add_subplot(224)
pattern_1 = 3
pattern_2 = 4
# Patterns parameters
hypercolumns = 4
minicolumns = 20
n_patterns = 10
# Manager properties
dt = 0.001
T_recalling = 5.0
values_to_save = ['o', 'p_pre', 'p_post', 'p_co', 'w']
# Protocol
training_time = 0.1
inter_sequence_interval = 1.0
inter_pulse_interval = 0.0
epochs = 3
# Network parameters
tau_z_pre = 0.150
tau_p = 10.0
for tau_p in tau_p_list:
# Build the network
nn = BCPNNFast(hypercolumns, minicolumns, tau_p=tau_p, tau_z_pre=tau_z_pre)
# Build the manager
manager = NetworkManager(nn=nn, dt=dt, values_to_save=values_to_save)
# Build protocol
protocol = Protocol()
sequences = [[i for i in range(n_patterns)]]
protocol.cross_protocol(sequences, training_time=training_time,
inter_sequence_interval=inter_sequence_interval, epochs=epochs)
manager.run_network_protocol(protocol=protocol, verbose=False)
# Plotting
o = manager.history['o']
p_pre = manager.history['p_pre']
p_post = manager.history['p_post']
p_co = manager.history['p_co']
w = manager.history['w']
pattern_1 = 3
pattern_2 = 4
time = np.arange(0, manager.T_total, dt)
if False:
ax1.plot(time, o[:, pattern_1])
ax1.plot(time, o[:, pattern_2])
ax1.set_ylabel('activity')
ax1.set_xlabel('Time')
ax1.plot(time, p_post[:, pattern_1], label=str(tau_p))
ax1.plot(time, p_post[:, pattern_2], label=str(tau_p))
ax1.set_ylabel('p')
ax1.set_xlabel('Time')
ax2.plot(time, p_pre[:, pattern_1], label=str(tau_p))
ax2.plot(time, p_pre[:, pattern_2], label=str(tau_p))
ax2.set_ylabel('p')
ax2.set_xlabel('Time')
ax3.plot(time, p_co[:, pattern_2, pattern_1])
ax3.set_ylabel('p_co')
ax3.set_xlabel('Time')
ax4.plot(time, w[:, pattern_2, pattern_1])
ax4.set_ylabel('w')
ax4.set_xlabel('Time')
ax1.legend()
ax2.legend();
```
## Convergence and final weights based on tau_p
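The weights whose convergence is examined here are derived from the running probability estimates: in standard BCPNN formulations the weight is the log ratio of the co-activation probability to the product of the marginals, w = log(p_co / (p_pre * p_post)). A hedged sketch of that relation (the exact regularization used by `BCPNNFast` may differ; the `eps` terms are an assumption to avoid taking the log of zero):

```python
import numpy as np

def bcpnn_weight(p_co, p_pre, p_post, eps=1e-10):
    """w = log(p_co / (p_pre * p_post)): positive when two units co-activate
    more often than chance, negative when they are anti-correlated."""
    return np.log((p_co + eps ** 2) / ((p_pre + eps) * (p_post + eps)))

w_pos = bcpnn_weight(0.05, 0.1, 0.1)   # co-activation above chance
w_neg = bcpnn_weight(0.001, 0.1, 0.1)  # co-activation below chance
```

For statistically independent units, p_co equals p_pre * p_post and the weight stays near zero.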
```
tau_p_vector = np.logspace(1.0, 2.0, num=15)
weights = []
weights_inhibition = []
weights_ampa = []
weights_free_attactor = []
exc_inh_ratio = []
exc_inh_ratio_ampa = []
mean_recall_time = []
recall_successes = []
from_pattern_inh = 0
from_pattern = 3
to_pattern = 4
T_recall = 5.0
T_cue = 0.100
I_cue = 0
n = 1
for tau_p in tau_p_vector:
print('tau_p', tau_p)
# Patterns parameters
hypercolumns = 4
minicolumns = 20
n_patterns = 10
# Manager properties
dt = 0.001
T_recalling = 5.0
values_to_save = ['o']
# Protocol
training_time = 0.1
inter_sequence_interval = 1.0
inter_pulse_interval = 0.0
epochs = 3
# Build the network
nn = BCPNNFast(hypercolumns, minicolumns, tau_p=tau_p, tau_z_pre=tau_z_pre)
# Build the manager
manager = NetworkManager(nn=nn, dt=dt, values_to_save=values_to_save)
# Build protocol
protocol = Protocol()
sequences = [[i for i in range(n_patterns)]]
protocol.cross_protocol(sequences, training_time=training_time,
inter_sequence_interval=inter_sequence_interval, epochs=epochs)
manager.run_network_protocol(protocol=protocol, verbose=False)
total, mean, std, success = calculate_recall_time_quantities(manager, T_recall, T_cue, n, sequences)
mean_ratio, std_ratio, aux = calculate_excitation_inhibition_ratio(nn, sequences, ampa=False)
mean_ratio_ampa, std_ratio, aux = calculate_excitation_inhibition_ratio(nn, sequences, ampa=True)
# Store
weights.append(nn.w[to_pattern, from_pattern])
weights_inhibition.append(nn.w[to_pattern, from_pattern_inh])
weights_ampa.append(nn.w_ampa[0, minicolumns])
weights_free_attactor.append(nn.w[to_pattern, n_patterns + 2])
exc_inh_ratio.append(mean_ratio)
exc_inh_ratio_ampa.append(mean_ratio_ampa)
mean_recall_time.append(mean)
recall_successes.append(success)
fig = plt.figure(figsize=(16, 12))
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
ax1.plot(tau_p_vector, weights, '*-', markersize=15, label='weights')
ax1.plot(tau_p_vector, weights_inhibition, '*-', markersize=15, label='weights inh')
ax1.plot(tau_p_vector, weights_free_attactor, '*-', markersize=15, label='free_attractor')
ax1.plot(tau_p_vector, weights_ampa, '*-', markersize=15, label='weights ampa')
ax2.plot(tau_p_vector, recall_successes, '*-', markersize=15, label='recall')
ax1.set_xscale('log')
ax1.set_xlabel('tau_p')
ax1.legend()
ax2.set_xscale('log')
ax2.set_xlabel('tau_p')
ax2.legend();
fig = plt.figure(figsize=(16, 12))
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
ax1.plot(tau_p_vector, exc_inh_ratio, '*-', markersize=15, label='exc inh ratio')
ax1.plot(tau_p_vector, exc_inh_ratio_ampa, '*-', markersize=15, label='exc inh ratio ampa')
ax2.plot(tau_p_vector, recall_successes, '*-', markersize=15, label='recall')
ax1.set_xscale('log')
ax1.set_xlabel('tau_p')
ax1.legend()
ax2.set_xscale('log')
ax2.set_xlabel('tau_p')
ax2.legend();
```
## Asymmetry in values across sequences
```
tau_p_vector = np.logspace(1.0, 4.0, num=20)
connectivities_1_list = []
connectivities_2_list = []
connectivities_3_list = []
connectivities_4_list = []
connectivities_5_list = []
connectivities_6_list = []
# Patterns parameters
hypercolumns = 4
minicolumns = 35
# Manager properties
dt = 0.001
T_recalling = 5.0
values_to_save = ['o']
# Protocol
training_time = 0.1
inter_sequence_interval = 2.0
inter_pulse_interval = 0.0
epochs = 3
tau_z_pre = 0.150
sigma = 0
tau_p = 1000.0
for tau_p in tau_p_vector:
print('tau p', tau_p)
# Build the network
nn = BCPNNFast(hypercolumns, minicolumns, tau_z_pre=tau_z_pre, sigma=sigma, tau_p=tau_p)
# Build the manager
manager = NetworkManager(nn=nn, dt=dt, values_to_save=values_to_save)
# Build a protocol
protocol = Protocol()
sequences = [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19],
[20, 21, 22, 23, 24], [25, 26, 27, 28, 29]]
protocol.cross_protocol(sequences, training_time=training_time,
inter_sequence_interval=inter_sequence_interval, epochs=epochs)
# Train
manager.run_network_protocol(protocol=protocol, verbose=False)
from_pattern = 3
to_pattern = 4
connectivity_seq_1 = calculate_total_connections(manager, from_pattern, to_pattern, ampa=False, normalize=True)
from_pattern = 8
to_pattern = 9
connectivity_seq_2 = calculate_total_connections(manager, from_pattern, to_pattern, ampa=False, normalize=True)
from_pattern = 13
to_pattern = 14
connectivity_seq_3 = calculate_total_connections(manager, from_pattern, to_pattern, ampa=False, normalize=True)
from_pattern = 18
to_pattern = 19
connectivity_seq_4 = calculate_total_connections(manager, from_pattern, to_pattern, ampa=False, normalize=True)
from_pattern = 23
to_pattern = 24
connectivity_seq_5 = calculate_total_connections(manager, from_pattern, to_pattern, ampa=False, normalize=True)
from_pattern = 28
to_pattern = 29
connectivity_seq_6 = calculate_total_connections(manager, from_pattern, to_pattern, ampa=False, normalize=True)
connectivities_1_list.append(connectivity_seq_1)
connectivities_2_list.append(connectivity_seq_2)
connectivities_3_list.append(connectivity_seq_3)
connectivities_4_list.append(connectivity_seq_4)
connectivities_5_list.append(connectivity_seq_5)
connectivities_6_list.append(connectivity_seq_6)
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.plot(tau_p_vector, connectivities_1_list, '*-', markersize=15, label='1')
ax.plot(tau_p_vector, connectivities_2_list, '*-', markersize=15, label='2')
ax.plot(tau_p_vector, connectivities_3_list, '*-', markersize=15, label='3')
ax.plot(tau_p_vector, connectivities_4_list, '*-', markersize=15, label='4')
ax.plot(tau_p_vector, connectivities_5_list, '*-', markersize=15, label='5')
ax.plot(tau_p_vector, connectivities_6_list, '*-', markersize=15, label='6')
ax.set_xscale('log')
ax.set_xlabel('tau_p')
ax.set_ylabel('Connectivities')
ax.legend();
```
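`calculate_total_connections` used above summarizes how strongly one pattern drives another. A hypothetical sketch of such a summary for a raw weight matrix (the unit layout, with unit index = hypercolumn * minicolumns + pattern, and the normalization are assumptions; the real helper may differ):

```python
import numpy as np

def pattern_connection(w, from_pattern, to_pattern, hypercolumns, minicolumns, normalize=True):
    """Mean weight from the minicolumns coding `from_pattern` to those coding
    `to_pattern`, assuming unit index = hypercolumn * minicolumns + pattern."""
    from_units = [h * minicolumns + from_pattern for h in range(hypercolumns)]
    to_units = [h * minicolumns + to_pattern for h in range(hypercolumns)]
    total = w[np.ix_(to_units, from_units)].sum()
    if normalize:
        total /= len(from_units) * len(to_units)
    return total

w = np.zeros((8, 8))
w[np.ix_([1, 5], [0, 4])] = 2.0  # hypothetical links: pattern 0 -> pattern 1
coupling = pattern_connection(w, 0, 1, hypercolumns=2, minicolumns=4)
```

A normalized value makes the coupling comparable across networks with different numbers of hypercolumns.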
## Do previous sequences stick?
```
# Patterns parameters
hypercolumns = 4
minicolumns = 40
n_patterns = 10
# Manager properties
dt = 0.001
T_recall = 5.0
T_cue = 0.100
n = 1
values_to_save = ['o']
# Protocol
training_time = 0.1
inter_sequence_interval = 1.0
inter_pulse_interval = 0.0
epochs = 3
sigma = 0
tau_z_pre = 0.200
tau_p = 100.0
# Sequence structure
overlap = 2
number_of_sequences = 5
half_width = 2
# Build the network
nn = BCPNNFast(hypercolumns, minicolumns, tau_z_pre=tau_z_pre, sigma=sigma, tau_p=tau_p)
# Build the manager
manager = NetworkManager(nn=nn, dt=dt, values_to_save=values_to_save)
# Build chain protocol
chain_protocol = Protocol()
units_to_overload = [i for i in range(overlap)]
sequences = chain_protocol.create_overload_chain(number_of_sequences, half_width, units_to_overload)
chain_protocol.cross_protocol(sequences, training_time=training_time,
inter_sequence_interval=inter_sequence_interval, epochs=epochs)
# Run the manager
manager.run_network_protocol(protocol=chain_protocol, verbose=True)
print(sequences)
nn.g_w = 15.0
nn.g_w_ampa = 1.0
nn.tau_z_pre = 0.050
nn.tau_a = 2.7
successes = calculate_recall_success_sequences(manager, T_recall=T_recall, T_cue=T_cue, n=n,
sequences=sequences)
successes
plot_weight_matrix(manager.nn)
ampa = False
from_pattern = 1
to_pattern = 4
connectivity_seq_1 = calculate_total_connections(manager, from_pattern, to_pattern, ampa=ampa, normalize=True)
from_pattern = 1
to_pattern = 8
connectivity_seq_2 = calculate_total_connections(manager, from_pattern, to_pattern, ampa=ampa, normalize=True)
from_pattern = 1
to_pattern = 12
connectivity_seq_3 = calculate_total_connections(manager, from_pattern, to_pattern, ampa=ampa, normalize=True)
from_pattern = 1
to_pattern = 16
connectivity_seq_4 = calculate_total_connections(manager, from_pattern, to_pattern, ampa=ampa, normalize=True)
from_pattern = 1
to_pattern = 20
connectivity_seq_5 = calculate_total_connections(manager, from_pattern, to_pattern, ampa=ampa, normalize=True)
print('connectivity 1', connectivity_seq_1)
print('connectivity 2', connectivity_seq_2)
print('connectivity 3', connectivity_seq_3)
print('connectivity 4', connectivity_seq_4)
print('connectivity 5', connectivity_seq_5)
from analysis_functions import calculate_timings
nn.g_w = 15.0
nn.g_w_ampa = 1.0
nn.tau_a = 2.7
nn.tau_z_pre = 0.500
print(nn.get_parameters())
T_recall = 5.0
T_cue = 0.100
n = 1
sequence = 0
patterns_indexes = sequences[sequence]
success_1 = calculate_recall_success(manager, T_recall=T_recall, I_cue=patterns_indexes[0],
T_cue=T_cue, n=n, patterns_indexes=patterns_indexes)
timings = calculate_timings(manager, remove=0.010)
print('success', success_1)
plot_winning_pattern(manager)
print(patterns_indexes)
print(timings)
```
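The overload chain built above consists of several sequences that all pass through the same few overloaded units. A hypothetical reconstruction of that construction (the real `Protocol.create_overload_chain` may order or index units differently):

```python
def overload_chain(number_of_sequences, half_width, units_to_overload):
    """Build sequences of length 2 * half_width + len(units_to_overload)
    that all share the overloaded units in the middle."""
    sequences = []
    next_unit = max(units_to_overload) + 1
    for _ in range(number_of_sequences):
        left = list(range(next_unit, next_unit + half_width))
        right = list(range(next_unit + half_width, next_unit + 2 * half_width))
        sequences.append(left + list(units_to_overload) + right)
        next_unit += 2 * half_width
    return sequences

demo_sequences = overload_chain(2, 2, [0, 1])  # two sequences sharing units 0 and 1
```

Because every sequence funnels through the shared units, recall has to disambiguate which branch to follow after passing through them, which is what this section probes.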
#### Git machinery
```
if run_old_version:
subprocess.call(['git', 'checkout', 'master'])
```
# Navigation
---
In this notebook, you will learn how to use the Unity ML-Agents environment for the first project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893).
### 1. Start the Environment
We begin by importing some necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
```
from unityagents import UnityEnvironment
import numpy as np
```
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.
- **Mac**: `"path/to/Banana.app"`
- **Windows** (x86): `"path/to/Banana_Windows_x86/Banana.exe"`
- **Windows** (x86_64): `"path/to/Banana_Windows_x86_64/Banana.exe"`
- **Linux** (x86): `"path/to/Banana_Linux/Banana.x86"`
- **Linux** (x86_64): `"path/to/Banana_Linux/Banana.x86_64"`
- **Linux** (x86, headless): `"path/to/Banana_Linux_NoVis/Banana.x86"`
- **Linux** (x86_64, headless): `"path/to/Banana_Linux_NoVis/Banana.x86_64"`
For instance, if you are using a Mac, then you downloaded `Banana.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:
```
env = UnityEnvironment(file_name="Banana.app")
```
```
env = UnityEnvironment(file_name="../banana_env/Banana.x86_64")
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```
### 2. Examine the State and Action Spaces
The simulation contains a single agent that navigates a large environment. At each time step, it has four actions at its disposal:
- `0` - walk forward
- `1` - walk backward
- `2` - turn left
- `3` - turn right
The state space has `37` dimensions and contains the agent's velocity, along with ray-based perception of objects around the agent's forward direction. A reward of `+1` is provided for collecting a yellow banana, and a reward of `-1` is provided for collecting a blue banana.
Run the code cell below to print some information about the environment.
```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents in the environment
print('Number of agents:', len(env_info.agents))
# number of actions
action_size = brain.vector_action_space_size
print('Number of actions:', action_size)
# examine the state space
state = env_info.vector_observations[0]
print('States look like:', state)
state_size = len(state)
print('States have length:', state_size)
```
### 3. Take Random Actions in the Environment
In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.
Once this cell is executed, you will watch the performance of an agent that selects an action uniformly at random at each time step. A window should pop up that allows you to observe the agent as it moves through the environment.
Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
```
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
state = env_info.vector_observations[0] # get the current state
score = 0 # initialize the score
while True:
action = np.random.randint(action_size) # select an action
env_info = env.step(action)[brain_name] # send the action to the environment
next_state = env_info.vector_observations[0] # get the next state
reward = env_info.rewards[0] # get the reward
done = env_info.local_done[0] # see if episode has finished
score += reward # update the score
state = next_state # roll over the state to next time step
if done: # exit loop if episode finished
break
print("Score: {}".format(score))
```
When finished, you can close the environment.
```
env.close()
```
### 4. It's Your Turn!
Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:
```python
env_info = env.reset(train_mode=True)[brain_name]
```
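When you move from the random policy to a learned one, you will typically anneal exploration over episodes. A common sketch is an exponentially decaying epsilon for epsilon-greedy action selection (the constants here are conventional defaults, not values mandated by the project):

```python
def epsilon_schedule(episode, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
    """Exponentially decay epsilon from eps_start toward a floor of eps_end."""
    return max(eps_end, eps_start * eps_decay ** episode)

eps = epsilon_schedule(100)  # exploration rate after 100 episodes
```

Inside the training loop you would then take a random action with probability `eps` and the greedy (highest-value) action otherwise.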
## Session 4 : Feature engineering - Home Credit Risk
##### Student: Katayoun B.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import glob
```
### Understanding the tables and loading all
```
def load_data(path):
data_path = path
df_files = glob.glob(data_path+"*.csv")
df_files = sorted(df_files, key=str.lower)
return df_files
df_files = load_data('/Users/katy/desktop/ml/homework3/home_credit_risk/')
df_files
csvs = len(df_files)
csvs
# create a DataFrame for each csv (df_files is sorted alphabetically,
# so the index of each file is stable)
print(df_files)
main_df = pd.read_csv(df_files[0])
bureau_df = pd.read_csv(df_files[1])
bureau_balance_df = pd.read_csv(df_files[2])
credit_balance_df = pd.read_csv(df_files[3])
installments_df = pd.read_csv(df_files[4])
pos_cash_df = pd.read_csv(df_files[5])
prev_df = pd.read_csv(df_files[6])
print(main_df.shape)
main_df.head()
#print(main_df.columns.values)
```
### Assignment 1: complete feature analysis for main table
1. Create a new feature from the ratio of 'AMT_CREDIT' to 'AMT_INCOME_TOTAL'.
2. Call this new feature 'HIGH_INCOME': one (1) when the ratio is at or above the cutoff, zero (0) below it. The mean ratio is about 3.9; the code below uses 3 as a round cutoff.
```
main_df[['AMT_CREDIT','AMT_INCOME_TOTAL']]
main_df['HIGH_INCOME'] = main_df['AMT_CREDIT'] / main_df['AMT_INCOME_TOTAL']
main_df['HIGH_INCOME'].mean()
main_df['HIGH_INCOME'] = main_df['HIGH_INCOME'].apply(lambda x : 0 if x < 3 else 1)
main_df[['SK_ID_CURR', 'AMT_CREDIT', 'AMT_INCOME_TOTAL', 'HIGH_INCOME']]
```
1. I create a new feature called 'RISKY_NEW_JOB_FLAG'. I assume that if they switched their phone number recently, they also changed jobs recently.
2. If the number of days since the last phone change is bigger than 180 days (6 months, the maximum probation period), ignore it; otherwise flag the applicant as risky with a new job.
3. One (1) for risky, zero (0) for not risky.
```
main_df['DAYS_LAST_PHONE_CHANGE']
main_df['RISKY_NEW_JOB_FLAG'] = main_df['DAYS_LAST_PHONE_CHANGE'].abs()
main_df['RISKY_NEW_JOB_FLAG'] = main_df['RISKY_NEW_JOB_FLAG'].apply(lambda x : 1 if x < 180 else 0)  # recent change (< 180 days) is risky
main_df[['SK_ID_CURR', 'RISKY_NEW_JOB_FLAG']]
# Then we might drop that col, just an idea
```
Create a new feature called 'IF_EMPLOYED': zero (0) for no longer employed, one (1) for employed.
```
main_df['IF_EMPLOYED'] = main_df['DAYS_EMPLOYED'].apply(lambda x : 0 if x > 0 else 1)
main_df[['SK_ID_CURR', 'IF_EMPLOYED']]
# check that we flagged them correctly
main_df.loc[main_df.IF_EMPLOYED == 0]
```
1. Create a new feature called 'DAYS_EMPLOYED_PCT' to capture the amount of work experience (senior or junior).
2. Calculate the ratio of 'DAYS_EMPLOYED' to 'DAYS_BIRTH'.
3. If this number is bigger than 0.2, I assume that person is a senior or at least has enough work experience.
```
main_df['DAYS_EMPLOYED_PCT'] = main_df['DAYS_EMPLOYED'] / main_df['DAYS_BIRTH']
main_df['DAYS_EMPLOYED_PCT'] = main_df['DAYS_EMPLOYED_PCT'].abs()
main_df[['SK_ID_CURR', 'DAYS_EMPLOYED_PCT']]
# threshold the raw ratio directly; rounding first would collapse the
# values to 0/1 before the 0.2 cutoff could apply
main_df['DAYS_EMPLOYED_PCT'] = main_df['DAYS_EMPLOYED_PCT'].apply(lambda x : 0 if x < 0.2 else 1)
main_df[['SK_ID_CURR', 'DAYS_EMPLOYED_PCT']]
```
### Assignment 2: team up to expand more features with crazy brain storming
# bureau_df
```
print(bureau_df.shape)
bureau_df.head()
bureau_df.loc[bureau_df.CREDIT_ACTIVE=='Closed']
bureau_df.groupby('SK_ID_CURR')['SK_ID_BUREAU'].size()
print(bureau_df.columns.values)
```
One client can have several credits reported to the credit bureau
```
agg_df = pd.DataFrame(bureau_df.groupby('SK_ID_CURR')['SK_ID_BUREAU'].size()).reset_index()
agg_df.columns = ['SK_ID_CURR','BU_count']
agg_df.sort_values('BU_count',inplace=True,ascending=False)
agg_df.head()
bureau_df.loc[bureau_df.SK_ID_CURR==120860]
```
1. Let's look at CREDIT_DAY_OVERDUE and AMT_CREDIT_MAX_OVERDUE
```
bureau_df['CREDIT_DAY_OVERDUE']
bureau_temp_df = bureau_df.loc[bureau_df.CREDIT_DAY_OVERDUE > 40].copy()  # .copy() avoids SettingWithCopyWarning
bureau_temp_df.sort_values('AMT_CREDIT_MAX_OVERDUE', inplace=True, ascending=False)
bureau_temp_df[['SK_ID_CURR','SK_ID_BUREAU','CREDIT_DAY_OVERDUE','AMT_CREDIT_MAX_OVERDUE']]
```
supposed credit duration:
```
# supposed credit duration:
bureau_df['SUPPOSED_DURATION_CREDIT'] = bureau_df['DAYS_CREDIT_ENDDATE'] - bureau_df['DAYS_CREDIT']
bureau_df['SUPPOSED_DURATION_CREDIT']
```
actual credit duration:
```
bureau_df['ACTUAL_DURATION_CREDIT'] = bureau_df['DAYS_ENDDATE_FACT'] - bureau_df['DAYS_CREDIT']
bureau_df['ACTUAL_DURATION_CREDIT']
```
Difference between the supposed and the actual credit duration:
```
bureau_df['DIFF_DURATION_CREDIT'] = bureau_df['ACTUAL_DURATION_CREDIT'] - bureau_df['SUPPOSED_DURATION_CREDIT']
bureau_df['DIFF_DURATION_CREDIT']
```
### Assignment 3: team up and create features for credit_balance
# bureau_balance
Status of the Credit Bureau loan during the month:
- C means closed
- X means status unknown
- 0 means no DPD (days past due)
- 1 means maximal DPD during the month was between 1 and 30
- 2 means DPD 31-60, ..., 5 means DPD 120+ or sold or written off
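That encoding can be turned into an ordinal severity score before aggregating per loan. A sketch of one possible mapping (treating 'C' and 'X' as severity 0 is an assumption; they could instead be treated as missing):

```python
import pandas as pd

STATUS_SEVERITY = {'C': 0, 'X': 0, '0': 0, '1': 1, '2': 2, '3': 3, '4': 4, '5': 5}

def status_severity(status):
    """Map bureau_balance STATUS codes (strings) to an ordinal DPD severity."""
    return status.map(STATUS_SEVERITY)

s = pd.Series(['C', 'X', '0', '1', '5'])
severity = status_severity(s)
```

Once numeric, the severity can be aggregated per SK_ID_BUREAU with max or mean.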
```
bureau_balance_df.head()
```
Let's create a new feature called BB_NO_DPD, true when 'STATUS' == '0' (no days past due)
```
bureau_balance_df['BB_NO_DPD'] = bureau_balance_df['STATUS'] == '0'  # STATUS is a string column, so compare with '0'
bureau_balance_df[['SK_ID_BUREAU', 'BB_NO_DPD']]
```
# credit_balance
```
credit_balance_df.head()
credit_balance_df.columns
```
Note: pay extra attention to the column "MONTHS_BALANCE" per user, because it indicates how far each credit-and-balance record is from the current application date. I suggest focusing on one user first and applying your idea to all users afterwards. This is time-series data.
```
# let us take ID 378907 as an example
temp_df = credit_balance_df.loc[credit_balance_df.SK_ID_CURR == 378907].copy()  # .copy() avoids SettingWithCopyWarning
temp_df.sort_values('MONTHS_BALANCE', inplace=True, ascending=False)
temp_df.head()
```
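Because the table is a monthly time series per client, per-user aggregates are the natural features to merge back into the main table. A small self-contained sketch on a toy stand-in for `credit_balance_df` (the values are hypothetical):

```python
import pandas as pd

toy = pd.DataFrame({
    'SK_ID_CURR':     [1, 1, 1, 2, 2],
    'MONTHS_BALANCE': [-3, -2, -1, -2, -1],
    'AMT_BALANCE':    [100.0, 150.0, 200.0, 50.0, 40.0],
})

# sort by month so 'last' picks the most recent balance for each client
per_user = (toy.sort_values('MONTHS_BALANCE')
               .groupby('SK_ID_CURR')['AMT_BALANCE']
               .agg(last_balance='last', mean_balance='mean'))
```

The resulting one-row-per-client frame can be joined onto `main_df` on SK_ID_CURR.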
### Assignment 4: team up and create features for installment history
# installment.csv
```
installments_df.loc[installments_df.AMT_INSTALMENT!=installments_df.AMT_PAYMENT]
installments_df[['SK_ID_PREV', 'AMT_PAYMENT']]
installments_df[['SK_ID_PREV', 'AMT_INSTALMENT']]
```
Let's see if the difference between AMT_INSTALMENT and AMT_PAYMENT gives us anything interesting
```
installments_df['AMT_INSTALMENT_DIFF'] = installments_df['AMT_INSTALMENT'] - installments_df['AMT_PAYMENT']
installments_df[['SK_ID_PREV', 'AMT_INSTALMENT_DIFF']]
```
Lets see the difference between:
1. DAYS_ENTRY_PAYMENT - When was the installments of previous credit paid actually (relative to application date of current loan)
2. DAYS_INSTALMENT- When the installment of previous credit was supposed to be paid (relative to application date of current loan)
```
installments_df[ 'DAYS_INSTALMENT']
installments_df['DAYS_INSTALMENT_DIFF'] = (installments_df[ 'DAYS_ENTRY_PAYMENT'] - installments_df[ 'DAYS_INSTALMENT']).clip(lower=0)
installments_df['DAYS_INSTALMENT_DIFF']
installments_df[['SK_ID_CURR', 'SK_ID_PREV', 'DAYS_INSTALMENT_DIFF']]
```
DAYS_INSTALMENT indicates the day each installment was due; you need to sort it per client
```
temp_installments_df = installments_df.loc[installments_df.SK_ID_CURR == 378907].copy()  # .copy() avoids SettingWithCopyWarning
temp_installments_df.sort_values('DAYS_INSTALMENT', inplace=True, ascending=False)
```
Let's see it for SK_ID_CURR == 378907
```
temp_installments_df.head()
```
### Assignment 5: complete feature engineering by sorting by "MONTHS_BALANCE"
# pos_cash.csv
```
pos_cash_df.head()
```
1. SK_DPD refers to the dpd (days past due) for any amount (even a small one);
2. SK_DPD_DEF refers to the dpd (days past due) for relatively "significant" amounts. In other words, we should take SK_DPD_DEF as the ideal column for evaluating the customer's dpd behavior.
3. SK_DPD is often bigger than SK_DPD_DEF
******
4. New Feature: Create a new feature called "SK_DPD_DEF_risk" with values 0/1: one (1) for a high-risk applicant, zero (0) for a low-risk applicant. (High risk: applicants more than 30 days past due.)
```
pos_cash_df['SK_DPD_DEF_risk'] = (pos_cash_df['SK_DPD_DEF'] > 30).astype(int)
pos_cash_df[['SK_ID_PREV','SK_DPD_DEF_risk']]
```
1. We know that SK_DPD is often bigger than SK_DPD_DEF, so let's create a new feature called "SK_DPD_diff"
```
pos_cash_df['SK_DPD_diff'] = pos_cash_df['SK_DPD'] - pos_cash_df['SK_DPD_DEF']
pos_cash_df[['SK_ID_PREV','SK_DPD_diff']]
# inspect the range of MONTHS_BALANCE (printing the whole sorted column would flood the output)
print(pos_cash_df['MONTHS_BALANCE'].min(), pos_cash_df['MONTHS_BALANCE'].max())
#pos_cash_df[['SK_ID_PREV','MONTHS_BALANCE']]
```
### Assignment 6: complete feature engineering for this table
#### prev_df.csv
This file shows the previous activity for clients in the bank(not in other banks), you can independently process it and merge into the main table as new features
```
prev_df.head()
```
here are some features I made to help you
1. AMT_CREDIT - Final credit amount on the previous application. This differs from AMT_APPLICATION in a way that the AMT_APPLICATION is the amount for which the client initially applied for, but during our approval process he could have received different amount - AMT_CREDIT
2. AMT_APPLICATION For how much credit did client ask on the previous application
3. Creating a new feature called 'RATE_AMT_CREDIT' = how much the client asked for / final amount received; if the number is bigger than 1, the client is preferable.
```
prev_df['RATE_AMT_CREDIT'] = prev_df['AMT_APPLICATION']/prev_df['AMT_CREDIT']
prev_df[['SK_ID_PREV', 'RATE_AMT_CREDIT']]
```
1. AMT_CREDIT - Final credit amount on the previous application. This differs from AMT_APPLICATION in a way that the AMT_APPLICATION is the amount for which the client initially applied for, but during our approval process he could have received different amount - AMT_CREDIT
2. AMT_ANNUITY- Annuity (a fixed sum of money paid to someone each year) of previous application
3. Creating a new feature called 'RATE_ANN_CREDIT' = Annuity of previous application / final amount received
```
prev_df['RATE_ANN_CREDIT'] = prev_df['AMT_ANNUITY']/prev_df['AMT_CREDIT']
prev_df[['SK_ID_PREV', 'RATE_ANN_CREDIT']]
```
1. 'AMT_DOWN_PAYMENT' - Down payment on the previous application
2. AMT_CREDIT - Final credit amount on the previous application. This differs from AMT_APPLICATION in a way that the AMT_APPLICATION is the amount for which the client initially applied for, but during our approval process he could have received different amount - AMT_CREDIT
3. Creating a new feature called 'RATE_DOWNPAY_CREDIT' = Down payment on the previous application / Final credit amount on the previous application. 'RATE_DOWNPAY_CREDIT' is one (1) if the ratio is less than or equal to 0, otherwise zero (0).
```
prev_df['RATE_DOWNPAY_CREDIT'] = prev_df['AMT_DOWN_PAYMENT']/prev_df['AMT_CREDIT']
prev_df[['SK_ID_PREV', 'RATE_DOWNPAY_CREDIT']]
prev_df['RATE_DOWNPAY_CREDIT'] = prev_df['RATE_DOWNPAY_CREDIT'].apply(lambda x: 1 if x <= 0 else 0)
prev_df[['SK_ID_PREV', 'RATE_DOWNPAY_CREDIT']]
```
1. AMT_ANNUITY - Annuity (a fixed sum of money paid to someone each year) of previous application
2. AMT_APPLICATION - is the amount for which the client initially applied for, but during our approval process he could have received different amount - AMT_CREDIT
3. Creating a new feature called 'RATE_ANN_APP' = Annuity of previous application / amount the client initially applied for
```
prev_df['RATE_ANN_APP'] = prev_df['AMT_ANNUITY']/prev_df['AMT_APPLICATION']
prev_df[['SK_ID_PREV', 'RATE_ANN_APP']]
# In real scenario it would be useful to encode 'CODE_REJECT_REASON', but I have no way of encoding and understanding XAP or HC
prev_df[['SK_ID_PREV', 'CODE_REJECT_REASON']]
```
Was the previous application for CASH, POS, CAR, …
This might help to find out which clients own a car
```
prev_df[['SK_ID_PREV', 'NAME_PORTFOLIO']]
```
Through which channel we acquired the client on the previous application
This might be helpful for marketing
```
prev_df[['SK_ID_PREV', 'CHANNEL_TYPE']]
```
NAME_TYPE_SUITE - Who accompanied the client when applying for the previous application. We can create a new feature to find who needed a co-signer; we assume that clients who were accompanied needed one.
```
prev_df[['SK_ID_PREV', 'NAME_TYPE_SUITE']]
```
Create a CO_SIGNER flag - true for having a co-signer, false for not having one.
Note: this is an assumption; in a real case we would do more investigation.
```
prev_df['CO_SIGNER'] = prev_df['NAME_TYPE_SUITE'].apply(lambda x: False if x is np.nan else True)
prev_df[['SK_ID_PREV', 'CO_SIGNER']]
```
## Exercise 2
In the course you learned how to do classification using Fashion MNIST, a dataset containing items of clothing. There's another, similar dataset called MNIST which has items of handwriting -- the digits 0 through 9.
Write an MNIST classifier that trains to 99% accuracy or above, and does it without a fixed number of epochs -- i.e. you should stop training once you reach that level of accuracy.
Some notes:
1. It should succeed in less than 10 epochs, so it is okay to change epochs= to 10, but nothing larger
2. When it reaches 99% or greater it should print out the string "Reached 99% accuracy so cancelling training!"
3. If you add any additional variables, make sure you use the same names as the ones used in the class
I've started the code for you below -- how would you finish it?
```
import tensorflow as tf
from os import path, getcwd, chdir
# DO NOT CHANGE THE LINE BELOW. If you are developing in a local
# environment, then grab mnist.npz from the Coursera Jupyter Notebook
# and place it inside a local folder and edit the path to that location
path = f"{getcwd()}/../tmp2/mnist.npz"
# GRADED FUNCTION: train_mnist
def train_mnist():
# Please write your code only where you are indicated.
# please do not remove # model fitting inline comments.
# YOUR CODE SHOULD START HERE
class myCallBack(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('acc')>0.99):
print("\nReached 99% accuracy so cancelling training!")
self.model.stop_training = True
# YOUR CODE SHOULD END HERE
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data(path=path)
# YOUR CODE SHOULD START HERE
x_train = x_train / 255.0
x_test = x_test / 255.0
# YOUR CODE SHOULD END HERE
callbacks = myCallBack()
model = tf.keras.models.Sequential([
# YOUR CODE SHOULD START HERE
tf.keras.layers.Flatten(input_shape=(28,28)),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
# YOUR CODE SHOULD END HERE
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# model fitting
history = model.fit(
# YOUR CODE SHOULD START HERE
x_train, y_train, epochs=10, callbacks=[callbacks]
# YOUR CODE SHOULD END HERE
)
# model fitting
return history.epoch, history.history['acc'][-1]
train_mnist()
# Now click the 'Submit Assignment' button above.
# Once that is complete, please run the following two cells to save your work and close the notebook
%%javascript
<!-- Save the notebook -->
IPython.notebook.save_checkpoint();
%%javascript
IPython.notebook.session.delete();
window.onbeforeunload = null
setTimeout(function() { window.close(); }, 1000);
```
# Fairness and Explainability with SageMaker Clarify - JSONLines Format
1. [Overview](#Overview)
1. [Prerequisites and Data](#Prerequisites-and-Data)
1. [Initialize SageMaker](#Initialize-SageMaker)
1. [Download data](#Download-data)
1. [Loading the data: Adult Dataset](#Loading-the-data:-Adult-Dataset)
1. [Data inspection](#Data-inspection)
    1. [Encode and Upload the Dataset](#Encode-and-Upload-the-Dataset)
1. [Train and Deploy Linear Learner Model](#Train-Linear-Learner-Model)
1. [Train Model](#Train-Model)
1. [Deploy Model to Endpoint](#Deploy-Model)
1. [Amazon SageMaker Clarify](#Amazon-SageMaker-Clarify)
1. [Detecting Bias](#Detecting-Bias)
1. [Writing BiasConfig](#Writing-BiasConfig)
1. [Pre-training Bias](#Pre-training-Bias)
1. [Post-training Bias](#Post-training-Bias)
1. [Viewing the Bias Report](#Viewing-the-Bias-Report)
1. [Explaining Predictions](#Explaining-Predictions)
1. [Viewing the Explainability Report](#Viewing-the-Explainability-Report)
1. [Clean Up](#Clean-Up)
## Overview
Amazon SageMaker Clarify helps improve your machine learning models by detecting potential bias and helping explain how these models make predictions. The fairness and explainability functionality provided by SageMaker Clarify takes a step towards enabling AWS customers to build trustworthy and understandable machine learning models. The product comes with the tools to help you with the following tasks.
* Measure biases that can occur during each stage of the ML lifecycle (data collection, model training and tuning, and monitoring of ML models deployed for inference).
* Generate model governance reports targeting risk and compliance teams and external regulators.
* Provide explanations of the data, models, and monitoring used to assess predictions.
This sample notebook walks you through:
1. Key terms and concepts needed to understand SageMaker Clarify
1. Measuring the pre-training bias of a dataset and post-training bias of a model
1. Explaining the importance of the various input features on the model's decision
1. Accessing the reports through SageMaker Studio if you have an instance set up.
In doing so, the notebook will first train a [SageMaker Linear Learner](https://docs.aws.amazon.com/sagemaker/latest/dg/linear-learner.html) model using the training dataset, then use SageMaker Clarify to analyze a testing dataset in [SageMaker JSONLines dense format](https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-inference.html#common-in-formats). SageMaker Clarify also supports analyzing CSV datasets, which is illustrated in [another notebook](https://github.com/aws/amazon-sagemaker-examples/blob/master/sagemaker_processing/fairness_and_explainability/fairness_and_explainability.ipynb).
## Prerequisites and Data
### Initialize SageMaker
```
from sagemaker import Session
session = Session()
bucket = session.default_bucket()
prefix = "sagemaker/DEMO-sagemaker-clarify-jsonlines"
region = session.boto_region_name
# Define IAM role
from sagemaker import get_execution_role
import pandas as pd
import numpy as np
import os
import boto3
from datetime import datetime
role = get_execution_role()
s3_client = boto3.client("s3")
```
### Download data
Data Source: [https://archive.ics.uci.edu/ml/machine-learning-databases/adult/](https://archive.ics.uci.edu/ml/machine-learning-databases/adult/)
Let's __download__ the data from the UCI repository$^{[2]}$ and save it locally as adult.data and adult.test.
$^{[2]}$Dua Dheeru, and Efi Karra Taniskidou. "[UCI Machine Learning Repository](http://archive.ics.uci.edu/ml)". Irvine, CA: University of California, School of Information and Computer Science (2017).
```
adult_columns = [
    "Age",
    "Workclass",
    "fnlwgt",
    "Education",
    "Education-Num",
    "Marital Status",
    "Occupation",
    "Relationship",
    "Ethnic group",
    "Sex",
    "Capital Gain",
    "Capital Loss",
    "Hours per week",
    "Country",
    "Target",
]

if not os.path.isfile("adult.data"):
    s3_client.download_file(
        "sagemaker-sample-files", "datasets/tabular/uci_adult/adult.data", "adult.data"
    )
    print("adult.data saved!")
else:
    print("adult.data already on disk.")

if not os.path.isfile("adult.test"):
    s3_client.download_file(
        "sagemaker-sample-files", "datasets/tabular/uci_adult/adult.test", "adult.test"
    )
    print("adult.test saved!")
else:
    print("adult.test already on disk.")
```
### Loading the data: Adult Dataset
From the UCI repository of machine learning datasets, this database contains 14 features concerning the demographic characteristics of 48,842 individuals (32,561 for training and 16,281 for testing); after rows with missing values are dropped, 45,222 remain. The task is to predict whether a person has a yearly income of more or less than $50,000.
Here are the features and their possible values:
1. **Age**: continuous.
1. **Workclass**: Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked.
1. **Fnlwgt**: continuous (the number of people the census takers believe that observation represents).
1. **Education**: Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool.
1. **Education-num**: continuous.
1. **Marital-status**: Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse.
1. **Occupation**: Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces.
1. **Relationship**: Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried.
1. **Ethnic group**: White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other, Black.
1. **Sex**: Female, Male.
* **Note**: this data is extracted from the 1994 Census and enforces a binary option on Sex
1. **Capital-gain**: continuous.
1. **Capital-loss**: continuous.
1. **Hours-per-week**: continuous.
1. **Native-country**: United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands.
Next, we specify our binary prediction task:
15. **Target**: <=$50,000, >$50,000.
```
training_data = pd.read_csv(
    "adult.data", names=adult_columns, sep=r"\s*,\s*", engine="python", na_values="?"
).dropna()

testing_data = pd.read_csv(
    "adult.test", names=adult_columns, sep=r"\s*,\s*", engine="python", na_values="?", skiprows=1
).dropna()

training_data.head()
```
### Data inspection
Plotting histograms for the distribution of the different features is a good way to visualize the data. Let's plot a few of the features that can be considered _sensitive_.
Let's take a look specifically at the Sex feature of a census respondent. In the first plot we see that there are fewer Female respondents as a whole but especially in the positive outcomes, where they form ~$\frac{1}{7}$th of respondents.
```
training_data["Sex"].value_counts().sort_values().plot(kind="bar", title="Counts of Sex", rot=0)

training_data["Sex"].where(training_data["Target"] == ">50K").value_counts().sort_values().plot(
    kind="bar", title="Counts of Sex earning >$50K", rot=0
)
```
### Encode and Upload the Dataset
Here we encode the training and test data. Encoding input data is not necessary for SageMaker Clarify, but is necessary for the model.
```
from sklearn import preprocessing

def number_encode_features(df):
    result = df.copy()
    encoders = {}
    for column in result.columns:
        # np.object is deprecated; plain `object` matches string columns
        if result.dtypes[column] == object:
            encoders[column] = preprocessing.LabelEncoder()
            result[column] = encoders[column].fit_transform(result[column].fillna("None"))
    return result, encoders

training_data, _ = number_encode_features(training_data)
testing_data, _ = number_encode_features(testing_data)
```
Then save the testing dataset to a JSONLines file. The file conforms to [SageMaker JSONLines dense format](https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-inference.html#common-in-formats), with an additional field to hold the ground truth label.
```
import json

def dump_to_jsonlines_file(df, filename):
    with open(filename, "w") as f:
        for _, row in df.iterrows():
            sample = {"features": row[0:-1].tolist(), "label": int(row[-1])}
            print(json.dumps(sample), file=f)

dump_to_jsonlines_file(testing_data, "test_data.jsonl")
```
A quick note about our encoding: the "Female" Sex value has been encoded as 0 and "Male" as 1.
```
!head -n 5 test_data.jsonl
testing_data.head()
```
Lastly, let's upload the data to S3
```
from sagemaker.s3 import S3Uploader
test_data_uri = S3Uploader.upload("test_data.jsonl", "s3://{}/{}".format(bucket, prefix))
```
### Train Linear Learner Model
#### Train Model
Since our focus is on understanding how to use SageMaker Clarify, we keep it simple by using a standard Linear Learner model.
```
from sagemaker.image_uris import retrieve
from sagemaker.amazon.linear_learner import LinearLearner

ll = LinearLearner(
    role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    predictor_type="binary_classifier",
    sagemaker_session=session,
)
training_target = training_data["Target"].to_numpy().astype(np.float32)
training_features = training_data.drop(["Target"], axis=1).to_numpy().astype(np.float32)
ll.fit(ll.record_set(training_features, training_target), logs=False)
```
#### Deploy Model
Here we create the SageMaker model.
```
model_name = "DEMO-clarify-ll-model-{}".format(datetime.now().strftime("%d-%m-%Y-%H-%M-%S"))
model = ll.create_model(name=model_name)
container_def = model.prepare_container_def()
session.create_model(model_name, role, container_def)
```
## Amazon SageMaker Clarify
Now that you have your model set up, let's say hello to SageMaker Clarify!
```
from sagemaker import clarify
clarify_processor = clarify.SageMakerClarifyProcessor(
    role=role, instance_count=1, instance_type="ml.m5.xlarge", sagemaker_session=session
)
```
### Detecting Bias
SageMaker Clarify helps you detect possible pre- and post-training biases using a variety of metrics.
#### Writing DataConfig and ModelConfig
A `DataConfig` object communicates some basic information about data I/O to SageMaker Clarify. We specify where to find the input dataset, where to store the output, the target column (`label`), the header names, and the dataset type.
Some special things to note about this configuration for the JSONLines dataset:
* The `features` and `label` arguments are **NOT** header strings. Instead, each is a [JSONPath string](https://jmespath.org/specification.html) that locates the features list or the label in the dataset. For example, for a sample like the one below, `features` should be 'data.features.values' and `label` should be 'data.label'.
```
{"data": {"features": {"values": [25, 2, 226802, 1, 7, 4, 6, 3, 2, 1, 0, 0, 40, 37]}, "label": 0}}
```
* SageMaker Clarify will load the JSONLines dataset into tabular representation for further analysis, and argument `headers` is the list of column names. The label header shall be the last one in the headers list, and the order of feature headers shall be the same as the order of features in a sample.
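To make the JSONPath idea concrete, here is a minimal, hypothetical dotted-path lookup (not part of the Clarify API) showing how strings like `'data.features.values'` and `'data.label'` locate values in the nested sample above:

```python
import json

def lookup(obj, dotted_path):
    """Follow a dotted key path (e.g. 'data.features.values') into nested dicts."""
    for key in dotted_path.split("."):
        obj = obj[key]
    return obj

line = '{"data": {"features": {"values": [25, 2, 226802]}, "label": 0}}'
sample = json.loads(line)
features = lookup(sample, "data.features.values")  # [25, 2, 226802]
label = lookup(sample, "data.label")               # 0
```

Clarify's real JSONPath support is richer than this sketch; for the flat dataset used in this notebook, the paths are simply `'features'` and `'label'`.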
```
bias_report_output_path = "s3://{}/{}/clarify-bias".format(bucket, prefix)
bias_data_config = clarify.DataConfig(
    s3_data_input_path=test_data_uri,
    s3_output_path=bias_report_output_path,
    features="features",
    label="label",
    headers=testing_data.columns.to_list(),
    dataset_type="application/jsonlines",
)
```
A `ModelConfig` object communicates information about your trained model. To avoid additional traffic to your production models, SageMaker Clarify sets up and tears down a dedicated endpoint when processing.
* `instance_type` and `instance_count` specify your preferred instance type and instance count used to run your model during SageMaker Clarify's processing. The testing dataset is small, so a single standard instance is good enough to run this example. If you have a large or complex dataset, you may want to use a more powerful instance type to speed things up, or add more instances to enable Spark parallelization.
* `accept_type` denotes the endpoint response payload format, and `content_type` denotes the payload format of request to the endpoint.
* `content_template` is used by SageMaker Clarify to compose the request payload if the content type is JSONLines. To be more specific, the placeholder `$features` will be replaced by the features list from samples. The request payload of a sample from the testing dataset happens to be similar to the sample itself, like `'{"features": [25, 2, 226802, 1, 7, 4, 6, 3, 2, 1, 0, 0, 40, 37]}'`, because both the dataset and the model input conform to [SageMaker JSONLines dense format](https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-inference.html#common-in-formats).
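Since `$features` is ordinary `string.Template` placeholder syntax, the substitution Clarify performs can be approximated as follows (a sketch of the observable behavior, not Clarify's actual implementation):

```python
import json
from string import Template

content_template = '{"features":$features}'
features = [25, 2, 226802, 1, 7, 4, 6, 3, 2, 1, 0, 0, 40, 37]

# Replace the $features placeholder with the JSON-encoded features list
payload = Template(content_template).substitute(features=json.dumps(features))
print(payload)  # {"features":[25, 2, 226802, 1, 7, 4, 6, 3, 2, 1, 0, 0, 40, 37]}
```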
```
model_config = clarify.ModelConfig(
    model_name=model_name,
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="application/jsonlines",
    content_type="application/jsonlines",
    content_template='{"features":$features}',
)
```
A `ModelPredictedLabelConfig` provides information on the format of your predictions. The argument `label` is a JSONPath string that locates the predicted label in the endpoint response. In this case, the response payload for a single sample request looks like `'{"predicted_label": 0, "score": 0.013525663875043}'`, so SageMaker Clarify can find the predicted label `0` via the JSONPath `'predicted_label'`. There is also a probability score in the response, so it is possible to use another combination of arguments to decide the predicted label by a custom threshold, for example `probability='score'` and `probability_threshold=0.8`.
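As a rough sketch of that thresholding alternative (the exact comparison convention inside Clarify is an assumption here), a custom threshold would turn the probability into a label like this:

```python
def label_from_score(response, probability="score", probability_threshold=0.8):
    """Derive a binary predicted label from a probability score in the response."""
    return 1 if response[probability] >= probability_threshold else 0

response = {"predicted_label": 0, "score": 0.013525663875043}
label_from_score(response)         # 0: the score is far below the 0.8 threshold
label_from_score({"score": 0.92})  # 1
```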
```
predictions_config = clarify.ModelPredictedLabelConfig(label="predicted_label")
```
If you are building your own model, then you may choose a different JSONLines format, as long as it has the key elements like label and features list, and request payload built using `content_template` is supported by the model (you can customize the template but the placeholder of features list must be `$features`). Also, `dataset_type`, `accept_type` and `content_type` don't have to be the same, for example, a use case may use CSV dataset and content type, but JSONLines accept type.
#### Writing BiasConfig
SageMaker Clarify also needs information on what the sensitive columns (`facets`) are, what the sensitive features (`facet_values_or_threshold`) may be, and what the desirable outcomes are (`label_values_or_threshold`).
SageMaker Clarify can handle both categorical and continuous data for `facet_values_or_threshold` and for `label_values_or_threshold`. In this case we are using categorical data.
We specify this information in the `BiasConfig` API. Here, the positive outcome is earning >$50,000, Sex is the sensitive category, and Female respondents are the sensitive group. `group_name` is used to form subgroups for the measurement of Conditional Demographic Disparity in Labels (CDDL) and Conditional Demographic Disparity in Predicted Labels (CDDPL) with regard to Simpson's paradox.
```
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1], facet_name="Sex", facet_values_or_threshold=[0], group_name="Age"
)
```
#### Pre-training Bias
Bias can be present in your data before any model training occurs. Inspecting your data for bias before training begins can help detect data collection gaps, inform your feature engineering, and help you understand what societal biases the data may reflect.
Computing pre-training bias metrics does not require a trained model.
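For intuition, one of the simplest pre-training metrics, the Difference in Positive Proportions in Labels (DPL), can be computed directly from (facet, label) pairs. This is an illustrative toy computation on made-up data, not Clarify's implementation:

```python
def dpl(rows, facet_value=0, positive_label=1):
    """DPL: positive-label rate of the advantaged group minus that of the
    disadvantaged group (the facet), computed on raw labels."""
    adv = [label for facet, label in rows if facet != facet_value]
    dis = [label for facet, label in rows if facet == facet_value]
    p_adv = sum(label == positive_label for label in adv) / len(adv)
    p_dis = sum(label == positive_label for label in dis) / len(dis)
    return p_adv - p_dis

# toy (Sex, Target) pairs with the encoding used above: Female=0, Male=1
rows = [(1, 1), (1, 1), (1, 0), (1, 0), (0, 1), (0, 0), (0, 0), (0, 0)]
dpl(rows)  # 0.5 - 0.25 = 0.25
```

A DPL of zero would mean both groups receive the positive label at the same rate in the raw data.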
#### Post-training Bias
Computing post-training bias metrics does require a trained model.
Unbiased training data (as determined by the concepts of fairness measured by the bias metrics) may still result in biased model predictions after training. Whether this occurs depends on several factors including hyperparameter choices.
You can run these options separately with `run_pre_training_bias()` and `run_post_training_bias()` or at the same time with `run_bias()` as shown below.
```
clarify_processor.run_bias(
    data_config=bias_data_config,
    bias_config=bias_config,
    model_config=model_config,
    model_predicted_label_config=predictions_config,
    pre_training_methods="all",
    post_training_methods="all",
)
```
#### Viewing the Bias Report
In Studio, you can view the results under the experiments tab.
<img src="./recordings/bias_report.gif">
Each bias metric has detailed explanations with examples that you can explore.
<img src="./recordings/bias_detail.gif">
You could also summarize the results in a handy table!
<img src="./recordings/bias_report_chart.gif">
If you're not a Studio user yet, you can access the bias report in PDF, HTML, and ipynb formats at the following S3 location:
```
bias_report_output_path
```
### Explaining Predictions
There are expanding business needs and legislative regulations that require explanations of _why_ a model made the decision it did. SageMaker Clarify uses SHAP to explain the contribution that each input feature makes to the final decision.
The Kernel SHAP algorithm requires a baseline (also known as a background dataset). The baseline dataset type must match the `dataset_type` of `DataConfig`, and baseline samples must include only features. `baseline` should be either an S3 URI pointing to a baseline dataset file or an in-place list of samples. Here we choose the latter and put the first sample of the test dataset into the list.
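Kernel SHAP estimates per-feature Shapley values by querying the model on perturbed samples. For a purely linear model like the one trained above, the attributions it converges to have a closed form, which gives a feel for what the explainability report contains (an illustrative sketch under the linearity assumption, with hypothetical weights, not what Clarify actually runs):

```python
def linear_attributions(weights, x, baseline):
    """For f(x) = w·x + b, the Shapley value of feature i relative to a
    single baseline sample is exactly w_i * (x_i - baseline_i)."""
    return [w * (xi - bi) for w, xi, bi in zip(weights, x, baseline)]

w = [0.3, -0.5, 0.1]  # hypothetical model weights
x = [2.0, 1.0, 4.0]   # sample to explain
b = [1.0, 1.0, 1.0]   # baseline sample
phi = linear_attributions(w, x, b)
# the attributions sum to f(x) - f(baseline), here 0.6
```

The "efficiency" property shown in the final comment (attributions summing to the difference in model output) holds for SHAP in general, not just for linear models.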
```
# pick up the first line, load as JSON, then exclude the label (i.e., only keep the features)
with open("test_data.jsonl") as f:
    baseline_sample = json.loads(f.readline())
    del baseline_sample["label"]
baseline_sample

# similarly, exclude the label header from the headers list
headers = testing_data.columns.to_list()
headers.remove("Target")
print(headers)

shap_config = clarify.SHAPConfig(
    baseline=[baseline_sample], num_samples=15, agg_method="mean_abs", save_local_shap_values=False
)
explainability_output_path = "s3://{}/{}/clarify-explainability".format(bucket, prefix)
explainability_data_config = clarify.DataConfig(
    s3_data_input_path=test_data_uri,
    s3_output_path=explainability_output_path,
    features="features",
    headers=headers,
    dataset_type="application/jsonlines",
)
```
Run the explainability job. Note that the Kernel SHAP algorithm requires probability predictions, so the JSONPath `"score"` is used to extract the probability from the endpoint response.
```
clarify_processor.run_explainability(
    data_config=explainability_data_config,
    model_config=model_config,
    explainability_config=shap_config,
    model_scores="score",
)
```
#### Viewing the Explainability Report
As with the bias report, you can view the explainability report in Studio under the experiments tab
<img src="./recordings/explainability_detail.gif">
The Model Insights tab contains direct links to the report and model insights.
If you're not a Studio user yet, as with the Bias Report, you can access this report at the following S3 bucket.
```
explainability_output_path
```
### Clean Up
Finally, don't forget to clean up the resources we set up and used for this demo!
```
session.delete_model(model_name)
```
<a href="https://colab.research.google.com/github/victorog17/Soulcode_Projeto_Python/blob/main/Projeto_Python_Oficina_Mecanica_V2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
print('Hello World')
print('Essa Fera Bicho')
```
1) When the program runs, two options should appear:
A - to access the program, or
F* - to exit the program (FIX)
OK
2) If the user types A, they should be taken to another part of the program offering at least 4 features, for example:
add a product, add a service, check out, etc.
OK
3) Each product or service selected should increase the amount due on the bill, just like at a conventional supermarket checkout, keeping in mind that the customer may buy more than one unit of the same product/service (e.g. 2 cartons of milk, 2 tire changes).
OK
4) When the product/service selection is concluded, the program must show the customer the total amount due and ask them to choose a payment method. A cash payment option that can generate change is mandatory; when change is due, the program must report the change amount and how many bills will be handed to the customer, always using the fewest bills possible.
5) The available bills are: 50, 20, 10, 5, 2, and 1 real. Cent values may be discarded.
OK
6) During checkout there must be an option for the customer to cancel the purchase; if they choose it, a confirmation of the cancellation must be requested (listing the products/services the customer is giving up).
OK
7) After a purchase is completed, the program must return to the initial access/exit screen. On exit, it must display a message thanking the customer for the visit and reporting what was bought and the amount spent at the shop.
OK
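Requirements 4 and 5 amount to greedy change-making over the bill denominations 50, 20, 10, 5, 2, and 1 (greedy is optimal for this canonical denomination set). A minimal sketch of just that piece, separate from the full program below:

```python
def make_change(amount, bills=(50, 20, 10, 5, 2, 1)):
    """Return {bill: count} using the fewest bills for an integer amount."""
    counts = {}
    for bill in bills:
        count, amount = divmod(amount, bill)  # how many of this bill fit
        if count:
            counts[bill] = count
    return counts

make_change(88)  # {50: 1, 20: 1, 10: 1, 5: 1, 2: 1, 1: 1}
```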
```
# Product list
lista_prod = [['Pneu(s)', 'Calota(s)', 'Palheta(s)', 'Protetor(es) de Volante', 'Cheirinho(s) de Carro', 'Óleo de Motor', 'Bateria(s)'],
              [339.00, 15.00, 55.00, 30.00, 15.00, 27.00, 270.00]]

# Service list
lista_serv = [['Troca de Óleo', 'Alinhamento', 'Revisão Geral', 'Troca de Lampada', 'Troca de Correia', 'Troca de Pastilha de Freio'],
              [200.00, 60.00, 300.00, 40.00, 220.00, 150.00]]
# FEATURES
import time

def limparcar():  # clear-cart function
    somaFatura = 0
    for y in range(len(carrinho[0])):  # show the cart
        print(f'[{y+1}] - {carrinho[0][y]} --> R${carrinho[1][y]} Quantidade: {carrinho[2][y]}')
        somaFatura += ((carrinho[1][y]) * (carrinho[2][y]))
    print(f"\nValor total R${somaFatura:.2f}")  # total amount
    print("[S] para sim\n[N] para não\n")  # confirm the action
    certeza = input(f'Tem certeza que deseja remover TUDO de seu carrinho? ').upper()[0]
    print('='*50)
    while (certeza != 'S') and (certeza != 'N'):
        certeza = input("Opção inválida! Digite [S] para sim [N] para não:\n").upper()[0]
        print('='*50)
    if certeza == 'S':  # confirmed: empty the cart
        carrinho[0].clear()
        carrinho[1].clear()
        carrinho[2].clear()
        print("Limpando seu carrinho ...")
        print('='*50)
        time.sleep(3)
    else:  # declined: keep the cart
        print("Seus produtos foram mantidos no carrinho!")
        print('='*50)
        time.sleep(3)
def adcProduto():  # add-product function
    while True:
        print("Opções de produto:\n")
        for i in range(len(lista_prod[0])):  # list the available products
            print(f'[{i+1}] - {lista_prod[0][i]} --> R${lista_prod[1][i]}')
        print("\nPara voltar ao menu principal basta digitar [99] ")
        print('='*50)
        # cart
        digite = int(input('Adicione um produto ao seu carrinho: '))
        print('='*50)
        if digite >= 1 and digite <= (len(lista_prod[0])):  # product chosen
            carrinho[0].append(lista_prod[0][digite-1])
            carrinho[1].append(lista_prod[1][digite-1])
            quant = int(input(f'Qual seria a quantidade de "{lista_prod[0][digite-1]}" (MÁX. 10): '))  # product quantity
            print('='*50)
            while quant <= 0 or quant > 10:
                quant = int(input('Valor inválido! Digite novamente a quantidade: '))
                print('='*50)
            print(f'Adicionando "{lista_prod[0][digite-1]}" ao seu carrinho ...')
            print('='*50)
            time.sleep(3)
            carrinho[2].append(quant)
        elif digite == 99:  # leave the function
            print('Saindo ...')
            print('='*50)
            time.sleep(3)
            break
        else:  # invalid option
            print('Este número não está entre as opções!!')
            print('='*50)
            time.sleep(3)
def adcServico():  # add-service function
    while True:
        print("Opções de serviços:\n")
        for x in range(len(lista_serv[0])):  # list the available services
            print(f'[{x+1}] - {lista_serv[0][x]} --> R${lista_serv[1][x]}')
        print("\nPara voltar ao menu principal basta digitar [99] ")
        print('='*50)
        # cart
        digite = int(input('Adicione um serviço ao seu carrinho: '))
        print('='*50)
        if digite >= 1 and digite <= (len(lista_serv[0])):  # service chosen
            carrinho[0].append(lista_serv[0][digite-1])
            carrinho[1].append(lista_serv[1][digite-1])
            print(f'Adicionando "{lista_serv[0][digite-1]}" ao seu carrinho ...')
            print('='*50)
            time.sleep(3)
            carrinho[2].append(1)
        elif digite == 99:  # leave the function
            print('Saindo ...')
            print('='*50)
            time.sleep(3)
            break
        else:  # invalid option
            print('Este número não está entre as opções!!')
            print('='*50)
            time.sleep(3)
def rmvProduto():  # remove product/service function
    while True:
        print("Dentro do carrinho:\n")
        for y in range(len(carrinho[0])):  # show the cart
            print(f'[{y+1}] - {carrinho[0][y]} --> R${carrinho[1][y]} Quantidade: {carrinho[2][y]}')
        print('='*50)
        # choose the removal mode: whole product or just quantity
        print("Digite [P] para remover um produto/serviço\nDigite [Q] para diminuir a quantidade de seu produto\nDigite [M] para voltar ao MENU PRINCIPAL")
        produto_ou_quantidade = input("\nEscolha uma das opções acima: ").upper()[0]
        print('='*50)
        while (produto_ou_quantidade != 'P') and (produto_ou_quantidade != 'Q') and (produto_ou_quantidade != 'M'):
            produto_ou_quantidade = input("As únicas opções válidas são [P], [Q] ou [M]: ").upper()[0]
            print('='*50)
        if produto_ou_quantidade == 'M':  # leave the function
            print('Saindo ...')
            print('='*50)
            time.sleep(3)
            break
        elif produto_ou_quantidade == 'P':  # remove a product
            remove = int(input("Informe qual produto irá remover: "))
            print('='*50)
            while remove < 1 or remove > len(carrinho[0]):
                remove = int(input("Este produto não está na lista! Informe novamente qual produto irá remover: "))
                print('='*50)
        elif produto_ou_quantidade == 'Q':  # reduce a quantity
            escolheProdRem = int(input("Informe de qual item irá reduzir a quantidade: "))  # pick the product
            print('='*50)
            while escolheProdRem < 1 or escolheProdRem > len(carrinho[2]):
                escolheProdRem = int(input("Este produto não está na lista! Informe novamente qual produto irá reduzir a quantidade: "))
                print('='*50)
            removeQuantidade = int(input(f'Gostaria de remover quantos de "{carrinho[0][escolheProdRem-1]}": '))  # quantity to remove
            print('='*50)
            while removeQuantidade <= 0 or removeQuantidade > carrinho[2][escolheProdRem-1]:
                removeQuantidade = int(input(f'Tirar este valor é impossível! Gostaria de remover quantos de "{carrinho[0][escolheProdRem-1]}": '))
                print('='*50)
        print("[S] para sim\n[N] para não\n")
        certeza = input(f'Confirme a sua ação: ').upper()[0]  # confirm the action
        print('='*50)
        while (certeza != 'S') and (certeza != 'N'):
            certeza = input("Opção inválida! Digite [S] para sim [N] para não: ").upper()[0]
            print('='*50)
        if certeza == 'S':  # confirmed
            if produto_ou_quantidade == 'P':  # drop the whole product
                del carrinho[0][remove-1]
                del carrinho[1][remove-1]
                del carrinho[2][remove-1]
            elif produto_ou_quantidade == 'Q':
                if removeQuantidade == carrinho[2][escolheProdRem-1]:  # removing the full quantity drops the item
                    del carrinho[0][escolheProdRem-1]
                    del carrinho[1][escolheProdRem-1]
                    del carrinho[2][escolheProdRem-1]
                else:
                    carrinho[2][escolheProdRem-1] -= removeQuantidade  # reduce the stored quantity
        else:  # declined: keep the product or quantity in the cart
            print("O produto não foi removido de seu carrinho!")
            print('='*50)
            time.sleep(3)
def extrato():  # cart-statement function
    while True:
        somaFatura = 0
        for y in range(len(carrinho[0])):  # show the cart
            print(f'[{y+1}] - {carrinho[0][y]} --> R${carrinho[1][y]} Quantidade: {carrinho[2][y]}')
            somaFatura += ((carrinho[1][y]) * (carrinho[2][y]))
        print(f"\nValor total R${somaFatura:.2f}")  # total amount
        sair_extrato = int(input("\nDigite [99] para sair: "))
        print('='*50)
        while sair_extrato != 99:
            sair_extrato = int(input("Dado inválido! Digite 99 para sair: "))
            print('='*50)
        if sair_extrato == 99:  # leave the function
            print("Saindo ...")
            print('='*50)
            time.sleep(3)
            break
#PROGRAMA
import time
carrinho = [[],[],[]]
historico = [[],[],[]]
#ACESSAR/FINALIZAR
while True:
print("> Para acessar o programa basta digitar [A]\n> Caso queira finalizar o programa, digite [F]\n")
acessar = str(input("Escolha uma opção: ")).upper()[0]
print('='*50)
while acessar != 'A' and acessar != 'F': #VALIDAÇÃO ACESSAR/FINALIZAR
acessar = input("Valor inválido! Digite A para acessar o programa ou F para finalizar o programa:\n").upper()[0]
print('='*50)
if acessar == 'A':
print('Bem vindo a Oficina Borracha Forte!') #ACESSAR - BOAS VINDAS
print('='*50)
time.sleep(3)
else:
print('Iremos finalizar o programa ...') #FINALIZAR
print('='*50)
time.sleep(3)
print(f"Muito obrigado pela visita!") #AGRADECIMENTO + HISTÓRICO DE COMPRAS
print('='*50)
print('NOTA FISCAL\n')
somaFatura = 0
for y in range(len(historico[0])): #AMOSTRA DO HISTÓRICO FINAL DA COMPRA
print(f'[{y+1}] - {historico[0][y]} --> R${historico[1][y]:.2f} Quantidade: {historico[2][y]}')
somaFatura += ((historico[1][y])*(historico[2][y]))
print(f"\nValor total R${somaFatura:.2f}")
break
while True:
print(f"MENU PRINCIPAL\n") #MENU PRINCIPAL
#OPÇÕES PARA DAR PROCEDIMENTO
print("Escolha a opção que deseja:\n\n[1] - Adicionar Produto\n[2] - Adicionar Serviço\n[3] - Remover Produto ou Serviço\n[4] - Limpar carrinho\n[5] - Extrato\n[6] - Finalizar Compra\n[7] - Sair")
opcao = int(input("\n"))
print('='*50)
if opcao == 1: #ADICIONAR PRODUTOS AO SEU CARRINHO
print("Carregando ...")
print('='*50)
time.sleep(3)
while True:
adcProduto() #FUNÇÃO ADICIONAR PRODUTO
break
elif opcao == 2: #ADICIONAR SERVIÇOS AO SEU CARRINHO
print("Carregando ...")
print('='*50)
time.sleep(3)
while True:
adcServico() #FUNÇÃO ADICIONAR SERVIÇO
break
elif opcao == 3: #REMOVER PRODUTOS/SERVIÇOS DE SEU CARRINHO
print("Carregando ...")
print('='*50)
time.sleep(3)
while True:
rmvProduto() #FUNÇÃO REMOVER PRODUTO
break
elif opcao == 4: #LIMPAR SEU CARRINHO
print("Carregando ...")
print('='*50)
time.sleep(3)
while True:
limparcar() #FUNÇÃO LIMPAR CARRINHO
break
elif opcao == 5: #EXTRATO DE SEU CARRINHO
print("Carregando ...")
print('='*50)
time.sleep(3)
while True:
extrato() #FUNÇÃO EXTRATO CARRINHO
break
elif opcao == 6: #FINALIZAR/DESISTIR DA COMPRA
print("Carregando ...")
print('='*50)
time.sleep(3)
print("Gostaria de dar procedimento a finalização da compra ou gostaria de desistir?\n") #CHANCE DE DESISTÊNCIA DA COMPRA
print("[P] para prosseguir\n[D] para desistir\n")
certeza = input(f'Confirme a sua ação: ').upper()[0]
print('='*50)
while (certeza != 'P') and (certeza != 'D'):
certeza = input("Opção inválida! Digite [P] para prosseguir [D] para desistir: ").upper()[0]
print('='*50)
if certeza == 'D': #DESISTÊNCIA (1ªCONFIRMAÇÃO) - MOSTRA OS PRODUTOS QUE ESTÁ DESISTINDO
print("Você tem certeza? Essa é o seu carrinho:\n")
for y in range(len(carrinho[0])):
print(f'[{y+1}] - {carrinho[0][y]} --> R${carrinho[1][y]} Quantidade: {carrinho[2][y]}')
print('='*50)
print("[S] para sim\n[N] para não\n") #DESISTÊNCIA (2ªCONFIRMAÇÃO) - LIMPEZA DO CARRINHO E SAÍDA DIRETA DO PROGRAMA
certeza = input("Confirme sua ação: ").upper()[0]
print('='*50)
while (certeza != 'S') and (certeza != 'N'):
certeza = input("Opção inválida! Confirme sua ação: ").upper()[0]
print('='*50)
if certeza == 'S':
carrinho[0].clear()
carrinho[1].clear()
carrinho[2].clear()
print('VOLTE SEMPRE!')
print('='*50)
time.sleep(3)
break
else:
print("Voltando ...")
print('='*50)
time.sleep(3)
else: #FINALIZAR COMPRA - FORMA DE PAGAMENTO
print("Qual será a forma de pagamento?\n")
print("[C] - Cartão\n[D] - Dinheiro\n[P] - PIX")
FormaPagamento = str(input("\nEscolha a forma de pagamento: ")).upper()[0]
print('='*50)
while (FormaPagamento != 'D') and (FormaPagamento != 'C') and (FormaPagamento != 'P'):
FormaPagamento = str(input("Esta opcção não é válida! Escolha a forma de pagamento: ")).upper()[0]
print('='*50)
if FormaPagamento == 'D': #FORMA DE PAGAMENTO - DINHEIRO
somaFatura = 0
for y in range(len(carrinho[0])): #AMOSTRA DO CARRINHO
print(f'[{y+1}] - {carrinho[0][y]} --> R${carrinho[1][y]} Quantidade: {carrinho[2][y]}')
somaFatura += ((carrinho[1][y])*(carrinho[2][y]))
print(f"\nValor total R${somaFatura:.2f}")
dinheiro = int(input("\nDigite o valor do pagamento: "))
print('='*50)
while dinheiro < somaFatura:
dinheiro = int(input("Inválido! Digite o valor: "))
print('='*50)
troco = round(dinheiro - somaFatura, 2)
print(f"Troco do cliente: R${troco:.2f}")
cont50n = 0
cont20n = 0
cont10n = 0
cont5n = 0
cont2n = 0
cont1n = 0
while troco >= 1: #DISTRIBUI O TROCO EM CÉDULAS INTEIRAS
if troco >= 50:
troco -= 50
cont50n +=1
elif troco >= 20:
troco -= 20
cont20n += 1
elif troco >= 10:
troco -= 10
cont10n += 1
elif troco >= 5:
troco -= 5
cont5n += 1
elif troco >= 2:
troco -= 2
cont2n += 1
elif troco >= 1:
troco -= 1
cont1n += 1
lista_cont = [cont50n, cont20n, cont10n, cont5n, cont2n, cont1n]
lista_cedulas = [50, 20, 10, 5, 2, 1]
for i, v in zip(lista_cont, lista_cedulas):
if i > 0:
print(f'{i} cédula(s) de {v} reais')
print('='*50)
somaFatura = 0
for i in range(len(carrinho[0])):
historico[0].append(carrinho[0][i])
historico[1].append(carrinho[1][i])
historico[2].append(carrinho[2][i])
carrinho[0].clear()
carrinho[1].clear()
carrinho[2].clear()
elif FormaPagamento == 'C': #FORMA DE PAGAMENTO - CARTÃO
somaFatura = 0
for y in range(len(carrinho[0])):
print(f'[{y+1}] - {carrinho[0][y]} --> R${carrinho[1][y]} Quantidade: {carrinho[2][y]}')
somaFatura += ((carrinho[1][y])*(carrinho[2][y]))
print(f"\nValor total R${somaFatura:.2f}")
print("\n[C] - Crédito\n[D] - Débito") #CRÉDITO OU DÉBITO
credito_debito = str(input("\nEscolha entre Crédito ou Débito: ")).upper()[0]
print('='*50)
while (credito_debito != 'D') and (credito_debito != 'C'):
credito_debito = str(input("Dado inválido! Escolha entre Crédito ou Débito: ")).upper()[0]
print('='*50)
if credito_debito == 'C': #CRÉDITO
print('Obs: Parcelas acima de 3x acarretará juros de 3%. Máximo de parcelas: 10') #
parcelas = int(input('\nDeseja parcelar em quantas vezes: '))
print('='*50)
while parcelas <= 0 or parcelas > 10:
parcelas = int(input('Inválido! Deseja parcelar em quantas vezes: '))
print('='*50)
if parcelas >= 1 and parcelas <= 3:
somaFatura /= parcelas
print(f"O valor parcelado em {parcelas}x fica: R${somaFatura:.2f}") #
print('='*50)
print("Pago com sucesso!")
print('='*50)
somaFatura = 0
for i in range(len(carrinho[0])):
historico[0].append(carrinho[0][i])
historico[1].append(carrinho[1][i])
historico[2].append(carrinho[2][i])
carrinho[0].clear()
carrinho[1].clear()
carrinho[2].clear()
time.sleep(3)
else:
somaFatura /= parcelas
somaFatura *= 1.03
print(f"O valor parcelado em {parcelas}x fica: R${somaFatura:.2f}")
print('='*50)
print("Pago com sucesso!")
print('='*50)
somaFatura = 0
for i in range(len(carrinho[0])):
historico[0].append(carrinho[0][i])
historico[1].append(carrinho[1][i])
historico[2].append(carrinho[2][i])
carrinho[0].clear()
carrinho[1].clear()
carrinho[2].clear()
time.sleep(3)
elif credito_debito == 'D': #DÉBITO
print('Pagamento realizado com sucesso!')
print('='*50)
somaFatura = 0
for i in range(len(carrinho[0])):
historico[0].append(carrinho[0][i])
historico[1].append(carrinho[1][i])
historico[2].append(carrinho[2][i])
carrinho[0].clear()
carrinho[1].clear()
carrinho[2].clear()
time.sleep(3)
else: #FORMA DE PAGAMENTO - PIX
print('='*50)
print('Pagamento com PIX realizado com sucesso!')
print('='*50)
somaFatura = 0
for i in range(len(carrinho[0])):
historico[0].append(carrinho[0][i])
historico[1].append(carrinho[1][i])
historico[2].append(carrinho[2][i])
carrinho[0].clear()
carrinho[1].clear()
carrinho[2].clear()
time.sleep(3)
elif opcao == 7: #SAIR DO PROGRAMA
print("Carregando ...")
print('='*50)
time.sleep(3)
if len(carrinho[0]) == 0: #CARRINHO SEM ITEM - SAÍDA DIRETA
print("VOLTE SEMPRE!")
print('='*50)
time.sleep(3)
break
else:
print("Tem certeza que deseja sair? Todo o conteúdo do seu carrinho será removido.\n\n[S] para sim\n[N] para não") #CONFIRMAÇÃO DA AÇÃO
certeza = input("\nConfirme sua ação: ").upper()[0]
print('='*50)
while (certeza != 'S') and (certeza != 'N'):
certeza = input("Dado inválido! Digite [S] para sim [N] para não:\n").upper()[0]
print('='*50)
if certeza == 'S': #LIMPEZA DO CARRINHO
carrinho[0].clear()
carrinho[1].clear()
carrinho[2].clear()
print("Limpando seu carrinho ...")
print('='*50)
print("VOLTE SEMPRE!")
print('='*50)
time.sleep(3)
break
else: #CASO DESISTA DA AÇÃO - CARRINHO MANTIDO
print("Seus produtos foram mantidos no carrinho!")
print('='*50)
time.sleep(3)
else: #AVISO DE ALTERNATIVA INVÁLIDA
print('Insira uma opção válida!')
print('='*50)
time.sleep(3)
#LEGADO PARA CONSULTA
#def finalizarCompra():
# print("Gostaria de dar procedimento a finalização da compra ou gostaria de desistir?\n")
# print("[S] para sim\n[N] para não\n")
# certeza = input(f'Confirme a sua ação: ').upper()[0] #MOSTRAR O NOME DO PRODUTO QUE SERÁ APAGADO
# print('='*50)
# while (certeza != 'S') and (certeza != 'N'):
# certeza = input("Opção inválida! Digite [S] para sim [N] para não: ").upper()[0] #MOSTRAR O NOME DO PRODUTO QUE SERÁ APAGADO
# print('='*50)
# print("Qual será a forma de pagamento?\n")
# print("[C] - Cartão\n[D] - Dinheiro\n[P] - PIX")
# FormaPagamento = str(input("\nEscolha a forma de pagamento: ")).upper()[0]
# print('='*50)
# while (FormaPagamento != 'D') and (FormaPagamento != 'C') and (FormaPagamento != 'P'):
# FormaPagamento = str(input("Esta opcção não é válida! Escolha a forma de pagamento: ")).upper()[0]
# print('='*50)
#
# if FormaPagamento == 'D':
# somaFatura = 0
# for y in range(len(carrinho[0])):
# print(f'[{y+1}] - {carrinho[0][y]} --> R${carrinho[1][y]} Quantidade ; {carrinho[2][y]}')
# somaFatura += ((carrinho[1][y])*(carrinho[2][y]))
# print(f"\nValor total R${somaFatura:.2f}")
# dinheiro = int(input("\nDigite o valor do pagamento: "))
# print('='*50)
# while dinheiro < somaFatura:
# dinheiro = int(input("Inválido! Digite o valor: "))
# print('='*50)
# troco = dinheiro - somaFatura
# print(f"Troco do cliente: R${troco}")
# cont50n = 0
# cont20n = 0
# cont10n = 0
# cont5n = 0
# cont2n = 0
# cont1n = 0
# while troco > 0:
# if troco >= 50:
# troco -= 50
# cont50n +=1
# elif troco >= 20:
# troco -= 20
# cont20n += 1
# elif troco >= 10:
# troco -= 10
# cont10n += 1
# elif troco >= 5:
# troco -= 5
# cont5n += 1
# elif troco >= 2:
# troco -= 2
# cont2n += 1
# elif troco >= 1:
# troco -= 1
# cont1n += 1
#
# lista_cont = [cont50n, cont20n, cont10n, cont5n, cont2n, cont1n]
# lista_cedulas = [50, 20, 10, 5, 2, 1]
#
# for i, v in zip(lista_cont, lista_cedulas):
# if i > 0:
# print(f'{i} cédula(s) de {v} reais')
# print('='*50)
# somaFatura = 0
# historico = [[],[],[]]
# for i in range(len(carrinho[0])):
# historico[0].append(carrinho[0][i])
# historico[1].append(carrinho[1][i])
# historico[2].append(carrinho[2][i])
# print(f"antes Lista histórico: {historico}")
# print(f"antesLista carrinho: {carrinho}")
# carrinho[0].clear()
# carrinho[1].clear()
# carrinho[2].clear()
# print(f"depois Lista histórico: {historico}")
# print(f"depois Lista carrinho: {carrinho}")
#
# elif FormaPagamento == 'C':
# somaFatura = 0
# for y in range(len(carrinho[0])):
# print(f'[{y+1}] - {carrinho[0][y]} --> R${carrinho[1][y]} Quantidade ; {carrinho[2][y]}')
# somaFatura += ((carrinho[1][y])*(carrinho[2][y]))
# print(f"\nValor total R${somaFatura:.2f}")
# print("\n[C] - Crédito\n[D] - Débito")
# credito_debito = str(input("\nEscolha entre Crédito ou Débito: ")).upper()[0]
# print('='*50)
# while (FormaPagamento != 'D') and (FormaPagamento != 'C'):
# credito_debito = str(input("Dado inválido! Escolha entre Crédito ou Débito: ")).upper()[0]
# print('='*50)
# if credito_debito == 'C':
# print('Obs: Parcelas acima de 3x acarretará juros de 3%. Máximo de parcelas: 10')
# parcelas = int(input('\nDeseja parcelar em quantas vezes: '))
# print('='*50)
# while parcelas <= 0 or parcelas > 10:
# parcelas = int(input('Inválido! Deseja parcelar em quantas vezes: '))
# print('='*50)
# if parcelas >= 1 and parcelas <= 3:
# somaFatura /= parcelas
# print(f"O valor parcelado em {parcelas}x fica: R${somaFatura:.2f}")
# print('='*50)
# print("Pago com sucesso!")
# print('='*50)
# somaFatura = 0
# historico = carrinho.copy()
# carrinho[0].clear()
# carrinho[1].clear()
# carrinho[2].clear()
# time.sleep(3)
# elif parcelas == 0:
# print(f"O valor parcelado em {parcelas}x fica: R${somaFatura:.2f}")
# print('='*50)
# print("Pago com sucesso!")
# print('='*50)
# somaFatura = 0
# historico = carrinho.copy()
# carrinho[0].clear()
# carrinho[1].clear()
# carrinho[2].clear()
# time.sleep(3)
# else:
# somaFatura /= parcelas
# somaFatura * 1.03
# print(f"O valor parcelado em {parcelas}x fica: R${somaFatura:.2f}")
# print('='*50)
# print("Pago com sucesso!")
# print('='*50)
# somaFatura = 0
# historico = carrinho.copy()
# carrinho[0].clear()
# carrinho[1].clear()
# carrinho[2].clear()
# time.sleep(3)
# elif credito_debito == 'D':
# print('Pagamento realizado com sucesso!')
# print('='*50)
# somaFatura = 0
# historico = carrinho
# carrinho[0].clear()
# carrinho[1].clear()
# carrinho[2].clear()
# time.sleep(3)
# else:
# print('='*50)
# print('Pagamento com PIX realizado com sucesso!')
# print('='*50)
# somaFatura = 0
# historico = carrinho
# carrinho[0].clear()
# carrinho[1].clear()
# carrinho[2].clear()
# time.sleep(3)
```

# _*Qiskit Finance: Pricing Fixed-Income Assets*_
The latest version of this notebook is available on https://github.com/Qiskit/qiskit-iqx-tutorials.
***
### Contributors
Stefan Woerner<sup>[1]</sup>, Daniel Egger<sup>[1]</sup>, Shaohan Hu<sup>[1]</sup>, Stephen Wood<sup>[1]</sup>, Marco Pistoia<sup>[1]</sup>
### Affiliation
- <sup>[1]</sup>IBMQ
### Introduction
We seek to price a fixed-income asset knowing the distributions describing the relevant interest rates. The cash flows $c_t$ of the asset and the dates at which they occur are known. The total value $V$ of the asset is thus the expectation value of:
$$V = \sum_{t=1}^T \frac{c_t}{(1+r_t)^t}$$
Each cash flow is treated as a zero coupon bond with a corresponding interest rate $r_t$ that depends on its maturity. The user must specify the distribution modeling the uncertainty in each $r_t$ (possibly correlated) as well as the number of qubits they wish to use to sample each distribution. In this example we expand the value of the asset to first order in the interest rates $r_t$. This corresponds to studying the asset in terms of its duration.
<br>
<br>
The approximation of the objective function follows this paper:<br>
<a href="https://arxiv.org/abs/1806.06893">Quantum Risk Analysis. Woerner, Egger. 2018.</a>
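As a quick plain-NumPy sanity check of this first-order (duration) expansion: around $r_t = 0$ each term $c_t/(1+r_t)^t$ is approximated by $c_t(1 - t\,r_t)$. The cash flows and rates below are made up purely for illustration:

```python
import numpy as np

# Made-up cash flows and rates, purely for illustration (not the notebook's data).
cf = np.array([1.0, 2.0])       # cash flow c_t at t = 1, 2
r = np.array([0.05, 0.06])      # interest rate r_t for each maturity
t = np.arange(1, len(cf) + 1)

V_exact = np.sum(cf / (1 + r) ** t)   # exact discounted value
V_linear = np.sum(cf * (1 - t * r))   # first-order expansion around r = 0

print(V_exact, V_linear)
```

For small rates the two values agree closely; the linear form is what the amplitude-estimation circuit below encodes.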
```
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from qiskit import BasicAer
from qiskit.aqua.algorithms.single_sample.amplitude_estimation.ae import AmplitudeEstimation
from qiskit.aqua.components.uncertainty_models import MultivariateNormalDistribution
from qiskit.finance.components.uncertainty_problems import FixedIncomeExpectedValue
backend = BasicAer.get_backend('statevector_simulator')
```
### Uncertainty Model
We construct a circuit factory to load a multivariate normal random distribution in $d$ dimensions into a quantum state.
The distribution is truncated to a given box $\otimes_{i=1}^d [low_i, high_i]$ and discretized using $2^{n_i}$ grid points, where $n_i$ denotes the number of qubits used for dimension $i = 1,\ldots, d$.
The unitary operator corresponding to the circuit factory implements the following:
$$\big|0\rangle_{n_1}\ldots\big|0\rangle_{n_d} \mapsto \big|\psi\rangle = \sum_{i_1=0}^{2^{n_1}-1}\ldots\sum_{i_d=0}^{2^{n_d}-1} \sqrt{p_{i_1,\ldots,i_d}}\big|i_1\rangle_{n_1}\ldots\big|i_d\rangle_{n_d},$$
where $p_{i_1, ..., i_d}$ denote the probabilities corresponding to the truncated and discretized distribution and where $i_j$ is mapped to the right interval $[low_j, high_j]$ using the affine map:
$$ \{0, \ldots, 2^{n_{j}}-1\} \ni i_j \mapsto \frac{high_j - low_j}{2^{n_j} - 1} * i_j + low_j \in [low_j, high_j].$$
In addition to the uncertainty model, we can also apply an affine map, e.g. resulting from a principal component analysis. The interest rates used are then given by:
$$ \vec{r} = A * \vec{x} + b,$$
where $\vec{x} \in \otimes_{i=1}^d [low_i, high_i]$ follows the given random distribution.
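As a concrete check of the affine grid map, here it is in plain Python with this example's own values $n_j = 2$ and $[low_j, high_j] = [0, 0.12]$:

```python
# Grid points produced by the affine map i -> (high - low)/(2**n - 1) * i + low.
n, low, high = 2, 0.0, 0.12
grid = [(high - low) / (2 ** n - 1) * i + low for i in range(2 ** n)]
print(grid)  # 2**n evenly spaced points spanning [low, high]
```

The first dimension of the distribution constructed below is discretized onto exactly these four points.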
```
# can be used in case a principal component analysis has been done to derive the uncertainty model, ignored in this example.
A = np.eye(2)
b = np.zeros(2)
# specify the number of qubits that are used to represent the different dimensions of the uncertainty model
num_qubits = [2, 2]
# specify the lower and upper bounds for the different dimensions
low = [0, 0]
high = [0.12, 0.24]
mu = [0.12, 0.24]
sigma = 0.01*np.eye(2)
# construct corresponding distribution
u = MultivariateNormalDistribution(num_qubits, low, high, mu, sigma)
# plot contour of probability density function
x = np.linspace(low[0], high[0], 2**num_qubits[0])
y = np.linspace(low[1], high[1], 2**num_qubits[1])
z = u.probabilities.reshape(2**num_qubits[0], 2**num_qubits[1])
plt.contourf(x, y, z)
plt.xticks(x, size=15)
plt.yticks(y, size=15)
plt.grid()
plt.xlabel('$r_1$ (%)', size=15)
plt.ylabel('$r_2$ (%)', size=15)
plt.colorbar()
plt.show()
```
### Cash flow, payoff function, and exact expected value
In the following we define the cash flow per period, the resulting payoff function and evaluate the exact expected value.
For the payoff function we first use a first order approximation and then apply the same approximation technique as for the linear part of the payoff function of the [European Call Option](european_call_option_pricing.ipynb).
```
# specify cash flow
cf = [1.0, 2.0]
periods = range(1, len(cf)+1)
# plot cash flow
plt.bar(periods, cf)
plt.xticks(periods, size=15)
plt.yticks(size=15)
plt.grid()
plt.xlabel('periods', size=15)
plt.ylabel('cashflow ($)', size=15)
plt.show()
# estimate real value
cnt = 0
exact_value = 0.0
for x1 in np.linspace(low[0], high[0], pow(2, num_qubits[0])):
for x2 in np.linspace(low[1], high[1], pow(2, num_qubits[1])):
prob = u.probabilities[cnt]
for t in range(len(cf)):
# evaluate linear approximation of real value w.r.t. interest rates
exact_value += prob * (cf[t]/pow(1 + b[t], t+1) - (t+1)*cf[t]*np.dot(A[:, t], np.asarray([x1, x2]))/pow(1 + b[t], t+2))
cnt += 1
print('Exact value: \t%.4f' % exact_value)
# specify approximation factor
c_approx = 0.125
# get fixed income circuit appfactory
fixed_income = FixedIncomeExpectedValue(u, A, b, cf, c_approx)
# set number of evaluation qubits (samples)
m = 5
# construct amplitude estimation
ae = AmplitudeEstimation(m, fixed_income)
# result = ae.run(quantum_instance=LegacySimulators.get_backend('qasm_simulator'), shots=100)
result = ae.run(quantum_instance=backend)
print('Exact value: \t%.4f' % exact_value)
print('Estimated value:\t%.4f' % result['estimation'])
print('Probability: \t%.4f' % result['max_probability'])
# plot estimated values for "a" (direct result of amplitude estimation, not rescaled yet)
plt.bar(result['values'], result['probabilities'], width=0.5/len(result['probabilities']))
plt.xticks([0, 0.25, 0.5, 0.75, 1], size=15)
plt.yticks([0, 0.25, 0.5, 0.75, 1], size=15)
plt.title('"a" Value', size=15)
plt.ylabel('Probability', size=15)
plt.xlim((0,1))
plt.ylim((0,1))
plt.grid()
plt.show()
# plot estimated values for fixed-income asset (after re-scaling and reversing the c_approx-transformation)
plt.bar(result['mapped_values'], result['probabilities'], width=3/len(result['probabilities']))
plt.plot([exact_value, exact_value], [0,1], 'r--', linewidth=2)
plt.xticks(size=15)
plt.yticks([0, 0.25, 0.5, 0.75, 1], size=15)
plt.title('Estimated Option Price', size=15)
plt.ylabel('Probability', size=15)
plt.ylim((0,1))
plt.grid()
plt.show()
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
# Sentiment Analysis
## Using XGBoost in SageMaker
_Deep Learning Nanodegree Program | Deployment_
---
As our first example of using Amazon's SageMaker service we will construct a gradient-boosted tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson although it would have been done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon.
## Instructions
Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!
In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.
> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
## Step 1: Downloading the data
The dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.
> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.
We begin by using some Jupyter Notebook magic to download and extract the dataset.
```
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
```
## Step 2: Preparing the data
The data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
```
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
```
## Step 3: Processing the data
Now that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
```
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
```
### Extract Bag-of-Words features
For the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
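To make the "fit on the training set only" point concrete, here is a toy bag-of-words transform using only the standard library; the tiny documents are invented for illustration:

```python
from collections import Counter

# Invented toy corpus, already tokenized like our preprocessed reviews.
train_docs = [["good", "movie", "good"], ["bad", "movie"]]
test_docs = [["good", "plot", "bad"]]        # "plot" never appears in training

# The vocabulary is built from the training documents only.
vocab = sorted({w for doc in train_docs for w in doc})

def bow(doc):
    counts = Counter(doc)
    return [counts[w] for w in vocab]        # out-of-vocabulary words are dropped

train_features = [bow(d) for d in train_docs]
test_features = [bow(d) for d in test_docs]
print(vocab, train_features, test_features)
```

`CountVectorizer` below does the same thing at scale, which is why it is fit on the training documents and only applied (not re-fit) to the test documents.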
```
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
```
## Step 4: Classification using XGBoost
Now that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker.
### (TODO) Writing the dataset
The XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
```
import pandas as pd
# TODO: Split the train_X and train_y arrays into the DataFrames val_X, train_X and val_y, train_y. Make sure that
# val_X and val_y contain 10,000 entries while train_X and train_y contain the remaining 15,000 entries.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
```
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index and that, for the training and validation data, the label occurs first for each sample.
For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
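As a minimal sketch of that layout using only the standard library (the rows are toy values): each line is `label,feature_1,feature_2,...` with no header row and no index column.

```python
import csv
import io

# Toy (label, features...) rows in the layout the XGBoost CSV channel expects.
rows = [(1, 3, 0, 2), (0, 0, 1, 5)]

buf = io.StringIO()
csv.writer(buf).writerows(rows)
print(buf.getvalue())
```

This mirrors what `pd.concat([train_y, train_X], axis=1).to_csv(..., header=False, index=False)` produces below.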
```
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
# TODO: Save the training and validation data to train.csv and validation.csv in the data_dir directory.
# Make sure that the files you create are in the correct format.
# Solution:
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
```
### (TODO) Uploading Training / Validation files to S3
Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.
For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.
Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.
For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
```
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
# TODO: Upload the test.csv, train.csv and validation.csv files which are contained in data_dir to S3 using session.upload_data().
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
```
### (TODO) Creating the XGBoost model
Now that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.
- Model Artifacts
- Training Code (Container)
- Inference Code (Container)
The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.
The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.
The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
```
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
```
### Fit the XGBoost model
Now that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
```
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
```
### (TODO) Testing the model
Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real-time. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference is also useful to us, as it means we can perform inference on our entire test set.
To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
```
# TODO: Create a transformer object from the trained model. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
```
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
```
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
# Solution:
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
```
Currently the transform job is running, but it is doing so in the background. Since we wish to wait until it has finished, and we would like a bit of feedback while it runs, we can call the `wait()` method.
```
xgb_transformer.wait()
```
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
```
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
```
The last step is to read in the output from our model and convert it to something a little more usable: in this case, we want the sentiment to be either `1` (positive) or `0` (negative). We then compare these predictions to the ground truth labels.
```
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
```
## Optional: Clean up
The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
```
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
```
# Introduction to Programming in Python
In this short introduction, I'll introduce you to the basics of programming, using the Python programming language. By the end of it, you should hopefully be able to write your own HMM POS-tagger.
### First Steps
You can think of a program as a series of instructions for the computer, which it will follow one after the other. When the computer runs the program, it will create objects and manipulate them, according to our instructions. For example, we could tell it to create some objects representing numbers, add them together, then show us (`print`) the result. If you click on the block of code below to select it, then click on the "run" button in the toolbar above (or press ctrl+enter), you should see the output appear underneath.
```
kim = 4
jamie = 3
chris = kim + jamie
print(chris)
```
Now try editing the above code to do a different calculation, and then run it again. As well as adding (`+`), we can also subtract (`-`), multiply (`*`), and divide (`/`). Note that `kim`, `jamie`, and `chris` are just names we've assigned to the objects, and you can change the names to whatever you want.
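For example, here is a quick sketch using the other three operators (the variable names are made up for illustration):

```python
apples = 10
oranges = 4
print(apples - oranges)  # subtraction: 6
print(apples * oranges)  # multiplication: 40
print(apples / oranges)  # division: 2.5 (dividing always gives a decimal number)
```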
These named objects are called **variables**. We can also calculate things without explicitly naming the objects, as shown below.
```
print(3 + 4)
```
As you work through this notebook, I would encourage you to play with the examples until you feel comfortable with what the code is doing.
If a line of code can't be interpreted, Python will throw an error. Run the following code, which has a mistake - it will tell you which line caused the error, and give you an error message.
```
hamster = 2 = 3
print(hamster)
```
Now edit the above code so that it does not throw an error, then run it again.
Don't be worried if you introduce errors as you play with the code in this notebook - finding errors and fixing them is called **debugging**, and is an important part of programming.
Finally, if we write a hash symbol, Python will ignore everything after it in that line. This is called a **comment**, and is useful to document what the code is doing.
```
kim = 4 # Define a variable
jamie = 3 # Define another variable
chris = kim + jamie # Add these objects together, and save the result as a new variable
print(chris) # Print the new variable
chris = 10 # If we re-define a variable, it replaces the old value
print(chris)
chris = chris + 1 # We can also assign a new value to a variable based on its current value
print(chris)
chris += 1 # This is shorthand for the line 'chris = chris + 1'
print(chris)
```
### Types of Object
There are many types of object in Python, apart from numbers. Another type is a **string**, to represent text. A string must start and finish with quotes (either single or double quotes, as long as the same kind is used).
```
text = "hello"
more_text = ' world!'
combined = text + more_text # '+' will concatenate strings
print(combined)
repeated = text * 5 # '*' will repeat strings
print(repeated)
string_23 = '23' # This is a string
integer_23 = 23 # This is an integer
# What do you think will be printed if you uncomment the lines below?
#print(string_23 * 3)
#print(integer_23 * 3)
```
We can refer to specific characters in a string, and refer to substrings, using square brackets:
```
long_string = "they fish in rivers in December"
letter = long_string[0] # We start counting from zero. This does: letter = "t"
print(letter)
another_letter = long_string[15]
print(another_letter)
end = long_string[-1] # If you give a negative number, it counts backwards from the end
print(end)
long_string = "they fish in rivers in December"
substring = long_string[0:3] # We can get a substring by specifying start and end points, separated by a colon
print(substring) # This prints the first three characters
long_substring = long_string[5:] # If you don't specify a number, it uses the very start or very end
print(long_substring) # This prints everything except the first five characters
```
Other important types of object are **lists**, **tuples**, and **dictionaries**. Lists and tuples are made up of several objects in a particular order. Dictionaries map from one set of objects (the **keys**) to another (the **values**), and have no inherent order. Lists are written with square brackets `[]`, tuples with round brackets `()`, and dictionaries with curly brackets `{}`.
```
my_list = [1, 5, 12, 'dog']
my_tuple = ('cat', 17, 18)
my_dict = {'banana': 'yellow', 'apple': 'green', 'orange': 'orange'}
print(my_tuple[0]) # You can refer to elements of a tuple or list, in the same way as for a string
print(my_dict['apple']) # You can also look something up in a dictionary in this way
# Lists and dictionaries can also be changed:
my_list[1] = 100
my_dict['apple'] = 'red'
print(my_list)
print(my_dict)
# Note you can't change strings or tuples like this (what happens if you try?)
# This dict maps from bigrams (tuples of strings) to integers
tuple_dict = {('the', 'fish'): 351, ('dog', 'barked'): 233, ('cat', 'barked'): 1}
# If a key is a tuple, the round brackets of the tuple are optional:
print(tuple_dict[('the', 'fish')])
print(tuple_dict['the', 'fish'])
# Note that you can't use lists and dicts as keys of a dict (because lists and dicts can be changed)
```
### Functions and Methods
So far, we've written programs where each line is executed exactly once. However, it is often useful to run the same code in different places, and we can do that using a **function**. We've seen one function so far, namely the `print` function. We can also define our own, using the keyword `def`. The function will take some number of arguments (possibly zero), run some code, and `return` a result. The code inside the function is indented (here, indented with 4 spaces).
```
def add_one(x): # This defines a function 'add_one' which takes one argument 'x'
y = x + 1 # We create a new object which is one larger
return y # We return the result
new_value = add_one(10) # We're calling the add_one function, with x=10
print(new_value) # We're calling the print function
print(add_one(add_one(0))) # We can also pass the result of one function as the input to another function
def repeat_substring(string, number): # This function takes two arguments
substring = string[0:3] # We take the first three letters in the string
return substring * number # We return this substring, repeated some number of times
print(repeat_substring('cathode', 3))
# If the print function is given multiple arguments, it prints them all, separated by a space
print('and finally,', repeat_substring('doggerel', add_one(1)))
```
Try writing your own function for "Pig Latin" - it should take a string, remove the first letter, put the first letter on the end, then add "ay". For example,
"pig" -> "igpay" ("ig" + "p" + "ay")
"latin" -> "atinlay" ("atin" + "l" + "ay")
"eat" -> "ateay" ("at" + "e" + "ay")
(This is a slight simplification of the children's game.)
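If you want to check your answer afterwards, here is one possible solution, using the string indexing and slicing shown above (the name `pig_latin` is just a suggestion):

```python
def pig_latin(word):
    # Everything after the first letter, then the first letter, then "ay"
    return word[1:] + word[0] + "ay"

print(pig_latin("pig"))    # igpay
print(pig_latin("latin"))  # atinlay
print(pig_latin("eat"))    # ateay
```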
Some types of object have built-in functions, called **methods**. We can call a method by writing `.` after an object's name, followed by the name of the method. Different types of object have different methods available. Here is one method of strings, called `split`, which splits it up into a list of smaller strings:
```
tagged_string = "they_PNP"
token_tag = tagged_string.split('_') # split whenever we see '_'
print(token_tag)
token, tag = tagged_string.split('_') # we can also assign each element to a separate variable
print(token, tag)
long_string = "they fish in rivers in December"
tokens = long_string.split() # if we don't specify what to split on, the function splits on whitespace
print(tokens)
```
### Loops
Another way that we can execute lines multiple times is with a **loop**. If we have an object that has multiple elements (like a list, tuple, or dict), then we can loop through them, and execute some code for each element. We write a loop using the keywords `for` and `in`, and define a new variable that stands for the current element. As with a function, the code inside the loop is indented.
```
for x in [1, 2, 3, 'pineapple']:
print(x, 5*x)
my_dict = {'banana': 'yellow', 'apple': 'green', 'orange': 'orange'}
for x in my_dict: # This iterates through the keys of the dict
print('this', x, 'is', my_dict[x])
tuple_dict = {('the', 'fish'): 351, ('dog', 'barked'): 233, ('cat', 'barked'): 1}
for thing in tuple_dict: # Each thing is a tuple
print(thing[0])
for p, q in tuple_dict: # We can break the tuple into two parts
print(p, q, tuple_dict[p,q])
```
Variables defined inside a loop will be available in the next iteration. For example, let's iterate through a list of tokens and print both the current token and the previous token:
```
tokens = "they fish in rivers in December".split()
previous = "nothing yet..."
for current in tokens:
print('processing new token!')
print('current token is', current)
print('previous token was', previous)
previous = current # Assign a new value to 'previous', in preparation for the next iteration
```
What happens if we get rid of the line `previous = "nothing yet..."`?
Try writing a function that will take a list of numbers as input, and return the product of all the numbers.
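One possible solution (the name `product` is only a suggestion) carries a running result from one iteration to the next, just like `previous` above:

```python
def product(numbers):
    result = 1               # start from 1, the identity for multiplication
    for n in numbers:
        result = result * n  # multiply in each number as we go
    return result

print(product([2, 3, 4]))  # 24
```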
### Logic
Sometimes we may want to do different things depending on the value of an object. For example, suppose we have a list of strings, and want to print everything that starts with the letter 'h'. To do this, we write `if`, followed by a condition that can be `True` or `False`.
```
my_list = 'the hungry hamster has a question to ask'.split()
for w in my_list:
if w[0] == 'h': # A *double* equals sign checks for equality
        print(w) # As with loops and functions, we indent the code
```
Optionally, we can say what to do if the condition is not true, using the keyword `else`:
```
for w in 'the hungry hamster has a question to ask'.split():
if w[0] == 'h':
print(w, 'begins with h')
else:
print(w, 'does not begin with h')
```
Here are a few examples of conditions that we can use:
```
print('1 == 1', 1 == 1) # Equality
print('1 > 2', 1 > 2) # More than
print('1 < 2', 1 < 2) # Less than
print('1 in [1, 2, 3]', 1 in [1, 2, 3]) # Being in a list
print('5 in [1, 2, 3]', 5 in [1, 2, 3])
print('"h" in "hamster"', "h" in "hamster") # Being in a string
print('"cat" in {"cat": 5, "the" : 8}', "cat" in {"cat": 5, "the" : 8}) # Being a key in a dictionary
print('"dog" in {"cat": 5, "the" : 8}', "dog" in {"cat": 5, "the" : 8})
```
### Putting it all together
For example, let's go through a toy corpus and count how many times each token appears.
```
corpus = 'Once upon a time there was a dragon . The dragon liked to fly . The end .'
tokens = corpus.split()
frequency = {} # An empty dictionary, which will map from words to their counts
for w in tokens:
if w in frequency: # If we've seen the word before
frequency[w] += 1 # Add 1 to the count
else:
frequency[w] = 1 # Start the count from 1
# Let's print all the words that appear more than once
for w in frequency:
if frequency[w] > 1:
print(w)
```
In the above code, we are effectively saying that when a word is not in the `frequency` dictionary, the default value is 0. Because this is a common thing to do, there is a special type of dict called a `defaultdict`, which can be given a default value. The code below effectively does the same thing as the code above.
Because a `defaultdict` is not a core part of Python, we have to **import** it to make it available. There are many packages which extend Python in various ways, including a number of packages specifically for Natural Language Processing.
```
from collections import defaultdict # Make defaultdict available
corpus = 'Once upon a time there was a dragon . The dragon liked to fly . The end .'
tokens = corpus.split()
frequency = defaultdict(int) # The default value will be an int (integer), which defaults to 0
for w in tokens:
frequency[w] += 1 # Add 1 to the count
# Let's print all the words that appear more than once
for w in frequency:
if frequency[w] > 1:
print(w)
```
### Writing an HMM POS-tagger
You should now know enough programming to write your own HMM part-of-speech tagger! Use everything we've covered above to split up the corpus into the bits you need, count the frequencies of the things you need, calculate the relevant probabilities, and finally write a function that will take a tagged string as input, and return the probability of that sequence in the model.
The comments below should guide you through writing a tagger. You can uncomment lines and complete them, as you need to - lines with '...' are incomplete! If you're halfway through writing the tagger and you're not sure if you've done something right, you can `print` things to check that they're what you expect.
```
corpus = "They_PNP used_VVD to_TO0 can_VVI fish_NN2 in_PRP those_DT0 towns_NN2 ._PUN These_DT0 days_NN2 now_AV0 in_PRP these_DT0 areas_NN2 few_DT0 people_NN2 can_VM0 fish_VVB ._PUN"
### We need to find the frequency of each tag, each tag-tag bigram, and each tag-token combination
### First, we need to define the right type of object that will record these frequencies
### If you want to use a type that hasn't been imported, make sure to import it first
#tag_count = ...
#tag_tag_count = ...
#tag_token_count = ...
### Next, we need to calculate these counts by looping over the corpus
#token_tag_list = corpus...
#previous_tag = ...
#for token_tag in token_tag_list:
# token, tag = ...
# tag_count[...] ...
# tag_tag_count[...] ...
# tag_token_count[...] ...
# previous_tag ...
### Finally, we need to use these counts to calculate the probabilities
### First define the right type of object
#tag_tag_prob = ...
#tag_token_prob = ...
### And then do the calculation
#for ... in ...:
# tag_tag_prob[...] = ...
#for ... in ...:
# tag_token_prob[...] = ...
### We have now calculated all the probabilities we need for a Hidden Markov Model!
### Let's define a function that will take a tagged sequence, and calculate the probability of generating it
### The 'text' variable will be something like "They_PNP fished_VVD"
#def prob(text):
# token_tag_list = ...
# result = ...
# previous_tag = ...
# for token_tag in token_tag_list:
# token, tag = ...
# result *= ...
# result *= ...
# previous_tag = ...
# return result
### Now, for the last step! Here are the two sequences we wanted to compare:
#option1 = 'These_DT0 areas_NN2 can_VM0 fish_VVB'
#option2 = 'These_DT0 areas_NN2 can_VVB fish_NN2'
#prob1 = prob(option1)
#prob2 = prob(option2)
#print(prob1)
#print(prob2)
#print(prob2 > prob1)
### If you change the corpus at the beginning, see how the results change
```
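If you get stuck, here is one possible way to fill in the skeleton above. This is only a sketch, with two assumptions worth flagging: we pretend the corpus begins just after a full stop (so `previous_tag` starts as `'PUN'`), and we use raw relative-frequency estimates with no smoothing, so any unseen transition or emission makes the whole probability zero.

```python
from collections import defaultdict

corpus = "They_PNP used_VVD to_TO0 can_VVI fish_NN2 in_PRP those_DT0 towns_NN2 ._PUN These_DT0 days_NN2 now_AV0 in_PRP these_DT0 areas_NN2 few_DT0 people_NN2 can_VM0 fish_VVB ._PUN"

# Count each tag, each tag-tag bigram, and each tag-token combination
tag_count = defaultdict(int)
tag_tag_count = defaultdict(int)
tag_token_count = defaultdict(int)
previous_tag = 'PUN'  # pretend the corpus starts just after a full stop
for token_tag in corpus.split():
    token, tag = token_tag.split('_')
    tag_count[tag] += 1
    tag_tag_count[previous_tag, tag] += 1
    tag_token_count[tag, token] += 1
    previous_tag = tag

# Relative-frequency estimates of the transition and emission probabilities
tag_tag_prob = defaultdict(float)
tag_token_prob = defaultdict(float)
for prev, tag in tag_tag_count:
    tag_tag_prob[prev, tag] = tag_tag_count[prev, tag] / tag_count[prev]
for tag, token in tag_token_count:
    tag_token_prob[tag, token] = tag_token_count[tag, token] / tag_count[tag]

def prob(text):
    # Probability of the model generating this tagged sequence
    result = 1.0
    previous_tag = 'PUN'
    for token_tag in text.split():
        token, tag = token_tag.split('_')
        result *= tag_tag_prob[previous_tag, tag]  # transition probability
        result *= tag_token_prob[tag, token]       # emission probability
        previous_tag = tag
    return result

option1 = 'These_DT0 areas_NN2 can_VM0 fish_VVB'
option2 = 'These_DT0 areas_NN2 can_VVB fish_NN2'
print(prob(option1))
print(prob(option2))
print(prob(option2) > prob(option1))
```

With this toy corpus, the first option (modal `can` followed by the verb `fish`) comes out more probable, while the second gets probability zero because the `NN2` to `VVB` transition never occurs, so the final line prints `False`.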
```
#hide
#skip
! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab
# default_exp losses
# default_cls_lvl 3
#export
from fastai.imports import *
from fastai.torch_imports import *
from fastai.torch_core import *
from fastai.layers import *
#hide
from nbdev.showdoc import *
```
# Loss Functions
> Custom fastai loss functions
```
F.binary_cross_entropy_with_logits(torch.randn(4,5), torch.randint(0, 2, (4,5)).float(), reduction='none')
funcs_kwargs
# export
class BaseLoss():
"Same as `loss_cls`, but flattens input and target."
activation=decodes=noops
def __init__(self, loss_cls, *args, axis=-1, flatten=True, floatify=False, is_2d=True, **kwargs):
store_attr("axis,flatten,floatify,is_2d")
self.func = loss_cls(*args,**kwargs)
functools.update_wrapper(self, self.func)
def __repr__(self): return f"FlattenedLoss of {self.func}"
@property
def reduction(self): return self.func.reduction
@reduction.setter
def reduction(self, v): self.func.reduction = v
def __call__(self, inp, targ, **kwargs):
inp = inp .transpose(self.axis,-1).contiguous()
targ = targ.transpose(self.axis,-1).contiguous()
if self.floatify and targ.dtype!=torch.float16: targ = targ.float()
if targ.dtype in [torch.int8, torch.int16, torch.int32]: targ = targ.long()
if self.flatten: inp = inp.view(-1,inp.shape[-1]) if self.is_2d else inp.view(-1)
return self.func.__call__(inp, targ.view(-1) if self.flatten else targ, **kwargs)
```
Wrapping a general loss function inside of `BaseLoss` provides extra functionalities to your loss functions:
- flattens the tensors before trying to take the losses since it's more convenient (with a potential transpose to put `axis` at the end)
- a potential `activation` method that tells the library if there is an activation fused in the loss (useful for inference and methods such as `Learner.get_preds` or `Learner.predict`)
- a potential <code>decodes</code> method that is used on predictions in inference (for instance, an argmax in classification)
The `args` and `kwargs` will be passed to `loss_cls` during the initialization to instantiate a loss function. `axis` is put at the end for losses like softmax that are often performed on the last axis. If `floatify=True`, the `targs` will be converted to floats (useful for losses that only accept float targets like `BCEWithLogitsLoss`), and `is_2d` determines if we flatten while keeping the first dimension (batch size) or completely flatten the input. We want the first for losses like Cross Entropy, and the second for pretty much anything else.
```
# export
@delegates()
class CrossEntropyLossFlat(BaseLoss):
"Same as `nn.CrossEntropyLoss`, but flattens input and target."
y_int = True
@use_kwargs_dict(keep=True, weight=None, ignore_index=-100, reduction='mean')
def __init__(self, *args, axis=-1, **kwargs): super().__init__(nn.CrossEntropyLoss, *args, axis=axis, **kwargs)
def decodes(self, x): return x.argmax(dim=self.axis)
def activation(self, x): return F.softmax(x, dim=self.axis)
tst = CrossEntropyLossFlat()
output = torch.randn(32, 5, 10)
target = torch.randint(0, 10, (32,5))
#nn.CrossEntropy would fail with those two tensors, but not our flattened version.
_ = tst(output, target)
test_fail(lambda x: nn.CrossEntropyLoss()(output,target))
#Associated activation is softmax
test_eq(tst.activation(output), F.softmax(output, dim=-1))
#This loss function has a decodes which is argmax
test_eq(tst.decodes(output), output.argmax(dim=-1))
#In a segmentation task, we want to take the softmax over the channel dimension
tst = CrossEntropyLossFlat(axis=1)
output = torch.randn(32, 5, 128, 128)
target = torch.randint(0, 5, (32, 128, 128))
_ = tst(output, target)
test_eq(tst.activation(output), F.softmax(output, dim=1))
test_eq(tst.decodes(output), output.argmax(dim=1))
# export
@delegates()
class BCEWithLogitsLossFlat(BaseLoss):
"Same as `nn.BCEWithLogitsLoss`, but flattens input and target."
@use_kwargs_dict(keep=True, weight=None, reduction='mean', pos_weight=None)
def __init__(self, *args, axis=-1, floatify=True, thresh=0.5, **kwargs):
if kwargs.get('pos_weight', None) is not None and kwargs.get('flatten', None) is True:
raise ValueError("`flatten` must be False when using `pos_weight` to avoid a RuntimeError due to shape mismatch")
if kwargs.get('pos_weight', None) is not None: kwargs['flatten'] = False
super().__init__(nn.BCEWithLogitsLoss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)
self.thresh = thresh
def decodes(self, x): return x>self.thresh
def activation(self, x): return torch.sigmoid(x)
tst = BCEWithLogitsLossFlat()
output = torch.randn(32, 5, 10)
target = torch.randn(32, 5, 10)
#nn.BCEWithLogitsLoss would fail with those two tensors, but not our flattened version.
_ = tst(output, target)
test_fail(lambda x: nn.BCEWithLogitsLoss()(output,target))
output = torch.randn(32, 5)
target = torch.randint(0,2,(32, 5))
#nn.BCEWithLogitsLoss would fail with int targets but not our flattened version.
_ = tst(output, target)
test_fail(lambda x: nn.BCEWithLogitsLoss()(output,target))
tst = BCEWithLogitsLossFlat(pos_weight=torch.ones(10))
output = torch.randn(32, 5, 10)
target = torch.randn(32, 5, 10)
_ = tst(output, target)
test_fail(lambda x: nn.BCEWithLogitsLoss()(output,target))
#Associated activation is sigmoid
test_eq(tst.activation(output), torch.sigmoid(output))
# export
@use_kwargs_dict(weight=None, reduction='mean')
def BCELossFlat(*args, axis=-1, floatify=True, **kwargs):
"Same as `nn.BCELoss`, but flattens input and target."
return BaseLoss(nn.BCELoss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)
tst = BCELossFlat()
output = torch.sigmoid(torch.randn(32, 5, 10))
target = torch.randint(0,2,(32, 5, 10))
_ = tst(output, target)
test_fail(lambda x: nn.BCELoss()(output,target))
# export
@use_kwargs_dict(reduction='mean')
def MSELossFlat(*args, axis=-1, floatify=True, **kwargs):
"Same as `nn.MSELoss`, but flattens input and target."
return BaseLoss(nn.MSELoss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)
tst = MSELossFlat()
output = torch.sigmoid(torch.randn(32, 5, 10))
target = torch.randint(0,2,(32, 5, 10))
_ = tst(output, target)
test_fail(lambda x: nn.MSELoss()(output,target))
#hide
#cuda
#Test losses work in half precision
output = torch.sigmoid(torch.randn(32, 5, 10)).half().cuda()
target = torch.randint(0,2,(32, 5, 10)).half().cuda()
for tst in [BCELossFlat(), MSELossFlat()]: _ = tst(output, target)
# export
@use_kwargs_dict(reduction='mean')
def L1LossFlat(*args, axis=-1, floatify=True, **kwargs):
"Same as `nn.L1Loss`, but flattens input and target."
return BaseLoss(nn.L1Loss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)
#export
class LabelSmoothingCrossEntropy(Module):
y_int = True
def __init__(self, eps:float=0.1, reduction='mean'): self.eps,self.reduction = eps,reduction
def forward(self, output, target):
c = output.size()[-1]
log_preds = F.log_softmax(output, dim=-1)
if self.reduction=='sum': loss = -log_preds.sum()
else:
loss = -log_preds.sum(dim=-1) #We divide by that size at the return line so sum and not mean
if self.reduction=='mean': loss = loss.mean()
return loss*self.eps/c + (1-self.eps) * F.nll_loss(log_preds, target.long(), reduction=self.reduction)
def activation(self, out): return F.softmax(out, dim=-1)
def decodes(self, out): return out.argmax(dim=-1)
```
On top of the formula we define:
- a `reduction` attribute, that will be used when we call `Learner.get_preds`
- an `activation` function that represents the activation fused in the loss (since we use cross entropy behind the scenes). It will be applied to the output of the model when calling `Learner.get_preds` or `Learner.predict`
- a <code>decodes</code> function that converts the output of the model to a format similar to the target (here indices). This is used in `Learner.predict` and `Learner.show_results` to decode the predictions
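As a quick numeric sanity check of the formula (a standalone pure-Python illustration, not part of fastai): for a single example, the loss is `eps/c` times the sum of all the negative log-probabilities plus `(1-eps)` times the ordinary negative log-likelihood of the target class, so setting `eps=0` should recover plain cross entropy.

```python
import math

def label_smoothing_ce(logits, target, eps=0.1):
    # Label-smoothed cross entropy for a single example
    c = len(logits)
    log_z = math.log(sum(math.exp(x) for x in logits))
    log_preds = [x - log_z for x in logits]   # log_softmax
    smooth_term = -sum(log_preds) * eps / c   # uniform part of the smoothed target
    nll_term = -(1 - eps) * log_preds[target] # usual NLL part
    return smooth_term + nll_term

logits = [2.0, 0.5, -1.0]
plain_ce = -(logits[0] - math.log(sum(math.exp(x) for x in logits)))
print(label_smoothing_ce(logits, 0))           # smoothed loss
print(label_smoothing_ce(logits, 0, eps=0.0))  # with eps=0 this equals plain cross entropy
print(plain_ce)
```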
```
#export
@delegates()
class LabelSmoothingCrossEntropyFlat(BaseLoss):
"Same as `LabelSmoothingCrossEntropy`, but flattens input and target."
y_int = True
@use_kwargs_dict(keep=True, eps=0.1, reduction='mean')
def __init__(self, *args, axis=-1, **kwargs): super().__init__(LabelSmoothingCrossEntropy, *args, axis=axis, **kwargs)
def activation(self, out): return F.softmax(out, dim=-1)
def decodes(self, out): return out.argmax(dim=-1)
```
## Export -
```
#hide
from nbdev.export import *
notebook2script()
```
# Getting dataset information
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
x_train = pd.read_csv("data/train.csv")
x_test = pd.read_csv("data/test.csv")
x_test.head()
y_train = x_train["label"].values
y_train.shape
y_train[:10]
x_train = x_train.drop("label", axis=1).values
x_train.shape
x_test = x_test.values
x_test.shape
x_test = x_test.reshape(x_test.shape[0], 28,28)
x_train = x_train.reshape(x_train.shape[0], 28,28)
print(x_train.shape, x_test.shape)
def draw_mnist(data, nrow = 3, ncol = 3, title=None):
f, ax = plt.subplots(nrows=nrow, ncols=ncol, sharex=True, sharey=True)
for i in range(nrow):
for j in range(ncol):
ax[i, j].imshow(data[i * ncol + j], cmap=plt.cm.binary)
if title is None:
ax[i, j].set_title(i * ncol + j)
else:
ax[i, j].set_title(title[i * ncol + j])
plt.show()
draw_mnist(x_train, 3, 5, y_train)
draw_mnist(x_test, 3, 5)
```
# Build model
```
import tensorflow as tf
def conv_layer(input, w, b, s = [1,1,1,1], p = 'SAME'):
conv = tf.nn.conv2d(input, w, s, p)
conv = tf.nn.bias_add(conv, b)
return tf.nn.relu(conv)
def pool_layer(input, size=2, s=[1, 1, 1, 1], p='SAME', ptype='max'):
pool = tf.nn.max_pool(input, ksize=[1, size, size, 1], strides=s, padding=p)
return pool
def fc_layer(input, w, b, relu=False, drop=False, drop_prob=0.5):
fc = tf.add(tf.matmul(input, w), b)
if relu:
fc = tf.nn.relu(fc)
if drop:
fc = tf.nn.dropout(fc, drop_prob)
return fc
def build_model_short(input):
# conv - relu - pool 1
w_conv11 = tf.Variable(tf.truncated_normal([5, 5, 1, 32]))
b_conv11 = tf.Variable(tf.zeros([32]))
conv1 = conv_layer(input, w_conv11, b_conv11)
pool1 = pool_layer(conv1)
# conv - relu - pool 2
w_conv12 = tf.Variable(tf.truncated_normal([3, 3, 32, 64], stddev=0.1))
b_conv12 = tf.Variable(tf.zeros([64]))
conv2 = conv_layer(pool1, w_conv12, b_conv12)
pool2 = pool_layer(conv2)
# flat
conv_size = pool2.get_shape().as_list()
flat_shape = conv_size[1] * conv_size[2] * conv_size[3]
flat = tf.reshape(pool2, [conv_size[0], flat_shape])
# fc1 size 100
fc1_size = 100
w_fc1 = tf.Variable(tf.truncated_normal([flat_shape, fc1_size], stddev=0.1))
b_fc1 = tf.Variable(tf.truncated_normal([fc1_size], stddev=0.1))
fc1 = fc_layer(flat, w_fc1, b_fc1, relu=True, drop_prob=0.4)
# fc2 size 10
fc2_size = 10
w_fc2 = tf.Variable(tf.truncated_normal([fc1_size, fc2_size], stddev=0.1))
b_fc2 = tf.Variable(tf.truncated_normal([fc2_size], stddev=0.1))
fc2 = fc_layer(fc1, w_fc2, b_fc2)
return fc2
lr = 0.0001
train_batch_size, eval_batch_size = 1000, 1000
num_classes = 10
input_w, input_h, channels = 28, 28, 1
train_input_shape = (train_batch_size, input_w, input_h, channels)
train_input = tf.placeholder(tf.float32, shape=train_input_shape, name='train_input')
train_target = tf.placeholder(tf.int32, shape=(train_batch_size, num_classes), name='train_target')
# eval_input_shape = (eval_batch_size, input_w, input_h, channels)
# eval_input = tf.placeholder(tf.float32, shape=eval_input_shape)
# eval_target = tf.placeholder(tf.int32, shape=(eval_batch_size, num_classes))
# gpu0
model_output = build_model_short(train_input)
# gpu1
# eval_model_output = build_model_short(eval_input)
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=model_output, labels=train_target))
# eval_cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=eval_model_output, labels=eval_target))
optimazer = tf.train.AdamOptimizer(learning_rate=lr).minimize(cross_entropy)
init = tf.global_variables_initializer()
# data preparation
EVAL_SIZE = 1000
one_hot_labels = np.array([np.array([int(i==number) for i in range(10)]) for number in y_train])  # one-hot encode the labels
eval_data = np.expand_dims(x_train[-EVAL_SIZE:], -1)/255.0
eval_labels = one_hot_labels[-EVAL_SIZE:]
input_data = np.expand_dims(x_train[:-EVAL_SIZE], -1)/255.0
input_labels = one_hot_labels[:-EVAL_SIZE]
print('train: ', input_data.shape, input_labels.shape)
print('eval: ', eval_data.shape, eval_labels.shape)
epochs = 30
sess = tf.Session()
sess.run(init)
for epoch in range(epochs):
start_batch = 0
end_batch = train_batch_size
while end_batch <= input_data.shape[0]:
_, cost_train = sess.run([optimazer, cross_entropy],
feed_dict={train_input: input_data[start_batch:end_batch],
train_target: input_labels[start_batch:end_batch]})
start_batch += train_batch_size
end_batch += train_batch_size
cost_eval = sess.run(cross_entropy,
feed_dict={train_input: eval_data,
train_target: eval_labels})
print('epoch: %d, train loss: %f, val loss: %f' % (epoch, cost_train, cost_eval))
test_data = np.expand_dims(x_test, -1)
print(test_data.shape)
answer = np.array([], dtype=np.int32)
start_batch = 0
end_batch = eval_batch_size
while end_batch <= test_data.shape[0]:
pred = sess.run(tf.nn.softmax(model_output), feed_dict={train_input: test_data[start_batch:end_batch]})
answer = np.hstack((answer, np.argmax(pred, axis=1, )))
start_batch += train_batch_size
end_batch += train_batch_size
sess.close()
answer.shape
answer
sub_sample = pd.read_csv('data/sample_submission.csv')
sub_sample.head()
submission = pd.DataFrame({'ImageId': range(1, answer.shape[0]+1), 'Label': answer })
# submission['Label'] = answer
submission.to_csv("sub_18_09_18_1.csv", index=False, encoding='utf-8')
```
# Finding Outliers with k-Means
## Setup
```
import numpy as np
import pandas as pd
import sqlite3
with sqlite3.connect('../../ch_11/logs/logs.db') as conn:
logs_2018 = pd.read_sql(
"""
SELECT *
FROM logs
WHERE datetime BETWEEN "2018-01-01" AND "2019-01-01";
""",
conn, parse_dates=['datetime'], index_col='datetime'
)
logs_2018.head()
def get_X(log, day):
"""
Get data we can use for the X
Parameters:
- log: The logs dataframe
- day: A day or single value we can use as a datetime index slice
Returns:
A pandas DataFrame
"""
return pd.get_dummies(log[day].assign(
failures=lambda x: 1 - x.success
).query('failures > 0').resample('1min').agg(
{'username':'nunique', 'failures': 'sum'}
).dropna().rename(
columns={'username':'usernames_with_failures'}
).assign(
day_of_week=lambda x: x.index.dayofweek,
hour=lambda x: x.index.hour
).drop(columns=['failures']), columns=['day_of_week', 'hour'])
X = get_X(logs_2018, '2018')
X.columns
```
## k-Means
Since we want a "normal" activity cluster and an "anomaly" cluster, we need to make 2 clusters.
```
from sklearn.cluster import KMeans
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
kmeans_pipeline = Pipeline([
('scale', StandardScaler()),
('kmeans', KMeans(random_state=0, n_clusters=2))
]).fit(X)
```
The cluster label doesn't mean anything to us, but we can examine the size of each cluster. We don't expect the clusters to be of equal size because anomalous activity doesn't happen as often as normal activity (we presume).
```
preds = kmeans_pipeline.predict(X)
pd.Series(preds).value_counts()
```
### Evaluating the clustering
#### Step 1: Get the true labels
```
with sqlite3.connect('../../ch_11/logs/logs.db') as conn:
hackers_2018 = pd.read_sql(
'SELECT * FROM attacks WHERE start BETWEEN "2018-01-01" AND "2019-01-01";',
conn, parse_dates=['start', 'end']
).assign(
duration=lambda x: x.end - x.start,
start_floor=lambda x: x.start.dt.floor('min'),
end_ceil=lambda x: x.end.dt.ceil('min')
)
def get_y(datetimes, hackers, resolution='1min'):
"""
Get data we can use for the y (whether or not a hacker attempted a log in during that time).
Parameters:
- datetimes: The datetimes to check for hackers
- hackers: The dataframe indicating when the attacks started and stopped
- resolution: The granularity of the datetime. Default is 1 minute.
Returns:
A pandas Series of booleans.
"""
date_ranges = hackers.apply(
lambda x: pd.date_range(x.start_floor, x.end_ceil, freq=resolution),
axis=1
)
dates = pd.Series()
for date_range in date_ranges:
dates = pd.concat([dates, date_range.to_series()])
return datetimes.isin(dates)
is_hacker = get_y(X.reset_index().datetime, hackers_2018)
```
### Step 2: Calculate Fowlkes Mallows Score
This score indicates the percentage of observations that belong to the same cluster in both the true labels and the predicted labels.
```
from sklearn.metrics import fowlkes_mallows_score
fowlkes_mallows_score(is_hacker, preds)
```
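For intuition, the Fowlkes-Mallows score can also be computed directly from pair counts: over all pairs of observations, TP pairs share a cluster in both labelings, FP pairs only in the prediction, FN pairs only in the truth, and the score is TP / sqrt((TP + FP) * (TP + FN)). Here is a minimal pure-Python sketch on a toy labeling (the function name and toy labels are illustrative, not part of the scikit-learn API):

```python
from itertools import combinations
from math import sqrt

def fowlkes_mallows(true_labels, pred_labels):
    """Pair-counting Fowlkes-Mallows: TP / sqrt((TP + FP) * (TP + FN))."""
    tp = fp = fn = 0
    for i, j in combinations(range(len(true_labels)), 2):
        same_true = true_labels[i] == true_labels[j]
        same_pred = pred_labels[i] == pred_labels[j]
        if same_true and same_pred:
            tp += 1  # pair shares a cluster in both labelings
        elif same_pred:
            fp += 1  # pair shares a cluster only in the prediction
        elif same_true:
            fn += 1  # pair shares a cluster only in the true labels
    return tp / sqrt((tp + fp) * (tp + fn))

print(fowlkes_mallows([0, 0, 0, 1, 1], [0, 0, 1, 1, 1]))  # 0.5
```

For identical labelings the score is 1.0; it falls toward 0 as the clusterings disagree on more pairs.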
# Detecting malaria in blood smear images
### The Problem
Malaria is a mosquito-borne disease caused by the parasite _Plasmodium_. There are an estimated 219 million cases of malaria annually, with 435,000 deaths, many of whom are children. Malaria is prevalent in sub-tropical regions of Africa.
Microscopy is the most common and reliable method for diagnosing malaria and computing parasitic load.
With this technique, malaria parasites are identified by examining a drop of the patient’s blood, spread out as a “blood smear” on a slide. Prior to examination, the specimen is stained (most often with the Giemsa stain) to give the parasites a distinctive appearance. This technique remains the gold standard for laboratory confirmation of malaria.

Blood smear from a patient with malaria; microscopic examination shows _Plasmodium falciparum_ parasites (arrows) infecting some of the patient’s red blood cells. (CDC photo)
However, the diagnostic accuracy of this technique depends on human expertise and can be affected by inter-observer variability.
### Deep learning as a diagnostic aid
Recent advances in computing and deep learning techniques have led to applications in large-scale medical image analysis. Here, we aim to use a convolutional neural network (CNN) to quickly and accurately distinguish parasitized from healthy cells in blood smears.
This notebook is based on the work presented by [Dipanjan Sarkar](https://towardsdatascience.com/detecting-malaria-with-deep-learning-9e45c1e34b60)
### About the dataset
A [dataset](https://ceb.nlm.nih.gov/repositories/malaria-datasets/) of parasitized and unparasitized cells from blood smear slides was collected and annotated by [Rajaraman et al](https://doi.org/10.7717/peerj.4568). The dataset contains a total of 27,558 cell images with equal instances of parasitized and uninfected cells from Giemsa-stained thin blood smear slides from 150 P. falciparum-infected and 50 healthy patients collected and photographed at Chittagong Medical College Hospital, Bangladesh. There are also CSV files containing the Patient-ID to cell mappings for the parasitized and uninfected classes. The CSV file for the parasitized class contains 151 patient-ID entries. The slide images for the parasitized patient-ID “C47P8thinOriginal” are read from two different microscope models (Olympus and Motif). The CSV file for the uninfected class contains 201 entries since the normal cells from the infected patients’ slides also make it to the normal cell category (151+50 = 201).
The data appears along with the publication:
Rajaraman S, Antani SK, Poostchi M, Silamut K, Hossain MA, Maude, RJ, Jaeger S, Thoma GR. (2018) Pre-trained convolutional neural networks as feature extractors toward improved Malaria parasite detection in thin blood smear images. PeerJ6:e4568 https://doi.org/10.7717/peerj.4568
## Malaria Dataset
Medium post:
https://towardsdatascience.com/detecting-malaria-using-deep-learning-fd4fdcee1f5a
Data:
https://ceb.nlm.nih.gov/repositories/malaria-datasets/
## Data preprocessing
The [cell images](https://ceb.nlm.nih.gov/proj/malaria/cell_images.zip) dataset can be downloaded from the [NIH repository](https://ceb.nlm.nih.gov/repositories/malaria-datasets/).
Parasitized and healthy cells are sorted into their own folders.
```
# mkdir ../data/
# wget https://ceb.nlm.nih.gov/proj/malaria/cell_images.zip
# unzip cell_images.zip
import os
os.listdir('../data/cell_images/')
import random
import glob
# Get file paths for files
base_dir = os.path.join('../data/cell_images')
infected_dir = os.path.join(base_dir, 'Parasitized')
healthy_dir = os.path.join(base_dir, 'Uninfected')
# Glob is used to identify filepath patterns
infected_files = glob.glob(infected_dir+'/*.png')
healthy_files = glob.glob(healthy_dir+'/*.png')
# View size of dataset
len(infected_files), len(healthy_files)
```
Our data is evenly split between parasitized and healthy cells/images so we won't need to further balance our data.
## Split data into train, test, split sets
We can aggregate all of our images by adding the filepaths and labels into a single dataframe.
We'll then shuffle and split the data into train, test, and validation sets (roughly 60/30/10).
```
import numpy as np
import pandas as pd
np.random.seed(1)
# Build a dataframe of filenames with labels
files = pd.DataFrame(data={'filename': infected_files, 'label': ['malaria' for i in range(len(infected_files))]})
files = pd.concat([files, pd.DataFrame(data={'filename': healthy_files, 'label': ['healthy' for i in range(len(healthy_files))]})])
files = files.sample(frac=1).reset_index(drop=True) # Shuffle rows
files.head()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(files.filename.values, files.label.values, test_size=0.3, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.1, random_state=42)
X_train.shape, X_val.shape, y_test.shape
```
As the dimensions of each image will vary, we will resize the images to be 125 x 125 pixels. The cv2 module can be used to load and resize images.
```
import cv2
# Read and resize images
nrows = 125
ncols = 125
channels = 3
cv2.imread(X_train[0], cv2.IMREAD_COLOR)
cv2.resize(cv2.imread(X_train[0], cv2.IMREAD_COLOR), (nrows, ncols), interpolation=cv2.INTER_CUBIC).shape
import threading
from concurrent import futures
# Resize images
IMG_DIMS = (125, 125)
def get_img_data_parallel(idx, img, total_imgs):
if idx % 5000 == 0 or idx == (total_imgs - 1):
print('{}: working on img num: {}'.format(threading.current_thread().name,
idx))
img = cv2.imread(img)
img = cv2.resize(img, dsize=IMG_DIMS,
interpolation=cv2.INTER_CUBIC)
img = np.array(img, dtype=np.float32)
return img
ex = futures.ThreadPoolExecutor(max_workers=None)
X_train_inp = [(idx, img, len(X_train)) for idx, img in enumerate(X_train)]
X_val_inp = [(idx, img, len(X_val)) for idx, img in enumerate(X_val)]
X_test_inp = [(idx, img, len(X_test)) for idx, img in enumerate(X_test)]
print('Loading Train Images:')
X_train_map = ex.map(get_img_data_parallel,
[record[0] for record in X_train_inp],
[record[1] for record in X_train_inp],
[record[2] for record in X_train_inp])
X_train = np.array(list(X_train_map))
print('\nLoading Validation Images:')
X_val_map = ex.map(get_img_data_parallel,
[record[0] for record in X_val_inp],
[record[1] for record in X_val_inp],
[record[2] for record in X_val_inp])
X_val = np.array(list(X_val_map))
print('\nLoading Test Images:')
X_test_map = ex.map(get_img_data_parallel,
[record[0] for record in X_test_inp],
[record[1] for record in X_test_inp],
[record[2] for record in X_test_inp])
X_test = np.array(list(X_test_map))
X_train.shape, X_val.shape, X_test.shape
```
Using the matplotlib module, we can view a sample of the resized cell images. A brief inspection shows the presence of purple-stained parasites only in malaria-labeled samples.
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(1 , figsize = (8 , 8))
n = 0
for i in range(16):
n += 1
r = np.random.randint(0 , X_train.shape[0] , 1)
plt.subplot(4 , 4 , n)
plt.subplots_adjust(hspace = 0.5 , wspace = 0.5)
plt.imshow(X_train[r[0]]/255.)
plt.title('{}'.format(y_train[r[0]]))
plt.xticks([]) , plt.yticks([])
```
## Model training
We can set some initial parameters for our model, including batch size, the number of classes, number of epochs, and image dimensions.
We'll encode the text category labels as 0 or 1.
```
from sklearn.preprocessing import LabelEncoder
BATCH_SIZE = 64
NUM_CLASSES = 2
EPOCHS = 25
INPUT_SHAPE = (125, 125, 3)
X_train_imgs_scaled = X_train / 255.
X_val_imgs_scaled = X_val / 255.
le = LabelEncoder()
le.fit(y_train)
y_train_enc = le.transform(y_train)
y_val_enc = le.transform(y_val)
print(y_train[:6], y_train_enc[:6])
```
### Simple CNN model
To start with, we'll build a simple CNN model with 2 convolution and pooling layers and a dense dropout layer for regularization.
```
from keras.models import Sequential
from keras.utils import to_categorical
from keras.layers import Conv2D, Dense, MaxPooling2D, Flatten
# Build a simple CNN
model = Sequential()
model.add(Conv2D(32, kernel_size=(5,5), strides=(1,1), activation='relu', input_shape=INPUT_SHAPE))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model.add(Conv2D(64, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(1000, activation='relu'))
model.add(Dense(1, activation='sigmoid'))  # sigmoid for binary output; softmax on a single unit would always output 1
# out = tf.keras.layers.Dense(1, activation='sigmoid')(drop2)
# model = tf.keras.Model(inputs=inp, outputs=out)
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.summary()
```
We can now train the model and evaluate its accuracy.
```
import datetime
from keras import callbacks
# View accuracy
logdir = os.path.join('../tensorboard_logs',
datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = callbacks.TensorBoard(logdir, histogram_freq=1)
reduce_lr = callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5,
patience=2, min_lr=0.000001)
callback_list = [reduce_lr, tensorboard_callback]  # avoid shadowing the imported callbacks module
history = model.fit(x=X_train_imgs_scaled, y=y_train_enc,
                    batch_size=BATCH_SIZE,
                    epochs=EPOCHS,
                    validation_data=(X_val_imgs_scaled, y_val_enc),
                    callbacks=callback_list,
                    verbose=1)
```
<img align="right" src="images/tf.png" width="128"/>
<img align="right" src="images/ninologo.png" width="128"/>
<img align="right" src="images/dans.png" width="128"/>
# Tutorial
This notebook gets you started with using
[Text-Fabric](https://annotation.github.io/text-fabric/) for coding in the Old-Babylonian Letter corpus (cuneiform).
Familiarity with the underlying
[data model](https://annotation.github.io/text-fabric/tf/about/datamodel.html)
is recommended.
## Installing Text-Fabric
### Python
You need to have Python on your system. Most systems have it out of the box,
but alas, that is python2 and we need at least python **3.6**.
Install it from [python.org](https://www.python.org) or from
[Anaconda](https://www.anaconda.com/download).
### TF itself
```
pip3 install text-fabric
```
### Jupyter notebook
You need [Jupyter](http://jupyter.org).
If it is not already installed:
```
pip3 install jupyter
```
## Tip
If you cloned the repository containing this tutorial,
first copy its parent directory to somewhere outside your clone of the repo,
before computing with it.
If you pull changes from the repository later, it will not conflict with
your computations.
Where you put your tutorial directory is up to you.
It will work from any directory.
## Old Babylonian data
Text-Fabric will fetch the data set for you from the newest github release binaries.
The data will be stored in the `text-fabric-data` directory in your home directory.
# Features
The data of the corpus is organized in features.
They are *columns* of data.
Think of the corpus as a gigantic spreadsheet, where row 1 corresponds to the
first sign, row 2 to the second sign, and so on, for all 200,000 signs.
The information which reading each sign has, constitutes a column in that spreadsheet.
The Old Babylonian corpus contains nearly 60 columns, not only for the signs, but also for thousands of other
textual objects, such as clusters, lines, columns, faces, documents.
Instead of putting that information in one big table, the data is organized in separate columns.
We call those columns **features**.
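As a mental model (a hypothetical miniature, not the real Text-Fabric API), each feature can be pictured as a plain mapping from node numbers to values, one column per feature:

```python
# Hypothetical miniature "corpus": nodes are integers, features are columns.
otype = {1: "sign", 2: "sign", 3: "sign", 4: "word", 5: "line"}
reading = {1: "um", 2: "ma", 3: "mi"}  # only sign nodes carry a reading

def feature_value(feature, node):
    # A feature lookup is just a column lookup; None means "no value here".
    return feature.get(node)

print(feature_value(reading, 2))  # ma
print(feature_value(reading, 5))  # None: line nodes have no reading value
```

The real features work the same way conceptually: some columns have values for every sign, others only for particular node types.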
```
%load_ext autoreload
%autoreload 2
import os
import collections
```
# Incantation
The simplest way to get going is by this *incantation*:
```
from tf.app import use
```
For the very latest version, use `hot`.
For the latest release, use `latest`.
If you have cloned the repos (TF app and data), use `clone`.
If you do not want/need to upgrade, leave out the checkout specifiers.
```
A = use("oldbabylonian:clone", checkout="clone", hoist=globals())
# A = use('oldbabylonian:hot', checkout="hot", hoist=globals())
# A = use('oldbabylonian:latest', checkout="latest", hoist=globals())
# A = use('oldbabylonian', hoist=globals())
```
You can see which features have been loaded, and if you click on a feature name, you find its documentation.
If you hover over a name, you see where the feature is located on your system.
## API
The result of the incantation is that we have a bunch of special variables at our disposal
that give us access to the text and data of the corpus.
At this point it is helpful to throw a quick glance at the text-fabric API documentation
(see the links under **API Members** above).
The most essential thing for now is that we can use `F` to access the data in the features
we've loaded.
But there is more, such as `N`, which helps us to walk over the text, as we see in a minute.
The **API members** above show you exactly which new names have been inserted in your namespace.
If you click on these names, you go to the API documentation for them.
## Search
Text-Fabric contains a flexible search engine that works not only for the data of this corpus, but also for other corpora and for data that you add to corpora.
**Search is the quickest way to come up-to-speed with your data, without too much programming.**
Jump to the dedicated [search](search.ipynb) tutorial first, to whet your appetite.
The real power of search lies in the fact that it is integrated in a programming environment.
You can use programming to:
* compose dynamic queries
* process query results
Therefore, the rest of this tutorial is still important when you want to tap that power.
If you continue here, you learn all the basics of data-navigation with Text-Fabric.
# Counting
In order to get acquainted with the data, we start with the simple task of counting.
## Count all nodes
We use the
[`N.walk()` generator](https://annotation.github.io/text-fabric/tf/core/nodes.html#tf.core.nodes.Nodes.walk)
to walk through the nodes.
We compared the TF data to a gigantic spreadsheet, where the rows correspond to the signs.
In Text-Fabric, we call the rows `slots`, because they are the textual positions that can be filled with signs.
We also mentioned that there are also other textual objects.
They are the clusters, lines, faces and documents.
They also correspond to rows in the big spreadsheet.
In Text-Fabric we call all these rows *nodes*, and the `N()` generator
carries us through those nodes in the textual order.
Just one extra thing: the `info` statements generate timed messages.
If you use them instead of `print` you'll get a sense of the amount of time that
the various processing steps typically need.
```
A.indent(reset=True)
A.info("Counting nodes ...")
i = 0
for n in N.walk():
i += 1
A.info("{} nodes".format(i))
```
Here you see it: over 300,000 nodes.
## What are those nodes?
Every node has a type, like sign, or line, face.
But what exactly are they?
Text-Fabric has two special features, `otype` and `oslots`, that must occur in every Text-Fabric data set.
`otype` tells you for each node its type, and you can ask for the number of `slot`s in the text.
Here we go!
```
F.otype.slotType
F.otype.maxSlot
F.otype.maxNode
F.otype.all
C.levels.data
```
This is interesting: above you see all the textual objects, with the average size of their objects,
the node where they start, and the node where they end.
## Count individual object types
This is an intuitive way to count the number of nodes in each type.
Note in passing, how we use the `indent` in conjunction with `info` to produce neat timed
and indented progress messages.
```
A.indent(reset=True)
A.info("counting objects ...")
for otype in F.otype.all:
i = 0
A.indent(level=1, reset=True)
for n in F.otype.s(otype):
i += 1
A.info("{:>7} {}s".format(i, otype))
A.indent(level=0)
A.info("Done")
```
# Viewing textual objects
You can use the A API (the extra power) to display cuneiform text.
See the [display](display.ipynb) tutorial.
# Feature statistics
`F`
gives access to all features.
Every feature has a method
`freqList()`
to generate a frequency list of its values, higher frequencies first.
Here are the repeats of numerals (the `-1` comes from a `n(rrr)`):
```
F.repeat.freqList()
```
Signs have types and clusters have types. We can count them separately:
```
F.type.freqList("cluster")
F.type.freqList("sign")
```
Finally, the flags:
```
F.flags.freqList()
```
# Word matters
## Top 20 frequent words
We represent words by their essential symbols, collected in the feature *sym* (which also exists for signs).
```
for (w, amount) in F.sym.freqList("word")[0:20]:
print(f"{amount:>5} {w}")
```
## Word distribution
Let's do a bit more fancy word stuff.
### Hapaxes
A hapax can be found by picking the words with frequency 1
We print 20 hapaxes.
```
for w in [w for (w, amount) in F.sym.freqList("word") if amount == 1][0:20]:
print(f'"{w}"')
```
### Small occurrence base
The occurrence base of a word is the set of documents in which it occurs.
We compute the occurrence base of each word.
```
occurrenceBase = collections.defaultdict(set)
for w in F.otype.s("word"):
pNum = T.sectionFromNode(w)[0]
occurrenceBase[F.sym.v(w)].add(pNum)
```
An overview of how many words have how big occurrence bases:
```
occurrenceSize = collections.Counter()
for (w, pNums) in occurrenceBase.items():
occurrenceSize[len(pNums)] += 1
occurrenceSize = sorted(
occurrenceSize.items(),
key=lambda x: (-x[1], x[0]),
)
for (size, amount) in occurrenceSize[0:10]:
print(f"base size {size:>4} : {amount:>5} words")
print("...")
for (size, amount) in occurrenceSize[-10:]:
print(f"base size {size:>4} : {amount:>5} words")
```
Let's give the predicate *private* to those words whose occurrence base is a single document.
```
privates = {w for (w, base) in occurrenceBase.items() if len(base) == 1}
len(privates)
```
### Peculiarity of documents
As a final exercise with words, lets make a list of all documents, and show their
* total number of words
* number of private words
* the percentage of private words: a measure of the peculiarity of the document
```
docList = []
empty = set()
ordinary = set()
for d in F.otype.s("document"):
pNum = T.documentName(d)
words = {F.sym.v(w) for w in L.d(d, otype="word")}
a = len(words)
if not a:
empty.add(pNum)
continue
o = len({w for w in words if w in privates})
if not o:
ordinary.add(pNum)
continue
p = 100 * o / a
docList.append((pNum, a, o, p))
docList = sorted(docList, key=lambda e: (-e[3], -e[1], e[0]))
print(f"Found {len(empty):>4} empty documents")
print(f"Found {len(ordinary):>4} ordinary documents (i.e. without private words)")
print(
"{:<20}{:>5}{:>5}{:>5}\n{}".format(
"document",
"#all",
"#own",
"%own",
"-" * 35,
)
)
for x in docList[0:20]:
print("{:<20} {:>4} {:>4} {:>4.1f}%".format(*x))
print("...")
for x in docList[-20:]:
print("{:<20} {:>4} {:>4} {:>4.1f}%".format(*x))
```
# Locality API
We travel upwards and downwards, forwards and backwards through the nodes.
The Locality-API (`L`) provides functions: `u()` for going up, and `d()` for going down,
`n()` for going to next nodes and `p()` for going to previous nodes.
These directions are indirect notions: nodes are just numbers, but by means of the
`oslots` feature they are linked to slots. One node *contains* another node if the one is linked to a set of slots that contains the set of slots that the other is linked to.
And one node is next or previous to another if its slots follow or precede the slots of the other one.
`L.u(node)` **Up** is going to nodes that embed `node`.
`L.d(node)` **Down** is the opposite direction, to those that are contained in `node`.
`L.n(node)` **Next** are the next *adjacent* nodes, i.e. nodes whose first slot comes immediately after the last slot of `node`.
`L.p(node)` **Previous** are the previous *adjacent* nodes, i.e. nodes whose last slot comes immediately before the first slot of `node`.
All these functions yield nodes of all possible otypes.
By passing an optional parameter, you can restrict the results to nodes of that type.
The results are ordered according to the order of things in the text.
The functions always return a tuple, even if there is just one node in the result.
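The containment and adjacency notions behind these functions can be sketched with plain slot sets (a hypothetical `oslots` mapping for illustration; Text-Fabric's actual implementation is precomputed and more involved):

```python
# Hypothetical oslots mapping: each non-slot node is linked to a set of slots.
oslots = {
    10: {1, 2, 3},        # a word occupying slots 1-3
    11: {4, 5},           # a word occupying slots 4-5
    20: {1, 2, 3, 4, 5},  # a line containing both words
}

def contains(a, b):
    # a contains b when a's slot set is a superset of b's slot set
    return oslots[a] >= oslots[b]

def is_next(a, b):
    # b is adjacent-next to a when b's first slot follows a's last slot
    return min(oslots[b]) == max(oslots[a]) + 1

print(contains(20, 10))  # True
print(is_next(10, 11))   # True
```

So `L.u` amounts to finding nodes whose slot sets contain the given node's slots, and `L.n`/`L.p` to finding nodes whose slots start right after (or end right before) it.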
## Going up
We go from the first slot to the document that contains it.
Note the `[0]` at the end. You expect one document, yet `L` returns a tuple.
To get the only element of that tuple, you need to do that `[0]`.
If you are like me, you keep forgetting it, and that will lead to weird error messages later on.
```
firstDoc = L.u(1, otype="document")[0]
print(firstDoc)
```
And let's see all the containing objects of sign 3:
```
s = 3
for otype in F.otype.all:
if otype == F.otype.slotType:
continue
up = L.u(s, otype=otype)
upNode = "x" if len(up) == 0 else up[0]
print("sign {} is contained in {} {}".format(s, otype, upNode))
```
## Going next
Let's go to the next nodes of the first document.
```
afterFirstDoc = L.n(firstDoc)
for n in afterFirstDoc:
print(
"{:>7}: {:<13} first slot={:<6}, last slot={:<6}".format(
n,
F.otype.v(n),
E.oslots.s(n)[0],
E.oslots.s(n)[-1],
)
)
secondDoc = L.n(firstDoc, otype="document")[0]
```
## Going previous
And let's see what is right before the second document.
```
for n in L.p(secondDoc):
print(
"{:>7}: {:<13} first slot={:<6}, last slot={:<6}".format(
n,
F.otype.v(n),
E.oslots.s(n)[0],
E.oslots.s(n)[-1],
)
)
```
## Going down
We go to the faces of the first document, and just count them.
```
faces = L.d(firstDoc, otype="face")
print(len(faces))
```
## The first line
We pick two nodes and explore what is above and below them:
the first line and the first word.
```
for n in [
F.otype.s("word")[0],
F.otype.s("line")[0],
]:
A.indent(level=0)
A.info("Node {}".format(n), tm=False)
A.indent(level=1)
A.info("UP", tm=False)
A.indent(level=2)
A.info("\n".join(["{:<15} {}".format(u, F.otype.v(u)) for u in L.u(n)]), tm=False)
A.indent(level=1)
A.info("DOWN", tm=False)
A.indent(level=2)
A.info("\n".join(["{:<15} {}".format(u, F.otype.v(u)) for u in L.d(n)]), tm=False)
A.indent(level=0)
A.info("Done", tm=False)
```
# Text API
So far, we have mainly seen nodes and their numbers, and the names of node types.
You would almost forget that we are dealing with text.
So let's try to see some text.
In the same way as `F` gives access to feature data,
`T` gives access to the text.
That is also feature data, but you can tell Text-Fabric which features are specifically
carrying the text, and in return Text-Fabric offers you
a Text API: `T`.
## Formats
Cuneiform text can be represented in a number of ways:
* original ATF, with bracketings and flags
* essential symbols: readings and graphemes, repeats and fractions (of numerals), no flags, no clusterings
* unicode symbols
If you wonder where the information about text formats is stored:
not in the program text-fabric, but in the data set.
It has a feature `otext`, which specifies the formats and which features
must be used to produce them. `otext` is the third special feature in a TF data set,
next to `otype` and `oslots`.
It is an optional feature.
If it is absent, there will be no `T` API.
Here is a list of all available formats in this data set.
```
sorted(T.formats)
```
## Using the formats
The `T.text()` function is central to get text representations of nodes. Its most basic usage is
```python
T.text(nodes, fmt=fmt)
```
where `nodes` is a list or iterable of nodes, usually word nodes, and `fmt` is the name of a format.
If you leave out `fmt`, the default `text-orig-full` is chosen.
The result is the text in that format for all nodes specified:
```
T.text([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], fmt="text-orig-plain")
```
There is also another usage of this function:
```python
T.text(node, fmt=fmt)
```
where `node` is a single node.
In this case, the default format is *ntype*`-orig-full` where *ntype* is the type of `node`.
If the format is defined in the corpus, it will be used. Otherwise, the word nodes contained in `node` will be looked up
and represented with the default format `text-orig-full`.
In this way we can sensibly represent a lot of different nodes, such as documents, faces, lines, clusters, words and signs.
We compose a set of example nodes and run `T.text` on them:
```
exampleNodes = [
F.otype.s("sign")[0],
F.otype.s("word")[0],
F.otype.s("cluster")[0],
F.otype.s("line")[0],
F.otype.s("face")[0],
F.otype.s("document")[0],
]
exampleNodes
for n in exampleNodes:
print(f"This is {F.otype.v(n)} {n}:")
print(T.text(n))
print("")
```
## Using the formats
Now let's use those formats to print out the first line in this corpus.
Note that only the formats starting with `text-` are usable for this.
For the `layout-` formats, see [display](display.ipynb).
```
for fmt in sorted(T.formats):
if fmt.startswith("text-"):
print("{}:\n\t{}".format(fmt, T.text(range(1, 12), fmt=fmt)))
```
If we do not specify a format, the **default** format is used (`text-orig-full`).
```
T.text(range(1, 12))
firstLine = F.otype.s("line")[0]
T.text(firstLine)
T.text(firstLine, fmt="text-orig-unicode")
```
The important things to remember are:
* you can supply a list of slot nodes and get them represented in all formats
* you can get non-slot nodes `n` in default format by `T.text(n)`
* you can get non-slot nodes `n` in other formats by `T.text(n, fmt=fmt, descend=True)`
## Whole text in all formats in just 2 seconds
Part of the pleasure of working with computers is that they can crunch massive amounts of data.
The text of the Old Babylonian Letters is a piece of cake.
It takes just a couple of seconds to have that cake and eat it.
In nearly a dozen formats.
```
A.indent(reset=True)
A.info("writing plain text of all letters in all text formats")
text = collections.defaultdict(list)
for ln in F.otype.s("line"):
for fmt in sorted(T.formats):
if fmt.startswith("text-"):
text[fmt].append(T.text(ln, fmt=fmt, descend=True))
A.info("done {} formats".format(len(text)))
for fmt in sorted(text):
print("{}\n{}\n".format(fmt, "\n".join(text[fmt][0:5])))
```
### The full plain text
We write all formats to file, in your `Downloads` folder.
```
for fmt in T.formats:
if fmt.startswith("text-"):
with open(os.path.expanduser(f"~/Downloads/{fmt}.txt"), "w") as f:
f.write("\n".join(text[fmt]))
```
## Sections
A section in the letter corpus is a document, a face or a line.
Knowledge of sections is not baked into Text-Fabric.
The config feature `otext.tf` may specify three section levels, and tell
what the corresponding node types and features are.
From that knowledge it can construct mappings from nodes to sections, e.g. from line
nodes to tuples of the form:
(p-number, face specifier, line number)
You can get the section of a node as a tuple of relevant document, face, and line nodes.
Or you can get it as a passage label, a string.
You can ask for the passage corresponding to the first slot of a node, or the one corresponding to the last slot.
If you are dealing with document and face nodes, you can ask to fill out the line and face parts as well.
Here are examples of getting the section that corresponds to a node and vice versa.
**NB:** `sectionFromNode` always delivers a section specification, either from the
first slot belonging to that node, or, if `lastSlot`, from the last slot
belonging to that node.
```
someNodes = (
F.otype.s("sign")[100000],
F.otype.s("word")[10000],
F.otype.s("cluster")[5000],
F.otype.s("line")[15000],
F.otype.s("face")[1000],
F.otype.s("document")[500],
)
for n in someNodes:
nType = F.otype.v(n)
d = f"{n:>7} {nType}"
first = A.sectionStrFromNode(n)
last = A.sectionStrFromNode(n, lastSlot=True, fillup=True)
tup = (
T.sectionTuple(n),
T.sectionTuple(n, lastSlot=True, fillup=True),
)
print(f"{d:<16} - {first:<18} {last:<18} {tup}")
```
# Clean caches
Text-Fabric pre-computes data for you, so that it can be loaded faster.
If the original data is updated, Text-Fabric detects it, and will recompute that data.
But when the algorithms of Text-Fabric have changed without any changes in the data, you might
want to clear the cache of precomputed results yourself.
There are two ways to do that:
* Locate the `.tf` directory of your dataset, and remove all `.tfx` files in it.
This might be a bit awkward to do, because the `.tf` directory is hidden on Unix-like systems.
* Call `TF.clearCache()`, which does exactly the same.
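For the manual route, a small `pathlib` sketch shows what is involved; it assumes nothing about your corpus location and simply demonstrates deleting every `.tfx` file under a directory:

```python
import tempfile
from pathlib import Path

def clear_tf_cache(tf_dir):
    """Delete all precomputed .tfx files under tf_dir, like TF.clearCache()."""
    removed = 0
    for f in Path(tf_dir).rglob("*.tfx"):
        f.unlink()
        removed += 1
    return removed

# Demonstrate on a throwaway directory with fake cache files.
with tempfile.TemporaryDirectory() as d:
    for name in ("otype.tfx", "oslots.tfx", "otype.tf"):
        (Path(d) / name).write_bytes(b"")
    print(clear_tf_cache(d))  # 2 (the .tf source file is left alone)
```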
It is not handy to execute the following cell all the time; that's why I have commented it out.
If you really want to clear the cache, remove the comment sign below.
```
# TF.clearCache()
```
# Next steps
By now you have an impression how to compute around in the corpus.
While this is still the beginning, I hope you already sense the power of unlimited programmatic access
to all the bits and bytes in the data set.
Here are a few directions for unleashing that power.
* **[display](display.ipynb)**: become an expert in creating pretty displays of your text structures
* **[search](search.ipynb)**: turbo-charge your hand-coding with search templates
* **[exportExcel](exportExcel.ipynb)**: make tailor-made spreadsheets out of your results
* **[share](share.ipynb)**: draw in other people's data and let them use yours
* **[similarLines](similarLines.ipynb)**: spot the similarities between lines
---
See the [cookbook](cookbook) for recipes for small, concrete tasks.
CC-BY Dirk Roorda
# Transfer Learning Template
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
```
# Allowed Parameters
These are allowed parameters, not defaults.
Each of these values needs to be present in the injected parameters (the notebook will raise an exception if any is missing).
Papermill uses the cell tag "parameters" to inject the real parameters below this cell; enable tag display in Jupyter to see what I mean.
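The validation performed further down boils down to comparing two sets of keys; a toy version of that check:

```python
# Toy version of the required-vs-supplied parameter check used in this notebook.
required = {"experiment_name", "lr", "seed"}
supplied = {"experiment_name": "demo", "lr": 1e-4, "seed": 1337, "typo_key": 0}

extra = set(supplied) - required    # injected keys that shouldn't be there
missing = required - set(supplied)  # required keys that weren't injected

print(sorted(extra), sorted(missing))  # ['typo_key'] []
```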
```
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_3Av2:oracle.run1.framed -> cores+wisig",
"device": "cuda",
"lr": 0.0001,
"x_shape": [2, 200],
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 200]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 16000, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10.",
"1-11.",
"1-15.",
"1-16.",
"1-17.",
"1-18.",
"1-19.",
"10-4.",
"10-7.",
"11-1.",
"11-14.",
"11-17.",
"11-20.",
"11-7.",
"13-20.",
"13-8.",
"14-10.",
"14-11.",
"14-14.",
"14-7.",
"15-1.",
"15-20.",
"16-1.",
"16-16.",
"17-10.",
"17-11.",
"17-2.",
"19-1.",
"19-16.",
"19-19.",
"19-20.",
"19-3.",
"2-10.",
"2-11.",
"2-17.",
"2-18.",
"2-20.",
"2-3.",
"2-4.",
"2-5.",
"2-6.",
"2-7.",
"2-8.",
"3-13.",
"3-18.",
"3-3.",
"4-1.",
"4-10.",
"4-11.",
"4-19.",
"5-5.",
"6-15.",
"7-10.",
"7-14.",
"8-18.",
"8-20.",
"8-3.",
"8-8.",
],
"domains": [1, 2, 3, 4, 5],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/cores.stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": ["unit_mag", "take_200"],
"episode_transforms": [],
"domain_prefix": "C_",
},
{
"labels": [
"1-10",
"1-12",
"1-14",
"1-16",
"1-18",
"1-19",
"1-8",
"10-11",
"10-17",
"10-4",
"10-7",
"11-1",
"11-10",
"11-19",
"11-20",
"11-4",
"11-7",
"12-19",
"12-20",
"12-7",
"13-14",
"13-18",
"13-19",
"13-20",
"13-3",
"13-7",
"14-10",
"14-11",
"14-12",
"14-13",
"14-14",
"14-19",
"14-20",
"14-7",
"14-8",
"14-9",
"15-1",
"15-19",
"15-6",
"16-1",
"16-16",
"16-19",
"16-20",
"17-10",
"17-11",
"18-1",
"18-10",
"18-11",
"18-12",
"18-13",
"18-14",
"18-15",
"18-16",
"18-17",
"18-19",
"18-2",
"18-20",
"18-4",
"18-5",
"18-7",
"18-8",
"18-9",
"19-1",
"19-10",
"19-11",
"19-12",
"19-13",
"19-14",
"19-15",
"19-19",
"19-2",
"19-20",
"19-3",
"19-4",
"19-6",
"19-7",
"19-8",
"19-9",
"2-1",
"2-13",
"2-15",
"2-3",
"2-4",
"2-5",
"2-6",
"2-7",
"2-8",
"20-1",
"20-12",
"20-14",
"20-15",
"20-16",
"20-18",
"20-19",
"20-20",
"20-3",
"20-4",
"20-5",
"20-7",
"20-8",
"3-1",
"3-13",
"3-18",
"3-2",
"3-8",
"4-1",
"4-10",
"4-11",
"5-1",
"5-5",
"6-1",
"6-15",
"6-6",
"7-10",
"7-11",
"7-12",
"7-13",
"7-14",
"7-7",
"7-8",
"7-9",
"8-1",
"8-13",
"8-14",
"8-18",
"8-20",
"8-3",
"8-8",
"9-1",
"9-7",
],
"domains": [1, 2, 3, 4],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/wisig.node3-19.stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": ["unit_mag", "take_200"],
"episode_transforms": [],
"domain_prefix": "W_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 2000,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "take_200", "resample_20Msps_to_25Msps"],
"episode_transforms": [],
"domain_prefix": "O_",
},
],
"seed": 1337,
"dataset_seed": 1337,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if "parameters" not in locals() and "parameters" not in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
p.x_shape = [2, 256]  # Default to this if we don't supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms != []: raise Exception("episode_transforms not implemented")
# Tag each episode's domain with this dataset's prefix so domains stay distinct after aggregation
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Annotate per_domain_accuracy with whether each domain was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
```
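The per-domain bookkeeping above relies on `per_domain_accuracy_from_confusion` from `steves_utils`; here is a self-contained sketch of the idea, assuming (hypothetically) a `domain -> true label -> predicted label -> count` nesting for the confusion structure:

```python
# Hedged sketch: per-domain accuracy from a nested confusion structure,
# assumed to be domain -> true label -> predicted label -> count.
def per_domain_accuracy(confusion):
    out = {}
    for domain, by_true in confusion.items():
        correct = sum(by_true.get(t, {}).get(t, 0) for t in by_true)
        total = sum(c for preds in by_true.values() for c in preds.values())
        out[domain] = correct / total if total else 0.0
    return out

confusion = {
    "O_32": {"a": {"a": 8, "b": 2}, "b": {"a": 1, "b": 9}},
    "C_1":  {"a": {"a": 5, "b": 5}, "b": {"a": 0, "b": 10}},
}
acc = per_domain_accuracy(confusion)
print(acc)  # O_32: (8+9)/20 = 0.85, C_1: (5+10)/20 = 0.75
```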
# Distributed DeepRacer RL training with SageMaker and RoboMaker
---
## Introduction
In this notebook, we will train a fully autonomous 1/18th-scale race car with reinforcement learning, using Amazon SageMaker RL and AWS RoboMaker's 3D driving simulator. [AWS RoboMaker](https://console.aws.amazon.com/robomaker/home#welcome) is a service that makes it easy for developers to develop, test, and deploy robotics applications.
This notebook provides a "jailbreak" experience of [AWS DeepRacer](https://console.aws.amazon.com/deepracer/home#welcome), giving us more control over the training/simulation process and RL algorithm tuning.

---
## How does it work?

The reinforcement learning agent (i.e. our autonomous car) learns to drive by interacting with its environment, e.g., the track, by taking an action in a given state to maximize the expected reward. The agent learns the optimal plan of actions in training by trial-and-error through repeated episodes.
The figure above shows an example of distributed RL training across SageMaker and two RoboMaker simulation environments that perform the **rollouts** - execute a fixed number of episodes using the current model or policy. The rollouts collect agent experiences (state-transition tuples) and share this data with SageMaker for training. SageMaker updates the model policy which is then used to execute the next sequence of rollouts. This training loop continues until the model converges, i.e. the car learns to drive and stops going off-track. More formally, we can define the problem in terms of the following:
1. **Objective**: Learn to drive autonomously by staying close to the center of the track.
2. **Environment**: A 3D driving simulator hosted on AWS RoboMaker.
3. **State**: The driving POV image captured by the car's head camera, as shown in the illustration above.
4. **Action**: Six discrete steering-wheel positions at different angles (configurable).
5. **Reward**: Positive reward for staying close to the center line; high penalty for going off-track. This is configurable and can be made more complex (e.g., a steering penalty can be added).
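The state/action/reward loop above can be sketched with a toy 1-D version of the task: the "car" sits at an offset from the center line, each action nudges it, and the reward favors staying near the center. This is only an illustration of the episode/rollout structure, not the actual simulator:

```python
import random

def reward_fn(offset, track_half_width=1.0):
    """Positive reward near the center line, big penalty off-track."""
    if abs(offset) > track_half_width:
        return -10.0           # went off-track
    return 1.0 - abs(offset)   # closer to center -> closer to 1.0

def run_episode(policy, steps=20, seed=0):
    rng = random.Random(seed)
    offset, total = 0.0, 0.0
    for _ in range(steps):
        action = policy(offset, rng)                 # steering nudge
        offset += action + rng.uniform(-0.05, 0.05)  # environment noise
        r = reward_fn(offset)
        total += r
        if r < 0:                                    # episode ends off-track
            break
    return total

# A naive policy that always steers back toward the center line.
center_seeking = lambda offset, rng: -0.2 if offset > 0 else 0.2
print(run_episode(center_seeking))
```

A policy that always steers one way quickly goes off-track and collects the penalty, while the center-seeking policy accumulates positive reward; training amounts to learning the former behavior less and the latter more.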
## Prerequisites
### Run these commands if you wish to modify the SageMaker and RoboMaker code
<span style="color:red">Note: Make sure you have at least 25 GB of space if you are planning to modify the SageMaker and RoboMaker code</span>
```
# #
# # Run these commands only for the first time
# #
# # Clean the build directory if present
# !python3 sim_app_bundler.py --clean
# # Download Robomaker simApp from the deepracer public s3 bucket
# simulation_application_bundle_location = "s3://deepracer-managed-resources-us-east-1/deepracer-simapp.tar.gz"
# !aws s3 cp {simulation_application_bundle_location} ./
# # Untar the simapp bundle
# !python3 sim_app_bundler.py --untar ./deepracer-simapp.tar.gz
# # Now modify the simapp(Robomaker) from build directory and run this command.
# # Most of the simapp files can be found here (Robomaker changes). You can modify them in these locations
# # bundle/opt/install/sagemaker_rl_agent/lib/python3.5/site-packages/
# # bundle/opt/install/deepracer_simulation_environment/share/deepracer_simulation_environment/
# # bundle/opt/install/deepracer_simulation_environment/lib/deepracer_simulation_environment/
# # # Copying the notebook src/markov changes to the simapp (For sagemaker container)
# !rsync -av ./src/markov/ ./build/simapp/bundle/opt/install/sagemaker_rl_agent/lib/python3.5/site-packages/markov
# print("############################################")
# print("This command execution takes around >2 min...")
# !python3 sim_app_bundler.py --tar
```
### Imports
To get started, we'll import the Python libraries we need, set up the environment with a few prerequisites for permissions and configurations.
You can run this notebook from your local machine or from a SageMaker notebook instance. In both of these scenarios, you can run the following to launch a training job on SageMaker and a simulation job on RoboMaker.
```
import boto3
import sagemaker
import sys
import os
import re
import numpy as np
import subprocess
import yaml
sys.path.append("common")
sys.path.append("./src")
from misc import get_execution_role, wait_for_s3_object
from docker_utils import build_and_push_docker_image
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
from time import gmtime, strftime
import time
from IPython.display import Markdown
from markdown_helper import *
```
### Initializing basic parameters
```
# Select the instance type
instance_type = "ml.c4.2xlarge"
#instance_type = "ml.p2.xlarge"
#instance_type = "ml.c5.4xlarge"
# Starting SageMaker session
sage_session = sagemaker.session.Session()
# Create unique job name.
job_name_prefix = 'deepracer-notebook'
# Duration of job in seconds (1 hour)
job_duration_in_seconds = 3600
# AWS Region
aws_region = sage_session.boto_region_name
if aws_region not in ["us-west-2", "us-east-1", "eu-west-1"]:
raise Exception("This notebook uses RoboMaker which is available only in US East (N. Virginia),"
"US West (Oregon) and EU (Ireland). Please switch to one of these regions.")
```
### Setup S3 bucket
Set up the linkage and authentication to the S3 bucket that we want to use for checkpoint and metadata.
```
# S3 bucket
s3_bucket = sage_session.default_bucket()
# SDK appends the job name and output folder
s3_output_path = 's3://{}/'.format(s3_bucket)
#Ensure that the S3 prefix contains the keyword 'sagemaker'
s3_prefix = job_name_prefix + "-sagemaker-" + strftime("%y%m%d-%H%M%S", gmtime())
# Get the AWS account id of this account
sts = boto3.client("sts")
account_id = sts.get_caller_identity()['Account']
print("Using s3 bucket {}".format(s3_bucket))
print("Model checkpoints and other metadata will be stored at: \ns3://{}/{}".format(s3_bucket, s3_prefix))
```
### Create an IAM role
Either get the execution role when running from a SageMaker notebook `role = sagemaker.get_execution_role()` or, when running from local machine, use utils method `role = get_execution_role('role_name')` to create an execution role.
```
try:
sagemaker_role = sagemaker.get_execution_role()
except:
sagemaker_role = get_execution_role('sagemaker')
print("Using Sagemaker IAM role arn: \n{}".format(sagemaker_role))
```
> Please note that this notebook cannot be run in `SageMaker local mode` as the simulator is based on AWS RoboMaker service.
### Permission setup for invoking AWS RoboMaker from this notebook
In order to enable this notebook to be able to execute AWS RoboMaker jobs, we need to add one trust relationship to the default execution role of this notebook.
```
display(Markdown(generate_help_for_robomaker_trust_relationship(sagemaker_role)))
```
### Permission setup for Sagemaker to S3 bucket
SageMaker writes the Redis IP address and models to the S3 bucket. This requires `PutObject` permission on the bucket. Make sure the SageMaker role you are using has this permission.
```
display(Markdown(generate_s3_write_permission_for_sagemaker_role(sagemaker_role)))
```
### Permission setup for Sagemaker to create KinesisVideoStreams
The SageMaker notebook has to create a Kinesis video stream. You can observe the car completing episodes in the Kinesis video stream.
```
display(Markdown(generate_kinesis_create_permission_for_sagemaker_role(sagemaker_role)))
```
### Build and push docker image
The file `./Dockerfile` lists all the packages that are installed into the Docker image. We will use this container instead of the default SageMaker container.
```
%%time
from copy_to_sagemaker_container import get_sagemaker_docker, copy_to_sagemaker_container, get_custom_image_name
cpu_or_gpu = 'gpu' if instance_type.startswith('ml.p') else 'cpu'
repository_short_name = "sagemaker-docker-%s" % cpu_or_gpu
custom_image_name = get_custom_image_name(repository_short_name)
try:
print("Copying files from your notebook to existing sagemaker container")
sagemaker_docker_id = get_sagemaker_docker(repository_short_name)
copy_to_sagemaker_container(sagemaker_docker_id, repository_short_name)
except Exception as e:
print("Creating sagemaker container")
docker_build_args = {
'CPU_OR_GPU': cpu_or_gpu,
'AWS_REGION': boto3.Session().region_name,
}
custom_image_name = build_and_push_docker_image(repository_short_name, build_args=docker_build_args)
print("Using ECR image %s" % custom_image_name)
```
### Clean the docker images
Run this only when you want to completely remove the Docker images or free up space on the SageMaker instance.
```
# !docker rm -f $(docker ps -a -q);
# !docker rmi -f $(docker images -q);
```
### Configure VPC
Since SageMaker and RoboMaker have to communicate with each other over the network, both of these services need to run in VPC mode. This can be done by supplying subnets and security groups to the job launching scripts.
We will check if the deepracer-vpc stack exists and use it if present (it is created once the AWS DeepRacer console has been used at least once to create a model). Otherwise we will use the default VPC stack.
```
ec2 = boto3.client('ec2')
#
# Check if the user has a DeepRacer VPC and use it if present. It will have all the required permissions.
# This VPC is created once you have used the DeepRacer console and created at least one model.
# If it is not present, use the default VPC connection.
#
deepracer_security_groups = [group["GroupId"] for group in ec2.describe_security_groups()['SecurityGroups']\
if group['GroupName'].startswith("aws-deepracer-")]
# deepracer_security_groups = False
if(deepracer_security_groups):
print("Using the DeepRacer VPC stacks. This will be created if you run one training job from console.")
deepracer_vpc = [vpc['VpcId'] for vpc in ec2.describe_vpcs()['Vpcs'] \
if "Tags" in vpc for val in vpc['Tags'] \
if val['Value'] == 'deepracer-vpc'][0]
deepracer_subnets = [subnet["SubnetId"] for subnet in ec2.describe_subnets()["Subnets"] \
if subnet["VpcId"] == deepracer_vpc]
else:
print("Using the default VPC stacks")
deepracer_vpc = [vpc['VpcId'] for vpc in ec2.describe_vpcs()['Vpcs'] if vpc["IsDefault"] == True][0]
deepracer_security_groups = [group["GroupId"] for group in ec2.describe_security_groups()['SecurityGroups'] \
if 'VpcId' in group and group["GroupName"] == "default" and group["VpcId"] == deepracer_vpc]
deepracer_subnets = [subnet["SubnetId"] for subnet in ec2.describe_subnets()["Subnets"] \
if subnet["VpcId"] == deepracer_vpc and subnet['DefaultForAz']==True]
print("Using VPC:", deepracer_vpc)
print("Using security group:", deepracer_security_groups)
print("Using subnets:", deepracer_subnets)
```
### Create Route Table
A SageMaker job running in VPC mode cannot access S3 resources. So, we need to create a VPC S3 endpoint to allow S3 access from the SageMaker container. To learn more about VPC mode, please visit [this link.](https://docs.aws.amazon.com/sagemaker/latest/dg/train-vpc.html)
```
# When CREATE_ROUTE_TABLE is True, we create a VPC S3 gateway endpoint and attach it to the
# VPC's route tables so that the SageMaker job can reach S3 while running in VPC mode.
CREATE_ROUTE_TABLE = True
def create_vpc_endpoint_table():
print("Creating VPC S3 endpoint")
try:
route_tables = [route_table["RouteTableId"] for route_table in ec2.describe_route_tables()['RouteTables']\
if route_table['VpcId'] == deepracer_vpc]
except Exception as e:
if "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(sagemaker_role)))
else:
display(Markdown(create_s3_endpoint_manually(aws_region, deepracer_vpc)))
raise e
print("Trying to attach S3 endpoints to the following route tables:", route_tables)
if not route_tables:
raise Exception(("No route tables were found. Please follow the VPC S3 endpoint creation "
"guide by clicking the above link."))
try:
ec2.create_vpc_endpoint(DryRun=False,
VpcEndpointType="Gateway",
VpcId=deepracer_vpc,
ServiceName="com.amazonaws.{}.s3".format(aws_region),
RouteTableIds=route_tables)
print("S3 endpoint created successfully!")
except Exception as e:
if "RouteAlreadyExists" in str(e):
print("S3 endpoint already exists.")
elif "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
raise e
else:
display(Markdown(create_s3_endpoint_manually(aws_region, deepracer_vpc)))
raise e
if CREATE_ROUTE_TABLE:
create_vpc_endpoint_table()
```
## Setup the environment
The environment is defined in a Python file called `deepracer_racetrack_env.py`, which can be found at `src/markov/environments/`. This file implements the gym interface for our Gazebo-based RoboMaker simulator. It is a common environment file used by both SageMaker and RoboMaker. The environment variable `NODE_TYPE` defines which node the code is running on, so the expressions that have `rospy` dependencies are executed on RoboMaker only.
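As a rough illustration of this gating pattern (the `NODE_TYPE` variable name comes from the environment file; the specific values below are assumptions for illustration):

```python
import os

# Branch on NODE_TYPE so that ROS-dependent code only runs inside the
# RoboMaker simulator container (the value names here are illustrative).
def is_simulation_node():
    return os.environ.get("NODE_TYPE") == "SIMULATION_WORKER"

if is_simulation_node():
    # import rospy  # rospy is only available inside the RoboMaker container
    pass
```

This is a sketch only; the actual environment file decides per expression which node it runs on.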
We can experiment with different reward functions by modifying `reward_function` in `src/markov/rewards/`. The action space and steering angles can be changed by modifying the `.json` files in `src/markov/actions/`.
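For orientation, reward functions in the DeepRacer style take a `params` dict and return a float. The sketch below is illustrative only: the thresholds are assumptions, while the keys `all_wheels_on_track`, `track_width`, and `distance_from_center` follow the standard DeepRacer input-parameter convention.

```python
def reward_function(params):
    """Reward staying close to the center line; penalize leaving the track."""
    if not params["all_wheels_on_track"]:
        return 1e-3  # near-zero reward when any wheel is off the track
    # Fraction of half the track width that the car has drifted from center
    distance_ratio = params["distance_from_center"] / (params["track_width"] / 2.0)
    if distance_ratio < 0.25:
        return 1.0
    elif distance_ratio < 0.5:
        return 0.5
    return 0.1
```

The bundled `src/markov/rewards/default.py` is the authoritative version; this sketch only shows the shape of the interface.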
### Configure the preset for RL algorithm
The parameters that configure the RL training job are defined in `src/markov/presets/`. Using the preset file, you can define agent parameters to select the specific agent algorithm. We suggest using Clipped PPO for this example.
You can edit this file to modify algorithm parameters like the learning rate, neural network structure, batch size, discount factor, etc.
```
# Uncomment the pygmentize code lines to see the code
# Reward function
#!pygmentize src/markov/rewards/default.py
# Action space
#!pygmentize src/markov/actions/single_speed_stereo_shallow.json
# Preset File
#!pygmentize src/markov/presets/default.py
#!pygmentize src/markov/presets/preset_attention_layer.py
```
### Copy custom files to the S3 bucket so that SageMaker & RoboMaker can pick them up
```
s3_location = "s3://%s/%s" % (s3_bucket, s3_prefix)
print(s3_location)
# Clean up the previously uploaded files
!aws s3 rm --recursive {s3_location}
!aws s3 cp ./src/artifacts/rewards/default.py {s3_location}/customer_reward_function.py
!aws s3 cp ./src/artifacts/actions/default.json {s3_location}/model/model_metadata.json
#!aws s3 cp src/markov/presets/default.py {s3_location}/presets/preset.py
#!aws s3 cp src/markov/presets/preset_attention_layer.py {s3_location}/presets/preset.py
```
### Train the RL model using the Python SDK Script mode
Next, we define the algorithm metrics that we want to capture from CloudWatch logs to monitor training progress. These are algorithm-specific parameters and might change for different algorithms. We use [Clipped PPO](https://coach.nervanasys.com/algorithms/policy_optimization/cppo/index.html) for this example.
```
metric_definitions = [
# Training> Name=main_level/agent, Worker=0, Episode=19, Total reward=-102.88, Steps=19019, Training iteration=1
{'Name': 'reward-training',
'Regex': '^Training>.*Total reward=(.*?),'},
# Policy training> Surrogate loss=-0.32664725184440613, KL divergence=7.255815035023261e-06, Entropy=2.83156156539917, training epoch=0, learning_rate=0.00025
{'Name': 'ppo-surrogate-loss',
'Regex': '^Policy training>.*Surrogate loss=(.*?),'},
{'Name': 'ppo-entropy',
'Regex': '^Policy training>.*Entropy=(.*?),'},
# Testing> Name=main_level/agent, Worker=0, Episode=19, Total reward=1359.12, Steps=20015, Training iteration=2
{'Name': 'reward-testing',
'Regex': '^Testing>.*Total reward=(.*?),'},
]
```
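These metric definitions are plain Python regular expressions, so they can be sanity-checked locally against the sample log lines shown in the comments:

```python
import re

# Sample training log line, as shown in the metric_definitions comments above
sample = ("Training> Name=main_level/agent, Worker=0, Episode=19, "
          "Total reward=-102.88, Steps=19019, Training iteration=1")

# Same pattern as the 'reward-training' metric definition
match = re.search(r'^Training>.*Total reward=(.*?),', sample)
print(match.group(1))  # -102.88
```

SageMaker applies these same patterns to the CloudWatch log stream to extract the metric values it plots.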
We use the RLEstimator for training RL jobs.
1. Specify the source directory which has the environment file, preset and training code.
2. Specify the entry point as the training code.
3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL container.
4. Define the training parameters such as the instance count, instance type, job name, s3_bucket and s3_prefix for storing model checkpoints and metadata. **Only 1 training instance is supported for now.**
5. Set RLCOACH_PRESET to "deepracer" for this example.
6. Define the metric definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker notebooks.
```
estimator = RLEstimator(entry_point="training_worker.py",
source_dir='src',
image_name=custom_image_name,
dependencies=["common/"],
role=sagemaker_role,
train_instance_type=instance_type,
train_instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
metric_definitions=metric_definitions,
train_max_run=job_duration_in_seconds,
hyperparameters={
"s3_bucket": s3_bucket,
"s3_prefix": s3_prefix,
"aws_region": aws_region,
"model_metadata_s3_key": "%s/model/model_metadata.json" % s3_prefix,
"reward_function_s3_source": "%s/customer_reward_function.py" % s3_prefix,
"batch_size": "64",
"num_epochs": "10",
"stack_size": "1",
"lr": "0.0003",
"exploration_type": "Categorical",
"e_greedy_value": "1",
"epsilon_steps": "10000",
"beta_entropy": "0.01",
"discount_factor": "0.999",
"loss_type": "Huber",
"num_episodes_between_training": "20",
"max_sample_count": "0",
"sampling_frequency": "1"
# ,"pretrained_s3_bucket": "sagemaker-us-east-1-259455987231"
# ,"pretrained_s3_prefix": "deepracer-notebook-sagemaker-200729-202318"
},
subnets=deepracer_subnets,
security_group_ids=deepracer_security_groups,
)
estimator.fit(wait=False)
job_name = estimator.latest_training_job.job_name
print("Training job: %s" % job_name)
training_job_arn = estimator.latest_training_job.describe()['TrainingJobArn']
```
### Create the Kinesis video stream
```
kvs_stream_name = "dr-kvs-{}".format(job_name)
!aws --region {aws_region} kinesisvideo create-stream --stream-name {kvs_stream_name} --media-type video/h264 --data-retention-in-hours 24
print ("Created kinesis video stream {}".format(kvs_stream_name))
```
### Start the Robomaker job
```
robomaker = boto3.client("robomaker")
```
### Create Simulation Application
```
robomaker_s3_key = 'robomaker/simulation_ws.tar.gz'
robomaker_source = {'s3Bucket': s3_bucket,
's3Key': robomaker_s3_key,
'architecture': "X86_64"}
simulation_software_suite={'name': 'Gazebo',
'version': '7'}
robot_software_suite={'name': 'ROS',
'version': 'Kinetic'}
rendering_engine={'name': 'OGRE',
'version': '1.x'}
```
Download the DeepRacer bundle provided by the RoboMaker service and upload it to our S3 bucket to create a RoboMaker simulation application.
```
if not os.path.exists('./build/output.tar.gz'):
print("Using the latest simapp from public s3 bucket")
# Download Robomaker simApp for the deepracer public s3 bucket
simulation_application_bundle_location = "s3://deepracer-managed-resources-us-east-1/deepracer-simapp.tar.gz"
!aws s3 cp {simulation_application_bundle_location} ./
# Remove if the Robomaker sim-app is present in s3 bucket
!aws s3 rm s3://{s3_bucket}/{robomaker_s3_key}
# Uploading the Robomaker SimApp to your S3 bucket
!aws s3 cp ./deepracer-simapp.tar.gz s3://{s3_bucket}/{robomaker_s3_key}
# Cleanup the locally downloaded version of SimApp
!rm deepracer-simapp.tar.gz
else:
print("Using the simapp from build directory")
!aws s3 cp ./build/output.tar.gz s3://{s3_bucket}/{robomaker_s3_key}
app_name = "deepracer-notebook-application" + strftime("%y%m%d-%H%M%S", gmtime())
print(app_name)
try:
response = robomaker.create_simulation_application(name=app_name,
sources=[robomaker_source],
simulationSoftwareSuite=simulation_software_suite,
robotSoftwareSuite=robot_software_suite,
renderingEngine=rendering_engine)
simulation_app_arn = response["arn"]
print("Created a new simulation app with ARN:", simulation_app_arn)
except Exception as e:
if "AccessDeniedException" in str(e):
display(Markdown(generate_help_for_robomaker_all_permissions(role)))
raise e
else:
raise e
```
### Launch the Simulation job on RoboMaker
We create [AWS RoboMaker](https://console.aws.amazon.com/robomaker/home#welcome) simulation jobs that simulate the environment and share this data with SageMaker for training.
```
s3_yaml_name="training_params.yaml"
world_name = "reInvent2019_track"
# Change this for multiple rollouts. This will invoke the specified number of robomaker jobs to collect experience
num_simulation_workers = 1
with open("./src/artifacts/yaml/training_yaml_template.yaml", "r") as filepointer:
yaml_config = yaml.load(filepointer)
yaml_config['WORLD_NAME'] = world_name
yaml_config['SAGEMAKER_SHARED_S3_BUCKET'] = s3_bucket
yaml_config['SAGEMAKER_SHARED_S3_PREFIX'] = s3_prefix
yaml_config['TRAINING_JOB_ARN'] = training_job_arn
yaml_config['METRICS_S3_BUCKET'] = s3_bucket
yaml_config['METRICS_S3_OBJECT_KEY'] = "{}/training_metrics.json".format(s3_prefix)
yaml_config['SIMTRACE_S3_BUCKET'] = s3_bucket
yaml_config['SIMTRACE_S3_PREFIX'] = "{}/iteration-data/training".format(s3_prefix)
yaml_config['AWS_REGION'] = aws_region
yaml_config['ROBOMAKER_SIMULATION_JOB_ACCOUNT_ID'] = account_id
yaml_config['KINESIS_VIDEO_STREAM_NAME'] = kvs_stream_name
yaml_config['REWARD_FILE_S3_KEY'] = "{}/customer_reward_function.py".format(s3_prefix)
yaml_config['MODEL_METADATA_FILE_S3_KEY'] = "{}/model/model_metadata.json".format(s3_prefix)
yaml_config['NUM_WORKERS'] = num_simulation_workers
yaml_config['MP4_S3_BUCKET'] = s3_bucket
yaml_config['MP4_S3_OBJECT_PREFIX'] = "{}/iteration-data/training".format(s3_prefix)
# Race types supported for training are TIME_TRIAL, OBJECT_AVOIDANCE, HEAD_TO_BOT
# If you need to modify more attributes look at the template yaml file
race_type = "TIME_TRIAL"
if race_type == "OBJECT_AVOIDANCE":
yaml_config['NUMBER_OF_OBSTACLES'] = "6"
yaml_config['RACE_TYPE'] = "OBJECT_AVOIDANCE"
elif race_type == "HEAD_TO_BOT":
yaml_config['NUMBER_OF_BOT_CARS'] = "6"
yaml_config['RACE_TYPE'] = "HEAD_TO_BOT"
# Printing the modified yaml parameter
for key, value in yaml_config.items():
print("{}: {}".format(key.ljust(40, ' '), value))
# Uploading the modified yaml parameter
with open("./training_params.yaml", "w") as filepointer:
yaml.dump(yaml_config, filepointer)
!aws s3 cp ./training_params.yaml {s3_location}/training_params.yaml
!rm training_params.yaml
vpcConfig = {"subnets": deepracer_subnets,
"securityGroups": deepracer_security_groups,
"assignPublicIp": True}
responses = []
for job_no in range(num_simulation_workers):
client_request_token = strftime("%Y-%m-%d-%H-%M-%S", gmtime())
envriron_vars = {
"S3_YAML_NAME": s3_yaml_name,
"SAGEMAKER_SHARED_S3_PREFIX": s3_prefix,
"SAGEMAKER_SHARED_S3_BUCKET": s3_bucket,
"WORLD_NAME": world_name,
"KINESIS_VIDEO_STREAM_NAME": kvs_stream_name,
"APP_REGION": aws_region,
"MODEL_METADATA_FILE_S3_KEY": "%s/model/model_metadata.json" % s3_prefix,
"ROLLOUT_IDX": str(job_no)
}
simulation_application = {"application":simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation_environment",
"launchFile": "distributed_training.launch",
"environmentVariables": envriron_vars}
}
response = robomaker.create_simulation_job(iamRole=sagemaker_role,
clientRequestToken=client_request_token,
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Fail",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig
)
responses.append(response)
time.sleep(5)
print("Created the following jobs:")
job_arns = [response["arn"] for response in responses]
for job_arn in job_arns:
print("Job ARN", job_arn)
```
### Visualizing the simulations in RoboMaker
You can visit the RoboMaker console to visualize the simulations or run the following cell to generate the hyperlinks.
```
display(Markdown(generate_robomaker_links(job_arns, aws_region)))
```
### Creating a temporary folder to plot metrics
```
tmp_dir = "/tmp/{}".format(job_name)
os.makedirs(tmp_dir, exist_ok=True)
print("Create local folder {}".format(tmp_dir))
```
### Plot metrics for training job
```
%matplotlib inline
import pandas as pd
import json
training_metrics_file = "training_metrics.json"
training_metrics_path = "{}/{}".format(s3_prefix, training_metrics_file)
wait_for_s3_object(s3_bucket, training_metrics_path, tmp_dir)
json_file = "{}/{}".format(tmp_dir, training_metrics_file)
with open(json_file) as fp:
data = json.load(fp)
df = pd.DataFrame(data['metrics'])
x_axis = 'episode'
y_axis = 'reward_score'
plt = df.plot(x=x_axis,y=y_axis, figsize=(12,5), legend=True, style='b-')
plt.set_ylabel(y_axis);
plt.set_xlabel(x_axis);
```
### Clean up RoboMaker and SageMaker training job
Execute the cells below if you want to stop the RoboMaker simulation job and the SageMaker training job.
```
# # Cancelling robomaker job
# for job_arn in job_arns:
# robomaker.cancel_simulation_job(job=job_arn)
# # Stopping sagemaker training job
# sage_session.sagemaker_client.stop_training_job(TrainingJobName=estimator._current_job_name)
```
# Evaluation (Time trial, Object avoidance, Head to bot)
```
s3_yaml_name="evaluation_params.yaml"
world_name = "reInvent2019_track"
with open("./src/artifacts/yaml/evaluation_yaml_template.yaml", "r") as filepointer:
yaml_config = yaml.load(filepointer)
yaml_config['WORLD_NAME'] = world_name
yaml_config['MODEL_S3_BUCKET'] = s3_bucket
yaml_config['MODEL_S3_PREFIX'] = s3_prefix
yaml_config['AWS_REGION'] = aws_region
yaml_config['METRICS_S3_BUCKET'] = s3_bucket
yaml_config['METRICS_S3_OBJECT_KEY'] = "{}/evaluation_metrics.json".format(s3_prefix)
yaml_config['SIMTRACE_S3_BUCKET'] = s3_bucket
yaml_config['SIMTRACE_S3_PREFIX'] = "{}/iteration-data/evaluation".format(s3_prefix)
yaml_config['ROBOMAKER_SIMULATION_JOB_ACCOUNT_ID'] = account_id
yaml_config['NUMBER_OF_TRIALS'] = "5"
yaml_config['MP4_S3_BUCKET'] = s3_bucket
yaml_config['MP4_S3_OBJECT_PREFIX'] = "{}/iteration-data/evaluation".format(s3_prefix)
# Race types supported for evaluation are TIME_TRIAL, OBJECT_AVOIDANCE, HEAD_TO_BOT
# If you need to modify more attributes look at the template yaml file
race_type = "TIME_TRIAL"
if race_type == "OBJECT_AVOIDANCE":
yaml_config['NUMBER_OF_OBSTACLES'] = "6"
yaml_config['RACE_TYPE'] = "OBJECT_AVOIDANCE"
elif race_type == "HEAD_TO_BOT":
yaml_config['NUMBER_OF_BOT_CARS'] = "6"
yaml_config['RACE_TYPE'] = "HEAD_TO_BOT"
# Printing the modified yaml parameter
for key, value in yaml_config.items():
print("{}: {}".format(key.ljust(40, ' '), value))
# Uploading the modified yaml parameter
with open("./evaluation_params.yaml", "w") as filepointer:
yaml.dump(yaml_config, filepointer)
!aws s3 cp ./evaluation_params.yaml {s3_location}/evaluation_params.yaml
!rm evaluation_params.yaml
num_simulation_workers = 1
envriron_vars = {
"S3_YAML_NAME": s3_yaml_name,
"MODEL_S3_PREFIX": s3_prefix,
"MODEL_S3_BUCKET": s3_bucket,
"WORLD_NAME": world_name,
"KINESIS_VIDEO_STREAM_NAME": kvs_stream_name,
"APP_REGION": aws_region,
"MODEL_METADATA_FILE_S3_KEY": "%s/model/model_metadata.json" % s3_prefix
}
simulation_application = {
"application":simulation_app_arn,
"launchConfig": {
"packageName": "deepracer_simulation_environment",
"launchFile": "evaluation.launch",
"environmentVariables": envriron_vars
}
}
vpcConfig = {"subnets": deepracer_subnets,
"securityGroups": deepracer_security_groups,
"assignPublicIp": True}
responses = []
for job_no in range(num_simulation_workers):
response = robomaker.create_simulation_job(clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
outputLocation={
"s3Bucket": s3_bucket,
"s3Prefix": s3_prefix
},
maxJobDurationInSeconds=job_duration_in_seconds,
iamRole=sagemaker_role,
failureBehavior="Fail",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig)
responses.append(response)
print("Created the following jobs:")
job_arns = [response["arn"] for response in responses]
for job_arn in job_arns:
print("Job ARN", job_arn)
```
### Visualizing the simulations in RoboMaker
You can visit the RoboMaker console to visualize the simulations or run the following cell to generate the hyperlinks.
```
display(Markdown(generate_robomaker_links(job_arns, aws_region)))
```
### Plot metrics for evaluation job
```
evaluation_metrics_file = "evaluation_metrics.json"
evaluation_metrics_path = "{}/{}".format(s3_prefix, evaluation_metrics_file)
wait_for_s3_object(s3_bucket, evaluation_metrics_path, tmp_dir)
json_file = "{}/{}".format(tmp_dir, evaluation_metrics_file)
with open(json_file) as fp:
data = json.load(fp)
df = pd.DataFrame(data['metrics'])
# Converting milliseconds to seconds
df['elapsed_time'] = df['elapsed_time_in_milliseconds']/1000
df = df[['trial', 'completion_percentage', 'elapsed_time']]
display(df)
```
### Clean Up Simulation Application Resource
```
# robomaker.delete_simulation_application(application=simulation_app_arn)
```
### Clean your S3 bucket (Uncomment the awscli commands if you want to do it)
```
## Uncomment if you only want to clean the s3 bucket
# sagemaker_s3_folder = "s3://{}/{}".format(s3_bucket, s3_prefix)
# !aws s3 rm --recursive {sagemaker_s3_folder}
# robomaker_s3_folder = "s3://{}/{}".format(s3_bucket, job_name)
# !aws s3 rm --recursive {robomaker_s3_folder}
# robomaker_sim_app = "s3://{}/{}".format(s3_bucket, 'robomaker')
# !aws s3 rm --recursive {robomaker_sim_app}
# model_output = "s3://{}/{}".format(s3_bucket, s3_bucket)
# !aws s3 rm --recursive {model_output}
```
# Head-to-head Evaluation
```
# S3 bucket
s3_bucket_2 = sage_session.default_bucket()
# Ensure that the S3 prefix contains the keyword 'sagemaker'
s3_prefix_2 = "deepracer-notebook-sagemaker-200422-231836"
if not s3_prefix_2:
raise Exception("Please provide the second agent's s3_prefix and s3_bucket. The prefix should contain the keyword 'sagemaker'.")
print("Using s3 bucket {}".format(s3_bucket_2))
print("Model checkpoints and other metadata will be stored at: \ns3://{}/{}".format(s3_bucket_2, s3_prefix_2))
s3_yaml_name="evaluation_params.yaml"
world_name = "reInvent2019_track"
with open("./src/artifacts/yaml/head2head_yaml_template.yaml", "r") as filepointer:
yaml_config = yaml.load(filepointer)
yaml_config['WORLD_NAME'] = world_name
yaml_config['MODEL_S3_BUCKET'] = [s3_bucket,
s3_bucket_2]
yaml_config['MODEL_S3_PREFIX'] = [s3_prefix,
s3_prefix_2]
yaml_config['MODEL_METADATA_FILE_S3_KEY'] =["{}/model/model_metadata.json".format(s3_prefix),
"{}/model/model_metadata.json".format(s3_prefix_2)]
yaml_config['AWS_REGION'] = aws_region
yaml_config['METRICS_S3_BUCKET'] = [s3_bucket,
s3_bucket_2]
yaml_config['METRICS_S3_OBJECT_KEY'] = ["{}/evaluation_metrics.json".format(s3_prefix),
"{}/evaluation_metrics.json".format(s3_prefix_2)]
yaml_config['SIMTRACE_S3_BUCKET'] = [s3_bucket,
s3_bucket_2]
yaml_config['SIMTRACE_S3_PREFIX'] = ["{}/iteration-data/evaluation".format(s3_prefix),
"{}/iteration-data/evaluation".format(s3_prefix_2)]
yaml_config['ROBOMAKER_SIMULATION_JOB_ACCOUNT_ID'] = account_id
yaml_config['NUMBER_OF_TRIALS'] = "5"
yaml_config['MP4_S3_BUCKET'] = [s3_bucket,
s3_bucket_2]
yaml_config['MP4_S3_OBJECT_PREFIX'] = ["{}/iteration-data/evaluation".format(s3_prefix),
"{}/iteration-data/evaluation".format(s3_prefix_2)]
# Race types supported for evaluation are TIME_TRIAL, OBJECT_AVOIDANCE, HEAD_TO_BOT
# If you need to modify more attributes look at the template yaml file
race_type = "TIME_TRIAL"
if race_type == "OBJECT_AVOIDANCE":
yaml_config['NUMBER_OF_OBSTACLES'] = "6"
yaml_config['RACE_TYPE'] = "OBJECT_AVOIDANCE"
elif race_type == "HEAD_TO_BOT":
yaml_config['NUMBER_OF_BOT_CARS'] = "6"
yaml_config['RACE_TYPE'] = "HEAD_TO_BOT"
# Printing the modified yaml parameter
for key, value in yaml_config.items():
print("{}: {}".format(key.ljust(40, ' '), value))
# Uploading the modified yaml parameter
with open("./evaluation_params.yaml", "w") as filepointer:
yaml.dump(yaml_config, filepointer)
!aws s3 cp ./evaluation_params.yaml {s3_location}/evaluation_params.yaml
!rm evaluation_params.yaml
num_simulation_workers = 1
envriron_vars = {
"S3_YAML_NAME": s3_yaml_name,
"MODEL_S3_PREFIX": s3_prefix,
"MODEL_S3_BUCKET": s3_bucket,
"WORLD_NAME": world_name,
"KINESIS_VIDEO_STREAM_NAME": kvs_stream_name,
"APP_REGION": aws_region,
"MODEL_METADATA_FILE_S3_KEY": "%s/model/model_metadata.json" % s3_prefix
}
simulation_application = {
"application":simulation_app_arn,
"launchConfig": {
"packageName": "deepracer_simulation_environment",
"launchFile": "evaluation.launch",
"environmentVariables": envriron_vars
}
}
vpcConfig = {"subnets": deepracer_subnets,
"securityGroups": deepracer_security_groups,
"assignPublicIp": True}
responses = []
for job_no in range(num_simulation_workers):
response = robomaker.create_simulation_job(clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
outputLocation={
"s3Bucket": s3_bucket,
"s3Prefix": s3_prefix
},
maxJobDurationInSeconds=job_duration_in_seconds,
iamRole=sagemaker_role,
failureBehavior="Fail",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig)
responses.append(response)
print("Created the following jobs:")
job_arns = [response["arn"] for response in responses]
for job_arn in job_arns:
print("Job ARN", job_arn)
```
### Visualizing the simulations in RoboMaker
You can visit the RoboMaker console to visualize the simulations or run the following cell to generate the hyperlinks.
```
display(Markdown(generate_robomaker_links(job_arns, aws_region)))
```
### Plot metrics for head-to-head evaluation
```
evaluation_metrics_file = "evaluation_metrics.json"
evaluation_metrics_path = "{}/{}".format(s3_prefix, evaluation_metrics_file)
wait_for_s3_object(s3_bucket, evaluation_metrics_path, tmp_dir)
json_file = "{}/{}".format(tmp_dir, evaluation_metrics_file)
with open(json_file) as fp:
data = json.load(fp)
df_1 = pd.DataFrame(data['metrics'])
# Converting milliseconds to seconds
df_1['elapsed_time'] = df_1['elapsed_time_in_milliseconds']/1000
df_1 = df_1[['trial', 'completion_percentage', 'elapsed_time']]
display(df_1)
evaluation_metrics_file = "evaluation_metrics.json"
evaluation_metrics_path = "{}/{}".format(s3_prefix_2, evaluation_metrics_file)
wait_for_s3_object(s3_bucket_2, evaluation_metrics_path, tmp_dir)
json_file = "{}/{}".format(tmp_dir, evaluation_metrics_file)
with open(json_file) as fp:
data = json.load(fp)
df_2 = pd.DataFrame(data['metrics'])
# Converting milliseconds to seconds
df_2['elapsed_time'] = df_2['elapsed_time_in_milliseconds']/1000
df_2 = df_2[['trial', 'completion_percentage', 'elapsed_time']]
display(df_2)
```
```
#!pip install pytorch_lightning
#!pip install torchsummaryX
!pip install webdataset
# !pip install datasets
# !pip install wandb
#!pip install -r MedicalZooPytorch/installation/requirements.txt
#!pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
!git clone https://github.com/McMasterAI/Radiology-and-AI.git #--branch augmentation
!git clone https://github.com/jcreinhold/intensity-normalization.git
! python intensity-normalization/setup.py install
!pip install scikit-fuzzy
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
%cd drive/MyDrive/MacAI
import sys
sys.path.append('./Radiology-and-AI/Radiology_and_AI')
sys.path.append('./intensity-normalization')
import os
import torch
import numpy as np
import webdataset as wds
import intensity_normalization
from io import BytesIO
from nibabel import FileHolder, Nifti1Image
from scipy.interpolate import RegularGridInterpolator
from scipy.ndimage.filters import gaussian_filter
from time import time
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.interpolate import interp1d
train_dataset = wds.Dataset("macai_datasets/brats/train/brats_train.tar.gz")
eval_dataset = wds.Dataset("macai_datasets/brats/validation/brats_validation.tar.gz")
def np_img_collator(batch):
bytes_data_list = [list(batch[i].items())[1][1] for i in range(5)]
bytes_data_keys = [list(batch[i].items())[0][1].split('_')[-1] for i in range(5)]
bytes_data_dict = dict(zip(bytes_data_keys,bytes_data_list))
bb = BytesIO(bytes_data_dict['flair'])
fh = FileHolder(fileobj=bb)
f_flair = Nifti1Image.from_file_map({'header': fh, 'image':fh}).get_fdata()
bb = BytesIO(bytes_data_dict['seg'])
fh = FileHolder(fileobj=bb)
f_seg = Nifti1Image.from_file_map({'header': fh, 'image':fh}).get_fdata()
bb = BytesIO(bytes_data_dict['t1'])
fh = FileHolder(fileobj=bb)
f_t1 = Nifti1Image.from_file_map({'header': fh, 'image':fh}).get_fdata()
bb = BytesIO(bytes_data_dict['t1ce'])
fh = FileHolder(fileobj=bb)
f_t1ce=Nifti1Image.from_file_map({'header':fh, 'image':fh}).get_fdata()
bb = BytesIO(bytes_data_dict['t2'])
fh = FileHolder(fileobj=bb)
f_t2 =Nifti1Image.from_file_map({'header':fh, 'image':fh}).get_fdata()
padding = [(0, 0), (0, 0), (0, 0)]# last (2,3)
f_flair = np.expand_dims(np.pad(f_flair, padding), axis=0)
f_t1 = np.expand_dims(np.pad(f_t1, padding), axis=0)
f_t2 = np.expand_dims(np.pad(f_t2, padding), axis=0)
f_t1ce = np.expand_dims(np.pad(f_t1ce, padding), axis=0)
f_seg = np.pad(f_seg, padding)
concat = np.concatenate([f_t1, f_t1ce, f_t2, f_flair], axis=0)
f_seg = np.expand_dims(f_seg, axis=0)
return ([concat, f_seg])
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=5,collate_fn=np_img_collator)
def nyul_train_dataloader(dataloader, n_imgs = 4, i_min=1, i_max=99, i_s_min=1, i_s_max=100, l_percentile=10, u_percentile=90, step=20):
"""
determine the standard scale for the set of images
Args:
dataloader: iterable yielding (images, segmentation) batches used to learn the scale
n_imgs (int): number of image modalities per batch (t1, t1ce, t2, flair)
i_min (float): minimum percentile to consider in the images
i_max (float): maximum percentile to consider in the images
i_s_min (float): minimum percentile on the standard scale
i_s_max (float): maximum percentile on the standard scale
l_percentile (int): middle percentile lower bound (e.g., for deciles 10)
u_percentile (int): middle percentile upper bound (e.g., for deciles 90)
step (int): step for middle percentiles (e.g., for deciles 10)
Returns:
standard_scales (list of np.ndarray): average landmark intensities per modality
percss (list of np.ndarray): percentiles used per modality
"""
percss = [np.concatenate(([i_min], np.arange(l_percentile, u_percentile+1, step), [i_max])) for _ in range(n_imgs)]
standard_scales = [np.zeros(len(percss[0])) for _ in range(n_imgs)]
iteration = 1
for all_img, seg_data in dataloader:
print(iteration)
# print(seg_data.shape)
mask_data = seg_data
mask_data[seg_data ==0] = 1
mask_data = np.squeeze(mask_data, axis=0)
#mask_data[mask_data==2] = 0 # ignore edema
for i in range(n_imgs):
img_data = all_img[i]
masked = img_data[mask_data > 0]
landmarks = intensity_normalization.normalize.nyul.get_landmarks(masked, percss[i])
min_p = np.percentile(masked, i_min)
max_p = np.percentile(masked, i_max)
f = interp1d([min_p, max_p], [i_s_min, i_s_max])
landmarks = np.array(f(landmarks))
standard_scales[i] += landmarks
iteration += 1
standard_scales = [scale / (iteration - 1) for scale in standard_scales]  # iteration ends one past the batch count
return standard_scales, percss
standard_scales, percss = nyul_train_dataloader(train_dataloader)
def dataloader_hist_norm(img_data, landmark_percs, standard_scale, seg_data):
"""
do the Nyul and Udupa histogram normalization routine with a given set of learned landmarks
Args:
img_data (np.ndarray): image array on which to find landmarks
landmark_percs (np.ndarray): corresponding landmark points of standard scale
standard_scale (np.ndarray): landmarks on the standard scale
seg_data (np.ndarray): segmentation used to build the foreground mask
Returns:
normalized (np.ndarray): normalized image array
"""
mask_data = seg_data
mask_data[seg_data ==0] = 1
mask_data = np.squeeze(mask_data, axis=0)
masked = img_data[mask_data > 0]
landmarks = intensity_normalization.normalize.nyul.get_landmarks(masked, landmark_percs)
f = interp1d(landmarks, standard_scale, fill_value='extrapolate')
normed = f(img_data)
z = img_data
z[img_data > 0] = normed[img_data > 0]
return z #normed
for all_img, seg_data in train_dataloader:
for i, this_img in enumerate(all_img):
if i == 0:
transformed_img = dataloader_hist_norm(this_img, percss[i], standard_scales[i], seg_data)
transformed_img = transformed_img[transformed_img>0]
plt.hist(np.ravel(transformed_img), bins=30)
plt.xlim(0, 150)
plt.show()
# plt.hist(np.ravel(this_img))
# plt.show()
```
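To make the landmark-averaging idea concrete in isolation, here is a small self-contained sketch (the synthetic 1-D "images" and plain percentiles as landmarks are assumptions for illustration): each training image's landmark intensities are averaged into a standard scale, and a new image is normalized by piecewise-linearly mapping its own landmarks onto that scale, which is the role `interp1d` plays in `dataloader_hist_norm` above.

```python
import numpy as np
from scipy.interpolate import interp1d

rng = np.random.default_rng(0)
# Two synthetic "images" with different intensity distributions
images = [rng.normal(loc=mu, scale=10, size=1000) for mu in (50, 80)]
percs = np.array([1, 25, 50, 75, 99])

# Standard scale = landmark intensities averaged over the training images
landmarks = np.array([np.percentile(img, percs) for img in images])
standard_scale = landmarks.mean(axis=0)

# Normalize one image: map its own landmarks onto the standard scale
f = interp1d(np.percentile(images[0], percs), standard_scale, fill_value="extrapolate")
normed = f(images[0])
# After mapping, the image's median sits on the standard-scale median
print(np.percentile(normed, 50), standard_scale[2])
```

This is the core of the Nyul and Udupa routine; the notebook's versions additionally restrict the computation to masked foreground voxels.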
# Chapter 6 - Data Sourcing via Web
## Part 1 - Objects in BeautifulSoup
```
import sys
print(sys.version)
from bs4 import BeautifulSoup
```
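Before working with the article HTML below, a minimal warm-up shows what a `BeautifulSoup` object gives you (the tiny HTML snippet here is made up for illustration):

```python
from bs4 import BeautifulSoup

snippet = "<html><head><title>IoT Articles</title></head><body><p class='title'><b>Hello</b></p></body></html>"
soup = BeautifulSoup(snippet, 'html.parser')

# Tag objects are reached by attribute access; .string gives the text
print(soup.title.string)  # IoT Articles
# Attribute lookup on a tag; class is multi-valued, so it returns a list
print(soup.p['class'])    # ['title']
```

The rest of this part applies the same access patterns to a full article document.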
### BeautifulSoup objects
```
our_html_document = '''
<html><head><title>IoT Articles</title></head>
<body>
<p class='title'><b>2018 Trends: Best New IoT Device Ideas for Data Scientists and Engineers</b></p>
<p class='description'>It’s almost 2018 and IoT is on the cusp of an explosive expansion. In this article, I offer you a listing of new IoT device ideas that you can use...
<br>
<br>
It’s almost 2018 and IoT is on the cusp of an explosive expansion. In this article, I offer you a listing of new IoT device ideas that you can use to get practice in designing your first IoT applications.
<h1>Looking Back at My Coolest IoT Find in 2017</h1>
Before going into detail about best new IoT device ideas, here’s the backstory. <span style="text-decoration: underline;"><strong><a href="http://bit.ly/LPlNDJj">Last month Ericsson Digital invited me</a></strong></span> to tour the Ericsson Studio in Kista, Sweden. Up until that visit, <a href="http://www.data-mania.com/blog/m2m-vs-iot/">IoT</a> had been largely theoretical to me. Of course, I know the usual mumbo-jumbo about wearables and IoT-connected fitness trackers. That stuff is all well and good, but it’s somewhat old hat – plus I am not sure we are really benefiting so much from those, so I’m not that impressed.
It wasn’t until I got to the Ericsson Studio that I became extremely impressed by how far IoT has really come. Relying on the promise of the 5g network expansion, IoT-powered smart devices are on the cusp of an explosive growth in adoption. It was Ericsson’s Smart Car that sent me reeling:<a href="bit.ly/LPlNDJj"><img class="aligncenter size-full wp-image-3802" src="http://www.data-mania.com/blog/wp-content/uploads/2017/12/new-IoT-device-ideas.jpg" alt="Get your new iot device ideas here" width="1024" height="683" /></a>
This car is connected to Ericsson’s Connected Vehicle Cloud, an IoT platform that manages services for the Smart Cars to which it’s connected. The Volvo pictured above acts as a drop-off location for groceries that have been ordered by its owner.
To understand how it works, imagine you’re pulling your normal 9-to-5 and you know you need to grab some groceries on your way home. Well, since you’re smart you’ve used Ericsson IoT platform to connect your car to the local grocery delivery service (<a href="http://mat.se/">Mat.se</a>), so all you need to do is open the Mat.se app and make your usual order. Mat.se automatically handles the payment, grocery selection, delivery, and delivery scheduling. Since your car is IoT-enabled, Mat.se issues its trusted delivery agent a 1-time token to use for opening your car in order to place your groceries in your car for you at 4:40 pm (just before you get off from work).
To watch some of the amazing IoT device demos I witnessed at Ericsson Studio, make sure to go <span style="text-decoration: underline;"><strong><a href="http://bit.ly/LPlNDJj">watch the videos on this page</a></strong></span>.
<h1>Future Trends for IoT in 2018</h1>
New IoT device ideas won’t do you much good unless you at least know the basic technology trends that are set to impact IoT over the next year(s). These include:
<ol>
<li><strong>Big Data</strong> & Data Engineering: Sensors that are embedded within IoT devices spin off machine-generated data like it’s going out of style. For IoT to function, the platform must be solidly engineered to handle big data. Be assured, that requires some serious data engineering.</li>
<li><strong>Machine Learning</strong> Data Science: While a lot of IoT devices are still operated according to rules-based decision criteria, the age of artificial intelligence is upon us. IoT will increasingly depend on machine learning algorithms to control device operations so that devices are able to autonomously respond to a complex set of overlapping stimuli.</li>
<li><strong>Blockchain</strong>-Enabled Security: Above all else, IoT networks must be secure. Blockchain technology is primed to meet the security demands that come along with building and expanding the IoT.</li>
</ol>
<h1>Best New IoT Device Ideas</h1>
This listing of new IoT device ideas has been sub-divided according to the main technology upon which the IoT devices are built. Below I’m providing a list of new IoT device ideas, but for detailed instructions on how to build these IoT applications, I recommend the <a href="https://click.linksynergy.com/deeplink?id=*JDLXjeE*wk&mid=39197&murl=https%3A%2F%2Fwww.udemy.com%2Ftopic%2Finternet-of-things%2F%3Fsort%3Dhighest-rated">IoT courses on Udemy</a> (ß Please note: if you purchase a Udemy course through this link, I may receive a small commission), or courses that are available at <a href="http://www.skyfilabs.com/iot-online-courses">SkyFi</a> and <a href="https://www.coursera.org/specializations/iot">Coursera</a>.
<h2>Raspberry Pi IoT Ideas</h2>
Using Raspberry Pi as open-source hardware, you can build IoT applications that offer any one of the following benefits:
<ol>
<li>Enable built-in sensing to build a weather station that measures ambient temperature and humidity</li>
<li>Build a system that detects discrepancies in electrical readings to identify electricity theft</li>
<li>Use IoT to build a Servo that is controlled by motion detection readings</li>
<li>Build a smart control switch that operates devices based on external stimuli. Use this for home automation.</li>
<li>Build a music playing application that enables music for each room in your house</li>
<li>Implement biometrics on IoT-connected devices</li>
</ol>
<h2>Arduino IoT Ideas</h2>
There are a number of new IoT device ideas that deploy Arduino as a microcontroller. These include:
<ol>
<li>Integrate Arduino with Android to build a remote-control RGB LED device.</li>
<li>Connect PIR sensors across the IoT to implement a smart building.</li>
<li>Build a temperature and sunlight sensor system to remotely monitor and control the conditions of your garden.</li>
<li>Deploy Arduino and IoT to automate your neighborhood streetlights.</li>
<li>Build a smart irrigation system based on IoT-connected temperature and moisture sensors built-in to your agricultural plants.</li>
</ol>
[caption id="attachment_3807" align="aligncenter" width="300"]<a href="bit.ly/LPlNDJj"><img class="wp-image-3807 size-medium" src="http://www.data-mania.com/blog/wp-content/uploads/2017/12/IMG_3058-300x295.jpg" alt="" width="300" height="295" /></a> An IoT Chatbot Tree at the Ericsson Studio[/caption]
<h2>Wireless (GSM) IoT Ideas</h2>
Several new IoT device ideas are developed around the GSM wireless network. Those are:
<ol>
<li>Monitor soil moisture to automate agricultural irrigation cycles.</li>
<li>Automate and control the conditions of a greenhouse.</li>
<li>Enable bio-metrics to build a smart security system for your home or office building</li>
<li>Build an autonomously operating fitness application that automatically makes recommendations based on motion detection and heart rate sensors that are embedded on wearable fitness trackers.</li>
<li>Build a healthcare monitoring system that tracks, informs, and automatically alerts healthcare providers based on sensor readings that describe a patients vital statistics (like temperature, pulse, blood pressure, etc).</li>
</ol>
<h2>IoT Automation Ideas</h2>
Almost all new IoT device ideas offer automation benefits, but to outline a few more ideas:
<ol>
<li>Build an IoT device that automatically locates and reports the closest nearby parking spot.</li>
<li>Build a motion detection system that automatically issues emails or sms messages to alert home owners of a likely home invasion.</li>
<li>Use temperature sensors connected across the IoT to automatically alert you if your home windows or doors have been left open.</li>
<li>Use bio-metric sensors to build a smart system that automate security for your home or office building</li>
</ol>
To learn more about IoT and what’s happening on the leading edge, be sure to pop over to Ericsson’s Studio Tour recap and <span style="text-decoration: underline;"><strong><a href="http://bit.ly/LPlNDJj">watch these videos</a></strong></span>.
<em>(I captured some of this content on behalf of DevMode Strategies during an invite-only tour of the Ericsson Studio in Kista. Rest assure, the text and opinions are my own</em>)
<p class='description'>...</p>
'''
our_soup_object = BeautifulSoup(our_html_document, 'html.parser')
print(our_soup_object)
print(our_soup_object.prettify()[0:300])
```
### Tag objects
#### Tag names
```
soup_object = BeautifulSoup('<h1 attribute_1="Heading Level 1">Future Trends for IoT in 2018</h1>', "lxml")
tag = soup_object.h1
type(tag)
print(tag)
tag.name
tag.name = 'heading 1'
tag
tag.name
```
#### Tag attributes
```
soup_object = BeautifulSoup('<h1 attribute_1="Heading Level 1">Future Trends for IoT in 2018</h1>', "lxml")
tag = soup_object.h1
tag
tag['attribute_1']
tag.attrs
tag['attribute_2'] = 'Heading Level 1*'
tag.attrs
tag
del tag['attribute_2']
tag
del tag['attribute_1']
tag.attrs
```
#### Navigating a parse tree using tags
```
# First we will recreate our original parse tree.
our_html_document = '''
<html><head><title>IoT Articles</title></head>
<body>
<p class='title'><b>2018 Trends: Best New IoT Device Ideas for Data Scientists and Engineers</b></p>
<p class='description'>It’s almost 2018 and IoT is on the cusp of an explosive expansion. In this article, I offer you a listing of new IoT device ideas that you can use...
<br>
<br>
It’s almost 2018 and IoT is on the cusp of an explosive expansion. In this article, I offer you a listing of new IoT device ideas that you can use to get practice in designing your first IoT applications.
<h1>Looking Back at My Coolest IoT Find in 2017</h1>
Before going into detail about best new IoT device ideas, here’s the backstory. <span style="text-decoration: underline;"><strong><a href="http://bit.ly/LPlNDJj">Last month Ericsson Digital invited me</a></strong></span> to tour the Ericsson Studio in Kista, Sweden. Up until that visit, <a href="http://www.data-mania.com/blog/m2m-vs-iot/">IoT</a> had been largely theoretical to me. Of course, I know the usual mumbo-jumbo about wearables and IoT-connected fitness trackers. That stuff is all well and good, but it’s somewhat old hat – plus I am not sure we are really benefiting so much from those, so I’m not that impressed.
It wasn’t until I got to the Ericsson Studio that I became extremely impressed by how far IoT has really come. Relying on the promise of the 5g network expansion, IoT-powered smart devices are on the cusp of an explosive growth in adoption. It was Ericsson’s Smart Car that sent me reeling:<a href="bit.ly/LPlNDJj"><img class="aligncenter size-full wp-image-3802" src="http://www.data-mania.com/blog/wp-content/uploads/2017/12/new-IoT-device-ideas.jpg" alt="Get your new iot device ideas here" width="1024" height="683" /></a>
This car is connected to Ericsson’s Connected Vehicle Cloud, an IoT platform that manages services for the Smart Cars to which it’s connected. The Volvo pictured above acts as a drop-off location for groceries that have been ordered by its owner.
To understand how it works, imagine you’re pulling your normal 9-to-5 and you know you need to grab some groceries on your way home. Well, since you’re smart you’ve used Ericsson IoT platform to connect your car to the local grocery delivery service (<a href="http://mat.se/">Mat.se</a>), so all you need to do is open the Mat.se app and make your usual order. Mat.se automatically handles the payment, grocery selection, delivery, and delivery scheduling. Since your car is IoT-enabled, Mat.se issues its trusted delivery agent a 1-time token to use for opening your car in order to place your groceries in your car for you at 4:40 pm (just before you get off from work).
To watch some of the amazing IoT device demos I witnessed at Ericsson Studio, make sure to go <span style="text-decoration: underline;"><strong><a href="http://bit.ly/LPlNDJj">watch the videos on this page</a></strong></span>.
<h1>Future Trends for IoT in 2018</h1>
New IoT device ideas won’t do you much good unless you at least know the basic technology trends that are set to impact IoT over the next year(s). These include:
<ol>
<li><strong>Big Data</strong> & Data Engineering: Sensors that are embedded within IoT devices spin off machine-generated data like it’s going out of style. For IoT to function, the platform must be solidly engineered to handle big data. Be assured, that requires some serious data engineering.</li>
<li><strong>Machine Learning</strong> Data Science: While a lot of IoT devices are still operated according to rules-based decision criteria, the age of artificial intelligence is upon us. IoT will increasingly depend on machine learning algorithms to control device operations so that devices are able to autonomously respond to a complex set of overlapping stimuli.</li>
<li><strong>Blockchain</strong>-Enabled Security: Above all else, IoT networks must be secure. Blockchain technology is primed to meet the security demands that come along with building and expanding the IoT.</li>
</ol>
<h1>Best New IoT Device Ideas</h1>
This listing of new IoT device ideas has been sub-divided according to the main technology upon which the IoT devices are built. Below I’m providing a list of new IoT device ideas, but for detailed instructions on how to build these IoT applications, I recommend the <a href="https://click.linksynergy.com/deeplink?id=*JDLXjeE*wk&mid=39197&murl=https%3A%2F%2Fwww.udemy.com%2Ftopic%2Finternet-of-things%2F%3Fsort%3Dhighest-rated">IoT courses on Udemy</a> (ß Please note: if you purchase a Udemy course through this link, I may receive a small commission), or courses that are available at <a href="http://www.skyfilabs.com/iot-online-courses">SkyFi</a> and <a href="https://www.coursera.org/specializations/iot">Coursera</a>.
<h2>Raspberry Pi IoT Ideas</h2>
Using Raspberry Pi as open-source hardware, you can build IoT applications that offer any one of the following benefits:
<ol>
<li>Enable built-in sensing to build a weather station that measures ambient temperature and humidity</li>
<li>Build a system that detects discrepancies in electrical readings to identify electricity theft</li>
<li>Use IoT to build a Servo that is controlled by motion detection readings</li>
<li>Build a smart control switch that operates devices based on external stimuli. Use this for home automation.</li>
<li>Build a music playing application that enables music for each room in your house</li>
<li>Implement biometrics on IoT-connected devices</li>
</ol>
<h2>Arduino IoT Ideas</h2>
There are a number of new IoT device ideas that deploy Arduino as a microcontroller. These include:
<ol>
<li>Integrate Arduino with Android to build a remote-control RGB LED device.</li>
<li>Connect PIR sensors across the IoT to implement a smart building.</li>
<li>Build a temperature and sunlight sensor system to remotely monitor and control the conditions of your garden.</li>
<li>Deploy Arduino and IoT to automate your neighborhood streetlights.</li>
<li>Build a smart irrigation system based on IoT-connected temperature and moisture sensors built-in to your agricultural plants.</li>
</ol>
[caption id="attachment_3807" align="aligncenter" width="300"]<a href="bit.ly/LPlNDJj"><img class="wp-image-3807 size-medium" src="http://www.data-mania.com/blog/wp-content/uploads/2017/12/IMG_3058-300x295.jpg" alt="" width="300" height="295" /></a> An IoT Chatbot Tree at the Ericsson Studio[/caption]
<h2>Wireless (GSM) IoT Ideas</h2>
Several new IoT device ideas are developed around the GSM wireless network. Those are:
<ol>
<li>Monitor soil moisture to automate agricultural irrigation cycles.</li>
<li>Automate and control the conditions of a greenhouse.</li>
<li>Enable bio-metrics to build a smart security system for your home or office building</li>
<li>Build an autonomously operating fitness application that automatically makes recommendations based on motion detection and heart rate sensors that are embedded on wearable fitness trackers.</li>
<li>Build a healthcare monitoring system that tracks, informs, and automatically alerts healthcare providers based on sensor readings that describe a patients vital statistics (like temperature, pulse, blood pressure, etc).</li>
</ol>
<h2>IoT Automation Ideas</h2>
Almost all new IoT device ideas offer automation benefits, but to outline a few more ideas:
<ol>
<li>Build an IoT device that automatically locates and reports the closest nearby parking spot.</li>
<li>Build a motion detection system that automatically issues emails or sms messages to alert home owners of a likely home invasion.</li>
<li>Use temperature sensors connected across the IoT to automatically alert you if your home windows or doors have been left open.</li>
<li>Use bio-metric sensors to build a smart system that automate security for your home or office building</li>
</ol>
To learn more about IoT and what’s happening on the leading edge, be sure to pop over to Ericsson’s Studio Tour recap and <span style="text-decoration: underline;"><strong><a href="http://bit.ly/LPlNDJj">watch these videos</a></strong></span>.
<em>(I captured some of this content on behalf of DevMode Strategies during an invite-only tour of the Ericsson Studio in Kista. Rest assure, the text and opinions are my own</em>)
<p class='description'>...</p>
'''
our_soup_object = BeautifulSoup(our_html_document, 'html.parser')
our_soup_object.head
our_soup_object.title
our_soup_object.body.b
our_soup_object.body
our_soup_object.li
our_soup_object.a
```
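Beyond dotted navigation, the same parse tree can be searched. A minimal sketch (assuming `bs4` is installed; the document here is a small synthetic stand-in for `our_html_document`) using `find_all` to pull every list item at once:

```python
from bs4 import BeautifulSoup

small_doc = """
<html><body>
<h2>Raspberry Pi IoT Ideas</h2>
<ol>
<li>Build a weather station</li>
<li>Detect electricity theft</li>
</ol>
</body></html>
"""

soup = BeautifulSoup(small_doc, 'html.parser')

# find_all returns every matching tag in document order;
# get_text strips the markup from each match.
ideas = [li.get_text() for li in soup.find_all('li')]
print(ideas)
```

Unlike `our_soup_object.li`, which returns only the first `<li>` in the tree, `find_all` collects all of them, which is usually what you want when scraping lists like the ones above.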
```
import os
import argparse
import xml.etree.ElementTree as ET
import pandas as pd
import numpy as np
import csv
# Useful if you want to perform stemming.
import nltk
stemmer = nltk.stem.PorterStemmer()
categories_file_name = r'/workspace/datasets/product_data/categories/categories_0001_abcat0010000_to_pcmcat99300050000.xml'
queries_file_name = r'/workspace/datasets/train.csv'
output_file_name = r'/workspace/datasets/labeled_query_data.txt'
# parser = argparse.ArgumentParser(description='Process arguments.')
# general = parser.add_argument_group("general")
# general.add_argument("--min_queries", default=1, help="The minimum number of queries per category label (default is 1)")
# general.add_argument("--output", default=output_file_name, help="the file to output to")
# args = parser.parse_args()
# output_file_name = args.output
# if args.min_queries:
# min_queries = int(args.min_queries)
# The root category, named Best Buy with id cat00000, doesn't have a parent.
min_queries = 10
root_category_id = 'cat00000'
tree = ET.parse(categories_file_name)
root = tree.getroot()
# Parse the category XML file to map each category id to its parent category id in a dataframe.
categories = []
parents = []
for child in root:
id = child.find('id').text
cat_path = child.find('path')
cat_path_ids = [cat.find('id').text for cat in cat_path]
leaf_id = cat_path_ids[-1]
if leaf_id != root_category_id:
categories.append(leaf_id)
parents.append(cat_path_ids[-2])
parents_df = pd.DataFrame(list(zip(categories, parents)), columns =['category', 'parent'])
# Read the training data into pandas, only keeping queries with non-root categories in our category tree.
df = pd.read_csv(queries_file_name)[['category', 'query']]
df = df[df['category'].isin(categories)]
category_value_counts= pd.DataFrame(df['category'].value_counts().reset_index().\
rename(columns = {"index": "category", "category": "category_count"}))
faulty_categories = list(category_value_counts[category_value_counts['category_count'] < min_queries]['category'])
while len(faulty_categories) > 0:
df.loc[df['category'].isin(faulty_categories), 'category'] = df['category'].\
map(parents_df.set_index('category')['parent'])
category_value_counts= pd.DataFrame(df['category'].value_counts().reset_index().\
rename(columns = {"index": "category", "category": "category_count"}))
faulty_categories = list(category_value_counts[category_value_counts['category_count'] < min_queries]['category'])
# find faulty categories
category_value_counts= pd.DataFrame(df['category'].value_counts().reset_index().\
rename(columns = {"index": "category", "category": "category_count"}))
faulty_categories = list(category_value_counts[category_value_counts['category_count'] < min_queries]['category'])
df.loc[df['category'].isin(faulty_categories), 'category'] = df['category'].map(parents_df.set_index('category')['parent'])
faulty_categories
df.isnull().sum()
```
<a href="https://colab.research.google.com/github/vs1991/ga-learner-dsmp-repo/blob/master/Capstone_project_EDA.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Loading from drive
```
from google.colab import drive
drive.mount('../Greyatom',force_remount=True)
cd ../Greyatom/'My Drive'/'my first book'/'capstone project'
ls
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import plotly
import plotly.graph_objs as go
from plotly.offline import iplot
import plotly.express as px
```
# Loading Files
```
#cleaned invoice data
inv=pd.read_csv('Invoice_Cleaned.csv')
inv=inv.drop(columns=['Unnamed: 0'])
inv['Customer No.'] = inv['Customer No.'].str.lstrip('0')
inv.head()
#cleaned customer data
customer=pd.read_csv('Cusomer_Cleaned.csv')
customer=customer.drop(columns=['Unnamed: 0','Business Partner'])
customer['Customer No.'] = customer['Customer No.'].astype(str)
customer['Data Origin'].value_counts()
#cleaned jtd data
jtd=pd.read_csv('JTD_Cleaned.csv')
jtd=jtd.drop(columns=['Unnamed: 0'])
jtd.head()
plant=pd.read_csv('Plant_Cleaned.csv')
plant=plant.drop(columns=['Unnamed: 0'])
plant.head()
#jtd_grouped=jtd.groupby(['DBM Order','Item Category'],as_index=False).agg({"Net value":'sum',"Order Quantity":"sum"})
#inv_jtd=pd.merge(inv,jtd_grouped,how="left",left_on='Job Card No',right_on='DBM Order')
inv_cust=pd.merge(inv,customer,on='Customer No.',how='left')
inv_plant=pd.merge(inv,plant,on='Plant',how='left')
inv_cust_plant=pd.merge(inv_cust,plant,on='Plant',how='left')
#inv_jtd_customer=pd.merge(inv_jtd,customer,how='left',on='Customer No.')
#combined_data=pd.merge(inv_jtd_customer,plant,on='Plant',how='left')
inv_cust_plant.isnull().sum()/len(inv_cust_plant)
inv_cust_plant=inv_cust_plant.drop(columns=['Name 1','House number and street','PO Box'])
inv_cust_plant.shape
```
# **EDA**
# Revenue Analysis
1. Year Wise revenue Analysis
2. Order Wise revenue Analysis
3. Make wise revenue Analysis
4. State wise revenue Analysis
```
year_income=inv_cust_plant.groupby(['Job Year','Job Month'],as_index=False)['Total Amt Wtd Tax.'].sum()
fig = px.line(year_income, x="Job Month", y="Total Amt Wtd Tax.", color='Job Year')
fig.show()
order_income=inv_cust_plant.groupby(['Job Year','Order Type'],as_index=False)['Total Amt Wtd Tax.'].sum()
fig = px.line(order_income, x="Job Year", y="Total Amt Wtd Tax.", color='Order Type')
fig.update_layout(title='Year Wise Order Revenue')
fig.show()
make_income=inv_cust_plant.groupby(['Job Year','Make'],as_index=False)['Total Amt Wtd Tax.'].sum()
fig = px.line(make_income, x="Job Year", y="Total Amt Wtd Tax.", color='Make')
fig.update_layout(title='Year Wise Make/Car Revenue')
fig.show()
state_income=inv_cust_plant.groupby(['Job Year','State'],as_index=False)['Total Amt Wtd Tax.'].sum()
fig = px.line(state_income, x="Job Year", y="Total Amt Wtd Tax.", color='State')
fig.update_layout(title='State wise Revenue')
fig.show()
#model_income.sort_values(by='Total Amt Wtd Tax.',ascending=False)
```
# Source Income
```
#source affecting to more income
source_income=inv_cust_plant.groupby(['Job Year','Data Origin'],as_index=False)['Total Amt Wtd Tax.'].sum()
fig = px.line(source_income, x="Job Year", y="Total Amt Wtd Tax.", color='Data Origin')
fig.update_layout(title='Source wise income ')
fig.show()
#model_income.sort_values(by='Total Amt Wtd Tax.',ascending=False)
```
# Labour Revenue Analysis?
```
# Mean labour charges according to each order
labour_charge=inv_cust_plant[['Labour Total','Order Type']]
lab=pd.DataFrame(labour_charge.groupby(['Order Type'])['Labour Total'].mean()).rename(columns={'Labour Total':'Mean Labour Cost'}).reset_index()
lab.head()
fig = px.bar(lab, y='Mean Labour Cost', x='Order Type')
fig.update_layout(template='ggplot2', title="Mean Labour charges for various order type")
fig.show()
labor_year_income=inv_cust_plant.groupby(['Job Year','Job Month'],as_index=False)['Labour Total'].sum()
fig = px.line(labor_year_income, x="Job Year", y="Labour Total", color='Job Month')
fig.update_layout(template='ggplot2', title="labor Charges For various months ")
fig.show()
month_income=inv_cust_plant.groupby(['Job Month'],as_index=False)['Labour Total'].sum()
fig = px.line(month_income, x="Job Month", y="Labour Total")
fig.update_layout(template='ggplot2', title="Overall Labor costing during various months ")
fig.show()
order_year_income=inv_cust_plant.groupby(['Order Type','Job Year'],as_index=False)['Labour Total'].sum()
fig = px.line(order_year_income, x="Job Year", y="Labour Total", color='Order Type')
fig.update_layout(template='ggplot2', title="labor Charges For various order in all years ")
fig.show()
```
## Total number of plants in each state
```
#total number of plants in each states
state=pd.crosstab(columns=plant['State'],index='Plant')
state.head()
city=pd.crosstab(columns=plant['City'],index='Plant')
city.head()
```
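The crosstab above produces one count per state; `value_counts` yields the same tally directly as a Series, which also feeds straight into the bar plots below. A sketch on a tiny synthetic frame (the real `plant` frame comes from `Plant_Cleaned.csv`):

```python
import pandas as pd

plant_demo = pd.DataFrame({"Plant": [1, 2, 3, 4],
                           "State": ["Kerala", "Kerala", "Punjab", "Goa"]})

# One row per plant, so counting State values counts plants per state.
state_counts = plant_demo["State"].value_counts()
print(state_counts.to_dict())
```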
## Number of plants in each city
```
#graphical representation of number of plants in each state
plt.figure(figsize=(15,4))
plt.xticks(rotation=90)
sns.set(style='darkgrid')
ax=sns.barplot(plant['State'].value_counts().index,plant['State'].value_counts())
#graphical representation of number of plants in each city
plt.figure(figsize=(15,4))
plt.xticks(rotation=90)
sns.set(style='darkgrid')
ax=sns.barplot(plant['City'].value_counts().head(30).index,plant['City'].value_counts().head(30))
```
## Number of plants according to various zones
```
#divide states into zones
northern_zone =['Chandigarh','Delhi','Haryana','Himachal Pradesh','Jammu and Kashmir','Ladakh'
,'Punjab','Rajasthan','Uttarakhand','Uttar Pradesh']
north_eastern_Zone =[ ]
eastern_zone =['Bihar', 'Jharkhand', 'Odisha','West Bengal','Assam', 'Arunachal Pradesh', 'Manipur', 'Meghalaya', 'Mizoram', 'Nagaland']
central_western_zone=['Madhya Pradesh', 'Chhattisgarh', 'Goa', 'Gujarat', 'Maharashtra']
southern_zone =[ 'Andhra Pradesh', 'Karnataka', 'Kerala', 'Puducherry', 'Tamil Nadu','Telangana']
f1=plant['State'].isin(northern_zone)
f2=plant['State'].isin(eastern_zone)
f3=plant['State'].isin(central_western_zone)
f4=plant['State'].isin(southern_zone)
#filt5=plant['State'].isin(north_eastern_Zone)
n_state =plant.loc[f1]
e_state =plant.loc[f2]
c_w_state=plant.loc[f3]
s_state =plant.loc[f4]
#north_east_state=plant.loc[filt5]
trace1=go.Bar(
y = n_state['State'].value_counts().values,
x = n_state['State'].value_counts().index,
name = "Northern Zone",
marker = dict(color = 'rgba(255, 174, 255, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace2 = go.Bar(
y =s_state['State'].value_counts().values,
x = s_state['State'].value_counts().index,
name = "Southern Zone",
marker = dict(color = 'rgba(155, 255, 128, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace3 = go.Bar(
y =e_state['State'].value_counts().values ,
x = e_state['State'].value_counts().index,
name = "Eastern Zone",
marker = dict(color = 'rgba(100, 150, 255, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace4 = go.Bar(
y =c_w_state['State'].value_counts().values ,
x = c_w_state['State'].value_counts().index,
name = "Central and Western Zone",
marker = dict(color = 'rgba(255, 225,1, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
fig = go.Figure(data = [trace1,trace2,trace3,trace4])
fig.update_layout(template='ggplot2', title="Plant Count in various state")
#fig, axs=plt.subplots(nrows=2,ncols=2,figsize=(16.5,10))
#sns.barplot(north_state['State'].value_counts().values,north_state['State'].value_counts().index,ax=axs[0,0])
#axs[0,0].set_title('Northern Zone')
#sns.barplot(east_state['State'].value_counts().values,east_state['State'].value_counts().index,ax=axs[0,1])
#axs[0,1].set_title('Eastern Zone')
#sns.barplot(cent_west_state['State'].value_counts().values,cent_west_state['State'].value_counts().index,ax=axs[1,0])
#axs[1,0].set_title('Cental & Western Zone')
#sns.barplot(south_state['State'].value_counts().values,south_state['State'].value_counts().index,ax=axs[1,1])
#axs[1,1].set_title('Southern Zone')
#sns.barplot(north_east_state['State'].value_counts().values,north_east_state['State'].value_counts().index,ax=axs[2,0])
#axs[2,0].set_title('North Eastern Zone')
```
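An alternative to the four separate `isin()` filters above: build one state-to-zone lookup and assign the zone in a single `map` pass. A sketch with a few representative states (the full lists are defined above):

```python
import pandas as pd

# State -> zone lookup; extend with the full zone lists defined earlier.
zones = {
    "Punjab": "Northern Zone",
    "Bihar": "Eastern Zone",
    "Goa": "Central and Western Zone",
    "Kerala": "Southern Zone",
}

demo = pd.DataFrame({"State": ["Punjab", "Kerala", "Goa"]})
demo["Zone"] = demo["State"].map(zones)  # unknown states become NaN
print(demo["Zone"].tolist())
```

With a `Zone` column in place, per-zone aggregations become a single `groupby('Zone')` instead of four filtered frames.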
# Location Based Analysis
```
#k=inv_plant[inv_plant['State']=='Maharashtra']
#for i in ['CITY','City','State','District','Regn No']:
# print(k[i].value_counts())
#print('*'*80)
inv_plant.shape
#city=[]
#state=[]
#car_count=[]
#for i in loc1['City'].unique():
# city.append(i)
# car_count.append(len(loc1[loc1['City']==i]['Regn No'].value_counts()))
#state.append(loc1[loc1['City']==i]['State'].value_counts().index)
#print('*'*60)
#k=loc[loc['State']=='Kerala']
#len(k[k['City']=='Kottayam']['Regn No'].value_counts())
#plant_most_cars=pd.DataFrame({'City':city,'State':state,'Total Unique Cars':car_count})
#plant_most_cars.sort_values(by=['Total Unique Cars'],inplace=True,ascending=False)
#plant_most_cars
#sns.barplot(plant_most_cars['Total Unique Cars'].head(10),plant_most_cars['City'].head(10))
filt1=inv_plant['State'].isin(northern_zone)
north_state=inv_plant.loc[filt1]
filt2=inv_plant['State'].isin(eastern_zone)
east_state=inv_plant.loc[filt2]
filt3=inv_plant['State'].isin(central_western_zone)
cent_west_state=inv_plant.loc[filt3]
filt4=inv_plant['State'].isin(southern_zone)
south_state=inv_plant.loc[filt4]
```
# Which make and model is more popular?
1. Make popular in various zones
2. Model popular in various zone
3. Make with most sales
```
#graphical representation of famous makes among various zones
from plotly.subplots import make_subplots
fig = make_subplots(rows=4, cols=2)
#for northern zone
fig.add_trace(go.Bar(
y = north_state['Make'].value_counts().head(5).values,
x = north_state['Make'].value_counts().head(5).index,
marker=dict(color=[1, 2, 3,4,5])),
1, 1)
fig.add_trace(go.Bar(
y = north_state['Model'].value_counts().head(5).values,
x = north_state['Model'].value_counts().head(5).index,
marker=dict(color=[15,8,9,10,11])),
1, 2)
fig.update_xaxes(title_text="Make count (Northern Zone)", row=1, col=1)
fig.update_xaxes(title_text="Model count (Northern Zone)", row=1, col=2)
#figure for eastern zone
fig.add_trace(go.Bar(
y = east_state['Make'].value_counts().head(5).values,
x = east_state['Make'].value_counts().head(5).index,
marker=dict(color=[1, 2, 3,4,5])),
2, 1)
fig.add_trace(go.Bar(
y = east_state['Model'].value_counts().head(5).values,
x = east_state['Model'].value_counts().head(5).index,
marker=dict(color=[15,8,9,10,11])),
2, 2)
fig.update_xaxes(title_text="Make count (Eastern Zone)", row=2, col=1)
fig.update_xaxes(title_text="Model count (Eastern Zone)", row=2, col=2)
#figure for southern zone
fig.add_trace(go.Bar(
y = south_state['Make'].value_counts().head(5).values,
x = south_state['Make'].value_counts().head(5).index,
marker=dict(color=[1, 2, 3,4,5])),
3, 1)
fig.add_trace(go.Bar(
y = south_state['Model'].value_counts().head(5).values,
x = south_state['Model'].value_counts().head(5).index,
marker=dict(color=[15,8,9,10,11])),
3, 2)
fig.update_xaxes(title_text="Make count (Southern Zone)", row=3, col=1)
fig.update_xaxes(title_text="Model count (Southern Zone)", row=3, col=2)
#figure for centeral and western zone
fig.add_trace(go.Bar(
y = cent_west_state['Make'].value_counts().head(5).values,
x = cent_west_state['Make'].value_counts().head(5).index,
marker=dict(color=[1, 2, 3,4,5])),
4, 1)
fig.add_trace(go.Bar(
y = cent_west_state['Model'].value_counts().head(5).values,
x = cent_west_state['Model'].value_counts().head(5).index,
marker=dict(color=[15,8,9,10,11])),
4, 2)
fig.update_xaxes(title_text="Make count (Central & Western Zone)", row=4, col=1)
fig.update_xaxes(title_text="Model count (Central & Western Zone)", row=4, col=2)
fig.update_layout(template='ggplot2', title="Zonal Count",height=1100, width=1100)
fig.show()
#sns.scatterplot(inv_plant[inv_plant["Make"]=="PORCHE"]["Total Amt Wtd Tax."],inv_cust_plant['Total Amt Wtd Tax.'])
```
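The four near-identical subplot blocks above differ only in the zone frame and the column being counted, so the quantities they plot can be computed in one loop. A sketch with tiny synthetic stand-ins for `north_state` / `east_state` / `south_state` / `cent_west_state`:

```python
import pandas as pd

# Synthetic stand-ins; in the notebook these are the filtered zone frames.
zone_frames = {
    "Northern Zone": pd.DataFrame({"Make": ["A", "A", "B"], "Model": ["x", "y", "y"]}),
    "Eastern Zone":  pd.DataFrame({"Make": ["B", "B", "C"], "Model": ["z", "z", "x"]}),
}

# Top-5 counts per (zone, column) pair, exactly what each go.Bar trace plots.
top_counts = {
    (zone, column): frame[column].value_counts().head(5)
    for zone, frame in zone_frames.items()
    for column in ("Make", "Model")
}
print(top_counts[("Northern Zone", "Make")].to_dict())
```

Each entry could then be added with `fig.add_trace(...)` inside the same loop, with the row index taken from `enumerate` over the zones, rather than repeating the block once per zone.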
# Which area has the most cars?
1. Zone wise
2. Top 5 States
```
#according to zones
data=[['Northern Zone',north_state['Make'].count()],['Eastern Zone',east_state['Make'].count()],
['Central & Western Zone',cent_west_state['Make'].count()],['Southern Zone',south_state['Make'].count()]]
overall=pd.DataFrame(data,columns=['Zones','Count'])
overall.head()
import plotly.graph_objects as go
#graphical representation of most cars in various zones
colors = ['gold', 'mediumturquoise', 'darkorange', 'lightgreen']
fig = go.Figure(data=[go.Pie(labels=overall['Zones'],
values=overall['Count'])])
fig.update_traces(hoverinfo='label+percent', textinfo='value', textfont_size=20,
marker=dict(colors=colors, line=dict(color='#000000', width=2)))
fig.update_layout(template='ygridoff', title="Zone Wise Car Count")
fig.show()
#overall car count in each state
car1=[]
state1=[]
for i in inv_plant['State'].unique():
car1.append(inv_plant[inv_plant['State']==i]['Make'].count())
state1.append(i)
df1=pd.DataFrame({'States':state1,'car count':car1})
df1=df1.sort_values(by='car count',ascending=False)
df1
#state wise count
colors = ['gold', 'mediumturquoise', 'darkorange', 'lightgreen']
fig = go.Figure(data=[go.Pie(labels=df1['States'][0:10],
values=df1['car count'][0:10])])
fig.update_traces(hoverinfo='label+percent', textinfo='value', textfont_size=20,
marker=dict(colors=colors, line=dict(color='#000000', width=2)))
fig.update_layout(template='ggplot2', title="Top 10 state wise car count")
fig.show()
```
# Which service structure is popular in different zones?
1. Northern Zone
2. Eastern Zone
3. Central and Western Zone
4. Southern Zone
## **Northern** Zone
```
one=pd.DataFrame(north_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Uttar Pradesh'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
one=one.rename(columns={'Order Type':'count'})
one=one.reset_index()
one.head()
two=pd.DataFrame(north_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Haryana'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
two=two.rename(columns={'Order Type':'count'})
two=two.reset_index()
two.head()
three=pd.DataFrame(north_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Punjab'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
three=three.rename(columns={'Order Type':'count'})
three=three.reset_index()
three.head()
four=pd.DataFrame(north_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Uttarakhand'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
four=four.rename(columns={'Order Type':'count'})
four=four.reset_index()
four.head()
five=pd.DataFrame(north_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Himachal Pradesh'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
five=five.rename(columns={'Order Type':'count'})
five=five.reset_index()
six=pd.DataFrame(north_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Rajasthan'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
six=six.rename(columns={'Order Type':'count'})
six=six.reset_index()
seven=pd.DataFrame(north_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Chandigarh'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
seven=seven.rename(columns={'Order Type':'count'})
seven=seven.reset_index()
trace1=go.Bar(
y = one['count'],
x = one['Order Type'],
name = "Uttar Pradesh",
marker = dict(color = 'rgba(255, 174, 255, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace2 = go.Bar(
y =two['count'],
x = two['Order Type'],
name = "Haryana",
marker = dict(color = 'rgba(155, 255, 128, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace3 = go.Bar(
y =three['count'] ,
x = three['Order Type'],
name = "Punjab",
marker = dict(color = 'rgba(55, 155, 255, 0.5)',  # RGBA channels must be 0-255
line=dict(color='rgb(0,0,0)',width=1.5)))
trace4 = go.Bar(
y =four['count'] ,
x = four['Order Type'],
name = "Uttarakhand",
marker = dict(color = 'rgba(255, 225,1, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace5 = go.Bar(
y =five['count'] ,
x = five['Order Type'],
name = "Himachal Pradesh",
marker = dict(color = 'DarkSlateGrey',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace6 = go.Bar(
y =six['count'] ,
x = six['Order Type'],
name = "Rajasthan",
marker = dict(color = 'goldenrod',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace7 = go.Bar(
y =seven['count'] ,
x = seven['Order Type'],
name = "Chandigarh",
marker = dict(color = 'darksalmon',
line=dict(color='rgb(0,0,0)',width=1.5)))
fig = go.Figure(data = [trace1,trace2,trace3,trace4,trace5,trace6,trace7])
fig.update_layout(template='plotly_dark', title="Most common order type in Northern Zone")
iplot(fig)
```
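The seven near-identical `one`…`seven` blocks above can be generated in one loop. An illustrative sketch on a synthetic stand-in for `north_state` (the demo states and counts are made up):

```python
import pandas as pd

# Build {state: DataFrame(Order Type, count)} in a loop rather than
# repeating the groupby/rename/reset_index block per state.
def order_type_counts(df, states):
    grouped = df.groupby("State")["Order Type"].value_counts()
    out = {}
    for state in states:
        # .loc[state] drops the State level, leaving counts per Order Type
        out[state] = grouped.loc[state].rename("count").reset_index()
    return out

demo = pd.DataFrame({
    "State": ["Punjab", "Punjab", "Haryana"],
    "Order Type": ["Mechanical", "Mechanical", "Accidental"],
})
result = order_type_counts(demo, ["Punjab", "Haryana"])
print(result["Punjab"]["count"].tolist())  # [2]
```

Each value in `result` has the same `Order Type`/`count` columns the hand-built frames had, so the `go.Bar` traces can be created in the same loop.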
## Central and Western Zone
```
one=pd.DataFrame(cent_west_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Maharashtra'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
one=one.rename(columns={'Order Type':'count'})
one=one.reset_index()
one.head()
two=pd.DataFrame(cent_west_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Gujarat'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
two=two.rename(columns={'Order Type':'count'})
two=two.reset_index()
two.head()
three=pd.DataFrame(cent_west_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Madhya Pradesh'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
three=three.rename(columns={'Order Type':'count'})
three=three.reset_index()
three.head()
four=pd.DataFrame(cent_west_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Chhattisgarh'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
four=four.rename(columns={'Order Type':'count'})
four=four.reset_index()
four.head()
trace1=go.Bar(
y = one['count'],
x = one['Order Type'],
name = "Maharashtra",
marker = dict(color = 'rgba(255, 174, 255, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace2 = go.Bar(
y =two['count'],
x = two['Order Type'],
name = "Gujarat",
marker = dict(color = 'rgba(155, 255, 128, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace3 = go.Bar(
y =three['count'] ,
x = three['Order Type'],
name = "Madhya Pradesh",
marker = dict(color = 'rgba(55, 155, 255, 0.5)',  # RGBA channels must be 0-255
line=dict(color='rgb(0,0,0)',width=1.5)))
trace4 = go.Bar(
y =four['count'] ,
x = four['Order Type'],
name = "Chhattisgarh",
marker = dict(color = 'rgba(255, 225,1, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
fig = go.Figure(data = [trace1,trace2,trace3,trace4])
fig.update_layout(template='plotly_dark', title="Most common order type in Central & Western Zone")
iplot(fig)
```
## Eastern & North Eastern Zone
```
one=pd.DataFrame(east_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Bihar'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
one=one.rename(columns={'Order Type':'count'})
one=one.reset_index()
one.head()
two=pd.DataFrame(east_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['West Bengal'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
two=two.rename(columns={'Order Type':'count'})
two=two.reset_index()
two.head()
three=pd.DataFrame(east_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Odisha'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
three=three.rename(columns={'Order Type':'count'})
three=three.reset_index()
three.head()
four=pd.DataFrame(east_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Jharkhand'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
four=four.rename(columns={'Order Type':'count'})
four=four.reset_index()
four.head()
five=pd.DataFrame(east_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Assam'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
five=five.rename(columns={'Order Type':'count'})
five=five.reset_index()
trace1=go.Bar(
y = one['count'],
x = one['Order Type'],
name = "Bihar",
marker = dict(color = 'rgba(255, 174, 255, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace2 = go.Bar(
y =two['count'],
x = two['Order Type'],
name = "West Bengal",
marker = dict(color = 'rgba(155, 255, 128, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace3 = go.Bar(
y =three['count'] ,
x = three['Order Type'],
name = "Odisha",
marker = dict(color = 'rgba(55, 155, 255, 0.5)',  # RGBA channels must be 0-255
line=dict(color='rgb(0,0,0)',width=1.5)))
trace4 = go.Bar(
y =four['count'] ,
x = four['Order Type'],
name = "Jharkhand",
marker = dict(color = 'rgba(255, 225,1, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace5 = go.Bar(
y =five['count'] ,
x = five['Order Type'],
name = "Assam",
marker = dict(color = 'DarkSlateGrey',
line=dict(color='rgb(0,0,0)',width=1.5)))
fig = go.Figure(data = [trace1,trace2,trace3,trace4,trace5])
fig.update_layout(template='plotly_dark', title="Most common order type in Eastern & North Eastern Zone")
iplot(fig)
```
## Southern Zone
```
one=pd.DataFrame(south_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Telangana'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
one=one.rename(columns={'Order Type':'count'})
one=one.reset_index()
one.head()
two=pd.DataFrame(south_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Tamil Nadu'])
#one=pd.DataFrame(loc1groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
two=two.rename(columns={'Order Type':'count'})
two=two.reset_index()
two.head()
three=pd.DataFrame(south_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Karnataka'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
three=three.rename(columns={'Order Type':'count'})
three=three.reset_index()
three.head()
four=pd.DataFrame(south_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Puducherry'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
four=four.rename(columns={'Order Type':'count'})
four=four.reset_index()
four.head()
five=pd.DataFrame(south_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Andhra Pradesh'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
five=five.rename(columns={'Order Type':'count'})
five=five.reset_index()
six=pd.DataFrame(south_state.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False).loc['Kerala'])
#one=pd.DataFrame(loc1.groupby(['State'])['Order Type'].value_counts().sort_values(ascending=False))
six=six.rename(columns={'Order Type':'count'})
six=six.reset_index()
trace1=go.Bar(
y = one['count'],
x = one['Order Type'],
name = "Telangana",
marker = dict(color = 'rgba(255, 174, 255, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace2 = go.Bar(
y =two['count'],
x = two['Order Type'],
name = "Tamil Nadu",
marker = dict(color = 'rgba(155, 255, 128, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace3 = go.Bar(
y =three['count'] ,
x = three['Order Type'],
name = "Karnataka",
marker = dict(color = 'rgba(55, 155, 255, 0.5)',  # RGBA channels must be 0-255
line=dict(color='rgb(0,0,0)',width=1.5)))
trace4 = go.Bar(
y =four['count'] ,
x = four['Order Type'],
name = "Puducherry",
marker = dict(color = 'rgba(255, 225,1, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace5 = go.Bar(
y =five['count'] ,
x = five['Order Type'],
name = "Andhra Pradesh",
marker = dict(color = 'DarkSlateGrey',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace6 = go.Bar(
y =six['count'] ,
x = six['Order Type'],
name = "Kerala",
marker = dict(color = 'goldenrod',
line=dict(color='rgb(0,0,0)',width=1.5)))
fig = go.Figure(data = [trace1,trace2,trace3,trace4,trace5,trace6])
fig.update_layout(template='plotly_dark', title="Most common order type in Southern Zone")
iplot(fig)
```
# Which service structure is popular for a particular car?
```
one=pd.DataFrame(inv_plant.groupby(['Order Type'])['Make'].value_counts().sort_values(ascending=False)).loc['Running Repairs']
one=one.rename(columns={'Make':'count'})
one=one.reset_index()
one
two=pd.DataFrame(inv_plant.groupby(['Order Type'])['Make'].value_counts().sort_values(ascending=False)).loc['Accidental']
two=two.rename(columns={'Make':'count'})
two=two.reset_index()
two
three=pd.DataFrame(inv_plant.groupby(['Order Type'])['Make'].value_counts().sort_values(ascending=False)).loc['Mechanical']
three=three.rename(columns={'Make':'count'})
three=three.reset_index()
three
trace1=go.Bar(
y = one['count'][0:5],
x = one['Make'][0:5],
name = "Running repairs",
marker = dict(color = 'rgba(255, 174, 255, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace2 = go.Bar(
y =two['count'][0:5],
x = two['Make'][0:5],
name = "Accidental",
marker = dict(color = 'rgba(155, 255, 128, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
trace3 = go.Bar(
y =three['count'][0:5] ,
x = three['Make'][0:5],
name = "Mechanical",
marker = dict(color = 'rgba(55, 155, 255, 0.5)',  # RGBA channels must be 0-255
line=dict(color='rgb(0,0,0)',width=1.5)))
fig = go.Figure(data = [trace1,trace2,trace3])
fig.update_layout(template='ggplot2', title="Most common order types by make")
iplot(fig)
```
# Seasonal Orders
1. Year Wise Analysis
2. Overall Analysis
## Year Wise
```
for_2012=inv_cust_plant[inv_cust_plant['Job Year']==2012]
for_2013=inv_cust_plant[inv_cust_plant['Job Year']==2013]
for_2014=inv_cust_plant[inv_cust_plant['Job Year']==2014]
for_2015=inv_cust_plant[inv_cust_plant['Job Year']==2015]
for_2016=inv_cust_plant[inv_cust_plant['Job Year']==2016]
inv_cust_plant['Job Year'].value_counts().sort_index().index
for i in inv_cust_plant['Job Year'].value_counts().sort_index().index:
year=inv_cust_plant[inv_cust_plant['Job Year']==i]
rain=[6,7,8,9]
filt=year['Job Month'].isin(rain)
rain_data=year.loc[filt]
###creating dataframes of season wise analysis of order type
###for rain season:
rain_df=pd.DataFrame(rain_data['Order Type'].value_counts())
rain_df=rain_df.rename(columns={'Order Type':'count'})
rain_df=rain_df.reset_index()
rain_df=rain_df.rename(columns={'index':'order type'})
rain_df.head()
###summer
summer=[2,3,4,5]
filt2=year['Job Month'].isin(summer)
summer_data=year.loc[filt2]
###winter
winter=[10,11,12,1]
filt1=year['Job Month'].isin(winter)
winter_data=year.loc[filt1]
winter_data.head()
###for winter season
winter_df=pd.DataFrame(winter_data['Order Type'].value_counts())
winter_df=winter_df.rename(columns={'Order Type':'count'})
winter_df=winter_df.reset_index()
winter_df=winter_df.rename(columns={'index':'order type'})
winter_df.head()
###for summer season
summer_df=pd.DataFrame(summer_data['Order Type'].value_counts())
summer_df=summer_df.rename(columns={'Order Type':'count'})
summer_df=summer_df.reset_index()
summer_df=summer_df.rename(columns={'index':'order type'})
summer_df.head()
colors = ['gold', 'mediumturquoise', 'darkorange', 'lightgreen']
#fig = go.Figure(data=[go.Pie(labels=rain_df['order type'],title='Rainy Season Orders',
#values=rain_df['count'])])
#fig.update_traces(hoverinfo='label+percent', textinfo='value', textfont_size=20,
#marker=dict(colors=colors, line=dict(color='#000000', width=2)))
#fig.show()
# one pie chart per season
fig = make_subplots(rows=1, cols=3, specs=[[{'type':'domain'}, {'type':'domain'},{'type':'domain'}]],subplot_titles=['WINTER', 'RAIN','SUMMER'])
fig.add_trace(go.Pie(labels=winter_df['order type'], values=winter_df['count']),
1, 1)
fig.add_trace(go.Pie(labels=rain_df['order type'], values=rain_df['count']),
1, 2)
fig.add_trace(go.Pie(labels=summer_df['order type'], values=summer_df['count']),
1, 3)
print('for the {}'.format(i))
fig.update_layout(template='ggplot2', title='For the year {}'.format(i))
fig.show()
```
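The three `isin` filters above can also be written as a single month-to-season mapping followed by one `groupby`. A sketch using the same month-to-season assignment as the notebook (the demo frame is synthetic):

```python
import pandas as pd

# Same season boundaries as the notebook's rain/summer/winter lists.
SEASON = {m: "rain" for m in (6, 7, 8, 9)}
SEASON.update({m: "summer" for m in (2, 3, 4, 5)})
SEASON.update({m: "winter" for m in (10, 11, 12, 1)})

def seasonal_counts(df):
    # Map each Job Month to its season, then count orders per (season, type)
    return (df.assign(season=df["Job Month"].map(SEASON))
              .groupby(["season", "Order Type"])
              .size())

demo = pd.DataFrame({"Job Month": [7, 7, 12],
                     "Order Type": ["Mechanical", "Mechanical", "Accidental"]})
counts = seasonal_counts(demo)
print(counts.loc[("rain", "Mechanical")])  # 2
```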
## Overall Analysis
```
######Rainy season:
rain=[6,7,8,9]
filt=inv_plant['Job Month'].isin(rain)
rain_data=inv_plant.loc[filt]
###creating dataframes of season wise analysis of order type
###for rain season:
rain_df=pd.DataFrame(rain_data['Order Type'].value_counts())
rain_df=rain_df.rename(columns={'Order Type':'count'})
rain_df=rain_df.reset_index()
rain_df=rain_df.rename(columns={'index':'order type'})
rain_df.head()
###summer
summer=[2,3,4,5]
filt2=inv_plant['Job Month'].isin(summer)
summer_data=inv_plant.loc[filt2]
###winter
winter=[10,11,12,1]
filt1=inv_plant['Job Month'].isin(winter)
winter_data=inv_plant.loc[filt1]
winter_data.head()
###for winter season
winter_df=pd.DataFrame(winter_data['Order Type'].value_counts())
winter_df=winter_df.rename(columns={'Order Type':'count'})
winter_df=winter_df.reset_index()
winter_df=winter_df.rename(columns={'index':'order type'})
winter_df.head()
###for summer season
summer_df=pd.DataFrame(summer_data['Order Type'].value_counts())
summer_df=summer_df.rename(columns={'Order Type':'count'})
summer_df=summer_df.reset_index()
summer_df=summer_df.rename(columns={'index':'order type'})
summer_df.head()
fig = make_subplots(rows=1, cols=3, specs=[[{'type':'domain'}, {'type':'domain'},{'type':'domain'}]],subplot_titles=['WINTER', 'RAIN','SUMMER'])
fig.add_trace(go.Pie(labels=winter_df['order type'], values=winter_df['count']),
1, 1)
fig.add_trace(go.Pie(labels=rain_df['order type'], values=rain_df['count']),
1, 2)
fig.add_trace(go.Pie(labels=summer_df['order type'], values=summer_df['count']),
1, 3)
fig.update_layout(template='ggplot2', title="Overall Orders ")
fig.show()
```
# Inventory Management
```
#combination of customer,invoice,plant and item
inv_cust_plant_jtd=pd.merge(inv_cust_plant,jtd,left_on='Job Card No',right_on='DBM Order')
inv_cust_plant_jtd.head()
#inventory management
#P002 is for parts
inventory=inv_cust_plant_jtd[['Make','Model','Order Type','Item Category','Description','Material','Order Quantity','Net value','Target quantity UoM','Parts Total']]
z=inventory[inventory['Item Category']=='P002']
parts=z.groupby(['Material','Description'],as_index=False)['Net value'].sum().sort_values(by='Net value',ascending=False)#,'Net values':'sum'})
parts['Net value']=parts['Net value'].round(2)  # keep numeric so Plotly treats the y-axis as continuous
trace1 = go.Bar(
y =parts['Net value'][0:10],
x = parts['Description'][0:10]
)
fig = go.Figure(data = [trace1])
fig.update_layout(template='ggplot2', title="Top 10 most sold parts according to revenue")
#services code P010
services=inventory[inventory['Item Category']=='P010']
s=services.groupby(['Material','Description'],as_index=False)['Net value'].sum().sort_values(by='Net value',ascending=False)#,'Net values':'sum'})
s['Net value']=s['Net value'].round(2)  # keep numeric so Plotly treats the y-axis as continuous
trace1 = go.Bar(
y =s['Net value'][0:10],
x =s['Description'][0:10]
)
fig = go.Figure(data = [trace1])
fig.update_layout(template='ggplot2', title="Top 10 Service provided according to revenue")
make=z.groupby(['Model','Make','Description'],as_index=False)['Net value'].sum()
ma=[]
description=[]
famous_parts=[]
for i in make['Make'].unique():
o=make[make['Make']==i].sort_values(by='Net value',ascending=False)
ma.append(i)
description.append(o['Description'].iloc[0])
famous_parts.append(o['Net value'].iloc[0])
df1=pd.DataFrame({'Make':ma,'description':description,'value':famous_parts})
df1
ser=services.groupby(['Model','Make','Description'],as_index=False)['Net value'].sum()
ma1=[]
description1=[]
famous_parts1=[]
for i in ser['Make'].unique():
o=ser[ser['Make']==i].sort_values(by='Net value',ascending=False)
ma1.append(i)
description1.append(o['Description'].iloc[0])
famous_parts1.append(o['Net value'].iloc[0])
df2=pd.DataFrame({'Make':ma1,'description':description1,'value':famous_parts1})
df2
```
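The two "top item per Make" loops above have an idiomatic `idxmax` equivalent: sum per (Make, Description), then pick the row with the largest sum within each Make. A sketch on synthetic data (column names follow the notebook; the makes and parts are made up):

```python
import pandas as pd

demo = pd.DataFrame({
    "Make": ["Honda", "Honda", "Tata"],
    "Description": ["oil", "brake", "oil"],
    "Net value": [10.0, 40.0, 7.0],
})
# Sum revenue per (Make, Description), then keep each Make's best row.
sums = demo.groupby(["Make", "Description"], as_index=False)["Net value"].sum()
top = sums.loc[sums.groupby("Make")["Net value"].idxmax()]
print(top["Description"].tolist())  # ['brake', 'oil']
```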
# Customer Segmentation
```
inv_cust_plant['Invoice_DateTime'] = pd.to_datetime(inv_cust_plant['Invoice_DateTime'])
inv_cust_plant['Invoice_Date']=inv_cust_plant['Invoice_DateTime'].dt.date
inv_cust_plant['Invoice_Date'].max()
clust=inv_cust_plant[['Customer No.','Invoice_Date','Total Amt Wtd Tax.']]
tx_user = pd.DataFrame(clust['Customer No.'].unique())
tx_user.columns = ['CustomerID']
#get the max purchase date for each customer and create a dataframe with it
tx_max_purchase = clust.groupby('Customer No.').Invoice_Date.max().reset_index()
tx_max_purchase.columns = ['CustomerID','MaxPurchaseDate']
#we take our observation point as the max invoice date in our dataset
tx_max_purchase['Recency'] = (tx_max_purchase['MaxPurchaseDate'].max() - tx_max_purchase['MaxPurchaseDate']).dt.days
#merge this dataframe to our new user dataframe
tx_user = pd.merge(tx_user, tx_max_purchase[['CustomerID','Recency']], on='CustomerID')
tx_frequency = clust.groupby('Customer No.').Invoice_Date.count().reset_index()
tx_frequency.columns = ['CustomerID','Frequency']
#add this data to our main dataframe
tx_user = pd.merge(tx_user, tx_frequency, on='CustomerID')
tx_user.head()
tx_revenue = clust.groupby('Customer No.')['Total Amt Wtd Tax.'].sum().reset_index()
tx_revenue.columns = ['CustomerID','Revenue']
#merge it with our main dataframe
tx_user = pd.merge(tx_user, tx_revenue, on='CustomerID')
tx_user
sns.distplot(tx_user['Recency'])
sns.distplot(np.log(tx_user['Frequency']))
#tx_user['Revenue']=np.log(tx_user['Revenue'])
sns.distplot(tx_user['Revenue'])
from scipy import stats
customers_fix = pd.DataFrame()
#customers_fix["Recency"] = stats.boxcox(tx_user['Recency'])[0]
customers_fix["Frequency"] = stats.boxcox(tx_user['Frequency'])[0]
customers_fix["Revenue"] = pd.Series(np.cbrt(tx_user['Revenue'])).values
customers_fix["Recency"] = pd.Series(np.cbrt(tx_user['Recency'])).values
customers_fix.tail()
# Import library
from sklearn.preprocessing import StandardScaler
# Initialize the Object
scaler = StandardScaler()
# Fit and Transform The Data
scaler.fit(customers_fix)
customers_normalized = scaler.transform(customers_fix)
# Assert that it has mean 0 and variance 1
print(customers_normalized.mean(axis = 0).round(2)) # [0. -0. 0.]
print(customers_normalized.std(axis = 0).round(2)) # [1. 1. 1.]
from sklearn.cluster import KMeans
sse = {}
for k in range(1, 11):
kmeans = KMeans(n_clusters=k, random_state=42)
kmeans.fit(customers_normalized)
sse[k] = kmeans.inertia_ # SSE to closest cluster centroid
plt.title('The Elbow Method')
plt.xlabel('k')
plt.ylabel('SSE')
sns.pointplot(x=list(sse.keys()), y=list(sse.values()))
plt.show()
model = KMeans(n_clusters=3, random_state=42)
model.fit(customers_normalized)
model.labels_.shape
tx_user["Cluster"] = model.labels_
tx_user.groupby('Cluster').agg({
'Recency':'mean',
'Frequency':'mean',
'Revenue':['mean', 'count']}).round(2)
```
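Once the three clusters exist, a common follow-up is to attach human-readable segment names by ranking the cluster centroids. This is an illustrative sketch only; the scoring rule, labels, and centroid values are assumptions, not part of the notebook:

```python
import numpy as np

def label_clusters(centroids):
    """centroids: (k, 3) array ordered (Recency, Frequency, Revenue).
    Lower recency and higher frequency/revenue rank a cluster higher."""
    score = -centroids[:, 0] + centroids[:, 1] + centroids[:, 2]
    order = np.argsort(score)[::-1]          # cluster ids, best first
    names = ["best", "mid", "at-risk"]       # hypothetical segment names
    return {int(cluster): names[rank] for rank, cluster in enumerate(order)}

# Made-up centroids standing in for model.cluster_centers_
cents = np.array([[30.0, 1.0, 1.0], [5.0, 9.0, 8.0], [90.0, 0.5, 0.2]])
print(label_clusters(cents))  # {1: 'best', 0: 'mid', 2: 'at-risk'}
```

In practice the features would need to be un-scaled (or the means taken from `tx_user.groupby('Cluster')`) before this ranking is meaningful.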
```
%load_ext blackcellmagic
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model
from sklearn.model_selection import train_test_split
from mpl_toolkits.mplot3d import Axes3D
from pathlib import Path
from sklearn import preprocessing
# code from 'aegis4048.github.io', modified for VI and yield
######################################## Data preparation #########################################
# data path
df_path = (
Path.cwd()
/ "data"
/ "processed"
/ "Jun22_2020"
/ "Jun22_2020_df.csv"
)
df = pd.read_csv(df_path).iloc[:, 2:]
print(df.head().iloc[:,0:4])
# Train Test Split
train, test = train_test_split(df, test_size=0.2, shuffle=True)
X_train = train.iloc[:, 1:4].values
y_train = train['yield'].values
X_test = test.iloc[:, 1:4].values
y_test = test['yield'].values
# scale features
scaler = preprocessing.StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit on the training split only
X_test_scaled = scaler.transform(X_test)  # reuse the training statistics
################################################ Train #############################################
ols = linear_model.LinearRegression()
model = ols.fit(X_train_scaled, y_train)
model.score(X_train_scaled, y_train)
############################################## Evaluate ############################################
y_pred = model.predict(X_test_scaled)
r2 = model.score(X_test_scaled, y_test)
############################################## Plot ################################################
plt.scatter(y_test, y_pred)
plt.show()
x = X_test_scaled[:,0]
y = X_test_scaled[:,1]
z = y_pred
plt.style.use('default')
fig = plt.figure(figsize=(12, 4))
ax1 = fig.add_subplot(131, projection='3d')
ax2 = fig.add_subplot(132, projection='3d')
ax3 = fig.add_subplot(133, projection='3d')
axes = [ax1, ax2, ax3]
for ax in axes:
ax.plot(x, y, z, color='k', zorder=15, linestyle='none', marker='o', alpha=0.5)
ax.scatter(x, y, y_pred, facecolor=(0,0,0,0), s=20, edgecolor='#70b3f0')
ax.set_xlabel('NDVI', fontsize=12)
ax.set_ylabel('SAVI', fontsize=12)
ax.set_zlabel('yield (CWT/A)', fontsize=12)
ax.locator_params(nbins=4, axis='x')
ax.locator_params(nbins=5, axis='y')
# ax1.text2D(0.2, 0.32, 'aegis4048.github.io', fontsize=13, ha='center', va='center',
# transform=ax1.transAxes, color='grey', alpha=0.5)
# ax2.text2D(0.3, 0.42, 'aegis4048.github.io', fontsize=13, ha='center', va='center',
# transform=ax2.transAxes, color='grey', alpha=0.5)
# ax3.text2D(0.85, 0.85, 'aegis4048.github.io', fontsize=13, ha='center', va='center',
# transform=ax3.transAxes, color='grey', alpha=0.5)
ax1.view_init(elev=28, azim=120)
ax2.view_init(elev=4, azim=114)
ax3.view_init(elev=60, azim=165)
fig.suptitle('$R^2 = %.2f$' % r2, fontsize=20)
fig.tight_layout()
```
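The scaling fix noted in the training cell (fit the scaler on the training split only, then reuse those statistics for the test split) can be seen in isolation. A minimal numpy sketch of what `StandardScaler` does under the hood, on made-up data:

```python
import numpy as np

def fit_scaler(X):
    """Learn per-feature mean and std from the training split only."""
    return X.mean(axis=0), X.std(axis=0)

def apply_scaler(X, mu, sigma):
    # Apply the training statistics; never re-fit on the test split.
    return (X - mu) / sigma

X_train = np.array([[1.0, 10.0], [3.0, 30.0]])
X_test = np.array([[2.0, 20.0]])
mu, sigma = fit_scaler(X_train)
print(apply_scaler(X_test, mu, sigma))  # [[0. 0.]]
```

Fitting on the test split instead would leak test-set statistics into preprocessing and make the reported score optimistic.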
```
import re
import pandas as pd
import spacy
from typing import List
from math import sqrt, ceil
# gensim
from gensim import corpora
from gensim.models.ldamulticore import LdaMulticore
# plotting
from matplotlib import pyplot as plt
from wordcloud import WordCloud
import matplotlib.colors as mcolors
# progress bars
from tqdm.notebook import tqdm
tqdm.pandas()
```
### Params
```
params = dict(
num_topics = 15,
iterations = 200,
epochs = 20,
minDF = 0.02,
maxDF = 0.8,
)
```
#### Files
Input CSV file and stopword files.
```
inputfile = "../../data/nytimes.tsv"
stopwordfile = "../stopwords/custom_stopwords.txt"
def get_stopwords():
# Read in stopwords
with open(stopwordfile) as f:
stopwords = []
for line in f:
stopwords.append(line.strip("\n"))
return stopwords
stopwords = get_stopwords()
```
### Read in New York Times Dataset
A pre-processed version of the NYT news dataset is read in as a DataFrame.
```
def read_data(inputfile):
"Read in a tab-separated file with date, headline and news content"
df = pd.read_csv(inputfile, sep='\t', header=None,
names=['date', 'headline', 'content'])
df['date'] = pd.to_datetime(df['date'], format="%Y-%m-%d")
return df
df = read_data(inputfile)
df.head()
```
### Clean the input text
We clean the text from each article's content to only contain relevant alphanumeric strings (symbols do not add any value to topic modelling).
```
def clean_data(df):
"Extract relevant text from DataFrame using a regex"
# Regex pattern for only alphanumeric, hyphenated text with 3 or more chars
pattern = re.compile(r"[A-Za-z0-9\-]{3,50}")
df['clean'] = df['content'].str.findall(pattern).str.join(' ')
return df
df_clean = clean_data(df)
```
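To see what the cleaning regex keeps, here is the same pattern applied to a made-up sentence: only runs of 3-50 alphanumeric/hyphen characters survive.

```python
import re

# Same pattern as clean_data() above
pattern = re.compile(r"[A-Za-z0-9\-]{3,50}")
sample = "U.S. stocks rose 2.5% in a tech-led rally!"
# Single letters, digits, and punctuation are dropped
print(" ".join(pattern.findall(sample)))  # stocks rose tech-led rally
```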
#### (Optional) Subset the dataframe for testing
Test on a subset of the full data for quicker results.
```
df1 = df_clean.iloc[:2000, :].copy()
# df1 = df_clean.copy()
```
### Preprocess text for topic modelling
```
def lemmatize(text, nlp):
"Perform lemmatization and stopword removal in the clean text"
doc = nlp(text)
lemma_list = [str(tok.lemma_).lower() for tok in doc
if tok.is_alpha and tok.text.lower() not in stopwords]
return lemma_list
def preprocess(df):
"Preprocess text in each row of the DataFrame"
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])
nlp.add_pipe(nlp.create_pipe('sentencizer'))
df['lemmas'] = df['clean'].progress_apply(lambda row: lemmatize(row, nlp))
return df.drop('clean', axis=1)
df_preproc = preprocess(df1)
df_preproc.head(3)
```
### Build LDA Topic Model
#### Multicore LDA algorithm
```
# Choose number of workers for multicore LDA as (num_physical_cores - 1)
def run_lda_multicore(text_df, params, workers=7):
id2word = corpora.Dictionary(text_df['lemmas'])
# Filter out tokens in fewer than minDF or more than maxDF of the documents (note: gensim's no_below expects an absolute document count, while no_above is a fraction)
id2word.filter_extremes(no_below=params['minDF'], no_above=params['maxDF'])
corpus = [id2word.doc2bow(text) for text in text_df['lemmas']]
# LDA Model
lda_model = LdaMulticore(
corpus=corpus,
id2word=id2word,
workers=workers,
num_topics=params['num_topics'],
random_state=1,
chunksize=2048,
passes=params['epochs'],
iterations=params['iterations'],
)
return lda_model, corpus
```
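For intuition, the `Dictionary`/`doc2bow` step above can be mimicked in plain Python: each document becomes a list of `(token_id, count)` pairs over a shared vocabulary (a sketch only; gensim additionally handles filtering and persistence):

```python
# Plain-Python mimic of corpora.Dictionary + doc2bow on toy documents.
docs = [["economy", "market", "market"], ["election", "economy"]]
vocab = {}
for doc in docs:
    for tok in doc:
        vocab.setdefault(tok, len(vocab))  # assign ids in first-seen order
bow = [sorted((vocab[t], doc.count(t)) for t in set(doc)) for doc in docs]
print(bow)  # [[(0, 1), (1, 2)], [(0, 1), (2, 1)]]
```

This bag-of-words corpus is what `LdaMulticore` consumes; word order within a document is discarded.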
### Wordclouds of most likely words in each topic
```
def plot_wordclouds(topics, colormap="cividis"):
cloud = WordCloud(
background_color='white',
width=600,
height=400,
colormap=colormap,
prefer_horizontal=1.0,
)
num_topics = len(topics)
fig_width = min(ceil(0.6 * num_topics + 6), 20)
fig_height = min(ceil(0.65 * num_topics), 20)
fig = plt.figure(figsize=(fig_width, fig_height))
for idx, word_weights in tqdm(enumerate(topics), total=num_topics):
ax = fig.add_subplot(ceil(num_topics / 5), 5, idx + 1)
wordcloud = cloud.generate_from_frequencies(word_weights)
ax.imshow(wordcloud, interpolation="bilinear")
ax.set_title('Topic {}'.format(idx + 1))
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.tick_params(length=0)
plt.tick_params(labelsize=14)
plt.subplots_adjust(wspace=0.1, hspace=0.1)
plt.margins(x=0.1, y=0.1)
st = fig.suptitle("LDA Topics", y=0.92)
fig.savefig("pyspark-topics.png", bbox_extra_artists=[st], bbox_inches='tight')
```
### Run topic model and plot wordclouds
```
model, corpus = run_lda_multicore(df_preproc, params)
```
#### Convert topic words to a list of dicts
```
topic_list = model.show_topics(formatted=False,
num_topics=params['num_topics'],
num_words=15)
topics = [dict(item[1]) for item in topic_list]
plot_wordclouds(topics)
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Distributed CNTK using custom docker images
In this tutorial, you will train a CNTK model on the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset using a custom docker image and distributed training.
## Prerequisites
* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning
* Go through the [configuration notebook](../../../configuration.ipynb) to:
* install the AML SDK
* create a workspace and its configuration file (`config.json`)
```
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
```
## Diagnostics
Opt-in diagnostics for better experience, quality, and security of future releases.
```
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
```
## Initialize workspace
Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`.
```
from azureml.core.workspace import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
```
## Create or Attach existing AmlCompute
You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource.
**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace this code will skip the creation process.
As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
cluster_name = "gpucluster"
try:
    compute_target = ComputeTarget(workspace=ws, name=cluster_name)
    print('Found existing compute target.')
except ComputeTargetException:
    print('Creating a new compute target...')
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
                                                           max_nodes=4)
    # create the cluster
    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
    compute_target.wait_for_completion(show_output=True)

# use get_status() to get a detailed status for the current AmlCompute
print(compute_target.get_status().serialize())
```
## Upload training data
For this tutorial, we will be using the MNIST dataset.
First, let's download the dataset. We've included the `install_mnist.py` script to download the data and convert it to a CNTK-supported format. Our data files will get written to a directory named `'mnist'`.
```
import install_mnist
install_mnist.main('mnist')
```
To make the data accessible for remote training, you will need to upload the data from your local machine to the cloud. AML provides a convenient way to do so via a [Datastore](https://docs.microsoft.com/azure/machine-learning/service/how-to-access-data). The datastore provides a mechanism for you to upload/download data, and interact with it from your remote compute targets.
Each workspace is associated with a default datastore. In this tutorial, we will upload the training data to this default datastore, which we will then mount on the remote compute for training in the next section.
```
ds = ws.get_default_datastore()
print(ds.datastore_type, ds.account_name, ds.container_name)
```
The following code will upload the training data to the path `./mnist` on the default datastore.
```
ds.upload(src_dir='./mnist', target_path='./mnist')
```
Now let's get a reference to the path on the datastore with the training data. We can do so using the `path` method. In the next section, we can then pass this reference to our training script's `--data_dir` argument.
```
path_on_datastore = 'mnist'
ds_data = ds.path(path_on_datastore)
print(ds_data)
```
## Train model on the remote compute
Now that we have the cluster ready to go, let's run our distributed training job.
### Create a project directory
Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script, and any additional files your training script depends on.
```
import os
project_folder = './cntk-distr'
os.makedirs(project_folder, exist_ok=True)
```
Copy the training script `cntk_distr_mnist.py` into this project directory.
```
import shutil
shutil.copy('cntk_distr_mnist.py', project_folder)
```
### Create an experiment
Create an [experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this distributed CNTK tutorial.
```
from azureml.core import Experiment
experiment_name = 'cntk-distr'
experiment = Experiment(ws, name=experiment_name)
```
### Create an Estimator
The AML SDK's base Estimator enables you to easily submit custom scripts for both single-node and distributed runs. You should use this generic estimator for training code that uses frameworks such as scikit-learn or CNTK that don't have corresponding custom estimators. For more information on using the generic estimator, refer to [the documentation](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-ml-models).
```
from azureml.train.estimator import Estimator
script_params = {
    '--num_epochs': 20,
    '--data_dir': ds_data.as_mount(),
    '--output_dir': './outputs'
}
estimator = Estimator(source_directory=project_folder,
                      compute_target=compute_target,
                      entry_script='cntk_distr_mnist.py',
                      script_params=script_params,
                      node_count=2,
                      process_count_per_node=1,
                      distributed_backend='mpi',
                      pip_packages=['cntk-gpu==2.6'],
                      custom_docker_base_image='microsoft/mmlspark:gpu-0.12',
                      use_gpu=True)
```
We would like to train our model using a [pre-built Docker container](https://hub.docker.com/r/microsoft/mmlspark/). To do so, specify the name of the Docker image in the `custom_docker_base_image` argument. You can only provide images available in public Docker repositories such as Docker Hub using this argument. To use an image from a private Docker repository, use the constructor's `environment_definition` parameter instead. Finally, we provide the `cntk-gpu` package to `pip_packages` to install CNTK 2.6 on our custom image.
The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to run distributed CNTK, which uses MPI, you must provide the argument `distributed_backend='mpi'`.
### Submit job
Run your experiment by submitting your estimator object. Note that this call is asynchronous.
```
run = experiment.submit(estimator)
print(run)
```
### Monitor your run
You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes.
```
from azureml.widgets import RunDetails
RunDetails(run).show()
```
Alternatively, you can block until the script has completed training before running more code.
```
run.wait_for_completion(show_output=True)
```
| github_jupyter |
```
#@title Copyright 2020 The Cirq Developers
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Circuits
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/circuits"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/circuits.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/circuits.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/circuits.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
```
try:
    import cirq
except ImportError:
    print("installing cirq...")
    !pip install --quiet cirq
    import cirq
    print("installed cirq.")
```
## Conceptual overview
The primary representation of quantum programs in Cirq is the `Circuit` class. A `Circuit` is a collection of `Moments`. A `Moment` is a collection of `Operations` that all act during the same abstract time slice. An `Operation` is some effect that operates on a specific subset of qubits; the most common type of `Operation` is a `GateOperation`.

Let's unpack this.
At the base of this construction is the notion of a qubit. In Cirq, qubits and other quantum objects are identified by instances of subclasses of the `cirq.Qid` base class. Different subclasses of Qid can be used for different purposes. For example, the qubits that Google’s devices use are often arranged on the vertices of a square grid. For this, the class `cirq.GridQubit` subclasses `cirq.Qid`. For example, we can create a 3 by 3 grid of qubits using
```
qubits = [cirq.GridQubit(x, y) for x in range(3) for y in range(3)]
print(qubits[0])
```
The next level up is the notion of `cirq.Gate`. A `cirq.Gate` represents a physical process that occurs on a qubit. The important property of a gate is that it can be applied to one or more qubits. This can be done via the `gate.on(*qubits)` method itself or via `gate(*qubits)`, and doing this turns a `cirq.Gate` into a `cirq.Operation`.
```
# This is a Pauli X gate. It is an object instance.
x_gate = cirq.X
# Applying it to the qubit at location (0, 0) (defined above)
# turns it into an operation.
x_op = x_gate(qubits[0])
print(x_op)
```
A `cirq.Moment` is simply a collection of operations, each of which operates on a different set of qubits, and which conceptually represents these operations as occurring during this abstract time slice. The `Moment` structure itself is not required to be related to the actual scheduling of the operations on a quantum computer, or via a simulator, though it can be. For example, here is a `Moment` in which **Pauli** `X` and a `CZ` gate operate on three qubits:
```
cz = cirq.CZ(qubits[0], qubits[1])
x = cirq.X(qubits[2])
moment = cirq.Moment(x, cz)
print(moment)
```
The above is not the only way one can construct moments, nor even the typical method, but illustrates that a `Moment` is just a collection of operations on disjoint sets of qubits.
Finally, at the top level a `cirq.Circuit` is an ordered series of `cirq.Moment` objects. The first `Moment` in this series contains the first `Operations` that will be applied. Here, for example, is a simple circuit made up of two moments:
```
cz01 = cirq.CZ(qubits[0], qubits[1])
x2 = cirq.X(qubits[2])
cz12 = cirq.CZ(qubits[1], qubits[2])
moment0 = cirq.Moment([cz01, x2])
moment1 = cirq.Moment([cz12])
circuit = cirq.Circuit((moment0, moment1))
print(circuit)
```
Note that the above is one of the many ways to construct a `Circuit`, which illustrates the concept that a `Circuit` is an iterable of `Moment` objects.
## Constructing circuits
Constructing Circuits as a series of `Moment` objects, with each `Moment` being hand-crafted, is tedious. Instead, we provide a variety of different ways to create a `Circuit`.
One of the most useful ways to construct a `Circuit` is by appending onto the `Circuit` with the `Circuit.append` method.
```
from cirq.ops import CZ, H
q0, q1, q2 = [cirq.GridQubit(i, 0) for i in range(3)]
circuit = cirq.Circuit()
circuit.append([CZ(q0, q1), H(q2)])
print(circuit)
```
This appended a new moment to the circuit, which we can continue to do:
```
circuit.append([H(q0), CZ(q1, q2)])
print(circuit)
```
In these two examples, we appended full moments. What happens when we append all of these operations at once?
```
circuit = cirq.Circuit()
circuit.append([CZ(q0, q1), H(q2), H(q0), CZ(q1, q2)])
print(circuit)
```
We see that here we have again created two `Moment` objects. How did `Circuit` know how to do this? `Circuit`'s `Circuit.append` method (and its cousin, `Circuit.insert`) both take an argument called `strategy` of type `cirq.InsertStrategy`. By default, `InsertStrategy` is `InsertStrategy.NEW_THEN_INLINE`.
### InsertStrategies
`cirq.InsertStrategy` defines how `Operations` are placed in a `Circuit` when requested to be inserted at a given location. Here, a location is identified by the index of the `Moment` (in the `Circuit`) where the insertion is requested to be placed at (in the case of `Circuit.append`, this means inserting at the `Moment`, at an index one greater than the maximum moment index in the `Circuit`).
There are four such strategies: `InsertStrategy.EARLIEST`, `InsertStrategy.NEW`, `InsertStrategy.INLINE` and `InsertStrategy.NEW_THEN_INLINE`.
`InsertStrategy.EARLIEST` is defined as:
*Scans backward from the insert location until a moment with operations touching qubits affected by the operation to insert is found. The operation is added to the moment just after that location.*
For example, if we first create a `Moment` containing a single operation, and then append with `InsertStrategy.EARLIEST`, a new `Operation` can slide back into this first `Moment` if there is space:
```
from cirq.circuits import InsertStrategy
circuit = cirq.Circuit()
circuit.append([CZ(q0, q1)])
circuit.append([H(q0), H(q2)], strategy=InsertStrategy.EARLIEST)
print(circuit)
```
After creating the first moment with a `CZ` gate, the second append uses the `InsertStrategy.EARLIEST` strategy. The `H` on `q0` cannot slide back, while the `H` on `q2` can and so ends up in the first `Moment`.
Contrast this with `InsertStrategy.NEW` that is defined as:
*Every operation that is inserted is created in a new moment.*
```
circuit = cirq.Circuit()
circuit.append([H(q0), H(q1), H(q2)], strategy=InsertStrategy.NEW)
print(circuit)
```
Here every operation processed by the append ends up in a new moment. `InsertStrategy.NEW` is most useful when you are inserting a single operation and do not want it to interfere with other `Moments`.
Another strategy is `InsertStrategy.INLINE`:
*Attempts to add the operation to insert into the moment just before the desired insert location. But, if there’s already an existing operation affecting any of the qubits touched by the operation to insert, a new moment is created instead.*
```
circuit = cirq.Circuit()
circuit.append([CZ(q1, q2)])
circuit.append([CZ(q1, q2)])
circuit.append([H(q0), H(q1), H(q2)], strategy=InsertStrategy.INLINE)
print(circuit)
```
After two initial `CZ` between the second and third qubit, we try to insert three `H` operations. We see that the `H` on the first qubit is inserted into the previous `Moment`, but the `H` on the second and third qubits cannot be inserted into the previous `Moment`, so a new `Moment` is created.
Finally, we turn to a useful strategy that starts a new moment and then inserts from that point: `InsertStrategy.NEW_THEN_INLINE`
*Creates a new moment at the desired insert location for the first operation, but then switches to inserting operations according to `InsertStrategy.INLINE`.*
```
circuit = cirq.Circuit()
circuit.append([H(q0)])
circuit.append([CZ(q1,q2), H(q0)], strategy=InsertStrategy.NEW_THEN_INLINE)
print(circuit)
```
The first append creates a single moment with an `H` on the first qubit. Then, the append with the `InsertStrategy.NEW_THEN_INLINE` strategy begins by inserting the `CZ` in a new `Moment` (the `InsertStrategy.NEW` part of `InsertStrategy.NEW_THEN_INLINE`). Subsequent appending is done with `InsertStrategy.INLINE`, so the next `H` on the first qubit is appended in the just-created `Moment`.
Here is a picture showing simple examples of appending 1 and then 2 using the different strategies

### Patterns for arguments to append and insert
In the above examples, we used a series of `Circuit.append` calls with a list of different `Operations` added to the circuit. However, the argument where we have supplied a list can also take more than just list values. For instance:
```
def my_layer():
    yield CZ(q0, q1)
    yield [H(q) for q in (q0, q1, q2)]
    yield [CZ(q1, q2)]
    yield [H(q0), [CZ(q1, q2)]]
circuit = cirq.Circuit()
circuit.append(my_layer())
for x in my_layer():
print(x)
print(circuit)
```
Recall that Python functions with a `yield` are generators. Generators are functions that act as iterators. In the above example, we see that we can iterate over `my_layer()`. In this case, each `yield` produces what was yielded, and here these are:
* `Operations`,
* lists of `Operations`,
* or lists of `Operations` mixed with nested lists of `Operations`.
When we pass an iterator to the `append` method, `Circuit` is able to flatten all of these and pass them as one giant list to `Circuit.append` (this also works for `Circuit.insert`).
The above idea uses the concept of `cirq.OP_TREE`. An `OP_TREE` is not a class, but a *contract*. The basic idea is that, if the input can be iteratively flattened into a list of operations, then the input is an `OP_TREE`.
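The flattening behavior behind the `OP_TREE` contract can be illustrated with a hypothetical pure-Python sketch (an illustration of the idea only, not Cirq's actual implementation; real Cirq treats `Operation` and `Moment` objects as leaves, here plain strings stand in for operations):

```python
from typing import Any, Iterator

def flatten_op_tree(tree: Any) -> Iterator[Any]:
    """Recursively flatten a nested structure of placeholder operations.

    A "leaf" here is anything that is not a non-string iterable; in Cirq a
    leaf would be an Operation (or Moment) rather than a string.
    """
    if isinstance(tree, (list, tuple)) or (
        hasattr(tree, "__iter__") and not isinstance(tree, str)
    ):
        for subtree in tree:
            yield from flatten_op_tree(subtree)
    else:
        yield tree

# A nested "tree" mirroring the yields of my_layer() above.
nested = ["CZ(q0, q1)", ["H(q0)", "H(q1)", "H(q2)"], ["H(q0)", ["CZ(q1, q2)"]]]
print(list(flatten_op_tree(nested)))
# ['CZ(q0, q1)', 'H(q0)', 'H(q1)', 'H(q2)', 'H(q0)', 'CZ(q1, q2)']
```

Because generators also satisfy the iteration check, arbitrarily nested generators of operations flatten the same way, which is exactly what makes the generator pattern below convenient.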
A very nice pattern emerges from this structure: define generators for sub-circuits, which can vary by size or `Operation` parameters.
Another useful method to construct a `Circuit` fully formed from an `OP_TREE` is to pass the `OP_TREE` into `Circuit` when initializing it:
```
circuit = cirq.Circuit(H(q0), H(q1))
print(circuit)
```
### Slicing and iterating over circuits
Circuits can be iterated over and sliced. When they are iterated, each item in the iteration is a moment:
```
circuit = cirq.Circuit(H(q0), CZ(q0, q1))
for moment in circuit:
    print(moment)
```
Slicing a `Circuit`, on the other hand, produces a new `Circuit` with only the moments corresponding to the slice:
```
circuit = cirq.Circuit(H(q0), CZ(q0, q1), H(q1), CZ(q0, q1))
print(circuit[1:3])
```
Especially useful is dropping the last moment (which often contains just measurements): `circuit[:-1]`, or reversing a circuit: `circuit[::-1]`.
### Related
- [Transform circuits](transform.ipynb) - features related to circuit optimization and compilation
- [Import/export circuits](interop.ipynb) - features to serialize/deserialize circuits into/from different formats
| github_jupyter |
```
from keras.models import Sequential
from keras.layers import Input, Dense, Activation, Dropout
from keras import regularizers
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.normalization import BatchNormalization
import numpy as np
# fix random seed for reproducibility
np.random.seed(42)
import pandas as pd
#Folder for the dataset
datasetFolder = '/home/carnd/dbpedia2016/all4_2x125/dataset/'
#Number of files
numberOfFiles = 638
#Test split
testSplit=0.1
validationSplit=0.2
def load_data(datasetFolder, datasetXFile, datasetYFile, wrap=True, printIt=False):
    # First pass over X: consume the header and count rows and columns.
    with open(datasetFolder + datasetXFile, "r") as f:
        head = f.readline()
        cols = head.split(',')
        numberOfCols = len(cols)
        numberOfRows = 0
        for line in f:
            numberOfRows += 1
    if printIt:
        print('Input Features: {} x {}'.format(numberOfRows, numberOfCols))
    if wrap:
        maxY = 8384
    else:
        maxY = numberOfCols - 1
    half = (numberOfCols // maxY) * 0.5
    dataX = np.zeros([numberOfRows, maxY], np.int8)
    # Second pass over X: fill the (wrapped) sparse feature matrix.
    with open(datasetFolder + datasetXFile, "r") as f:
        head = f.readline()
        rowCounter = 0
        for line in f:
            row = line.split(',')
            for i in range(1, len(row)):
                if int(row[i]) <= 0:
                    continue
                val = 1 + ((int(row[i]) - 1) // maxY)
                if val > half:
                    val = 0 - (val - half)
                dataX[rowCounter][(int(row[i]) - 1) % maxY] = val
            rowCounter += 1
    # First pass over Y: consume the header and count rows and columns.
    with open(datasetFolder + datasetYFile, "r") as f:
        head = f.readline()
        cols = head.split(',')
        numberOfCols = len(cols)
        numberOfRows = 0
        for line in f:
            numberOfRows += 1
    if printIt:
        print('Output Features: {} x {}'.format(numberOfRows, numberOfCols))
    dataY = np.zeros([numberOfRows, (numberOfCols - 1)], np.float16)
    # Second pass over Y: fill the multi-hot label matrix.
    with open(datasetFolder + datasetYFile, "r") as f:
        head = f.readline()
        rowCounter = 0
        for line in f:
            row = line.split(',')
            for i in range(1, len(row)):
                if int(row[i]) <= 0:
                    continue
                dataY[rowCounter][(int(row[i]) - 1)] = 1
            rowCounter += 1
    return dataX, dataY
dataX, dataY = load_data(datasetFolder,'datasetX_1.csv', 'datasetY_1.csv', printIt=True)
dataX, dataY = load_data(datasetFolder,'datasetX_1.csv', 'datasetY_1.csv')
print(dataX.shape)
print(dataX[0:5])
print(dataY.shape)
print(dataY[0:5])
print("Input Features for classification: {}".format(dataX.shape[1]))
print("Output Classes for classification: {}".format(dataY.shape[1]))
deepModel = Sequential(name='Deep Model (5 Dense Layers)')
deepModel.add(Dense(2048, input_dim=dataX.shape[1], init='glorot_normal'))
deepModel.add(BatchNormalization())
deepModel.add(Activation('relu'))
deepModel.add(Dropout(0.2))
deepModel.add(Dense(1024, init='glorot_normal'))
deepModel.add(BatchNormalization())
deepModel.add(Activation('relu'))
deepModel.add(Dropout(0.2))
deepModel.add(Dense(768, init='glorot_normal'))
deepModel.add(BatchNormalization())
deepModel.add(Activation('relu'))
deepModel.add(Dropout(0.2))
deepModel.add(Dense(512, init='glorot_normal'))
deepModel.add(BatchNormalization())
deepModel.add(Activation('relu'))
deepModel.add(Dropout(0.2))
deepModel.add(Dense(256, init='glorot_normal'))
deepModel.add(BatchNormalization())
deepModel.add(Activation('relu'))
deepModel.add(Dropout(0.2))
deepModel.add(Dense(dataY.shape[1], activation='sigmoid', init='glorot_normal'))
# Compile model
import keras.backend as K
def count_predictions(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    return true_positives, predicted_positives, possible_positives

def f1score(y_true, y_pred):
    true_positives, predicted_positives, possible_positives = count_predictions(y_true, y_pred)
    precision = true_positives / (predicted_positives + K.epsilon())
    recall = true_positives / (possible_positives + K.epsilon())
    return 2.0 * precision * recall / (precision + recall + K.epsilon())

def fBetaScore(y_true, y_pred, beta):
    true_positives, predicted_positives, possible_positives = count_predictions(y_true, y_pred)
    precision = true_positives / (predicted_positives + K.epsilon())
    recall = true_positives / (possible_positives + K.epsilon())
    return (1 + (beta * beta)) * precision * recall / ((beta * beta * precision) + recall + K.epsilon())
deepModel.compile(loss='binary_crossentropy', optimizer='nadam', metrics=[f1score])
def fit_data(model, dataX, dataY):
    # Fit the model on a single batch.
    return model.train_on_batch(dataX, dataY)

def countPredictions(y_true, y_pred):
    true_positives = np.sum(np.round(y_pred * y_true))
    predicted_positives = np.sum(np.round(y_pred))
    possible_positives = np.sum(y_true)
    return true_positives, predicted_positives, possible_positives
#Randomize the list of numbers so we can split train and test dataset
listOfFiles=list(range(1,numberOfFiles+1))
import random
random.shuffle(listOfFiles)
splitIndex=int((1-(testSplit+validationSplit))*numberOfFiles)
testSplitIndex=int((1-(testSplit))*numberOfFiles)
numberOfEons = 8
for eon in range(0, numberOfEons):
    print('{}. Eon {}/{}'.format(eon + 1, eon + 1, numberOfEons))
    # Train on each training file in turn.
    for trainIndex in range(0, splitIndex):
        dataX, dataY = load_data(datasetFolder, 'datasetX_{}.csv'.format(listOfFiles[trainIndex]), 'datasetY_{}.csv'.format(listOfFiles[trainIndex]))
        deepModel.fit(dataX, dataY, nb_epoch=1, verbose=0, batch_size=256)
        print('Learning deep model for file {} / {} : datasetX/Y_{}'.format(trainIndex + 1, splitIndex, listOfFiles[trainIndex]), end='\r')
    # Validate after each eon, accumulating counts over the validation files.
    counts = {}
    counts[deepModel.name] = {'true_positives': 0, 'predicted_positives': 0, 'possible_positives': 0}
    for testIndex in range(splitIndex, testSplitIndex):
        dataX, dataY = load_data(datasetFolder, 'datasetX_{}.csv'.format(listOfFiles[testIndex]), 'datasetY_{}.csv'.format(listOfFiles[testIndex]))
        predY = deepModel.predict_on_batch(dataX)
        true_positives, predicted_positives, possible_positives = countPredictions(dataY, predY)
        counts[deepModel.name]['true_positives'] += true_positives
        counts[deepModel.name]['predicted_positives'] += predicted_positives
        counts[deepModel.name]['possible_positives'] += possible_positives
        print('Validating deep model {} / {} : - true +ve:{} pred +ve:{} possible +ve:{}'.format(testIndex + 1, testSplitIndex, true_positives, predicted_positives, possible_positives), end='\r')
    count = counts[deepModel.name]
    precision = (count['true_positives']) / (count['predicted_positives'] + 0.0001)
    recall = (count['true_positives']) / (count['possible_positives'] + 0.0001)
    f1score = 2.0 * precision * recall / (precision + recall + 0.0001)
    print(' - Model = {} \t f1-score = {:.4f}\t precision = {:.4f} \t recall = {:.4f}'.format(deepModel.name, f1score, precision, recall))
# Final evaluation on the held-out test files.
counts = {}
counts[deepModel.name] = {'true_positives': 0, 'predicted_positives': 0, 'possible_positives': 0}
for testIndex in range(testSplitIndex, numberOfFiles):
    dataX, dataY = load_data(datasetFolder, 'datasetX_{}.csv'.format(listOfFiles[testIndex]), 'datasetY_{}.csv'.format(listOfFiles[testIndex]))
    predY = deepModel.predict_on_batch(dataX)
    true_positives, predicted_positives, possible_positives = countPredictions(dataY, predY)
    counts[deepModel.name]['true_positives'] += true_positives
    counts[deepModel.name]['predicted_positives'] += predicted_positives
    counts[deepModel.name]['possible_positives'] += possible_positives
    print('Testing deep model {} / {} : - true +ve:{} pred +ve:{} possible +ve:{}'.format(testIndex + 1, numberOfFiles, true_positives, predicted_positives, possible_positives), end='\r')
count = counts[deepModel.name]
precision = (count['true_positives']) / (count['predicted_positives'] + 0.0001)
recall = (count['true_positives']) / (count['possible_positives'] + 0.0001)
f1score = 2.0 * precision * recall / (precision + recall + 0.0001)
print(' - Final Test Score for {} \t f1-score = {:.4f}\t precision = {:.4f} \t recall = {:.4f}'.format(deepModel.name, f1score, precision, recall))
deepModel.save('deepModelDBpediaOntologyTypes.h5')
```
1. Eon 1/8
- Model = Deep Model (5 Dense Layers) f1-score = 0.9112 precision = 0.9350 recall = 0.8886
2. Eon 2/8
- Model = Deep Model (5 Dense Layers) f1-score = 0.9174 precision = 0.9359 recall = 0.8997
3. Eon 3/8
- Model = Deep Model (5 Dense Layers) f1-score = 0.9195 precision = 0.9365 recall = 0.9032
4. Eon 4/8
- Model = Deep Model (5 Dense Layers) f1-score = 0.9194 precision = 0.9340 recall = 0.9054
5. Eon 5/8
- Model = Deep Model (5 Dense Layers) f1-score = 0.9205 precision = 0.9368 recall = 0.9048
6. Eon 6/8
- Model = Deep Model (5 Dense Layers) f1-score = 0.9202 precision = 0.9359 recall = 0.9051
7. Eon 7/8
- Model = Deep Model (5 Dense Layers) f1-score = 0.9203 precision = 0.9351 recall = 0.9061
8. Eon 8/8
- Model = Deep Model (5 Dense Layers) f1-score = 0.9199 precision = 0.9350 recall = 0.9053
- Final Test Score for Deep Model (5 Dense Layers) f1-score = 0.9200 precision = 0.9352 recall = 0.9054
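The scores above are micro-averaged: true/predicted/possible positive counts are accumulated over all files before precision, recall, and f1 are computed once. A minimal NumPy sketch of that same computation (illustrative arrays, not the DBpedia data):

```python
import numpy as np

def micro_f1(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-4):
    """Micro-averaged precision/recall/f1 for multi-label 0/1 arrays."""
    true_positives = np.sum(np.round(y_pred) * y_true)
    predicted_positives = np.sum(np.round(y_pred))
    possible_positives = np.sum(y_true)
    precision = true_positives / (predicted_positives + eps)
    recall = true_positives / (possible_positives + eps)
    f1 = 2.0 * precision * recall / (precision + recall + eps)
    return precision, recall, f1

# Two samples, three labels; 0.4 rounds down, so one true label is missed.
y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_pred = np.array([[0.9, 0.2, 0.8], [0.1, 0.4, 0.0]])
precision, recall, f1 = micro_f1(y_true, y_pred)
```

Accumulating the three counts and dividing once (rather than averaging per-file f1 scores) weights every label occurrence equally, which is why the training loop sums counts across files before printing a score.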
| github_jupyter |
# Initial_t_rad Bug
The purpose of this notebook is to demonstrate the bug associated with setting the initial_t_rad tardis.plasma property.
```
pwd
import tardis
import numpy as np
```
## Density and Abundance test files
Below are the density and abundance data from the test files used for demonstrating this bug.
```
density_dat = np.loadtxt('data/density.txt',skiprows=1)
abund_dat = np.loadtxt('data/abund.dat', skiprows=1)
print(density_dat)
print(abund_dat)
```
## No initial_t_rad
Below we run a simple tardis simulation where `initial_t_rad` is not set. The simulation has v_inner_boundary = 3350 km/s and v_outer_boundary = 3750 km/s, both within the velocity range in the density file. The simulation runs fine.
```
no_init_trad = tardis.run_tardis('data/config_no_init_trad.yml')
no_init_trad.model.velocity
no_init_trad.model.no_of_shells, no_init_trad.model.no_of_raw_shells
print('raw velocity: \n',no_init_trad.model.raw_velocity)
print('raw velocity shape: ',no_init_trad.model.raw_velocity.shape)
print('(v_boundary_inner, v_boundary_outer) = (%i, %i)' %
      (no_init_trad.model.v_boundary_inner.to('km/s').value, no_init_trad.model.v_boundary_outer.to('km/s').value))
print('v_boundary_inner_index: ', no_init_trad.model.v_boundary_inner_index)
print('v_boundary_outer_index: ', no_init_trad.model.v_boundary_outer_index)
print('t_rad', no_init_trad.model.t_rad)
```
## Debugging
```
%%debug
init_trad = tardis.run_tardis('data/config_init_trad.yml')
init_trad = tardis.run_tardis('data/config_init_trad.yml')
```
## Debugging No initial_t_radiative run to compare with Yes initial_t_radiative run
We place two breakpoints:
break 1. tardis/base:37 --> Stops in the run_tardis() function when the simulation is initialized.
break 2. tardis/simulation/base:436 --> Stops after the Radial1DModel has been built from the config file, but before the plasma has been initialized.
## IMPORTANT:
We check the model.t_radiative property INSIDE the assemble_plasma function. Notice that it has len(model.t_radiative) = model.no_of_shells = 5
```
%%debug
no_init_trad = tardis.run_tardis('config_no_init_trad.yml')
```
## Debugging Yes initial_t_radiative run
We place the same two breakpoints as above:
break 1. tardis/base:37 --> Stops in the run_tardis() function when the simulation is initialized.
break 2. tardis/simulation/base:436 --> Stops after the Radial1DModel has been built from the config file, but before the plasma has been initialized.
## IMPORTANT:
We check the model.t_radiative property INSIDE the assemble_plasma function. Notice that it has len(model.t_radiative) = 6 which is NOT EQUAL to model.no_of_shells = 5
```
%%debug
init_trad = tardis.run_tardis('config_init_trad.yml')
```
## Checking model.t_radiative initialization when YES initial_t_rad
In the above debugging blocks, we have identified the following discrepancy INSIDE assemble_plasma():
### len(model.t_radiative) = 6 when YES initial_t_rad
### len(model.t_radiative) = 5 when NO initial_t_rad
Therefore, we investigate in the following debugging block how model.t_radiative is initialized. We place a breakpoint at tardis/simulation/base:432 and step INSIDE the Radial1DModel initialization.
Breakpoints:
break 1. tardis/simulation/base:432 --> Stops so that we can step INSIDE Radial1DModel initialization from_config().
break 2. tardis/model/base:330 --> Where temperature is handled INSIDE Radial1DModel initialization from_config().
break 3. tardis/model/base:337 --> `t_radiative` is initialized. It has the same length as `velocity` which is the raw velocities from the density file.
break 4. tardis/model/base:374 --> init() for Radial1DModel is called. We check values of relevant variables.
break 5. tardis/model/base:76 --> Stops at first line of Radial1DModel init() function.
break 6. tardis/model/base:101 --> self.\_t\_radiative is set.
break 7. tardis/model/base:140 --> Stops at first line of self.t_radiative setter.
break 8. tardis/model/base:132 --> Stops at first line of self.t_radiative getter.
break 9. tardis/model/base:108 --> Stop right after self.\_t\_radiative is set. NOTICE that neither the setter nor the getter was called. __IMPORTANT:__ at line 108, we have len(self.\_t\_radiative) = 10. __TO DO:__ Check len(self.\_t\_radiative) at line 108 in the NO initial\_t\_rad case.
```
%%debug
init_trad = tardis.run_tardis('config_init_trad.yml')
```
## Checking self.\_t\_radiative initialization when NO initial_t_rad at line 108
__IMPORTANT:__ We find that len(self.\_t\_radiative) = 5. This is a DISCREPANCY with the YES initial_t_rad case.
```
%%debug
no_init_trad = tardis.run_tardis('config_no_init_trad.yml')
```
## CODE CHANGE:
We propose the following change to tardis/model/base:106
__Line 106 Before Change:__ `self._t_radiative = t_radiative`
__Line 106 After Change:__ `self._t_radiative = t_radiative[1:1 + self.no_of_shells]`
t_radiative\[0\] corresponds to the temperature within the inner boundary, and so should be ignored.
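A hypothetical NumPy sketch of the proposed slice, using the lengths seen while debugging (6 raw temperatures for 5 shells; the values are made up for illustration):

```python
import numpy as np

no_of_shells = 5
# Raw t_radiative from the config carries one extra leading entry:
# the temperature within the inner boundary (hypothetical values).
t_radiative = np.array([9000., 9500., 9400., 9300., 9200., 9100.])

# Proposed fix: drop the inner-boundary entry, keep one value per shell.
t_rad_shells = t_radiative[1:1 + no_of_shells]
print(len(t_rad_shells))  # 5
```

After the slice, `len(t_rad_shells) == no_of_shells`, which resolves the length discrepancy identified in the debugging blocks above.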
```
init_trad = tardis.run_tardis('config_init_trad.yml')
import numpy as np
a = np.array([1,2,3,4,5,6,7,8])
a[3:8]
a
2 in a
np.argwhere(a==6)[0][0]
np.searchsorted(a, 6.5)
if (2 in a) and (3.5 in a):
    print('hi')
assert 1==1.2, "test"
a[3:6]
```
| github_jupyter |
```
import pandas as pd
import numpy as np
import sklearn
import matplotlib.pyplot as plt
import matplotlib
from joblib import dump
from sklearn.ensemble import IsolationForest
```
# Load the data
```
X_train = pd.read_csv('./Datasets/train.csv')
X_test = pd.read_csv('./Datasets/test.csv')
X_train.shape
```
# Train the basic classifier
```
# Base model; without contamination.
clf = IsolationForest(n_estimators = 100, random_state=16).fit(X_train)
clf
predictions = clf.predict(X_train)
predictions
a = np.linspace(-2, 70, 100)
a.ravel()
```
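As a reminder of the label convention used below: `IsolationForest.predict` returns `+1` for inliers and `-1` for outliers. A small self-contained sketch on synthetic data (not the dataset above):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(16)
# A tight 2-D cloud of "normal" points, plus one obvious anomaly.
inliers = rng.normal(loc=10.0, scale=1.0, size=(100, 2))
outlier = np.array([[60.0, 60.0]])  # far from the training cloud
X = np.vstack([inliers, outlier])

clf = IsolationForest(n_estimators=100, random_state=16).fit(inliers)
labels = clf.predict(X)  # +1 for inlier-like points, -1 for anomalies
```

With the default `contamination='auto'`, the threshold comes from the anomaly-score offset rather than an assumed outlier fraction; passing an explicit `contamination`, as done later in this notebook, instead forces that fraction of training points past the boundary.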
# Visualization
```
# Plot of the base model's decision frontier.
plt.rcParams['figure.figsize'] = [15, 15]
xx, yy = np.meshgrid(np.linspace(-2, 70, 100), np.linspace(-2, 70, 100))
print(xx.ravel())
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
print(Z.shape)
print(Z.min())
Z = Z.reshape(xx.shape)
plt.title("Decision Boundary (base model)")
# This draws the "soft" or secondary boundaries.
plt.contourf(xx, yy, Z, levels=np.linspace(Z.min(), 0, 8), cmap=plt.cm.PuBu, alpha=0.5)
# This draws the line that separates the hard from the soft boundaries.
plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors='g')
# This draws the hard boundary
plt.contourf(xx, yy, Z, levels=[0, Z.max()], colors='palevioletred')
plt.scatter(X_train.iloc[:, 0],
X_train.iloc[:, 1],
edgecolors='k')
plt.xlabel('Mean')
plt.ylabel('SD')
plt.grid(True)
plt.show()
# With contamination.
clf = IsolationForest(n_estimators = 100, random_state=16, contamination=0.001).fit(X_train)
# Plot of the contamination model's decision frontier.
plt.rcParams['figure.figsize'] = [15, 15]
xx, yy = np.meshgrid(np.linspace(-2, 70, 100), np.linspace(-2, 70, 100))
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.title("Decision Boundary (with contamination)")
plt.contourf(xx, yy, Z, levels=np.linspace(
Z.min(), 0, 8), cmap=plt.cm.PuBu, alpha=0.5)
plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors='g')
plt.contourf(xx, yy, Z, levels=[0, Z.max()], colors='palevioletred')
plt.scatter(X_test.iloc[:, 0],
X_test.iloc[:, 1],
cmap = matplotlib.colors.ListedColormap(['blue', 'red']), edgecolors='k')
plt.xlabel('Mean')
plt.ylabel('SD')
plt.grid(True)
plt.show()
predictions = clf.predict(X_test)
df_predictions = pd.concat([X_test, pd.Series(predictions)], axis=1)
df_predictions.columns = ['mean', 'sd', 'output']
df_predictions
# Plot of the test dataset and the contamination model's decision frontier.
plt.rcParams['figure.figsize'] = [15, 15]
xx, yy = np.meshgrid(np.linspace(-2, 70, 100), np.linspace(-2, 70, 100))
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.title("Decision Boundary (with contamination)")
plt.contourf(xx, yy, Z, levels=np.linspace(
Z.min(), 0, 8), cmap=plt.cm.PuBu, alpha=0.5)
plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors='darkred')
plt.contourf(xx, yy, Z, levels=[0, Z.max()], colors='palevioletred')
# output 1 means normal
plt.scatter(df_predictions[df_predictions['output'] == 1 ].iloc[:, 0],
df_predictions[df_predictions['output'] == 1 ].iloc[:, 1],
c='blue', edgecolors='k')
# output -1 means abnormal
plt.scatter(df_predictions[df_predictions['output'] == -1 ].iloc[:, 0],
df_predictions[df_predictions['output'] == -1 ].iloc[:, 1],
c='red', edgecolors='k')
plt.xlabel('Mean')
plt.ylabel('SD')
plt.grid(True)
plt.show()
# Export the model.
dump(clf, 'model.joblib')
```
| github_jupyter |
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#In-This-Notebook" data-toc-modified-id="In-This-Notebook-1"><span class="toc-item-num">1 </span>In This Notebook</a></span></li><li><span><a href="#Final-Result" data-toc-modified-id="Final-Result-2"><span class="toc-item-num">2 </span>Final Result</a></span></li><li><span><a href="#Loss-Function-Notes-&-Scratch" data-toc-modified-id="Loss-Function-Notes-&-Scratch-3"><span class="toc-item-num">3 </span>Loss Function Notes & Scratch</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-3.1"><span class="toc-item-num">3.1 </span>Notes</a></span></li><li><span><a href="#Scratch-Work" data-toc-modified-id="Scratch-Work-3.2"><span class="toc-item-num">3.2 </span>Scratch Work</a></span></li></ul></li></ul></div>
# In This Notebook
**Context**
I worked on this notebook after getting back from my PNW van trip with Rose, Tim, and Eva. I started on a Thursday and ended on a Monday.
The notes on the loss function in section 3 were critically important to arriving at the final solution. While working on this notebook, I dug into the depths of the Learner class and studied callbacks.
**Breakthroughs in this notebook:**
- **Massively improved results.** Achieved by changing the weights of CEL and MSE in the loss function. Before, I was weighting CEL by 10; now I'm weighting MSE by 5.
- **Added metrics.** MSE, CEL, and ACC were all added.
- **`view_results` working.**
- **`show_batch` working.**
# Final Result
```
from fastai.vision.all import *
### Params ###
im_size = 224
batch_size = 64
path = Path('/home/rory/data/coco2017')
valid_split = .15
### Load data (singles) ###
# Grab cols
def grab_cols(df, cols):
    """Expects: DataFrame df; str or list of strs cols. Returns: L or an LoL."""
    def _grab_col(df, col):
        return L((ColReader(col)(df)).to_list())
    if isinstance(cols, str): return _grab_col(df, cols)
    if len(cols)==1: return _grab_col(df, cols)
    if len(cols)>=2:
        r=L()
        for c in cols:
            r.append(_grab_col(df,c))
        return r
df = pd.read_pickle(path/'singles.pkl')
imp, lbl, bbox = grab_cols(df, ['im','lbl','bbox'])
bbox = bbox.map(lambda x:list(x)) # fixed pickle bug; lists incorrectly unpickled as tups
# Create getters for pipeline
imp2lbl = {p:l for p,l in zip(imp,lbl)}
imp2bbox = {p:b for p,b in zip(imp,bbox)}
def get_lbl(p): return imp2lbl[p]
def get_bbox(p): return imp2bbox[p]
### Datasets ###
dss_tfms = [[PILImage.create],
[get_bbox, TensorBBox.create],
[get_lbl, Categorize()]]
splits = RandomSplitter(valid_split)(imp)
dss = Datasets(imp, tfms=dss_tfms, splits=splits)
### DataLoaders ###
cpu_tfms = [BBoxLabeler(), PointScaler(), Resize(im_size, method='squish'), ToTensor()]
gpu_tfms = [IntToFloatTensor(), Normalize.from_stats(*imagenet_stats)]
dls = dss.dataloaders(bs=batch_size,after_item=cpu_tfms,after_batch=gpu_tfms,drop_last=True)
dls.n_inp = 1
### Model ###
class custom_module(Module):
    def __init__(self, body, head):
        self.body, self.head = body, head
    def forward(self, x):
        return self.head(self.body(x))
body = create_body(resnet34, pretrained=True)
head = create_head(1024, 4+dss.c, ps=0.5)
mod = custom_module(body, head)
### Loss ###
def mse(f, bb, lbl): return MSELossFlat()(f[:,:4], torch.squeeze(bb))
def cel(f, bb, lbl): return CrossEntropyLossFlat()(f[:,4:], lbl)
def lbb_loss(f, bb, lbl): return 5*mse(f,bb,lbl) + cel(f,bb,lbl)
def acc(f, bb, lbl): return accuracy(f[:,4:], lbl)
### Training ###
learner = Learner(dls, mod, loss_func=lbb_loss, metrics=[mse, cel, acc])
lr_min, _ = learner.lr_find(); print("lr_min:", lr_min)
learner.fit_one_cycle(10, lr=lr_min)
### Results ###
def view_results(learner, n=16, nrows=4, ncols=4, offset=0):
    # get batch of ims & targs, get preds
    ims, targ_bbs, targ_lbls = learner.dls.one_batch()
    preds = learner.model(ims)
    pred_bbs, pred_lbls = preds[:,:4], preds[:,4:].argmax(dim=-1)
    decoded_ims = Pipeline(gpu_tfms).decode(ims)
    # show grid results
    for i,ctx in enumerate(get_grid(n, nrows, ncols)):
        idx = i+offset*n
        # title
        pred_cls = dls.vocab[pred_lbls[idx].item()]
        targ_cls = dls.vocab[targ_lbls[idx].item()]
        icon = '✔️' if pred_cls==targ_cls else '✖️'
        title = f"{icon} P {pred_cls} : A {targ_cls}"
        # im
        show_image(decoded_ims[idx], ctx=ctx, title=title)
        # bbs
        pred_bb = TensorBBox(pred_bbs[idx])
        targ_bb = TensorBBox(targ_bbs[idx])
        ((pred_bb+1)*224//2).show(ctx=ctx, color='magenta')
        ((targ_bb+1)*224//2).show(ctx=ctx);
view_results(learner)
```
# Loss Function Notes & Scratch
## Notes
My notes from my journal
1. `xb,yb = b`
2. `p = mod(xb)`
3. `alpha = cel(mb) / rmse(mb); beta = 1`
4. `if epoch <= 3: gamma=10; else: gamma=1` (do later)
5. `loss(p, yb): rmse(p[0:4],yb[0])*alpha*gamma + cel(p[4:],yb[1])*beta`
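To make the weighting concrete, here is a dependency-free sketch of the combined regression + classification loss with made-up numbers (the notebook itself uses fastai's `MSELossFlat` and `CrossEntropyLossFlat`; this shows only the arithmetic):

```python
import math

def mse(pred, targ):
    # mean squared error over the bbox coordinates
    return sum((p - t) ** 2 for p, t in zip(pred, targ)) / len(pred)

def cel(logits, target_idx):
    # cross-entropy of a single softmax prediction
    z = [math.exp(v) for v in logits]
    return -math.log(z[target_idx] / sum(z))

def combined_loss(pred_bb, targ_bb, logits, target_idx, w_mse=5.0):
    # the notebook's final weighting: 5 * MSE + CEL
    return w_mse * mse(pred_bb, targ_bb) + cel(logits, target_idx)

loss = combined_loss([0.0, 0.0, 0.0, 0.0], [0.1, 0.1, 0.1, 0.1], [0.0, 0.0], 0)
# 5 * 0.01 + ln(2) ≈ 0.05 + 0.6931 ≈ 0.7431
```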
Method chain for learning (from https://github.com/fastai/fastai/blob/master/fastai/learner.py#L163)
1. `learner.fit_one_cycle()`
2. → `.fit()`
3. → `._do_fit()`
4. → `._do_epoch()`
5. → `._do_epoch_train(); ._do_epoch_validate()` (rest of chain follows `_do_epoch_train`)
6. → `.dl = .dls.train; .all_batches()`
7. → `.n_iter = len(.dl); for o in enum(.dl): .one_batch(*o)` o=(i,b)
8. → `.iter = i; ._split(b); ._do_one_batch()` (item 9 has `_split`, item 10 has `_do_one_batch`)
9. → `._split(b)`: `i = dls.n_inp; .xb, .yb = b[:i], b[i:]`
10. → `._do_one_batch()`:
- `.pred = .model(*xb)`
- `.loss = .loss_func(.pred, *.yb)`
- `._backward()` → `.loss.backward()`
- `._step()` → `.opt.step()`
- `.opt.zero_grad()`
## Scratch Work
```
learner = Learner(dls, mod, loss_func=lbb_loss)
learner.dl = learner.dls.train
learner.b = learner.dl.one_batch()
learner._split(learner.b)
learner.pred = learner.model(learner.xb[0])
learner.pred.shape
learner.pred[0]
learner.yb[0].shape
[learner.yb[0].shape[0], learner.yb[0].shape[-1]]
learner.yb
learner.pred[:,4:].shape
(learner.pred[:,4:]).argmax(dim=-1)
torch.squeeze(learner.yb[0]).shape
learner.metrics
def cel_loss(pred, targ_bb, targ_lbl):
    # mse = MSELossFlat()(pred[:,:4], torch.squeeze(targ_bb))
    cel = CrossEntropyLossFlat()(pred[:,4:], targ_lbl)
    return cel
lbb_loss(learner.pred, learner.yb[0], learner.yb[1])
```
| github_jupyter |
```
import pandas as pd
import os
import numpy as np
import matplotlib.pyplot as plt
import plotly.express as px
import seaborn as sns
os.chdir("E:\\PYTHON NOTES\\projects\\100 data science projeect\\choronic kidney")
data=pd.read_csv("kidney_disease.csv")
data
data.shape
data.describe()
columns=pd.read_csv("data_description.txt",sep="-")
columns=columns.reset_index()
columns.columns=["cols","abb_col_names"]
columns
data.columns=columns["abb_col_names"].values
data.head()
data.info()
def convert_dtype(data,col):
    data[col]=pd.to_numeric(data[col],errors="coerce")
features=["packed cell volume","white blood cell count","red blood cell count"]
for feature in features:
    convert_dtype(data,feature)
data.drop("id",axis=1,inplace=True)
data.info()
def extract_cat_num(data):
    cat_col=[col for col in data.columns if data[col].dtype=="O"]
    num_col=[col for col in data.columns if data[col].dtype!="O"]
    return cat_col,num_col
cat_col,num_col=extract_cat_num(data)
num_col
for col in cat_col:
    print("{} has {} values".format(col,data[col].unique()))
    print("\n")
data["diabetes mellitus"].replace(to_replace={"\tno":"no","\tyes":"yes"},inplace=True)
data["coronary artery disease"]=data["coronary artery disease"].replace(to_replace="\tno",value="no")
data["class"]=data["class"].replace(to_replace="ckd\t",value="ckd")
for col in cat_col:
    print("{} has {} values".format(col,data[col].unique()))
    print("\n")
plt.figure(figsize=(30,20))
for i, feature in enumerate(num_col):
    plt.subplot(5,3,i+1)
    data[feature].hist()
    plt.title(feature)
plt.figure(figsize=(30,20))
for i, feature in enumerate(cat_col):
    plt.subplot(4,3,i+1)
    sns.countplot(data[feature])
    plt.title(feature)
sns.countplot(data["class"])
plt.figure(figsize=(10,8))
data.corr()
sns.heatmap(data.corr(),annot=True)
data.groupby(["red blood cells","class"])["red blood cell count"].agg(["count","mean","median","max","min"])
px.violin(data,x="class",y="red blood cell count",color="class")
px.scatter(data,x="haemoglobin",y="packed cell volume")
import warnings
from warnings import filterwarnings
filterwarnings("ignore")
grid=sns.FacetGrid(data,hue='class',aspect=2)
grid.map(sns.kdeplot,'red blood cell count')
grid.add_legend()
def violin(col):
    fig=px.violin(data,x="class",y=col,color="class",box=True)
    return fig.show()
def scatter(col1,col2):
    fig=px.scatter(data,x=col1,y=col2,color="class")
    return fig.show()
def kdeplot(feature):
    grid=sns.FacetGrid(data,hue="class",aspect=2)
    grid.map(sns.kdeplot,feature)
    grid.add_legend()
data.columns
kdeplot("haemoglobin")
kdeplot("packed cell volume")
scatter("packed cell volume","red blood cell count")
scatter("haemoglobin","red blood cell count")
data.columns
violin("red blood cell count")
scatter("red blood cell count","albumin")
data.isnull().sum().sort_values(ascending=False)
```
## Missing values filled with random sampling
```
df=data.copy()
df.head()
df["red blood cells"].isnull().sum()
random_sample=df["red blood cells"].dropna().sample(df["red blood cells"].isnull().sum())
df[df["red blood cells"].isnull()].index
random_sample.index
random_sample.index=df[df["red blood cells"].isnull()].index
random_sample.index
random_sample
df.loc[df["red blood cells"].isnull(),"red blood cells"]=random_sample
df["red blood cells"].isnull().sum()
```
## Missing imputation function
```
def Random_value_imputation(feature):
    random_sample=df[feature].dropna().sample(df[feature].isnull().sum())
    random_sample.index=df[df[feature].isnull()].index
    df.loc[df[feature].isnull(),feature]=random_sample
for col in num_col:
    Random_value_imputation(col)
df[num_col].isnull().sum()
df[cat_col].isnull().sum()
Random_value_imputation("pus cell")
df["pus cell clumps"].mode()[0]
def impute_mode(feature):
    mode=df[feature].mode()[0]
    df[feature]=df[feature].fillna(mode)
for col in cat_col:
    impute_mode(col)
df[cat_col].isnull().sum()
```
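The same idea, stripped of pandas, as an illustrative standalone sketch: replacements for missing entries are drawn from the observed values of the same feature, so the imputation roughly preserves the empirical distribution.

```python
import random

def random_sample_impute(values, seed=0):
    """Fill None entries with values drawn from the observed (non-None) pool."""
    rng = random.Random(seed)
    observed = [v for v in values if v is not None]
    return [v if v is not None else rng.choice(observed) for v in values]

feature = [4.1, None, 3.9, 5.0, None, 4.4]
filled = random_sample_impute(feature)
```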
## Label encoding
```
for col in cat_col:
    print("{} has {} categories".format(col,df[col].nunique()))
from sklearn.preprocessing import LabelEncoder
le=LabelEncoder()
for col in cat_col:
    df[col]=le.fit_transform(df[col])
df.head()
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
ind_col=[col for col in df.columns if col!="class"]
dep_col="class"
x=df[ind_col]
y=df[dep_col]
order_rank_feature=SelectKBest(score_func=chi2,k=20)
order_feature=order_rank_feature.fit(x,y)
order_feature.scores_
datascore=pd.DataFrame(order_feature.scores_,columns=["score"])
datascore
dfcols=pd.DataFrame(x.columns)
dfcols
feature_rank=pd.concat([dfcols,datascore],axis=1)
feature_rank.columns=["features","score"]
feature_rank
feature_rank.nlargest(10,"score")
selected_columns=feature_rank.nlargest(10,"score")['features'].values
x_new=df[selected_columns]
x_new.shape
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x_new,y,random_state=0,test_size=0.25)
from xgboost import XGBClassifier
xgb=XGBClassifier()
params={"learning_rate":[0.05,0.20,0.25],
"max_depth":[5,8,10],
"min_child_weight":[1,3,5,7],
"gamma":[0.0,0.1,0.2,0.4],
"colsample_bytree":[0.3,0.4,0.7]
}
from sklearn.model_selection import RandomizedSearchCV
random_search=RandomizedSearchCV(xgb,param_distributions=params,n_iter=5,scoring="roc_auc",n_jobs=-1,cv=5,verbose=3)
random_search.fit(x_train,y_train)
random_search.best_estimator_
random_search.best_params_
classifier=XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bynode=1, colsample_bytree=0.4, gamma=0.1, gpu_id=-1,
importance_type='gain', interaction_constraints='',
learning_rate=0.2, max_delta_step=0, max_depth=5,
min_child_weight=1,monotone_constraints='()',
n_estimators=100, n_jobs=0, num_parallel_tree=1,
objective='binary:logistic', random_state=0, reg_alpha=0,
reg_lambda=1, scale_pos_weight=1, subsample=1, tree_method='exact',
validate_parameters=1, verbosity=None)
classifier.fit(x_train,y_train)
y_pred=classifier.predict(x_test)
from sklearn.metrics import confusion_matrix,accuracy_score
confusion_matrix(y_pred,y_test)
accuracy_score(y_pred,y_test)
```
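For reference, the `LabelEncoder` step above can be mimicked in a few lines of plain Python, which makes the category-to-integer mapping explicit (an illustrative sketch, not a replacement for scikit-learn):

```python
def label_encode(values):
    # sklearn's LabelEncoder assigns integers to classes in sorted order
    mapping = {cls: i for i, cls in enumerate(sorted(set(values)))}
    return [mapping[v] for v in values], mapping

encoded, mapping = label_encode(["ckd", "notckd", "ckd", "ckd", "notckd"])
# mapping == {"ckd": 0, "notckd": 1}; encoded == [0, 1, 0, 0, 1]
```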
| github_jupyter |
# Baseline
```
import os
from typing import Any, Dict
import numpy as np
import nltk
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
```
## Utilities
```
def train_validate_test_logistic_regression_model(train_dict: Dict[str, Any],
                                                  dev_dict: Dict[str, Any],
                                                  C: float, random_seed: int) -> Dict[str, float]:
    """Train and validate logistic regression model with tfidf word features"""
    # define tfidf vectorizer
    vectorizer = TfidfVectorizer(
        max_features=None,
        encoding='utf-8',
        tokenizer=nltk.word_tokenize,
        ngram_range=(1, 1),
    )
    # fit vectorizer
    vectorizer.fit(train_dict['text'])
    train_X = vectorizer.transform(train_dict['text'])
    dev_X = vectorizer.transform(dev_dict['text'])
    # Define Logistic Regression model
    model = LogisticRegression(
        solver='liblinear',
        random_state=random_seed,
        verbose=False,
        C=C,
    )
    # Fit the model to training data
    model.fit(
        train_X,
        train_dict['labels']
    )
    # make prediction using the trained model
    train_pred = model.predict(train_X)
    dev_pred = model.predict(dev_X)
    # compute F1 scores
    train_f1 = f1_score(y_pred=train_pred, y_true=train_dict['labels'], average='macro', labels=['0', '1'])
    dev_f1 = f1_score(y_pred=dev_pred, y_true=dev_dict['labels'], average='macro', labels=['0', '1'])
    return {
        'train_f1': train_f1,
        'dev_f1': dev_f1,
    }

def pick_best_dev_score(scores_dict: Dict[float, Dict[str, float]]) -> Dict[str, float]:
    best_val = {'dev_f1': -1}
    for k, val in scores_dict.items():
        if val['dev_f1'] > best_val['dev_f1']:
            best_val = val
    return best_val
```
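`pick_best_dev_score` is pure Python, so it can be sanity-checked in isolation (restated here with a toy scores dict):

```python
def pick_best_dev_score(scores_dict):
    best_val = {'dev_f1': -1}
    for k, val in scores_dict.items():
        if val['dev_f1'] > best_val['dev_f1']:
            best_val = val
    return best_val

toy_scores = {
    1.0: {'train_f1': 0.91, 'dev_f1': 0.70},
    2.0: {'train_f1': 0.95, 'dev_f1': 0.74},
    3.0: {'train_f1': 0.98, 'dev_f1': 0.72},
}
best = pick_best_dev_score(toy_scores)
# best == {'train_f1': 0.95, 'dev_f1': 0.74}
```

Note that it returns the best score dict itself, not the `C` value that produced it.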
## Load data
```
DATA_DIR = os.path.join('../data/GermEval21_Toxic_Train')
assert os.path.isdir(DATA_DIR)
train_df = pd.read_csv(os.path.join(DATA_DIR, 'train.csv'), encoding='utf-8', sep=',')
dev_df = pd.read_csv(os.path.join(DATA_DIR, 'dev.csv'), encoding='utf-8', sep=',')
train_di = {
'text': train_df['comment_text'],
'labels': train_df['Sub3_FactClaiming'].astype(str),
}
dev_di = {
'text': dev_df['comment_text'],
'labels': dev_df['Sub3_FactClaiming'].astype(str),
}
```
## Train and evaluate
```
scores_dict = {}
for c in [1.0, 2.0, 3.0, 4.0, 5.0]:
    scores_dict[c] = train_validate_test_logistic_regression_model(
        train_dict=train_di,
        dev_dict=dev_di,
        C=c,
        random_seed=123,
    )
scores_dict
pick_best_dev_score(scores_dict)
```
## Train using 5-fold cross-validation data
```
CROSS_VALIDATION_DATA_DIR = os.path.join('../data/cross_validation')
results_dict = {}
for fold_name in ['fold_A', 'fold_B', 'fold_C', 'fold_D', 'fold_E']:
    print(f'*** {fold_name} ***')
    data_dir = os.path.join(CROSS_VALIDATION_DATA_DIR, fold_name)
    assert os.path.isdir(data_dir)
    train_df = pd.read_csv(os.path.join(data_dir, 'train.csv'), encoding='utf-8', sep=',')
    dev_df = pd.read_csv(os.path.join(data_dir, 'dev.csv'), encoding='utf-8', sep=',')
    train_di = {
        'text': train_df['comment_text'],
        'labels': train_df['Sub3_FactClaiming'].astype(str),
    }
    dev_di = {
        'text': dev_df['comment_text'],
        'labels': dev_df['Sub3_FactClaiming'].astype(str),
    }
    scores_dict = {}
    for c in [1.0, 2.0, 3.0, 4.0, 5.0]:
        scores_dict[c] = train_validate_test_logistic_regression_model(
            train_dict=train_di,
            dev_dict=dev_di,
            C=c,
            random_seed=123,
        )
    results_dict[fold_name] = scores_dict
fold_names = ['fold_A', 'fold_B', 'fold_C', 'fold_D', 'fold_E']
train_f1_means = []
train_f1_stds = []
dev_f1_means = []
dev_f1_stds = []
Cs = []
for c in [1.0, 2.0, 3.0, 4.0, 5.0]:
    Cs.append(c)
    train_f1_means.append(
        np.mean([results_dict[fold_name][c]['train_f1'] for fold_name in fold_names])
    )
    train_f1_stds.append(
        np.std([results_dict[fold_name][c]['train_f1'] for fold_name in fold_names])
    )
    dev_f1_means.append(
        np.mean([results_dict[fold_name][c]['dev_f1'] for fold_name in fold_names])
    )
    dev_f1_stds.append(
        np.std([results_dict[fold_name][c]['dev_f1'] for fold_name in fold_names])
    )
table_dict = {
'C': Cs,
'train_f1': [f'{train_f1_mean:0.3f} ± {train_f1_std:0.2f}' for train_f1_mean, train_f1_std in
zip(train_f1_means, train_f1_stds)],
'dev_f1': [f'{dev_f1_mean:0.3f} ± {dev_f1_std:0.2f}' for dev_f1_mean, dev_f1_std in zip(dev_f1_means, dev_f1_stds)],
}
pd.DataFrame(table_dict)
```
| github_jupyter |
```
import torch
import torch.nn as nn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
```
import torch
import torch.nn as nn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
```
# Start: 1970-10-01
# End: 2020-09-30
stock_data = pd.read_csv(
"/content/drive/My Drive/data/stock_data/^GSCP.csv",
index_col=0,
parse_dates=True
)
stock_data
```
```
stock_data = pd.read_csv(
"/content/drive/My Drive/data/stock_data/^GSPC.csv",
index_col = 0,
parse_dates=True
)
stock_data
```
```
stock_data.drop(
["Open", "High", "Low", "Close", "Volume"],
axis="columns",
inplace=True
)
stock_data
```
```
stock_data.drop(
["Open", "High", "Low", "Close", "Volume"],
axis="columns",
inplace=True
)
stock_data
```
```
stock_data.plot(figsize=(12, 4))
```
```
stock_data.plot(figsize=(12, 4))
```
```
# Convert a feature into a one-dimensional Numpy Array
y = stock_data["Adj Close"].values
y
```
```
y = stock_data["Adj Close"].values
y
```
```
# Normalization:
from sklearn.preprocessing import MinMaxScaler
```
```
from sklearn.preprocessing import MinMaxScaler
```
```
# Converts a one-dimensional Numpy Array to a two-dimensional Numpy Array
scaler = MinMaxScaler(feature_range=(-1, 1))
scaler.fit(y.reshape(-1, 1))
y = scaler.transform(y.reshape(-1, 1))
y
```
```
scaler = MinMaxScaler(feature_range=(-1, 1))
scaler.fit(y.reshape(-1, 1))
y = scaler.transform(y.reshape(-1, 1))
y
```
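`MinMaxScaler(feature_range=(-1, 1))` is just an affine map; the following dependency-free sketch mirrors the forward and inverse transforms (illustrative only — the notebook rightly uses scikit-learn's implementation):

```python
def minmax_scale(values, lo=-1.0, hi=1.0):
    v_min, v_max = min(values), max(values)
    scale = (hi - lo) / (v_max - v_min)
    scaled = [lo + (v - v_min) * scale for v in values]
    # return the parameters needed to invert the transform
    return scaled, (v_min, scale, lo)

def minmax_inverse(scaled, params):
    v_min, scale, lo = params
    return [(s - lo) / scale + v_min for s in scaled]

prices = [100.0, 150.0, 200.0]
scaled, params = minmax_scale(prices)
restored = minmax_inverse(scaled, params)
```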
```
# Convert a two-dimensional Numpy Array to a one-dimensional Pytorch Tensor
y = torch.FloatTensor(y).view(-1)
y
```
```
y = torch.FloatTensor(y).view(-1)
y
```
```
# Separate normalized data for training and testing
test_size = 24
train_seq = y[:-test_size]
test_seq = y[-test_size:]
```
```
test_size = 24
# train_seq = y[:-test_size]
# test_seq = y[-test_size:]
```
```
# Plot y, train_seq and test_seq
plt.figure(figsize=(12, 4))
plt.xlim(-20, len(y)+20)
plt.grid(True)
plt.plot(y)
```
```
plt.figure(figsize=(12, 4))
plt.xlim(-20, len(y)+20)
plt.grid(True)
plt.plot(y)
```
```
train_window_size = 12
```
```
train_window_size = 12
```
```
def input_data(seq, ws):
    out = []
    L = len(seq)
    for i in range(L-ws):
        window = seq[i:i+ws]
        label = seq[i+ws:i+ws+1]
        out.append((window, label))
    return out
```
```
def input_data(seq, ws):
    out = []
    L = len(seq)
    for i in range(L-ws):
        window = seq[i:i+ws]
        label = seq[i+ws:i+ws+1]
        out.append((window, label))
    return out
```
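A quick standalone check of the windowing logic, with `input_data` restated to run on a plain list:

```python
def input_data(seq, ws):
    out = []
    L = len(seq)
    for i in range(L - ws):
        window = seq[i:i + ws]          # ws consecutive observations
        label = seq[i + ws:i + ws + 1]  # the single next observation
        out.append((window, label))
    return out

pairs = input_data([0, 1, 2, 3, 4], 2)
# pairs == [([0, 1], [2]), ([1, 2], [3]), ([2, 3], [4])]
```

In general, a sequence of length `L` yields `L - ws` (window, label) pairs.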
```
train_data = input_data(train_seq, train_window_size)
```
```
# train_data = input_data(train_seq, train_window_size)
train_data = input_data(y, train_window_size)
```
```
print("The Number of Training Data: ", len(train_data))
```
```
# 600-24-12=564
print("The Number of Training Data: ", len(train_data))
```
```
class Model(nn.Module):
    def __init__(self, input=1, h=50, output=1):
        super().__init__()
        self.hidden_size = h
        self.lstm = nn.LSTM(input, h)
        self.fc = nn.Linear(h, output)
        self.hidden = (
            torch.zeros(1, 1, h),
            torch.zeros(1, 1, h)
        )
    def forward(self, seq):
        out, _ = self.lstm(
            seq.view(len(seq), 1, -1),
            self.hidden
        )
        out = self.fc(
            out.view(len(seq), -1)
        )
        return out[-1]
```
```
class Model(nn.Module):
    def __init__(self, input=1, h=50, output=1):
        super().__init__()
        self.hidden_size = h
        self.lstm = nn.LSTM(input, h)
        self.fc = nn.Linear(h, output)
        self.hidden = (
            torch.zeros(1, 1, h),
            torch.zeros(1, 1, h)
        )
    def forward(self, seq):
        out, _ = self.lstm(
            seq.view(len(seq), 1, -1),
            self.hidden
        )
        out = self.fc(
            out.view(len(seq), -1)
        )
        return out[-1]
```
```
torch.manual_seed(123)
model = Model()
# mean squared error loss
criterion = nn.MSELoss()
# stochastic gradient descent
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```
```
torch.manual_seed(123)
model = Model()
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```
```
epochs = 10
train_losses = []
test_losses = []
```
```
epochs = 10
train_losses = []
test_losses = []
```
```
def run_train():
    model.train()
    for train_window, correct_label in train_data:
        optimizer.zero_grad()
        model.hidden = (
            torch.zeros(1, 1, model.hidden_size),
            torch.zeros(1, 1, model.hidden_size)
        )
        train_predicted_label = model.forward(train_window)
        train_loss = criterion(train_predicted_label, correct_label)
        train_loss.backward()
        optimizer.step()
        train_losses.append(train_loss)
```
```
def run_train():
    model.train()
    for train_window, correct_label in train_data:
        optimizer.zero_grad()
        model.hidden = (
            torch.zeros(1, 1, model.hidden_size),
            torch.zeros(1, 1, model.hidden_size)
        )
        train_predicted_label = model.forward(train_window)
        train_loss = criterion(train_predicted_label, correct_label)
        train_loss.backward()
        optimizer.step()
        train_losses.append(train_loss)
```
```
# Extract the value of an element from a one-dimensional Tensor with a single element
a = torch.tensor([3])
a
# a.item()
```
```
a = torch.tensor([3])
a.item()
```
```
def run_test():
    model.eval()
    for i in range(test_size):
        test_window = torch.FloatTensor(extending_seq[-test_size:])
        # print()
        # print("The Length of Extending Sequence: ", len(extending_seq))
        # print("The Length of window", len(test_window))
        # print()
        # Disable gradient tracking so no computation graph is stored, saving memory
        with torch.no_grad():
            model.hidden = (
                torch.zeros(1, 1, model.hidden_size),
                torch.zeros(1, 1, model.hidden_size)
            )
            test_predicted_label = model.forward(test_window)
            extending_seq.append(test_predicted_label.item())
    test_loss = criterion(
        torch.FloatTensor(extending_seq[-test_size:]),
        y[len(y)-test_size:]
    )
    test_losses.append(test_loss)
```
```
def run_test():
    model.eval()
    for i in range(test_size):
        test_window = torch.FloatTensor(extending_seq[-test_size:])
        with torch.no_grad():
            model.hidden = (
                torch.zeros(1, 1, model.hidden_size),
                torch.zeros(1, 1, model.hidden_size)
            )
            test_predicted_label = model.forward(test_window)
            extending_seq.append(test_predicted_label.item())
    test_loss = criterion(
        torch.FloatTensor(extending_seq[-test_size:]),
        y[len(y)-test_size:]
    )
    test_losses.append(test_loss)
```
```
train_seq[-test_size:]
```
```
# train_seq[-test_size:]
```
```
train_seq[-test_size:].tolist()
```
```
# train_seq[-test_size:].tolist()
```
```
for epoch in range(epochs):
    print()
    print(f'Epoch: {epoch+1}')
    run_train()
    extending_seq = train_seq[-test_size:].tolist()
    run_test()
    plt.figure(figsize=(12, 4))
    plt.xlim(-20, len(y)+20)
    plt.grid(True)
    plt.plot(y.numpy())
    plt.plot(
        range(len(y)-test_size, len(y)),
        extending_seq[-test_size:]
    )
    plt.show()
```
```
for epoch in range(epochs):
    print()
    print(f'Epoch: {epoch+1}')
    run_train()
    # extending_seq = train_seq[-test_size:].tolist()
    extending_seq = y[-test_size:].tolist()
    run_test()
    plt.figure(figsize=(12, 4))
    # plt.xlim(-20, len(y)+20)
    plt.xlim(-20, len(y)+50)
    plt.grid(True)
    plt.plot(y.numpy())
    plt.plot(
        # range(len(y)-test_size, len(y)),
        range(len(y), len(y)+test_size),
        extending_seq[-test_size:]
    )
    plt.show()
```
```
plt.plot(train_losses)
```
```
plt.plot(train_losses)
```
```
plt.plot(test_losses)
```
```
plt.plot(test_losses)
```
```
# List
predicted_normalized_labels_list = extending_seq[-test_size:]
predicted_normalized_labels_list
```
```
predicted_normalized_labels_list = extending_seq[-test_size:]
```
```
# Convert a list to a one-dimensional Numpy Array
predicted_normalized_labels_array_1d = np.array(predicted_normalized_labels_list)
predicted_normalized_labels_array_1d
```
```
predicted_normalized_labels_array_1d = np.array(predicted_normalized_labels_list)
predicted_normalized_labels_array_1d
```
```
# Converts a one-dimensional Numpy Array to a two-dimensional Numpy Array
predicted_normalized_labels_array_2d = predicted_normalized_labels_array_1d.reshape(-1, 1)
predicted_normalized_labels_array_2d
```
```
predicted_normalized_labels_array_2d = predicted_normalized_labels_array_1d.reshape(-1, 1)
predicted_normalized_labels_array_2d
```
```
# From a normalized number to a true number.
predicted_labels_array_2d = scaler.inverse_transform(predicted_normalized_labels_array_2d)
predicted_labels_array_2d
```
```
predicted_labels_array_2d = scaler.inverse_transform(predicted_normalized_labels_array_2d)
predicted_labels_array_2d
```
```
len(predicted_labels_array_2d)
```
```
len(predicted_labels_array_2d)
```
```
stock_data["Adj Close"][-test_size:]
```
```
stock_data["Adj Close"][-test_size:]
```
```
len(stock_data["Adj Close"][-test_size:])
```
```
len(stock_data["Adj Close"][-test_size:])
```
```
stock_data.index
```
```
stock_data.index
```
```
# Either way of writing works.
x_2018_10_to_2020_09 = np.arange('2018-10', '2020-10', dtype='datetime64[M]')
# x_2018_10_to_2020_09 = np.arange('2018-10-01', '2020-10-31', dtype='datetime64[M]')
x_2018_10_to_2020_09
```
```
# x_2018_10_to_2020_09 = np.arange('2018-10', '2020-10', dtype='datetime64[M]')
# x_2018_10_to_2020_09
x_2020_10_to_2022_09 = np.arange('2020-10', '2022-10', dtype='datetime64[M]')
x_2020_10_to_2022_09
```
```
len(x_2018_10_to_2020_09)
```
```
# len(x_2018_10_to_2020_09)
len(x_2020_10_to_2022_09)
```
```
fig = plt.figure(figsize=(12, 4))
plt.title('Stock Price Prediction')
plt.ylabel('Price')
plt.grid(True)
plt.autoscale(axis='x', tight=True)
fig.autofmt_xdate()
plt.plot(stock_data["Adj Close"]['2016-01':])
plt.plot(x_2018_10_to_2020_09, predicted_labels_array_2d)
plt.show()
```
```
fig = plt.figure(figsize=(12, 4))
plt.title('Stock Price Prediction')
plt.ylabel('Price')
plt.grid(True)
plt.autoscale(axis='x', tight=True)
fig.autofmt_xdate()
plt.plot(stock_data["Adj Close"]['2016-01':])
# plt.plot(x_2018_10_to_2020_09, predicted_labels_array_2d)
plt.plot(x_2020_10_to_2022_09, predicted_labels_array_2d)
plt.show()
```
| github_jupyter |
```
import projector
import numpy as np
import dnnlib
from dnnlib import tflib
import pickle
import tensorflow as tf
import PIL
import os
import tqdm
network_pkl = "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/ffhq.pkl"
def project(network_pkl: str, target_fname: str, outdir: str, save_video: bool, seed: int):
    # Load networks.
    tflib.init_tf({'rnd.np_random_seed': seed})
    print('Loading networks from "%s"...' % network_pkl)
    with dnnlib.util.open_url(network_pkl) as fp:
        _G, _D, Gs = pickle.load(fp)
    # Load target image.
    # files = [f for f in listdir(target_fname) if isfile(join(target_fname, f))]
    # for i in range(len(files)):
    #     path = target_fname + files[i]
    target_pil = PIL.Image.open(target_fname)
    w, h = target_pil.size
    s = min(w, h)
    target_pil = target_pil.crop(((w - s) // 2, (h - s) // 2, (w + s) // 2, (h + s) // 2))
    target_pil = target_pil.convert('RGB')
    target_pil = target_pil.resize((Gs.output_shape[3], Gs.output_shape[2]), PIL.Image.ANTIALIAS)
    target_uint8 = np.array(target_pil, dtype=np.uint8)
    target_float = target_uint8.astype(np.float32).transpose([2, 0, 1]) * (2 / 255) - 1
    # Initialize projector.
    proj = projector.Projector()
    proj.set_network(Gs)
    proj.start([target_float])
    # Setup output directory.
    # os.makedirs(outdir, exist_ok=True)
    # target_pil.save(f'{outdir}/target.png')
    # writer = None
    # if save_video:
    #     writer = imageio.get_writer(f'{outdir}/proj.mp4', mode='I', fps=60, codec='libx264', bitrate='16M')
    # Run projector.
    with tqdm.trange(proj.num_steps) as t:
        for step in t:
            assert step == proj.cur_step
            # if writer is not None:
            #     writer.append_data(np.concatenate([target_uint8, proj.images_uint8[0]], axis=1))
            dist, loss = proj.step()
            t.set_postfix(dist=f'{dist[0]:.4f}', loss=f'{loss:.2f}')
    # Save results.
    # PIL.Image.fromarray(proj.images_uint8[0], 'RGB').save(f'{outdir}/proj.png')
    # np.savez('out/dlatents.npz', dlatents=proj.dlatents)
    return proj.dlatents
    # if writer is not None:
    #     writer.close()
dlatent = project(network_pkl, "images/002_02.png", "out/", False, 303)
dlatent.shape
tflib.init_tf({'rnd.np_random_seed': 303})
with dnnlib.util.open_url(network_pkl) as fp:
    _G, _D, Gs = pickle.load(fp)
data = np.load('out/dlatents.npz')
dlat = data[data.files[0]]
image_float_expr = tf.cast(Gs.components.synthesis.get_output_for(dlat), tf.float32)
images_uint8_expr = tflib.convert_images_to_uint8(image_float_expr, nchw_to_nhwc=True)[0]
img = PIL.Image.fromarray(tflib.run(images_uint8_expr), 'RGB')
img.show()
```
| github_jupyter |
# Project: Part of Speech Tagging with Hidden Markov Models
---
### Introduction
Part of speech tagging is the process of determining the syntactic category of a word from the words in its surrounding context. It is often used to help disambiguate natural language phrases because it can be done quickly with high accuracy. Tagging can be used for many NLP tasks like determining correct pronunciation during speech synthesis (for example, _dis_-count as a noun vs dis-_count_ as a verb), for information retrieval, and for word sense disambiguation.
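As a toy illustration (not part of the project code), a tagged sentence is just a sequence of word–tag pairs, and the same surface form can carry different tags in different contexts:

```python
# Hypothetical universal-tagset labels for two senses of "discount".
noun_use = [("The", "DET"), ("discount", "NOUN"), ("expired", "VERB")]
verb_use = [("They", "PRON"), ("discount", "VERB"), ("rumors", "NOUN")]

# Same word, different part of speech depending on context.
tags_for_discount = {dict(noun_use)["discount"], dict(verb_use)["discount"]}
# tags_for_discount == {"NOUN", "VERB"}
```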
In this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/) library to build a hidden Markov model for part of speech tagging using a "universal" tagset. Hidden Markov models have been able to achieve [>96% tag accuracy with larger tagsets on realistic text corpora](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf). Hidden Markov models have also been used for speech recognition and speech generation, machine translation, gene recognition for bioinformatics, and human gesture recognition for computer vision, and more.

The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated to complete the project; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you must provide code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
<div class="alert alert-block alert-info">
**Note:** Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You must then **export the notebook** by running the last cell in the notebook, or by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Your submission should include both the `html` and `ipynb` files.
</div>
<div class="alert alert-block alert-info">
**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
</div>
### The Road Ahead
You must complete Steps 1-3 below to pass the project. The section on Step 4 includes references & resources you can use to further explore HMM taggers.
- [Step 1](#Step-1:-Read-and-preprocess-the-dataset): Review the provided interface to load and access the text corpus
- [Step 2](#Step-2:-Build-a-Most-Frequent-Class-tagger): Build a Most Frequent Class tagger to use as a baseline
- [Step 3](#Step-3:-Build-an-HMM-tagger): Build an HMM Part of Speech tagger and compare to the MFC baseline
- [Step 4](#Step-4:-[Optional]-Improving-model-performance): (Optional) Improve the HMM tagger
<div class="alert alert-block alert-warning">
**Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
</div>
```
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers, tests
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from IPython.core.display import HTML
from itertools import chain
from collections import Counter, defaultdict
from helpers import show_model, Dataset
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
```
## Step 1: Read and preprocess the dataset
---
We'll start by reading in a text corpus and splitting it into a training and testing dataset. The data set is a copy of the [Brown corpus](https://en.wikipedia.org/wiki/Brown_Corpus) (originally from the [NLTK](https://www.nltk.org/) library) that has already been pre-processed to only include the [universal tagset](https://arxiv.org/pdf/1104.2086.pdf). You should expect to get slightly higher accuracy using this simplified tagset than the same model would achieve on a larger tagset like the full [Penn treebank tagset](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html), but the process you'll follow would be the same.
The `Dataset` class provided in helpers.py will read and parse the corpus. You can generate your own datasets compatible with the reader by writing them to the following format. The dataset is stored in plaintext as a collection of words and corresponding tags. Each sentence starts with a unique identifier on the first line, followed by one tab-separated word/tag pair on each following line. Sentences are separated by a single blank line.
Example from the Brown corpus.
```
b100-38532
Perhaps ADV
it PRON
was VERB
right ADJ
; .
; .
b100-35577
...
```
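As a sketch of how a reader for this format might work (the `read_corpus` function and `Sentence` tuple here are illustrative stand-ins, not the actual `helpers.py` implementation):

```python
from collections import namedtuple

Sentence = namedtuple("Sentence", "words tags")

def read_corpus(text):
    """Parse blank-line-separated blocks: an id line, then word<TAB>tag lines."""
    sentences = {}
    for block in text.strip().split("\n\n"):
        lines = block.split("\n")
        pairs = [line.split("\t") for line in lines[1:]]
        sentences[lines[0]] = Sentence(
            words=tuple(p[0] for p in pairs),
            tags=tuple(p[1] for p in pairs),
        )
    return sentences

sample = "b100-38532\nPerhaps\tADV\nit\tPRON\nwas\tVERB\nright\tADJ\n;\t.\n\nb100-35577\nSo\tADV"
corpus = read_corpus(sample)
```

Each sentence identifier becomes a dictionary key, mirroring the `Dataset.sentences` interface described below.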
```
data = Dataset("tags-universal.txt", "brown-universal.txt", train_test_split=0.8)
print("There are {} sentences in the corpus.".format(len(data)))
print("There are {} sentences in the training set.".format(len(data.training_set)))
print("There are {} sentences in the testing set.".format(len(data.testing_set)))
assert len(data) == len(data.training_set) + len(data.testing_set), \
"The number of sentences in the training set + testing set should sum to the number of sentences in the corpus"
```
### The Dataset Interface
You can access (mostly) immutable references to the dataset through a simple interface provided through the `Dataset` class, which represents an iterable collection of sentences along with easy access to partitions of the data for training & testing. Review the reference below, then run and review the next few cells to make sure you understand the interface before moving on to the next step.
```
Dataset-only Attributes:
training_set - reference to a Subset object containing the samples for training
testing_set - reference to a Subset object containing the samples for testing
Dataset & Subset Attributes:
sentences - a dictionary with an entry {sentence_key: Sentence()} for each sentence in the corpus
keys - an immutable ordered (not sorted) collection of the sentence_keys for the corpus
vocab - an immutable collection of the unique words in the corpus
tagset - an immutable collection of the unique tags in the corpus
X - returns an array of words grouped by sentences ((w11, w12, w13, ...), (w21, w22, w23, ...), ...)
Y - returns an array of tags grouped by sentences ((t11, t12, t13, ...), (t21, t22, t23, ...), ...)
N - returns the number of samples (individual words or tags) in the dataset
Methods:
stream() - returns a flat iterable over all (word, tag) pairs across all sentences in the corpus
__iter__() - returns an iterable over the data as (sentence_key, Sentence()) pairs
__len__() - returns the number of sentences in the dataset
```
For example, consider a Subset, `subset`, of the sentences `{"s0": Sentence(("See", "Spot", "run"), ("VERB", "NOUN", "VERB")), "s1": Sentence(("Spot", "ran"), ("NOUN", "VERB"))}`. The subset will have these attributes:
```
subset.keys == {"s1", "s0"} # unordered
subset.vocab == {"See", "run", "ran", "Spot"} # unordered
subset.tagset == {"VERB", "NOUN"} # unordered
subset.X == (("Spot", "ran"), ("See", "Spot", "run")) # order matches .keys
subset.Y == (("NOUN", "VERB"), ("VERB", "NOUN", "VERB")) # order matches .keys
subset.N == 7 # there are a total of seven observations over all sentences
len(subset) == 2 # because there are two sentences
```
<div class="alert alert-block alert-info">
**Note:** The `Dataset` class is _convenient_, but it is **not** efficient. It is not suitable for huge datasets because it stores multiple redundant copies of the same data.
</div>
#### Sentences
`Dataset.sentences` is a dictionary of all sentences in the training corpus, each keyed to a unique sentence identifier. Each `Sentence` is itself an object with two attributes: a tuple of the words in the sentence named `words` and a tuple of the tag corresponding to each word named `tags`.
```
key = 'b100-38532'
print("Sentence: {}".format(key))
print("words:\n\t{!s}".format(data.sentences[key].words))
print("tags:\n\t{!s}".format(data.sentences[key].tags))
```
<div class="alert alert-block alert-info">
**Note:** The underlying iterable sequence is **unordered** over the sentences in the corpus; it is not guaranteed to return the sentences in a consistent order between calls. Use `Dataset.stream()`, `Dataset.keys`, `Dataset.X`, or `Dataset.Y` attributes if you need ordered access to the data.
</div>
#### Counting Unique Elements
You can access the list of unique words (the dataset vocabulary) via `Dataset.vocab` and the unique list of tags via `Dataset.tagset`.
```
print("There are a total of {} samples of {} unique words in the corpus."
.format(data.N, len(data.vocab)))
print("There are {} samples of {} unique words in the training set."
.format(data.training_set.N, len(data.training_set.vocab)))
print("There are {} samples of {} unique words in the testing set."
.format(data.testing_set.N, len(data.testing_set.vocab)))
print("There are {} words in the test set that are missing in the training set."
.format(len(data.testing_set.vocab - data.training_set.vocab)))
assert data.N == data.training_set.N + data.testing_set.N, \
"The number of training + test samples should sum to the total number of samples"
```
#### Accessing word and tag Sequences
The `Dataset.X` and `Dataset.Y` attributes provide access to ordered collections of matching word and tag sequences for each sentence in the dataset.
```
# accessing words with Dataset.X and tags with Dataset.Y
for i in range(2):
print("Sentence {}:".format(i + 1), data.X[i])
print()
print("Labels {}:".format(i + 1), data.Y[i])
print()
```
#### Accessing (word, tag) Samples
The `Dataset.stream()` method returns an iterator that chains together every pair of (word, tag) entries across all sentences in the entire corpus.
```
# use Dataset.stream() (word, tag) samples for the entire corpus
print("\nStream (word, tag) pairs:\n")
for i, pair in enumerate(data.stream()):
print("\t", pair)
if i > 5: break
```
For both our baseline tagger and the HMM model we'll build, we need to estimate the frequency of tags & words from the frequency counts of observations in the training corpus. In the next several cells you will complete functions to compute several sets of frequency counts.
## Step 2: Build a Most Frequent Class tagger
---
Perhaps the simplest tagger (and a good baseline for tagger performance) is to simply choose the tag most frequently assigned to each word. This "most frequent class" tagger inspects each observed word in the sequence and assigns it the label that was most often assigned to that word in the corpus.
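On a toy set of (word, tag) observations (illustrative data, not the Brown corpus), the most frequent class rule boils down to keeping one `Counter` per word:

```python
from collections import Counter, defaultdict

observations = [("time", "NOUN"), ("time", "NOUN"), ("time", "VERB"),
                ("flies", "VERB"), ("flies", "NOUN"), ("flies", "VERB")]

# Count how often each tag is assigned to each word...
tag_counts = defaultdict(Counter)
for word, tag in observations:
    tag_counts[word][tag] += 1

# ...then keep only the most common tag per word.
mfc = {word: counts.most_common(1)[0][0] for word, counts in tag_counts.items()}
```

Here `mfc` maps `"time"` to `"NOUN"` and `"flies"` to `"VERB"`, because those are the majority labels in the toy data.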
### IMPLEMENTATION: Pair Counts
Complete the function below that computes the joint frequency counts for two input sequences.
```
from collections import Counter, defaultdict
def pair_counts(sequences_A, sequences_B):
"""Return a dictionary keyed to each unique value in the first sequence list
that counts the number of occurrences of the corresponding value from the
second sequences list.
For example, if sequences_A is tags and sequences_B is the corresponding
words, then if 1244 sequences contain the word "time" tagged as a NOUN, then
you should return a dictionary such that pair_counts[NOUN][time] == 1244
"""
# TODO: Finish this function!
result = defaultdict(lambda: defaultdict(int))
for tag, word in zip(sequences_A, sequences_B):
    result[tag][word] += 1
return result
# Calculate C(t_i, w_i)
tags = [tag for word, tag in data.training_set.stream()]
words = [word for word, tag in data.training_set.stream()]
emission_counts = pair_counts(tags,words)
assert len(emission_counts) == 12,"Uh oh. There should be 12 tags in your dictionary."
assert max(emission_counts["NOUN"], key=emission_counts["NOUN"].get) == 'time', \
"Hmmm...'time' is expected to be the most common NOUN."
HTML('<div class="alert alert-block alert-success">Your emission counts look good!</div>')
```
### IMPLEMENTATION: Most Frequent Class Tagger
Use the `pair_counts()` function and the training dataset to find the most frequent class label for each word in the training data, and populate the `mfc_table` below. The table keys should be words, and the values should be the appropriate tag string.
The `MFCTagger` class is provided to mock the interface of Pomegranate HMM models so that they can be used interchangeably.
```
# Create a lookup table mfc_table where mfc_table[word] contains the tag label most frequently assigned to that word
from collections import namedtuple
FakeState = namedtuple("FakeState", "name")
class MFCTagger:
# NOTE: You should not need to modify this class or any of its methods
missing = FakeState(name="<MISSING>")
def __init__(self, table):
self.table = defaultdict(lambda: MFCTagger.missing)
self.table.update({word: FakeState(name=tag) for word, tag in table.items()})
def viterbi(self, seq):
"""This method simplifies predictions by matching the Pomegranate viterbi() interface"""
return 0., list(enumerate(["<start>"] + [self.table[w] for w in seq] + ["<end>"]))
# TODO: calculate the frequency of each tag being assigned to each word (hint: similar, but not
# the same as the emission probabilities) and use it to fill the mfc_table
word_counts = pair_counts(words, tags)
mfc_table = {word: max(tag_counts, key=tag_counts.get) for word, tag_counts in word_counts.items()}
# DO NOT MODIFY BELOW THIS LINE
mfc_model = MFCTagger(mfc_table) # Create a Most Frequent Class tagger instance
assert len(mfc_table) == len(data.training_set.vocab), ""
assert all(k in data.training_set.vocab for k in mfc_table.keys()), ""
assert sum(int(k not in mfc_table) for k in data.testing_set.vocab) == 5521, ""
HTML('<div class="alert alert-block alert-success">Your MFC tagger has all the correct words!</div>')
```
### Making Predictions with a Model
The helper functions provided below interface with Pomegranate network models & the mocked MFCTagger to take advantage of the [missing value](http://pomegranate.readthedocs.io/en/latest/nan.html) functionality in Pomegranate through a simple sequence decoding function. Run these functions, then run the next cell to see some of the predictions made by the MFC tagger.
```
def replace_unknown(sequence):
"""Return a copy of the input sequence where each unknown word is replaced
by the literal string value 'nan'. Pomegranate will ignore these values
during computation.
"""
return [w if w in data.training_set.vocab else 'nan' for w in sequence]
def simplify_decoding(X, model):
"""X should be a 1-D sequence of observations for the model to predict"""
_, state_path = model.viterbi(replace_unknown(X))
return [state[1].name for state in state_path[1:-1]] # do not show the start/end state predictions
```
### Example Decoding Sequences with MFC Tagger
```
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, mfc_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")
```
### Evaluating Model Accuracy
The function below will evaluate the accuracy of the MFC tagger on the collection of all sentences from a text corpus.
```
def accuracy(X, Y, model):
"""Calculate the prediction accuracy by using the model to decode each sequence
in the input X and comparing the prediction with the true labels in Y.
The X should be an array whose first dimension is the number of sentences to test,
and each element of the array should be an iterable of the words in the sequence.
The arrays X and Y should have the exact same shape.
X = [("See", "Spot", "run"), ("Run", "Spot", "run", "fast"), ...]
Y = [("VERB", "NOUN", "VERB"), ("VERB", "NOUN", "VERB", "ADV"), ...]
"""
correct = total_predictions = 0
for observations, actual_tags in zip(X, Y):
# The model.viterbi call in simplify_decoding will return None if the HMM
# raises an error (for example, if a test sentence contains a word that
# is out of vocabulary for the training set). Any exception counts the
# full sentence as an error (which makes this a conservative estimate).
try:
most_likely_tags = simplify_decoding(observations, model)
correct += sum(p == t for p, t in zip(most_likely_tags, actual_tags))
except Exception:
pass
total_predictions += len(observations)
return correct / total_predictions
```
#### Evaluate the accuracy of the MFC tagger
Run the next cell to evaluate the accuracy of the tagger on the training and test corpus.
```
mfc_training_acc = accuracy(data.training_set.X, data.training_set.Y, mfc_model)
print("training accuracy mfc_model: {:.2f}%".format(100 * mfc_training_acc))
mfc_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, mfc_model)
print("testing accuracy mfc_model: {:.2f}%".format(100 * mfc_testing_acc))
assert mfc_training_acc >= 0.955, "Uh oh. Your MFC accuracy on the training set doesn't look right."
assert mfc_testing_acc >= 0.925, "Uh oh. Your MFC accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your MFC tagger accuracy looks correct!</div>')
```
## Step 3: Build an HMM tagger
---
The HMM tagger has one hidden state for each possible tag, and is parameterized by two distributions: the emission probabilities, giving the conditional probability of observing a given **word** from each hidden state, and the transition probabilities, giving the conditional probability of moving between **tags** during the sequence.
We will also estimate the starting probability distribution (the probability of each **tag** being the first tag in a sequence), and the terminal probability distribution (the probability of each **tag** being the last tag in a sequence).
The maximum likelihood estimate of these distributions can be calculated from the frequency counts as described in the following sections where you'll implement functions to count the frequencies, and finally build the model. The HMM model will make predictions according to the formula:
$$t_i^n = \underset{t_i^n}{\mathrm{argmax}} \prod_{i=1}^n P(w_i|t_i) P(t_i|t_{i-1})$$
Refer to Speech & Language Processing [Chapter 10](https://web.stanford.edu/~jurafsky/slp3/10.pdf) for more information.
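To make the argmax above concrete, here is a minimal pure-Python Viterbi sketch over hand-made probability tables (the tables are illustrative toy values; the actual project delegates decoding to Pomegranate's `viterbi()`):

```python
def viterbi(words, tagset, start_p, trans_p, emit_p):
    """Return the tag sequence maximizing prod_i P(w_i|t_i) * P(t_i|t_{i-1})."""
    # best[t] = (probability of the best path ending in tag t, that path)
    best = {t: (start_p[t] * emit_p[t].get(words[0], 0.0), [t]) for t in tagset}
    for w in words[1:]:
        new_best = {}
        for t in tagset:
            prob, path = max(
                (best[prev][0] * trans_p[prev][t] * emit_p[t].get(w, 0.0), best[prev][1])
                for prev in tagset
            )
            new_best[t] = (prob, path + [t])
        best = new_best
    return max(best.values())[1]

tagset = {"NOUN", "VERB"}
start_p = {"NOUN": 0.6, "VERB": 0.4}
trans_p = {"NOUN": {"NOUN": 0.2, "VERB": 0.8}, "VERB": {"NOUN": 0.7, "VERB": 0.3}}
emit_p = {"NOUN": {"spot": 0.8, "run": 0.2}, "VERB": {"spot": 0.1, "run": 0.9}}
tags = viterbi(["spot", "run"], tagset, start_p, trans_p, emit_p)
```

Each step keeps, for every tag, only the highest-probability path ending in that tag, which is what makes the search tractable compared to enumerating all tag sequences.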
### IMPLEMENTATION: Unigram Counts
Complete the function below to estimate the occurrence frequency of each symbol over all of the input sequences. The unigram probabilities in our HMM model are estimated from the formula below, where N is the total number of samples in the input. (You only need to compute the counts for now.)
$$P(tag_1) = \frac{C(tag_1)}{N}$$
```
def unigram_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequence list that
counts the number of occurrences of the value in the sequences list. The sequences
collection should be a 2-dimensional array.
For example, if the tag NOUN appears 275558 times over all the input sequences,
then you should return a dictionary such that your_unigram_counts[NOUN] == 275558.
"""
# TODO: Finish this function!
return Counter(sequences)
# TODO: call unigram_counts with a list of tag sequences from the training set
tag_unigrams = unigram_counts(tags)
assert set(tag_unigrams.keys()) == data.training_set.tagset, \
"Uh oh. It looks like your tag counts doesn't include all the tags!"
assert min(tag_unigrams, key=tag_unigrams.get) == 'X', \
"Hmmm...'X' is expected to be the least common class"
assert max(tag_unigrams, key=tag_unigrams.get) == 'NOUN', \
"Hmmm...'NOUN' is expected to be the most common class"
HTML('<div class="alert alert-block alert-success">Your tag unigrams look good!</div>')
```
### IMPLEMENTATION: Bigram Counts
Complete the function below to estimate the co-occurrence frequency of each pair of symbols in each of the input sequences. These counts are used in the HMM model to estimate the bigram probability of two tags from the frequency counts according to the formula: $$P(tag_2|tag_1) = \frac{C(tag_1, tag_2)}{C(tag_1)}$$
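On a toy tag sequence (illustrative values), the maximum likelihood estimate is just a pair count divided by the count of the first tag:

```python
from collections import Counter

tag_seq = ["DET", "NOUN", "VERB", "DET", "VERB"]

unigrams = Counter(tag_seq)
bigrams = Counter(zip(tag_seq, tag_seq[1:]))  # overlapping consecutive pairs

# P(NOUN | DET) = C(DET, NOUN) / C(DET) = 1 / 2
p_noun_given_det = bigrams[("DET", "NOUN")] / unigrams["DET"]
```

Note that `zip(tag_seq, tag_seq[1:])` yields every *overlapping* consecutive pair, so a sequence of length n produces n-1 bigrams.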
```
def bigram_counts(sequences):
"""Return a dictionary keyed to each unique PAIR of values in the input sequences
list that counts the number of occurrences of pair in the sequences list. The input
should be a 2-dimensional array.
For example, if the pair of tags (NOUN, VERB) appear 61582 times, then you should
return a dictionary such that your_bigram_counts[(NOUN, VERB)] == 61582
"""
# TODO: Finish this function!
return Counter(sequences)
# TODO: call bigram_counts with a list of tag sequences from the training set
# Collect every overlapping pair of consecutive tags, without crossing sentence boundaries
tag_pairs = [(seq[i], seq[i + 1]) for seq in data.Y for i in range(len(seq) - 1)]
tag_bigrams = bigram_counts(tag_pairs)
assert len(tag_bigrams) == 144, \
"Uh oh. There should be 144 pairs of bigrams (12 tags x 12 tags)"
assert min(tag_bigrams, key=tag_bigrams.get) in [('X', 'NUM'), ('PRON', 'X')], \
"Hmmm...The least common bigram should be one of ('X', 'NUM') or ('PRON', 'X')."
assert max(tag_bigrams, key=tag_bigrams.get) in [('DET', 'NOUN')], \
"Hmmm...('DET', 'NOUN') is expected to be the most common bigram."
HTML('<div class="alert alert-block alert-success">Your tag bigrams look good!</div>')
```
### IMPLEMENTATION: Sequence Starting Counts
Complete the code below to estimate the bigram probabilities of a sequence starting with each tag.
```
def starting_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the beginning of
a sequence.
For example, if 8093 sequences start with NOUN, then you should return a
dictionary such that your_starting_counts[NOUN] == 8093
"""
# TODO: Finish this function!
return Counter(sequences)
# TODO: Calculate the count of each tag starting a sequence
starting_tag = [seq[0] for seq in data.Y]
tag_starts = starting_counts(starting_tag)
assert len(tag_starts) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_starts, key=tag_starts.get) == 'X', "Hmmm...'X' is expected to be the least common starting bigram."
assert max(tag_starts, key=tag_starts.get) == 'DET', "Hmmm...'DET' is expected to be the most common starting bigram."
HTML('<div class="alert alert-block alert-success">Your starting tag counts look good!</div>')
```
### IMPLEMENTATION: Sequence Ending Counts
Complete the function below to estimate the bigram probabilities of a sequence ending with each tag.
```
def ending_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the end of
a sequence.
For example, if 18 sequences end with DET, then you should return a
dictionary such that your_ending_counts[DET] == 18
"""
# TODO: Finish this function!
return Counter(sequences)
# TODO: Calculate the count of each tag ending a sequence
ending_tag = [seq[-1] for seq in data.Y]
tag_ends = ending_counts(ending_tag)
assert len(tag_ends) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_ends, key=tag_ends.get) in ['X', 'CONJ'], "Hmmm...'X' or 'CONJ' should be the least common ending bigram."
assert max(tag_ends, key=tag_ends.get) == '.', "Hmmm...'.' is expected to be the most common ending bigram."
HTML('<div class="alert alert-block alert-success">Your ending tag counts look good!</div>')
```
### IMPLEMENTATION: Basic HMM Tagger
Use the tag unigrams and bigrams calculated above to construct a hidden Markov tagger.
- Add one state per tag
- The emission distribution at each state should be estimated with the formula: $P(w|t) = \frac{C(t, w)}{C(t)}$
- Add an edge from the starting state `basic_model.start` to each tag
- The transition probability should be estimated with the formula: $P(t|start) = \frac{C(start, t)}{C(start)}$
- Add an edge from each tag to the end state `basic_model.end`
- The transition probability should be estimated with the formula: $P(end|t) = \frac{C(t, end)}{C(t)}$
- Add an edge between _every_ pair of tags
- The transition probability should be estimated with the formula: $P(t_2|t_1) = \frac{C(t_1, t_2)}{C(t_1)}$
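All four count tables that the edges above rely on can be seen at a glance on a toy pair of tag sequences (toy data, not Brown counts):

```python
from collections import Counter

sentences = [("DET", "NOUN", "VERB"), ("NOUN", "VERB")]

tag_unigrams = Counter(t for s in sentences for t in s)
tag_bigrams = Counter((s[i], s[i + 1]) for s in sentences for i in range(len(s) - 1))
tag_starts = Counter(s[0] for s in sentences)
tag_ends = Counter(s[-1] for s in sentences)

# e.g. the end-edge weight for VERB: P(end|VERB) = C(VERB, end) / C(VERB)
p_end_given_verb = tag_ends["VERB"] / tag_unigrams["VERB"]
```

In this toy corpus every sentence ends in VERB, so the VERB-to-end edge gets probability 1.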
```
basic_model = HiddenMarkovModel(name="base-hmm-tagger")
# TODO: create states with emission probability distributions P(word | tag) and add to the model
tags = [tag for word, tag in data.stream()]
words = [word for word, tag in data.stream()]
#Calling functions
tags_count=unigram_counts(tags)
tag_words_count=pair_counts(tags,words)
starting_tag = [i[0] for i in data.Y]
ending_tag = [i[-1] for i in data.Y]
starting_tag_counts = starting_counts(starting_tag)
ending_tag_counts = ending_counts(ending_tag)
states = []
for tag, words_dict in tag_words_count.items():
total = float(sum(words_dict.values()))
distribution = {word: count/total for word, count in words_dict.items()}
tag_emissions = DiscreteDistribution(distribution)
tag_state = State(tag_emissions, name=tag)
states.append(tag_state)
# (Hint: you may need to loop & create/add new states)
basic_model.add_states(*states)
# P(tag | start), estimated from sentence-initial counts
start_prob = {tag: starting_tag_counts[tag] / tags_count[tag] for tag in tags_count}
for tag_state in states :
basic_model.add_transition(basic_model.start,tag_state,start_prob[tag_state.name])
# P(end | tag), estimated from sentence-final counts
end_prob = {tag: ending_tag_counts[tag] / tags_count[tag] for tag in tags_count}
for tag_state in states :
basic_model.add_transition(tag_state,basic_model.end,end_prob[tag_state.name])
# TODO: add edges between states for the observed transition frequencies P(tag_i | tag_i-1)
# (Hint: you may need to loop & add transitions
# basic_model.add_transition()
transition_prob_pair = {pair: count / tags_count[pair[0]] for pair, count in tag_bigrams.items()}
for tag_state in states :
for next_tag_state in states :
basic_model.add_transition(tag_state,next_tag_state,transition_prob_pair[(tag_state.name,next_tag_state.name)])
# NOTE: YOU SHOULD NOT NEED TO MODIFY ANYTHING BELOW THIS LINE
# finalize the model
basic_model.bake()
assert all(tag in set(s.name for s in basic_model.states) for tag in data.training_set.tagset), \
"Every state in your network should use the name of the associated tag, which must be one of the training set tags."
assert basic_model.edge_count() == 168, \
("Your network should have an edge from the start node to each state, one edge between every " +
"pair of tags (states), and an edge from each state to the end node.")
HTML('<div class="alert alert-block alert-success">Your HMM network topology looks good!</div>')
hmm_training_acc = accuracy(data.training_set.X, data.training_set.Y, basic_model)
print("training accuracy basic hmm model: {:.2f}%".format(100 * hmm_training_acc))
hmm_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, basic_model)
print("testing accuracy basic hmm model: {:.2f}%".format(100 * hmm_testing_acc))
assert hmm_training_acc > 0.97, "Uh oh. Your HMM accuracy on the training set doesn't look right."
assert hmm_testing_acc > 0.955, "Uh oh. Your HMM accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your HMM tagger accuracy looks correct! Congratulations, you\'ve finished the project.</div>')
```
### Example Decoding Sequences with the HMM Tagger
```
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, basic_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")
```
## Finishing the project
---
<div class="alert alert-block alert-info">
**Note:** **SAVE YOUR NOTEBOOK**, then run the next cell to generate an HTML copy. You will zip & submit both this file and the HTML copy for review.
</div>
```
!!jupyter nbconvert *.ipynb
```
## Step 4: [Optional] Improving model performance
---
There are additional enhancements that can be incorporated into your tagger to improve performance on larger tagsets, where the data sparsity problem is more significant. The data sparsity problem arises because the same amount of data split over more tags means there will be fewer samples for each tag, and more tags will have zero observed occurrences in the data. The techniques in this section are optional.
- [Laplace Smoothing](https://en.wikipedia.org/wiki/Additive_smoothing) (pseudocounts)
Laplace smoothing is a technique where you add a small, non-zero value to all observed counts to offset for unobserved values.
- Backoff Smoothing
Another smoothing technique is to interpolate between n-grams for missing data. This method is more effective than Laplace smoothing at combatting the data sparsity problem. Refer to chapters 4, 9, and 10 of the [Speech & Language Processing](https://web.stanford.edu/~jurafsky/slp3/) book for more information.
- Extending to Trigrams
HMM taggers have achieved better than 96% accuracy on this dataset with the full Penn treebank tagset using an architecture described in [this](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf) paper. Altering your HMM to achieve the same performance would require implementing deleted interpolation (described in the paper), incorporating trigram probabilities in your frequency tables, and re-implementing the Viterbi algorithm to consider three consecutive states instead of two.
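A minimal sketch of the Laplace idea applied to one emission distribution (toy counts; `k` is the pseudocount, and the function names are illustrative, not part of the project code):

```python
def laplace_smooth(counts, vocab_size, k=1):
    """Additive smoothing: every event gets k pseudo-observations."""
    total = sum(counts.values()) + k * vocab_size
    return lambda word: (counts.get(word, 0) + k) / total

counts = {"time": 3, "flies": 1}          # observed C(t, w) for one tag
p = laplace_smooth(counts, vocab_size=4)  # vocabulary also has 2 unseen words

# An unseen word now gets a small non-zero probability instead of 0.
```

With these toy numbers, `p("time")` is (3+1)/8 and any unseen word gets 1/8, and the smoothed probabilities over the vocabulary of 4 words still sum to 1.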
### Obtain the Brown Corpus with a Larger Tagset
Run the code below to download a copy of the Brown corpus with the full NLTK tagset. You will need to research the available tagset information in the NLTK docs and determine the best way to extract the subset of NLTK tags you want to explore. If you write the data to a file following the format specified in Step 1, then you can reload it using all of the code above for comparison.
Refer to [Chapter 5](http://www.nltk.org/book/ch05.html) of the NLTK book for more information on the available tagsets.
```
import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import brown
nltk.download('brown')
training_corpus = nltk.corpus.brown
training_corpus.tagged_sents()[0]
```
# Deep Neural Network with 5 Input Features
In this notebook we will train a deep neural network using 5 input features to perform binary classification on our dataset.
## Setup
We first need to import the libraries and frameworks to help us create and train our model.
- Numpy will allow us to manipulate our input data
- Matplotlib gives us easy graphs to visualize performance
- Sklearn helps us with data normalization and shuffling
- Keras is our deep learning frameworks which makes it easy to create and train our model
```
#!bin/env/python3
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import Normalizer
from sklearn.utils import shuffle
from tensorflow.keras import models
from tensorflow.keras import layers
```
## Load Data
Here we load the NumPy arrays that we created previously. Let's check the dimensions to make sure they are correctly formatted.
```
X = np.load("../db/x3.npy")
Y = np.load("../db/y3.npy")
print("X: " + str(X.shape))
print("Y: " + str(Y.shape))
```
## Data Preparation
The neural network will perform better during training if the data is normalized. We also want to shuffle the inputs to avoid training our model on a skewed dataset.
```
transformer = Normalizer().fit(X)
X = transformer.transform(X) # normalizes data according to columns
X, Y = shuffle(X, Y, random_state=0) # shuffle the samples
```
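For reference, `Normalizer` scales each *sample* (each row) to unit L2 norm; the same transform by hand in NumPy, on illustrative data:

```python
import numpy as np

X = np.array([[3.0, 4.0],
              [1.0, 0.0]])

norms = np.linalg.norm(X, axis=1, keepdims=True)  # per-row L2 norm
X_unit = X / norms

# Each row now has length 1, e.g. [3, 4] -> [0.6, 0.8]
```

This is sample-wise normalization, distinct from the feature-wise (column) standardization done by scalers like `StandardScaler`.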
## Training - Test Split
Ideally we would split our dataset into a training, validation and test set. For this example we will only use a training and validation set. The training set will have 3000 samples and the validation set will contain the remaining samples.
```
X_train = X[:3000]
Y_train = Y[:3000]
X_test = X[3000:]
Y_test = Y[3000:]
# 3000 training samples
print("Input training tensor: " + str(X_train.shape))
print("Label training tensor: " + str(Y_train.shape) + "\n")
# 559 test/validation samples
print("Input validation tensor: " + str(X_test.shape))
print("Label validation tensor: " + str(Y_test.shape))
```
## Defining our model
Here we finally create our model, which in this case will be a fully connected deep neural network with two hidden layers and a dropout layer.
We also choose an optimizer (RMSprop), a loss function (binary crossentropy) and our metric for evaluation (accuracy).
We can also take a look at the size of our model.
```
nn = models.Sequential()
nn.add(layers.Dense(200, activation='relu', input_shape=(30,)))
nn.add(layers.Dense(100, activation='relu'))
nn.add(layers.Dropout(0.5))
nn.add(layers.Dense(1, activation='sigmoid'))
nn.compile(
optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy']
)
```
# Training
Here we actually train the model we defined above. We specify the batch size (30) and the number of epochs. We also provide the validation set so that after each epoch we can evaluate how the model performs on data it has never seen. This is important so that we can identify when the model begins to overfit to the training data.
```
history = nn.fit(
X_train,
Y_train,
epochs=100,
batch_size=30,
validation_data=(X_test,Y_test)
)
history_dict = history.history
nn.summary()
print("Training accuracy: " + str(history_dict['accuracy'][-1]))
print("Training loss: " + str(history_dict['loss'][-1]) + "\n")
print("Validation accuracy: " + str(history_dict['val_accuracy'][-1]))
print("Validation loss: " + str(history_dict['val_loss'][-1]))
```
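One simple way to spot the onset of overfitting is to find the epoch where validation loss bottoms out (Keras can automate this with the `EarlyStopping` callback). A sketch on a made-up history dict shaped like `history.history`:

```
# Made-up loss curves shaped like history.history above
history_demo = {
    "loss":     [0.60, 0.45, 0.35, 0.28, 0.22],
    "val_loss": [0.58, 0.47, 0.42, 0.44, 0.49],
}

# Epoch (1-based) with the lowest validation loss; training past it overfits
best_epoch = min(range(len(history_demo["val_loss"])),
                 key=history_demo["val_loss"].__getitem__) + 1
print(best_epoch)  # 3
```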
## Evaluating the Model
After training we get ~75% accuracy on the validation data. Looking at the loss, we can see that the model is indeed learning, and it does not seem to be overfitting too much, as the training and validation accuracies remain fairly consistent throughout training.
```
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epochs = range(1, len(loss_values) + 1)
plt.plot(epochs, loss_values, 'bo', label='Training loss')
plt.plot(epochs, val_loss_values, 'b', label='Validation loss')
plt.title('Losses')
plt.xlabel('Epoch')
plt.ylabel('Loss Evaluation')
plt.legend()
plt.show()
plt.clf()
acc_values = history_dict['accuracy']
val_acc_values = history_dict['val_accuracy']
epochs = range(1, len(acc_values) + 1)
plt.plot(epochs, acc_values, 'bo', label='Training accuracy')
plt.plot(epochs, val_acc_values, 'b', label='Validation accuracy')
plt.title('Accuracy Evaluation')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
plt.clf()
```
## The last step
If we were happy with our model tuning, the last step would be to save the model and evaluate it on the test set.
```
#!/usr/bin/python
# DS4A Project
# Group 84
# using node/edge info to create network graph
# and do social network analysis
from os import path
import pandas as pd
from sklearn.linear_model import LogisticRegression
nominee_count_degree_data_path = '../data/nominee_degree_counts_data.csv'
def load_dataset(filepath):
df = pd.read_csv(filepath)
return df
df = load_dataset(nominee_count_degree_data_path)
print(df.head(2))
df.loc[(df.winner == False),'winner']=0
df.loc[(df.winner == True),'winner']=1
df.loc[(df.gender == 'F'),'gender']=0
df.loc[(df.gender == 'M'),'gender']=1
df.winner = df.winner.astype('int64', copy=False)
df.gender = df.gender.astype('int64', copy=False)
print(df.info())
print(df.head(2))
X = df[['gender','ceremonyAge','num_times_nominated','degree']]
y = df['winner']
from sklearn.model_selection import train_test_split
from sklearn import metrics
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(random_state=0).fit(X_train, y_train)
clf.predict(X_test)
print('Accuracy of logistic regression classifier on test set: {:.2f}'.format(clf.score(X_test, y_test)))
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
logit_roc_auc = roc_auc_score(y_test, clf.predict(X_test))
fpr, tpr, thresholds = roc_curve(y_test, clf.predict_proba(X_test)[:,1])
plt.figure()
plt.plot(fpr, tpr, label='Logistic Regression (area = %0.2f)' % logit_roc_auc)
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.savefig('Log_ROC')
plt.show()
y_pred = clf.predict(X_test)
print('Accuracy of logistic regression classifier on test set: {:.2f}'.format(clf.score(X_test, y_test)))
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)  # avoid shadowing the imported function
print(cm)
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
import statsmodels.api as sm
# statsmodels' Logit does not add an intercept automatically, so add one explicitly
logit_model = sm.Logit(y_train, sm.add_constant(X_train))
result = logit_model.fit()
print(result.summary2())
logit_model = sm.Logit(y, sm.add_constant(X))
result = logit_model.fit()
print(result.summary2())
```
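A single train/test split like the one above gives a fairly noisy accuracy estimate; k-fold cross-validation averages the score over several splits. A sketch on synthetic data standing in for the four nominee features:

```
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 4-feature nominee matrix and binary `winner` target
X_demo, y_demo = make_classification(n_samples=200, n_features=4,
                                     n_informative=3, n_redundant=0,
                                     random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X_demo, y_demo, cv=5)
print(scores.mean())
```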
<a href="https://colab.research.google.com/github/MIT-LCP/sccm-datathon/blob/master/04_timeseries.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# eICU Collaborative Research Database
# Notebook 4: Timeseries for a single patient
This notebook explores timeseries data for a single patient.
## Load libraries and connect to the database
```
# Import libraries
import numpy as np
import os
import pandas as pd
import matplotlib.pyplot as plt
# Make pandas dataframes prettier
from IPython.display import display, HTML
# Access data using Google BigQuery.
from google.colab import auth
from google.cloud import bigquery
# authenticate
auth.authenticate_user()
# Set up environment variables
project_id='sccm-datathon'
os.environ["GOOGLE_CLOUD_PROJECT"]=project_id
```
## Selecting a single patient stay
### The patient table
The patient table includes general information about the patient admissions (for example, demographics, admission and discharge details). See: http://eicu-crd.mit.edu/eicutables/patient/
```
# Display the patient table
%%bigquery patient
SELECT *
FROM `physionet-data.eicu_crd_demo.patient`
patient.head()
```
### The `vitalperiodic` table
The `vitalperiodic` table comprises data that is consistently interfaced from bedside vital signs monitors into eCareManager. Data are generally interfaced as 1 minute averages, and archived into the `vitalperiodic` table as 5 minute median values. For more detail, see: http://eicu-crd.mit.edu/eicutables/vitalPeriodic/
```
# Get periodic vital signs for a single patient stay
%%bigquery vitalperiodic
SELECT *
FROM `physionet-data.eicu_crd_demo.vitalperiodic`
WHERE patientunitstayid = 210014
vitalperiodic.head()
# sort the values by the observationoffset (time in minutes from ICU admission)
vitalperiodic = vitalperiodic.sort_values(by='observationoffset')
vitalperiodic.head()
# subselect the variable columns
columns = ['observationoffset','temperature','sao2','heartrate','respiration',
'cvp','etco2','systemicsystolic','systemicdiastolic','systemicmean',
'pasystolic','padiastolic','pamean','icp']
vitalperiodic = vitalperiodic[columns].set_index('observationoffset')
vitalperiodic.head()
# plot the data
plt.rcParams['figure.figsize'] = [12,8]
title = 'Vital signs (periodic) for patientunitstayid = 210014 \n'
ax = vitalperiodic.plot(title=title, marker='o')
ax.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))
ax.set_xlabel("Minutes after admission to the ICU")
ax.set_ylabel("Absolute value")
```
## Questions
- Which variables are available for this patient?
- What is the peak heart rate during the period?
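For instance, the second question can be answered with a one-liner once `vitalperiodic` is loaded; sketched here on a tiny made-up frame, since the real one requires BigQuery access:

```
import pandas as pd

# Tiny stand-in for the vitalperiodic frame queried above
vp_demo = pd.DataFrame({
    "observationoffset": [0, 5, 10, 15],
    "heartrate": [82, 97, 91, 88],
})
peak_hr = vp_demo["heartrate"].max()
print(peak_hr)  # 97
```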
### The vitalaperiodic table
The vitalAperiodic table provides invasive vital sign data that is recorded at irregular intervals. See: http://eicu-crd.mit.edu/eicutables/vitalAperiodic/
```
# Get aperiodic vital signs
%%bigquery vitalaperiodic
SELECT *
FROM `physionet-data.eicu_crd_demo.vitalaperiodic`
WHERE patientunitstayid = 210014
# display the first few rows of the dataframe
vitalaperiodic.head()
# sort the values by the observationoffset (time in minutes from ICU admission)
vitalaperiodic = vitalaperiodic.sort_values(by='observationoffset')
vitalaperiodic.head()
# subselect the variable columns
columns = ['observationoffset','noninvasivesystolic','noninvasivediastolic',
'noninvasivemean','paop','cardiacoutput','cardiacinput','svr',
'svri','pvr','pvri']
vitalaperiodic = vitalaperiodic[columns].set_index('observationoffset')
vitalaperiodic.head()
# plot the data
plt.rcParams['figure.figsize'] = [12,8]
title = 'Vital signs (aperiodic) for patientunitstayid = 210014 \n'
ax = vitalaperiodic.plot(title=title, marker='o')
ax.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))
ax.set_xlabel("Minutes after admission to the ICU")
ax.set_ylabel("Absolute value")
```
## Questions
- What do the non-invasive variables measure?
- How do you think the mean is calculated?
### The lab table
```
# Get labs
%%bigquery lab
SELECT *
FROM `physionet-data.eicu_crd_demo.lab`
WHERE patientunitstayid = 210014
lab.head()
# sort the values by the offset time (time in minutes from ICU admission)
lab = lab.sort_values(by='labresultoffset')
lab.head()
lab = lab.set_index('labresultoffset')
columns = ['labname','labresult','labmeasurenamesystem']
lab = lab[columns]
lab.head()
# list the distinct labnames
lab['labname'].unique()
# pivot the lab table to put variables into columns
lab = lab.pivot(columns='labname', values='labresult')
lab.head()
# plot laboratory tests of interest
labs_to_plot = ['creatinine','pH','BUN', 'glucose', 'potassium']
lab[labs_to_plot].head()
# plot the data
plt.rcParams['figure.figsize'] = [12,8]
title = 'Laboratory test results for patientunitstayid = 210014 \n'
ax = lab[labs_to_plot].plot(title=title, marker='o',ms=10, lw=0)
ax.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))
ax.set_xlabel("Minutes after admission to the ICU")
ax.set_ylabel("Absolute value")
```
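The `pivot` call above turns the long lab table (one row per measurement) into a wide one (one column per lab). A minimal sketch of that reshape on made-up values:

```
import pandas as pd

# Long-format stand-in for the lab frame above: one row per measurement
lab_demo = pd.DataFrame({
    "labresultoffset": [10, 10, 60],
    "labname": ["creatinine", "pH", "creatinine"],
    "labresult": [1.1, 7.35, 1.3],
}).set_index("labresultoffset")

# One column per labname, indexed by time offset; missing labs become NaN
wide = lab_demo.pivot(columns="labname", values="labresult")
print(wide)
```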
```
import pandas as pd
from IPython.display import display, HTML
display(HTML("<style>.container {width:90% !important;}</style>"))
# Don't wrap repr(DataFrame) across additional lines
pd.set_option("display.expand_frame_repr", False)
# Set max rows displayed in output to 25
pd.set_option("display.max_rows", 25)
%matplotlib inline
# ASK WIKIPEDIA FOR LIST OF COMPANIES
# pip install sparqlwrapper
# https://rdflib.github.io/sparqlwrapper/
import sys
from SPARQLWrapper import SPARQLWrapper, JSON
endpoint_url = "https://query.wikidata.org/sparql"
query = """#List of `instances of` "business enterprise"
SELECT ?com ?comLabel ?inception ?industry ?industryLabel ?coordinate ?country ?countryLabel WHERE {
?com (wdt:P31/(wdt:P279*)) wd:Q4830453;
wdt:P625 ?coordinate.
SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
OPTIONAL { ?com wdt:P571 ?inception. }
OPTIONAL { ?com wdt:P452 ?industry. }
OPTIONAL { ?com wdt:P17 ?country. }
}"""
def get_results(endpoint_url, query):
user_agent = "WDQS-example Python/%s.%s" % (sys.version_info[0], sys.version_info[1])
# TODO adjust user agent; see https://w.wiki/CX6
sparql = SPARQLWrapper(endpoint_url, agent=user_agent)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
return sparql.query().convert()
results = get_results(endpoint_url, query)
for result in results["results"]["bindings"]:
print(result)
# PUT THE DATA IN THE RIGHT FORMAT into pandas
import os
import json
# Get the dataset, and transform string into floats for plotting
dataFrame = pd.json_normalize(results["results"]["bindings"]) #in a serialized json-based format
df = pd.DataFrame(dataFrame) # into pandas
# Wikidata coordinates are WKT "Point(lon lat)" strings, so longitude comes first
p = r'(?P<longitude>-?\d+\.\d+).*?(?P<latitude>-?\d+\.\d+)'
df[['longitude', 'latitude']] = df['coordinate.value'].str.extract(p, expand=True)
df['latitude'] = pd.to_numeric(df['latitude'], downcast='float')
df['longitude'] = pd.to_numeric(df['longitude'], downcast='float')
data = pd.DataFrame(df, columns = ['latitude','longitude','comLabel.value','coordinate.value', 'inception.value', 'industryLabel.value', 'com.value', 'industry.value', 'country.value','countryLabel.value'])
data=data.dropna(subset=['latitude', 'longitude'])
data.rename(columns={'comLabel.value':'company'}, inplace=True)
data.rename(columns={'coordinate.value':'coordinate'}, inplace=True)
data.rename(columns={'inception.value':'inception'}, inplace=True)
data.rename(columns={'industryLabel.value':'industry'}, inplace=True)
data.rename(columns={'com.value':'id'}, inplace=True)
data.rename(columns={'industry.value':'id_industry'}, inplace=True)
data.rename(columns={'country.value':'id_country'}, inplace=True)
data.rename(columns={'countryLabel.value':'country'}, inplace=True)
data = pd.DataFrame (data) #cluster maps works ONLY with dataframe
print(data.shape)
print(data.sample(5))
print(data.info())
#DATA index cleaning
from sqlalchemy import create_engine
from pandas.io import sql
import re
IDs=[]
for name in data['id']:
ID_n = name.rsplit('/', 1)[1]
ID = re.findall('\d+', ID_n)
#print(ID[0], ID_n)
IDs.append(ID[0])
data ['ID'] = IDs
print (data['ID'].describe())
data['ID']= data['ID'].astype(int)
#print (data['ID'].describe())
data.rename(columns={'id':'URL'}, inplace=True)
data['company_foundation'] = data['inception'].str.extract(r'(\d{4})')
data['company_foundation'] = pd.to_numeric(data['company_foundation'])
data = data.set_index(['ID'])
print(data.columns)
#GET company-industry relationship data
industries = data.dropna(subset=['id_industry']).copy()  # copy to avoid SettingWithCopyWarning
#print(industries)
industries.groupby('id_industry')[['company', 'country']].apply(lambda x: x.values.tolist())
print(industries.info())
industries = pd.DataFrame (industries)
print(industries.sample(3))
IDs=[]
for name in industries['id_industry']:
ID_n = name.rsplit('/', 1)[1]
ID = re.findall('\d+', ID_n)
# print(ID, ID_n)
IDs.append(ID[0])
industries ['ID_industry'] = IDs
industries['ID_industry']= industries['ID_industry'].astype(int)
industries.set_index([industries.index, 'ID_industry'], inplace=True)
industries['id_wikipedia']=industries['id_industry']
industries.drop('id_industry', axis=1, inplace=True)
industries = pd.DataFrame(industries)
print(industries.info())
print(industries.sample(3))
import plotly.express as px
import plotly.io as pio
px.defaults.template = "ggplot2"
px.defaults.color_continuous_scale = px.colors.sequential.Blackbody
#px.defaults.width = 600
#px.defaults.height = 400
#data = data.dropna(subset=['country'])
fig = px.scatter(data.dropna(subset=['country']), x="latitude", y="longitude", color="country")# width=400)
fig.show()
#break born into quarters and use it for the x axis; y has number of companies;
#fig = px.density_heatmap(countries_industries, x="country", y="companies", template="seaborn")
fig = px.density_heatmap(data, x="latitude", y="longitude")#, template="seaborn")
fig.show()
#COMPANIES IN COUNTRIES
fig = px.histogram(data.dropna(subset=['country', 'industry']), x="country",
title='COMPANIES IN COUNTRIES',
# labels={'industry':'industries'}, # can specify one label per df column
opacity=0.8,
log_y=False, # represent bars with log scale
# color_discrete_sequence=['indianred'], # color of histogram bars
color='industry',
# marginal="rug", # can be `box`, `violin`
# hover_data="companies"
barmode='overlay'
)
fig.show()
#INDUSTRIES IN COUNTRIES
fig = px.histogram(data.dropna(subset=['industry', 'country']), x="industry",
title='INDUSTRIES IN COUNTRIES',
# labels={'industry':'industries'}, # can specify one label per df column
opacity=0.8,
log_y=False, # represent bars with log scale
# color_discrete_sequence=['indianred'], # color of histogram bars
color='country',
# marginal="rug", # can be `box`, `violin`
# hover_data="companies"
barmode='overlay'
)
fig.show()
#THIS IS THE 2D MAP I COULD FIND, :)
import plotly.graph_objects as go
data['text'] = 'COMPANY: '+ data['company'] + '<br>COUNTRY: ' + data['country'] + '<br>FOUNDATION: ' + data['company_foundation'].astype(str)
fig = go.Figure(data=go.Scattergeo(
locationmode = 'ISO-3',
lon = data['longitude'],
lat = data['latitude'],
text = data['text'],
mode = 'markers',
marker = dict(
size = 3,
opacity = 0.8,
reversescale = True,
autocolorscale = False,
symbol = 'square',
line = dict(width=1, color='rgb(102, 102, 102)'),
# colorgroup='country'
# colorscale = 'Blues',
# cmin = 0,
# color = df['cnt'],
# cmax = df['cnt'].max(),
# colorbar_title="Incoming flights<br>February 2011"
)))
fig.update_layout(
title = 'Companies of the World<br>',
geo = dict(
scope='world',
# projection_type='albers usa',
showland = True,
landcolor = "rgb(250, 250, 250)",
subunitcolor = "rgb(217, 217, 217)",
countrycolor = "rgb(217, 217, 217)",
countrywidth = 0.5,
subunitwidth = 0.5
),
)
fig.show()
print(data.info())
import tkinter as tk
from tkinter import filedialog
from pandas import DataFrame
root= tk.Tk()
canvas1 = tk.Canvas(root, width = 300, height = 300, bg = 'lightsteelblue2', relief = 'raised')
canvas1.pack()
def exportCSV ():
global df
export_file_path = filedialog.asksaveasfilename(defaultextension='.csv')
data.to_csv (export_file_path, index = True, header=True)
saveAsButton_CSV = tk.Button(text='Export CSV', command=exportCSV, bg='green', fg='white', font=('helvetica', 12, 'bold'))
canvas1.create_window(150, 150, window=saveAsButton_CSV)
root.mainloop()
```
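The coordinate-parsing step in the notebook above relies on Wikidata returning WKT literals of the form `Point(lon lat)` — note that longitude comes first. A self-contained sketch of that extraction on two made-up values:

```
import pandas as pd

# Wikidata P625 values are WKT strings with longitude first
coords = pd.Series(["Point(2.3522 48.8566)", "Point(-0.1276 51.5072)"])

p = r"Point\((?P<longitude>-?\d+\.\d+) (?P<latitude>-?\d+\.\d+)\)"
parsed = coords.str.extract(p).astype(float)
print(parsed)
```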