# Convolutional Neural Networks
```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, datasets, Sequential
```
### 1. Convolution with Custom Weights
**In TensorFlow:**
- $C_{in}$ = number of input channels = number of channels in each kernel
- $C_{out}$ = number of kernels = number of output channels
$$X:[b, h, w, C_{in}],\quad W:[k, k, C_{in}, C_{out}]$$
$$\Downarrow$$
$$O:[b, h', w', C_{out}]$$
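These shape rules can be checked with a small helper. This is only a sketch of the arithmetic; `conv_out_size` is not part of TensorFlow:

```python
import math

def conv_out_size(n, k, s, pad=0, same=False):
    # output length along one spatial dimension
    if same:
        # 'SAME' pads just enough that the output is ceil(n / s)
        return math.ceil(n / s)
    return (n + 2 * pad - k) // s + 1

# 5x5 input, 3x3 kernel: no padding -> 3, pad 1 -> 5, stride-3 'SAME' -> 2
assert conv_out_size(5, 3, 1, pad=0) == 3
assert conv_out_size(5, 3, 1, pad=1) == 5
assert conv_out_size(5, 3, 3, same=True) == 2
```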
```
x = tf.random.normal([2, 5, 5, 3])  # input: batch of 2, 5x5 spatial size, 3 channels
w = tf.random.normal([3, 3, 3, 4])  # 4 kernels of size 3x3
# stride 1, no padding
# the padding argument has the format: padding=[[0,0],[top,bottom],[left,right],[0,0]]
out = tf.nn.conv2d(x, w, strides=1, padding=[[0, 0], [0, 0], [0, 0], [0, 0]])
out.shape
# pad 1 on every side
out = tf.nn.conv2d(x, w, strides=1, padding=[[0, 0], [1, 1], [1, 1], [0, 0]])
out.shape
# stride 1, padding chosen so the output is the same size as the input
# note: padding='SAME' preserves the size only when strides=1
out = tf.nn.conv2d(x, w, strides=1, padding='SAME')
out.shape
# when s > 1, padding='SAME' shrinks the output height/width by a factor of 1/s
# height/width are first padded to 6, the smallest multiple of 3, then reduced 3x, giving 2x2
out = tf.nn.conv2d(x, w, strides=3, padding='SAME')
out.shape
# tf.nn.conv2d does not add a bias vector, so the bias must be added manually
b = tf.zeros([4])
out = out + b
```
### 2. The Convolution Layer Class
- In `TensorFlow`, `API` naming follows a convention: names starting with an uppercase letter are generally classes, while all-lowercase names are generally functions
```
# when the kernel is square
# create a conv layer with 4 kernels of size 3x3, stride 1, padding 'SAME'
layer = layers.Conv2D(4, kernel_size=3, strides=1, padding='SAME')
# when kernel height and width differ
layer = layers.Conv2D(4, kernel_size=(3, 4), strides=(1, 2), padding="SAME")
layer = layers.Conv2D(4, kernel_size=3, strides=1, padding='SAME')
out = layer(x)  # forward pass
out.shape
# trainable_variables returns the list [W, b]
# layer.trainable_variables
# layer.kernel  # same as layer.weights
# layer.bias
```
### 3. LeNet-5 in Practice
```
(X_train, y_train), (X_test, y_test) = datasets.mnist.load_data()
X_train = tf.convert_to_tensor(X_train, dtype=tf.float32)
y_train = tf.convert_to_tensor(y_train, dtype=tf.int32)
X_test = tf.convert_to_tensor(X_test, dtype=tf.float32)
y_test = tf.convert_to_tensor(y_test, dtype=tf.int32)
network = Sequential([
    layers.Conv2D(6, kernel_size=3, strides=1),   # 6 kernels of size 3x3
    layers.MaxPooling2D(pool_size=2, strides=2),  # pooling layer that halves height and width
    layers.ReLU(),
    layers.Conv2D(16, kernel_size=3, strides=1),  # second conv layer, 16 3x3 kernels
    layers.MaxPooling2D(pool_size=2, strides=2),  # pooling layer that halves height and width
    layers.ReLU(),
    layers.Flatten(),  # flatten for the fully connected layers
    layers.Dense(120, activation='relu'),
    layers.Dense(84, activation='relu'),
    layers.Dense(10)
])
# build the network once by providing the input shape; 4 is an arbitrary batch size
network.build(input_shape=(4, 28, 28, 1))
network.summary()
from tensorflow.keras import losses, optimizers
# insert a channel dimension => [b, 28, 28, 1]
X_train = tf.expand_dims(X_train, axis=3)
X_train.shape
# from_logits=True folds the softmax activation into the loss function
# instantiate the loss class once, then call the instance directly
criteon = losses.CategoricalCrossentropy(from_logits=True)
optimizer = optimizers.SGD(learning_rate=0.01)
for epoch in range(5):
    # open a gradient-recording context
    with tf.GradientTape() as tape:
        # forward pass, predicted distribution over 10 classes, [b, 28, 28, 1] => [b, 10]
        out = network(X_train)
        # one-hot encode the labels, [b] => [b, 10]
        y_train_onehot = tf.one_hot(y_train, depth=10)
        # cross-entropy loss, a scalar
        loss = criteon(y_train_onehot, out)
    print("losses: ", loss)
    # compute gradients automatically
    grads = tape.gradient(loss, network.trainable_variables)
    # apply the parameter update
    optimizer.apply_gradients(zip(grads, network.trainable_variables))
```
**Testing**
```
X_test = tf.expand_dims(X_test, axis=3)
X_test.shape
y_predict = network(X_test)
y_predict.shape
# the model output has not passed through softmax
y_predict[0]
y_predict = tf.argmax(y_predict, axis=1)
y_predict[:100]
y_predict2 = network(X_test)
y_predict2.shape
```
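Test accuracy can then be computed by comparing the predicted classes with the true labels. A NumPy sketch of that comparison (the in-notebook equivalent would use `tf.equal`/`tf.reduce_mean` on `y_predict` and `y_test`):

```python
import numpy as np

def accuracy(y_pred, y_true):
    # fraction of samples whose predicted class matches the label
    return np.mean(np.asarray(y_pred) == np.asarray(y_true))

print(accuracy([7, 2, 1, 0], [7, 2, 1, 1]))  # 0.75
```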
# Neural Networks #
As outlined in [Carreau and Bengio (2009)](references.ipynb), the parameters of the Phat distribution can also be fit utilizing a simple neural network. For a univariate model, the need for such a structure may not be obvious, but the structure can be built upon to add additional free parameters (such as the mixture weights between the Carbens) and also conditional models with exogenous variables.
First, we will demonstrate the technique simply on a Gaussian distribution.
**Tensorflow** is required.
## Fitting a Standard Gaussian ##
A conditional density model is estimated by providing one or many independent variables, $X$, and a dependent variable, $Y$. In our case, we are looking to fit a univariate dependent variable. In Tensorflow, we must provide both $X$ and $Y$ input tensors, so to accomplish this we can simply set $X=0$ for every sample of $Y$:
$$
X_i = 0; \quad i = 1 \ldots n
\\Y_i = \text{dependent variable}
$$
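The zero-input pairing can be written out explicitly. This is only an illustration of the idea; the `DataSplit` class below handles it internally:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
y = rng.standard_normal(n)  # samples of the variable whose density we fit
x = np.zeros(n)             # a constant zero "input" for every sample
# every (x_i, y_i) pair feeds the network the same, uninformative input
assert all(xi == 0.0 for xi in x) and len(x) == len(y)
```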
In this example, we generate 100,000 samples from a standard Gaussian and fit it via the negative log-likelihood. `phat-tails` has a custom `DataSplit` class we can use to split the data for training purposes.
```
%load_ext autoreload
%autoreload 2
import seaborn as sns; sns.set(style = 'whitegrid')
import numpy as np
import scipy.stats as scist
import matplotlib.pyplot as plt
import phat as ph
n = 100000
y_data = scist.norm(0, 1).rvs(size=n)
data = ph.DataSplit(y_data)
```
Below, a histogram of our samples clearly resembles the PDF of the Gaussian:
```
plt.hist(y_data, bins=100)
plt.rcParams['patch.edgecolor'] = 'C0'
plt.show()
```
We have built a very simple neural network of `DN` class that takes in both $X$ and $Y$ variables, passes $X$ through one hidden layer (utilizing a `tanh` activation), then to an intermediate layer with two nodes, $\mu$ and $\sigma$, the parameters of the Normal distribution. $\sigma$ is then passed through a customized `nnelu` activation, which is simply the `relu` with a restriction to only positive numbers.
The loss function is the Gaussian negative log-likelihood.
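A NumPy sketch of that quantity (the library's `gnll_loss` operates on tensors; this only shows the scalar math it encodes):

```python
import numpy as np

def gaussian_nll(y, mu, sigma):
    # average negative log-likelihood of y under N(mu, sigma)
    return np.mean(0.5 * np.log(2 * np.pi * sigma ** 2)
                   + (y - mu) ** 2 / (2 * sigma ** 2))

y = np.array([0.0, 1.0, -1.0])
# the loss is smaller at parameters near the sample statistics than far away
assert gaussian_nll(y, 0.0, 1.0) < gaussian_nll(y, 5.0, 1.0)
```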
```
import tensorflow as tf
from phat.learn.normnet import DN, gnll_loss
dn = DN(neurons=200)
lr = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=1e-3,
decay_steps=250,
decay_rate=0.8
)
dn.compile(
loss=gnll_loss,
optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
metrics=['mean', 'std']
)
dn.build_graph().summary()
```
We can see the graph visually via `plot_model`
```
dn.plot_model()
stop_loss = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=2, verbose=0)
history = dn.fit(
data.train, epochs=3,
validation_data=data.test,
callbacks=[stop_loss], batch_size=32, verbose=0
)
```
The model minimized the loss almost immediately, resulting in the parameters below. They are shown next to return values from `scipy`'s fit function utilizing the Maximum Likelihood Estimate (MLE):
```
import pandas as pd
mean, std = dn.predict(np.zeros(1))[0]
m, s = scist.norm.fit(data.raw.y)
df = pd.DataFrame([[mean, std], [m, s]], index=['ANN', 'MLE'], columns=['mean', 'std'])
df.style.format({'mean': '{:.4f}', 'std': '{:.4f}'})
from IPython.core.display import Markdown as md
text = "The fit for both mean and standard deviation is fairly close,"
text += " though we should be cognizant that, in terms of daily returns,"
text += f' the delta of {mean - m:.4f} '
text += f' still translates to a {(1 + (mean-m)/100)**252-1:.2%} CAGR.'
md(text)
```
## Fitting S&P 500 Daily Returns ##
We will repeat the same process now for S&P 500 daily returns.
```
import yfinance as yf
sp = yf.download('^GSPC')
sp_rets = sp.Close.pct_change()[1:]
sp_rets.plot()
plt.show()
data = ph.DataSplit(sp_rets.values)
dn = DN()
lr = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=1e-3,
decay_steps=100,
decay_rate=0.9
)
dn.compile(
loss=gnll_loss,
optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
metrics=['mean', 'std']
)
stop_loss = tf.keras.callbacks.EarlyStopping(
monitor='val_loss', min_delta=0,
patience=20, verbose=1, mode='auto'
)
history = dn.fit(
data.train, epochs=100,
validation_data=data.test,
callbacks=[stop_loss], batch_size=32, verbose=0
)
mean, std = dn.predict(np.zeros(1))[0]
m, s = scist.norm.fit(sp_rets)
df = pd.DataFrame([[mean, std], [m, s]], index=['ANN', 'MLE'], columns=['mean', 'std'])
df.style.format({'mean': '{:.5f}', 'std': '{:.4f}'})
from IPython.core.display import Markdown as md
diff = df['mean'].diff().iloc[1]
text = ' In this instance, the delta between the estimates'
text += f' accounts for just {(1 + diff/100)**252-1:.3%} CAGR.'
md(text)
```
A visualization of the gradient descent (towards the mean) is available via `loss_progress`.
```
import matplotlib
from IPython.core.display import HTML
matplotlib.use("Agg")
Writer = matplotlib.animation.writers['ffmpeg']
writer = Writer(fps=15, metadata=dict(artist='rskene'), bitrate=1800)
anime = dn.loss_progress(history)
anime.save('nnet_norm_fit_sp.mp4', writer=writer)
HTML(anime.to_html5_video())
```
# Fitting the Phat #
## Failure of a Standard Loss Function ##
Of course, daily returns on the S&P 500 are not Gaussian ... or, at best, if they are Gaussian it means we are living in a one-in-$10^{100+}$ universe where a dozen or more six sigma events have occurred in the past 100 years. Fans of the Many Worlds interpretation would agree this is entirely possible.
Nevertheless, we will explore the fit of the Phat distribution, utilizing a network similar to that employed in [Carreau and Bengio (2009)](references.ipynb). We will first test our model against a generative version of Phat, with parameters chosen to mirror that of daily S&P 500 returns.
We will use the negative log-likelihood of the entire Phat distribution, a standard loss function used for most probability distributions.
```
genmod = ph.Phat(.0003, .0032, .17, .19)
n = 60000
y = genmod.rvs(size=n, seed=16433)
data = ph.DataSplit(y)
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(18,6))
ax1.plot(data.train_raw.y)
ax2.plot(data.test_raw.y)
ax1.set_title('Training Set')
ax2.set_title('Test Set')
plt.show()
```
We developed a second custom network for this process, `PhatNetBeta`, containing a few key changes.
1. We reduced the nodes in the hidden layer to just one. The input values are all zero and so their weighting is valueless; the bias from the hidden layer is the only meaningful input to the parameter layer.
2. The hidden layer has *no activation function*. The `x` values provided are all zero; they cannot be "activated".
3. We have added two additional parameters, `shape_l` and `shape_r`, representing the tail indices of the left- and right-tailed Pareto distributions incorporated in the Phat distribution.
4. The loss function is now the negative log-likelihood of the Phat distribution (available as `BodyLoss`, referencing the fact that it over-fits the body of the distribution).
5. The `PhatMetric` class instantiates a metric for any one of the Phat parameters in the body and in both tails (which tail must be specified).
6. A number of operations were pushed lower-level for convenience.
```
from phat.learn.phatnet import PhatNetBeta, PhatMetric, BodyLoss
dn = PhatNetBeta(neurons=1)
dn.plot_model()
metrics = [
PhatMetric('mean_left'), PhatMetric('std_left'),
PhatMetric('shape_left'), PhatMetric('shape_right'),
]
dn.compile(loss=BodyLoss(), optimizer='adam', metrics=metrics)
history = dn.fit(
data.train,
validation_data=data.test,
epochs=100,
batch_size=32,
verbose=0
)
```
We can compare results with the generative model:
```python
pd.DataFrame(
(genmod.args, phat_fit.args),
index=['Gen Model', 'Neural Net Fit'],
columns=['mean', 'std',
r'$\xi_l$', r'$\xi_r$', r'$a_l$',
r'$a_r$', r'$b_l$', r'$b_r$',
]
).T.to_csv('phat_fit_no_tails_comp.csv', index=False)
```
```
mean_, std_, shl_, shr_ = dn.predict([0])[0]
phat_fit = ph.Phat(*dn.predict([0])[0])
df = pd.read_csv('phat_fit_no_tails_comp.csv')
df.style.format('{:.4}')
```
As with the [MLE](#mle_fit.ipynb) fit, the neural net approach leads to significant underestimation of the tail indices, driving them to near zero.
We can see why below. We compare the change in the log-likelihood for changes in the different parameters. Changes in the mean have a clear absolute minimum, changes in standard deviation are actually asymptotic to declining loss, and changes in the tail are linear across a very narrow range.
So $\xi$ can be turned all the way down to zero to the benefit of the loss function.
```
import numpy as np
y = np.linspace(-.1, .1, 1000)
siglin = np.linspace(0.001, 0.01, 1000)
shlin = np.linspace(0, .5, 1000)
shl_pdf = [ph.Phat(mean_, std_, sh, shr_).pdf(mean_) for sh in shlin]
shr_pdf = [ph.Phat(mean_, std_, shl_, sh).pdf(mean_) for sh in shlin]
shl_nll = [ph.Phat(mean_, std_, sh, shr_).nll(mean_) for sh in shlin]
shr_nll = [ph.Phat(mean_, std_, shl_, sh).nll(mean_) for sh in shlin]
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2,2,figsize=(18,12))
ax1.plot(y.reshape(-1,1), phat_fit.nll(np.tile(y, 1)).T)
ax2.plot(siglin.reshape(-1,1), ph.Phat(mean_, siglin, shl_, shr_).nll(mean_).T)
ax3.plot(shlin, shl_nll)
ax4.plot(shlin, shr_nll)
ax1.set_title('NLL Relative to Log Returns')
ax2.set_title('NLL Relative to Deviation')
ax3.set_title('NLL Relative to Changes in the Left Tail Shape')
ax4.set_title('NLL Relative to Changes in the Right Tail Shape')
plt.show()
```
Below we can see just how sinister the lack of a tail index can be. The fitted model, with no tail indices, appears to be as good a fit, if not better, for the random samples than the actual model that created them!
It is only deep in the tails that the larger tail index plays a role.
```
fig, ax = plt.subplots(1,1,figsize=(10,6))
y = genmod.rvs(size=10000)
y_ = np.linspace(y.min(), y.max(), 10000)
bins = np.linspace(y.min(), y.max(), 100)
ax.hist(y, bins=bins, density=True, label='Random Samples')
ax.plot(y_, genmod.pdf(y_), label='Generative Model')
ax.plot(y_, phat_fit.pdf(y_), label='PhatNetBeta')
ax.set_xlim((-.05, .05))
ax.legend()
plt.show()
```
## PhatLoss: A Custom Loss Function ##
As with our MLE, we can incorporate a tail estimation method into our process in order to alleviate underfitting in the tails. To do so, we have to again amend the network model, including the use of a new custom loss function, `PhatLoss`. The updated network is available as `PhatNet`.
First, we will generate some data.
```
genmod = ph.Phat(.0003, .0032, .25, .29)
n = 60000
y = genmod.rvs(size=n)
data = ph.DataSplit(y)
```
Then, we will find our estimate of each tail index.
```
xi_l, xi_r = ph.two_tailed_hill_double_bootstrap(data.raw.y)
```
For our neural net, we want to train the shape parameters against our bootstrapped estimates and the remaining parameters against our Phat distribution. This means we must employ two different loss functions:
1. For each of the shape parameters, we use the asymptotic error discussed earlier; however, we take the log of both values to create a more valuable gradient (the same concept as the log-likelihood):
$$\text{AMLSE} = E[(\log\hat{\xi} - \log\xi)^2]$$
where $\xi$ is now the estimate derived from the double bootstrap method and $\hat{\xi}$ is the value resulting from the gradient descent.
The AMLSE of the left and right tails is then averaged. This is the `TailLoss`.
2. For the body parameters, $\mu$ and $\sigma$, we will continue to use the negative log-likelihood of the resulting Phat distribution. Note this means the ongoing shape parameters must also be provided to the $\mu$ and $\sigma$ loss calculation. This is the `BodyLoss` used earlier.
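The tail piece of the loss can be sketched directly from the AMLSE formula. This is an illustration only; `TailLoss` in `phat` is the authoritative version:

```python
import numpy as np

def amlse(xi_hat, xi):
    # squared error of the *logged* tail indices
    return (np.log(xi_hat) - np.log(xi)) ** 2

def tail_loss(xi_hat_l, xi_l, xi_hat_r, xi_r):
    # average the left- and right-tail errors
    return 0.5 * (amlse(xi_hat_l, xi_l) + amlse(xi_hat_r, xi_r))

# a perfect fit to the bootstrapped estimates gives zero tail loss
assert tail_loss(0.25, 0.25, 0.29, 0.29) == 0.0
```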
We then combine these two losses, at each step, into a single loss driver according to the formula:
$$ \text{Loss} = \frac{\text{Loss}_{\textit{body}}}{\text{Loss}_{\textit{tail}}+1}$$
The above relationship was established empirically. As we'll see, it produces a loss curve that scales the relative importance of both the Body and Tail losses and allows for asymptotic behavior in either without negating it in the other. This seems to produce good convergence, although it does have [scale drawbacks](#Caution-On-Scaling).
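A direct transcription of that formula (`phat_loss` here is a hypothetical helper name, not the library's API):

```python
def phat_loss(body_loss, tail_loss):
    # the empirically chosen combination: body loss scaled by (tail loss + 1)
    return body_loss / (tail_loss + 1.0)

assert phat_loss(2.0, 0.0) == 2.0  # zero tail loss leaves the body loss untouched
assert phat_loss(2.0, 1.0) == 1.0  # a larger tail loss scales the body loss down
```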
Below we can see the surface of the resulting loss function in terms of its constituents, the log-likelihood of the Phat distribution and the average of the AMLSE of each tail index.
```
%matplotlib notebook
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.axes_grid1 import make_axes_locatable
fig, ax = plt.subplots(figsize=(8,5),subplot_kw={"projection": "3d"})
history = pd.read_csv('history_phat_learning.csv')
bodyloss = history.nll
tailloss = history.two_tailed_amlse
X, Y = np.meshgrid(bodyloss, tailloss)
Z = X / (Y + 1)
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
plt.gca().invert_xaxis()
ax.set_xlabel('Log Likelihood', labelpad=10)
ax.set_ylabel('AMLSE', labelpad=10)
ax.set_zlabel('Total Loss')
ax.get_proj = lambda: np.dot(Axes3D.get_proj(ax), np.diag([1, 1.5, 1.25, 1.1]))
ax.set_title('Phat Loss', y=.95)
plt.show()
```
## PhatNet ##
Here we see the model.
```
from datetime import datetime as dt
import tensorflow as tf
nnet = ph.PhatNet()
```
To compile the model, we pass our custom loss function `PhatLoss` as well as a number of custom metrics to monitor results, mainly through the `PhatMetric` class. Both `PhatLoss` and `TwoTailedAMLSE` take the estimated shape parameters as arguments.
```
from phat.learn.phatnet import NLL, TwoTailedAMLSE
metrics = [NLL(), TwoTailedAMLSE(xi_l, xi_r)]
optimizer = tf.keras.optimizers.Adam(learning_rate=5*10**-5)
nnet.compile(loss=ph.PhatLoss(xi_l, xi_r), optimizer=optimizer, metrics=metrics)
```
We have added some customization to the `fit` method including a number of callbacks. For instance, a TensorBoard log is created simply by passing the `logdir` keyword.
```
history = nnet.fit(
data.train,
epochs=100,
validation_data=data.test,
verbose=0
)
```
```python
pd.DataFrame(history.history).to_csv('history_phat_learning.csv', index=False)
results = pd.DataFrame(
list(zip(nnet.predicted_params().values[:, 0], genmod.learnable_params)),
index=genmod.PARAM_NAMES[:4],
columns=['Trained', 'Actual']
).to_csv('phat_w_tails_results.csv', index=False)
```
```
results = pd.read_csv('phat_w_tails_results.csv')
results.style.format('{:.4}')
```
Above we see a much improved fit of the tail indices while not sacrificing accuracy in the body parameters.
## Caution on Scaling ##
The model inputs are all `0`, so the usual concerns regarding normalization/standardization/activation in the hidden layer do not apply. Still, the scale of the target `y` values does impact performance in an important way.
If the scale of the `y` values is too large, our custom loss function will work in the exact opposite fashion we expect. To see why, recall our loss function:
$$ \text{Loss} = \frac{\text{Loss}_{\textit{body}}}{\text{Loss}_{\textit{tail}}+1}$$
We can see that if $\Delta \text{Loss}_{\textit{tail}} \gg \Delta\text{Loss}_{\textit{body}}$, then an *increase* in both will lead to a decreasing loss value.
This can result if the scaling of `y` is too large.
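A two-line numerical check of the pathology (same transcription of the formula as above; the helper name is hypothetical):

```python
def phat_loss(body, tail):
    return body / (tail + 1.0)

# both constituent losses get *worse* (larger), yet the combined loss falls,
# because the tail term grows much faster than the body term
assert phat_loss(11.0, 10.0) < phat_loss(10.0, 1.0)  # 1.0 < 5.0
```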
To demonstrate, we'll repeat the prior experiment and simply increase the standard deviation of our Phat distribution by a factor of 100.
```
import numpy as np
genmod = ph.Phat(.0003, .32, .25, .29)
n = 60000
y = genmod.rvs(size=n)
data = ph.DataSplit(y)
xi_l, xi_r = ph.two_tailed_hill_double_bootstrap(data.raw.y)
```
Note that the tail estimates are essentially unchanged. The location of each tail will be impacted, but not the index.
```
from phat.learn.phatnet import NLL, TwoTailedAMLSE
nnet = ph.PhatNet()
metrics = [NLL(), TwoTailedAMLSE(xi_l, xi_r)]
optimizer = tf.keras.optimizers.Adam(learning_rate=5*10**-5)
nnet.compile(loss=ph.PhatLoss(xi_l, xi_r), optimizer=optimizer, metrics=metrics)
history = nnet.fit(
data.train,
epochs=100,
validation_data=data.test,
verbose=0,
)
```
As we can see below, this results in a markedly different loss region, where the Total Loss improves as both the log-likelihood and the AMLSE increase!
```
%matplotlib notebook
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.axes_grid1 import make_axes_locatable
fig, ax = plt.subplots(figsize=(6,4),subplot_kw={"projection": "3d"})
# pd.DataFrame(history.history).to_csv('history_phat_learning_on_100x_scale.csv', index=False)
histy = pd.read_csv('history_phat_learning_on_100x_scale.csv')
bodyloss = histy.nll
tailloss = histy.two_tailed_amlse
X, Y = np.meshgrid(bodyloss, tailloss)
Z = X / (Y + 1)
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
plt.gca().invert_xaxis()
ax.set_xlabel('Log Likelihood', labelpad=10)
ax.set_ylabel('AMLSE', labelpad=10)
ax.set_zlabel('Total Loss')
ax.set_title('Phat Loss with Incorrect Scale')
ax.get_proj = lambda: np.dot(Axes3D.get_proj(ax), np.diag([1, 1.5, 1.25, 1.1]))
plt.show()
```
# Vowpal Wabbit parameter estimation
## MNIST PCA data
https://github.com/JohnLangford/vowpal_wabbit/wiki/Command-line-arguments
```
import re
import csv
import subprocess
from time import ctime
import pandas as pd
import numpy as np
import scipy
from matplotlib import pyplot as plt
%matplotlib inline
#%qtconsole
!rm cv_train_pca.vw.cache
vw_input_file = 'data/mnist_train_pca.vw'
path_to_cache = 'cv_train_pca.vw.cache' # <-- this is the file removed, above
output_file = 'vw_cv_parms_tested.csv'
# ===================================================================
# REMEMBER: remove the parameter you're testing
# and put it back with its best value when you're done
# ===================================================================
vw_params = '-d ' + vw_input_file + ' --oaa 10 -f cv.model ' + \
' -q ii -b 19 -l 0.4 --power_t 0.6 --decay_learning_rate 1 --initial_t 0 ' + \
' --passes 35 --early_terminate 3 '
# ===================================================================
###
def get_loss(output):
    # VW prints its summary to stderr; grab the "average loss" figure
    pattern = r"average\s+loss\s+=\s+([0-9.e]+)"
    m = re.search(pattern, output)
    return m.group(1)
###
o_f = open(output_file, 'w', newline='')
writer = csv.writer(o_f)
writer.writerow(['bits', 'loss'])
# =============================================
# ========= parameter ranges to test ==========
# --------------------------------------------------------
# with --early_terminate is there any reason not to simply
# set --passes to a very large number?
# --------------------------------------------------------
# the -b sweep is the one active here (the plot below reads d.bits)
param = "-b"
param_name = param + " hash table entry bit size"
param_range = range(12, 30+1, 1)
#param = "-l"
#param_name = param + " learning rate"
#param_range = np.arange(0.1, 1.1, .1)
#param = "--power_t"
#param_name = param
#param_range = np.arange(0, 1.1, .1)
#param = '--decay_learning_rate'
#param_name = param
#param_range = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.7071068, 0.8, 0.9, 1, 1.1, 1.2, 1.3, 1.4, 1.5]
# watch for this
# Warning: the learning rate for the last pass is multiplied by: [2.91038e-11]
# adjust --decay_learning_rate larger to avoid this.
#param = '--initial_t'
#param_name = param
#param_range = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# ========= parameter ranges to test ==========
# =============================================
for b in param_range:
    cmd = 'vw {} --cache_file {} {} {} 2>&1'.format(vw_params, path_to_cache, param, b)
    print(cmd)
    output = subprocess.check_output('{} | tee /dev/stderr'.format(cmd), shell=True).decode()
    loss = get_loss(output)
    print("\n{} {}, loss: {}\n{}\n".format(param_name, b, loss, ctime()))
    writer.writerow([b, loss])
    o_f.flush()
input_file = output_file
d = pd.read_csv( input_file )
plt.figure(figsize=(10,6))
plt.title("Vowpal Wabbit loss vs {0}".format(param_name) )
plt.xlabel("{0}; best={1}".format(param_name, d.bits[np.argmin(d.loss)]) )
plt.plot( d.bits, d.loss )
plt.ylabel("Average Log Loss (lowest is best)")
plt.axvline(x=d.bits[np.argmin(d.loss)])
plt.show()
```
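The `get_loss` helper above scrapes the summary VW prints to stderr. A quick self-contained check against a fabricated sample of that output (the real summary has more fields):

```python
import re

def get_loss(output):
    m = re.search(r"average\s+loss\s+=\s+([0-9.e]+)", output)
    return m.group(1)

sample = "finished run\nnumber of examples = 60000\naverage loss = 0.1234\n"
assert get_loss(sample) == "0.1234"
```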
```
import matplotlib.pyplot as plt
%matplotlib inline
import cv2
import numpy as np
import pickle
from scipy.misc import imread, imresize  # removed in SciPy >= 1.2; on newer installs use imageio.imread / cv2.resize
from birdseye import BirdsEye
from lanefilter import LaneFilter
from curves import Curves
from helpers import show_images, save_image, roi
from moviepy.editor import VideoFileClip
from IPython.display import HTML
calibration_data = pickle.load(open("calibration_data.p", "rb" ))
matrix = calibration_data['camera_matrix']
distortion_coef = calibration_data['distortion_coefficient']
source_points = [(580, 460), (205, 720), (1110, 720), (703, 460)]
destination_points = [(320, 0), (320, 720), (960, 720), (960, 0)]
p = { 'sat_thresh': 120, 'light_thresh': 40, 'light_thresh_agr': 205,
'grad_thresh': (0.7, 1.4), 'mag_thresh': 40, 'x_thresh': 20 }
birdsEye = BirdsEye(source_points, destination_points, matrix, distortion_coef)
laneFilter = LaneFilter(p)
curves = Curves(number_of_windows = 9, margin = 100, minimum_pixels = 50,
ym_per_pix = 30 / 720 , xm_per_pix = 3.7 / 700)
def debug_pipeline(img):
    ground_img = birdsEye.undistort(img)
    birdseye_img = birdsEye.sky_view(img)
    binary_img = laneFilter.apply(ground_img)
    sobel_img = birdsEye.sky_view(laneFilter.sobel_breakdown(ground_img))
    color_img = birdsEye.sky_view(laneFilter.color_breakdown(ground_img))
    wb = np.logical_and(birdsEye.sky_view(binary_img), roi(binary_img)).astype(np.uint8)
    result = curves.fit(wb)
    left_curve = result['pixel_left_best_fit_curve']
    right_curve = result['pixel_right_best_fit_curve']
    left_radius = result['left_radius']
    right_radius = result['right_radius']
    pos = result['vehicle_position_words']
    curve_debug_img = result['image']
    projected_img = birdsEye.project(ground_img, binary_img, left_curve, right_curve)
    return birdseye_img, sobel_img, color_img, curve_debug_img, projected_img, left_radius, right_radius, pos
def verbose_pipeline(img):
    b_img, s_img, co_img, cu_img, pro_img, lr, rr, pos = debug_pipeline(img)
    b_img = imresize(b_img, 0.25)
    s_img = imresize(s_img, 0.25)
    co_img = imresize(co_img, 0.25)
    cu_img = imresize(cu_img, 0.25)
    offset = [0, 320, 640, 960]
    width, height = 320, 180
    pro_img[:height, offset[0]: offset[0] + width] = b_img
    pro_img[:height, offset[1]: offset[1] + width] = co_img
    pro_img[:height, offset[2]: offset[2] + width] = s_img
    pro_img[:height, offset[3]: offset[3] + width] = cu_img
    text_pos = "vehicle pos: " + pos
    text_l = "left r: " + str(np.round(lr, 2))
    text_r = " right r: " + str(np.round(rr, 2))
    cv2.putText(pro_img, text_l, (20, 220), cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 0), 2)
    cv2.putText(pro_img, text_r, (250, 220), cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 0), 2)
    cv2.putText(pro_img, text_pos, (620, 220), cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 0), 2)
    return pro_img
path = "test_images/test2.jpg"
img = imread(path)
plt.imshow(verbose_pipeline(img))
project_output = 'project_video_verbose_output.mp4'
clip1 = VideoFileClip("project_video.mp4");
white_clip = clip1.fl_image(verbose_pipeline)
%time white_clip.write_videofile(project_output, audio = False);
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(project_output))
```
```
import pandas as pd
import numpy as np
import logging
import time
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
import sys
sys.path.insert(1, '../')
from config import shuffled_csv, path_exps
from NN import NN_model, ReLU, MSE, L1_reg
from LevelMethod import LevelMethod
from NN.utility import batch_train, batch_out, Model_Wrapper
from LBFGS import LBFGS
from testing import multi_run
data = pd.read_csv(shuffled_csv, index_col=0).to_numpy()
data = data[:100, :]
n_samples = data.shape[0]
X_data = data[:, :10]
Y_data = data[:, 10:]
Y_scaler = StandardScaler()
Y_scaled = Y_scaler.fit_transform(Y_data)
# np.random.seed(11)
model = NN_model([10, 20, 20, 2], ReLU, MSE)
model.init_weights()
reg_loss = L1_reg(1e-4)
# logging.basicConfig(level="INFO")
f = Model_Wrapper(model, X_data, Y_scaled, reg_loss)
x = model.Weights
max_feval = 500
import seaborn as sns
def geo_mean(iterable):
    a = np.array(iterable)
    return a.prod() ** (1.0 / len(a))
def pair_vector(vector_list):
    # pad every vector to the longest length by repeating its last element
    maxlen = len(max(vector_list, key=len))
    for i in range(len(vector_list)):
        lastEl = len(vector_list[i]) - 1
        tmpArray = np.full((maxlen - lastEl - 1, 1), vector_list[i][lastEl])
        vector_list[i] = np.array(np.concatenate((vector_list[i], tmpArray), axis=None))
    return vector_list
runs = 3
bundle_f_values = []
bundle_runtimes = []
bundle_f_evals = []
bundle_f_times = []
lbfgs_f_values = []
lbfgs_runtimes = []
lbfgs_f_evals = []
lbfgs_f_times = []
for i in range(runs):
    solver = LevelMethod(lambda_=0.7, bounds=1, max_iter=max_feval, verbose=False)
    model.init_weights()
    x = model.Weights
    start_time = time.process_time()
    status = solver.solve(f, x)
    end_time = time.process_time()
    runtime = end_time - start_time
    bundle_runtimes.append(runtime)
    bundle_f_times.append(solver.times["step"])
    # bundle_f_evals.append(solver.feval)
    bundle_f_values.append(solver.f_values)
for i in range(runs):
    solver = LBFGS(eps=1e-4, max_feval=max_feval, M=500, m1=0.01, m2=0.7)
    model.init_weights()
    x = model.Weights
    start_time = time.process_time()
    status = solver.solve(f, x)
    end_time = time.process_time()
    runtime = end_time - start_time
    lbfgs_runtimes.append(runtime)
    lbfgs_f_values.append(solver.f_values)
    lbfgs_f_times.append(solver.time_evaluations)
    lbfgs_f_evals.append(solver.feval)
bundle_f_times = [list(np.cumsum(step_times)) for step_times in bundle_f_times]
print("Solver \t\tMean runtime\t\tstd runtime")
print(f"Level Bundle\t\t{np.mean(bundle_runtimes)}\t{np.std(bundle_runtimes)}")
print(f"LBFGS\t\t{np.mean(lbfgs_runtimes)}\t{np.std(lbfgs_runtimes)}")
if lbfgs_f_values is not None and bundle_f_values is not None:
    run_1 = pair_vector(bundle_f_values)
    run_2 = pair_vector(lbfgs_f_values)
    df_1 = pd.DataFrame(run_1).melt()
    df_1['function'] = ["bundle"] * len(df_1.index)
    df_2 = pd.DataFrame(run_2).melt()
    df_2['function'] = ["lbfgs"] * len(df_2.index)
    time_1 = pair_vector(bundle_f_times)
    time_2 = pair_vector(lbfgs_f_times)
    df_3 = pd.DataFrame(time_1).melt()
    df_3['function'] = ["bundle"] * len(df_3.index)
    df_4 = pd.DataFrame(time_2).melt()
    df_4['function'] = ["lbfgs"] * len(df_4.index)
X1 = []
Y1 = []
y1_std = []
X2 = []
Y2 = []
y2_std = []
for i in range(0, len(df_1["value"].to_numpy()), runs):
    X1.append(np.mean(df_3["value"].to_numpy()[i:i+runs]))
    Y1.append(np.mean(df_1["value"].to_numpy()[i:i+runs]))
    y1_std.append(np.std(df_1["value"].to_numpy()[i:i+runs]))
for i in range(0, len(df_2["value"].to_numpy()), runs):
    X2.append(np.mean(df_4["value"].to_numpy()[i:i+runs]))
    Y2.append(np.mean(df_2["value"].to_numpy()[i:i+runs]))
    y2_std.append(np.std(df_2["value"].to_numpy()[i:i+runs]))
f_star = min(min(Y1), min(Y2))
for i in range(len(Y1)):
    Y1[i] = (Y1[i] - f_star) / f_star
    y1_std[i] = y1_std[i] / f_star
for i in range(len(Y2)):
    Y2[i] = (Y2[i] - f_star) / f_star
    y2_std[i] = y2_std[i] / f_star
# Plot fevals
fig, ax = plt.subplots(figsize=(12,8))
plt.rcParams.update({'font.size': 18,'font.weight':'normal','font.family':'sans-serif'})
ax.plot(np.asarray(X1)-X1[0],Y1, label='Bundle')
ax.fill_between(np.asarray(X1)-X1[0], np.asarray(Y1)-np.asarray(y1_std),np.asarray(Y1)+np.asarray(y1_std),alpha=0.1,interpolate = True)
ax.plot(np.asarray(X2)-X2[0],Y2, label='L-BFGS')
ax.fill_between(np.asarray(X2)-X2[0], np.asarray(Y2)-np.asarray(y2_std),np.asarray(Y2)+np.asarray(y2_std),alpha=0.1,interpolate = True)
plt.yscale("log")
plt.legend()
ax.set_xlabel('time (sec)',fontsize=22)
ax.set_ylabel('relative gap to minimum',fontsize=22)
plt.show()
ratios = np.zeros_like(Y2)
for i in range(len(Y2)-2):
    ratios[i+1] = Y2[i] / Y2[i+1]
print(ratios)
```
```
import skimage as ski
import numpy as np
import openpyxl as xl
import csv
import os
import xmltodict
import pandas as pd
"""
for this typical ETL pattern, this notebook explores EXTRACT
"""
plate1_path='/Volumes/GoogleDrive/My Drive/ELISAarrayReader/images_scienion/2020-01-15_plate4_AEP_Feb3_6mousesera'
# plate1_path = 'Plates_given_to_manu/2020-01-15_plate4_AEP_Feb3_6mousesera'
plate1_gal = plate1_path+os.sep+'8x6_test.gal'
plate1_xml = plate1_path+os.sep+'8x6_test.conf.xml'
"""
explore the .gal file
"""
# explore the data format, delimiter type
with open(plate1_gal, newline='') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter='\t')
    for row in csv_reader:
        print(row)
# block starts as list, then will be cast to numpy array
with open(plate1_gal, newline='') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter='\t')
    out_dict = {}
    out_dict['Header'] = []
    out_dict['Block'] = []
    begin_block = False
    for row in csv_reader:
        if row[0] == 'Block':
            begin_block = True
        # still reading header values
        if not begin_block:
            out_dict['Header'].append(row)
        # begin block (spot) mapping
        if begin_block:
            out_dict['Block'].append(row)
out_dict['Block']
# parse the index for various header values
# ['Block', 'Row', 'Column', 'ID', 'Name']
row_idx = out_dict['Block'][0].index('Row')
col_idx = out_dict['Block'][0].index('Column')
id_idx = out_dict['Block'][0].index('ID')
name_idx = out_dict['Block'][0].index('Name')
# find the max rows and cols
max_row = 0
max_col = 0
for spot in out_dict['Block'][1:]:
if max_row < int(spot[row_idx]):
max_row = int(spot[row_idx])
if max_col < int(spot[col_idx]):
max_col = int(spot[col_idx])
gal_spot_array_list_ID = [[[None] for i in range(max_col)] for j in range(max_row)]
gal_spot_array_numpy_ID = np.empty(shape=(max_row, max_col), dtype=np.dtype('U100'))
gal_spot_array_list_Name = [[[None] for i in range(max_col)] for j in range(max_row)]
gal_spot_array_numpy_Name = np.empty(shape=(max_row, max_col), dtype=np.dtype('U100'))
for spot in out_dict['Block'][1:]:
r = int(spot[row_idx])-1
c = int(spot[col_idx])-1
ID = spot[id_idx]
name = spot[name_idx]
gal_spot_array_list_ID[r][c] = ID
gal_spot_array_numpy_ID[r, c] = ID
gal_spot_array_list_Name[r][c] = name
gal_spot_array_numpy_Name[r, c] = name
"""
explore the .xml file
"""
# xmltodict is much easier to use and translate than xml.dom.minidom
with open(plate1_xml) as fd:
doc = xmltodict.parse(fd.read())
# layout of array
layout = doc['configuration']['well_configurations']['configuration']['array']['layout']
# fiducials
fiduc = layout['marker']
# spot IDs
spots = doc['configuration']['well_configurations']['configuration']['array']['spots']['spot']
# replicates
repl = doc['configuration']['well_configurations']['configuration']['array']['spots']['multiplet']
rows = int(layout['@rows'])
columns = int(layout['@cols'])
v_pitch = float(layout['@vspace'])
h_pitch = float(layout['@hspace'])
spot_width = float(layout['@expected_diameter'])
bg_offset = float(layout['@background_offset'])
bg_thickness = float(layout['@background_thickness'])
max_diam = float(layout['@max_diameter'])
min_diam = float(layout['@min_diameter'])
# the list forms are filled alongside the numpy forms below, so they must be initialized
xml_spot_array_list = [[[None] for i in range(columns)] for j in range(rows)]
xml_spot_array_numpy = np.empty(shape=(rows, columns), dtype=np.dtype('U100'))
xml_spot_array_list_ID = [[[None] for i in range(columns)] for j in range(rows)]
xml_spot_array_numpy_ID = np.empty(shape=(rows, columns), dtype=np.dtype('U100'))
xml_spot_array_numpy_antigen = np.empty(shape=(rows, columns), dtype=np.dtype('U100'))
for spot in spots:
r = int(spot['@row'])
c = int(spot['@col'])
v = spot['@spot_type']
ID = spot['@id']
xml_spot_array_list[r][c] = v
xml_spot_array_numpy[r,c] = v
xml_spot_array_list_ID[r][c] = ID
xml_spot_array_numpy_ID[r,c] = ID
for f in fiduc:
r = int(f['@row'])
c = int(f['@col'])
v = f['@spot_type']
# fiduc do not have "ID"
xml_spot_array_list[r][c] = v
xml_spot_array_numpy[r,c] = v
xml_spot_array_numpy
# walk through the replicates and assign the repl @id to the array
# 1) iterate repl
# 2) extract all_ids (not @id), extract antigen (@id)
# 3) find cells that correspond to each in all_ids
# 4) use the xml_spot_array_list_ID to assign by antigen to ID
ids = xml_spot_array_numpy_ID
anti = xml_spot_array_numpy_antigen
for rep in repl:
antigen = rep['@id']
all_spots = rep['id'] # list of IDs
for spot in all_spots:
anti[np.where(ids==spot)] = antigen
xml_spot_array_numpy_antigen = anti
xml_spot_array_numpy_antigen
"""
The above code should parse the two files (.gal, .xml) into two data formats:
.gal
gal_spot_array_list_ID[r][c] = ID
gal_spot_array_numpy_ID[r, c] = ID
gal_spot_array_list_Name[r][c] = name
gal_spot_array_numpy_Name[r, c] = name
.xml
xml_spot_array_list[r][c] = spot type
xml_spot_array_numpy[r,c] = spot type
xml_spot_array_list_ID[r][c] = ID
xml_spot_array_numpy_ID[r,c] = ID
xml_spot_array_numpy_antigen = antigen
the 'numpy' and 'list' forms hold identical values
each can be sliced and is indexed in row-column order
each value is a simple string
So we can use the following three arrays to query type, cell ID name, and antigen name
xml_spot_array_numpy
xml_spot_array_numpy_ID
xml_spot_array_numpy_antigen
"""
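# A minimal, self-contained sketch (toy 2x2 arrays with invented values) of
# how the three parallel arrays can be queried together. numpy is already
# imported as np above; the import is repeated so the sketch stands alone.
import numpy as np
toy_type = np.array([['Reference', 'Diagnostic'],
                     ['Diagnostic', 'Diagnostic']], dtype=np.dtype('U100'))
toy_id = np.array([['', 'spot-1-1'],
                   ['spot-2-1', 'spot-2-2']], dtype=np.dtype('U100'))
toy_antigen = np.array([['', 'H1 HA'],
                        ['H1 HA', 'H3 HA']], dtype=np.dtype('U100'))
# boolean masks line up across the parallel arrays, so one array can index
# another: here, all spot IDs carrying the 'H1 HA' antigen
toy_id[toy_antigen == 'H1 HA']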
ids
xml_spot_array_numpy_antigen.shape
xlsx_template = '/Volumes/GoogleDrive/My Drive/ELISAarrayReader/data_for_tests_and_github/Metadata_and_Plate_configuration.xlsx'
# plate_info = pd.read_excel(xlsx_template, usecols='A:M', sheet_name=None)
plate_info = pd.read_excel(xlsx_template, sheet_name=None)
plate_info.keys()
plate_info['imaging_and_array_parameters'].keys()
plate_info['imaging_and_array_parameters']['Parameter']
plate_info['imaging_and_array_parameters']['Value']
for idx, value in enumerate(plate_info['imaging_and_array_parameters']['Parameter']):
print(f"key = {value}, \tval = {plate_info['imaging_and_array_parameters']['Value'][idx]}")
plate_info['array_antigens']
for idx, value in enumerate(plate_info['array_antigens'].keys()):
print(f"k: {value}, \tv: {idx}")
plate_info['array_antigens'].keys() # keys are first row (or column names)
plate_info['array_antigens'][0] # each key maps to a pd.Series
for item in plate_info['array_antigens'][0]:
print(item)
for col in plate_info['array_antigens'].keys()[1:]:
print(col)
for row, value in enumerate(plate_info['array_antigens'][col]):
print(f"\t{row}\t{value}")
"""
writing dict to xml
"""
import os
import xmltodict
temp_dir = '/Users/bryant.chhun/Desktop/Data/array-imager'
fiducials = [{'@row': 0,
'@col': 0,
'@spot_type': 'Reference, Diagnostic'}
]
spots = [{'@row': 0,
'@col': 1,
'@id': 'spot-1-2',
'spot_type': 'Diagnostic'},
{'@row': 1,
'@col': 2,
'@id': 'spot-2-3',
'spot_type': 'Diagnostic'}
]
repl = [{'@row': 0,
'@col': 1,
'@id': 'H1 HA',
'id': ['spot-1-2', 'spot-2-3']}
]
params = {'@rows': None,
'@cols': None,
'@vspace': None,
'@hspace': None,
'@expected_diameter': None,
'@background_offset': None,
'@background_thickness': None,
'@max_diameter': None,
'@min_diameter': None,
}
doc = {'configuration': {'well_configurations': {'configuration': {'array': {}}}}}
# set the hardware parameters
doc['configuration']['well_configurations']['configuration']['array']['layout'] = params
# set the fiducials
doc['configuration']['well_configurations']['configuration']['array']['layout']['marker'] = fiducials
# set the spot IDs
doc['configuration']['well_configurations']['configuration']['array']['spots'] = {}
doc['configuration']['well_configurations']['configuration']['array']['spots']['spot'] = spots
# set the number of replicates
doc['configuration']['well_configurations']['configuration']['array']['spots']['multiplet'] = repl
with open(os.path.join(temp_dir, 'temp.xml'), 'w', encoding='utf-8') as temp_xml:
temp_xml.write(xmltodict.unparse(doc))
"""
using pandas to make xlsx
"""
import pandas as pd
mydict = {'':'',
'rows':'6',
'columns':'6',
'v_pitch':'0.4',
'h_pitch':'0.45',
'spot_width':'0.2',
'pixel_size':'0.0049'
}
truth = '/Volumes/GoogleDrive/My Drive/ELISAarrayReader/images_scienion/2020-04-04-14-18-32-COVID_April4_flusecondplate/Metadata_and_Plate_configuration.xlsx'
d = pd.read_excel(truth)
d['Parameter'], d['Value']
mydict
keys = dict()
vals = dict()
for idx, value in enumerate(mydict.keys()):
keys[idx] = value
for idx, value in enumerate(mydict.values()):
vals[idx] = value
b = pd.Series(keys, name="Parameter")
c = pd.Series(vals, name="Value")
b, c
# mydict
# path = '/Users/bryant.chhun/Desktop/Data/array-imager/fname.xlsx'
# df = pd.DataFrame(mydict, index=[0]).T
# df.to_excel(path, index=False)
# mydict2 = {'Parameter':['Value'],
# '':'',
# 'rows':['6'],
# 'columns':['6'],
# 'v_pitch':['0.4'],
# 'h_pitch':['0.45'],
# 'spot_width':['0.2'],
# 'pixel_size':['0.0049']
# }
d = [b, c]
path = '/Users/bryant.chhun/Desktop/Data/array-imager/fname.xlsx'
df = pd.DataFrame(d).T
df
df.to_excel(path, index=False)
m = pd.read_excel(path)
m
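# Sketch: the same Parameter/Value table can also be built in one call,
# without the intermediate Series above. pandas is already imported as pd;
# the import is repeated so the snippet stands alone.
import pandas as pd
params_example = {'rows': '6', 'columns': '6', 'v_pitch': '0.4'}
df_direct = pd.DataFrame({'Parameter': list(params_example.keys()),
                          'Value': list(params_example.values())})
df_direct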
antigens = {0:{0:'Fiducial', 1:'Flu vaccine 2018-2019', 2:'Flu vaccine 2018-2019', 3:'Flu vaccine 2018-2019', 4:'Flu vaccine 2018-2019', 5:'Fiducial'},
1:{0:'Fiducial', 1:'H1 HA', 2:'H1 HA', 3:'H1 HA', 4:'H1 HA', 5:''},
2:{0:'Positive Control', 1:'H3 HA', 2:'H3 HA', 3:'H3 HA', 4:'H3 HA', 5:'Negative Control'},
3:{0:'Positive Control', 1:'H7 HA', 2:'H7 HA', 3:'H7 HA', 4:'H7 HA', 5:'Negative Control'},
4:{0:'Positive Control', 1:'HA FluB I', 2:'HA FluB I', 3:'HA FluB I', 4:'HA FluB I', 5:'Negative Control'},
5:{0:'Fiducial', 1:'HA FluB II', 2:'HA FluB II', 3:'HA FluB II', 4:'HA FluB II', 5:'Fiducial'}
}
df2 = pd.DataFrame(antigens).T
df2.to_excel(path)
```
<a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/intro/caliban.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Running parallel jobs on Google Cloud using Caliban
[Caliban](https://github.com/google/caliban) is a package that makes it easy to run embarrassingly parallel jobs on Google Cloud Platform (GCP) from your laptop. (Caliban bundles your code into a Docker image, and then runs it on [Cloud AI Platform](https://cloud.google.com/ai-platform), GCP's managed training service.)
```
import json
import pandas as pd
import glob
from IPython.display import display
import numpy as np
import matplotlib.pyplot as plt
```
# Installation
The details on how to install and run Caliban can be found [here](https://github.com/google/caliban). Below we give a very brief summary. Do these steps on your laptop, **outside of this colab**.
- [install docker](https://github.com/google/caliban#docker) and test using ```docker run hello-world```
- ```pip install caliban```
- [setup GCP](https://caliban.readthedocs.io/en/latest/getting_started/cloud.html)
# Launch jobs on GCP
Do these steps on your laptop, **outside of this colab**.
- create a requirements.txt file containing packages you need to be installed in GCP Docker image. Example:
```
numpy
scipy
#sympy
matplotlib
#torch # 776MB slow
#torchvision
tensorflow_datasets
jupyter
ipywidgets
seaborn
pandas
keras
sklearn
#ipympl
jax
flax
# below is jaxlib with GPU support
# CUDA 10.0
#tensorflow-gpu==2.0
#https://storage.googleapis.com/jax-releases/cuda100/jaxlib-0.1.47-cp36-none-linux_x86_64.whl
#https://storage.googleapis.com/jax-releases/cuda100/jaxlib-0.1.47-cp37-none-linux_x86_64.whl
# CUDA 10.1
#tensorflow-gpu==2.1
#https://storage.googleapis.com/jax-releases/cuda101/jaxlib-0.1.47-cp37-none-linux_x86_64.whl
tensorflow==2.1 # 421MB slow
https://storage.googleapis.com/jax-releases/cuda101/jaxlib-0.1.60+cuda101-cp37-none-manylinux2010_x86_64.whl
# jaxlib with CPU support
#tensorflow
#jaxlib
```
- create script that you want to run in parallel, eg [caliban_test.py](https://github.com/probml/pyprobml/blob/master/scripts/caliban_test.py)
- create config.json file with the list of flag combinations you want to pass to the script. For example the following file says to run 2 versions of the script, with flags ```--ndims 10 --prefix "***"``` and ```--ndims 100 --prefix "***"```. (The prefix flag is for pretty printing.)
```
{"ndims": [10, 100],
"prefix": "***" }
```
- launch jobs on GCP, giving them a common name using the xgroup flag.
```
cp ~/github/pyprobml/scripts/caliban_test.py .
caliban cloud --experiment_config config.json --xgroup mygroup --gpu_spec 2xV100 caliban_test.py
```
You can specify the kind of machines you want to use as explained [here](https://caliban.readthedocs.io/en/latest/cloud/gpu_specs.html). If you omit "--gpu_spec", it defaults to n1-standard-8 with a single P100 GPU.
- open the URL that it prints to monitor progress. Example:
```
Visit https://console.cloud.google.com/ai-platform/jobs/?projectId=probml to see the status of all jobs.
```
You should see something like this:
<img src="https://github.com/probml/pyprobml/blob/master/book1/intro/figures/GCP-jobs.png?raw=true">
- Monitor your jobs by clicking on 'view logs'. You should see something like this:
<img src="https://github.com/probml/pyprobml/blob/master/book1/intro/figures/GCP-logs-GPU.png?raw=true">
- When jobs are done, download the log files using [caliban_save_logs.py](https://github.com/probml/pyprobml/blob/master/scripts/caliban_save_logs.py). Example:
```
python ~/github/pyprobml/scripts/caliban_save_logs.py --xgroup mygroup
```
- Upload the log files to Google drive and parse them inside colab using python code below.
# Parse the log files
```
!rm -rf pyprobml # Remove any old local directory to ensure fresh install
!git clone https://github.com/probml/pyprobml
import pyprobml.scripts.probml_tools as pml
pml.test()
import pyprobml.scripts.caliban_logs_parse as parse
import glob
logdir = 'https://github.com/probml/pyprobml/tree/master/data/Logs'
fnames = glob.glob(f'{logdir}/*.config')
print(fnames) # empty
from google.colab import drive
drive.mount('/content/gdrive')
logdir = '/content/gdrive/MyDrive/Logs'
fnames = glob.glob(f'{logdir}/*.config')
print(fnames)
configs_df = parse.parse_configs(logdir)
display(configs_df)
for n in [1, 2]:
    print(parse.get_args(configs_df, n))
logdir = '/content/gdrive/MyDrive/Logs'
#df1 = log_file_to_pandas('/content/gdrive/MyDrive/Logs/caliban_kpmurphy_20210208_194505_1.log')
logs_df = parse.parse_logs(logdir)
display(logs_df.sample(n=5))
print(parse.get_log_messages(logs_df, 1))
print(parse.get_log_messages(logs_df, 2))
```
```
%matplotlib inline
```
Word Embeddings: Encoding Lexical Semantics
===========================================
Word embeddings are dense vectors of real numbers, one per word in your
vocabulary. In NLP, it is almost always the case that your features are
words! But how should you represent a word in a computer? You could
store its ascii character representation, but that only tells you what
the word *is*, it doesn't say much about what it *means* (you might be
able to derive its part of speech from its affixes, or properties from
its capitalization, but not much). Even more, in what sense could you
combine these representations? We often want dense outputs from our
neural networks, where the inputs are $|V|$ dimensional, where
$V$ is our vocabulary, but often the outputs are only a few
dimensional (if we are only predicting a handful of labels, for
instance). How do we get from a massive dimensional space to a smaller
dimensional space?
How about instead of ascii representations, we use a one-hot encoding?
That is, we represent the word $w$ by
\begin{align}\overbrace{\left[ 0, 0, \dots, 1, \dots, 0, 0 \right]}^\text{|V| elements}\end{align}
where the 1 is in a location unique to $w$. Any other word will
have a 1 in some other location, and a 0 everywhere else.
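
To make this concrete, here is a tiny sketch of one-hot encoding over a toy vocabulary (the vocabulary and example word are invented for illustration):

```python
# Toy vocabulary, for illustration only
vocab = ["the", "mathematician", "ran", "to", "store"]
word_to_ix = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    """Return a |V|-dimensional vector with a single 1 at the word's index."""
    vec = [0] * len(vocab)
    vec[word_to_ix[word]] = 1
    return vec

print(one_hot("ran"))  # [0, 0, 1, 0, 0]
```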
There is an enormous drawback to this representation, besides just how
huge it is. It basically treats all words as independent entities with
no relation to each other. What we really want is some notion of
*similarity* between words. Why? Let's see an example.
Suppose we are building a language model. Suppose we have seen the
sentences
* The mathematician ran to the store.
* The physicist ran to the store.
* The mathematician solved the open problem.
in our training data. Now suppose we get a new sentence never before
seen in our training data:
* The physicist solved the open problem.
Our language model might do OK on this sentence, but wouldn't it be much
better if we could use the following two facts:
* We have seen mathematician and physicist in the same role in a sentence. Somehow they
have a semantic relation.
* We have seen mathematician in the same role in this new unseen sentence
as we are now seeing physicist.
and then infer that physicist is actually a good fit in the new unseen
sentence? This is what we mean by a notion of similarity: we mean
*semantic similarity*, not simply having similar orthographic
representations. It is a technique to combat the sparsity of linguistic
data, by connecting the dots between what we have seen and what we
haven't. This example of course relies on a fundamental linguistic
assumption: that words appearing in similar contexts are related to each
other semantically. This is called the `distributional
hypothesis <https://en.wikipedia.org/wiki/Distributional_semantics>`__.
Getting Dense Word Embeddings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
How can we solve this problem? That is, how could we actually encode
semantic similarity in words? Maybe we think up some semantic
attributes. For example, we see that both mathematicians and physicists
can run, so maybe we give these words a high score for the "is able to
run" semantic attribute. Think of some other attributes, and imagine
what you might score some common words on those attributes.
If each attribute is a dimension, then we might give each word a vector,
like this:
\begin{align}q_\text{mathematician} = \left[ \overbrace{2.3}^\text{can run},
\overbrace{9.4}^\text{likes coffee}, \overbrace{-5.5}^\text{majored in Physics}, \dots \right]\end{align}
\begin{align}q_\text{physicist} = \left[ \overbrace{2.5}^\text{can run},
\overbrace{9.1}^\text{likes coffee}, \overbrace{6.4}^\text{majored in Physics}, \dots \right]\end{align}
Then we can get a measure of similarity between these words by doing:
\begin{align}\text{Similarity}(\text{physicist}, \text{mathematician}) = q_\text{physicist} \cdot q_\text{mathematician}\end{align}
Although it is more common to normalize by the lengths:
\begin{align}\text{Similarity}(\text{physicist}, \text{mathematician}) = \frac{q_\text{physicist} \cdot q_\text{mathematician}}
   {\| q_\text{physicist} \| \| q_\text{mathematician} \|} = \cos (\phi)\end{align}
Where $\phi$ is the angle between the two vectors. That way,
extremely similar words (words whose embeddings point in the same
direction) will have similarity 1. Extremely dissimilar words should
have similarity -1.
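
Using the hand-crafted attribute vectors above (truncated to the three attributes shown, so the numbers are purely illustrative), the cosine similarity can be computed directly:

```python
import math

# The three attribute values shown above for each word (illustrative numbers)
q_mathematician = [2.3, 9.4, -5.5]
q_physicist = [2.5, 9.1, 6.4]

dot = sum(a * b for a, b in zip(q_mathematician, q_physicist))
norm_m = math.sqrt(sum(a * a for a in q_mathematician))
norm_p = math.sqrt(sum(b * b for b in q_physicist))
similarity = dot / (norm_m * norm_p)
print(similarity)  # roughly 0.44: similar, but far from identical
```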
You can think of the sparse one-hot vectors from the beginning of this
section as a special case of these new vectors we have defined, where
each word basically has similarity 0, and we gave each word some unique
semantic attribute. These new vectors are *dense*, which is to say their
entries are (typically) non-zero.
But these new vectors are a big pain: you could think of thousands of
different semantic attributes that might be relevant to determining
similarity, and how on earth would you set the values of the different
attributes? Central to the idea of deep learning is that the neural
network learns representations of the features, rather than requiring
the programmer to design them herself. So why not just let the word
embeddings be parameters in our model, and then be updated during
training? This is exactly what we will do. We will have some *latent
semantic attributes* that the network can, in principle, learn. Note
that the word embeddings will probably not be interpretable. That is,
although with our hand-crafted vectors above we can see that
mathematicians and physicists are similar in that they both like coffee,
if we allow a neural network to learn the embeddings and see that both
mathematicians and physicists have a large value in the second
dimension, it is not clear what that means. They are similar in some
latent semantic dimension, but this probably has no interpretation to
us.
In summary, **word embeddings are a representation of the *semantics* of
a word, efficiently encoding semantic information that might be relevant
to the task at hand**. You can embed other things too: part of speech
tags, parse trees, anything! The idea of feature embeddings is central
to the field.
Word Embeddings in Pytorch
~~~~~~~~~~~~~~~~~~~~~~~~~~
Before we get to a worked example and an exercise, a few quick notes
about how to use embeddings in Pytorch and in deep learning programming
in general. Similar to how we defined a unique index for each word when
making one-hot vectors, we also need to define an index for each word
when using embeddings. These will be keys into a lookup table. That is,
embeddings are stored as a $|V| \times D$ matrix, where $D$
is the dimensionality of the embeddings, such that the word assigned
index $i$ has its embedding stored in the $i$'th row of the
matrix. In all of my code, the mapping from words to indices is a
dictionary named word\_to\_ix.
The module that allows you to use embeddings is torch.nn.Embedding,
which takes two arguments: the vocabulary size, and the dimensionality
of the embeddings.
To index into this table, you must use torch.LongTensor (since the
indices are integers, not floats).
```
# Author: Robert Guthrie
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(1)
word_to_ix = {"hello": 0, "world": 1}
embeds = nn.Embedding(2, 5) # 2 words in vocab, 5 dimensional embeddings
lookup_tensor = torch.tensor([word_to_ix["hello"]], dtype=torch.long)
hello_embed = embeds(lookup_tensor)
print(hello_embed)
```
An Example: N-Gram Language Modeling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Recall that in an n-gram language model, given a sequence of words
$w$, we want to compute
\begin{align}P(w_i | w_{i-1}, w_{i-2}, \dots, w_{i-n+1} )\end{align}
Where $w_i$ is the ith word of the sequence.
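
For contrast with the neural model below, the same conditional probability can be estimated by simple counting. This maximum-likelihood trigram sketch uses an invented toy corpus:

```python
from collections import Counter

# Invented toy corpus, for illustration only
words = "the cat ran the cat sat".split()

trigram_counts = Counter(zip(words, words[1:], words[2:]))
bigram_counts = Counter(zip(words, words[1:]))

def p(word, context):
    """MLE estimate of P(word | context), where context = (w_{i-2}, w_{i-1})."""
    return trigram_counts[context + (word,)] / bigram_counts[context]

print(p("ran", ("the", "cat")))  # 0.5: 'the cat' is followed once by 'ran', once by 'sat'
```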
In this example, we will compute the loss function on some training
examples and update the parameters with backpropagation.
```
CONTEXT_SIZE = 2
EMBEDDING_DIM = 10
# We will use Shakespeare Sonnet 2
test_sentence = """When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a totter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.""".split()
# we should tokenize the input, but we will ignore that for now
# build a list of tuples. Each tuple is ([ word_i-2, word_i-1 ], target word)
trigrams = [([test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2])
for i in range(len(test_sentence) - 2)]
# print the first 3, just so you can see what they look like
print(trigrams[:3])
vocab = set(test_sentence)
word_to_ix = {word: i for i, word in enumerate(vocab)}
class NGramLanguageModeler(nn.Module):
def __init__(self, vocab_size, embedding_dim, context_size):
super(NGramLanguageModeler, self).__init__()
self.embeddings = nn.Embedding(vocab_size, embedding_dim)
self.linear1 = nn.Linear(context_size * embedding_dim, 128)
self.linear2 = nn.Linear(128, vocab_size)
def forward(self, inputs):
embeds = self.embeddings(inputs).view((1, -1))
out = F.relu(self.linear1(embeds))
out = self.linear2(out)
log_probs = F.log_softmax(out, dim=1)
return log_probs
losses = []
loss_function = nn.NLLLoss()
model = NGramLanguageModeler(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE)
optimizer = optim.SGD(model.parameters(), lr=0.001)
for epoch in range(10):
total_loss = 0
for context, target in trigrams:
# Step 1. Prepare the inputs to be passed to the model (i.e, turn the words
# into integer indices and wrap them in tensors)
context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)
# Step 2. Recall that torch *accumulates* gradients. Before passing in a
# new instance, you need to zero out the gradients from the old
# instance
model.zero_grad()
# Step 3. Run the forward pass, getting log probabilities over next
# words
log_probs = model(context_idxs)
# Step 4. Compute your loss function. (Again, Torch wants the target
# word wrapped in a tensor)
loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))
# Step 5. Do the backward pass and update the gradient
loss.backward()
optimizer.step()
# Get the Python number from a 1-element Tensor by calling tensor.item()
total_loss += loss.item()
losses.append(total_loss)
print(losses) # The loss decreased every iteration over the training data!
```
Exercise: Computing Word Embeddings: Continuous Bag-of-Words
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Continuous Bag-of-Words model (CBOW) is frequently used in NLP deep
learning. It is a model that tries to predict words given the context of
a few words before and a few words after the target word. This is
distinct from language modeling, since CBOW is not sequential and does
not have to be probabilistic. Typically, CBOW is used to quickly train
word embeddings, and these embeddings are used to initialize the
embeddings of some more complicated model. Usually, this is referred to
as *pretraining embeddings*. It almost always helps performance by a couple
of percent.
The CBOW model is as follows. Given a target word $w_i$ and an
$N$ context window on each side, $w_{i-1}, \dots, w_{i-N}$
and $w_{i+1}, \dots, w_{i+N}$, referring to all context words
collectively as $C$, CBOW tries to minimize
\begin{align}-\log p(w_i | C) = -\log \text{Softmax}(A(\sum_{w \in C} q_w) + b)\end{align}
where $q_w$ is the embedding for word $w$.
Implement this model in Pytorch by filling in the class below. Some
tips:
* Think about which parameters you need to define.
* Make sure you know what shape each operation expects. Use .view() if you need to
reshape.
```
CONTEXT_SIZE = 2 # 2 words to the left, 2 to the right
raw_text = """We are about to study the idea of a computational process.
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules
called a program. People create programs to direct processes. In effect,
we conjure the spirits of the computer with our spells.""".split()
# By deriving a set from `raw_text`, we deduplicate the array
vocab = set(raw_text)
vocab_size = len(vocab)
word_to_ix = {word: i for i, word in enumerate(vocab)}
data = []
for i in range(2, len(raw_text) - 2):
context = [raw_text[i - 2], raw_text[i - 1],
raw_text[i + 1], raw_text[i + 2]]
target = raw_text[i]
data.append((context, target))
print(data[:5])
class CBOW(nn.Module):
def __init__(self):
pass
def forward(self, inputs):
pass
# create your model and train. here are some functions to help you make
# the data ready for use by your module
def make_context_vector(context, word_to_ix):
idxs = [word_to_ix[w] for w in context]
return torch.tensor(idxs, dtype=torch.long)
make_context_vector(data[0][0], word_to_ix) # example
```
```
!pip install -qq tensorflow
!pip install -qq tensor2tensor
!pip install -qq pydub
!apt-get -qq update
!apt-get -qq install -y ffmpeg
!apt-get -qq install -y sox
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import os
import collections
import base64
import cStringIO
import pydub
import shutil
from scipy.io import wavfile
import IPython
import google.colab
from tensor2tensor import models
from tensor2tensor import problems
from tensor2tensor.layers import common_layers
from tensor2tensor.utils import trainer_lib
from tensor2tensor.utils import t2t_model
from tensor2tensor.utils import registry
from tensor2tensor.utils import metrics
# Enable TF Eager execution
from tensorflow.contrib.eager.python import tfe
tfe.enable_eager_execution()
# Other setup
Modes = tf.estimator.ModeKeys
# Setup some directories
data_dir = os.path.expanduser("~/t2t/data")
tmp_dir = os.path.expanduser("~/t2t/tmp")
train_dir = os.path.expanduser("~/t2t/train")
checkpoint_dir = os.path.expanduser("~/t2t/checkpoints")
tf.gfile.MakeDirs(data_dir)
tf.gfile.MakeDirs(tmp_dir)
tf.gfile.MakeDirs(train_dir)
tf.gfile.MakeDirs(checkpoint_dir)
gs_ckpt_dir = "gs://tensor2tensor-checkpoints/"
```
### Define problem, hparams, model, encoder and decoder
Definition of this model (as well as many more) can be found on tensor2tensor github [page](https://github.com/tensorflow/tensor2tensor).
```
problem_name = "librispeech_clean"
asr_problem = problems.problem(problem_name)
encoders = asr_problem.feature_encoders(None)
model_name = "transformer"
hparams_set = "transformer_librispeech_tpu"
hparams = trainer_lib.create_hparams(hparams_set,data_dir=data_dir, problem_name=problem_name)
asr_model = registry.model(model_name)(hparams, Modes.PREDICT)
def encode(x):
waveforms = encoders["waveforms"].encode(x)
encoded_dict = asr_problem.preprocess_example({"waveforms":waveforms, "targets":[]}, Modes.PREDICT, hparams)
return {"inputs" : tf.expand_dims(encoded_dict["inputs"], 0), "targets" : tf.expand_dims(encoded_dict["targets"], 0)}
def decode(integers):
integers = list(np.squeeze(integers))
    if 1 in integers:
        integers = integers[:integers.index(1)]  # truncate at the EOS token (id 1)
return encoders["targets"].decode(np.squeeze(integers))
```
### Define path to checkpoint
In this demo we are using a pretrained model.
Instructions for training your own model can be found in the [tutorial](https://github.com/tensorflow/tensor2tensor/blob/master/docs/tutorials/asr_with_transformer.md) on tensor2tensor page.
```
# Copy the pretrained checkpoint locally
ckpt_name = "transformer_asr_180214"
gs_ckpt = os.path.join(gs_ckpt_dir, ckpt_name)
print(gs_ckpt)
!gsutil cp -R {gs_ckpt} {checkpoint_dir}
ckpt_path = tf.train.latest_checkpoint(os.path.join(checkpoint_dir, ckpt_name))
ckpt_path
```
### Define transcribe function
```
# Restore and transcribe!
def transcribe(inputs):
encoded_inputs = encode(inputs)
with tfe.restore_variables_on_create(ckpt_path):
model_output = asr_model.infer(encoded_inputs, beam_size=2, alpha=0.6, decode_length=1)["outputs"]
return decode(model_output)
def play_and_transcribe(inputs):
waveforms = encoders["waveforms"].encode(inputs)
IPython.display.display(IPython.display.Audio(data=waveforms, rate=16000))
return transcribe(inputs)
```
# Decoding prerecorded examples
You can upload any .wav files. They will be transcribed only if their frame rate matches Librispeech's frame rate (16 kHz).
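
Before uploading, a file's frame rate can be checked with the standard-library `wave` module (a sketch; `check.wav` below is just a tiny generated test file):

```python
import wave

def has_librispeech_rate(path, expected_rate=16000):
    # Librispeech audio is sampled at 16 kHz; files at other rates need
    # resampling first (e.g. with sox or pydub)
    wf = wave.open(path, 'rb')
    try:
        return wf.getframerate() == expected_rate
    finally:
        wf.close()

# quick self-check on a tiny generated 16 kHz mono file
wf = wave.open('check.wav', 'wb')
wf.setnchannels(1)
wf.setsampwidth(2)
wf.setframerate(16000)
wf.writeframes(b'\x00\x00' * 160)
wf.close()
print(has_librispeech_rate('check.wav'))  # True
```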
```
uploaded = google.colab.files.upload()
prerecorded_messages = []
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
mem_file = cStringIO.StringIO(uploaded[fn])
save_filename = os.path.join(tmp_dir, fn)
with open(save_filename, 'w') as fd:
mem_file.seek(0)
shutil.copyfileobj(mem_file, fd)
prerecorded_messages.append(save_filename)
for inputs in prerecorded_messages:
outputs = play_and_transcribe(inputs)
print("Inputs: %s" % inputs)
print("Outputs: %s" % outputs)
```
# Recording your own examples
```
# Records webm file and converts
def RecordNewAudioSample(filename=None, webm_filename=None):
"""Args:
filename - string, path for storing wav file
webm_filename - string, path for storing webm file
Returns:
string - path where wav file was saved. (=filename if specified)
"""
# Create default filenames in tmp_dir if not specified.
if not filename:
filename = os.path.join(tmp_dir, "recording.wav")
if not webm_filename:
webm_filename = os.path.join(tmp_dir, "recording.webm")
# Record webm file form colab.
audio = google.colab._message.blocking_request('user_media', {"audio":True, "video":False, "duration":-1}, timeout_sec=600)
#audio = frontend.RecordMedia(True, False)
# Convert the recording into in_memory file.
music_mem_file = cStringIO.StringIO(
base64.decodestring(audio[audio.index(',')+1:]))
# Store webm recording in webm_filename. Storing is necessary for conversion.
with open(webm_filename, 'w') as fd:
music_mem_file.seek(0)
shutil.copyfileobj(music_mem_file, fd)
# Open stored file and save it as wav with sample_rate=16000.
pydub.AudioSegment.from_file(webm_filename, codec="opus"
).set_frame_rate(16000).export(out_f=filename,
format="wav")
return filename
# Record the sample
my_sample_filename = RecordNewAudioSample()
print(my_sample_filename)
print(play_and_transcribe(my_sample_filename))
```
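The in-memory decoding above relies on Python 2's `cStringIO` and the deprecated `base64.decodestring`. A Python 3 sketch of the same step — splitting the data URL returned by the recorder at its first comma and decoding the base64 payload to bytes — might look like this (the sample data URL below is made up for illustration):

```python
import base64
import io

def data_url_to_bytes(data_url):
    """Decode the base64 payload of a 'data:...;base64,...' URL."""
    # Everything after the first comma is the base64-encoded payload.
    payload = data_url[data_url.index(',') + 1:]
    return base64.b64decode(payload)

# Illustrative data URL (a real recording would carry webm audio bytes).
fake_audio = base64.b64encode(b'RIFF....WAVE').decode('ascii')
audio_url = 'data:audio/webm;base64,' + fake_audio
raw = data_url_to_bytes(audio_url)
mem_file = io.BytesIO(raw)  # binary in-memory file, analogous to cStringIO above
```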
```
import pandas as pd
from os import listdir
from os.path import isfile, join
import matplotlib.pyplot as plt
from ipyleaflet import *
import json
import requests
from IPython.display import clear_output
from ipywidgets import HTML
onlyfiles = [f for f in listdir("spreadsheets/") if isfile(join("spreadsheets/", f))]
onlyfiles
allfiles = {}
for a in onlyfiles:
if ".xlsx" in a:
allfiles[a.split(' ', 1)[0]] = pd.ExcelFile("spreadsheets/" + a)
if ".csv" in a:
print(a)
allfiles[a.split(' ', 1)[0]] = pd.read_csv("spreadsheets/" + a, encoding='ISO-8859-1')
alldata = {}
for key, b in allfiles.items():
if type(b) is pd.ExcelFile:
alldata[key] = {}
for a in b.sheet_names:
alldata[key][a] = b.parse(a)
if type(b) is pd.DataFrame:
alldata[key] = {}
alldata[key][key] = b
print(alldata.keys())
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
a = alldata['Drinking']["Table3_AllWellLocations"]
b = alldata["Public"]['Public']
print("Drinking data - well locations (found online) has shape: " + str(a.shape))
print("Public Potable water (Greg gave us) has shape " + str(b.shape))
c = a.merge(b, left_on='WATER_SYSTEM_NO', right_on='Water System No ')
c.keys()
c.head(100)
plt.scatter(c["LATITUDE_MEASURE"], c["LONGITUDE_MEASURE"])
plt.show()
removed_null_na = c.dropna(subset=['LATITUDE_MEASURE', 'LONGITUDE_MEASURE'])
removed_null_na = removed_null_na[(removed_null_na['LATITUDE_MEASURE'] != 0) & (removed_null_na['LONGITUDE_MEASURE'] != 0)]
print( str(len(removed_null_na)) + " out of " + str(len(c)) + " rows have complete and nonzero latitude and longitude" )
primary_merged = removed_null_na.copy(deep = True)
abandoned = primary_merged[primary_merged['FACILITY_NAME'].str.contains("ABANDONED|DESTROYED")]
print(str(len(abandoned)) + " out of " + str(len(primary_merged)) + " wells are abandoned or destroyed")
primary_merged = primary_merged[~(primary_merged['FACILITY_NAME'].str.contains("ABANDONED|DESTROYED"))]
print("updated # rows:" + str(len(primary_merged)))
fill_dict = {'Residential Population': 0, 'Non Transient Population': 0,
'Transient Population': 0, 'Total Population': 0, 'Number of Service Connections Agricultural': 0,
'Number of COMBINED Service Connections (CB)': 0, 'Number of Commercial (CM) Service Connections': 0,
'Numer of Institutional Service Conections': 0, 'Number of Residential Service Connections': 0,
'Total Number of Service Connections': 0 }
primary_merged = primary_merged.fillna(value = fill_dict)
primary_merged.head(20)
info_widgets = HTML(
placeholder="Here's some useful info"
)
def summarize(df, county_name):
mapping = {}
df0 = df.groupby(['WATER_SYSTEM_NO']).mean()
mapping['Total Residential Population Served: '] = df0['Residential Population'].sum()
mapping['Total Non Transient Population Served: '] = df0['Non Transient Population'].sum()
mapping['Total Transient Population Served: '] = df0['Transient Population'].sum()
mapping['Total Population Served: '] = df0['Total Population'].sum()
mapping['Average Number of Service Connections: '] = df0['Total Number of Service Connections'].mean().astype(int)
return pd.DataFrame.from_dict(mapping, orient='index', columns= ["County Name : " + county_name])
m = Map(center=(37.871593, -122.272743), zoom=5)
with open('ca_boundary.json', 'r') as f:
data = json.load(f)
geo_json = GeoJSON(data=data, style = {'color': 'red', 'opacity':0.5, 'weight':1.1, 'dashArray':'5', 'fillOpacity':0})
m.add_layer(geo_json)
m.add_control(FullScreenControl())
prev_marker_layer = None
def handle_click(**kwargs):
if kwargs.get('type') == 'click':
click_lat_long = kwargs.get('coordinates')
request_query = "https://geo.fcc.gov/api/census/area?lat=" + \
str(click_lat_long[0]) + "&lon=" + str(click_lat_long[1]) +"&format=json"
response_json = json.loads(requests.get(request_query).text)
if 'results' in response_json.keys():
if response_json['results'] != []:
state_name = response_json['results'][0]['state_name']
county_name = response_json['results'][0]['county_name']
end = "<p> Sorry, but I have no data about states other than California </p>" if state_name != 'California' else ""
info_widgets.value = "<p>The latitude, longitude for your click: <b>" + str(tuple(click_lat_long)) + "</b><br>" + \
"This corresponds to the county: <b>" + county_name + "</b> in " + str(state_name) + "</p>" + end
if state_name == 'California':
global prev_marker_layer
if prev_marker_layer != None:
m.remove_layer(prev_marker_layer)
county_name = county_name.upper()
want_columns = ['WATER_SYSTEM_NO', 'Water System Name','FACILITY_NAME','LATITUDE_MEASURE',
'LONGITUDE_MEASURE', 'Primary Water Source Type', 'REG_AGENCY',
'Water System Status', 'Residential Population', 'Non Transient Population',
'Transient Population', 'Total Population', 'Number of Service Connections Agricultural',
'Number of COMBINED Service Connections (CB)', 'Number of Commercial (CM) Service Connections',
'Numer of Institutional Service Conections', 'Number of Residential Service Connections','Total Number of Service Connections']
found_rows = primary_merged[primary_merged['Principal County Served'] == county_name][want_columns]
if len(found_rows) > 0:
summary = summarize(found_rows, county_name)
info_widgets.value += "<p> I found <b>"+ str(len(found_rows))+ "</b> water facilities that belong to <b>" + str(len(summary))+ "</b> water systems</p>"
info_widgets.value += "<p> Here's some summary statistics for the water systems: </p>" + \
summary.to_html()
info_widgets.value += "<p> Try clicking on the markers for more info about each water facility </p>"
all_markers = []
for i in range(len(found_rows)):
found_row = found_rows.iloc[i]
source_lat, source_long = found_row['LATITUDE_MEASURE'], found_row['LONGITUDE_MEASURE']
well_info = HTML()
found_row = pd.DataFrame(found_row)
found_row.style.set_properties(**{'text-align': 'right'})
well_info.value = found_row.to_html(header = False)
all_markers.append(Marker(location= [float(source_lat), float(source_long)], popup = well_info))
new_marker_layer = MarkerCluster(markers = all_markers)
prev_marker_layer = new_marker_layer
m.add_layer(new_marker_layer)
m.on_interaction(handle_click)
display(info_widgets)
display(m)
```
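The merge-and-filter steps in the cell above can be sketched on toy data; the column names (including the trailing space in `'Water System No '`, which matches the real spreadsheet) are reproduced from the notebook, while the values are invented:

```python
import pandas as pd

wells = pd.DataFrame({
    'WATER_SYSTEM_NO': ['CA001', 'CA002', 'CA003'],
    'LATITUDE_MEASURE': [37.9, 0.0, 36.5],
    'LONGITUDE_MEASURE': [-122.3, 0.0, -119.8],
})
systems = pd.DataFrame({
    'Water System No ': ['CA001', 'CA002'],  # trailing space, as in the source sheet
    'Total Population': [1200, 300],
})

# Inner join, mirroring a.merge(b, left_on=..., right_on=...) above;
# CA003 has no match in `systems` and is dropped.
merged = wells.merge(systems, left_on='WATER_SYSTEM_NO', right_on='Water System No ')

# Drop rows with missing or zero coordinates, as in the notebook;
# CA002 sits at (0, 0) and is removed.
clean = merged.dropna(subset=['LATITUDE_MEASURE', 'LONGITUDE_MEASURE'])
clean = clean[(clean['LATITUDE_MEASURE'] != 0) & (clean['LONGITUDE_MEASURE'] != 0)]
```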
# Game of Life
[Game of Life](https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life), introduced by John H. Conway in 1970, is a 2D cellular automaton that simulates a world populated by cells. The world is a 2D square grid that is, in principle, infinite. Each grid position represents a cell that can be either alive or dead. The game is played over a number of generations. To compute the next generation, each grid position is considered independently. The rules are straightforward:
* If a cell in generation $t$ is alive,
  * it is alive in generation $t + 1$ if it has either two or three live neighbours in generation $t$;
* it is dead in generation $t + 1$ otherwise.
* If a cell in generation $t$ is dead,
  * it is alive in generation $t + 1$ if it has exactly three live neighbours in generation $t$;
* it is dead in generation $t + 1$ otherwise.
Each cell has eight neighbours. Typically, the Game of Life world is represented by an $n \times n$ array, and periodic boundary conditions are applied to simulate an infinite world.
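The rules above can be captured in a single function that maps a cell's state and live-neighbour count to its next state. This minimal sketch is independent of any grid representation:

```python
def next_state(alive, nr_neighbours):
    """Apply Conway's rules: survival on 2 or 3 live neighbours, birth on 3."""
    if alive:
        return 1 if nr_neighbours in (2, 3) else 0
    return 1 if nr_neighbours == 3 else 0

# A live cell with one neighbour dies; a dead cell with three is born.
```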
## Required imports
```
from IPython.display import HTML
from collections import Counter
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
%matplotlib inline
import numpy as np
```
## World representation
A Game of Life world will be represented by an array of integers. Each array element represents a cell that can either be dead (0) or alive (1). First, we define a class that represents a world and that is initialized from a given numpy array. This will serve as a base class for classes that implement specific initializations. Typically, those should override `__init__`. The `World` base class defines all methods to compute the next generation and get information on the world's state, as well as a string representation.
```
class World:
'''Class representing a Game of Life world, intended to be subclassed
for specific initialization strategies.'''
def __init__(self, cells):
'''Initialize a world using the cells provided
Parameters
----------
cells : np.ndarray
2D numpy array representing the world, 1 represents a cell that
is alive, 0 represents a dead cell.
'''
self._world = np.copy(cells.astype(np.int8))
self._tmp_world = np.empty_like(self._world)
@property
def shape(self):
'''Get the shape of the world
Returns
-------
tuple
shape of the world as a 2-tuple of int
'''
return self._world.shape
@property
def nr_alive(self):
'''Get the number of cells that are alive
Returns
-------
int
number of cells alive in the world
'''
return np.sum(self._world)
@property
def cells(self):
'''Get the world as a 2D numpy array
Returns
-------
np.ndarray
2D numpy array of 0 and 1 int values, where 1 represents
a cell that is alive, and 0 one that is dead
'''
return np.copy(self._world)
@property
def fraction_alive(self):
'''Get the fraction of cells that are alive in the world
Returns
-------
float
fraction of cells that are alive
'''
return np.sum(self._world)/(self.shape[0]*self.shape[1])
def is_alive(self, i, j):
return self._world[i, j] == 1
    def nr_neighbours(self, i, j):
        '''Count the live neighbours of cell (i, j), using periodic
        boundary conditions.'''
        up = (i + self.shape[0] - 1) % self.shape[0]
        down = (i + 1) % self.shape[0]
        left = (j + self.shape[1] - 1) % self.shape[1]
        right = (j + 1) % self.shape[1]
        return (self._world[up, left] + self._world[up, j] +
                self._world[up, right] +
                self._world[i, left] + self._world[i, right] +
                self._world[down, left] + self._world[down, j] +
                self._world[down, right])

    def next_generation(self):
        '''Compute the world's next generation
        '''
        for i in range(self.shape[0]):
            for j in range(self.shape[1]):
                nr_nb = self.nr_neighbours(i, j)
if self.is_alive(i, j):
self._tmp_world[i, j] = 1 if nr_nb == 2 or nr_nb == 3 else 0
else:
self._tmp_world[i, j] = 1 if nr_nb == 3 else 0
        # Swap the buffers; simply assigning self._tmp_world would make both
        # names alias one array, so later steps would read cells they had
        # already overwritten.
        self._world, self._tmp_world = self._tmp_world, self._world
def __repr__(self):
return '\n'.join(' '.join(f'{self._world[i, j]:1d}'
for j in range(self.shape[1]))
for i in range(self.shape[0]))
```
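The `next_generation` method above visits each cell in a Python loop. The same update can be written in vectorized form with `np.roll`, which implements the periodic boundary conditions directly; this is an alternative sketch, not the class's implementation:

```python
import numpy as np

def next_generation_vectorized(world):
    """One Game of Life step on a 2D 0/1 array with periodic boundaries."""
    # Sum the eight shifted copies of the world to count live neighbours.
    nr_nb = sum(np.roll(np.roll(world, di, axis=0), dj, axis=1)
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0))
    # Birth on 3 neighbours, survival on 2 or 3.
    return ((nr_nb == 3) | ((world == 1) & (nr_nb == 2))).astype(np.int8)

# A blinker (three cells in a row) oscillates with period 2 on a 5x5 torus.
blinker = np.zeros((5, 5), dtype=np.int8)
blinker[2, 1:4] = 1
```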
### Random world
The `RandomWorld` class inherits from the `World` base class, and initializes an $n \times n$ world randomly so that approximately a fraction $f_{\rm alive}$ of its cells is alive.
```
class RandomWorld(World):
'''Class representing a world that is initialized randomly so that a given
fraction of cells is alive. Note this is not necessarily exact.'''
def __init__(self, n, f_alive):
        '''Create a random world with a given fraction of cells that are alive.
Parameters
----------
n : int
size of the n*n world
f_alive : float
fraction of cells that are alive (between 0.0 and 1.0)
'''
super().__init__(np.random.choice(np.array([0, 1], dtype=np.int8),
(n, n), p=(1 - f_alive, f_alive)))
```
Create a world and run a generation.
```
world = RandomWorld(10, 0.4)
world
world.next_generation()
print(world)
```
### Patch world
A second, interesting way to initialize a world is from a numpy array representing a $p_0 \times p_1$ patch in the $n \times n$ world, where, obviously, $p_0 \le n$ and $p_1 \le n$.
```
class PatchWorld(World):
'''Class that is initialized with a patch given as a 2D numpy array. All
other cells are dead.'''
def __init__(self, n, patch):
        '''Create a world with a given initial patch, all
other cells will be dead.
Parameters
----------
n : int
size of the n*n world
patch : np.ndarray
2D numpy array containing the part of the world to be
initialized; patch.shape[0] <= n, patch.shape[1] <= n,
and patch should contain 1 for a cell that is alive, 0
for a cell that is dead
'''
world = np.zeros((n, n))
world[0:patch.shape[0], 0:patch.shape[1]] = patch
super().__init__(world)
world = PatchWorld(10, np.array([[1, 0, 0], [1, 1, 0]]))
world
```
## Simulation runner
We define a class to conveniently perform a complete simulation. At most `max_gen` generations are computed, but the computation stops as soon as a cycle is detected.
```
class WorldRunner:
'''Class to run a simulation of the given world over a maximum of
generations. The simulation will stop as soon as a cycle is detected.'''
def __init__(self, world, max_gen, early_stopping=True):
'''Initialize the run with the initial world and the maximum
number of generations to simulate.
Parameters
----------
world : World
initial world to run the simulation on
max_gen : int
maximum number of generations to simulate
early_stopping : bool
if True, stop when a cycle is detected, otherwise,
            continue for max_gen generations
'''
self._world = world
self._max_gen = max_gen
self._early_stopping = early_stopping
self._cycle_length = None
self._hist = [self._world.cells]
@property
def max_gen(self):
'''Get the maximum generation for this simulation
Returns
-------
int
maximum number of generations for this run
'''
return self._max_gen
@property
def nr_generations(self):
        '''Get the number of generations computed; note that this may be
        less than the maximum number of generations if a cycle was detected.
Returns
-------
int
number of generations computed in this run
'''
return len(self._hist) - 1
def has_cycle(self):
'''Check whether a cycle was detected.
Returns
-------
bool
True if a cycle was detected, False otherwise
'''
return self._cycle_length is not None
@property
def cycle_length(self):
'''Get the cycle length, if any.
Returns
-------
int
length of the detected cycle, None if no cycle was found.
'''
return self._cycle_length
@property
def history(self):
'''Get the world history.
Returns
-------
list
a list of the generations of this world, represented as 2D
numpy arrays.
'''
return self._hist
def _has_cycle(self):
        # Include -len(self._hist) so the initial state is compared as well.
        for gen in range(-2, -len(self._hist) - 1, -1):
if np.all(self._hist[-1] == self._hist[gen]):
self._cycle_length = -gen - 1
return True
return False
def run(self):
'''Run the simulation for the world.
'''
for _ in range(1, self.max_gen + 1):
self._world.next_generation()
self._hist.append(self._world.cells)
if self._has_cycle() and self._early_stopping:
break
```
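`_has_cycle` above rescans the full history after every generation, which is quadratic over a run. A dictionary keyed on each state's bytes finds a repeat in constant time per generation; here is a sketch of that idea, assuming the states are numpy arrays of a common dtype:

```python
import numpy as np

def find_cycle(states):
    """Return (start, length) of the first repeated state, or None.

    states: iterable of 2D numpy arrays (the world history, in order).
    """
    seen = {}  # state bytes -> generation index of first occurrence
    for gen, state in enumerate(states):
        key = state.tobytes()
        if key in seen:
            return seen[key], gen - seen[key]
        seen[key] = gen
    return None

# Example: A -> B -> A repeats with cycle length 2 starting at generation 0.
a = np.array([[1, 0], [0, 0]])
b = np.array([[0, 1], [0, 0]])
```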
Create a world, and run it for a number of generations, then check on the properties.
```
world = RandomWorld(10, 0.3)
runner = WorldRunner(world, 100)
runner.run()
```
The current state of the world can be checked.
```
world
world.fraction_alive
```
Check whether a cycle has been detected, what the cycle length is, and after how many generations it occurred.
```
runner.has_cycle()
runner.cycle_length
runner.nr_generations
```
## Simulation visualization
To gain insight into the Game of Life dynamics, it is useful to visualize the consecutive generations of a world. This can be done using the `FuncAnimation` class provided by matplotlib. Given the setup it requires, it is convenient to wrap its creation in a class.
```
class WorldView:
'''Class for creating an animation of the world's history.'''
def __init__(self, world_runner):
'''Initialize the view object.
Parameters
----------
world_runner : WorldRunner
runner that has completed a simulation to visualize.
'''
self._world_runner = world_runner
self._nr_gen = world_runner.nr_generations
self._figure, self._axes = plt.subplots()
self._axes.get_xaxis().set_visible(False)
self._axes.get_yaxis().set_visible(False)
@property
def figure(self):
return self._figure
def create_animation(self):
'''Create an animation.
Returns
-------
function
function that will visualize the simulation.
'''
return FuncAnimation(self.figure, self.create_animate(),
init_func=self.create_init(),
frames=self._world_runner.nr_generations)
def create_animate(self):
def animate(i):
self._axes.imshow(self._world_runner.history[i])
return animate
def create_init(self):
def init():
self._axes.imshow(self._world_runner.history[0])
return init
world_size = 10
f_alive = 0.3
max_generations = 100
world = RandomWorld(world_size, f_alive)
world_runner = WorldRunner(world, max_generations)
world_runner.run()
world_runner.nr_generations
world_view = WorldView(world_runner)
animation = world_view.create_animation()
HTML(animation.to_jshtml(default_mode='once'))
world
world_runner.cycle_length
```
## Simulation statistics
First, we define a class that is an iterator over randomly initialized worlds. All worlds have the same given size and fraction of cells that are alive.
```
class RandomWorldGenerator:
'''Iterator over randomly initialized worlds.'''
def __init__(self, nr_worlds, size, f_alive):
'''Create an iterator over a given number of worlds, each of the same
size, and (approximately) the same number of cells that are alive.
Parameters
        ----------
nr_worlds : int
number of worlds to generate
size : int
world size
f_alive : float
            fraction of cells that are alive
'''
self._nr_worlds = nr_worlds
self._size = size
self._f_alive = f_alive
def __iter__(self):
self._current = 0
return self
def __next__(self):
if self._current < self._nr_worlds:
self._current += 1
return RandomWorld(self._size, self._f_alive)
else:
raise StopIteration
for world in RandomWorldGenerator(3, 5, 0.3):
print(world, end='\n\n')
```
Next, we define a function to perform a number of simulations and gather statistics on the number of cells that are alive in each generation.
```
def compute_avg_live_cells(world_generator, max_gen):
    '''Average fraction of live cells per generation over all worlds.'''
    nr_alive = np.zeros(max_gen + 1)
    nr_worlds = 0
    for world in world_generator:
        nr_worlds += 1
        world_runner = WorldRunner(world, max_gen, early_stopping=False)
        world_runner.run()
        for i, generation in enumerate(world_runner.history):
            nr_alive[i] += np.sum(generation)
    return nr_alive/(nr_worlds*generation.shape[0]*generation.shape[1])
nr_generations = 100
stats = compute_avg_live_cells(RandomWorldGenerator(nr_worlds=50, size=20, f_alive=0.1), max_gen=nr_generations)
_ = plt.plot(range(nr_generations + 1), stats)
```
A second experiment is to check which cycles arise from initial world configurations that consist of a $p \times p$ patch, where $p \le n$ and $n$ is the size of the world. For a $p \times p$ patch, there are $2^{p^2}$ initial configurations.
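The enumeration can be made concrete by mapping each integer in $[0, 2^{p^2})$ to a patch, reading bit $i \cdot p + j$ as cell $(i, j)$ — the same convention the `PatchGenerator` below uses. A standalone sketch:

```python
import numpy as np

def index_to_patch(idx, p):
    """Decode integer idx into a p-by-p 0/1 patch; bit i*p + j -> cell (i, j)."""
    patch = np.zeros((p, p), dtype=np.int8)
    for i in range(p):
        for j in range(p):
            if idx & (1 << (i*p + j)):
                patch[i, j] = 1
    return patch

# There are 2**(p*p) distinct patches, e.g. 16 for p = 2.
```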
```
class PatchGenerator:
    '''Iterator class for all worlds that are initialized from all combinations of cells
    that are alive or dead in a p by p patch, while all other cells are dead. The number of
    such worlds is 2^(p*p).'''
def __init__(self, size, patch_size):
        '''Initialize the iterator for a given patch size on a given world size
Parameters
----------
size : int
size of the world
patch_size : int
size of the patch, should be less than or equal to size
'''
if size < patch_size:
raise ValueError('patch size should be less or equal to world size')
self._size = size
self._patch_size = patch_size
self._patch_idx = None
def __iter__(self):
self._patch_idx = 0
return self
def _create_patch(self):
patch = np.empty((self._patch_size, self._patch_size))
for i in range(self._patch_size):
for j in range(self._patch_size):
patch[i, j] = 1 if self._patch_idx & (1 << (i*self._patch_size + j)) else 0
return patch
def __next__(self):
if self._patch_idx >= 2**(self._patch_size**2):
raise StopIteration
world = PatchWorld(self._size, self._create_patch())
self._patch_idx += 1
return world
patch_generator = PatchGenerator(3, 2)
for world in patch_generator:
print(world, end='\n\n')
def compute_cycle_count(world_generator, max_gen):
    '''Function to compute statistics on the number of worlds that lead
to cycles of various lengths
Parameters
----------
world_generator : iterator
        Iterator that returns initialized worlds
max_gen : int
        Maximum number of generations to simulate per world
Returns
-------
collections.Counter
        count for each cycle length, for the number of worlds that
contain only dead cells and for worlds for which no cycle
was detected.
'''
cycle_count = Counter()
nr_worlds = 0
for world in world_generator:
nr_worlds += 1
world_runner = WorldRunner(world, max_gen)
world_runner.run()
if world.nr_alive > 0:
if world_runner.has_cycle():
cycle_count[world_runner.cycle_length] += 1
else:
cycle_count['no cycle'] += 1
else:
cycle_count['dead'] += 1
return cycle_count
cycle_count = compute_cycle_count(PatchGenerator(5, 2), 10)
for cycle_length in cycle_count:
print(f'{cycle_length}: {cycle_count[cycle_length]}')
```
# Experiments for Paper
This notebook contains all neural network experiments for the paper. The results are saved as CSV files for independent verification.
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
from nn_src.imports import *
DATA_DIR = '/scratch/srasp/ppnn_data/'
RESULTS_DIR = '/export/home/srasp/repositories/ppnn/results/csv_files/'
def reset_weights(model):
session = K.get_session()
for layer in model.layers:
if hasattr(layer, 'kernel_initializer'):
layer.kernel.initializer.run(session=session)
def ensemble_scores(m, n, x_trn, y_trn, x_test, y_test, **kwargs):
trn_scores, test_scores, preds = [], [], []
for i in tqdm(range(n)):
reset_weights(m)
m.fit(x_trn, y_trn, **kwargs)
trn_scores.append(m.evaluate(x_trn, y_trn, 4096, verbose=0))
test_scores.append(m.evaluate(x_test, y_test, 4096, verbose=0))
preds.append(m.predict(x_test, 4096, verbose=0))
return trn_scores, test_scores, preds
def save_ensemble(preds, test_set, exp_name, save=True):
preds = np.array(preds)
preds[:, :, 1] = np.abs(preds[:, :, 1]) # Make sure std is positive
mean_preds = np.mean(preds, 0)
ens_score = crps_normal(mean_preds[:, 0], mean_preds[:, 1], test_set.targets).mean()
print(f'Ensemble test score = {ens_score}')
if save:
results_df = create_results_df(test_set.date_strs, test_set.station_ids, mean_preds[:, 0], mean_preds[:, 1])
print(f'Saved results in {RESULTS_DIR}{exp_name}.csv')
results_df.to_csv(f'{RESULTS_DIR}{exp_name}.csv')
def get_datasets(pickled_name, train_dates, test_dates=['2016-01-01', '2017-01-01'], aux=False, reload=False):
pickle_fn = f'{DATA_DIR}pickled/{pickled_name}'
if not os.path.exists(pickle_fn) or reload:
var_dict = aux_dict if aux else None
train_set, test_set = get_train_test_sets(
DATA_DIR,
train_dates,
test_dates,
aux_dict=var_dict,
)
# Save pickled dataset
with open(pickle_fn, 'wb') as f:
pickle.dump((train_set, test_set), f)
else:
with open(pickle_fn, 'rb') as f:
train_set, test_set = pickle.load(f)
return train_set, test_set
```
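`crps_normal` used in `save_ensemble` comes from the project's `nn_src.imports`. For reference, the CRPS of a Gaussian forecast has a well-known closed form (Gneiting & Raftery, 2007), sketched here in plain Python; treating this as what the imported function computes is an assumption, not its actual source:

```python
import math

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a N(mu, sigma^2) forecast for observation y:
    sigma * (z*(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi)), z = (y - mu)/sigma."""
    z = (y - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))           # Phi(z)
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)    # phi(z)
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

# A standard-normal forecast of a dead-centre observation scores about 0.234;
# a sharper forecast of the same observation scores lower (better).
```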
## Train 2015
```
train_set, test_set = get_datasets('15_16.pkl', ['2015-01-01', '2016-01-01'], aux=False)
train_set.features.shape, train_set.targets.shape
aux_train_set, aux_test_set = get_datasets('aux_15_16.pkl', ['2015-01-01', '2016-01-01'], aux=True)
n_features = aux_train_set.features.shape[1]; n_features
```
### Fully connected network
```
fc = build_fc_model(2, 2, compile=True, lr=0.1)
fc.summary()
trn_scores, test_scores, preds = ensemble_scores(
fc, 10,
train_set.features, train_set.targets,
test_set.features, test_set.targets,
epochs=30, batch_size=4096, verbose=0,
)
test_scores
save_ensemble(preds, test_set, 'fc_15')
```
### Fully connected network with auxiliary data
```
fc_aux = build_fc_model(n_features, 2, compile=True, lr=0.02)
fc_aux.summary()
trn_scores, test_scores, preds = ensemble_scores(
fc_aux, 10,
aux_train_set.features, aux_train_set.targets,
aux_test_set.features, aux_test_set.targets,
epochs=30, batch_size=1024, verbose=0,
)
test_scores
save_ensemble(preds, aux_test_set, 'fc_aux_15')
```
### Neural network with auxiliary data
```
nn_aux = build_hidden_model(n_features, 2, [32], compile=True, lr=0.02)
nn_aux.summary()
trn_scores, test_scores, preds = ensemble_scores(
nn_aux, 10,
aux_train_set.features, aux_train_set.targets,
aux_test_set.features, aux_test_set.targets,
epochs=15, batch_size=1024, verbose=0,
)
test_scores
save_ensemble(preds, aux_test_set, 'nn_aux_15')
```
### Fully connected network with station embeddings
```
emb_size = 2
max_id = int(np.max([aux_train_set.cont_ids.max(), aux_test_set.cont_ids.max()]))
max_id
fc_emb = build_emb_model(2, 2, [], emb_size, max_id, compile=True, lr=0.02)
fc_emb.summary()
trn_scores, test_scores, preds = ensemble_scores(
fc_emb, 10,
[train_set.features, train_set.cont_ids], train_set.targets,
[test_set.features, test_set.cont_ids], test_set.targets,
epochs=30, batch_size=1024, verbose=0,
)
test_scores
save_ensemble(preds, test_set, 'fc_emb_15')
```
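A station embedding is just a learned lookup table: each integer station ID indexes a row of a weight matrix, and that row is concatenated with the other features before the output layer. A numpy sketch of the forward lookup (the shapes here are illustrative, not the model's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_stations, emb_size, n_features = 5, 2, 3

emb_matrix = rng.normal(size=(n_stations, emb_size))   # learned during training
features = rng.normal(size=(4, n_features))            # batch of 4 samples
station_ids = np.array([0, 3, 3, 1])                   # one ID per sample

# Embedding layer = row lookup; then concatenate with the other inputs.
emb_vectors = emb_matrix[station_ids]                  # shape (4, emb_size)
model_input = np.concatenate([features, emb_vectors], axis=1)
```

Samples from the same station share one embedding vector, which lets the network learn station-specific bias corrections without one-hot encoding hundreds of stations.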
### Fully connected network with auxiliary data and station embeddings
```
emb_size = 2
max_id = int(np.max([aux_train_set.cont_ids.max(), aux_test_set.cont_ids.max()]))
max_id
fc_aux_emb = build_emb_model(n_features, 2, [], emb_size, max_id, compile=True, lr=0.02)
fc_aux_emb.summary()
trn_scores, test_scores, preds = ensemble_scores(
fc_aux_emb, 10,
[aux_train_set.features, aux_train_set.cont_ids], aux_train_set.targets,
[aux_test_set.features, aux_test_set.cont_ids], aux_test_set.targets,
epochs=30, batch_size=1024, verbose=0,
)
test_scores
save_ensemble(preds, aux_test_set, 'fc_aux_emb_15')
```
### Neural net with auxiliary data and station embeddings
```
nn_aux_emb = build_emb_model(n_features, 2, [50], emb_size, max_id, compile=True, lr=0.01)
nn_aux_emb.summary()
trn_scores, test_scores, preds = ensemble_scores(
nn_aux_emb, 10,
[aux_train_set.features, aux_train_set.cont_ids], aux_train_set.targets,
[aux_test_set.features, aux_test_set.cont_ids], aux_test_set.targets,
epochs=30, batch_size=1024, verbose=0,
)
test_scores, np.mean(test_scores), np.std(test_scores)
save_ensemble(preds, aux_test_set, 'nn_aux_emb_15')
```
## Train 2007-2015
Note that the first two days of 2007 are missing.
```
train_set_long, test_set_long = get_datasets('07_16.pkl', ['2007-01-03', '2016-01-01'], aux=False)
train_set_long.features.shape
aux_train_set_long, aux_test_set_long = get_datasets('aux_07_16.pkl', ['2007-01-03', '2016-01-01'], aux=True)
n_features = aux_train_set_long.features.shape[1]; n_features
```
### Fully connected network
```
fc = build_fc_model(2, 2, compile=True, lr=0.1)
fc.summary()
trn_scores, test_scores, preds = ensemble_scores(
fc, 10,
train_set_long.features, train_set_long.targets,
test_set_long.features, test_set_long.targets,
epochs=15, batch_size=4096, verbose=0,
)
test_scores
save_ensemble(preds, test_set_long, 'fc_07-15')
```
### Fully connected network with auxiliary data
```
fc_aux = build_fc_model(n_features, 2, compile=True, lr=0.02)
fc_aux.summary()
trn_scores, test_scores, preds = ensemble_scores(
fc_aux, 10,
aux_train_set_long.features, aux_train_set_long.targets,
aux_test_set_long.features, aux_test_set_long.targets,
epochs=10, batch_size=1024, verbose=0,
)
test_scores
save_ensemble(preds, aux_test_set_long, 'fc_aux_07-15')
```
### Neural network with auxiliary data
```
nn_aux = build_hidden_model(n_features, 2, [64], compile=True, lr=0.02)
nn_aux.summary()
trn_scores, test_scores, preds = ensemble_scores(
nn_aux, 10,
aux_train_set_long.features, aux_train_set_long.targets,
aux_test_set_long.features, aux_test_set_long.targets,
epochs=10, batch_size=1024, verbose=0,
)
test_scores
save_ensemble(preds, aux_test_set_long, 'nn_aux_07-15')
```
### Fully connected network with station embeddings
```
emb_size = 2
max_id = int(np.max([aux_train_set_long.cont_ids.max(), aux_test_set_long.cont_ids.max()]))
max_id
fc_emb = build_emb_model(2, 2, [], emb_size, max_id, compile=True, lr=0.02)
fc_emb.summary()
trn_scores, test_scores, preds = ensemble_scores(
fc_emb, 10,
[train_set_long.features, train_set_long.cont_ids], train_set_long.targets,
[test_set_long.features, test_set_long.cont_ids], test_set_long.targets,
epochs=10, batch_size=1024, verbose=0,
)
test_scores
save_ensemble(preds, test_set_long, 'fc_emb_07-15')
```
### Fully connected network with auxiliary data and station embeddings
```
fc_aux_emb = build_emb_model(n_features, 2, [], emb_size, max_id, compile=True, lr=0.02)
fc_aux_emb.summary()
trn_scores, test_scores, preds = ensemble_scores(
fc_aux_emb, 10,
[aux_train_set_long.features, aux_train_set_long.cont_ids], aux_train_set_long.targets,
[aux_test_set_long.features, aux_test_set_long.cont_ids], aux_test_set_long.targets,
epochs=10, batch_size=1024, verbose=0,
)
test_scores
save_ensemble(preds, aux_test_set_long, 'fc_aux_emb_07-15')
```
### Neural network with auxiliary data and station embeddings
```
nn_aux_emb = build_emb_model(n_features, 2, [512], emb_size, max_id, compile=True, lr=0.002)
nn_aux_emb.summary()
trn_scores, test_scores, preds = ensemble_scores(
nn_aux_emb, 10,
[aux_train_set_long.features, aux_train_set_long.cont_ids], aux_train_set_long.targets,
[aux_test_set_long.features, aux_test_set_long.cont_ids], aux_test_set_long.targets,
epochs=15, batch_size=4096, verbose=0
)
test_scores
save_ensemble(preds, aux_test_set_long, 'nn_aux_emb_07-15')
```
## Sensitivity to training length
```
datasets = {}
datasets['07'] = get_datasets('aux_07_16.pkl', ['2007-01-03', '2016-01-01'], aux=True)
for y in tqdm(range(8, 16)):
yy = str(y).zfill(2)
datasets[yy] = get_datasets(f'aux_{yy}_16.pkl', [f'20{yy}-01-03', '2016-01-01'], aux=True)
fc_scores = []
for y in tqdm(range(7, 16)):
yy = str(y).zfill(2)
fc_aux_emb = build_emb_model(n_features, 2, [], emb_size, max_id, compile=True, lr=0.02)
train_set, test_set = datasets[yy]
fc_aux_emb.fit([train_set.features, train_set.cont_ids], train_set.targets, 1024, 30, verbose=0)
fc_scores.append(fc_aux_emb.evaluate(
[test_set.features, test_set.cont_ids], test_set.targets, 4096, 0))
with open('./fc_scores.pkl', 'wb') as f:
pickle.dump(fc_scores, f)
plt.plot(fc_scores)
```
# Exponentiated Gradient Reduction
Exponentiated gradient reduction is an in-processing technique that reduces fair classification to a sequence of cost-sensitive classification problems, returning a randomized classifier with the lowest empirical error subject to
fair classification constraints. The code for exponentiated gradient reduction wraps the source class
`fairlearn.reductions.ExponentiatedGradient` available in the https://github.com/fairlearn/fairlearn library,
licensed under the MIT License, Copyright Microsoft Corporation.
This version of exponentiated gradient reduction (implemented in `aif360.algorithms`) wraps the sklearn compatible version of exponentiated gradient reduction implemented in `aif360.sklearn`. For a detailed tutorial on sklearn compatible exponentiated gradient reduction see [examples/sklearn/demo_exponentiated_gradient_reduction_sklearn.ipynb](sklearn/demo_exponentiated_gradient_reduction_sklearn.ipynb).
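The demo below repeatedly reports the "difference in mean outcomes" between unprivileged and privileged groups (the statistical parity difference). Stripped of the AIF360 classes, the metric is just a difference of group means; a numpy sketch with made-up labels:

```python
import numpy as np

def mean_difference(labels, protected, favorable=1, privileged_value=1):
    """P(Y = favorable | unprivileged) - P(Y = favorable | privileged)."""
    labels = np.asarray(labels)
    protected = np.asarray(protected)
    unpriv = np.mean(labels[protected != privileged_value] == favorable)
    priv = np.mean(labels[protected == privileged_value] == favorable)
    return unpriv - priv

# Toy data: 1 of 4 unprivileged vs 3 of 4 privileged get the favorable label,
# so the mean difference is 0.25 - 0.75 = -0.5.
y = [1, 0, 0, 0, 1, 1, 1, 0]
sex = [0, 0, 0, 0, 1, 1, 1, 1]
```

A value of 0 means statistical parity; negative values indicate the unprivileged group receives the favorable outcome less often.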
```
%matplotlib inline
# Load all necessary packages
import sys
sys.path.append("../")
from aif360.datasets import BinaryLabelDataset
from aif360.datasets import AdultDataset, GermanDataset, CompasDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.metrics import ClassificationMetric
from aif360.algorithms.preprocessing.optim_preproc_helpers.data_preproc_functions import load_preproc_data_adult, load_preproc_data_compas, load_preproc_data_german
from aif360.algorithms.inprocessing.exponentiated_gradient_reduction import ExponentiatedGradientReduction
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler, MaxAbsScaler
from sklearn.metrics import accuracy_score
from IPython.display import Markdown, display
import matplotlib.pyplot as plt
import numpy as np
```
#### Load dataset and set options
```
# Get the dataset and split into train and test
dataset_orig = load_preproc_data_adult()
privileged_groups = [{'sex': 1}]
unprivileged_groups = [{'sex': 0}]
np.random.seed(0)
dataset_orig_train, dataset_orig_test = dataset_orig.split([0.7], shuffle=True)
# print out some labels, names, etc.
display(Markdown("#### Training Dataset shape"))
print(dataset_orig_train.features.shape)
display(Markdown("#### Favorable and unfavorable labels"))
print(dataset_orig_train.favorable_label, dataset_orig_train.unfavorable_label)
display(Markdown("#### Protected attribute names"))
print(dataset_orig_train.protected_attribute_names)
display(Markdown("#### Privileged and unprivileged protected attribute values"))
print(dataset_orig_train.privileged_protected_attributes,
dataset_orig_train.unprivileged_protected_attributes)
display(Markdown("#### Dataset feature names"))
print(dataset_orig_train.feature_names)
```
#### Metric for original training data
```
# Metric for the original dataset
metric_orig_train = BinaryLabelDatasetMetric(dataset_orig_train,
unprivileged_groups=unprivileged_groups,
privileged_groups=privileged_groups)
display(Markdown("#### Original training dataset"))
print("Train set: Difference in mean outcomes between unprivileged and privileged groups = %f" % metric_orig_train.mean_difference())
metric_orig_test = BinaryLabelDatasetMetric(dataset_orig_test,
unprivileged_groups=unprivileged_groups,
privileged_groups=privileged_groups)
print("Test set: Difference in mean outcomes between unprivileged and privileged groups = %f" % metric_orig_test.mean_difference())
min_max_scaler = MaxAbsScaler()
dataset_orig_train.features = min_max_scaler.fit_transform(dataset_orig_train.features)
dataset_orig_test.features = min_max_scaler.transform(dataset_orig_test.features)
metric_scaled_train = BinaryLabelDatasetMetric(dataset_orig_train,
unprivileged_groups=unprivileged_groups,
privileged_groups=privileged_groups)
display(Markdown("#### Scaled dataset - Verify that the scaling does not affect the group label statistics"))
print("Train set: Difference in mean outcomes between unprivileged and privileged groups = %f" % metric_scaled_train.mean_difference())
metric_scaled_test = BinaryLabelDatasetMetric(dataset_orig_test,
unprivileged_groups=unprivileged_groups,
privileged_groups=privileged_groups)
print("Test set: Difference in mean outcomes between unprivileged and privileged groups = %f" % metric_scaled_test.mean_difference())
```
### Standard Logistic Regression
```
X_train = dataset_orig_train.features
y_train = dataset_orig_train.labels.ravel()
lmod = LogisticRegression(solver='lbfgs')
lmod.fit(X_train, y_train, sample_weight=dataset_orig_train.instance_weights)
X_test = dataset_orig_test.features
y_test = dataset_orig_test.labels.ravel()
y_pred = lmod.predict(X_test)
display(Markdown("#### Accuracy"))
lr_acc = accuracy_score(y_test, y_pred)
print(lr_acc)
dataset_orig_test_pred = dataset_orig_test.copy(deepcopy=True)
dataset_orig_test_pred.labels = y_pred
# positive class index
pos_ind = np.where(lmod.classes_ == dataset_orig_train.favorable_label)[0][0]
dataset_orig_test_pred.scores = lmod.predict_proba(X_test)[:,pos_ind].reshape(-1,1)
metric_test = ClassificationMetric(dataset_orig_test,
dataset_orig_test_pred,
unprivileged_groups=unprivileged_groups,
privileged_groups=privileged_groups)
display(Markdown("#### Average odds difference"))
lr_aod = metric_test.average_odds_difference()
print(lr_aod)
```
### Exponentiated Gradient Reduction
Choose a base model for the randomized classifier
```
estimator = LogisticRegression(solver='lbfgs')
```
Train the randomized classifier and observe test accuracy. Other options for `constraints` include "DemographicParity", "TruePositiveRateDifference", and "ErrorRateRatio".
```
np.random.seed(0) #need for reproducibility
exp_grad_red = ExponentiatedGradientReduction(estimator=estimator,
constraints="EqualizedOdds",
drop_prot_attr=False)
exp_grad_red.fit(dataset_orig_train)
exp_grad_red_pred = exp_grad_red.predict(dataset_orig_test)
metric_test = ClassificationMetric(dataset_orig_test,
exp_grad_red_pred,
unprivileged_groups=unprivileged_groups,
privileged_groups=privileged_groups)
display(Markdown("#### Accuracy"))
egr_acc = metric_test.accuracy()
print(egr_acc)
#Check if accuracy is comparable
assert abs(lr_acc-egr_acc)<0.03
display(Markdown("#### Average odds difference"))
egr_aod = metric_test.average_odds_difference()
print(egr_aod)
#Check if average odds difference has improved
assert abs(egr_aod)<abs(lr_aod)
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Using GPUs
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/using_gpu"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/using_gpu.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/using_gpu.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/using_gpu.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
TensorFlow code and `tf.keras` models will transparently run on a single GPU with no code changes required.
Note: Use `tf.config.experimental.list_physical_devices('GPU')` to confirm that TensorFlow is using the GPU.
The simplest way to run on multiple GPUs, on one or many machines, is using [Distribution Strategies](distribute_strategy.ipynb)
This guide is for users who have tried these approaches and found that they need fine-grained control of how TensorFlow uses the GPU.
## Setup
Ensure you have the latest TensorFlow gpu release installed.
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
```
## Overview
TensorFlow supports running computations on a variety of types of devices, including CPU and GPU. They are represented with string identifiers for example:
* `"/device:CPU:0"`: The CPU of your machine.
* `"/GPU:0"`: Short-hand notation for the first GPU of your machine that is visible to TensorFlow
* `"/job:localhost/replica:0/task:0/device:GPU:1"`: Fully qualified name of the second GPU of your machine that is visible to TensorFlow.
If a TensorFlow operation has both CPU and GPU implementations, by default the GPU devices will be given priority when the operation is assigned to a device. For example, `tf.matmul` has both CPU and GPU kernels. On a system with devices `CPU:0` and `GPU:0`, the `GPU:0` device will be selected to run `tf.matmul` unless you explicitly request running it on another device.
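For example, the placement can be inspected directly through a tensor's `device` attribute (a minimal sketch, assuming TensorFlow 2.x eager execution):

```python
import tensorflow as tf

# A small MatMul; with a visible GPU this runs on GPU:0 by default,
# otherwise TensorFlow falls back to the CPU kernel.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
c = tf.matmul(a, b)
print(c.device)  # e.g. '/job:localhost/replica:0/task:0/device:GPU:0'
```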
## Logging device placement
To find out which devices your operations and tensors are assigned to, put
`tf.debugging.set_log_device_placement(True)` as the first statement of your
program. Enabling device placement logging causes any Tensor allocations or operations to be printed.
```
tf.debugging.set_log_device_placement(True)
# Create some tensors
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)
```
The above code will print an indication that the `MatMul` op was executed on `GPU:0`.
## Manual device placement
If you would like a particular operation to run on a device of your choice
instead of what's automatically selected for you, you can use `with tf.device`
to create a device context, and all the operations within that context will
run on the same designated device.
```
tf.debugging.set_log_device_placement(True)
# Place tensors on the CPU
with tf.device('/CPU:0'):
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)
```
You will see that now `a` and `b` are assigned to `CPU:0`. Since a device was
not explicitly specified for the `MatMul` operation, the TensorFlow runtime will
choose one based on the operation and available devices (`GPU:0` in this
example) and automatically copy tensors between devices if required.
## Limiting GPU memory growth
By default, TensorFlow maps nearly all of the GPU memory of all GPUs (subject to
[`CUDA_VISIBLE_DEVICES`](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars)) visible to the process. This is done to more efficiently use the relatively precious GPU memory resources on the devices by reducing memory fragmentation. To limit TensorFlow to a specific set of GPUs we use the `tf.config.experimental.set_visible_devices` method.
```
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
# Restrict TensorFlow to only use the first GPU
try:
tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPU")
except RuntimeError as e:
# Visible devices must be set before GPUs have been initialized
print(e)
```
In some cases it is desirable for the process to only allocate a subset of the available memory, or to only grow the memory usage as is needed by the process. TensorFlow provides two methods to control this.
The first option is to turn on memory growth by calling `tf.config.experimental.set_memory_growth`, which attempts to allocate only as much GPU memory as is needed for the runtime allocations: it starts out allocating very little memory, and as the program runs and more GPU memory is needed, the GPU memory region allocated to the TensorFlow process is extended. Note that memory is not released, since that can lead to memory fragmentation. To turn on memory growth for a specific GPU, use the following code prior to allocating any tensors or executing any ops.
```
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
try:
# Currently, memory growth needs to be the same across GPUs
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Memory growth must be set before GPUs have been initialized
print(e)
```
Another way to enable this option is to set the environment variable `TF_FORCE_GPU_ALLOW_GROWTH` to `true`. This configuration is platform specific.
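For instance, the variable can be set from Python, as long as this happens before TensorFlow initializes its GPUs, i.e. before the first TensorFlow import in the process:

```python
import os

# Must be set before TensorFlow touches the GPUs, so do this
# before the first `import tensorflow` (or set it in the shell instead).
os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'

# import tensorflow as tf  # import TensorFlow only after the variable is set
```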
The second method is to configure a virtual GPU device with `tf.config.experimental.set_virtual_device_configuration` and set a hard limit on the total memory to allocate on the GPU.
```
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
# Restrict TensorFlow to only allocate 1GB of memory on the first GPU
try:
tf.config.experimental.set_virtual_device_configuration(
gpus[0],
[tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Virtual devices must be set before GPUs have been initialized
print(e)
```
This is useful if you want to truly bound the amount of GPU memory available to the TensorFlow process. This is common practice for local development when the GPU is shared with other applications such as a workstation GUI.
## Using a single GPU on a multi-GPU system
If you have more than one GPU in your system, the GPU with the lowest ID will be
selected by default. If you would like to run on a different GPU, you will need
to specify the preference explicitly:
```
tf.debugging.set_log_device_placement(True)
try:
# Specify an invalid GPU device
with tf.device('/device:GPU:2'):
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
except RuntimeError as e:
print(e)
```
If the device you have specified does not exist, you will get a `RuntimeError`:
If you would like TensorFlow to automatically choose an existing and supported device to run the operations in case the specified one doesn't exist, you can call `tf.config.set_soft_device_placement(True)`.
```
tf.config.set_soft_device_placement(True)
tf.debugging.set_log_device_placement(True)
# Creates some tensors
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)
```
## Using multiple GPUs
Developing for multiple GPUs will allow a model to scale with the additional resources. If developing on a system with a single GPU, we can simulate multiple GPUs with virtual devices. This enables easy testing of multi-GPU setups without requiring additional resources.
```
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
# Create 2 virtual GPUs with 1GB memory each
try:
tf.config.experimental.set_virtual_device_configuration(
gpus[0],
[tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024),
tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPU,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Virtual devices must be set before GPUs have been initialized
print(e)
```
Once we have multiple logical GPUs available to the runtime, we can utilize the multiple GPUs with `tf.distribute.Strategy` or with manual placement.
#### With `tf.distribute.Strategy`
The best practice for using multiple GPUs is to use `tf.distribute.Strategy`.
Here is a simple example:
```
tf.debugging.set_log_device_placement(True)
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
inputs = tf.keras.layers.Input(shape=(1,))
predictions = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.models.Model(inputs=inputs, outputs=predictions)
model.compile(loss='mse',
optimizer=tf.keras.optimizers.SGD(learning_rate=0.2))
```
This program will run a copy of your model on each GPU, splitting the input data
between them, also known as "[data parallelism](https://en.wikipedia.org/wiki/Data_parallelism)".
For more information about distribution strategies, check out the guide [here](./distribute_strategy.ipynb).
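As a sketch of the full loop, the same model can be trained on synthetic data; with N visible GPUs each batch is split across the N replicas, and on a CPU-only machine the strategy simply creates a single replica:

```python
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    inputs = tf.keras.layers.Input(shape=(1,))
    predictions = tf.keras.layers.Dense(1)(inputs)
    model = tf.keras.models.Model(inputs=inputs, outputs=predictions)
    model.compile(loss='mse',
                  optimizer=tf.keras.optimizers.SGD(learning_rate=0.2))

# Hypothetical regression data: y = 3x + 2
x = np.random.rand(64, 1).astype('float32')
y = 3.0 * x + 2.0
history = model.fit(x, y, epochs=2, batch_size=16, verbose=0)
print(history.history['loss'])
```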
#### Manual placement
`tf.distribute.Strategy` works under the hood by replicating computation across devices. You can manually implement replication by constructing your model on each GPU. For example:
```
tf.debugging.set_log_device_placement(True)
gpus = tf.config.experimental.list_logical_devices('GPU')
if gpus:
# Replicate your computation on multiple GPUs
c = []
for gpu in gpus:
with tf.device(gpu.name):
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c.append(tf.matmul(a, b))
with tf.device('/CPU:0'):
matmul_sum = tf.add_n(c)
print(matmul_sum)
```
<a href="https://githubtocolab.com/giswqs/geemap/blob/master/examples/notebooks/57_cartoee_blend.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
Uncomment the following line to install [geemap](https://geemap.org) and [cartopy](https://scitools.org.uk/cartopy/docs/latest/installing.html#installing) if needed. Keep in mind that cartopy can be challenging to install. If you are unable to install cartopy on your computer, you can try Google Colab with the [notebook example](https://colab.research.google.com/github/giswqs/geemap/blob/master/examples/notebooks/cartoee_colab.ipynb).
See below the commands to install cartopy and geemap using conda/mamba:
```
conda create -n carto python=3.8
conda activate carto
conda install mamba -c conda-forge
mamba install cartopy scipy -c conda-forge
mamba install geemap -c conda-forge
jupyter notebook
```
```
# !pip install cartopy scipy
# !pip install geemap
```
# Creating publication-quality maps with multiple Earth Engine layers
```
import ee
import geemap
from geemap import cartoee
import cartopy.crs as ccrs
%pylab inline
```
## Create an interactive map
```
Map = geemap.Map()
image = ee.ImageCollection('MODIS/MCD43A4_006_NDVI') \
.filter(ee.Filter.date('2018-04-01', '2018-05-01')) \
.select("NDVI")\
.first()
vis_params = {
'min': 0.0,
'max': 1.0,
'palette': [
'FFFFFF', 'CE7E45', 'DF923D', 'F1B555', 'FCD163', '99B718', '74A901',
'66A000', '529400', '3E8601', '207401', '056201', '004C00', '023B01',
'012E01', '011D01', '011301'
],
}
Map.setCenter(-7.03125, 31.0529339857, 2)
Map.addLayer(image, vis_params, 'MODIS NDVI')
countries = ee.FeatureCollection('users/giswqs/public/countries')
style = {
"color": "00000088",
"width": 1,
"fillColor": "00000000"}
Map.addLayer(countries.style(**style), {}, "Countries")
ndvi = image.visualize(**vis_params)
blend = ndvi.blend(countries.style(**style))
Map.addLayer(blend, {}, "Blend")
Map
```
## Plot an image with the default projection
```
# specify region to focus on
bbox = [180, -88, -180, 88]
fig = plt.figure(figsize=(15,10))
# plot the result with cartoee using a PlateCarre projection (default)
ax = cartoee.get_map(blend, region=bbox)
cb = cartoee.add_colorbar(ax, vis_params=vis_params, loc='right')
ax.set_title(label='MODIS NDVI', fontsize = 15)
# ax.coastlines()
plt.show()
```
## Plot an image with a different projection
```
fig = plt.figure(figsize=(15,10))
projection = ccrs.EqualEarth(central_longitude=-180)
# plot the result with cartoee using an Equal Earth projection
ax = cartoee.get_map(blend, region=bbox, proj=projection)
cb = cartoee.add_colorbar(ax, vis_params=vis_params, loc='right')
ax.set_title(label='MODIS NDVI', fontsize=15)
# ax.coastlines()
plt.show()
```
# MNE
Open-source Python software for exploring, visualizing, and analyzing human neurophysiological data: MEG, EEG, sEEG, ECoG, and more.
<https://martinos.org/mne>
---
```
# %pip install mne  # install MNE first if needed
import numpy as np
from mne.datasets import eegbci
from mne.io import concatenate_raws, read_raw_edf
subject = 1
runs = [6, 10, 14] # motor imagery: hands vs feet
raw_fnames = eegbci.load_data(subject, runs)
raw = concatenate_raws([read_raw_edf(f, preload=True) for f in raw_fnames])
```
---
Plot the data using
```python
raw.plot(start=..., duration=..., n_channels=..., scalings='auto')
```
---
```
# Apply band-pass filter
raw.filter(7., 30., fir_design='firwin', skip_by_annotation='edge')
```
---
### Divide into epochs
```
from mne import Epochs, pick_types, events_from_annotations
events, _ = events_from_annotations(raw, event_id=dict(T1=2, T2=3))
picks = pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
exclude='bads')
```
Have a look at `events` and `picks`
```
event_id = dict(hands=2, feet=3)
tmin, tmax = -1, 4
epochs = Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=None, preload=True)
```
---
Consider only 1 second for each epoch
```
epochs_design = epochs.copy().crop(tmin=1., tmax=2.)
```
---
Create a new variable `y` (**label**) from `events` (or from `epochs_design.events`)
`y`:
- 0: event T1
- 1: event T2
```
#y =...
```
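One possible solution, sketched here with a hypothetical stand-in for `epochs_design.events`, whose last column holds the event id (2 for T1, 3 for T2):

```python
import numpy as np

# Stand-in for epochs_design.events: columns are (sample, previous id, event id)
events = np.array([[100, 0, 2],
                   [260, 0, 3],
                   [420, 0, 2]])

# Map event ids 2/3 to labels 0/1
y = events[:, -1] - 2
print(y)  # [0 1 0]
```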
---
Get **data** from `epochs_design`, using the method `get_data()`
Have a look at the data, using `shape`
```
#X=...
X.shape
```
----
# SCIKIT-LEARN
Machine learning in python
<https://scikit-learn.org>
---
Split data and labels into random train and test subsets using
`train_test_split` from `sklearn.model_selection`.
Have a look at the data.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
X_test.shape
```
---
## Feature extraction:
**Common Spatial Pattern (CSP)**
- Zoltan J. Koles. The quantitative extraction and topographic mapping of the abnormal components in the clinical EEG. Electroencephalography and Clinical Neurophysiology, 79(6):440–447, December 1991.
- https://en.wikipedia.org/wiki/Common_spatial_pattern
```
from mne.decoding import CSP
csp = CSP(n_components=4, reg=None, log=True, norm_trace=False)
```
---
Use of **CSP**
- 'train' the decoder using the `fit()` method.
- transform the data using the `transform()` method
Have a look at the data
```
# csp.fit(...)
# X_train_csp=...
# X_test_csp=...
```
---
Create a linear discriminant classifier
```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
lda = LinearDiscriminantAnalysis()
```
- Train the classifier using the `fit()` method
- Classify the test set using the `predict()` method
- Estimate accuracy
```
```
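A sketch of those three steps on synthetic CSP-like features (the arrays below are hypothetical stand-ins for `X_train_csp`, `X_test_csp`, and the labels):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score

rng = np.random.RandomState(0)
# Two well-separated classes of 4-dimensional features
X_train_csp = np.vstack([rng.randn(20, 4) - 1.0, rng.randn(20, 4) + 1.0])
y_train = np.array([0] * 20 + [1] * 20)
X_test_csp = np.vstack([rng.randn(10, 4) - 1.0, rng.randn(10, 4) + 1.0])
y_test = np.array([0] * 10 + [1] * 10)

lda = LinearDiscriminantAnalysis()
lda.fit(X_train_csp, y_train)          # train the classifier
y_pred = lda.predict(X_test_csp)       # classify the test set
acc = accuracy_score(y_test, y_pred)   # estimate accuracy
print(acc)
```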
---
Repeat the process using the `knn` classifier
```python
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=k)
```
### BACKGROUND:
Currently, HondaWeb is the only known source for obtaining basic information (company name, division, department, location, email, etc.) about associates from any Honda company. To see what information can be obtained through HondaWeb profile pages, simply look at your own profile page. Several attempts and inquiries have been made to find a single source of profile information for all Honda associates, regardless of company; so far, HondaWeb appears to be the only good one. To web scrape an associate's profile information from HondaWeb, besides the Python libraries, all that is needed is the associate's Windows login user ID if they are a non-American Honda associate. For American Honda associates, the user ID is just their "FirstName LastName".
For example, if you are non-AHM associate, copy this URL:
```https://somesite.com/REDACTED|AccessManagerMembershipProvider|```
Then paste it into your browser, append your Windows user ID right after the "|" symbol, and press ENTER. You should then see your HondaWeb profile page. For AHM associates, enter their first name, a space, and their last name instead, then press ENTER.
With the knowledge above, if you belong to an internal organization whose members can come from any Honda company, all you need is a compiled list of their Windows user names/IDs (or first and last names for AHM associates). With this list, you can programmatically obtain their basic profile information using this web scraping technique.
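The URL construction described above can be sketched in a few lines (the base URL keeps the redacted placeholder from the text, and the member identifiers below are hypothetical):

```python
# Redacted base URL from the text, not a real endpoint
base_profile_url = 'https://somesite.com/REDACTED|AccessManagerMembershipProvider|'

# Hypothetical identifiers: a Windows user ID and an AHM "FirstName LastName"
member_ids = ['jdoe1', 'Jane Doe']

# One profile URL per member, ready to be visited programmatically
profile_urls = [base_profile_url + member_id for member_id in member_ids]
print(profile_urls[0])
```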
### Python libraries that were installed that do not come with standard Python:
- lxml
- Selenium
- tqdm
- pandas
### Import necessary Python libraries
```
from getpass import getpass # built-in Python library to enable hiding sensitive info such as password
from lxml import html # Library to web scrape HTML pages
from selenium import webdriver # Needed to automate or simulate browser activity
from selenium.webdriver.common.keys import Keys # Import Keys to send key strokes or key inputs to Chrome
from selenium.webdriver.chrome.options import Options # Needed if you want to use Chrome in "headless" mode
from tqdm import tqdm_notebook # library to embed progress bar
import pandas as pd # Library for working with heterogenous tabular data
import sqlite3 # Members Windows user IDs are saved in a sqlite3 database
pd.options.display.max_colwidth=500
```
### Obtain a list of BRAIN BRG Member's Windows user ID
```
conn = sqlite3.connect(r'\\some_site.honda.com\REDACTED\database.db')
sql = """
SELECT
RTRIM(OPRID) as OPRID
FROM
members
WHERE
Member = 'X'
"""
members = pd.read_sql_query(sql, conn)
conn.close()
```
### Let's look at our list of Windows user IDs of BRAIN members
```
members.OPRID.values
```
### HondaWeb is a secured site, so you need to provide your credentials
```
username = input('Enter your username: ')
password = getpass('Enter your password: ')
```
### We will be using Chrome browser in this example and therefore, we need to load the Chrome driver
```
# First, set Chrome into "headless mode" for quicker page navigation
options = Options()
options.headless = True
browser = webdriver.Chrome(r'C:\Users\user\Downloads\chromedriver_win32\chromedriver.exe', options=options)
```
### Instruct the Chrome browser to visit the Honda HondaWeb log in page and then:
- Enter user name and then
- Enter password and then
- Hit Enter key to submit the user name and password
```
browser.get('https://some_site.com/auth/default.aspx')
elem_username = browser.find_element_by_name('username') # find username text box
elem_username.send_keys(username)
elem_password = browser.find_element_by_name('password') # find the password text box
elem_password.send_keys(password + Keys.RETURN)
```
### Loop through the members list and for each member, extract the data
```
%%time
# Initialize Python lists to contain the data we want to capture
first_last_name_list = []
company_list = []
division_list = []
department_list = []
office_location_list = []
email_list = []
skills_list = []
interests_list = []
profile_url_list = []
# This is the "base" URL needed to append or concatenate the member's Windows user ID with
base_profile_url = 'somesite.com/REDACTED|AccessManagerMembershipProvider|'
# Now loop through the list of members' Windows user IDs and visit their HondaWeb profile page
# and extract their data with lxml's XPath query language
print("Running Chrome in headless mode...")
for member in tqdm_notebook(members.OPRID, desc='Looping thru members...'):
member_url = base_profile_url + member
browser.get(member_url)
profile_html = html.fromstring(browser.page_source)
first_last_name_div = profile_html.xpath('//div[@id="ctl00_SPWebPartManager1_g_402dacf0_24c9_49f7_b128_9a852fc0ae8a_ProfileViewer_PreferredName"] \
/span[@class="ms-tableCell ms-profile-detailsValue"]/text()')
company_div = profile_html.xpath('//div[@id="ctl00_SPWebPartManager1_g_402dacf0_24c9_49f7_b128_9a852fc0ae8a_ProfileViewer_HondaCompanyName"] \
/span[@class="ms-tableCell ms-profile-detailsValue"]/text()')
division_div = profile_html.xpath('//div[@id="ctl00_SPWebPartManager1_g_402dacf0_24c9_49f7_b128_9a852fc0ae8a_ProfileViewer_HondaDivisionName"] \
/span[@class="ms-tableCell ms-profile-detailsValue"]/text()')
department_div = profile_html.xpath('//div[@id="ctl00_SPWebPartManager1_g_402dacf0_24c9_49f7_b128_9a852fc0ae8a_ProfileViewer_HondaDepartmentName"] \
/span[@class="ms-tableCell ms-profile-detailsValue"]/text()')
office_loc_div = profile_html.xpath('//div[@id="ctl00_SPWebPartManager1_g_402dacf0_24c9_49f7_b128_9a852fc0ae8a_ProfileViewer_SPS-Location"] \
/span[@class="ms-tableCell ms-profile-detailsValue"]/text()')
email_span = profile_html.xpath('//span[@id="ProfileViewer_ValueWorkEmail"]/text()')
skills_div = profile_html.xpath('//div[@id="ctl00_SPWebPartManager1_g_402dacf0_24c9_49f7_b128_9a852fc0ae8a_ProfileViewer_SPS-Skills"] \
/span[@class="ms-tableCell ms-profile-detailsValue"]/text()')
interests_div = profile_html.xpath('//div[@id="ctl00_SPWebPartManager1_g_402dacf0_24c9_49f7_b128_9a852fc0ae8a_ProfileViewer_SPS-Interests"] \
/span[@class="ms-tableCell ms-profile-detailsValue"]/text()')
# With each member's data, we will add them/append to their respective Python list
if first_last_name_div:
first_last_name_list.append(first_last_name_div[0])
else:
first_last_name_list.append('')
if company_div:
company_list.append(company_div[0])
else:
company_list.append('')
if division_div:
division_list.append(division_div[0])
else:
division_list.append('')
if department_div:
department_list.append(department_div[0])
else:
department_list.append('')
if office_loc_div:
office_location_list.append(office_loc_div[0])
else:
office_location_list.append('')
if email_span:
email_list.append(email_span[0].lower()) # Discovered that for some reason, some emails can have mix cases
else:
email_list.append('')
if skills_div:
skills_list.append(skills_div[0])
else:
skills_list.append('')
if interests_div:
interests_list.append(interests_div[0])
else:
interests_list.append('')
profile_url_list.append(member_url)
# Close/quit Chrome browser
print("Web scraping complete. Quitting Chrome browser...")
browser.quit()
```
### Let's take a peek (first 5 records) at our Python lists to see if they have the data we wanted
```
first_last_name_list[:5]
company_list[:5]
division_list[:5]
department_list[:5]
office_location_list[:5]
email_list[:5]
skills_list[:5]
interests_list[:5]
profile_url_list[:5]
```
### Basic data check: Making sure we have same number of data as the number of BRAIN BRG members in our Python lists
```
assert len(first_last_name_list) == members.shape[0]
assert len(company_list) == members.shape[0]
assert len(division_list) == members.shape[0]
assert len(department_list) == members.shape[0]
assert len(office_location_list) == members.shape[0]
assert len(email_list) == members.shape[0]
assert len(skills_list) == members.shape[0]
assert len(interests_list) == members.shape[0]
assert len(profile_url_list) == members.shape[0]
```
For more comprehensive data validation, check out great_expectations [library](http://docs.greatexpectations.io/en/latest/core_concepts/expectations.html).
### If our data check passed, then let's go ahead and make a pandas dataframe from our Python lists
```
members_df = pd.DataFrame({'First_Last_Name': first_last_name_list, 'Company': company_list,
'Division': division_list, 'Department': department_list,
'Office_Location': office_location_list, 'Email': email_list,
'Skills': skills_list, 'Interests': interests_list,
'Profile_Url': profile_url_list})
members_df.head()
members_df.tail()
```
### Now, we can save our dataframe as Excel, csv, to a database, email it, etc...
```
# members_df.to_excel(r'path_to_where_you_want_to_save\filename.xlsx')
# members_df.to_csv(r'path_to_where_you_want_to_save\filename.csv')
```
### Make HTML table from pandas dataframe
But first, need to create a column containing HTML ```<a>``` tags with ```HREF=``` pointed to their profile page URL
```
def makeHyperlink(row):
    """ Function to convert a string URL to an HTML <a> tag """
    value = '<a href="' + str(row['Profile_Url']) + '">Profile Page</a>'
    return value
```
#### Apply the function above to create new ```URL_Hyperlink``` column:
```
members_df['URL_Hyperlink'] = members_df.apply(makeHyperlink, axis='columns')
```
### Now display dataframe as HTML table
```
from ipywidgets import HTML
HTML(members_df.drop(columns='Profile_Url', axis='columns').to_html(escape=False, index=False))
```
```
import pandas as pd
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
from ast import literal_eval
import pickle
import pprint
pp = pprint.PrettyPrinter(depth=6)
matplotlib.rcParams['figure.figsize'] = (15.0, 5.0)
pd.set_option('display.max_columns', 150)
```
```
import os
thedir = 'rtp-torrent'
dirs = [name for name in os.listdir(thedir) if os.path.isdir(os.path.join(thedir, name))]
projects = [dirname.replace("@", "/") for dirname in dirs if "@" in dirname]
projects
```
```
folder = "../../../tmp"
for chunk in pd.read_csv(f"{folder}/travistorrent_8_2_2017.csv", chunksize=10):
    print(chunk.columns)
    break
df = pd.read_csv(f"{folder}/travis_projects.csv", index_col=0)
df
df.columns
```
#### Number of builds per project
```
plt.figure(figsize=(15,5))
data=df.groupby("gh_project_name").count().tr_build_id
data=data.sort_values()
ax = sns.barplot(x=data.index,y=data.values)
ax.tick_params(axis='x', labelrotation= 90)
ax.set_title("Number of builds")
```
#### Number of builds with test results
```
plt.figure(figsize=(15,5))
ax = sns.barplot(x="gh_project_name",y="tr_log_bool_tests_ran",
data=df.groupby(df.gh_project_name).sum().tr_log_bool_tests_ran.reset_index().sort_values(by="tr_log_bool_tests_ran"))
ax.tick_params(axis='x', labelrotation= 90)
ax.set_title("Number of builds with test results")
```
#### Percentage of builds with test results
```
plt.figure(figsize=(15,5))
data=df.groupby("gh_project_name").sum().tr_log_bool_tests_ran/df.groupby("gh_project_name").tr_log_bool_tests_ran.count()
data=data.sort_values()
ax = sns.barplot(x=data.index,y=data.values)
ax.tick_params(axis='x', labelrotation= 90)
ax.set_title("Percentage of builds with test results")
```
## In depth analysis of a project
```
project='SonarSource/sonarqube'
project_df = df[df.gh_project_name == project]
project_df.sort_values(by="tr_build_id").head(10)
project_df.shape
```
**Count lines containing the project name**
Make sure all lines for the project are correctly imported in pandas
```
num_rows = 0
for row in open(f"{folder}/travistorrent_8_2_2017.csv"):
    if project in row:
        num_rows += 1
num_rows == len(project_df)
```
### Missing builds from TravisTorrent
TravisCI uses an incremental build number, so all builds up to the highest `build_number` for the project should be present in the dataset.
```
max_build_number = project_df.tr_build_number.max()
n_missing_builds = max_build_number - len(project_df.tr_build_number.unique())
n_missing_builds
len(project_df.tr_build_number.unique())
```
Let's plot the distribution of build_numbers to visualize if older builds are the ones more affected by this problem
```
missing_build_numbers = []
build_numbers = project_df.tr_build_number.unique()
for i in range(0, project_df.tr_build_number.max()):
    if i not in build_numbers:
        missing_build_numbers.append(i)
sns.distplot(missing_build_numbers)
plt.axvline(max_build_number, 0, 10, color="r")
```
The problem seems to affect builds up until the very last build present in the dataset (the red line)
**Total number of builds for the project in TravisTorrent**
```
len(project_df.tr_build_id.unique())
len(project_df.tr_build_id.unique()) + n_missing_builds == max_build_number
project_df[[col for col in project_df if col.startswith("tr_log")]].describe()
```
```
plt.figure(figsize=(15,5))
plt.locator_params(axis='x', nbins=6)
ax = sns.lineplot(x="tr_build_id", y="tr_log_num_tests_run", data=project_df[["tr_build_id", "tr_log_num_tests_run"]])
for ind, label in enumerate(ax.get_xticklabels()):
    if ind % 20 == 0:  # keep every 20th label
        label.set_visible(True)
    else:
        label.set_visible(False)
```
## Import RTPTorrent dataset
This dataset contains class specific test metrics for the project's builds
```
sonar_rtp = pd.read_csv(f"{folder}/rtp-torrent/SonarSource@sonarqube/SonarSource@sonarqube.csv")
sonar_rtp
```
## TravisTorrent and RTPTorrent merging
**Are all the jobs' builds from RTPTorrent present in TravisTorrent dataset?**
```
rt_jobIds = sonar_rtp.travisJobId.unique()
travis_jobsIds = project_df.tr_job_id.unique()
len(set(rt_jobIds) - set(travis_jobsIds))
len(set(rt_jobIds) - set(travis_jobsIds)) / len(rt_jobIds)
```
More than 60% of RTPTorrent's jobs are missing from the TravisTorrent dataset
```
travis_jobsIds[-1]
```
### TravisCI info for the project
```
from travispy import TravisPy
t = TravisPy()
repo = t.repo(project)
repo.last_build_number
builds = t.builds(repository_id = repo.id, number = 1)
print(builds[0].started_at)
```
* To this day, the project has over 39420 builds on TravisCI
* The first available build on TravisCI dates to 17/3/2015
```
builds = t.builds(repository_id = repo.id, after_number = 10)
len(builds)
pp.pprint(builds[0].__slots__)
builds[0].__slots__
builds[0].job_ids
```
## TravisCI builds for the project
```
import glob
v2_files = glob.glob("builds/v2/*.pkl")
v3_files = glob.glob("builds/v3/*.pkl")
with open(v2_files[0], 'rb') as f:
    builds2 = pickle.load(f)
v2_fields = builds2[0].keys()
v2_fields
pp.pprint(builds2[0])
with open(v3_files[0], 'rb') as f:
    builds3 = pickle.load(f)
v3_fields = builds3[0].keys()
v3_fields
pp.pprint(builds3[0])
common_fields = set(v3_fields).intersection(set(v2_fields))
common_fields
uncommon_fields = set(v3_fields).symmetric_difference(set(v2_fields))
uncommon_fields
```
## Builds retrieval
```
builds_tuple = []
```
#### Version 2 retrieval
```
for v2_file in v2_files:
    with open(v2_file, 'rb') as f:
        builds2 = pickle.load(f)
    for build in builds2:
        build_values = []
        for field in common_fields:
            build_values.append(build[field])
        builds_tuple.append(tuple(build_values))
```
#### Version 3 retrieval
```
for v3_file in v3_files:
    with open(v3_file, 'rb') as f:
        builds3 = pickle.load(f)
    for build in builds3:
        build_values = []
        for field in common_fields:
            build_values.append(build[field])
        builds_tuple.append(tuple(build_values))
buildsDf = pd.DataFrame(builds_tuple, columns = common_fields)
buildsDf
buildsDf["number"] = pd.to_numeric(buildsDf.number)
```
**Build numbers with multiple build_ids**
```
(buildsDf.groupby("number").count().id > 1).sum()
```
**Number of missing build numbers**
```
buildsDf.number.max() - buildsDf.number.min() + 1 - len(buildsDf.number.unique())
```
**Numbers of the missing builds**
```
t = buildsDf.number.unique()
missing_numbers = []
for i in range(1, buildsDf.number.max()):
    if i in t:
        continue
    missing_numbers.append(i)
len(missing_numbers)
import travis2
import travis
```
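The counting loop above can also be written as a vectorized set difference. A quick sketch with numpy, using a toy `buildsDf` stand-in for the real frame:

```
import numpy as np
import pandas as pd

# Toy stand-in for the real builds frame
buildsDf = pd.DataFrame({"number": [1, 2, 4, 7, 8]})
present = buildsDf.number.unique()
missing_numbers = np.setdiff1d(np.arange(1, buildsDf.number.max()), present)
print(missing_numbers)  # -> [3 5 6]
```

`np.arange(1, max)` mirrors the loop above, which also never checks the highest number itself.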
```
bs = []
for number in missing_numbers:
    bs.append(travis2.get_builds({"repository_id": "234484", "number": number})["builds"][0])
```
```
len(bs) == len(missing_numbers)
```
```
with open("builds/all_missing_builds.pkl", "wb") as f:
    pickle.dump(bs, f)
```
```
with open("builds/all_missing_builds.pkl", "rb") as f:
    bs = pickle.load(f)
missing_builds_tuple = []
for build in bs:
    missing_build_values = []
    for field in common_fields:
        missing_build_values.append(build[field])
    missing_builds_tuple.append(tuple(missing_build_values))
missingBuildsDf = pd.DataFrame(missing_builds_tuple, columns = common_fields)
missingBuildsDf
allBuilds = buildsDf.append(missingBuildsDf)
allBuilds
allBuilds["number"] = pd.to_numeric(allBuilds.number)
allBuilds.number.max() == len(allBuilds.number.unique())
allBuilds
#allBuilds.to_csv("csv/allBuilds.csv")
allBuilds = pd.read_csv("csv/allBuilds.csv", index_col=0)
allBuilds
```
## Jobs
```
job_tuples = []
for v2_file in v2_files:
    with open(v2_file, 'rb') as f:
        builds2 = pickle.load(f)
    for build in builds2:
        for job in build["job_ids"]:
            job_tuples.append((build["id"], build["number"], job))
for v3_file in v3_files:
    with open(v3_file, 'rb') as f:
        builds3 = pickle.load(f)
    for build in builds3:
        for job in build["jobs"]:
            job_tuples.append((build["id"], build["number"], job["id"]))
with open("builds/all_missing_builds.pkl", 'rb') as f:
    builds2 = pickle.load(f)
for build in builds2:
    for job in build["job_ids"]:
        job_tuples.append((build["id"], build["number"], job))
jobsDf = pd.DataFrame(job_tuples, columns = ["build_id", "build_number", "job_id"])
jobsDf.to_csv("csv/job_id_build_id.csv")
jobsDf
jobsDf.build_number = pd.to_numeric(jobsDf.build_number)
```
**Do we have all builds for all jobs in RTPTorrent?**
```
len(set(jobIds)-(set(jobsDf.job_id.unique())))
```
### Compare our dataset to TravisTorrent
**Number of builds up until the latest in TravisTorrent**
TravisTorrent
```
travis_builds = len(project_df.tr_build_number.unique())
```
Our dataset
```
ours_builds = len(allBuilds[allBuilds.number < project_df.tr_build_number.max()])
ax = sns.barplot(x="Builds", y="Dataset",
data=pd.DataFrame([("TravisTorrent", travis_builds),("Ours", ours_builds)],columns=["Dataset", "Builds"]))
ax.set_title(f"Counted up until latest build in TravisTorrent ({project_df.tr_build_number.max()})")
missing_build_numbers = []
build_numbers = allBuilds.number.unique()
for i in range(0, project_df.tr_build_number.max()):
    if i not in build_numbers:
        missing_build_numbers.append(i)
sns.distplot(missing_build_numbers)
plt.axvline(max_build_number, 0, 10, color="r")
```
**TravisTorrent has 1 job_id per row**
```
len(project_df) == len(project_df.tr_job_id.unique())
```
**Number of job ids up until the last build number in TravisTorrent**
TravisTorrent
```
travis_jobs = len(project_df.tr_job_id.unique())
```
Our dataset
```
ids = list(allBuilds[allBuilds.number < project_df.tr_build_number.max()].id)
ours_jobs = len(jobsDf[jobsDf.build_id.isin(ids)])
ax = sns.barplot(x="Jobs", y="Dataset",
data=pd.DataFrame([("TravisTorrent", travis_jobs),("Ours", ours_jobs)],columns=["Dataset", "Jobs"]))
ax.set_title(f"Counted up until latest build in TravisTorrent ({project_df.tr_build_number.max()})")
pd.DataFrame([("TravisTorrent", travis_jobs),("Ours", ours_jobs)],columns=["Dataset", "Jobs"])
group_size = 400
t = jobsDf.groupby("build_number").count().reset_index()
t["groups"] = t.build_number.apply(lambda x: int(x / group_size))
data=t.groupby("groups").mean().reset_index()
ax = sns.barplot(x="groups",y="job_id",data=data)
ax.tick_params(axis='x', labelrotation= 45)
ax.set_title("Average number of jobs per build over time")
ax.set_xlabel("builds grouped in batches of 400")
ax.set_ylabel("Average number of jobs")
plt.axvline(int(project_df.tr_build_number.max()/group_size), 0, 10, color="r")
```
### Look at a couple of builds not included in TravisTorrent
```
notinTravis = allBuilds[allBuilds.id.isin(set(allBuilds[allBuilds.number < project_df.tr_build_number.max()].id) - set(project_df.tr_build_id))]
notinTravis.sort_values(by="number").head(10)
sns.countplot(notinTravis.state)
sns.countplot(notinTravis.event_type)
jobsDf.sort_values(by="build_number")
```
```
pp = pprint.PrettyPrinter(depth=6)
current_jobs = []
i = 0
offset = 24700
failed = 0
for build_id in allBuilds.sort_values(by="id").id.unique():
    if i < offset:
        i += 1
        continue
    jobs = travis.get_jobs(build_id)
    if not jobs:
        print(f"Failed build {build_id}")
        failed += 1
    else:
        current_jobs = current_jobs + jobs
    i += 1
    if i % 100 == 0:
        print(f"Downloaded jobs: {i}...")
        with open(f'jobs/jobs{i}.pkl', 'wb') as f:
            pickle.dump(current_jobs, f)
        current_jobs = []
print(i)
```
```
# -*- coding: utf-8 -*-
"""
Usage:
THEANO_FLAGS="device=gpu0" python exptBikeNYC.py
"""
from __future__ import print_function
import os
import pickle
import numpy as np
import math
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping, ModelCheckpoint
from deepst.models.threewayConvLSTM import threeway
from deepst.config import Config
import deepst.metrics as metrics
from deepst.datasets import BikeNYC
np.random.seed(1337) # for reproducibility
# parameters
# data path; you may set your own data path with the global environment
# variable DATAPATH
DATAPATH = Config().DATAPATH
nb_epoch = 40 # number of epoch at training stage
nb_epoch_cont = 10 # number of epoch at training (cont) stage
batch_size = 1 # batch size
T = 24 # number of time intervals in one day
lr = 0.0002 # learning rate
len_closeness = 3 # length of closeness dependent sequence
len_period = 2 # length of peroid dependent sequence
len_trend = 2 # length of trend dependent sequence
nb_residual_unit = 4 # number of residual units
ConvLstmLayers = 1 # depth of ConvLstmLayers
nb_flow = 2 # there are two types of flows: new-flow and end-flow
# divide data into two subsets: Train & Test, of which the test set is the
# last 10 days
days_test = 10
len_test = T * days_test
map_height, map_width = 16, 8 # grid size
# For NYC Bike data, there are 81 available grid-based areas, each of
# which includes at least ONE bike station. Therefore, we modify the final
# RMSE by multiplying the following factor (i.e., factor).
nb_area = 81
m_factor = math.sqrt(1. * map_height * map_width / nb_area)
print('factor: ', m_factor)
path_result = 'Test_RET'
path_model = 'Test_MODEL'
if os.path.isdir(path_result) is False:
    os.mkdir(path_result)
if os.path.isdir(path_model) is False:
    os.mkdir(path_model)


def build_model(external_dim=None):
    c_conf = (len_closeness, nb_flow, map_height,
              map_width) if len_closeness > 0 else None
    p_conf = (len_period, nb_flow, map_height,
              map_width) if len_period > 0 else None
    t_conf = (len_trend, nb_flow, map_height,
              map_width) if len_trend > 0 else None
    model = threeway(c_conf=c_conf, p_conf=p_conf, t_conf=t_conf,
                     external_dim=external_dim)
    adam = Adam(lr=lr)
    model.compile(loss='mse', optimizer=adam, metrics=[metrics.rmse])
    model.summary()
    # from keras.utils.visualize_util import plot
    # plot(model, to_file='model.png', show_shapes=True)
    return model
def main():
    print("loading data...")
    # data_numbers=None will use all data; this can be very slow.
    # data_numbers=800 will use only 800 series for trying on small data.
    X_train_ALL, X_test_ALL, X_train, Y_train, X_test, Y_test, mmn, external_dim, timestamp_train, timestamp_test = BikeNYC.load_threeway_data(
        T=T, nb_flow=nb_flow, len_closeness=len_closeness, len_period=len_period, len_trend=len_trend, len_test=len_test,
        preprocess_name='preprocessing.pkl', meta_data=True, data_numbers=None)
    print("\n days (test): ", [v[:8] for v in timestamp_test[0::T]])
    print('=' * 10)
    print("compiling model...")
    print(
        "**the first time, it may take a few minutes to compile if you use [Theano] as the backend**")
    print('external_dim is:', external_dim)
    model = build_model(external_dim)
    hyperparams_name = 'threeway_c{}.p{}.t{}.ConvLstmLayers{}.lr{}'.format(
        len_closeness, len_period, len_trend, ConvLstmLayers, lr)
    fname_param = os.path.join(path_model, '{}.best.h5'.format(hyperparams_name))
    early_stopping = EarlyStopping(monitor='val_rmse', patience=5, mode='min')
    model_checkpoint = ModelCheckpoint(
        fname_param, monitor='val_rmse', verbose=0, save_best_only=True, mode='min')
    print('=' * 10)
    print("training model...")
    history = model.fit(X_train, Y_train,
                        nb_epoch=nb_epoch,
                        batch_size=batch_size,
                        validation_split=0.1,
                        callbacks=[early_stopping, model_checkpoint],
                        verbose=1)
    model.save_weights(os.path.join(
        path_model, '{}.h5'.format(hyperparams_name)), overwrite=True)
    pickle.dump((history.history), open(os.path.join(
        path_result, '{}.history.pkl'.format(hyperparams_name)), 'wb'))
    print('=' * 10)
    print('evaluating using the model that has the best loss on the valid set')
    model.load_weights(fname_param)
    score = model.evaluate(X_train, Y_train, batch_size=Y_train.shape[0] // 48, verbose=0)
    print('Train score: %.6f rmse (norm): %.6f rmse (real): %.6f' %
          (score[0], score[1], score[1] * (mmn._max - mmn._min) / 2. * m_factor))
    score = model.evaluate(
        X_test, Y_test, batch_size=Y_test.shape[0], verbose=0)
    print('Test score: %.6f rmse (norm): %.6f rmse (real): %.6f' %
          (score[0], score[1], score[1] * (mmn._max - mmn._min) / 2. * m_factor))
    print('=' * 10)
    print("training model (cont)...")
    fname_param = os.path.join(
        path_model, '{}.cont.best.h5'.format(hyperparams_name))
    model_checkpoint = ModelCheckpoint(
        fname_param, monitor='rmse', verbose=0, save_best_only=True, mode='min')
    history = model.fit(X_train, Y_train, nb_epoch=nb_epoch_cont, verbose=1, batch_size=batch_size,
                        callbacks=[model_checkpoint], validation_data=(X_test, Y_test))
    pickle.dump((history.history), open(os.path.join(
        path_result, '{}.cont.history.pkl'.format(hyperparams_name)), 'wb'))
    model.save_weights(os.path.join(
        path_model, '{}_cont.h5'.format(hyperparams_name)), overwrite=True)
    print('=' * 10)
    print('evaluating using the final model')
    score = model.evaluate(X_train, Y_train, batch_size=Y_train.shape[0] // 48, verbose=0)
    print('Train score: %.6f rmse (norm): %.6f rmse (real): %.6f' %
          (score[0], score[1], score[1] * (mmn._max - mmn._min) / 2. * m_factor))
    score = model.evaluate(
        X_test, Y_test, batch_size=Y_test.shape[0], verbose=0)
    print('Test score: %.6f rmse (norm): %.6f rmse (real): %.6f' %
          (score[0], score[1], score[1] * (mmn._max - mmn._min) / 2. * m_factor))


if __name__ == '__main__':
    main()
```
```
import cv2
from pathlib import Path
from random import *
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from skimage.feature import hog
from imutils import face_utils
#import dlib
import os
import pickle
np.random.seed(1000)
physical_devices = tf.config.experimental.list_physical_devices('GPU')
if len(physical_devices) > 0:
    tf.config.experimental.set_memory_growth(physical_devices[0], True)
frames = []
labels = []
for file in os.listdir('output/'):
    if file[-10:] == 'frames.pkl':
        with open('output/' + file, 'rb') as f:
            frames.append(pickle.load(f))
    elif file[-10:] == 'labels.pkl':
        with open('output/' + file, 'rb') as f:
            labels.append(pickle.load(f))
print(len(frames), len(labels))
from sklearn.model_selection import train_test_split
train_clips, test_clips, train_clips_labels, test_clips_labels = \
train_test_split(frames, labels, test_size=0.2, random_state=42)
train_images, test_images, train_labels, test_labels = [], [], [], []
for clip, label in zip(train_clips, train_clips_labels):
    try:
        train_images, train_labels = train_images + clip, train_labels + [label[0]] * len(clip)
    except:
        continue
for clip, label in zip(test_clips, test_clips_labels):
    try:
        test_images, test_labels = test_images + clip, test_labels + [label[0]] * len(clip)
    except:
        continue
print(len(train_images), len(train_labels), len(test_images), len(test_labels))
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
for i in range(len(train_images)):
    train_images[i] = clahe.apply(train_images[i])
for i in range(len(test_images)):
    test_images[i] = clahe.apply(test_images[i])
train_images, test_images, train_labels, test_labels = np.asarray(train_images), np.asarray(test_images), np.asarray(train_labels), np.asarray(test_labels)
test_images = np.expand_dims(test_images, axis=3)
train_images = np.expand_dims(train_images, axis=3)
train_labels //= 2
test_labels //= 2
#print((train_labels == 0).sum())
#(train_labels == 1).sum()
import tensorflow.keras.layers as kl
import tensorflow.keras.losses
from tensorflow.keras.applications.resnet50 import ResNet50
def network():
    model = tf.keras.Sequential()
    model.add(kl.InputLayer(input_shape=(48, 48, 1)))
    # First conv block
    model.add(kl.Conv2D(filters=96, kernel_size=7, padding='same', strides=2))
    model.add(tf.keras.layers.ReLU())
    model.add(kl.MaxPooling2D(pool_size=(3, 3)))
    # Second conv block
    model.add(kl.Conv2D(filters=256, kernel_size=5, padding='same', strides=1))
    model.add(tf.keras.layers.ReLU())
    model.add(kl.MaxPooling2D(pool_size=(2, 2)))
    # model.add(tf.keras.layers.Dropout(0.3))
    # Third-fourth-fifth conv block
    for i in range(3):
        model.add(kl.Conv2D(filters=144, kernel_size=3, padding='same', strides=1))
        model.add(tf.keras.layers.ReLU())
    model.add(tf.keras.layers.Dropout(0.3))
    model.add(kl.MaxPooling2D(pool_size=(3, 3)))
    # Flatten
    model.add(kl.Flatten())
    # First FC
    model.add(kl.Dense(4048))
    # Second FC
    model.add(kl.Dense(4048))
    # Third FC
    model.add(kl.Dense(4))
    # Softmax at the end
    model.add(kl.Softmax())
    return model
'''
model = network()
InitialLearnRate = 0.03
MaxEpochs = 30
MiniBatchSize = 32
opt = tf.keras.optimizers.SGD(lr=InitialLearnRate, decay=InitialLearnRate / MaxEpochs)
model.compile(loss="sparse_categorical_crossentropy", optimizer=opt,
              metrics=["accuracy"])
'''
model = network()
model.compile(optimizer='adamax',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
from sklearn.utils.class_weight import compute_class_weight
weights = compute_class_weight(class_weight = 'balanced', classes = np.unique(train_labels), y = train_labels)
print(weights)
weights = {0:0.68708684, 1:1.83627784}
print(weights)
history = model.fit(train_images, train_labels, epochs=2, class_weight=weights, batch_size=50)
test_loss, test_acc = model.evaluate(test_images, test_labels)
model.save('boredom74.h5')
```
Experiments with Yelp API
Notes:
* Documentation: https://www.yelp.com/developers/documentation/v3
* Limit of 25,000 calls per day (see FAQ)
* Options for search: https://www.yelp.com/developers/documentation/v3/business_search
* 'term', 'location', 'limit' (max 50), 'offset', 'price'
* 'categories' - comma-delimited string, using category identifier (e.g. "bars,french")
* 'sort_by': 'best_match', 'rating', 'review_count', 'distance'
* sort by review count and rating breaks after 200 for NYC
* sort by best_match throws error if ask above 1000
* 'attributes': e.g. 'hot_and_new,cashback' also:
* 'request_a_quote', 'waitlist_reservation', 'deals', 'gender_neutral_restrooms'
* Returns: 'total' (# reviews);
* City population data gotten from:
* https://factfinder.census.gov/faces/tableservices/jsf/pages/productview.xhtml?src=bkmk
* from wikipedia for populations of US cities
* After scraping, run the process_scraped_data() function in the util library
Scrapes
1. Get top 1000 restaurants in each city (761 × 20 ≈ 15,000 scrapes)
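The `limit`/`offset` pagination described in the notes — pages of up to 50 results, capped at 1000 per query — boils down to the loop below; `fetch_page` is a hypothetical stand-in for the real Yelp search call:

```
def fetch_page(offset, limit):
    # Hypothetical stand-in for a Yelp business-search request;
    # here it fakes a backend holding 120 total results.
    fake_total = 120
    return list(range(offset, min(offset + limit, fake_total)))

def fetch_all(step_size=50, hard_cap=1000):
    """Collect results page by page until a page comes back empty or the cap is hit."""
    results = []
    for offset in range(0, hard_cap, step_size):
        page = fetch_page(offset, step_size)
        if not page:
            break
        results.extend(page)
    return results

businesses = fetch_all()
print(len(businesses))  # -> 120 with the fake backend
```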
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import json
import time
import os
from json.decoder import JSONDecodeError
import util
```
### Load food categories and cities
```
# These are used for the 'category' input to the search function
df_categories = pd.read_json('/gh/data2/yelp/categories.json')
df_categories.head()
# Load cities info
df_cities = pd.read_csv('/gh/data2/yelp/city_pop_old2.csv', index_col=0)
df_cities.head()
```
# Query API
```
t_start = time.time()
verbose = True
# Define search term
search_term = 'food'
step_size = 50
N_steps = 20 # 1000 results max
# Prepare parameters and outputs for looping through each city
N_cities = len(df_cities)
search_params = {'term': search_term,
'limit': step_size
}
# Collect restaurant data from each city
for i, row in df_cities.iterrows():
    # Print city and time elapsed
    print('\n{:}, {:}, time = {:.2f} seconds'.format(row['city'], i, time.time() - t_start))
    # Check if dataframe exists
    json_name = '/gh/data2/yelp/food_by_city/places/' + row['city'] + '_' + row['state'] + '.json'
    if os.path.isfile(json_name):
        print('Already scraped')
    else:
        # Update location
        search_params['location'] = row['city'] + ', ' + row['state']
        # Loop through the first 1000 in steps of 50
        total_temp = []
        lats_temp = []
        longs_temp = []
        businesses_temp = []
        for j in range(N_steps):
            # Determine range of restaurants to acquire
            search_params['offset'] = step_size * j
            # Scrape 50 restaurants
            try:
                t, lat, lon, bus = util.query_api(search_params, verbose=False)
            except JSONDecodeError:
                print('Got a JSON decode error. Try again.')
                time.sleep(5)
                try:
                    t, lat, lon, bus = util.query_api(search_params)
                except JSONDecodeError:
                    print('Another JSON decode error. Try the next block.')
                    break
            # Exit loop if no more restaurants
            if t is None:
                if verbose:
                    print('Finished getting restaurants after scraping:', search_params['offset'])
                break
            # Save business data
            total_temp.append(t)
            lats_temp.append(lat)
            longs_temp.append(lon)
            businesses_temp.append(bus)
        # Save the business data to a dataframe
        with open(json_name, 'w') as fout:
            json.dump(list(np.hstack(businesses_temp)), fout)
        # Save totals array
        totals_name = '/gh/data2/yelp/food_by_city/totals/' + row['city'] + '_' + row['state'] + '.npy'
        np.save(totals_name, total_temp)
        # Save latitude (absolute deviations, so opposite-signed offsets cannot cancel)
        lats_diff = np.sum(np.abs(np.array(lats_temp) - lats_temp[0]))
        if lats_diff > 0:
            print('Latitude not constant:')
            print(lats_temp)
            lats_name = '/gh/data2/yelp/food_by_city/lats/' + row['city'] + '_' + row['state'] + '.npy'
            np.save(lats_name, lats_temp)
        else:
            lats_name = '/gh/data2/yelp/food_by_city/lats/' + row['city'] + '_' + row['state'] + '.txt'
            with open(lats_name, "w") as f:
                f.write(str(lats_temp[0]))
        # Save longitude
        longs_diff = np.sum(np.abs(np.array(longs_temp) - longs_temp[0]))
        if longs_diff > 0:
            print('Longitude not constant:')
            print(longs_temp)
            longs_name = '/gh/data2/yelp/food_by_city/longs/' + row['city'] + '_' + row['state'] + '.npy'
            np.save(longs_name, longs_temp)
        else:
            longs_name = '/gh/data2/yelp/food_by_city/longs/' + row['city'] + '_' + row['state'] + '.txt'
            with open(longs_name, "w") as f:
                f.write(str(longs_temp[0]))
util.process_scraped_data()
util.expand_df_cities()
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.compose import ColumnTransformer
from termcolor import colored
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
```
# Pure Data
```
data = pd.read_csv('')
data.head(1)
data = data.drop(['Second_Protocol', 'Third_Protocol','Cryptocurrency'], axis = 1)
```
## Imputing missing values
```
punter = pd.concat([data['second_sp'],data['second_dp'],data['third_sp'],data['third_dp']], axis = 1)
imputer = SimpleImputer(missing_values = np.nan, strategy = "median")
values = imputer.fit_transform(punter)
punter = pd.DataFrame(values, columns = punter.columns)
data['second_sp'] = punter['second_sp']
data['second_dp'] = punter['second_dp']
data['third_sp'] = punter['third_sp']
data['third_dp'] = punter['third_dp']
data.head(2)
```
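On a toy frame, the median imputation above behaves like this (a sketch; the column names merely mirror the real ones):

```
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

toy = pd.DataFrame({"second_sp": [80.0, np.nan, 443.0],
                    "second_dp": [np.nan, 22.0, 22.0]})
imputer = SimpleImputer(missing_values=np.nan, strategy="median")
filled = pd.DataFrame(imputer.fit_transform(toy), columns=toy.columns)
print(filled)  # NaNs replaced by the per-column medians, 261.5 and 22.0
```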
# Excluding near-zero-variance variables
## OneHotEncoder
```
data_categoric = data[data.select_dtypes(include=['object']).columns.to_list()]
one_hot = OneHotEncoder(drop="first")
one_hot.fit_transform(data_categoric)
one_hot.categories_
dataDummy = pd.get_dummies(data_categoric)
dataDummy.head()
```
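Note that `pd.get_dummies` without `drop_first=True` keeps every level, while the encoder fitted above drops the first — the two routes only agree when configured the same way. A toy sketch:

```
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

toy = pd.DataFrame({"proto": ["tcp", "udp", "tcp", "icmp"]})
dummies = pd.get_dummies(toy, drop_first=True)  # pandas route
encoded = OneHotEncoder(drop="first").fit_transform(toy[["proto"]]).toarray()
print(dummies.shape, encoded.shape)  # both (4, 2): one of the three levels dropped
```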
# Multicollinearity
```
from statsmodels.stats.outliers_influence import variance_inflation_factor
multicolinialidad = pd.concat([data, dataDummy], axis = 1)
multicolinialidad.columns
multicolinialidad = multicolinialidad.drop(['Type_not_mine','Type_mine','Type','First_Protocol'],axis = 1)
vif_data = pd.DataFrame()
vif_data["feature"] = multicolinialidad.columns
vif_data["VIF"] = [variance_inflation_factor(multicolinialidad.values, i) for i in range(len(multicolinialidad.columns))]
vif_data
```
## Standardization
```
data_numeric = data[data.select_dtypes(include=['float64', 'int64']).columns.to_list()]
preprocessor = ColumnTransformer([
('scale', StandardScaler(), data_numeric.columns),
], remainder='passthrough')
values = preprocessor.fit_transform(data_numeric)
values
data_standarizada = pd.DataFrame(values, columns = data_numeric.columns)
data_standarizada.head(1)
```
### Concatenating the datasets
```
data_p = pd.concat([data_standarizada, dataDummy], axis = 1)
data_p.columns
data_p = data_p.drop('Type_not_mine',axis = 1)
data_p.to_csv('', index=False)
```
# Non-Pure Data
```
data_n = pd.read_csv('').drop(['Name'],axis = 1)
data_n.head(1)
data_n = data_n.drop(['Second_Protocol','Third_Protocol'], axis = 1)
```
## Identifying null values
```
data_n.isnull().sum()
```
## Imputing values
```
punter = pd.concat([data_n['second_sp'],data_n['third_sp'],data_n['second_dp'],data_n['third_dp']], axis = 1)
imputer = SimpleImputer(missing_values = np.nan, strategy = 'median')
values = imputer.fit_transform(punter)
values = pd.DataFrame(values, columns = punter.columns)
data_n['second_sp'] = values['second_sp']
data_n['third_sp'] = values['third_sp']
data_n['second_dp'] = values['second_dp']
data_n['third_dp'] = values['third_dp']
```
## OneHotEncoder
```
data_categoric = data_n.select_dtypes(['object'])
data_categoric.columns
one_hot = OneHotEncoder()
one_hot.fit_transform(data_categoric)
dataDummy = pd.get_dummies(data_categoric)
dataDummy.head()
```
## Standardization
```
data_numeric = data_n.select_dtypes(['int64','float64'])
preprocessor = ColumnTransformer([
('scale', StandardScaler(), data_numeric.columns),
], remainder='passthrough')
values = preprocessor.fit_transform(data_numeric)
data_estandarizada = pd.DataFrame(values,columns= data_numeric.columns)
data_estandarizada.head(1)
```
### Concatenating the datasets
```
data_n = pd.concat([data_estandarizada,dataDummy],axis = 1)
data_n.head()
data_n.to_csv('', index = False)
```
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# Challenge Notebook
## Problem: Given sorted arrays A, B, merge B into A in sorted order.
* [Constraints](#Constraints)
* [Test Cases](#Test-Cases)
* [Algorithm](#Algorithm)
* [Code](#Code)
* [Unit Test](#Unit-Test)
* [Solution Notebook](#Solution-Notebook)
## Constraints
* Does A have enough space for B?
* Yes
* Can the inputs have duplicate array items?
* Yes
* Can we assume the inputs are valid?
* No
* Does the inputs also include the actual size of A and B?
* Yes
* Can we assume this fits memory?
* Yes
## Test Cases
* A or B is None -> Exception
* index of last A or B < 0 -> Exception
* A or B is empty
* General case
* A = [1, 3, 5, 7, 9, None, None, None]
* B = [4, 5, 6]
* A = [1, 3, 4, 5, 5, 6, 7, 9]
## Algorithm
Refer to the [Solution Notebook](http://nbviewer.jupyter.org/github/donnemartin/interactive-coding-challenges/blob/master/sorting_searching/merge_into/merge_into_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
## Code
```
class Array(object):

    def merge_into(self, source, dest, source_end_index, dest_end_index):
        result = []
        if source is None or dest is None:
            raise TypeError
        if source_end_index < 0 or dest_end_index < 0:
            raise ValueError
        s_idx = 0
        e_idx = 0
        while s_idx < source_end_index and e_idx < dest_end_index:
            if source[s_idx] < dest[e_idx]:
                result.append(source[s_idx])
                s_idx += 1
            else:
                result.append(dest[e_idx])
                e_idx += 1
        while s_idx < source_end_index:
            result.append(source[s_idx])
            s_idx += 1
        while e_idx < dest_end_index:
            result.append(dest[e_idx])
            e_idx += 1
        return result
```
## Unit Test
**The following unit test is expected to fail until you solve the challenge.**
```
# %load test_merge_into.py
import unittest


class TestArray(unittest.TestCase):

    def test_merge_into(self):
        array = Array()
        self.assertRaises(TypeError, array.merge_into, None, None, None, None)
        self.assertRaises(ValueError, array.merge_into, [1], [2], -1, -1)
        a = [1, 2, 3]
        self.assertEqual(array.merge_into(a, [], len(a), 0), [1, 2, 3])
        a = [1, 2, 3]
        self.assertEqual(array.merge_into(a, [], len(a), 0), [1, 2, 3])
        a = [1, 3, 5, 7, 9, None, None, None]
        b = [4, 5, 6]
        expected = [1, 3, 4, 5, 5, 6, 7, 9]
        self.assertEqual(array.merge_into(a, b, 5, len(b)), expected)
        print('Success: test_merge_into')


def main():
    test = TestArray()
    test.test_merge_into()


if __name__ == '__main__':
    main()
```
## Solution Notebook
Review the [Solution Notebook]() for a discussion on algorithms and code solutions.
```
import unittest
from decimal import Decimal
import age
resultHandler = age.newResultHandler()
def evalExp(exp):
    value = resultHandler.parse(exp)
    print(type(value), "|", exp, " --> ", value)
mapStr = '{"name": "Smith", "num":123, "yn":true, "bigInt":123456789123456789123456789123456789::numeric}'
arrStr = '["name", "Smith", "num", 123, "yn", true, 123456789123456789123456789123456789.8888::numeric]'
strStr = '"abcd"'
intStr = '1234'
floatStr = '1234.56789'
numericStr1 = '12345678901234567890123456789123456789.789::numeric'
numericStr2 = '12345678901234567890123456789123456789::numeric'
boolStr = 'true'
evalExp(mapStr)
evalExp(arrStr)
evalExp(strStr)
evalExp(intStr)
evalExp(floatStr)
evalExp(numericStr1)
evalExp(numericStr2)
evalExp(boolStr)
evalExp('-6.45161290322581e+46')
evalExp('-123456789.99::numeric')
evalExp('-6.45161290322581e+46::numeric')
evalExp('1234')
evalExp('NaN')
evalExp('-Infinity')
evalExp('Infinity')
vertexExp = '''{"id": 2251799813685425, "label": "Person",
"properties": {"name": "Smith", "numInt":123, "numFloat": 384.23424,
"bigInt":123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789::numeric,
"bigFloat":123456789123456789123456789123456789.12345::numeric,
"yn":true, "nullVal": null}}::vertex'''
vertex = age.parseAgeValue(vertexExp)
print(type(vertex.id), vertex.id)
print(type(vertex.label), vertex.label)
print(type(vertex["name"]), vertex["name"])
print(type(vertex["numInt"]), vertex["numInt"])
print(type(vertex["numFloat"]), vertex["numFloat"])
print(type(vertex["bigInt"]), vertex["bigInt"])
print(type(vertex["bigFloat"]), vertex["bigFloat"])
print(type(vertex["yn"]), vertex["yn"])
print(type(vertex["nullVal"]), vertex["nullVal"])
pathExp = '''[{"id": 2251799813685425, "label": "Person", "properties": {"name": "Smith"}}::vertex,
{"id": 2533274790396576, "label": "workWith", "end_id": 2251799813685425, "start_id": 2251799813685424,
"properties": {"weight": 3, "bigFloat":123456789123456789123456789.12345::numeric}}::edge,
{"id": 2251799813685424, "label": "Person", "properties": {"name": "Joe"}}::vertex]::path'''
path = age.parseAgeValue(pathExp)
vertexStart = path[0]
edge = path[1]
vertexEnd = path[2]
print(type(vertexStart.id), vertexStart.id)
print(type(vertexStart.label), vertexStart.label)
print(type(vertexStart["name"]), vertexStart["name"])
print(type(edge.id), edge.id)
print(type(edge.label), edge.label)
print(type(edge["weight"]), edge["weight"])
print(type(edge["bigFloat"]), edge["bigFloat"])
print(type(vertexEnd.id), vertexEnd.id)
print(type(vertexEnd.label), vertexEnd.label)
print(type(vertexEnd["name"]), vertexEnd["name"])
```
## More on Lists
### List slicing
*Slicing* uses the bracket operator (`[]`) to copy a *slice* out of a list. The syntax is `lst[start:stop:step]`. Every parameter is optional. The defaults are equivalent to writing `lst[0:len(lst):1]`.
Copy of whole list:
```
a_list = [1, 2, 'a', 'string', 3.14159, True, "red", 3]
new_list = a_list[:] # new_list now a copy of a_list
# same as new_list = a_list[0:len(a_list):1]
print("new_list:", new_list)
a_list[0] = 999
print("a_list:", a_list)
print("new_list:", new_list)
```
Copy of list from 3 on:
```
print(a_list)
new_list = a_list[3:]
print(new_list)
```
Copy of list up to 4:
```
new_list = a_list[:4]
print(new_list)
```
We can also index from the back of the list, using negative indices:
```
print(a_list[-3:-1])
print("a_list:", a_list)
print("[1:8:3]:", a_list[1:8:3])
print("[::2]:", a_list[::2])
```
### `del`
- `pop()` takes items off the back of a list.
- `remove()` deletes items by value.
So how do we delete, say, the item at index 5 in a list? The answer is `del`.
```
lst = [0, 2, 4, 6, 8, 10, 12, 14, 16]
del lst[5]
print(lst)
del lst[6:8] # we can use a range
print(lst)
```
### `min` and `max`
We can get the minimum and maximum values in a list with `min()` and `max()`:
```
min(lst)
max(lst)
```
Let's write a version of `max()` ourselves to see how it works:
```
def our_max(lst):
"""Return max value from a list."""
if not lst:
return None
this_max = lst[0]
for i in range(1, len(lst)):
if lst[i] > this_max:
this_max = lst[i]
return this_max
print(our_max(lst))
```
What happens if we try `min()` or `max()` on a list of mixed types? Let's find out!
```
mixed_list = [0, -2.34, 'abc', 7, None, 2.718]
min(mixed_list)
```
### Sorting lists
Lists have a `sort()` method in Python. It sorts the list in place, and returns `None`:
```
lst = [10, 1, 22, 3, 1.4, 5, 66, 77, 8]
print(lst.sort())
print(lst)
print(lst.sort(reverse=True))
print(lst)
```
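By contrast, the built-in `sorted()` function returns a new sorted list and leaves the original untouched:

```
lst = [10, 1, 22, 3]
new_lst = sorted(lst)             # builds and returns a new list
print(new_lst)                    # [1, 3, 10, 22]
print(lst)                        # [10, 1, 22, 3] -- original unchanged
print(sorted(lst, reverse=True))  # [22, 10, 3, 1]
```

Use `lst.sort()` when you want to sort in place; use `sorted(lst)` when you need to keep the original order around.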
### Lists and strings
We can take a list of strings and join them into a single string:
```
words = ['These', 'are', 'some',
'words', 'in', 'a', 'list']
sentence = ' '.join(words)
print("Type of words:", type(words))
print("Type of sentence:", type(sentence))
print("id words:", id(words), "; id sentence:",
id(sentence))
print(sentence)
sentence2 = sentence
print("id sentence2:", id(sentence2))
print("id sentence:", id(sentence))
print("sen2 == sen ?", sentence2 == sentence)
print("sen2 is sen ?", sentence2 is sentence)
sentence3 = ' '.join(words)
print("id sentence2:", id(sentence2))
print("id sentence3:", id(sentence3))
print("sen2 == sen3 ?", sentence2 == sentence3)
print("sen2 is sen3 ?", sentence2 is sentence3)
```
We can also do the opposite operation, and separate a string into a list of strings:
```
csv_line = "Monday,45.3,76.2,.1,32"
fields = csv_line.split(",")
print(fields)
```
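Note the difference between splitting on an explicit separator and the default whitespace split: `split()` with no argument splits on any run of whitespace and drops empty strings, while `split(',')` preserves empty fields:

```
print("a  b\t c".split())    # ['a', 'b', 'c']
print("a,,b".split(','))     # ['a', '', 'b']
```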
### A list as a parameter to a function which changes the list
Let's randomly replace 3 elements in a list with 'x'.
This function modifies the list *in-place* and does not return it.
```
import random
def random_list_changer(lst):
for loop_index in range(3):
# may overwrite same location!
rand_index = random.randrange(0, len(lst))
print("index =", rand_index)
lst[rand_index] = 'x'
def main():
a_list = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
random_list_changer(a_list)
print(a_list) # Note that the list has changed!
main()
```
#### When are objects actually the same object?
Two ints:
```
an_int = 257
my_int = 257
# an_int = 6
# my_int = 8
print("my_int == an_int:", my_int == an_int)
print(id(an_int), id(my_int))
print("an int is my int?", an_int is my_int)
```
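The answer here depends on the Python implementation: CPython caches small integers (−5 through 256) as singletons, so identity tests on small ints succeed, while larger ints created at runtime are distinct objects. (This is a CPython implementation detail, not a language guarantee.) A sketch using `int()` to make sure the values are built at runtime rather than folded into shared constants:

```
small_a, small_b = int('100'), int('100')
big_a, big_b = int('257'), int('257')
print(small_a is small_b)  # True in CPython: small ints are cached
print(big_a is big_b)      # False: two distinct int objects
```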
Two lists:
```
a_list = [1, 2, 3, 4, 5]
b_list = a_list
# a_list = "Hello"
print(id(a_list), id(b_list))
print(a_list is b_list)
```
Let's change an item in a_list and see what happens:
```
a_list[3] = 10
print("a_list =", a_list, "; b_list =", b_list)
```
Taking a list slice creates a new list!
```
c_list = [2, 4, 6, 8]
d_list = c_list[:] # list copy via slice
print(d_list)
print(c_list == d_list)
print(c_list is d_list)
print(id(c_list), id(d_list))
c_list[0] = 'Hello'
print("c_list =", c_list, "; d_list =", d_list)
```
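One caveat worth knowing: a slice copy is *shallow*. The new outer list is a distinct object, but any nested objects are shared between the two copies:

```
nested = [[1, 2], [3, 4]]
shallow = nested[:]
print(shallow is nested)        # False: distinct outer lists
print(shallow[0] is nested[0])  # True: inner lists are shared
shallow[0].append(99)
print(nested)                   # [[1, 2, 99], [3, 4]] -- change shows through

import copy
deep = copy.deepcopy(nested)    # copies nested objects too
deep[0].append(-1)
print(nested[0])                # [1, 2, 99] -- unaffected by the deep copy
```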
# Use Spark to predict credit risk with `ibm-watson-machine-learning`
This notebook introduces commands for model persistence to the Watson Machine Learning repository, model deployment, and scoring.
Some familiarity with Python is helpful. This notebook uses Python 3.7 and Apache® Spark 2.4.
You will use the **German Credit Risk** dataset.
## Learning goals
The learning goals of this notebook are:
- Load a CSV file into an Apache® Spark DataFrame.
- Explore data.
- Prepare data for training and evaluation.
- Persist a pipeline and model in Watson Machine Learning repository from tar.gz files.
- Deploy a model for online scoring using the Watson Machine Learning API.
- Score sample scoring data using the Watson Machine Learning API.
- Explore and visualize prediction result using the plotly package.
## Contents
This notebook contains the following parts:
1. [Set up](#setup)
2. [Load and explore data](#load)
3. [Persist model](#persistence)
4. [Predict locally](#visualization)
5. [Deploy and score in a Cloud](#scoring)
6. [Clean up](#cleanup)
7. [Summary and next steps](#summary)
<a id="setup"></a>
## 1. Set up the environment
Before you use the sample code in this notebook, you must perform the following setup tasks:
- Create a <a href="https://console.ng.bluemix.net/catalog/services/ibm-watson-machine-learning/" target="_blank" rel="noopener no referrer">Watson Machine Learning (WML) Service</a> instance (a free plan is offered and information about how to create the instance can be found <a href="https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html?context=analytics" target="_blank" rel="noopener no referrer">here</a>).
### Connection to WML
Authenticate the Watson Machine Learning service on IBM Cloud. You need to provide platform `api_key` and instance `location`.
You can use [IBM Cloud CLI](https://cloud.ibm.com/docs/cli/index.html) to retrieve platform API Key and instance location.
API Key can be generated in the following way:
```
ibmcloud login
ibmcloud iam api-key-create API_KEY_NAME
```
From the command output, copy the value of `api_key`.
Location of your WML instance can be retrieved in the following way:
```
ibmcloud login --apikey API_KEY -a https://cloud.ibm.com
ibmcloud resource service-instance WML_INSTANCE_NAME
```
From the command output, copy the value of `location`.
**Tip**: Your `Cloud API key` can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam#/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key and paste it below. You can also get a service specific url by going to the [**Endpoint URLs** section of the Watson Machine Learning docs](https://cloud.ibm.com/apidocs/machine-learning). You can check your instance location in your <a href="https://console.ng.bluemix.net/catalog/services/ibm-watson-machine-learning/" target="_blank" rel="noopener no referrer">Watson Machine Learning (WML) Service</a> instance details.
You can also get service specific apikey by going to the [**Service IDs** section of the Cloud Console](https://cloud.ibm.com/iam/serviceids). From that page, click **Create**, then copy the created key and paste it below.
**Action**: Enter your `api_key` and `location` in the following cell.
```
api_key = 'PASTE YOUR PLATFORM API KEY HERE'
location = 'PASTE YOUR INSTANCE LOCATION HERE'
wml_credentials = {
"apikey": api_key,
"url": 'https://' + location + '.ml.cloud.ibm.com'
}
```
### Install and import the `ibm-watson-machine-learning` package
**Note:** `ibm-watson-machine-learning` documentation can be found <a href="http://ibm-wml-api-pyclient.mybluemix.net/" target="_blank" rel="noopener no referrer">here</a>.
```
!pip install -U ibm-watson-machine-learning
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)
```
### Working with spaces
First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use the [Deployment Spaces Dashboard](https://dataplatform.cloud.ibm.com/ml-runtime/spaces?context=cpdaas) to create one.
- Click New Deployment Space
- Create an empty space
- Select Cloud Object Storage
- Select Watson Machine Learning instance and press Create
- Copy `space_id` and paste it below
**Tip**: You can also use SDK to prepare the space for your work. More information can be found [here](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/instance-management/Space%20management.ipynb).
**Action**: Assign space ID below
```
space_id = 'PASTE YOUR SPACE ID HERE'
```
You can use the `list` method to print all existing spaces.
```
client.spaces.list(limit=10)
```
To be able to interact with all resources available in Watson Machine Learning, you need to set the **space** you will be using.
```
client.set.default_space(space_id)
```
### Test Spark
```
try:
from pyspark.sql import SparkSession
except:
print('Error: Spark runtime is missing. If you are using Watson Studio change the notebook runtime to Spark.')
raise
```
<a id="load"></a>
## 2. Load and explore data
In this section you will load the data as an Apache® Spark DataFrame and perform a basic exploration.
The CSV file for German Credit Risk is available in the same repository as this notebook. Load the file into an Apache® Spark DataFrame using the code below.
```
import os
from wget import download
sample_dir = 'spark_sample_model'
if not os.path.isdir(sample_dir):
os.mkdir(sample_dir)
filename = os.path.join(sample_dir, 'credit_risk_training.csv')
if not os.path.isfile(filename):
filename = download('https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/data/credit_risk/credit_risk_training.csv', out=sample_dir)
spark = SparkSession.builder.getOrCreate()
df_data = spark.read\
.format('org.apache.spark.sql.execution.datasources.csv.CSVFileFormat')\
.option('header', 'true')\
.option('inferSchema', 'true')\
.load(filename)
```
Explore the loaded data by using the following Apache® Spark DataFrame methods:
- print schema
- print top ten records
- count all records
```
df_data.printSchema()
```
As you can see, the data contains 21 fields. The Risk field is the one we would like to predict (the label).
```
df_data.show(n=5, truncate=False, vertical=True)
print("Number of records: " + str(df_data.count()))
```
As you can see, the data set contains 5000 records.
### 2.1 Prepare data
In this subsection you will split your data into train, test, and predict datasets.
```
splitted_data = df_data.randomSplit([0.8, 0.18, 0.02], 24)
train_data = splitted_data[0]
test_data = splitted_data[1]
predict_data = splitted_data[2]
print("Number of training records: " + str(train_data.count()))
print("Number of testing records : " + str(test_data.count()))
print("Number of prediction records : " + str(predict_data.count()))
```
As you can see our data has been successfully split into three datasets:
- The train data set, which is the largest group, is used for training.
- The test data set will be used for model evaluation and is used to test the assumptions of the model.
- The predict data set will be used for prediction.
<a id="persistence"></a>
## 3. Persist model
In this section you will learn how to store your pipeline and model in the Watson Machine Learning repository by using the Python client library.
**Note**: Apache® Spark 2.4 is required.
#### Save training data in your Cloud Object Storage
The `ibm-cos-sdk` library allows Python developers to manage Cloud Object Storage (COS).
```
import ibm_boto3
from ibm_botocore.client import Config
```
**Action**: Put credentials from Object Storage Service in Bluemix here.
```
cos_credentials = {
"apikey": "***",
"cos_hmac_keys": {
"access_key_id": "***",
"secret_access_key": "***"
},
"endpoints": "***",
"iam_apikey_description": "***",
"iam_apikey_name": "***",
"iam_role_crn": "***",
"iam_serviceid_crn": "***",
"resource_instance_id": "***"
}
connection_apikey = cos_credentials['apikey']
connection_resource_instance_id = cos_credentials["resource_instance_id"]
connection_access_key_id = cos_credentials['cos_hmac_keys']['access_key_id']
connection_secret_access_key = cos_credentials['cos_hmac_keys']['secret_access_key']
```
**Action**: Define the service endpoint we will use. <br>
**Tip**: You can find this information in the Endpoints section of your Cloud Object Storage instance's dashboard.
```
service_endpoint = 'https://s3.us.cloud-object-storage.appdomain.cloud'
```
You also need the IBM Cloud authorization endpoint to be able to create a COS resource object.
```
auth_endpoint = 'https://iam.cloud.ibm.com/identity/token'
```
We create a COS resource so that we can write data to Cloud Object Storage.
```
cos = ibm_boto3.resource('s3',
ibm_api_key_id=cos_credentials['apikey'],
ibm_service_instance_id=cos_credentials['resource_instance_id'],
ibm_auth_endpoint=auth_endpoint,
config=Config(signature_version='oauth'),
endpoint_url=service_endpoint)
```
Now you will create a bucket in COS and upload the training dataset for the model from **credit_risk_training.csv**.
```
from uuid import uuid4
bucket_uid = str(uuid4())
score_filename = "credit_risk_training.csv"
buckets = ["credit-risk-" + bucket_uid]
for bucket in buckets:
if not cos.Bucket(bucket) in cos.buckets.all():
print('Creating bucket "{}"...'.format(bucket))
try:
cos.create_bucket(Bucket=bucket)
except ibm_boto3.exceptions.ibm_botocore.client.ClientError as e:
print('Error: {}.'.format(e.response['Error']['Message']))
bucket_obj = cos.Bucket(buckets[0])
print('Uploading data {}...'.format(score_filename))
with open(filename, 'rb') as f:
bucket_obj.upload_fileobj(f, score_filename)
print('{} is uploaded.'.format(score_filename))
```
### Create connections to a COS bucket
```
datasource_type = client.connections.get_datasource_type_uid_by_name('bluemixcloudobjectstorage')
conn_meta_props= {
client.connections.ConfigurationMetaNames.NAME: "COS connection - spark",
client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: datasource_type,
client.connections.ConfigurationMetaNames.PROPERTIES: {
'bucket': buckets[0],
'access_key': connection_access_key_id,
'secret_key': connection_secret_access_key,
'iam_url': auth_endpoint,
'url': service_endpoint
}
}
conn_details = client.connections.create(meta_props=conn_meta_props)
```
**Note**: The above connection can be initialized alternatively with `api_key` and `resource_instance_id`.
The above cell can be replaced with:
```
conn_meta_props= {
client.connections.ConfigurationMetaNames.NAME: f"Connection to Database - {db_name} ",
client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: client.connections.get_datasource_type_uid_by_name(db_name),
client.connections.ConfigurationMetaNames.DESCRIPTION: "Connection to external Database",
client.connections.ConfigurationMetaNames.PROPERTIES: {
'bucket': bucket_name,
'api_key': cos_credentials['apikey'],
'resource_instance_id': cos_credentials['resource_instance_id'],
'iam_url': 'https://iam.cloud.ibm.com/identity/token',
'url': 'https://s3.us.cloud-object-storage.appdomain.cloud'
}
}
conn_details = client.connections.create(meta_props=conn_meta_props)
```
```
connection_id = client.connections.get_uid(conn_details)
```
### 3.1: Save pipeline and model
In this subsection you will learn how to save pipeline and model artifacts to your Watson Machine Learning instance.
**Download pipeline and model archives**
```
import os
from wget import download
sample_dir = 'spark_sample_model'
if not os.path.isdir(sample_dir):
os.mkdir(sample_dir)
pipeline_filename = os.path.join(sample_dir, 'credit_risk_spark_pipeline.tar.gz')
if not os.path.isfile(pipeline_filename):
pipeline_filename = download('https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/models/spark/credit-risk/model/credit_risk_spark_pipeline.tar.gz', out=sample_dir)
model_filename = os.path.join(sample_dir, 'credit_risk_spark_model.gz')
if not os.path.isfile(model_filename):
model_filename = download('https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/models/spark/credit-risk/model/credit_risk_spark_model.gz', out=sample_dir)
```
**Store pipeline and model**
To store your Spark model, you need to provide a training data reference; this allows the model schema to be read automatically.
```
training_data_references = [
{
"type": "connection_asset",
"connection": {
"id": connection_id,
},
"location": {
"bucket": buckets[0],
"file_name": score_filename,
},
"schema": {
"id": "training_schema",
"fields": [
{
"metadata": {},
"name": "CheckingStatus",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "LoanDuration",
"nullable": True,
"type": "integer"
},
{
"metadata": {},
"name": "CreditHistory",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "LoanPurpose",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "LoanAmount",
"nullable": True,
"type": "integer"
},
{
"metadata": {},
"name": "ExistingSavings",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "EmploymentDuration",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "InstallmentPercent",
"nullable": True,
"type": "integer"
},
{
"metadata": {},
"name": "Sex",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "OthersOnLoan",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "CurrentResidenceDuration",
"nullable": True,
"type": "integer"
},
{
"metadata": {},
"name": "OwnsProperty",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "Age",
"nullable": True,
"type": "integer"
},
{
"metadata": {},
"name": "InstallmentPlans",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "Housing",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "ExistingCreditsCount",
"nullable": True,
"type": "integer"
},
{
"metadata": {},
"name": "Job",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "Dependents",
"nullable": True,
"type": "integer"
},
{
"metadata": {},
"name": "Telephone",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "ForeignWorker",
"nullable": True,
"type": "string"
},
{
"metadata": {
"modeling_role": "target"
},
"name": "Risk",
"nullable": True,
"type": "string"
}
]
}
}
]
published_model_details = client.repository.store_model(
model=model_filename,
meta_props={
client.repository.ModelMetaNames.NAME:'Credit Risk model',
client.repository.ModelMetaNames.TYPE: "mllib_2.4",
client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: client.software_specifications.get_id_by_name('spark-mllib_2.4'),
client.repository.ModelMetaNames.TRAINING_DATA_REFERENCES: training_data_references,
client.repository.ModelMetaNames.LABEL_FIELD: "Risk",
},
training_data=train_data,
pipeline=pipeline_filename)
model_uid = client.repository.get_model_uid(published_model_details)
print(model_uid)
client.repository.get_model_details(model_uid)
```
Get saved model metadata from Watson Machine Learning.
**Tip**: Use `client.repository.ModelMetaNames.show()` to get the list of available props.
```
client.repository.ModelMetaNames.show()
```
### 3.2: Load model
In this subsection you will learn how to load a saved model back from the specified instance of Watson Machine Learning.
```
loaded_model = client.repository.load(model_uid)
```
For example, you can print the loaded object's type to make sure that the model has been loaded correctly.
```
print(type(loaded_model))
```
<a id="visualization"></a>
## 4. Predict locally
In this section you will learn how to score test data using loaded model.
### 4.1: Make local prediction using previously loaded model and test data
In this subsection you will score *predict_data* data set.
```
predictions = loaded_model.transform(predict_data)
```
Preview the results by calling the *show()* method on the predictions DataFrame.
```
predictions.show(5)
```
By tabulating a count, you can see how the predicted labels are distributed.
```
predictions.select("predictedLabel").groupBy("predictedLabel").count().show(truncate=False)
```
<a id="scoring"></a>
## 5. Deploy and score in a Cloud
In this section you will learn how to create online scoring and to score a new data record using `ibm-watson-machine-learning`.
**Note:** You can also use REST API to deploy and score.
For more information about REST APIs, see the [Swagger Documentation](https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Deployments/deployments_create).
### 5.1: Create online scoring endpoint
Now you can create an online scoring endpoint.
#### Create online deployment for published model
```
deployment_details = client.deployments.create(
model_uid,
meta_props={
client.deployments.ConfigurationMetaNames.NAME: "Credit Risk model deployment",
client.deployments.ConfigurationMetaNames.ONLINE: {}
}
)
deployment_details
```
Now, you can send new scoring records (new data) for which you would like to get predictions. To do that, execute the following sample code:
```
fields = ["CheckingStatus", "LoanDuration", "CreditHistory", "LoanPurpose", "LoanAmount", "ExistingSavings",
"EmploymentDuration", "InstallmentPercent", "Sex", "OthersOnLoan", "CurrentResidenceDuration",
"OwnsProperty", "Age", "InstallmentPlans", "Housing", "ExistingCreditsCount", "Job", "Dependents",
"Telephone", "ForeignWorker"]
values = [
["no_checking", 13, "credits_paid_to_date", "car_new", 1343, "100_to_500", "1_to_4", 2, "female", "none", 3,
"savings_insurance", 46, "none", "own", 2, "skilled", 1, "none", "yes"],
["no_checking", 24, "prior_payments_delayed", "furniture", 4567, "500_to_1000", "1_to_4", 4, "male", "none",
4, "savings_insurance", 36, "none", "free", 2, "management_self-employed", 1, "none", "yes"],
["0_to_200", 26, "all_credits_paid_back", "car_new", 863, "less_100", "less_1", 2, "female", "co-applicant",
2, "real_estate", 38, "none", "own", 1, "skilled", 1, "none", "yes"],
["0_to_200", 14, "no_credits", "car_new", 2368, "less_100", "1_to_4", 3, "female", "none", 3, "real_estate",
29, "none", "own", 1, "skilled", 1, "none", "yes"],
["0_to_200", 4, "no_credits", "car_new", 250, "less_100", "unemployed", 2, "female", "none", 3,
"real_estate", 23, "none", "rent", 1, "management_self-employed", 1, "none", "yes"],
["no_checking", 17, "credits_paid_to_date", "car_new", 832, "100_to_500", "1_to_4", 2, "male", "none", 2,
"real_estate", 42, "none", "own", 1, "skilled", 1, "none", "yes"],
["no_checking", 33, "outstanding_credit", "appliances", 5696, "unknown", "greater_7", 4, "male",
"co-applicant", 4, "unknown", 54, "none", "free", 2, "skilled", 1, "yes", "yes"],
["0_to_200", 13, "prior_payments_delayed", "retraining", 1375, "100_to_500", "4_to_7", 3, "male", "none", 3,
"real_estate", 37, "none", "own", 2, "management_self-employed", 1, "none", "yes"]
]
payload_scoring = {"input_data": [{"fields": fields, "values": values}]}
deployment_id = client.deployments.get_id(deployment_details)
client.deployments.score(deployment_id, payload_scoring)
```
<a id="cleanup"></a>
## 6. Clean up
If you want to clean up all created assets:
- experiments
- trainings
- pipelines
- model definitions
- models
- functions
- deployments
please follow up this sample [notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb).
<a id="summary"></a>
## 7. Summary and next steps
You successfully completed this notebook! You learned how to use Apache Spark machine learning as well as Watson Machine Learning for model creation and deployment. Check out our [Online Documentation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html?context=analytics) for more samples, tutorials, documentation, how-tos, and blog posts.
### Authors
**Amadeusz Masny**, Python Software Developer in Watson Machine Learning at IBM
Copyright © 2020, 2021 IBM. This notebook and its source code are released under the terms of the MIT License.
```
import csv
import numpy as np
from google.colab import drive
import pandas as pd
import json
import ast
import matplotlib.pyplot as plt
import collections
```
# Main Functions
```
def reverse_counts(counts, size=20):
"""
Reverses the keys of a dictionary (i.e. the characters in all the keys are reversed)
Parameters:
counts (dict): dictionary containing the measurement results
size (int): the number of qubits measured
Returns:
reverse_counts (dict): dictionary with keys in reverse order
"""
intermediate = {}
for key, value in counts.items():
rev_key = ""
for i in range(size):
rev_key = rev_key + key[size-i-1]
intermediate[key] = rev_key
reverse_counts = dict([(intermediate.get(k), v) for k, v in counts.items()])
return reverse_counts
def get_delegated_OTP_keys(permutation, x_key, z_key, num_qubits=14, syndrome_cnots = [[14, 0], [14, 2], [14, 4], [14, 6], [15, 1], [15, 2], [15, 5], [15, 6], [16, 3], [16, 4], [16, 5], [16, 6], [17, 7], [17, 9], [17, 11], [17, 13], [18, 8], [18, 9], [18, 12], [18, 13], [19, 10], [19, 11], [19, 12], [19, 13]]):
"""
Get delegated, post-processed, classical one-time pad keys for a program
Parameters:
permutation ([int]): permutation key
x_key ([int]): X part of the non-delegated one-time pad key
z_key ([int]): Z part of the non-delegated one-time pad key
num_qubits (int): number of data qubits
syndrome_cnots ([[int,int]]): all cnot gates used to derive error syndromes
Returns:
delegated_x_key ([int]): classically processed and delegated X part of one-time pad key
delegated_z_key ([int]): classically processed and delegated Z part of one-time pad key
"""
permuted_cnots = []
for gate in syndrome_cnots:
permuted_cnots.append([gate[0],permutation.index(gate[1])])
new_x_key = x_key[:]
new_z_key = z_key[:]
for cnot in permuted_cnots:
a = new_x_key[cnot[0]]
b = new_z_key[cnot[0]]
c = new_x_key[cnot[1]]
d = new_z_key[cnot[1]]
new_x_key[cnot[0]] = a
new_z_key[cnot[0]] = b+d
new_x_key[cnot[1]] = a+c
new_z_key[cnot[1]] = d
#hadamard operator delegation
for i in range(num_qubits,num_qubits + int(num_qubits/7*3)):
new_x_key[i], new_z_key[i] = new_z_key[i], new_x_key[i]
delegated_x_key = [i%2 for i in new_x_key]
delegated_z_key = [i%2 for i in new_z_key]
return delegated_x_key, delegated_z_key
def apply_OTP_and_unpermute(counts, permutation, x_key, z_key, num_qubits=14):
"""
Classical processing of quantum measurement outcomes
Includes applying the delegated one-time pad and unpermuting the circuit
Parameters:
counts (dict): all the measurement outcomes for a job
permutation([int]): permutation key
x_key ([int]): x gates part of one-time pad key
z_key ([int]): z gates part of one-time pad key
num_qubits (int): number of data qubits
Returns:
unpermuted_steane(dict): classically post processed measurement outcomes
"""
processed_results = {}
for key, value in counts.items():
new_key = ""
for i in range(num_qubits + int(num_qubits/7*3)):
val = int(key[i])
k2_val = int(x_key[i])
if k2_val == 1 and val == 0:
new_key = new_key + "1"
elif k2_val == 1 and val == 1:
new_key = new_key + "0"
else:
new_key = new_key + str(val)
processed_results[new_key] = value
unpermuted_steane = {}
for key, value in processed_results.items():
new_key = ""
for i in range(num_qubits):
new_key = new_key+ key[permutation.index(i)]
syndrome_keys=""
for j in range(int(num_qubits/7*3)):
syndrome_keys = syndrome_keys + key[-int(int(num_qubits/7*3)-j)]
new_key = new_key + syndrome_keys
# print(syndrome_keys)
# print(new_key)
unpermuted_steane[new_key] = value
return unpermuted_steane
def check_correctness(counts, codeword_combos, syndrome = '000000', num_shots = 8192, num_qubits = 14):
"""
Gets the correct measurement outcome rates of a job
Parameters:
counts (dict): all processed measurement outcomes
codeword_combos ([str]): all codewords
syndrome (str): the correct no error syndrome
num_shots (int): the number of times the computation was run
num_qubits (int): the number of data qubits
Returns:
bit_rate (float): rate of measurement outcomes that have no bit flips (i.e. no bit error)
phase_rate (float): rate of measurement outcomes that have no phase flips (i.e. no phase error)
all_rate (float): rate of measurement outcomes that have no bit or phase flips (i.e. no bit and phase error)
"""
bit_count = 0
phase_count = 0
all_count = 0
    for key, val in counts.items():
        if key[:num_qubits] in codeword_combos:
            # weight each outcome by the number of shots that produced it
            bit_count = bit_count + val
            if key[num_qubits:] == syndrome:
                all_count = all_count + val
        if key[num_qubits:] == syndrome:
            phase_count = phase_count + val
bit_rate = bit_count/num_shots
phase_rate = phase_count/num_shots
all_rate = all_count/num_shots
return bit_rate, phase_rate, all_rate
def get_average_rates(file_name, num_tests = 5, num_iterations= 10):
"""
Gets the average true positive and false positive rates for the different tests
For tests where the challenge input is equal to the password, the average true positive rate is found.
In all other cases, the average false positive is found.
Parameters:
file_name (str): the name of the file in which the rates for the individual rates were saved
num_tests (int): the number of different tests performed
num_iterations (int): the number of iterations each test was performed
Returns:
new_df (panda's DataFrame): contains the averages of all the tests
"""
try:
df = pd.read_csv(file_name)
    except Exception as err:
print("Error: ", err)
new_df = pd.DataFrame()
for i in range(num_tests):
avgs = df[i*num_iterations:(i+1)*num_iterations].mean()
new_df[str(i)] = avgs
return new_df
def get_average_rates_from_random_tests(file_name, start_index, end_index):
"""
Gets the average true positive and false positive rates for tests that sample random challenge inputs
For tests where the challenge input is equal to the password, the average true positive rate is found.
In all other cases, the average false positive is found.
Parameters:
file_name (str): the name of the file in which the rates for the individual rates were saved
start_index (int): the location of where random tests starts according to data ordered in file_name
end_index (int): the location of where random tests ends according to data ordered in file_name
Returns:
new_df (panda's DataFrame): contains the averages of the random tests
"""
try:
df = pd.read_csv(file_name)
except Error as err:
print("Error: ", err)
new_df = pd.DataFrame()
random_avgs = df[start_index:end_index].groupby(['is_p']).get_group(True).mean()
new_df["True Positive"] = random_avgs
random_avgs = df[start_index:end_index].groupby(['is_p']).get_group(False).mean()
new_df["False Positive"] = random_avgs
return new_df
```
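As a sanity check on the bit-order convention `reverse_counts` implements, the same reversal can be written as a one-line slice. This is a standalone sketch (equivalent only when every key has exactly `size` characters; the dictionary literal is hypothetical example data, not real measurement counts):

```
def reverse_counts_sketch(counts):
    # Reverse each outcome string (little-endian -> big-endian)
    return {key[::-1]: value for key, value in counts.items()}

print(reverse_counts_sketch({'011': 5, '100': 3}))
# {'110': 5, '001': 3}
```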
# User Defined Values
```
drive.mount('/content/drive')
# set location for retrieving all the measurement outcome results and information
info_file = "/content/drive/My Drive/res/stripped_info.csv"
# set location for saving all the individual calculated error rates (i.e. bit, phase, and both bit and phase combined errors)
save_file = "/content/drive/My Drive/res/individual_error_rates.csv"
df = pd.read_csv(info_file)
all_key1 = df.challenge_key_1.to_list()
all_key2 = df.challenge_key_2.to_list()
is_point = df.is_point.to_list()
fields = ['#', 'is_p','no_bit_flip_percentage', 'no_phase_flip_percentage', 'no_error_percentage']
stats = pd.DataFrame(columns=fields)
first_steane_codewords = ['0000000','1010101','0110011','1100110','0001111','1011010','0111100','1101001']
second_steane_codewords = ['0000000', '1110000', '1001100', '0111100', '0101010', '1011010', '1100110', '0010110', '1101001', '0011001', '0100101', '1010101', '1000011', '0110011', '0001111', '1111111']
# the codewords of our Steane encoded program
codeword_combos = [x+y for x in first_steane_codewords for y in second_steane_codewords]
```
# Calculate Error Rates
## Option 1: Calculating Rates From File
Calculate the true positive and false positive rates for all test results from a file containing all the raw counts.
```
# set location of data file containing a list of raw counts only
# format of file: "["{'00000000000000000000':8192}"]"
data = "/content/drive/My Drive/res/secondary/raw_counts_data.txt"
raw_data = ""
with open(data) as f:
raw_data = f.read()
raw_data = ast.literal_eval(raw_data)
index = 0
for x in raw_data:
raw = ast.literal_eval(x)
counts = reverse_counts(raw)
key1 = ast.literal_eval(all_key1[index])
key2 = ast.literal_eval(all_key2[index])
xkey = key2[0] + [0]*6
zkey = key2[1] + [0]*6
x_key, z_key = get_delegated_OTP_keys(key1, xkey, zkey)
processed_counts = apply_OTP_and_unpermute(counts, key1, x_key, z_key)
bit, phase, both = check_correctness(processed_counts, codeword_combos)
print(is_point[index], bit, phase, both)
stats.loc[index] = [index, is_point[index], bit, phase, both]
index += 1
stats.to_csv(save_file)
print(stats)
```
## Option 2: Calculating Rates from A Single Set of Measurement Outcomes
Calculate the true positive and false positive rates for all test results from a single job's measurement outcomes
```
# set the index of the job
index = 10
# set a single job's measurement counts
raw = {}
counts = reverse_counts(raw)
key1 = ast.literal_eval(all_key1[index])
key2 = ast.literal_eval(all_key2[index])
xkey = key2[0] + [0]*6
zkey = key2[1] + [0]*6
del_x_key, del_z_key = get_delegated_OTP_keys(key1, xkey, zkey)
processed_counts = apply_OTP_and_unpermute(counts, key1, del_x_key, del_z_key)
bit, phase, both = check_correctness(processed_counts, codeword_combos)
# stats.loc[index] = [index, is_point[index], bit, phase, both]
print(index, is_point[index], bit, phase, both)
```
# Calculate Average Error Rates
```
df = get_average_rates(save_file, num_tests = 5, num_iterations= 10)
print(df)
df = get_average_rates_from_random_tests(save_file, 50, 60)
print(df)
```
# Graphing Results Example
```
# no error phase syndrome
phase_syndrome = '000000'
# set the post-processed measurement outcomes
counts_dict = {}
num_qubits = 14
num_syndrome = len(phase_syndrome)  # syndrome length (6 bits) used when slicing keys below
d = collections.OrderedDict(sorted(counts_dict.items()))
count = 0
# set color of all the wrong measurement outcomes
colors = ['lightgray']*len(d)
patterns = ['']*len(d)
for key, val in d.items():
if phase_syndrome == key[-num_syndrome:]:
if key[:num_qubits] in codeword_combos:
# set color of all the right measurement outcomes
colors[count]= "black"
count += 1
x_vals = list(d.keys())
y_vals = list(d.values())
plt.figure(figsize=(20,14))
for i in range(len(d)):
plt.bar(x_vals[i], y_vals[i], color=colors[i])
plt.xticks(fontsize=18, rotation=90)
plt.yticks(fontsize=18)
plt.xlabel('Measurement Values', fontsize=25)
plt.ylabel('Probability', fontsize=25)
plt.title('Quantum Computer without Error Mitigation', fontsize=30)
plt.show()
```
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
```
# Images are numpy arrays
Images are represented in ``scikit-image`` using standard ``numpy`` arrays. This allows maximum inter-operability with other libraries in the scientific Python ecosystem, such as ``matplotlib`` and ``scipy``.
Let's see how to build a grayscale image as a 2D array:
```
import numpy as np
from matplotlib import pyplot as plt
random_image = np.random.random([500, 500])
plt.imshow(random_image, cmap='gray')
plt.colorbar();
```
The same holds for "real-world" images:
```
from skimage import data
coins = data.coins()
print('Type:', type(coins))
print('dtype:', coins.dtype)
print('shape:', coins.shape)
plt.imshow(coins, cmap='gray');
```
A color image is a 3D array, where the last dimension has size 3 and represents the red, green, and blue channels:
```
cat = data.chelsea()
print("Shape:", cat.shape)
print("Values min/max:", cat.min(), cat.max())
plt.imshow(cat);
```
These are *just NumPy arrays*. E.g., we can make a red square by using standard array slicing and manipulation:
```
cat[10:110, 10:110, :] = [255, 0, 0] # [red, green, blue]
plt.imshow(cat);
```
Images can also include transparent regions by adding a 4th dimension, called an *alpha layer*.
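For example, a minimal RGBA image can be built directly as a NumPy array (a hypothetical sketch; the channel order is red, green, blue, alpha):

```python
import numpy as np

# a hypothetical 100x100 RGBA image: solid red, opacity ramping left to right
rgba = np.zeros((100, 100, 4))
rgba[..., 0] = 1.0                     # red channel at full intensity
rgba[..., 3] = np.linspace(0, 1, 100)  # alpha: 0 (transparent) to 1 (opaque)
print(rgba.shape)                      # (100, 100, 4)
```

Passing `rgba` to `plt.imshow` would render the red fading in from the left against the figure background.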
### Other shapes, and their (possible) meanings
|Image type|Coordinates|
|:---|:---|
|2D grayscale|(row, column)|
|2D multichannel|(row, column, channel)|
|3D grayscale (or volumetric) |(plane, row, column)|
|3D multichannel|(plane, row, column, channel)|
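The table above can be illustrated with plain NumPy arrays (the sizes here are hypothetical):

```python
import numpy as np

# hypothetical arrays matching each row of the table above
examples = {
    '2D grayscale':    np.zeros((480, 640)),         # (row, column)
    '2D multichannel': np.zeros((480, 640, 3)),      # (row, column, channel)
    '3D grayscale':    np.zeros((64, 480, 640)),     # (plane, row, column)
    '3D multichannel': np.zeros((64, 480, 640, 3)),  # (plane, row, column, channel)
}
for name, arr in examples.items():
    print(f'{name}: shape={arr.shape}, ndim={arr.ndim}')
```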
## Displaying images using matplotlib
```
from skimage import data
img0 = data.chelsea()
img1 = data.rocket()
import matplotlib.pyplot as plt
f, (ax0, ax1) = plt.subplots(1, 2, figsize=(20, 10))
ax0.imshow(img0)
ax0.set_title('Cat', fontsize=18)
ax0.axis('off')
ax1.imshow(img1)
ax1.set_title('Rocket', fontsize=18)
ax1.set_xlabel(r'Launching position $\alpha=320$')
ax1.vlines([202, 300], 0, img1.shape[0], colors='magenta', linewidth=3, label='Side tower position')
ax1.plot([168, 190, 200], [400, 200, 300], color='white', linestyle='--', label='Side angle')
ax1.legend();
```
For more on plotting, see the [Matplotlib documentation](https://matplotlib.org/gallery/index.html#images-contours-and-fields) and [pyplot API](https://matplotlib.org/api/pyplot_summary.html).
## Data types and image values
In literature, one finds different conventions for representing image values:
```
0 - 255 where 0 is black, 255 is white
0 - 1 where 0 is black, 1 is white
```
``scikit-image`` supports both conventions--the choice is determined by the
data-type of the array.
E.g., here, I generate two valid images:
```
linear0 = np.linspace(0, 1, 2500).reshape((50, 50))
linear1 = np.linspace(0, 255, 2500).reshape((50, 50)).astype(np.uint8)
print("Linear0:", linear0.dtype, linear0.min(), linear0.max())
print("Linear1:", linear1.dtype, linear1.min(), linear1.max())
fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(15, 15))
ax0.imshow(linear0, cmap='gray')
ax1.imshow(linear1, cmap='gray');
```
The library is designed in such a way that any data-type is allowed as input,
as long as the range is correct (0-1 for floating point images, 0-255 for unsigned bytes,
0-65535 for unsigned 16-bit integers).
You can convert images between different representations by using ``img_as_float``, ``img_as_ubyte``, etc.:
```
from skimage import img_as_float, img_as_ubyte
image = data.chelsea()
image_ubyte = img_as_ubyte(image)
image_float = img_as_float(image)
print("type, min, max:", image_ubyte.dtype, image_ubyte.min(), image_ubyte.max())
print("type, min, max:", image_float.dtype, image_float.min(), image_float.max())
print()
print("231/255 =", 231/255.)
```
Your code would then typically look like this:
```python
def my_function(any_image):
float_image = img_as_float(any_image)
# Proceed, knowing image is in [0, 1]
```
We recommend using the floating point representation, given that
``scikit-image`` mostly uses that format internally.
## Image I/O
Mostly, we won't be using input images from the scikit-image example data sets. Those images are typically stored in JPEG or PNG format. Since scikit-image operates on NumPy arrays, *any* image reader library that provides arrays will do. Options include imageio, matplotlib, pillow, etc.
scikit-image conveniently wraps many of these in the `io` submodule, and will use whichever of the libraries mentioned above are installed:
```
from skimage import io
image = io.imread('../images/balloon.jpg')
print(type(image))
print(image.dtype)
print(image.shape)
print(image.min(), image.max())
plt.imshow(image);
```
We also have the ability to load multiple images, or multi-layer TIFF images:
```
import os
ic = io.ImageCollection(os.pathsep.join(['../images/*.png', '../images/*.jpg']))
print('Type:', type(ic))
ic.files
f, axes = plt.subplots(nrows=3, ncols=len(ic) // 3 + 1, figsize=(20, 5))
# subplots returns the figure and an array of axes
# we use `axes.ravel()` to turn these into a list
axes = axes.ravel()
for ax in axes:
ax.axis('off')
for i, image in enumerate(ic):
axes[i].imshow(image, cmap='gray')
axes[i].set_title(os.path.basename(ic.files[i]))
plt.tight_layout()
```
### Aside: `enumerate`
`enumerate` gives us each element in a container, along with its position.
```
animals = ['cat', 'dog', 'leopard']
for i, animal in enumerate(animals):
print('The animal in position {} is {}'.format(i, animal))
```
## <span class="exercize">Exercise: draw the letter H</span>
Define a function that takes as input an RGB image and a pair of coordinates (row, column), and returns a copy with a green letter H overlaid at those coordinates. The coordinates point to the top-left corner of the H.
The arms and strut of the H should have a width of 3 pixels, and the H itself should have a height of 24 pixels and width of 20 pixels.
Start with the following template:
```
def draw_H(image, coords, color=(0, 255, 0)):
out = image.copy()
out = ... # FIXME
return out
```
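One possible way to fill in the template (the strut's vertical placement is a design choice; the 3-pixel strokes and the 24-by-20 outline follow the spec above):

```python
import numpy as np

def draw_H(image, coords, color=(0, 255, 0)):
    """Overlay a green H whose top-left corner sits at coords (one possible sketch)."""
    out = image.copy()
    r, c = coords
    out[r:r+24, c:c+3] = color        # left arm: 24 px tall, 3 px wide
    out[r:r+24, c+17:c+20] = color    # right arm
    out[r+11:r+14, c:c+20] = color    # horizontal strut, roughly centred
    return out
```

Note that a negative column coordinate also works here, since NumPy slicing counts negative indices from the right edge.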
Test your function like so:
```
cat = data.chelsea()
cat_H = draw_H(cat, (50, -50))
plt.imshow(cat_H);
```
## <span class="exercize">Exercise: visualizing RGB channels</span>
Display the different color channels of the image along (each as a gray-scale image). Start with the following template:
```
# --- read in the image ---
image = plt.imread('../images/Bells-Beach.jpg')
# --- assign each color channel to a different variable ---
r = ... # FIXME: grab channel from image...
g = ... # FIXME
b = ... # FIXME
# --- display the image and r, g, b channels ---
f, axes = plt.subplots(1, 4, figsize=(16, 5))
for ax in axes:
ax.axis('off')
(ax_r, ax_g, ax_b, ax_color) = axes
ax_r.imshow(r, cmap='gray')
ax_r.set_title('red channel')
ax_g.imshow(g, cmap='gray')
ax_g.set_title('green channel')
ax_b.imshow(b, cmap='gray')
ax_b.set_title('blue channel')
# --- Here, we stack the R, G, and B layers again
# to form a color image ---
ax_color.imshow(np.stack([r, g, b], axis=2))
ax_color.set_title('all channels');
```
Now, take a look at the following R, G, and B channels. How would their combination look? (Write some code to confirm your intuition.)
```
from skimage import draw
red = np.zeros((300, 300))
green = np.zeros((300, 300))
blue = np.zeros((300, 300))
r, c = draw.disk((100, 100), 100)  # `draw.circle` in scikit-image < 0.19
red[r, c] = 1
r, c = draw.disk((100, 200), 100)
green[r, c] = 1
r, c = draw.disk((200, 150), 100)
blue[r, c] = 1
f, axes = plt.subplots(1, 3)
for (ax, channel) in zip(axes, [red, green, blue]):
ax.imshow(channel, cmap='gray')
ax.axis('off')
# Solution
```
## Exercise: Convert to grayscale ("black and white")
The *relative luminance* of an image is the intensity of light coming from each point. Different colors contribute differently to the luminance: it's very hard to have a bright, pure blue, for example. So, starting from an RGB image, the luminance is given by:
$$
Y = 0.2126R + 0.7152G + 0.0722B
$$
Use Python's matrix multiplication, `@`, to convert an RGB image to a grayscale luminance image according to the formula above.
Compare your results to that obtained with `skimage.color.rgb2gray`.
Change the coefficients to 1/3 (i.e., take the mean of the red, green, and blue channels, to see how that approach compares with `rgb2gray`).
```
from skimage import color, io, img_as_float
image = img_as_float(io.imread('../images/balloon.jpg'))
gray = color.rgb2gray(image)
my_gray = ... # FIXME
# --- display the results ---
f, (ax0, ax1) = plt.subplots(1, 2, figsize=(10, 6))
ax0.imshow(gray, cmap='gray')
ax0.set_title('skimage.color.rgb2gray')
ax1.imshow(my_gray, cmap='gray')
ax1.set_title('my rgb2gray')
```
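One possible replacement for the FIXME line: `@` contracts the image's channel axis with the coefficient vector, so each pixel becomes a weighted sum of its R, G, and B values (a sketch, checked on a synthetic image):

```python
import numpy as np

def rgb2gray_manual(image):
    # (rows, cols, 3) @ (3,) -> (rows, cols): weighted sum over the channel axis
    return image @ np.array([0.2126, 0.7152, 0.0722])

# a single pure-green pixel should map to the green coefficient
img = np.zeros((1, 1, 3))
img[0, 0, 1] = 1.0
print(rgb2gray_manual(img)[0, 0])  # 0.7152
```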
## Bonus
If you would like to watch a stand-up comedy act about spreadsheets, see Matt Parker's routine from the Festival of the Spoken Nerd DVD (about 13 minutes long): https://www.youtube.com/watch?v=UBX2QQHlQ_I
# Simulating noise on Amazon Braket
This notebook gives a detailed overview of noise simulations on Amazon Braket. Amazon Braket provides two noise simulators: a local noise simulator that you can use for free as part of the Braket SDK and a fully managed, high-performing noise simulator, DM1. Both simulators are based on the density matrix formalism. After this tutorial, you will be able to define noise channels, apply noise to new or existing circuits, and run those circuits on the Braket noise simulators.
### Table of contents:
* [Background](#Background)
* [Noise simulation based on the density matrix formalism](#density_matrix)
* [Quantum channel and Kraus representation](#quantum_channel)
* [General imports](#imports)
* [Quick start](#start)
* [Defining noise channels](#noise_channels)
* [Pre-defined noise channels](#pre-defined)
* [Defining custom noise channels](#self-defined)
* [Adding noise to a circuit](#apply_noise)
* [Build noisy circuits bottom-up](#apply_noise_directly)
* [Applying noise to existing circuits with global methods](#apply_noise_globally)
* [Applying gate noise to the circuit](#gate-noise)
* [Applying initialization noise to the circuit](#initialization-noise)
* [Applying readout noise to the circuit](#readout-noise)
* [Using both the direct and global methods to apply noise](#both)
* [Running a noisy circuit](#run)
## Background <a class="anchor" id="Background"></a>
### Noise simulation based on the density matrix formalism <a class="anchor" id="density_matrix"></a>
In an ideal case, a quantum state prepared by a noise-free circuit can be described by a state vector $|\psi\rangle$ -- we call it a 'pure state'. However, the presence of noise in realistic quantum devices will introduce classical uncertainty to the quantum state. For example, a bit flip error with 50% probability acting on a qubit flips the $|0\rangle$ state into either $|0\rangle$ or $|1\rangle$ with a 50-50 chance. Note that this is different from a Hadamard gate acting on $|0\rangle$: the latter results in a coherent superposition of $|0\rangle$ and $|1\rangle$, whereas the former is a classical, so-called mixture of $|0\rangle$ and $|1\rangle$. The most general way of describing a quantum state in the presence of noise is through the so-called density matrix: $\rho = \sum_i p_i|\psi_i\rangle\langle\psi_i|$. It can be understood as a classical mixture of a series of pure states $|\psi_i\rangle$ (each of which could be highly entangled), where $p_i$ is the probability of the state being in $|\psi_i\rangle$. Because the $p_i$ are classical probabilities they have to sum up to 1: $\sum_i p_i = 1$. The density matrix of a pure state is simply $\rho = |\psi\rangle\langle\psi|$ and, in the bit-flip example from above, the density matrix would be $\rho = 0.5|0\rangle\langle 0| + 0.5|1\rangle\langle 1|$.
The density matrix formalism is a very useful way to describe a noisy system with probabilistic outcomes. It gives an exact description of a quantum system going through a quantum channel with noise. Besides, the expectation value of an observable $\langle O\rangle$ can be easily calculated by $\rm{Tr}(O\rho)$, where "$\rm{Tr}$" is the trace operator.
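As a small worked example (plain NumPy, independent of the SDK), the expectation value of Pauli $Z$ in the 50/50 bit-flip mixture above vanishes, while it equals 1 for the pure state $|0\rangle$:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]])

# pure |0><0| versus the 50/50 classical mixture from the bit-flip example
rho_pure = np.array([[1, 0], [0, 0]])
rho_mixed = 0.5 * np.array([[1, 0], [0, 0]]) + 0.5 * np.array([[0, 0], [0, 1]])

print(np.trace(Z @ rho_pure))   # 1: |0> is fully Z-polarized
print(np.trace(Z @ rho_mixed))  # 0: the mixture has no net Z polarization
```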
### Quantum channel and Kraus representation <a class="anchor" id="quantum_channel"></a>
A [quantum channel](https://en.wikipedia.org/wiki/Quantum_channel) describes the time evolution of a quantum state which is expressed as a density matrix. For instance, to understand what a series of noisy gates does to the state of a quantum computer, you can apply a quantum channel corresponding to the different gate and noise operations.
Mathematically speaking, a quantum channel is a completely positive and trace-preserving (CPTP) linear map acting on a density matrix. Completely positive means the channel maps positive operators into positive operators (even if the operator is applied to part of a larger system) to make sure the density matrix describes a proper quantum state after the map. Trace-preserving means the trace of the density matrix remains unchanged during the mapping process (this is so that after the map the classical probabilities $p_i$ still sum to 1).
The so-called _Kraus representation_ is a commonly used representation for CPTP maps. [Kraus's theorem](https://en.wikipedia.org/wiki/Quantum_operation#Kraus_operators) states that any quantum operation acting on a quantum state $\rho$ can be expressed as a map $\varepsilon(\rho) = \sum_i K_i\rho K_i^{\dagger}$, and it satisfies: $\sum_i K_i^{\dagger}K_i = \mathbb{1}$, where $\mathbb{1}$ is the Identity operator.
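A quick numerical check of Kraus's theorem for the bit-flip channel (sketched with plain NumPy; here $K_0 = \sqrt{1-p}\,\mathbb{1}$ and $K_1 = \sqrt{p}\,X$):

```python
import numpy as np

p = 0.1
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
K = [np.sqrt(1 - p) * I2, np.sqrt(p) * X]

# completeness: sum_i K_i^dagger K_i must equal the identity
completeness = sum(k.conj().T @ k for k in K)
print(np.allclose(completeness, I2))  # True

# applying the channel to |0><0| gives the expected classical mixture
rho = np.array([[1.0, 0.0], [0.0, 0.0]])
rho_out = sum(k @ rho @ k.conj().T for k in K)
print(np.round(np.diag(rho_out), 3))  # [0.9 0.1]
```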
Let's get started and have a look how you can define and simulate noisy circuits on Amazon Braket.
## General imports <a class="anchor" id="imports"></a>
Let's begin with the usual imports.
```
from braket.circuits import Circuit, Observable, Gate, Noise
from braket.devices import LocalSimulator
from braket.aws import AwsDevice
import numpy as np
from scipy.stats import unitary_group
```
## Quick start <a class="anchor" id="start"></a>
Let's start with a simple example of running a noisy circuit on Amazon Braket.
```
# build a simple circuit
circ = Circuit().h(0).cnot(0,1)
# define a noise channel
noise = Noise.BitFlip(probability=0.1)
# add noise to every gate in the circuit
circ.apply_gate_noise(noise)
# select the local noise simulator
device = LocalSimulator('braket_dm')
# run the circuit on the local simulator
task = device.run(circ, shots = 1000)
# visualize the results
result = task.result()
measurement = result.measurement_counts
print('measurement results:', measurement)
```
Ideally, in the noise-free case, the circuit we defined prepares a Bell-state, and we would expect to measure only '00' and '11' outcomes. However, the presence of noise, in our case a bit flip error, means that sometimes we find the state in '01' and '10' instead.
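You can estimate what to expect analytically. In a simplified model (a plain-NumPy sketch that applies a single $p=0.1$ bit flip to each qubit of the ideal Bell state, rather than after every gate as `apply_gate_noise` does), the outcome probabilities are:

```python
import numpy as np

p = 0.1
X = np.array([[0, 1], [1, 0]])
I2 = np.eye(2)

# ideal Bell state density matrix for (|00> + |11>)/sqrt(2)
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi)

# apply an independent bit-flip channel to each qubit
for op in (np.kron(X, I2), np.kron(I2, X)):
    rho = (1 - p) * rho + p * (op @ rho @ op)

# diagonal of rho = probabilities of measuring 00, 01, 10, 11
print(np.round(np.diag(rho).real, 3))  # [0.41 0.09 0.09 0.41]
```

So roughly 18% of shots land on the 'wrong' outcomes '01' and '10' in this simplified model; the actual rate from `apply_gate_noise` differs because noise is inserted after every gate.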
The local simulator is suitable for fast prototyping on small circuits. If you want to run a noisy circuit with more than 10 to 12 qubits, we recommend using the managed simulator DM1. Using DM1, you can run circuits with up to 17 qubits, and benefit from parallel execution for a group of circuits. The code below shows an example of preparing a 13-qubit GHZ state in the presence of noise.
```
def ghz_circuit(n_qubits: int) -> Circuit:
"""
Function to return simple GHZ circuit ansatz. Assumes all qubits in range(0, n_qubits-1)
are entangled.
"""
circuit = Circuit().h(0)
for ii in range(0, n_qubits-1):
circuit.cnot(control=ii, target=ii+1)
return circuit
# build a 13-qubit GHZ circuit
circ = ghz_circuit(13)
# define a noise channel
noise = Noise.Depolarizing(probability=0.1)
# add noise to every gate in the circuit
circ.apply_gate_noise(noise)
# select the managed density matrix simulator DM1
device = AwsDevice("arn:aws:braket:::device/quantum-simulator/amazon/dm1")
# run the circuit on DM1
task = device.run(circ, shots = 10)
# visualize the results
result = task.result()
measurement = result.measurement_counts
print('measurement results:', measurement)
```
We now start exploring the detailed instructions and use cases of each step in the following sections.
## Defining noise channels <a class="anchor" id="noise_channels"></a>
To apply noise to a quantum circuit, first, you need to define the noise channel, which is defined in Kraus representation. We offer many commonly-used noise channels in the `Noise` class of the [Amazon Braket SDK](https://amazon-braket-sdk-python.readthedocs.io/en/latest/_apidoc/braket.circuits.html). In addition, you can also define your own custom noise channel as a list of Kraus operators.
### Pre-defined noise channels <a class="anchor" id="pre-defined"></a>
The pre-defined single-qubit noise channels include `BitFlip`, `PhaseFlip`, `Depolarizing`, `AmplitudeDamping`, `GeneralizedAmplitudeDamping`, `PhaseDamping` and `PauliChannel`.
The pre-defined two-qubit noise channels include `TwoQubitDepolarizing` and `TwoQubitDephasing`. The Kraus representations for all of the pre-defined channels are summarized in the following table.
__single-qubit noise channels__
| Noise channel | <div style="width:290px">Kraus representation</div> | Parameter |
|:-------------- |:-------------------------------------------------- |:------------|
| `BitFlip` | $(1-p)\rho$ + $pX\rho X$| $p$ is the probability of the bit flip noise. |
| `PhaseFlip` | $(1-p)\rho$ + $pZ\rho Z$| $p$ is the probability of the phase flip noise. |
| `Depolarizing` |$(1-p)\rho$ + $p/3(X\rho X$ + $Y\rho Y$ + $Z\rho Z)$|$p$ is the probability of the depolarizing noise (the three possible error cases share the same probability of $p/3$).|
|`AmplitudeDamping`|$K_0\rho K_0^\dagger$ + $K_1\rho K_1^\dagger$|$K_0=[1,0;0,\sqrt{1-\gamma}]$, $K_1=[0,\sqrt{\gamma};0,0]$, where $\gamma$ is the rate of amplitude damping.|
|`GeneralizedAmplitudeDamping`|$K_0\rho K_0^\dagger$ + $K_1\rho K_1^\dagger$ + $K_2\rho K_2^\dagger$ + $K_3 \rho K_3^\dagger$|$K_0=\sqrt{p}[1,0;0,\sqrt{1-\gamma}]$, $K_1=\sqrt{p}[0,\sqrt{\gamma};0,0]$, $K_2=\sqrt{1-p}[\sqrt{1-\gamma},0;0,1]$, $K_3=\sqrt{1-p}[0,0;\sqrt{\gamma},0]$, where $\gamma$ is the rate of amplitude damping, and $p$ is the probability of the system being excited by the environment [1].|
|`PhaseDamping`|$K_0\rho K_0^\dagger$ + $K_1 \rho K_1^\dagger$|$K_0=[1,0;0,\sqrt{1-\gamma}]$, $K_1=[0,0;0,\sqrt{\gamma}]$, where $\gamma$ is the rate of phase damping.|
|`PauliChannel`|$(1-p_x-p_y-p_z)\rho$ + $p_xX\rho X$ + $p_yY\rho Y$ + $p_zZ\rho Z$|$p_x$, $p_y$ and $p_z$ are probabilities for the Pauli X, Y, Z noise respectively.|
__two-qubit noise channels__
|<div style="width:160px">Noise channel</div>| <div style="width:290px">Kraus representation</div> | Parameter |
|:----------------------- |:-------------------------------------------------- |:------------|
| `TwoQubitDepolarizing`| $(1-p)\rho$ + $p/15(IX\rho IX$ + $IY\rho IY$ + $IZ\rho IZ$ + $XI\rho XI$ +....+ $ZZ\rho ZZ)$| $p$ is the probability of the two-qubit depolarizing noise (the 15 possible error combinations share the same probability of $p/15$).|
| `TwoQubitDephasing` | $(1-p)\rho$ + $p/3(IZ\rho IZ$ + $ZI\rho ZI$ + $ZZ\rho ZZ)$| $p$ is the probability of the two-qubit dephasing noise (the three possible error combinations share the same probability of $p/3$). |
The following code block takes the example of the bit flip noise channel: $\rho\rightarrow(1-p)\rho$ + $pX\rho X$, where $p$ corresponds to the `probability` parameter when defining the noise. This noise channel is equivalent to applying a bit flip error (applying an X gate) with probability $p$ and doing nothing with probability $1-p$. You can check the target qubit count and the Kraus operators of the noise channel defined.
```
# define a bit flip noise channel with probability = 0.1
noise = Noise.BitFlip(probability=0.1)
print('name: ', noise.name)
print('qubit count: ', noise.qubit_count)
print('Kraus operators: ')
for matrix in noise.to_matrix():
print(matrix, '\n')
```
Other pre-defined noise channels can be used in a similar way:
```
# define a phase flip noise channel
noise = Noise.PhaseFlip(probability=0.1)
# define a single-qubit depolarizing noise channel
noise = Noise.Depolarizing(probability=0.1)
# define a two-qubit depolarizing noise channel
noise = Noise.TwoQubitDepolarizing(probability=0.1)
# define a two-qubit dephasing noise channel
noise = Noise.TwoQubitDephasing(probability=0.1)
# define an amplitude damping noise channel
noise = Noise.AmplitudeDamping(gamma=0.1)
# define a generalized amplitude damping noise, where gamma is the amplitude damping rate, and
# probability is the probability of the system being excited by the environment.
noise = Noise.GeneralizedAmplitudeDamping(gamma=0.1, probability=0.1)
# define a phase damping noise channel
noise = Noise.PhaseDamping(gamma=0.1)
# define a Pauli noise channel
noise = Noise.PauliChannel(probX=0.1, probY=0.2, probZ=0.3)
```
### Defining custom noise channels <a class="anchor" id="self-defined"></a>
Apart from the pre-defined noise models, you can also define your own noise model by specifying a list of Kraus operators. The following code shows an example of defining a two-qubit Kraus channel with randomly generated unitary operators.
```
# create an arbitrary 2-qubit Kraus matrix
E0 = unitary_group.rvs(4) * np.sqrt(0.2)
E1 = unitary_group.rvs(4) * np.sqrt(0.8)
K = [E0, E1]
# define a two-qubit noise channel with Kraus operators
noise = Noise.Kraus(K)
```
Note that the noise channel you define needs to form a CPTP map. If the input matrices do not define a CPTP map, an error will be raised.
```
K_invalid = [np.random.randn(2,2), np.random.randn(2,2)]
try:
    noise = Noise.Kraus(K_invalid)
except ValueError as err:
    print(err)
```
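You can also check the trace-preserving condition yourself before handing the operators to `Noise.Kraus` (a hypothetical helper, not part of the SDK; the example unitaries here are fixed rather than random):

```python
import numpy as np

def is_trace_preserving(kraus_ops, atol=1e-10):
    """Check sum_i K_i^dagger K_i == identity (hypothetical helper)."""
    dim = kraus_ops[0].shape[0]
    total = sum(k.conj().T @ k for k in kraus_ops)
    return np.allclose(total, np.eye(dim), atol=atol)

# weighted unitaries with weights summing to 1 pass the check ...
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(is_trace_preserving([np.sqrt(0.2) * H, np.sqrt(0.8) * np.eye(2)]))  # True

# ... while an operator that inflates the trace fails it
print(is_trace_preserving([2 * np.eye(2)]))  # False
```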
## Adding noise to a circuit <a class="anchor" id="apply_noise"></a>
There are two methods to build a 'noisy' circuit. First, you can add noise to the circuit 'bottom-up', by using the noise operations in the same way as you would add a gate to the circuit. Second, you can use the methods `apply_gate_noise()`, `apply_initialization_noise()` and `apply_readout_noise()` to apply gate error, qubit initialization error and measurement error globally to existing circuits.
The direct method is more flexible, as you can apply noise anywhere in a circuit. But for an existing large circuit with many gates, you may want to use the global methods to conveniently apply noise to the circuit.
### Build noisy circuits bottom-up <a class="anchor" id="apply_noise_directly"></a>
Noise channels can be applied to the circuit the same way as gates. The following example shows how to apply single- and two-qubit noise channels directly to a circuit. The noise applied can be visualized in the circuit diagram with the `print()` method.
```
# apply depolarizing noise
circ = Circuit().x(0).x(1).cnot(0,1).depolarizing(1, probability=0.2).x(0).two_qubit_dephasing(target1=0, target2=1, probability=0.1)
print(circ)
```
### Applying noise to existing circuits with global methods<a class="anchor" id="apply_noise_globally"></a>
We offer three methods to apply noise globally to the circuit: `apply_gate_noise()`, `apply_initialization_noise()` and `apply_readout_noise()`. In the following, we explain in detail the usage of these three methods.
#### Applying gate noise to the circuit <a class="anchor" id="gate-noise"></a>
`apply_gate_noise()` is the method to conveniently apply gate-noise to the circuit. It accepts the following input parameters:
- __noise__: A single noise channel or a list of noise channels of `Noise` type.
- __target_unitary__: A single unitary gate in the form of a matrix in `numpy.ndarray` type. The noise will be applied to that unitary gate.
- __target_gates__: A single gate or a list of gates of `Gate` type. Note that `target_gates` and `target_unitary` cannot be provided at the same time. If neither `target_gates` nor `target_unitary` is given, noise will be applied to all the gates in the circuit.
- __target_qubits__: A single qubit index or a list of qubit indices. If not given, noise will be applied to all the qubits in the circuit.
When calling the method, the noise channel(s) will be applied right after all `target_gates` in `target_qubits`.
<div class="alert alert-block alert-info">
<b>Note</b> When you call this method, noise will be inserted right after the gate. If you would like to apply more than one noise operation, be aware of the order. Alternatively, you can provide a list of noise operations in one call, and the noise will be applied in forward order.
</div>
The code below is an example of applying phase damping noise to all gates in the circuit.
```
noise = Noise.PhaseDamping(gamma=0.1)
# the noise channel is applied to every gate in the circuit
circ = Circuit().x(0).bit_flip(0,0.1).cnot(0,1)
circ.apply_gate_noise(noise)
print('Noise is applied to every gate in the circuit:\n')
print(circ)
```
If you want to apply noise to some particular gates in the circuit, you can specify them as `target_gates`. Below is an example in which noise is applied to all X gates in the circuit.
<div class="alert alert-block alert-info">
<b>Note</b> The <code>target_gates</code> must be a <code>Gate</code> type. You can find all available gates with the following commands:
<code>
from braket.circuits import Gate
gate_set = [attr for attr in dir(Gate) if attr[0] in string.ascii_uppercase]
print(gate_set)
</code>
</div>
```
# the noise channel is applied to all the X gates in the circuit
circ = Circuit().x(0).y(1).cnot(0,2).x(1).z(2)
circ.apply_gate_noise(noise, target_gates = Gate.X)
print('Noise is applied to every X gate:\n')
print(circ)
```
If you define custom unitary gates as part of your circuit, and you want to apply noise to them, you can use the `target_unitary` criterion.
```
U1=unitary_group.rvs(4)
U2=unitary_group.rvs(4)
circ = Circuit().x(0).y(1).unitary((0,1),U1).cnot(0,2).x(1).z(2).unitary((1,2),U2)
circ.apply_gate_noise(noise, target_unitary = U2)
print('Noise is applied to U2:\n')
print(circ)
```
If you want to apply noise to some particular qubits in the circuit, you can specify them as `target_qubits`. Below is an example to apply noise to all gates in qubits 0 and 2 in the circuit.
```
# the noise channel is applied to every gate on qubits 0 and 2
circ = Circuit().x(0).y(1).cnot(0,2).x(1).z(2)
circ.apply_gate_noise(noise, target_qubits = [0,2])
print('Noise is applied to every gate in qubits 0 and 2:\n')
print(circ)
```
The `target_qubits` and `target_gates` criteria can be used at the same time. The code block below applies the gate noise to all X gates on qubit 0.
```
# the noise channel is applied to X gates on qubit 0
circ = Circuit().x(0).y(1).cnot(0,2).x(0).x(1).z(2)
circ.apply_gate_noise(noise, target_gates = Gate.X, target_qubits = 0)
print('Noise is applied to X gates on qubit 0:\n')
print(circ)
```
If a list of noise channels is provided, the first noise channel in the list will be applied first, then the second.
```
# define two noise channels
noise1 = Noise.Depolarizing(probability=0.1)
noise2 = Noise.BitFlip(probability=0.2)
# apply a list of noise channels
circ = Circuit().x(0).y(1).cnot(0,2).x(1).z(2)
circ.apply_gate_noise([noise1, noise2], target_qubits = [0,1])
print('Noise channels are applied to every gate in qubits 0 and 1:\n')
print(circ)
```
If you want to apply a multi-qubit noise channel to a gate, the number of qubits associated with the gate must equal the number of qubits defined by the noise channel; otherwise, the noise will not be applied. The example below shows this.
```
# define a two-qubit noise channel
noise = Noise.TwoQubitDephasing(probability=0.1)
# apply the noise to the circuit
circ = Circuit().x(0).y(1).cnot(0,2).x(1).z(2).swap(1,0)
circ.apply_gate_noise(noise)
print('The two-qubit noise channel is applied to all the two-qubit gates in the circuit:\n')
print(circ)
```
#### Applying initialization noise to the circuit <a class="anchor" id="initialization-noise"></a>
`apply_initialization_noise()` is the method to apply initialization noise to the circuit. This method applies the noise to every qubit at the beginning of the circuit. It accepts the following input parameters:
- __noise__: a single noise channel or a list of noise channels of `Noise` type.
- __target_qubits__: a single qubit index or a list of qubit indices. If not given, noise will be applied to all the qubits in the circuit.
If you want to apply the initialization noise to an empty circuit, you need to provide `target_qubits` to the method.
<div class="alert alert-block alert-info">
<b>Note</b> When you call this method, noise will be inserted at the very beginning of the circuit. If you would like to apply more than one noise operation, be aware of the order. Alternatively, you can provide a list of noise operations in one call, and the noise will be applied in forward order.
</div>
```
# define a noise channel
noise = Noise.Depolarizing(probability=0.1)
# the noise channel is applied as the initialization noise to the circuit
circ = Circuit().x(0).y(1).cnot(0,2).x(1).z(2)
circ.apply_initialization_noise(noise)
print('Initialization noise is applied to the circuit:\n')
print(circ)
```
If you want to apply a multi-qubit noise channel as the initialization noise and the number of qubits in the existing circuit doesn't match the number of qubits defined by the noise channel, you need to provide `target_qubits` with a number of qubits matching the noise channel.
```
# define a two-qubit noise channel
noise = Noise.TwoQubitDephasing(probability=0.1)
# the noise channel is applied as the initialization noise to the circuit
circ = Circuit().x(0).y(1).cnot(0,1).x(1).z(0)
circ.apply_initialization_noise(noise)
print('Initialization noise is applied to the circuit:\n')
print(circ)
```
#### Applying readout noise to the circuit <a class="anchor" id="readout-noise"></a>
The `apply_readout_noise()` method is very similar to the method for initialization noise, except that the noise channel is applied to every qubit at the end of the circuit. It accepts the following input parameters:
- __noise__: a single noise channel or a list of noise channels of type `Noise`.
- __target_qubits__: a single qubit index or a list of qubit indexes. If not given, the noise is applied to all the qubits in the circuit.
If you want to apply readout noise to an empty circuit, you need to provide `target_qubits` to the method.
<div class="alert alert-block alert-info">
<b>Note</b> When you call this method, noise is inserted at the very end of the circuit. If you would like to apply more than one noise operation, be aware of the order. You can also provide a list of noise operations in one call, and the noise will be applied in forward order.
</div>
```
# define a noise channel
noise = Noise.Depolarizing(probability=0.1)
# the noise channel is applied as the readout noise to the circuit
circ = Circuit().x(0).y(1).cnot(0,2).x(1).z(2)
circ.apply_readout_noise(noise)
print('Read-out noise is applied to the circuit:\n')
print(circ)
```
If you want to apply a multi-qubit noise channel as the readout noise and the number of qubits in the existing circuit doesn't match the number of qubits defined by the noise channel, you need to provide `target_qubits` with a number of qubits matching the noise channel.
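To build intuition for what readout noise does to the sampled results, here is a small SDK-independent sketch (plain Python, written for this note, not part of the Braket API): a single-qubit bit-flip readout error with flip probability `p` mixes the ideal outcome probabilities of measuring 0 and 1.

```python
def noisy_readout(p0, p1, p):
    """Apply a classical bit-flip readout error with flip probability p
    to ideal single-qubit outcome probabilities (p0, p1)."""
    return ((1 - p) * p0 + p * p1,
            (1 - p) * p1 + p * p0)

# An ideal |1> state read out with a 10% flip probability:
print(noisy_readout(0.0, 1.0, 0.1))  # -> (0.1, 0.9)
```

Note that the transformation preserves the total probability, so the noisy outcome distribution still sums to one.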
### Using both the direct and global methods to apply noise <a class="anchor" id="both"></a>
You can apply noise to the circuit using both the direct and global methods.
```
# define a noise channel
noise = Noise.PhaseFlip(probability=0.2)
# create a circuit and add noise directly to the circuit
circ = Circuit().x(0).y(1).bit_flip(0,0.1).cnot(1,2).two_qubit_depolarizing(1, 2, probability=0.1).z(2)
circ.apply_gate_noise(noise, target_qubits=0)
print('Noise channels are applied to the circuit:\n')
print(circ)
```
## Running a noisy circuit <a class="anchor" id="run"></a>
Running a noisy circuit is like running any other task on Amazon Braket. In the example below, we pick the local simulator to run our circuit.
With shots = 0, you can obtain the exact values of the probabilities, the density matrix, and the expectation values of the mixed state by attaching the corresponding result types. The reduced density matrix is also available if you provide the target qubits; if no target qubits are provided, the full density matrix is returned.
An example is shown in the code block below.
```
# define the noise channel
noise = Noise.AmplitudeDamping(gamma=0.1)
# create a circuit
circ = Circuit().x(0).y(1).cnot(0,2).x(1).z(2)
# apply the noise to qubits 0 and 2 in the circuit
circ.apply_gate_noise(noise, target_qubits = [0,2])
# attach the result types
circ.probability()
circ.expectation(observable = Observable.Z(),target=0)
# attach the density matrix with target=[0,1], and the reduced density matrix of qubits 0,1 will be returned
circ.density_matrix(target=[0,1])
print(circ)
# choose the noise simulator, which is called "braket_dm"
device = LocalSimulator("braket_dm")
# run the circuit
task = device.run(circ, shots=0)
result = task.result()
print('- Probability is: ')
print(result.values[0])
print('- Expectation value <Z_0> is: ')
print(result.values[1])
print('- The reduced Density Matrix is: ')
print(result.values[2])
```
With shots > 0, the results are sampled from the probability distributions. The result type `density_matrix` is not available for shots > 0.
The code below shows the expectation value $\langle Z_0\rangle$ and the probabilities of the mixed state collapsing into different states. Note that these values differ from the exact values obtained in the shots = 0 case.
```
# create a circuit
circ = Circuit().x(0).y(1).cnot(0,2).x(1).z(2)
circ.apply_gate_noise(noise, target_qubits = [0,2])
circ.probability()
circ.expectation(observable = Observable.Z(),target=0)
print(circ)
# run the circuit
task = device.run(circ, shots=100)
result = task.result()
print('- Probability is: ')
print(result.values[0])
print('- Expectation value <Z_0> is: ')
print(result.values[1])
```
## Reference
[1] Srikanth R, Banerjee S. "Squeezed generalized amplitude damping channel", Physical Review A, 2008, 77(1): 012318.
```
"""
IPython Notebook v4.0 para python 2.7
Librerías adicionales: Ninguna.
Contenido bajo licencia CC-BY 4.0. Código bajo licencia MIT. (c) Sebastian Flores.
"""
# Configuracion para recargar módulos y librerías
%reload_ext autoreload
%autoreload 2
from IPython.core.display import HTML
HTML(open("style/mat281.css", "r").read())
```
<header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" height="100px" align="left"/>
<img src="images/mat.png" alt="" height="100px" align="right"/>
</header>
<br/><br/><br/><br/><br/>
# MAT281
## Applications of Mathematics in Engineering
### Sebastián Flores
* sebastian.flores@usm.cl
* https://www.github.com/sebastiandres/mat281
## What content will we cover?
* Formalities
* 2014 projects
* 2015 projects
## Why will we cover this content?
* Formalities
* Review the course schedule and syllabus.
* 2014 projects
* See concrete examples of projects carried out by students.
* 2015 projects
* Learn about possible projects and start discussing potential groups and topics.
## 1- Formalities
* Course schedule:
* Monday, blocks 3-4, room P212
* Wednesday, blocks 3-4, room F265
* TA session schedule:
* Monday, blocks 13-14
* Course syllabus
## 2- 2014 projects
* **Data reconciliation**
* Application to mining flotation cells.
* **Sentiment analysis**
* Application to tweets.
* **Spatial interpolation with kriging**.
* Application to crime data.
* **Intrinsic value of a product**.
* Application to Amazon data.
#### 2- 2014 projects
## Spatial interpolation with kriging
### Application to crime data
### Diego Gajardo.
#### 2- 2014 projects
## Intrinsic value of a product.
### Application to Amazon data.
### Alberto Rubio
## 3- 2015 projects
* Defined projects
* Projects to be defined
## Defined projects
* Entity resolution
* Model Order Reduction
* Call Center Mathematics
* Electrical Trees
# How do we compare products on the internet?
<hr>
## Entity Resolution
#### Defined projects
#### Defined projects
## Entity Resolution
Well-known department store 1:
<img src="images/entityresolution1.png" alt="" width="800px" align="middle"/>
#### Defined projects
## Entity Resolution
Well-known department store 2:
<img src="images/entityresolution2.png" alt="" width="800px" align="middle"/>
#### Defined projects
## Entity Resolution
* Are the products above the same?
* How do we measure the differences between 2 arbitrary "items" from the stores?
* How do we efficiently match pairs between 2 "large" databases?
##### Topics
Natural language processing, optimization, machine learning.
#### Defined projects
## Entity Resolution
**AKA**: "record linkage", "list washing", "database merging", "data matching", ...
#### Formal definition:
Take two or more databases and generate equivalence classes between their records.
##### Informal definition:
Determine whether 2 products are the same, even though they have different "descriptions".
#### Defined projects
## Entity Resolution
##### "Golden" datasets
See the link: http://dbs.uni-leipzig.de/en/research/projects/object_matching/fever/benchmark_datasets_for_entity_resolution
* **Amazon-GoogleProducts**: e-commerce data.
* **Abt-Buy**: e-commerce data.
* **DBLP-ACM**: bibliographic data.
* **DBLP-Scholar**: bibliographic data.
##### Some references:
* Evaluating Entity Resolution Results. David Menestrina, Steven Euijong Whang, Hector Garcia-Molina.
* Disinformation Techniques for Entity Resolution. Steven Euijong Whang, Hector Garcia-Molina.
# How do we simplify a problem?
<hr>
## Model Order Reduction
#### Defined projects
#### Defined projects
## Model Order Reduction
<img src="images/mor.png" alt="" width="600px" align="middle"/>
#### Defined projects
## Model Order Reduction
**AKA**: "dimensionality reduction", "feature extraction"...
Reducing the "size" of a problem is valuable for computational simulation, optimization, uncertainty quantification, and sensitivity analysis.
#### Formal definition:
Reducing the computational size of simulations of large dynamical systems.
##### Informal definition:
Simplifying the problem by keeping the elements (or combinations of them) that contribute the most.
#### Defined projects
## Model Order Reduction
##### Some examples
* HyShot II scramjet
* Photovoltaic solar cell
* Airfoil shape optimization
##### Some references:
* Active Subspaces, Paul G. Constantine.
* A Comparison of Some Model Order Reduction Techniques, Rodney Slone, Jin-fa Lee, Robert Lee.
# How can we optimize a call center?
<hr>
## Call Center Mathematics
#### Defined projects
#### Defined projects
## Call Center Mathematics
<img src="images/callcenter.jpg" alt="" width="300px" align="right"/>
How can we optimize a call center in a scientific way?
* Stochastic optimization problems.
* Given certain shifts,
* what quality of service is delivered?
* Given a desired quality of service,
* how should the shifts be organized?
* Some well-known formulas: Erlang B, Erlang C.
##### Topics
Simulation, optimization, statistics, probability, industrial engineering.
#### Defined projects
## Call Center Mathematics
##### Datasets
* http://klipfolio.uservoice.com/knowledgebase/articles/81667-call-center-data-spreadsheet-
* http://ie.technion.ac.il/serveng/callcenterdata/index.html
##### Some references
* Ger Koole: founder of CCmath, a "call center optimization company". Several books and articles available on the web.
* Queueing Models of Call Centers: An Introduction. Ger Koole and Avishai Mandelbaum.
# Fractals in nature?
<hr>
## Electrical Trees
#### Defined projects
## Electrical Trees
#### Defined projects
<img src="images/arboleselectricos.png" alt="" width="800px" align="middle"/>
## Electrical Trees
#### Defined projects
<img src="images/arboleselectricos2.jpg" alt="" width="800px" align="middle"/>
#### Defined projects
## Electrical Trees
<img src="images/arboleselectricos3.jpg" alt="" width="800px" align="middle"/>
## Electrical Trees
#### Defined projects
* Why does electricity travel along a fractal path through the material?
* Which characteristics of the medium determine the characteristics of the electrical tree?
* Which dominates the electrical propagation: determinism or randomness?
##### Topics
Fractals, simulation, visualization, modeling.
#### Defined projects
## Electrical Trees
##### Datasets
* Department of Electrical Engineering, UTFSM.
##### Some references
* Three-Dimensional Imaging and Analysis of Electrical Trees, Roger Schurch.
* Fractal Analysis of Electrical Trees, K. Kudo.
## Projects to be defined
* Projects on Kaggle
* Projects on HeroX
* Government API
* Other APIs and data sources
* Other ideas
#### Projects to be defined
## Kaggle
<img src="images/kaggle.png" alt="" height="100px" align="right"/>
* http://www.kaggle.com/
* Platform for machine learning and data science competitions.
* Workflow:
* Download the data
* Select and tune a model
* Predict the results
#### Projects to be defined
## Kaggle
Current competitions:
* 1- **Springleaf Marketing Response**: Determine whether to send a direct mail piece to a customer.
* 2- **Western Australia Rental Prices**: Predict rental prices for properties across Western Australia.
* 3- **Rossmann Store Sales**: Forecast sales using store, promotion, and competitor data.
* 4- **Flavours of Physics**: Identify a rare decay phenomenon.
* 5- **Right Whale Recognition**: Identify endangered right whales in aerial photographs.
* 6- **How Much Did It Rain?**: Predict hourly rainfall using data from polarimetric radars.
#### Projects to be defined
## Kaggle
Current competitions:
* 7- **Ocean Ship Logbooks (1750-1850)**: Explore changing climatology with data from early shipping logs.
* 8- **Hillary Clinton's Emails**: Uncover the political landscape in Hillary Clinton's emails.
* 9- **Meta Kaggle**: The dataset on Kaggle, on Kaggle.
* 10- **What's Cooking?**: Use recipe ingredients to categorize the cuisine.
* 11- **San Francisco Crime Classification**: Predict the category of crimes that occurred in the city by the bay.
* 12- **Denoising Dirty Documents**: Remove noise from printed text.
#### Projects to be defined
## HeroX
<img src="images/herox.png" alt="" height="100px" align="right"/>
* http://www.herox.com/
* Platform for machine learning and data science competitions.
* Similar to Kaggle, but somewhat more diverse.
* Workflow:
* Download the data
* Select and tune a model
* Predict the results
#### Projects to be defined
## HeroX
Current challenges:
* 1- **Cognitive Computing Challenge**: Build a cognitive system that can read a document, then load a database with what it finds.
* 2- **Integra Gold Rush Challenge**: Integra Gold is offering $1 million to help lead us to the next big gold discovery in Val-d'Or, Canada.
* 3- **Sky for All: Air Mobility for 2035 and Beyond**: Envision the skies of 2035 and design an airspace system that allows vehicles to safely and efficiently navigate...
* 4- **The Lunar Initiatives Flash Art Competition**: Calling all writers and 2D artists! Submit your lunar artwork in the Lunar...
* 5- **Financial Revolutionaries Enhancing Education**: Educating For Financial Freedom
#### Projects to be defined
## HeroX
Current challenges:
* 6- **Operation Blue Sky**: Aboriginal Health Initiative: MNP presents an ideation challenge to improve health outcomes
* 7- **Autism Speaks House to Home Prize**: Autism Speaks is searching for belief-busting breakthroughs in housing and residential supports for...
* 8- **Raising the Bar on Healthcare**: A video challenge to share how Redirect Health raised the bar on healthcare and led to lowered costs...
* 9- **Clinical Trial Innovation Prize**: Producing a breakthrough that doubles the accrual rate of clinical trials in the diagnosis and treatment of cancer.
* 10- **CHIME National Patient ID Challenge**: Ensure 100% accuracy of every patient's health info to reduce preventable medical errors and...
### Interlude
## APIs
What is an API?
**Application Programming Interface**: an abstraction that allows third parties to consume data from a program or a website.
Classic examples:
* Twitter API: lets you find tweets by country, language, date, etc.
* Google Maps API: lets developers build applications on top of Google Maps data and maps.
Not all APIs are identical, but there are similarities and certain "best practices".
#### Projects to be defined
## datos.gob.cl API
<img src="images/APIgob.png" alt="" height="100px" align="right"/>
* http://recursos.datos.gob.cl/
* Platform with data about Chile: ministries and councils, from education to homicides.
* **Only the data is provided**: you have to formulate a question, propose a solution strategy, and implement it.
* Workflow:
* Download a database
#### Open projects
## datos.gob.cl API
Examples of recent databases:
* **Community Organizations**
* Source: Municipality of Máfil
* Categories: Communications, Community, Society, General
* Formats: xls
* Description: list of community organizations
* **Commercial Licenses Renewed, First Semester of 2015**
* Source: Municipality of Los Lagos
* Categories: Business, Community, Finance, Planning
* Formats: xlsx
* Publication date: September 30, 2015
* Description: list of commercial licenses renewed and in force for the first semester of 2015.
* **Mobile Subscribers**
* Source: Undersecretariat of Telecommunications
* Categories: Communications
* Formats: xlsx
* Publication date: September 30, 2015
* Description: mobile subscribers
#### Open projects
## datos.gob.cl API
Examples of the most downloaded datasets:
* **Daily Precipitation by Station**
* Source: General Directorate of Civil Aeronautics
* Categories: Business, Communications, Community, Culture
* Formats: csv, xml
* Publication date: September 8, 2015
* Description: shows the precipitation per period at each station across the country.
* **GROSS DOMESTIC PRODUCT OF CHILE**
* Source: Chilean Copper Commission
* Categories: Government
* Formats: html
* Publication date: July 9, 2013
* Description: contains Gross Domestic Product figures by class of economic activity, at current prices and as chained volume at the previous year's prices. Annual series available from 2003 onward.
* **2002 CENSUS**
* Source: National Statistics Institute
* Categories: Culture, Education, Society, Technology
* Formats: txt, gz
* Publication date: February 1, 2013
* Description: the census is the country's most important measurement; it is carried out every 10 years and is the largest statistical operation conducted in Chile. It must satisfy three characteristics...
#### Open projects
## Other sites and APIs
Some (non-exhaustive) examples:
* http://mindicador.cl/: main economic indicators for Chile.
* http://www.cs.cmu.edu/~enron/: Enron email dataset.
* http://datahub.io/: collection of diverse worldwide datasets.
* http://www.openstreetmap.org/: collaborative geographic data.
* http://musicbrainz.org/: open music encyclopedia.
## Advice
Pick the topic that interests you most, and then look for a suitable site or API.
#### Projects to be defined
## Other ideas
* Is it possible to predict the existence of monopolies by automatically processing economic indicators?
* How does a 3D printer print? How is the print job designed/optimized?
* Three-dimensional reconstruction from data collected by drones.
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Pruning in Keras example
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/model_optimization/guide/pruning/pruning_with_keras"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/pruning/pruning_with_keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/pruning/pruning_with_keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/model-optimization/tensorflow_model_optimization/g3doc/guide/pruning/pruning_with_keras.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Overview
Welcome to an end-to-end example for magnitude-based *weight pruning*.
### Other pages
For an introduction to what pruning is and to determine if you should use it (including what's supported), see the [overview page](https://www.tensorflow.org/model_optimization/guide/pruning).
To quickly find the APIs you need for your use case (beyond fully pruning a model with 80% sparsity), see the
[comprehensive guide](https://www.tensorflow.org/model_optimization/guide/pruning/comprehensive_guide.md).
### Summary
In this tutorial, you will:
1. Train a `tf.keras` model for MNIST from scratch.
2. Fine tune the model by applying the pruning API and see the accuracy.
3. Create 3x smaller TF and TFLite models from pruning.
4. Create a 10x smaller TFLite model from combining pruning and post-training quantization.
5. See the persistence of accuracy from TF to TFLite.
## Setup
```
! pip install -q tensorflow-model-optimization
import tempfile
import os
import tensorflow as tf
import numpy as np
from tensorflow import keras
%load_ext tensorboard
```
## Train a model for MNIST without pruning
```
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture.
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=4,
validation_split=0.1,
)
```
Evaluate baseline test accuracy and save the model for later usage.
```
_, baseline_model_accuracy = model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
_, keras_file = tempfile.mkstemp('.h5')
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
print('Saved baseline model to:', keras_file)
```
## Fine-tune pre-trained model with pruning
### Define the model
You will apply pruning to the whole model and see this in the model summary.
In this example, you start the model with 50% sparsity (50% zeros in weights)
and end with 80% sparsity.
In the [comprehensive guide](https://www.tensorflow.org/model_optimization/guide/pruning/comprehensive_guide.md), you can see how to prune some layers for model accuracy improvements.
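As a rough sketch of what the `PolynomialDecay` schedule used below computes (a standalone reimplementation for illustration only, based on the documented form of the schedule; the exponent defaults to 3 in the API):

```python
def polynomial_sparsity(step, begin_step, end_step,
                        initial_sparsity=0.5, final_sparsity=0.8, power=3):
    """Sparsity ramped from initial_sparsity at begin_step to
    final_sparsity at end_step along a polynomial decay curve."""
    step = min(max(step, begin_step), end_step)      # clamp to the schedule window
    frac = (step - begin_step) / (end_step - begin_step)
    return final_sparsity + (initial_sparsity - final_sparsity) * (1 - frac) ** power

print(polynomial_sparsity(0, 0, 844))    # 0.5 at the start of the schedule
print(polynomial_sparsity(844, 0, 844))  # 0.8 once end_step is reached
```

The cubic curve prunes aggressively early on, then slows down as the target sparsity is approached, which gives the remaining weights time to adapt.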
```
import tensorflow_model_optimization as tfmot
prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude
# Compute end step to finish pruning after 2 epochs.
batch_size = 128
epochs = 2
validation_split = 0.1 # 10% of training set will be used for validation set.
num_images = train_images.shape[0] * (1 - validation_split)
end_step = np.ceil(num_images / batch_size).astype(np.int32) * epochs
# Define model for pruning.
pruning_params = {
'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0.50,
final_sparsity=0.80,
begin_step=0,
end_step=end_step)
}
model_for_pruning = prune_low_magnitude(model, **pruning_params)
# `prune_low_magnitude` requires a recompile.
model_for_pruning.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model_for_pruning.summary()
```
### Train and evaluate the model against baseline
Fine tune with pruning for two epochs.
`tfmot.sparsity.keras.UpdatePruningStep` is required during training, and `tfmot.sparsity.keras.PruningSummaries` provides logs for tracking progress and debugging.
```
logdir = tempfile.mkdtemp()
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep(),
tfmot.sparsity.keras.PruningSummaries(log_dir=logdir),
]
model_for_pruning.fit(train_images, train_labels,
batch_size=batch_size, epochs=epochs, validation_split=validation_split,
callbacks=callbacks)
```
For this example, there is minimal loss in test accuracy after pruning, compared to the baseline.
```
_, model_for_pruning_accuracy = model_for_pruning.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
print('Pruned test accuracy:', model_for_pruning_accuracy)
```
The logs show the progression of sparsity on a per-layer basis.
```
#docs_infra: no_execute
%tensorboard --logdir={logdir}
```
For non-Colab users, you can see [the results of a previous run](https://tensorboard.dev/experiment/sRQnrycaTMWQOaswXzClYA/#scalars&_smoothingWeight=0) of this code block on [TensorBoard.dev](https://tensorboard.dev/).
## Create 3x smaller models from pruning
Both `tfmot.sparsity.keras.strip_pruning` and applying a standard compression algorithm (e.g. via gzip) are necessary to see the compression
benefits of pruning.
First, create a compressible model for TensorFlow.
```
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)
_, pruned_keras_file = tempfile.mkstemp('.h5')
tf.keras.models.save_model(model_for_export, pruned_keras_file, include_optimizer=False)
print('Saved pruned Keras model to:', pruned_keras_file)
```
Then, create a compressible model for TFLite.
```
converter = tf.lite.TFLiteConverter.from_keras_model(model_for_export)
pruned_tflite_model = converter.convert()
_, pruned_tflite_file = tempfile.mkstemp('.tflite')
with open(pruned_tflite_file, 'wb') as f:
f.write(pruned_tflite_model)
print('Saved pruned TFLite model to:', pruned_tflite_file)
```
Define a helper function to actually compress the models via gzip and measure the zipped size.
```
def get_gzipped_model_size(file):
# Returns size of gzipped model, in bytes.
import os
import zipfile
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(file)
return os.path.getsize(zipped_file)
```
Compare and see that the models are 3x smaller from pruning.
```
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped pruned Keras model: %.2f bytes" % (get_gzipped_model_size(pruned_keras_file)))
print("Size of gzipped pruned TFlite model: %.2f bytes" % (get_gzipped_model_size(pruned_tflite_file)))
```
## Create a 10x smaller model from combining pruning and quantization
You can apply post-training quantization to the pruned model for additional benefits.
```
converter = tf.lite.TFLiteConverter.from_keras_model(model_for_export)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_and_pruned_tflite_model = converter.convert()
_, quantized_and_pruned_tflite_file = tempfile.mkstemp('.tflite')
with open(quantized_and_pruned_tflite_file, 'wb') as f:
f.write(quantized_and_pruned_tflite_model)
print('Saved quantized and pruned TFLite model to:', quantized_and_pruned_tflite_file)
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped pruned and quantized TFlite model: %.2f bytes" % (get_gzipped_model_size(quantized_and_pruned_tflite_file)))
```
## See persistence of accuracy from TF to TFLite
Define a helper function to evaluate the TF Lite model on the test dataset.
```
import numpy as np
def evaluate_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
  # Run predictions on every image in the "test" dataset.
prediction_digits = []
for i, test_image in enumerate(test_images):
if i % 1000 == 0:
print('Evaluated on {n} results so far.'.format(n=i))
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
print('\n')
# Compare prediction results with ground truth labels to calculate accuracy.
prediction_digits = np.array(prediction_digits)
accuracy = (prediction_digits == test_labels).mean()
return accuracy
```
You evaluate the pruned and quantized model and see that the accuracy from TensorFlow persists to the TFLite backend.
```
interpreter = tf.lite.Interpreter(model_content=quantized_and_pruned_tflite_model)
interpreter.allocate_tensors()
test_accuracy = evaluate_model(interpreter)
print('Pruned and quantized TFLite test_accuracy:', test_accuracy)
print('Pruned TF test accuracy:', model_for_pruning_accuracy)
```
## Conclusion
In this tutorial, you saw how to create sparse models with the TensorFlow Model Optimization Toolkit API for both TensorFlow and TFLite. You
then combined pruning with post-training quantization for additional benefits.
You created a 10x smaller model for MNIST, with minimal accuracy difference.
We encourage you to try this new capability, which can be particularly important for deployment in resource-constrained environments.
```
import os
import sys
import multiprocessing
import logging
import numpy as np
import pandas as pd
import mxnet as mx
from mxnet.io import DataDesc
from mxnet import nd, gluon, autograd
from mxnet.gluon.data import RecordFileDataset, ArrayDataset, Dataset
from mxnet.gluon.data.vision import transforms
from mxnet.gluon.data.vision.datasets import ImageFolderDataset
from mxnet.gluon.data.dataloader import DataLoader
from mxnet.gluon.model_zoo import vision as models
from mxnet import recordio
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from PIL import Image
from common.utils import *
from common.params_dense import *
import math
from time import time
%load_ext autoreload
%autoreload 2
print("OS: ", sys.platform)
print("Python: ", sys.version)
print("Numpy: ", np.__version__)
print("MXNet: ", mx.__version__)
print("GPU: ", get_gpu_name())
print(get_cuda_version())
print("CuDNN Version ", get_cudnn_version())
# User-set
# Note if GPU_COUNT > 1 then MULTI_GPU = True and ALL GPUs will be used
# Set below to affect batch-size
# E.g. 1 GPU = 64, 2 GPUs =64*2, 4 GPUs = 64*4
# Note that the effective learning-rate will be decreased this way
CPU_COUNT = multiprocessing.cpu_count()
GPU_COUNT = len(get_gpu_name())
MULTI_GPU = GPU_COUNT > 1
print("CPUs: ", CPU_COUNT)
print("GPUs: ", GPU_COUNT)
# Manually scale to multi-gpu
if MULTI_GPU:
LR *= GPU_COUNT
BATCHSIZE *= (GPU_COUNT)
BATCHSIZE = BATCHSIZE//GPU_COUNT*GPU_COUNT
```
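The batch-size and learning-rate adjustment above follows the common linear-scaling heuristic for data-parallel training. A standalone sketch (a hypothetical helper written for this note, not part of `common.params_dense`):

```python
def scale_for_gpus(base_lr, base_batchsize, gpu_count):
    """Linear scaling: multiply both the learning rate and the batch size
    by the number of GPUs, keeping the batch size divisible by gpu_count."""
    if gpu_count > 1:
        base_lr *= gpu_count
        base_batchsize *= gpu_count
        # ensure each GPU receives an equal share of the batch
        base_batchsize = base_batchsize // gpu_count * gpu_count
    return base_lr, base_batchsize

print(scale_for_gpus(0.1, 64, 4))  # -> (0.4, 256)
```

Scaling the learning rate with the batch size keeps the per-example gradient contribution roughly constant across GPU counts.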
## Data Download
```
# Model-params
# Paths
CSV_DEST = "/data/chestxray"
IMAGE_FOLDER = os.path.join(CSV_DEST, "images")
LABEL_FILE = os.path.join(CSV_DEST, "Data_Entry_2017.csv")
%%time
# Download data
print("Please make sure to download")
print("https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-linux#download-and-install-azcopy")
download_data_chextxray(CSV_DEST)
```
## Data prep
https://github.com/apache/incubator-mxnet/issues/1480
```
train_set, valid_set, test_set = get_train_valid_test_split(TOT_PATIENT_NUMBER)
```
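`get_train_valid_test_split` comes from `common.utils` and is not shown here, but the key idea is a *patient-level* split, so that no patient's images leak across the train/valid/test sets. A rough sketch under assumed 70/10/20 proportions (the real helper's proportions and seeding may differ):

```python
import numpy as np

def patient_level_split(n_patients, valid_frac=0.1, test_frac=0.2, seed=42):
    """Split *patient ids* (not images) so every image of a patient
    lands in exactly one of train/valid/test."""
    rng = np.random.RandomState(seed)
    ids = rng.permutation(n_patients)
    n_valid = int(n_patients * valid_frac)
    n_test = int(n_patients * test_frac)
    test = ids[:n_test]
    valid = ids[n_test:n_test + n_valid]
    train = ids[n_test + n_valid:]
    return train, valid, test

train, valid, test = patient_level_split(100)
print(len(train), len(valid), len(test))  # -> 70 10 20
```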
## Data Loading
### Creating the datasets
```
class XrayData(Dataset):
    def __init__(self, img_dir, lbl_file, patient_ids, transform=None):
        self.img_locs, self.labels = get_imgloc_labels(img_dir, lbl_file, patient_ids)
        self.transform = transform
        print("Loaded {} labels and {} images".format(len(self.labels), len(self.img_locs)))

    def __getitem__(self, idx):
        im_file = self.img_locs[idx]
        im_rgb = Image.open(im_file)
        label = self.labels[idx]
        im_rgb = mx.nd.array(im_rgb)
        if self.transform is not None:
            im_rgb = self.transform(im_rgb)
        return im_rgb, mx.nd.array(label)

    def __len__(self):
        return len(self.img_locs)
def no_augmentation_dataset(img_dir, lbl_file, patient_ids, normalize):
    # Use the normalization transform that was passed in, rather than ignoring it
    dataset = XrayData(img_dir, lbl_file, patient_ids,
                       transform=transforms.Compose([
                           transforms.Resize(WIDTH),
                           transforms.ToTensor(),
                           normalize]))
    return dataset
# Dataset for training
train_dataset = XrayData(img_dir=IMAGE_FOLDER,
                         lbl_file=LABEL_FILE,
                         patient_ids=train_set,
                         transform=transforms.Compose([
                             transforms.RandomResizedCrop(size=WIDTH),
                             transforms.RandomFlipLeftRight(),
                             transforms.ToTensor(),
                             transforms.Normalize(IMAGENET_RGB_MEAN, IMAGENET_RGB_SD)]))
valid_dataset = no_augmentation_dataset(IMAGE_FOLDER, LABEL_FILE, valid_set,
                                        transforms.Normalize(IMAGENET_RGB_MEAN, IMAGENET_RGB_SD))
test_dataset = no_augmentation_dataset(IMAGE_FOLDER, LABEL_FILE, test_set,
                                       transforms.Normalize(IMAGENET_RGB_MEAN, IMAGENET_RGB_SD))
# DataLoaders
train_loader = DataLoader(dataset=train_dataset, batch_size=BATCHSIZE,
shuffle=True, num_workers=CPU_COUNT, last_batch='discard')
valid_loader = DataLoader(dataset=valid_dataset, batch_size=BATCHSIZE,
shuffle=False, num_workers=CPU_COUNT, last_batch='discard')
test_loader = DataLoader(dataset=test_dataset, batch_size=BATCHSIZE,
shuffle=False, num_workers=CPU_COUNT, last_batch='discard')
```
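Note that `last_batch='discard'` drops any trailing partial batch, so the number of iterations per epoch is a plain floor division. A toy check of that arithmetic, independent of MXNet:

```python
def batches_per_epoch(n_examples, batch_size, last_batch="discard"):
    # 'discard' drops the final partial batch; 'keep' would include it
    full, remainder = divmod(n_examples, batch_size)
    if last_batch == "discard" or remainder == 0:
        return full
    return full + 1

print(batches_per_epoch(1050, 100))                      # -> 10
print(batches_per_epoch(1050, 100, last_batch="keep"))   # -> 11
```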
## Creating the network
### Loading the pretrained model
```
ctx = [mx.gpu(i) for i in range(GPU_COUNT)]
net = mx.gluon.model_zoo.vision.densenet121(pretrained=True, ctx=ctx)
with net.name_scope():
    # Replace the 1000-class ImageNet classifier with our multi-label head
    net.output = mx.gluon.nn.Dense(CLASSES)
net.output.initialize(ctx=ctx)
net.hybridize()
```
## Trainer
```
trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': LR})
```
## Loss
```
binary_cross_entropy = gluon.loss.SigmoidBinaryCrossEntropyLoss()
```
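`SigmoidBinaryCrossEntropyLoss` fuses the sigmoid with the binary cross-entropy in a numerically stable form; for a single logit x and target y it computes −[y·log σ(x) + (1−y)·log(1−σ(x))]. A quick NumPy check that the stable form matches the naive formula:

```python
import numpy as np

def sigmoid_bce(logit, target):
    # Numerically stable form: max(x, 0) - x*y + log(1 + exp(-|x|))
    return np.maximum(logit, 0) - logit * target + np.log1p(np.exp(-np.abs(logit)))

def naive_bce(logit, target):
    # Direct formula, unstable for large |logit|
    p = 1.0 / (1.0 + np.exp(-logit))
    return -(target * np.log(p) + (1 - target) * np.log(1 - p))

x, y = 2.0, 1.0
print(sigmoid_bce(x, y), naive_bce(x, y))  # both ~0.1269
```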
## Output
```
sig = gluon.nn.Activation('sigmoid')
```
## Evaluation loop
```
def evaluate_accuracy(data_iterator, net):
    acc = 0
    n_batches = 0
    for i, (data, label) in enumerate(data_iterator):
        data_split = gluon.utils.split_and_load(data, ctx)
        label_split = gluon.utils.split_and_load(label, ctx)
        outputs = [(sig(net(X)), Y) for X, Y in zip(data_split, label_split)]
        for output, label in outputs:
            # Per-label accuracy: threshold at 0.5 and average over classes and examples
            acc += float((label.asnumpy() == np.round(output.asnumpy())).sum()) / CLASSES / output.shape[0]
        n_batches += 1
    # Average over all batches and all context slices
    return acc / n_batches / len(ctx)
```
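The accuracy above is *per-label* accuracy for a multi-label problem: each sigmoid output is thresholded at 0.5 (`np.round`), compared element-wise with the label matrix, and averaged over all classes and examples. The same metric on made-up numbers:

```python
import numpy as np

def per_label_accuracy(probs, labels):
    """Fraction of (example, class) cells where round(prob) matches the label."""
    preds = np.round(probs)
    return float((preds == labels).mean())

probs = np.array([[0.9, 0.2, 0.7],
                  [0.4, 0.8, 0.1]])
labels = np.array([[1, 0, 0],
                   [0, 1, 0]])
print(per_label_accuracy(probs, labels))  # 5 of 6 cells correct -> 0.8333...
```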
## Training loop
```
%%time
n_batch = 100
for e in range(EPOCHS):
    tick = time()
    loss = 0
    for i, (data, label) in enumerate(train_loader):
        data_split = gluon.utils.split_and_load(data, ctx)
        label_split = gluon.utils.split_and_load(label, ctx)
        # Accumulate the previous iteration's loss here so data can be
        # loaded asynchronously onto the GPU
        if i > 0:
            loss += sum(losses).mean().asscalar()
            if i % n_batch == 0:
                print('Batch {0}: Sigmoid Binary Cross Entropy Loss: {1:.4f}'.format(i, loss / i))
        with autograd.record():
            losses = [binary_cross_entropy(net(X), Y) for X, Y in zip(data_split, label_split)]
        for l in losses:
            l.backward()
        trainer.step(data.shape[0])
    valid_accuracy = evaluate_accuracy(valid_loader, net)
    print('Epoch {0}, {1:.6f} validation accuracy after {2:.2f} seconds'.format(e, valid_accuracy, time() - tick))
```
## Evaluate
```
%%time
predictions = np.zeros((0, CLASSES))
labels = np.zeros((0, CLASSES))
for (data, label) in test_loader:
    data_split = gluon.utils.split_and_load(data, ctx)
    label_split = gluon.utils.split_and_load(label, ctx)
    outputs = [sig(net(X)) for X in data_split]
    predictions = np.concatenate([predictions, np.concatenate([output.asnumpy() for output in outputs])])
    labels = np.concatenate([labels, np.concatenate([label.asnumpy() for label in label_split])])
print("Test AUC: {0:.4f}".format(compute_roc_auc(labels, predictions, CLASSES)))
```
## Synthetic Data (Pure Training)
```
# Synthetic inputs, sized here to match the real training set (an assumption;
# any reasonably large example count works for a pure-training benchmark)
tot_num = len(train_dataset)
fake_X = mx.nd.ones((tot_num, 3, 224, 224), dtype=np.float32)
fake_y = mx.nd.ones((tot_num, CLASSES), dtype=np.float32)
train_dataset_synth = ArrayDataset(fake_X, fake_y)
train_dataloader_synth = DataLoader(train_dataset_synth, BATCHSIZE, shuffle=False,
                                    num_workers=CPU_COUNT, last_batch='discard')
%%time
n_batch = 50
for e in range(EPOCHS):
    tick = time()
    loss = 0
    for i, (data, label) in enumerate(train_dataloader_synth):  # iterate the synthetic loader
        data_split = gluon.utils.split_and_load(data, ctx)
        label_split = gluon.utils.split_and_load(label, ctx)
        # Accumulate the previous iteration's loss here so data can be
        # loaded asynchronously onto the GPU
        if i > 0:
            loss += sum(losses).mean().asscalar()
            if i % n_batch == 0:
                print('Batch {0}: Sigmoid Binary Cross Entropy Loss: {1:.4f}'.format(i, loss / i))
        with autograd.record():
            losses = [binary_cross_entropy(net(X), Y) for X, Y in zip(data_split, label_split)]
        for l in losses:
            l.backward()
        trainer.step(data.shape[0])
    print('Epoch {0}, {1:.2f} seconds, loss {2:.4f}'.format(e, time() - tick, sum(losses).mean().asscalar()))
```
# Goals
### 1. Learn to implement Inception A Block using monk
- Monk's Keras
- Monk's Pytorch
- Monk's Mxnet
### 2. Use network Monk's debugger to create complex blocks
### 3. Understand how syntactically different it is to implement the same using
- Traditional Keras
- Traditional Pytorch
- Traditional Mxnet
# Inception A Block
- Note: The block structure can have variations too, this is just an example
```
from IPython.display import Image
Image(filename='imgs/inception_a.png')
```
# Table of contents
[1. Install Monk](#1)
[2. Block basic Information](#2)
- [2.1) Visual structure](#2-1)
- [2.2) Layers in Branches](#2-2)
[3) Creating Block using monk visual debugger](#3)
- [3.0) Create the base sub-block](#3-0)
- [3.1) Create the first branch](#3-1)
- [3.2) Create the second branch](#3-2)
- [3.3) Create the third branch](#3-3)
- [3.4) Create the fourth branch](#3-4)
- [3.5) Merge the branches](#3-5)
- [3.6) Debug the merged network](#3-6)
- [3.7) Compile the network](#3-7)
- [3.8) Run data through the network](#3-8)
- [3.9) Visualize the network](#3-9)
[4) Creating Block Using MONK one line API call](#4)
- [Mxnet Backend](#4-1)
- [Pytorch Backend](#4-2)
- [Keras Backend](#4-3)
[5) Appendix](#5)
- [Study Material](#5-1)
- [Creating block using traditional Mxnet](#5-2)
- [Creating block using traditional Pytorch](#5-3)
- [Creating block using traditional Keras](#5-4)
<a id='1'></a>
# Install Monk
- git clone https://github.com/Tessellate-Imaging/monk_v1.git
- cd monk_v1/installation && pip install -r requirements_cu9.txt
- (Select the requirements file as per OS and CUDA version)
```
!git clone https://github.com/Tessellate-Imaging/monk_v1.git
```
# Imports
```
# Common
import numpy as np
import math
import netron
from collections import OrderedDict
from functools import partial
# Monk
import os
import sys
sys.path.append("monk_v1/monk/");
```
<a id='2'></a>
# Block Information
<a id='2-1'></a>
## Visual structure
```
from IPython.display import Image
Image(filename='imgs/inception_a.png')
```
<a id='2-2'></a>
## Layers in Branches
- Number of branches: 4
- Branch 1
- conv_1x1 -> batchnorm -> relu
- Branch 2
- conv_1x1 -> batchnorm -> relu -> conv_5x5 -> batchnorm -> relu
- Branch 3
- conv_1x1 -> batchnorm -> relu -> conv_3x3 -> batchnorm -> relu -> conv_3x3 -> batchnorm -> relu
- Branch 4
- pooling -> conv_1x1 -> batchnorm -> relu
- Branches merged using
- Concatenation
(See Appendix to read blogs on inception networks)
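Since the four branches are merged by channel-wise concatenation, the block's output depth is simply the sum of each branch's final channel count. With the channel counts used later in this notebook (64, 64, 96, plus a configurable pooling-branch width), that gives:

```python
def inception_a_out_channels(pooling_branch_channels=32):
    # Final conv width of each branch, as configured later in this notebook
    branch_1 = 64   # conv1x1
    branch_2 = 64   # conv1x1(48) -> conv5x5(64)
    branch_3 = 96   # conv1x1(64) -> conv3x3(96) -> conv3x3(96)
    branch_4 = pooling_branch_channels  # pool -> conv1x1
    return branch_1 + branch_2 + branch_3 + branch_4

print(inception_a_out_channels(32))  # -> 256
```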
<a id='3'></a>
# Creating Block using monk debugger
```
# Imports and setup a project
# To use pytorch backend - replace gluon_prototype with pytorch_prototype
# To use keras backend - replace gluon_prototype with keras_prototype
from gluon_prototype import prototype
# Create a sample project
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
```
<a id='3-0'></a>
## Create the Base block
```
# Create base Convolution -> Batchnorm -> ReLU block
def conv_bn_relu_block(output_channels=64, kernel_size=1, stride=1):
    network = [];
    network.append(gtf.convolution(output_channels=output_channels,
                                   kernel_size=kernel_size,
                                   stride=stride));
    network.append(gtf.batch_normalization());
    network.append(gtf.relu());
    return network;
# Debug the block
branch_1 = conv_bn_relu_block();
network = [];
network.append(branch_1);
gtf.debug_custom_model_design(network);
```
<a id='3-1'></a>
## Create the first branch
```
def first_branch():
    network = [];
    network.append(conv_bn_relu_block(output_channels=64, kernel_size=1));
    return network;
# Debug the branch
branch_1 = first_branch();
network = [];
network.append(branch_1);
gtf.debug_custom_model_design(network);
```
<a id='3-2'></a>
## Create the second branch
```
def second_branch():
    network = [];
    network.append(conv_bn_relu_block(output_channels=48, kernel_size=1));
    network.append(conv_bn_relu_block(output_channels=64, kernel_size=5));
    return network;
# Debug the branch
branch_2 = second_branch()
network = [];
network.append(branch_2);
gtf.debug_custom_model_design(network);
```
<a id='3-3'></a>
## Create the Third branch
```
def third_branch():
    network = [];
    network.append(conv_bn_relu_block(output_channels=64, kernel_size=1));
    network.append(conv_bn_relu_block(output_channels=96, kernel_size=3));
    network.append(conv_bn_relu_block(output_channels=96, kernel_size=3));
    return network;
# Debug the branch
branch_3 = third_branch()
network = [];
network.append(branch_3);
gtf.debug_custom_model_design(network);
```
<a id='3-4'></a>
## Create the Fourth branch
```
def fourth_branch(pooling_branch_channels=32, pool_type="avg"):
    network = [];
    if(pool_type == "avg"):
        network.append(gtf.average_pooling(kernel_size=3, stride=1, padding=1));
    else:
        network.append(gtf.max_pooling(kernel_size=3, stride=1, padding=1));
    network.append(conv_bn_relu_block(output_channels=pooling_branch_channels, kernel_size=1));
    return network;
# Debug the branch
branch_4 = fourth_branch()
network = [];
network.append(branch_4);
gtf.debug_custom_model_design(network);
```
<a id='3-5'></a>
## Merge the branches
```
def final_block(pooling_branch_channels=32, pool_type="avg"):
    network = [];
    # Create subnetwork and add branches
    subnetwork = [];
    branch_1 = first_branch()
    branch_2 = second_branch()
    branch_3 = third_branch()
    branch_4 = fourth_branch(pooling_branch_channels=pooling_branch_channels,
                             pool_type=pool_type)
    subnetwork.append(branch_1);
    subnetwork.append(branch_2);
    subnetwork.append(branch_3);
    subnetwork.append(branch_4);
    # Add merging element
    subnetwork.append(gtf.concatenate());
    # Add the subnetwork
    network.append(subnetwork);
    return network;
```
<a id='3-6'></a>
## Debug the merged network
```
final = final_block(pooling_branch_channels=32, pool_type="avg")
network = [];
network.append(final);
gtf.debug_custom_model_design(network);
```
<a id='3-7'></a>
## Compile the network
```
gtf.Compile_Network(network, data_shape=(3, 224, 224), use_gpu=False);
```
<a id='3-8'></a>
## Run data through the network
```
import mxnet as mx
x = np.zeros((1, 3, 224, 224));
x = mx.nd.array(x);
y = gtf.system_dict["local"]["model"].forward(x);
print(x.shape, y.shape)
```
<a id='3-9'></a>
## Visualize network using netron
```
gtf.Visualize_With_Netron(data_shape=(3, 224, 224))
```
<a id='4'></a>
# Creating Block Using MONK one line API call
<a id='4-1'></a>
## Mxnet backend
```
from gluon_prototype import prototype
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
network = [];
# Single line addition of blocks
network.append(gtf.inception_a_block(pooling_branch_channels=32, pool_type="avg"));
gtf.Compile_Network(network, data_shape=(3, 224, 224), use_gpu=False);
```
<a id='4-2'></a>
## Pytorch backend
- Only the import changes
```
#Change gluon_prototype to pytorch_prototype
from pytorch_prototype import prototype
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
network = [];
# Single line addition of blocks
network.append(gtf.inception_a_block(pooling_branch_channels=32, pool_type="avg"));
gtf.Compile_Network(network, data_shape=(3, 224, 224), use_gpu=False);
```
<a id='4-3'></a>
## Keras backend
- Only the import changes
```
#Change gluon_prototype to keras_prototype
from keras_prototype import prototype
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
network = [];
# Single line addition of blocks
network.append(gtf.inception_a_block(pooling_branch_channels=32, pool_type="avg"));
gtf.Compile_Network(network, data_shape=(3, 224, 224), use_gpu=False);
```
<a id='5'></a>
# Appendix
<a id='5-1'></a>
## Study links
- https://medium.com/@sh.tsang/review-inception-v3-1st-runner-up-image-classification-in-ilsvrc-2015-17915421f77c
- https://www.analyticsvidhya.com/blog/2018/10/understanding-inception-network-from-scratch/
- https://software.intel.com/en-us/articles/inception-v3-deep-convolutional-architecture-for-classifying-acute-myeloidlymphoblastic
- https://codelabs.developers.google.com/codelabs/cpb102-txf-learning/index.html#0
- https://cloud.google.com/tpu/docs/inception-v3-advanced
<a id='5-2'></a>
## Creating block using traditional Mxnet
- Code credits - https://mxnet.incubator.apache.org/
```
# Traditional-Mxnet-gluon
import mxnet as mx
from mxnet.gluon import nn
from mxnet.gluon.nn import HybridBlock, BatchNorm
from mxnet.gluon.contrib.nn import HybridConcurrent, Identity
from mxnet import gluon, init, nd
def _make_basic_conv(norm_layer=BatchNorm, norm_kwargs=None, **kwargs):
    out = nn.HybridSequential(prefix='')
    out.add(nn.Conv2D(use_bias=False, **kwargs))
    out.add(norm_layer(epsilon=0.001, **({} if norm_kwargs is None else norm_kwargs)))
    out.add(nn.Activation('relu'))
    return out

def _make_branch(use_pool, norm_layer, norm_kwargs, *conv_settings):
    out = nn.HybridSequential(prefix='')
    if use_pool == 'avg':
        out.add(nn.AvgPool2D(pool_size=3, strides=1, padding=1))
    elif use_pool == 'max':
        out.add(nn.MaxPool2D(pool_size=3, strides=2))
    setting_names = ['channels', 'kernel_size', 'strides', 'padding']
    for setting in conv_settings:
        kwargs = {}
        for i, value in enumerate(setting):
            if value is not None:
                kwargs[setting_names[i]] = value
        out.add(_make_basic_conv(norm_layer, norm_kwargs, **kwargs))
    return out
def make_A(pool_features, prefix=None, norm_layer=BatchNorm, norm_kwargs=None):
    out = HybridConcurrent(axis=1, prefix=prefix)
    with out.name_scope():
        out.add(_make_branch(None, norm_layer, norm_kwargs,
                             (64, 1, None, None)))
        out.add(_make_branch(None, norm_layer, norm_kwargs,
                             (48, 1, None, None),
                             (64, 5, None, 2)))
        out.add(_make_branch(None, norm_layer, norm_kwargs,
                             (64, 1, None, None),
                             (96, 3, None, 1),
                             (96, 3, None, 1)))
        out.add(_make_branch('avg', norm_layer, norm_kwargs,
                             (pool_features, 1, None, None)))
    return out
# Invoke the block
block = make_A(32)
# Initialize network and load block on machine
ctx = [mx.cpu()];
block.initialize(init.Xavier(), ctx = ctx);
block.collect_params().reset_ctx(ctx)
block.hybridize()
# Run data through network
x = np.zeros((1, 64, 224, 224));
x = mx.nd.array(x);
y = block.forward(x);
print(x.shape, y.shape)
# Export Model to Load on Netron
block.export("final", epoch=0);
netron.start("final-symbol.json", port=8082)
```
<a id='5-3'></a>
## Creating block using traditional Pytorch
- Code credits - https://pytorch.org/
```
# Traditional-PyTorch
import torch
from torch import nn
import torch.nn.functional as F
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, **kwargs):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, bias=False, **kwargs)
        self.bn = nn.BatchNorm2d(out_channels, eps=0.001)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        return F.relu(x, inplace=True)

class InceptionA(nn.Module):
    def __init__(self, in_channels, pool_features):
        super(InceptionA, self).__init__()
        self.branch1x1 = BasicConv2d(in_channels, 64, kernel_size=1)
        self.branch5x5_1 = BasicConv2d(in_channels, 48, kernel_size=1)
        self.branch5x5_2 = BasicConv2d(48, 64, kernel_size=5, padding=2)
        self.branch3x3dbl_1 = BasicConv2d(in_channels, 64, kernel_size=1)
        self.branch3x3dbl_2 = BasicConv2d(64, 96, kernel_size=3, padding=1)
        self.branch3x3dbl_3 = BasicConv2d(96, 96, kernel_size=3, padding=1)
        self.branch_pool = BasicConv2d(in_channels, pool_features, kernel_size=1)

    def forward(self, x):
        branch1x1 = self.branch1x1(x)
        branch5x5 = self.branch5x5_1(x)
        branch5x5 = self.branch5x5_2(branch5x5)
        branch3x3dbl = self.branch3x3dbl_1(x)
        branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl)
        branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl)
        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        branch_pool = self.branch_pool(branch_pool)
        outputs = [branch1x1, branch5x5, branch3x3dbl, branch_pool]
        return torch.cat(outputs, 1)
# Invoke the block
block = InceptionA(3, 32);
# Initialize network and load block on machine
layers = []
layers.append(block);
net = nn.Sequential(*layers);
# Run data through network
x = torch.randn(1, 3, 224, 224)
y = net(x)
print(x.shape, y.shape);
# Export Model to Load on Netron
torch.onnx.export(net,                       # model being run
                  x,                         # model input (or a tuple for multiple inputs)
                  "model.onnx",              # where to save the model (can be a file or file-like object)
                  export_params=True,        # store the trained parameter weights inside the model file
                  opset_version=10,          # the ONNX version to export the model to
                  do_constant_folding=True,  # whether to execute constant folding for optimization
                  input_names=['input'],     # the model's input names
                  output_names=['output'],   # the model's output names
                  dynamic_axes={'input': {0: 'batch_size'},   # variable-length axes
                                'output': {0: 'batch_size'}})
netron.start('model.onnx', port=9998);
```
<a id='5-4'></a>
## Creating block using traditional Keras
- Code credits: https://keras.io/
```
# Traditional-Keras
import keras
import keras.layers as kla
import keras.models as kmo
import tensorflow as tf
from keras.models import Model
backend = 'channels_last'
from keras import layers
def inception_a_block(input_tensor, filters, stage, block):
    bn_axis = 3
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    branch_1 = layers.Conv2D(64, (1, 1), kernel_initializer='he_normal')(input_tensor)
    branch_1 = layers.BatchNormalization(axis=bn_axis)(branch_1)
    branch_1 = layers.Activation('relu')(branch_1)

    branch_2 = layers.Conv2D(48, (1, 1), kernel_initializer='he_normal')(input_tensor)
    branch_2 = layers.BatchNormalization(axis=bn_axis)(branch_2)
    branch_2 = layers.Activation('relu')(branch_2)
    branch_2 = layers.Conv2D(64, (5, 5), kernel_initializer='he_normal', padding="same")(branch_2)
    branch_2 = layers.BatchNormalization(axis=bn_axis)(branch_2)
    branch_2 = layers.Activation('relu')(branch_2)

    # 64 channels here, matching the branch structure described above
    branch_3 = layers.Conv2D(64, (1, 1), kernel_initializer='he_normal')(input_tensor)
    branch_3 = layers.BatchNormalization(axis=bn_axis)(branch_3)
    branch_3 = layers.Activation('relu')(branch_3)
    branch_3 = layers.Conv2D(96, (3, 3), kernel_initializer='he_normal', padding="same")(branch_3)
    branch_3 = layers.BatchNormalization(axis=bn_axis)(branch_3)
    branch_3 = layers.Activation('relu')(branch_3)
    branch_3 = layers.Conv2D(96, (3, 3), kernel_initializer='he_normal', padding="same")(branch_3)
    branch_3 = layers.BatchNormalization(axis=bn_axis)(branch_3)
    branch_3 = layers.Activation('relu')(branch_3)

    branch_4 = layers.AveragePooling2D(pool_size=(3, 3), strides=(1, 1), padding='same')(input_tensor)
    branch_4 = layers.Conv2D(filters, (1, 1), kernel_initializer='he_normal')(branch_4)
    branch_4 = layers.BatchNormalization(axis=bn_axis)(branch_4)
    branch_4 = layers.Activation('relu')(branch_4)

    x = layers.Concatenate()([branch_1, branch_2, branch_3, branch_4])
    return x
def create_model(input_shape, filters, stage, block):
    img_input = layers.Input(shape=input_shape);
    x = inception_a_block(img_input, filters, stage, block)
    return Model(img_input, x);
# Invoke the block
filters=32;
input_shape=(224, 224, 3);
model = create_model(input_shape, filters, 0, "0");
# Run data through network
x = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))
y = model(x)
print(x.shape, y.shape)
# Export Model to Load on Netron
model.save("final.h5");
netron.start("final.h5", port=8082)
```
# Blood Glucose Predictions with LSTM network
### Imports
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from statsmodels.tools.eval_measures import rmse
from sklearn.preprocessing import MinMaxScaler
from keras.preprocessing.sequence import TimeseriesGenerator
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout, TimeDistributed
from keras.callbacks import ModelCheckpoint, EarlyStopping
import warnings
import io
import math
warnings.filterwarnings("ignore")
```
This is what the DataFrame generated by simglucose looks like:
```
df = pd.read_csv("adolescent#008.csv")
df.Time = pd.to_datetime(df.Time)
df = df[0:2480]  # the final rows for some patients are not relevant (they stay in hypo for too long to be realistic)
df = df.set_index("Time")  # set_index returns a new DataFrame, so reassign it
df.head()
```
We are only interested in the patient's blood glucose over time.
```
plt.figure(figsize=(16,8))
plt.title('Blood Glucose from adolescent 8')
plt.plot(df['CGM'])
plt.xlabel('Timestamp',fontsize=18)
plt.ylabel('BG (mg/dL)',fontsize=18)
plt.show()
```
Let's create a function to prepare the data for training and testing.
We take 20 as the input length because that is approximately the maximum time over which carbs and insulin affect blood glucose.
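The windowing scheme described above — 20 past CGM readings in, the next 6 out — can be sketched with plain NumPy before wiring it into the full pipeline (a toy series stands in for the scaled CGM column):

```python
import numpy as np

def make_windows(series, input_len=20, output_len=6):
    """Slide over the series, pairing input_len past values with the next output_len."""
    X, y = [], []
    for i in range(input_len, len(series) - output_len):
        X.append(series[i - input_len:i])
        y.append(series[i:i + output_len])
    return np.array(X), np.array(y)

series = np.arange(100, dtype=float)  # stand-in for scaled CGM values
X, y = make_windows(series)
print(X.shape, y.shape)  # -> (74, 20) (74, 6)
```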
```
def read_pacient(age="adolescent#0", number="08", extension=".csv", training_test_proportion=0.8, input_len=20, output_len=6):
    # Reading the file
    df = pd.read_csv(age + number + extension)
    df.Time = pd.to_datetime(df.Time)
    df = df[0:2480]  # the final rows for some patients are not relevant (they stay in hypo for too long to be realistic)
    df = df.set_index("Time")
    # Keeping only the blood glucose readings from the sensor
    data = df.filter(['CGM'])
    dataset = data.values
    training_data_len = math.ceil(len(dataset) * training_test_proportion)  # proportion for training vs. testing
    # Scaling data to 0-1 before feeding the neural network
    scaler = MinMaxScaler(feature_range=(0, 1))
    scaled_data = scaler.fit_transform(dataset)
    train_data = scaled_data[0:training_data_len, :]
    x_train = []  # arrays of blood glucose of length input_len
    y_train = []  # arrays of blood glucose of length output_len
    for i in range(input_len, len(train_data) - output_len):
        x_train.append(train_data[i - input_len:i, 0])   # past blood glucose to learn from
        y_train.append(train_data[i:i + output_len, 0])  # future blood glucose to predict
    x_train, y_train = np.array(x_train), np.array(y_train)  # converting to numpy arrays
    '''
    The reshape is necessary so the neural network can understand the data.
    The shape will be (number of samples, input_len, number of features).
    A feature is a property used by the model, which here is only the
    patient's blood glucose.
    '''
    x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))
    not_scaled_test_data = dataset[training_data_len - input_len:, :]
    test_data = scaled_data[training_data_len - input_len:, :]
    x_test = []  # arrays of blood glucose of length input_len
    y_test = []  # arrays of blood glucose of length output_len
    continuous_ytest = []  # not-scaled blood glucose from y_test, not broken into arrays
    '''
    In the test loop we predict output_len values, then the next output_len
    values, and so on, so we can create a continuous plot of the predicted glucose.
    '''
    i = input_len
    while i < len(test_data) - output_len:
        x_test.append(test_data[i - input_len:i, 0])
        y_test.append(not_scaled_test_data[i:i + output_len, 0])
        for bg in not_scaled_test_data[i:i + output_len, 0]:
            continuous_ytest.append(bg)  # not for testing, just for plotting
        i = i + output_len  # jump output_len values into the future
    x_test = np.array(x_test)  # converting to numpy array
    x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
    return scaler, x_train, y_train, x_test, y_test, continuous_ytest
```
Now, let's create a function that applies a LSTM Model to our data
```
def make_predictions(scaler, x_train, y_train, x_test, y_test, batch_size=1, epochs=1):
    # LSTM model
    model = Sequential()
    model.add(LSTM(units=50, return_sequences=True, input_shape=(x_train.shape[1], 1)))
    model.add(LSTM(units=50, return_sequences=False))
    model.add(Dropout(0.5))
    model.add(Dense(units=y_train.shape[1]))
    model.compile(optimizer="adam", loss='mse')  # accuracy is not meaningful for regression
    model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs)
    predictions = model.predict(x_test)  # make predictions
    predictions = np.reshape(predictions, (predictions.shape[0], predictions.shape[1]))  # reshape just like y_test
    predictions = scaler.inverse_transform(predictions)  # back to mg/dL
    # Create continuous prediction data to plot against continuous_ytest
    continuous_predictions = predictions[0]
    for i in range(1, len(predictions)):
        continuous_predictions = np.concatenate([continuous_predictions, predictions[i]])
    rmse = np.sqrt(np.mean((predictions - y_test) ** 2))
    return model, predictions, continuous_predictions, rmse
```
Finally, we can have a function to plot our results
```
def show_plots(continuous_ytest, continuous_predictions):
    plt.figure(figsize=(16, 8))
    plt.title('Blood Glucose Prediction Model Result')
    plt.plot(continuous_ytest, color='b')
    plt.plot(continuous_predictions, color='r')
    plt.xlabel('Timestamp', fontsize=18)
    plt.ylabel('BG (mg/dL)', fontsize=18)
    plt.legend(['Real', 'Predictions'], loc='lower right')
    plt.show()
```
Now just testing!
```
scaler, x_train, y_train, x_test, y_test, continuous_ytest = read_pacient() # standard parameters are for pacient number 8
model, predictions, continuous_predictions, rmse = make_predictions(scaler, x_train, y_train, x_test, y_test,batch_size=100, epochs=100)
```
## Results of training with patient number 08
```
show_plots(continuous_ytest, continuous_predictions)
print("Root-Mean-Squared Deviation {}".format(rmse))
```
We can now create a function to apply this model to other patients.
```
def test_model(model, age="adolescent#0", number="01", extension=".csv", input_len=20, output_len=6):
    # Reading the file
    df = pd.read_csv(age + number + extension)
    df.Time = pd.to_datetime(df.Time)
    df = df[0:2480]  # the final rows for some patients are not relevant (they stay in hypo for too long to be realistic)
    df = df.set_index("Time")
    # Keeping only the blood glucose readings from the sensor
    data = df.filter(['CGM'])
    dataset = data.values
    # Scaling data to 0-1 before feeding the neural network
    scaler = MinMaxScaler(feature_range=(0, 1))
    scaled_data = scaler.fit_transform(dataset)
    x_test = []  # arrays of blood glucose of length input_len
    y_test = []  # arrays of blood glucose of length output_len
    continuous_ytest = []  # not-scaled blood glucose from y_test, not broken into arrays
    i = input_len
    while i < len(dataset) - output_len:
        x_test.append(scaled_data[i - input_len:i, 0])
        y_test.append(dataset[i:i + output_len, 0])
        for bg in dataset[i:i + output_len, 0]:
            continuous_ytest.append(bg)  # not for testing, just for plotting
        i = i + output_len  # jump output_len values into the future
    x_test = np.array(x_test)  # converting to numpy array
    x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
    predictions = model.predict(x_test)  # make predictions
    predictions = np.reshape(predictions, (predictions.shape[0], predictions.shape[1]))  # reshape just like y_test
    predictions = scaler.inverse_transform(predictions)  # back to mg/dL
    # Create continuous prediction data to plot against continuous_ytest
    continuous_predictions = predictions[0]
    for i in range(1, len(predictions)):
        continuous_predictions = np.concatenate([continuous_predictions, predictions[i]])
    rmse = np.sqrt(np.mean((predictions - y_test) ** 2))
    return rmse, continuous_ytest, continuous_predictions
rmse2, continuous_ytes2, continuous_predictions2 = test_model(model)
```
### Results with Patient number 01
```
show_plots(continuous_ytes2, continuous_predictions2)
print("Root-Mean-Squared Deviation {}".format(rmse2))
rmse3, continuous_ytes3, continuous_predictions3 = test_model(model, number="10")  # unpack in the order test_model returns
```
### Results with Patient number 10
```
show_plots(continuous_ytes3, continuous_predictions3)
print("Root-Mean-Squared Deviation {}".format(rmse3))
rmse4, continuous_ytes4, continuous_predictions4 = test_model(model, number="07")  # unpack in the order test_model returns
```
### Results with Patient number 07
```
show_plots(continuous_ytes4, continuous_predictions4)
print("Root-Mean-Squared Deviation {}".format(rmse4))
```
People always ask: "Can you randomize several times and use the proportion of selections, instead of just one randomization?"
Let's try to figure this out.
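Before writing any weights, note what the selection event is: conditional on Z, each randomized check succeeds with probability π(Z) = P(Z + σW > 0), so "more than a fraction q of `ntries` checks succeed" is a binomial tail event. A quick Monte Carlo check of that identity (the values of Z, σ, `ntries`, q below mirror the ones used in this notebook, but are otherwise illustrative):

```python
import numpy as np
import scipy.stats

Z, sigma, ntries, q = 0.2, np.sqrt(2.0), 15, 0.60
piZ = scipy.stats.norm.sf(-Z / sigma)   # P(Z + sigma*W > 0), given Z

# Exact selection probability: P(Binomial(ntries, piZ) > ntries*q)
exact = scipy.stats.binom.sf(ntries * q, ntries, piZ)

# Monte Carlo version of the same event
rng = np.random.RandomState(0)
W = rng.standard_normal((100000, ntries))
proportions = (Z + sigma * W > 0).mean(axis=1)
mc = (proportions > q).mean()
print(exact, mc)  # the two should agree to roughly two decimal places
```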
```
import numpy as np
import regreg.api as rr
import seaborn as sns
%matplotlib inline
%load_ext rpy2.ipython
import matplotlib.pyplot as plt
import scipy.stats
import statsmodels.api as sm
from selection.distributions.discrete_family import discrete_family
nsample = 100
ntries, q = 15, 0.60
frac = 0.5
sigma = 1. / float(np.sqrt(frac))
truth = 0
def boot_algorithm(sample, ntries=ntries, q=q):
    proportion = 0
    nsample = sample.shape[0]
    for _ in range(ntries):
        boot_sample = sample[np.random.choice(np.arange(nsample), size=int(frac * nsample), replace=True)]
        proportion += np.sqrt(nsample) * np.mean(boot_sample) > 0
    proportion /= float(ntries)
    return proportion > q
def algorithm(Z, ntries=ntries, q=q):
    proportion = 0
    for _ in range(ntries):
        proportion += (Z + sigma * np.random.standard_normal() > 0)
    proportion /= float(ntries)
    return proportion > q
def fit_algorithm(algorithm, B=10000, ntries=ntries, q=q):
    scale = np.random.choice([0.5, 1, 1.5, 2], size=B)
    Z = np.multiply(np.random.standard_normal(B), scale)
    Y = np.array([algorithm(z, ntries=ntries, q=q) for z in Z])
    %R -i Y,Z M = glm(Y ~ Z, family=binomial(link=probit))
    coefM = %R coef(M)
    return coefM

def fit_algorithm1(algorithm, B=10000, ntries=ntries, q=q):
    scale = np.random.choice([0.5, 1, 1.5, 2], size=B)
    Z = np.multiply(np.random.standard_normal(B), scale)
    Y = np.array([algorithm(z, ntries=ntries, q=q) for z in Z])
    %R -i Y,Z M = glm(Y ~ Z, family=binomial(link=logit))
    coefM = %R coef(M)
    return coefM
coefM = fit_algorithm(algorithm)
coefM1 = fit_algorithm1(algorithm)
def simulate(n=100, ntries=ntries, q=q, sigma=sigma, truth=truth):
    while True:
        sample = np.random.standard_normal(size=nsample) + truth
        if boot_algorithm(sample, ntries, q):
            return np.sqrt(nsample) * np.mean(sample)
    # while True:
    #     Z = np.random.standard_normal() + truth
    #     if algorithm(Z, ntries, q=q):
    #         return Z

simulate()
def weight(Z, ntries=ntries, sigma=sigma, q=q):
    piZ = scipy.stats.norm.sf(-Z / sigma)
    return scipy.stats.binom.sf(ntries * q, ntries, piZ)

def weight_fit(Z, coef=coefM):
    # Use the coef argument (the original referenced the global coefM,
    # which made the logit fit silently reuse the probit coefficients)
    linpred = coef[0] + coef[1] * Z
    return scipy.stats.norm.cdf(linpred)

def weight_LD(Z, ntries=ntries, sigma=sigma, q=q):
    phiZ = scipy.stats.norm.sf(-Z / sigma)
    return np.exp(-ntries * (q * np.log(q / phiZ) + (1 - q) * np.log((1 - q) / (1 - phiZ)))) * (phiZ < q) + (phiZ >= q)
weight(0.2)
Z = np.linspace(-4, 4, 1001)
W = [weight_LD(z) for z in Z]
W0 = [weight(z) for z in Z]
W1 = [weight_fit(z, coef=coefM) for z in Z]
W2 = [weight_fit(z, coef=coefM1) for z in Z]
plt.plot(Z, np.log(W0), color='r', label='true', linewidth=2)
plt.plot(Z, np.log(W), color='g', label='LD', linewidth=2)
plt.plot(Z, np.log(W1), color='c', label='fit probit', linewidth=2)
plt.plot(Z, np.log(W2), color='b', label='fit logit', linestyle=":", linewidth=2)
plt.legend(fontsize=18, loc="lower right")
plt.xlabel(r"$\sqrt{n}x$", fontsize=18)
plt.ylabel(r"log $s(x)$", fontsize=18)
plt.title("Selection probability function", fontsize=20)
plt.savefig("simple_example_sel_prob.pdf")
selective_law = discrete_family(Z, W * scipy.stats.norm.pdf(Z))
selective_law0 = discrete_family(Z, W0 * scipy.stats.norm.pdf(Z))
selective_law1 = discrete_family(Z, W1 * scipy.stats.norm.pdf(Z))
selective_law2 = discrete_family(Z, W2 * scipy.stats.norm.pdf(Z))
def pivot(z, truth=0):
return 1 - selective_law.cdf(truth, z)
def pivot0(z, truth=0):
return 1 - selective_law0.cdf(truth, z)
def pivot1(z, truth=0):
return 1 - selective_law1.cdf(truth, z)
def pivot2(z, truth=0):
return 1 - selective_law2.cdf(truth, z)
pivot(simulate())
P0 = []
npts = 1000
for _ in range(npts):
naive_pivot = 1-scipy.stats.norm.cdf(simulate() - truth)
P0.append((pivot(simulate()), pivot0(simulate()), pivot1(simulate()), pivot2(simulate()), naive_pivot))
P0 = np.array(P0)
U = np.linspace(0, 1, npts+1)
plt.plot(U, sm.distributions.ECDF(P0[:,1])(U), 'r', label='true', linewidth=2)
plt.plot(U, sm.distributions.ECDF(P0[:,0])(U), 'g', label='LD', linewidth=2)
plt.plot(U, sm.distributions.ECDF(P0[:,2])(U), 'c', label='fit probit', linewidth=2)
plt.plot(U, sm.distributions.ECDF(P0[:,3])(U), 'b', label='fit logit', linestyle=":", linewidth=2)
plt.plot(U, sm.distributions.ECDF(P0[:,4])(U), 'y', label='naive', linewidth=2)
plt.plot([0, 1], [0, 1], 'k--')
plt.legend(fontsize=16, loc="lower right")
plt.xlabel("Observed p-value", fontsize=18)
plt.ylabel("Proportion (empirical CDF)", fontsize=18)
plt.title("P-values", fontsize=20)
plt.savefig("simple_example_pivots.pdf")
PA = []
for _ in range(1000):
PA.append((pivot(simulate(truth=1), truth=1),
pivot0(simulate(truth=1), truth=1),
pivot1(simulate(truth=1), truth=1)))
PA = np.array(PA)
U = np.linspace(0, 1, 101)
plt.plot(U, sm.distributions.ECDF(PA[:,1])(U), 'r', label='True')
plt.plot(U, sm.distributions.ECDF(PA[:,0])(U), 'g', label='LD')
plt.plot(U, sm.distributions.ECDF(PA[:,2])(U), 'c', label='fit')
plt.plot([0, 1], [0, 1], 'k--')
selective_law.equal_tailed_interval(-1)
Z0 = np.linspace(-2,2,501)
selective_law = discrete_family(Z, W * scipy.stats.norm.pdf(Z))
LU = []
for z in Z0:
selective_law = discrete_family(Z, W * scipy.stats.norm.pdf(Z))
LU.append(selective_law.equal_tailed_interval(z))
LU = np.array(LU)
LU0 = []
for z in Z0:
selective_law = discrete_family(Z, W0 * scipy.stats.norm.pdf(Z))
LU0.append(selective_law.equal_tailed_interval(z))
LU0 = np.array(LU0)
LU1 = []
for z in Z0:
selective_law = discrete_family(Z, W1 * scipy.stats.norm.pdf(Z))
LU1.append(selective_law.equal_tailed_interval(z))
LU1 = np.array(LU1)
LU2 = []
for z in Z0:
selective_law = discrete_family(Z, W2 * scipy.stats.norm.pdf(Z))
LU2.append(selective_law.equal_tailed_interval(z))
LU2 = np.array(LU2)
plt.plot(Z0, LU[:,0], 'g', label='LD')
plt.plot(Z0, LU[:,1], 'g')
plt.plot(Z0, LU0[:,0], 'r', label='true')
plt.plot(Z0, LU0[:,1], 'r')
plt.plot(Z0, LU1[:,0], 'c', label='fit probit')
plt.plot(Z0, LU1[:,1], 'c')
plt.plot(Z0, LU2[:,0], 'b', label='fit logit')
plt.plot(Z0, LU2[:,1], 'b')
plt.legend(loc="lower right")
np.random.seed(1)
coverage, ncover, truth = 0, 1000, 0
lengths = []
for _ in range(ncover):
z = simulate(truth=truth)
selective_law = discrete_family(Z, W * scipy.stats.norm.pdf(Z))
L, U = selective_law.equal_tailed_interval(z)
coverage += (L < truth) * (U > truth)
lengths.append(U-L)
coverage / float(ncover), np.mean(lengths), np.std(lengths)
np.random.seed(1)
coverage, ncover, truth = 0, 1000, 0.5
lengths = []
for _ in range(ncover):
z = simulate(truth=truth)
selective_law = discrete_family(Z, W0 * scipy.stats.norm.pdf(Z))
L, U = selective_law.equal_tailed_interval(z)
coverage += (L < truth) * (U > truth)
lengths.append(U-L)
coverage / float(ncover), np.mean(lengths), np.std(lengths)
np.random.seed(1)
coverage, ncover, truth = 0, 1000, -3.
lengths = []
for _ in range(ncover):
z = simulate(truth=truth)
selective_law = discrete_family(Z, W1 * scipy.stats.norm.pdf(Z))
L, U = selective_law.equal_tailed_interval(z)
coverage += (L < truth) * (U > truth)
lengths.append(U-L)
coverage / float(ncover), np.mean(lengths), np.std(lengths)
np.random.seed(1)
coverage, ncover, truth = 0, 1000, -3.
lengths = []
for _ in range(ncover):
z = simulate(truth=truth)
selective_law = discrete_family(Z, W2 * scipy.stats.norm.pdf(Z))
L, U = selective_law.equal_tailed_interval(z)
coverage += (L < truth) * (U > truth)
lengths.append(U-L)
coverage / float(ncover), np.mean(lengths), np.std(lengths)
```
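Here `discrete_family` (from the selective-inference package imported earlier, outside this excerpt) represents a one-parameter exponential family with reference weights `W * norm.pdf(Z)`, and `cdf(theta, z)` evaluates the tilted CDF. As a sanity check on what `pivot(z, truth)` computes, here is a pure-stdlib sketch of the same grid calculation — `grid_pivot` and its defaults are my own names, not the package's API:

```python
import math

def grid_pivot(z_obs, truth=0.0, lo=-4.0, hi=4.0, npts=1001,
               weight_fn=lambda z: 1.0):
    """Right-tail pivot 1 - F_truth(z_obs) for the discrete tilted law
    f_truth(z) proportional to weight_fn(z) * phi(z) * exp(truth * z)."""
    zs = [lo + (hi - lo) * i / (npts - 1) for i in range(npts)]
    # unnormalized tilted masses; the 1/sqrt(2*pi) factor cancels on normalizing
    mass = [weight_fn(z) * math.exp(-0.5 * z * z + truth * z) for z in zs]
    total = sum(mass)
    cdf = sum(m for z, m in zip(zs, mass) if z <= z_obs) / total
    return 1.0 - cdf

# with a constant weight the tilted law is just N(truth, 1),
# so the pivot evaluated at z_obs = truth is ~0.5
grid_pivot(0.0, truth=0.0)
```

Swapping the selection probability in for `weight_fn` reproduces the selective pivots above up to grid discretization.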
# Increasing number of tries
```
ntries, sigma, q = 31, 1, 0.65
Z = np.linspace(-4, 4, 1001)
W = [weight_LD(z, ntries=ntries, sigma=sigma, q=q) for z in Z]
W0 = [weight(z, ntries=ntries, sigma=sigma, q=q) for z in Z]
selective_law = discrete_family(Z, W * scipy.stats.norm.pdf(Z))
selective_law0 = discrete_family(Z, W0 * scipy.stats.norm.pdf(Z))
def pivot(z, truth=0):
return 1 - selective_law.cdf(truth, z)
def pivot0(z, truth=0):
    return 1 - selective_law0.cdf(truth, z)
def algorithm(Z, ntries=ntries, q=q):
proportion = 0
for _ in range(ntries):
proportion += (Z + sigma * np.random.standard_normal() > 0)
proportion /= ntries
return proportion > q
def fit_algorithm(algorithm, B=10000, ntries=ntries, q=q):
Z = np.random.standard_normal(B) * 2
Y = np.array([algorithm(z, ntries=ntries, q=q) for z in Z])
%R -i Y,Z M = glm(Y ~ Z, family=binomial(link=probit))
coefM = %R coef(M)
return coefM
coefM = fit_algorithm(algorithm)
def weight_fit(Z, coef=coefM):
    linpred = coef[0] + coef[1] * Z
    return scipy.stats.norm.cdf(linpred)
W1 = [weight_fit(z) for z in Z]
selective_law1 = discrete_family(Z, W1 * scipy.stats.norm.pdf(Z))
pivot(simulate())
P0 = []
truth = 0
for _ in range(1000):
P0.append((pivot(simulate(ntries=ntries, sigma=sigma, truth=truth)),
pivot0(simulate(ntries=ntries, sigma=sigma, truth=truth)),
pivot1(simulate(ntries=ntries, sigma=sigma, truth=truth)),
1-scipy.stats.norm.cdf(simulate(ntries=ntries, sigma=sigma, truth=truth) - truth)))
P0 = np.array(P0)
U = np.linspace(0, 1, 101)
plt.plot(U, sm.distributions.ECDF(P0[:,1])(U), 'r', label='True')
plt.plot(U, sm.distributions.ECDF(P0[:,0])(U), 'g', label='LD')
plt.plot(U, sm.distributions.ECDF(P0[:,2])(U), 'c', label='fit')
plt.plot(U, sm.distributions.ECDF(P0[:,3])(U), 'y', label='naive')
plt.plot([0, 1], [0, 1], 'k--')
plt.legend()
truth = -1
PA = []
for _ in range(1000):
PA.append((pivot(simulate(ntries=ntries, sigma=sigma, truth=truth), truth=truth),
pivot0(simulate(ntries=ntries, sigma=sigma, truth=truth), truth=truth),
pivot1(simulate(ntries=ntries, sigma=sigma, truth=truth), truth=truth),
1-scipy.stats.norm.cdf(simulate(ntries=ntries, sigma=sigma, truth=truth) - truth)))
PA = np.array(PA)
U = np.linspace(0, 1, 101)
plt.plot(U, sm.distributions.ECDF(PA[:,1])(U), 'r', label='True')
plt.plot(U, sm.distributions.ECDF(PA[:,0])(U), 'g', label='LD')
plt.plot(U, sm.distributions.ECDF(PA[:,2])(U), 'c', label='fit', linewidth=2)
plt.plot(U, sm.distributions.ECDF(PA[:,3])(U), 'y', label='naive')
plt.plot([0, 1], [0, 1], 'k--')
plt.legend()
truth = -2
PA = []
for _ in range(1000):
PA.append((pivot(simulate(ntries=ntries, sigma=sigma, truth=truth), truth=truth),
pivot0(simulate(ntries=ntries, sigma=sigma, truth=truth), truth=truth),
pivot1(simulate(ntries=ntries, sigma=sigma, truth=truth), truth=truth),
1-scipy.stats.norm.cdf(simulate(ntries=ntries, sigma=sigma, truth=truth) - truth)))
PA = np.array(PA)
U = np.linspace(0, 1, 101)
plt.plot(U, sm.distributions.ECDF(PA[:,1])(U), 'r', label='True')
plt.plot(U, sm.distributions.ECDF(PA[:,0])(U), 'g', label='LD')
plt.plot(U, sm.distributions.ECDF(PA[:,2])(U), 'c', label='fit', linewidth=2)
plt.plot(U, sm.distributions.ECDF(PA[:,3])(U), 'y', label='naive')
plt.plot([0, 1], [0, 1], 'k--')
plt.legend()
truth = 1
PA = []
for _ in range(1000):
PA.append((pivot(simulate(ntries=ntries, sigma=sigma, truth=truth), truth=truth),
pivot0(simulate(ntries=ntries, sigma=sigma, truth=truth), truth=truth),
pivot1(simulate(ntries=ntries, sigma=sigma, truth=truth), truth=truth),
1-scipy.stats.norm.cdf(simulate(ntries=ntries, sigma=sigma, truth=truth) - truth)))
PA = np.array(PA)
U = np.linspace(0, 1, 101)
plt.plot(U, sm.distributions.ECDF(PA[:,1])(U), 'r', label='True')
plt.plot(U, sm.distributions.ECDF(PA[:,0])(U), 'g', label='LD')
plt.plot(U, sm.distributions.ECDF(PA[:,2])(U), 'c', label='fit', linewidth=2)
plt.plot(U, sm.distributions.ECDF(PA[:,3])(U), 'y', label='naive')
plt.plot([0, 1], [0, 1], 'k--')
plt.legend()
```
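The selection rule `algorithm` used throughout draws `ntries` noisy copies of Z and accepts when more than a fraction q of them are positive. Stripped of the notebook's globals, a self-contained restatement (defaults mirror this section's `ntries, q = 31, 0.65`):

```python
import random

def algorithm_py(Z, ntries=31, q=0.65, sigma=1.0):
    """Accept Z when more than a fraction q of ntries noisy sign
    checks Z + sigma * N(0, 1) > 0 succeed."""
    hits = sum(Z + sigma * random.gauss(0, 1) > 0 for _ in range(ntries))
    return hits / ntries > q

# far from the decision boundary the rule is essentially deterministic
algorithm_py(10.0)    # → True: every noisy copy is positive
algorithm_py(-10.0)   # → False: no noisy copy is positive
```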
```
# slow down a bit when hacking something together, e.g. I forgot to add a simple function call
# tuple unpacking is nice, but cannot be done in a nested list comprehension
# don't forget .items in for k,v in dict.items()
# use hashlib for md5 encodings
# multiline list comprehensions don't need extra parentheses, but multiline if statements do
# np.clip min and max can be omitted by specifying None
# try except looks nice until it obscures your real error
# parsing numbers to ints instead of keeping them as strings is really important
# checking whether something is an int should be done with isinstance, not with isalpha() (fails on int)
# removing from a list while iterating can be done safely by iterating over a slice(?)
# with re make sure to use r'' literal strings
# read assignment before tinkering with networkx and discovering its not necessary
# sometimes a simple for loop works better than a list comprehension when parsing the input — just accumulate into variables
# for incrementing a string, you can use chr(ord(inp)+1)
# find repeating characters re.findall(r'([a-z])\1', password)
# regex: modify operator to nongreedy by appending ?
from dataclasses import dataclass
from math import gcd, ceil
import re
from collections import Counter, defaultdict, namedtuple, deque
import itertools
import numpy as np
from matplotlib import pyplot as plt
from aoc_utils import *
import networkx as nx
from itertools import permutations
# method 1 for parsing
lines = data('input', parser=str, sep='\n')
sues = dict()
for line in lines:
num, elements = line.split(':',maxsplit=1)
_, num = num.split()
sues[num] = dict((e.split(':') for e in elements.replace(' ',"").split(',')))
# method 2 for parsing, Peter Norvig style
def parser(line):
num, elements = line.split(':',maxsplit=1)
_, num = num.split()
return num, dict((e.split(':') for e in elements.replace(' ',"").split(',')))
sues = dict(data('input', parser=parser, sep='\n'))
def parser(line):
prop, num = line.split(': ')
return prop, int(num)
realsue = dict(data('aunt', parser=parser, sep='\n'))
realsue
# method 1 solving
for k,v in sues.items():
issue=True
for element, number in v.items():
if element in ('cats', 'trees'):
if realsue[element] >= int(number):
issue=False
elif element in 'pomeraniansgoldfish': # just another fun way to check if string in
if realsue[element] <= int(number):
issue=False
else:
if realsue[element] != int(number):
issue=False
if issue:
print(k)
# Method 2 solving Peter Norvig style
def findsue():
return next(k for k,v in sues.items() if
all(realsue[element] < int(number) if element in ('cats', 'trees') else
realsue[element] > int(number) if element in 'pomeraniansgoldfish' else
realsue[element] == int(number)
for element, number in v.items())
)
findsue()
```
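Both parsing methods lean on the `data` helper from `aoc_utils`, which is not shown here. On a single literal line — a made-up aunt, not taken from the real input — the Norvig-style parser behaves like this:

```python
def parser(line):
    # "Sue 13: goldfish: 4" -> ('13', {'goldfish': '4'})
    num, elements = line.split(':', maxsplit=1)
    _, num = num.split()
    return num, dict(e.split(':') for e in elements.replace(' ', '').split(','))

num, props = parser("Sue 13: goldfish: 4, cars: 9, vizslas: 7")
# → num == '13', props == {'goldfish': '4', 'cars': '9', 'vizslas': '7'}
```

Note that both keys and values stay strings, which is why the solving code calls `int(number)` everywhere.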
```
from fastai2.vision.all import *
torch
```
https://github.com/pytorch/pytorch/issues/34086
Code from
* https://github.com/pytorch/pytorch/blob/2f840b1662b487d5551d7230f8eb4d57645cfff5/test/test_autograd.py
* https://github.com/pytorch/pytorch/blob/9600ed9af3b84c000b7f54765495e96f29c4bf1d/torch/autograd/profiler.py
* https://github.com/pytorch/pytorch/issues/19420
* https://github.com/pytorch/pytorch/issues/19422
* https://github.com/pytorch/pytorch/search?q=export_chrome_trace&type=Issues
Forum
* https://discuss.pytorch.org/t/interpreting-data-from-torch-autograd-profiler-profile/34390/4
* https://discuss.pytorch.org/t/proper-way-to-enable-and-disable-autograd-profiler/89773
## Import the essentials and copy the profiler tests
```
import json
import sys
import tempfile
import time

import torch
from torch.autograd.profiler import (profile, format_time, EventList,
                                     FunctionEvent, FunctionEventAvg,
                                     record_function, emit_nvtx)
class Some():
    # these must raise on failure; returning the comparison result
    # would let a failing test pass silently
    def assertTrue(self, v): assert v
    def assertFalse(self, v): assert not v
    def assertEqual(self, a, b): assert a == b, (a, b)
def test_profiler_tracing(self):
t1, t2 = torch.ones(1), torch.ones(1)
with torch.autograd.profiler.profile() as prof:
torch.add(t1, t2)
with tempfile.NamedTemporaryFile(mode="w+") as f:
print("export to chrome")
prof.export_chrome_trace(f.name)
# read the trace and expect valid json
# if the JSON generated by export_chrome_trace is not valid, this will throw and fail the test.
parsed = json.load(f)
print(f"printing the chrome trace json from {f.name}")
print(json.dumps(parsed, indent=4, sort_keys=True))
# Same test but for cuda.
if not torch.cuda.is_available():
return
device = torch.device("cuda:0")
t1, t2 = torch.ones(1, device=device), torch.ones(1, device=device)
with torch.autograd.profiler.profile(use_cuda=True) as prof:
torch.add(t1, t2)
with tempfile.NamedTemporaryFile(mode="w+") as f:
prof.export_chrome_trace(f.name)
# Now validate the json
parsed = json.load(f)
print(f"printing the chrome trace json from {f.name}")
print(json.dumps(parsed, indent=4, sort_keys=True))
def test_profiler(self):
x = torch.randn(10, 10)
with profile() as p:
self.assertTrue(torch.autograd._profiler_enabled())
y = x * 2 + 4
self.assertFalse(torch.autograd._profiler_enabled())
last_end = 0
names = ['aten::mul', 'aten::to', 'aten::empty_strided', 'aten::copy_',
'aten::empty', 'aten::add', 'aten::to', 'aten::empty_strided',
'aten::copy_', 'aten::empty']
top_level_names = ['aten::mul', 'aten::add']
top_level_iter = iter(top_level_names)
self.assertEqual(len(p.function_events), len(names))
for info, expected_name in zip(p.function_events, names):
if info.cpu_interval.start > last_end:
print(top_level_iter)
top_level_name_expected = next(top_level_iter)
self.assertEqual(info.name, top_level_name_expected)
last_end = info.cpu_interval.end
self.assertEqual(info.name, expected_name)
def test_profiler_unboxed_only(self):
x = torch.rand(3, 4)
with torch.autograd.profiler.profile() as prof:
x.resize_([3, 2])
# @skipIfRocm
def test_profiler_custom_op(self):
inst = torch.classes._TorchScriptTesting._PickleTester([3, 4])
with torch.autograd.profiler.profile() as prof:
torch.ops._TorchScriptTesting.take_an_instance(inst)
found_event = False
for e in prof.function_events:
if e.name == '_TorchScriptTesting::take_an_instance':
found_event = True
self.assertTrue(found_event)
def test_profiler_propagation(self):
def foo(x):
with record_function("in_foo") as rf:
return x * 2
x = torch.rand(3, 4)
traced_foo = torch.jit.trace(foo, x)
def bar(x):
with record_function("in_bar") as rf:
# we expect that profiler will be able
# propagate across fork
fut = torch.jit._fork(traced_foo, x)
y = torch.jit._wait(fut)
# note: continuation (and rf's end) can
# be executed in a different thread
with record_function("in_bar_after_wait") as rf2:
y = y * 2
return y
traced_bar = torch.jit.trace(bar, x)
with profile() as p:
traced_bar(x)
found_foo = False
found_bar = False
found_bar_after_wait = False
for info in p.function_events:
if info.name == "in_foo":
self.assertFalse(found_foo)
found_foo = True
elif info.name == "in_bar":
self.assertFalse(found_bar)
found_bar = True
elif info.name == "in_bar_after_wait":
self.assertFalse(found_bar_after_wait)
found_bar_after_wait = True
self.assertTrue(found_foo)
self.assertTrue(found_bar)
self.assertTrue(found_bar_after_wait)
def test_record_function_callbacks(self):
x = torch.randn(10, 10)
with profile() as p:
with record_function("foo"):
y = x * 2 + 4
function_events = p.function_events
foo_event = [event for event in function_events if "foo" in event.name][0]
self.assertEqual(foo_event.count, 1)
def test_profiler_aggregation_fake(self):
events = EventList()
id = [0]
def get_id():
id[0] = id[0] + 1
return id[0]
# [[thread_id, [(start, end, id), ....]], ...]
# Using list instead of a dict so order is guaranteed for any Python
# version
threads = [
[1, [(0, 1, get_id()), (1, 2, get_id())]],
[0, [(0, 2, get_id()), (1, 2, get_id()), (1, 3, get_id())]],
]
for thread, ranges in threads:
for range in ranges:
assert(len(range) == 3)
events.append(
FunctionEvent(
id=range[2],
node_id=0,
name="",
thread=thread,
cpu_start=range[0],
cpu_end=range[1],
)
)
events.populate_cpu_children()
# Note that [1, 3] pushes out [0, 2] first. Then we record [1, 2]
# as a child of [1, 3]
res = [[], [], [], [], [4]]
def get_children_ids(event):
return [child.id for child in event.cpu_children]
assert([get_children_ids(event) for event in events] == res)
def test_profiler_aggregation_table(self):
"""
Test if the profiling result is aggregated for `str(prof)`
See: https://github.com/pytorch/pytorch/issues/37500
"""
x = torch.randn(1024)
with torch.autograd.profiler.profile() as prof:
torch.einsum("i->", x)
prof_str = str(prof)
prof_table = prof.table()
self.assertEqual(prof_table, prof_str)
def test_profiler_function_event_avg(self):
avg = FunctionEventAvg()
avg.add(FunctionEvent(id=0, node_id=0, name="foo", thread=0, cpu_start=10, cpu_end=15))
avg.add(FunctionEvent(id=1, node_id=0, name="foo", thread=0, cpu_start=20, cpu_end=30))
avg.add(avg)
self.assertEqual(avg.key, "foo")
# aggregate stats
self.assertEqual(avg.count, 4)
self.assertEqual(avg.cpu_time_total, 30)
self.assertEqual(avg.self_cpu_time_total, 30)
self.assertEqual(avg.cuda_time_total, 0)
# average stats
self.assertEqual(avg.cpu_time, 7.5)
self.assertEqual(avg.cuda_time_total, 0)
def test_profiler_shapes(self):
print("")
layer1 = torch.nn.Linear(20, 30)
layer2 = torch.nn.Linear(30, 40)
input = torch.randn(128, 20)
with profile(record_shapes=True) as prof:
layer2(layer1(input))
print(prof.function_events)
top_level_expected_events_and_shapes = [
(None, [[30, 20]]),
('aten::addmm', [[30], [128, 20], [20, 30], [], []]),
(None, [[40, 30]]),
('aten::addmm', [[40], [128, 30], [30, 40], [], []])
]
expected_iter = iter(top_level_expected_events_and_shapes)
last_end = 0
for event in prof.function_events:
if event.cpu_interval.start > last_end:
name_expected, input_shape_expected = next(expected_iter)
if name_expected is not None:
self.assertEqual(event.name, name_expected)
self.assertEqual(event.input_shapes, input_shape_expected)
last_end = event.cpu_interval.end
def test_profiler_no_cuda(self):
print("")
layer = torch.nn.Linear(20, 30)
x = torch.randn(128, 20)
with profile(use_cuda=False) as prof:
layer(x)
prof_str = str(prof)
print(prof_str)
self.assertTrue('cpu' in prof_str.lower())
self.assertTrue('cuda' not in prof_str.lower())
def test_profiler_aggregation_lstm(self):
print("")
rnn = torch.nn.LSTM(10, 20, 2)
total_time_s = 0
with profile(record_shapes=True) as prof:
for i in range(20):
input = torch.randn(5, 3, 10)
h = torch.randn(2, 3, 20)
c = torch.randn(2, 3, 20)
start = time.time()
rnn(input, (h, c))
end = time.time()
total_time_s += end - start
print(prof.table(
sort_by="self_cpu_time_total", row_limit=10, header="TEST"))
print(prof.key_averages(group_by_input_shape=True).table(
sort_by="self_cpu_time_total", row_limit=10))
total_time_us = total_time_s * 1000.0 * 1000.0 # make it us which is profiler default
print(
"Total time based on python measurements: ",
format_time(total_time_us)
)
print(
"CPU time measurement python side overhead: {:.2f}%".format(
(total_time_us / prof.self_cpu_time_total - 1.0) * 100.0
)
)
if sys.platform != "win32":
with tempfile.NamedTemporaryFile() as trace_file:
prof.export_chrome_trace(trace_file.name)
def test_memory_profiler(self):
def run_profiler(tensor_creation_fn, metric):
# collecting allocs / deallocs
with profile(profile_memory=True, record_shapes=True) as prof:
x = None
with record_function("test_user_scope_alloc"):
x = tensor_creation_fn()
with record_function("test_user_scope_dealloc"):
del x
stats = prof.key_averages(group_by_input_shape=True)
print(stats.table(sort_by=metric))
return stats
def check_metrics(stats, metric, allocs=None, deallocs=None):
stat_metrics = {}
for stat in stats:
stat_metrics[stat.key] = getattr(stat, metric)
if allocs is not None:
for alloc_fn in allocs:
self.assertTrue(alloc_fn in stat_metrics)
self.assertTrue(stat_metrics[alloc_fn] > 0)
if deallocs is not None:
for dealloc_fn in deallocs:
self.assertTrue(dealloc_fn in stat_metrics)
self.assertTrue(stat_metrics[dealloc_fn] < 0)
def create_cpu_tensor():
return torch.rand(10, 10)
def create_cuda_tensor():
return torch.rand(10, 10).cuda()
def create_mkldnn_tensor():
return torch.rand(10, 10, dtype=torch.float32).to_mkldnn()
print("Running CPU test")
stats = run_profiler(create_cpu_tensor, "cpu_memory_usage")
check_metrics(
stats,
"cpu_memory_usage",
allocs=[
"aten::empty",
"aten::rand",
"test_user_scope_alloc",
],
deallocs=[
"test_user_scope_dealloc",
]
)
if torch.cuda.is_available():
create_cuda_tensor()
print("Running CUDA test")
stats = run_profiler(create_cuda_tensor, "cuda_memory_usage")
check_metrics(
stats,
"cuda_memory_usage",
allocs=[
"test_user_scope_alloc",
"aten::to",
"aten::empty_strided",
],
deallocs=[
"test_user_scope_dealloc",
]
)
check_metrics(
stats,
"cpu_memory_usage",
allocs=[
"aten::rand",
"aten::empty",
]
)
if torch._C.has_mkldnn:
create_mkldnn_tensor()
print("Running MKLDNN test")
stats = run_profiler(create_mkldnn_tensor, "cpu_memory_usage")
check_metrics(
stats,
"cpu_memory_usage",
allocs=[
"test_user_scope_alloc",
"aten::rand",
"aten::empty",
"aten::to_mkldnn",
],
deallocs=[
"test_user_scope_dealloc",
]
)
# check partial overlap of tensor allocation with memory profiler
x = torch.rand(10, 10)
with profile(profile_memory=True, record_shapes=True) as prof:
del x
x = torch.rand(10, 10)
del x
stats = prof.key_averages(group_by_input_shape=True)
check_metrics(
stats,
"cpu_memory_usage",
allocs=[
"aten::rand",
"aten::empty",
]
)
def test_record_function(self):
x = torch.randn(10, 10)
def forward(x):
with record_function("outer"):
y = x * 2 + 4
with record_function("inner"):
y = y - 1
y = y / 1
forward(x)
with profile() as p:
forward(x)
events = p.function_events
important_events = [
'outer',
'aten::mul',
'aten::add',
'inner',
'aten::sub',
'aten::div'
]
idx = 0
for info in events:
if info.name == important_events[idx]:
idx = idx + 1
if idx == len(important_events):
break
self.assertEqual(idx, len(important_events))
# We can also use record_function to decorate arbitrary function
@record_function('my_func')
def f(x, y):
return x + y
with profile() as p:
f(1, 2)
self.assertTrue('my_func' in str(p))
o = Some()
o.test_profiler_tracing()
from torch.autograd.profiler import FunctionEvent, EventList
```

# Working out how to generate a valid trace
```
import random
l = [FunctionEvent(f'XXXX thread-{666+10+i}', i, "name", i, i*1000, i*1200) for i in range(10)]
l2 = [FunctionEvent(9999+100+i, i if random.random() < 0.4 else i+222, "CUDA name", i, i*1000+100, i*1200-100) for i in range(10)]
l.extend(l2)
for i, e in enumerate(l):
    device, start, end = random.randint(8, 10), i*1000+10, i*1200-10
    e.append_kernel("add", device, start, end)
    start, end = i*1000+100, i*1200-100
    e.append_kernel(f"sub_add {i}", device, start, end)
ev=EventList(l)
with tempfile.NamedTemporaryFile(mode="w+") as f:
    print(f"written to {f.name}")
    ev.export_chrome_trace(f.name)
    # read the trace and expect valid json
    # if the JSON generated by export_chrome_trace is not valid, this will throw and fail the test.
    print(f.read())
    print("read")
    f.seek(0)  # rewind: the read above left the file cursor at EOF
    parsed = json.load(f)
    print(json.dumps(parsed, sort_keys=True))  # indent=4
print(ev.table())
```
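For reference, `export_chrome_trace` writes the Chrome Trace Event Format: the `chrome://tracing` viewer accepts either a bare JSON array of events or an object with a `traceEvents` key, and a complete event uses `"ph": "X"` with a name, `pid`/`tid`, a microsecond start `ts`, and a duration `dur`. A minimal hand-built trace using only the stdlib:

```python
import json
import tempfile

# two complete ("ph": "X") events on the same process/thread lane
events = [
    {"name": "aten::mul", "ph": "X", "pid": 0, "tid": 0, "ts": 0, "dur": 120},
    {"name": "aten::add", "ph": "X", "pid": 0, "tid": 0, "ts": 130, "dur": 80},
]
with tempfile.NamedTemporaryFile(mode="w+", suffix=".json") as f:
    json.dump(events, f)
    f.seek(0)  # rewind: json.dump left the cursor at end-of-file
    parsed = json.load(f)
```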
```
# default_exp stats
%load_ext autoreload
%autoreload 2
```
# stats
> A way to access metadata on all the files due for processing.
```
#hide
from nbdev.showdoc import *
#export
import json
import os
import datetime
from pathlib import Path
from typing import Any
import pandas as pd
import numpy as np
import fitz
from rich import print
from rich.console import Console
from rich.table import Table
%load_ext rich
#export
def get_page_count(filepath: "os.PathLike[Any]") -> int:
"""Gets the page count of a PDF file."""
pdf_obj = fitz.open(filepath)
return pdf_obj.page_count
#export
def get_file_metadata(filepath: "os.PathLike[Any]") -> dict:
"""Gets the metadata associated with a PDF file."""
pdf_obj = fitz.open(filepath)
return pdf_obj.metadata
#export
def add_comma_separation(number: int) -> str:
    """Adds comma-separation for thousands to an integer."""
    return f'{number:,.0f}'
#export
def has_ocr_layer(filepath: "os.PathLike[Any]") -> bool:
"""Checks whether a particular file has an OCR layer."""
# TODO: fix this function
# pdf_obj = fitz.open(filepath)
# return len(pdf_obj[0].get_text("text")) == 0
return False
#export
def get_stats(source_path: "os.PathLike[Any]") -> list:
"""Gathers statistics on the PDF data contained in a particular directory."""
stats_data = []
files = list(source_path.glob("**/*.pdf")) # searches source_path and all subfolders
for file in files:
file_data = {
"filename": file.name,
"pagecount": get_page_count(file),
"has_ocr_layer": has_ocr_layer(file),
"pdf_file_size_bytes": os.path.getsize(file),
"date_created": datetime.datetime.fromtimestamp(os.path.getctime(file)),
"date_last_modified": datetime.datetime.fromtimestamp(os.path.getmtime(file)),
"author": get_file_metadata(file)['author'],
}
stats_data.append(file_data)
return stats_data
#export
def convert_timestamp(item_date_object):
"""Helper function to convert a datetime object to timestamp when
needed for a JSON object."""
if isinstance(item_date_object, (datetime.date, datetime.datetime)):
return item_date_object.timestamp()
#export
def get_json_stats(source_path: "os.PathLike[Any]") -> str:
"""Gathers statistics on the PDF data in a directory in JSON format."""
return json.dumps(get_stats(source_path), default=convert_timestamp)
#export
def get_dataframe_stats(source_path: "os.PathLike[Any]") -> pd.core.frame.DataFrame:
"""Gathers statistics on the PDF data in a directory as a dataframe."""
return pd.DataFrame.from_dict(get_stats(source_path))
#export
def export_stats_as_csv(source_path: "os.PathLike[Any]", destination_path: "os.PathLike[Any]" = Path("./stats.csv")) -> None:
"""Exports statistics on the PDF data as a CSV file."""
get_dataframe_stats(source_path).to_csv(destination_path)
```
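`json.dumps` cannot serialize `datetime` objects by itself; the `default=` hook above is invoked only for values the encoder cannot handle. A self-contained check of that pattern, using an explicit UTC datetime so the resulting timestamp is deterministic:

```python
import datetime
import json

def convert_timestamp(item_date_object):
    """Same helper as above: datetimes become POSIX timestamps."""
    if isinstance(item_date_object, (datetime.date, datetime.datetime)):
        return item_date_object.timestamp()

record = {"filename": "a.pdf",
          "date_created": datetime.datetime(2021, 1, 1, tzinfo=datetime.timezone.utc)}
encoded = json.dumps(record, default=convert_timestamp)
# → '{"filename": "a.pdf", "date_created": 1609459200.0}'
```

One caveat: a plain `datetime.date` has no `.timestamp()` method, so the `isinstance` check is broader than what the body actually supports.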
In this next section I manage to get some makeshift drift detection working with `evidently`:
```
# WHOLE SECTION COMMENTED OUT
# from evidently.dashboard import Dashboard
# from evidently.tabs import DataDriftTab
# from evidently.pipeline.column_mapping import ColumnMapping
# source1 = Path("/Users/strickvl/Desktop/NL")
# source2 = Path("/Users/strickvl/Desktop/machine-learning-flashcards")
# data_1 = get_dataframe_stats(source1)
# data_2 = get_dataframe_stats(source2)
# data_types_dict = {
# 'filename': str,
# "pagecount": np.number,
# 'has_ocr_layer': np.number,
# 'pdf_file_size_bytes': np.number,
# 'author': str,
# }
# data_1['date_created'] = pd.to_datetime(data_1['date_created'])
# data_1['date_last_modified'] = pd.to_datetime(data_1['date_last_modified'])
# data_1 = data_1.astype(data_types_dict)
# data_2['date_created'] = pd.to_datetime(data_2['date_created'])
# data_2['date_last_modified'] = pd.to_datetime(data_2['date_last_modified'])
# data_2 = data_2.astype(data_types_dict)
# cols_to_drop = ['date_created', 'date_last_modified', "filename", "author"]
# data_1.drop(cols_to_drop, axis=1, inplace=True)
# data_2.drop(cols_to_drop, axis=1, inplace=True)
# # export_stats_as_csv(source1, Path("./tryout/stats1.csv"))
# # export_stats_as_csv(source2, Path("./tryout/stats2.csv"))
# # df1 = pd.read_csv("./tryout/stats1.csv")
# # df2 = pd.read_csv("./tryout/stats1.csv")
# data_drift_report = Dashboard(tabs=[DataDriftTab()])
# # these next two lines need fixing
# # data_drift_report.calculate(data_1, data_2)
# # data_drift_report.show()
#export
def display_stats(stats_list: list) -> None:
"""Displays statistics on the PDF data contained in a particular directory."""
table = Table(title="Stats for your PDF Files")
table.add_column("PageCount", justify="right", style="green")
table.add_column("Filename", justify="left", style="cyan", no_wrap=True, max_width=50)
table.add_column("ocr_layer", justify="left", style="blue")
table.add_column("pdf_file_size_bytes", justify="left", style="purple")
table.add_column("author", justify="left", style="black")
page_count = 0
file_count = 0
for item in stats_list:
page_count += item['pagecount']
file_count += 1
if file_count <= 45:
table.add_row(
str(item['pagecount']),
item['filename'],
str(item['has_ocr_layer']),
str(item['pdf_file_size_bytes']),
item['author'],
)
        if file_count == 46:
            # add the ellipsis row only once, not once per remaining file
            table.add_row("...", "...", "...", "...", "...")
console = Console()
console.print(table)
bold_str_count = f"[bold red]{add_comma_separation(page_count)}"
console.print("[bold red]TOTAL PAGECOUNT:", bold_str_count)
source = Path("/Users/strickvl/Desktop/machine-learning-flashcards")
display_stats(get_stats(source))
```
# 7.5 Creating a DataLoader from IMDb (Internet Movie Database)
- In this notebook we use the IMDb (Internet Movie Database) data to build the Dataset and DataLoader for binary sentiment classification (0: negative, 1: positive).
Note: every notebook in this chapter assumes Ubuntu. Take care when running in an environment with a different character encoding, such as Windows.
# 7.5 Learning objectives
1. Be able to build a tsv file from text-format data files and create a torchtext DataLoader from it
# Preparation
Prepare the data used in this chapter by following the instructions in the book
# 1. Convert the IMDb dataset to tsv format
Download the dataset.
Note: torchtext actually ships a ready-made IMDb loader, but here we build everything from scratch so that the same approach works for datasets that have no prepared loader.
http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
50,000 reviews in total (25,000 each for train and test). Each file name encodes the review id and the rating (1-10).
Higher ratings are better (10 is best). Ratings of 4 or below are classed as negative, 7 or above as positive.
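Concretely, each review file is named `<id>_<rating>.txt` (e.g. `0_9.txt`), so the sentiment label can be recovered from the filename alone; a small helper sketch (the function name is mine, not the book's):

```python
import os

def label_from_filename(path):
    """Map a review filename like '123_9.txt' to 1 (positive, rating >= 7)
    or 0 (negative, rating <= 4)."""
    rating = int(os.path.basename(path).rsplit('.', 1)[0].split('_')[1])
    if rating >= 7:
        return 1
    if rating <= 4:
        return 0
    raise ValueError(f"unexpected rating {rating}: aclImdb contains no 5-6 ratings")

label_from_filename('./data/aclImdb/train/pos/123_9.txt')   # → 1
label_from_filename('./data/aclImdb/train/neg/456_2.txt')   # → 0
```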
```
# Convert the data to tsv-format files
import glob
import os
import io
import string

# Build the training-data tsv file
f = open('./data/IMDb_train.tsv', 'w')

path = './data/aclImdb/train/pos/'
for fname in glob.glob(os.path.join(path, '*.txt')):
    with io.open(fname, 'r', encoding="utf-8") as ff:
        text = ff.readline()
        # strip any tab characters so they cannot break the tsv format
        text = text.replace('\t', " ")
        text = text+'\t'+'1'+'\t'+'\n'
        f.write(text)

path = './data/aclImdb/train/neg/'
for fname in glob.glob(os.path.join(path, '*.txt')):
    with io.open(fname, 'r', encoding="utf-8") as ff:
        text = ff.readline()
        # strip any tab characters so they cannot break the tsv format
        text = text.replace('\t', " ")
        text = text+'\t'+'0'+'\t'+'\n'
        f.write(text)

f.close()

# Build the test-data tsv file
f = open('./data/IMDb_test.tsv', 'w')

path = './data/aclImdb/test/pos/'
for fname in glob.glob(os.path.join(path, '*.txt')):
    with io.open(fname, 'r', encoding="utf-8") as ff:
        text = ff.readline()
        # strip any tab characters so they cannot break the tsv format
        text = text.replace('\t', " ")
        text = text+'\t'+'1'+'\t'+'\n'
        f.write(text)

path = './data/aclImdb/test/neg/'
for fname in glob.glob(os.path.join(path, '*.txt')):
    with io.open(fname, 'r', encoding="utf-8") as ff:
        text = ff.readline()
        # strip any tab characters so they cannot break the tsv format
        text = text.replace('\t', " ")
        text = text+'\t'+'0'+'\t'+'\n'
        f.write(text)

f.close()
```
# 2. Define the preprocessing and tokenization functions
```
import string
import re

# Replace the symbols below with spaces (except commas and periods)
print("Separator characters:", string.punctuation)
# !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~

# Preprocessing
def preprocessing_text(text):
    # remove HTML line-break tags
    text = re.sub('<br />', '', text)
    # replace every symbol except commas and periods with a space
    for p in string.punctuation:
        if (p == ".") or (p == ","):
            continue
        else:
            text = text.replace(p, " ")
    # put spaces around periods and commas
    text = text.replace(".", " . ")
    text = text.replace(",", " , ")
    return text

# Tokenization (the data is English, so we simply split on whitespace)
def tokenizer_punctuation(text):
    return text.strip().split()

# Combine preprocessing and tokenization into a single function
def tokenizer_with_preprocessing(text):
    text = preprocessing_text(text)
    ret = tokenizer_punctuation(text)
    return ret

# Check that it works
print(tokenizer_with_preprocessing('I like cats.'))
```
# Create the DataLoader
```
# データを読み込んだときに、読み込んだ内容に対して行う処理を定義します
import torchtext
# 文章とラベルの両方に用意します
max_length = 256
TEXT = torchtext.data.Field(sequential=True, tokenize=tokenizer_with_preprocessing, use_vocab=True,
lower=True, include_lengths=True, batch_first=True, fix_length=max_length, init_token="<cls>", eos_token="<eos>")
LABEL = torchtext.data.Field(sequential=False, use_vocab=False)
# The arguments mean the following:
# init_token: the token placed at the start of every sentence
# eos_token: the token placed at the end of every sentence
# Read each tsv file from the "data" folder
train_val_ds, test_ds = torchtext.data.TabularDataset.splits(
path='./data/', train='IMDb_train.tsv',
test='IMDb_test.tsv', format='tsv',
fields=[('Text', TEXT), ('Label', LABEL)])
# Sanity check
print('Number of training/validation examples:', len(train_val_ds))
print('First training/validation example:', vars(train_val_ds[0]))
import random
# Split into training and validation sets with torchtext.data.Dataset's split method
train_ds, val_ds = train_val_ds.split(
split_ratio=0.8, random_state=random.seed(1234))
# Sanity check
print('Number of training examples:', len(train_ds))
print('Number of validation examples:', len(val_ds))
print('First training example:', vars(train_ds[0]))
```
# Build the vocabulary
```
# Load pretrained English word vectors with torchtext
from torchtext.vocab import Vectors
english_fasttext_vectors = Vectors(name='data/wiki-news-300d-1M.vec')
# Inspect the word vectors
print("Dimensions per word vector:", english_fasttext_vectors.dim)
print("Number of words:", len(english_fasttext_vectors.itos))
# Build a vocabulary backed by these vectors
TEXT.build_vocab(train_ds, vectors=english_fasttext_vectors, min_freq=10)
# Check the vocabulary's vectors
print(TEXT.vocab.vectors.shape)  # 17916 words, each represented by a 300-dimensional vector
TEXT.vocab.vectors
# Check the word-to-index mapping of the vocabulary
TEXT.vocab.stoi
# Create the DataLoaders (in torchtext these are simply called iterators)
train_dl = torchtext.data.Iterator(train_ds, batch_size=24, train=True)
val_dl = torchtext.data.Iterator(
val_ds, batch_size=24, train=False, sort=False)
test_dl = torchtext.data.Iterator(
test_ds, batch_size=24, train=False, sort=False)
# Sanity check with a batch from the validation set
batch = next(iter(val_dl))
print(batch.Text)
print(batch.Label)
```
Since the DataLoader stores word ids like this, the corresponding distributed representations have to be looked up from those ids on the deep-learning model side.
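To make that point concrete, here is a minimal NumPy sketch (with made-up sizes, independent of the real vocabulary) of how a model turns the ids in a batch into dense word vectors — in PyTorch this lookup is what an embedding layer initialized from `TEXT.vocab.vectors` (e.g. via `nn.Embedding.from_pretrained`) performs:

```python
import numpy as np

# Hypothetical, tiny stand-ins for the real objects:
# rows of `vectors` play the role of TEXT.vocab.vectors (one row per word id)
vocab_size, embed_dim = 5, 4
vectors = np.arange(vocab_size * embed_dim, dtype=np.float32).reshape(vocab_size, embed_dim)

# A "batch" of word ids, shaped (batch_size, seq_len) like batch.Text
ids = np.array([[1, 3, 0],
                [4, 2, 0]])

# Looking up each id's row converts the ids into dense word vectors
embedded = vectors[ids]  # shape: (batch_size, seq_len, embed_dim)
print(embedded.shape)    # (2, 3, 4)
```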
We save the code up to this point separately in dataloader.py inside the "utils" folder, and from the next section on we will import it from there.
That's all for this section.
# Before we start...
This colab notebook is a minimum demo for faceswap-GAN v2.2. Since Colab enforces a maximum run time of 12 hrs, we will only train a lightweight model in this notebook. **The purpose of this notebook is not to train a model that produces high quality results but a quick overview of how faceswap-GAN works.**
The pipeline of faceswap-GAN v2.2 is described below:
1. Upload two videos for training.
2. Apply face extraction (preprocessing) on the two uploaded videos
3. Train a lightweight faceswap-GAN model. (This will take 10 ~ 12 hrs)
4. Apply video conversion to the uploaded videos.
# Step 1: Set runtime type to Python 3/GPU
Set the colab notebook to GPU instance through: **runtime -> change runtime type -> Python3 and GPU**
The following cells will show the system information of the current instance. Run the cells and check if it uses python >= 3.6 and has a GPU device.
```
import platform
print(platform.python_version())
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
```
# Step 2: Git clone faceswap-GAN
```
!git clone https://github.com/shaoanlu/faceswap-GAN.git
%cd "faceswap-GAN"
```
# Step 3: Upload training videos
The user should upload two videos: **source video** and **target video**. The model will **transform the source face into the target face by default.**
- The videos better **contain only one person**.
- There is no limit on video length, but the longer the videos are, the longer preprocessing and video conversion will take, which may exceed the 12-hr run time limit. (**Recommended video length: 30 secs ~ 2 mins.**)
```
from google.colab import files
# Upload source video
source_video = files.upload()
for fn_source_video, _ in source_video.items():
print(fn_source_video)
# Upload target video
target_video = files.upload()
for fn_target_video, _ in target_video.items():
print(fn_target_video)
```
# Step 4: Set maximum training iterations
Default 25000 iters require ~ 10hrs of training.
Iterations >= 27k may exceed run time limit; Iterations < 18k may yield poorly-trained model.
```
global TOTAL_ITERS
TOTAL_ITERS = 34000
```
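The training loop below switches its loss configuration at fixed fractions of `TOTAL_ITERS`, each offset by half of `display_iters` (300). A small sketch that just reproduces those milestone computations:

```python
TOTAL_ITERS = 34000   # the value set above
display_iters = 300   # matches the training loop below

# Iterations at which the training loop rebuilds its loss functions
milestones = [
    TOTAL_ITERS // 5 - display_iters // 2,
    TOTAL_ITERS // 5 + TOTAL_ITERS // 10 - display_iters // 2,
    2 * TOTAL_ITERS // 5 - display_iters // 2,
    TOTAL_ITERS // 2 - display_iters // 2,
    2 * TOTAL_ITERS // 3 - display_iters // 2,
    8 * TOTAL_ITERS // 10 - display_iters // 2,
    9 * TOTAL_ITERS // 10 - display_iters // 2,
]
print(milestones)
```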
# Step 5: Everything is ready.
**Press Ctrl + F10 (or runtime -> run after)** to start the remaining process and leave this page alone. It will take 10 ~ 12 hours to finish training. The result video can be downloaded by running the last cell:
```python
files.download("OUTPUT_VIDEO.mp4")
# Some browsers do not support this line (e.g., Opera does not pop up a save dialog). Please use Firefox or Chrome.
```
Notice that **this page should not be closed or refreshed while running**.
```
%%capture
!pip install moviepy
!pip install keras_vggface
import imageio
imageio.plugins.ffmpeg.download()
import keras.backend as K
from detector.face_detector import MTCNNFaceDetector
import glob
from preprocess import preprocess_video
fd = MTCNNFaceDetector(sess=K.get_session(), model_path="./mtcnn_weights/")
!mkdir -p faceA/rgb
!mkdir -p faceA/binary_mask
!mkdir -p faceB/rgb
!mkdir -p faceB/binary_mask
save_interval = 5 # perform face detection every {save_interval} frames
save_path = "./faceA/"
preprocess_video(fn_source_video, fd, save_interval, save_path)
save_path = "./faceB/"
preprocess_video(fn_target_video, fd, save_interval, save_path)
print(str(len(glob.glob("faceA/rgb/*.*"))) + " face(s) extracted from source video: " + fn_source_video + ".")
print(str(len(glob.glob("faceB/rgb/*.*"))) + " face(s) extracted from target video: " + fn_target_video + ".")
```
## The following cells are from [FaceSwap_GAN_v2.2_train_test.ipynb](https://github.com/shaoanlu/faceswap-GAN/blob/master/FaceSwap_GAN_v2.2_train_test.ipynb)
## Import packages
```
from keras.layers import *
import keras.backend as K
import tensorflow as tf
import os
import cv2
import glob
import time
import numpy as np
from pathlib import PurePath, Path
from IPython.display import clear_output
import matplotlib.pyplot as plt
%matplotlib inline
```
## Configuration
```
K.set_learning_phase(1)
# Number of CPU cores
num_cpus = os.cpu_count()
# Input/Output resolution
RESOLUTION = 64 # 64x64, 128x128, 256x256
assert RESOLUTION in (64, 128, 256), "RESOLUTION should be 64, 128, or 256."
# Batch size
batchSize = 4
# Use motion blurs (data augmentation)
# set True if training data contains images extracted from videos
use_da_motion_blur = False
# Use eye-aware training
# require images generated from prep_binary_masks.ipynb
use_bm_eyes = True
# Probability of random color matching (data augmentation)
prob_random_color_match = 0.5
da_config = {
"prob_random_color_match": prob_random_color_match,
"use_da_motion_blur": use_da_motion_blur,
"use_bm_eyes": use_bm_eyes
}
# Path to training images
img_dirA = './faceA/rgb'
img_dirB = './faceB/rgb'
img_dirA_bm_eyes = "./faceA/binary_mask"
img_dirB_bm_eyes = "./faceB/binary_mask"
# Path to saved model weights
models_dir = "./models"
# Architecture configuration
arch_config = {}
arch_config['IMAGE_SHAPE'] = (RESOLUTION, RESOLUTION, 3)
arch_config['use_self_attn'] = True
arch_config['norm'] = "hybrid" # instancenorm, batchnorm, layernorm, groupnorm, none
arch_config['model_capacity'] = "lite" # standard, lite
# Loss function weights configuration
loss_weights = {}
loss_weights['w_D'] = 0.1 # Discriminator
loss_weights['w_recon'] = 1. # L1 reconstruction loss
loss_weights['w_edge'] = 0.1 # edge loss
loss_weights['w_eyes'] = 30. # reconstruction and edge loss on eyes area
loss_weights['w_pl'] = (0.01, 0.1, 0.3, 0.1) # perceptual loss (0.003, 0.03, 0.3, 0.3)
# Init. loss config.
loss_config = {}
loss_config["gan_training"] = "mixup_LSGAN"
loss_config['use_PL'] = False
loss_config["PL_before_activ"] = True
loss_config['use_mask_hinge_loss'] = False
loss_config['m_mask'] = 0.
loss_config['lr_factor'] = 1.
loss_config['use_cyclic_loss'] = False
```
## Build the model
```
from networks.faceswap_gan_model import FaceswapGANModel
from data_loader.data_loader import DataLoader
from utils import showG, showG_mask, showG_eyes
model = FaceswapGANModel(**arch_config)
%%capture
!wget https://github.com/rcmalli/keras-vggface/releases/download/v2.0/rcmalli_vggface_tf_notop_resnet50.h5
#from keras_vggface.vggface import VGGFace
# VGGFace ResNet50
#vggface = VGGFace(include_top=False, model='resnet50', input_shape=(224, 224, 3))
from colab_demo.vggface_models import RESNET50
vggface = RESNET50(include_top=False, weights=None, input_shape=(224, 224, 3))
vggface.load_weights("rcmalli_vggface_tf_notop_resnet50.h5")
#from keras.applications.resnet50 import ResNet50
#vggface = ResNet50(include_top=False, input_shape=(224, 224, 3))
#vggface.summary()
model.build_pl_model(vggface_model=vggface, before_activ=loss_config["PL_before_activ"])
model.build_train_functions(loss_weights=loss_weights, **loss_config)
```
## Start training
```
# Create ./models directory
Path(f"models").mkdir(parents=True, exist_ok=True)
# Get filenames
train_A = glob.glob(img_dirA+"/*.*")
train_B = glob.glob(img_dirB+"/*.*")
train_AnB = train_A + train_B
assert len(train_A), "No image found in " + str(img_dirA)
assert len(train_B), "No image found in " + str(img_dirB)
print ("Number of images in folder A: " + str(len(train_A)))
print ("Number of images in folder B: " + str(len(train_B)))
def show_loss_config(loss_config):
for config, value in loss_config.items():
print(f"{config} = {value}")
def reset_session(save_path):
global model, vggface
global train_batchA, train_batchB
model.save_weights(path=save_path)
del model
del vggface
del train_batchA
del train_batchB
K.clear_session()
model = FaceswapGANModel(**arch_config)
model.load_weights(path=save_path)
#vggface = VGGFace(include_top=False, model='resnet50', input_shape=(224, 224, 3))
vggface = RESNET50(include_top=False, weights=None, input_shape=(224, 224, 3))
vggface.load_weights("rcmalli_vggface_tf_notop_resnet50.h5")
model.build_pl_model(vggface_model=vggface, before_activ=loss_config["PL_before_activ"])
train_batchA = DataLoader(train_A, train_AnB, batchSize, img_dirA_bm_eyes,
RESOLUTION, num_cpus, K.get_session(), **da_config)
train_batchB = DataLoader(train_B, train_AnB, batchSize, img_dirB_bm_eyes,
RESOLUTION, num_cpus, K.get_session(), **da_config)
# Start training
t0 = time.time()
# This try/except is meant to resume training if we disconnected from Colab
try:
gen_iterations
print(f"Resume training from iter {gen_iterations}.")
except NameError:
gen_iterations = 0
errGA_sum = errGB_sum = errDA_sum = errDB_sum = 0
errGAs = {}
errGBs = {}
# Dictionaries are ordered in Python 3.6
for k in ['ttl', 'adv', 'recon', 'edge', 'pl']:
errGAs[k] = 0
errGBs[k] = 0
display_iters = 300
global TOTAL_ITERS
global train_batchA, train_batchB
train_batchA = DataLoader(train_A, train_AnB, batchSize, img_dirA_bm_eyes,
RESOLUTION, num_cpus, K.get_session(), **da_config)
train_batchB = DataLoader(train_B, train_AnB, batchSize, img_dirB_bm_eyes,
RESOLUTION, num_cpus, K.get_session(), **da_config)
while gen_iterations <= TOTAL_ITERS:
# Loss function automation
if gen_iterations == (TOTAL_ITERS//5 - display_iters//2):
clear_output()
loss_config['use_PL'] = True
loss_config['use_mask_hinge_loss'] = False
loss_config['m_mask'] = 0.0
reset_session(models_dir)
print("Building new loss functions...")
show_loss_config(loss_config)
model.build_train_functions(loss_weights=loss_weights, **loss_config)
print("Done.")
elif gen_iterations == (TOTAL_ITERS//5 + TOTAL_ITERS//10 - display_iters//2):
clear_output()
loss_config['use_PL'] = True
loss_config['use_mask_hinge_loss'] = True
loss_config['m_mask'] = 0.5
reset_session(models_dir)
print("Building new loss functions...")
show_loss_config(loss_config)
model.build_train_functions(loss_weights=loss_weights, **loss_config)
print("Complete.")
elif gen_iterations == (2*TOTAL_ITERS//5 - display_iters//2):
clear_output()
loss_config['use_PL'] = True
loss_config['use_mask_hinge_loss'] = True
loss_config['m_mask'] = 0.2
reset_session(models_dir)
print("Building new loss functions...")
show_loss_config(loss_config)
model.build_train_functions(loss_weights=loss_weights, **loss_config)
print("Done.")
elif gen_iterations == (TOTAL_ITERS//2 - display_iters//2):
clear_output()
loss_config['use_PL'] = True
loss_config['use_mask_hinge_loss'] = True
loss_config['m_mask'] = 0.4
loss_config['lr_factor'] = 0.3
reset_session(models_dir)
print("Building new loss functions...")
show_loss_config(loss_config)
model.build_train_functions(loss_weights=loss_weights, **loss_config)
print("Done.")
elif gen_iterations == (2*TOTAL_ITERS//3 - display_iters//2):
clear_output()
model.decoder_A.load_weights("models/decoder_B.h5") # swap decoders
model.decoder_B.load_weights("models/decoder_A.h5") # swap decoders
loss_config['use_PL'] = True
loss_config['use_mask_hinge_loss'] = True
loss_config['m_mask'] = 0.5
loss_config['lr_factor'] = 1
reset_session(models_dir)
print("Building new loss functions...")
show_loss_config(loss_config)
model.build_train_functions(loss_weights=loss_weights, **loss_config)
print("Done.")
elif gen_iterations == (8*TOTAL_ITERS//10 - display_iters//2):
clear_output()
loss_config['use_PL'] = True
loss_config['use_mask_hinge_loss'] = True
loss_config['m_mask'] = 0.1
loss_config['lr_factor'] = 0.3
reset_session(models_dir)
print("Building new loss functions...")
show_loss_config(loss_config)
model.build_train_functions(loss_weights=loss_weights, **loss_config)
print("Done.")
elif gen_iterations == (9*TOTAL_ITERS//10 - display_iters//2):
clear_output()
loss_config['use_PL'] = True
loss_config['use_mask_hinge_loss'] = False
loss_config['m_mask'] = 0.0
loss_config['lr_factor'] = 0.1
reset_session(models_dir)
print("Building new loss functions...")
show_loss_config(loss_config)
model.build_train_functions(loss_weights=loss_weights, **loss_config)
print("Done.")
if gen_iterations == 5:
print ("working.")
# Train discriminators for one batch
data_A = train_batchA.get_next_batch()
data_B = train_batchB.get_next_batch()
errDA, errDB = model.train_one_batch_D(data_A=data_A, data_B=data_B)
errDA_sum +=errDA[0]
errDB_sum +=errDB[0]
# Train generators for one batch
data_A = train_batchA.get_next_batch()
data_B = train_batchB.get_next_batch()
errGA, errGB = model.train_one_batch_G(data_A=data_A, data_B=data_B)
errGA_sum += errGA[0]
errGB_sum += errGB[0]
for i, k in enumerate(['ttl', 'adv', 'recon', 'edge', 'pl']):
errGAs[k] += errGA[i]
errGBs[k] += errGB[i]
gen_iterations+=1
# Visualization
if gen_iterations % display_iters == 0:
clear_output()
# Display loss information
show_loss_config(loss_config)
print("----------")
print('[iter %d] Loss_DA: %f Loss_DB: %f Loss_GA: %f Loss_GB: %f time: %f'
% (gen_iterations, errDA_sum/display_iters, errDB_sum/display_iters,
errGA_sum/display_iters, errGB_sum/display_iters, time.time()-t0))
print("----------")
print("Generator loss details:")
print(f'[Adversarial loss]')
print(f'GA: {errGAs["adv"]/display_iters:.4f} GB: {errGBs["adv"]/display_iters:.4f}')
print(f'[Reconstruction loss]')
print(f'GA: {errGAs["recon"]/display_iters:.4f} GB: {errGBs["recon"]/display_iters:.4f}')
print(f'[Edge loss]')
print(f'GA: {errGAs["edge"]/display_iters:.4f} GB: {errGBs["edge"]/display_iters:.4f}')
if loss_config['use_PL'] == True:
print(f'[Perceptual loss]')
try:
print(f'GA: {errGAs["pl"][0]/display_iters:.4f} GB: {errGBs["pl"][0]/display_iters:.4f}')
except:
print(f'GA: {errGAs["pl"]/display_iters:.4f} GB: {errGBs["pl"]/display_iters:.4f}')
# Display images
print("----------")
wA, tA, _ = train_batchA.get_next_batch()
wB, tB, _ = train_batchB.get_next_batch()
print("Transformed (masked) results:")
showG(tA, tB, model.path_A, model.path_B, batchSize)
print("Masks:")
showG_mask(tA, tB, model.path_mask_A, model.path_mask_B, batchSize)
print("Reconstruction results:")
showG(wA, wB, model.path_bgr_A, model.path_bgr_B, batchSize)
errGA_sum = errGB_sum = errDA_sum = errDB_sum = 0
for k in ['ttl', 'adv', 'recon', 'edge', 'pl']:
errGAs[k] = 0
errGBs[k] = 0
# Save models
model.save_weights(path=models_dir)
```
## The following cells are from [FaceSwap_GAN_v2.2_video_conversion.ipynb](https://github.com/shaoanlu/faceswap-GAN/blob/master/FaceSwap_GAN_v2.2_video_conversion.ipynb)
## Video conversion
```
from converter.video_converter import VideoConverter
global model, vggface
global train_batchA, train_batchB
del model
del vggface
del train_batchA
del train_batchB
tf.reset_default_graph()
K.clear_session()
model = FaceswapGANModel(**arch_config)
model.load_weights(path=models_dir)
fd = MTCNNFaceDetector(sess=K.get_session(), model_path="./mtcnn_weights/")
vc = VideoConverter()
vc.set_face_detector(fd)
vc.set_gan_model(model)
options = {
# ===== Fixed =====
"use_smoothed_bbox": True,
"use_kalman_filter": True,
"use_auto_downscaling": False,
"bbox_moving_avg_coef": 0.65,
"min_face_area": 35 * 35,
"IMAGE_SHAPE": model.IMAGE_SHAPE,
# ===== Tunable =====
"kf_noise_coef": 1e-3,
"use_color_correction": "hist_match",
"detec_threshold": 0.8,
"roi_coverage": 0.9,
"enhance": 0.,
"output_type": 3,
"direction": "AtoB", # ==================== This line determines the transform direction ====================
}
if options["direction"] == "AtoB":
input_fn = fn_source_video
output_fn = "OUTPUT_VIDEO_AtoB.mp4"
elif options["direction"] == "BtoA":
input_fn = fn_target_video
output_fn = "OUTPUT_VIDEO_BtoA.mp4"
duration = None # None or a non-negative float tuple: (start_sec, end_sec). Duration of input video to be converted
vc.convert(input_fn=input_fn, output_fn=output_fn, options=options, duration=duration)
```
# Download result video
```
from google.colab import files
if options["direction"] == "AtoB":
files.download("OUTPUT_VIDEO_AtoB.mp4")
elif options["direction"] == "BtoA":
files.download("OUTPUT_VIDEO_BtoA.mp4")
```
# Timeseries anomaly detection using an Autoencoder
**Author:** [pavithrasv](https://github.com/pavithrasv)<br>
**Date created:** 2020/05/31<br>
**Last modified:** 2020/05/31<br>
**Description:** Detect anomalies in a timeseries using an Autoencoder.
## Introduction
This script demonstrates how you can use a reconstruction convolutional
autoencoder model to detect anomalies in timeseries data.
## Setup
```
import numpy as np
import pandas as pd
from tensorflow import keras
from tensorflow.keras import layers
from matplotlib import pyplot as plt
```
## Load the data
We will use the [Numenta Anomaly Benchmark(NAB)](
https://www.kaggle.com/boltzmannbrain/nab) dataset. It provides artificial
timeseries data containing labeled anomalous periods of behavior. Data are
ordered, timestamped, single-valued metrics.
We will use the `art_daily_small_noise.csv` file for training and the
`art_daily_jumpsup.csv` file for testing. The simplicity of this dataset
allows us to demonstrate anomaly detection effectively.
```
master_url_root = "https://raw.githubusercontent.com/numenta/NAB/master/data/"
df_small_noise_url_suffix = "artificialNoAnomaly/art_daily_small_noise.csv"
df_small_noise_url = master_url_root + df_small_noise_url_suffix
df_small_noise = pd.read_csv(
df_small_noise_url, parse_dates=True, index_col="timestamp"
)
df_daily_jumpsup_url_suffix = "artificialWithAnomaly/art_daily_jumpsup.csv"
df_daily_jumpsup_url = master_url_root + df_daily_jumpsup_url_suffix
df_daily_jumpsup = pd.read_csv(
df_daily_jumpsup_url, parse_dates=True, index_col="timestamp"
)
```
## Quick look at the data
```
print(df_small_noise.head())
print(df_daily_jumpsup.head())
```
## Visualize the data
### Timeseries data without anomalies
We will use the following data for training.
```
fig, ax = plt.subplots()
df_small_noise.plot(legend=False, ax=ax)
plt.show()
```
### Timeseries data with anomalies
We will use the following data for testing and see if the sudden jump up in the
data is detected as an anomaly.
```
fig, ax = plt.subplots()
df_daily_jumpsup.plot(legend=False, ax=ax)
plt.show()
```
## Prepare training data
Get data values from the training timeseries data file and normalize the
`value` data. We have a `value` for every 5 mins for 14 days.
- 24 * 60 / 5 = **288 timesteps per day**
- 288 * 14 = **4032 data points** in total
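The arithmetic above can be checked directly:

```python
minutes_per_day = 24 * 60
sampling_minutes = 5          # one value every 5 minutes
days = 14

timesteps_per_day = minutes_per_day // sampling_minutes
total_points = timesteps_per_day * days
print(timesteps_per_day, total_points)  # 288 4032
```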
```
# Normalize and save the mean and std we get,
# for normalizing test data.
training_mean = df_small_noise.mean()
training_std = df_small_noise.std()
df_training_value = (df_small_noise - training_mean) / training_std
print("Number of training samples:", len(df_training_value))
```
### Create sequences
Create sequences combining `TIME_STEPS` contiguous data values from the
training data.
```
TIME_STEPS = 288
# Generate training sequences for use in the model.
def create_sequences(values, time_steps=TIME_STEPS):
output = []
for i in range(len(values) - time_steps):
output.append(values[i : (i + time_steps)])
return np.stack(output)
x_train = create_sequences(df_training_value.values)
print("Training input shape: ", x_train.shape)
```
## Build a model
We will build a convolutional reconstruction autoencoder model. The model will
take input of shape `(batch_size, sequence_length, num_features)` and return
output of the same shape. In this case, `sequence_length` is 288 and
`num_features` is 1.
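Why the output shape matches the input: each stride-2 `Conv1D` with `padding="same"` halves the sequence length (`ceil(L / s)`), and each stride-2 `Conv1DTranspose` with `padding="same"` doubles it back (`L * s`). A quick sketch of the length through the stack for `sequence_length = 288`:

```python
import math

length = 288
trace = [length]
for stride in (2, 2):  # the two strided Conv1D layers
    length = math.ceil(length / stride)  # "same" padding: ceil(L / s)
    trace.append(length)
for stride in (2, 2):  # the two strided Conv1DTranspose layers
    length = length * stride             # "same" padding: L * s
    trace.append(length)
print(trace)  # [288, 144, 72, 144, 288]
```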
```
model = keras.Sequential(
[
layers.Input(shape=(x_train.shape[1], x_train.shape[2])),
layers.Conv1D(
filters=32, kernel_size=7, padding="same", strides=2, activation="relu"
),
layers.Dropout(rate=0.2),
layers.Conv1D(
filters=16, kernel_size=7, padding="same", strides=2, activation="relu"
),
layers.Conv1DTranspose(
filters=16, kernel_size=7, padding="same", strides=2, activation="relu"
),
layers.Dropout(rate=0.2),
layers.Conv1DTranspose(
filters=32, kernel_size=7, padding="same", strides=2, activation="relu"
),
layers.Conv1DTranspose(filters=1, kernel_size=7, padding="same"),
]
)
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss="mse")
model.summary()
```
## Train the model
Please note that we are using `x_train` as both the input and the target
since this is a reconstruction model.
```
history = model.fit(
x_train,
x_train,
epochs=50,
batch_size=128,
validation_split=0.1,
callbacks=[
keras.callbacks.EarlyStopping(monitor="val_loss", patience=5, mode="min")
],
)
```
Let's plot training and validation loss to see how the training went.
```
plt.plot(history.history["loss"], label="Training Loss")
plt.plot(history.history["val_loss"], label="Validation Loss")
plt.legend()
plt.show()
```
## Detecting anomalies
We will detect anomalies by determining how well our model can reconstruct
the input data.
1. Find MAE loss on training samples.
2. Find max MAE loss value. This is the worst our model has performed trying
to reconstruct a sample. We will make this the `threshold` for anomaly
detection.
3. If the reconstruction loss for a sample is greater than this `threshold`
value then we can infer that the model is seeing a pattern that it isn't
familiar with. We will label this sample as an `anomaly`.
```
# Get train MAE loss.
x_train_pred = model.predict(x_train)
train_mae_loss = np.mean(np.abs(x_train_pred - x_train), axis=1)
plt.hist(train_mae_loss, bins=50)
plt.xlabel("Train MAE loss")
plt.ylabel("No of samples")
plt.show()
# Get reconstruction loss threshold.
threshold = np.max(train_mae_loss)
print("Reconstruction error threshold: ", threshold)
```
### Compare reconstruction
Just for fun, let's see how our model has reconstructed the first sample.
This is the 288 timesteps from day 1 of our training dataset.
```
# Checking how the first sequence is learnt
plt.plot(x_train[0])
plt.plot(x_train_pred[0])
plt.show()
```
### Prepare test data
```
def normalize_test(values, mean, std):
values -= mean
values /= std
return values
df_test_value = (df_daily_jumpsup - training_mean) / training_std
fig, ax = plt.subplots()
df_test_value.plot(legend=False, ax=ax)
plt.show()
# Create sequences from test values.
x_test = create_sequences(df_test_value.values)
print("Test input shape: ", x_test.shape)
# Get test MAE loss.
x_test_pred = model.predict(x_test)
test_mae_loss = np.mean(np.abs(x_test_pred - x_test), axis=1)
test_mae_loss = test_mae_loss.reshape((-1))
plt.hist(test_mae_loss, bins=50)
plt.xlabel("test MAE loss")
plt.ylabel("No of samples")
plt.show()
# Detect all the samples which are anomalies.
anomalies = test_mae_loss > threshold
print("Number of anomaly samples: ", np.sum(anomalies))
print("Indices of anomaly samples: ", np.where(anomalies))
```
## Plot anomalies
We now know the samples of the data which are anomalies. With this, we will
find the corresponding `timestamps` from the original test data. We will be
using the following method to do that:
Let's say time_steps = 3 and we have 10 training values. Our `x_train` will
look like this:
- 0, 1, 2
- 1, 2, 3
- 2, 3, 4
- 3, 4, 5
- 4, 5, 6
- 5, 6, 7
- 6, 7, 8
- 7, 8, 9
All except the initial and the final time_steps-1 data values, will appear in
`time_steps` number of samples. So, if we know that the samples
[(3, 4, 5), (4, 5, 6), (5, 6, 7)] are anomalies, we can say that the data point
5 is an anomaly.
```
# data i is an anomaly if samples [(i - timesteps + 1) to (i)] are anomalies
anomalous_data_indices = []
for data_idx in range(TIME_STEPS - 1, len(df_test_value) - TIME_STEPS + 1):
if np.all(anomalies[data_idx - TIME_STEPS + 1 : data_idx]):
anomalous_data_indices.append(data_idx)
```
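On the toy example from the prose (`time_steps = 3`, 10 values, windows (3, 4, 5), (4, 5, 6), (5, 6, 7) anomalous), the rule described above — a point is anomalous when every window containing it is anomalous — can be checked in pure Python. Note that this sketch's slice includes `data_idx` itself, following the prose:

```python
TIME_STEPS = 3
n_values = 10
# One flag per window; windows start at indices 0..7, and the
# windows starting at 3, 4, and 5 are the anomalous ones
window_anomalies = [False, False, False, True, True, True, False, False]

anomalous_points = []
for data_idx in range(TIME_STEPS - 1, n_values - TIME_STEPS + 1):
    # windows covering point data_idx start at data_idx-TIME_STEPS+1 .. data_idx
    if all(window_anomalies[data_idx - TIME_STEPS + 1 : data_idx + 1]):
        anomalous_points.append(data_idx)
print(anomalous_points)  # [5]
```

As the prose predicts, only data point 5 — the one covered exclusively by anomalous windows — is flagged.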
Let's overlay the anomalies on the original test data plot.
```
df_subset = df_daily_jumpsup.iloc[anomalous_data_indices]
fig, ax = plt.subplots()
df_daily_jumpsup.plot(legend=False, ax=ax)
df_subset.plot(legend=False, ax=ax, color="r")
plt.show()
```
# Visualizations
This tutorial illustrates the core visualization utilities available in Ax.
```
import numpy as np
from ax.service.ax_client import AxClient
from ax.modelbridge.cross_validation import cross_validate
from ax.plot.contour import interact_contour
from ax.plot.diagnostic import interact_cross_validation
from ax.plot.scatter import (
interact_fitted,
plot_objective_vs_constraints,
tile_fitted,
)
from ax.plot.slice import plot_slice
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import render, init_notebook_plotting
init_notebook_plotting()
```
## 1. Create experiment and run optimization
The visualizations require an experiment object and a model fit on the evaluated data. The routine below is a copy of the Service API tutorial, so the explanation here is omitted. Retrieving the experiment and model objects for each API paradigm is shown in the respective tutorials.
#### 1a. Define search space and evaluation function
```
noise_sd = 0.1
param_names = [f"x{i+1}" for i in range(6)] # x1, x2, ..., x6
def noisy_hartmann_evaluation_function(parameterization):
x = np.array([parameterization.get(p_name) for p_name in param_names])
noise1, noise2 = np.random.normal(0, noise_sd, 2)
return {
"hartmann6": (hartmann6(x) + noise1, noise_sd),
"l2norm": (np.sqrt((x ** 2).sum()) + noise2, noise_sd)
}
```
#### 1b. Create Experiment
```
ax_client = AxClient()
ax_client.create_experiment(
name="test_visualizations",
parameters=[
{
"name": p_name,
"type": "range",
"bounds": [0.0, 1.0],
}
for p_name in param_names
],
objective_name="hartmann6",
minimize=True,
outcome_constraints=["l2norm <= 1.25"]
)
```
#### 1c. Run the optimization and fit a GP on all data
```
for i in range(20):
parameters, trial_index = ax_client.get_next_trial()
# Local evaluation here can be replaced with deployment to external system.
ax_client.complete_trial(trial_index=trial_index, raw_data=noisy_hartmann_evaluation_function(parameters))
```
## 2. Contour plots
The plot below shows the response surface for `hartmann6` metric as a function of the `x1`, `x2` parameters.
The other parameters are fixed in the middle of their respective ranges, which in this example is 0.5 for all of them.
```
# this could alternately be done with `ax.plot.contour.plot_contour`
render(ax_client.get_contour_plot(param_x="x1", param_y="x2", metric_name='hartmann6'))
```
#### 2a. Interactive contour plot
The plot below allows toggling between different pairs of parameters to view the contours.
```
model = ax_client.generation_strategy.model
render(interact_contour(model=model, metric_name='hartmann6'))
```
## 3. Tradeoff plots
This plot illustrates the tradeoffs achievable for 2 different metrics. The plot takes the x-axis metric as input (usually the objective) and allows toggling among all other metrics for the y-axis.
This is useful to get a sense of the pareto frontier (i.e. what is the best objective value achievable for different bounds on the constraint)
```
render(plot_objective_vs_constraints(model, 'hartmann6', rel=False))
```
## 4. Cross-validation plots
CV plots are useful to check how well the model predictions calibrate against the actual measurements. If all points are close to the dashed line, then the model is a good predictor of the real data.
```
cv_results = cross_validate(model)
render(interact_cross_validation(cv_results))
```
## 5. Slice plots
Slice plots show the metric outcome as a function of one parameter while fixing the others. They serve a similar function as contour plots.
```
render(plot_slice(model, "x2", "hartmann6"))
```
## 6. Tile plots
Tile plots are useful for viewing the effect of each arm.
```
render(interact_fitted(model, rel=False))
```
# Exploratory Data Analysis with Pandas
The main scope of this notebook is to perform an analysis of the reviews received for the applications (games) in Steam. Each row in the dataset represents a review made by one user (Author) about a specific application.
The goal will be to answer different possible research questions.
### Useful imports to analyze the dataset and analyze the results
```
import pandas as pd
import random
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import linregress, pearsonr, mannwhitneyu, shapiro, ttest_ind, normaltest
import seaborn as sea
from mpl_toolkits import mplot3d
import statsmodels.api as sm
import plotly.express as px
from sklearn import preprocessing
```
## Preparing the data
First we have to load the .csv files and merge them into a single dataframe. We will only select the columns of interest and skip rows that cause problems (for example, mixed types in the same column due to the merge).
```
columns = ["app_id", "review_id","app_name", "language", "timestamp_created", "timestamp_updated", "recommended",
"votes_helpful", "votes_funny", "weighted_vote_score", "received_for_free", "steam_purchase", "author.steamid",
"author.num_reviews", "author.playtime_at_review"]
```
To parse the dates we'll use this simple function that converts a timestamp in seconds into a date
```
def dateparse(time_in_secs):
return pd.to_datetime(time_in_secs, unit='s')
df1 = pd.read_csv("./steam_reviews.csv", usecols = columns,
parse_dates=['timestamp_created', 'timestamp_updated'],
date_parser= dateparse)
df2 = pd.read_csv("./steam_reviews_1.csv", usecols = columns,
parse_dates=['timestamp_created', "timestamp_updated"], skiprows=[234060, 5938492, 8278792, 9163394],
date_parser=dateparse)
df3 = pd.read_csv("./steam_reviews_2.csv", usecols = columns,
parse_dates=['timestamp_created', 'timestamp_updated'], skiprows=[3895921, 3984228, 5893390, 1929921],
date_parser=dateparse)
```
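As a quick check of `dateparse` (re-defined here so the snippet is self-contained, and assuming pandas is available), an epoch timestamp of 0 maps to the Unix epoch:

```python
import pandas as pd

def dateparse(time_in_secs):
    # same helper as above: seconds since the epoch -> datetime
    return pd.to_datetime(time_in_secs, unit='s')

print(dateparse(0))           # 1970-01-01 00:00:00
print(dateparse(1609459200))  # 2021-01-01 00:00:00
```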
To build the actual dataframe we concatenate the dataframes along the rows with the pandas method concat that is almost the same as numpy's concatenate
```
df = pd.concat([df1, df2, df3], ignore_index=True)
```
Since the dataframes consume a lot of memory, we suggest deleting the unused ones as soon as possible.
```
# deleting unused data frames
del [[df1,df2, df3]]
```
## Let's take a first look at the dataframe
We can see how many records are stored by using the attribute _shape_
```
df.shape
```
Print the first **k** records with the method _head_
```
# print a few lines of the dataset to visualize the format
df.head(5)
```
Or even check the type of each column with _info_
```
df.info()
```
The _describe_ method will give us some statistics about the quantitative features of the dataframe
```
# print different statistics for each column of the dataset
pd.set_option('display.float_format', lambda x: '%.3f' % x)
df.describe()[1:]
```
## Let's explore the dataset by finding simple insights into the reviews.
Some of the tasks we would like to complete are:
- Plot the number of reviews for each application in descending order.
- Find the applications with the best weighted vote score.
- Find the applications with the most and the least recommendations.
- Count how many of these applications were purchased, and how many were given for free.
### Building a sub-dataframe for the games
In order to answer these questions it is useful to create a dataframe containing the information we need for each game:
- n_reviews: the total number of reviews for that application
- weighted_score_mean: the average weighted score for the application
- recommended: the number of people who recommended this application
```
# first we count the number of reviews for each game by grouping by app name and counting the review_id entries
n_reviews = df.groupby("app_name")["review_id"].count()
# Then we take the average Weighted vote score grouping by the app name
weighted_score_mean = df.groupby("app_name")["weighted_vote_score"].mean()
# Now we count the number of recommendations
recommendations = df.groupby("app_name")["recommended"].sum()
```
We separately store in a variable the number of reviewed copies that were acquired for free or purchased
```
gratis = df.groupby("received_for_free").review_id.count()
# Create a new dataframe concatenating the columns
game_df = pd.concat([n_reviews, weighted_score_mean, recommendations], axis=1).reset_index()
# Rename columns in a clearer way
game_df.rename(columns={"review_id": "n_reviews", "weighted_vote_score": "weighted_score_mean"}, inplace=True)
```
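To illustrate how `pd.concat` with `axis=1` aligns Series on their shared index (here playing the role of the app name), a toy example with made-up values, not taken from the review data:

```python
import pandas as pd

# two Series sharing the same index
n_rev = pd.Series([10, 3], index=["game A", "game B"], name="n_reviews")
score = pd.Series([0.8, 0.5], index=["game A", "game B"], name="score")
# axis=1 places them side by side, aligned on the index
toy = pd.concat([n_rev, score], axis=1).reset_index()
print(list(toy.columns))  # ['index', 'n_reviews', 'score']
```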
Let's take a look at this new dataframe
```
game_df.head()
```
Now we sort by each column separately in order to find the best and worst games for each metric; this will also help us plot them later. We keep the top 20 games.
```
# most reviewed
most_reviewed = game_df.sort_values(by="n_reviews", ascending=False)[:20]
# highest score
highest_score = game_df.sort_values(by="weighted_score_mean", ascending=False)[:20]
# most and least recommended
most_recommended = game_df.sort_values(by="recommended", ascending=False)[:20]
least_recommended = game_df.sort_values(by="recommended", ascending=True)[:20]
```
Which game is the most recommended? And the least?
```
print(most_recommended["app_name"].iloc[0], 'has the most recommendations:', most_recommended["recommended"].iloc[0])
print(least_recommended["app_name"].iloc[0], 'has the least recommendations:', least_recommended["recommended"].iloc[0])
```
### Plotting the results
For this task we will use colormaps (_cmap_) to obtain colorful barplots, and we will implement a function that draws a plot given the correct data.
```
# to use cmaps: standard min-max rescaling of the values into [0, 1]
rescale = lambda y: (y - np.min(y)) / (np.max(y) - np.min(y))
# function to use a cmap with plt.bar
def c_map(cmap_name, y):
    return plt.get_cmap(cmap_name)(rescale(y))
# cmap color extractor
def colors_from_cmap(cmap_name, length):
    colors = plt.get_cmap(cmap_name)(np.linspace(0.2, 1, length))
    return colors
```
The actual function to plot the data
```
def cmap_barplot(x, y, size, title, xlab='', ylab='', color_map='viridis', num_colors=20):
    '''
    function to plot a barplot with cmap
    inputs:
        x = the bar labels, y = the bar lengths (columns of a dataframe or similar sequences)
        size = a tuple with the figure size
        title = the figure title
        xlab, ylab = axis labels
        color_map = a string specifying the color map to apply
        num_colors = number of colors required
    '''
    fig = plt.figure(0, figsize=size);
    plt.title(title);
    # np.asarray also accepts lists and Index objects, not only Series
    plt.barh(np.asarray(x)[::-1],
             np.asarray(y)[::-1],
             color=colors_from_cmap(color_map, num_colors));
    plt.xlabel(xlab);
    plt.ylabel(ylab);
    plt.show();
```
#### Plot for the most reviewed games
```
cmap_barplot(most_reviewed["app_name"].iloc[::-1], most_reviewed["n_reviews"].iloc[::-1],
(15, 7), "Most reviewed games", "Number of reviews", 'Games', "Reds", 20)
```
#### Plot for the most recommended games
```
cmap_barplot(most_recommended["app_name"].iloc[::-1], most_recommended["recommended"].iloc[::-1],
             (15, 7), "Most recommended games", "Number of recommendations", 'Games', 'Blues', 20)
```
#### Plot for the least recommended games
```
cmap_barplot(least_recommended["app_name"], least_recommended["recommended"],
             (15, 7), "Least recommended games", "Number of recommendations", 'Games', 'Oranges', 20)
```
#### Highest scored games
```
cmap_barplot(highest_score["app_name"].iloc[::-1], highest_score["weighted_score_mean"].iloc[::-1],
             (15, 7), "Highest scored games", "Average weighted score", 'Games', 'Greens', 20)
```
#### Pie plot of the purchased and for free games
```
fig = plt.figure(0, figsize=(8, 8))
plt.pie(labels=["purchased", "free"],
        x=gratis / df.shape[0],
        colors=["turquoise", "darkslategrey"], explode=[0.1, 0.1]);
plt.title("Copies purchased vs copies given for free, normalized");
```
## What is the preferred time to review a game?
It might be useful to know the most common time to write a review and to plot the number of reviews for given time intervals.
For this task we will create a dataframe using only the columns that refer to the time a review has been made.
```
# extract the column of timestamps
time_df = df[['timestamp_created']].copy()
# change the type and store in a new column
time_df['Time_review'] = pd.to_datetime(time_df['timestamp_created'], unit='s').dt.strftime('%H:%M')
```
Now we count the number of occurrences of each unique time and find the most common one. The same operation can also be performed with different time formats (for instance, considering only the hours without the minutes).
```
# find and print the maximum number of occurrences and the associated time
ordered_time = time_df['Time_review'].value_counts().reset_index()
most_common_time = np.array(ordered_time.head(1))[0]
print(most_common_time[0], 'is the most common time with', most_common_time[1], 'occurrences')
ordered_time
```
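As mentioned, the same counting can be done at a coarser granularity. A small self-contained sketch at hour level, using a handful of synthetic timestamps rather than the real data:

```python
import pandas as pd

# five synthetic epoch timestamps: two in hour 01, three in hour 02
sample = pd.Series([3600, 3660, 7200, 7260, 7320])
hours = pd.to_datetime(sample, unit='s').dt.strftime('%H')
hour_counts = hours.value_counts()
print(hour_counts.idxmax(), hour_counts.max())  # 02 3
```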
### Plot the occurrences for an interval of time
```
def reviews_intervals(intervals):
    '''
    Given a flat list of interval boundaries, this function
    creates a histogram of the review frequencies for each interval
    '''
initial, final = intervals[::2], intervals[1::2]
intervals = pd.DataFrame({"Initial time": initial, "Final time": final})
for i in range(len(intervals)):
# create a new column for each interval and fill with 0 or 1 if the time review is the interval
time_df[intervals.iloc[i,0]+'-'+intervals.iloc[i,1]]=np.where((intervals.iloc[i,0] <= time_df['Time_review']) & (time_df['Time_review'] <= intervals.iloc[i,1]) , 1, 0)
#store the dataframe without the columns 'Time_review','timestamp_created'
nb_review_intervals = time_df.drop(['Time_review','timestamp_created'], axis=1)
nb_review_intervals.sum().plot.barh(title='Number of reviews for each interval', color=colors_from_cmap('autumn', intervals.shape[0]), figsize=(10,7));
# create the nested list of the homework example
intervals = ['06:00:00', '10:59:59', '11:00:00', '13:59:59','14:00:00', '16:59:59', '17:00:00', '19:59:59', '20:00:00', '23:59:59','00:00:00','02:59:59', '03:00:00', '05:59:59']
#apply the function 'reviews_intervals'
reviews_intervals(intervals)
```
## What are the most common languages?
Reviews are written from all over the world, but which languages are the most common?
We can answer this by grouping and counting; then we'll extract some information from a dataset filtered to only the most relevant languages.
```
# sorted number of reviews in each language in descending order
reviews_per_language = df.groupby("language")["review_id"].count().sort_values(ascending=False)
#store and print top three languages
top_three_languages = reviews_per_language[:3].index.tolist()
print('The Top 3 languages are :',*top_three_languages)
```
Here we create the function we will use to filter the dataset by language.
```
def filter_by_language(df, languages):
#data frame filtered only with the reviews written in the provided languages
filtered_df = df[df["language"].isin(languages)]
return filtered_df
```
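The filter relies on `Series.isin`, which builds a boolean mask; a minimal standalone example with toy data:

```python
import pandas as pd

toy = pd.DataFrame({"language": ["english", "german", "french"],
                    "review_id": [1, 2, 3]})
# True wherever the language is in the given list
mask = toy["language"].isin(["english", "french"])
print(toy[mask]["review_id"].tolist())  # [1, 3]
```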
Next we use our function filter_by_language to retrieve the subset of the dataframe pertaining to the three top languages.
For these languages we want to see which reviews have been voted as funny or helpful, so we create two new boolean variables. Here we consider a review funny or helpful if it has at least one vote, but this can be changed through the variable `threshold`.
```
filtered_df = filter_by_language(df, top_three_languages)
n = len(filtered_df)
# fix a threshold representing the minimum number of votes needed to consider a review helpful or funny
threshold=1
# new dataframe in which we create two new boolean attributes to know if we have more votes than the threshold
filtered_df = filtered_df.assign(was_funny=lambda x: x["votes_funny"] >= threshold,
was_helpful=lambda x: x["votes_helpful"] >= threshold)
# compute the percentage of funny and helpful reviews
funny_or_not = filtered_df.groupby("was_funny")["review_id"].count() / n
helpful_or_not = filtered_df.groupby("was_helpful")["review_id"].count() / n
```
And now we plot the results.
#### Barplot for the reviews per language
```
cmap_barplot(reviews_per_language.index[::-1], reviews_per_language[::-1],
(15, 7), "Reviews per language", "# of reviews",
'languages', "cool", len(reviews_per_language.index))
```
#### Pie plot for funny or not reviews
```
fig_2 = plt.figure(figsize=(8, 8))
plt.pie(labels = ['Not Funny', 'Funny'],
x = funny_or_not,
colors=["crimson","violet"], explode = [0.1,0.1])
plt.title("Percentage of funny reviews for the top three languages")
plt.show()
```
#### Pie plot for helpful or not reviews
```
fig_2 = plt.figure(figsize=(8,8))
plt.pie(labels = ['Not Helpful','Helpful'],
x = helpful_or_not,
colors =["lime", "lightgreen"], explode = [0.1, 0.1])
plt.title("Percentage of helpful reviews for the top three languages")
plt.show()
```
## Insights about the authors of the reviews
The reviews' authors are users of the games who share their opinions. Now we can check how often they write reviews.
First of all, we retrieve the number of reviews submitted by each author, sort them in descending order, keep the top 10 authors (in terms of number of contributions), and plot the results (after converting each id to a string).
```
#compute the number of review per reviewer "author.steamid"
author_df = df.groupby("author.steamid")["review_id"].count()
# store the top 10
top_10_reviewers = author_df.sort_values(ascending=False).iloc[:10]
#change the type to obtain labels (str)
authors_names = list(map(str, top_10_reviewers.index.tolist()))
```
#### Barplot of the reviewers
```
cmap_barplot(authors_names[::-1], top_10_reviewers[::-1],
(15, 7),"Most popular reviewers","Number of reviews",
"Steam ID", "YlGnBu", len(authors_names))
```
Let's find the top reviewer and analyze them more in depth by obtaining the names of all the applications they reviewed.
```
top_reviewer = authors_names[0]
print('The most popular reviewer has the id',top_reviewer)
top_reviewer = df[df["author.steamid"]==float(top_reviewer)]
print('The top reviewer wrote reviews about :\n')
for app in pd.unique(top_reviewer["app_name"]):
print('\t'+app)
```
And now we save the information about the number of copies they purchased or received for free, and whether they recommended them or not.
N.B.: we assume that a person wrote a review only if they played the game, so if they did not obtain it for free they bought it.
```
free_or_not_top_reviewer = top_reviewer.groupby("received_for_free")["review_id"].count()
free_or_not_top_reviewer / len(top_reviewer)
```
As we can see, the number of games received for free is not large enough to let us infer anything; for this reason we'll focus on the purchased games.
```
recommended_purchased = top_reviewer[top_reviewer["received_for_free"]==False].groupby("recommended")["review_id"].count()
recommended_purchased
```
And now we plot the results.
#### Pie plot of the recommended games purchased from the top reviewer
```
plt.figure(figsize=(10,10))
plt.pie(labels=["Not recommended", "Recommended"],
x = recommended_purchased,
colors=["darkgoldenrod","khaki" ], explode = [0.1, 0.1])
plt.title("Purchased games positive vs negative reviews");
```
##### Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
# DCGAN: An example with tf.keras and eager
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table>
This notebook demonstrates how to generate images of handwritten digits using [tf.keras](https://www.tensorflow.org/programmers_guide/keras) and [eager execution](https://www.tensorflow.org/programmers_guide/eager). To do this, we use Deep Convolutional Generative Adverserial Networks ([DCGAN](https://arxiv.org/pdf/1511.06434.pdf)).
On a Colab GPU (Tesla K80), the model takes around 40 seconds per epoch to train.
Below is the output generated after training the generator and discriminator models for 150 epochs.

```
# to generate gifs
!pip install imageio
```
## Import TensorFlow and enable eager execution
```
# Import TensorFlow >= 1.9 and enable eager execution
import tensorflow as tf
tf.enable_eager_execution()
import os
import time
import numpy as np
import glob
import matplotlib.pyplot as plt
import PIL
import imageio
from IPython import display
```
## Load the dataset
We are going to use the MNIST dataset to train the generator and the discriminator. The generator will then generate handwritten digits.
```
(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
# We are normalizing the images to the range of [-1, 1]
train_images = (train_images - 127.5) / 127.5
BUFFER_SIZE = 60000
BATCH_SIZE = 256
```
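The normalization maps pixel values from [0, 255] to [-1, 1], matching the tanh output of the generator; a quick numpy check (standalone, no MNIST download needed):

```python
import numpy as np

pixels = np.array([0.0, 127.5, 255.0], dtype=np.float32)
normalized = (pixels - 127.5) / 127.5
print(normalized)  # [-1.  0.  1.]
recovered = normalized * 127.5 + 127.5  # the inverse, used later when plotting
```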
## Use tf.data to create batches and shuffle the dataset
```
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
```
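What shuffle-then-batch does can be sketched in plain numpy (an illustration of the behavior only, not how tf.data is implemented):

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.arange(10)
shuffled = rng.permutation(data)  # shuffle the whole buffer
# split the shuffled buffer into fixed-size batches
batches = [shuffled[i:i + 4] for i in range(0, len(shuffled), 4)]
print([len(b) for b in batches])  # [4, 4, 2] -- the last batch is smaller
```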
## Write the generator and discriminator models
* **Generator**
* It is responsible for **creating images convincing enough to fool the discriminator**.
* It consists of Conv2DTranspose (upsampling) layers. We start with a fully connected layer and upsample the image two times to reach the desired MNIST image size of (28, 28, 1).
* We use **leaky relu** activation except for the **last layer** which uses **tanh** activation.
* **Discriminator**
* **The discriminator is responsible for distinguishing the fake images from the real images.**
* In other words, the discriminator is given generated images(from the generator) and the real MNIST images. The job of the discriminator is to classify these images into fake(generated) and real(MNIST images).
* **Basically, the generator should become good enough to convince the discriminator that the generated images are real**.
```
class Generator(tf.keras.Model):
def __init__(self):
super(Generator, self).__init__()
self.fc1 = tf.keras.layers.Dense(7*7*64, use_bias=False)
self.batchnorm1 = tf.keras.layers.BatchNormalization()
self.conv1 = tf.keras.layers.Conv2DTranspose(64, (5, 5), strides=(1, 1), padding='same', use_bias=False)
self.batchnorm2 = tf.keras.layers.BatchNormalization()
self.conv2 = tf.keras.layers.Conv2DTranspose(32, (5, 5), strides=(2, 2), padding='same', use_bias=False)
self.batchnorm3 = tf.keras.layers.BatchNormalization()
self.conv3 = tf.keras.layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False)
def call(self, x, training=True):
x = self.fc1(x)
x = self.batchnorm1(x, training=training)
x = tf.nn.relu(x)
x = tf.reshape(x, shape=(-1, 7, 7, 64))
x = self.conv1(x)
x = self.batchnorm2(x, training=training)
x = tf.nn.relu(x)
x = self.conv2(x)
x = self.batchnorm3(x, training=training)
x = tf.nn.relu(x)
x = tf.nn.tanh(self.conv3(x))
return x
class Discriminator(tf.keras.Model):
def __init__(self):
super(Discriminator, self).__init__()
self.conv1 = tf.keras.layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same')
self.conv2 = tf.keras.layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same')
self.dropout = tf.keras.layers.Dropout(0.3)
self.flatten = tf.keras.layers.Flatten()
self.fc1 = tf.keras.layers.Dense(1)
def call(self, x, training=True):
x = tf.nn.leaky_relu(self.conv1(x))
x = self.dropout(x, training=training)
x = tf.nn.leaky_relu(self.conv2(x))
x = self.dropout(x, training=training)
x = self.flatten(x)
x = self.fc1(x)
return x
generator = Generator()
discriminator = Discriminator()
```
## Define the loss functions and the optimizer
* **Discriminator loss**
* The discriminator loss function takes 2 inputs; **real images, generated images**
* real_loss is a sigmoid cross entropy loss of the **real images** and an **array of ones(since these are the real images)**
* generated_loss is a sigmoid cross entropy loss of the **generated images** and an **array of zeros(since these are the fake images)**
* Then the total_loss is the sum of real_loss and the generated_loss
* **Generator loss**
* It is a sigmoid cross entropy loss of the generated images and an **array of ones**
* The discriminator and the generator optimizers are different since we will train them separately.
```
def discriminator_loss(real_output, generated_output):
# [1,1,...,1] with real output since it is true and we want
# our generated examples to look like it
real_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.ones_like(real_output), logits=real_output)
# [0,0,...,0] with generated images since they are fake
generated_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.zeros_like(generated_output), logits=generated_output)
total_loss = real_loss + generated_loss
return total_loss
def generator_loss(generated_output):
return tf.losses.sigmoid_cross_entropy(tf.ones_like(generated_output), generated_output)
discriminator_optimizer = tf.train.AdamOptimizer(1e-4)
generator_optimizer = tf.train.AdamOptimizer(1e-4)
```
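The sigmoid cross entropy used above can be written in plain numpy with the usual numerically stable formula (a sketch of the math, not TensorFlow's actual implementation):

```python
import numpy as np

def sigmoid_cross_entropy(labels, logits):
    # stable form of -z*log(sigmoid(x)) - (1-z)*log(1-sigmoid(x)):
    # max(x, 0) - x*z + log(1 + exp(-|x|))
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

# a confident, correct real prediction (label 1, large logit) has near-zero loss
print(sigmoid_cross_entropy(np.array([1.0]), np.array([10.0])))  # ~[4.54e-05]
```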
## Training
* We start by iterating over the dataset
* The generator is given **noise as an input** which, when passed through the generator model, will output an image that looks like a handwritten digit
* The discriminator is given the **real MNIST images as well as the generated images(from the generator)**.
* Next, we calculate the generator and the discriminator loss.
* Then, we calculate the gradients of loss with respect to both the generator and the discriminator variables(inputs) and apply those to the optimizer.
## Generate Images
* After training, it's time to generate some images!
* We start by creating a noise array as an input to the generator
* The generator will then convert the noise into handwritten images.
* Last step is to plot the predictions and **voila!**
```
EPOCHS = 150
noise_dim = 100
num_examples_to_generate = 100
# keeping the random vector constant for generation(prediction) so
# it will be easier to see the improvement of the gan.
random_vector_for_generation = tf.random_normal([num_examples_to_generate,
noise_dim])
def generate_and_save_images(model, epoch, test_input):
# make sure the training parameter is set to False because we
# don't want to train the batchnorm layer when doing inference.
predictions = model(test_input, training=False)
fig = plt.figure(figsize=(10,10))
for i in range(predictions.shape[0]):
plt.subplot(10, 10, i+1)
plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
plt.axis('off')
# tight_layout minimizes the overlap between 2 sub-plots
plt.tight_layout()
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
def train(dataset, epochs, noise_dim):
for epoch in range(epochs):
start = time.time()
for images in dataset:
# generating noise from a normal distribution
noise = tf.random_normal([BATCH_SIZE, noise_dim])
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_images = generator(noise, training=True)
real_output = discriminator(images, training=True)
generated_output = discriminator(generated_images, training=True)
gen_loss = generator_loss(generated_output)
disc_loss = discriminator_loss(real_output, generated_output)
gradients_of_generator = gen_tape.gradient(gen_loss, generator.variables)
gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.variables)
generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.variables))
discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.variables))
if epoch % 10 == 0:
display.clear_output(wait=True)
generate_and_save_images(generator,
epoch + 1,
random_vector_for_generation)
print ('Time taken for epoch {} is {} sec'.format(epoch + 1,
time.time()-start))
# generating after the final epoch
generate_and_save_images(generator,
epochs,
random_vector_for_generation)
train(train_dataset, EPOCHS, noise_dim)
```
# Display an image using the epoch number
```
def display_image(epoch_no):
plt.figure(figsize=(15,15))
plt.imshow(np.array(PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))))
plt.axis('off')
display_image(EPOCHS)
```
## Generate a GIF of all the saved images.
<!-- TODO(markdaoust): Remove the hack when Ipython version is updated -->
```
with imageio.get_writer('dcgan.gif', mode='I') as writer:
filenames = glob.glob('image*.png')
filenames = sorted(filenames)
for filename in filenames:
image = imageio.imread(filename)
writer.append_data(image)
# this is a hack to display the gif inside the notebook
os.system('mv dcgan.gif dcgan.gif.png')
display.Image(filename="dcgan.gif.png")
```
```
# A simple example of a user-based collaborative filtering algorithm
# Han Xiaoyang (hanxiaoyang.ml@gmail.com)
# Build a small rating dataset; for real experiments you can download the MovieLens data
users = {"小明": {"中国合伙人": 5.0, "太平轮": 3.0, "荒野猎人": 4.5, "老炮儿": 5.0, "我的少女时代": 3.0, "肖洛特烦恼": 4.5, "火星救援": 5.0},
"小红":{"小时代4": 4.0, "荒野猎人": 3.0, "我的少女时代": 5.0, "肖洛特烦恼": 5.0, "火星救援": 3.0, "后会无期": 3.0},
"小阳": {"小时代4": 2.0, "中国合伙人": 5.0, "我的少女时代": 3.0, "老炮儿": 5.0, "肖洛特烦恼": 4.5, "速度与激情7": 5.0},
"小四": {"小时代4": 5.0, "中国合伙人": 3.0, "我的少女时代": 4.0, "匆匆那年": 4.0, "速度与激情7": 3.5, "火星救援": 3.5, "后会无期": 4.5},
"六爷": {"小时代4": 2.0, "中国合伙人": 4.0, "荒野猎人": 4.5, "老炮儿": 5.0, "我的少女时代": 2.0},
"小李": {"荒野猎人": 5.0, "盗梦空间": 5.0, "我的少女时代": 3.0, "速度与激情7": 5.0, "蚁人": 4.5, "老炮儿": 4.0, "后会无期": 3.5},
"隔壁老王": {"荒野猎人": 5.0, "中国合伙人": 4.0, "我的少女时代": 1.0, "Phoenix": 5.0, "甄嬛传": 4.0, "The Strokes": 5.0},
"邻村小芳": {"小时代4": 4.0, "我的少女时代": 4.5, "匆匆那年": 4.5, "甄嬛传": 2.5, "The Strokes": 3.0}
}
# Define several distance functions
# A more efficient approach is to vectorize the ratings and use the distance methods defined in scipy
from math import sqrt
def euclidean_dis(rating1, rating2):
    """Compute the Euclidean distance between two rating sequences.
    rating1 and rating2 are rating dicts such as {'小时代4': 1.0, '疯狂动物城': 5.0}"""
    distance = 0
    commonRatings = False
    for key in rating1:
        if key in rating2:
            distance += (rating1[key] - rating2[key]) ** 2  # ** is exponentiation; ^ would be bitwise XOR
            commonRatings = True
    # the two rating sequences share at least one rated movie
    if commonRatings:
        return distance
    # no movies rated in common
    else:
        return -1
def manhattan_dis(rating1, rating2):
    """Compute the Manhattan distance between two rating sequences.
    rating1 and rating2 are rating dicts such as {'小时代4': 1.0, '疯狂动物城': 5.0}"""
    distance = 0
    commonRatings = False
    for key in rating1:
        if key in rating2:
            distance += abs(rating1[key] - rating2[key])
            commonRatings = True
    # the two rating sequences share at least one rated movie
    if commonRatings:
        return distance
    # no movies rated in common
    else:
        return -1
def cos_dis(rating1, rating2):
    """Compute the cosine distance between two rating sequences.
    rating1 and rating2 are rating dicts such as {'小时代4': 1.0, '疯狂动物城': 5.0}"""
    distance = 0
    dot_product_1 = 0
    dot_product_2 = 0
    commonRatings = False
    for score in rating1.values():
        dot_product_1 += score ** 2
    for score in rating2.values():
        dot_product_2 += score ** 2
    for key in rating1:
        if key in rating2:
            distance += rating1[key] * rating2[key]
            commonRatings = True
    # the two rating sequences share at least one rated movie
    if commonRatings:
        return 1 - distance / sqrt(dot_product_1 * dot_product_2)
    # no movies rated in common
    else:
        return -1
def pearson_dis(rating1, rating2):
    """Compute the Pearson correlation between two rating sequences.
    rating1 and rating2 are rating dicts such as {'小时代4': 1.0, '疯狂动物城': 5.0}"""
sum_xy = 0
sum_x = 0
sum_y = 0
sum_x2 = 0
sum_y2 = 0
n = 0
for key in rating1:
if key in rating2:
n += 1
x = rating1[key]
y = rating2[key]
sum_xy += x * y
sum_x += x
sum_y += y
sum_x2 += pow(x, 2)
sum_y2 += pow(y, 2)
# now compute denominator
denominator = sqrt(sum_x2 - pow(sum_x, 2) / n) * sqrt(sum_y2 - pow(sum_y, 2) / n)
if denominator == 0:
return 0
else:
return (sum_xy - (sum_x * sum_y) / n) / denominator
# find the nearest neighbors
def computeNearestNeighbor(username, users):
    """Given a username, compute the similarity between every other user and this one, then sort"""
    distances = []
    for user in users:
        if user != username:
            # distance = manhattan_dis(users[user], users[username])
            distance = pearson_dis(users[user], users[username])
            distances.append((distance, user))
    # NOTE: pearson_dis returns a similarity (higher = more similar), so we sort in
    # descending order; if you switch to a true distance such as manhattan_dis, sort ascending
    distances.sort(reverse=True)
    return distances
# recommendation
def recommend(username, users):
    """Recommend movies to the given user"""
    # find the nearest neighbor
    nearest = computeNearestNeighbor(username, users)[0][1]
    recommendations = []
    # find the movies the nearest neighbor has seen but our user has not, and recommend them
    neighborRatings = users[nearest]
    userRatings = users[username]
    for artist in neighborRatings:
        if not artist in userRatings:
            recommendations.append((artist, neighborRatings[artist]))
    results = sorted(recommendations, key=lambda artistTuple: artistTuple[1], reverse=True)
    for result in results:
        print(result[0], result[1])
recommend('六爷', users)
# Simple matrix factorization for rating prediction and recommendation
# requires the numpy module
import numpy
# hand-written matrix factorization
# there are many convenient packages for factorizing high-dimensional matrices, e.g. libmf, svdfeature
def matrix_factorization(R, P, Q, K, steps=5000, alpha=0.0002, beta=0.02):
    Q = Q.T
    for step in range(steps):
        for i in range(len(R)):
            for j in range(len(R[i])):
                if R[i][j] > 0:
                    eij = R[i][j] - numpy.dot(P[i, :], Q[:, j])
                    for k in range(K):
                        P[i][k] = P[i][k] + alpha * (2 * eij * Q[k][j] - beta * P[i][k])
                        Q[k][j] = Q[k][j] + alpha * (2 * eij * P[i][k] - beta * Q[k][j])
        eR = numpy.dot(P, Q)
        e = 0
        for i in range(len(R)):
            for j in range(len(R[i])):
                if R[i][j] > 0:
                    e = e + pow(R[i][j] - numpy.dot(P[i, :], Q[:, j]), 2)
                    for k in range(K):
                        e = e + (beta / 2) * (pow(P[i][k], 2) + pow(Q[k][j], 2))
        if e < 0.001:
            break
    return P, Q.T
# build the user rating matrix and predict the missing ratings with matrix factorization
R = [
[5,3,0,1],
[4,0,3,1],
[1,1,0,5],
[1,0,0,4],
[0,1,5,4],
]
R = numpy.array(R)
N = len(R)
M = len(R[0])
K = 2
P = numpy.random.rand(N,K)
Q = numpy.random.rand(M,K)
nP, nQ = matrix_factorization(R, P, Q, K)
nR = numpy.dot(nP, nQ.T)
nP
nQ
nR
R
```
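As a quick sanity check of the Pearson formula used above (repeated here so the snippet is standalone), two perfectly linearly related rating sequences should give a correlation of 1.0:

```python
from math import sqrt

def pearson(rating1, rating2):
    # same approximated Pearson formula as pearson_dis in the notebook above
    sum_xy = sum_x = sum_y = sum_x2 = sum_y2 = n = 0
    for key in rating1:
        if key in rating2:
            n += 1
            x, y = rating1[key], rating2[key]
            sum_xy += x * y
            sum_x += x
            sum_y += y
            sum_x2 += x * x
            sum_y2 += y * y
    denominator = sqrt(sum_x2 - sum_x ** 2 / n) * sqrt(sum_y2 - sum_y ** 2 / n)
    return 0 if denominator == 0 else (sum_xy - sum_x * sum_y / n) / denominator

print(pearson({"a": 1.0, "b": 2.0, "c": 3.0}, {"a": 2.0, "b": 4.0, "c": 6.0}))  # ~1.0
```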
# Assignment #04
## Exercise #04-01: a glimpse in the C language
This exercise can be done on a linux machine only!
```{tip}
You can use MyBinder's terminal if you don't have Linux!
```
Here is the C code sample from the lecture:
```c
#include <stdio.h>
int main ()
{
int a = 2;
int b = 3;
int c = a + b;
printf ("Sum of two numbers : %d \n", c);
}
```
**Write this code in a C code file, compile and run it.**
**Now, replace the line ``int b = 3`` with ``char b[] = "Hello";``. Compile and run the program again (ignore warnings at compilation). Does the output match your expectations? Can you explain what happens? Compare this behavior to python's, and try to explain why this behavior can lead to faster execution times.**
(content:montecarlo)=
## Exercise #04-02: Monte-Carlo estimation of $\pi$
A simple way to estimate $\pi$ using a computer is based on a [Monte-Carlo](https://en.wikipedia.org/wiki/Monte_Carlo_method) method. By drawing a sample of N points with random 2D coordinates (x, y) in the ``[0, 1[`` range, the ratio of points that fall within the unit circle divided by the total number of points (N) gives an estimate of $\pi / 4$.
**Provide two implementations of the monte-carlo estimation of $\pi$: a pure python version (standard library) and a vectorized version using numpy. Time their execution for N = [1e2, 1e3, ..., 1e7]. Optional: plot the numpy speed-up as a function of N.**
**Optional: try the numpy version with N=1e8 and above. Make conclusions about a new trade-off happening for large values of N.**
```{tip}
You can try to mimic ipython's ``%timeit`` in your code by running each function at least three times and taking the fastest execution of all three.
```
## Exercise #04-03: a new format based on fixed point binary numbers
Write a function which converts binary strings to decimal numbers. The function should handle unsigned (positive) numbers only. Examples:
- ``'101010'`` $\rightarrow$ ``42``
- ``'10000.1'`` $\rightarrow$ ``16.5``
- ``'1011101101.10101011'`` $\rightarrow$ ``749.66796875``
Now let's develop a new standard based on this representation. Dots cannot be represented by 0s and 1s, so that if we want the position of the dot to be flexible we need an additional memory slot to store this position. Let's define our new format as a 32 bits long sequence of bits, the first 5 bits (starting from the left) being used to give the position of the dot, and the remaining 27 bits used to represent the number. Examples:
- ``'10101010101010101010101010101010'`` $\rightarrow$ ``699050.65625``.
- ``'00000001100110011001100110011001'`` $\rightarrow$ ``0.19999999552965164``.
Explanation for example 1: the first five digits are `'10101'` which gives the number 21. The second part of the string therefore becomes a dot at position 21: ``'010101010101010101010.101010'``. This binary number is then converted to decimal.
Let's name this standard "BSE" (for "best standard ever"), and try to convince the *Institute of Electrical and Electronics Engineers* to adopt it in place of the old IEEE 754 standard. We have to answer the following questions:
- what is the smallest number the BSE can represent? The largest?
- what is the maximal accuracy of the BSE? (in other words, what is the difference between the smallest positive number and zero?)
- what is the lowest accuracy of our standard? (in other words, what is the difference between the largest number we can represent and the second largest?)
- does the difference between two nearest representable change, when the dot position doesn't?
- now compute the precision of our format for a range of possible values of the BSE
- for these values, compare the BSE to the IEEE754 ``binary32`` format (or its numpy equivalent ``np.float32``) using [numpy.nextafter](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.nextafter.html).
- (optional: you can also use matplotlib and a log-log plot to produce a graphic similar to the [wikipedia page on IEEE 754](https://en.wikipedia.org/wiki/IEEE_754#Basic_and_interchange_formats))
Conclude. Do you think we should try to convince the *Institute of Electrical and Electronics Engineers* and [present them our results](https://xkcd.com/541/)?
```{warning}
The BSE format **is not** the IEEE754 format. The BSE is a fun format explaining *some* (but not all) of the underlying concepts behind floating point numbers. I'm just saying, because some people got confused during the exam and remembered the BSE better than the real floating point representation...
```
## Exercise #04-04: exponential error growth
The number `e` can be defined as the sum of the infinite series:
$$e = \sum_{n=0}^{\infty} \frac{1}{n!}$$
We are going to approximate this number by truncating the sum to a finite value. We use the **standard library** and its math module:
```
import math
n = 100
e1 = 0
for i in range(n + 1):
e1 += 1. / math.factorial(i)
e1
```
Close enough! Now let's compute it with the same values, but summed from n=100 to n=0:
```
e2 = 0
for i in range(n + 1)[::-1]:
e2 += 1. / math.factorial(i)
e2
```
Seems reasonable too! Are they different?
```
e1 - e2
```
**Which of the two values is closest to the actual e? Explain why this occurs, and what we can learn from this experiment.**
# Summary of Quantum Operations
In this section we will go into the different operations that are available in Qiskit Terra. These are:
- Single-qubit quantum gates
- Multi-qubit quantum gates
- Measurements
- Reset
- Conditionals
- State initialization
We will also show you how to use the three different simulators:
- unitary_simulator
- qasm_simulator
- statevector_simulator
```
# Useful additional packages
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from math import pi
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
from qiskit.tools.visualization import circuit_drawer
from qiskit.quantum_info import state_fidelity
from qiskit import BasicAer
backend = BasicAer.get_backend('unitary_simulator')
```
## Single Qubit Quantum states <a name="single_states"/>
A single qubit quantum state can be written as
$$\left|\psi\right\rangle = \alpha\left|0\right\rangle + \beta \left|1\right\rangle$$
where $\alpha$ and $\beta$ are complex numbers. In a measurement the probability of the bit being in $\left|0\right\rangle$ is $|\alpha|^2$ and $\left|1\right\rangle$ is $|\beta|^2$. As a vector this is
$$
\left|\psi\right\rangle =
\begin{pmatrix}
\alpha \\
\beta
\end{pmatrix}.
$$
Note, due to the conservation of probability $|\alpha|^2+ |\beta|^2 = 1$ and since global phase is undetectable $\left|\psi\right\rangle := e^{i\delta} \left|\psi\right\rangle$ we only require two real numbers to describe a single qubit quantum state.
A convenient representation is
$$\left|\psi\right\rangle = \cos(\theta/2)\left|0\right\rangle + \sin(\theta/2)e^{i\phi}\left|1\right\rangle$$
where $0\leq \phi < 2\pi$, and $0\leq \theta \leq \pi$. From this, it is clear that there is a one-to-one correspondence between qubit states ($\mathbb{C}^2$) and the points on the surface of a unit sphere ($\mathbb{R}^3$). This is called the Bloch sphere representation of a qubit state.
Quantum gates/operations are usually represented as matrices. A gate which acts on a qubit is represented by a $2\times 2$ unitary matrix $U$. The action of the quantum gate is found by multiplying the matrix representing the gate with the vector which represents the quantum state.
$$\left|\psi'\right\rangle = U\left|\psi\right\rangle$$
A general unitary must be able to take $\left|0\right\rangle$ to the above state. That is
$$
U = \begin{pmatrix}
\cos(\theta/2) & a \\
e^{i\phi}\sin(\theta/2) & b
\end{pmatrix}
$$
where $a$ and $b$ are complex numbers constrained such that $U^\dagger U = I$ for all $0\leq\theta\leq\pi$ and $0\leq \phi<2\pi$. This gives 3 constraints and as such $a\rightarrow -e^{i\lambda}\sin(\theta/2)$ and $b\rightarrow e^{i\lambda+i\phi}\cos(\theta/2)$ where $0\leq \lambda<2\pi$ giving
$$
U = \begin{pmatrix}
\cos(\theta/2) & -e^{i\lambda}\sin(\theta/2) \\
e^{i\phi}\sin(\theta/2) & e^{i\lambda+i\phi}\cos(\theta/2)
\end{pmatrix}.
$$
This is the most general form of a single qubit unitary.
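As a sanity check, this parametrization can be implemented directly in numpy and verified to satisfy $U^\dagger U = I$ for arbitrary angles — a small sketch (the helper name `u3` is chosen here only to mirror the gate name below):

```
import numpy as np

def u3(theta, phi, lam):
    """General single-qubit unitary in the parametrization above."""
    return np.array([[np.cos(theta / 2), -np.exp(1j * lam) * np.sin(theta / 2)],
                     [np.exp(1j * phi) * np.sin(theta / 2),
                      np.exp(1j * (lam + phi)) * np.cos(theta / 2)]])

U = u3(0.3, 1.1, 2.5)
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: U is unitary
```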
## Single-Qubit Gates <a name="single_gates"/>
The single-qubit gates available are:
- u gates
- Identity gate
- Pauli gates
- Clifford gates
- $C3$ gates
- Standard rotation gates
We have provided a backend: `unitary_simulator` to allow you to calculate the unitary matrices.
```
q = QuantumRegister(1)
```
### u gates
In Qiskit we give you access to the general unitary using the $u3$ gate
$$
u3(\theta, \phi, \lambda) = U(\theta, \phi, \lambda)
$$
```
qc = QuantumCircuit(q)
qc.u3(pi/2,pi/2,pi/2,q)
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
The $u2(\phi, \lambda) =u3(\pi/2, \phi, \lambda)$ gate has the matrix form
$$
u2(\phi, \lambda) =
\frac{1}{\sqrt{2}} \begin{pmatrix}
1 & -e^{i\lambda} \\
e^{i\phi} & e^{i(\phi + \lambda)}
\end{pmatrix}.
$$
This is a useful gate as it allows us to create superpositions.
```
qc = QuantumCircuit(q)
qc.u2(pi/2,pi/2,q)
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
The $u1(\lambda)= u3(0, 0, \lambda)$ gate has the matrix form
$$
u1(\lambda) =
\begin{pmatrix}
1 & 0 \\
0 & e^{i \lambda}
\end{pmatrix},
$$
which is useful as it allows us to apply a quantum phase.
```
qc = QuantumCircuit(q)
qc.u1(pi/2,q)
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
### Identity gate
The identity gate is $Id = u0(1)$.
```
qc = QuantumCircuit(q)
qc.id(q)
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
### Pauli gates
#### $X$: bit-flip gate
The bit-flip gate $X$ is defined as:
$$
X =
\begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix}= u3(\pi,0,\pi)
$$
```
qc = QuantumCircuit(q)
qc.x(q)
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
#### $Y$: bit- and phase-flip gate
The $Y$ gate is defined as:
$$
Y =
\begin{pmatrix}
0 & -i\\
i & 0
\end{pmatrix}=u3(\pi,\pi/2,\pi/2)
$$
```
qc = QuantumCircuit(q)
qc.y(q)
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
#### $Z$: phase-flip gate
The phase-flip gate $Z$ is defined as:
$$
Z =
\begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix}=u1(\pi)
$$
```
qc = QuantumCircuit(q)
qc.z(q)
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
### Clifford gates
#### Hadamard gate
$$
H =
\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1\\
1 & -1
\end{pmatrix}= u2(0,\pi)
$$
```
qc = QuantumCircuit(q)
qc.h(q)
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
#### $S$ (or, $\sqrt{Z}$ phase) gate
$$
S =
\begin{pmatrix}
1 & 0\\
0 & i
\end{pmatrix}= u1(\pi/2)
$$
```
qc = QuantumCircuit(q)
qc.s(q)
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
#### $S^{\dagger}$ (or, conjugate of $\sqrt{Z}$ phase) gate
$$
S^{\dagger} =
\begin{pmatrix}
1 & 0\\
0 & -i
\end{pmatrix}= u1(-\pi/2)
$$
```
qc = QuantumCircuit(q)
qc.sdg(q)
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
### $C3$ gates
#### $T$ (or, $\sqrt{S}$ phase) gate
$$
T =
\begin{pmatrix}
1 & 0\\
0 & e^{i \pi/4}
\end{pmatrix}= u1(\pi/4)
$$
```
qc = QuantumCircuit(q)
qc.t(q)
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
#### $T^{\dagger}$ (or, conjugate of $\sqrt{S}$ phase) gate
$$
T^{\dagger} =
\begin{pmatrix}
1 & 0\\
0 & e^{-i \pi/4}
\end{pmatrix}= u1(-\pi/4)
$$
```
qc = QuantumCircuit(q)
qc.tdg(q)
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
### Standard Rotations
The standard rotation gates are those that define rotations around the Paulis $P=\{X,Y,Z\}$. They are defined as
$$ R_P(\theta) = \exp(-i \theta P/2) = \cos(\theta/2)I -i \sin(\theta/2)P$$
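This identity relies on $P^2 = I$; it can be checked numerically by computing the matrix exponential through the eigendecomposition of the Hermitian Pauli — a sketch for $P = X$ (the same check works for $Y$ and $Z$):

```
import numpy as np

P = np.array([[0, 1], [1, 0]])  # Pauli X
theta = 0.7
# matrix exponential exp(-i*theta/2 * P) via the eigendecomposition of the Hermitian P
vals, vecs = np.linalg.eigh(P)
lhs = vecs @ np.diag(np.exp(-1j * theta / 2 * vals)) @ vecs.conj().T
rhs = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * P
print(np.allclose(lhs, rhs))  # True
```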
#### Rotation around X-axis
$$
R_x(\theta) =
\begin{pmatrix}
\cos(\theta/2) & -i\sin(\theta/2)\\
-i\sin(\theta/2) & \cos(\theta/2)
\end{pmatrix} = u3(\theta, -\pi/2,\pi/2)
$$
```
qc = QuantumCircuit(q)
qc.rx(pi/2,q)
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
#### Rotation around Y-axis
$$
R_y(\theta) =
\begin{pmatrix}
\cos(\theta/2) & - \sin(\theta/2)\\
\sin(\theta/2) & \cos(\theta/2)
\end{pmatrix} =u3(\theta,0,0)
$$
```
qc = QuantumCircuit(q)
qc.ry(pi/2,q)
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
#### Rotation around Z-axis
$$
R_z(\phi) =
\begin{pmatrix}
e^{-i \phi/2} & 0 \\
0 & e^{i \phi/2}
\end{pmatrix}\equiv u1(\phi)
$$
Note that here we have used $\equiv$ because $R_z(\phi)$ differs from $u1(\phi)$ by the global phase $e^{-i \phi/2}$.
```
qc = QuantumCircuit(q)
qc.rz(pi/2,q)
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
Note this is different due only to a global phase.
## Multi-Qubit Gates <a name="multi_gates"/>
### Mathematical Preliminaries
The space of a quantum computer grows exponentially with the number of qubits. For $n$ qubits the complex vector space has dimension $d=2^n$. To describe states of a multi-qubit system, the tensor product is used to "glue together" operators and basis vectors.
Let's start by considering a 2-qubit system. Given two operators $A$ and $B$ that each act on one qubit, the joint operator $A \otimes B$ acting on two qubits is
$$\begin{equation}
A\otimes B =
\begin{pmatrix}
A_{00} \begin{pmatrix}
B_{00} & B_{01} \\
B_{10} & B_{11}
\end{pmatrix} & A_{01} \begin{pmatrix}
B_{00} & B_{01} \\
B_{10} & B_{11}
\end{pmatrix} \\
A_{10} \begin{pmatrix}
B_{00} & B_{01} \\
B_{10} & B_{11}
\end{pmatrix} & A_{11} \begin{pmatrix}
B_{00} & B_{01} \\
B_{10} & B_{11}
\end{pmatrix}
\end{pmatrix},
\end{equation}$$
where $A_{jk}$ and $B_{lm}$ are the matrix elements of $A$ and $B$, respectively.
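numpy's `np.kron` implements exactly this block formula; for example, with $A = X$ and $B = Z$:

```
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
XZ = np.kron(X, Z)  # A ⊗ B with A = X, B = Z, following the block formula above
print(XZ)
```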
Analogously, the basis vectors for the 2-qubit system are formed using the tensor product of basis vectors for a single qubit:
$$\begin{equation}\begin{split}
\left|{00}\right\rangle &= \begin{pmatrix}
1 \begin{pmatrix}
1 \\
0
\end{pmatrix} \\
0 \begin{pmatrix}
1 \\
0
\end{pmatrix}
\end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \\0 \end{pmatrix}~~~\left|{01}\right\rangle = \begin{pmatrix}
1 \begin{pmatrix}
0 \\
1
\end{pmatrix} \\
0 \begin{pmatrix}
0 \\
1
\end{pmatrix}
\end{pmatrix} = \begin{pmatrix}0 \\ 1 \\ 0 \\ 0 \end{pmatrix}\end{split}
\end{equation}$$
$$\begin{equation}\begin{split}\left|{10}\right\rangle = \begin{pmatrix}
0\begin{pmatrix}
1 \\
0
\end{pmatrix} \\
1\begin{pmatrix}
1 \\
0
\end{pmatrix}
\end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}~~~ \left|{11}\right\rangle = \begin{pmatrix}
0 \begin{pmatrix}
0 \\
1
\end{pmatrix} \\
1\begin{pmatrix}
0 \\
1
\end{pmatrix}
\end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\1 \end{pmatrix}\end{split}
\end{equation}.$$
Note we've introduced a shorthand for the tensor product of basis vectors, wherein $\left|0\right\rangle \otimes \left|0\right\rangle$ is written as $\left|00\right\rangle$. The state of an $n$-qubit system can be described using the $n$-fold tensor product of single-qubit basis vectors. Notice that the basis vectors for a 2-qubit system are 4-dimensional; in general, the basis vectors of an $n$-qubit system are $2^{n}$-dimensional, as noted earlier.
### Basis vector ordering in Qiskit
Within the physics community, the qubits of a multi-qubit system are typically ordered with the first qubit on the left-most side of the tensor product and the last qubit on the right-most side. For instance, if the first qubit is in state $\left|0\right\rangle$ and the second is in state $\left|1\right\rangle$, their joint state would be $\left|01\right\rangle$. Qiskit uses a slightly different ordering of the qubits, in which the qubits are represented from the most significant bit (MSB) on the left to the least significant bit (LSB) on the right, with qubit 0 as the LSB. This is similar to bitstring representation on classical computers, and enables easy conversion from bitstrings to integers after measurements are performed. For the example just given, the joint state would be represented as $\left|10\right\rangle$. Importantly, *this change in the representation of multi-qubit states affects the way multi-qubit gates are represented in Qiskit*, as discussed below.
The representation used in Qiskit enumerates the basis vectors in increasing order of the integers they represent. For instance, the basis vectors for a 2-qubit system would be ordered as $\left|00\right\rangle$, $\left|01\right\rangle$, $\left|10\right\rangle$, and $\left|11\right\rangle$. Thinking of the basis vectors as bit strings, they encode the integers 0,1,2 and 3, respectively.
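A quick numpy sketch of this ordering (toy vectors for illustration, not Qiskit code): building the joint state as qubit 1 in $\left|0\right\rangle$ tensored with qubit 0 in $\left|1\right\rangle$ places the single nonzero amplitude at index 1, matching the bitstring '01':

```
import numpy as np

zero = np.array([1, 0])
one = np.array([0, 1])
# Qiskit convention: the joint state is |q1 q0>, with qubit 0 as the least significant bit
state = np.kron(zero, one)   # qubit 1 in |0>, qubit 0 in |1>  ->  basis state |01>
print(np.argmax(state))      # 1: the bitstring '01' encodes the integer 1
```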
### Controlled operations on qubits
A common multi-qubit gate involves the application of a gate to one qubit, conditioned on the state of another qubit. For instance, we might want to flip the state of the second qubit when the first qubit is in $\left|1\right\rangle$. Such gates are known as _controlled gates_. The standard multi-qubit gates consist of two-qubit gates and three-qubit gates. The two-qubit gates are:
- controlled Pauli gates
- controlled Hadamard gate
- controlled rotation gates
- controlled phase gate
- controlled u3 gate
- swap gate
The three-qubit gates are:
- Toffoli gate
- Fredkin gate
## Two-qubit gates <a name="two_gates"/>
Most of the two-qubit gates are of the controlled type (the SWAP gate being the exception). In general, a controlled two-qubit gate $C_{U}$ acts to apply the single-qubit unitary $U$ to the second qubit when the state of the first qubit is in $\left|1\right\rangle$. Suppose $U$ has a matrix representation
$$U = \begin{pmatrix} u_{00} & u_{01} \\ u_{10} & u_{11}\end{pmatrix}.$$
We can work out the action of $C_{U}$ as follows. Recall that the basis vectors for a two-qubit system are ordered as $\left|00\right\rangle, \left|01\right\rangle, \left|10\right\rangle, \left|11\right\rangle$. Suppose the **control qubit** is **qubit 0** (which, according to Qiskit's convention, is on the _right-hand_ side of the tensor product). If the control qubit is in $\left|1\right\rangle$, $U$ should be applied to the **target** (qubit 1, on the _left-hand_ side of the tensor product). Therefore, under the action of $C_{U}$, the basis vectors are transformed according to
$$\begin{align*}
C_{U}: \underset{\text{qubit}~1}{\left|0\right\rangle}\otimes \underset{\text{qubit}~0}{\left|0\right\rangle} &\rightarrow \underset{\text{qubit}~1}{\left|0\right\rangle}\otimes \underset{\text{qubit}~0}{\left|0\right\rangle}\\
C_{U}: \underset{\text{qubit}~1}{\left|0\right\rangle}\otimes \underset{\text{qubit}~0}{\left|1\right\rangle} &\rightarrow \underset{\text{qubit}~1}{U\left|0\right\rangle}\otimes \underset{\text{qubit}~0}{\left|1\right\rangle}\\
C_{U}: \underset{\text{qubit}~1}{\left|1\right\rangle}\otimes \underset{\text{qubit}~0}{\left|0\right\rangle} &\rightarrow \underset{\text{qubit}~1}{\left|1\right\rangle}\otimes \underset{\text{qubit}~0}{\left|0\right\rangle}\\
C_{U}: \underset{\text{qubit}~1}{\left|1\right\rangle}\otimes \underset{\text{qubit}~0}{\left|1\right\rangle} &\rightarrow \underset{\text{qubit}~1}{U\left|1\right\rangle}\otimes \underset{\text{qubit}~0}{\left|1\right\rangle}\\
\end{align*}.$$
In matrix form, the action of $C_{U}$ is
$$\begin{equation}
C_U = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & u_{00} & 0 & u_{01} \\
0 & 0 & 1 & 0 \\
0 & u_{10} &0 & u_{11}
\end{pmatrix}.
\end{equation}$$
To work out these matrix elements, let
$$C_{(jk), (lm)} = \left(\underset{\text{qubit}~1}{\left\langle j \right|} \otimes \underset{\text{qubit}~0}{\left\langle k \right|}\right) C_{U} \left(\underset{\text{qubit}~1}{\left| l \right\rangle} \otimes \underset{\text{qubit}~0}{\left| m \right\rangle}\right),$$
compute the action of $C_{U}$ (given above), and compute the inner products.
As shown in the examples below, this operation is implemented in Qiskit as `cU(q[0],q[1])`.
If **qubit 1 is the control and qubit 0 is the target**, then the basis vectors are transformed according to
$$\begin{align*}
C_{U}: \underset{\text{qubit}~1}{\left|0\right\rangle}\otimes \underset{\text{qubit}~0}{\left|0\right\rangle} &\rightarrow \underset{\text{qubit}~1}{\left|0\right\rangle}\otimes \underset{\text{qubit}~0}{\left|0\right\rangle}\\
C_{U}: \underset{\text{qubit}~1}{\left|0\right\rangle}\otimes \underset{\text{qubit}~0}{\left|1\right\rangle} &\rightarrow \underset{\text{qubit}~1}{\left|0\right\rangle}\otimes \underset{\text{qubit}~0}{\left|1\right\rangle}\\
C_{U}: \underset{\text{qubit}~1}{\left|1\right\rangle}\otimes \underset{\text{qubit}~0}{\left|0\right\rangle} &\rightarrow \underset{\text{qubit}~1}{\left|1\right\rangle}\otimes \underset{\text{qubit}~0}{U\left|0\right\rangle}\\
C_{U}: \underset{\text{qubit}~1}{\left|1\right\rangle}\otimes \underset{\text{qubit}~0}{\left|1\right\rangle} &\rightarrow \underset{\text{qubit}~1}{\left|1\right\rangle}\otimes \underset{\text{qubit}~0}{U\left|1\right\rangle}\\
\end{align*},$$
which implies the matrix form of $C_{U}$ is
$$\begin{equation}
C_U = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & u_{00} & u_{01} \\
0 & 0 & u_{10} & u_{11}
\end{pmatrix}.
\end{equation}$$
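Both matrix forms can be reproduced with plain numpy via the projector decomposition $C_U = I \otimes \left|0\right\rangle\left\langle 0\right| + U \otimes \left|1\right\rangle\left\langle 1\right|$ (control on qubit 0, the right factor) and its mirror image for control on qubit 1 — a sketch, taking $U = X$ for readability:

```
import numpy as np

P0 = np.array([[1, 0], [0, 0]])   # projector |0><0|
P1 = np.array([[0, 0], [0, 1]])   # projector |1><1|
I2 = np.eye(2, dtype=int)
U = np.array([[0, 1], [1, 0]])    # any single-qubit unitary; here U = X

# control = qubit 0 (right factor of the tensor product), target = qubit 1
CU_ctrl0 = np.kron(I2, P0) + np.kron(U, P1)
# control = qubit 1 (left factor), target = qubit 0
CU_ctrl1 = np.kron(P0, I2) + np.kron(P1, U)
print(CU_ctrl0)
print(CU_ctrl1)
```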
```
q = QuantumRegister(2)
```
### Controlled Pauli Gates
#### Controlled-X (or, controlled-NOT) gate
The controlled-not gate flips the `target` qubit when the control qubit is in the state $\left|1\right\rangle$. If we take the MSB as the control qubit (e.g. `cx(q[1],q[0])`), then the matrix would look like
$$
C_X =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0
\end{pmatrix}.
$$
However, when the LSB is the control qubit, (e.g. `cx(q[0],q[1])`), this gate is equivalent to the following matrix:
$$
C_X =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0
\end{pmatrix}.
$$
```
qc = QuantumCircuit(q)
qc.cx(q[0],q[1])
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
#### Controlled $Y$ gate
Apply the $Y$ gate to the target qubit if the control qubit is the MSB
$$
C_Y =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & -i\\
0 & 0 & i & 0
\end{pmatrix},
$$
or when the LSB is the control
$$
C_Y =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 0 & -i\\
0 & 0 & 1 & 0\\
0 & i & 0 & 0
\end{pmatrix}.
$$
```
qc = QuantumCircuit(q)
qc.cy(q[0],q[1])
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
#### Controlled $Z$ (or, controlled Phase) gate
Similarly, the controlled Z gate flips the phase of the target qubit if the control qubit is $\left|1\right\rangle$. The matrix looks the same regardless of whether the MSB or LSB is the control qubit:
$$
C_Z =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & -1
\end{pmatrix}
$$
```
qc = QuantumCircuit(q)
qc.cz(q[0],q[1])
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
### Controlled Hadamard gate
Apply $H$ gate to the target qubit if the control qubit is $\left|1\right\rangle$. Below is the case where the control is the LSB qubit.
$$
C_H =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}}\\
0 & 0 & 1 & 0\\
0 & \frac{1}{\sqrt{2}} & 0& -\frac{1}{\sqrt{2}}
\end{pmatrix}
$$
```
qc = QuantumCircuit(q)
qc.ch(q[0],q[1])
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
### Controlled rotation gates
#### Controlled rotation around Z-axis
Perform rotation around Z-axis on the target qubit if the control qubit (here LSB) is $\left|1\right\rangle$.
$$
C_{Rz}(\lambda) =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & e^{-i\lambda/2} & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & e^{i\lambda/2}
\end{pmatrix}
$$
```
qc = QuantumCircuit(q)
qc.crz(pi/2,q[0],q[1])
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
### Controlled phase rotation
Perform a phase rotation if both qubits are in the $\left|11\right\rangle$ state. The matrix looks the same regardless of whether the MSB or LSB is the control qubit.
$$
C_{u1}(\lambda) =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & e^{i\lambda}
\end{pmatrix}
$$
```
qc = QuantumCircuit(q)
qc.cu1(pi/2,q[0], q[1])
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
### Controlled $u3$ rotation
Perform controlled-$u3$ rotation on the target qubit if the control qubit (here LSB) is $\left|1\right\rangle$.
$$
C_{u3}(\theta, \phi, \lambda) \equiv
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & e^{-i(\phi+\lambda)/2}\cos(\theta/2) & 0 & -e^{-i(\phi-\lambda)/2}\sin(\theta/2)\\
0 & 0 & 1 & 0\\
0 & e^{i(\phi-\lambda)/2}\sin(\theta/2) & 0 & e^{i(\phi+\lambda)/2}\cos(\theta/2)
\end{pmatrix}.
$$
```
qc = QuantumCircuit(q)
qc.cu3(pi/2, pi/2, pi/2, q[0], q[1])
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
### SWAP gate
The SWAP gate exchanges the two qubits. It transforms the basis vectors as
$$\left|00\right\rangle \rightarrow \left|00\right\rangle~,~\left|01\right\rangle \rightarrow \left|10\right\rangle~,~\left|10\right\rangle \rightarrow \left|01\right\rangle~,~\left|11\right\rangle \rightarrow \left|11\right\rangle,$$
which gives a matrix representation of the form
$$
\mathrm{SWAP} =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1
\end{pmatrix}.
$$
```
qc = QuantumCircuit(q)
qc.swap(q[0], q[1])
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
## Three-qubit gates <a name="three_gates"/>
There are two commonly-used three-qubit gates. For three qubits, the basis vectors are ordered as
$$\left|000\right\rangle, \left|001\right\rangle, \left|010\right\rangle, \left|011\right\rangle, \left|100\right\rangle, \left|101\right\rangle, \left|110\right\rangle, \left|111\right\rangle,$$
which, as bitstrings, represent the integers $0,1,2,\cdots, 7$. Again, Qiskit uses a representation in which the first qubit is on the right-most side of the tensor product and the third qubit is on the left-most side:
$$\left|abc\right\rangle : \underset{\text{qubit 2}}{\left|a\right\rangle}\otimes \underset{\text{qubit 1}}{\left|b\right\rangle}\otimes \underset{\text{qubit 0}}{\left|c\right\rangle}.$$
### Toffoli gate ($ccx$ gate)
The [Toffoli gate](https://en.wikipedia.org/wiki/Quantum_logic_gate#Toffoli_(CCNOT)_gate) flips the third qubit if the first two qubits (LSB) are both $\left|1\right\rangle$:
$$\left|abc\right\rangle \rightarrow \left|a \oplus bc\right\rangle \otimes \left|b\right\rangle \otimes \left|c\right\rangle.$$
In matrix form, the Toffoli gate is
$$
C_{CX} =
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0
\end{pmatrix}.
$$
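The Toffoli matrix can also be rebuilt from its classical truth table — a sketch constructing the permutation matrix by flipping bit 2 of every basis index whose bits 0 and 1 (the controls) are both set:

```
import numpy as np

dim = 8
CCX = np.zeros((dim, dim), dtype=int)
for i in range(dim):
    c0, c1 = i & 1, (i >> 1) & 1       # qubits 0 and 1: the two controls
    j = i ^ 4 if (c0 and c1) else i    # flip qubit 2 (bit value 4) when both controls are 1
    CCX[j, i] = 1                      # basis state i is mapped to basis state j
print(CCX)
```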
```
q = QuantumRegister(3)
qc = QuantumCircuit(q)
qc.ccx(q[0], q[1], q[2])
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
### Controlled swap gate (Fredkin Gate)
The [Fredkin gate](https://en.wikipedia.org/wiki/Quantum_logic_gate#Fredkin_(CSWAP)_gate), or the *controlled swap gate*, exchanges the second and third qubits if the first qubit (LSB) is $\left|1\right\rangle$:
$$ \left|abc\right\rangle \rightarrow \begin{cases} \left|bac\right\rangle~~\text{if}~c=1 \cr \left|abc\right\rangle~~\text{if}~c=0 \end{cases}.$$
In matrix form, the Fredkin gate is
$$
C_{\mathrm{SWAP}} =
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}.
$$
```
qc = QuantumCircuit(q)
qc.cswap(q[0], q[1], q[2])
qc.draw()
job = execute(qc, backend)
job.result().get_unitary(qc, decimals=3)
```
## Non-unitary operations <a name="non_unitary"/>
Now that we have gone through all the unitary operations in quantum circuits, we also have access to non-unitary operations. These include measurements, reset of qubits, and classical conditional operations.
```
q = QuantumRegister(1)
c = ClassicalRegister(1)
```
### Measurements
We don't have access to all the information when we make a measurement on a quantum computer: the quantum state is projected onto the standard basis. Below are two examples, showing a circuit prepared in a basis state and a circuit prepared in a superposition state.
```
qc = QuantumCircuit(q, c)
qc.measure(q, c)
qc.draw()
backend = BasicAer.get_backend('qasm_simulator')
job = execute(qc, backend, shots=1024)
job.result().get_counts(qc)
```
The simulator predicts that 100 percent of the time the classical register returns 0.
```
qc = QuantumCircuit(q, c)
qc.h(q)
qc.measure(q, c)
qc.draw()
job = execute(qc, backend, shots=1024)
job.result().get_counts(qc)
```
The simulator predicts that 50 percent of the time the classical register returns 0 or 1.
### Reset
It is also possible to `reset` qubits to the $\left|0\right\rangle$ state in the middle of computation. Note that `reset` is not a Gate operation, since it is irreversible.
```
qc = QuantumCircuit(q, c)
qc.reset(q[0])
qc.measure(q, c)
qc.draw()
job = execute(qc, backend, shots=1024)
job.result().get_counts(qc)
qc = QuantumCircuit(q, c)
qc.h(q)
qc.reset(q[0])
qc.measure(q, c)
qc.draw()
job = execute(qc, backend, shots=1024)
job.result().get_counts(qc)
```
Here we see that for both of these circuits the simulator always predicts that the output is 100 percent in the 0 state.
### Conditional operations
It is also possible to do operations conditioned on the state of the classical register
```
qc = QuantumCircuit(q, c)
qc.x(q[0]).c_if(c, 0)
qc.measure(q,c)
qc.draw()
```
Here the classical bit always takes the value 0 so the qubit state is always flipped.
```
job = execute(qc, backend, shots=1024)
job.result().get_counts(qc)
qc = QuantumCircuit(q, c)
qc.h(q)
qc.measure(q,c)
qc.x(q[0]).c_if(c, 0)
qc.measure(q,c)
qc.draw()
job = execute(qc, backend, shots=1024)
job.result().get_counts(qc)
```
Here the classical bit set by the first measurement is random, but the conditional operation results in the qubit being deterministically put into $\left|1\right\rangle$.
## Arbitrary initialization <a name="initialization"/>
What if we want to initialize a qubit register to an arbitrary state? An arbitrary state for $n$ qubits may be specified by a vector of $2^n$ amplitudes, where the sum of amplitude-norms-squared equals 1. For example, the following three-qubit state can be prepared:
$$\left|\psi\right\rangle = \frac{i}{4}\left|000\right\rangle + \frac{1}{\sqrt{8}}\left|001\right\rangle + \frac{1+i}{4}\left|010\right\rangle + \frac{1+2i}{\sqrt{8}}\left|101\right\rangle + \frac{1}{4}\left|110\right\rangle$$
```
# Initializing a three-qubit quantum state
import math
desired_vector = [
1 / math.sqrt(16) * complex(0, 1),
1 / math.sqrt(8) * complex(1, 0),
1 / math.sqrt(16) * complex(1, 1),
0,
0,
1 / math.sqrt(8) * complex(1, 2),
1 / math.sqrt(16) * complex(1, 0),
0]
q = QuantumRegister(3)
qc = QuantumCircuit(q)
qc.initialize(desired_vector, [q[0],q[1],q[2]])
qc.draw()
backend = BasicAer.get_backend('statevector_simulator')
job = execute(qc, backend)
qc_state = job.result().get_statevector(qc)
qc_state
```
[Fidelity](https://en.wikipedia.org/wiki/Fidelity_of_quantum_states) is useful to check whether two states are the same or not.
For quantum (pure) states $\left|\psi_1\right\rangle$ and $\left|\psi_2\right\rangle$, the fidelity is
$$
F\left(\left|\psi_1\right\rangle,\left|\psi_2\right\rangle\right) = \left|\left\langle\psi_1\middle|\psi_2\right\rangle\right|^2.
$$
The fidelity is equal to $1$ if and only if two states are equal.
```
state_fidelity(desired_vector,qc_state)
```
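The same quantity can be computed by hand with numpy — here for $\left|0\right\rangle$ and the superposition $(\left|0\right\rangle + \left|1\right\rangle)/\sqrt{2}$, whose fidelity is $1/2$:

```
import numpy as np

psi1 = np.array([1, 0], dtype=complex)
psi2 = np.array([1, 1], dtype=complex) / np.sqrt(2)
fidelity = abs(np.vdot(psi1, psi2)) ** 2   # |<psi1|psi2>|^2 (vdot conjugates its first argument)
print(fidelity)  # 0.5 (up to floating point)
```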
### Further details:
How does the desired state get generated behind the scenes? There are multiple methods for doing this. Qiskit uses a [method proposed by Shende et al](https://arxiv.org/abs/quant-ph/0406176). Here, the idea is to assume the quantum register to have started from our desired state, and construct a circuit that takes it to the $\left|00..0\right\rangle$ state. The initialization circuit is then the reverse of that circuit.
To take an arbitrary quantum state to the zero state in the computational basis, we perform an iterative procedure that disentangles qubits from the register one-by-one. We know that any arbitrary single-qubit state $\left|\rho\right\rangle$ can be taken to the $\left|0\right\rangle$ state using a $\phi$-degree rotation about the Z axis followed by a $\theta$-degree rotation about the Y axis:
$$R_y(-\theta)R_z(-\phi)\left|\rho\right\rangle = re^{it}\left|0\right\rangle$$
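A minimal numpy sketch of this single-qubit step (the angle formulas are standard trigonometry, not taken from Qiskit internals): for $\left|\rho\right\rangle = a\left|0\right\rangle + b\left|1\right\rangle$, choosing $\theta = 2\arctan(|b|/|a|)$ and $\phi = \arg b - \arg a$ zeroes the $\left|1\right\rangle$ amplitude, leaving $re^{it}\left|0\right\rangle$:

```
import numpy as np

def rz(phi):
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

def ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

rho = np.array([0.6, 0.8j])                       # some normalized single-qubit state
theta = 2 * np.arctan2(abs(rho[1]), abs(rho[0]))  # angles that take rho to |0>
phi = np.angle(rho[1]) - np.angle(rho[0])
out = ry(-theta) @ rz(-phi) @ rho
print(np.round(out, 6))  # only the |0> amplitude survives, up to a phase r*e^{it}
```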
Since now we are dealing with $n$ qubits instead of just 1, we must factorize the state vector to separate the Least Significant Bit (LSB):
$$\begin{align*}
\left|\psi\right\rangle =& \alpha_{0_0}\left|00..00\right\rangle + \alpha_{0_1}\left|00..01\right\rangle + \alpha_{1_0}\left|00..10\right\rangle + \alpha_{1_1}\left|00..11\right\rangle + ... \\&+ \alpha_{(2^{n-1}-1)_0}\left|11..10\right\rangle + \alpha_{(2^{n-1}-1)_1}\left|11..11\right\rangle \\
=& \left|00..0\right\rangle (\alpha_{0_0}\left|0\right\rangle + \alpha_{0_1}\left|1\right\rangle) + \left|00..1\right\rangle (\alpha_{1_0}\left|0\right\rangle + \alpha_{1_1}\left|1\right\rangle) + ... \\&+ \left|11..1\right\rangle (\alpha_{(2^{n-1}-1)_0}(\left|0\right\rangle + \alpha_{(2^{n-1}-1)_1}\left|1\right\rangle) \\
=& \left|00..0\right\rangle\left|\rho_0\right\rangle + \left|00..1\right\rangle\left|\rho_1\right\rangle + ... + \left|11..1\right\rangle\left|\rho_{2^{n-1}-1}\right\rangle
\end{align*}$$
Now each of the single-qubit states $\left|\rho_0\right\rangle, ..., \left|\rho_{2^{n-1}-1}\right\rangle$ can be taken to $\left|0\right\rangle$ by finding appropriate $\phi$ and $\theta$ angles per the equation above. Doing this simultaneously on all states amounts to the following unitary, which disentangles the LSB:
$$U = \begin{pmatrix}
R_{y}(-\theta_0)R_{z}(-\phi_0) & & & &\\
& R_{y}(-\theta_1)R_{z}(-\phi_1) & & &\\
& . & & &\\
& & . & &\\
& & & & R_y(-\theta_{2^{n-1}-1})R_z(-\phi_{2^{n-1}-1})
\end{pmatrix} $$
Hence,
$$U\left|\psi\right\rangle = \begin{pmatrix} r_0e^{it_0}\\ r_1e^{it_1}\\ . \\ . \\ r_{2^{n-1}-1}e^{it_{2^{n-1}-1}} \end{pmatrix}\otimes\left|0\right\rangle$$
U can be implemented as a "quantum multiplexor" gate, since it is a block diagonal matrix. In the quantum multiplexor formalism, a block diagonal matrix of size $2^n \times 2^n$, and consisting of $2^s$ blocks, is equivalent to a multiplexor with $s$ select qubits and $n-s$ data qubits. Depending on the state of the select qubits, the corresponding blocks are applied to the data qubits. A multiplexor of this kind can be implemented after recursive decomposition to primitive gates of cx, rz and ry.
```
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
# Transposonmapper output data postprocessing
```
## Importing the required python libraries
import os, sys
import warnings
import timeit
import numpy as np
import pandas as pd
import pkg_resources
```
# How to clean the wig and bed files
Here we will remove transposon insertions in the .bed and .wig files that were mapped outside the chromosomes, create consistent naming for the chromosomes, and replace the headers of the files with custom headers.
Cleaned wig files can be properly visualized in the genome browser: http://genome-euro.ucsc.edu/cgi-bin/hgGateway
```
from transposonmapper.processing.clean_bedwigfiles import cleanfiles
######## Let's save the wig and bed files as variables, then call the cleaning function ########
import glob
wig_files=[]
bed_files=[]
data_dir = pkg_resources.resource_filename("transposonmapper", "data_files/files4test/")
#data_dir="../transposonmapper/data_files/files4test/"
wig_files = glob.glob(os.path.join(data_dir, '*sorted.bam.wig'))
bed_files = glob.glob(os.path.join(data_dir, '*sorted.bam.bed'))
############## Cleaning the files #############################
custom_header = ""
split_chromosomes = False
for files in zip(wig_files,bed_files):
cleanfiles(filepath=files[0], custom_header=custom_header, split_chromosomes=split_chromosomes)
cleanfiles(filepath=files[1], custom_header=custom_header, split_chromosomes=split_chromosomes)
```
# Visualize the insertions and reads per gene throughout the genome
```
## Import the function
from transposonmapper.processing.transposonread_profileplot_genome import profile_genome
#### Let's collect the cleaned files and call the plotting function ####
cleanbed_files=[]
for root, dirs, files in os.walk(data_dir):
for file in files:
if file.endswith("clean.bed"):
cleanbed_files.append(os.path.join(root, file))
cleanwig_files=[]
for root, dirs, files in os.walk(data_dir):
for file in files:
if file.endswith("clean.wig"):
cleanwig_files.append(os.path.join(root, file))
#### visualization ####
bed_file=cleanbed_files[0] # example for the 1st file
variable="transposons" #"reads" "transposons"
bar_width=None
savefig=False
profile=profile_genome(bed_file=bed_file, variable=variable, bar_width=bar_width, savefig=savefig,showfig=True)
```

# Zoom in into the chromosomes
```
from transposonmapper.processing.genomicfeatures_dataframe import dna_features
##### getting the files #########
pergene_files=[]
data_dir = pkg_resources.resource_filename("transposonmapper", "data_files/files4test/")
# data_dir="../transposonmapper/data_files/files4test/"
for root, dirs, files in os.walk(data_dir):
    for file in files:
        if file.endswith('sorted.bam_pergene_insertions.txt'):
            pergene_files.append(os.path.join(root, file))
#### visualization ####
wig_file = cleanwig_files[0]
pergene_insertions_file = pergene_files[0]
plotting=True
variable="reads" #"reads" or "insertions"
savefigure=False
verbose=True
region = "I" # e.g. 1, "I", ["I", 0, 10000], or a gene name (e.g. "CDC42")
dna_features(region=region,
             wig_file=wig_file,
             pergene_insertions_file=pergene_insertions_file,
             variable=variable,
             plotting=plotting,
             savefigure=savefigure,
             verbose=verbose)
```
This is the plot for the case of the dummy sample files for chromosome I.

# Volcano plots
Do you want to compare two different libraries to discover which genes stand out from their comparison?
Then do volcano plots!!
## Getting the volcano plot
Look at the help of this function [HERE](https://github.com/SATAY-LL/Transposonmapper/blob/main/transposonmapper/statistics/volcanoplot.py)
```
from transposonmapper.statistics import volcano
# Please be aware that you should pass the locations of your tab-separated
# pergene files (output of the pipeline) from the two libraries to the
# volcano function.
# Also note that you will need at least two replicates per library in order
# to have statistics for the volcano plot.
path_a = r""
filelist_a = ["",""]
path_b = r""
filelist_b = ["",""]
variable = 'read_per_gene'  # options: 'read_per_gene', 'tn_per_gene', 'Nreadsperinsrt'
significance_threshold = 0.01 #set threshold above which p-values are regarded significant
normalize=True
trackgene_list = ['my-favorite-gene'] # ["cdc42"]
figure_title = " "
volcano_df = volcano(path_a=path_a, filelist_a=filelist_a,
                     path_b=path_b, filelist_b=filelist_b,
                     variable=variable,
                     significance_threshold=significance_threshold,
                     normalize=normalize,
                     trackgene_list=trackgene_list,
                     figure_title=figure_title)
```
## This is a volcano plot made with real data!
- Comparing the libraries of wild type vs $\Delta$ nrp1

| github_jupyter |
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import scipy
import pickle
from matplotlib import patches, lines
%matplotlib inline
colors = pickle.load(open('./colors.p', 'rb'))
case_counts = pd.read_csv('../data/frequencies/reich2013_case_counts.csv', index_col=0)
case_counts.head()
serotypes = ['DENV1', 'DENV2', 'DENV3', 'DENV4']
case_counts = pd.read_csv('../data/frequencies/reich2013_case_counts.csv', index_col=0)
case_counts.rename(columns={ 'den.cases.str%d'%d : 'DENV%d'%d for d in range(1,5)}, inplace=True)
case_counts = case_counts[serotypes + ['date']]
def bin_date(date):
    year, month, day = map(int, date.split('-'))
    if 1 <= month <= 3:
        month = 0.1667
    elif 4 <= month <= 6:
        month = 0.41667
    elif 7 <= month <= 9:
        month = 0.6667
    else:
        month = 0.91667
    return year + month
case_counts['date'] = case_counts['date'].map(bin_date)
case_counts = case_counts.groupby('date').agg(sum)
case_counts.pivot_table(index='date')
case_counts = case_counts.astype(float)
case_counts.index = case_counts.index.map(lambda d: round(d,2))
case_counts.index.name = None
def normalize_timepoint(row):
    total = row.sum()
    if not np.isnan(total) and total > 0:
        return row.map(lambda x: x / total)
    else:
        return row

case_counts = case_counts.apply(normalize_timepoint, axis=1)
print(case_counts.tail())
frequencies = pd.read_csv('../data/frequencies/thai_serotype_frequencies.csv', index_col=0)
frequencies.index = frequencies.index.map(lambda d: round(d,2))
frequencies = frequencies.loc[(1975. <= frequencies.index) & (frequencies.index <= 2011) ]
case_counts = case_counts.loc[(1975. <= case_counts.index) & (case_counts.index <= 2011) ]
print(frequencies.tail())
print(case_counts.tail())
def make_scatterplot(case_counts, frequencies, serotype, ax, c):
    counts, freqs = case_counts[serotype], frequencies[serotype]
    sns.regplot(case_counts[serotype], frequencies[serotype], ax=ax, scatter_kws={'alpha': 0.3}, color=c)
    fit = scipy.stats.linregress(counts.values, freqs.values)
    r = fit[2]
    ax.text(1, 0.05, r'Pearson $r = %.2f$'%r, ha='right', transform=ax.transAxes)
    ax.set_xlabel('Proportion of reported cases')
    ax.set_ylabel('Relative frequency')
    ax.set_xlim(-0.1, 1)
    ax.set_ylim(0, 1)
def compare_timeseries(case_counts, frequencies, serotype, ax, c):
    ax.plot(frequencies.index.values, frequencies[serotype], c=c, linestyle='--', label='Frequency of sequenced isolates')
    ax.plot(case_counts.index.values, case_counts[serotype], c=c, linestyle='-', label='Proportion of reported cases')
    ax.set_ylabel(serotype, fontdict={'fontsize': 18})
    ax.set_ylim(0, 1)
    ax.set_xlim(1975, 2010)
sns.set(style='whitegrid', font_scale=1.0)
fig, ax_array = plt.subplots(ncols=2, nrows=4, figsize=(9,9),gridspec_kw={'width_ratios': [1,0.3]})
for serotype, axes in zip(serotypes, ax_array):
    make_scatterplot(case_counts, frequencies, serotype, axes[1], c=colors[serotype])
    compare_timeseries(case_counts, frequencies, serotype, axes[0], c=colors[serotype])
legend_handles = [lines.Line2D([], [], color='darkgray', linestyle='-', label='Proportion of reported cases'),
                  lines.Line2D([], [], color='darkgray', linestyle='--', label='Relative frequency')]
legend_labels = ['Proportion of reported cases', 'Relative frequency']

for s in serotypes:
    legend_patch = patches.Patch(color=colors[s], label=s)
    legend_handles.append(legend_patch)
    legend_labels.append(s)

fig.legend(legend_handles, legend_labels, bbox_to_anchor=(1.3, 0.99),
           bbox_transform=plt.gcf().transFigure, prop={'size': 12})
plt.tight_layout()
plt.savefig('./png/thai_frequencies_comparison.png', dpi=300, bbox_inches='tight')
plt.show()
```
| github_jupyter |
```
# for use in tutorial and development; do not include this `sys.path` change in production:
import sys ; sys.path.insert(0, "../")
```
# Vector embedding with `gensim`
Let's make use of deep learning, through a technique called *embedding*, to analyze the relatedness of the labels used for recipe ingredients.
Among the most closely related ingredients:
* Some are very close synonyms and should be consolidated to improve data quality
* Others are distinct ingredients that frequently pair with it, which is useful for recommendations
On the one hand, this approach is quite helpful for analyzing the NLP annotations that go into a knowledge graph.
On the other hand, it can be used along with [`SKOS`](https://www.w3.org/2004/02/skos/) or similar vocabularies for ontology-based discovery within the graph, e.g., for advanced search UI.
## Curating annotations
We'll be working with the labels for ingredients that go into our KG.
Looking at the raw data, there are many cases where slightly different spellings are being used for the same entity.
As a first step let's define a list of synonyms to substitute, prior to running the vector embedding.
This will help produce better quality results.
Note that this kind of work comes under the general heading of *curating annotations* ... which is what we spend so much time doing in KG work.
It's similar to how *data preparation* is ~80% of the workload for data science teams, and for good reason.
```
SYNONYMS = {
"pepper": "black pepper",
"black pepper": "black pepper",
"egg": "egg",
"eggs": "egg",
"vanilla": "vanilla",
"vanilla extract": "vanilla",
"flour": "flour",
"all-purpose flour": "flour",
"onions": "onion",
"onion": "onion",
"carrots": "carrot",
"carrot": "carrot",
"potatoes": "potato",
"potato": "potato",
"tomatoes": "tomato",
"fresh tomatoes": "tomato",
"fresh tomato": "tomato",
"garlic": "garlic",
"garlic clove": "garlic",
"garlic cloves": "garlic",
}
```
## Analyze ingredient labels from 250K recipes
```
import csv
MAX_ROW = 250000 # 231638
max_context = 0
min_context = 1000
recipes = []
vocab = set()
with open("../dat/all_ind.csv", "r") as f:
    reader = csv.reader(f)
    next(reader, None)  # skip the file header

    for i, row in enumerate(reader):
        id = row[0]
        ind_set = set()

        # substitute synonyms
        for ind in set(eval(row[3])):
            if ind in SYNONYMS:
                ind_set.add(SYNONYMS[ind])
            else:
                ind_set.add(ind)

        if len(ind_set) > 1:
            recipes.append([id, ind_set])
            vocab.update(ind_set)
            max_context = max(max_context, len(ind_set))
            min_context = min(min_context, len(ind_set))

        if i > MAX_ROW:
            break
print("max context: {} unique ingredients per recipe".format(max_context))
print("min context: {} unique ingredients per recipe".format(min_context))
print("vocab size", len(list(vocab)))
```
Since we've performed this data preparation work, let's use `pickle` to save this larger superset of the recipes dataset to the `tmp.pkl` file:
```
import pickle
pickle.dump(recipes, open("tmp.pkl", "wb"))
recipes[:3]
```
Then we can restore the pickled Python data structure for usage later in other use cases.
The output shows the first few entries, to illustrate the format.
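As a quick sketch of that restore step (using a temporary file here rather than the notebook's `tmp.pkl`, and made-up recipe ids, so the example stands alone):

```python
import os
import pickle
import tempfile

# Round-trip sketch: dump a small recipes-like structure and restore it,
# mirroring the tmp.pkl workflow above. The ids and ingredients are
# hypothetical placeholders.
sample = [["id1", {"egg", "flour", "vanilla"}],
          ["id2", {"onion", "garlic"}]]

path = os.path.join(tempfile.mkdtemp(), "tmp.pkl")

with open(path, "wb") as f:
    pickle.dump(sample, f)

with open(path, "rb") as f:
    restored = pickle.load(f)

assert restored == sample
```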
Now reshape this data into a vector of vectors of ingredients per recipe, to use for training a [*word2vec*](https://arxiv.org/abs/1301.3781) vector embedding model:
```
vectors = []

for id, ind_set in recipes:
    v = []
    for ind in ind_set:
        v.append(ind)
    vectors.append(v)

vectors[:3]
```
We'll use the [`Word2Vec`](https://radimrehurek.com/gensim/models/word2vec.html) implementation in the `gensim` library (i.e., *deep learning*) to train an embedding model.
This approach tends to work best if the training data has at least 100K rows.
Let's also show how to serialize the *word2vec* results, saving them to the `tmp.w2v` file so they could be restored later for other use cases.
NB: there is work in progress which will replace `gensim` with `pytorch` instead.
```
import gensim
MIN_COUNT = 2
model_path = "tmp.w2v"
model = gensim.models.Word2Vec(vectors, min_count=MIN_COUNT, window=max_context)
model.save(model_path)
```
The `get_related()` function takes any ingredient as input, using the embedding model to find the most similar other ingredients – along with calculating [`levenshtein`](https://github.com/toastdriven/pylev) edit distances (string similarity) among these labels. Then it calculates *percentiles* for both metrics in [`numpy`](https://numpy.org/) and returns the results as a [`pandas`](https://pandas.pydata.org/) DataFrame.
```
import numpy as np
import pandas as pd
import pylev
def get_related (model, query, n=20, granularity=100):
    """return a DataFrame of the closely related items"""
    try:
        bins = np.linspace(0, 1, num=granularity, endpoint=True)

        v = sorted(
            model.wv.most_similar(positive=[query], topn=n),
            key=lambda x: x[1],
            reverse=True
        )

        df = pd.DataFrame(v, columns=["ingredient", "similarity"])

        s = df["similarity"]
        quantiles = s.quantile(bins, interpolation="nearest")
        df["sim_pct"] = np.digitize(s, quantiles) - 1

        df["levenshtein"] = [pylev.levenshtein(d, query) / len(query) for d in df["ingredient"]]
        s = df["levenshtein"]
        quantiles = s.quantile(bins, interpolation="nearest")
        df["lev_pct"] = granularity - np.digitize(s, quantiles)

        return df
    except KeyError:
        return pd.DataFrame(columns=["ingredient", "similarity", "percentile"])
```
Let's try this with `dried basil` as the ingredient to query, and review the top `50` most similar other ingredients returned as the DataFrame `df`:
```
pd.set_option("display.max_rows", None)
df = get_related(model, "dried basil", n=50)
df
```
Note how some of the most similar items, based on vector embedding, are *synonyms* or special forms of our query `dried basil` ingredient: `dried basil leaves`, `dry basil`, `dried sweet basil leaves`, etc. These tend to rank high in terms of levenshtein distance too.
Let's plot the similarity measures:
```
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use("ggplot")
df["similarity"].plot(alpha=0.75, rot=0)
plt.show()
```
Notice the inflection points at approximately `0.56` and again at `0.47` in that plot.
We could use some statistical techniques (e.g., clustering) to segment the similarities into a few groups:
* highest similarity – potential synonyms for the query
* mid-range similarity – potential [hypernyms and hyponyms](https://en.wikipedia.org/wiki/Hyponymy_and_hypernymy) for the query
* long-tail similarity – other ingredients that pair well with the query
In this example, below a threshold of the 75th percentile for vector embedding similarity, the related ingredients are less about being synonyms and more about other foods that pair well with basil.
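One hypothetical way to act on those thresholds is to bucket the `sim_pct` percentile column from `get_related()` into the three groups above; the helper and cutoffs below (75 and 40) are illustrative assumptions, not values used elsewhere in this notebook:

```python
import pandas as pd

# Hypothetical helper: label each related ingredient by the similarity band
# it falls into, based on the "sim_pct" percentile column.
def band_related(df, syn_cutoff=75, pair_cutoff=40):
    bands = pd.cut(
        df["sim_pct"],
        bins=[-1, pair_cutoff, syn_cutoff, 100],
        labels=["pairs_well", "hyper_hyponym", "synonym"],
    )
    return df.assign(band=bands)

# tiny demo frame standing in for get_related() output
demo = pd.DataFrame({"ingredient": ["dry basil", "oregano", "cooked spaghetti"],
                     "sim_pct": [90, 50, 10]})
print(band_related(demo))
```

A human reviewer could then scan only the `synonym` band when consolidating labels.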
Let's define another function `rank_related()` which ranks the related ingredients based on a combination of these two metrics.
This uses a cheap approximation of a [*pareto archive*](https://www.cs.bham.ac.uk/~jdk/multi/) for the ranking, which comes in handy for recommender systems and custom search applications that must combine multiple ranking metrics:
```
from kglab import root_mean_square
def rank_related (df):
    df2 = df.copy(deep=True)
    df2["related"] = df2.apply(lambda row: root_mean_square([row[2], row[4]]), axis=1)
    return df2.sort_values(by=["related"], ascending=False)

rank_related(df)
```
Notice how the "synonym" cases tend to move up to the top now?
Meanwhile the "pairs well with" cases are in the lower half of the ranked list: `fresh mushrooms`, `italian turkey sausage`, `cooked spaghetti`, `white kidney beans`, etc.
---
## Exercises
**Exercise 1:**
Build a report for a *human-in-the-loop* reviewer, using the `rank_related()` function while iterating over `vocab` to make algorithmic suggestions for possible synonyms.
**Exercise 2:**
How would you make algorithmic suggestions for a reviewer about which ingredients could be related to a query, e.g., using the `skos:broader` and `skos:narrower` relations in the [`skos`](https://www.w3.org/2004/02/skos/) vocabulary to represent *hypernyms* and *hyponyms* respectively?
This could extend the KG to provide a kind of thesaurus about recipe ingredients.
| github_jupyter |
# Variational Autoencoder in TensorFlow
[Variational Autoencoders](https://arxiv.org/abs/1312.6114) (VAE) are a popular model that allows for unsupervised (and semi-supervised) learning. In this notebook, we'll implement a simple VAE on the MNIST dataset.
One of the primary goals of the VAE (and auto-encoders in general) is to reconstruct the original input. Why would we want to do that? At first glance, such a model seems silly: a simple identity function achieves the same thing with perfect results. However, with an autoencoder, we can learn a compressed representation in a smaller latent space, allowing us to learn features and structure of the data. Autoencoders are composed of two arms, the **encoder** and **decoder**, which convert values from the data space to the latent space and vice versa, respectively.
Importantly, since we're simply reconstructing the original input, we do *not* necessarily need labels to do our learning, as we have in previous examples. This is significant, as labels are often far more expensive to acquire than raw data, often prohibitively so. VAEs therefore allow us to leverage abundant unlabeled data. That said, VAEs are also able to take advantage of labels when available as well, either in a completely supervised or semi-supervised setting. Altogether, autoencoders can achieve impressive results on tasks like denoising, segmentation, and even predicting future images.
## Imports and Data
First, some package imports and loading of the data. This is similar to what we've done before, with the main difference being that we're going to use TensorFlow Slim, as a follow-up to [notebook 02A](https://github.com/kevinjliang/Duke-Tsinghua-MLSS-2017/blob/master/02A_TensorFlow-Slim.ipynb).
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
slim = tf.contrib.slim
# Import data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
```
## Encoder
The encoder deterministically transforms the data $x$ from the data space to the latent space of $z$. Since we're dealing with a *variational* autoencoder, we attempt to model the *distribution* of the latent space given the input, represented by $q(z|x)$. This isn't immediately obvious in the code implementation, but we assume a standard Gaussian prior on this distribution, and our encoder returns the mean and variance (actually log-variance) of this distribution. We use log-variance because our model returns a real number, while variances must be positive.
MNIST is a very simple dataset, so let's also keep the model simple: an MLP with 2 fully connected layers. We name the output `mu_logvar` as we will be interpreting the first half of the final 128-dimensional vector as the mean $\mu$ and the second half as the log-variance log($\sigma^2$).
```
def encoder(x):
    """Network q(z|x)"""
    with slim.arg_scope([slim.fully_connected],
                        activation_fn=tf.nn.relu,
                        weights_initializer=tf.truncated_normal_initializer(0.0, 0.1)):
        mu_logvar = slim.fully_connected(x, 128, scope='fc1')
        mu_logvar = slim.fully_connected(mu_logvar, 128, activation_fn=None, scope='fc2')

    return mu_logvar
```
Note that we use a couple features of TF-Slim here:
1. We use `slim.fully_connected()` to specify which layers we want to use, without having to worry about defining weight or bias variables beforehand.
2. We use `slim.arg_scope()` to specify default arguments so we can leave them out of the definitions of each of the fully connected layers. We can still override the `activation_fn` for the last layer though.
For this simple model, TF-Slim doesn't actually benefit us all that much, but for the sake of demonstration, we'll stick with it.
## Decoder
The decoder is the generative arm of the autoencoder. In the variational autoencoder, the image generation process is probabilistic: we draw a $z$ from the probability distribution output of the encoder and generate an output in the data domain. This reconstruction $\hat{x}$ is thus of the distribution $p(x|z)$.
Again, since MNIST is simple, we'll use a 2 layer MLP for the decoder. Importantly, since we are focusing on reconstruction, we make sure that the final output of the decoder $\hat{x}$ is the same dimensions as our input $x$.
```
def decoder(mu_logvar):
    """Network p(x|z)"""
    with slim.arg_scope([slim.fully_connected],
                        activation_fn=tf.nn.relu,
                        weights_initializer=tf.truncated_normal_initializer(0.0, 0.1)):
        # Interpret z as concatenation of mean and log variance
        mu, logvar = tf.split(mu_logvar, num_or_size_splits=2, axis=1)

        # Standard deviation must be positive
        stddev = tf.sqrt(tf.exp(logvar))

        # Draw a z from the distribution
        epsilon = tf.random_normal(tf.shape(stddev))
        z = mu + tf.multiply(stddev, epsilon)

        x_hat = slim.fully_connected(z, 128, scope='fc1')
        x_hat = slim.fully_connected(x_hat, 784, activation_fn=None, scope='fc2')

    return x_hat
```
## Loss
Our model has two criteria we're training to optimize:
1. Reconstruction loss: As an **autoencoder**, we want to be able to reconstruct the original input. To evaluate how well the model has done that, we use a pixel-wise L2 distance metric. *Is this a good idea? What are the potential weaknesses of this approach?*
2. [KL Divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence): Because this model is **variational**, we also include a KL penalty to impose a Gaussian prior on the latent space. The exact derivation of this term can be found in the original [Auto-Encoding Variational Bayes paper](https://arxiv.org/abs/1312.6114). *Is a standard Gaussian prior a good assumption? What are the potential weaknesses of this approach?*
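For reference, the KL term has a closed form when $q(z|x)$ is a diagonal Gaussian $\mathcal{N}(\mu, \sigma^2 I)$ and the prior is $\mathcal{N}(0, I)$:

$$D_{KL}\left(q(z|x)\,\|\,p(z)\right) = -\frac{1}{2}\sum_{j}\left(1 + \log\sigma_j^2 - \mu_j^2 - \sigma_j^2\right)$$

which is the expression computed term-by-term in the `kl_d` line of the code below.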
Because this model has two losses (unlike the single loss we've had in previous classification examples), we also have an extra parameter $\lambda$ to tune how to balance the two losses. This parameter can actually be very significant and require considerable tuning. What you set it to depends on the dataset, model, and application. Here, $\lambda=1$ turns out to work pretty well.
We use the ADAM algorithm that we've used before for optimization.
```
def optimizer(x_hat, x, mu_logvar):
    """Define loss functions (reconstruction, KL divergence) and optimizer"""
    with tf.variable_scope('optimizer') as scope:
        # Reconstruction loss
        reconstruction = tf.reduce_sum(tf.squared_difference(x, x_hat))

        # KL divergence
        lam = 1
        mu, logvar = tf.split(mu_logvar, num_or_size_splits=2, axis=1)
        kl_d = lam * -0.5 * tf.reduce_sum(1.0 + logvar - tf.square(mu) - tf.exp(logvar))

        # Total loss
        loss = reconstruction + kl_d

        # ADAM optimizer
        train_step = tf.train.AdamOptimizer().minimize(loss)

    return train_step, reconstruction, kl_d
```
## Visualization
It'll be nice to visualize the reconstructions that our model generates to see what it learns. This helper function plots the original inputs in one column and the reconstructions next to them in another column. I also may or may not have stolen it from Alex Lew, who included it in his [GAN notebook (03B)](https://github.com/kevinjliang/Duke-Tsinghua-MLSS-2017/blob/master/03B_Generative_Adversarial_Network.ipynb)...
```
def visualize_row(image, reconstruction, img_width=28, cmap='gray'):
    """
    Takes in a tensor of images of given width, and displays them in a column
    in a plot, using `cmap` to map from numbers to colors.
    """
    fig, ax = plt.subplots(1, 2)
    image = np.reshape(image, [-1, img_width])
    reconstruction = np.reshape(reconstruction, [-1, img_width])
    ax[0].imshow(np.clip(image, 0, 1), cmap=cmap)
    ax[1].imshow(np.clip(reconstruction, 0, 1), cmap=cmap)
    plt.show()
```
## Define the graph and train
All of the functions we've written thus far are just that: functions. We still need to call them to assemble our TensorFlow computation graph. At this point, this should be becoming familiar.
One of the small differences is the inclusion of `tf.reset_default_graph()`, added to remedy a small, unfortunate side effect of using Jupyter and TensorFlow in conjunction, but you don't have to worry about it too much to understand the model. A more detailed explanation if you're interested below [1].
```
# Reset the graph
tf.reset_default_graph()

# Define input placeholder
x = tf.placeholder(tf.float32, [None, 784], name='x')

# Define VAE graph
with tf.variable_scope('encoder'):
    mu_logvar = encoder(x)

with tf.variable_scope('decoder'):
    x_hat = decoder(mu_logvar)

# Optimization
with tf.variable_scope('unlabeled') as scope:
    train_step_unlabeled = optimizer(x_hat, x, mu_logvar)
```
<sub>*[1] The primary purpose of TensorFlow is to construct a computation graph connecting Tensors and operations. Each of these nodes must be assigned a unique name; if the user does not specify one, a unique name is automatically generated, like 'Placeholder_2', with the number at the end incrementing each time you create a new node of that type. Attempting to create a node with a name already found in the graph raises an error.*</sub>
<sub>*So how can this be problematic? In the Coding Environments notebook ([00B](https://github.com/kevinjliang/Duke-Tsinghua-MLSS-2017/blob/master/00B_Coding_Environments.ipynb)), it was mentioned that code from previously run cells persists. As such, if we're programming interactively and want to rebuild our graph after some updates, the new updated nodes we want to add collide with the names from our previous run, throwing an error. Why didn't we have to worry about this before? In the past, we haven't been naming our variables, so TensorFlow has been giving the nodes new unique names every time we update the graph and adding them to the collection of nodes from previous runs; the old nodes are never called, so they just sit there. However, TF-Slim does name the variables it generates, thus causing the problem. We can solve this by creating a new graph object before we define our computation graph, so every time we want to make modifications to the graph, we start anew.*</sub>
<sub>*If you're confused by that explanation, I wouldn't worry about it. It's not necessary for the program to run. It's there so we can re-run the cell defining the computation graph without restarting the entire kernel to clear memory of previous variables. In a traditionally written Python program (i.e. not IPython), you wouldn't need to do this.*</sub>
For training, we'll stay simple and train for 20000 iterations, visualizing our results with 5 digits from the validation set after every 1000 minibatches. Notice that this model is completely unsupervised: we never include the digit labels at any point in the process. Within a few thousand iterations, the model should start producing reasonable looking results:
```
with tf.Session() as sess:
    # Initialize all variables
    sess.run(tf.global_variables_initializer())

    # Train VAE model
    for i in range(20000):
        batch = mnist.train.next_batch(100)
        sess.run(train_step_unlabeled, feed_dict={x: batch[0]})  # No labels

        # Visualize reconstructions every 1000 iterations
        if i % 1000 == 0:
            batch = mnist.validation.next_batch(5)
            reconstructions = sess.run(x_hat, feed_dict={x: batch[0]})
            print("Iteration {0}:".format(i))
            visualize_row(batch[0], reconstructions)
```
| github_jupyter |
# Community Detection with NetworKit
In this notebook we will cover some community detection algorithms implemented in the `community` module of NetworKit. Community detection is concerned with identifying groups of nodes which are significantly more densely connected to each other than to the rest of the network. As a first step we import NetworKit:
```
import networkit as nk
```
The `community` module provides a top-level function, [detectCommunities(G, algo=None, inspect=True)](https://networkit.github.io/dev-docs/python_api/community.html?highlight=detect#networkit.community.detectCommunities) to perform community detection of a given graph with a suitable algorithm, and print some statistics about the result. If no algorithm is specified via the `algo` parameter, community detection is performed using the [PLM](https://networkit.github.io/dev-docs/python_api/community.html?highlight=plm#networkit.community.PLM) algorithm.
This function can be used as follows:
```
# Read graph
G = nk.readGraph("../input/karate.graph", nk.Format.METIS)
communities = nk.community.detectCommunities(G)
```
The following sections cover two popular community detection algorithms, `PLM` and `PLP`, and will illustrate how to use them.
## PLM
NetworKit provides a parallel implementation of the well-known Louvain method, which can be found in the [PLM](https://networkit.github.io/dev-docs/python_api/community.html?highlight=plm#networkit.community.PLM) class. It yields a high-quality solution at reasonably fast running times. The constructor `PLM(Graph, refine=False, gamma=0.1, par='balance', maxIter=32, turbo=True, recurse=True)` expects a [networkit.Graph](https://networkit.github.io/dev-docs/python_api/networkit.html?highlight=graph#networkit.Graph) as a mandatory parameter. If the parameter `refine` is set to true, the algorithm performs a second move phase to refine the communities. The parameter `gamma` defines the multi-resolution modularity parameter. The string `par` defines the OpenMP parallelization strategy. `maxIter` is the maximum number of iterations for the move phase. When `turbo` is set to true, the algorithm is faster but uses O(n) additional memory per thread. Set `recurse` to true in order to use recursive coarsening. Refer to [this]( http://journals.aps.org/pre/abstract/10.1103/PhysRevE.89.049902) for more details on recursive coarsening.
In the example below we run PLM with `refine` set to true while leaving the rest of the parameters at their default values.
```
# Choose and initialize algorithm
plmCommunities = nk.community.detectCommunities(G, algo=nk.community.PLM(G, True))
```
The output of the `detectCommunities` function is a partition of the nodes of the graph. It is represented by the [Partition](https://networkit.github.io/dev-docs/python_api/networkit.html?highlight=partition#networkit.Partition) data structure, which provides several methods for inspecting and manipulating a partition of a set of elements.
```
print("{0} elements assigned to {1} subsets".format(plmCommunities.numberOfElements(),
                                                    plmCommunities.numberOfSubsets()))
print("the biggest subset has size {0}".format(max(plmCommunities.subsetSizes())))
```
The contents of a partition object can be written to file in a simple format, in which the `i`-th line contains an integer representing the subset id of node `i`.
```
nk.community.writeCommunities(plmCommunities, "output/communitiesPLM.partition")
```
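Reading such a partition file back into Python is straightforward; a minimal sketch (writing a small example file first so the snippet is self-contained):

```python
import os
import tempfile

# Write a tiny example partition file: line i holds the subset id of node i.
path = os.path.join(tempfile.mkdtemp(), "example.partition")
with open(path, "w") as f:
    f.write("0\n0\n1\n2\n1\n")

# Parse it back: index = node id, value = subset id
with open(path) as f:
    subset_of_node = [int(line) for line in f]

print(subset_of_node)  # prints [0, 0, 1, 2, 1]
```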
## PLP
The Label Propagation algorithm is an algorithm for finding communities in a graph. NetworKit provides a parallel implementation, [PLP(G, updateThreshold=none, maxIterations=none)](https://networkit.github.io/dev-docs/python_api/community.html?highlight=plp#networkit.community.PLP). The constructor expects a [networkit.Graph](https://networkit.github.io/dev-docs/python_api/networkit.html?highlight=graph#networkit.Graph) as a mandatory parameter. The parameter `updateThreshold` dictates the number of nodes that have to be changed in each iteration so that a new iteration starts, and `maxIterations` is the maximum number of iterations. `none` is a NetworKit constant set to the maximum value of a 64-bit integer.
```
# Read graph
G = nk.readGraph("../input/jazz.graph", nk.Format.METIS)
# Choose and initialize algorithm
plpCommunities = nk.community.detectCommunities(G, algo=nk.community.PLP(G))
print("{0} elements assigned to {1} subsets".format(plpCommunities.numberOfElements(),
                                                    plpCommunities.numberOfSubsets()))
print("the biggest subset has size {0}".format(max(plpCommunities.subsetSizes())))
nk.community.writeCommunities(plpCommunities, "output/communitiesPLP.partition")
```
| github_jupyter |
# Katz Centrality
In this notebook, we will compute the Katz centrality of each vertex in our test dataset using both cuGraph and NetworkX. NetworkX also contains a NumPy implementation that will be used. The NetworkX and cuGraph processes will be interleaved so that each step can be compared.
Notebook Credits
* Original Authors: Bradley Rees
* Created: 10/15/2019
* Last Edit: 08/16/2020
RAPIDS Versions: 0.14
Test Hardware
* GV100 32G, CUDA 10.2
## Introduction
Katz centrality is a measure of the relative importance of a vertex within the graph based on measuring the influence across the total number of walks between vertex pairs.
<img src="https://latex.codecogs.com/gif.latex?C_{katz}(i)&space;=&space;\sum_{k=1}^{\infty}&space;\sum_{j=1}^{n}&space;\alpha&space;^k(A^k)_{ji}" title="C_{katz}(i) = \sum_{k=1}^{\infty} \sum_{j=1}^{n} \alpha ^k(A^k)_{ji}" />
See [Katz on Wikipedia](https://en.wikipedia.org/wiki/Katz_centrality) for more details on the algorithm.
To compute the Katz centrality scores for a graph in cuGraph we use:<br>
__df = cugraph.katz_centrality(G, alpha=0.1, max_iter=100, tol=1.0e-6, nstart=None, normalized=True)__

* `G`: cugraph.Graph object
* `alpha`: float, attenuation factor. Default is 0.1
* `max_iter`: int, the maximum number of iterations before an answer is returned. This can be used to limit the execution time and do an early exit before the solver reaches the convergence tolerance. If this value is lower than or equal to 0, cuGraph will use the default value, which is 100
* `tol`: float, the tolerance of the approximation; this should be a small-magnitude value. The lower the tolerance, the better the approximation. If this value is 0.0f, cuGraph will use the default value, which is 0.00001. Setting the tolerance too small can lead to non-convergence due to numerical roundoff. Usually values between 0.01 and 0.00001 are acceptable
* `nstart`: cudf.DataFrame, GPU DataFrame containing the initial guess for Katz centrality. Default is None
* `normalized`: bool, if True normalize the resulting Katz centrality values. Default is True

Returns:

* `df`: a cudf.DataFrame object with two columns:
    * `df['vertex']`: the vertex identifier for the vertex
    * `df['katz_centrality']`: the Katz centrality score for the vertex
The value of _alpha_ should be<br>
<img src="https://latex.codecogs.com/gif.latex?\alpha&space;<&space;\frac{1}{\lambda&space;_{max}&space;}" title="\alpha < \frac{1}{\lambda _{max} }" />
Currently, the user is responsible for setting _alpha_ appropriately.
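A cheap, conservative choice — used later in this notebook — is the reciprocal of the maximum degree, since the largest eigenvalue of an adjacency matrix never exceeds the maximum degree (for non-regular graphs this stays strictly below the bound). The NumPy sketch below illustrates the relationship on a small graph; it is not part of the cuGraph API:

```python
import numpy as np

# A small undirected graph: a triangle (0, 1, 2) with a pendant vertex 3
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

lambda_max = np.linalg.eigvalsh(A).max()  # largest eigenvalue (A is symmetric)
d_max = int(A.sum(axis=0).max())          # maximum degree

# lambda_max <= d_max, so 1 / d_max is a safe, conservative alpha
alpha = 1.0 / d_max
print(lambda_max, d_max, alpha)
```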
### _NOTICE_
There is a difference between how cuGraph and NetworkX compute the Katz centrality score, which leads to the scores not matching. cuGraph does not currently support the 'beta' and 'weight' parameters as seen in the corresponding NetworkX call. The cuGraph implementation is based on a relaxed version of Katz defined by Foster with a reduced computational complexity of O(n+m).
Foster, K.C., Muth, S.Q., Potterat, J.J. et al.
Computational & Mathematical Organization Theory (2001) 7: 275.
https://doi.org/10.1023/A:1013470632383
#### Some notes about vertex IDs...
* The current version of cuGraph requires that vertex IDs be representable as 32-bit integers, meaning graphs currently can contain at most 2^32 unique vertex IDs. However, this limitation is being actively addressed and a version of cuGraph that accommodates more than 2^32 vertices will be available in the near future.
* cuGraph will automatically renumber graphs to an internal format consisting of a contiguous series of integers starting from 0, and convert back to the original IDs when returning data to the caller. If the vertex IDs of the data are already a contiguous series of integers starting from 0, the auto-renumbering step can be skipped for faster graph creation times.
* To skip auto-renumbering, set the `renumber` boolean arg to `False` when calling the appropriate graph creation API (eg. `G.from_cudf_edgelist(gdf_r, source='src', destination='dst', renumber=False)`).
* For more advanced renumbering support, see the examples in `structure/renumber.ipynb` and `structure/renumber-2.ipynb`
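Conceptually, renumbering is just a bijection between the original vertex IDs and a contiguous 0-based range. The plain-Python sketch below illustrates the idea (a hypothetical sketch only — not cuGraph's actual implementation, which runs on the GPU):

```python
# Edge list with arbitrary, non-contiguous vertex IDs (1-based, with a gap)
src = [1, 1, 2, 34]
dst = [2, 3, 34, 3]

# Forward map: original ID -> contiguous internal ID starting at 0
originals = sorted(set(src) | set(dst))
to_internal = {v: i for i, v in enumerate(originals)}
# Reverse map: internal ID -> original ID, used when returning results
to_original = {i: v for v, i in to_internal.items()}

src_renum = [to_internal[v] for v in src]
dst_renum = [to_internal[v] for v in dst]

print(src_renum, dst_renum)                  # internal, 0-based IDs
print([to_original[v] for v in src_renum])   # round-trips to the originals
```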
### Test Data
We will be using the Zachary Karate club dataset
*W. W. Zachary, An information flow model for conflict and fission in small groups, Journal of
Anthropological Research 33, 452-473 (1977).*

Because the test data has vertex IDs starting at 1, the auto-renumber feature of cuGraph (mentioned above) will be used so the starting vertex ID is zero for maximum efficiency. The resulting data will then be auto-unrenumbered, making the entire renumbering process transparent to users.
### Prep
```
# Import needed libraries
import cugraph
import cudf
# NetworkX libraries
import networkx as nx
```
### Some Prep
```
# define the parameters
max_iter = 100 # The maximum number of iterations
tol = 0.00001 # tolerance
# Define the path to the test data
datafile='../data/karate-data.csv'
```
### Read in the data - GPU
cuGraph depends on cuDF for data loading and the initial DataFrame creation.
The data file contains an edge list, which represents the connections between vertices. The `source` to `destination` pairs are in what is known as Coordinate Format (COO). In this test case, the data is just two columns; however, a third `weight` column is also possible.
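To make the COO idea concrete, here is a tiny hand-written edge list and the dense adjacency matrix it encodes (plain Python, for illustration only — COO is far more compact than a dense matrix for sparse graphs):

```python
# COO edge list: row i states "there is an edge from src[i] to dst[i]"
src = [0, 0, 1]
dst = [1, 2, 2]

# Expand to a dense adjacency matrix, just to visualize what COO encodes
n = max(max(src), max(dst)) + 1
adj = [[0] * n for _ in range(n)]
for s, d in zip(src, dst):
    adj[s][d] = 1

for row in adj:
    print(row)
```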
```
gdf = cudf.read_csv(datafile, delimiter='\t', names=['src', 'dst'], dtype=['int32', 'int32'] )
```
### Create a Graph
```
# create a Graph using the source (src) and destination (dst) vertex pairs from the Dataframe
G = cugraph.Graph()
G.from_cudf_edgelist(gdf, source='src', destination='dst')
# compute degree and get the max
degree = G.degrees()
lamda = degree['out_degree'].max()
print("The max degree is " + str(lamda))
```
### Call the Katz algorithm
```
alpha = 1 / lamda
# Call cugraph.katz_centrality to get the Katz scores
gdf_katz = cugraph.katz_centrality(G, alpha=alpha)
```
_It was that easy!_
----
Let's now look at the results
```
# Find the most important vertex using the scores
# This method should only be used for small graphs
def find_top_scores(_df):
    m = _df['katz_centrality'].max()
    return _df.query('katz_centrality >= @m')

top_df = find_top_scores(gdf_katz)
top_df
# let's sort the data and look at the top 5 vertices
gdf_katz.sort_values(by='katz_centrality', ascending=False).head(5)
```
---
## Now compute using NetworkX
```
# Read the data; this also creates a NetworkX Graph
file = open(datafile, 'rb')
Gnx = nx.read_edgelist(file)
k_nx = nx.katz_centrality(Gnx, alpha=alpha, max_iter=max_iter, tol=tol)
k_nx_s = sorted(((value, key) for (key,value) in k_nx.items()), reverse=True)
k_nx_s[:5]
```
As mentioned, the scores are different but the ranking is the same.
```
# The NumPy version
k_nx_mp = nx.katz_centrality_numpy(Gnx, alpha=alpha)
sorted(((value, key) for (key,value) in k_nx_mp.items()), reverse=True)[:5]
```
___
Copyright (c) 2019-2020, NVIDIA CORPORATION.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
___
# `Nuqleon.Linq.Expressions.Optimizers`
Provides optimizers for expression trees.
## Reference the library
### Option 1 - Use a local build
If you have built the library locally, run the following cell to load the latest build.
```
#r "bin/Debug/net50/Nuqleon.Linq.Expressions.Optimizers.dll"
```
### Option 2 - Use NuGet packages
If you want to use the latest published package from NuGet, run the following cell.
```
#r "nuget:Nuqleon.Linq.Expressions.Optimizers,*-*"
```
## (Optional) Attach a debugger
If you'd like to step through the source code of the library while running samples, run the following cell, and follow instructions to start a debugger (e.g. Visual Studio). Navigate to the source code of the library to set breakpoints.
```
System.Diagnostics.Debugger.Launch();
```
## `ExpressionOptimizer`
The `ExpressionOptimizer` class is an expression tree visitor that rewrites an expression tree by performing various types of optimizations that can be configured by the user.
Optimizers can come in handy to reduce the size and evaluation complexity of expression trees at runtime. For example, in the context of Reaqtor, expression trees get serialized, sent across clients and services, stored in databases, and parts of them get evaluated for many events. Therefore, it makes sense to optimize expressions in many ways:
* Reduce the size, to make I/O more efficient.
* Reduce runtime overheads, including:
* CPU time to evaluate expressions;
* Memory allocations.
As an example, consider a query like this:
```csharp
Task CreateWeatherSubscription(ClientContext ctx, string city)
{
    return ctx.GetObservable<WeatherInfo>(weatherUri).Where(x => x.City.ToLower() == city.ToLower()).SubscribeAsync(subUri, observer);
}
```
where we omitted some details. Upon submitting this query to the service, we'll inline the value of `city` in the query, resulting in a query of the following form:
```csharp
weather.Where(x => x.City.ToLower() == "Seattle".ToLower())
```
Note that this query would evaluate `"Seattle".ToLower()` for every weather event received on the stream. This has both CPU and memory costs. It'd be nice if the expression got optimized to:
```csharp
weather.Where(x => x.City.ToLower() == "seattle")
```
This, and much more, is what the expression optimizer can achieve. Let's have a look.
To create an optimizer instance, the constructor accepts two parameters:
* `ISemanticProvider` to specify a semantic provider that's consulted by the optimizer to make optimization decisions;
* `IEvaluatorFactory` to control the behavior of partial evaluation of subtrees.
We can have a first look at the optimizer by providing defaults for these parameters. In subsequent paragraphs of the notebook we'll get to the next level of detail.
```
using System.Linq.Expressions;
var sem = new DefaultSemanticProvider();
var eval = new DefaultEvaluatorFactory();
var opt = new ExpressionOptimizer(sem, eval);
```
To illustrate optimizations, let's first craft an expression by hand.
```
var expr = Expression.Add(Expression.Constant(1), Expression.Constant(2));
Console.WriteLine(expr);
```
Obviously, we can perform constant folding on this expression to reduce it to a single `ConstantExpression` node whose value is `3`. By running the optimizer's `Visit` method, we get exactly that result.
```
var optimized = opt.Visit(expr);
Console.WriteLine(optimized);
```
What about a more complex expression that involves method calls and whatnot? An example is shown below, this time using `Expression<T>` language support to construct such an expression.
```
Expression<Func<string>> f = () => "foobarqux".ToUpper().Substring(2, int.Parse("3"));
Console.WriteLine(f);
var optimized = opt.Visit(f);
Console.WriteLine(optimized);
```
This time, nothing happens, because the default semantic provider does not supply purity information about these methods, which would enable the optimizer to evaluate these subexpressions during optimization. To achieve this, we can zoom in on the semantic provider a tiny bit.
```
var intParseMethod = typeof(int).GetMethod(nameof(int.Parse), new[] { typeof(string) });
Console.WriteLine(sem.IsPure(intParseMethod));
```
The optimizer talks to the semantic provider to ask questions like `IsPure(expr)` to check whether an expression is pure. There are many more questions it can ask, which we show below by just dumping the interface's members.
```
foreach (var m in typeof(ISemanticProvider).GetMethods().Select(m => m.Name).Distinct().OrderBy(m => m))
{
    Console.WriteLine(m);
}
```
How can we teach the semantic provider that `int.Parse(string)` is pure so an optimizer can perform partial evaluation?
### Building a simple custom semantic provider
One option is to inherit from `DefaultSemanticProvider` and override the method.
```
using System.Reflection;
class MySemanticProvider : DefaultSemanticProvider
{
    public override bool IsPure(MemberInfo member)
    {
        return base.IsPure(member) || member == typeof(int).GetMethod(nameof(int.Parse), new[] { typeof(string) });
    }
}
```
Later, we'll see how this can be made easier by building catalogs. For now, let's stick with this approach and create a new optimizer instance using the semantic provider shown above.
```
var sem = new MySemanticProvider();
var eval = new DefaultEvaluatorFactory();
var opt = new ExpressionOptimizer(sem, eval);
```
If we run this optimizer over our expression, we get a different result.
```
Console.WriteLine(f);
var optimized = opt.Visit(f);
Console.WriteLine(optimized);
```
Note that `int.Parse("3")` was evaluated to `3`.
> **Note:** The attentive reader will remark that `int.Parse(string)` is not pure, because it depends on the current culture. However, if an environment is configured in such a way that the result is predictable, e.g. the expression is optimized in the same environment where it is being evaluated, the optimization is valid. The extensible nature of semantic providers makes it possible to make these types of choices.
It goes without saying that having to implement all of these purity checks can get cumbersome really fast. There are tons of members in the .NET Framework that have specific characteristics such as purity (but, as we will learn later, there are many other semantic questions). For example, we'd have to list `Substring` and `ToUpper` as pure. Luckily, there's a notion of catalogs.
### Using built-in catalogs
This library ships with a set of catalogs for commonly used semantic questions. An example is the `PureMemberCatalog`, which is shown below for the `System.String` type.
```
var catalog = PureMemberCatalog.System.String;

foreach (var member in catalog)
{
    Console.WriteLine(member);
}
```
Built-in catalogs are organized by namespace and type, so you could find the pure members on e.g. `Regex` as follows:
```
foreach (var member in PureMemberCatalog.System.Text.RegularExpressions.Regex)
{
    Console.WriteLine(member);
}
```
In this context, purity means that the result of evaluating the member with its target instance (if any) and all of its arguments being constants will always produce the same result.
Member catalogs are further structured such that one can obtain a catalog for an entire namespace, with or without including child namespaces. Some examples are shown below:
```
Console.WriteLine($"Pure member count in System = {PureMemberCatalog.System.AllThisNamespaceOnly.Count()}");
Console.WriteLine($"Pure member count on System.DateTimeOffset = {PureMemberCatalog.System.DateTimeOffset.Count()}");
Console.WriteLine($"Pure member count in System.Text.RegularExpressions = {PureMemberCatalog.System.Text.RegularExpressions.AllThisNamespaceOnly.Count()}");
Console.WriteLine($"Pure member count on System.Text.RegularExpressions.Regex = {PureMemberCatalog.System.Text.RegularExpressions.Regex.Count()}");
Console.WriteLine($"Pure member count in System.Collections and all child namespaces = {PureMemberCatalog.System.Collections.AllThisAndChildNamespaces.Count()}");
Console.WriteLine($"Total pure member count = {PureMemberCatalog.All.Count()}");
```
Member catalogs are immutable, but one can construct custom catalogs on top of existing ones, as shown below:
```
var myCatalog = new MemberTable
{
    PureMemberCatalog.All,
    typeof(int).GetMethod(nameof(int.Parse), new[] { typeof(string) }),
    typeof(string).GetMethod(nameof(string.ToUpper), Type.EmptyTypes),
};
```
Note that we've added `ToUpper` ourselves; the default catalog doesn't consider this method to be pure because it depends on the current culture. However, the default catalog does contain `Substring`, so we don't have to worry about that.
Armed with this catalog, we can construct a `MetadataSemanticProvider` which inherits from `DefaultSemanticProvider` but provides properties that enable setting things such as the `PureMembers`.
```
var sem = new MetadataSemanticProvider { PureMembers = myCatalog };
var eval = new DefaultEvaluatorFactory();
var opt = new ExpressionOptimizer(sem, eval);
```
When we try to optimize our expression now, we'd expect to get a different result.
```
Console.WriteLine(f);
var optimized = opt.Visit(f);
Console.WriteLine(optimized);
```
Indeed, the result is a single constant node containing the value `"OBA"` which is the result of evaluating `ToUpper`, `Substring`, and `Parse`.
But what if the attempt to partially evaluate an expression causes an exception to be thrown? An example is shown below.
```
Expression<Func<string>> f = () => "foobarqux".ToUpper().Substring(10);
var optimized = opt.Visit(f);
Console.WriteLine(optimized);
```
This time, the expression gets rewritten to contain an `Expression.Throw` expression that will throw the exception that was encountered during partial evaluation at optimization time.
## A closer look at `ISemanticProvider`
Semantic providers are used by the optimizer to gather semantic information about expressions, types, members, values, etc. The `ISemanticProvider` interface represents all the capabilities of a semantic provider.
While custom implementations of this interface are possible, a good default choice for the semantic provider is `DefaultSemanticProvider` or `MetadataSemanticProvider`. The latter is more powerful because it supports specifying semantic information about .NET types and members, for example if a given type is immutable, or if a given member is a pure function. The library comes with various catalogs for commonly used types and members in the .NET Base Class Libraries, which can be used to construct a `MetadataSemanticProvider` as shown below.
```
var msp = new MetadataSemanticProvider
{
    PureMembers = PureMemberCatalog.All,
    ConstParameters = ConstParameterCatalog.All,
    ImmutableTypes = ImmutableTypeCatalog.All
};

var opt = new ExpressionOptimizer(msp, new DefaultEvaluatorFactory());

void Example(Expression expr)
{
    Console.WriteLine(expr);

    var optimized = opt.Visit(expr);
    Console.WriteLine(optimized);
}
```
### Pure members
We've already looked at pure members before. Pure members are used to perform partial evaluation of nodes such as `MethodCallExpression`, for example `Math.Abs(n)` where `n` itself is pure.
```
Example(Expression.Call(typeof(Math).GetMethod(nameof(Math.Abs), new[] { typeof(int) }), Expression.Constant(-2)));
```
### Constant parameters
Constant parameters are required for partial evaluation of a function if any of its parameters has a mutable type but the function doesn't perform any mutation, e.g. `string.Split(string, char[])` doesn't mutate the given `char[]`. This makes the following optimization work.
```
Expression<Func<string, string[]>> f = s => s.Split(',', ';');
Example(f);
```
The allocation of the `char[]` can now be avoided for every evaluation of the expression at runtime, because a single constant `char[]` is used. This is safe because we know that `Split` only reads from the array and never mutates its contents. If we have another custom API that exhibits this behavior, we can add it to the catalog as well. Let's first define such a method.
```
class Bar
{
    public static string Foo(string s, params int[] xs) => s + " = " + string.Join(",", xs);
}
Expression<Func<string>> f = () => Bar.Foo("qux", 1, 2, 3);
Example(f);
```
To add `Bar.Foo(int[])` to the catalog, we have to specify a pattern that indicates which parameter of `Foo` is to be treated as `const`. This is shown below:
```
var constParameterTable = new ParameterTable { ConstParameterCatalog.All };
constParameterTable.Add<int[]>(xs => Bar.Foo("", xs));
msp = new MetadataSemanticProvider
{
    PureMembers = PureMemberCatalog.All,
    ConstParameters = constParameterTable,
    ImmutableTypes = ImmutableTypeCatalog.All
};
opt = new ExpressionOptimizer(msp, new DefaultEvaluatorFactory());
```
When we apply the optimizer this time, the `NewArrayInit` expression can get reduced to a constant as well.
```
Example(f);
```
Note that if `Foo` itself were marked as a pure member, the whole `Foo("qux", 1, 2, 3)` expression could be evaluated during optimization. Let's demonstrate this to show how all of these optimizations can "cascade".
```
var pureMemberTable = new MemberTable
{
    PureMemberCatalog.All,
    typeof(Bar).GetMethod(nameof(Bar.Foo))
};

msp = new MetadataSemanticProvider
{
    PureMembers = pureMemberTable,
    ConstParameters = constParameterTable,
    ImmutableTypes = ImmutableTypeCatalog.All
};
opt = new ExpressionOptimizer(msp, new DefaultEvaluatorFactory());
```
When we apply the optimizer this time, the whole expression gets reduced to a constant.
```
Example(f);
```
### Immutable types
Finally, immutable type information is used in various checks to ensure a member can't mutate the state of an object. E.g. `System.Tuple<T1, T2>` is immutable, so it's safe to evaluate an instance of this type to a `Constant` that subsequent calls can be made on. Another well-known immutable type is `string`. For example, reducing `"foo".Substring(1).ToUpper()` to `"oo".ToUpper()` is valid because it's known that none of the members on `string` can cause mutation. Therefore, it's safe to have a `Constant` containing `"oo"` rather than evaluating `"foo".Substring(1)` every time to create a unique instance.
To demonstrate this principle, we can use a custom record type in C# 9.0.
```
record Person(string Name, int Age);
```
A record is immutable, but the optimizer has no notion of this. As such, if we try to construct an instance of `Person` given constant arguments, it does not know it can optimize this to a constant. Let's show this below.
```
Expression<Func<int>> f = () => new Person("Bart", 21).Age;
Example(f);
```
First, we can attempt to add the constructor of `Person` to the pure member catalog, as shown below:
```
var pureMemberTable = new MemberTable
{
    PureMemberCatalog.All,
    typeof(Bar).GetMethod(nameof(Bar.Foo)),
    typeof(Person).GetConstructor(new[] { typeof(string), typeof(int) })
};

msp = new MetadataSemanticProvider
{
    PureMembers = pureMemberTable,
    ConstParameters = constParameterTable,
    ImmutableTypes = ImmutableTypeCatalog.All
};
opt = new ExpressionOptimizer(msp, new DefaultEvaluatorFactory());
```
However, when we try to optimize the expression, we still are out of luck.
```
Example(f);
```
Even though the constructor is pure, the result of evaluating it would be a `ConstantExpression` containing a value of type `Person`. If `Person` were mutable, the optimization would be unsafe, because the shared instance could get mutated. We need to teach the semantic provider that `Person` is immutable, making it safe to share an instance.
```
var immutableTypes = new TypeTable
{
    ImmutableTypeCatalog.All,
    typeof(Person)
};

msp = new MetadataSemanticProvider
{
    PureMembers = pureMemberTable,
    ConstParameters = constParameterTable,
    ImmutableTypes = immutableTypes
};
opt = new ExpressionOptimizer(msp, new DefaultEvaluatorFactory());
```
This time around, optimization is more fruitful.
```
Example(f);
```
This may be subtle, but the `ToString` of the expression tree's `ConstantExpression` node shows the result of calling `ToString` on `Person`, which reads as `Person { Name = Bart, Age = 21 }`. Obviously, we could take this one step further and let the optimizer know that the `Name` and `Age` properties are pure as well, meaning that when evaluated on a constant `Person` instance, they will always return the same result.
```
pureMemberTable.Add(typeof(Person).GetProperty(nameof(Person.Name)));
pureMemberTable.Add(typeof(Person).GetProperty(nameof(Person.Age)));
msp = new MetadataSemanticProvider
{
    PureMembers = pureMemberTable,
    ConstParameters = constParameterTable,
    ImmutableTypes = immutableTypes
};
opt = new ExpressionOptimizer(msp, new DefaultEvaluatorFactory());
```
And finally, we end up with the whole expression reducing to `21`.
```
Example(f);
```
To see all of these optimizations combined, let's look at a more complex expression that involves an `InvocationExpression` of a `LambdaExpression` that prints a `Person` object.
```
Expression<Func<Person, string>> toString = p => p.Name + " is " + p.Age.ToString();
Expression<Func<Person>> newPerson = () => new Person("Bart", 21);
var e = Expression.Invoke(toString, newPerson.Body);
Example(e);
```
Note that the optimizer got quite far in constant folding this fairly complex expression. The original expression looked like this:
```csharp
(p => (p.Name + " is ") + p.Age.ToString())(new Person("Bart", 21))
```
where the string concatenation was carried out in two steps and involved a boxing conversion for the age value.
First, it knew that evaluating `new Person("Bart", 21)` was safe to do, resulting in a `ConstantExpression`:
```csharp
(p => (p.Name + " is ") + p.Age.ToString())(c)
```
where `c` is a constant containing `Person { Name = "Bart", Age = 21 }`.
Next, this enabled inlining (also known as beta reduction) of the `Person` argument when invoking the `toString` lambda expression, so we ended up with:
```csharp
(c.Name + " is ") + c.Age.ToString()
```
Because `Name` and `Age` are pure, this got further rewritten into:
```csharp
("Bart" + " is ") + 21.ToString()
```
The string concatenation operator for two `string` arguments is also considered pure in the default pure members catalog, so more constant evaluation took place:
```csharp
"Bart is " + 21.ToString()
```
And this is where the optimization ended, because `int.ToString()` is culture-sensitive and therefore not considered pure.
## Evaluator factories
Evaluator factories are used to perform partial evaluation of an expression tree, e.g. for a node whose behavior is pure and whose children are constants. An example of such an expression is shown below.
```
var expr = Expression.Add(Expression.Constant(DateTime.Now), Expression.Constant(TimeSpan.FromHours(1)));
Console.WriteLine(expr);
```
This node is a `BinaryExpression` of type `Add` where the `Method` refers to `op_Addition(DateTime, TimeSpan)` on `System.DateTime`. The default pure member catalog contains this method. If we run this expression through the optimizer, the evaluator factory is used to get a delegate that can evaluate this `+` operation given two constant operands. To illustrate this behavior, we'll implement `IEvaluatorFactory`, or rather inherit from `DefaultEvaluatorFactory` to add some logging.
```
class MyEvaluatorFactory : DefaultEvaluatorFactory
{
    public override Delegate GetEvaluator(MethodInfo method)
    {
        var res = base.GetEvaluator(method);
        Console.WriteLine($"Got evaluator for {method}.");
        return res;
    }
}
```
Using the metadata semantic provider and the custom evaluator factory, we can construct an expression optimizer instance.
```
var sem = new MetadataSemanticProvider { PureMembers = { PureMemberCatalog.All }, ImmutableTypes = { ImmutableTypeCatalog.All }, ConstParameters = { ConstParameterCatalog.All } };
var eval = new MyEvaluatorFactory();
var opt = new ExpressionOptimizer(sem, eval);
```
Let's now apply this optimizer to our expression and see the invocation to the evaluator.
```
var optimized = opt.Visit(expr);
Console.WriteLine(optimized);
```
Note that the optimizer does not cache the delegates returned from the evaluator factory. This is done to avoid leaks, and one can perform caching within the evaluator factory instead. To show the lack of caching, we can apply the optimizer again to another expression.
```
Expression<Func<DateTime, double, DateTime>> addHours = (dt, hours) => dt + TimeSpan.FromHours(hours);
var anotherExpr = Expression.Invoke(addHours, Expression.Constant(DateTime.Now), Expression.Constant(2.0));
var anotherOptimizedExpr = opt.Visit(anotherExpr);
Console.WriteLine(anotherOptimizedExpr);
```
One way to provide caching at the evaluator factory level would be to use `Nuqleon.Memory`'s memoization support. An example of how to compose these pieces is shown below.
```
using System.Memory;
class MyMemoizingEvaluatorFactory : MyEvaluatorFactory, IClearable
{
    private readonly IMemoizedDelegate<Func<MemberInfo, Delegate>> _memoizedGetEvaluator;

    public MyMemoizingEvaluatorFactory(IMemoizationCacheFactory factory)
    {
        var mem = Memoizer.Create(factory);
        _memoizedGetEvaluator = mem.Memoize<MemberInfo, Delegate>(base.GetEvaluator);
    }

    public override Delegate GetEvaluator(MemberInfo member)
    {
        return _memoizedGetEvaluator.Delegate(member);
    }

    public void Clear()
    {
        _memoizedGetEvaluator.Cache.Clear();
    }
}
```
A real implementation would override a few more evaluator factory methods, but this suffices to demonstrate the effect by constructing a new optimizer that uses our memoizing evaluator factory. Also note that we're overriding `GetEvaluator(MemberInfo)` rather than the overload with `MethodInfo`. This is the entry-point method on the interface, so it allows for caching of evaluators of different member kinds (i.e. fields, properties, constructors, and methods).
```
var sem = new MetadataSemanticProvider { PureMembers = { PureMemberCatalog.All }, ImmutableTypes = { ImmutableTypeCatalog.All }, ConstParameters = { ConstParameterCatalog.All } };
var cacheFactory = ConcurrentMemoizationCacheFactory.CreateLru(16);
var eval = new MyMemoizingEvaluatorFactory(cacheFactory);
var opt = new ExpressionOptimizer(sem, eval);
```
When we apply the optimizer to both expressions, we'll see the effects of caching.
```
opt.Visit(expr);
opt.Visit(anotherExpr);
```
This time, we only see one call for `op_Addition`, as expected. Because our cache policy is set to LRU and to only contain 16 entries, we'll get a cap on the memory used.
> **Note:** Many uses of expression optimizers in Reaqtor stacks use unbounded caches during optimization of a whole bunch of expressions trees, e.g. when trying to compact the expressions in a query engine. At the end of the optimization pass, the caches are cleared or simply dropped and garbage collected. Either way, the design of decoupling the optimizer from aspects such as semantic providers and evaluator factories allows for a great deal of flexibility and separation of concerns.
## Customizing the `ExpressionOptimizer` by overriding `Visit` methods
The behavior of the expression visitor can be influenced by overriding various `Visit` methods as well. Three customization points are worth mentioning:
```csharp
protected virtual bool ShouldOptimize(Expression node);
protected virtual Expression VisitPreOptimize(Expression node);
protected virtual Expression VisitPostOptimize(Expression original, Expression optimized);
```
In case it's undesirable for certain nodes to get optimized, one can override `ShouldOptimize`. Returning `false` from this method will cause the optimizer to stop traversing the given expression. Examples include retaining the exact shape of a `Quote` expression, or preventing any optimization for nodes of a certain `Type`. For example, if a node could get partially evaluated to a constant of some type `T` that is not supported by a serializer running post-optimization, one can prevent such constants from ending up in the tree. Alternatively, one could override `VisitPostOptimize` and return the original expression if a rewrite was undesirable. This enables a specialized optimizer to "change its mind".
Rather than discussing all the possible ways these methods can be used, we'll just use them for logging in the example below. This also sheds some light on the optimizer's behavior.
```
class LoggingExpressionOptimizer : ExpressionOptimizer
{
    private string _padLeft = "";

    public LoggingExpressionOptimizer(ISemanticProvider semanticProvider, IEvaluatorFactory evaluatorFactory)
        : base(semanticProvider, evaluatorFactory)
    {
    }

    public override Expression Visit(Expression node)
    {
        Console.WriteLine($"{_padLeft}{nameof(Visit)}({node}) \r\n{_padLeft}{{");
        Indent();

        var res = base.Visit(node);

        Outdent();
        Console.WriteLine($"{_padLeft}}} = {res}");

        return res;
    }

    protected override bool ShouldOptimize(Expression node)
    {
        var res = base.ShouldOptimize(node);
        Console.WriteLine($"{_padLeft}{nameof(ShouldOptimize)}({node}) = {res}");
        return res;
    }

    protected override Expression VisitPreOptimize(Expression node)
    {
        var res = base.VisitPreOptimize(node);
        Console.WriteLine($"{_padLeft}{nameof(VisitPreOptimize)}({node}) = {res}");
        return res;
    }

    protected override Expression VisitPostOptimize(Expression original, Expression optimized)
    {
        var res = base.VisitPostOptimize(original, optimized);
        Console.WriteLine($"{_padLeft}{nameof(VisitPostOptimize)}({original}, {optimized}) = {res}");
        return res;
    }

    private void Indent() => _padLeft = new string(' ', _padLeft.Length + 2);
    private void Outdent() => _padLeft = new string(' ', _padLeft.Length - 2);
}
```
Let's just run our logging optimizer over an expression that will have a few rewrites.
```
var opt = new LoggingExpressionOptimizer(sem, eval);
Expression<Func<int>> f = () => "foobarqux".Substring(2, 3).ToUpperInvariant().Length + 1;
var res = opt.Visit(f);
Console.WriteLine(res);
```
## More optimizations
Besides partial evaluation and constant folding, the optimizer has a lot more optimization techniques under its belt. The list is quite long, so we'll limit ourselves to exploring a few of them here.
### Branch analysis
`ConditionalExpression` can be used for the conditional ternary operator `?:` as well as `if` statements. The expression optimizer can remove such branches if the condition evaluates to a constant. For example:
```
Expression<Func<string, char?>> getFirstChar = s => s != null && s.Length > 0 ? s[0] : null;
var expr = Expression.Invoke(getFirstChar, Expression.Constant("bar"));
Console.WriteLine(opt.Visit(expr));
```
A lot is going on here. Let's break it down step by step. Our original expression is:
```csharp
(s => s != null && s.Length > 0 ? s[0] : null)("bar")
```
First, there's beta reduction where the constant string got inlined in the lambda.
```csharp
"bar" != null && "bar".Length > 0 ? "bar"[0] : null
```
Next, the optimizer can determine that `"bar"` is never null. It knows that's the case for `new` expressions and a couple of other node types, and also for constants it can inspect. In addition, it can ask the semantic provider whether an expression can never be null by using `IsNeverNull`.
> **Note:** The optimizer predates the work on nullability analysis that was done in C# 8.0. A future revision of the optimizer could support much more nullability checks, e.g. for the return types of methods.
Because `"bar"` can never be null, the expression can get reduced to:
```csharp
true && "bar".Length > 0 ? "bar"[0] : null
```
Rules for the binary `&&` operator enable dropping the `true` operand. (If the left operand were `false`, the expression optimizer would reduce the entire `&&` expression to `false`.) Now we have:
```csharp
"bar".Length > 0 ? "bar"[0] : null
```
Next, we're faced with a `MemberExpression` for `string.Length`, which results in a check for purity. Note there are more semantic questions asked, which we've never touched upon. In particular, the optimizer will check if the operand `IsAlwaysNull`. If so, it can directly emit code to `throw new NullReferenceException()`. Further optimizations of `TryExpression` nodes can also reason over exception flow. In this case, `"bar"` is never null, and `"bar".Length` will get evaluated to `3`. This can be seen in the output above, with a logging message indicating that the evaluator factory was consulted to build an evaluator for `(string s) => s.Length`, which then got passed `"bar"`. This results in:
```csharp
3 > 0 ? "bar"[0] : null
```
Partial evaluation of `3 > 0` results in `true`:
```csharp
true ? "bar"[0] : null
```
This in turn enables branch prediction, so we end up with:
```csharp
"bar"[0]
```
Once more, we have a pure member for the indexer of `string`, resulting in the construction of an evaluator, which finally results in:
```csharp
'b'
```
Similar optimizations exist for `SwitchExpression` nodes where a `SwitchCase` "arm" of the expression can be predicted if the test value is a constant.
### Inlining of invocations
As we've seen in the samples before, the expression optimizer also knows how to inline `InvocationExpression` nodes applied to `LambdaExpression` operands. This form of beta reduction is only carried out if it's safe to do so, i.e. if side-effects would not get reordered or dropped on the floor. Beta reduction often occurs after binding template expressions with concrete parameter values. An example is shown below.
```
Expression<Func<IEnumerable<int>, int, IEnumerable<int>>> query = (xs, a) => xs.Where(x => x > a);
var boundQuery = Expression.Invoke(query, Expression.Constant(new[] { 1, 2, 3 }), Expression.Constant(0));
Console.WriteLine(opt.Visit(boundQuery));
```
### Exception flow analysis
As one final example, we'll consider a more complex statement tree to illustrate how the optimizer can reason over exception flow as well.
```
var expr =
Expression.TryCatch(
Expression.Convert(
Expression.AddChecked(Expression.Constant(int.MaxValue), Expression.Constant(1)),
typeof(int?)
),
Expression.Catch(
typeof(OverflowException),
Expression.Default(typeof(int?))
)
);
Console.WriteLine(opt.Visit(expr));
```
During optimization of the `+` operation with constant operands, partial evaluation triggered an `OverflowException`. This then caused the exception flow analysis to kick in, where the optimizer emulated the exception flow and ended up picking the `catch (OverflowException)` block, which returned a `default(int?)` expression. Even if this catch block contained a non-trivial expression (e.g. `Console.WriteLine("Oops!")`), the optimizer would be able to eliminate the whole `TryExpression` in favor of the catch block's body. A more complex example is shown here:
```
var expr =
Expression.TryCatch(
Expression.Call(
typeof(Console).GetMethod(nameof(Console.WriteLine), new[] { typeof(int) }),
Expression.AddChecked(Expression.Constant(int.MaxValue), Expression.Constant(1))
),
Expression.Catch(
typeof(OverflowException),
Expression.Call(
typeof(Console).GetMethod(nameof(Console.WriteLine), new[] { typeof(string) }),
Expression.Constant("Oops!")
)
)
);
Console.WriteLine(opt.Visit(expr));
```
All of the logic that did the pure computation and triggered the exception was dropped, and all that remains is a simple `Console.WriteLine("Oops!");` statement.
Obviously, this shows an extreme example of optimization where all the circumstances are just right to cause a massive reduction of the original expression. However, the many optimization rules working in concert result in very powerful optimization capabilities.
```
# british film institute
# import libraries
import rdflib, pandas, pathlib, json
import numpy, uuid, xmltodict, pydash
# define graph and namespace
graph = rdflib.Graph()
name_bfi = rdflib.Namespace('https://www.bfi.org.uk/')
name_wb = rdflib.Namespace('http://wikibas.se/ontology')
name_fiaf = rdflib.Namespace('https://www.fiafnet.org/')
# useful functions
def make_claim(s, p, o):
claim_id = name_bfi[f"resource/claim/{uuid.uuid4()}"]
graph.add((s, name_wb['#claim'], claim_id))
graph.add((claim_id, p, o))
return claim_id
def make_qual(s, p, o):
qual_id = name_bfi[f"resource/qualifier/{uuid.uuid4()}"]
graph.add((s, name_wb['#qualifier'], qual_id))
graph.add((qual_id, p, o))
return qual_id
def reference(claim_id, institute):
ref_id = name_bfi[f"resource/reference/{uuid.uuid4()}"]
graph.add((claim_id, name_wb['#reference'], ref_id))
graph.add((ref_id, name_fiaf['ontology/property/contributed_by'], institute))
def single_list(data):
if isinstance(data, list):
return data
else:
return [data]
# define institution
graph.add((name_bfi['ontology/item/bfi'], rdflib.RDFS.label, rdflib.Literal('British Film Institute', lang='en')))
make_claim(name_bfi['ontology/item/bfi'], name_fiaf['ontology/property/instance_of'], name_fiaf['ontology/item/holding_institution'])
make_claim(name_bfi['ontology/item/bfi'], name_fiaf['ontology/property/located_in'], name_fiaf['ontology/item/uk'])
print(len(graph))
# format data
path = pathlib.Path.home() / 'murnau-data' / 'british_film_institute'
with open(path / 'BFI_Murnau_Works.json') as data:
data = [x for x in pydash.get(json.load(data), 'adlibJSON.recordList.record')]
print(len(graph))
# write work
graph.add((name_bfi['ontology/item/wikidata'], rdflib.RDFS.label, rdflib.Literal('Wikidata', lang='en')))
for x in data:
work_id = x['object_number'][0]
work = name_bfi[f"resource/work/{work_id}"]
make_claim(work, name_fiaf['ontology/property/instance_of'], name_fiaf['ontology/item/work'])
claim1 = make_claim(work, name_fiaf['ontology/property/external_id'], rdflib.Literal(work_id))
make_qual(claim1, name_fiaf['ontology/property/institution'], name_bfi['ontology/item/bfi'])
reference(claim1, name_bfi['ontology/item/bfi'])
for y in pydash.get(x, 'URL'):
if 'Wikidata' in pydash.get(y, 'URL\.description.0'):
wikidata_id = pydash.get(y, 'URL.0').split('/')[-1]
claim_id = make_claim(work, name_fiaf['ontology/property/external_id'], rdflib.Literal(wikidata_id))
make_qual(claim_id, name_fiaf['ontology/property/institution'], name_bfi['ontology/item/wikidata'])
reference(claim_id, name_bfi['ontology/item/bfi'])
if pydash.get(x, 'worklevel_type.0.value.1') == 'Monographic':
claim_id = make_claim(work, name_fiaf['ontology/property/work_type'], name_fiaf['ontology/item/monographic'])
reference(claim_id, name_bfi['ontology/item/bfi'])
print(len(graph))
# write title
for x in data:
work_id = x['object_number'][0]
work = name_bfi[f"resource/work/{work_id}"]
orig = [y for y in pydash.get(x, 'Title') if 'Original' in str(y)][0]
title = pydash.get(orig, 'title')[0]
if pydash.get(orig, 'title\.article') is not None:
title = pydash.get(orig, 'title\.article')[0]+' '+title
claim1 = make_claim(work, name_fiaf['ontology/property/title'], rdflib.Literal(title.strip()))
make_qual(claim1, name_fiaf['ontology/property/title_type'], name_fiaf['ontology/item/original_title'])
reference(claim1, name_bfi['ontology/item/bfi'])
print(len(graph))
# write country
for x in data:
work_id = x['object_number'][0]
work = name_bfi[f"resource/work/{work_id}"]
for k, v in {'Germany':name_fiaf['ontology/item/germany'], 'USA':name_fiaf['ontology/item/usa']}.items():
if pydash.get(x, 'production_country.0.term.0') == k:
claim_id = make_claim(work, name_fiaf['ontology/property/production_country'], v)
reference(claim_id, name_bfi['ontology/item/bfi'])
print(len(graph))
# write agent
def write_credit(source, contribution, uri):
for s in [x for x in source if x['role'] == contribution]:
work = s['work']
agent = name_bfi[f"resource/agent/{s['id']}"]
claim_id = make_claim(work, name_fiaf['ontology/property/agent'], agent)
make_qual(claim_id, name_fiaf['ontology/property/agent_type'], uri)
reference(claim_id, name_bfi['ontology/item/bfi'])
make_claim(agent, name_fiaf['ontology/property/instance_of'], name_fiaf['ontology/item/agent'])
claim_id = make_claim(agent, name_fiaf['ontology/property/external_id'], rdflib.Literal(s['id']))
make_qual(claim_id, name_fiaf['ontology/property/institution'], name_bfi['ontology/item/bfi'])
reference(claim_id, name_bfi['ontology/item/bfi'])
claim_id = make_claim(agent, name_fiaf['ontology/property/surname'], rdflib.Literal(s['name'][0].strip()))
reference(claim_id, name_bfi['ontology/item/bfi'])
if len(s['name']) > 1:
claim_id = make_claim(agent, name_fiaf['ontology/property/forename'], rdflib.Literal(s['name'][1].strip()))
reference(claim_id, name_bfi['ontology/item/bfi'])
claim_id = make_claim(agent, name_fiaf['ontology/property/work'], work)
reference(claim_id, name_bfi['ontology/item/bfi'])
combined = list()
for x in data:
work_id = x['object_number'][0]
work = name_bfi[f"resource/work/{work_id}"]
for y in pydash.get(x, 'cast'):
name = pydash.get(y, 'cast\.name.0.name')[0].split(',')
credit_type = pydash.get(y, 'cast\.name.0.party\.class.0.value')[0]
agent_id = pydash.get(y, 'cast\.name\.lref')[0]
combined.append({'work': work, 'id':agent_id, 'name':name, 'type':credit_type, 'role':'Cast'})
for y in pydash.get(x, 'credits'):
name = pydash.get(y, 'credit\.name.0.name')[0].split(',')
credit_type = pydash.get(y, 'credit\.name.0.party\.class.0.value')[0]
role = pydash.get(y, 'credit\.type.0.term')[0]
agent_id = pydash.get(y, 'credit\.name\.lref')[0]
combined.append({'work': work, 'id':agent_id, 'name':name, 'type':credit_type, 'role':role})
combined = [x for x in combined if x['type'] == 'PERSON']
write_credit(combined, 'Cast', name_fiaf['ontology/item/cast'])
write_credit(combined, 'Director', name_fiaf['ontology/item/director'])
write_credit(combined, 'Screenplay', name_fiaf['ontology/item/screenwriter'])
write_credit(combined, 'Producer', name_fiaf['ontology/item/producer'])
write_credit(combined, 'Photography', name_fiaf['ontology/item/cinematographer'])
write_credit(combined, 'Music', name_fiaf['ontology/item/composer'])
write_credit(combined, 'Editor', name_fiaf['ontology/item/editor'])
print(len(graph))
# write manifestations/items
items = list()
for x in data:
work_id = x['object_number'][0]
for manifestation in pydash.get(x, 'Parts'):
for item in pydash.get(manifestation, 'parts_reference'):
if 'Parts' in item:
for carrier in pydash.get(item, 'Parts'):
carrier = pydash.get(carrier, 'parts_reference.0')
item_id = pydash.get(carrier, 'object_number.0')
copy_status = pydash.get(carrier, 'copy_status.0.value.1')
item_type = pydash.get(carrier, 'item_type.0.value.0')
sound = pydash.get(carrier, 'sound_item.0.value.1')
base = pydash.get(carrier, 'base.0.value.1')
phys = pydash.get(carrier, 'physical_description')
gauge = pydash.get(carrier, 'gauge_film.0.value.1')
vid_form = pydash.get(carrier, 'video_format.0.value.0')
if 'Dimension' in carrier:
duration = [y for y in pydash.get(carrier, 'Dimension') if pydash.get(y, 'dimension\.part.0') == 'Total']
duration = pydash.get(duration, '0.dimension\.value.0')
else:
duration = None
items.append({'work_id':work_id, 'item_id':item_id, 'copy':copy_status, 'item_type':item_type,
'sound':sound, 'base':base, 'element':phys, 'gauge':gauge, 'video':vid_form, 'dure':duration})
for i in items:
if i['copy'] != 'Removed':
work = name_bfi[f"resource/work/{i['work_id']}"]
manifestation = name_bfi[f"resource/manifestation/{uuid.uuid4()}"]
make_claim(manifestation, name_fiaf['ontology/property/instance_of'], name_fiaf['ontology/item/manifestation'])
make_claim(manifestation, name_fiaf['ontology/property/manifestation_of'], work)
item_id = i['item_id']
item = name_bfi[f"resource/item/{item_id}"]
make_claim(item, name_fiaf['ontology/property/instance_of'], name_fiaf['ontology/item/item'])
make_claim(item, name_fiaf['ontology/property/item_of'], manifestation)
claim_id = make_claim(item, name_fiaf['ontology/property/held_at'], name_bfi['ontology/item/bfi'])
reference(claim_id, name_bfi['ontology/item/bfi'])
claim_id = make_claim(item, name_fiaf['ontology/property/external_id'], rdflib.Literal(item_id))
make_qual(claim_id, name_fiaf['ontology/property/institution'], name_bfi['ontology/item/bfi'])
reference(claim_id, name_bfi['ontology/item/bfi'])
for k, v in {'FILM':name_fiaf['ontology/item/film'],
'VIDEO':name_fiaf['ontology/item/video_tape'], 'DIGITAL':name_fiaf['ontology/item/digital']}.items():
if i['item_type'] == k:
claim_id = make_claim(item, name_fiaf['ontology/property/carrier'], v)
reference(claim_id, name_bfi['ontology/item/bfi'])
for k, v in {'16mm':name_fiaf['ontology/item/16mm'], '35mm':name_fiaf['ontology/item/35mm']}.items():
if i['gauge'] == k:
claim_id = make_claim(item, name_fiaf['ontology/property/specific_carrier'], v)
reference(claim_id, name_bfi['ontology/item/bfi'])
for k, v in {'Safety':name_fiaf['ontology/item/acetate'], 'Acetate':name_fiaf['ontology/item/acetate'],
'Nitrate':name_fiaf['ontology/item/nitrate'], 'Polyester':name_fiaf['ontology/item/polyester']}.items():
if i['base'] == k:
claim_id = make_claim(item, name_fiaf['ontology/property/base'], v)
reference(claim_id, name_bfi['ontology/item/bfi'])
for k, v in {'Silent':name_fiaf['ontology/item/silent'], 'Combined':name_fiaf['ontology/item/sound'],
'Mute':name_fiaf['ontology/item/silent'], 'Mixed':name_fiaf['ontology/item/sound']}.items():
if i['sound'] == k:
claim_id = make_claim(item, name_fiaf['ontology/property/sound'], v)
reference(claim_id, name_bfi['ontology/item/bfi'])
for k, v in {'Master':name_fiaf['ontology/item/master'], 'Viewing':name_fiaf['ontology/item/viewing']}.items():
if i['copy'] == k:
claim_id = make_claim(item, name_fiaf['ontology/property/access'], v)
reference(claim_id, name_bfi['ontology/item/bfi'])
for k, v in {
'Dupe Negative':name_fiaf['ontology/item/duplicate_negative'],
'BW Positive':name_fiaf['ontology/item/print'],
'Negative':name_fiaf['ontology/item/negative'],
'Duplicating Positive':name_fiaf['ontology/item/duplicate_positive'],
'Colour Positive':name_fiaf['ontology/item/print']}.items():
if i['element'] == k:
claim_id = make_claim(item, name_fiaf['ontology/property/element'], v)
reference(claim_id, name_bfi['ontology/item/bfi'])
if 'BW' in str(i['element']):
claim_id = make_claim(item, name_fiaf['ontology/property/colour'], name_fiaf['ontology/item/black_and_white'])
reference(claim_id, name_bfi['ontology/item/bfi'])
if 'Colour' in str(i['element']):
claim_id = make_claim(item, name_fiaf['ontology/property/colour'], name_fiaf['ontology/item/colour'])
reference(claim_id, name_bfi['ontology/item/bfi'])
for k, v in {'VHS':name_fiaf['ontology/item/master'], 'DVD':name_fiaf['ontology/item/viewing'],
'DB':name_fiaf['ontology/item/digibeta']}.items():
if i['video'] == k:
claim_id = make_claim(item, name_fiaf['ontology/property/specific_carrier'], v)
reference(claim_id, name_bfi['ontology/item/bfi'])
if i['dure']:
claim_id = make_claim(item, name_fiaf['ontology/property/extent_feet'], rdflib.Literal(i['dure']))
reference(claim_id, name_bfi['ontology/item/bfi'])
make_claim(work, name_fiaf['ontology/property/manifestation'], manifestation)
make_claim(manifestation, name_fiaf['ontology/property/item'], item)
print(len(graph))
graph.serialize(destination=str(pathlib.Path.cwd() / 'british_film_institute.ttl'), format="turtle")
print(len(graph))
```
```
# Load packages
import tensorflow as tf
from tensorflow import keras
import numpy as np
import pandas as pd
import os
import pickle
import time
import scipy as scp
import scipy.stats as scps
from scipy.optimize import differential_evolution
from scipy.optimize import minimize
from datetime import datetime
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
# Load my own functions
import dnnregressor_train_eval_keras as dnnk
import make_data_wfpt as mdw
from kde_training_utilities import kde_load_data
import ddm_data_simulation as ddm_sim
import boundary_functions as bf
# Handle some cuda business
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="2"
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
# Load Model
model_path = '/media/data_cifs/afengler/data/kde/full_ddm/keras_models/dnnregressor_full_ddm_06_28_19_00_48_00/model_0'
ckpt_path = '/media/data_cifs/afengler/data/kde/full_ddm/keras_models/dnnregressor_full_ddm_06_28_19_00_48_00/ckpt_0_10'
model = keras.models.load_model(model_path)
model.load_weights(ckpt_path)
model.summary()
# Initializations -----
n_runs = 100
n_samples = 2500
feature_file_path = '/media/data_cifs/afengler/data/kde/ornstein_uhlenbeck/train_test_data/test_features.pickle'
mle_out_path = '/media/data_cifs/afengler/data/kde/ornstein_uhlenbeck/mle_runs'
# NOTE PARAMETERS: WEIBULL: [v, a, w, node, shape, scale]
param_bounds = [(-2, 2), (0.5, 2), (0.3, 0.7), (-1.0, 1.0)]
# my_optim_columns = ['v_sim', 'a_sim', 'w_sim', 'node_sim', 'theta_sim',
# 'v_mle', 'a_mle', 'w_mle', 'node_mle', 'theta_mle', 'n_samples']
# Get parameter names in correct ordering:
dat = pickle.load(open(feature_file_path,
'rb'))
parameter_names = list(dat.keys())[:-2] # :-2 to get rid of 'rt' and 'choice' here
# Make columns for optimizer result table
p_sim = []
p_mle = []
for parameter_name in parameter_names:
p_sim.append(parameter_name + '_sim')
p_mle.append(parameter_name + '_mle')
my_optim_columns = p_sim + p_mle + ['n_samples']
# Initialize the data frame in which to store optimizer results
optim_results = pd.DataFrame(np.zeros((n_runs, len(my_optim_columns))), columns = my_optim_columns)
optim_results.iloc[:, 2 * len(parameter_names)] = n_samples
# define boundary
boundary = bf.constant
boundary_multiplicative = True
# Define the likelihood function
def log_p(params = [0, 1, 0.9], model = [], data = [], parameter_names = []):
# Make feature array
feature_array = np.zeros((data[0].shape[0], len(parameter_names) + 2))
# Store parameters
cnt = 0
for i in range(0, len(parameter_names), 1):
feature_array[:, i] = params[i]
cnt += 1
# Store rts and choices
feature_array[:, cnt] = data[0].ravel() # rts
feature_array[:, cnt + 1] = data[1].ravel() # choices
# Get model predictions
prediction = model.predict(feature_array)
# Some post-processing of predictions
prediction[prediction < 1e-29] = 1e-29
return(- np.sum(np.log(prediction)))
def make_params(param_bounds = []):
params = np.zeros(len(param_bounds))
for i in range(len(params)):
params[i] = np.random.uniform(low = param_bounds[i][0], high = param_bounds[i][1])
return params
# ---------------------
my_optim_columns
# Main loop ----------- TD: Parallelize
for i in range(0, n_runs, 1):
# Get start time
start_time = time.time()
tmp_params = make_params(param_bounds = param_bounds)
# Store in output file
optim_results.iloc[i, :len(parameter_names)] = tmp_params
# Print some info on run
print('Parameters for run ' + str(i) + ': ')
print(tmp_params)
# Define boundary params
# Linear Collapse
# boundary_params = {'node': tmp_params[3],
# 'theta': tmp_params[4]}
# Constant
boundary_params = {}
# Run model simulations
ddm_dat_tmp = ddm_sim.ddm_flexbound_simulate(v = tmp_params[0],
a = tmp_params[1],
w = tmp_params[2],
s = 1,
delta_t = 0.001,
max_t = 20,
n_samples = n_samples,
boundary_fun = boundary, # function of t (and potentially other parameters) that takes in (t, *args)
boundary_multiplicative = boundary_multiplicative, # CAREFUL: CHECK IF BOUND
boundary_params = boundary_params)
# Print some info on run
print('Mean rt for current run: ')
print(np.mean(ddm_dat_tmp[0]))
# Run optimizer
out = differential_evolution(log_p,
bounds = param_bounds,
args = (model, ddm_dat_tmp, parameter_names),
popsize = 30,
disp = True)
# Print some info
print('Solution vector of current run: ')
print(out.x)
print('The run took: ')
elapsed_time = time.time() - start_time
print(time.strftime("%H:%M:%S", time.gmtime(elapsed_time)))
# Store result in output file
optim_results.iloc[i, len(parameter_names):(2*len(parameter_names))] = out.x
# -----------------------
# Save optimization results to file
optim_results.to_csv(mle_out_path + '/mle_results_1.csv')
# Read in results
optim_results = pd.read_csv(mle_out_path + '/mle_results_1.csv')
plt.scatter(optim_results['v_sim'], optim_results['v_mle'], c = optim_results['theta_mle'])
# Regression for v
reg = LinearRegression().fit(np.expand_dims(optim_results['v_mle'], 1), np.expand_dims(optim_results['v_sim'], 1))
reg.score(np.expand_dims(optim_results['v_mle'], 1), np.expand_dims(optim_results['v_sim'], 1))
plt.scatter(optim_results['a_sim'], optim_results['a_mle'], c = optim_results['theta_mle'])
# Regression for a
reg = LinearRegression().fit(np.expand_dims(optim_results['a_mle'], 1), np.expand_dims(optim_results['a_sim'], 1))
reg.score(np.expand_dims(optim_results['a_mle'], 1), np.expand_dims(optim_results['a_sim'], 1))
plt.scatter(optim_results['w_sim'], optim_results['w_mle'])
# Regression for w
reg = LinearRegression().fit(np.expand_dims(optim_results['w_mle'], 1), np.expand_dims(optim_results['w_sim'], 1))
reg.score(np.expand_dims(optim_results['w_mle'], 1), np.expand_dims(optim_results['w_sim'], 1))
plt.scatter(optim_results['theta_sim'], optim_results['theta_mle'])
# Regression for c1
reg = LinearRegression().fit(np.expand_dims(optim_results['theta_mle'], 1), np.expand_dims(optim_results['theta_sim'], 1))
reg.score(np.expand_dims(optim_results['theta_mle'], 1), np.expand_dims(optim_results['theta_sim'], 1))
plt.scatter(optim_results['c2_sim'], optim_results['c2_mle'], c = optim_results['a_mle'])
# Regression for w
reg = LinearRegression().fit(np.expand_dims(optim_results['c2_mle'], 1), np.expand_dims(optim_results['c2_sim'], 1))
reg.score(np.expand_dims(optim_results['c2_mle'], 1), np.expand_dims(optim_results['c2_sim'], 1))
```
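The `differential_evolution` call above treats the network's negative log-likelihood as a black-box objective. The same calling pattern can be seen on a toy objective (a least-squares loss whose minimizer is the sample mean, chosen purely for illustration):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy objective: sum of squared deviations, minimized exactly at the sample mean
rng = np.random.default_rng(0)
data = rng.normal(loc=1.5, scale=1.0, size=1000)

def objective(params):
    return np.sum((data - params[0]) ** 2)

# Same interface as in the main loop: objective, bounds, then inspect out.x
out = differential_evolution(objective, bounds=[(-5, 5)], seed=1)
print(out.x[0], data.mean())
```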
# Prepare and Deploy a TensorFlow Model to AI Platform for Online Serving
This Notebook demonstrates how to prepare a TensorFlow 2.x model and deploy it for serving with AI Platform Prediction. This example uses the pretrained [ResNet V2 101](https://tfhub.dev/google/imagenet/resnet_v2_101/classification/4) image classification model from [TensorFlow Hub](https://tfhub.dev/) (TF Hub).
The Notebook covers the following steps:
1. Downloading and running the ResNet module from TF Hub
2. Creating serving signatures for the module
3. Exporting the model as a SavedModel
4. Deploying the SavedModel to AI Platform Prediction
5. Validating the deployed model
## Setup
This Notebook was tested on **AI Platform Notebooks** using the standard TF 2.2 image.
### Import libraries
```
import base64
import os
import json
import requests
import time
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
from typing import List, Optional, Text, Tuple
```
### Configure GCP environment settings
```
PROJECT_ID = '[your-google-project-id]' # Set your project Id
BUCKET = '[your-bucket-name]' # Set your bucket name Id
REGION = '[your-region]' # Set your region for deploying the model
MODEL_NAME = 'resnet_classifier'
MODEL_VERSION = 'v1'
GCS_MODEL_LOCATION = 'gs://{}/models/{}/{}'.format(BUCKET, MODEL_NAME, MODEL_VERSION)
THUB_MODEL_HANDLE = 'https://tfhub.dev/google/imagenet/resnet_v2_101/classification/4'
IMAGENET_LABELS_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'
IMAGES_FOLDER = 'test_images'
!gcloud config set project $PROJECT_ID
```
### Create a local workspace
```
LOCAL_WORKSPACE = '/tmp/workspace'
if tf.io.gfile.exists(LOCAL_WORKSPACE):
print("Removing previous workspace artifacts...")
tf.io.gfile.rmtree(LOCAL_WORKSPACE)
print("Creating a new workspace...")
tf.io.gfile.makedirs(LOCAL_WORKSPACE)
```
## 1. Loading and Running the ResNet Module
### 1.1. Download and instantiate the model
```
os.environ["TFHUB_DOWNLOAD_PROGRESS"] = 'True'
local_savedmodel_path = hub.resolve(THUB_MODEL_HANDLE)
print(local_savedmodel_path)
!ls -la {local_savedmodel_path}
model = hub.load(THUB_MODEL_HANDLE)
```
The expected input to most TF Hub TF2 image classification models, including ResNet 101, is a rank 4 tensor conforming to the following tensor specification: `tf.TensorSpec([None, height, width, 3], tf.float32)`. For the ResNet 101 model, the expected image size is `height x width = 224 x 224`. The color values for all channels are expected to be normalized to the [0, 1] range.
The output of the model is a batch of logits vectors. The indices into the logits are the `num_classes = 1001` classes from the ImageNet dataset. The mapping from indices to class labels can be found in the [labels file](https://download.tensorflow.org/data/ImageNetLabels.txt) with class 0 for "background", followed by 1000 actual ImageNet classes.
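As a quick sanity check of that input specification, the normalization step can be sketched in plain NumPy (a synthetic random image stands in for a decoded JPEG here):

```python
import numpy as np

# Synthetic uint8 image in place of a decoded JPEG
raw = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

# Normalize to [0, 1] float32 and add the batch dimension the model expects
batch = (raw.astype(np.float32) / 255.0)[np.newaxis, ...]
print(batch.shape)
```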
We will now test the model on a couple of JPEG images.
### 1.2. Display sample images
```
image_list = [tf.io.read_file(os.path.join(IMAGES_FOLDER, image_path))
for image_path in os.listdir(IMAGES_FOLDER)]
ncolumns = len(image_list) if len(image_list) < 4 else 4
nrows = (len(image_list) + ncolumns - 1) // ncolumns  # ceiling division, so every image gets an axis
fig, axes = plt.subplots(nrows=nrows, ncols=ncolumns, figsize=(10,10))
for axis, image in zip(axes.flat[0:], image_list):
decoded_image = tf.image.decode_image(image)
axis.set_title(decoded_image.shape)
axis.imshow(decoded_image.numpy())
```
### 1.3. Preprocess the testing images
The images need to be preprocessed to conform to the format expected by the ResNet101 model.
```
def _decode_and_scale(image, size):
image = tf.image.decode_image(image, expand_animations=False)
image_height = image.shape[0]
image_width = image.shape[1]
crop_size = tf.minimum(image_height, image_width)
offset_height = ((image_height - crop_size) + 1) // 2
offset_width = ((image_width - crop_size) + 1) // 2
image = tf.image.crop_to_bounding_box(image, offset_height, offset_width, crop_size, crop_size)
image = tf.cast(tf.image.resize(image, [size, size]), tf.uint8)
return image
size = 224
raw_images = tf.stack(image_list)
preprocessed_images = tf.map_fn(lambda x: _decode_and_scale(x, size), raw_images, dtype=tf.uint8)
preprocessed_images = tf.image.convert_image_dtype(preprocessed_images, tf.float32)
print(preprocessed_images.shape)
```
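The center-crop arithmetic inside `_decode_and_scale` can be checked by hand, for instance on a hypothetical 300 × 200 portrait image:

```python
# Center-crop arithmetic from _decode_and_scale, on a 300 (h) x 200 (w) image
image_height, image_width = 300, 200
crop_size = min(image_height, image_width)             # shorter side wins
offset_height = ((image_height - crop_size) + 1) // 2  # crop is centered vertically
offset_width = ((image_width - crop_size) + 1) // 2    # full width is kept
print(crop_size, offset_height, offset_width)  # 200 50 0
```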
### 1.4. Run inference
```
predictions = model(preprocessed_images)
predictions
```
The model returns a batch of logits vectors. This is not a very user-friendly output, so we will convert it to a list of ImageNet class labels.
```
labels_path = tf.keras.utils.get_file(
'ImageNetLabels.txt',
IMAGENET_LABELS_URL)
imagenet_labels = np.array(open(labels_path).read().splitlines())
```
We will display the 5 highest-ranked labels for each image.
```
for prediction in list(predictions):
decoded = imagenet_labels[np.argsort(prediction.numpy())[::-1][:5]]
print(list(decoded))
```
## 2. Create Serving Signatures
The inputs and outputs of the model as used during model training may not be optimal for serving. For example, in a typical training pipeline, feature engineering is performed as a separate step preceding model training and hyperparameter tuning. When serving the model, it may be more optimal to embed the feature engineering logic into the serving interface rather than require a client application to preprocess data.
The ResNet V2 101 model from TF Hub is optimized for recomposition and fine tuning. Since there are no serving signatures in the model's metadata, it cannot be served with TF Serving as is.
```
list(model.signatures)
```
To make it servable, we need to add one or more serving signatures describing the inference method(s) of the model.
We will add two signatures:
1. **The default signature** - This will expose the default predict method of the ResNet101 model.
2. **Pre-/post-processing signature** - Since the expected inputs to this interface require a relatively complex image preprocessing to be performed by a client, we will also expose an alternative signature that embeds the preprocessing and postprocessing logic and accepts raw unprocessed images and returns the list of ranked class labels and associated label probabilities.
The signatures are created by defining a custom module class derived from the `tf.Module` base class that encapsulates our ResNet model and extends it with a method implementing the image preprocessing and output postprocessing logic. The default method of the custom module is mapped to the default method of the base ResNet module to maintain the analogous interface.
The custom module will be exported as `SavedModel` that includes the original model, the preprocessing logic, and two serving signatures.
This technique can be generalized to other scenarios where you need to extend a TensorFlow model and you have access to the serialized `SavedModel` but you don't have access to the Python code implementing the model.
### 2.1. Define the custom serving module
```
LABELS_KEY = 'labels'
PROBABILITIES_KEY = 'probabilities'
NUM_LABELS = 5
class ServingModule(tf.Module):
"""
A custom tf.Module that adds image preprocessing and output post processing to
a base TF 2 image classification model from TF Hub.
"""
def __init__(self, base_model, input_size, output_labels):
super(ServingModule, self).__init__()
self._model = base_model
self._input_size = input_size
self._output_labels = tf.constant(output_labels, dtype=tf.string)
def _decode_and_scale(self, raw_image):
"""
Decodes, crops, and resizes a single raw image.
"""
image = tf.image.decode_image(raw_image, dtype=tf.dtypes.uint8, expand_animations=False)
image_shape = tf.shape(image)
image_height = image_shape[0]
image_width = image_shape[1]
crop_size = tf.minimum(image_height, image_width)
offset_height = ((image_height - crop_size) + 1) // 2
offset_width = ((image_width - crop_size) + 1) // 2
image = tf.image.crop_to_bounding_box(image, offset_height, offset_width, crop_size, crop_size)
image = tf.image.resize(image, [self._input_size, self._input_size])
image = tf.cast(image, tf.uint8)
return image
def _preprocess(self, raw_inputs):
"""
Preprocesses raw inputs as sent by the client.
"""
# A mitigation for https://github.com/tensorflow/tensorflow/issues/28007
with tf.device('/cpu:0'):
images = tf.map_fn(self._decode_and_scale, raw_inputs, dtype=tf.uint8)
images = tf.image.convert_image_dtype(images, tf.float32)
return images
def _postprocess(self, model_outputs):
"""
Postprocesses outputs returned by the base model.
"""
probabilities = tf.nn.softmax(model_outputs)
indices = tf.argsort(probabilities, axis=1, direction='DESCENDING')
return {
LABELS_KEY: tf.gather(self._output_labels, indices, axis=-1)[:,:NUM_LABELS],
PROBABILITIES_KEY: tf.sort(probabilities, direction='DESCENDING')[:,:NUM_LABELS]
}
@tf.function(input_signature=[tf.TensorSpec([None, 224, 224, 3], tf.float32)])
def __call__(self, x):
"""
A pass-through to the base model.
"""
return self._model(x)
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def predict_labels(self, raw_images):
"""
Preprocesses inputs, calls the base model
and postprocesses outputs from the base model.
"""
# Call the preprocessing handler
images = self._preprocess(raw_images)
# Call the base model
logits = self._model(images)
# Call the postprocessing handler
outputs = self._postprocess(logits)
return outputs
serving_module = ServingModule(model, 224, imagenet_labels)
```
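The `_postprocess` step above (softmax followed by a ranked label lookup) can be traced in plain NumPy on a hypothetical three-class model:

```python
import numpy as np

labels = np.array(['background', 'tench', 'goldfish'])
logits = np.array([[0.1, 2.0, 1.0]])

# Softmax over the class axis
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Indices sorted by descending probability, as tf.argsort(..., 'DESCENDING') does
order = np.argsort(probs, axis=1)[:, ::-1]
top2_labels = labels[order][:, :2]
top2_probs = np.sort(probs, axis=1)[:, ::-1][:, :2]
print(top2_labels.tolist())  # [['tench', 'goldfish']]
```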
### 2.2. Test the custom serving module
```
predictions = serving_module.predict_labels(raw_images)
predictions
```
## 3. Save the custom serving module as `SavedModel`
```
model_path = os.path.join(LOCAL_WORKSPACE, MODEL_NAME, MODEL_VERSION)
default_signature = serving_module.__call__.get_concrete_function()
preprocess_signature = serving_module.predict_labels.get_concrete_function()
signatures = {
'serving_default': default_signature,
'serving_preprocess': preprocess_signature
}
tf.saved_model.save(serving_module, model_path, signatures=signatures)
```
### 3.1. Inspect the `SavedModel`
```
!saved_model_cli show --dir {model_path} --tag_set serve --all
```
### 3.2. Test loading and executing the `SavedModel`
```
model = tf.keras.models.load_model(model_path)
model.predict_labels(raw_images)
```
## 4. Deploy the `SavedModel` to AI Platform Prediction
### 4.1. Copy the `SavedModel` to GCS
```
!gsutil cp -r {model_path} {GCS_MODEL_LOCATION}
!gsutil ls {GCS_MODEL_LOCATION}
```
### 4.2. Create a model in AI Platform Prediction
```
!gcloud ai-platform models create {MODEL_NAME} \
--project {PROJECT_ID} \
--regions {REGION}
!gcloud ai-platform models list --project {PROJECT_ID}
```
### 4.3. Create a model version
```
MACHINE_TYPE='n1-standard-8'
ACCELERATOR='count=1,type=nvidia-tesla-p4'
!gcloud beta ai-platform versions create {MODEL_VERSION} \
--model={MODEL_NAME} \
--origin={GCS_MODEL_LOCATION} \
--runtime-version=2.1 \
--framework=TENSORFLOW \
--python-version=3.7 \
--machine-type={MACHINE_TYPE} \
--accelerator={ACCELERATOR} \
--project={PROJECT_ID}
!gcloud ai-platform versions list --model={MODEL_NAME} --project={PROJECT_ID}
```
## 5. Validate the Model Version Deployed to AI Platform Prediction
```
import googleapiclient.discovery
service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT_ID, MODEL_NAME, MODEL_VERSION)
print("Service name: {}".format(name))
def caip_predict(instances, signature_name='serving_default'):
request_body={
'signature_name': signature_name,
'instances': instances}
response = service.projects().predict(
name=name,
body=request_body
).execute()
if 'error' in response:
raise RuntimeError(response['error'])
outputs = response['predictions']
return outputs
signature_name = 'serving_preprocess'
encoded_images = [{'b64': base64.b64encode(image.numpy()).decode('utf-8')}
for image in image_list]
caip_predict(encoded_images, signature_name=signature_name)
```
## License
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0)
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Part 12: Train an Encrypted NN on Encrypted Data
In this notebook, we're going to use all the techniques we've learned thus far to perform neural network training (and prediction) while both the model and the data are encrypted.
In particular, we present our custom Autograd engine which works on encrypted computations.
Authors:
- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)
- Jason Paumier - Github: [@Jasopaum](https://github.com/Jasopaum)
- Théo Ryffel - Twitter: [@theoryffel](https://twitter.com/theoryffel)
# Step 1: Create Workers and Toy Data
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import syft as sy
# Set everything up
hook = sy.TorchHook(torch)
alice = sy.VirtualWorker(id="alice", hook=hook)
bob = sy.VirtualWorker(id="bob", hook=hook)
james = sy.VirtualWorker(id="james", hook=hook)
# A Toy Dataset
data = torch.tensor([[0,0],[0,1],[1,0],[1,1.]])
target = torch.tensor([[0],[0],[1],[1.]])
# A Toy Model
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(2, 2)
self.fc2 = nn.Linear(2, 1)
def forward(self, x):
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
return x
model = Net()
```
# Step 2: Encrypt the Model and Data
Encryption here comes in two steps. Since Secure Multi-Party Computation only works on integers, in order to operate over numbers with decimal points (such as weights and activations), we need to encode all of our numbers using Fixed Precision, which will give us several digits of decimal precision. We do this by calling .fix_precision().
We can then call .share() as we have for other demos, which will encrypt all of the values by sharing them between Alice and Bob. Note that we also set requires_grad to True, which adds a special autograd method for encrypted data. Indeed, since Secure Multi-Party Computation doesn't work on float values, we can't use the usual PyTorch autograd. Therefore, we need to add a special AutogradTensor node that computes the gradient graph for backpropagation. You can print any of these elements to see that they include an AutogradTensor.
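Conceptually, the two steps can be sketched in a few lines of plain Python (a simplified illustration, not PySyft's actual implementation; the field size `Q` and the helper names here are assumptions): fixed precision scales each float by 10^precision and rounds it to an integer, and additive secret sharing splits that integer into random shares that only reveal the value when summed modulo `Q`.

```python
import random

Q = 2**62       # field size for the sketch (an assumption, not PySyft's value)
PRECISION = 3   # decimal digits kept, like the default precision_fractional=3

def encode(x):
    """Float -> fixed-precision integer modulo Q."""
    return round(x * 10**PRECISION) % Q

def decode(n):
    """Fixed-precision integer modulo Q -> float, mapping large residues to negatives."""
    if n > Q // 2:
        n -= Q
    return n / 10**PRECISION

def share(n, parties=2):
    """Additively secret-share n: random shares that sum to n modulo Q."""
    shares = [random.randrange(Q) for _ in range(parties - 1)]
    shares.append((n - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    """Recover the shared integer by summing all shares modulo Q."""
    return sum(shares) % Q

# Each party can add its shares of x and y locally; the sum reconstructs to x + y
xs, ys = share(encode(0.25)), share(encode(-1.5))
local_sums = [(a + b) % Q for a, b in zip(xs, ys)]
print(decode(reconstruct(local_sums)))  # -1.25
```

This is why addition (and, with extra protocol machinery, multiplication) can be computed while no single worker ever sees a plaintext value.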
```
# We encode everything
data = data.fix_precision().share(bob, alice, crypto_provider=james, requires_grad=True)
target = target.fix_precision().share(bob, alice, crypto_provider=james, requires_grad=True)
model = model.fix_precision().share(bob, alice, crypto_provider=james, requires_grad=True)
print(data)
```
# Step 3: Train
And now we can train using simple tensor logic.
```
opt = optim.SGD(params=model.parameters(),lr=0.1).fix_precision()
for iter in range(20):
# 1) erase previous gradients (if they exist)
opt.zero_grad()
# 2) make a prediction
pred = model(data)
# 3) calculate how much we missed
loss = ((pred - target)**2).sum()
# 4) figure out which weights caused us to miss
loss.backward()
# 5) change those weights
opt.step()
# 6) print our progress
print(loss.get().float_precision())
```
The loss indeed decreased!
## Impact of fixed precision
You might wonder how encrypting everything impacts the decreasing loss. Actually, because the theoretical computation is the same, the numbers are very close to non-encrypted training. You can verify this by running the same example without encryption and with a deterministic initialisation of the model like this one in the model `__init__`:
```
with torch.no_grad():
self.fc1.weight.set_(torch.tensor([[ 0.0738, -0.2109],[-0.1579, 0.3174]], requires_grad=True))
self.fc1.bias.set_(torch.tensor([0.,0.1], requires_grad=True))
self.fc2.weight.set_(torch.tensor([[-0.5368, 0.7050]], requires_grad=True))
self.fc2.bias.set_(torch.tensor([-0.0343], requires_grad=True))
```
The slight difference you might observe is due to the rounding of values performed while transforming to fixed precision. The default `precision_fractional` is 3 and if you get it down to 2 the divergence with clear text training increases, while it reduces if you choose `precision_fractional = 4`.
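The rounding effect itself is easy to reproduce in plain Python (a sketch of the idea only; PySyft's actual encoding may differ in its details): keeping `precision_fractional` decimal digits bounds the per-value error by 0.5 · 10^-precision, so precision 2 is ten times coarser than precision 3.

```python
def to_fixed(x, precision):
    """Round x to `precision` decimal digits, as fix_precision conceptually does."""
    scale = 10**precision
    return round(x * scale) / scale

# One of the weights from the deterministic initialisation above
w = 0.0738
for p in (2, 3, 4):
    print(p, to_fixed(w, p), abs(to_fixed(w, p) - w))
```

With precision 2 the weight is stored as 0.07, with precision 3 as 0.074, and with precision 4 it is represented exactly, which matches the observation that the divergence from clear-text training shrinks as the precision grows.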
# Congratulations!!! - Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
### Star PySyft on Github
The easiest way to help our community is just by starring the Repos! This helps raise awareness of the cool tools we're building.
- [Star PySyft](https://github.com/OpenMined/PySyft)
### Join our Slack!
The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org)
### Join a Code Project!
The best way to contribute to our community is to become a code contributor! At any time you can go to PySyft Github Issues page and filter for "Projects". This will show you all the top level Tickets giving an overview of what projects you can join! If you don't want to join a project, but you would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for github issues marked "good first issue".
- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
### Donate
If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)
# CodeBert Grid Experiment Evaluation
Nice to see you around! Have a seat.
Would you like a drink? Maybe a cigar?
**A full run of this Notebook takes about 40 minutes on my machine.**
Make sure to have all required dependencies installed - they are listed in the [environment.yml](./environment.yml).
You can create a conda environment from the yml using
```
conda env create -f environment.yml
conda activate Lampion-Codebert-Evaluation
```
Make sure to run your Jupyter Notebook from that environment!
Otherwise you are (still) missing the dependencies.
**OPTIONALLY** you can use the environment in which your Jupyter notebook is already running, by starting a new terminal (from Jupyter) and running
```
conda env update --prefix ./env --file environment.yml --prune
```
Please be aware that by the end of this notebook we create a big .csv file.
Some of the statistical tests were easier to do in R; the R code is provided in a separate file and starts from the bleus.csv created at the end of this notebook.
```
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy import stats
import nltk
nltk.download("punkt")
# Homebrew Imports (python-file next to this)
import bleu_evaluator as foreign_bleu
# Set Jupyter vars
# %matplotlib notebook
plt.rcParams.update({'font.size': 35})
%matplotlib inline
```
## Data-Loading / Preparation
Make sure that your dataset is laid out as described in the [Readme](./README.md), that is
```
./data
/PaperResults
/configs
/reference
test_0.gold
test_0.output
bleu.txt (optional, can be created below)
/config_0
config.properties
test_0.gold
test_0.output
bleu.txt (optional, can be created below)
/config_1
config.properties
test_0.gold
test_0.output
bleu.txt (optional, can be created below)
...
```
where the configs **must** be numbered to be correctly detected.
```
# This runs the bleu-score upon the config files, creating the bleu.txt's
# If your data package was provided including the txt you dont need to do this.
# Existing bleu.txt's will be overwritten.
#!./metric_runner.sh ./data/PaperResults/
```
The following cells first walk over the given data directory and read all paths,
then the properties; finally all of the data is loaded.
The bleu.txt files are required at this stage.
```
# The directory where to look for the data, default are the paper results
data_directory = "./data/PaperResults"
# These archetypes are later used to group the configurations
# While to grouping is up to you, right here it simply is one archetype for each transformation type,
# Grouping together different configs with the same transformations applied (but different #Transformations)
config_archetypes = {
"config_0":"if","config_1":"if","config_2":"if",
"config_3":"neutral-element","config_4":"neutral-element","config_5":"neutral-element",
"config_6":"mixed names(pseudo)","config_7":"mixed names(pseudo)","config_8":"mixed names(pseudo)",
"config_9":"mixed-names(random)","config_10":"mixed-names(random)","config_11":"mixed-names(random)",
"config_12": "add-var(pseudo)","config_13": "add-var(pseudo)","config_14": "add-var(pseudo)",
"config_15": "add-var(random)","config_16": "add-var(random)","config_17": "add-var(random)",
"config_18": "if & neutral-element","config_19": "if & neutral-element","config_20": "if & neutral-element"
}
print(f"looking for results in {data_directory}" )
results={}
for root,dirs,files in os.walk(data_directory):
for name in files:
if ".gold" in name:
directory = os.path.basename(root)
results[directory]={}
results[directory]["result_file"]=os.path.join(root,"test_0.output")
results[directory]["gold_file"]=os.path.join(root,"test_0.gold")
results[directory]["bleu_file"]=os.path.join(root,"bleu.txt")
if os.path.exists(os.path.join(root,"config.properties")):
results[directory]["property_file"]=os.path.join(root,"config.properties")
print(f"Found {len(results.keys())} configuration folders in {data_directory}")
def load_properties(filepath, sep='=', comment_char='#'):
"""
Read the file passed as parameter as a properties file.
"""
props = {}
with open(filepath, "rt") as f:
for line in f:
l = line.strip()
if l and not l.startswith(comment_char):
key_value = l.split(sep)
key = key_value[0].strip()
value = sep.join(key_value[1:]).strip().strip('"')
props[key] = value
return props
print("reading in property-files")
for key in results.keys():
if "property_file" in results[key].keys():
results[f"{key}"]["properties"]=load_properties(results[key]["property_file"])
print("done reading the properties")
print("reading in result-files")
for key in results.keys():
result_file = results[key]["result_file"]
f = open(result_file)
lines=f.readlines()
results[key]["results"]={}
for l in lines:
num = int(l.split("\t")[0])
content = l.split("\t")[1]
content = content.strip()
results[key]["results"][num] = content
f.close()
gold_file = results[key]['gold_file']
gf = open(gold_file)
glines=gf.readlines()
results[key]["gold_results"]={}
for gl in glines:
num = int(gl.split("\t")[0])
content = gl.split("\t")[1]
content = content.strip()
results[key]["gold_results"][num] = content
gf.close()
print("done reading the result files")
# Comment this in for inspection of results
#results
print("reading in the bleu-scores")
for key in results.keys():
bleu_file = results[key]["bleu_file"]
f = open(bleu_file)
score=f.readlines()[0]
results[key]["bleu"]=float(score)
f.close()
print("done reading the bleu-scores")
#results["config_0"]["bleu"]
```
The following are little helpers and wrappers to make the notebook a bit smoother.
```
"""
There is a small issue with the configs being named config_0, config_1, config_10:
As they are treated as strings, config_10 is "smaller than" config_2, making the sort unintuitive.
This method should help to sort configs in the intended way: config_1,config_2,...,config_9,config_10,config_11,...
config_num can be used to sort the configs where necessary. It can be used e.g. as
sorted(non_reference_configs,key=config_num)
"""
def config_num(c):
# Fallback: If we are not trying to sort configs, just do a normal compare
if not "config_" in c:
return -1
else:
c_part = int(c.split("_")[1])
return c_part
# The non reference configs are all result-keys that are not "reference"
# Additionally, they are sorted to match the above behaviour (config10>config2)
non_reference_configs = sorted([k for k in results.keys() if "reference" != k],key=config_num)
# Set the Archetypes also into the results using the Archetype Dictionary defined at the beginning of the notebook
for key in non_reference_configs:
if "property_file" in results[key].keys():
results[key]["archetype"]=config_archetypes[key]
# This helps looking up archetype+transformations per configuration
def archetype_info(config):
archetype = config_archetypes[config]
transforms = int(results[config]["properties"]["transformations"])
return (archetype,transforms)
# Pretty Print archetype info for a given config
print_archetype_info = lambda config: f"{(archetype_info(config))[0]}@{(archetype_info(config))[1]}"
# Another Set of archetypes used e.g. for grouping and printing
all_archetypes = set(config_archetypes.values())
# archetype-MT-Mapping for Paper (Where we use MT)
archetype_mt_mapping = {
"if":"MT-IF",
"neutral-element":"MT-NE",
"mixed names(pseudo)": "MT-REP + MT-UVP",
"mixed-names(random)": "MT-RER + MT-UVR",
"add-var(pseudo)":"MT-UVP",
"add-var(random)":"MT-UVR",
"if & neutral-element":"MT-IF + MT-NE"
}
# These two wrappers are adapters to the nltk library,
# In addition they cover often-occurring errors with a default behaviour
# (Instead of throwing errors)
def jaccard_wrapper(sentenceA,sentenceB,ngram=1,lowercasing=True):
a = sentenceA.lower() if lowercasing else sentenceA
b = sentenceB.lower() if lowercasing else sentenceB
tokensA = nltk.word_tokenize(a)
tokensB = nltk.word_tokenize(b)
ngA_tokens = set(nltk.ngrams(tokensA, n=ngram))
ngB_tokens = set(nltk.ngrams(tokensB, n=ngram))
if (len(ngB_tokens)==0) and (len(ngA_tokens)==0):
return 0
if (len(ngB_tokens)==0) or (len(ngA_tokens)==0):
return 1
return nltk.jaccard_distance(ngA_tokens, ngB_tokens)
def bleu_wrapper(sentence_to_check,reference):
check_tokens = nltk.word_tokenize(sentence_to_check)
ref_tokens = nltk.word_tokenize(reference)
# From comparing the foreign_bleu and nltk the method4 seems to match
# The Paper names the BLEU-4-Score with a citation to chen & cherry
# I wish I could be named chen & cherry, it's a very cool name.
chencherry = nltk.translate.bleu_score.SmoothingFunction()
smooth_fn = chencherry.method4
try:
return nltk.translate.bleu_score.sentence_bleu([ref_tokens],check_tokens,smoothing_function=smooth_fn)
except:
return 0
```
## Bleu-Scores
In the following, the BLEU scores will be calculated using the foreign library.
While it includes minor changes relative to standard BLEU, it is the same implementation as used in the original experiment.
The aggregated BLEU-Scores will be stored to the results.
```
bleu_data = {}
archetypes = set([results[k]["archetype"] for k in results.keys() if "archetype" in results[k].keys()])
for archetype in archetypes:
bleu_data[archetype]={}
bleu_data[archetype][0]=results["reference"]["bleu"]
relevant_configs = [k for k
in results.keys()
if "archetype" in results[k].keys()
and results[k]["archetype"]==archetype]
for c in relevant_configs:
bleu_data[archetype][int(results[c]["properties"]["transformations"])]=results[c]["bleu"]
bleu_data_df = pd.DataFrame.from_dict(bleu_data)
bleu_data_df = bleu_data_df.sort_index()
bleu_data_df = bleu_data_df.applymap(lambda cell: round(cell,3))
bleu_data_df.columns = [archetype_mt_mapping[n] for n in bleu_data_df.columns]
with open("./exports/bleu_table.tex","w") as f:
f.write(
bleu_data_df.to_latex(
caption="BLEU4-Scores for increasing number of metamorphic transformations \n (applied n-times per datapoint)"
,label="tab:bleus"
,position="tbh"
#,column_format={rrrrrrr}
)
)
bleu_data_df
#bleu_data_df.columns = [archetype_mt_mapping[a] for a in bleu_data_df.columns]
plt.figure(figsize=(14,7))
plt.ylabel("BLEU-Score",fontsize=20)
#plt.xlabel("# Transformations")
plt.xlabel("Order",fontsize=22)
#for latex, its nicer to have the title set from latex itself
#plt.title("BLEU4-Scores for increasing number of metamorphic transformations \n (applied n-times per datapoint)")
plot = sns.lineplot(data=bleu_data_df,markers=True,style=None,dashes=False)
plt.xticks([0,1,5,10],fontsize=20)
plt.yticks(fontsize=20)
plt.xlim(-0.025,10.1)
plt.legend(bleu_data_df.columns,fontsize=16)
plt.savefig('images/bleu_scores.png')
plt.show()
bleu_data_df_transposed = bleu_data_df.transpose()
bleu_data_df_transposed = bleu_data_df_transposed.drop(axis=1,columns=0)
with open("./exports/transposed_bleu_table.tex","w") as f:
f.write(
bleu_data_df_transposed.to_latex(
caption="BLEU4-Scores for increasing order of metamorphic transformations \n (applied n-times per datapoint)"
,label="tab:bleus"
,position="th"
#,column_format={rrrrrrr}
)
)
#bleu_data_df_transposed
```
## Per Entry Bleu
Now we use the nltk-provided bleu score to calculate the bleu-scores for all entries.
We store them on a per-result basis, always as bleu(gold, config).
The nltk BLEU does not go from 0 to 100 but from 0 to 1; the two scales differ only by a constant factor of 100.
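To make the metric concrete, here is a minimal self-contained sketch of sentence-level BLEU (modified n-gram precision up to bigrams, with a brevity penalty) on the same 0-to-1 scale. It is a simplified, unsmoothed stand-in for the smoothed BLEU-4 computed by `bleu_wrapper`, not the exact metric used in the evaluation.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, reference, n):
    """Clipped n-gram precision: candidate counts are capped by reference counts."""
    cand = Counter(ngrams(candidate, n))
    ref = Counter(ngrams(reference, n))
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

def simple_bleu(candidate, reference, max_n=2):
    """Geometric mean of modified precisions times a brevity penalty, in [0, 1]."""
    precisions = [modified_precision(candidate, reference, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0.0:
        return 0.0  # without smoothing, any zero precision zeroes the score
    brevity = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)

gold = "returns the number of elements".split()
print(simple_bleu(gold, gold))                               # 1.0
print(simple_bleu("something else entirely".split(), gold))  # 0.0
```

The smoothing used in the notebook exists precisely because short sentences easily hit a zero n-gram precision, which in this unsmoothed version collapses the whole score to 0.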
```
# This wrapper applies the "bleu_wrapper" to every element of a configurations results.
# The result is a list of [bleu-score(config[i],gold[i])]
# Entries are in order ascending
calculate_bleus = lambda config_id : [
bleu_wrapper(results[config_id]["results"][i],results[config_id]["gold_results"][i])
for i
in results[config_id]["results"].keys()
]
"""
These plots, while not necessarily the best, try to compare the BLEUs of the reference to the BLEUs of a config.
They don't take very long; the actual BLEU calculation is what takes time in the cell below.
"""
def plot_bleu_histogram(config_data,reference_data,title):
plt.figure(figsize=(14,7))
histo_df=pd.DataFrame.from_dict(
{"reference":reference_data,
title:config_data }
)
sns.displot(
data=histo_df,
kind="hist", kde=True,
height=6, aspect=10/6
)
plt.title(f"Histogram of Bleu-Scores for {title}")
plt.xlabel("Bleu-Score")
#plt.ylabel("# of Entries")
plt.xlim(0,1)
plt.savefig(f'images/{title}_bleu_histogram.png')
plt.show()
def plot_bleu_boxplot(config_data,reference_data,title=None):
fig = plt.figure(figsize=(6,4))
ax = fig.add_subplot(1, 1, 1)
box_df=pd.DataFrame.from_dict(
{"reference":reference_data,
title:config_data }
)
sns.boxplot(
data=box_df)
plt.title(f"Boxplot of Bleu-Scores for {title}")
plt.ylabel("Bleu-Score")
major_ticks = np.arange(0, 1, 0.2)
minor_ticks = np.arange(0, 1, 0.05)
ax.set_yticks(major_ticks)
ax.set_yticks(minor_ticks, minor=True)
# And a corresponding grid
ax.grid(which='both')
#plt.grid()
plt.savefig(f'images/{title}_bleu_box.png')
plt.ylim(0,1)
plt.show()
def plot_bleu_violinplot(config_data,reference_data,title):
plt.figure(figsize=(6,4))
violin_df=pd.DataFrame.from_dict(
{"reference":reference_data,
title:config_data }
)
sns.violinplot(data=violin_df)
plt.grid()
plt.title(f"ViolinPlot of Bleu-Scores for {title}")
plt.ylabel("Bleu-Score")
plt.savefig(f'images/{title}_bleu_violin.png')
plt.show()
#plot_bleu_violinplot(sample_bleus_config_data,bleus_reference_data,"config_20")
#plot_bleu_boxplot(sample_bleus_config_data,bleus_reference_data,"config_20")
#plot_bleu_histogram(sample_bleus_config_data,bleus_reference_data,"config_20")
%%time
# Calculate the reference bleus and store them
bleus_reference_data = calculate_bleus("reference")
results["reference"]["bleu_values"]=bleus_reference_data
# For every entry in the config, calculate bleu and make comparison plots
for config in non_reference_configs:
bleus_data = calculate_bleus(config)
# Set the bleu values to only calculate them once
results[config]["bleu_values"]=bleus_data
# Use the bleu-data to make some plots
plot_bleu_violinplot(bleus_data,bleus_reference_data,config)
plot_bleu_boxplot(bleus_data,bleus_reference_data,config)
plot_bleu_histogram(bleus_data,bleus_reference_data,config)
# Delete the bleu data to free some memory and not collide on names
del bleus_data
```
## Samples
Before the samples can be inspected, the items need to be re-indexed.
While all config results are contained in the reference results, there might be an issue with the data being shuffled.
To fix this, a reindexing is done.
```
%%time
#Reindexing Pseudocode
def lookup_index(sentence, comparison_dict):
for (gold_key,gold_value) in comparison_dict.items():
if sentence == gold_value:
return gold_key
return -1
# Pseudocode:
# For each config (that is not reference)
# Create a lookup of reference_gold_index -> config_gold_index
# Invert the lookup
# Make a new dictionary where
# For every key of the config_gold
# lookup the key of the reference_gold
# And fill it with {reference_gold_key : config_gold_value}
# Do the same with the non-gold results
# Fill it with {reference_gold_key : config_result_value}
# Set result[config_X]["gold_results"] to the newly created, matching index one
# same for non-gold-results
for config in non_reference_configs:
keyMapping={}
for (k,v) in results[config]["gold_results"].items():
gk = lookup_index(v,results["reference"]["gold_results"])
keyMapping[k]=gk
new_gold_results={}
new_results={}
for (config_key,gold_key) in keyMapping.items():
if gold_key != -1:
new_gold_results[gold_key]=results[config]["gold_results"][config_key]
new_results[gold_key]=results[config]["results"][config_key]
results[config]["gold_results"]=new_gold_results
results[config]["results"]=new_results
# Short Example that the reindexing worked and looks about right
sample_index = 250
print(results["reference"]["gold_results"][sample_index] )
print()
print(results["reference"]["results"][sample_index])
print(results["config_2"]["results"][sample_index])
del sample_index
```
## Probing and Sampling
These cells look into the entries and find outstanding / most prominent results given diverse criteria.
As they are qualitative inspections, they are not being plotted but only printed.
(Previously *hall of shame*)
```
%%time
biggest_len_inc = 0
biggest_len_inc_pos = ()
biggest_len_dec = 0
biggest_len_dec_pos = ()
biggest_jaccard_dist = 0
biggest_jaccard_dist_pos = ()
smallest_jaccard_dist = 1
smallest_jaccard_dist_pos = ()
for config in non_reference_configs:
for index in list(results[config]["results"].keys()):
gold = results["reference"]["gold_results"][index]
reference = results["reference"]["results"][index]
altered = results[config]["results"][index]
if len(reference)-len(altered)>biggest_len_inc:
biggest_len_inc = len(reference)-len(altered)
biggest_len_inc_pos = (index,config)
if len(altered)-len(reference)>biggest_len_dec:
biggest_len_dec = len(altered)-len(reference)
biggest_len_dec_pos = (index,config)
jacc_dist = jaccard_wrapper(altered,reference)
if jacc_dist > biggest_jaccard_dist and jacc_dist < 1:
biggest_jaccard_dist = jacc_dist
biggest_jaccard_dist_pos = (index,config)
if jacc_dist < smallest_jaccard_dist and jacc_dist > 0:
smallest_jaccard_dist = jacc_dist
smallest_jaccard_dist_pos = (index,config)
# This method prints the i'ths entry of config X aswell as the gold and reference entry for it.
def print_config_item_with_reference(index,config):
print("Gold:")
print(results[config]["gold_results"][index])
print("Reference:")
print(results["reference"]["results"][index])
print(f"Altered ({config}@{index}):")
print(results[config]["results"][index])
print("Biggest jaccard Distance (that is not 1):\n")
print_config_item_with_reference(biggest_jaccard_dist_pos[0],biggest_jaccard_dist_pos[1])
print("Biggest decrease in length:\n")
print_config_item_with_reference(biggest_len_inc_pos[0],biggest_len_inc_pos[1])
print("Biggest increase in length:\n")
print_config_item_with_reference(biggest_len_dec_pos[0],biggest_len_dec_pos[1])
print("Smallest Jaccard Distance (that is not 0):\n")
print_config_item_with_reference(smallest_jaccard_dist_pos[0],smallest_jaccard_dist_pos[1])
```
**Fishy example from a kids' Java learning book.**
The code is actually about learning switch-case statements and sets an image to the corresponding number of fishes (e.g. an empty fish bowl, a fish bowl with 2 fishes, etc.).
The code examples are put into the paper repository as a separate artefact.
```
fishyKey = -1
for (key,value) in results["reference"]["gold_results"].items():
#print(value)
if "makeAFishyDecision " in value:
fishyKey = key
print("Fishy Results! \n")
print("Gold:")
print(results["reference"]["gold_results"][fishyKey])
print("Reference:")
print(results["reference"]["results"][fishyKey],"\n")
#for config in non_reference_configs:
for config in ["config_0","config_1","config_20","config_10"]:
print(f"Altered({config},{print_archetype_info(config)}):")
print(results[config]["results"][fishyKey])
entries_to_look_at = 3
longest_gold = sorted(list(results["reference"]["gold_results"].items()),reverse=True,key=lambda pair: len(pair[1]))[:entries_to_look_at]
#longest_gold
for l_gold in longest_gold:
#for config in non_reference_configs:
for config in ["config_1","config_7","config_14"]:
print_config_item_with_reference(l_gold[0],config)
print()
shortest_gold = sorted(list(results["reference"]["gold_results"].items()),reverse=True,key=lambda pair: len(pair[1]))[-3:]
#shortest_gold
for s_gold in shortest_gold:
#for config in non_reference_configs:
for config in ["config_1","config_7","config_14"]:
print_config_item_with_reference(s_gold[0],config)
print()
```
For the shortest gold standard you can clearly see that the gold standard is cut at the first @-sign.
Next, we look for certain keywords in the altered configs.
We want to inspect:
- where keyword x appears the most times
- how often keyword x appears in the results for config x
```
def find_entry_with_most_frequent_keyword(keyword):
most_keywords=0
most_keywords_pos=()
for config in non_reference_configs:
for index in list(results[config]["results"].keys()):
altered = results[config]["results"][index]
keywords = altered.lower().count(keyword)
if keywords>most_keywords:
most_keywords = keywords
most_keywords_pos = (index,config)
return most_keywords_pos
most_adds = find_entry_with_most_frequent_keyword("add")
print(f"Most occurrences of 'add':\n")
print_config_item_with_reference(most_adds[0],most_adds[1])
print()
most_gets = find_entry_with_most_frequent_keyword("get")
print(f"Most occurrences of 'get':\n")
print_config_item_with_reference(most_gets[0],most_gets[1])
print()
most_configs = find_entry_with_most_frequent_keyword("config")
print(f"Most occurrences of 'config':\n")
print_config_item_with_reference(most_configs[0],most_configs[1])
print()
"""
looks for a certain keyword in the results.
If a config is specified, it only tries to look for that config.
Searches in all configs otherwise.
Returns the entries containing the keyword as a list of pairs (index,config)
"""
def find_entries_with_keyword(keyword,config=None):
entries=[]
if config:
for index in list(results[config]["results"].keys()):
altered = results[config]["results"][index]
if keyword in altered.lower():
entries.append((index,config))
else:
for config in non_reference_configs:
for index in list(results[config]["results"].keys()):
altered = results[config]["results"][index]
if keyword in altered.lower():
entries.append((index,config))
return entries
print(f"Altered-Entries with 'add':\t{len(find_entries_with_keyword('add'))}")
print(f"Altered-Entries with 'get':\t{len(find_entries_with_keyword('get'))}")
print(f"Altered-Entries with 'set':\t{len(find_entries_with_keyword('set'))}")
# The configs 6, 7 and 8 are the "mixed names(pseudo)" transformations
print(f"Entries with 'add' in 'reference':\t{len(find_entries_with_keyword('add','reference'))}")
print(f"Entries with 'add' in 'config_6':\t{len(find_entries_with_keyword('add','config_6'))}")
print(f"Entries with 'add' in 'config_7':\t{len(find_entries_with_keyword('add','config_7'))}")
print(f"Entries with 'add' in 'config_8':\t{len(find_entries_with_keyword('add','config_8'))}")
print()
keyword="mock"
for config in non_reference_configs:
print(f"Entries with '{keyword}' in '{config}':\t{len(find_entries_with_keyword(keyword,config))}")
```
There seems to be no significant change for getters, setters and similar items.
They appear mostly evenly distributed and stay that way.
**Differences between AddVar(5) and AddVar(10)**
The next examples look into the BLEU differences and why AddVar(10, random) is doing better than AddVar(5, random).
```
%%time
# Config 16 and 17 are add_var(5,random) and add_var(10,random)
add_var_diffs = []
for index in list(results["config_16"]["results"].keys()):
addvar5result=results["config_16"]["results"][index]
addvar10result=results["config_17"]["results"][index]
reference=results["reference"]["results"][index]
gold=results["reference"]["gold_results"][index]
addvar5bleu = bleu_wrapper(addvar5result,gold)
addvar10bleu = bleu_wrapper(addvar10result,gold)
diff = (addvar5bleu-addvar10bleu,index)
add_var_diffs.append(diff)
add_var_diffs=sorted(add_var_diffs,key=lambda p:p[0])
for worsties in add_var_diffs[-5:]:
print(f"Worsened bleu by {worsties[0]}")
print("Gold:")
print(f"\t{results['reference']['gold_results'][worsties[1]]}")
print("Reference:")
print(f"\t{results['reference']['results'][worsties[1]]}")
print("AddVar(5):")
print(f"\t{results['config_16']['results'][worsties[1]]}")
print("AddVar(10):")
print(f"\t{results['config_17']['results'][worsties[1]]}")
print()
for besties in add_var_diffs[:5]:
print(f"Bettered bleu by {besties[0]}")
print("Gold:")
print(f"\t{results['reference']['gold_results'][besties[1]]}")
print("Reference:")
print(f"\t{results['reference']['results'][besties[1]]}")
print("AddVar(5):")
print(f"\t{results['config_16']['results'][besties[1]]}")
print("AddVar(10):")
print(f"\t{results['config_17']['results'][besties[1]]}")
print()
```
However, these examples are not helpful: they only show that the biggest BLEU movements are in getters and setters, which is the same behaviour as in the other, non-addvar entries.
## Jaccard Distances
The following cells inspect the Jaccard distances.
For now, I looked mostly into jaccard(config, reference), but the same plots can be re-done for jaccard(config, gold).
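The `jaccard_wrapper` used below is defined earlier in the notebook; as a rough assumption of what it computes, a minimal word n-gram Jaccard distance could look like this (a sketch, not the notebook's actual implementation):

```python
def ngram_jaccard_distance(a, b, n=1):
    """Jaccard distance between the word n-gram sets of two strings."""
    def ngrams(text):
        toks = text.split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    set_a, set_b = ngrams(a), ngrams(b)
    if not set_a and not set_b:
        return 0.0  # two empty texts count as identical
    return 1.0 - len(set_a & set_b) / len(set_a | set_b)

print(ngram_jaccard_distance("returns the name", "returns the name"))  # 0.0
print(ngram_jaccard_distance("returns the name", "sets the name", n=2))
```

A distance of 0 means the two summaries share all their n-grams; 1 means they share none.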
```
"""
This method requires the xs and ys to be sorted!
Without matching indices it does not make any sense.
"""
def calculate_jaccard_distances(xs,ys,ngrams=1):
agg = []
indX = len(xs)
indY = len(ys)
if indX != indY:
raise IndexError()
else:
running_index = 0
while running_index < indX:
agg.append(jaccard_wrapper(xs[running_index],ys[running_index],ngrams))
running_index = running_index + 1
return agg
jaccs = {}
for config in non_reference_configs:
distances = calculate_jaccard_distances(results["reference"]["results"],results[config]["results"])
jaccs[config]=distances
plt.figure(figsize=(20,12))
# use histplot so the figure/title settings above apply (displot creates its own figure)
sns.histplot(
distances,
kde=True,
bins=20
)
plt.title(f"Histogram of JaccardDistances for {config}\n({print_archetype_info(config)})")
plt.xlabel("JaccardDistance \n Reference to Altered")
plt.ylabel("# of Entries")
plt.xlim(0,1)
plt.ylim(0,10000)
plt.savefig(f'images/{config}_jaccard_histogram.png')
plt.show()
jaccs_n2 = {}
for config in non_reference_configs:
distances = calculate_jaccard_distances(results["reference"]["results"],results[config]["results"],ngrams=2)
jaccs_n2[config]=distances
plt.figure(figsize=(20,12))
# use histplot so the figure/title settings above apply (displot creates its own figure)
sns.histplot(
distances,
kde=True,
bins=20
)
plt.title(f"Histogram of JaccardDistances for {config}\n({print_archetype_info(config)})")
plt.xlabel("JaccardDistance (ngram=2) \n Reference to Altered")
plt.ylabel("# of Entries")
plt.xlim(0,1)
plt.ylim(0,10000)
plt.savefig(f'images/{config}_jaccard_ngram2_histogram.png')
plt.show()
jacc_data = []
for config in jaccs.keys():
jacc_data.append((config,config_archetypes[config],jaccs[config]))
df = pd.DataFrame(jacc_data)
df.columns=["config","archetype","jacc_dist"]
df = df.explode('jacc_dist')
df['jacc_dist'] = df['jacc_dist'].astype('float')
df = df.dropna()
plt.figure(figsize=(30,12))
sns.boxplot(
x="config",
y="jacc_dist",
hue="archetype",
#width=4.5,
dodge =False,
data=df)
plt.grid()
plt.title(f"Boxplot of Jaccard_Distances")
plt.ylabel("Jaccard Distance")
plt.ylim(0,1)
plt.savefig(f'images/jaccard_distances_boxplot.png')
plt.show()
plt.figure(figsize=(30,12))
sns.violinplot(
x="config",
hue="archetype",
y="jacc_dist",
data=df,
#width=5.5,
inner=None,
dodge=False
)
plt.ylim(0,1)
plt.savefig(f'images/jaccard_distances_violinplot.png')
plt.show()
del df
```
## Pandas
This is a different approach to gather all data in a pandas frame and then make 3 dimensional plots and other funny things.
```
%%time
# Driver for the time is the jaccard distance
result_df_data = []
for config in non_reference_configs:
arch = config_archetypes[config]
ts = results[config]["properties"]["transformations"]
index = 0
while index < len(results[config]["results"]):
ref = results["reference"]["results"][index]
res = results[config]["results"][index]
gold = results["reference"]["gold_results"][index]
bleu = results[config]["bleu_values"][index]
ref_bleu = results["reference"]["bleu_values"][index]
diff = res != ref
perfect = gold == res
# Distance Gold<>ConfigText
jacc_1 = jaccard_wrapper(res,gold,ngram=1)
jacc_2 = jaccard_wrapper(res,gold,ngram=2)
# Distance Reference<>ConfigText
jacc_1_ref = jaccard_wrapper(ref,res,ngram=1)
jacc_2_ref = jaccard_wrapper(ref,res,ngram=2)
result_df_data.append(
(config,arch,archetype_mt_mapping[arch],ts,index,
bleu,ref_bleu,
diff,perfect,
jacc_1,jacc_2,
jacc_1_ref,jacc_2_ref,
gold,ref,res)
)
index = index + 1
result_df = pd.DataFrame(result_df_data)
result_df.columns=[
"config","archetype","MT","transformations","index",
"bleu","reference_bleu",
"difference","perfect_match",
"jaccard_n1","jaccard_n2","jaccard_n1_reference","jaccard_n2_reference",
"gold_result","reference_result","config_result"
]
#result_df = result_df.dropna()
result_df.head()
```
### Differences
Looking for Differences in results - similar to Jaccard Distance
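Since `difference` is boolean, the bar heights in the plot below correspond to the share of changed entries per config; the same numbers can be read off directly (illustrated on a tiny made-up stand-in for the real `result_df`):

```python
import pandas as pd

# tiny stand-in for the real result_df built above
result_df = pd.DataFrame({
    "config": ["config_1", "config_1", "config_2", "config_2"],
    "difference": [True, False, True, True],
})
# mean of a boolean column = fraction of True entries
share_changed = result_df.groupby("config")["difference"].mean()
print(share_changed)  # config_1 -> 0.5, config_2 -> 1.0
```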
```
plt.figure(figsize=(21, 7))
plt.grid()
plt.title('Result-Differences per Configuration')
sns.barplot(
x="config",y="difference",
data=result_df,
hue="MT",
dodge =False
)
plt.savefig(f'images/number_of_diffs_by_config.png')
plt.show()
```
### RQ2 Results
For RQ2 we first need simple counts and percentages of the changed entries.
```
%%time
totalPerO = result_df[(result_df["transformations"]=='1')].count()[0]
firstOdiff = result_df[(result_df["transformations"]=='1') & (result_df["difference"])].count()[0]
fifthOdiff = result_df[(result_df["transformations"]=='5') & (result_df["difference"])].count()[0]
tenthOdiff = result_df[(result_df["transformations"]=='10') & (result_df["difference"])].count()[0]
print("Total number of entries per Order:",totalPerO)
print(f"Changes in first order {firstOdiff} ({round(100*firstOdiff/totalPerO,1)}%)")
print(f"Changes in fifth order {fifthOdiff} ({round(100*fifthOdiff/totalPerO,1)}%)")
print(f"Changes in tenth order {tenthOdiff} ({round(100*tenthOdiff/totalPerO,1)}%)")
```
### RQ1 Results
Some statistics on the changed and affected results for the first-order MT changes.
```
plot_df = result_df.copy()
plot_df = plot_df[plot_df["transformations"]=='1']
plot_df["jacc1_diff"] = plot_df["jaccard_n1"]-plot_df["jaccard_n1_reference"]
plot_df["abs_jacc1_diff"] = abs(plot_df["jacc1_diff"])
plot_df["bleu_diff"] = plot_df["bleu"]-plot_df["reference_bleu"]
plot_df["abs_bleu_diff"]=abs(plot_df["bleu_diff"])
diffed_df = plot_df[plot_df["jaccard_n1_reference"]>0]
plot_df.head(3)
post_1stMT_count = plot_df.count()[0]
count_jacc_samsies = plot_df[plot_df["jaccard_n1_reference"]==0].count()[0]
count_jacc_diffs = diffed_df.count()[0]
count_bleu_diffs = plot_df[plot_df["abs_bleu_diff"]>0].count()[0]
avg_bleu_diff = np.mean(plot_df[plot_df["abs_bleu_diff"]>0]["abs_bleu_diff"])
print("Entries for first order",post_1stMT_count)
print("Jaccard Changes:",count_jacc_diffs)
print(f"BLEU Changes: {count_bleu_diffs} ({round(100*count_bleu_diffs/post_1stMT_count,1)}%)")
print("Average Bleu-Diff:",avg_bleu_diff)
avg_jacc_diff = np.mean(plot_df[plot_df["jaccard_n1_reference"]>0]["jaccard_n1_reference"])
median_jacc_diff = np.median(plot_df["jaccard_n1_reference"])
iqr_jacc_diff = stats.iqr(plot_df["jaccard_n1_reference"])
print("Average Jacc Diff:",avg_jacc_diff)
print("Median Jacc Diff:",median_jacc_diff)
print("IQR Jacc Diffs:",iqr_jacc_diff)
```
Histogram of changes
(showing only the non-zero changes)
```
fig, axes = plt.subplots(1, 2, figsize=(16.5, 8.8))
sns.histplot(ax=axes[0],data=plot_df,
x="jaccard_n1_reference",
bins=25)
axes[0].set(xlim=(0,1.01))
axes[0].set_xlabel('Difference in Jaccard Distance \n Reference <> First Order MTs', fontsize=23)
axes[0].set_ylabel('Number of Entries', fontsize=19)
axes[0].set_xticks(np.arange(0,1.2,0.2))
axes[0].set_xticklabels([round(x,1) for x in np.arange(0,1.2,0.2)],fontsize=17)
axes[0].set_yticklabels([int(a) for a in axes[0].get_yticks()],fontsize=17)
sns.histplot(ax=axes[1],data=diffed_df,
x="abs_bleu_diff",
bins=50)
axes[1].set(xlim=(0,1.01))
axes[1].set_xlabel('Absolute Difference in BLEU4-Score \n for Summaries with Jaccard-Delta', fontsize=23)
axes[1].set_ylabel('Number of Entries', fontsize=19)
axes[1].set_xticks(np.arange(0,1.2,0.2))
axes[1].set_xticklabels([round(x,1) for x in np.arange(0,1.2,0.2)],fontsize=17)
axes[1].set_yticklabels([int(a) for a in axes[1].get_yticks()],fontsize=17)
plt.savefig(f'images/overview_plot_changes_of_firstorder_mts_small.png')
plt.show()
#del plot_df
# One example where the MT-IF creates "growth" in the result
result_df[(result_df["MT"]=="MT-IF") & (result_df["index"]==327)]["reference_bleu"]
plt.figure(figsize=(21, 7))
plt.grid()
sns.scatterplot(x="jaccard_n1",y="bleu",hue="config",style="archetype",data=result_df)
plt.figure(figsize=(15, 15))
plt.grid()
plt.title("Scatterplot of Entries \n Bleu<>ReferenceBleu")
sns.scatterplot(x="reference_bleu",y="bleu",hue="config",size="jaccard_n1",style="archetype",data=result_df[result_df.index % 10 == 0])
plt.savefig(f'images/scatterplot_bleu_reference.png')
```
## Shapiro Tests
```
from scipy import stats
jaccs = result_df[result_df["config"]=="config_1"]["reference_bleu"].to_numpy()
shapiro_test = stats.shapiro(jaccs)
print(f"reference bleu score",shapiro_test)
for config in non_reference_configs:
df_mask=result_df['config']==config
jaccs1 = result_df[df_mask]["jaccard_n1"].to_numpy()
jaccs2 = result_df[df_mask]["jaccard_n2"].to_numpy()
shapiro_test1 = stats.shapiro(jaccs1)
shapiro_test2 = stats.shapiro(jaccs2)
print(f"jacc1_dist {config}",shapiro_test1)
print(f"jacc2_dist {config}",shapiro_test2)
agg_df_data = []
for config in non_reference_configs:
df_mask=result_df['config']==config
bleu_data = result_df[df_mask]["bleu"].to_numpy()
jacc_1_data = result_df[df_mask]["jaccard_n1"].to_numpy()
jacc_2_data = result_df[df_mask]["jaccard_n2"].to_numpy()
arch = config_archetypes[config]
ts = results[config]["properties"]["transformations"]
shapiro_test = stats.shapiro(bleu_data)
bleu_median = np.median(bleu_data)
bleu_mean = np.mean(bleu_data)
bleu_iqr = stats.iqr(bleu_data)
jacc1_median = np.median(jacc_1_data)
jacc1_mean = np.mean(jacc_1_data)
jacc1_iqr = stats.iqr(jacc_1_data)
jacc2_median = np.median(jacc_2_data)
jacc2_mean = np.mean(jacc_2_data)
jacc2_iqr = stats.iqr(jacc_2_data)
config_entry = (config,arch,ts,
shapiro_test,
bleu_median,bleu_mean,bleu_iqr,
jacc1_median,jacc1_mean,jacc1_iqr,
jacc2_median,jacc2_mean,jacc2_iqr)
agg_df_data.append(config_entry)
#print(f"delta-bleus {config}",median,mean)
agg_df = pd.DataFrame(agg_df_data)
del agg_df_data
agg_df.columns=[
"config","archetype","transformations",
"bleu_shapiro_test",
"bleu_median","bleu_mean","bleu_iqr",
"jacc1_median","jacc1_mean","jacc1_iqr",
"jacc2_median","jacc2_mean","jacc2_iqr",
]
agg_df.head()
plt.figure(figsize=(12, 10))
plt.title("BLEU IQR per archetype and number of transformations")
pivoted_data = agg_df.pivot(index='transformations', columns='archetype', values='bleu_iqr')
pivoted_data = pivoted_data.sort_values("transformations",key=lambda col:col.astype(int),ascending=True)
sns.heatmap(pivoted_data, annot=True, fmt="g",cmap='viridis')
plt.savefig(f'images/heatmap_nonzero_shapiro_pvalues.png')
plt.show()
#sns.heatmap(x="transformations",y="archetype",hue="delta_tscore_iqr",center=0,data=filtered_agg_df)
plt.figure(figsize=(21, 7))
plt.grid()
plt.title('bleu IQR')
sns.barplot(
x="config",y="bleu_iqr",
data=agg_df,
hue="archetype",
dodge =False
)
plt.savefig(f'images/barplot_deltatscore_iqrs.png')
plt.show()
```
## Non - Setter / Getter Split
As we looked into the data, there seem to be a lot of items for setters and getters whose gold-standard text is just something like "set the XY".
This is rather noisy, so we want to split the data into "Setter", "Getter" and "Other" and have a look at each group.
```
get_indizes=[]
set_indizes=[]
low_word_indizes=[]
other_indizes=[]
for index in list(results["reference"]["results"].keys()):
gold = results["reference"]["gold_results"][index]
words = len(gold.split())
if "get" in gold.lower() and words < 10:
get_indizes.append(index)
elif "set" in gold.lower() and not "setting" in gold.lower() and words < 10:
set_indizes.append(index)
elif words < 5:
low_word_indizes.append(index)
print("gets:",len(get_indizes))
print("sets:",len(set_indizes))
print("low_words:",len(low_word_indizes))
other_indizes = [ i for i in list(results["reference"]["results"].keys())
if not i in get_indizes + set_indizes + low_word_indizes ]
print("remaining indizes:",len(other_indizes))
# Comment this in for sampling the remaining indizes
#for i in other_indizes[:50]:
# print(results["reference"]["gold_results"][i])
ref_get_bleus = [results["reference"]["bleu_values"][index] for index in get_indizes]
ref_getter_bleu = np.mean(ref_get_bleus)
print("get:",ref_getter_bleu)
ref_set_bleus = [results["reference"]["bleu_values"][index] for index in set_indizes]
ref_setter_bleu = np.mean(ref_set_bleus)
print("set:",ref_setter_bleu)
ref_low_word_bleus = [results["reference"]["bleu_values"][index] for index in low_word_indizes]
ref_lowwords_bleu = np.mean(ref_low_word_bleus)
print("low words:",ref_lowwords_bleu)
ref_cleaned_bleus = [results["reference"]["bleu_values"][index] for index in other_indizes]
ref_remaining_bleu = np.mean(ref_cleaned_bleus)
print("remaining indizes:",ref_remaining_bleu)
split_bleus_data = []
# For every archetype, add as the 0 transformation point the reference
for archetype in set(config_archetypes.values()):
datapoint = ("reference",archetype,0,
results["reference"]["bleu"]/100,
ref_getter_bleu,ref_setter_bleu,ref_lowwords_bleu,ref_remaining_bleu)
split_bleus_data.append(datapoint)
# For all configs, make a datapoint with the separated bleus
for config in non_reference_configs:
archetype = config_archetypes[config]
transformations = results[config]["properties"]["transformations"]
getter_bleus = [results[config]["bleu_values"][index] for index in get_indizes]
getter_agg_bleu = np.mean(getter_bleus)
setter_bleus = [results[config]["bleu_values"][index] for index in set_indizes]
setter_agg_bleu = np.mean(setter_bleus)
low_word_bleus = [results[config]["bleu_values"][index] for index in low_word_indizes]
lowwords_agg_bleu = np.mean(low_word_bleus)
other_bleus = [results[config]["bleu_values"][index] for index in other_indizes]
other_agg_bleu = np.mean(other_bleus)
datapoint = (config,archetype,transformations,
results[config]["bleu"]/100,
getter_agg_bleu,setter_agg_bleu,lowwords_agg_bleu,other_agg_bleu)
split_bleus_data.append(datapoint)
# Make a dataframe from the values
split_bleus_df = pd.DataFrame(split_bleus_data)
split_bleus_df.columns = [
"config","archetype","transformations",
"bleu",
"getter_bleu","setter_bleu","low_word_bleu","remaining_bleu"
]
split_bleus_df["transformations"] = split_bleus_df["transformations"].astype("int")
split_bleus_df = split_bleus_df.sort_values(["archetype","transformations"])
split_bleus_df.head()
split_bleus_data_type_b = []
# For every archetype, add as the 0 transformation point the reference
for archetype in set(config_archetypes.values()):
split_bleus_data_type_b.append(
("reference",archetype,0,"getter_bleu",ref_getter_bleu)
)
split_bleus_data_type_b.append(
("reference",archetype,0,"setter_bleu",ref_setter_bleu)
)
split_bleus_data_type_b.append(
("reference",archetype,0,"low_word_bleu",ref_lowwords_bleu)
)
split_bleus_data_type_b.append(
("reference",archetype,0,"remaining_bleu",ref_remaining_bleu)
)
split_bleus_data_type_b.append(
("reference",archetype,0,"bleu",results["reference"]["bleu"]/100)
)
# For all configs, make a datapoint with the separated bleus
for config in non_reference_configs:
archetype = config_archetypes[config]
transformations = results[config]["properties"]["transformations"]
getter_bleus = [results[config]["bleu_values"][index] for index in get_indizes]
getter_agg_bleu = np.mean(getter_bleus)
split_bleus_data_type_b.append(
(config,archetype,transformations,"getter_bleu",getter_agg_bleu)
)
setter_bleus = [results[config]["bleu_values"][index] for index in set_indizes]
setter_agg_bleu = np.mean(setter_bleus)
split_bleus_data_type_b.append(
(config,archetype,transformations,"setter_bleu",setter_agg_bleu)
)
low_word_bleus = [results[config]["bleu_values"][index] for index in low_word_indizes]
lowwords_agg_bleu = np.mean(low_word_bleus)
split_bleus_data_type_b.append(
(config,archetype,transformations,"low_word_bleu",lowwords_agg_bleu)
)
other_bleus = [results[config]["bleu_values"][index] for index in other_indizes]
other_agg_bleu = np.mean(other_bleus)
split_bleus_data_type_b.append(
(config,archetype,transformations,"remaining_bleu",other_agg_bleu)
)
split_bleus_data_type_b.append(
(config,archetype,transformations,"bleu",results[config]["bleu"]/100 )
)
# Make a dataframe from the values
split_bleus_df_type_b = pd.DataFrame(split_bleus_data_type_b)
split_bleus_df_type_b.columns = [
"config","archetype","transformations","type","value"
]
split_bleus_df_type_b["transformations"] = split_bleus_df_type_b["transformations"].astype("int")
split_bleus_df_type_b = split_bleus_df_type_b.sort_values(["archetype","type","transformations"])
split_bleus_df_type_b.head(10)
plt.figure(figsize=(22, 7))
plt.grid()
sns.lineplot(
data=split_bleus_df_type_b,
x="transformations",
y="value",
hue="archetype",
style="type",
marker=True)
plt.xticks([0,1,5,10])
plt.ylabel("Averaged Bleu-Score")
plt.savefig(f'images/bleu_score_per_category_per_archetype.png')
plt.show()
plt.figure(figsize=(22, 7))
plt.grid()
plt.xticks([0,1,5,10])
plt.ylabel("Averaged Bleu-Score")
sns.lineplot(
data=split_bleus_df_type_b,
x="transformations",
y="value",
style="type")
plt.title("Average Bleu Score categorized into getters, setters, low words and others")
plt.xlim(0,10)
plt.savefig(f'images/bleu_score_per_category.png')
plt.show()
```
Word count in the gold standard
```
data = []
for index in results["reference"]["gold_results"].keys():
words = len(results["reference"]["gold_results"][index].split())
data.append(words)
plt.figure(figsize=(15, 6))
plt.grid()
sns.histplot(data,bins=50)
plt.title("Distribution of words in gold standard")
plt.xlabel("# of words")
plt.ylabel("# of entries")
plt.xlim(0,100)
plt.xticks(np.arange(0,100,5))
plt.savefig(f'images/word_distribution_goldstandard.png')
plt.show()
del words,data
data = []
for index in results["reference"]["results"].keys():
words = len(results["reference"]["results"][index].split())
data.append(words)
plt.figure(figsize=(15, 6))
plt.grid()
sns.histplot(data,bins=50)
plt.title("Distribution of words in reference standard")
plt.xlabel("# of words")
plt.ylabel("# of entries")
plt.xlim(0,100)
plt.xticks(np.arange(0,100,5))
plt.savefig(f'images/word_distribution_reference.png')
plt.show()
del words,data
```
## Significance Tests (Wilcoxon & Friedman)
To check whether there are (statistically) significant differences in the groups
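As a minimal illustration of the paired test used below: `scipy.stats.wilcoxon` compares matched score pairs and returns a statistic and p-value (the score values here are made up):

```python
from scipy import stats

# made-up paired BLEU scores for one config vs. the reference
ref_bleus    = [0.31, 0.42, 0.55, 0.28, 0.61, 0.47, 0.52, 0.39]
config_bleus = [0.29, 0.40, 0.50, 0.30, 0.58, 0.45, 0.49, 0.37]
stat, pvalue = stats.wilcoxon(ref_bleus, config_bleus, alternative="two-sided")
print(stat, pvalue)  # small p-values indicate a significant paired difference
```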
```
ref_bleus = results["reference"]["bleu_values"]
wilcoxon_data = []
for config in non_reference_configs:
config_bleus = results[config]["bleu_values"]
archetype = config_archetypes[config]
transformations = results[config]["properties"]["transformations"]
wilcoxon_result = stats.wilcoxon(ref_bleus,config_bleus)
statistic = wilcoxon_result[0]
pvalue = wilcoxon_result[1]
twosided_wilcoxon_result = stats.wilcoxon(ref_bleus,config_bleus,alternative="two-sided")
twosided_statistic = twosided_wilcoxon_result[0]
twosided_pvalue = twosided_wilcoxon_result[1]
datapoint = (config,archetype,transformations,
#statistic,pvalue,
twosided_statistic,twosided_pvalue)
wilcoxon_data.append(datapoint)
wilcoxon_df = pd.DataFrame(wilcoxon_data)
wilcoxon_df.columns = ["config","archetype","transformations",
#"wilcoxon_statistics","wilcoxon_pvalue",
"twosided_wilcoxon_statistics","twosided_wilcoxon_pvalue"]
del config_bleus, ref_bleus
wilcoxon_df.head(7)
%%time
ref_bleus = results["reference"]["bleu_values"]
friedman_data = []
for configA in non_reference_configs:
configA_bleus = results[configA]["bleu_values"]
archetypeA = config_archetypes[configA]
transformationsA = results[configA]["properties"]["transformations"]
for configB in non_reference_configs:
configB_bleus = results[configB]["bleu_values"]
archetypeB = config_archetypes[configB]
transformationsB = results[configB]["properties"]["transformations"]
friedman_result = stats.friedmanchisquare(ref_bleus,configA_bleus,configB_bleus)
#print(friedman_result)
statistic = friedman_result[0]
pvalue = friedman_result[1]
datapoint = (configA,archetypeA,transformationsA,
configB,archetypeB,transformationsB,
statistic,pvalue)
friedman_data.append(datapoint)
friedman_df = pd.DataFrame(friedman_data)
friedman_df.columns = [
"configA","archetypeA","transformationsA",
"configB","archetypeB","transformationsB",
"friedman_statistics","friedman_pvalue"]
del configB_bleus,configA_bleus, ref_bleus
friedman_df.head()
plt.figure(figsize=(12, 10))
plt.title("Friedman-PValue \nof ConfigA<>ConfigB<>Reference")
ffriedman_df = friedman_df.copy()
ffriedman_df['configA'] = friedman_df['configA'].apply(config_num)
ffriedman_df['configA'].astype(int)
ffriedman_df['configB'] = friedman_df['configB'].apply(config_num)
ffriedman_df['configB'].astype(int)
pivoted_data = ffriedman_df.pivot(index='configA', columns='configB', values='friedman_pvalue')
sns.heatmap(pivoted_data, annot=False, fmt="g",cmap='viridis')
plt.savefig(f'images/friedman_pvalues_bleuscore.png')
plt.show()
plot_df = result_df[(result_df["difference"])]
plot_df = plot_df[(plot_df["transformations"]=='1')]
plot_df.info()
```
# Export
This can be used to print a PDF (or HTML) of the notebook. Comment it in if you want to do so.
`--to=pdf` takes quite a while; `--to=html` is pretty fast.
```
%%time
# export to csv as Annibale requested
method_dict = {}
for i in get_indizes:
method_dict[i]="Getter"
for i in set_indizes:
method_dict[i]="Setter"
for i in low_word_indizes:
method_dict[i]="Low_Words"
for i in other_indizes:
method_dict[i]="Normal"
csv_export_data = []
for index in results["reference"]["results"].keys():
ref_data = results["reference"]["results"][index]
gold_data = results["reference"]["gold_results"][index]
ref_bleu = results["reference"]["bleu_values"][index]
ref_length = len(ref_data)
ref_word_length = len(ref_data.split())
ref_jacc1_to_ref = 0
ref_jacc_1_to_gold = jaccard_wrapper(ref_data,gold_data)
ref_perfect = gold_data == ref_data
method_type = method_dict[index]
ref_datapoint = (
"reference","none","none","0", method_type,index,
#ref_data,
ref_bleu,ref_jacc1_to_ref,ref_jacc_1_to_gold,
ref_length,ref_word_length,
False,ref_perfect
)
csv_export_data.append(ref_datapoint)
for config in non_reference_configs:
conf_data = results[config]["results"][index]
#print(config,conf_data,ref_data)
arch = config_archetypes[config]
mt = archetype_mt_mapping[arch]
ts = results[config]["properties"]["transformations"]
bleu = results[config]["bleu_values"][index]
length = len(conf_data)
word_length = len(conf_data.split())
diff = conf_data != ref_data
perfect = conf_data == gold_data
result_df_line = result_df[(result_df["config"]==config) & (result_df["index"]==index)]
jacc_1_to_ref = result_df_line["jaccard_n1_reference"].iloc[0]
jacc_1_to_gold = result_df_line["jaccard_n1"].iloc[0]
conf_datapoint = (
config,arch,mt,ts, method_type,index,
#ref_data,
bleu,jacc_1_to_ref,jacc_1_to_gold,
length,word_length,
diff, perfect
)
csv_export_data.append(conf_datapoint)
#print(index)
csv_export_df = pd.DataFrame(csv_export_data)
csv_export_df.columns = [
"config","archetype","MT","transformations","method_type","entry",
"bleu_score",
"jaccard_distance_to_reference","jaccard_distance_to_gold",
"length_in_characters", "length_in_words",
"different_to_ref", "perfect_match_with_gold"
]
csv_export_df.to_csv("./exports/bleus.csv")
csv_export_df.head(5)
del csv_export_df
!jupyter nbconvert --to=pdf --output-dir=./exports Evaluation.ipynb
```
---
#### 1. If you were to plot the point (−1,−7) what would be the correct method? Select the two options that complete the blank spaces in the following statement: 'Starting from the origin ________ and ________'
##### Ans:
- Move 1 unit to the left in the horizontal direction
- Move 7 units down in the vertical direction
#### 2. Write the condition that describes the interval for values of x: (−20,13].
Notation:
Use <= for the ≤ symbol
Use >= for the ≥ symbol
For example:
to represent 2 ≤ x < 5, simply write 2<=x<5.
(Write your answer with no spaces in between characters: e.g. 2<=x<5)
##### Ans: -20<x<=13
#### 3. Write the interval notation for the condition: −3 ≤ x ≤ 1.
Notation:
Use the following notation to represent +∞ and −∞:
Start the expression with $ and end with $, no spaces between the $ symbols.
For example:
to represent (−∞,−2), simply write $(-\infty,-2)$ with no spaces in between the $ symbols.
to represent 2 < x < 5, simply write (2,5).
(Write your answer with no spaces in between characters)
##### Ans: [-3,1]
#### 4. Consider the function f(x)=\frac{3}{x}. Prepare a table of values to plot the graph of f(x). Which of the following options are good practice or you will observe in your table of values? Select all that apply. (0.90 / 1)
##### Ans:
- The graph has a vertical asymptote at x=0
- Include symmetrical values of x
- The domain of this function is R−{0}
- Include positive and negative values of x
- The graph has a horizontal asymptote at y=0
- Cannot calculate the function for x=0
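A quick sketch (not part of the quiz) of building such a table of values in Python, skipping x = 0 where f(x) = 3/x is undefined; the helper name is my own:

```python
def table_of_values(f, xs):
    """Evaluate f at each x, marking points where f is undefined with None."""
    rows = []
    for x in xs:
        try:
            rows.append((x, round(f(x), 2)))
        except ZeroDivisionError:
            rows.append((x, None))  # candidate for a vertical asymptote
    return rows

# symmetrical, positive and negative values of x around 0
xs = [-3, -2, -1, 0, 1, 2, 3]
print(table_of_values(lambda x: 3 / x, xs))
```

The symmetric values make the odd symmetry of 3/x visible, and the skipped x = 0 signals the vertical asymptote.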
#### 4. Consider the function f(x)=\frac{3}{(-3-x)^{2}} Prepare a table of values to plot the graph of f(x). Which of the following options are good practice or you will observe in your table of values? Select all that apply.
##### Ans:
(0.90 / 1)
- Include more values of x close to 0 than around any other number
- Include symmetrical values of x
- The graph has a horizontal asymptote at y=0
- Include positive and negative values of x
#### 5. Consider the function f(x)=7x^{2}. Prepare a table of values to plot the graph of f(x). Which of the following options are good practice or you will observe in your table of values? Select all that apply. (0.90 / 1)
##### Ans:
- Include positive and negative values of x
- The graph of f(x) as a vertex at (0,0)
- Include symmetrical values of x
#### 5. Consider the function f(x)=x^2+8x+16. Prepare a table of values to plot the graph of f(x). Which of the following options are good practice or you will observe in your table of values? Select all that apply.
##### Ans:
- Include symmetrical values of x
- The graph of f(x) as a vertex at (−4,0)
- Include positive and negative values of x
#### 6. In an attempt to plot the graph of the function f(x)=\frac{1}{x^{2}} a learner produced the graph below:
<img src="https://d3c33hcgiwev3.cloudfront.net/imageAssetProxy.v1/uw2DjOQOEeiAgQrXx6bp4g_c504a6b3c64a152f809c300923a373ed_graph_1_over_x_squared_good_window_asymptotes_joined_up_points.png?expiry=1555200000000&hmac=OQw2YUucm-gbnZBzKeQQf-cCKf_TzFVRFzs5vjN4-Wo" alt="" >
The 'crosses' mark points that were plotted from the table of values.
The orange line is the intended graph.
There are also interrupted lines in green marked on the plot.
Select all statements that are true.
##### Ans:
(0.6/1)
- more points for x greater than 5 and also lesser than −5 need to be calculated and plotted
- the green lines represent the asymptotes
(0.60 / 1)
- the green lines represent the asymptotes
- more points near x=0 need to be calculated and plotted
#### 7. In an attempt to plot the graph of the function f(x)=\frac{1}{x^{2}} a learner produced the graph below:
<img src="https://d3c33hcgiwev3.cloudfront.net/imageAssetProxy.v1/HSEfyuQbEeiixgqCUDoEfA_75ac3990323a81c3269df1bbe992ba64_graph_1_over_x_squared_good_window_asymptotes.png?expiry=1555200000000&hmac=GRyTCLJu1zrehw1CigjaaYrR31hmm493xqYsiXw_FLo" alt="" >
The 'crosses' mark points that were plotted from the table of values.
The orange line is the intended graph.
There are also interrupted lines in green marked on the plot.
Select all statements that are true.
##### Ans:
(0.8/1)
- the orange line should not continue between x=−0.1 and x=0.1 because there are no points plotted for those values of x
- the graph is complete
- the green lines represent the asymptotes
(0.75 / 1)
- the green lines represent the asymptotes
- more points for x greater than 5 and also lesser than −5 need to be calculated and plotted
- the graph is complete
#### 8. In an attempt to plot the graph of the function f(x)=\frac{1}{x^{2}} a learner produced the graph below:
<img src="https://d3c33hcgiwev3.cloudfront.net/imageAssetProxy.v1/e2xjF-QfEeiixgqCUDoEfA_7cc7c787164b8101375eeccfa700a933_graph_1_over_x_squared_some_points_correct_line_bad_window.png?expiry=1555200000000&hmac=FrwLwkC3QQsZdumCp8dJs7SAIsJlS61dlCmAPcu7lJ0" alt="" >
The 'crosses' mark points that were plotted from the table of values.
The orange line is the intended graph.
Select all statements that are true.
##### Ans:
(0.8/1)
- the line of equation y=0 is an asymptote of this graph
- more points for x greater than 6 and also lesser than −6 need to be calculated and plotted
- the choice of window for the graph does not fully portray the behaviour of the function
(0.80 / 1)
- it is absolutely crucial to calculate and plot more points for x between −1 and 1
- the asymptotic behaviour near x=0 is not portrayed in the plot
- the choice of window for the graph does not fully portray the behaviour of the function
- the line of equation y=0 is an asymptote of this graph
#### 9. What is wrong with this plot with table of values:
| x | −5 | −4 | −3 | −2 | −1 | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|---|---|---|---|
| y | 0.04 | 0.06 | 0.11 | 0.25 | 1 | 1 | 0.25 | 0.11 | 0.06 | 0.04 |
<img src="https://d3c33hcgiwev3.cloudfront.net/imageAssetProxy.v1/d6vABOQkEei5Kg7DUflKxA_3209764f99d7a56863649f8bbcf5ef27_graph_1_over_x_squared_wrong.png?expiry=1555200000000&hmac=FBYuJAz0aKZ3vMUwTOlV1hMDW7MMLxhigs7UThZbWEM" alt="" >
Select all statements that are true.
##### Ans:
(0.80 / 1)
- it excludes the zeros of the function
- the graph excludes the behaviour of the function between x=−1 and x=1
- the asymptotes are missing
(0.60 / 1)
- it excludes the zeros of the function
- the graph excludes the behaviour for x>6 and for x<−6
- nothing is wrong, it is a fair plot as it shows all the points calculated
- the graph excludes the behaviour of the function between x=−1 and x=1
#### 10. Choose all that apply:
y=\frac{1}{x^{2}+x} is transformed to y=-5+\frac{1}{(x-8)^{2}+x-8}
##### Ans:
- A shift of −5 in the y direction
- A shift of +8 in the x direction
#### 11. You may use the following kinematics equations (SUVAT equations):
s=\frac{1}{2}(u+v){t}
v^{2}=u^{2}+2as
v=u+at
A particle moves with constant acceleration 7ms^{-2}. Its initial velocity is 1ms^{-1}. Find the distance in metres the particle has travelled after 4s.
##### Ans: 60
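Checking this answer with the given SUVAT equations, using v = u + at first and then s = ½(u+v)t:

```latex
v = u + at = 1 + 7 \cdot 4 = 29\ \mathrm{m\,s^{-1}}, \qquad
s = \tfrac{1}{2}(u + v)t = \tfrac{1}{2}(1 + 29) \cdot 4 = 60\ \mathrm{m}
```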
#### 12. You may use the following kinematics equations (SUVAT equations):
s=\frac{1}{2}(u+v){t}
v^{2}=u^{2}+2as
v=u+at
A particle moves with constant acceleration. Its final velocity is 2ms^{-1} and its acceleration is -3ms^{-2}. Find the initial velocity if the particle travels a distance of 7m.
##### Ans: $\pm6.8$ (wrong)
#### 12. You may use the following kinematics equations (SUVAT equations):
s=\frac{1}{2}(u+v){t}
v^{2}=u^{2}+2as
v=u+at
A particle moves with constant acceleration. Its initial velocity is 1ms^{-1} and its final velocity is -7ms^{-1}. Find the time taken to cover the displacement of −1m.
##### Ans: 0.3
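Checking by rearranging s = ½(u+v)t for t:

```latex
t = \frac{2s}{u + v} = \frac{2 \cdot (-1)}{1 + (-7)} = \frac{-2}{-6} = \tfrac{1}{3} \approx 0.33\ \mathrm{s}
```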
---
# Imports
```
import pandas as pd
# import matplotlib.pyplot as plt
# from wordcloud import WordCloud
import os
import numpy as np
```
# Read Data
```
# read data
filepath = os.path.join(os.getcwd(), "twitterdataFinal2.xlsx")
df = pd.read_excel(filepath,sheet_name=None, dtype='object', index_col=False)
# df.head()
df.keys()
all_data = pd.DataFrame()
```
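Note: `DataFrame.append`, used in the cells below, was deprecated and removed in pandas 2.0; the equivalent with `pd.concat` looks like this (the sheet contents here are made up for illustration):

```python
import pandas as pd

all_data = pd.DataFrame()
# made-up stand-in for one sheet of the workbook
sheet = pd.DataFrame({"text": ["congrats on the win!"], "label": ["Positive"]})
# pd.concat replaces the removed all_data.append(sheet, ignore_index=True)
all_data = pd.concat([all_data, sheet], ignore_index=True)
print(all_data.shape)  # (1, 2)
```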
## congrats_data
```
congrats_data = df.get('congrats_data')
```
### Positive
```
congrats_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'] = 'Positive'
congrats_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'].value_counts()
all_data = all_data.append(congrats_data, ignore_index=True)
all_data.shape
```
## award_data
```
award_data=df.get('award_data')
```
### Positive
```
award_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'] = 'Positive'
award_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'].value_counts()
all_data = all_data.append(award_data, ignore_index=True)
all_data.shape
```
## vaccine_data
```
vaccine_data = df.get('vaccine_data')
```
### Neutral
```
vaccine_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'] = 'Neutral'
vaccine_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'].value_counts()
all_data = all_data.append(vaccine_data, ignore_index=True)
all_data.shape
```
## support_data
```
support_data = df.get('support_data')
```
### Positive
```
support_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'] = 'Positive'
support_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'].value_counts()
all_data = all_data.append(support_data, ignore_index=True)
all_data.shape
```
## gigem_data
```
gigem_data = df.get('gigem_data')
```
### Positive
```
gigem_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'] = 'Positive'
gigem_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'].value_counts()
all_data = all_data.append(gigem_data, ignore_index=True)
all_data.shape
```
## proud_data
```
proud_data = df.get('proud_data')
```
### Positive
```
proud_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'] = 'Positive'
proud_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'].value_counts()
all_data = all_data.append(proud_data, ignore_index=True)
all_data.shape
```
## program_data
```
program_data = df.get('program_data')
```
### Neutral
```
program_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'] = 'Neutral'
program_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'].value_counts()
all_data = all_data.append(program_data, ignore_index=True)
all_data.shape
```
## game_data
```
game_data = df.get('game_data')
```
### Neutral
```
game_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'] = 'Neutral'
game_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'].value_counts()
all_data = all_data.append(game_data, ignore_index=True)
all_data.shape
```
## event_data
```
event_data = df.get('event_data')
```
### Neutral
```
event_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'] = 'Neutral'
event_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'].value_counts()
all_data = all_data.append(event_data, ignore_index=True)
all_data.shape
```
### university_data
```
university_data = df.get('university_data')
```
### Neutral
```
university_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'] = 'Neutral'
university_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'].value_counts()
all_data = all_data.append(university_data, ignore_index=True)
all_data.shape
```
## TeacherStudent_data
```
TeacherStudent_data = df.get('TeacherStudent_data')
```
### Neutral
```
TeacherStudent_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'] = 'Neutral'
TeacherStudent_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'].value_counts()
all_data = all_data.append(TeacherStudent_data, ignore_index=True)
all_data.shape
```
## learn_data
```
learn_data = df.get('learn_data')
```
### Neutral
```
learn_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'] = 'Neutral'
learn_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'].value_counts()
all_data = all_data.append(learn_data, ignore_index=True)
all_data.shape
```
## community_data
```
community_data = df.get('community_data')
```
### Neutral
```
community_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'] = 'Neutral'
community_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'].value_counts()
```
### community
```
community_data.columns
community_data['User to Environment Engagement'] = 'Community'
community_data['User to Environment Engagement'].value_counts()
all_data = all_data.append(community_data, ignore_index=True)
all_data.shape
```
## crime_data
```
crime_data = df.get('crime_data')
```
### Neutral & [Negative -> if retweet/quote/reply]
```
crime_data.tweet_type.value_counts()
def reaction(row, colname): # colname
thistweet = row[colname]
if (thistweet == "original"): return "Neutral"
if (thistweet == "retweet"): return "Negative"
if (thistweet == "reply"): return "Negative"
if (thistweet == "quote"): return "Negative"
return np.nan
crime_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'] = crime_data.apply(reaction, colname='tweet_type', axis=1)
crime_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'].value_counts()
all_data = all_data.append(crime_data, ignore_index=True)
all_data.shape
```
## texas_data
```
texas_data = df.get('texas_data')
```
### Neutral
```
texas_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'] = 'Neutral'
texas_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'].value_counts()
all_data = all_data.append(texas_data, ignore_index=True)
all_data.shape
```
## aggie_data
```
aggie_data = df.get('aggie_data')
```
### Neutral
```
aggie_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'] = 'Neutral'
aggie_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'].value_counts()
all_data = all_data.append(aggie_data, ignore_index=True)
all_data.shape
```
## tamu_data
```
tamu_data = df.get('tamu_data')
```
### Neutral
```
tamu_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'] = 'Neutral'
tamu_data['Content Disposition: supportive/affirming; contradicting/critical; cannot determine'].value_counts()
all_data = all_data.append(tamu_data, ignore_index=True)
all_data.shape
```
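The per-sheet cells above all repeat the same three-line pattern. They could be collapsed into a single loop; a sketch (helper and dict names are my own, assuming the same sheet names and label column as above) that also uses `pd.concat`, since `DataFrame.append` was removed in pandas 2.0:

```python
import pandas as pd

# Exact column name used throughout the notebook above.
DISPOSITION = ('Content Disposition: supportive/affirming; '
               'contradicting/critical; cannot determine')

# Hypothetical mapping of sheet name -> disposition label, matching the cells above.
sheet_labels = {
    'congrats_data': 'Positive', 'award_data': 'Positive',
    'vaccine_data': 'Neutral', 'support_data': 'Positive',
    'gigem_data': 'Positive', 'proud_data': 'Positive',
    'program_data': 'Neutral', 'game_data': 'Neutral',
    'event_data': 'Neutral', 'university_data': 'Neutral',
    'TeacherStudent_data': 'Neutral', 'learn_data': 'Neutral',
    'community_data': 'Neutral', 'texas_data': 'Neutral',
    'aggie_data': 'Neutral', 'tamu_data': 'Neutral',
}

def label_sheets(df_dict, labels):
    """Stamp each sheet with its disposition label and stack them into one frame."""
    frames = []
    for name, label in labels.items():
        sheet = df_dict.get(name)
        if sheet is None:
            continue
        sheet = sheet.copy()
        sheet[DISPOSITION] = label
        frames.append(sheet)
    return pd.concat(frames, ignore_index=True)

# all_data = label_sheets(df, sheet_labels)  # crime_data still needs its tweet_type rule
```

The `crime_data` sheet is left out of the mapping on purpose, since its label depends on `tweet_type` rather than a single constant.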
# Save all_data
```
# filepath = os.path.join(
# "\\".join([os.getcwd(), "twitterdata_for_analysis.xlsx"])
# )
# with pd.ExcelWriter(filepath, engine="xlsxwriter", mode='w',options={'strings_to_urls': False}) as writer:
# all_data.to_excel(excel_writer=writer, index=False)
```
# Operation Private Ryan
### Patch the prediction to guarantee at least a single class
### Using data leak
Every picture must have at least 1 predicted class.
Patch the predictions that are missing one with a safety net of other submissions.
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
# for dirname, _, filenames in os.walk('/kaggle/input'):
# for filename in filenames:
# print(os.path.join(dirname, filename))
# Any results you write to the current directory are saved as output.
import cv2
import os
from tqdm import tqdm
from pathlib import Path
```
Configs
```
MIN_SIZE = 1000
HOME = Path(os.environ["HOME"])
INPUT_DIR = Path(HOME/"ucsi")
SUBS = INPUT_DIR/"subs"
```
Check sub files
```
!ls -l {SUBS}
```
Set submission csv pool
```
CSV = SUBS/"emp_convex_1110_154142_submission.csv"
df = pd.read_csv(CSV)
df.head()
def checkWorthy(df):
"""
Check if the prediction is worthy
"""
df["img"] = df.Image_Label.apply(lambda x: x.split("_")[0])
df["cls"] = df.Image_Label.apply(lambda x: x.split("_")[1])
df["mark"] = (~df.EncodedPixels.isnull())*1
img_count = df.groupby("img").sum()[["mark"]].reset_index()
missing = list(img_count[img_count.mark ==0]["img"])
df["worthy"] = ~df.img.isin(missing)
df = df.drop(["img","cls","mark"],axis=1)
print(df.worthy.value_counts())
return df,missing
df,missing = checkWorthy(df)
print(len(missing), "pics are missing class")
for m in missing: print(m, end = "\t")
# This has to be an odd number
sub_fnames = list([
"resnext_miracle_raw.csv",# resnext 101 1st miracle 0.6596
"emp_convex_384x567_1117_211214_submission.csv",
# ("1110_154142_submission.csv",1/7), # enesemble
# ("b5_pla_0.72_1573548650_submission.csv",0.33333333)
# ("b5_r2_1573525424_submission.csv",0.3333333)finding
# ("dpn_131_0.73_1573574797_submission.csv",0.333333) #0.6443
"b5_fold_optimize_trian0.5_1114_150440_submission.csv", # 0.6587
# ("b6_ver2_submission.csv",1/7), #0.6527
"b6_ranger_submission.csv", #0.6556
"1114_090732_submission.csv", # resnext 101 5 folds 0.6569
"b6_fold_1114_121001_submission.csv", # b6 6 folds 0.6558
"convex_b5_fold_optimize_trian0.5_1114_150440_submission.csv", # convex b5 5 folds
"convex_b6_fold_train0.5_optima_1116_080218_submission.csv", # convex b6 5 bolds
])
sub_paths = list(SUBS/p for p in sub_fnames)
def bringOrder(df):
    return df.sort_values(by="Image_Label", ascending=True).reset_index(drop=True)
from itertools import chain
miss_labels = []
for m in missing:
miss_labels = chain(miss_labels,list("%s_%s"%(m,c) for c in ["Fish","Flower","Gravel","Sugar"]))
miss_list = list(miss_labels)
len(miss_list)
sub_dfs = list(checkWorthy(bringOrder(pd.read_csv(p)))[0] for p in sub_paths)
sub_dfs = list(df[df.Image_Label.isin(miss_list)] for df in sub_dfs)
sub_dfs = list(df[df.worthy].fillna("") for df in sub_dfs)
sample_df = sub_dfs[0]
```
### Statistic on patch helpers
```
work_df = pd.DataFrame({"Image_Label":miss_list})
i = 0
for help_df in sub_dfs:
lbl_map = dict(zip(help_df.Image_Label,help_df.worthy))
def getHelp(x):
try:
return lbl_map[x]
except:
return False
work_df["h_%s"%(i)] = work_df.Image_Label.apply(getHelp)
i+=1
cols = list("h_%s"%(h) for h in range(i))
work_df["total_worthy"] = work_df.apply(lambda x: sum(list(x[c]for c in cols)), axis=1)
work_df
print(pd.DataFrame(work_df.total_worthy.value_counts()))
print("How many rows of problem we can solve")
print(pd.DataFrame(work_df.total_worthy.apply(lambda x:x>0).value_counts()))
```
### Helper Functions
```
def rle_decode(mask_rle: str = '', shape = (350,525 )):
'''
Decode rle encoded mask.
:param mask_rle: run-length as string formatted (start length)
:param shape: (height, width) of array to return
Returns numpy array, 1 - mask, 0 - background
'''
s = mask_rle.split()
starts, lengths = [np.asarray(x, dtype=int) for x in (s[0:][::2], s[1:][::2])]
starts -= 1
ends = starts + lengths
img = np.zeros(shape[0] * shape[1], dtype=np.uint8)
for lo, hi in zip(starts, ends):
img[lo:hi] = 1
return img.reshape(shape, order='F')
def post_process(probability, threshold=0.5, min_size = MIN_SIZE):
"""
Post processing of each predicted mask, components with lesser number of pixels
than `min_size` are ignored
"""
# don't remember where I saw it
mask = cv2.threshold(np.float32(probability), threshold, 1, cv2.THRESH_BINARY)[1]
num_component, component = cv2.connectedComponents(mask.astype(np.uint8))
predictions = np.zeros((350, 525), np.float32)
num = 0
for c in range(1, num_component):
p = (component == c)
if p.sum() > min_size:
predictions[p] = 1
num += 1
return predictions, num
def mask2rle(img):
'''
Convert mask to rle.
img: numpy array, 1 - mask, 0 - background
Returns run length as string formated
'''
pixels= img.T.flatten()
pixels = np.concatenate([[0], pixels, [0]])
runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
runs[1::2] -= runs[::2]
return ' '.join(str(x) for x in runs)
# def draw_convex_hull(mask, mode='convex'):
# # mask = np.expand_dims(mask,axis=0)
# mask = mask.astype(np.uint8)
# img = np.zeros(mask.shape)
# contours, hier = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# for c in contours:
# if mode=='rect': # simple rectangle
# x, y, w, h = cv2.boundingRect(c)
# cv2.rectangle(img, (x, y), (x+w, y+h), (255, 255, 255), -1)
# elif mode=='convex': # minimum convex hull
# hull = cv2.convexHull(c)
# cv2.drawContours(img, [hull], 0, (255, 255, 255),0)
# elif mode=='approx':
# epsilon = 0.02*cv2.arcLength(c,True)
# approx = cv2.approxPolyDP(c,epsilon,True)
# cv2.drawContours(img, [approx], 0, (255, 255, 255),-1)
# else: # minimum area rectangle
# rect = cv2.minAreaRect(c)
# box = cv2.boxPoints(rect)
# box = np.int0(box)
# cv2.drawContours(img, [box], 0, (255, 255, 255),-1)
# return img/255.
# def draw_masks(img2,img_mask_list):
# # img2 = np.expand_dims(img2,axis=0)
# # img_mask_list =list(np.expand_dims(a,axis=0) for a in img_mask_list)
# img = img2.copy()
# color_mask = np.zeros(img2.shape)
# temp_mask = np.ones([img2.shape[0],img2.shape[1]])*127./255.
# temp_mask[img_mask_list[0] == 0] = 0
# color_mask[:,:] = temp_mask
# img += color_mask
# return np.clip(img,0.,1.)
i = 0
patch = dict()
for r in work_df.iterrows():
i+=1
# if i == 21: break
row = r[1]
label = row[0]
total_worthy = row[-1]
row_map = row[1:-1]
dfs = list(sub_dfs[i] for i in range(len(row_map))if row_map[i])
# print(label)
patch_cluster = list(list(df_[df_.Image_Label==label]["EncodedPixels"])[0] for df_ in dfs)
if len(patch_cluster)==1:
print("[%s] has only 1 solution, no ensemble"%(label))
if patch_cluster[0]!='':
patch[label] = patch_cluster[0]
continue
if len(patch_cluster)==0:
print("[%s] has no solution"%(label))
continue
pred_list = list(rle_decode(p) for p in patch_cluster)
stacked = np.mean(np.stack(pred_list,axis=0),axis=0)
stacked = post_process(stacked, threshold=0.2,min_size=1000)[0]
# stacked = draw_masks(stacked,list([draw_convex_hull(stacked)]))
# stacked = post_process(stacked)[0]
newRLE = mask2rle(stacked)
if newRLE!='':
patch[label] = newRLE
print("total patch",len(patch))
df = df.drop("worthy",axis=1)
for img_,rle_ in patch.items():
df.loc[df.Image_Label==img_,"EncodedPixels"]=rle_
df2,_ = checkWorthy(df)
```
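Before saving, the RLE helpers can be sanity-checked: `mask2rle` and `rle_decode` should be inverses of each other. A sketch on a synthetic mask (with a standalone copy of the two functions so the cell runs on its own):

```python
import numpy as np

# Standalone copies of the RLE helpers defined above.
def mask2rle(img):
    """Column-major run-length encoding of a 0/1 mask."""
    pixels = img.T.flatten()
    pixels = np.concatenate([[0], pixels, [0]])
    runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
    runs[1::2] -= runs[::2]
    return ' '.join(str(x) for x in runs)

def rle_decode(mask_rle, shape=(350, 525)):
    """Decode a '(start length)*' string back into a 0/1 mask."""
    s = mask_rle.split()
    starts, lengths = [np.asarray(x, dtype=int) for x in (s[0::2], s[1::2])]
    starts -= 1
    img = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    for lo, le in zip(starts, lengths):
        img[lo:lo + le] = 1
    return img.reshape(shape, order='F')

# Round-trip on a random mask of the competition's 350x525 size.
rng = np.random.default_rng(0)
mask = (rng.random((350, 525)) > 0.5).astype(np.uint8)
assert np.array_equal(rle_decode(mask2rle(mask)), mask)
print("RLE round-trip OK")
```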
### Saving to submission CSV
```
df.to_csv('ryan_%s'%(CSV.name), columns=['Image_Label', 'EncodedPixels'], index=False)
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import calmap
```
Data Source: https://www.kaggle.com/aungpyaeap/supermarket-sales
The growth of supermarkets in highly populated cities is increasing and market competition is also high. The dataset contains the historical sales of a supermarket company, recorded across 3 different branches over 3 months.
# Data Dictionary
Invoice id: Computer generated sales slip invoice identification number
Branch: Branch of supercenter (3 branches are available identified by A, B and C).
City: Location of supercenters
Customer type: Type of customers, recorded by Members for customers using member card and Normal for without member card.
Gender: Gender type of customer
Product line: General item categorization groups - Electronic accessories, Fashion accessories, Food and beverages, Health and beauty, Home and lifestyle, Sports and travel
Unit price: Price of each product in USD
Quantity: Number of products purchased by customer
Tax: 5% tax fee for customer buying
Total: Total price including tax
Date: Date of purchase (Record available from January 2019 to March 2019)
Time: Purchase time (10am to 9pm)
Payment: Payment used by customer for purchase (3 methods are available – Cash, Credit card and Ewallet)
COGS: Cost of goods sold
Gross margin percentage: Gross margin percentage
Gross income: Gross income
Rating: Customer stratification rating on their overall shopping experience (On a scale of 1 to 10)
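The column definitions above imply some arithmetic relationships between the money columns; a sketch with made-up numbers (assuming, as the definitions suggest, that gross income in this dataset equals the tax amount):

```python
# Relationships implied by the data dictionary, checked on made-up numbers.
unit_price, quantity = 74.69, 7
cogs = unit_price * quantity      # cost of goods sold
tax = 0.05 * cogs                 # "5% tax fee for customer buying"
total = cogs + tax                # "Total price including tax"
gross_income = total - cogs       # assumed: gross income is the tax amount

print(round(total, 4), round(gross_income, 4))
```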
# Initial Data Exploration
```
df = pd.read_csv('supermarket_sales.csv')
df.head(10)
df.columns
df['Date'] = pd.to_datetime(df['Date'])
df['Date']
#changed the object type of date
df.dtypes
df.set_index('Date',inplace=True)
df.head()
df.describe()
```
# Univariate Analysis
Distribution of customer ratings
```
sns.distplot(df['Rating'])
plt.axvline(x=np.mean(df['Rating']),c='red',ls='--',label='mean')
plt.axvline(x=np.percentile(df['Rating'],25),c='green',ls='--',label='25-75th percentile')
plt.axvline(x=np.percentile(df['Rating'],75),c='green',ls='--')
plt.legend()
# It looks like a uniform distribution with no skew in either direction.
df.hist(figsize=(10,10))
```
# Do aggregate sales numbers differ by much between branches?
```
sns.countplot(df['Branch'])
df['Branch'].value_counts()
```
All 3 branches have almost the same amount of sales
```
sns.countplot(df['Payment'])
```
# Any relationship between gross income and customer ratings
```
sns.regplot(df['Rating'],df['gross income'])
```
There doesn't seem to be a relationship between these 2 variables.
```
sns.boxplot(x=df['Branch'],y=df['gross income'])
```
# Is there a time trend in gross income?
```
df.groupby(df.index).mean()
sns.lineplot(x=df.groupby(df.index).mean().index,
y= df.groupby(df.index).mean()['gross income'])
```
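The daily means plotted above are fairly noisy; resampling to a coarser frequency smooths the trend. A minimal sketch on synthetic data (the same `resample('W')` call applies to `df['gross income']`, since `'Date'` was set as a DatetimeIndex earlier):

```python
import numpy as np
import pandas as pd

# Synthetic daily series over the same Jan-Mar 2019 window as the dataset.
idx = pd.date_range('2019-01-01', '2019-03-30', freq='D')
daily = pd.Series(np.random.default_rng(1).normal(15, 4, len(idx)), index=idx)

# One mean value per calendar week (weeks end on Sunday by default).
weekly = daily.resample('W').mean()
print(weekly.head())
```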
# Find Duplicate rows and missing values
```
df.duplicated()
df[df.duplicated()==True]
df.drop_duplicates(inplace=True)
df.duplicated().sum()
sns.heatmap(df.isnull(),cbar=False)
```
# Smart Queue Monitoring System - Retail Scenario
## Overview
Now that you have your Python script and job submission script, you're ready to request an **IEI Tank-870** edge node and run inference on the different hardware types (CPU, GPU, VPU, FPGA).
After the inference is completed, the output video and stats files need to be retrieved and stored in the workspace, which can then be viewed within the Jupyter Notebook.
## Objectives
* Submit inference jobs to Intel's DevCloud using the `qsub` command.
* Retrieve and review the results.
* After testing, go back to the proposal doc and update your original proposed hardware device.
## Step 0: Set Up
#### IMPORTANT: Set up paths so we can run Dev Cloud utilities
You *must* run this every time you enter a Workspace session.
(Tip: select the cell and use **Shift+Enter** to run the cell.)
```
%env PATH=/opt/conda/bin:/opt/spark-2.4.3-bin-hadoop2.7/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/intel_devcloud_support
import os
import sys
sys.path.insert(0, os.path.abspath('/opt/intel_devcloud_support'))
sys.path.insert(0, os.path.abspath('/opt/intel'))
```
### Step 0.1: (Optional-step): Original Video
If you are curious to see the input video, run the following cell to view the original video stream we'll be using for inference.
```
import videoHtml
videoHtml.videoHTML('Retail', ['original_videos/Retail.mp4'])
```
## Step 1 : Inference on a Video
In the next few cells, you'll submit your job using the `qsub` command and retrieve the results for each job. Each of the cells below should submit a job to a different edge compute node.
The output of the cell is the `JobID` of your job, which you can use to track the progress of the job with `liveQStat`.
You will need to submit a job for each of the following hardware types:
* **CPU**
* **GPU**
* **VPU**
* **FPGA**
**Note**: You will have to submit each job one at a time and retrieve its results.
After submission, they will go into a queue and run as soon as the requested compute resources become available.
(Tip: **shift+enter** will run the cell and automatically move you to the next cell.)
If your job successfully runs and completes, once you retrieve your results, it should output a video and a stats text file in the `results/retail/<DEVICE>` directory.
For example, your **CPU** job should output its files in this directory:
> **results/retail/cpu**
**Note**: To get the queue labels for the different hardware devices, you can go to [this link](https://devcloud.intel.com/edge/get_started/devcloud/).
The following arguments should be passed to the job submission script after the `-F` flag:
* Model path - `/data/models/intel/person-detection-retail-0013/<MODEL PRECISION>/`. You will need to adjust this path based on the model precision being used on the hardware.
* Device - `CPU`, `GPU`, `MYRIAD`, `HETERO:FPGA,CPU`
* Retail video path - `/data/resources/retail.mp4`
* Retail queue_param file path - `/data/queue_param/retail.npy`
* Output path - `/output/results/retail/<DEVICE>` This should be adjusted based on the device used in the job.
* Max num of people - This is the max number of people in queue before the system would redirect them to another queue.
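The `-F` string is the only part that changes between the four submissions below. A hypothetical helper (the function name and parameters are my own) that assembles it from the paths in the bullet list:

```python
# Hypothetical helper to build the -F argument string for each device;
# all paths follow the bullet list above.
def retail_job_args(device, precision, out_subdir, max_people=5):
    model = (f"/data/models/intel/person-detection-retail-0013/"
             f"{precision}/person-detection-retail-0013")
    return (f"{model} {device} /data/resources/retail.mp4 "
            f"/data/queue_param/retail.npy "
            f"/output/results/retail/{out_subdir} {max_people}")

print(retail_job_args('CPU', 'FP16', 'cpu'))
```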
## Step 1.1: Submit to an Edge Compute Node with an Intel CPU
In the cell below, write a script to submit a job to an <a
href="https://software.intel.com/en-us/iot/hardware/iei-tank-dev-kit-core">IEI
Tank* 870-Q170</a> edge node with an <a
href="https://ark.intel.com/products/88186/Intel-Core-i5-6500TE-Processor-6M-Cache-up-to-3-30-GHz-">Intel® Core™ i5-6500TE processor</a>. The inference workload should run on the CPU.
```
#Submit job to the queue
cpu_job_id = !qsub queue_job.sh -d . -l nodes=1:tank-870:i5-6500te -F "/data/models/intel/person-detection-retail-0013/FP16/person-detection-retail-0013 CPU /data/resources/retail.mp4 /data/queue_param/retail.npy /output/results/retail/cpu 5" -N storage
print(cpu_job_id[0])
```
#### Check Job Status
To check on the job that was submitted, use `liveQStat` to check the status of the job.
Column `S` shows the state of your running jobs.
For example:
- If `JOB ID` is in Q state, it is in the queue waiting for available resources.
- If `JOB ID` is in R state, it is running.
```
import liveQStat
liveQStat.liveQStat()
```
#### Get Results
Run the next cell to retrieve your job's results.
```
import get_results
get_results.getResults(cpu_job_id[0], filename='output.tgz', blocking=True)
```
#### Unpack your output files and view stdout.log
```
!tar zxf output.tgz
!cat stdout.log
```
#### View stderr.log
This can be used for debugging
```
!cat stderr.log
```
#### View Output Video
Run the cell below to view the output video. If inference was successfully run, you should see a video with bounding boxes drawn around each person detected.
```
import videoHtml
videoHtml.videoHTML('Retail CPU', ['results/retail/cpu/output_video.mp4'])
```
## Step 1.2: Submit to an Edge Compute Node with a CPU and IGPU
In the cell below, write a script to submit a job to an <a
href="https://software.intel.com/en-us/iot/hardware/iei-tank-dev-kit-core">IEI
Tank* 870-Q170</a> edge node with an <a href="https://ark.intel.com/products/88186/Intel-Core-i5-6500TE-Processor-6M-Cache-up-to-3-30-GHz-">Intel® Core i5-6500TE</a>. The inference workload should run on the **Intel® HD Graphics 530** integrated GPU.
```
#Submit job to the queue
gpu_job_id = !qsub queue_job.sh -d . -l nodes=1:tank-870:i5-6500te:intel-hd-530 -F "/data/models/intel/person-detection-retail-0013/FP16/person-detection-retail-0013 GPU /data/resources/retail.mp4 /data/queue_param/retail.npy /output/results/retail/gpu 5" -N storage
print(gpu_job_id[0])
```
### Check Job Status
To check on the job that was submitted, use `liveQStat` to check the status of the job.
Column `S` shows the state of your running jobs.
For example:
- If `JOB ID` is in Q state, it is in the queue waiting for available resources.
- If `JOB ID` is in R state, it is running.
```
import liveQStat
liveQStat.liveQStat()
```
#### Get Results
Run the next cell to retrieve your job's results.
```
import get_results
get_results.getResults(gpu_job_id[0], filename='output.tgz', blocking=True)
```
#### Unpack your output files and view stdout.log
```
!tar zxf output.tgz
!cat stdout.log
```
#### View stderr.log
This can be used for debugging
```
!cat stderr.log
```
#### View Output Video
Run the cell below to view the output video. If inference was successfully run, you should see a video with bounding boxes drawn around each person detected.
```
import videoHtml
videoHtml.videoHTML('Retail GPU', ['results/retail/gpu/output_video.mp4'])
```
## Step 1.3: Submit to an Edge Compute Node with an Intel® Neural Compute Stick 2
In the cell below, write a script to submit a job to an <a
href="https://software.intel.com/en-us/iot/hardware/iei-tank-dev-kit-core">IEI
Tank 870-Q170</a> edge node with an <a href="https://ark.intel.com/products/88186/Intel-Core-i5-6500TE-Processor-6M-Cache-up-to-3-30-GHz-">Intel Core i5-6500te CPU</a>. The inference workload should run on an <a
href="https://software.intel.com/en-us/neural-compute-stick">Intel Neural Compute Stick 2</a> installed in this node.
```
#Submit job to the queue
vpu_job_id = !qsub queue_job.sh -d . -l nodes=1:tank-870:i5-6500te:intel-ncs2 -F "/data/models/intel/person-detection-retail-0013/FP16/person-detection-retail-0013 MYRIAD /data/resources/retail.mp4 /data/queue_param/retail.npy /output/results/retail/vpu 5" -N storage
print(vpu_job_id[0])
```
### Check Job Status
To check on the job that was submitted, use `liveQStat` to check the status of the job.
Column `S` shows the state of your running jobs.
For example:
- If `JOB ID` is in Q state, it is in the queue waiting for available resources.
- If `JOB ID` is in R state, it is running.
```
import liveQStat
liveQStat.liveQStat()
```
#### Get Results
Run the next cell to retrieve your job's results.
```
import get_results
get_results.getResults(vpu_job_id[0], filename='output.tgz', blocking=True)
```
#### Unpack your output files and view stdout.log
```
!tar zxf output.tgz
!cat stdout.log
```
#### View stderr.log
This can be used for debugging
```
!cat stderr.log
```
#### View Output Video
Run the cell below to view the output video. If inference was successfully run, you should see a video with bounding boxes drawn around each person detected.
```
import videoHtml
videoHtml.videoHTML('Retail VPU', ['results/retail/vpu/output_video.mp4'])
```
## Step 1.4: Submit to an Edge Compute Node with IEI Mustang-F100-A10
In the cell below, write a script to submit a job to an <a
href="https://software.intel.com/en-us/iot/hardware/iei-tank-dev-kit-core">IEI
Tank 870-Q170</a> edge node with an <a href="https://ark.intel.com/products/88186/Intel-Core-i5-6500TE-Processor-6M-Cache-up-to-3-30-GHz-">Intel Core™ i5-6500te CPU</a> . The inference workload will run on the <a href="https://www.ieiworld.com/mustang-f100/en/"> IEI Mustang-F100-A10 </a> FPGA card installed in this node.
```
#Submit job to the queue
fpga_job_id = !qsub queue_job.sh -d . -l nodes=1:tank-870:i5-6500te:iei-mustang-f100-a10 -F "/data/models/intel/person-detection-retail-0013/FP16/person-detection-retail-0013 HETERO:FPGA,CPU /data/resources/retail.mp4 /data/queue_param/retail.npy /output/results/retail/fpga 5" -N storage
print(fpga_job_id[0])
```
### Check Job Status
To check on the job that was submitted, use `liveQStat` to check the status of the job.
Column `S` shows the state of your running jobs.
For example:
- If `JOB ID` is in Q state, it is in the queue waiting for available resources.
- If `JOB ID` is in R state, it is running.
```
import liveQStat
liveQStat.liveQStat()
```
#### Get Results
Run the next cell to retrieve your job's results.
```
import get_results
get_results.getResults(fpga_job_id[0], filename='output.tgz', blocking=True)
```
#### Unpack your output files and view stdout.log
```
!tar zxf output.tgz
!cat stdout.log
```
#### View stderr.log
This can be used for debugging
```
!cat stderr.log
```
#### View Output Video
Run the cell below to view the output video. If inference was successfully run, you should see a video with bounding boxes drawn around each person detected.
```
import videoHtml
videoHtml.videoHTML('Retail FPGA', ['results/retail/fpga/output_video.mp4'])
```
***Wait!***
Please wait for all the inference jobs and video rendering to complete before proceeding to the next step.
## Step 2: Assess Performance
Run the cells below to compare the performance across all 4 devices. The following timings for the model are compared across all 4 devices:
- Model Loading Time
- Average Inference Time
- FPS
```
import matplotlib.pyplot as plt
device_list=['cpu', 'gpu', 'fpga', 'vpu']
inference_time=[]
fps=[]
model_load_time=[]
for device in device_list:
with open('results/retail/'+device+'/stats.txt', 'r') as f:
inference_time.append(float(f.readline().split("\n")[0]))
fps.append(float(f.readline().split("\n")[0]))
model_load_time.append(float(f.readline().split("\n")[0]))
plt.bar(device_list, inference_time)
plt.xlabel("Device Used")
plt.ylabel("Total Inference Time in Seconds")
plt.show()
plt.bar(device_list, fps)
plt.xlabel("Device Used")
plt.ylabel("Frames per Second")
plt.show()
plt.bar(device_list, model_load_time)
plt.xlabel("Device Used")
plt.ylabel("Model Loading Time in Seconds")
plt.show()
```
# Step 3: Update Proposal Document
Now that you've completed your hardware testing, you should go back to the proposal document and validate or update your originally proposed hardware. Once you've updated your proposal, you can move onto the next scenario.
```
import numpy as np
import matplotlib.pyplot as plt
from sympy import S, solve
import plotutils as pu
%matplotlib inline
```
# numbers on a plane
Numbers can be a lot more interesting than just a value if you're willing to shift your perspective a bit.
# integers
When we are dealing with integers we are dealing with all the positive whole numbers, zero and all the negative whole numbers. In math this set of numbers is often denoted with the symbol $\mathbb{Z}$. This is a *countably infinite* set and even though the numbers are a bit basic we can try to get some more insight into the structure of numbers.
# squares
If we take a number and multiply it with itself we get a *square number*. These are called square numbers because we can easily plot them as squares in a plot.
```
def plot_rect(ax, p, fmt='b'):
x, y = p
ax.plot([0, x], [y, y], fmt) # horizontal line
ax.plot([x, x], [0, y], fmt) # vertical line
with plt.xkcd():
fig, axes = plt.subplots(1, figsize=(4, 4))
pu.setup_axes(axes, xlim=(-1, 4), ylim=(-1, 4))
for x in [1,2,3]: plot_rect(axes, (x, x))
```
However, what happens when we have a non-square number such as $5$? We can't easily plot this as two equal lengths, we'll have to turn it into a rectangle of $1 \times 5$ or $5 \times 1$.
```
with plt.xkcd():
fig, axes = plt.subplots(1, figsize=(4, 4))
pu.setup_axes(axes, xlim=(-1, 6), ylim=(-1, 6))
for x, y in [(1, 5), (5, 1)]:
plot_rect(axes, (x, y))
```
The first thing we notice is that we can take one thing and project it as two things. That this happens is perfectly natural because we decided to take a single value and project it in two dimensions in a way that suits us. Nothing really weird about it, but it's still worth thinking about for a moment. Apparently the result is equally valid regardless of how we got there: we could take either the rectangle standing up or the one lying down.
Another interesting question to ask is whether we can get on the other sides of the axes. So far we have been happily plotting in the positive quadrant where $0 \le x$ and $0 \le y$ but what about the other three? Are they even reachable using just integer numbers?
We could pick a factorization like $-1 \times -5$ and that would put us in the lower left. That would be equal to the same rectangle projected in the top right. And negative numbers would be either in the top left or bottom right. Although trivial this is interesting, because now we find that if we project a single dimension into two dimensions we sometimes get 1 possibility, sometimes 2 and usually 4.
If we project zero we just get zero. However if we project $1$ we get either $1 \times 1$ or $-1 \times -1$. If we project $5$ we get $5 \times 1$, $1 \times 5$, $-5 \times -1$ and $-1 \times -5$.
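This counting can be made concrete with a short sketch that enumerates all integer factor pairs of a number, negatives included:

```python
def factor_pairs(n):
    """All ordered pairs (a, b) of integers with a * b == n, for n != 0."""
    pairs = []
    for a in range(1, abs(n) + 1):
        if n % a == 0:
            b = n // a
            # each positive divisor a gives one pair and its mirrored negative
            pairs.extend([(a, b), (-a, -b)])
    return pairs

print(factor_pairs(5))   # [(1, 5), (-1, -5), (5, 1), (-5, -1)] -> 4 projections
print(factor_pairs(1))   # [(1, 1), (-1, -1)]                   -> 2 projections
```

For square numbers such as $4$ the pair $(2, 2)$ appears alongside $(1, 4)$ and $(4, 1)$, which is exactly why they can be drawn as actual squares.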
```
# code by Tae Hwan Jung(Jeff Jung) @graykode, modify by wmathor
import torch
import numpy as np
import torch.nn as nn
import torch.utils.data as Data
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# S: Symbol that shows starting of decoding input
# E: Symbol that shows ending of decoding output
# ?: Symbol that will fill in blank sequence if current batch data size is shorter than n_step
letter = [c for c in 'SE?abcdefghijklmnopqrstuvwxyz']
letter2idx = {n: i for i, n in enumerate(letter)}
seq_data = [['man', 'women'], ['black', 'white'], ['king', 'queen'], ['girl', 'boy'], ['up', 'down'], ['high', 'low']]
# Seq2Seq Parameter
n_step = max([max(len(i), len(j)) for i, j in seq_data]) # max_len(=5)
n_hidden = 128
n_class = len(letter2idx) # classification problem
batch_size = 3
def make_data(seq_data):
enc_input_all, dec_input_all, dec_output_all = [], [], []
for seq in seq_data:
for i in range(2):
seq[i] = seq[i] + '?' * (n_step - len(seq[i])) # 'man??', 'women'
enc_input = [letter2idx[n] for n in (seq[0] + 'E')] # ['m', 'a', 'n', '?', '?', 'E']
dec_input = [letter2idx[n] for n in ('S' + seq[1])] # ['S', 'w', 'o', 'm', 'e', 'n']
dec_output = [letter2idx[n] for n in (seq[1] + 'E')] # ['w', 'o', 'm', 'e', 'n', 'E']
enc_input_all.append(np.eye(n_class)[enc_input])
dec_input_all.append(np.eye(n_class)[dec_input])
dec_output_all.append(dec_output) # not one-hot
# make tensor
return torch.Tensor(enc_input_all), torch.Tensor(dec_input_all), torch.LongTensor(dec_output_all)
'''
enc_input_all: [6, n_step+1 (because of 'E'), n_class]
dec_input_all: [6, n_step+1 (because of 'S'), n_class]
dec_output_all: [6, n_step+1 (because of 'E')]
'''
enc_input_all, dec_input_all, dec_output_all = make_data(seq_data)
class TranslateDataSet(Data.Dataset):
def __init__(self, enc_input_all, dec_input_all, dec_output_all):
self.enc_input_all = enc_input_all
self.dec_input_all = dec_input_all
self.dec_output_all = dec_output_all
def __len__(self): # return dataset size
return len(self.enc_input_all)
def __getitem__(self, idx):
return self.enc_input_all[idx], self.dec_input_all[idx], self.dec_output_all[idx]
loader = Data.DataLoader(TranslateDataSet(enc_input_all, dec_input_all, dec_output_all), batch_size, True)
# Model
class Seq2Seq(nn.Module):
def __init__(self):
super(Seq2Seq, self).__init__()
self.encoder = nn.RNN(input_size=n_class, hidden_size=n_hidden, dropout=0.5) # encoder
self.decoder = nn.RNN(input_size=n_class, hidden_size=n_hidden, dropout=0.5) # decoder
self.fc = nn.Linear(n_hidden, n_class)
def forward(self, enc_input, enc_hidden, dec_input):
# enc_input(=input_batch): [batch_size, n_step+1, n_class]
# dec_inpu(=output_batch): [batch_size, n_step+1, n_class]
enc_input = enc_input.transpose(0, 1) # enc_input: [n_step+1, batch_size, n_class]
dec_input = dec_input.transpose(0, 1) # dec_input: [n_step+1, batch_size, n_class]
# h_t : [num_layers(=1) * num_directions(=1), batch_size, n_hidden]
_, h_t = self.encoder(enc_input, enc_hidden)
# outputs : [n_step+1, batch_size, num_directions(=1) * n_hidden(=128)]
outputs, _ = self.decoder(dec_input, h_t)
model = self.fc(outputs) # model : [n_step+1, batch_size, n_class]
return model
model = Seq2Seq().to(device)
criterion = nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
for epoch in range(5000):
for enc_input_batch, dec_input_batch, dec_output_batch in loader:
# make hidden shape [num_layers * num_directions, batch_size, n_hidden]
h_0 = torch.zeros(1, batch_size, n_hidden).to(device)
        (enc_input_batch, dec_input_batch, dec_output_batch) = (enc_input_batch.to(device), dec_input_batch.to(device), dec_output_batch.to(device))
        # enc_input_batch : [batch_size, n_step+1, n_class]
        # dec_input_batch : [batch_size, n_step+1, n_class]
        # dec_output_batch : [batch_size, n_step+1], not one-hot
        pred = model(enc_input_batch, h_0, dec_input_batch)
# pred : [n_step+1, batch_size, n_class]
pred = pred.transpose(0, 1) # [batch_size, n_step+1(=6), n_class]
loss = 0
for i in range(len(dec_output_batch)):
# pred[i] : [n_step+1, n_class]
# dec_output_batch[i] : [n_step+1]
loss += criterion(pred[i], dec_output_batch[i])
if (epoch + 1) % 1000 == 0:
print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss))
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Test
def translate(word):
enc_input, dec_input, _ = make_data([[word, '?' * n_step]])
enc_input, dec_input = enc_input.to(device), dec_input.to(device)
# make hidden shape [num_layers * num_directions, batch_size, n_hidden]
hidden = torch.zeros(1, 1, n_hidden).to(device)
output = model(enc_input, hidden, dec_input)
# output : [n_step+1, batch_size, n_class]
predict = output.data.max(2, keepdim=True)[1] # select n_class dimension
decoded = [letter[i] for i in predict]
translated = ''.join(decoded[:decoded.index('E')])
return translated.replace('?', '')
print('test')
print('man ->', translate('man'))
print('mans ->', translate('mans'))
print('king ->', translate('king'))
print('black ->', translate('black'))
print('up ->', translate('up'))
```
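The padding and one-hot encoding performed inside `make_data` can be seen in isolation. This is a self-contained sketch that duplicates the notebook's vocabulary; nothing here touches the model:

```
import numpy as np

# Same vocabulary as the notebook's `letter` list: 3 special symbols + 26 letters
letter = [c for c in 'SE?abcdefghijklmnopqrstuvwxyz']
letter2idx = {n: i for i, n in enumerate(letter)}
n_step, n_class = 5, len(letter)

word = 'man' + '?' * (n_step - len('man'))        # pad to 'man??'
enc_input = [letter2idx[c] for c in word + 'E']   # append the end symbol
one_hot = np.eye(n_class)[enc_input]              # shape [n_step+1, n_class]
print(word, one_hot.shape)                        # man?? (6, 29)
```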
```
import torch
from torchvision import datasets,transforms as T,models
from torch.utils.data import DataLoader
import numpy as np
from collections import OrderedDict
from torch import optim,nn
import matplotlib.pyplot as plt
import torch.nn.functional as F
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score, confusion_matrix, classification_report
import seaborn as sns
train_dir = 'data/train'
valid_dir = 'data/val'
test_dir = 'data/test'
train_transform = T.Compose([
T.Resize((320,320)),
    T.RandomRotation(degrees=(0, 359)),
T.ToTensor(),
T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
valid_transform = T.Compose([
T.Resize((320,320)),
T.ToTensor(),
T.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
])
test_transform = valid_transform
trainset = datasets.ImageFolder(train_dir,transform = train_transform)
validset = datasets.ImageFolder(valid_dir,transform = valid_transform)
testset = datasets.ImageFolder(test_dir,transform = test_transform)
Trainloader = DataLoader(trainset, batch_size = 64, shuffle = True)
Validloader = DataLoader(validset, batch_size = 64, shuffle = True)
Testloader = DataLoader(testset, batch_size = 64, shuffle = True)
model = models.inception_v3(pretrained=True)
for param in model.parameters():
param.requires_grad = False
from collections import OrderedDict
fc = nn.Sequential(OrderedDict([
('fc1',nn.Linear(2048,1024)),
('relu',nn.ReLU()),
('dropout',nn.Dropout(0.5)),
('fc2',nn.Linear(1024,500)),
('relu',nn.ReLU()),
('dropout',nn.Dropout(0.5)),
('fc3',nn.Linear(500,2)),
('output',nn.LogSoftmax(dim = 1))
]))
model.fc = fc
model.cuda()
model.load_state_dict(torch.load('PneumoniaModel.pt'))
criterion = nn.NLLLoss()
train_acc = 0
valid_acc = 0
test_acc = 0
test_loss = 0
train_loss = 0
val_loss = 0
true_label = []
predict_label = []
for images,labels in Trainloader:
images = images.cuda()
labels = labels.cuda()
logps,aux = model(images)
loss = criterion(logps,labels)
train_loss += loss.item()*images.size(0)
ps = torch.exp(logps)
top_p, top_class = ps.topk(1, dim = 1)
T_equals = top_class == labels.view(*top_class.shape)
train_acc += torch.mean(T_equals.type(torch.FloatTensor))
for images,labels in Validloader:
images = images.cuda()
labels = labels.cuda()
logps,aux = model(images)
loss = criterion(logps,labels)
val_loss += loss.item()*images.size(0)
ps = torch.exp(logps)
top_p,top_class = ps.topk(1, dim = 1)
V_equals = top_class == labels.view(*top_class.shape)
valid_acc += torch.mean(V_equals.type(torch.FloatTensor))
for images,labels in Testloader:
    images = images.cuda()
    labels = labels.cuda()
    true_label.append(labels.type(torch.FloatTensor))
    logps,aux = model(images)
    loss = criterion(logps,labels)
    test_loss += loss.item()*images.size(0)
    ps = torch.exp(logps)
    top_p,top_class = ps.topk(1, dim = 1)
    # store the predicted classes (not the loss) for the sklearn metrics imported above
    predict_label.append(top_class.view(-1).type(torch.FloatTensor))
    t_equals = top_class == labels.view(*top_class.shape)
    test_acc += torch.mean(t_equals.type(torch.FloatTensor))
print("Overall Training Accuracy : {}\n".format(train_acc/len(Trainloader)))
print("Overall Validation Accuracy : {}\n".format(valid_acc/len(Validloader)))
print("Overall Test Accuracy : {}\n".format(test_acc/len(Testloader)))
print("Overall Train Loss : {}\n".format(train_loss/len(Trainloader.dataset)))
print("Overall Valid Loss : {}\n".format(val_loss/len(Validloader.dataset)))
print("Overall Test Loss : {}\n".format(test_loss/len(Testloader.dataset)))
```
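The per-batch accuracy bookkeeping used in all three loops above can be isolated on dummy tensors. This is only a sketch: the tensors below are placeholders, not real model output, and no GPU is needed:

```
import torch

logps = torch.log_softmax(torch.randn(4, 2), dim=1)   # stand-in for model log-probabilities
labels = torch.tensor([0, 1, 0, 1])

ps = torch.exp(logps)                                  # back to probabilities
top_p, top_class = ps.topk(1, dim=1)                   # most likely class per sample
equals = top_class == labels.view(*top_class.shape)
batch_acc = torch.mean(equals.type(torch.FloatTensor)) # fraction correct in this batch
print(batch_acc.item())
```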
```
# Import libraries - REQUIRES pip version 9.0.3
import pandas
import os
from os.path import join
import sys
import scipy.stats
import numpy
import math
import pickle
import copy
import time
import random
import warnings
# Using Cobrapy 0.13.0
import cobra
import cobra.test
from cobra.flux_analysis.sampling import OptGPSampler
from cobra.manipulation.delete import *
from cobra.flux_analysis.parsimonious import add_pfba
from cobra.medium import find_boundary_types
#from cobra.flux_analysis.sampling import OptGPSampler
# Using Gurobi solver instead of GLPK
import gurobipy
from optlang import gurobi_interface
# Establish handler for logger
import logging
logging.basicConfig()
logger = logging.getLogger('logger')
# Verbose exception printing
%xmode Verbose
# Define functions
# Identify potentially gapfilled reactions
def findGapfilledRxn(model, exclude=[]):
gapfilled = []
transport = findTransports(model)
if not type(exclude) is list:
exclude = [exclude]
for index in model.reactions:
if len(list(index.genes)) == 0:
if not index in model.boundary:
                if not index.id in exclude and not index.id in transport:
gapfilled.append(index.id)
if len(gapfilled) > 0:
print(str(len(gapfilled)) + ' metabolic reactions not associated with genes')
return gapfilled
# Check for missing transport and exchange reactions
def missingRxns(model, extracellular=['e','Extracellular']):
transporters = set(findTransports(model))
exchanges = set([x.id for x in model.exchanges])
missing_exchanges = []
missing_transports = []
for metabolite in model.metabolites:
if not metabolite.compartment in extracellular or metabolite.id.split('_')[1] != 'e':
continue
curr_rxns = set([x.id for x in list(metabolite.reactions)])
if bool(curr_rxns & transporters) == False:
missing_transports.append(metabolite.id)
if bool(curr_rxns & exchanges) == False:
missing_exchanges.append(metabolite.id)
if len(missing_transports) != 0:
print(str(len(missing_transports)) + ' extracellular metabolites are missing transport reactions')
if len(missing_exchanges) != 0:
print(str(len(missing_exchanges)) + ' extracellular metabolites are missing exchange reactions')
return missing_transports, missing_exchanges
# Checks which cytosolic metabolites are generated for free (bacteria only)
def checkFreeMass(raw_model, cytosol='Cytosol'):
with raw_model as model:
# Close all exchanges
for index in model.boundary:
model.reactions.get_by_id(index.id).lower_bound = 0.
# Identify all metabolites that are produced within the network
demand_metabolites = [x.reactants[0].id for x in model.demands if len(x.reactants) > 0] + [x.products[0].id for x in model.demands if len(x.products) > 0]
free = []
for index in model.metabolites:
if index.id in demand_metabolites:
continue
elif not index.compartment in cytosol:
continue
else:
demand = model.add_boundary(index, type='demand')
model.objective = demand
obj_val = model.slim_optimize(error_value=0.)
if obj_val > 1e-8:
free.append(index.id)
model.remove_reactions([demand])
if len(free) > 0:
print(str(len(free)) + ' metabolites are generated for free')
return(free)
# Check for mass and charge balance in reactions
def checkBalance(raw_model, exclude=[]):
with raw_model as model:
imbalanced = []
mass_imbal = 0
charge_imbal = 0
elem_set = set()
for metabolite in model.metabolites:
try:
elem_set |= set(metabolite.elements.keys())
except:
pass
if len(elem_set) == 0:
imbalanced = model.reactions
mass_imbal = len(model.reactions)
charge_imbal = len(model.reactions)
print('No elemental data associated with metabolites!')
else:
if not type(exclude) is list:
exclude = [exclude]
for index in model.reactions:
if index in model.boundary or index.id in exclude:
continue
else:
try:
test = index.check_mass_balance()
except ValueError:
continue
if len(list(test)) > 0:
imbalanced.append(index.id)
if 'charge' in test.keys():
charge_imbal += 1
if len(set(test.keys()).intersection(elem_set)) > 0:
mass_imbal += 1
if mass_imbal != 0:
print(str(mass_imbal) + ' reactions are mass imbalanced')
if charge_imbal != 0:
print(str(charge_imbal) + ' reactions are charge imbalanced')
return(imbalanced)
# Identifies blocked reactions, 1% cutoff for fraction of optimum
def blockedReactions(model):
blocked = cobra.flux_analysis.variability.find_blocked_reactions(model)
if len(blocked) != 0:
print(str(len(blocked)) + ' reactions are blocked')
return blocked
# Checks the quality of models by a couple metrics and returns problems
def checkQuality(model, exclude=[], cytosol='c'):
start_time = time.time()
if model.name != None:
model_name = model.name
else:
model_name = 'model'
gaps = findGapfilledRxn(model, exclude)
freemass = checkFreeMass(model, cytosol)
balance = checkBalance(model, exclude)
#blocked = blockedReactions(model)
trans, exch = missingRxns(model)
test = gaps + freemass + balance
if len(test) == 0:
print('No inconsistencies detected')
duration = int(round(time.time() - start_time))
print('Took ' + str(duration) + ' seconds to analyze ' + model_name)
return gaps, freemass, balance, trans, exch
# Trace back through reactions immediately adjacent to a given reaction to identify blocked precursor synthesis
def checkPrecursors(model, reaction):
if isinstance(reaction, str) == True:
reaction = model.reactions.get_by_id(reaction)
model.objective = reaction
obj_val = max(model.optimize(objective_sense='maximize').objective_value, abs(model.optimize(objective_sense='minimize').objective_value))
if obj_val > 0.001:
print('Able to produce all precursors for this reaction.')
return None
else:
reactants = reaction.reactants
check = 0
for reactant in reactants:
sub_reactions = list(reactant.reactions)
for sub_reaction in sub_reactions:
model.objective = sub_reaction
obj_val = max(model.optimize(objective_sense='maximize').objective_value, abs(model.optimize(objective_sense='minimize').objective_value))
if obj_val < 0.001 and reactant in sub_reaction.products:
print('Cannot acquire ' + str(reactant.id) + ' via ' + str(sub_reaction.id))
elif obj_val < 0.001 and check < 1 and reactant in sub_reaction.reactants:
print(str(reactant.id) + ' not produced in any reactions.')
check += 1
#------------------------------------------------------------------------------------#
# Function to calculate doubling time from objective value
def doubling(model):
with model as m:
growth = (1 / float(m.slim_optimize())) * 3600
growth = str(round(growth, 3)) + ' minutes'
return growth
# Function to change media condition based on a list
def changeMedia(model, media_list):
for index in model.exchanges:
if index.id in media_list:
model.reactions.get_by_id(index.id).lower_bound = -1000.0
else:
model.reactions.get_by_id(index.id).lower_bound = 0.0
return model
# Rough transcriptomic integration
def roughContextualize(model, transcript_profile, condition):
orig_OV = model.optimize().objective_value
model_context = copy.deepcopy(model)
abundances = []
with open(transcript_profile, 'r') as transcription:
transcript_dict = {}
abundances = []
header = transcription.readline().strip().split(',')
column = header.index(condition)
for line in transcription:
line = line.split(',')
transcript_dict[line[0]] = float(line[column])
abundances.append(float(line[column]))
min_transcription = numpy.percentile(abundances, 50)
penalty_bound = 10
hits = 0
fails = 0
for gene in list(model_context.genes):
gene = gene.name
try:
curr_rxns = list(model_context.genes.get_by_id(gene).reactions)
hits += 1
except KeyError:
fails += 1
continue
try:
curr_transcription = transcript_dict[gene]
except KeyError:
continue
for reaction in curr_rxns:
curr_id = reaction.id
if curr_transcription >= min_transcription:
model_context.reactions.get_by_id(curr_id).lower_bound = -1000
model_context.reactions.get_by_id(curr_id).upper_bound = 1000
elif curr_transcription < min_transcription:
if model_context.reactions.get_by_id(curr_id).lower_bound != 0:
model_context.reactions.get_by_id(curr_id).lower_bound = -penalty_bound
if model_context.reactions.get_by_id(curr_id).upper_bound != 0:
model_context.reactions.get_by_id(curr_id).upper_bound = penalty_bound
#print('Gene hits across data types: ' + str(hits))
#print('KeyErrors across data types: ' + str(fails) + '\n')
new_OV = model_context.optimize().objective_value
print('New objective value: ' + str(new_OV))
    print('Contextualized doubling time: ' + doubling(model_context))
return(model_context)
# Checks for availability of reactants of a given reaction
def availability(model, target_rxn):
precursors = model.reactions.get_by_id(target_rxn).reactants
total = 0
unsuccessful = set()
limited = set()
for precursor in precursors:
precursor_rxn = list(model.metabolites.get_by_id(precursor.id).reactions)
for rxn in precursor_rxn:
if rxn.id == target_rxn:
continue
elif precursor in model.reactions.get_by_id(rxn.id).reactants:
model.objective = rxn
obj_val = model.slim_optimize()
if obj_val < 1e-8:
unsuccessful |= set([rxn.id])
limited |= set([precursor.id])
print('Failed reactions: ' + str(len(unsuccessful)))
print('Limiting reactants: ' + str(len(limited)))
return unsuccessful, limited
# Removes all metabolites in a list of metabolite ids and all reactions associated with them
def removeAll(model, metabolite_list):
new_model = copy.deepcopy(model)
for metabolite in metabolite_list:
try:
metabolite = new_model.metabolites.get_by_id(metabolite)
new_model.remove_reactions(metabolite.reactions)
new_model.remove_metabolites([metabolite])
except KeyError:
print(metabolite + ' not found')
continue
return new_model
# Identify transport reactions (for any number compartments)
def findTransports(model):
transporters = []
compartments = set(list(model.compartments))
if len(compartments) == 1:
raise Exception('Model only has one compartment!')
for reaction in model.reactions:
reactant_compartments = set([x.compartment for x in reaction.reactants])
product_compartments = set([x.compartment for x in reaction.products])
reactant_baseID = set([x.id.split('_')[0] for x in reaction.reactants])
product_baseID = set([x.id.split('_')[0] for x in reaction.products])
if reactant_compartments == product_compartments and reactant_baseID != product_baseID:
continue
elif bool(compartments & reactant_compartments) == True and bool(compartments & product_compartments) == True:
transporters.append(reaction.id)
return(transporters)
# Removes a given percentage of reactions from a model, ignoring objective
def generate_gaps(model, percentage=0.2, prune=False, ignore=[]):
number_to_remove = int(round(len(model.reactions) * percentage))
rxn_ids = [x.id for x in model.reactions]
random.shuffle(rxn_ids)
rxns_to_remove = rxn_ids[-number_to_remove:]
for rxn in ignore:
try:
rxns_to_remove.remove(rxn)
except ValueError:
continue
truncated_model = copy.deepcopy(model)
truncated_model.remove_reactions(rxns_to_remove)
if prune == True:
unused_cpds = prune_unused_metabolites(truncated_model)
print('Reactions removed: ' + str(len(rxns_to_remove)))
print('New objective value: ' + str(truncated_model.slim_optimize()))
return truncated_model, rxns_to_remove
# Calculates the sum of fluxes for a given model
def sum_of_fluxes(model):
with model as m:
solution = m.optimize()
flux_sum = sum(list(solution.fluxes))
return flux_sum
# Very fast and efficient gap filling function
def pFBA_GapFill(model, bag, obj=None, obj_lb=10., obj_constraint=False,
iters=1, tasks=None, task_lb=0.05,
add_exchanges=True, extracellular='e'):
'''
Function that utilizes iterations of pFBA solution with a universal reaction bag
in order to gapfill a model.
Parameters
----------
model : cobra.Model
Model to be gapfilled
bag : cobra.Model
Reaction bag reference to use during gapfilling
obj : string
Reaction ID for objective function in model to be gapfilled.
obj_lb : float
Lower bound for objective function
obj_constraint : bool
        Sets objective as constraint which must be maximized
tasks : list or None
List of reactions IDs (strings) of metabolic tasks
to set a minimum lower bound for
task_lb : float
Lower bound for any metabolic tasks
iters : int
Number of gapfilling rounds. Unique reactions from each round are
        saved and the union is added simultaneously to the model
add_exchanges : bool
Identifies extracellular metabolites added during gapfilling that
are not associated with exchange reactions and creates them
extracellular : string
Label for extracellular compartment of model
'''
start_time = time.time()
# Save some basic network info for downstream membership testing
orig_rxn_ids = set([str(x.id) for x in model.reactions])
orig_cpd_ids = set([str(y.id) for y in model.metabolites])
univ_rxn_ids = set([str(z.id) for z in bag.reactions])
# Find overlap in model and reaction bag
overlap_rxn_ids = univ_rxn_ids.intersection(orig_rxn_ids)
# Get model objective reaction ID
if obj == None:
obj = get_objective(model)
else:
obj = obj
# Modify universal reaction bag
new_rxn_ids = set()
print('Creating universal model...')
with bag as universal:
# Remove overlapping reactions from universal bag, and reset objective if needed
for rxn in overlap_rxn_ids:
universal.reactions.get_by_id(rxn).remove_from_model()
# Set objective in universal if told by user
# Made constraint as fraction of minimum in next step
if obj_constraint == True:
universal.add_reactions([model.reactions.get_by_id(obj)])
universal.objective = obj
orig_rxn_ids.remove(obj)
orig_rxns = []
for rxn in orig_rxn_ids:
orig_rxns.append(copy.deepcopy(model.reactions.get_by_id(rxn)))
else:
orig_rxns = list(copy.deepcopy(model.reactions))
# Add pFBA to universal model and add model reactions
add_pfba(universal)
universal = copy.deepcopy(universal) # reset solver
universal.add_reactions(orig_rxns)
# If previous objective not set as constraint, set minimum lower bound
if obj_constraint == False:
universal.reactions.get_by_id(obj).lower_bound = obj_lb
# Set metabolic tasks that must carry flux in gapfilled solution
if tasks != None:
for task in tasks:
universal.reactions.get_by_id(task).lower_bound = task_lb
# Run FBA and save solution
print('Optimizing model with combined reactions...')
solution = universal.optimize()
if iters > 1:
print('Generating flux sampling object...')
optgp_object = OptGPSampler(universal, processes=4)
# Assess the sampled flux distributions
print('Sampling ' + str(iters) + ' flux distributions...')
flux_samples = optgp_object.sample(iters)
rxns = list(flux_samples.columns)
for distribution in flux_samples.iterrows():
for flux in range(0, len(list(distribution[1]))):
if abs(list(distribution[1])[flux]) > 1e-6:
new_rxn_ids |= set([rxns[flux]]).difference(orig_rxn_ids)
else:
rxns = list(solution.fluxes.index)
fluxes = list(solution.fluxes)
for flux in range(0, len(fluxes)):
if abs(fluxes[flux]) > 1e-6:
new_rxn_ids |= set([rxns[flux]])
# Screen new reaction IDs
if obj in new_rxn_ids: new_rxn_ids.remove(obj)
for rxn in orig_rxn_ids:
try:
new_rxn_ids.remove(rxn)
except:
continue
# Get reactions and metabolites to be added to the model
print('Retrieving reactions and metabolites needed for gapfilling...')
new_rxns = copy.deepcopy([bag.reactions.get_by_id(rxn) for rxn in new_rxn_ids])
new_cpd_ids = set()
for rxn in new_rxns: new_cpd_ids |= set([str(x.id) for x in list(rxn.metabolites)])
new_cpd_ids = new_cpd_ids.difference(orig_cpd_ids)
new_cpds = copy.deepcopy([bag.metabolites.get_by_id(cpd) for cpd in new_cpd_ids])
# Copy model and gapfill
print('Gapfilling model...')
new_model = copy.deepcopy(model)
new_model.add_metabolites(new_cpds)
new_model.add_reactions(new_rxns)
# Identify extracellular metabolites with no exchanges
if add_exchanges == True:
new_exchanges = extend_exchanges(new_model, new_cpd_ids, extracellular)
if len(new_exchanges) > 0: new_rxn_ids |= new_exchanges
duration = int(round(time.time() - start_time))
print('Took ' + str(duration) + ' seconds to gapfill ' + str(len(new_rxn_ids)) + \
' reactions and ' + str(len(new_cpd_ids)) + ' metabolites.')
new_obj_val = new_model.slim_optimize()
if new_obj_val > 1e-6:
print('Gapfilled model objective now carries flux (' + str(new_obj_val) + ').')
else:
print('Gapfilled model objective still does not carry flux.')
return new_model
# Adds missing exchanges for extracellular metabolites
def extend_exchanges(model, cpd_ids, ex):
model_exchanges = set(find_boundary_types(model, 'exchange', external_compartment=ex))
new_ex_ids = set()
for cpd in cpd_ids:
cpd = model.metabolites.get_by_id(cpd)
if str(cpd.compartment) != ex:
continue
else:
if bool(set(cpd.reactions) & model_exchanges) == False:
try:
new_id = 'EX_' + cpd.id
model.add_boundary(cpd, type='exchange', reaction_id=new_id, lb=-1000.0, ub=1000.0)
new_ex_ids |= set([new_id])
except ValueError:
pass
return new_ex_ids
# Returns the reaction ID of the objective reaction
def get_objective(model):
if len(list(model.objective.variables)) == 0:
raise IndexError('Model has no objective set.')
expression = str(model.objective.expression).split()
if 'reverse' in expression[0]:
obj_id = expression[2].split('*')[-1]
else:
obj_id = expression[0].split('*')[-1]
return obj_id
```
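The objective-ID parsing inside `get_objective` can be exercised on a mock expression string without loading a model. The reaction name below is illustrative, not taken from any of the models used here; optlang renders objectives roughly as `coef*<rxn> - coef*<rxn>_reverse_<hash>`:

```
# Mock of the objective expression string produced by optlang
expression = '1.0*biomass_rxn - 1.0*biomass_rxn_reverse_b18f7'.split()

# Same branching as get_objective: skip the reverse variable if it comes first
if 'reverse' in expression[0]:
    obj_id = expression[2].split('*')[-1]
else:
    obj_id = expression[0].split('*')[-1]
print(obj_id)  # biomass_rxn
```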
### Toy Model
```
toy = cobra.io.read_sbml_model('data/toy_model.sbml')
toy
# Remove specific reactions
test_model = copy.deepcopy(toy)
test_model.remove_reactions(['rxn3','rxn4','rxn5','rxn6'])
test_model.objective = 'biomass_rxn'
test_gapfill = pFBA_GapFill(test_model, toy, iters=2, extracellular='e')
test_gapfill
test_gapfill.slim_optimize()
```
### iML1515
```
iML1515 = cobra.io.read_sbml_model('data/iML1515.xml')
iML1515
test_model, removed_rxns = generate_gaps(iML1515, percentage=0.1, prune=True, ignore=['BIOMASS_Ec_iML1515_core_75p37M_reverse','BIOMASS_Ec_iML1515_core_75p37M'])
test_model
test_gapfill = pFBA_GapFill(test_model, iML1515, obj_lb=10.)
test_gapfill
```
### New C. difficile model
```
# Read in models
universal = cobra.io.load_json_model('data/universal.json')
cd630_PATRIC = cobra.io.read_sbml_model('data/PATRIC/temp_cd630PATRIC.sbml') # partially curated
test_gapfill = pFBA_GapFill(cd630_PATRIC, universal, iters=1, obj_lb=50., extracellular='Extracellular')
doubling(test_gapfill)
test_gapfill
unused = prune_unused_metabolites(test_gapfill)
new_rxns = set([str(x.id) for x in test_gapfill.reactions]).difference(set([str(y.id) for y in cd630_PATRIC.reactions]))
new_rxns
test_gapfill.reactions.get_by_id('rxn00553_c')
cobra.io.write_sbml_model(test_gapfill, 'data/cd630_gapfilled.sbml')
new_rxns = set([str(x.id) for x in test_gapfill.reactions]).difference(set([str(y.id) for y in cd630_PATRIC.reactions]))
new_rxns
#need to probably fix reversibility of gapfilled reactions...
test_gapfill.reactions.get_by_id('rxn13784_c')
no_genes, freemass, unbalanced, no_transport, no_exchange = checkQuality(test_gapfill)
no_genes
test_gapfill = pFBA_GapFill(cd630_PATRIC, universal, iters=1000, obj_lb=50., extracellular='Extracellular')
```
# Pre-process LINCS L1000 dataset
Pre-processing steps include:
1. Normalize data
2. Partition dataset into training and validation sets
Note: Using Python 2 in order to support the cmapPy `parse` function
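The two pre-processing steps can be sketched on toy data; the real expression matrix is loaded further down, and this snippet only mirrors the procedure on random numbers:

```
import numpy as np
import pandas as pd
from sklearn import preprocessing

# Toy stand-in for the expression matrix (rows = samples, columns = genes)
rng = np.random.RandomState(123)
df = pd.DataFrame(rng.randn(100, 5), columns=['gene_%d' % i for i in range(5)])

# 1. Normalize: scale each gene to the (0, 1) range
scaled = pd.DataFrame(preprocessing.MinMaxScaler().fit_transform(df),
                      columns=df.columns, index=df.index)

# 2. Partition into training and validation sets
validation = scaled.sample(frac=0.2, random_state=123)
train = scaled.drop(validation.index)
print(train.shape, validation.shape)
```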
```
import pandas as pd
import os
import numpy as np
from scipy.stats import variation
from sklearn import preprocessing
import seaborn as sns
import matplotlib.pyplot as plt
import sys
from cmapPy.pandasGEXpress.parse import parse
randomState = 123
from numpy.random import seed
seed(randomState)
# Output files
train_file = "/home/alexandra/Documents/Data/LINCS_tuning/train_model_input.txt.xz"
validation_file = "/home/alexandra/Documents/Data/LINCS_tuning/validation_model_input.txt.xz"
```
## About the data
Read in gene expression data (GCToo object with 3 embedded dataframes include data_df)
Data downloaded from https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE92742
cid = samples
rid = genes
values = normalized and imputed (based on landmark genes) gene expression --> log fold change compared against negative control
Note: Data is too large to be housed in repo so instead it is housed on local desktop
```
%%time
data_file = "/home/alexandra/Documents/Data/LINCS/GSE92742_Broad_LINCS_Level3_INF_mlr12k_n1319138x12328.gctx"
# Keep only landmark genes
gene_info_file = os.path.join(
os.path.dirname(
os.getcwd()), "metadata","GSE92742_Broad_LINCS_gene_info.txt")
gene_info = pd.read_table(gene_info_file, dtype=str)
landmark_gene_row_ids = gene_info["pr_gene_id"][gene_info["pr_is_lm"] == "1"]
data = parse(data_file, rid = landmark_gene_row_ids)
data_df = data.data_df.T
data_df.shape
# Normalization
# scale data to range (0,1) per gene
data_scaled_df = (
preprocessing
.MinMaxScaler()
.fit_transform(data_df)
)
data_scaled_df = pd.DataFrame(data_scaled_df,
columns=data_df.columns,
index=data_df.index)
del data_df
data_scaled_df.head(5)
print(data_scaled_df.shape)
sns.distplot(data_scaled_df.iloc[5])
# Subsample dataset in order to tune parameters
subsample_frac = 0.01
subsample_data = data_scaled_df.sample(frac=subsample_frac, random_state=randomState)
print(subsample_data.shape)
# Split dataset into training and validation sets
validation_frac = 0.2
validation_df = subsample_data.sample(frac=validation_frac, random_state=randomState)
train_df = subsample_data.drop(validation_df.index)
print(validation_df.shape)
print(train_df.shape)
# Output
train_df.to_csv(train_file, sep='\t', compression='xz')
validation_df.to_csv(validation_file, sep='\t', compression='xz')
```
```
#loading dataset
import pandas as pd
#visualisation
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
# data preprocessing
from sklearn.preprocessing import StandardScaler
# data splitting
from sklearn.model_selection import train_test_split
# data modeling
from sklearn.metrics import confusion_matrix,accuracy_score,roc_curve,classification_report,auc
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
```
### Importing the Dataset
```
df = pd.read_csv("Lumpy skin disease data.csv")
df.head()
df.describe()
df.info()
df.isna().sum(axis=0)
df.columns
```
### Dropping Unnecessary Columns
```
df.drop(columns=['region','country','reportingDate','X5_Ct_2010_Da','X5_Bf_2010_Da'],inplace=True)
df.head()
df.corr()
```
### Exploratory Data Analysis
```
plt.figure(figsize=(3,3),dpi=150)
plt.style.use('dark_background')
sns.countplot(x='lumpy', data = df)
plt.xlabel('Lumpiness classes')
plt.ylabel('count of each class')
plt.title('Lumpiness class distribution')
plt.figure(figsize=(15, 15))
heatmap = sns.heatmap(df.corr(), vmin= -1, vmax = 1, annot=True)
heatmap.set_title('Correlation Heatmap', fontdict={'fontsize':12})
```
### Partitioning the dataset into training and test sets
```
X=df.iloc[:,:-1]
y=df.iloc[:,-1]
print("//Independent features//")
print(X.head())
print("\n\n//Dependent feature//")
print(y.head())
```
### Train Test Split
```
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.3,random_state=0)
```
### Feature Scaling
```
scaler=StandardScaler()
X_train=scaler.fit_transform(X_train)
X_test=scaler.transform(X_test)
# Logistic Regression
lr=LogisticRegression()
lr_mdl=lr.fit(X_train,y_train)
lr_pred=lr.predict(X_test)
lr_con_matrix=confusion_matrix(y_test,lr_pred)
lr_acc=accuracy_score(y_test,lr_pred)
print("Confusion Matrix",'\n',lr_con_matrix)
print('\n')
print("Accuracy of Logistic Regression: ",lr_acc*100,'\n')
print(classification_report(y_test,lr_pred))
#Random Forest Classfier
rf = RandomForestClassifier()
rf.fit(X_train,y_train)
rf_pred = rf.predict(X_test)
rf_con_matrix = confusion_matrix(y_test, rf_pred)
rf_acc = accuracy_score(y_test, rf_pred)
print("Confusion Matrix\n",rf_con_matrix)
print("\n")
print("Accuracy of Random Forest:",rf_acc*100,'\n')
print(classification_report(y_test,rf_pred))
#DecisionTreeClassifier
dt = DecisionTreeClassifier()
dt.fit(X_train, y_train)
dt_pred = dt.predict(X_test)
dt_con_matrix = confusion_matrix(y_test, dt_pred)
dt_acc = accuracy_score(y_test, dt_pred)
print("Confusion Matrix\n",dt_con_matrix)
print("\n")
print("Accuracy of Decision Tree Classifier:",dt_acc*100,'\n')
print(classification_report(y_test,dt_pred))
y_score1 = lr.predict_proba(X_test)[:,1]
y_score2 = rf.predict_proba(X_test)[:,1]
y_score3 = dt.predict_proba(X_test)[:,1]
false_positive_rate1, true_positive_rate1, threshold1 = roc_curve(y_test, y_score1)
false_positive_rate2, true_positive_rate2, threshold2 = roc_curve(y_test, y_score2)
false_positive_rate3, true_positive_rate3, threshold3 = roc_curve(y_test, y_score3)
plt.figure(figsize=(5,5),dpi=150)
plt.title('Receiver Operating Characteristic (ROC) Curve')
plt.plot(false_positive_rate1,true_positive_rate1, color='red', label = "Logistic Regression")
plt.plot(false_positive_rate2,true_positive_rate2, color='blue', label = "Random Forest")
plt.plot(false_positive_rate3,true_positive_rate3, color='green', label = "Decision Tree")
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],linestyle='--')
plt.axis('tight')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
mdl_evl = pd.DataFrame({'Model': ['Logistic Regression','Random Forest', 'Decision Tree'], 'Accuracy': [lr_acc*100,rf_acc*100,dt_acc*100]})
mdl_evl
pal=['red','blue','green']
fig, ax = plt.subplots(figsize=(20,10))
sns.barplot(x="Model",y="Accuracy",palette=pal,data=mdl_evl)
plt.title('Model Accuracy')
plt.xlabel('Model')
plt.ylabel('Accuracy')
```
According to the accuracy scores, the best-performing model is the Random Forest.
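A single train/test split can flatter one model by chance; k-fold cross-validation makes the comparison more robust. A hedged sketch, with synthetic data standing in for the lumpy-skin dataset (so the numbers are illustrative only):

```python
# Hedged sketch: 5-fold cross-validation as a more robust model comparison.
# make_classification stands in for the real dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(scores.mean())  # mean accuracy across the 5 folds
```

Replacing the synthetic `X, y` with the scaled features above would rank the three classifiers on averaged folds rather than one split.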
# Quantum pipeline using JAX backend
This performs an exact classical simulation.
```
from jax import numpy as np
def read_data(filename):
    labels, sentences = [], []
    with open(filename) as f:
        for line in f:
            labels.append([1, 0] if line[0] == '1' else [0, 1])
            sentences.append(line[1:].strip())
    return np.array(labels), sentences
train_labels, train_data = read_data('datasets/mc_train_data.txt')
dev_labels, dev_data = read_data('datasets/mc_dev_data.txt')
test_labels, test_data = read_data('datasets/mc_test_data.txt')
```
### Create diagrams
```
from lambeq.ccg2discocat import DepCCGParser
reader = DepCCGParser(possible_root_cats=['S[dcl]'])
raw_train_diagrams = reader.sentences2diagrams(train_data)
raw_dev_diagrams = reader.sentences2diagrams(dev_data)
raw_test_diagrams = reader.sentences2diagrams(test_data)
from discopy.rigid import Id
def remove_cups(diagram):
    # Remove cups to reduce post-selection in the circuit, for faster execution
    diags = []
    for box, offset in zip(diagram.boxes, diagram.offsets):
        if not box.dom:  # word box
            diags.insert(offset, box)
        else:  # cup (the only other type of box in these diagrams)
            i = 0
            off = offset
            while off != len(diags[i].cod) - 1:
                assert off > 0
                off -= len(diags[i].cod)
                i += 1
            left, right = diags[i:i+2]
            if len(left.cod) == 1:
                new_diag = right >> (left.r.dagger() @ Id(right.cod[1:]))
            else:
                assert len(right.cod) == 1
                new_diag = left >> (Id(left.cod[:-1]) @ right.l.dagger())
            diags[i:i+2] = [new_diag]
    assert len(diags) == 1
    return diags[0]
train_diagrams = [remove_cups(diagram) for diagram in raw_train_diagrams]
dev_diagrams = [remove_cups(diagram) for diagram in raw_dev_diagrams]
test_diagrams = [remove_cups(diagram) for diagram in raw_test_diagrams]
train_diagrams[0].draw()
```
### Create circuits
```
from lambeq.circuit import IQPAnsatz
from lambeq.core.types import AtomicType
ansatz = IQPAnsatz({AtomicType.NOUN: 1, AtomicType.SENTENCE: 1},
                   n_layers=1, n_single_qubit_params=3)
train_circuits = [ansatz(diagram) for diagram in train_diagrams]
dev_circuits = [ansatz(diagram) for diagram in dev_diagrams]
test_circuits = [ansatz(diagram) for diagram in test_diagrams]
train_circuits[0].draw(figsize=(9, 12))
```
### Parameterise
```
from sympy import default_sort_key
all_circuits = train_circuits + dev_circuits + test_circuits
# sort the symbols since they are returned as a set
parameters = sorted(
    {s for circ in all_circuits for s in circ.free_symbols},
    key=default_sort_key)
from discopy.quantum import Circuit
from discopy.tensor import Tensor
from jax import jit
Tensor.np = np
def normalise(predictions):
    # apply smoothing to predictions
    predictions = np.abs(predictions) + 1e-9
    return predictions / predictions.sum()

def make_pred_fn(circuits):
    circuit_fns = [c.lambdify(*parameters) for c in circuits]

    def predict(params):
        outputs = Circuit.eval(*(c(*params) for c in circuit_fns))
        return np.array([normalise(output.array) for output in outputs])

    return predict
train_pred_fn = jit(make_pred_fn(train_circuits))
dev_pred_fn = jit(make_pred_fn(dev_circuits))
test_pred_fn = make_pred_fn(test_circuits)
```
### Train
```
from noisyopt import minimizeSPSA
import numpy
def make_cost_fn(pred_fn, labels):
    def cost_fn(params, **kwargs):
        predictions = pred_fn(params)
        cost = -np.sum(labels * np.log(predictions)) / len(labels)  # binary cross-entropy loss
        costs.append(cost)
        acc = np.sum(np.round(predictions) == labels) / len(labels) / 2  # half due to double-counting
        accuracies.append(acc)
        return cost

    costs, accuracies = [], []
    return cost_fn, costs, accuracies
train_cost_fn, train_costs, train_accs = make_cost_fn(train_pred_fn, train_labels)
dev_cost_fn, dev_costs, dev_accs = make_cost_fn(dev_pred_fn, dev_labels)
SEED = 0
rng = numpy.random.default_rng(SEED)
x0 = np.array(rng.random(len(parameters)))
numpy.random.seed(SEED)
result = minimizeSPSA(train_cost_fn, x0=x0, a=0.2, c=0.06, niter=80, callback=dev_cost_fn)
```
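For reference, the loss and accuracy computed inside `make_cost_fn` can be reproduced standalone. A hedged sketch using plain NumPy, with dummy arrays standing in for circuit outputs:

```python
# Hedged sketch of the loss/accuracy logic above, on dummy data.
import numpy as np

def normalise(predictions):
    # same smoothing-and-renormalising step as in the pipeline
    predictions = np.abs(predictions) + 1e-9
    return predictions / predictions.sum()

labels = np.array([[1, 0], [0, 1]])       # two one-hot labels
raw = np.array([[0.8, 0.2], [0.3, 0.7]])  # dummy circuit outputs (assumed)
preds = np.array([normalise(r) for r in raw])

loss = -np.sum(labels * np.log(preds)) / len(labels)       # binary cross-entropy
acc = np.sum(np.round(preds) == labels) / len(labels) / 2  # half due to double-counting
print(loss, acc)
```

Here both dummy predictions round to the correct label, so the accuracy is 1.0 while the loss stays positive.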
### Show results
```
import matplotlib.pyplot as plt
fig, ((ax_tl, ax_tr), (ax_bl, ax_br)) = plt.subplots(2, 2, sharex=True, sharey='row', figsize=(10, 6))
ax_tl.set_title('Training set')
ax_tr.set_title('Development set')
ax_bl.set_xlabel('Iterations')
ax_br.set_xlabel('Iterations')
ax_bl.set_ylabel('Accuracy')
ax_tl.set_ylabel('Loss')
colours = iter(plt.rcParams['axes.prop_cycle'].by_key()['color'])
ax_tl.plot(train_costs[1::2], color=next(colours)) # training evaluates twice per iteration
ax_bl.plot(train_accs[1::2], color=next(colours)) # so take every other entry
ax_tr.plot(dev_costs, color=next(colours))
ax_br.plot(dev_accs, color=next(colours))
# print test accuracy
test_cost_fn, _, test_accs = make_cost_fn(test_pred_fn, test_labels)
test_cost_fn(result.x)
print('Test accuracy:', test_accs[0])
```
## DW Finals 2017 solutions <br>
### These solutions are student compiled and might contain errors (especially for qn 1) <br> Credit goes to Team Anonymous on Piazza
```
class MySMClass(sm.SM):
    startState = ('forward', 0.0)
    def getNextValues(self, state, inp):
        state, orig_angle = state
        angle = util.fixAnglePlusMinusPi(inp.odometry.theta)
        front_dist = inp.sonars[2]
        eps = 0.01
        if state == 'forward':
            if front_dist <= 0.5:
                next_state = ('rotate', angle)
                forward = 0.0
                rotation = 0.1
            else:
                next_state = (Q1, Q2)
                forward = 0.1
                rotation = 0.0
        elif state == 'rotate':
            if not util.nearAngle(abs(angle - orig_angle), math.pi/2.0, eps):
                next_state = (Q3, Q4)
                forward = 0.0
                rotation = 0.1
            else:
                next_state = (Q5, Q6)
                forward = 0.1
                rotation = 0.0
        print(next_state, forward, rotation)
        return (next_state, io.Action(fvel=forward, rvel=rotation))
```
### Part A
#### Q.1 [10 points]
#### One student was not satisfied with his boundary follower program for the mini project. He wanted to rotate exactly 90 degrees when the eBot found a boundary. To do that, he wrote a state machine to test his 90 degrees rotation as shown below. The eBot should move forward until it sees a boundary at around 0.5 m and starts rotating. After it rotates for 90 degrees, it should move forward again.
##### a) [ 3 points] Specify the values of Q1 to Q6 at lines 14, 19, and 23.
##### b) [ 5 points] Draw the state transition diagram when you consider only the first item in the tuple of your state variable. Hint: you can use Δθ to denote abs(angle - orig_angle) in your state diagram.
##### c) [ 1 points] Explain the purpose of using util.fixAnglePlusMinusPi(). What problems may be encountered if we do not use this function? Hint: Check libdw.util module documentation.
##### d) [ 1 point] Draw one kind of world with its boundary where this state machine will not work.
**Sample answer 1:** <br>
a) Q1: 'forward' Q2: 0.0 Q3: 'rotate' Q4: angle Q5: 'forward' Q6: 0.0<br>
b) - awkward stare -<br>
c) util.fixAnglePlusMinusPi() returns an equivalent angle between plus and minus pi for a radian input. Without this wrapping, the raw odometry angle keeps accumulating, so the robot could, for example, turn 270 degrees and still pass the check because abs(270-360) $\approx$ 90.<br>
d) Any world where the boundary cannot be picked up by sonar? Like staying within a tabletop / cliff boundary.
#### Q.2[10 points]<br>
#### My friend Julius-C explained to me this awesome trick to share secret messages. He told me it was super secure back in the day. You just shift each letter of your message according to its order in the alphabet, by a super secret constant factor! So for instance when ciphering "AB" with key=3, the result is "DE". To decrypt, one just shifts back the same number of positions. I realized this was easy to implement in Python with my DW skills. To test my implementation he sent me an encrypted message for me to decrypt. But something went wrong with my code below :(
```
alphabet = {'A':0,'B':1,'C':2,'D':3,'E':4,'F':5,'G':6,'H':7,'I':8,
            'J':9,'K':10,'L':11,'M':12,'N':13,'O':14,'P':15,'Q':16,'R':17,'S':18,
            'T':19,'U':20,'V':21,'W':22,'X':23,'Y':24,'Z':25}
# the next line is just to create the inverse alphabet dictionary
inverse = {v: k for k, v in alphabet.items()}
# Encrypting function
def cipher(m, key):
    c = ""
    for x in m:
        y = alphabet[x] + key % 26  # fix: (alphabet[x] + key) % 26
        c = c + inverse[y]
    return c

# Decrypting function
def decipher(c, key):
    m = ""
    for y in c:
        x = alphabet[y] - key % 26  # fix: (alphabet[y] - key) % 26
        m = m + inverse[x]
    return m
#I'm not sure if this was the key, I forgot!
key = 5
#But I'm pretty sure that the secret messages starts with 'D'..
c = "QJEHYRM"
#print cipher(m,k)
print (decipher(c,key))
```
##### a)[5 points] Can you spot and fix the logical error in the implementation? This is the run-time error I get when running the code:
line 20, in decipher<br>
m = m + inverse[x]<br>
KeyError: -1<br>
Note: There are no syntax errors in the code, I checked!
##### b)[5 points] I thought the key was 5 but I think I forgot. Can you help me recover the secret key and the secret message? Julius-C told me that the first letter of the secret message was "D". Explain your answer.
**Sample solution:**<br>
a) Add parenthesis when performing the modulo operation. Error fixed is commented out. <br>
b) Matching the first letter of c, 'Q' (index 16), with the letter 'D' (index 3), we can deduce that the secret key is 16 - 3 = 13, which yields the (not so) secret message 'DWRULEZ'.
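The key recovery in (b) can also be checked by brute force. A standalone sketch (the alphabet dictionary is rebuilt with a comprehension, and the decipher uses the properly parenthesised modulo) that tries all 26 keys and keeps the one whose plaintext starts with 'D':

```python
# Hedged sketch: brute-force the Caesar key using the hint that the
# message starts with 'D'.
alphabet = {chr(ord('A') + i): i for i in range(26)}
inverse = {v: k for k, v in alphabet.items()}

def decipher(c, key):
    # corrected shift-back: (index - key) mod 26
    return ''.join(inverse[(alphabet[y] - key) % 26] for y in c)

c = "QJEHYRM"
for key in range(26):
    m = decipher(c, key)
    if m[0] == 'D':
        print(key, m)  # prints: 13 DWRULEZ
```

Only one key maps 'Q' (index 16) to 'D' (index 3), so the loop prints a single candidate.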
### Part B
#### Q.3 [10 points]
#### All books published before 2007 were assigned a 10-digit reference number called ISBN (International Standard Book Number). The 10 digits can be denoted as $d_1 d_2 d_3 d_4 d_5 d_6 d_7 d_8 d_9 d_{10}$. The last digit, $d_{10}$, is called the 'checksum', which is calculated from the other nine digits using the following formula:<br> $$ (d_1\times 1+d_2\times 2+d_3\times 3+d_4\times 4+d_5\times 5+d_6\times 6+d_7\times 7+d_8\times 8+d_9\times 9) \,\%\, 11 $$ <br> If the checksum is 10, the last digit is denoted as 'X', according to the ISBN convention. Write a function named complete_ISBN that takes the first 9 digits as one string argument, then returns the full 10-digit ISBN number (including leading zeros if any) as a string.
```
def complete_ISBN(string):
    integer_list = []
    for i in range(9):  # the question asks for exactly the first 9 digits
        integer_list.append(int(string[i]))
    for index, num in enumerate(integer_list):
        integer_list[index] = (index + 1) * num  # index starts from 0, so multiply by index + 1
    d10 = sum(integer_list) % 11
    return string + str(d10) if d10 != 10 else string + 'X'
## TEST CASES##
print ('Test case 1: input=013601267')
print (complete_ISBN('013601267'))
print ('Test case 2: input=013031997')
print (complete_ISBN('013031997'))
print ('Test case 3: input=020139829')
print (complete_ISBN('020139829'))
```
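A natural companion (not part of the question) is a validator that recomputes the checksum of a full 10-digit ISBN, treating a trailing 'X' as the value 10:

```python
# Hedged companion sketch: verify a complete 10-digit ISBN.
def is_valid_isbn(isbn):
    # recompute the weighted checksum and compare with the 10th character
    digits = [10 if ch == 'X' else int(ch) for ch in isbn]
    return sum((i + 1) * d for i, d in enumerate(digits[:9])) % 11 == digits[9]

print(is_valid_isbn('0136012671'))   # the completed first test case
print(is_valid_isbn('020139829X'))   # a checksum of 10 is written as 'X'
```

Running complete_ISBN and then is_valid_isbn on the same prefix should always give True.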
### Q.4 [15 points] <br> Write a function get_products(inlist, test). The function returns two outputs (as a tuple). The first output is a dictionary d. For each tuple in inlist, the product of the entries is calculated. The products calculated are the keys of the dictionary d. The corresponding value is a list of the tuples that give that product. The second output is a list of the tuples from inlist that have test as the product of their entries. If there is no corresponding value, the second output should be a None object.
```
import numpy as np
def get_products(inlist, test):
    d = {}
    for i in inlist:
        d.setdefault(np.prod(i), []).append(i)  # setdefault() is handy for creating dictionary keys with multiple values
    return d, d.get(test)
## TEST CASES ##
inlist = [(3,5),(2,2),(2,2,3),(12,2),(7,3),(3,7,1)]
d,o =get_products(inlist, 15)
print (sorted(d.keys()))
print (sorted(d.values()))
print (o)
d,o = get_products(inlist, 21)
print (o)
d,o = get_products(inlist, 11)
print (o)
```
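The numpy dependency is not essential here. A hedged variant using math.prod (Python 3.8+) shows that setdefault does all the grouping work:

```python
# Hedged variant of get_products without numpy.
from math import prod

def get_products(inlist, test):
    d = {}
    for tup in inlist:
        # group each tuple under its product; create the list on first sight
        d.setdefault(prod(tup), []).append(tup)
    return d, d.get(test)

d, o = get_products([(3, 5), (2, 2), (2, 2, 3)], 15)
print(o)  # [(3, 5)]
```

One difference worth noting: math.prod returns plain ints, while np.prod returns numpy integer scalars, so dictionary keys print slightly differently.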
### Q.5 [15 points]<br>A fictional language for an upcoming fantasy television show has been invented using the letters of the roman alphabet. Soon, the scriptwriters will have lots of text written for this language, and your task is to build a spell checker program for documents produced in this language. The rules of the written form of this language are as follows. <br>-Each word must have only two letters. <br>-The first letter must be one of the following lower-case consonant letters: k g s t d n h b m r <br>-The second letter must be one of the following lower-case vowel letters: a e i o u <br>-There must be at least one space after the end of each word. <br> This language does not have upper-case letters or punctuation. This spell-checker can be implemented using a finite state machine. Write a class named SpellCheckSM to implement this spell-checking state machine. SpellCheckSM is a subclass of sm.SM, which is obtained by having from libdw import sm at the start of your code.
```
from libdw import sm
consonant = ['k', 'g', 's', 't', 'd', 'n', 'h', 'b', 'm', 'r']
vowel = ['a', 'e', 'i', 'o', 'u']
class SpellCheckSM(sm.SM):
    start_state = 'new word'
    def get_next_values(self, state, inp):  # just follow the transition diagram
        if state == 'new word':
            if inp == ' ':
                return ('new word', '')
            if inp in consonant:
                return ('consonant', '')
            else:
                return ('error', '')
        if state == 'consonant':
            if inp == ' ':
                return ('new word', 'error')
            if inp in vowel:
                return ('vowel', '')
            else:
                return ('error', '')
        if state == 'vowel':
            if inp == ' ':
                return ('new word', 'ok')
            else:
                return ('error', '')
        if state == 'error':
            if inp == ' ':
                return ('new word', 'error')
            else:
                return ('error', '')
# TEST CASES #
a = SpellCheckSM()
print ('test case A')
line1 = 'a si tu ne mai me pas je '
print (a.transduce(line1))
print ('test case B')
line2 = 'hi ka ru no de '
print (a.transduce(line2))
print ('test case C')
line3= 'mu '
a.transduce(line3,verbose=True)
```
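To make explicit what transduce does, here is a hedged re-implementation of the same FSM without libdw: feed one character at a time through a transition function and collect the outputs.

```python
# Hedged sketch: the spell-check FSM without libdw, showing how
# transduce threads the state through the input string.
consonant = list('kgstdnhbmr')
vowel = list('aeiou')

def step(state, ch):
    # same transitions as SpellCheckSM.get_next_values
    if state == 'new word':
        if ch == ' ':
            return ('new word', '')
        return ('consonant', '') if ch in consonant else ('error', '')
    if state == 'consonant':
        if ch == ' ':
            return ('new word', 'error')
        return ('vowel', '') if ch in vowel else ('error', '')
    if state == 'vowel':
        return ('new word', 'ok') if ch == ' ' else ('error', '')
    if state == 'error':
        return ('new word', 'error') if ch == ' ' else ('error', '')

def transduce(text):
    state, outputs = 'new word', []
    for ch in text:
        state, out = step(state, ch)
        outputs.append(out)
    return outputs

print(transduce('mu '))  # ['', '', 'ok']
```

A word is only judged when its trailing space arrives, which is why the outputs for the letters themselves are empty strings.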
### Q.6 [Total: 25 points]<br>A parallelogram is a quadrilateral with two pairs of parallel sides. The opposite sides of a parallelogram are equal and the opposite angles of a parallelogram are equal. A rhombus is a parallelogram in which all four sides are equal. A rectangle is a parallelogram in which all angles are right angles. A square is a rectangle in which two adjacent sides have equal length.... blah blah..
```
class Parallelogram(object):
    def __init__(self, side1, side2, diagonal):
        self.s1 = float(side1)
        self.s2 = float(side2)
        self.d = float(diagonal)
    def __diagonal__(self):
        return self.d
    def __str__(self):
        return "%.2f" % (self.d)
    def get(self):
        return self.d
    def set(self, d):
        self.d = d if d > 0 else 0
    diagonal = property(get, set)
    def __call__(self):
        return self.s1 + self.s2 > self.d
    def calc_area(self):
        s = (self.s1 + self.s2 + self.d) / 2.
        a = 2 * (s * (s - self.s1) * (s - self.s2) * (s - self.d)) ** .5
        return round(a, 2)

class Rhombus(Parallelogram):  # subclass containing subclass-specific code
    def __call__(self):
        return True if self.s1 + self.s2 > self.d and self.s1 == self.s2 else False
    def calc_area(self):
        a = 0.25 * self.d * ((4 * self.s1) ** 2 - 4 * (self.d) ** 2) ** .5
        return round(a, 2)

class Rectangle(Parallelogram):
    def __call__(self):
        return True if self.s1 ** 2 + self.s2 ** 2 == self.d ** 2 and abs(self.s1 - self.s2) > 0 else False
    def calc_area(self):
        return round(self.s1 * self.s2, 2)

class Square(Rectangle):
    def __call__(self):
        return True if abs(self.s1 ** 2 + self.s2 ** 2 - self.d ** 2) < 0.01 and self.s1 == self.s2 else False
    def calc_area(self):
        return round(self.s1 ** 2, 2)
## TEST CASES ##
para = Parallelogram(2,3,4)
print (para)
para.diagonal = 3
print (para)
para.diagonal = -1
print (para)
para = Parallelogram(3,4,5)
print (para())
para = Parallelogram(3,4,8)
print (para())
rect = Rectangle(3,4,6)
print (rect())
rhom = Rhombus(3,3,2)
print (rhom())
squr = Square(2,2,3)
print (squr())
squr = Square(2,2,8**.5)
print (squr())
para = Parallelogram(3,4,2)
print (para.calc_area())
para = Parallelogram(5,7,9)
print (para.calc_area())
rect = Rectangle(3,4,5)
print (rect.calc_area())
rhom = Rhombus(3,3,4)
print (rhom.calc_area())
squr=Square(2,2,2.83)
print (squr.calc_area())
```
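A quick sanity check of the Heron-based area (not part of the test cases): with sides 3 and 4 and diagonal 5 the parallelogram is a rectangle, so the doubled-triangle area should come out to 3 × 4 = 12.

```python
# Hedged sketch: the diagonal splits the parallelogram into two
# congruent triangles, so the area is twice Heron's formula.
def parallelogram_area(s1, s2, d):
    s = (s1 + s2 + d) / 2           # semi-perimeter of one triangle
    return 2 * (s * (s - s1) * (s - s2) * (s - d)) ** 0.5

print(parallelogram_area(3, 4, 5))  # 12.0, matching the 3x4 rectangle
```

The same identity is why `Parallelogram(3,4,5).calc_area()` and `Rectangle(3,4,5).calc_area()` agree in the test cases.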
### Q.7 [15 points]<br>Learn the art of procrastination. If we can do something later, why do it now? Let us try to write a program to plan the procrastination. Given n tasks due today, the i-th task takes $x_i$ units of time and must be finished by time $t_i$. Suppose we can only work on one task at a time, and once we begin a task, we must work until it is finished. What is the latest time that we must start to ensure that all the deadlines are met? Note that time units are integer and start from 0... Will type the rest later...
```
class MyTask(object):
    def __init__(self, deadline, duration):
        self.deadline = deadline
        self.duration = duration
    def __str__(self):
        return 'T(%d,%d)' % (self.deadline, self.duration)

import numpy as np

def procrastination(ls):
    ## Compile assignments in a dictionary ##
    d = {}
    start_times = []
    for i in range(len(ls)):
        d.setdefault(ls[i].deadline, []).append(ls[i].duration)
    ## Prepare above dictionary for easy manipulation ##
    processed_ls = sorted(list(zip(d.keys(), [np.sum(i) for i in d.values()])), reverse=True)
    ## Calculate potential start times ##
    while processed_ls:
        start_times.append(processed_ls[0][0] - np.sum([i[1] for i in processed_ls]))
        processed_ls.pop(0)
    ## Satisfy "Procrastination" and "-1" condition ##
    return np.min(start_times) if np.min(start_times) >= 0 else -1
## TEST CASES ##
assignments = [ MyTask(9,1), MyTask(9,2), MyTask(7,1) ]
print (procrastination(assignments))
assignments1 = [ MyTask(3,2), MyTask(3,2) ]
print (procrastination(assignments1))
assignments2= [ MyTask(9,1), MyTask(9,2), MyTask(4,3) ]
print (procrastination(assignments2))
assignments3= [MyTask(14,10), MyTask(33,2), MyTask(5,3), MyTask(14,1), MyTask(10,2)]
print (procrastination(assignments3))
```
### Dynamic approach with the same logic
```
class MyTask(object):
    def __init__(self, deadline, duration):
        self.deadline = deadline
        self.duration = duration
    def __str__(self):
        return 'T(%d,%d)' % (self.deadline, self.duration)

def procrastination(assignments):
    lst = []
    for i in assignments:
        time = i.deadline
        for j in assignments:
            if i.deadline >= j.deadline:
                time -= j.duration
        lst.append(time)
    if min(lst) < 0:
        return -1
    else:
        return min(lst)
## TEST CASES ##
assignments = [ MyTask(9,1), MyTask(9,2), MyTask(7,1) ]
print (procrastination(assignments))
assignments1 = [ MyTask(3,2), MyTask(3,2) ]
print (procrastination(assignments1))
assignments2= [ MyTask(9,1), MyTask(9,2), MyTask(4,3) ]
print (procrastination(assignments2))
assignments3= [MyTask(14,10), MyTask(33,2), MyTask(5,3), MyTask(14,1), MyTask(10,2)]
print (procrastination(assignments3))
```
#### Notes for Q7: <br>
-- Compile assignments in a dictionary --<br>
Organise input in the format d[deadline] = duration with repeat<br>
input: [ MyTask(9,1), MyTask(9,2), MyTask(7,1)] <br>
output: {9: [1,2], 7: [1]}<br>
setdefault() is handy for this step.<br>
<br>
-- Prepare above dictionary for easy manipulation --<br>
Process the dictionary into a list of tuples for manipulation in 3 steps<br>
Step 1: Add all values of the same key and put them in a list<br>
input: {9: [1,2], 7: [1]}<br>
output: [3,1]<br>
Step 2: Map dictionary keys and the sum of the key's values into a list of tuples <br>
input: ([9,7],[3,1])<br>
output: [(9,3),(7,1)]<br>
This step 'extracts' the output of the MyTask class objects into something you can manipulate.<br>
Step 3: Sort the list of tuples from largest to smallest deadline<br>
input: [(7,1),(9,3)]<br>
output: [(9,3),(7,1)]<br>
This step makes my next step easier<br>
-- Calculate potential start times --<br>
- A good procrastinator always manages to complete all work before its deadline, in a single sitting, at the latest minute possible. <br>
- Thus, a potential start time is obtained by taking a deadline and subtracting the time needed to complete all work due at or before that deadline. <br>
In implementation, we compute the potential start times by repeatedly taking the latest (largest) remaining deadline, subtracting the sum of all durations due by it, and removing that tuple from the list.
<br>
<br>
--Satisfy "Procrastination" and "-1" condition --<br>
The rationale of this step is best explained with: <br>
-To meet all deadlines, a good procrastinator must start at the smallest of the potential start times. Even that earliest start time barely grants enough time to complete the most pressing work, and since all work is done in one sitting, if it is negative then no start time can succeed (hence the -1).<br>
<br>
# PerfForesightConsumerType
```
# Initial imports and notebook setup, click arrow to show
from HARK.ConsumptionSaving.ConsIndShockModel import PerfForesightConsumerType
from HARK.utilities import plotFuncs
from time import clock
import matplotlib.pyplot as plt
import numpy as np
mystr = lambda number : "{:.4f}".format(number)
```
The module $\texttt{HARK.ConsumptionSaving.ConsIndShockModel}$ concerns consumption-saving models with idiosyncratic shocks to (non-capital) income. All of the models assume CRRA utility with geometric discounting, no bequest motive, and income shocks that are either fully transitory or fully permanent.
$\texttt{ConsIndShockModel}$ currently includes three models:
1. A very basic "perfect foresight" model with no uncertainty (shocks are zero).
2. A model with risk over transitory and permanent income shocks.
3. The model described in (2), with an interest rate for debt that differs from the interest rate for savings.
This notebook provides documentation for the first of these three models.
$\newcommand{\CRRA}{\rho}$
$\newcommand{\DiePrb}{\mathsf{D}}$
$\newcommand{\PermGroFac}{\Gamma}$
$\newcommand{\Rfree}{\mathsf{R}}$
$\newcommand{\DiscFac}{\beta}$
## Statement of the model
The $\texttt{PerfForesightConsumerType}$ class solves the problem of a consumer with Constant Relative Risk Aversion utility
${\CRRA}$
\begin{equation}
U(C) = \frac{C^{1-\CRRA}}{1-\rho},
\end{equation}
who has perfect foresight about everything except whether he will die between the end of period $t$ and the beginning of period $t+1$. Permanent labor income $P_t$ grows from period $t$ to period $t+1$ by factor $\PermGroFac_{t+1}$. The consumer faces no artificial borrowing constraint: He is able to borrow against his entire future stream of income.
At the beginning of period $t$, the consumer has market resources $M_t$ (which includes both market wealth and current income) and must choose how much to consume $C_t$ and how much to retain in a riskless asset $A_t$, which will earn return factor $\Rfree$. The agent's flow of future utility $U(C_{t+n})$ from consumption is geometrically discounted by factor $\DiscFac$ per period. The consumer only experiences future value if he survives, which occurs with probability $1-\DiePrb_{t+1}$.
For parallelism with the treatment of more complicated problems, we write the problem rather elaborately in Bellman form as:
\begin{eqnarray*}
V_t(M_t,P_t) &=& \max_{C_t}~U(C_t) ~+ \DiscFac (1 - \DiePrb_{t+1}) V_{t+1}(M_{t+1},P_{t+1}), \\
& s.t. & \\
A_t &=& M_t - C_t, \\
M_{t+1} &=& \Rfree A_t + Y_{t+1}, \\
Y_{t+1} &=& P_{t+1}, \\
P_{t+1} &=& \PermGroFac_{t+1} P_t.
\end{eqnarray*}
The parameters of the consumer's problem are the coefficient of relative risk aversion $\CRRA$, the intertemporal discount factor $\DiscFac$, an interest factor $\Rfree$, and age-varying sequences of the permanent income growth factor $\PermGroFac_t$ and survival probability $(1 - \DiePrb_t)$. [These lecture notes](http://econ.jhu.edu/people/ccarroll/public/lecturenotes/consumption/PerfForesightCRRA) show that under these assumptions the problem can be transformed into an equivalent problem stated in terms of *normalized* variables (represented in lower case); all real variables are divided by permanent income $P_t$ and value is divided by $P_t^{1-\CRRA}$. The Bellman form of the normalized model (see the lecture notes for details) is:
\begin{eqnarray*}
v_t(m_t) &=& \max_{c_t}~U(c_t) ~+ \DiscFac (1 - \DiePrb_{t+1}) \PermGroFac_{t+1}^{1-\CRRA} v_{t+1}(m_{t+1}), \\
& s.t. & \\
a_t &=& m_t - c_t, \\
m_{t+1} &=& a_t (\Rfree/\PermGroFac_{t+1} )+ 1.
\end{eqnarray*}
## Solution method for PerfForesightConsumerType
Because of the assumptions of CRRA utility, no risk other than mortality, and no artificial borrowing constraint, the problem has a closed form solution in which consumption is a linear function of resources, and the utility-inverse of the value function is also linear (that is, $u^{-1}(v)$ is linear in $m$). Details of the mathematical solution of this model can be found in the lecture notes [PerfForesightCRRA](http://econ.jhu.edu/people/ccarroll/public/lecturenotes/consumption/PerfForesightCRRA).
The one period problem for this model is solved by the function $\texttt{solveConsPerfForesight}$, which creates an instance of the class $\texttt{ConsPerfForesightSolver}$. To construct an instance of the class $\texttt{PerfForesightConsumerType}$, several parameters must be passed to this constructor.
## Example parameter values
| Parameter | Description | Code | Example value | Time-varying? |
| :---: | --- | --- | --- | --- |
| $\DiscFac$ |Intertemporal discount factor | $\texttt{DiscFac}$ | $0.96$ | |
| $\CRRA $ |Coefficient of relative risk aversion | $\texttt{CRRA}$ | $2.0$ | |
| $\Rfree$ | Risk free interest factor | $\texttt{Rfree}$ | $1.03$ | |
| $1 - \DiePrb_{t+1}$ |Survival probability | $\texttt{LivPrb}$ | $[0.98]$ | $\surd$ |
|$\PermGroFac_{t+1}$|Permanent income growth factor|$\texttt{PermGroFac}$| $[1.01]$ | $\surd$ |
|$T$| Number of periods in this type's "cycle" |$\texttt{T_cycle}$| $1$ | |
|(none)| Number of times the "cycle" occurs |$\texttt{cycles}$| $0$ | |
Note that the survival probability and income growth factor have time subscripts; likewise, the example values for these parameters are *lists* rather than simply single floats. This is because those parameters are in principle *time-varying*: their values can depend on which period of the problem the agent is in (for example, mortality probability depends on age). All time-varying parameters *must* be specified as lists, even when the model is being solved for an infinite horizon case where in practice the parameter takes the same value in every period.
The last two parameters in the table specify the "nature of time" for this type: the number of (non-terminal) periods in this type's "cycle", and the number of times that the "cycle" occurs. *Every* subclass of $\texttt{AgentType}$ uses these two code parameters to define the nature of time. Here, $\texttt{T_cycle}$ has the value $1$, indicating that there is exactly one period in the cycle, while $\texttt{cycles}$ is $0$, indicating that the cycle is repeated an *infinite* number of times-- it is an infinite horizon model, with the same "kind" of period repeated over and over.
In contrast, we could instead specify a life-cycle model by setting $\texttt{cycles}$ to $1$ and $\texttt{T_cycle}$ to the number of (non-terminal) periods in the lifespan, specifying age-varying sequences of income growth and survival probability. In all cases, the number of elements in each time-varying parameter should exactly equal $\texttt{T_cycle}$.
The parameter $\texttt{AgentCount}$ specifies how many consumers there are of this *type*-- how many individuals have these exact parameter values and are *ex ante* homogeneous. This information is not relevant for solving the model, but is needed in order to simulate a population of agents, introducing *ex post* heterogeneity through idiosyncratic shocks. Of course, simulating a perfect foresight model is quite boring, as there are *no* idiosyncratic shocks other than death!
The cell below defines a dictionary that can be passed to the constructor method for $\texttt{PerfForesightConsumerType}$, with the values from the table here.
```
PerfForesightDict = {
    # Parameters actually used in the solution method
    "CRRA" : 2.0,           # Coefficient of relative risk aversion
    "Rfree" : 1.03,         # Interest factor on assets
    "DiscFac" : 0.96,       # Default intertemporal discount factor
    "LivPrb" : [0.98],      # Survival probability
    "PermGroFac" : [1.01],  # Permanent income growth factor

    # Parameters that characterize the nature of time
    "T_cycle" : 1,          # Number of periods in the cycle for this agent type
    "cycles" : 0            # Number of times the cycle occurs (0 --> infinitely repeated)
}
```
## Inspecting the solution
With the dictionary we have just defined, we can create an instance of $\texttt{PerfForesightConsumerType}$ by passing the dictionary to the class (as if the class were a function). This instance can then be solved by invoking its $\texttt{solve}$ method.
```
PFexample = PerfForesightConsumerType(**PerfForesightDict)
PFexample.cycles = 0
PFexample.solve()
```
The $\texttt{solve}$ method fills in the instance's attribute $\texttt{solution}$ as a time-varying list of solutions to each period of the consumer's problem. In this case, $\texttt{solution}$ will be a list with exactly one instance of the class $\texttt{ConsumerSolution}$, representing the solution to the infinite horizon model we specified.
```
print(PFexample.solution)
```
Each element of $\texttt{solution}$ has a few attributes. To see all of them, we can use the $\texttt{vars}$ built-in function. In particular, the consumption function is stored in the attribute $\texttt{cFunc}$ of each element of $\texttt{ConsumerType.solution}$, so $\texttt{solution}$ effectively contains a (time-varying) list of consumption functions by age.
```
print(vars(PFexample.solution[0]))
```
The two most important attributes of a single period solution are the (normalized) consumption function $\texttt{cFunc}$ and the (normalized) value function $\texttt{vFunc}$; the marginal value function $\texttt{vPfunc}$ is also constructed. Let's plot those functions near the lower bound of the permissible state space (the attribute $\texttt{mNrmMin}$ tells us the lower bound of $m_t$ where the consumption function is defined).
```
print('Linear perfect foresight consumption function:')
mMin = PFexample.solution[0].mNrmMin
plotFuncs(PFexample.solution[0].cFunc,mMin,mMin+10.)
print('Perfect foresight value function:')
plotFuncs(PFexample.solution[0].vFunc,mMin+0.1,mMin+10.1)
```
## Solution Method
### Recursive Formula for $\kappa_{t}$
The paper [BufferStockTheory](https://www.econ2.jhu.edu/people/ccarroll/papers/BufferStockTheory/) has a few other results that are used in the solution code. One is [the recursive formula for the MPC](https://www.econ2.jhu.edu/people/ccarroll/papers/BufferStockTheory/#MPCnvrs). Starting with the last period, in which $\kappa_{T}=1$, the inverse MPC's (and therefore the MPC's themselves) can be constructed using the recursive formula:
\begin{align}
\kappa_{t}^{-1} & = & 1 + \kappa_{t+1}^{-1}(\Rfree \beta)^{1/\rho}/\Rfree
\end{align}
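A minimal numeric sketch of this recursion (a hand-rolled illustration, not HARK's own code; mortality is ignored so the effective discount factor is just $\beta$, and the denominator is taken to be the standard return patience factor $(\Rfree\beta)^{1/\rho}/\Rfree$):

```python
# Hedged sketch: iterate the inverse-MPC recursion backwards from the
# final period, using the example parameter values from the table.
R, beta, rho = 1.03, 0.96, 2.0
pat = (R * beta) ** (1 / rho) / R   # return patience factor (assumed form)
kappa_inv = 1.0                     # kappa_T = 1 in the final period
for _ in range(1000):               # step backwards in time
    kappa_inv = 1 + kappa_inv * pat
print(1 / kappa_inv)                # converges to 1 - (R*beta)**(1/rho)/R
```

Since the patience factor is below one here, the recursion is a convergent geometric sum, and the limiting MPC matches the closed-form infinite-horizon value.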
### Consumption Function
For the perfect foresight problem, there is a well-known [analytical solution]( http://econ.jhu.edu/people/ccarroll/public/lecturenotes/consumption/PerfForesightCRRA/#cFuncAnalytical) for the consumption function: Calling $o_{t}$ 'overall wealth' (including market wealth plus human wealth $h_{t}$) and designating the marginal propensity to consume in period $t$ by $\kappa_{t}$:
\begin{align}
\mathrm{c}_{t} & = o_{t}\kappa_{t}
\end{align}
and in our normalized model $o_{t} = m_{t}-1+h_{t}$ (the '-1' term subtracts off the normalized current income of 1 from market resources $m$ which were market wealth plus current income).
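A minimal sketch of this formula (the helper name and the example numbers are assumptions for illustration, not part of HARK):

```python
def pf_consumption(m, h, kappa):
    """Perfect-foresight consumption in the normalized model:
    overall wealth o = m - 1 + h, and c = o * kappa."""
    o = m - 1.0 + h  # market resources minus normalized current income, plus human wealth
    return o * kappa
```

For example, with hypothetical values $m=5$, $h=30$, and $\kappa=0.04$, overall wealth is $o=34$ and consumption is $c=1.36$.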
### Value Function
A convenient feature of the perfect foresight problem is that the value function has a simple [analytical form](http://econ.jhu.edu/people/ccarroll/public/lecturenotes/consumption/PerfForesightCRRA/#vFuncAnalytical):
\begin{align}
\mathrm{v}_{t} & = \mathrm{u}(\mathrm{c}_{t}(m))\kappa_{t}^{-1}\\
&= \mathrm{u}(o_{t} \kappa_{t}) \kappa_{t}^{-1} \\
&= \mathrm{u}(o_{t})\kappa_{t}^{1-\rho} \kappa_{t}^{-1} \\
&= \mathrm{u}(o_{t})\kappa_{t}^{-\rho}
\end{align}
This means that the utility-inverse of the value function, ${\scriptsize \Lambda} \equiv \mathrm{u}^{-1}(\mathrm{v})$, is linear:
\begin{align}
\scriptsize \Lambda_{t} & = o_{t} \kappa_{t}^{-\rho/(1-\rho)}
\end{align}
When uncertainty or liquidity constraints are added to the problem, the ${\scriptsize \Lambda}$ function is no longer linear. But even in these cases, the utility-inverse of the value function is much better behaved (e.g., closer to linear; bounded over any feasible finite range of $m$) than the uninverted function (which, for example, approaches $-\infty$ as $m$ approaches its lower bound).
Our procedure will therefore generically be to construct the inverse value function, and to obtain the value function from it by uninverting. That is, we construct an interpolating approximation of $\scriptsize \Lambda_{t}$ and compute value on-the-fly from
\begin{align}
\mathrm{v}_{t}(m) & = \mathrm{u}({\scriptsize \Lambda_{t}}(m))
\end{align}
In this case, the interpolation is exact, not an approximation: we need only two points to construct a line, so we choose the minimum possible value of normalized market resources, $\texttt{mNrmMin}$, where $o_{t}=0$ so that $c_{t}=0$, and that minimum plus 1, where the inverted value function will have the value $\kappa_{t}^{-\rho/(1-\rho)}$. From these we construct $\texttt{vFuncNvrs}$ as a linear interpolating function (which automatically extrapolates to the whole real line).
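As a rough sketch of this construction (the function names and parameter values below are assumptions, not the actual HARK implementation, and $\rho \neq 1$ is assumed):

```python
def crra_u(c, rho):
    """CRRA utility, assuming rho != 1."""
    return c ** (1.0 - rho) / (1.0 - rho)

def make_vfunc(mNrmMin, kappa, rho):
    """Build the value function from a two-point linear interpolation of
    the inverted value function Lambda (exact under perfect foresight)."""
    # Lambda passes through (mNrmMin, 0) and (mNrmMin + 1, kappa**(-rho/(1-rho)))
    slope = kappa ** (-rho / (1.0 - rho))
    def vFuncNvrs(m):  # the linear inverted value function
        return slope * (m - mNrmMin)
    def vFunc(m):      # value recovered by 'uninverting'
        return crra_u(vFuncNvrs(m), rho)
    return vFunc
```

At $m = \texttt{mNrmMin} + 1$ (where $o_{t}=1$), this reproduces the analytical value $\mathrm{u}(1)\kappa_{t}^{-\rho}$.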
## Checking Solution Conditions
The code performs tests for whether the supplied parameter values meet various conditions that determine the properties of the solution. Some conditions (like the Finite Human Wealth Condition) are required for the model to have a sensible solution, and if these conditions are violated the code generates a warning message. Other conditions govern characteristics of the model, like whether consumption is falling (whether the consumer is 'absolutely impatient'). All of these checks can be performed manually using the syntax below. The function returns "False" if none of the key conditions has been violated.
```
PFexample.checkConditions(verbose=True,public_call=True)
```
An element of $\texttt{solution}$ also includes the (normalized) marginal value function $\texttt{vPfunc}$, and the lower and upper bounds of the marginal propensity to consume (MPC) $\texttt{MPCmin}$ and $\texttt{MPCmax}$. Note that with a linear consumption function, the MPC is constant, so its lower and upper bound are identical.
## Simulating the model
Suppose we wanted to simulate many consumers who share the parameter values that we passed to $\texttt{PerfForesightConsumerType}$-- an *ex ante* homogeneous *type* of consumers. To do this, our instance would have to know *how many* agents there are of this type, as well as their initial levels of assets $a_t$ and permanent income $P_t$.
### Setting Parameters
Let's fill in this information by passing another dictionary to $\texttt{PFexample}$ with simulation parameters. The table below lists the parameters that an instance of $\texttt{PerfForesightConsumerType}$ needs in order to successfully simulate its model using the $\texttt{simulate}$ method.
| Description | Code | Example value |
| :---: | --- | --- |
| Number of consumers of this type | $\texttt{AgentCount}$ | $10000$ |
| Number of periods to simulate | $\texttt{T_sim}$ | $120$ |
| Mean of initial log (normalized) assets | $\texttt{aNrmInitMean}$ | $-6.0$ |
| Stdev of initial log (normalized) assets | $\texttt{aNrmInitStd}$ | $1.0$ |
| Mean of initial log permanent income | $\texttt{pLvlInitMean}$ | $0.0$ |
| Stdev of initial log permanent income | $\texttt{pLvlInitStd}$ | $0.0$ |
| Aggregate productivity growth factor | $\texttt{PermGroFacAgg}$ | $1.0$ |
| Age after which consumers are automatically killed | $\texttt{T_age}$ | $None$ |
We have specified the model so that initial assets and permanent income are both distributed lognormally, with mean and standard deviation of the underlying normal distributions provided by the user.
The parameter $\texttt{PermGroFacAgg}$ exists for compatibility with more advanced models that employ aggregate productivity shocks; it can simply be set to 1.
In infinite horizon models, it might be useful to prevent agents from living extraordinarily long lives through a fortuitous sequence of mortality shocks. We have thus provided the option of setting $\texttt{T_age}$ to specify the maximum number of periods that a consumer can live before they are automatically killed (and replaced with a new consumer with initial state drawn from the specified distributions). This can be turned off by setting it to $\texttt{None}$.
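The initial draws described above amount to exponentiating draws from the stated normal distributions. A minimal illustration using the example values from the table (HARK performs these draws internally with its own random number generator; the seed here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)  # illustration only; HARK manages its own RNG
aNrmInit = np.exp(rng.normal(loc=-6.0, scale=1.0, size=10000))  # lognormal initial assets
pLvlInit = np.exp(rng.normal(loc=0.0, scale=0.0, size=10000))   # degenerate at exactly 1.0
```

With a standard deviation of zero, every agent starts with permanent income exactly 1; initial normalized assets are strictly positive and centered (in logs) on $-6$.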
The cell below puts these parameters into a dictionary, then gives them to $\texttt{PFexample}$. Note that all of these parameters *could* have been passed as part of the original dictionary; we omitted them above for simplicity.
```
# Create parameter values necessary for simulation
SimulationParams = {
"AgentCount" : 10000, # Number of agents of this type
"T_sim" : 120, # Number of periods to simulate
"aNrmInitMean" : -6.0, # Mean of log initial assets
"aNrmInitStd" : 1.0, # Standard deviation of log initial assets
"pLvlInitMean" : 0.0, # Mean of log initial permanent income
"pLvlInitStd" : 0.0, # Standard deviation of log initial permanent income
"PermGroFacAgg" : 1.0, # Aggregate permanent income growth factor
"T_age" : None, # Age after which simulated agents are automatically killed
}
PFexample(**SimulationParams) # This implicitly uses the assignParameters method of AgentType
```
To generate simulated data, we need to specify which variables we want to track the "history" of for this instance. To do so, we set the $\texttt{track_vars}$ attribute of our $\texttt{PerfForesightConsumerType}$ instance to be a list of strings with the simulation variables we want to track.
In this model, valid elements of $\texttt{track_vars}$ include $\texttt{mNrmNow}$, $\texttt{cNrmNow}$, $\texttt{aNrmNow}$, and $\texttt{pLvlNow}$. Because this model has no idiosyncratic shocks, our simulated data will be quite boring.
### Generating simulated data
Before simulating, the $\texttt{initializeSim}$ method must be invoked. This resets our instance back to its initial state, drawing a set of initial $\texttt{aNrmNow}$ and $\texttt{pLvlNow}$ values from the specified distributions and storing them in the attributes $\texttt{aNrmNow_init}$ and $\texttt{pLvlNow_init}$. It also resets this instance's internal random number generator, so that the same initial states will be set every time $\texttt{initializeSim}$ is called. In models with non-trivial shocks, this also ensures that the same sequence of shocks will be generated on every simulation run.
Finally, the $\texttt{simulate}$ method can be called.
```
# Track market resources, then initialize and run the simulation
PFexample.track_vars = ['mNrmNow']
PFexample.initializeSim()
PFexample.simulate()
```
Each simulation variable $\texttt{X}$ named in $\texttt{track_vars}$ will have the *history* of that variable for each agent stored in the attribute $\texttt{X_hist}$ as an array of shape $(\texttt{T_sim},\texttt{AgentCount})$. To see that the simulation worked as intended, we can plot the mean of $m_t$ in each simulated period:
```
# Plot market resources over time
plt.plot(np.mean(PFexample.mNrmNow_hist,axis=1))
plt.xlabel('Time')
plt.ylabel('Mean normalized market resources')
plt.show()
```
A perfect foresight consumer can borrow against the PDV of his future income-- his human wealth-- and thus as time goes on, our simulated impatient agents approach the (very negative) steady state level of $m_t$ while being steadily replaced with consumers with roughly $m_t=1$.
The slight wiggles in the plotted curve are due to consumers randomly dying and being replaced; their replacement will have an initial state drawn from the distributions specified by the user. To see the current distribution of ages, we can look at the attribute $\texttt{t_age}$.
```
# Plot the CDF
N = PFexample.AgentCount
F = np.linspace(0.,1.,N)
plt.plot(np.sort(PFexample.t_age),F)
plt.xlabel('Current age of consumers')
plt.ylabel('Cumulative distribution')
plt.show()
```
The distribution is (discretely) exponential, with a point mass at 120 for consumers who have survived since the beginning of the simulation.
One might wonder why HARK requires users to call $\texttt{initializeSim}$ before calling $\texttt{simulate}$: Why doesn't $\texttt{simulate}$ just call $\texttt{initializeSim}$ as its first step? We have broken up these two steps so that users can simulate some number of periods, change something in the environment, and then resume the simulation.
When called with no argument, $\texttt{simulate}$ will simulate the model for $\texttt{T_sim}$ periods. The user can optionally pass an integer specifying the number of periods to simulate (which should not exceed $\texttt{T_sim}$).
In the cell below, we simulate our perfect foresight consumers for 80 periods, then seize a bunch of their assets (dragging their wealth even more negative), then simulate for the remaining 40 periods.
```
# The final resulting distribution is reasonably coherent
PFexample.initializeSim()
PFexample.simulate(80)
PFexample.aNrmNow += -5. # Adjust all simulated consumers' assets downward by 5
PFexample.simulate(40)
plt.plot(np.mean(PFexample.mNrmNow_hist,axis=1))
plt.xlabel('Time')
plt.ylabel('Mean normalized market resources')
plt.show()
```
```
#@title Wavelet Neural Network { display-mode: "both" }
# This program implements a wavelet neural network with a single wavelet hidden layer; the wavelet is the Morlet function
# A wavelet network with a single hidden layer is comparable in capacity to an ordinary network with two hidden layers
# See NN.py and NN.ipynb for details
from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plt
from numpy.linalg import norm
import numpy as np
import time
# Timing decorator
def timer(func):
def wrapper(*args, **kwargs):
start_time = time.time()
func(*args, **kwargs)
end_time = time.time()
print('Training time is :{:.2f} s.'.format(end_time - start_time))
return wrapper
```
## Network class definition
```
class WaveletNeuralNet(object):
    # Initialize the network; sizes gives the number of layers and the number of neurons per layer
def __init__(self, sizes):
self.sizes_ = sizes
        self.num_layers_ = len(sizes)  # number of layers
if self.num_layers_ > 3:
print('ERROR!')
self.num_nuerals_ = sizes[1]
        self.w_ = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]  # w_ and b_ are initialized with normally distributed random numbers
self.b_ = [np.random.randn(y, 1) for y in sizes[1:]]
self.t_ = np.random.randint(2, 15, (self.num_nuerals_, 1))
# self.t_ = np.random.normal(5, 2., (self.num_nuerals_, 1))
self.s_ = 2 * np.random.randn(self.num_nuerals_, 1)
    # Convert labels to one-hot encoding
def one_hot(self, x, num_classes):
x = x.flatten().astype('uint8')
m = x.shape[0]
x_onehot = np.zeros((m, num_classes))
for i in range(m):
x_onehot[i, x[i]] = 1
return x_onehot
    # Sigmoid function (S-shaped curve)
def sigmoid(self, z):
return 1.0 / (1.0 + np.exp(-z))
    # Derivative of the sigmoid function
def sigmoid_der(self, z):
return self.sigmoid(z) * (1 - self.sigmoid(z))
    # Morlet mother wavelet function
def phi(self, z, t=1, s=0):
z_ = (z - s) / t
return np.cos(1.75 * z_) * np.exp(-z_**2 / 2.)
    # Derivative of the wavelet function
def phi_der(self, z, t=1, s=0):
z_ = (z - s) / t
return (-1.75 * np.sin(1.75 * z_) * np.exp(-z_**2 / 2) - z_ * np.cos(1.75 * z_) * np.exp(-z_**2 / 2)) / t
    def feedforward(self, x):  # forward pass
n = self.w_[0].shape[1]
x = x.reshape(n, -1)
x1 = self.phi(np.dot(self.w_[0], x) + self.b_[0], self.t_, self.s_)
x2 = self.sigmoid(np.dot(self.w_[1], x1) + self.b_[1])
return x2
    # Backpropagation
def backprop(self, x, y):
b_new = [np.zeros(b.shape) for b in self.b_]
w_new = [np.zeros(w.shape) for w in self.w_]
t_new = self.t_
s_new = self.s_
activation = x
        activations = [x]  # activations holds the output of each layer
        zs = []  # zs holds the input to each layer, i.e. the weighted sum of the previous layer's output
z = np.dot(self.w_[0], activation) + self.b_[0]
zs.append(z)
activation = self.phi(z, t_new, s_new)
activations.append(activation)
z = np.dot(self.w_[1], activation) + self.b_[1]
zs.append(z)
activation = self.sigmoid(z)
activations.append(activation)
delta = self.cost_derivative(activations[-1], y) * self.sigmoid_der(zs[-1])
b_new[-1] = delta
w_new[-1] = np.dot(delta, activations[-2].transpose())
delta_last = delta.copy()
z = zs[-2]
sp = self.phi_der(z, t_new, s_new)
delta = np.dot(self.w_[-1].transpose(), delta_last) * sp
b_new[-2] = delta
w_new[-2] = np.dot(delta, activations[-3].transpose())
sp_t = -.5 * t_new**-1.5 * self.phi((z-s_new) / t_new) - t_new**-2.5 * (z - s_new) * self.phi_der((z - s_new) / t_new)
sp_s = -t_new**-1.5 * self.phi_der((z-s_new) / t_new)
        # t_new = np.dot(self.w_[-1].transpose(), delta_last)*sp_t  # partial derivative of the loss w.r.t. the wavelet scale parameter
        # s_new = np.dot(self.w_[-1].transpose(), delta_last)*sp_s  # partial derivative of the loss w.r.t. the wavelet translation parameter
        t_new = delta * sp_t  # partial derivative of the loss w.r.t. the wavelet scale parameter
        s_new = delta * sp_s  # partial derivative of the loss w.r.t. the wavelet translation parameter
return (b_new, w_new, t_new, s_new)
    # Update weights w, biases b, scale factors t, and translation factors s
def update_mini_batch(self, mini_batch, lr):
b_new = [np.zeros(b.shape) for b in self.b_]
w_new = [np.zeros(w.shape) for w in self.w_]
a, b = mini_batch[:, :-1], self.one_hot(mini_batch[:, -1], num_classes=10)
        n = float(mini_batch.shape[0])
for i in range(int(n)):
x, y = a[i, :].reshape(-1, 1), b[i, :].reshape(-1, 1)
delta_b_new, delta_w_new, t_new, s_new = self.backprop(x, y)
b_new = [nb + dnb for nb, dnb in zip(b_new, delta_b_new)]
w_new = [nw + dnw for nw, dnw in zip(w_new, delta_w_new)]
self.w_ = [w - lr * nw for w, nw in zip(self.w_, w_new)]
self.b_ = [b - lr * nb for b, nb in zip(self.b_, b_new)]
self.t_ = self.t_ - lr * t_new
self.s_ = self.s_ - lr * s_new
    # training_data is the training data (x, y); epochs is the number of training passes; mini_batch_size is the number of samples per batch; lr is the learning rate; step is the display interval (in epochs)
@timer
def SGD(self, training_data, epochs=50, mini_batch_size=32, lr=.1, step=10):
        assert type(step) == int, 'Step must be an integer.'
n = training_data[0].shape[0]
for j in range(epochs):
ss = np.hstack((training_data[0],training_data[1].reshape(n, -1)))
np.random.shuffle(ss)
mini_batches = [ss[k:k + mini_batch_size, :] for k in range(0, n, mini_batch_size)]
for mini_batch in mini_batches:
self.update_mini_batch(mini_batch, lr)
accur = self.evaluate(training_data) * 100
mse_loss = self.mse_loss(training_data)
if (j + 1) % step == 0 or j == 0:
print("Epoch {0}, mse_loss: {1:.4f}, accury on the training set :{2:.2f}{3}".format(j+1, mse_loss, accur, '%'))
# print("Epoch {0}: {1} / {2}".format(j, self.evaluate(training_data), n))
    # Compute accuracy
def evaluate(self, data):
x_t, x_label = data
test_results = [(np.argmax(self.feedforward(x)), y) for (x, y) in zip(list(x_t), list(x_label))]
acc = sum(int(x == y) for (x, y) in test_results) / x_t.shape[0]
return acc
    # Derivative of the MSE loss
def cost_derivative(self, output_activations, y):
return (output_activations - y)
    # MSE loss
def mse_loss(self, training_data):
x_t,x_label = training_data
test_results = [.5 * norm(self.feedforward(x).flatten() - self.one_hot(y, num_classes=10))**2
for (x, y) in zip(list(x_t), list(x_label))]
return np.array(test_results).mean()
    # Prediction
def predict(self, data):
data = data.reshape(-1, self.sizes_[0])
        value = np.array([np.argmax(self.feedforward(x)) for x in data], dtype='uint8')
return value
    # Save the trained model
    def save(self):
        pass  # save w_ and b_ to a file (e.g. with pickle)
    def load(self):
        pass
```
## Main program
```
mnist = input_data.read_data_sets('./MNIST_data', one_hot=False)
training_data = mnist.train.next_batch(5000)
testing_data = mnist.test.next_batch(1000)
num_classes = 10
net = WaveletNeuralNet([784, 128, num_classes])
net.SGD(training_data, epochs=200, mini_batch_size=32, lr=.1, step=20)
net.t_.flatten()
net.s_.flatten()
```
## Validation
```
# testing_data = mnist.test.next_batch(1000)
net.evaluate(testing_data)
plt.imshow(training_data[0][10].reshape(28,-1), 'gray')
plt.xticks([]), plt.yticks([])
plt.show()
training_data[1][10]
net.predict(training_data[0][10])
```
```
import pandas as pd
import traitlets
import ipywidgets
import bqplot
import numpy as np
states = pd.read_csv("us-states.csv", parse_dates = ["date"])
states.loc
states.iloc
states.head()
states.loc[0:3]
states.iloc[0:3]
states_by_date = states.set_index("date")
states_by_date
states_by_date.loc['2020-01-21':'2020-01-23']
states_by_date.iloc[0:4]
states_by_date.loc['2020-01-21':'2020-01-25']
states_by_date.groupby("state").max()["cases"]
total_cases = states_by_date.groupby("date").sum()["cases"]
states_by_date.groupby("state").get_group("Illinois")
states_timeseries = dict(tuple(_) for _ in states_by_date.groupby("state"))
states_timeseries['Illinois']
case_counts = states.groupby("fips")["cases"].max().to_dict()
proj = bqplot.AlbersUSA()
color_sc = bqplot.ColorScale(scheme = 'Reds')
color_ax = bqplot.ColorAxis(scale = color_sc, label = "Case Count", reverse = True)
mark = bqplot.Map(map_data = bqplot.topo_load("map_data/USStatesMap.json"),
scales = {'projection': proj, 'color': color_sc},
color = case_counts,
colors = {17: '#ff0000'},
hovered_styles = {'hovered_fill': 'none',
'hovered_stroke': 'black',
'hovered_stroke_width': 5.0}
)
fig = bqplot.Figure(marks = [mark], axes = [color_ax])
display(fig)
date_sc = bqplot.DateScale()
case_sc = bqplot.LogScale()
date_ax = bqplot.Axis(scale = date_sc)
case_ax = bqplot.Axis(scale = case_sc, orientation = 'vertical')
lines = bqplot.Lines(x = total_cases.index, y = total_cases,
scales = {'x': date_sc, 'y': case_sc})
interval_selector = bqplot.interacts.FastIntervalSelector(scale = date_sc)
fig = bqplot.Figure(marks = [lines], axes = [date_ax, case_ax], interaction = interval_selector)
display(fig)
case_counts = states_by_date.groupby("fips")["cases"].max().to_dict()
proj = bqplot.AlbersUSA()
color_sc = bqplot.ColorScale(scheme = 'Reds', min = states_by_date["cases"].min(), max = states_by_date["cases"].max())
color_ax = bqplot.ColorAxis(scale = color_sc, label = "Case Count", reverse = True)
mark = bqplot.Map(map_data = bqplot.topo_load("map_data/USStatesMap.json"),
scales = {'projection': proj, 'color': color_sc},
color = case_counts,
colors = {'default_color': 'white'},
hovered_styles = {'hovered_fill': 'none',
'hovered_stroke': 'black',
'hovered_stroke_width': 5.0}
)
fig_map = bqplot.Figure(marks = [mark], axes = [color_ax])
date_sc = bqplot.DateScale()
case_sc = bqplot.LogScale()
date_ax = bqplot.Axis(scale = date_sc)
case_ax = bqplot.Axis(scale = case_sc, orientation = 'vertical')
lines = bqplot.Lines(x = total_cases.index, y = total_cases,
scales = {'x': date_sc, 'y': case_sc})
interval_selector = bqplot.interacts.FastIntervalSelector(scale = date_sc)
fig_line = bqplot.Figure(marks = [lines], axes = [date_ax, case_ax], interaction = interval_selector)
def on_selection_change(change):
if change['new'] is None: return
start_date, stop_date = change['new']
new_color = states_by_date.loc[start_date:stop_date].groupby("fips").max()["cases"].to_dict()
mark.color = new_color
interval_selector.observe(on_selection_change, "selected")
display(ipywidgets.VBox([fig_map, fig_line]))
mark.interactions = {'click': 'select'}
mark.selected
case_counts = states_by_date.groupby("fips")["cases"].max().to_dict()
proj = bqplot.AlbersUSA()
color_sc = bqplot.ColorScale(scheme = 'Reds', min = states_by_date["cases"].min(), max = states_by_date["cases"].max())
color_ax = bqplot.ColorAxis(scale = color_sc, label = "Case Count", reverse = True)
mark = bqplot.Map(map_data = bqplot.topo_load("map_data/USStatesMap.json"),
scales = {'projection': proj, 'color': color_sc},
color = case_counts,
colors = {'default_color': 'white'},
hovered_styles = {'hovered_fill': 'none',
'hovered_stroke': 'black',
'hovered_stroke_width': 5.0}
)
mark.interactions = {'click': 'select'}
fig_map = bqplot.Figure(marks = [mark], axes = [color_ax])
date_sc = bqplot.DateScale()
case_sc = bqplot.LogScale()
date_ax = bqplot.Axis(scale = date_sc)
case_ax = bqplot.Axis(scale = case_sc, orientation = 'vertical')
lines = bqplot.Lines(x = total_cases.index, y = total_cases,
scales = {'x': date_sc, 'y': case_sc})
interval_selector = bqplot.interacts.FastIntervalSelector(scale = date_sc)
fig_line = bqplot.Figure(marks = [lines], axes = [date_ax, case_ax], interaction = interval_selector)
def on_state_selection_change(change):
if change['new'] is None: return
new_data = [total_cases]
fips_groupby = states_by_date.groupby("fips")
for fips_value in change['new']:
new_data.append(fips_groupby.get_group(fips_value)["cases"])
lines.y = pd.DataFrame(new_data)
mark.observe(on_state_selection_change, "selected")
def on_selection_change(change):
if change['new'] is None: return
start_date, stop_date = change['new']
new_color = states_by_date.loc[start_date:stop_date].groupby("fips").max()["cases"].to_dict()
mark.color = new_color
interval_selector.observe(on_selection_change, "selected")
display(ipywidgets.VBox([fig_map, fig_line]))
```
# Gorilla in the data
Reproduce data from this paper:
https://www.biorxiv.org/content/10.1101/2020.07.30.228916v1.full
```
library(tidyverse)
library(jpeg)
download.file('https://classroomclipart.com/images/gallery/Clipart/Black_and_White_Clipart/Animals/gorilla-waving-cartoon-black-white-outline-clipart-914.jpg', 'gorilla.jpg')
gorilla <- readJPEG("gorilla.jpg")
tidy_gorilla <- gorilla[,,1] %>%
as_tibble %>%
mutate(row=n()-row_number()) %>%
pivot_longer(V1:V412,names_to="column",values_to="intensity") %>%
mutate(column = as.integer(str_remove(column,"V")))
tidy_gorilla %>%
filter(intensity<.4) %>%
ggplot(aes(column, row)) +
geom_point()
tidy_gorilla %>%
filter(intensity<.4) %>%
sample_n(1786) %>%
ggplot(aes(column, row)) +
geom_point()
fake_data <- tidy_gorilla %>%
filter(intensity<.4) %>%
sample_n(1786) %>%
transmute(
bmi = (row/max(row)) * 17 + 15,
steps = 15000-column*15000/max(column)
)
fake_data %>%
ggplot(aes(steps,bmi)) + geom_point()
fake_data <- fake_data %>%
mutate(
i=steps*(1+rnorm(n(),0,10)),
sex=if_else(i<=median(steps),"female","male")
) %>%
select(-i)
fake_data %>%
count(sex)
fake_data %>%
ggplot(aes(steps,bmi,color=sex)) + geom_point()
fake_data %>% filter(sex=="female") %>% select(steps,bmi) %>% write_tsv("data9b_w.txt")
fake_data %>% filter(sex=="male") %>% select(steps,bmi) %>% write_tsv("data9b_m.txt")
```
## Tasks
### Hypothesis focused group
Download the two files data9b_w.txt and data9b_m.txt. Each row in both files contains, for one person (women in data9b_w.txt, men in data9b_m.txt), the number of steps that this person took on a particular day (steps) and their body mass index (bmi). Assume that both traits are normally distributed for males and for females. Consider the following (alternative, not null) hypotheses:
a) There is a difference in the mean number of steps between women and men.
b) The correlation coefficient between steps and bmi is negative for women.
c) The correlation coefficient between steps and bmi is positive for men.
Think about which test to use and calculate the corresponding P-value.
Which other conclusions can you draw from the data?
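One possible approach, sketched here in Python with only the standard library (the function names are ours; a Welch-type two-sample test addresses (a), and one-sided correlation tests via Fisher's z-transform address (b) and (c); large-sample normal approximations are used in place of exact t distributions):

```python
import math
from statistics import NormalDist

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def welch_p(x, y):
    """Two-sided p-value for a difference in means (Welch statistic,
    normal approximation -- adequate for samples this size)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return 2 * NormalDist().cdf(-abs(z))

def corr_p_one_sided(x, y, positive=True):
    """One-sided p-value for H1: r > 0 (or r < 0), via Fisher's z-transform."""
    r = pearson_r(x, y)
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(len(x) - 3)
    return NormalDist().cdf(-z) if positive else NormalDist().cdf(z)
```

Hypothesis (a) would use `welch_p(steps_w, steps_m)`, (b) `corr_p_one_sided(steps_w, bmi_w, positive=False)`, and (c) `corr_p_one_sided(steps_m, bmi_m, positive=True)` after reading the two files; with real data one would normally reach for `scipy.stats` instead.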
### Hypothesis free group
Download the two files data9b_w.txt and data9b_m.txt. Each row in both files contains, for one person (women in data9b_w.txt, men in data9b_m.txt), the number of steps that this person took on a particular day (steps) and their body mass index (bmi). Assume that both traits are normally distributed for males and for females.
Examine the data appropriately! What do you notice? What conclusions can you draw from the data?
# FMI Hirlam, MET Norway HARMONIE and NCEP GFS comparison demo
In this demo notebook we provide short comparison of using three different weather forecast models:
GFS -- http://data.planetos.com/datasets/noaa_gfs_pgrb2_global_forecast_recompute_0.25degree
HIRLAM -- http://data.planetos.com/datasets/fmi_hirlam_surface
HARMONIE -- http://data.planetos.com/datasets/metno_harmonie_metcoop
You can get more information about the datasets by opening the links to their detail pages, but their main difference is that GFS is a global, medium-range weather forecast model with lower resolution, while HIRLAM and HARMONIE are limited-area models: they cover only a small part of the globe but, in return, provide higher resolution for all forecast fields.
First we compare the datasets by showing their spatial coverages, then we demonstrate their resolutions by showing forecast field as a discrete grid (so one can see the difference in grid cell size and resolved surface details) and finally we demonstrate plotting weather forecast for the same variable from three models.
We try to keep this demo short, but in case you are interested in creating a more interactive notebook, please refer to our other examples:
https://github.com/planet-os/demos/blob/master/notebooks/PlanetOS_WAve_Models.ipynb
https://github.com/planet-os/notebooks/blob/master/api-examples/GFS_public_full_demo_main.ipynb
Unlike previous notebooks, we have moved most of the parsing code to external library dh_py_access, which you should get automatically if you get this notebook by cloning the git repository.
If you have any questions, contact our team at https://data.planetos.com
First, let's import some modules. If you do not have them, install them (e.g. using pip or conda).
If you encounter some errors, make sure you have the same numpy, basemap and matplotlib versions.
```
%matplotlib notebook
import numpy as np
print ('numpy version is ', np.__version__)
import matplotlib.pyplot as plt
import mpl_toolkits.basemap
print ('mpl_toolkits.basemap version is ', mpl_toolkits.basemap.__version__)
from mpl_toolkits.basemap import Basemap
import warnings
import datetime
import dateutil.parser
import matplotlib
print ('Matplotlib version is ',matplotlib.__version__)
warnings.filterwarnings("ignore",category=matplotlib.cbook.mplDeprecation)
import xarray as xr
```
Import datahub parsing library
```
from API_client.python.lib.dataset import dataset
import dh_py_access.lib.datahub as datahub
from dh_py_access import package_api
# from dh_py_access.lib.dataset import dataset as dataset
# import dh_py_access.lib.datahub as datahub
# from dh_py_access import package_api
```
Now we define hirlam and harmonie namespaces. Add server address and our API key.
<font color='red'>Please add your API key below:</font>
```
server = 'http://api.planetos.com/v1/datasets/'
API_key = open('APIKEY').read().strip()
dh=datahub.datahub_main(API_key)
fmi_hirlam_surface=dataset('fmi_hirlam_surface',dh)
metno_harmonie_metcoop=dataset('metno_harmonie_metcoop',dh)
gfs=dataset('noaa_gfs_pgrb2_global_forecast_recompute_0.25degree',dh)
```
One can easily see what kind of variables are available in given dataset by just calling methods:
1. long_names -- gives a long human readable name for variable, which is unfortunately not standardised in any way
2. standard_names -- gives variable names as defined in CF convention standard name table http://cfconventions.org/standard-names.html
3. variable_names -- names by which you can actually query data from the API
on a given dataset instance.
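As a rough illustration of this pattern (a stand-in class, not the real dh_py_access implementation; the variable and metadata shown are examples only):

```python
class MockDataset:
    """Stand-in mimicking the three introspection methods described above."""
    _meta = {
        'tmp_m': ('Temperature @ specified height level above ground',  # long name
                  'air_temperature'),                                   # CF standard name
    }
    def variable_names(self):  # names used to actually query data from the API
        return list(self._meta)
    def long_names(self):      # human-readable names, not standardised
        return [long_name for long_name, std in self._meta.values()]
    def standard_names(self):  # names from the CF standard name table
        return [std for long_name, std in self._meta.values()]
```

Calling `MockDataset().variable_names()` returns `['tmp_m']`; the real dataset objects return the corresponding lists for all variables in the dataset.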
```
sample_var_names = {fmi_hirlam_surface:'Temperature_height_above_ground',
metno_harmonie_metcoop:'air_temperature_2m',
gfs:'tmp_m'}
today = datetime.datetime.today()
day_ago = today - datetime.timedelta(days=1)
reftime_start = datetime.datetime.strftime(day_ago, '%Y-%m-%dT') + '11:00:00'
reftime_end = datetime.datetime.strftime(day_ago, '%Y-%m-%dT') + '13:00:00'
def get_max_coverage_package(dataset, area_name, varfilter = 'temp'):
"""Download full coverage for limited area datasets"""
coords = dataset.get_dataset_boundaries()
ds_west = np.amin([i[0] for i in coords])
ds_east = np.amax([i[0] for i in coords])
ds_south = np.amin([i[1] for i in coords])
ds_north = np.amax([i[1] for i in coords])
temperature_variable = sample_var_names[dataset]
assert len(temperature_variable) >= 1, "something wrong {0}".format(temperature_variable)
assert type(temperature_variable) == str
return package_api.package_api(dh,dataset.datasetkey,temperature_variable,ds_west,ds_east,ds_south,ds_north,area_name=area_name)
area_name = 'maximum_04'
package_harmonie = get_max_coverage_package(metno_harmonie_metcoop, area_name=area_name)
package_fmi_hirlam = get_max_coverage_package(fmi_hirlam_surface, area_name=area_name)
package_harmonie.make_package()
package_fmi_hirlam.make_package()
package_harmonie.download_package()
package_fmi_hirlam.download_package()
data_harmonie = xr.open_dataset(package_harmonie.get_local_file_name())
data_fmi_hirlam = xr.open_dataset(package_fmi_hirlam.get_local_file_name(),decode_cf=False)
```
Take GFS for area of HARMONIE
```
left = np.amin(data_harmonie['longitude'].data)
right = np.amax(data_harmonie['longitude'].data)
bottom = np.amin(data_harmonie['latitude'].data)
top = np.amax(data_harmonie['latitude'].data)
package_gfs = package_api.package_api(dh,gfs.datasetkey,sample_var_names[gfs],left,right,bottom,top,area_name=area_name)
package_gfs.make_package()
package_gfs.download_package()
data_gfs = xr.open_dataset(package_gfs.get_local_file_name(),decode_cf=False)
```
## Dataset extent and resolution
Get some arbitrary field for demonstration; we use 2 m temperature, and as you can see, variable names may differ a lot between datasets. Please note that the "get_tds_field" method is just for getting an arbitrary preview image; if you want to query data for a specific time and reftime, please refer to the examples for our raster API (shown in the other notebooks referenced above) or use the THREDDS server link given on the dataset detail pages.
### Extent
The easiest way to show dataset extent is to plot it on a map with proper projection. We do not show GFS here, because, well, it is global.
```
m = Basemap(projection='ortho',lon_0=10,lat_0=50,resolution='l')
hir_x,hir_y=np.meshgrid(data_fmi_hirlam['lon'],data_fmi_hirlam['lat'])
X_hir,Y_hir=m(hir_x,hir_y)
fig=plt.figure()
plt.subplot(221)
air2d = data_fmi_hirlam[sample_var_names[fmi_hirlam_surface]][0,0,:,:]
air2d = np.ma.masked_where(air2d>500,air2d)
m.pcolormesh(X_hir,Y_hir,air2d)
m.drawcoastlines()
plt.subplot(222)
harm_x,harm_y=np.meshgrid(data_harmonie.longitude,data_harmonie.latitude)
X_harm,Y_harm=m(harm_x,harm_y)
m.pcolormesh(X_harm,Y_harm,data_harmonie[sample_var_names[metno_harmonie_metcoop]][0,0,:,:])
m.drawcoastlines()
plt.colorbar()
```
### Resolution
Let's zoom in a little to illustrate the difference in resolutions. By plotting the gridded data as a mesh, one can easily see the grid size in the figures. Plots are given for the Norwegian coast.
```
lon1,lon2 = 5,7
lat1,lat2 = 58,59
m2 = Basemap(projection='merc',llcrnrlat=lat1,urcrnrlat=lat2,\
llcrnrlon=lon1,urcrnrlon=lon2,lat_ts=58,resolution='i')
fig=plt.figure(figsize=(8,8))
plt.subplot(221)
## we cannot use the .sel() method on the hirlam data because
## it was opened with decode_cf=False, which was necessary because it
## contains both missing_value and fill_value, see https://github.com/pydata/xarray/issues/1749
x1 = np.argmin(np.abs(data_fmi_hirlam.lon-360-lon1)).data
x2 = np.argmin(np.abs(data_fmi_hirlam.lon-360-lon2)).data+1
y1 = np.argmin(np.abs(data_fmi_hirlam.lat-lat1)).data
y2 = np.argmin(np.abs(data_fmi_hirlam.lat-lat2)).data+1
height = int(np.argmin(np.abs(data_fmi_hirlam.height_above_ground-2)).data)
hir_x,hir_y=np.meshgrid(data_fmi_hirlam.lon[x1:x2].data,data_fmi_hirlam.lat[y1:y2].data)
X,Y=m2(hir_x-360,hir_y)
air2d_hirlam=data_fmi_hirlam.variables[sample_var_names[fmi_hirlam_surface]].isel(time=0,height_above_ground=height,lon=slice(x1,x2),lat=slice(y1,y2))
m2.pcolormesh(X,Y,air2d_hirlam)
m2.drawcoastlines()
plt.colorbar()
plt.subplot(222)
X,Y=m2(harm_x,harm_y)
air2d_harm = data_harmonie[sample_var_names[metno_harmonie_metcoop]].isel(time=0).sel(height1=2,longitude=slice(lon1,lon2),latitude=slice(lat1,lat2))
X,Y=m2(air2d_harm.longitude.data,air2d_harm.latitude.data)
m2.pcolormesh(X,Y,air2d_harm)
m2.drawcoastlines()
plt.colorbar()
plt.subplot(223)
ggg = data_gfs[sample_var_names[gfs]].isel(time1=0).sel(height_above_ground2=2,lon=slice(lon1,lon2),lat=slice(lat2,lat1))
x,y=np.meshgrid(ggg.lon,ggg.lat)
X,Y=m2(x,y)
m2.pcolormesh(X,Y,ggg)
m2.drawcoastlines()
plt.colorbar()
```
Can you guess which model is on which map by just looking at these images?
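As an aside, the nearest-index pattern used in the cell above, `np.argmin(np.abs(coords - target))`, is a handy fallback when `.sel(..., method='nearest')` is unavailable, as with the HIRLAM data opened with `decode_cf=False`. A minimal sketch on a hypothetical 0.5-degree grid:

```python
import numpy as np

def nearest_index(coords, target):
    # index of the grid coordinate closest to target
    return int(np.argmin(np.abs(np.asarray(coords) - target)))

lons = np.arange(0.0, 360.0, 0.5)  # hypothetical 0.5-degree longitude grid
i = nearest_index(lons, 5.2)
print(i, lons[i])  # 10 5.0
```

Slicing a window then becomes `lons[i1:i2 + 1]`, exactly as done with `x1`/`x2` above.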
### Forecast for a single location
First, get point data from all datasets for the given variable, over as long a time range as each forecast extends.
```
longitude= 25.60
latitude = 58.36
ds = dataset('noaa_rbsn_timeseries',dh)
obs_data = ds.get_station_data_as_pandas(['26233'],variables='temperature',start = reftime_start)
sample_point_data = [(k,k.get_json_data_in_pandas(**{'var':v,'lon':longitude,'lat':latitude,'count':1000,'reftime_start':reftime_start,'reftime_end':reftime_end})) for k,v in sample_var_names.items()]
fig = plt.figure(figsize=(11,6))
for ddd in sample_point_data:
    zlevels = [2.]
    for i in zlevels:
        pdata = np.array(ddd[1][ddd[1]['z']==i][sample_var_names[ddd[0]]], dtype=float) - 273.15
        if np.sum(np.isnan(pdata)) != pdata.shape[0]:
            time = ddd[1][ddd[1]['z']==i]['time']
            if 'gfs' in ddd[0].datasetkey:
                time = time[:-95]
                pdata = pdata[:-95]
            plt.plot(time, pdata, label=ddd[0].datasetkey)
plt.plot(obs_data['26233'].index,obs_data['26233']['temperature'].values,label = 'observations')
plt.legend()
plt.grid()
fig.autofmt_xdate()
plt.title('2m temperature forecast in different weather models')
plt.show()
```
| github_jupyter |
# Practical Deep Neural Network Performance Prediction for Hyperparameter Optimization
```
%matplotlib inline
from concurrent import futures
from functools import reduce, wraps
from IPython.display import display
import json
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import numpy as np
import os
import pandas as pd
from sklearn.utils import shuffle
import sys
import tensorflow as tf
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
tf.logging.set_verbosity(tf.logging.WARN)
print(tf.__version__)
```
## Model
```
N_hidden = 16
model_dir = 'model'
def model(n_hidden):
    def model_loss(y, t):
        t = tf.reshape(t, [-1])
        mse = tf.reduce_mean(tf.square(y - t))
        return mse

    def training(loss):
        optimizer = tf.train.AdamOptimizer()
        train_step = optimizer.minimize(loss)
        return train_step

    x = tf.placeholder(tf.float32, shape=[None, None, 1])
    t = tf.placeholder(tf.float32, shape=[None, 1])
    n_batch = tf.placeholder(tf.int32, shape=[])
    sequence_length = tf.placeholder(tf.int32, shape=[None])
    output_keep_prob = tf.placeholder_with_default(1.0, shape=())
    cell = tf.contrib.rnn.DropoutWrapper(
        tf.nn.rnn_cell.LSTMCell(n_hidden),
        output_keep_prob=output_keep_prob,
        input_size=x.shape[-1],
        variational_recurrent=True,
        dtype=tf.float32)
    zero_state = cell.zero_state(n_batch, tf.float32)
    c_state = tf.placeholder(tf.float32, shape=[None, n_hidden])
    h_state = tf.placeholder(tf.float32, shape=[None, n_hidden])
    outputs, state = tf.nn.dynamic_rnn(
        cell, x, initial_state=tf.nn.rnn_cell.LSTMStateTuple(c_state, h_state),
        sequence_length=sequence_length, dtype=tf.float32)
    h = tf.transpose(state.h)
    W = tf.Variable(tf.truncated_normal([1, n_hidden], stddev=0.01))
    b = tf.Variable(tf.zeros([1], dtype=tf.float32))
    y = tf.sigmoid(tf.matmul(W, h) + b)
    y = tf.reshape(y, [n_batch])
    loss = model_loss(y, t)
    train_step = training(loss)
    init = tf.global_variables_initializer()
    return x, t, n_batch, sequence_length, output_keep_prob, y, \
        c_state, h_state, zero_state, state, loss, train_step, init
# Create model
(x, t, n_batch, s_len,
output_keep_prob, y, c_state,
h_state, zero_state, lstm_state,
loss, train_step, init) = model(n_hidden=N_hidden)
```
## Training
```
dataname = 'mnist'
batch_size = 16
epochs = 1000
output_keep_rate = 0.5
N_runs = 100
N_validation = 50
N_train = N_runs - N_validation
N_ensembles = 8
class EarlyStopping():
    def __init__(self, sess, saver, fname, patience=30, verbose=0):
        self._saver = saver
        self._sess = sess
        self._fname = fname
        self.patience = patience
        self.verbose = verbose
        self._loss = float('inf')
        self._step = 0

    def validate(self, loss):
        if self._loss <= loss:
            self._step += 1
            if self._step > self.patience:
                if self.verbose:
                    print('early stopping')
                return True
        else:
            self._step = 0
            self._loss = loss
            self._saver.save(self._sess, self._fname)
        return False
def prepare_data(df):
    inputs = []
    outputs = []
    sequence_lengths = []
    for i in range(1, df.shape[1]):
        inputs.append(df.iloc[:, :i])
        tmp = df.iloc[:, i:i + 1]
        tmp.columns = [0]
        outputs.append(tmp)
        sequence_lengths.extend([i] * df.shape[0])
    inputs = reduce(pd.DataFrame.append, inputs)
    outputs = reduce(pd.DataFrame.append, outputs)
    inputs.fillna(0, inplace=True)
    outputs.fillna(0, inplace=True)
    inputs.reset_index(inplace=True, drop=True)
    outputs.reset_index(inplace=True, drop=True)
    sequence_lengths = np.reshape(sequence_lengths, -1)
    X = np.array(inputs).reshape(len(inputs), -1, 1)
    Y = np.array(outputs).reshape(len(outputs), -1)
    return X, Y, sequence_lengths
# Train data
df = pd.read_json('%s.json' % (dataname), orient='split')
display(df.head())
df.head().T.plot(title='Previous learning curves')
dlen = df.shape[1]
tmp = df.copy()
for i in range(1, df.shape[1]):
    tmp.iloc[:, i] = (df.iloc[:, i] - df.iloc[:, i - 1]) / (1 - df.iloc[:, i - 1])
tmp.fillna(0, inplace=True)
df = tmp
# Training
with tf.Session() as sess:
    for e in list(range(N_ensembles)):
        shuffled_idx = np.arange(N_runs)
        np.random.shuffle(shuffled_idx)
        sess.run(init)
        saver = tf.train.Saver()
        early_stopping = EarlyStopping(
            sess, saver, "%s/%s/%d" % (model_dir, dataname, e))
        df_t = df.iloc[shuffled_idx][:N_train]
        tmp = np.array(df_t).reshape(-1)
        X_train, Y_train, SL_train = prepare_data(df_t)
        df_v = df.iloc[shuffled_idx][N_train:]
        X_validation, Y_validation, SL_validation = prepare_data(df_v)
        for epoch in range(epochs):
            X_, Y_, SL_ = shuffle(X_train, Y_train, SL_train)
            N_batches = X_.shape[0] // batch_size
            for i in range(N_batches):
                z = sess.run(zero_state, feed_dict={n_batch: batch_size})
                start = i * batch_size
                end = start + batch_size
                sess.run([train_step, loss], feed_dict={
                    x: X_[start:end],
                    t: Y_[start:end],
                    s_len: SL_[start:end],
                    n_batch: batch_size,
                    output_keep_prob: output_keep_rate,
                    c_state: z[0],
                    h_state: z[1]
                })
            z = sess.run(zero_state, feed_dict={n_batch: len(X_validation)})
            val_loss = loss.eval(session=sess, feed_dict={
                x: X_validation,
                t: Y_validation,
                s_len: SL_validation,
                n_batch: len(X_validation),
                c_state: z[0],
                h_state: z[1]
            })
            print('\rensemble: %s\tepoch: %s\tvalidation loss:%s' % (
                e, epoch, val_loss), end='')
            if early_stopping.validate(val_loss):
                break
```
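To make the windowing in `prepare_data` concrete, here is a standalone toy run on two hypothetical three-epoch learning curves. Each curve of length n yields n-1 (prefix, next-value) training pairs, with prefixes zero-padded to a common length; `pd.concat` is used here as the modern equivalent of the `reduce(pd.DataFrame.append, ...)` idiom above:

```python
import numpy as np
import pandas as pd

# Two made-up learning curves (accuracy per epoch), one row per training run
df = pd.DataFrame([[0.2, 0.5, 0.8],
                   [0.3, 0.6, 0.9]])

inputs, outputs, seq_lens = [], [], []
for i in range(1, df.shape[1]):
    inputs.append(df.iloc[:, :i])      # first i epochs as the input prefix...
    tmp = df.iloc[:, i:i + 1].copy()
    tmp.columns = [0]
    outputs.append(tmp)                # ...epoch i as the prediction target
    seq_lens.extend([i] * df.shape[0])

inputs = pd.concat(inputs).fillna(0).reset_index(drop=True)
outputs = pd.concat(outputs).reset_index(drop=True)

X = np.array(inputs).reshape(len(inputs), -1, 1)
Y = np.array(outputs).reshape(len(outputs), -1)
print(X.shape, Y.shape, seq_lens)  # (4, 2, 1) (4, 1) [1, 1, 2, 2]
```

The `seq_lens` array tells `dynamic_rnn` how much of each zero-padded prefix is real data.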
## Prediction
```
dataname = 'mnist'
validation_dataname = 'mnist_test'
ylim = [0.95, 1.0] # [0, 1]
dlen = 20 # 300
inputlen = 1
plot_ticks = np.array(range(dlen))
N_test_cases = 1 # 20
N_sigma = 2
N_ensembles = 8
models = list(range(N_ensembles))
max_workers = len(models)
saver = tf.train.Saver()
config = tf.ConfigProto(device_count={"GPU": 0})
class Predictor():
    def __init__(self, dataname, validation_dataname, N_sigma,
                 dlen, inputlen, input_df, modelpath):
        self.dataname = dataname
        self.validation_dataname = validation_dataname
        self.N_sigma = N_sigma
        self.dlen = dlen
        self.inputlen = inputlen
        self.input_df = input_df
        self.modelpath = modelpath

    def __call__(self):
        with tf.Session(config=config) as sess:
            sess.run(init)
            saver.restore(sess, self.modelpath)
            sess.graph.finalize()
            predicted = self.input_df.values.tolist()
            # Feed the full known prefix once to warm up the LSTM state
            z = sess.run(zero_state, feed_dict={n_batch: 1})
            y_, z = sess.run([y, lstm_state], feed_dict={
                x: np.array(predicted).reshape(1, -1, 1),
                n_batch: 1,
                s_len: [len(predicted)],
                c_state: z[0],
                h_state: z[1]
            })
            predicted.append(y_.reshape(-1)[0])
            # Then feed one value at a time, carrying the state forward
            for _ in range(self.dlen - len(predicted)):
                y_, z = sess.run([y, lstm_state], feed_dict={
                    x: np.array(predicted)[-1:].reshape(1, -1, 1),
                    n_batch: 1,
                    s_len: [1],
                    c_state: z[0],
                    h_state: z[1]
                })
                predicted.append(y_.reshape(-1)[0])
            # Convert relative improvements back to absolute accuracies
            for i in range(1, len(predicted)):
                predicted[i] = predicted[i - 1] + (1 - predicted[i - 1]) * predicted[i]
            predicted = np.array(predicted)
            return predicted
class MultiPredictor():
    def __init__(self, dataname, validation_dataname, N_sigma,
                 dlen, inputlen, N_ensembles, max_workers):
        self.dataname = dataname
        self.validation_dataname = validation_dataname
        self.N_sigma = N_sigma
        self.dlen = dlen
        self.inputlen = inputlen
        self.N_ensembles = N_ensembles
        self.max_workers = max_workers
        self.models = ['%s/%s/%d' % (model_dir, self.dataname, e) for e in models]
        self.executor = futures.ProcessPoolExecutor(max_workers=self.max_workers)

    def predict(self, input_df):
        predictions = []
        fs = [self.executor.submit(
            Predictor(self.dataname,
                      self.validation_dataname,
                      self.N_sigma,
                      self.dlen,
                      self.inputlen,
                      input_df,
                      m)) for m in self.models]
        for future in futures.as_completed(fs):
            predictions.append(future.result())
        predictions = pd.DataFrame(predictions).iloc[:, input_df.shape[0]:]
        return predictions

    def __del__(self):
        self.executor.shutdown()
def plot(mean, original):
    plt.figure()
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.xticks(plot_ticks, plot_ticks + 1)
    plt.ylim(ylim)
    ax = plt.gca()
    plt.plot(mean, color='red', label='Prediction')
    original.T.plot(ax=ax, linestyle='dashed', color='gray', label='Ground truth')
    plt.legend()
    plt.grid()
    plt.show()
    plt.close()
predictor = MultiPredictor(dataname, validation_dataname, N_sigma, dlen,
inputlen, N_ensembles, max_workers)
pred_df = pd.read_json('%s.json' % (validation_dataname), orient='split')
for target_num in range(N_test_cases):
    for i in range(dlen - inputlen):
        input_df = pred_df.iloc[target_num, :inputlen + i]
        tmp = input_df.copy()
        # use j here so the outer loop variable i is not shadowed
        for j in range(1, input_df.shape[0]):
            tmp[j] = (input_df[j] - input_df[j - 1]) / (1 - input_df[j - 1])
        input_df = tmp
        predictions = predictor.predict(input_df)
        mean = predictions.mean()
        std = predictions.std()
        original = pred_df.iloc[target_num]
        print('test case: %s\nnumber of inputs: %s\npredictive mean: %s\npredictive std: %s\nground truth: %s' % (
            target_num, inputlen + i, mean.values[-1], std.values[-1], original.values[-1]))
        plot(mean, original)
```
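The training section converts each curve to relative improvements, r_i = (a_i - a_{i-1}) / (1 - a_{i-1}), and the loop near the end of `Predictor.__call__` inverts that transform cumulatively. A small sketch with a made-up curve checks that the two are exact inverses:

```python
import numpy as np

def to_deltas(curve):
    # relative improvement toward 1.0, as in the training preprocessing
    r = list(curve)
    for i in range(1, len(curve)):
        r[i] = (curve[i] - curve[i - 1]) / (1 - curve[i - 1])
    return r

def from_deltas(r):
    # cumulative in-place inversion, as in Predictor.__call__
    a = list(r)
    for i in range(1, len(a)):
        a[i] = a[i - 1] + (1 - a[i - 1]) * a[i]
    return a

curve = [0.2, 0.5, 0.8, 0.9]  # hypothetical accuracy curve
recovered = from_deltas(to_deltas(curve))
print(np.allclose(curve, recovered))  # True
```

Working in the transformed space keeps every target in [0, 1] and lets the sigmoid output model the fraction of remaining headroom gained per epoch.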
# Get Started with Notebooks in Azure Machine Learning
Azure Machine Learning is a cloud-based service for creating and managing machine learning solutions. It's designed to help data scientists and machine learning engineers leverage their existing data processing and model development skills and frameworks, and scale their workloads to the cloud.
A lot of data science and machine learning work is accomplished in notebooks like this one. Notebooks consist of *cells*, some of which (like the one containing this text) are used for notes, graphics, and other content usually written using *markdown*; while others (like the cell below this one) contain code that you can run interactively within the notebook.
## The Azure Machine Learning Python SDK
You can run pretty much any Python code in a notebook, provided the required Python packages are installed in the environment where you're running it. In this case, you're running the notebook in a *Conda* environment on an Azure Machine Learning compute instance. This environment is installed in the compute instance by default, and contains common Python packages that data scientists typically work with. It also includes the Azure Machine Learning Python SDK, which is a Python package that enables you to write code that uses resources in your Azure Machine Learning workspace.
Run the cell below to import the **azureml-core** package and check the version of the SDK that is installed.
```
import azureml.core
print("Ready to use Azure ML", azureml.core.VERSION)
```
## Connect to your workspace
All experiments and associated resources are managed within your Azure Machine Learning workspace. You can connect to an existing workspace, or create a new one using the Azure Machine Learning SDK.
In most cases, you should store workspace connection information in a JSON configuration file. This makes it easier to connect without needing to remember details like your Azure subscription ID. You can download the JSON configuration file from the blade for your workspace in the Azure portal or from the workspace details pane in Azure Machine Learning studio, but if you're using a compute instance within your workspace, the configuration file has already been downloaded to the root folder.
The code below uses the configuration file to connect to your workspace.
> **Note**: The first time you connect to your workspace in a notebook session, you may be prompted to sign into Azure by clicking the `https://microsoft.com/devicelogin` link, entering an automatically generated code, and signing into Azure. After you have successfully signed in, you can close the browser tab that was opened and return to this notebook.
```
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, "loaded")
```
## View Azure Machine Learning resources in the workspace
Now that you have a connection to your workspace, you can work with the resources. For example, you can use the following code to enumerate the compute resources in your workspace.
```
print("Compute Resources:")
for compute_name in ws.compute_targets:
    compute = ws.compute_targets[compute_name]
    print("\t", compute.name, ':', compute.type)
```
When you've finished exploring this notebook, you can save any changes you have made and close it.
# Validating the 10m Eastern Africa Cropland Mask
## Description
Previously, in the `6_Accuracy_assessment_20m.ipynb` notebook, we performed preliminary validation on 20m-resolution test crop-masks, which were stored on disk as geotiffs. The final cropland extent mask, produced at 10m resolution, is stored in the datacube and requires a different method for validation.
> NOTE: A very big sandbox is required (256GiB RAM) to run this script.
This notebook will output a `confusion error matrix` containing Overall, Producer's, and User's accuracy, along with the F1 score for each class.
***
## Getting started
To run this analysis, run all the cells in the notebook, starting with the "Load packages" cell.
### Load Packages
```
import os
import sys
import glob
import rasterio
import datacube
import pandas as pd
import numpy as np
import seaborn as sn
import matplotlib.pyplot as plt
import geopandas as gpd
from sklearn.metrics import f1_score
from rasterstats import zonal_stats
```
## Analysis Parameters
* `product` : name of crop-mask we're validating
* `band`: the band of the crop-mask we want to load and validate. Can be either `'mask'` or `'filtered'`
* `grd_truth` : a shapefile containing crop/no-crop points to serve as the "ground-truth" dataset
```
product = "crop_mask_eastern"
band = 'mask'
grd_truth = 'data/validation_samples.shp'
```
### Load the datasets
`the cropland extent mask`
```
#connect to the datacube
dc = datacube.Datacube(app='feature_layers')
#load 10m cropmask
ds = dc.load(product=product, measurements=[band]).squeeze()
print(ds)
```
`Ground truth points`
```
#ground truth shapefile
ground_truth = gpd.read_file(grd_truth).to_crs('EPSG:6933')
# rename the class column to 'actual'
ground_truth = ground_truth.rename(columns={'Class':'Actual'})
# reclassify string labels into integers
ground_truth['Actual'] = np.where(ground_truth['Actual']=='non-crop', 0, ground_truth['Actual'])
ground_truth['Actual'] = np.where(ground_truth['Actual']=='crop', 1, ground_truth['Actual'])
ground_truth.head()
```
## Convert points into polygons
When the validation data was collected, 40x40m polygons were evaluated as either crop/non-crop rather than points, so we want to sample the raster using the same small polygons. We'll find the majority or 'mode' statistic within the polygon and use that to compare with the validation dataset.
```
#set radius (in metres) around points
radius = 20
#create circle buffer around points, then find envelope
ground_truth['geometry'] = ground_truth['geometry'].buffer(radius).envelope
```
### Calculate zonal statistics
We want to know what the majority pixel value is inside each validation polygon.
```
def custom_majority(x):
    # fraction of unmasked pixels classified as crop
    a = np.ma.MaskedArray.count(x)
    b = np.sum(x)
    c = b / a
    if c > 0.5:
        return 1
    return 0
#calculate stats
stats = zonal_stats(ground_truth.geometry,
ds[band].values,
affine=ds.geobox.affine,
add_stats={'majority':custom_majority},
nodata=255)
#append stats to grd truth df
ground_truth['Prediction']=[i['majority'] for i in stats]
ground_truth.head()
```
***
## Create a confusion matrix
```
confusion_matrix = pd.crosstab(ground_truth['Actual'],
ground_truth['Prediction'],
rownames=['Actual'],
colnames=['Prediction'],
margins=True)
confusion_matrix
```
### Calculate User's and Producer's Accuracy
`Producer's Accuracy`
```
confusion_matrix["Producer's"] = [confusion_matrix.loc[0, 0] / confusion_matrix.loc[0, 'All'] * 100,
confusion_matrix.loc[1, 1] / confusion_matrix.loc[1, 'All'] * 100,
np.nan]
```
`User's Accuracy`
```
users_accuracy = pd.Series([confusion_matrix[0][0] / confusion_matrix[0]['All'] * 100,
confusion_matrix[1][1] / confusion_matrix[1]['All'] * 100]
).rename("User's")
confusion_matrix = confusion_matrix.append(users_accuracy)
```
`Overall Accuracy`
```
confusion_matrix.loc["User's","Producer's"] = (confusion_matrix.loc[0, 0] +
confusion_matrix.loc[1, 1]) / confusion_matrix.loc['All', 'All'] * 100
```
`F1 Score`
The F1 score is the harmonic mean of the precision and recall, where an F1 score reaches its best value at 1 (perfect precision and recall), and is calculated as:
$$
\begin{aligned}
\text{Fscore} = 2 \times \frac{\text{UA} \times \text{PA}}{\text{UA} + \text{PA}}.
\end{aligned}
$$
Where UA = Users Accuracy, and PA = Producer's Accuracy
```
fscore = pd.Series([(2*(confusion_matrix.loc["User's", 0]*confusion_matrix.loc[0, "Producer's"]) / (confusion_matrix.loc["User's", 0]+confusion_matrix.loc[0, "Producer's"])) / 100,
f1_score(ground_truth['Actual'].astype(np.int8), ground_truth['Prediction'].astype(np.int8), average='binary')]
).rename("F-score")
confusion_matrix = confusion_matrix.append(fscore)
```
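As a sanity check on the formulas above: User's accuracy is precision, Producer's accuracy is recall, and the F-score is their harmonic mean. Computed directly from a toy set of made-up labels:

```python
import numpy as np

actual    = np.array([1, 1, 1, 0, 0, 0, 1, 0])  # toy ground truth
predicted = np.array([1, 0, 1, 0, 0, 1, 1, 0])  # toy classifier output

tp = np.sum((actual == 1) & (predicted == 1))   # true positives
fp = np.sum((actual == 0) & (predicted == 1))   # false positives
fn = np.sum((actual == 1) & (predicted == 0))   # false negatives

users_acc = tp / (tp + fp)                      # User's accuracy = precision
producers_acc = tp / (tp + fn)                  # Producer's accuracy = recall
fscore = 2 * users_acc * producers_acc / (users_acc + producers_acc)
overall = np.mean(actual == predicted)          # Overall accuracy

print(users_acc, producers_acc, fscore, overall)  # 0.75 0.75 0.75 0.75
```

The F-score computed this way matches what `sklearn.metrics.f1_score` returns on the same raw labels, which is the cross-check performed in the cell above.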
### Tidy Confusion Matrix
* Limit decimal places
* Add readable class names
* Remove nonsensical values
```
# round numbers
confusion_matrix = confusion_matrix.round(decimals=2)
# rename booleans to class names
confusion_matrix = confusion_matrix.rename(columns={0:'Non-crop', 1:'Crop', 'All':'Total'},
index={0:'Non-crop', 1:'Crop', 'All':'Total'})
#remove the nonsensical values in the table
confusion_matrix.loc["User's", 'Total'] = '--'
confusion_matrix.loc['Total', "Producer's"] = '--'
confusion_matrix.loc["F-score", 'Total'] = '--'
confusion_matrix.loc["F-score", "Producer's"] = '--'
confusion_matrix
```
### Export csv
```
confusion_matrix.to_csv('results/Eastern_10m_accuracy_assessment_confusion_matrix.csv')
```
***
## Additional information
**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
Digital Earth Africa data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.
**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).
If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/digitalearthafrica/deafrica-sandbox-notebooks).
**Last modified:** Dec 2020
# Revisiting Food-Safety Inspections from the Chicago Dataset - A Tutorial (Part 2)
David Lewis, Russell Hofvendahl, Jason Trager
## 0. Foreword
Sustainabilist often works on data that is related to quality assurance and control (QA/QC) inspections of public or private infrastructure. Typically, this infrastructure takes the form of solar energy systems or energy efficiency upgrades for buildings. These data sets almost exclusively belong to private entities that have commissioned a study to evaluate how safe and/or well-installed the infrastructure that they financed is. For this reason, it has been very difficult to put anything up in the public sphere about how our work is conducted and any public documentation of what kind of analysis we do.
Enter Epicodus, a coding bootcamp in Portland, OR. Several weeks ago, I met David and Russell, two eager coding students who were just learning how to code. They were attending CleanWeb Portland's first meeting, which Sustainabilist organized. We were talking about the lack of public datasets in sustainability, and I mentioned how Chicago's food inspection data set was very similar to many of the QA/QC data sets that I have looked at. Just like that, a project was born.
The coding work demonstrated herein is 100% that of the student interns, under my guidance for how to structure, examine, and explore the data. The work was conducted using Google Collaboratory, iPython notebooks, and Anaconda’s scientific computing packages.
## 1. Review
* To prevent foodborne illness inspectors enforce stringent food codes, sometimes with the help of predictive violation models
* We seek to expand the work of the CDPH, exploring high-resolution predictions and neural nets
* We want to focus on helping restaurants prevent illness and avoid costly violations
* We cleaned and pre-processed data from the following sources (databases)
* ...(probably more stuff)
## 2. Feature engineering
* something on how the model works, what we're building it for, and the idea of blinding the model to the outcome and then comparing its predictions to the actual outcome
* by training the model to guess the outcome of canvass inspections, we build a tool that can be fed the same parameters at any time to estimate the outcome of a simulated canvass inspection
* something on feature selection, and why it makes sense to try out what we're trying out
* (open question: explain the features here or in the sections below?)
## 3. Food Inspection Features
* load inspections and select what we want from it to use as basis for model data
* Something on what this data is, where it comes from, why we're using it?
```
import numpy as np
import pandas as pd
import os.path
root_path = os.path.dirname(os.getcwd())
# Load food inspection data
inspections = pd.read_csv(os.path.join(root_path, "DATA/food_inspections.csv"))
# Create basis for model_data
data = inspections.loc[:, ["inspection_id", "license", "inspection_date", "facility_type"]]
```
### 3.1. Pass / Fail Flags
* Pass/fail flags denote the inspection outcome; this is what will be "covered" so the model can guess it
* These are converted to individual presence/absence flags so the outcome can be represented numerically
```
# Create pass / fail flags
data["pass_flag"] = inspections.results.apply(lambda x: 1 if x == "Pass" else 0)
data["fail_flag"] = inspections.results.apply(lambda x: 1 if x == "Fail" else 0)
```
### 3.2. Facility Risk Flags
* Facilities like restaurants pose greater risk than packaged food kiosks and are given higher risk levels
* Higher risk levels also mean greater inspection frequency
* Again, these are converted to numeric flags for use in the model
```
# Create risk flags
data["risk_1"] = inspections.risk.apply(lambda x: 1 if x == "Risk 1 (High)" else 0)
data["risk_2"] = inspections.risk.apply(lambda x: 1 if x == "Risk 2 (Medium)" else 0)
data["risk_3"] = inspections.risk.apply(lambda x: 1 if x == "Risk 3 (Low)" else 0)
```
### 3.3. Violation Data
* Violation data is also something the model will be guessing, another part of the inspection outcome
* The data consists of a set of rows (one per inspection outcome) with binary values indicating whether each specific health code was violated in that inspection
* Merged on inspection ID (each row of data is matched and merged with a violation data row with same ID. rows with no matches are excluded.)
```
# Load violation data
values = pd.read_csv(os.path.join(root_path, "DATA/violation_values.csv"))
counts = pd.read_csv(os.path.join(root_path, "DATA/violation_counts.csv"))
# Merge with violation data, filtering missing data
data = pd.merge(data, values, on="inspection_id")
data = pd.merge(data, counts, on="inspection_id")
```
### 3.4. Past Fails
* Past fails refers to the previous inspection outcome for that license (as a binary flag)
* This is a strong predictor of inspection outcomes
* Past fails is something the model will have access to when predicting inspection outcomes, and will be used to help guess the actual, current outcome.
* We first create a dataframe of past data by arranging inspections chronologically, grouping by license and shifting each group of inspections by 1, so that the data for each inspection lines up with the row of the next inspection (the first row for each license will by empty and the last inspection is not used). The pre-grouping order is preserved upon shifting.
* (this could use visualization)
* We can then simply attach the fail_flag column to our data as past fails, setting the empty first value as 0 (no previous fail)
```
# Sort inspections by date
data.sort_values(by="inspection_date", inplace=True)
# Find previous inspections by shifting each license group
past_data = data.groupby("license").shift(1)
# Add past fails, with 0 for first inspections
data["past_fail"] = past_data.fail_flag.fillna(0)
```
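The group-and-shift trick is easier to see on a toy table. Each row's `past_fail` picks up the `fail_flag` of the same (hypothetical) license's previous inspection, with 0 for first inspections:

```python
import pandas as pd

# Toy inspection history for two hypothetical licenses
toy = pd.DataFrame({
    "license":         [1, 2, 1, 2, 1],
    "inspection_date": ["2015-01-01", "2015-02-01", "2016-01-01",
                        "2016-02-01", "2017-01-01"],
    "fail_flag":       [1, 0, 0, 1, 1],
})
toy.sort_values(by="inspection_date", inplace=True)

# Shift each license's rows down by one: row i now carries row i-1's data
past = toy.groupby("license").shift(1)
toy["past_fail"] = past.fail_flag.fillna(0)
print(toy[["license", "fail_flag", "past_fail"]])
```

License 1's second inspection correctly sees `past_fail = 1` (its first inspection failed), while license 2's second inspection sees `past_fail = 0`.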
### 3.5. Past Violation Data
* Individual past violation values might well be good for predicting individual violations (e.g. a restaurant that violated certain codes at its last inspection is at elevated risk of violating them again)
* We can use the same past_data to get past violation values
* We'll modify the names to pv_1, etc
* If we drop inspection_id we can just tack them on to the end of the data using join
* first records are set to 0 (no past violation)
* For past_critical, past_serious and past_minor we can similarly just grab each column and add it as a new column in data
```
# Select past violation values, remove past inspection id
past_values = past_data[values.columns].drop("inspection_id", axis=1).add_prefix("p")
# Add past values to model data, with 0 for first records
data = data.join(past_values.fillna(0))
# Add past violation counts, with 0 for first records
data["past_critical"] = past_data.critical_count.fillna(0)
data["past_serious"] = past_data.serious_count.fillna(0)
data["past_minor"] = past_data.minor_count.fillna(0)
```
### 3.6. Time Since Last
* One potential risk factor, also used by the Chicago team, is greater time since the last inspection
* To calculate this we convert each inspection date to a Python datetime, subtract the previous datetime from the later one to create a series of timedelta objects, and convert the result to years
* For first inspections, where there is no previous date, the value defaults to two years
```
# Calculate time since previous inspection
deltas = pd.to_datetime(data.inspection_date) - pd.to_datetime(past_data.inspection_date)
# Add years since previous inspection (default to 2)
data["time_since_last"] = deltas.apply(lambda x: x.days / 365.25).fillna(2)
```
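A quick standalone illustration of the date arithmetic (the dates are made up): subtracting two datetime Series yields Timedeltas, whose `.days` divided by 365.25 gives years:

```python
import pandas as pd

current  = pd.to_datetime(pd.Series(["2016-01-01", "2017-07-02"]))
previous = pd.to_datetime(pd.Series(["2015-01-01", "2016-01-01"]))

deltas = current - previous                      # one Timedelta per row
years = deltas.apply(lambda x: x.days / 365.25)  # convert to years
print(years.round(2).tolist())  # [1.0, 1.5]
```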
### 3.7. First Record
* It is not obvious a priori why this should matter in predicting outcomes
* One possibility is that first records are more likely to fail
* To get it we simply put 1s for rows where data is absent in the shifted past_data.
```
# Check if first record
data["first_record"] = past_data.inspection_id.map(lambda x: 1 if pd.isnull(x) else 0)
```
## 4. Business License Features
* These are the features derived from the busuiness license dataset
* What is a business license? other background info?
### 4.1. Matching Inspections with Licenses
* Load data, see publication 1
```
# Load business license data
licenses = pd.read_csv(os.path.join(root_path, "DATA/business_licenses.csv"))
```
* In order to link food inspections to the business licenses of the facilities inspected we create a table of matches, each linking an inspection to a license
* Many business licenses can be matched by license number to an inspection, but to account for license discrepancies we also matched based on venue (street address and name)
* Due to formatting differences it was necessary to use only the street number
```
# Business licenses have numbers on end preventing simple match
# so using street number instead
def get_street_number(address):
    return address.split()[0]
licenses["street_number"] = licenses.address.apply(get_street_number)
inspections["street_number"] = inspections.address.apply(get_street_number)
# Match based on DBA name and street number
venue_matches = pd.merge(inspections, licenses, left_on=["dba_name", "street_number"], right_on=["doing_business_as_name", "street_number"])
# Match based on license numbers
license_matches = pd.merge(inspections, licenses, left_on="license", right_on="license_number")
```
* To create the working matches dataset we then appended the venue and license matches and dropped any duplicate inspection / business license matches.
```
# Join matches, reset index, drop duplicates
matches = venue_matches.append(license_matches, sort=False)
matches.reset_index(drop=True, inplace=True)
matches.drop_duplicates(["inspection_id", "id"], inplace=True)
# Restrict to matches where inspection falls within license period
matches = matches.loc[matches.inspection_date.between(matches.license_start_date, matches.expiration_date)]
```
### 4.2. Filtering by Category
* (This isn't a feature itself, but the filtering is only convenient to do once we have the matches dataset.)
* many non-retail establishments eg schools, hospitals follow different inspection schedules, so to ensure consistent data we filter matches to include only inspections of retail food establishments
* to do this we select the inspection id's of all retail matches, drop any duplicates and merge these id's with the model data
* by default merge includes only rows with keys present in each dataset (inner join)
```
# Select retail food establishment inspection IDs
retail = matches.loc[matches.license_description == "Retail Food Establishment", ["inspection_id"]]
retail.drop_duplicates(inplace=True)
# FILTER: ONLY CONSIDER INSPECTIONS MATCHED WITH RETAIL LICENSES
data = pd.merge(data, retail, on="inspection_id")
```
### 4.3. Calculating Age at Inspection
* What might age at inspection tell?
* One feature previously found significant in predicting inspection outcomes is the age of the facility
* To calculate this we first convert all dates to datetime objects
* We then group by licence and within each group find the earliest license start date
* Finally we subtract this min date from the inspection date and merge the resulting age in with our model data
```
# Convert dates to datetime format
matches.inspection_date = pd.to_datetime(matches.inspection_date)
matches.license_start_date = pd.to_datetime(matches.license_start_date)
def get_age_data(group):
    min_date = group.license_start_date.min()
    deltas = group.inspection_date - min_date
    group["age_at_inspection"] = deltas.apply(lambda x: x.days / 365.25)
    return group[["inspection_id", "age_at_inspection"]]
# Calculate (3 mins), drop duplicates
age_data = matches.groupby("license").apply(get_age_data).drop_duplicates()
# Merge in age_at_inspection
data = pd.merge(data, age_data, on="inspection_id", how="left")
```
### 4.4. Calculating Category Data
* The Chicago team found the categories of licenses attributed to an establishment to be significant in predicting violation outcomes
* This data is derived from the license_description column of the business licenses dataset
* We will note the presence or absence of these categories as a series of binary flags
* To derive these features we first set up a dictionary linking the column entries to our desired snake-case column titles
* We then group matches by inspection ID to gather all license descriptions for each inspection
* To generate the entries we apply our get_category_data method, using our dictionary to translate from license_description entries to column titles
* Finally we fill missing entries with 0 and merge the results in with our model data
```
# Translate categories to snake-case titles
categories = {
"Consumption on Premises - Incidental Activity": "consumption_on_premises_incidental_activity",
"Tobacco": "tobacco",
"Package Goods": "package_goods",
"Limited Business License": "limited_business_license",
"Outdoor Patio": "outdoor_patio",
"Public Place of Amusement": "public_place_of_amusement",
"Children's Services Facility License": "childrens_services_facility_license",
"Tavern": "tavern",
"Regulated Business License": "regulated_business_license",
"Filling Station": "filling_station",
"Caterer's Liquor License": "caterers_liquor_license",
"Mobile Food License": "mobile_food_license"
}
# Create binary markers for license categories
def get_category_data(group):
    df = group[["inspection_id"]].iloc[[0]]
    for category in group.license_description:
        if category in categories:
            df[categories[category]] = 1
    return df
# group by inspection, get categories (2 mins)
category_data = matches.groupby("inspection_id").apply(get_category_data)
# Reset index, set absent categories to 0
category_data.reset_index(drop=True, inplace=True)
category_data.fillna(0, inplace=True)
# Merge in category data, fill nan with 0
data = pd.merge(data, category_data, on="inspection_id", how="left").fillna(0)
```
## 5. Crime Density
* (These density features could be separated by dataset or lumped together as one group; for now each dataset gets its own section)
* The Chicago team's model included the density of nearby crime reports, so we include a burglary-density feature
* We estimate density with a Gaussian kernel density estimate (KDE): each recent observation contributes a smooth bump of probability mass, and the KDE evaluated at an inspection's coordinates measures how much recent activity occurred nearby
* For each inspection we fit a kernel to the observations from the preceding 90-day window, then sample it at the inspection's longitude and latitude
* The bandwidth and other parameters have not yet been tuned; the values below are placeholders
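As a minimal sketch of the KDE step (synthetic points, not the real observation data):

```python
import numpy as np
from scipy import stats

# Synthetic "observations": 200 (longitude, latitude) points around the origin
rng = np.random.default_rng(0)
points = rng.normal(size=(2, 200))

# Fit a Gaussian KDE, then evaluate the density at two sample locations:
# the origin (dense region) and (3, 3) (sparse region)
kernel = stats.gaussian_kde(points)
densities = kernel(np.array([[0.0, 3.0], [0.0, 3.0]]))
```

The density is higher where observations cluster, which is exactly the signal the feature captures.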
```
# Load observation datasets
burglaries = pd.read_csv(os.path.join(root_path, "DATA/burglaries.csv"))
# Create datetime columns
inspections["datetime"] = pd.to_datetime(inspections.inspection_date)
burglaries["datetime"] = pd.to_datetime(burglaries.date)
# FILTER: consider only inspections since 2012
# Otherwise early inspections have few/no observations within window
inspections = inspections.loc[inspections.inspection_date >= "2012"]
import numpy as np
from datetime import timedelta
from scipy import stats

def get_kde(observations, column_name, window, bandwidth):
    # Sort chronologically and index by datetime
    observations.sort_values("datetime", inplace=True)
    observations.index = observations.datetime.values
    # Generate a kernel from the preceding `window` days of observations
    def get_kde_given_date(group):
        stop = group.datetime.iloc[0]
        start = stop - timedelta(days=window)
        recent = observations.loc[start:stop]
        x1 = recent.longitude
        y1 = recent.latitude
        values = np.vstack([x1, y1])
        kernel = stats.gaussian_kde(values, bw_method=bandwidth)
        x2 = group.longitude
        y2 = group.latitude
        samples = np.vstack([x2, y2])
        group[column_name] = kernel(samples)
        return group[["inspection_id", column_name]]
    # Group inspections by date, generate kernels, sample
    return inspections.groupby("inspection_date").apply(get_kde_given_date)
# Calculate burglary density estimates
burglary_kde = get_kde(burglaries, "burglary_kde", 90, 1)
# FILTER: only consider data since 2012 (with good kde data)
data = pd.merge(data, burglary_kde, on="inspection_id")
```
## 6. Garbage Cart Density
* As with crime, the local density of garbage cart requests may proxy for neighborhood conditions relevant to food safety
* With our kernel density methods already defined, the calculation is brief
```
# Load observation datasets
carts = pd.read_csv(os.path.join(root_path, "DATA/garbage_carts.csv"))
# Create datetime columns
carts["datetime"] = pd.to_datetime(carts.creation_date)
# Calculate garbage cart density estimates
cart_kde = get_kde(carts, "cart_kde", 90, 1)
# FILTER: only consider data since 2012 (with good kde data)
data = pd.merge(data, cart_kde, on="inspection_id")
```
## 7. Sanitation Complaint Density
* Sanitation complaints are a direct signal of local cleanliness problems, so their density may be predictive
* As with crime and garbage carts, we reuse get_kde
```
# Load observation datasets
complaints = pd.read_csv(os.path.join(root_path, "DATA/sanitation_complaints.csv"))
# Create datetime columns
complaints["datetime"] = pd.to_datetime(complaints.creation_date)
# Calculate sanitation complaint density estimates
complaint_kde = get_kde(complaints, "complaint_kde", 90, 1)
# FILTER: only consider data since 2012 (with good kde data)
data = pd.merge(data, complaint_kde, on="inspection_id")
```
## 8. Weather Features
* These features describe weather conditions around the time of each inspection
* They come from a separately prepared dataset (DATA/weather.csv), already keyed by inspection ID
* We include them because inspection outcomes may vary seasonally with temperature and precipitation
```
# Load weather data
weather = pd.read_csv(os.path.join(root_path, "DATA/weather.csv"))
# Merge weather data with model data
data = pd.merge(data, weather, on="inspection_id")
```
## 9. Next Steps
* Choosing a model
* Tuning the model
* Training the model (probably a neural net)
* Building the tool
* Distributing the tool
* Russell Hofvendahl is a web application developer with a great fondness for data-driven decision making. Russell is excited to explore the applications of data science and machine learning in improving human judgement.
* David Lewis is a seasoned corporate responsibility professional working to utilize technology to help improve the health and well-being of human populations through environmental stewardship.
* Jason S. Trager, Ph.D. is the managing partner at Sustainabilist and an expert in process improvement for distributed systems. Jason’s work portfolio includes the creation of novel data-driven methods for improving contractor performance, machine learning to optimize value in energy efficiency sales, and equipment maintenance optimization methodologies.
# How can we interact with a blockchain?
So, we have all these nodes distributed all over the world, able to trustlessly record immutable transactions on a distributed ledger. Nice, but how can we use it?
First, we need an account.
## Getting an address
### First, let's use the `secrets` library to generate a new private key
Beware: for many reasons, which include the safety of your private key, NEVER use unsafe random key generators to create a private key that you will use on a production blockchain. This can easily translate into a hacked account and **all of your money lost**.
This is related to the fact that random number generation performed by computers is not truly random, since current computational devices are inherently deterministic. So, unless you have a quantum computer at home, it is better to use security-specific random generators which mix some external random input into the private key creation.
In this example we will use the `secrets` library, which I suggest you not use in a production environment.
Also, we will use the already known and installed package *eth_utils* to perform operations on the keys, together with *eth_keys*. You can find instructions on how to install this package on the <a href="https://github.com/ethereum/eth-keys">eth-keys page on GitHub</a>.
```
import secrets #a secure random hex generator for python
from eth_keys import keys
from eth_utils import decode_hex
#A working address with some ether on it
fixed_private = '5619099d74e5a0c24616de8eabbf4c3a2610d52caa8a7ea18e48ad3a895488f2'
fixed_priv_key_bytes = decode_hex(fixed_private) #decode the hex string into a bytes object
fixed_priv_key = keys.PrivateKey(fixed_priv_key_bytes) #wrap it in a PrivateKey object
fixed_pub_key = fixed_priv_key.public_key #and derive the public key
fixed_address = fixed_pub_key.to_checksum_address()
my_private = secrets.token_hex(32) #generate 32 random bytes as a hexadecimal string;
#this will be the hex representation of our private key
print ('Private key:', my_private)
priv_key_bytes = decode_hex(my_private) #decode the hex string into a bytes object
priv_key = keys.PrivateKey(priv_key_bytes) #wrap it in a PrivateKey object
pub_key = priv_key.public_key #and derive the public key
#Finally, extract our Ethereum address from the public key
address = pub_key.to_checksum_address()
print ("Address: ", address)
```
## Connect to the chain
### Install the web3 APIs
In order to interact with the Ethereum network, you will have to speak its language. To do so, you will use the web3 APIs, a library which "translates" code into Ethereum commands.
Web3 APIs exist for a great variety of programming languages, but since we are working in Python, we will install the Python web3 APIs. If you have not done so already, you can follow the instructions on the python web3 <a href="https://github.com/ethereum/web3.py">page on GitHub</a>.
Otherwise, you can just run the following command in a shell:
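Assuming a standard pip setup, that command is:

```shell
pip install web3
```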
### Select your Ethereum node
Now, we have to connect to an existing Ethereum chain. This can be done in two ways:
- **hosting a local node:** this means we would become one of the nodes of the whole network, and we would be part of the P2P network sustaining the platform. This implies downloading the full blockchain first, which is infeasible for our purposes;
- **using a hosted remote node:** in principle you can use any Ethereum node, provided you have access to it, **but remember: a node not controlled by you can do a lot of bad things with your (virtual) money**.
Since we will test our system on a test Ethereum network, on which eth is so easily obtained that it has no value at all, we will use an external node hosted on <a href="https://infura.io">https://infura.io</a>.
To get access to an Infura node, you just have to register, create a project, and get a project url to which our node will connect.
In this case, I already did it for you, and you can access an Infura-hosted node at this url: <a href="https://ropsten.infura.io/v3/53b3b1921ddf43f58cabf2eeb7ea7696">https://ropsten.infura.io/v3/53b3b1921ddf43f58cabf2eeb7ea7696</a>. If something goes wrong during your tests, however, you can easily register on the Infura website and use the url that it provides to you.
This particular node connects to the *Ropsten* network, which is one of the Ethereum test networks. This network runs **exactly** the same protocols as the Ethereum mainnet and can be used for safely testing Ethereum Dapps, without risking precious eth.
Well, now we have a private key (which means, essentially, an account), the APIs and a node. Let's start interacting with the blockchain!
### At first, let's get some money
The balance of your account can be checked by querying any node. The node follows the entire history of the blockchain, and computes the current account balance by (roughly) summing the previous transactions.
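As a toy illustration of that bookkeeping (made-up amounts, not real chain data):

```python
# A balance is the sum of incoming minus outgoing amounts
# over the account's transaction history
history = [("in", 100), ("out", 30), ("in", 5)]
balance = sum(amount if direction == "in" else -amount
              for direction, amount in history)
# balance is 100 - 30 + 5 = 75
```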
Let's check if we can connect to the node:
```
from web3 import HTTPProvider, Web3, exceptions
w3 = Web3(HTTPProvider("https://ropsten.infura.io/v3/53b3b1921ddf43f58cabf2eeb7ea7696"))
print(w3.isConnected())
```
Then, let's check our balance.
```
myBalance = w3.eth.getBalance(address)
print (myBalance)
```
If everything went well, at this point you should have been able to connect to the Ropsten network through Infura, and to get the balance of your account (which, unless you are extremely lucky, should be 0).
### Time to get some money
Since we are working on a test network (Ropsten), it is extremely easy to get money (or better, the Ethereum virtual currency, eth). This can be done by using a faucet, i.e. a website to which you give your address as input and which sends eth to it in exchange.
To do so, go to the faucet <a href="https://faucet.ropsten.be">website</a>, input your address (remember? You printed it in the previous sections) and you will magically receive some test ethers.
First, let's check if everything worked well:
```
myBalance = w3.eth.getBalance(address)
print ("Balance in wei:", myBalance)
print ("Balance in eth:", Web3.fromWei(myBalance, 'ether'))
```
Great, we just got 1 eth from the faucet. With this sum, we should be able to perform all the tests we want.
## Sending ethers to other accounts
One of the first things we can do is to send money to other accounts.
This can be done, again, by using the web3 API.
As with any other transaction performed on the Ethereum network, sending money to other accounts changes the internal state of the blockchain; this means that miners must be paid for processing this information.
The payment of these fees is managed internally by the Ethereum APIs, which associate a cost to each transaction. This cost is calculated in *gas*, a virtual quantity which roughly estimates the computational complexity of the required transaction.
Since each transaction has a *gas* cost, miners can be paid by offering them a certain amount of *ether* per unit of *gas* processed.
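As a rough sketch of that arithmetic (the gas price here is an assumed value; a plain ETH transfer consumes 21,000 gas):

```python
# Hypothetical fee: gas consumed times the price offered per unit of gas
gas_used = 21_000
gas_price_wei = 50 * 10**9      # an assumed gas price of 50 gwei

fee_wei = gas_used * gas_price_wei
fee_eth = fee_wei / 10**18      # 1 ether = 10**18 wei
```

At this assumed price a simple transfer costs 0.00105 eth.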
### How to send a transaction
In order to send a transaction, we should send the correct instructions to the Ethereum network through our node. To prove that the transaction order came from the real owner of the address, it must be digitally signed with our private key (in the end, this is why it must be kept private: anyone who knows it can digitally sign and submit any transaction).
The digitally signed transaction is then sent to the Ethereum network, where the nodes will verify it and perform its instructions, appending its effects to the upcoming block.
How to do this is shown in the next code block:
```
#Check the target address balance
target_address = '0xd3CdA913deB6f67967B99D67aCDFa1712C293601'
targetBalance = w3.eth.getBalance(target_address)
print ("Original balance in wei:", targetBalance)
print ("Original balance in eth:", Web3.fromWei(targetBalance, 'ether'))
#First, define an eth transaction
my_transaction = {
    "nonce":w3.eth.getTransactionCount(address), #the increasing transaction count of our address
    "gasPrice":w3.eth.gasPrice, #set gas price to the current average
    "gas":100000, #set the maximum gas that may be spent by the transaction
    "to":target_address, #set the address to which the transaction will be sent
    "value":12345, #the amount of wei (the smallest unit of ether) to send from our address
    "data":b'', #you can always add a small message to the transaction; you will pay for it
}
#now, let's digitally sign the transaction
signed_txn = w3.eth.account.signTransaction(
my_transaction, #the transaction dictionary
priv_key, #as defined in the first code block
)
#And finally, send the transaction. (It returns the digital signature of the transaction)
transaction_signature = w3.eth.sendRawTransaction(signed_txn.rawTransaction)
print (transaction_signature)
#Just wait some time that the transaction gets accepted
import time
time.sleep(1)
#Print the transaction info
print (w3.eth.getTransaction(transaction_signature))
#Check the new balance of the target address:
targetBalance = w3.eth.getBalance(target_address)
print ("Target balance in wei:", targetBalance)
print ("Target balance in eth:", Web3.fromWei(targetBalance, 'ether'))
#And also check the new balance of our address
myBalance = w3.eth.getBalance(address)
print ("Balance in wei:", myBalance)
print ("Balance in eth:", Web3.fromWei(myBalance, 'ether'))
```
# Building Models in PyMC3
Bayesian inference begins with specification of a probability model relating unknown variables to data. PyMC3 provides the basic building blocks for Bayesian probability models:
1. stochastic random variables
2. deterministic variables
3. factor potentials.
A **stochastic random variable** is a factor whose value is not completely determined by its parents, while the value of a **deterministic random variable** is entirely determined by its parents. Most models can be constructed using only these two variable types. The third quantity, the **factor potential**, is *not* a variable but simply a
log-likelihood term or constraint that is added to the joint log-probability to modify it.
## Example: Inferring patterns in UK coal mining disasters
To motivate this section, let's model a different dataset: a time series of recorded coal mining
disasters in the UK from 1851 to 1962.
Occurrences of disasters in the time series is thought to be derived from a
Poisson process with a large rate parameter in the early part of the time
series, and from one with a smaller rate in the later part. We are interested
in locating the change point in the series, which perhaps is related to changes
in mining safety regulations.
```
import numpy as np
year = np.arange(1851, 1962)
disasters_data = np.array([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,
3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,
2, 2, 3, 4, 2, 1, 3, 2, 2, 1, 1, 1, 1, 3, 0, 0,
1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,
0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,
3, 3, 1, 1, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,
0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1])
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('notebook')
fig, ax = plt.subplots(figsize=(12.5, 3.5))
n_count_data = len(disasters_data)
ax.bar(year, disasters_data, color="#348ABD")
ax.set_xlabel("Year")
ax.set_ylabel("Disasters")
ax.set_title("UK coal mining disasters, 1851-1962")
ax.set_xlim(1851, 1962);
```
We are going to use Poisson random variables for this type of count data. Denoting year $i$'s accident count by $y_i$,
$$ y_i \sim \text{Poisson}(\lambda) $$
The modeling problem revolves around estimating the values of the $\lambda$ parameters. Looking at the time series above, it appears that the rate declines later in the time series.
A ***changepoint model*** identifies a point (year) during the observation period (call it $\tau$) after which the parameter $\lambda$ drops to a lower value. So we are estimating two $\lambda$ parameters: one for the early period and another for the late period.
$$
\lambda =
\begin{cases}
\lambda_1 & \text{if } t \lt \tau \cr
\lambda_2 & \text{if } t \ge \tau
\end{cases}
$$
We need to assign prior probabilities to both $\lambda$ parameters. The gamma distribution not only provides a continuous density function for positive numbers, but it is also **conjugate** with the Poisson sampling distribution. We will specify suitably vague hyperparameters $\alpha$ and $\beta$ for both priors.
$$\begin{aligned}
\lambda_1 &\sim \text{Gamma}( \alpha, \beta ) \cr
\lambda_2 &\sim \text{Gamma}( \alpha, \beta )
\end{aligned}$$
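As an aside, this conjugacy means that for a single fixed rate the posterior is available in closed form; a sketch with toy counts and assumed hyperparameters ($\alpha = \beta = 1$):

```python
import numpy as np
from scipy import stats

# Toy disaster counts and assumed gamma hyperparameters
y = np.array([4, 5, 4, 0, 1])
alpha, beta = 1.0, 1.0

# Conjugate update: Gamma(alpha, beta) prior + Poisson data
# gives a Gamma(alpha + sum(y), beta + n) posterior
posterior = stats.gamma(a=alpha + y.sum(), scale=1.0 / (beta + len(y)))
posterior.mean()  # (1 + 14) / (1 + 5) = 2.5
```

The changepoint $\tau$ breaks this closed form, which is why we turn to MCMC below.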
Since we do not have any intuition about the location of the changepoint (prior to viewing the data), we will assign a discrete uniform prior over all years 1851-1962.
$$\begin{aligned}
& \tau \sim \text{Uniform(1851,1962) }\cr
& \Rightarrow P( \tau = k ) = \frac{1}{111}
\end{aligned}$$
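Numerically, the piecewise rate above looks like this (the values of $\tau$, $\lambda_1$, $\lambda_2$ are placeholders for illustration only):

```python
import numpy as np

# Placeholder changepoint and rates
year = np.arange(1851, 1962)
tau, lambda_1, lambda_2 = 1890, 3.0, 1.0

# lambda_1 before the changepoint, lambda_2 from tau onward
rate = np.where(year < tau, lambda_1, lambda_2)
```

This is the same piecewise structure the model later expresses with Theano's `switch`.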
## The FreeRV class
A stochastic variable is represented in PyMC3 by a `FreeRV` class. This structure adds functionality to Theano's `TensorVariable` class, by mixing in the PyMC `Factor` class. A `Factor` is used whenever a variable contributes a log-probability term to a model. Hence, you know a variable is a subclass of `Factor` whenever it has a `logp` method, as we saw in the previous section.
A `FreeRV` object has several important attributes:
`dshape`
: The variable's shape.
`dsize`
: The overall size of the variable.
`distribution`
: The probability density or mass function that describes the distribution of the variable's values.
`logp`
: The log-probability of the variable's current value given the values
of its parents.
`init_value`
: The initial value of the variable, used by many algorithms as a starting point for model fitting.
`model`
: The PyMC model to which the variable belongs.
### Creation of stochastic random variables
There are two ways to create stochastic random variables (`FreeRV` objects), which we will call the **automatic**, and **manual** interfaces.
#### Automatic
Stochastic random variables with standard distributions provided by PyMC3 can be created in a single line using special subclasses of the `Distribution` class. For example, as we have seen, the uniformly-distributed discrete variable $\tau$ in the coal mining disasters model is created using the automatic interface as follows:
```
from pymc3 import Model, Uniform
with Model() as disaster_model:
    switchpoint = Uniform('switchpoint', lower=0, upper=110)
```
Similarly, the rate parameters can automatically be given exponential priors:
```
from pymc3 import Exponential
with disaster_model:
    early_mean = Exponential('early_mean', lam=1)
    late_mean = Exponential('late_mean', lam=1)
```
PyMC includes most of the probability density functions (for continuous variables) and probability mass functions (for discrete variables) used in statistical modeling. Continuous variables are represented by a specialized subclass of `Distribution` called `Continuous` and discrete variables by the `Discrete` subclass.
The main differences between these two subclasses are in the `dtype` attribute (`int64` for `Discrete` and `float64` for `Continuous`) and the `defaults` attribute, which determines which summary statistic to use for initial values when one is not specified ('mode' for `Discrete` and 'median', 'mean', and 'mode' for `Continuous`).
```
switchpoint.distribution.defaults
```
Sometimes we wish to use a particular statistical distribution without making it a variable in a model; for example, to generate random numbers from the distribution. The `dist` class method allows that.
```
Exponential.dist(1)
```
#### Manual
The uniformly-distributed discrete stochastic variable `switchpoint` in the disasters model could alternatively be created from a function that computes its log-probability as follows:
```
from pymc3 import DensityDist
from pymc3.math import switch
with Model():
    def uniform_logp(value, lower=0, upper=111):
        """The switchpoint for the rate of disaster occurrence."""
        return switch((value > upper) | (value < lower), -np.inf, -np.log(upper - lower + 1))
    switchpoint = DensityDist('switchpoint', logp=uniform_logp, dtype='int64')
switchpoint.logp({'switchpoint':4})
switchpoint.logp({'switchpoint': 44})
switchpoint.logp({'switchpoint':-1})
```
A couple of things to notice: while the function specified for the `logp` argument can be an arbitrary Python function, it must use **Theano operators and functions** (in this case, `switch`) in its body. This is because one or more of the arguments passed to the function may be `TensorVariable`s, and they must be supported. Also, we passed the value to be evaluated by the `logp` function as a **dictionary**, rather than as a plain integer. By convention, values in PyMC3 are passed around in a data structure called a `Point`. Points in parameter space are represented by dictionaries with parameter names as the keys and the values of the parameters as the values.
To emphasize, the Python function passed to `DensityDist` should compute the *log*-density or *log*-probability of the variable. That is why the return value in the example above is `-log(upper-lower+1)` rather than `1/(upper-lower+1)`.
## The ObservedRV Class
Stochastic random variables whose values are observed (*i.e.* data likelihoods) are represented by a different class than unobserved random variables. A `ObservedRV` object is instantiated any time a stochastic variable is specified with data passed as the `observed` argument.
Otherwise, observed stochastic random variables are created via the same interfaces as unobserved: **automatic** or **manual**. As an example of an automatic instantiation, consider a Poisson data likelihood :
```
from pymc3 import Poisson
with disaster_model:
    disasters = Poisson('disasters', mu=3, observed=[3,4,1,2,0,2,2])
```
A manual instantiation would be similar to that for a stochastic random variable, except `DensityDist` would receive an `observed` argument. Here is an example of an *exponential survival likelihood*:
```python
def logp(failure, value):
    return (failure * log(lam) - lam * value).sum()

x = DensityDist('x', logp, observed={'failure':failure, 'value':t})
```
Notice in this example that there are two vectors of observed data for the likelihood `x`, passed as a dictionary.
An important responsibility of `ObservedRV` is to automatically handle missing values in the data, when they are present. See the PyMC3 documentation for details.
## Deterministic Variables
A deterministic variable is one whose values are **completely determined** by the values of their parents. For example, in our disasters model, `rate` is a deterministic variable.
```python
with disaster_model:
    rate = pm.Deterministic('rate', switch(switchpoint >= np.arange(112), early_mean, late_mean))
```
so `rate`'s value can be computed exactly from the values of its parents `early_mean`, `late_mean` and `switchpoint`.
There are two types of deterministic variables in PyMC3:
#### Anonymous deterministic variables
The easiest way to create a deterministic variable is to operate on or transform one or more variables in a model directly. For example, the simplest way to specify the `rate` variable above is as follows:
```
with disaster_model:
    rate = switch(switchpoint >= np.arange(112), early_mean, late_mean)
```
Or, let's say we wanted to use the mean of the `early_mean` and `late_mean` variables somewhere in our model:
```
with disaster_model:
    mean_of_means = (early_mean + late_mean)/2
```
These are called *anonymous* variables because we did not wrap the expression in a call to `Deterministic`, which gives it a name as its first argument. We simply specified the variable as a Python (or, Theano) expression. This is therefore the simplest way to construct a deterministic variable. The only caveat is that the values generated by anonymous deterministics at every iteration of an MCMC algorithm, for example, are not recorded to the resulting trace. So, this approach is only appropriate for intermediate values in your model that you do not wish to obtain posterior estimates for, alongside the other variables in the model.
#### Named deterministic variables
To ensure that deterministic variables' values are accumulated during sampling, they should be instantiated using the **named deterministic** interface; this uses the `Deterministic` function to create the variable. Two things happen when a variable is created this way:
1. The variable is given a name (passed as the first argument)
2. The variable is appended to the model's list of random variables, which ensures that its values are tallied.
```
from pymc3 import Deterministic
with disaster_model:
    rate = Deterministic('rate', switch(switchpoint >= np.arange(112), early_mean, late_mean))

disaster_model.named_vars
```
## Factor Potentials
For some applications, we want to be able to modify the joint density by incorporating terms that don't correspond to probabilities of variables conditional on parents, for example:
$$p(x_0, x_2, \ldots x_{N-1}) \propto \prod_{i=0}^{N-2} \psi_i(x_i, x_{i+1})$$
In other cases we may want to add probability terms to existing models. For example, suppose we want to constrain the early mean to be greater than the late mean in the disaster model, so that the joint density becomes:
$$p(y,\tau,\lambda_1,\lambda_2) \propto p(y|\tau,\lambda_1,\lambda_2) p(\tau) p(\lambda_1) p(\lambda_2) I(|\lambda_1-\lambda_2| \gt 0)$$
We call such log-probability terms **factor potentials** (Jordan 2004). Bayesian
hierarchical notation doesn't accommodate these potentials.
### Creation of Potentials
A potential can be created via the `Potential` function, in a way very similar to `Deterministic`'s named interface:
```
from pymc3 import Potential
with disaster_model:
    rate_constraint = Potential('rate_constraint', switch((late_mean - early_mean)>0, -np.inf, 0))
```
The function takes just a `name` as its first argument and an expression returning the appropriate log-probability as the second argument.
## Sampling with MCMC
PyMC's core business is using Markov chain Monte Carlo to fit virtually any probability model. This involves the assignment and coordination of a suite of **step methods**, each of which is responsible for updating one or more variables.
The user's interface to PyMC's sampling algorithms is the `sample` function:
```python
sample(draws=500, step=None, init='auto', n_init=200000, start=None, trace=None, chain_idx=0, chains=None, cores=None, tune=500, progressbar=True, model=None, random_seed=None, discard_tuned_samples=True, compute_convergence_checks=True)
```
`sample` assigns particular samplers to model variables, and generates samples from them. The `draws` argument
controls the total number of MCMC iterations. PyMC can automate most of the details of sampling, outside of the selection of the number of draws, using default settings for several parameters that control how the sampling is set up and conducted. However, users may manually intervene in the specification of the sampling by passing values to a number of keyword arguments for `sample`.
### Assigning step methods
The `step` argument allows users to assign a MCMC sampling algorithm to the entire model, or to a subset of the variables in the model. For example, if we wanted to use the Metropolis-Hastings sampler to fit our model, we could pass an instance of that step method to `sample` via the `step` argument:
```python
with my_model:
    trace = sample(1000, step=Metropolis())
```
or if we only wanted to assign `Metropolis` to a parameter called `β`:
```python
with my_model:
    trace = sample(1000, step=Metropolis(vars=[β]))
```
When `step` is not specified by the user, PyMC3 will assign step methods to variables automatically. To do so, each step method implements a class method called `competence`. This method returns a value from 0 (incompatible) to 3 (ideal), based on the attributes of the random variable in question. `sample` assigns the step method that returns the highest competence value to each of its unallocated stochastic random variables. In general:
* Binary variables will be assigned to `BinaryMetropolis` (Metropolis-Hastings for binary values)
* Discrete variables will be assigned to `Metropolis`
* Continuous variables will be assigned to `NUTS` (No U-turn Sampler)
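A toy sketch of this highest-competence-wins rule (not PyMC3's actual implementation; the scores below are made up):

```python
# Each candidate step method reports a competence score for a variable;
# the sampler assigns the method with the highest score
def pick_step(competences):
    return max(competences, key=competences.get)

# Hypothetical scores for a continuous variable
pick_step({"BinaryMetropolis": 0, "Metropolis": 2, "NUTS": 3})  # → 'NUTS'
```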
### Starting values
The `start` argument allows for the specification of starting values for stochastic random variables in the model. MCMC algorithms begin by initializing all unknown quantities to arbitrary starting values. Though in theory the value can be any value under the support of the distribution describing the random variable, we can make sampling more difficult if an initial value is chosen in the extreme tail of the distribution, for example. If starting values are not passed by the user, default values are chosen from the mean, median or mode of the distribution.
One might be tempted to initialize a MCMC simulation at the maximum *a posteriori* (MAP) estimate:
```
with Model() as disaster_model:
    switchpoint = Uniform('switchpoint', lower=year.min(), upper=year.max())
    early_mean = Exponential('early_mean', lam=0.5)
    late_mean = Exponential('late_mean', lam=0.5)
    rate = switch(switchpoint >= year, early_mean, late_mean)
    disasters = Poisson('disasters', rate, observed=disasters_data)

from pymc3 import find_MAP

with disaster_model:
    start = find_MAP()
```
Except for small models, starting a sampler at the posterior mode is **not recommended**. As we saw in the introduction to Hamiltonian Monte Carlo, even though the probability density is highest around the mode, the volume of the posterior distribution is very low there. Hence, it is often not in (or near) the typical set.
However, for our small model things should work out okay.
```
start
from pymc3 import sample, Metropolis
with disaster_model:
trace = sample(step=Metropolis(), cores=2, start=start)
```
### Storing samples
Notice in the above call to `sample` that output is assigned to a variable we have called `trace`.
```
trace
```
This `MultiTrace` object is a data structure that stores the samples from an MCMC run in a tabular structure. By default, `sample` will create a new `MultiTrace` object that stores its samples in memory, as a NumPy `ndarray`. We can override the default behavior by specifying the `trace` argument. There are three options:
1. Selecting an alternative storage backend instead of keeping samples in an `ndarray`. Passing either `"text"` or `"sqlite"`, for example, will save samples to text files or a SQLite database, respectively. An instance of a backend can also be passed.
2. Passing a list of variables will only record samples for the subset of variables specified in the list. These will be stored in memory.
3. An existing `MultiTrace` object. This will add samples to an existing backend.
```
with disaster_model:
db_trace = sample(100, tune=0, cores=2, trace='sqlite')
# Cleaning up!
!rm mcmc.sqlite
```
I recommend converting MCMC sample output to an ArviZ `InferenceData` object. Output data are stored in a robust and flexible xarray `Dataset`, which allows for easy export to NetCDF for serialization.
```
from arviz import from_pymc3
model_output = from_pymc3(trace)
model_output
```
We will explore the output more in the next section, but the `InferenceData` object stores the posterior samples, the data that was used to fit the model, as well as a number of statistics related to the sampling procedure, which are useful for convergence diagnostic purposes.
```
type(model_output.posterior)
model_output.to_netcdf('trace.netcdf')
```
Serialized `InferenceData` objects can easily be re-imported.
```
from arviz import from_netcdf
imported_model_output = from_netcdf('trace.netcdf')
assert imported_model_output.posterior.equals(model_output.posterior)
# Clean up
!rm trace.netcdf
```
### Parallel sampling
Nearly all modern desktop computers have multiple CPU cores, and running multiple MCMC chains is an **embarrassingly parallel** computing task. It is therefore relatively simple to run chains in parallel in PyMC3. This is done by setting the `cores` argument in `sample` to some value between 2 and the number of cores on your machine (you can specify more chains than cores, but you will not gain efficiency by doing so). The default value of `cores` is `None`, which will select the number of CPUs on your machine, to a maximum of 4.
> Keep in mind that some chains might themselves be multithreaded via openmp or BLAS. In those cases it might be faster to set this to 1.
By default, PyMC3 will run a minimum of 2 and a maximum of `cores` chains. However, the number of chains sampled can be set independently of the number of cores by specifying the `chains` argument.
```
with disaster_model:
ptrace = sample(100, tune=100, chains=4, cores=2)
```
Running $n$ iterations with $c$ chains will result in $n \times c$ samples.
```
ptrace['early_mean'].shape
```
If you want to specify different arguments for each chain, a list of argument values can be passed to `sample` as appropriate. For example, if we want to initialize random variables to particular (*e.g.* dispersed) values, we can pass a list of dictionaries to `start`:
```
with disaster_model:
ptrace = sample(10, tune=100, cores=2, discard_tuned_samples=False, init=None,
start=[{'early_mean':0.1}, {'early_mean':10}])
[chain[:5] for chain in ptrace.get_values('early_mean', combine=False)]
```
Generating several chains is generally recommended because it aids in model checking, allowing statistics such as the potential scale reduction factor ($\hat{R}$) and effective sample size to be calculated, as we will see in the model checking section.
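As a preview of why multiple chains matter, the potential scale reduction factor can be sketched in a few lines of NumPy. This is the classic (unsplit) Gelman-Rubin diagnostic; ArviZ's `rhat` uses a more robust rank-normalized, split-chain variant.

```python
import numpy as np

# Classic (unsplit) Gelman-Rubin potential scale reduction factor.
def gelman_rubin(chains):
    """chains: array of shape (n_chains, n_samples); returns R-hat."""
    _, n = chains.shape
    between = n * chains.mean(axis=1).var(ddof=1)  # between-chain variance
    within = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * within + between / n   # pooled variance estimate
    return float(np.sqrt(var_hat / within))

rng = np.random.default_rng(0)
mixed = rng.normal(size=(4, 1000))         # four well-mixed chains: R-hat near 1
stuck = mixed + 5 * np.arange(4)[:, None]  # disjoint chains: R-hat far above 1
```

With only one chain, the between-chain term is unavailable and this kind of non-convergence is invisible.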
## Step methods
Step method classes handle individual stochastic variables, or sometimes groups of them. They are responsible for making the variables they handle take **single MCMC steps** conditional on the rest of the model. Each PyMC step method (usually subclasses of `ArrayStep`) implements a method called `astep()`, which is called iteratively by `sample`.
All step methods share an optional argument `vars` that allows a particular subset of variables to be handled by the step method instance. Particular step methods will have additional arguments for setting parameters and preferences specific to that sampling algorithm.
> NB: when a PyMC function or method has an argument called `vars`, it expects a list of variables (*i.e.* the variables themselves), whereas arguments called `varnames` expect a list of variable names (*i.e.* strings).
### HamiltonianMC
The Hamiltonian Monte Carlo algorithm is implemented in the `HamiltonianMC` class. Being a gradient-based sampler, it is only suitable for **continuous random variables**. Several optional arguments can be provided by the user. The algorithm is **non-adaptive**, so the parameter values passed at instantiation are fixed at those values throughout sampling.
`HamiltonianMC` requires a scaling matrix parameter `scaling`, which is analogous to the variance parameter for the jump proposal distribution in Metropolis-Hastings, although it is used somewhat differently here. The matrix gives an approximate shape of the posterior distribution, so that `HamiltonianMC` does not make jumps that are too large in some directions and too small in other directions. It is important to set this scaling parameter to a reasonable value to facilitate efficient sampling. This is especially true for models that have many unobserved stochastic random variables or models with highly non-normal posterior distributions.
Fortunately, `HamiltonianMC` can often make good guesses for the scaling parameters. If you pass a point in parameter space (as a dictionary mapping variable names to parameter values, the same format as returned by `find_MAP`), it will look at the **local curvature** of the log posterior density (the diagonal of the Hessian matrix) at that point to guess values for a good scaling vector. The MAP estimate is also often a good point from which to initiate sampling.
- `scaling`
: Scaling for momentum distribution. If a 1-dimensional array is passed, it is interpreted as a matrix diagonal.
- `step_scale`
: Size of steps to take, automatically scaled down by $1/n^{0.25}$. Defaults to .25.
- `path_length`
: total length to travel during leapfrog. Defaults to 2.
- `is_cov`
: Flag for treating scaling as a covariance matrix/vector, if True. Treated as precision otherwise.
- `step_rand`
: A function that takes the step size and returns a new one, used to randomize the step size at each iteration.
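The leapfrog integration underlying `HamiltonianMC` can be sketched for a toy one-dimensional target. This illustrates how `step_scale` (the step size) and `path_length` (step size × number of steps) enter the algorithm; it is a sketch, not PyMC3's implementation.

```python
# Toy leapfrog integrator for a 1-D standard-normal target (not PyMC3's code):
# step_size plays the role of `step_scale`, and step_size * n_steps of
# `path_length`.
def leapfrog(theta, r, grad_logp, step_size, n_steps):
    r = r + 0.5 * step_size * grad_logp(theta)      # initial half momentum step
    for _ in range(n_steps - 1):
        theta = theta + step_size * r               # full position step
        r = r + step_size * grad_logp(theta)        # full momentum step
    theta = theta + step_size * r
    r = r + 0.5 * step_size * grad_logp(theta)      # final half momentum step
    return theta, r

grad_logp = lambda x: -x                            # d/dx log N(0, 1)
theta1, r1 = leapfrog(1.0, 0.5, grad_logp, step_size=0.1, n_steps=20)

# A good integrator nearly conserves the Hamiltonian H = -log p(theta) + r^2/2;
# here H(1.0, 0.5) = 0.625, and H(theta1, r1) should differ only slightly.
energy_drift = abs(0.5 * theta1**2 + 0.5 * r1**2 - 0.625)
```

The near-conservation of energy is what lets HMC propose distant points that are still accepted with high probability.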
### NUTS
A disadvantage of the HMC sampler is that there are key hyperparameters that require tuning for sampling to proceed efficiently. Hoffman and Gelman (2014) developed an auto-tuning variant of HMC that takes care of selecting path lengths and step sizes.
`NUTS` is the No U-turn Sampler of Hoffman and Gelman (2014), an adaptive version of Hamiltonian MC that **automatically tunes** the step size and number of steps on the fly. NUTS uses a recursive algorithm to build a set of likely candidate points that spans a wide swath of the target distribution. True to its name, it stops automatically when it starts to double back and retrace its steps.
The algorithm employs **binary doubling**, which takes leapfrog steps alternating in direction with respect to the initial gradient. That is, one step is taken in the forward direction, two in the reverse direction, then four, eight, etc. The result is a balanced, binary tree with nodes comprised of Hamiltonian states.

Doubling process builds a balanced binary tree whose leaf nodes correspond to
position-momentum states. Doubling is halted when the subtrajectory from the
leftmost to the rightmost nodes of any balanced subtree of the overall binary tree starts to double back on itself

To ensure detailed balance, a slice variable is sampled from:
$$ u \sim \text{Uniform}(0, \exp[L(\theta) - 0.5 r \cdot r])$$
where $r$ is the initial momentum vector. The next sample is then chosen uniformly from the points in the remaining balanced tree.
In addition to the arguments to `HamiltonianMC`, `NUTS` takes additional parameters to control the tuning. The most important of these is the target acceptance rate for the Metropolis acceptance phase of the algorithm, `target_accept`.
If NUTS struggles to sample efficiently, raising this parameter above the default target rate of 0.8 can improve sampling (the original recommendation by Hoffman & Gelman was 0.6). Setting the rate very high will also make the sampler more conservative, however, taking many small steps at every iteration.
```
with disaster_model:
trace_99 = sample(100, tune=200, cores=2, target_accept=0.99)
```
There is rarely a reason to use `HamiltonianMC` rather than `NUTS`, which is the default sampler for continuous variables in PyMC3.
### Metropolis
`Metropolis` implements a Metropolis-Hastings step, as described in the theory section, and is designed to handle float- and integer-valued variables.
A `Metropolis` step method can be instantiated with any of several optional arguments:
- `S`
: This sets the proposal standard deviation or covariance matrix.
- `proposal_dist`
: A function that generates zero-mean random deviates used as proposals. Defaults to the normal distribution.
- `scaling`
: An initial scale factor for the proposal
- `tune_interval`
: The number of iterations between tuning updates to the `scaling` factor.
When the step method is instantiated, `proposal_dist` is parameterized with the value passed for `S`. While sampling, the value of `scaling` is used to scale the value proposed by `proposal_dist`, and this scaling factor is tuned throughout the MCMC run. During tuning, the acceptance rate of the step method is examined, and `scaling` is updated accordingly. Tuning only occurs when the acceptance rate is **lower than 20%** or **higher than 50%**; rates between 20-50% are considered optimal for Metropolis-Hastings sampling. The default tuning interval (`tune_interval`) is 100 iterations.
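The tuning band above can be sketched as a simple rule. This is illustrative only: PyMC3's actual schedule adjusts `scaling` by several different factors depending on how far the acceptance rate falls outside the optimal band, but the direction of the adjustment is the same.

```python
# Illustrative tuning rule for the 20-50% acceptance band (PyMC3's actual
# schedule uses a finer-grained set of adjustment factors).
def tune_scaling(scaling, acceptance_rate):
    if acceptance_rate < 0.2:
        return scaling * 0.9    # too few acceptances: shrink the proposals
    if acceptance_rate > 0.5:
        return scaling * 1.1    # too many acceptances: grow the proposals
    return scaling              # 20-50% is considered optimal: leave alone

scaling = 1.0
for rate in [0.05, 0.05, 0.35, 0.8]:
    scaling = tune_scaling(scaling, rate)
# scaling is shrunk twice, left untouched once, then grown once
```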
Although tuning will continue throughout the sampling loop, it is important to verify that the
**diminishing tuning** condition of [Roberts and Rosenthal (2007)](http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.jap/1183667414) is satisfied: the
amount of tuning should decrease to zero, or tuning should become very infrequent.
`Metropolis` handles discrete variable types automatically by rounding the proposed values and casting them to integers.
### BinaryMetropolis
While binary (boolean) variables can be handled by the `Metropolis` step method, sampling will be very inefficient. The `BinaryMetropolis` class is optimized to handle binary variables, which can take only one of two possible values. The only tunable parameter is the `scaling` argument, which is used to vary the Bernoulli jump probability:
`p_jump = 1. - .5 ** self.scaling`
This value is compared to pseudo-random numbers generated by the step method, to determine whether a 0 or 1 is proposed.
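A sketch of this proposal logic (illustrative, not PyMC3's source):

```python
import random

# Sketch of the Bernoulli flip proposal described above: `scaling` sets the
# jump probability via p_jump = 1 - 0.5 ** scaling.
def propose_binary(current, scaling, rng=random):
    p_jump = 1.0 - 0.5 ** scaling
    return 1 - current if rng.random() < p_jump else current

random.seed(0)
flips = sum(propose_binary(0, scaling=1.0) for _ in range(10_000))
# With scaling=1, p_jump = 0.5, so roughly half of the 10,000 proposals flip.
```

Larger `scaling` pushes `p_jump` toward 1, making flips more frequent; `scaling=0` never proposes a flip.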
`BinaryMetropolis` will be automatically selected for random variables that are distributed as Bernoulli, or categorical with only 2 categories.
### Slice
Though the Metropolis-Hastings algorithm is easy to implement for a variety of models, its efficiency is poor. We have seen that it is possible to tune Metropolis samplers, but it would be nice to have a "black-box" method that works for arbitrary continuous distributions, which we may know little about a priori.
The **slice sampler** (Neal 2003) improves upon the Metropolis sampler by being both efficient and easy to program generally. The idea is first to sample an auxiliary variable $y$ given the current value of $x$, uniformly over the interval $(0, f(x))$, and then, conditional on this value of $y$, to sample $x$ uniformly on the slice $S = \{x : y < f(x)\}$.
The steps required to perform a single iteration of the slice sampler to update the current value of $x_i$ are as follows:
1. Sample $y$ uniformly on $(0, f(x_i))$.
2. Use this value $y$ to define a horizontal *slice* $S = \{x : y < f(x)\}$.
3. Establish an interval, $I = (x_a, x_b)$, around $x_i$ that contains most of the slice.
4. Sample $x_{i+1}$ from the region of the slice overlapping $I$.
Hence, slice sampling employs an **auxiliary variable** ($y$) that is not retained at the end of the iteration. Note that in practice one may operate on the log scale, with $g(x) = \log(f(x))$, to avoid floating-point underflow. In this case, the auxiliary variable becomes $z = \log(y) = g(x_i) - e$, where $e \sim \text{Exp}(1)$, resulting in the slice $S = \{x : z < g(x)\}$.
There are many ways of establishing and sampling from the interval $I$, with the only restriction being that the resulting Markov chain leaves $f(x)$ **invariant**. The objective is to include as much of the slice as possible, so that the potential step size can be large, but not (much) larger than the slice, so that the sampling of invalid points is minimized. Ideally, we would like it to be the slice itself, but it may not always be feasible to determine (and certainly not automatically).
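A toy version of the univariate slice sampler, using Neal's stepping-out and shrinkage procedures on a standard-normal target. This is an illustration of the algorithm described above, not PyMC3's implementation; `w` is the initial interval width.

```python
import numpy as np

# Toy univariate slice sampler (Neal 2003) with stepping-out and shrinkage.
def slice_sample(x, logp, w=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    z = logp(x) - rng.exponential()        # log of the auxiliary slice height
    # Step out: grow an interval of width w until both ends leave the slice.
    left = x - w * rng.random()
    right = left + w
    while logp(left) > z:
        left -= w
    while logp(right) > z:
        right += w
    # Shrink: sample uniformly, narrowing the interval on each rejection.
    while True:
        x_new = rng.uniform(left, right)
        if logp(x_new) > z:
            return x_new
        if x_new < x:
            left = x_new
        else:
            right = x_new

logp = lambda x: -0.5 * x**2               # log N(0, 1) up to a constant
rng = np.random.default_rng(1)
draws = np.empty(5000)
x = 0.0
for i in range(5000):
    x = slice_sample(x, logp, w=1.0, rng=rng)
    draws[i] = x
# draws should have mean ~0 and standard deviation ~1
```

Note that every proposal is eventually accepted: rejections only shrink the interval, which is what makes the method tuning-free apart from `w`.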
In PyMC3, the `Slice` class implements the **univariate** slice sampler. It is suitable for univariate, continuous variables. There is a single user-defined parameter `w`, which sets the width of the initial slice. If not specified, it defaults to a width of 1.
```
from pymc3 import Slice
with disaster_model:
slice_trace = sample(2000, cores=2, step=Slice())
from arviz import plot_trace
plot_trace(slice_trace, var_names=['early_mean','late_mean']);
```
---
## To Learn More
- Hoffman MD, Gelman A. 2014. The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. The Journal of Machine Learning Research. 15(1):1593-1623.
- M.I. Jordan. 2004. Graphical models. Statist. Sci., 19(1):140–155.
- Neal, R. M. 2003. Slice sampling. The Annals of Statistics, 31(3), 705–767.
## Test Shock Cooling
Use the Piro et al. (2015) model to fit for multi-band early-time light curves.
```
import pandas as pd
import numpy as np
import scipy.optimize as op
from helper import phys
from allsn_info import get_at2019dge
import emcee
import time
import corner
from multiprocessing import Pool
from helper.arnett import model_arnett_modified
from helper.models import model_piro15_recast, model_piro15_bol_recast
import matplotlib
import matplotlib.pyplot as plt
fs = 14
matplotlib.rcParams['font.size']=fs
result = get_at2019dge(colorplt=False)
lc = result['tb']
lc = lc[lc.instrument!='P60+SEDM']
lcdet = lc.sort_values(by = ['mjd'])
t0mjd = result['t_max']
dates = np.unique(lcdet["date"].values)
lcdet["phase"] = lcdet["mjd"].values - t0mjd
ixearly = (lcdet["phase"].values < 20)#&(lcdet["instrument"].values != "Swift")
lcearly = lcdet[ixearly]
filts = np.unique(lcearly["filter"].values)
filts
np.unique(lcearly["wave"].values)
tt = lcearly["phase"].values
wv = lcearly["wave"].values
filters = lcearly["filter"].values
wv[filters=="u"] = 3477
wv[filters=="U"] = 3477
wv[filters=="g"] = 4800
wv[filters=="r"] = 6300
wv[filters=="i"] = 7800
wv[filters=="z"] = 9670
filters = lcearly["filter"].values
Llambda = lcearly["Llambda"].values
Llambda_unc = lcearly["Llambda_unc"].values
lgL = np.log10(Llambda)
lgL_unc = Llambda_unc / Llambda / np.log(10)
```
Copy the model code to another machine and run `main_shockmodel()` there, since it takes some time to run:
`scp models.py yyao@private.caltech.edu:/scratch/yyao/AT2019dge/playground/helper/`
Then copy the results back:
`scp -r yyao@private.caltech.edu:/scratch/yyao/AT2019dge/playground/helper/piromodel .`
Inspecting results for different tcuts, I select tcut = 5.0 (which gives the same result as tcut = 5.5).
```
tcuts = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
for tcut in tcuts:
filename = "./helper/piromodel/"+"%.1f"%tcut+"/sampler.h5"
reader = emcee.backends.HDFBackend(filename)
lgprobs = reader.get_log_prob(discard=1000, flat=True)
print (np.median(lgprobs))
filename = "./helper/piromodel/2.0/sampler.h5"
reader = emcee.backends.HDFBackend(filename)
samples = reader.get_chain(discard=1000, flat=True)
lgprobs = reader.get_log_prob(discard=1000, flat=True)
print (samples.shape)
print (lgprobs.shape)
lgR_sigmas = np.percentile(samples[:,0], (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87))
lgM_sigmas = np.percentile(samples[:,1], (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87))
t0_sigmas = np.percentile(samples[:,2], (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87))
E51_sigmas = np.percentile(samples[:,3], (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87))
Eenvs_sigmas = np.percentile(samples[:,4], (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87)) * 1e+49
Eenv = Eenvs_sigmas[3]
Eenv_unc_left = Eenvs_sigmas[3]-Eenvs_sigmas[2]
Eenv_unc_right = Eenvs_sigmas[4] - Eenvs_sigmas[3]
print ("%.2f (+%.2f) (-%.2f) 1e+50 erg"%(Eenv/1e+50, Eenv_unc_right/1e+50, Eenv_unc_left/1e+50))
Renv = 10**lgR_sigmas[3]
Renv_unc_left = 10**lgR_sigmas[3] - 10**lgR_sigmas[2]
Renv_unc_right = 10**lgR_sigmas[4] - 10**lgR_sigmas[3]
print ("%.2f (+%.2f) (-%.2f) e+12 cm"%(Renv / 1e+12, Renv_unc_right/1e+12, Renv_unc_left/1e+12))
print ("%.1f (+%.1f) (-%.1f) Rsun"%(Renv / phys.sr, Renv_unc_right/phys.sr, Renv_unc_left/phys.sr))
Menv = 10**lgM_sigmas[3]
Menv_unc_left = 10**lgM_sigmas[3] - 10**lgM_sigmas[2]
Menv_unc_right = 10**lgM_sigmas[4] - 10**lgM_sigmas[3]
print ("%.2f (+%.2f) (-%.2f) 1e-2 Msun"%(Menv*100, Menv_unc_right*100, Menv_unc_left*100))
deltat0 = t0_sigmas[3]
deltat0_unc_left = t0_sigmas[3]-t0_sigmas[2]
deltat0_unc_right = t0_sigmas[4] - t0_sigmas[3]
print ("%.2f (+%.2f) (-%.2f) day"%(deltat0, deltat0_unc_right, deltat0_unc_left))
t0 =t0_sigmas[3]
E51 = E51_sigmas[3]
```
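The percentile levels used repeatedly above — (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87) — are (up to rounding) the median and the ±1σ, ±2σ, ±3σ quantiles of a normal distribution, i.e. $100\,\Phi(k)$ for $k = -3, \dots, 3$, which can be verified with the standard normal CDF:

```python
import math

# Percentile levels as 100 * Phi(k) for k = -3..3, using the error function.
def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

levels = [100 * normal_cdf(k) for k in (-3, -2, -1, 0, 1, 2, 3)]
print([round(v, 2) for v in levels])
# prints [0.13, 2.28, 15.87, 50.0, 84.13, 97.72, 99.87]
```

So, for example, `lgR_sigmas[2]` and `lgR_sigmas[4]` bracket the central 68% (±1σ) credible interval.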
### Plot the model
```
plt.figure(figsize=(6, 5.5))
ax = plt.subplot(111)
wvs = np.array([2079. , 2255.1, 2614.2, 3477, 4359.1, 4800. , 5430.1, 6300. , 7800. , 9670. ])
names = np.array(["$UVW2$", "$UVM2$", "$UVW1$", "$U$", "$B$", "$g$", "$V$", "$r$", "$i$", "$z$"])
colors = np.array(["k", "navy", "b", "indigo", "blueviolet", "royalblue", "darkcyan", "crimson", "gold", "pink"])
tgrid = np.linspace(0, 10, 100)
for i in range(len(wvs)):
wave = wvs[i]
color = colors[i]
name = names[i]
ix = (wv == wave)&(tt<2)
ix1 = wv == wave
ax.errorbar(tt[ix], lgL[ix], lgL_unc[ix], fmt="o-", color = color)
ax.errorbar(tt[ix1], lgL[ix1], lgL_unc[ix1], fmt="o-", color = color, alpha = 0.3)
mymodel = model_piro15_recast(tgrid, wv=wave, Renv=Renv, Menv_=Menv, E51 = E51, Eext49 = Eenv/1e+49)
lgLmodel = np.log10(mymodel)
tnew = tgrid+t0
ix = tnew < 2
ax.plot(tnew[ix], lgLmodel[ix], color = color, label = name)
ax.plot(tnew, lgLmodel, color = color, alpha = 0.5)
ax.set_ylim(36, 39.3)
ax.set_xlim(-3.5, 4.5)
ax.plot([t0, t0], [36, 39.3], linestyle ="--", color = "grey")
ax.legend(ncol = 3, frameon = False, loc = "best")
ax.set_xlabel(r"$\Delta t$"+" (day)")
ax.set_ylabel('log('+r'$L_{\lambda}/\rm(erg\,s^{-1}\,\AA^{-1}))$')
plt.tight_layout()
plt.savefig("../paper/figures/P15model.pdf")
plt.show()
```
### Make corner plot
```
paramsNames= ['log' +r'$R_\mathrm{ext}$',
'log' +r'$M_\mathrm{ext}$',
r"$t_{\rm exp}$",
r"$E_{51}$",
r"$E_{\rm ext, 49}$"]
quantiles=[0.1587, 0.5, 0.8413]
corner.corner(samples, labels = paramsNames, quantiles = quantiles,
range = [0.975, 0.99, 0.975, 0.9999, 0.9999],
show_titles=True, plot_datapoints=False,
title_kwargs = {"fontsize": fs})
plt.savefig("../paper/figures/corner_P15_notused.pdf")
#plt.close()
paramsNames_final = ['log' +r'$R_\mathrm{ext}$',
'log' +r'$M_\mathrm{ext}$',
r"$t_{\rm exp}$",
r"$E_{\rm ext, 49}$"]
samples_final = np.hstack([samples[:, :3], samples[:, -1].reshape(samples.shape[0], 1)])
corner.corner(samples_final, labels = paramsNames_final, quantiles = quantiles,
range = [0.995, 0.995, 0.995, 0.995],
show_titles=True, plot_datapoints=False,
label_kwargs = {"fontsize": fs+3},
title_kwargs = {"fontsize": fs+1})
plt.savefig("../paper/figures/corner_P15.pdf")
filename1 = "./helper/arnettmodel/sampler.h5"
reader1 = emcee.backends.HDFBackend(filename1)
samples1 = reader1.get_chain(discard=200, flat=True)
taum_sigmas = np.percentile(samples1[:,0], (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87))
lgMni_sigmas = np.percentile(samples1[:,1], (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87))
t0_sigmas = np.percentile(samples1[:,2], (0.13, 2.27, 15.87, 50, 84.13, 97.73, 99.87))
taum_ = taum_sigmas[3]
lgMni_ = lgMni_sigmas[3]
Mni_ = 10**(lgMni_)
t0_ = t0_sigmas[3]
tgrid = np.linspace(0.1, 70, 200)
Lp15 = model_piro15_bol_recast(tgrid, Renv, Menv, E51, Eenv / 1e+49)
lgLp15 = np.log10(Lp15)
Lnidecay = model_arnett_modified(tgrid, taum_ = taum_, Mni_ = Mni_, t0_ = t0_)
lgLnidecay = np.log10(Lnidecay)
Ltot = Lp15 + Lnidecay
lgLtot = np.log10(Ltot)
result = get_at2019dge()
tb0 = result['tb']
z = result['z']
data = pd.read_csv('../data/otherSN/Yao2020/bbdata.csv')
data.head()
t_data = data['phase'].values
L_data = data['Lbb'].values
L_unc_data = data['Lbb_unc'].values
lgL_data = data['lgLbb'].values
lgL_unc_data = data['lgLbb_unc'].values
lgL_uncr_data = data['lgLbb_uncr'].values
lgL_uncl_data = data['lgLbb_uncl'].values
tb0 = tb0[tb0['filter'].values=='r']
tb0 = tb0[tb0.instrument!="P60+SEDM"]
tb0 = tb0[tb0.tmax_of > max(t_data)]
t_quasi = tb0["tmax_of"].values
Lquasi = tb0["Llambda"].values * tb0['wave'].values
Lquasi_unc = tb0["Llambda_unc"].values * tb0['wave'].values
lgLquasi = np.log10(Lquasi)
lgLquasi_unc = Lquasi_unc / Lquasi / np.log(10)
def get_refineLbbaxis(ax):
ax.set_ylabel(r'${\rm log} ( L/{\rm(erg\,s^{-1} } ))$')
ax.set_xlabel('Time since explosion (days)')
ax.xaxis.set_major_locator(plt.MultipleLocator(10))
ax.xaxis.set_minor_locator(plt.MultipleLocator(2))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.5))
ax.yaxis.set_minor_locator(plt.MultipleLocator(0.1))
ax.tick_params(direction='in', axis='both', which = 'both', top=True, right=True)
ax.tick_params(which = 'major', length = 4)
ax.tick_params(which = 'minor', length = 2)
plt.figure(figsize=(6, 5.2))
ax = plt.subplot(111)
ax.errorbar(t_quasi-t0, lgLquasi, lgLquasi_unc, fmt='--o', color = "grey", markerfacecolor='none', zorder = 3, markersize=7)
ax.errorbar(t_data-t0, lgL_data, [lgL_uncl_data, lgL_uncr_data], fmt='ok', zorder = 3, markersize=5)
# Piro 2015 model
ax.plot(tgrid, lgLp15, color = "mediumseagreen", linestyle = "--", zorder = 2, label = "Shock Cooling")
# Modified Arnett model
ax.plot(tgrid, lgLnidecay, color = "b", linestyle = ":", zorder = 2, label = "Nickel Decay")
# Combined
ax.plot(tgrid, lgLtot, color = "tomato", linestyle = "-", zorder = 1, linewidth=2, label = "Total")
get_refineLbbaxis(ax)
ax.set_xlim(-2, 65)
ax.set_ylim(40.2, 43.1)
plt.tight_layout(h_pad=0)
plt.legend(loc = "upper right", fontsize= fs)
plt.savefig("../paper/figures/Lbb.pdf")
t0
plt.figure(figsize=(6, 5.2))
ax = plt.subplot(111)
#ax.errorbar(t_quasi, lgLquasi, lgLquasi_unc, fmt='--o', color = "grey", markerfacecolor='none', zorder = 3, markersize=7)
ax.errorbar(t_data, lgL_data, [lgL_uncl_data, lgL_uncr_data], fmt='ok', zorder = 3, markersize=5)
# Piro 2015 model
ax.plot(tgrid+t0, lgLp15, color = "mediumseagreen", linestyle = "--", zorder = 2, label = "Shock Cooling")
# Modified Arnett model
ax.plot(tgrid+t0, lgLnidecay, color = "b", linestyle = ":", zorder = 2, label = "Nickel Decay")
# Combined
ax.plot(tgrid+t0, lgLtot, color = "tomato", linestyle = "-", zorder = 1, linewidth=2, label = "Total")
get_refineLbbaxis(ax)
ax.set_xlabel('Days since $r$-band max')
ax.set_xlim(-5, 65)
ax.set_ylim(40.2, 43.1)
plt.tight_layout(h_pad=0)
plt.legend(loc = "lower right", fontsize= fs)
plt.savefig("../random/Lbb.pdf")
```
```
"""
created by Arj at 16:28 BST
Investigating the challenge notebook and running its code.
"""
import matplotlib.pyplot as plt
import numpy as np
from qctrlvisualizer import get_qctrl_style, plot_controls
from qctrl import Qctrl
qctrl = Qctrl()
def simulate_ideal_qubit(
duration=1, values=np.array([np.pi]), shots=1024, repetitions=1
):
b = np.array([[0, 1], [0, 0]]) # Lowering operator
initial_state = np.array([[1], [0]]) # Initial state of qubit in |0>
with qctrl.create_graph() as graph:
# Create time dependent \Omega(t)
drive = qctrl.operations.pwc_signal(duration=duration, values=values)
# Construct Hamiltonian (\Omega(t) b + \Omega^*(t) b^\dagger)/2
hamiltonian = qctrl.operations.pwc_operator_hermitian_part(
qctrl.operations.pwc_operator(signal=drive, operator=b)
)
# Solve Schrodinger's equation and get total unitary at the end
unitary = qctrl.operations.time_evolution_operators_pwc(
hamiltonian=hamiltonian,
sample_times=np.array([duration]),
)[-1]
unitary.name = "unitary"
# Repeat final unitary
repeated_unitary = np.eye(2)
for _ in range(repetitions):
repeated_unitary = repeated_unitary @ unitary
repeated_unitary.name = "repeated_unitary"
# Calculate final state.
state = repeated_unitary @ initial_state
# Calculate final populations.
populations = qctrl.operations.abs(state[:, 0]) ** 2
# Normalize populations because of numerical precision
norm = qctrl.operations.sum(populations)
populations = populations / norm
populations.name = "populations"
# Evaluate graph.
result = qctrl.functions.calculate_graph(
graph=graph,
output_node_names=["unitary", "repeated_unitary", "populations"],
)
# Extract outputs.
unitary = result.output["unitary"]["value"]
repeated_unitary = result.output["repeated_unitary"]["value"]
populations = result.output["populations"]["value"]
# Sample projective measurements.
measurements = np.random.choice(2, size=shots, p=populations)
results = {"unitary": unitary, "measurements": measurements}
return results
duration = 10
values = np.array([-1, 3, 2, 3, -2, -1])
def get_pulse_plot_dict(name="default", duration=1, values=np.array([1.0])):
segments = len(values)
segment_durations = duration / segments
pulse_plot_dict = {
name: [{"duration": segment_durations, "value": v} for v in values]
}
return pulse_plot_dict
example_pulse = get_pulse_plot_dict(name="$\Omega$", duration=duration, values=values)
fig = plt.figure()
plot_controls(fig, example_pulse, polar=False)
plt.show()
max_rabi_rate = 20 * 2 * np.pi # MHz
not_duration = np.pi / (max_rabi_rate) # us
not_values = np.array([max_rabi_rate])
h_duration = 3 * np.pi / (2 * max_rabi_rate) # us
h_values = np.array([-1j * max_rabi_rate, max_rabi_rate, max_rabi_rate])
not_pulse = get_pulse_plot_dict(
name="$\Omega_{NOT}$", duration=not_duration, values=not_values
)
h_pulse = get_pulse_plot_dict(name="$\Omega_{H}$", duration=h_duration, values=h_values)
both_pulses = {**not_pulse, **h_pulse}
fig = plt.figure()
plot_controls(fig, both_pulses, polar=False)
plt.show()
shots = 1024
not_results = simulate_ideal_qubit(
duration=not_duration, values=not_values, shots=shots
)
h_results = simulate_ideal_qubit(duration=h_duration, values=h_values, shots=shots)
error_norm = (
    lambda operator_a, operator_b: 1
    - np.abs(np.trace((operator_a.conj().T @ operator_b)) / 2) ** 2
)
realised_not_gate = not_results["unitary"]
ideal_not_gate = np.array([[0, 1], [1, 0]])
not_error = error_norm(realised_not_gate, ideal_not_gate)
realised_h_gate = h_results["unitary"]
ideal_h_gate = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]])
h_error = error_norm(realised_h_gate, ideal_h_gate)
print("Realised NOT Gate:")
print(realised_not_gate)
print("Ideal NOT Gate:")
print(ideal_not_gate)
print("NOT Gate Error:" + str(not_error) + "\n")
print("Realised H Gate:")
print(realised_h_gate)
print("Ideal H Gate:")
print(ideal_h_gate)
print("H Gate Error:" + str(h_error))
not_measurements = not_results["measurements"]
h_measurements = h_results["measurements"]
def estimate_probability_of_one(measurements):
size = len(measurements)
probability = np.mean(measurements)
standard_error = np.std(measurements) / np.sqrt(size)
return (probability, standard_error)
not_probability, not_standard_error = estimate_probability_of_one(not_measurements)
h_probability, h_standard_error = estimate_probability_of_one(h_measurements)
print("NOT estimated probability of getting 1:" + str(not_probability))
print("NOT estimate standard error:" + str(not_standard_error))
print("H estimated probability of getting 1:" + str(h_probability))
print("H estimate standard error:" + str(h_standard_error))
epsilon = 0.003
h_duration = (np.pi + epsilon) / (2 * max_rabi_rate) # us
repetitions = np.array([1, 17, 127])
repetition_results = []
for reps in repetitions:
repetition_results.append(
simulate_ideal_qubit(
duration=h_duration, values=values, shots=shots, repetitions=reps
)
)
probability_estimates = np.zeros(3)
standard_errors = np.zeros(3)
for count, result in enumerate(repetition_results):
probability_estimates[count], standard_errors[count] = estimate_probability_of_one(
result["measurements"]
)
plt.plot(repetitions, probability_estimates, "s", color="#680CE9")
plt.plot(repetitions, probability_estimates + standard_errors, "_", color="#680CE9")
plt.plot(repetitions, probability_estimates - standard_errors, "_", color="#680CE9")
plt.hlines(0.5, 0, 127, linestyle=":")
plt.xlabel("Repetition")
plt.ylabel("Probability Estimate")
plt.show()
repetitions
probability_estimates
```
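The `estimate_probability_of_one` helper above returns the sample mean of the 0/1 measurements together with its standard error. For binary data, that standard error is identical to the binomial formula $\sqrt{p(1-p)/n}$, since the (population) standard deviation of a 0/1 sample is exactly $\sqrt{\hat{p}(1-\hat{p})}$. A quick check (assuming only NumPy):

```python
import numpy as np

# For 0/1 data, np.std / sqrt(n) equals sqrt(p_hat * (1 - p_hat) / n) exactly,
# so the helper's standard error is the usual binomial one.
rng = np.random.default_rng(7)
measurements = rng.choice(2, size=1024, p=[0.3, 0.7])  # synthetic shot record

p_hat = np.mean(measurements)
se = np.std(measurements) / np.sqrt(len(measurements))
se_binomial = np.sqrt(p_hat * (1 - p_hat) / len(measurements))
assert np.isclose(se, se_binomial)
```

This is why the error bars shrink like $1/\sqrt{\text{shots}}$ as the shot count grows.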
# Run constrained emissions-driven ensemble in SSP2-4.5
Theme Song: The Bartender And The Thief<br>
Artist: Stereophonics<br>
Album: Performance and Cocktails<br>
Released: 1998
```
import os.path
import copy
import json
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pyam
from fair.forward import fair_scm
from scmdata import ScmRun, run_append
from tqdm import tqdm_notebook
import openscm_runner
from openscm_runner.run import run
from openscm_runner.adapters import FAIR
openscm_runner.__version__
fair = FAIR()
fair.get_version()
with open('../data_output_large/fair-samples/fair-1.6.2-wg3-params.json') as f:
config_list = json.load(f)
species = [
'Emissions|BC',
'Emissions|CH4',
'Emissions|CO',
'Emissions|CO2|MAGICC AFOLU',
'Emissions|CO2|MAGICC Fossil and Industrial',
'Emissions|F-Gases|HFC|HFC125',
'Emissions|F-Gases|HFC|HFC134a',
'Emissions|F-Gases|HFC|HFC143a',
'Emissions|F-Gases|HFC|HFC227ea',
'Emissions|F-Gases|HFC|HFC23',
'Emissions|F-Gases|HFC|HFC245fa',
'Emissions|F-Gases|HFC|HFC32',
'Emissions|F-Gases|HFC|HFC4310mee',
'Emissions|F-Gases|PFC|C2F6',
'Emissions|F-Gases|PFC|C6F14',
'Emissions|F-Gases|PFC|CF4',
'Emissions|F-Gases|SF6',
'Emissions|Montreal Gases|CCl4',
'Emissions|Montreal Gases|CFC|CFC11',
'Emissions|Montreal Gases|CFC|CFC113',
'Emissions|Montreal Gases|CFC|CFC114',
'Emissions|Montreal Gases|CFC|CFC115',
'Emissions|Montreal Gases|CFC|CFC12',
'Emissions|Montreal Gases|CH3Br',
'Emissions|Montreal Gases|CH3CCl3',
'Emissions|Montreal Gases|CH3Cl',
'Emissions|Montreal Gases|HCFC141b',
'Emissions|Montreal Gases|HCFC142b',
'Emissions|Montreal Gases|HCFC22',
'Emissions|Montreal Gases|Halon1202',
'Emissions|Montreal Gases|Halon1211',
'Emissions|Montreal Gases|Halon1301',
'Emissions|Montreal Gases|Halon2402',
'Emissions|N2O',
'Emissions|NH3',
'Emissions|NOx',
'Emissions|OC',
'Emissions|Sulfur',
'Emissions|VOC']
df_fair = ScmRun('../data_input_large/rcmip-emissions-annual-means-v5-1-0.csv', lowercase_cols=True)
df_fair.filter(
scenario=['ssp245'],
# year=range(2015,2301),
year=range(2015,2101),
variable=species,
region='World',
inplace=True
)
print(len(df_fair))
#df_fair.head(50)
nt = df_fair.time_points.years()[-1] - 1750 + 1
nt
# drop beyond 2100; use deepcopy so the original config_list dicts are not mutated
updated_config = copy.deepcopy(config_list)
for i in range(len(config_list)):
updated_config[i]['F_solar'] = updated_config[i]['F_solar'][:nt]
updated_config[i]['F_volcanic'] = updated_config[i]['F_volcanic'][:nt]
updated_config[i]['natural'] = updated_config[i]['natural'][:nt]
# need parallel FaIR in openscm-runner
x = run(
climate_models_cfgs={
"FAIR": updated_config,
},
scenarios=df_fair,
output_variables=(
"Surface Air Temperature Change",
"Atmospheric Concentrations|CO2",
"Atmospheric Concentrations|CH4",
"Atmospheric Concentrations|N2O",
"Effective Radiative Forcing",
"Effective Radiative Forcing|CO2",
"Effective Radiative Forcing|CH4",
"Effective Radiative Forcing|N2O",
"Effective Radiative Forcing|Greenhouse Gases",
"Effective Radiative Forcing|Tropospheric Ozone",
"Effective Radiative Forcing|CH4 Oxidation Stratospheric H2O",
"Effective Radiative Forcing|Contrails",
"Effective Radiative Forcing|Aerosols",
"Effective Radiative Forcing|Aerosols|Direct Effect|BC",
"Effective Radiative Forcing|Aerosols|Direct Effect|OC",
"Effective Radiative Forcing|Aerosols|Direct Effect|SOx",
"Effective Radiative Forcing|Aerosols|Direct Effect|Nitrate",
"Effective Radiative Forcing|Aerosols|Direct Effect",
"Effective Radiative Forcing|Aerosols|Indirect Effect",
"Effective Radiative Forcing|Black Carbon on Snow",
"Effective Radiative Forcing|Land-use Change"
),
)
# convert to ScmRun for better plotting functionality
x = ScmRun(x.timeseries())
x.tail()
# co2 409.9
np.percentile(x.filter(variable="Atmospheric Concentrations|CO2", scenario='ssp245', year=2019).timeseries().values.squeeze(), (5, 50, 95))
# ch4 1866.3
np.percentile(x.filter(variable="Atmospheric Concentrations|CH4", scenario='ssp245', year=2019).timeseries().values.squeeze(), (5, 50, 95))
# n2o 332.1
np.percentile(x.filter(variable="Atmospheric Concentrations|N2O", scenario='ssp245', year=2019).timeseries().values.squeeze(), (5, 50, 95))
# co2 1.90 2.16 2.41
# any differences to GHG forcing is as likely to be with the pre-industrial concentration as it is with FaIR itself
np.percentile(x.filter(variable="Effective Radiative Forcing|CO2", scenario='ssp245', year=2019).timeseries().values.squeeze(), (5, 50, 95))
# ch4 0.43 0.54 0.65
np.percentile(x.filter(variable="Effective Radiative Forcing|CH4", scenario='ssp245', year=2019).timeseries().values.squeeze(), (5, 50, 95))
# n2o 0.18 0.21 0.24
np.percentile(x.filter(variable="Effective Radiative Forcing|N2O", scenario='ssp245', year=2019).timeseries().values.squeeze(), (5, 50, 95))
# other wmghg 0.33 0.41 0.49
np.percentile(
(
x.filter(variable="Effective Radiative Forcing|Greenhouse Gases", scenario='ssp245', year=2019).timeseries().values.squeeze() -
x.filter(variable="Effective Radiative Forcing|CO2", scenario='ssp245', year=2019).timeseries().values.squeeze() -
x.filter(variable="Effective Radiative Forcing|CH4", scenario='ssp245', year=2019).timeseries().values.squeeze() -
x.filter(variable="Effective Radiative Forcing|N2O", scenario='ssp245', year=2019).timeseries().values.squeeze()
)
, (5, 50, 95))
x.filter(variable="Effective Radiative Forcing|N2O", scenario='ssp245', year=2019).timeseries()
# n2o 0.18 0.21 0.24
np.percentile(x.filter(variable="Effective Radiative Forcing|N2O", scenario='ssp245', year=2019).timeseries().values.squeeze(), (5, 50, 95))
# o3 0.24 0.47 0.71
np.percentile(x.filter(variable="Effective Radiative Forcing|Tropospheric Ozone", scenario='ssp245', year=2019).timeseries().values.squeeze(), (5, 50, 95))
# h2o 0 0.05 0.10
np.percentile(x.filter(variable="Effective Radiative Forcing|CH4 Oxidation Stratospheric H2O", scenario='ssp245', year=2019).timeseries().values.squeeze(), (5, 50, 95))
# ERFaer -0.6 -0.3 -0.0
np.percentile(x.filter(variable="Effective Radiative Forcing|Aerosols|Direct Effect", scenario='ssp245', year=np.arange(2005,2015)).timeseries().values.squeeze(), (5, 50, 95))
# ERFaci -1.7 -1.0 -0.3 - very hard to get
np.percentile(x.filter(variable="Effective Radiative Forcing|Aerosols|Indirect Effect", scenario='ssp245', year=np.arange(2005,2015)).timeseries().values.squeeze(), (5, 50, 95))
# BC Snow 0.02 0.08 0.18
np.percentile(x.filter(variable="Effective Radiative Forcing|Black Carbon on Snow", scenario='ssp245', year=2019).timeseries().values.squeeze(), (5, 50, 95))
# Land use -0.30 -0.20 -0.10
np.percentile(x.filter(variable="Effective Radiative Forcing|Land-use Change", scenario='ssp245', year=2019).timeseries().values.squeeze(), (5, 50, 95))
config_list[0]['scale']
scale_normals = np.load('../data_input_large/fair-samples/scale_normals.npy')
scale_normals[838]
```
# Multiparty computation in pytorch demo
Model owner code
## Imports
```
from interface.distributed_interface import DistributedInterface
from shared_variable import SharedVariable
import torch
from torch.autograd import Variable
import spdz
```
## Define an interface for sending tensors
The interface used in this demo is the distributed interface, which uses PyTorch's `torch.distributed` package to send tensors. Note that this interface should only be initialized once per machine.
```
interface = DistributedInterface(0)
```
## Receiving data from the other party
`swap_shares` must be passed a tensor of the same size as the tensor being received. For this demo, the values of that tensor do not matter, as they are ignored on the receiving end.
```
raw_data = spdz.swap_shares(torch.LongTensor(1, 2).zero_(), interface)
data = SharedVariable(Variable(raw_data), interface)
```
## Define and send weights to other party
The weights here are defined as a float tensor and then encoded into a fixed-point representation. This fixed-point representation lets us avoid overflow errors and makes multiplication easier.
```
raw_weights = spdz.encode(torch.FloatTensor([[2],[2]]))
```
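A minimal sketch of what such a fixed-point scheme might look like. The base, precision, and field size `Q` below are illustrative assumptions, not the values `spdz` actually uses:

```python
# Hypothetical fixed-point encode/decode; constants are assumptions for illustration.
Q = 2**62          # modulus of the finite field the shares live in (assumed)
BASE = 10
PRECISION = 4      # number of fractional digits preserved

def encode(x):
    return int(round(x * BASE**PRECISION)) % Q

def decode(n):
    n = n - Q if n > Q // 2 else n   # map back to a signed value
    return n / BASE**PRECISION

assert decode(encode(2.0)) == 2.0
assert decode(encode(-1.5)) == -1.5
```

Multiplying two encoded values multiplies the scale factors as well, which is why SPDZ-style protocols rescale after each multiplication.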
After the weights are encoded, they are divided up so that each party gets one share of the weights. These shares add up to the true value of the weights.
```
weights_self,weights_other = spdz.share(raw_weights)
spdz.swap_shares(weights_other,interface)
weights = SharedVariable(Variable(weights_self,requires_grad=True),interface)
```
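Additive secret sharing of this kind can be sketched in a few lines (again, the field size `Q` is an assumed value):

```python
import random

Q = 2**62  # assumed field size; the real value is library-specific

def share(secret):
    # split a value into two shares that are individually uniform random
    s1 = random.randrange(Q)
    s2 = (secret - s1) % Q
    return s1, s2

def reconstruct(shares):
    return sum(shares) % Q

s1, s2 = share(12345)
assert reconstruct([s1, s2]) == 12345
```

Each share alone is uniformly distributed over the field, so neither party learns anything about the true weights from its own share.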
## Actual computation
The actual computation of this demo is a matrix multiplication between the weights and the data. This computation is taken care of in its entirety by the following cell. Adding further computations is as simple as chaining them together.
```
output = data @ weights
output
```
## Backwards pass
The backward pass is handled by the following cell. It simply calls `backward` as you would on a variable, and we can do so without recombining the result.
```
output.backward(torch.LongTensor([[1]]))
weights.grad
```
## Recombining weight gradient
By simply adding the gradients together, we get the true gradient
```
weights_grad_other = spdz.swap_shares(weights.grad.data,interface)
weights_grad = spdz.decode(spdz.reconstruct([weights.grad.data,weights_grad_other]))
weights_grad
```
We get no gradient for data because we set it not to require a gradient
```
data.grad
```
## Recombining output
Finally we can recombine the output and see that the calculation was correct
```
output_other = spdz.swap_shares(output.var.data,interface)
output_total = spdz.decode(spdz.reconstruct([output.var.data,output_other]))
output_total
torch.FloatTensor([[3,3]])@torch.FloatTensor([[2],[2]])
```
<a href="https://colab.research.google.com/github/unica-ml/ml/blob/master/notebooks/ml06.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Elements of Linear Discriminant Functions
This is the notebook associated to Part 4 of the ML course.
Let's start by importing some utility functions.
```
from matplotlib import pyplot as plt
import numpy as np
def plot_function(fun, grid_limits=([0, 0], [1, 1]),
background=False, resolution=0.02, alpha=1.0, loop=False):
"""Plot function on 2D space."""
x1_min, x1_max = grid_limits[0][0], grid_limits[1][0]
x2_min, x2_max = grid_limits[0][1], grid_limits[1][1]
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
x = np.array([xx1.ravel(), xx2.ravel()]).T
if loop:
scores = np.zeros(shape=(x.shape[0],))
for i in range(x.shape[0]):
scores[i] = fun(x[i, :])
else:
scores = fun(x)
Z = scores.reshape(xx1.shape)
if background: # plot decision function
plt.contourf(xx1, xx2, Z, cmap='jet', levels=50, alpha=alpha)
plt.colorbar()
else:
# plot decision boundary
plt.contourf(xx1, xx2, Z, levels=[-0.01, 0, 0.01], colors=('k',))
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
return
def plot_dataset(x, y, feat0=0, feat1=1):
colors = ['r.', 'b.', 'g.', 'k.', 'c.', 'm.']
class_labels = np.unique(y).astype(int)
for k in class_labels:
plt.plot(x[y == k, feat0], x[y == k, feat1], colors[k % len(colors)])
```
Now let's code a simple class implementing a linear classifier $f(x)=w^T x + b$, and display its decision boundary on a bi-dimensional toy example.
Note that, if we set $h(x) = f(x) \cdot k$ (being $k$ a constant value), we obtain a linear classifier $h(x)$ with $w^\prime = kw$ and $b^\prime = kb$. While this classifier has the same decision boundary (in fact, $h(x)=0$ is equivalent to $f(x)=0$), it exhibits a different slope. For example, if $k>1$ the decision function will change more rapidly around each point $x$. You can compare the plots at the end of this section to note the difference.
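This invariance of the decision boundary (but not of the slope) under positive scaling can be checked numerically; a small standalone sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
w, b, k = np.array([1.0, 1.0]), 0.1, 2.0
pts = rng.normal(size=(5, 2))

f = pts.dot(w) + b
h = pts.dot(k * w) + k * b  # classifier with w' = k*w, b' = k*b

assert np.allclose(h, k * f)                    # scores scale by k
assert np.array_equal(np.sign(h), np.sign(f))   # predicted labels agree
```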
```
from sklearn.datasets import make_blobs
class LinearClassifier:
"""Simple class implementing f(x) = w'x +b."""
def __init__(self, w, b):
self.w = w
self.b = b
@property
def w(self):
return self._w
@w.setter
def w(self, w):
self._w = np.array(w)
@property
def b(self):
return self._b
@b.setter
def b(self, b):
self._b = np.array(b)
def decision_function(self, x):
return x.dot(self.w) + self.b
def predict(self, x):
return np.array(self.decision_function(x) >= 0)
x, y = make_blobs(n_samples=100, n_features=2, centers=2, random_state=3)
w = [1, 1]
b = 0.1
clf = LinearClassifier(w,b)
grid_limits = (x.min(axis=0)-0.5, x.max(axis=0)+0.5)
plt.figure(figsize=(13.5,3))
plt.subplot(1, 3, 1)
plot_dataset(x, y)
plot_function(clf.decision_function, background=False, grid_limits=grid_limits)
plt.axis('equal')
plt.title('Decision boundary at f(x)=0')
plt.xlabel(r'Feature $x_1$')
plt.ylabel(r'Feature $x_2$')
plt.subplot(1, 3, 2)
plot_dataset(x, y)
plot_function(clf.decision_function, background=True, grid_limits=grid_limits)
plt.clim([-20, 20])
plot_function(clf.decision_function, background=False, grid_limits=grid_limits)
plt.axis('equal')
plt.title('f(x)')
plt.xlabel(r'Feature $x_1$')
plt.ylabel(r'Feature $x_2$')
plt.subplot(1, 3, 3)
plot_dataset(x, y)
clf.w = 2*clf.w
clf.b = 2*clf.b
plot_function(clf.decision_function, background=True, grid_limits=grid_limits)
plt.clim([-20, 20])
plot_function(clf.decision_function, background=False, grid_limits=grid_limits)
plt.axis('equal')
plt.title('2*f(x)')
plt.xlabel(r'Feature $x_1$')
plt.ylabel(r'Feature $x_2$')
plt.show()
```
## Optimizing the Loss Function
We have described so far the basics of linear classification, namely, how samples are predicted by a linear classifier.
The question that remains to be addressed is how one can learn the parameters $\theta = (w,b)$ for a linear classifier from the training data $D=(x_i, y_i)_{i=1}^n$.
This is typically achieved by formulating the learning problem as an optimization problem:
$$ \theta^\star \in \arg\min_\theta L(D, \theta),$$
where the objective function $L(D, \theta)$ is a proxy function to evaluating the classification error. This problem is typically solved efficiently via gradient descent.
Depending on the choice of the objective function $L(D, \theta)$, one can implement many different learning algorithms. Note that this formulation also holds for nonlinear classification functions and more complex algorithms, including neural networks and deep-learning algorithms.
Let's start from something easy. First of all, let's assume that the loss function can be decomposed as the sum of the loss on each training point: $L(D, \theta) = \frac{1}{n}\sum_{i=1}^n \ell(y_i, f(x_i; \theta))$.
It is not difficult to see that, if we take $\ell$ to be 1 for correct predictions and 0 otherwise, $L$ will correspond to measuring the fraction of training points that are wrongly predicted (i.e., the training error). This is called the zero-one loss.
Below, we plot the zero-one loss along with the so-called hinge loss (i.e., its closest convex upper bound) as function of $y f(x)$.
In fact, loss functions can be normally expressed as a function of the product $y f(x)$, given that, if $y f(x) \geq 0$, the point $x$ is correctly predicted ($y$ and $f$ agree in sign), otherwise it is misclassified.
Here are the equations:
- zero-one loss: $\ell(y, f(x)) = \begin{cases} 1, \; {\rm if} \; yf(x) < 0, \\ 0, \; {\rm otherwise.}\end{cases}$
- hinge loss: $\ell(y, f(x)) = \max(0, 1-yf(x))$.
```
yf = np.linspace(-3, 3, num=100)
hinge = 1-yf
hinge[hinge<=0]=0
zero_one = yf < 0
plt.figure(figsize=(5,4))
plt.plot(yf, zero_one, 'b', label='zero-one loss')
plt.plot(yf, hinge, 'r', label='hinge loss')
plt.xlabel(r'$y \cdot f(x)$')
plt.ylabel(r'$\ell(y, f(x))$')
plt.title('Loss functions')
plt.legend()
plt.show()
```
Let's have a look at how these losses behave in the space of parameters $(w_1, w_2)$, assuming that $b=0$ (not optimized).
Every point in this space is a linear classifier (i.e., a hyperplane passing through the origin) and we report (using the color axis) the corresponding error on the training set (i.e., the training loss).
```
class Loss:
"""Class implementing basic loss functions."""
def __init__(self, clf, x, y):
self._clf = clf # classifier to be evaluated
self._x = x # training points
self._y = y # training labels
def zero_one_loss(self, w=None):
if w is not None:
self._clf.w = w
y = 2*self._y - 1 # convert {0,1} to {-1,+1}
scores = self._clf.decision_function(self._x)
return np.mean(y*scores < 0)
def hinge_loss(self, w=None):
if w is not None:
self._clf.w = w
y = 2*self._y - 1 # convert {0,1} to {-1,+1}
scores = self._clf.decision_function(self._x)
hinge = 1-y*scores
hinge[hinge <= 0] = 0
return np.mean(hinge)
clf = LinearClassifier(w=[1, 1],b=0)
loss = Loss(clf, x, y)
plt.figure(figsize=(12,4))
plt.subplot(1, 2, 1)
plot_function(loss.zero_one_loss, background=True,
loop=True, resolution=0.1,
grid_limits=([-10,-10], [10, 10]))
plt.xlabel(r'$w_1$')
plt.ylabel(r'$w_2$')
plt.title('zero-one loss')
plt.subplot(1, 2, 2)
plot_function(loss.hinge_loss, background=True,
loop=True, resolution=0.1,
grid_limits=([-10,-10], [10, 10]))
plt.xlabel(r'$w_1$')
plt.ylabel(r'$w_2$')
plt.title('hinge loss')
plt.show()
# we now fix w2=0 and let only w1 change
n_points=100
w1 = np.linspace(-10, 10, num=n_points)
w2 = np.zeros(shape=(n_points,))
w = np.vstack((w1,w2)).T
zero_one = np.zeros(shape=(n_points,))
hinge = np.zeros(shape=(n_points,))
for i in range(n_points):
zero_one[i] = loss.zero_one_loss(w[i, :])
hinge[i] = loss.hinge_loss(w[i, :])
plt.figure(figsize=(12,4))
plt.subplot(1, 2, 1)
plt.plot(w1, zero_one)
plt.xlabel(r'$w_1$')
plt.title(r'zero-one loss (at $w_2=0$)')
plt.subplot(1, 2, 2)
plt.plot(w1, hinge)
plt.xlabel(r'$w_1$')
plt.title(r'hinge loss (at $w_2=0$)')
plt.show()
```
Let's extend our class now by adding the derivative of the hinge, and let's run gradient descent to optimize the loss.
The hinge loss $\ell(y, f(x)) = \max(0, 1-yf(x))$ is not differentiable at the hinge, i.e., when $yf(x)=1$, but subgradients can be used.
In this case, we can assume that the gradient is zero at the hinge. Accordingly, we can set the gradient to zero when the loss is zero, and instead differentiate $1-yf(x)$ w.r.t. $w$ when the hinge loss is not null. We thus get:
$$\nabla_w \ell(y, f(x))=\begin{cases} 0, \; {\rm if} \; 1-yf(x) \leq 0, \\ -yx, \; {\rm otherwise.}\end{cases}$$
We also report the derivative w.r.t. $b$ for completeness:
$$\nabla_b \ell(y, f(x))=\begin{cases} 0, \; {\rm if} \; 1-yf(x) \leq 0, \\ -y, \; {\rm otherwise.}\end{cases}$$
Recall that these are derivatives of the loss computed for each training point. We will then need to average these values over all training points.
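The piecewise gradient above can be verified against a finite-difference approximation on toy data; a standalone sanity check, independent of the classes used in this notebook:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(20, 2))
y = rng.choice([-1, 1], size=20)

def hinge_loss(w):
    return np.mean(np.maximum(0, 1 - y * x.dot(w)))

def hinge_grad(w):
    # subgradient: -y*x where the hinge is active, 0 elsewhere
    margin = 1 - y * x.dot(w)
    g = -(y[:, None] * x)
    g[margin <= 0] = 0
    return g.mean(axis=0)

w = np.array([0.5, -0.3])
eps = 1e-6
numeric = np.array([
    (hinge_loss(w + eps * e) - hinge_loss(w - eps * e)) / (2 * eps)
    for e in np.eye(2)
])
assert np.allclose(numeric, hinge_grad(w), atol=1e-5)
```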
```
class LossGrad(Loss):
"""Extend previous class by adding the hinge loss gradient."""
def __init__(self, clf, x, y):
Loss.__init__(self, clf, x, y)
def hinge_loss_gradient(self, w=None):
if w is not None:
self._clf.w = w
y = 2*self._y - 1 # convert {0,1} to {-1,+1}
scores = self._clf.decision_function(self._x)
hinge = 1-y*scores
hinge[hinge <= 0] = 0
grad = np.zeros(shape=self._x.shape) # one grad per point
grad[hinge > 0, :] = self._x[hinge>0, :]
y = np.atleast_2d(y) # required to broadcast (on each column of grad)
grad *= -y.T
return np.mean(grad, axis=0)
# let's start optimizing. We start from w=[10,6]
n_iter = 20
w = np.zeros(shape=(n_iter+1, 2))
hinge = np.zeros(shape=(n_iter+1, )) # objective at w in each iter
w[0, :] = np.array([10., 6.]) # init
clf = LinearClassifier(w=w[0, :], b=0)
loss = LossGrad(clf, x, y)
hinge[0] = loss.hinge_loss(w=clf.w)
eta = 0.5 # gradient step size
for i in range(n_iter):
clf.w -= eta * loss.hinge_loss_gradient(w=clf.w)
w[i+1, :] = clf.w
hinge[i+1] = loss.hinge_loss(w=clf.w)
plt.figure(figsize=(15,3.5))
plt.subplot(1, 3, 1)
plot_function(loss.hinge_loss, background=True,
loop=True, resolution=0.1,
grid_limits=([-10,-10], [10, 10]))
plt.plot(w[:, 0], w[:, 1], 'rx:')
plt.xlabel(r'$w_1$')
plt.ylabel(r'$w_2$')
plt.title('hinge loss')
plt.subplot(1, 3, 2)
plt.plot(hinge)
plt.xlabel('Iteration')
plt.title('hinge loss (along the descent path)')
plt.subplot(1, 3, 3)
plot_dataset(x, y)
for i in range(n_iter+1):
clf.w = w[i, :]
plot_function(clf.decision_function, grid_limits=grid_limits)
plt.show()
```
# Training LeNet using MNIST and Joey
In this notebook, we will construct and train LeNet using Joey, data from MNIST and the SGD with momentum PyTorch optimizer.
Let's start with importing the prerequisites:
```
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import joey as ml
import numpy as np
import matplotlib.pyplot as plt
from devito import logger
```
In order to speed up processing, we'll not print performance messages coming from Devito.
```
logger.set_log_noperf()
```
`create_lenet()` returns a `Net` instance representing LeNet.
```
def create_lenet():
# Six 3x3 filters, activation RELU
layer1 = ml.Conv(kernel_size=(6, 3, 3),
input_size=(batch_size, 1, 32, 32),
activation=ml.activation.ReLU())
# Max 2x2 subsampling
layer2 = ml.MaxPooling(kernel_size=(2, 2),
input_size=(batch_size, 6, 30, 30),
stride=(2, 2))
# Sixteen 3x3 filters, activation RELU
layer3 = ml.Conv(kernel_size=(16, 3, 3),
input_size=(batch_size, 6, 15, 15),
activation=ml.activation.ReLU())
# Max 2x2 subsampling
layer4 = ml.MaxPooling(kernel_size=(2, 2),
input_size=(batch_size, 16, 13, 13),
stride=(2, 2),
strict_stride_check=False)
# Full connection (16 * 6 * 6 -> 120), activation RELU
layer5 = ml.FullyConnected(weight_size=(120, 576),
input_size=(576, batch_size),
activation=ml.activation.ReLU())
# Full connection (120 -> 84), activation RELU
layer6 = ml.FullyConnected(weight_size=(84, 120),
input_size=(120, batch_size),
activation=ml.activation.ReLU())
# Full connection (84 -> 10), output layer
layer7 = ml.FullyConnectedSoftmax(weight_size=(10, 84),
input_size=(84, batch_size))
# Flattening layer necessary between layer 4 and 5
layer_flat = ml.Flat(input_size=(batch_size, 16, 6, 6))
layers = [layer1, layer2, layer3, layer4,
layer_flat, layer5, layer6, layer7]
return (ml.Net(layers), layers)
```
A proper training iteration is carried out in `train()`. Note that we pass a PyTorch optimizer to `net.backward()`. Joey will take care to use it for updating weights appropriately.
```
def train(net, input_data, expected_results, pytorch_optimizer):
outputs = net.forward(input_data)
def loss_grad(layer, expected):
gradients = []
for b in range(len(expected)):
row = []
for i in range(10):
result = layer.result.data[i, b]
if i == expected[b]:
result -= 1
row.append(result)
gradients.append(row)
return gradients
net.backward(expected_results, loss_grad, pytorch_optimizer)
```
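The `loss_grad` above subtracts 1 from the predicted probability of the expected class. Assuming `layer.result` holds softmax probabilities, this is the familiar gradient of the cross-entropy loss with respect to the softmax inputs, `p - one_hot(y)`; a standalone sketch:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def xent_grad(z, label):
    # gradient of -log(softmax(z)[label]) with respect to z
    g = softmax(z)
    g[label] -= 1
    return g

g = xent_grad(np.array([1.0, 2.0, 0.5]), 1)
assert np.isclose(g.sum(), 0.0)  # gradient components sum to zero
assert g[1] < 0                  # the true-class component is negative
```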
In this example, every batch will consist of 4 images and the training session will be capped at 100 iterations.
```
batch_size = 4
iterations = 100
```
Before starting training, we need to download MNIST data using PyTorch.
```
transform = transforms.Compose(
[transforms.Resize((32, 32)),
transforms.ToTensor(),
transforms.Normalize(0.5, 0.5)])
trainset = torchvision.datasets.MNIST(root='./mnist', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=False, num_workers=2)
classes = ('0', '1', '2', '3', '4', '5', '6', '7', '8', '9')
```
Afterwards, let's instantiate Joey's LeNet along with the SGD with momentum PyTorch optimizer.
```
devito_net, devito_layers = create_lenet()
optimizer = optim.SGD(devito_net.pytorch_parameters, lr=0.001, momentum=0.9)
```
We're almost ready! The last thing to do is saving our original parameters as they will be required for making later comparisons with PyTorch.
```
layer1_kernel = torch.tensor(devito_layers[0].kernel.data)
layer1_bias = torch.tensor(devito_layers[0].bias.data)
layer3_kernel = torch.tensor(devito_layers[2].kernel.data)
layer3_bias = torch.tensor(devito_layers[2].bias.data)
layer5_kernel = torch.tensor(devito_layers[5].kernel.data)
layer5_bias = torch.tensor(devito_layers[5].bias.data)
layer6_kernel = torch.tensor(devito_layers[6].kernel.data)
layer6_bias = torch.tensor(devito_layers[6].bias.data)
layer7_kernel = torch.tensor(devito_layers[7].kernel.data)
layer7_bias = torch.tensor(devito_layers[7].bias.data)
```
We can start the Joey training session now.
```
for i, data in enumerate(trainloader, 0):
images, labels = data
images = images.double()
train(devito_net, images, labels, optimizer)
if i == iterations - 1:
break
```
Afterwards, let's create a PyTorch equivalent of Joey's LeNet, train it using the same initial weights and data and compare the results.
```
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 6, 3)
self.conv2 = nn.Conv2d(6, 16, 3)
self.fc1 = nn.Linear(16 * 6 * 6, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:]
num_features = 1
for s in size:
num_features *= s
return num_features
net = Net()
net.double()
with torch.no_grad():
net.conv1.weight[:] = layer1_kernel
net.conv1.bias[:] = layer1_bias
net.conv2.weight[:] = layer3_kernel
net.conv2.bias[:] = layer3_bias
net.fc1.weight[:] = layer5_kernel
net.fc1.bias[:] = layer5_bias
net.fc2.weight[:] = layer6_kernel
net.fc2.bias[:] = layer6_bias
net.fc3.weight[:] = layer7_kernel
net.fc3.bias[:] = layer7_bias
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
criterion = nn.CrossEntropyLoss()
for i, data in enumerate(trainloader, 0):
images, labels = data
optimizer.zero_grad()
outputs = net(images.double())
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
if i == iterations - 1:
break
layers = [devito_layers[0], devito_layers[2], devito_layers[5], devito_layers[6], devito_layers[7]]
pytorch_layers = [net.conv1, net.conv2, net.fc1, net.fc2, net.fc3]
max_error = 0
index = -1
for i in range(5):
kernel = layers[i].kernel.data
pytorch_kernel = pytorch_layers[i].weight.detach().numpy()
kernel_error = abs(kernel - pytorch_kernel) / abs(pytorch_kernel)
bias = layers[i].bias.data
pytorch_bias = pytorch_layers[i].bias.detach().numpy()
bias_error = abs(bias - pytorch_bias) / abs(pytorch_bias)
error = max(np.nanmax(kernel_error), np.nanmax(bias_error))
print('layers[' + str(i) + '] maximum relative error: ' + str(error))
if error > max_error:
max_error = error
index = i
print()
print('Maximum relative error is in layers[' + str(index) + ']: ' + str(max_error))
```
As we can see, the maximum relative error is low enough to consider the training session in Joey numerically correct.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('D:/Chandrashekar S/New Volume/Machine learning/Kaggle Competion/titanic/test.csv',index_col=['PassengerId'])
df.head()
#pd.set_option('display.height', 1000)
pd.set_option('display.max_rows', 800)
pd.set_option('display.max_columns', 800)
pd.set_option('display.width', 1000)
df.head()
#df.info()
sns.heatmap(df.corr())
df.Cabin.str.get(0).value_counts()
sns.heatmap(df.isna())
df.Embarked.fillna(df.Embarked.mode()[0],inplace=True)
df["Age_tf"] = df["Age"]
df.head(2)
sns.set_style(style='whitegrid')
sns.boxplot("SibSp", "Age", hue="Embarked", data=df)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
def age_tf(sibsp, embarked, age):
    # fill missing ages for rows matching a (SibSp, Embarked) combination
    for x in df.index:
        if np.isnan(df.Age_tf[x]) and df.SibSp[x] == sibsp and df.Embarked[x] == embarked:
            df.Age_tf[x] = age
age_tf(0,'S',29)
age_tf(0,'C',30)
age_tf(0,'Q',27)
age_tf(1,'S',31)
age_tf(1,'C',28)
age_tf(1,'Q',33)
age_tf(2,'S',24)
age_tf(2,'C',5)
age_tf(2,'Q',44)
age_tf(3,'S',10)
age_tf(4,'S',7)
age_tf(4,'Q',6)
age_tf(5,'S',11)
df.info()
tr = df.Age_tf.isna()==True
Agena = df.loc[tr, ["Age_tf","SibSp",'Embarked']]
Agena.Age_tf.fillna(True,inplace=True)
sns.barplot('SibSp','Age_tf',hue="Embarked",data=Agena)
Agena
#sns.barplot("SibSp",'Age',data=df)
age_tf(8,'S',1)
sns.heatmap(df.isna())
df.drop(columns = ['Cabin'],inplace=True)
df.info()
#sns.jointplot(x=['Age','Fare'],y='Survived',data=df)
sns.heatmap(df.corr())
df.columns
#for x in df.columns:
# print(f'{x} : {df[x].value_counts()} \n')
df.Name
names = df['Name'].str.split(',',expand=True)
df['surname'] = names[0]
df['Names'] = names[1]
#df.drop(columns = ['Name'],inplace=True)
#df.rename(columns={'Names':'Name'},inplace=True)
Marriage_status = df['Names'].str.split('.',expand=True)
df['Marriage_status'] = Marriage_status[0]
df.head()
from sklearn.preprocessing import LabelEncoder,OneHotEncoder
LB = LabelEncoder()
df['Sex_tf'] = LB.fit_transform(df['Sex'])
Sex_OH = OneHotEncoder().fit_transform(df[['Sex_tf']]).toarray()
df['gender_male'] = Sex_OH[:,1]
df['gender_female'] = Sex_OH[:,0]
df.drop(columns=['Sex_tf'],inplace=True)
df.drop(columns=['Names'],inplace=True)
df.head(1)
df['Embarked_tf']= LB.fit_transform(df.Embarked)
OH_embarked = OneHotEncoder().fit_transform(df[['Embarked_tf']]).toarray()
#lb_results_df = pd.DataFrame(lb_results, columns=lb.classes_)
df['Embarked_S'] = OH_embarked[:,2]
df['Embarked_Q'] = OH_embarked[:,1]
df['Embarked_C'] = OH_embarked[:,0]
df.drop(columns=['Embarked_tf'],inplace=True)
df.head()
df.info()
df.Marriage_status.value_counts()
#for x in df.index:
#print(df.Marriage_status[x])
# if (df.Marriage_status[x] != " Mr"): # or " Miss" or " Mrs" or " Master"):
# print(df.Marriage_status[x])
#df.Marriage_status[x] = 'others'
# elif (df.Marriage_status[x] != " Miss"):
#df.Marriage_status[x] = 'others'
# elif (df.Marriage_status[x] != " Mrs"):
#df.Marriage_status[x] = 'others'
# elif (df.Marriage_status[x] != " Master"):
#df.Marriage_status[x] = 'others'
# pass
df = pd.get_dummies(df, columns=['Marriage_status'], prefix = ['Name_prefix'])
#print(df.Marriage_status[x])
#df.Marriage_status.value_counts()
df.info()
#df.surname.value_counts()
#df = pd.get_dummies(df, columns=['surname'], prefix = ['surname'])
df.head(3)
#codes, uniques= pd.factorize(df['surname'])
#df.head(3)
#sns.countplot('Embarked_tf' ,data=df_final, hue='Survived',orient='v')
df_final = df
df.drop(columns=['Name','Sex','Age','Ticket','Embarked','surname'],inplace=True)
df_final.head(2)
sns.heatmap(df_final.corr())
df.info()
#sns.boxplot("Age_tf","Fare",hue="Pclass",data=df)
df.loc[1044,:]
filt2 = (df["Pclass"]==3) & (df["Embarked_S"] ==1.0)& (df["gender_male"] ==1.0) & (df["SibSp"] ==0) & (df["Parch"] ==0)& (df["Name_prefix_ Mr"] ==1.0)
a = df.loc[filt2,["Fare","Age_tf"]]
sns.jointplot('Age_tf',"Fare",data=a)
df.Fare.fillna(7,inplace=True)
df.info()
df.to_csv("titanic_test_tf3.csv")
```
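As an aside, the `LabelEncoder`/`OneHotEncoder` pair used above for `Sex` and `Embarked` can be collapsed into a single `pd.get_dummies` call, as was already done for `Marriage_status`; a minimal standalone sketch:

```python
import pandas as pd

# toy column standing in for df['Embarked']
toy = pd.DataFrame({'Embarked': ['S', 'C', 'Q', 'S']})
dummies = pd.get_dummies(toy['Embarked'], prefix='Embarked')

# columns come out in sorted category order
assert list(dummies.columns) == ['Embarked_C', 'Embarked_Q', 'Embarked_S']
```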
## Machine Learning with Concrete Strength
Concrete strength is affected by factors such as water to cement ratio, raw material quality, the ratio of coarse or fine aggregate, concrete age, concrete compaction, temperature, relative humidity, and other factors during the curing of the concrete. The data includes the following information for 1030 concrete samples.
- **Input variables:**
- Cement: kg/m$^3$ mixture
- Blast Furnace Slag: kg/m$^3$ mixture
- Fly Ash: kg/m$^3$ mixture
- Water: kg/m$^3$ mixture
- Superplasticizer: kg/m$^3$ mixture
- Coarse Aggregate: kg/m$^3$ mixture
- Fine Aggregate: kg/m$^3$ mixture
- Age: Day (1~365)
- **Output variable:**
- Concrete compressive strength: MPa
```python
url = 'http://apmonitor.com/pds/uploads/Main/cement_strength.txt'
```

The full problem statement is on the [Machine Learning for Engineers course website](http://apmonitor.com/pds/index.php/Main/CementStrength).
### Import Packages
```
import numpy as np
import pandas as pd
from pandas_profiling import ProfileReport
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib import cm
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import plot_confusion_matrix
from sklearn.feature_selection import SelectKBest, f_classif, f_regression
# Import classifier models
from sklearn.linear_model import LogisticRegression # Logistic Regression
from sklearn.naive_bayes import GaussianNB # Naïve Bayes
from sklearn.linear_model import SGDClassifier # Stochastic Gradient Descent
from sklearn.neighbors import KNeighborsClassifier # K-Nearest Neighbors
from sklearn.tree import DecisionTreeClassifier # Decision Tree
from sklearn.ensemble import RandomForestClassifier # Random Forest
from sklearn.svm import SVC # Support Vector Classifier
from sklearn.neural_network import MLPClassifier # Neural Network
# Import regression models
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn import svm
import statsmodels.api as sm
```
### Import Data
```
url = 'http://apmonitor.com/pds/uploads/Main/cement_strength.txt'
data = pd.read_csv(url)
```
### Pair Plot of Data
### Divide Data between High and Low Strength
## Part 1: Data Visualization and Cleansing
### Summary Statistics
Generate summary information to [statistically describe the data](https://apmonitor.com/pds/index.php/Main/StatisticsMath).
Check for balanced classification dataset ('high' and 'low' strength should have about equal amounts)
### Convert String Categories (Text) to Binary (0 or 1)
One-hot encoding translates character labels into a binary representation (0 or 1) for classification. Investigate the data types with `data.dtypes`.
### Data Cleansing
There is one row that contains an outlier. Identify the outlier with boxplots.
View outliers
Remove rows that contain outliers.
Verify that the outliers are removed with another box plot.
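One way to flag outliers programmatically is the same 1.5×IQR rule a boxplot's whiskers use; a sketch on a toy column (the threshold convention is an assumption, adjust as needed):

```python
import pandas as pd

s = pd.Series([10, 12, 11, 13, 12, 200])   # 200 is the outlier
q1, q3 = s.quantile(0.25), s.quantile(0.75)
iqr = q3 - q1
mask = (s < q1 - 1.5 * iqr) | (s > q3 + 1.5 * iqr)
assert s[mask].tolist() == [200]
```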
### Data Correlation
Generate a heat map of the data correlation.
### Data Distributions and Pair Plot
## Part 2: Classification
Train and test a classifier to distinguish between high and low strength concrete. Test at least 8 classifiers of your choice. Recommend a best classifier among the 8 that are tested.
### Divide Input Features (X) and Output Label (y)
Divide the data into sets that contain the input features (X) and output label (y=`csMPa`). Save data feature columns with `X_names=list(data.columns)` and remove `strength` with `X_names.remove('strength')` and remove `csMPa` with `X_names.remove('csMPa')`.
### Data scaling
Scale the input features with a `StandardScaler` or a `MinMaxScaler`. Why do classifiers return an error if the output label is scaled with `StandardScaler`?
Answer: The output label should not be scaled because it needs to be categorical data as an integer instead of a continuous real number.
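A minimal sketch of scaling only the input features with `StandardScaler`:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
Xs = StandardScaler().fit_transform(X)

assert np.allclose(Xs.mean(axis=0), 0.0)   # zero mean per feature
assert np.allclose(Xs.std(axis=0), 1.0)    # unit variance per feature
```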
### Train / Test Split
Randomly select values that split the data into a train (80%) and test (20%) set by using the sklearn `train_test_split` function with `shuffle=True`.
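A minimal sketch of the split (the `random_state` value is arbitrary):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.arange(10)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=0)

assert len(X_train) == 8 and len(X_test) == 2
```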
### Evaluate the Best Features
Use `SelectKBest` to evaluate the best features for the classifier.
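A sketch of `SelectKBest` with the ANOVA F-statistic on a synthetic classification problem (the real features come from the cement data):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=100, n_features=8, n_informative=3,
                           random_state=0)

selector = SelectKBest(f_classif, k=3).fit(X, y)
print(selector.scores_.round(2))           # F-scores; higher = more useful
print(selector.get_support(indices=True))  # indices of the 3 best features
```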
### Train (fit) and Test Classification with Logistic Regression
### Train 8 Classifiers
Create 8 classifier objects and train.
### Classifier Evaluation
Report the confusion matrix on the test set for each classifier. Discuss the performance of each. A confusion matrix shows correct classification (diagonals) and incorrect classification (off-diagonals) groups from the test set. Generate a confusion matrix for each classifier.
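A pattern for training several classifiers and reporting each confusion matrix, shown with two of the eight classifiers and synthetic data; the loop extends to the remaining classifiers in the same way:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Two of the eight classifiers, as a pattern to extend.
for name, clf in [('Logistic Regression', LogisticRegression()),
                  ('K-Nearest Neighbors', KNeighborsClassifier())]:
    clf.fit(X_train, y_train)
    cm = confusion_matrix(y_test, clf.predict(X_test))
    print(name)
    print(cm)  # diagonals = correct, off-diagonals = misclassified
```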
## Part 3: Regression
Develop a regression model to predict Tension Strength (MPa). Compare the strength predicted by the regression model with the measured concrete strength.
### Scale Data
Scale `data` with `StandardScaler` or `MinMaxScaler`.
### Select Input Features (X) and Output Label (y)
Use the 8 concrete properties as the input features:
- Cement: kg/m$^3$ mixture
- Blast Furnace Slag: kg/m$^3$ mixture
- Fly Ash: kg/m$^3$ mixture
- Water: kg/m$^3$ mixture
- Superplasticizer: kg/m$^3$ mixture
- Coarse Aggregate: kg/m$^3$ mixture
- Fine Aggregate: kg/m$^3$ mixture
- Age: Day (1~365)
The output label is the `csMPa`.
- Concrete Strength (MPa)
Divide the data into sets that contain the input features (X) and output label (y=`csMPa`). Save data feature columns with `X_names=list(data.columns)[0:8]`.
### Select Best Features for Regression
### Split Data
Randomly select values that split the data into a train (80%) and test (20%) set.
### Regression Fit
Use 3 regression methods. Use Linear Regression, Neural Network (Deep Learning), and another regression method of your choice. Discuss the performance of each. Possible regression methods are:
- Linear Regression
- Neural Network (Deep Learning)
- K-Nearest Neighbors
- Support Vector Regressor
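A sketch of fitting three of these regressors on a synthetic regression problem (the real inputs are the 8 concrete properties); the hidden-layer size and iteration count for the neural network are illustrative choices, not prescribed values:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=200, n_features=8, noise=5.0, random_state=0)

models = {'Linear Regression': LinearRegression(),
          'Neural Network': MLPRegressor(hidden_layer_sizes=(20,),
                                         max_iter=2000, random_state=0),
          'K-Nearest Neighbors': KNeighborsRegressor()}
for name, m in models.items():
    m.fit(X, y)
    print(name, round(m.score(X, y), 3))  # R^2 on the training data
```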
### Validation
Report the correlation coefficient ($R^2$) for the train and test sets.
### Parity Plot
A parity plot is a scatter plot with predicted versus measured. A parity plot of the training and test data is a good way to see the overall fit of tension strength.
A joint plot shows two variables, with the univariate and joint distributions.
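A minimal parity-plot sketch with hypothetical measured strengths and stand-in predictions; a `seaborn.jointplot` of the same two arrays would add the univariate distributions:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend; not needed inside a notebook
import matplotlib.pyplot as plt

# Hypothetical measured strengths and stand-in model predictions.
rng = np.random.default_rng(0)
measured = rng.uniform(10, 80, size=50)
predicted = measured + rng.normal(0, 3, size=50)

fig, ax = plt.subplots()
ax.scatter(measured, predicted, label='test data')
lims = [measured.min(), measured.max()]
ax.plot(lims, lims, 'k--', label='parity (y = x)')  # perfect prediction
ax.set_xlabel('Measured strength (MPa)')
ax.set_ylabel('Predicted strength (MPa)')
ax.legend()
```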
```
###############################################################
# Script:
# testExc.py
# Usage:
# python testExc.py <input_file> <pass1_file> <output_file>
# Description:
# Build the prediction model based on training data
# Pass 2: prediction based on Sunday exceptions
# Authors:
# Jasmin Nakic, jnakic@salesforce.com
# Samir Pilipovic, spilipovic@salesforce.com
##############################################################
import sys
import numpy as np
from sklearn import linear_model
import joblib  # sklearn.externals.joblib was removed in scikit-learn 0.23
# Imports required for visualization (plotly)
import plotly.graph_objs as go
from plotly import __version__
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
# Script debugging flag
debugFlag = False
# Feature list for holiday hours and Sunday hours
hourHolidayCols = ["isHoliday",
"isHour0", "isHour1", "isHour2", "isHour3", "isHour4", "isHour5", "isHour6", "isHour7",
"isHour8", "isHour9", "isHour10", "isHour11", "isHour12", "isHour13", "isHour14", "isHour15",
"isHour16", "isHour17", "isHour18", "isHour19", "isHour20", "isHour21", "isHour22", "isHour23"]
hourSundayCols = ["isSunday",
"isHour0", "isHour1", "isHour2", "isHour3", "isHour4", "isHour5", "isHour6", "isHour7",
"isHour8", "isHour9", "isHour10", "isHour11", "isHour12", "isHour13", "isHour14", "isHour15",
"isHour16", "isHour17", "isHour18", "isHour19", "isHour20", "isHour21", "isHour22", "isHour23"]
# Add columns to the existing array and populate with data
def addColumns(dest, src, colNames):
# Initialize temporary array
tmpArr = np.empty(src.shape[0])
cols = 0
# Copy column content
for name in colNames:
if cols == 0: # first column
tmpArr = np.copy(src[name])
tmpArr = np.reshape(tmpArr,(-1,1))
else:
tmpCol = np.copy(src[name])
tmpCol = np.reshape(tmpCol,(-1,1))
tmpArr = np.append(tmpArr,tmpCol,1)
cols = cols + 1
return np.append(dest,tmpArr,1)
#end addColumns
# Get prediction using saved linear regression model
def getPredictions(rawData,calcData,modelName):
# Initialize array
X = np.zeros(rawData.shape[0])
X = np.reshape(X,(-1,1))
# Add columns for holidays by hour
X = addColumns(X,rawData,hourHolidayCols)
    # Interaction terms: holiday flag times each hour indicator (columns 2-25)
    for h in range(24):
        X[:, h + 2] = rawData["isHoliday"]*rawData["isHour" + str(h)]
    # Add columns for Sundays by hour
X = addColumns(X,rawData,hourSundayCols)
    # Interaction terms: Sunday flag times each hour indicator (columns 27-50)
    for h in range(24):
        X[:, h + 27] = rawData["isSunday"]*rawData["isHour" + str(h)]
XnoHS = np.zeros(rawData.shape[0])
XnoHS = (1-rawData["isHoliday"])*(1-rawData["isSunday"])*calcData["predHourWeek"]
XnoHS = np.reshape(XnoHS,(-1,1))
X = np.append(X,XnoHS,1)
if debugFlag:
print("X 0: ", X[0:5])
Y = np.copy(rawData["cnt"])
if debugFlag:
print("Y 0: ", Y[0:5])
model = joblib.load(modelName)
P = model.predict(X)
print("SCORE values: ", model.score(X,Y))
if debugFlag:
print("P 0-5: ", P[0:5])
return P
#end getPredictions
# Write predictions to the output file
def writeResult(output,rawData,calcData,p5):
# generate result file
result = np.array(
np.empty(rawData.shape[0]),
dtype=[
("timeStamp","|U19"),
("dateFrac",float),
("isHoliday",int),
("isSunday",int),
("cnt",int),
("predSimple",int),
("predTrig",int),
("predHourDay",int),
("predHourWeek",int),
("predHS",int)
]
)
result["timeStamp"] = rawData["timeStamp"]
result["dateFrac"] = rawData["dateFrac"]
result["isHoliday"] = rawData["isHoliday"]
result["isSunday"] = rawData["isSunday"]
result["cnt"] = rawData["cnt"]
result["predSimple"] = calcData["predSimple"]
result["predTrig"] = calcData["predTrig"]
result["predHourDay"] = calcData["predHourDay"]
result["predHourWeek"] = calcData["predHourWeek"]
result["predHS"] = p5
if debugFlag:
print("R 0-5: ", result[0:5])
hdr = "timeStamp\tdateFrac\tisHoliday\tisSunday\tcnt\tpredSimple\tpredTrig\tpredHourDay\tpredHourWeek\tpredHS"
np.savetxt(output,result,fmt="%s",delimiter="\t",header=hdr,comments="")
#end writeResult
# Start
inputFileName = "test_data.txt"
hourlyFileName = "test_hourly.txt"
outputFileName = "test_exc.txt"
# All input columns - data types are strings, float and int
inputData = np.genfromtxt(
inputFileName,
delimiter='\t',
names=True,
dtype=("|U19","|U10",int,float,int,float,float,int,float,float,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int
)
)
# timeStamp dateFrac isHoliday isSunday cnt predSimple predTrig predHourDay predHourWeek
hourlyData = np.genfromtxt(
hourlyFileName,
delimiter='\t',
names=True,
dtype=("|U19",float,int,int,int,int,int,int,int)
)
PHS = getPredictions(inputData,hourlyData,"modelExc")
writeResult(outputFileName,inputData,hourlyData,PHS)
# Load results from file generated above using correct data types
results = np.genfromtxt(
outputFileName,
dtype=("|U19",float,int,int,int,int,int,int,int,int),
delimiter='\t',
names=True
)
# Examine result data
print("Shape:", results.shape)
print("Columns:", len(results.dtype.names))
print(results[1:5])
# Generate chart with predictions based on training data (using plotly)
print("Plotly version", __version__) # requires plotly version >= 1.9.0
init_notebook_mode(connected=True)
set1 = go.Bar(
x=results["dateFrac"],
y=results["cnt"],
# marker=dict(color='blue'),
name='Actual'
)
set2 = go.Bar(
x=results["dateFrac"],
y=results["predHS"],
# marker=dict(color='crimson'),
opacity=0.6,
name='Prediction'
)
barData = [set1, set2]
barLayout = go.Layout(barmode='group', title="Prediction vs. Actual")
fig = go.Figure(data=barData, layout=barLayout)
iplot(fig)
```
Generate Isoform report for Summary.html #212
Wrapper for `isoform_report.R`, based on this version with slight modifications:
https://github.com/UCSC-Treehouse/isoform_report/blob/ca339b96e0a9dda1281b0b6b064af29dfd4a3e70/isoform_report.R
Dependencies - the following R libraries are required to run this script:
- dplyr >= 0.8.0
- tidyverse >= 1.3.0
- treemapify (2.5.3)
- Sushi (1.12.0)
Sub-dependency notes
- Sushi requires BiocManager (1.30.10) to install
- tidyverse 1.3.0 has a dependency vctrs; as of 2020/01/15 the latest vctrs (0.2.1) isn't installing, so vctrs 0.2.0 is used instead.
Outputs:
Output files are placed into an isoform-report subdirectory. For each gene:
- {gene}\_isoform_expression.tsv
- Expressed\_{gene}\_isoforms\_in\_{sample_id}.pdf
- Expressed\_{gene}\_isoforms\_in\_{sample_id}.png
- Frequency\_of\_{gene}\_transcript\_biotypes\_in\_{sample_id}.png
```
import os
import glob
import json
import logging
import base64
import pandas as pd
from PIL import Image
from string import Template
from collections import OrderedDict
from distutils.version import LooseVersion
with open("conf.json","r") as conf:
c=json.load(conf)
sample_id = c["sample_id"]
print("Running on sample: {}".format(sample_id))
logging.basicConfig(**c["info"]["logging_config"])
def and_log(s):
logging.info(s)
return s
j = {}
def load_notebook_output(notebook_num):
outputfile = "{}.json".format(notebook_num)
try:
with open(outputfile, "r") as f:
result = json.load(f)
return result
except IOError:
print("Error! Couldn't find output of previous notebook at {}".format(outputfile))
return {}
and_log("8.25: Isoform Report")
notebook_name = "8.25"
# Locate rsem-isoforms.results
# pipeline_name = glob-able basedir within secondary (format = PIPELINE-NAME-*)
# internal_path = path to file within that pipeline name
def locate_in_secondary(pipeline_name, internal_path):
all_pipeline_versions = sorted(glob.glob(os.path.join(c["dir"]["secondary"], pipeline_name)),
key=LooseVersion)
# Get the file, or most recent if there are more than one.
if len(all_pipeline_versions) >= 1:
return os.path.join(all_pipeline_versions[-1], internal_path)
else:
return False
rsem_isoforms_path = locate_in_secondary(
"ucsc_cgl-rnaseq-cgl-pipeline-*", os.path.join("RSEM", "rsem_isoforms.results"))
j["found_isoforms_file"] = os.access(rsem_isoforms_path, os.R_OK) # we'll only expect results if file can be read
# get genes of interest as comma-separated
nb8_results = load_notebook_output("8")
leads = nb8_results["automated_leads_identified"]
genes_of_interest_list = { k:v for (k, v) in leads["results"].items() if
leads["assay"][k] == "druggableUpOutlier"}.values()
genes_of_interest = ",".join(genes_of_interest_list)
# Parameters for Rscript
sid = sample_id
genes = genes_of_interest
enshugo = c["ref_file"]["ensembl_hugo_mapping_file"]
rir = rsem_isoforms_path
gtf = c["ref_file"]["treehouse_druggable_gencode"] # Treehouse_druggable_gencode.v23.annotation.gtf.gz
outdir = c["dir"]["isoform_report_plots_dir"] # isoform-report
print("Making isoform report for genes {}".format(genes_of_interest))
# make output dir
try:
os.makedirs(outdir)
print("Made output dir {}".format(outdir))
except OSError as e:
print("Found error, but perhaps the dir simply already exists?\nError: {}".format(e))
if not os.path.isdir(outdir):
raise
```
Run the Rscript. In the case where there is no rsem_isoforms file, it will fail to generate output files, which will be caught in the following steps.
```
%%script Rscript - "$sid" "$genes" "$enshugo" "$rir" "$gtf" "$outdir"
params <-
list(date = structure(18249, class = "Date"))
library(tidyverse)
library(Sushi)
library(knitr)
library(treemapify)
library(RColorBrewer)
f_RMD <- isTRUE(getOption('knitr.in.progress')) | interactive()
# parameters when run as a script
if (! f_RMD) {
args<-commandArgs(TRUE)
sample_of_interest <- args[1]
genes_of_interest_hugo <- strsplit(args[2], ",") %>% unlist
EnsGeneID_Hugo_Observed_Conversions_file <- args[3] # "EnsGeneID_Hugo_Observed_Conversions.txt"
rsem_isoforms.results_file <- args[4] # "rsem_isoforms.results"
gtf_file <- args[5] # "Treehouse_druggable_gencode.v23.annotation.gtf.gz"
output_dir <- args[6] # "isoform-report"
}
# parameters when run interactively or knitted
if (f_RMD) {
genes_of_interest_hugo <- c("KIT", "PDGFRA")
sample_of_interest <- "TH34_1349_S02"
EnsGeneID_Hugo_Observed_Conversions_file <- "EnsGeneID_Hugo_Observed_Conversions.txt"
rsem_isoforms.results_file <- "rsem_isoforms.results"
gtf_file <- "Treehouse_druggable_gencode.v23.annotation.gtf.gz"
output_dir <- "isoform-report"
}
# if (f_RMD) {
print(paste(genes_of_interest_hugo, collapse = "-"))
print(sample_of_interest)
# }
ens_hugo_conversions <- read_tsv(EnsGeneID_Hugo_Observed_Conversions_file) %>% na.omit
genes_of_interest_ensembl <- ens_hugo_conversions$EnsGeneID[
ens_hugo_conversions$HugoID %in% genes_of_interest_hugo]
iso_results <- read_tsv(rsem_isoforms.results_file)
these_genes_iso_results <- iso_results %>% filter(gene_id %in% genes_of_interest_ensembl)
gtf_colnames <- c("seqname", "source", "feature", "start", "end", "score", "strand", "frame", "attribute")
# gtf_file <- "gencode.v23.annotation.gtf.gz"
# gtf_file <- "Treehouse_druggable_gencode.v23.annotation.gtf.gz"
gencode_v23 <- read_tsv(gtf_file, comment = "#", col_names = gtf_colnames)
gencode_v23_these_genes <- gencode_v23 %>%
mutate(gene_id = gsub("\".*$", "",
gsub("^gene_id \"", "\\1", attribute)),
transcript_id = gsub("^.*transcript_id \"([A-Z0-9\\.]*)\".*$",
"\\1", attribute),
feature_length = end - start
) %>%
filter(gene_id %in% genes_of_interest_ensembl)
gencode_v23_these_genes_transcripts <- gencode_v23_these_genes %>%
filter(feature == "transcript")
KVsep <- fixed("; ") #key-value separator
Vsep <- fixed(" ") #value separator
gencode_v23_these_genes_transcript_minutia <- gencode_v23_these_genes_transcripts %>%
mutate(KVpairs = str_split(attribute, KVsep)) %>%
unnest(KVpairs) %>%
separate(KVpairs, into = c("key", "value"), Vsep) %>%
filter( !(key == "tag" & value !="basic")) %>% # keep tag only if basic
filter(! key %in% c("transcript_id", "gene_id")) %>% # these value were already extracted
mutate(value = gsub("\"", "", value)) %>%
spread(key, value)
these_genes_iso_results_anno <- these_genes_iso_results %>%
left_join(gencode_v23_these_genes_transcript_minutia %>%
dplyr::select (-gene_id),
by="transcript_id") %>%
mutate(transcript_id = fct_reorder(transcript_id, IsoPct))
n_transcripts_to_analyze <- 10
top_iso_results_anno <- these_genes_iso_results_anno %>%
top_n(n_transcripts_to_analyze, IsoPct)
# Error in .f(.x[[i]], ...) : object 'transcript_type' not found
# Calls: %>% ... <Anonymous> -> vars_select_eval -> map_if -> map -> .f
#Execution halted
exon_locations <- gencode_v23_these_genes %>%
filter(feature %in% c("exon", "UTR")) %>%
left_join(top_iso_results_anno %>%
dplyr::select(transcript_id, IsoPct, TPM, transcript_type, transcript_name, hugo_id = gene_name),
by=c("transcript_id")) %>%
mutate(score = IsoPct,
transcript_label = paste0(transcript_name, " (",
#IsoPct, "%, ", transcript_type, ")")) %>%
IsoPct, "%)")) %>%
dplyr::select(chrom = seqname, start, stop = end,
gene = transcript_id, score,
strand, type = feature, IsoPct, TPM, transcript_label, transcript_name, hugo_id) %>%
arrange(desc(IsoPct))
plot_gene <- function(submitted_bed_data, buffer_size = 5e2, plot_title = ""){
bed_data = data.frame(submitted_bed_data)
chrom <- bed_data$chrom[1]
chromstart = min(bed_data$start) - buffer_size
chromend = max(bed_data$stop) + buffer_size
# Only colorby if there is more than one unique score or it will crash
if( length(unique(bed_data$score))==1 ){
pg = plotGenes(bed_data,chrom,chromstart,chromend,
#colorby=log10(bed_data$score+0.001),
#colorby=bed_data$score,
#colorbycol= SushiColors(5),colorbyrange=c(0,1.0),
labeltext=TRUE,maxrows=50,height=0.4,plotgenetype="box",
packrow = FALSE
)
} else {
pg = plotGenes(bed_data,chrom,chromstart,chromend,
#colorby=log10(bed_data$score+0.001),
colorby=bed_data$score,
#colorbycol= SushiColors(5),colorbyrange=c(0,1.0),
labeltext=TRUE,maxrows=50,height=0.4,plotgenetype="box",
packrow = FALSE
)
}
labelgenome( chrom, chromstart,chromend,n=3,scale="Mb")
# note: add legend has to be hand-placed for each plot, so I've omitted it here
title(main = plot_title, sub = "Colored by isoform percent, also reported in label")
}
multi_plot_gene <- function(bed_data) {
# bed_data <- t5
this_title <- paste("Expressed", bed_data$hugo_id[1], "isoforms in", sample_of_interest)
base_filename <- gsub(" ", "_", this_title)
## If using RMD to generate html output, make plot
if( f_RMD ) plot_gene (bed_data, plot_title = this_title)
## If scripted, make plots in output files
if( ! f_RMD ) {
## Make PDF (small file size, can be endlessly enlarged, inconvenient to embed in html)
pdf(file = file.path(output_dir, paste0(base_filename, ".pdf")),
width = 8, height = 4)
plot_gene (bed_data, plot_title = this_title)
dev.off()
## Make high res PNG (large file size, convenient to embed in html)
png(file = file.path(output_dir, paste0(base_filename, ".png")),
width = 12, height = 6, units = "in", res = 600)
plot_gene (bed_data, plot_title = this_title)
dev.off()
}
}
expressed_transcripts <- exon_locations %>%
mutate(gene = transcript_label) %>%
dplyr::filter(TPM > 0)
expressed_transcripts %>%
group_by(hugo_id) %>%
group_split %>%
lapply(multi_plot_gene)
transcript_biotypes <- c("protein_coding", "processed_transcript", "retained_intron", "processed_pseudogene", "nonsense_mediated_decay", "transcribed_processed_pseudogene")
# biotype_color_codes <- tibble(transcript_biotypes, brewer.pal(12, "Set1")[1:length(transcript_biotypes)])
biotype_color_codes <- tibble(transcript_biotype = transcript_biotypes, color_code = brewer.pal(length(transcript_biotypes), "Set1"))
transcript_biotype_colors <- biotype_color_codes$color_code
names(transcript_biotype_colors) <- biotype_color_codes$transcript_biotype
plot_biotype_frequency <- function(this_gene_iso_results_anno){
biotype_freq <- this_gene_iso_results_anno %>%
group_by(transcript_type) %>%
summarize(total_isoform_pct_for_type = sum(IsoPct)) %>%
mutate(biotype_label = paste0(transcript_type, " (", total_isoform_pct_for_type, "%)"))
this_title <- paste("Frequency of", this_gene_iso_results_anno$gene_name[1] ,"transcript biotypes in", sample_of_interest)
base_filename <- gsub(" ", "_", this_title)
ggplot(biotype_freq,
aes(fill = transcript_type,
area = total_isoform_pct_for_type,
label = biotype_label)) +
geom_treemap() +
geom_treemap_text(colour = "white",
place = "centre") +
labs(title = this_title) +
scale_fill_manual(values = transcript_biotype_colors)
ggsave(file.path(output_dir, paste0(base_filename, ".png")), width=5, height = 5)
}
these_genes_iso_results_anno %>%
group_by(gene_name) %>%
group_split %>%
lapply(plot_biotype_frequency)
output_table <- these_genes_iso_results_anno %>%
dplyr::filter(TPM > 0) %>%
mutate(log2TPM1 = round(log2(TPM +1),2)) %>%
dplyr::select(transcript_name, length, log2TPM1, strand, IsoPct, transcript_type, transcript_id, gene_name) %>%
arrange(desc(IsoPct)) %>%
group_by(gene_name)
if(f_RMD){
list_of_output_tables <- output_table %>%
group_split
for(i in list_of_output_tables) {
print(kable(x = i))
}
}
for_silent_output <- output_table %>%
group_split %>%
lapply(function(x) {write_tsv(x, file.path(output_dir, paste0(x$gene_name[1], "_isoform_expression.tsv")))})
def image_to_json(path, filename):
try:
with open(os.path.join(path,filename), "rb") as f:
return base64.b64encode(f.read())
except IOError:
print("Couldn't read {}/{}; skipping".format(path, filename))
return False
def downscale_png(png_path):
try:
png_to_resize = Image.open(png_path)
png_to_resize.thumbnail((1080,540))
png_to_resize.save(png_path, "PNG")
return True
except IOError:
print("Couldn't resize image at {}; skipping".format(png_path))
return False
expressed_isoforms_png = Template("Expressed_${gene}_isoforms_in_${sample_id}.png")
# We also generate Expressed_${gene}_isoforms_in_${sample_id}.pdf"
frequency_png = Template("Frequency_of_${gene}_transcript_biotypes_in_${sample_id}.png")
expression_tsv = Template("${gene}_isoform_expression.tsv")
# Downscale expressed isoform PNGs and import PNGs and table into JSON file
# any file not found will be imported as False
j["isoform_results"] = {}
for gene in genes_of_interest_list:
j["isoform_results"][gene]={}
png_path = os.path.join(outdir,
expressed_isoforms_png.substitute(gene=gene, sample_id=sample_id))
downscale_png(png_path)
# Load images into JSON - will be False if file not found
j["isoform_results"][gene]["expressed_isoforms_img_data"]=image_to_json(outdir,
expressed_isoforms_png.substitute(gene=gene, sample_id=sample_id))
j["isoform_results"][gene]["transcript_biotypes_img_data"]=image_to_json(outdir,
frequency_png.substitute(gene=gene, sample_id=sample_id))
try:
isoform_table_file=os.path.join(outdir,expression_tsv.substitute(gene=gene))
isoform_table_json=json.loads(
pd.read_csv(isoform_table_file, delimiter="\t", dtype="str", na_filter=False
).to_json(orient="records"),object_pairs_hook=OrderedDict)
except IOError:
print("Couldn't read {}; skipping".format(isoform_table_file))
isoform_table_json = False
j["isoform_results"][gene]["isoform_table"]=isoform_table_json
# Store order of isoform keys for proper display in summary
j["isoform_table_key_order"]=[
"transcript_name",
"length",
"log2TPM1",
"strand",
"IsoPct",
"transcript_type",
"transcript_id",
"gene_name"
]
with open("{}.json".format(notebook_name), "w") as jsonfile:
json.dump(j, jsonfile, indent=2)
print("Done.")
```
# 📃 Solution for Exercise M6.04
The aim of this exercise is to:
* verify if a GBDT tends to overfit if the number of estimators is not
appropriate as previously seen for AdaBoost;
* use the early-stopping strategy to avoid adding unnecessary trees, to
get the best statistical performance.
We will use the California housing dataset to conduct our experiments.
```
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
data, target = fetch_california_housing(return_X_y=True, as_frame=True)
target *= 100 # rescale the target in k$
data_train, data_test, target_train, target_test = train_test_split(
data, target, random_state=0, test_size=0.5)
```
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p class="last">If you want a deeper overview regarding this dataset, you can refer to the
Appendix - Datasets description section at the end of this MOOC.</p>
</div>
Similarly to the previous exercise, create a gradient boosting decision tree
and create a validation curve to assess the impact of the number of trees
on the statistical performance of the model. Use the mean absolute error
to assess the statistical performance of the model.
```
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import validation_curve
gbdt = GradientBoostingRegressor()
param_range = np.unique(np.logspace(0, 1.8, num=30).astype(int))
train_scores, test_scores = validation_curve(
gbdt,
data_train,
target_train,
param_name="n_estimators",
param_range=param_range,
scoring="neg_mean_absolute_error",
n_jobs=-1,
)
train_errors, test_errors = -train_scores, -test_scores
import matplotlib.pyplot as plt
plt.errorbar(
param_range,
train_errors.mean(axis=1),
yerr=train_errors.std(axis=1),
label="Training score",
)
plt.errorbar(
param_range,
test_errors.mean(axis=1),
yerr=test_errors.std(axis=1),
label="Cross-validation score",
)
plt.legend()
plt.ylabel("Mean absolute error in k$\n(smaller is better)")
plt.xlabel("# estimators")
_ = plt.title("Validation curve for GBDT regressor")
```
Unlike AdaBoost, the gradient boosting model will always improve when
increasing the number of trees in the ensemble. However, it will reach a
plateau where adding new trees will just make fitting and scoring slower.
To avoid adding unnecessary trees, gradient boosting offers an
early-stopping option. Internally, the algorithm uses an out-of-sample
set to compute the statistical performance of the model at each addition of a
tree. Thus, if the statistical performance is not improving for several
iterations, it stops adding trees.
Now, create a gradient-boosting model with `n_estimators=1000`. This number
of trees will be too large. Change the parameter `n_iter_no_change` such
that the gradient boosting fitting will stop after adding 5 trees that do not
improve the overall statistical performance.
```
gbdt = GradientBoostingRegressor(n_estimators=1000, n_iter_no_change=5)
gbdt.fit(data_train, target_train)
gbdt.n_estimators_
```
We see that the number of trees used is far below 1000 with the current
dataset. Training the GBDT with the entire 1000 trees would have been
useless.
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
import sys
sys.path.append('../../../src/')
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
from models.chen2017.transforms import *
import datasets.divahisdb as diva
import experiment.data as exd
from datasets.array import Tiles
import math
import torch
env = exd.Environment()
diva_dataset = diva.HisDBDataset(env.dataset('DIVA-HisDB'),gt=True)
tile_data = Tiles(env.dataset('Chen2017_np_tiles_balanced'))
tile_data.y
class_count = np.unique(tile_data.y, return_counts=True)[1]
# class_count / sum(class_count)
class_count
n_label_max = sorted(np.unique(tile_data.data['y'], return_counts=True)[1])[-2]
idx = np.where(tile_data.y == 0)[0]
np.random.shuffle(idx)
idx = idx[:n_label_max]
idx = np.append(idx,np.where(tile_data.y == 1)[0][:n_label_max])
idx = np.append(idx,np.where(tile_data.y == 2)[0][:n_label_max])
idx = np.append(idx,np.where(tile_data.y == 3)[0][:n_label_max])
# tile_data.x[(idx[:10],)].shape
tile_data.y[idx]
np.unique(tile_data.y[idx], return_counts=True)
# idx
# n_label_max
i = 6010
plt.imshow(tile_data[i][0],'gray')
tile_data[i][1]
std = np.std(tile_data.x)
mean = np.mean(tile_data.x)
mean, std
type(tile_data.data['x'][2])
tile_data.data.keys()
tile_data[i][0]
from models.chen2017.chennet import ChenNet as Model
model = Model(n_classes=4, in_channels=1, layers=1)
np.histogram(model.conv.conv0.weight.data)
model.conv
tile_data.y
np.sum(tile_data.y !=np.random.randint(0,4, size=tile_data.y.shape)) / len(tile_data.y)
import json
import os
from pathlib import Path
import torch
import torchvision
import torchvision.transforms as transforms
from inferno.trainers.basic import Trainer
from inferno.trainers.callbacks.logging.tensorboard import TensorboardLogger
from models.chen2017.chennet import ChenNet as Model
import datasets.array as array
from experiment.data import Environment, TrainLog
env = Environment()
dataset_name = 'Chen2017_np_tiles_balanced'
dataset_path = env.dataset(dataset_name)
mean =168.61987390394304
std =56.83193208713197
transform = transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize((mean, mean, mean), (std,std,std))
])
train_set = array.Tiles(dataset_path, transforms=transform)
test_set = array.Tiles(dataset_path, train=False, transforms=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32,
shuffle=True, num_workers=2)
it = iter(train_loader)
batch = next(it)
# plt.imshow(batch[0][0,0])
batch[1][0]
hist, bins = np.histogram(batch[0][:,0],bins=100)
# batch[0][:,0].shape
# plt.bar(hist[1],hist[0])
width = 0.7 * (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) / 2
plt.bar(center, hist, align='center', width=width)
# plt.ylim([0,50])
plt.show()
loaded = Trainer()
loaded = loaded.load(from_directory=str(env.models_folder/ 'ChenNet'/'Chen2017_np_tiles_balanced'/'trained'/'ChenNet4_4_01522891722'))
# model = Model(n_classes=4, in_channels=1, layers=2)
act = loaded.model(torch.autograd.Variable(batch[0]).cuda())
# batch[1]
act
loss = torch.nn.CrossEntropyLoss()
loss(act, torch.autograd.Variable(batch[1]).cuda())
!ls /media/jakob/bigdata/models/thesis/ChenNet/Chen2017_np_tiles_balanced/trained/ChenNet2_4_01522868516/
img_set = Tiles(dataset_path)
img_set[14000][0]
import itertools as it
path = env.dataset('tile_img')
n = 6000
positions = np.array([],dtype=np.int)
for i in range(4):
pos = np.where(img_set.y == i)[0]
np.random.shuffle(pos)
pos = pos[:n]
positions = np.concatenate((positions, pos))
img_set.y[(positions,)]
for i in positions:
img, y = img_set[i]
class_folder = path / str(y)
class_folder.mkdir(exist_ok=True,parents=True)
img.save(class_folder / 'img{}.jpg'.format(i))
path
len(img_set.y)
path = env.dataset('tile_img')
env.config
list(range(1))
```
```
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
#Data can be downloaded from repository
df = pd.read_excel("/Users/guneykan/Desktop/PS1Data.xlsx", index_col=0)
df.head()
# Calculate market return from risk premium; risk-free is given as "1+risk-free", which is why we subtract 1
df["Rm"] = df["Rm-r"] + df["r"] - 1
# Mean Risk Premium
hist_Rp = np.mean(df["Rm-r"])
# Mean Risk-Free
hist_Rf = np.mean(df["r"])
# Gamma represent risk avereness of the investors
gamma = np.arange(2, 101)
# Given the utility function, and definition of the stochastic discount factor, check the report for the details
df["m"] = 1/df["dc"]
# Holders
inverse_dc = np.array(df["m"])
m_gamma = np.arange(6237, dtype=np.float64).reshape(63, 99)
cov_Rm_m = np.arange(99, dtype=np.float64)
c_capm_Rp = np.arange(99, dtype=np.float64)
c_capm_Rf = np.arange(99, dtype=np.float64)
hist_rp_array = np.full((99,), hist_Rp)
hist_rf_array = np.full((99,), hist_Rf)
# Applying C-CAPM formula
for i in range(99):
    m_gamma[:, i] = np.power(inverse_dc, gamma[i])  # SDF: m = dc^(-gamma)
cov_Rm_m[i]=np.cov(m_gamma[:, i], df["Rm"])[0][1]
c_capm_Rp[i]= -1*cov_Rm_m[i]*hist_Rf
c_capm_Rf[i]= 1/np.mean(m_gamma[:, i])
plt.figure(figsize=(20,10))
plt.title("C-CAPM R_p for each Gamma", size=20)
plt.xlabel('Gamma', size=15)
plt.ylabel('R_p', size=15)
plt.plot(c_capm_Rp, label = "C-CAPM R_p")
plt.plot(hist_rp_array, label = "Historical R_p")
plt.legend(prop={'size': 15})
plt.figure(figsize=(20,10))
plt.title("C-CAPM R_f for each Gamma", size=20)
plt.xlabel('Gamma', size=15)
plt.ylabel('R_f', size=15)
plt.plot(c_capm_Rf, label = "C-CAPM R_f")
plt.plot(hist_rf_array, label = "Historical R_f")
plt.legend(prop={'size': 15})
correspond_rf = gamma[np.where(np.absolute(c_capm_Rf-hist_Rf) == np.amin(np.absolute(c_capm_Rf-hist_Rf)))[0]]
correspond_rp = gamma[np.where(np.absolute(c_capm_Rp-hist_Rp) == np.amin(np.absolute(c_capm_Rp-hist_Rp)))[0]]
print("gamma that corresponds to hist. Rp:", correspond_rp)
print("gamma that corresponds to hist. Rf:", correspond_rf)
m_gamma_abel = np.arange(62*99, dtype=np.float64).reshape(62, 99)
cov_Rm_m_abel = np.arange(99, dtype=np.float64)
c_capm_Rp_abel = np.arange(99, dtype=np.float64)
c_capm_Rf_abel = np.arange(99, dtype=np.float64)
# Applying C-CAPM with the Abel's utility function
for i in range(99):
for t in range(62):
        m_gamma_abel[t, i] = np.power(inverse_dc[t], 1-gamma[i])*np.power(inverse_dc[t+1], gamma[i])
cov_Rm_m_abel[i]=np.cov(m_gamma_abel[:, i], df["Rm"][1:])[0][1]
c_capm_Rp_abel[i]= -1*cov_Rm_m_abel[i]*hist_Rf
c_capm_Rf_abel[i]= 1/np.mean(m_gamma_abel[:, i])
plt.figure(figsize=(20,10))
plt.title("C-CAPM-Abel R_p for each Gamma", size=20)
plt.xlabel('Gamma', size=15)
plt.ylabel('R_p', size=15)
plt.plot(c_capm_Rp_abel, label = "C-CAPM R_p_Abel")
plt.plot(hist_rp_array, label = "Historical R_p")
plt.legend(prop={'size': 15})
plt.figure(figsize=(20,10))
plt.title("C-CAPM-Abel R_f for each Gamma", size=20)
plt.xlabel('Gamma', size=15)
plt.ylabel('R_p', size=15)
plt.plot(c_capm_Rf_abel, label = "C-CAPM R_f_Abel")
plt.plot(hist_rf_array, label = "Historical R_f")
plt.legend(prop={'size': 15})
correspond_rf = gamma[np.where(np.absolute(c_capm_Rf_abel-hist_Rf) == np.amin(np.absolute(c_capm_Rf_abel-hist_Rf)))[0]]
correspond_rp = gamma[np.where(np.absolute(c_capm_Rp_abel-hist_Rp) == np.amin(np.absolute(c_capm_Rp_abel-hist_Rp)))[0]]
```
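As a hedged aside, the two pricing relations applied above can be illustrated end-to-end with synthetic data (the series below are made up, not the PS1 data; note also that the notebook scales the covariance by the historical risk-free rate rather than the implied one):

```
import numpy as np

# Synthetic stand-ins for the PS1 series (assumptions, for illustration only)
rng = np.random.default_rng(0)
dc = 1.02 + 0.02 * rng.standard_normal(63)  # gross consumption growth
Rm = 1.07 + 0.15 * rng.standard_normal(63)  # gross market return
gamma = 10.0

m = dc ** (-gamma)                            # stochastic discount factor m = dc^(-gamma)
c_capm_Rf = 1 / np.mean(m)                    # implied risk-free rate: Rf = 1 / E[m]
c_capm_Rp = -c_capm_Rf * np.cov(m, Rm)[0][1]  # implied premium: E[Rm - Rf] = -Rf * cov(m, Rm)
print(c_capm_Rf, c_capm_Rp)
```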
| github_jupyter |
# Logarithm
Here we analyse how accurate the approximation functions for the logarithm are.
We compare two methods:
- Newton-Raphson
- 6th-order Householder
We show how they perform in the context of encrypted computation, show that the 6th-order Householder method is better suited, and discuss how to improve the initialization of this method.
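In symbols, both methods can be read as root-finding on $f(y) = e^y - x$, whose root is $y = \ln x$. Writing $h = 1 - x e^{-y_n}$, the plain Newton-Raphson step is

$$y_{n+1} = y_n - \frac{f(y_n)}{f'(y_n)} = y_n - 1 + x e^{-y_n} = y_n - h,$$

while the 6th-order Householder-style step applies the truncated series of $\ln(1-h)$ (these are the coefficients $1, \tfrac{1}{2}, \dots, \tfrac{1}{6}$ that appear in the code later in this notebook):

$$y_{n+1} = y_n - \left(h + \frac{h^2}{2} + \frac{h^3}{3} + \frac{h^4}{4} + \frac{h^5}{5} + \frac{h^6}{6}\right)$$

The implementations benchmarked below additionally replace the exact $e^{-y}$ with a limit approximation, which is where the encrypted-computation constraints enter.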
### Define a benchmark method
```
import os, sys
sys.path.insert(1, os.path.join(sys.path[0], '..'))
import torch as th
import matplotlib.pyplot as plt
import numpy as np
def benchmark(real_func, approx_func, interval, n_points=100, approx_kwargs={}, forward_transformer=lambda x:x, backward_transformer=lambda x:x):
"""
Benchmark an approximation function compared to an exact function.
Compute and print the relative divergence
Args:
real_func: exact reference function
approx_func: approximation function to benchmark
interval: (start, stop) tuple defining the evaluation range
n_points: number of evaluation points in the interval
approx_kwargs: optional kwargs to provide to the approximation function
forward_transformer: optional input transformation to apply before calling approx_func
backward_transformer: optional output transformation to apply after calling approx_func
"""
start, stop = interval
points = np.linspace(start, stop, num=n_points)
real_values = []
approx_values = []
for x in points:
x = th.tensor([x])
real_value = real_func(x)
real_values.append(real_value.item())
x_syft = forward_transformer(x)
approx_value_syft = approx_func(x_syft, **approx_kwargs)
approx_value = backward_transformer(approx_value_syft)
approx_values.append(approx_value.item())
plt.figure(figsize=(15,4))
plt.subplot(121, title="Real and approximate logarithm")
real_values = np.array(real_values)
approx_values = np.array(approx_values)
plt.plot(points, real_values)
plt.plot(points, approx_values)
plt.subplot(122, title="Relative error")
norm_diff = 2 * np.abs(real_values - approx_values)/np.abs(real_values + approx_values)
plt.plot(points, norm_diff)
plt.show()
```
## 1. Using the Newton Raphson method
```
from funcs import log_newton, log_householder
```
## 1.A Approximation alone
We analyse here the loss incurred by the approximation using only normal pytorch tensors
```
if not hasattr(th, 'native_exp'):
th.native_exp = th.exp
def hook_exp(x, **kwargs):
return th.native_exp(x)
th.exp = hook_exp
th.Tensor.refresh = lambda x:x
benchmark(
th.log,
log_newton,
interval = (3, 15),
approx_kwargs={'iterations': 3}
)
```
This is great but it is limited to a small interval $[3, 15]$. On a full-range interval $[0.1, 250]$ it behaves poorly. We show here the result with different numbers of iterations.
```
for it in [0, 1, 2, 3]:
benchmark(
th.log,
log_newton,
interval = (0.1, 250),
approx_kwargs={'iterations': it}
)
```
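For intuition, here is an idealized plain-float sketch of the underlying Newton update on $f(y) = e^y - x$ (an assumption about the core iteration: the actual `log_newton` in `funcs` replaces the exact `exp` with a limit approximation and adds a higher-order correction in $h$):

```
import math

def log_newton_plain(x, y0=2.0, iterations=10):
    """Idealized Newton-Raphson iteration for log(x), the root of exp(y) - x."""
    y = y0
    for _ in range(iterations):
        y = y - 1 + x * math.exp(-y)  # y - f(y)/f'(y)
    return y

print(log_newton_plain(10.0))  # converges toward ln(10) ≈ 2.302585
```

Note that starting below the root makes $x e^{-y}$ blow up, while starting above it only moves by about 1 per step; this sensitivity to the starting point is exactly what the initialization experiments below probe.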
## 1.B Approximation with AdditiveSharingTensors
```
import syft as sy
hook = sy.TorchHook(th)
bob = sy.VirtualWorker(hook, id="bob")
alice = sy.VirtualWorker(hook, id="alice")
charlie = sy.VirtualWorker(hook, id="charlie")
crypto = sy.VirtualWorker(hook, id="crypto_provider")
th.Tensor.native_refresh = th.Tensor.refresh
benchmark(
th.log,
log_newton,
interval = (0.1, 250),
n_points=20,
approx_kwargs={'iterations': 2, 'exp_iterations': 8},
forward_transformer=lambda x: x.fix_precision(precision_fractional=5).share(alice, bob, crypto_provider=crypto),
backward_transformer=lambda x: x.get().float_precision()
)
```
Interestingly here, the approximation only works on a given range, roughly $[70:160]$
```
benchmark(
th.log,
log_newton,
interval = (70, 160),
n_points=20,
approx_kwargs={'iterations': 2, 'exp_iterations': 8},
forward_transformer=lambda x: x.fix_precision(precision_fractional=5).share(alice, bob, crypto_provider=crypto),
backward_transformer=lambda x: x.get().float_precision()
)
```
With more iterations $2 \rightarrow 8$, results are a bit better but are much more expensive to compute:
```
benchmark(
th.log,
log_newton,
interval = (70, 160),
n_points=20,
approx_kwargs={'iterations': 8, 'exp_iterations': 8},
forward_transformer=lambda x: x.fix_precision(precision_fractional=5).share(alice, bob, crypto_provider=crypto),
backward_transformer=lambda x: x.get().float_precision()
)
```
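A rough way to see the cost difference (a back-of-the-envelope model of our own, not a measurement: it assumes secure multiplications dominate, with roughly `exp_iterations` squarings per `exp` call plus about nine multiplications for the degree-5 correction polynomial per iteration):

```
def approx_mult_count(iterations, exp_iterations=8, poly_mults=9):
    """Hypothetical count of secure multiplications per log_newton call."""
    return iterations * (exp_iterations + poly_mults)

print(approx_mult_count(2), approx_mult_count(8))  # 34 vs 136, i.e. roughly 4x the work
```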
### Remarks
- The approximation and its range of validity depend on the initialization chosen
Here is an alternate initialization:
```
from funcs import exp
def log_newton(x, iterations=2, exp_iterations=8):
"""Approximates the logarithm using the Newton Raphson method
Args:
iterations (int): number of iterations for Newton Raphson approximation.
exp_iterations (int): number of iterations for limit approximation of exp
.. inspired by https://github.com/facebookresearch/CrypTen
"""
# PREVIOUS:
y = x / 40 + 1.9 - 8 * exp(-2 * x - 0.3, iterations=exp_iterations)
# NEW:
#y = x / 120 - 20 * exp(-2 * x - 1.0, iterations=exp_iterations) + 3.0
for i in range(iterations):
h = [1 - x * exp((-y).refresh(), iterations=exp_iterations)]
for i in range(1, 5):
h.append(h[-1] * h[0])
y -= h[0] * (1 + h[0] + h[1] + h[2] + h[3] + h[4])
return y
```
The range of validity is now very different!
```
benchmark(
th.log,
log_newton,
interval = (0.1, 250),
n_points=20,
approx_kwargs={'iterations': 2, 'exp_iterations': 8},
forward_transformer=lambda x: x.fix_precision(precision_fractional=5).share(alice, bob, crypto_provider=crypto),
backward_transformer=lambda x: x.get().float_precision()
)
```
On $[5:23]$:
```
benchmark(
th.log,
log_newton,
interval = (5, 23),
n_points=20,
approx_kwargs={'iterations': 2, 'exp_iterations': 8},
forward_transformer=lambda x: x.fix_precision(precision_fractional=5).share(alice, bob, crypto_provider=crypto),
backward_transformer=lambda x: x.get().float_precision()
)
```
The reason for this is that Newton's method is quite unstable. In section 2 we study the Householder method, which is a better fit for this task.
# 2. Using the Householder method
## 2.A Approximation alone
We analyse here the loss incurred by the approximation using only normal pytorch tensors
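Before running the benchmark, here is a plain-float sketch of the update behind `log_householder` (hedged: the initial guess mirrors the parameters used in `ApproxModel` later in this notebook, and the exact library code may differ). With $h = 1 - x e^{-y}$, we have $\ln x = y + \ln(1-h) \approx y - \sum_{n=1}^{6} h^n/n$:

```
import math

def log_householder_plain(x, iterations=2):
    # assumed initial guess: y = x/120 + 3 - 20*exp(-2x - 1)
    y = x / 120 + 3.0 - 20 * math.exp(-2 * x - 1.0)
    for _ in range(iterations):
        h = 1 - x * math.exp(-y)
        # 6th-order correction: y + ln(1 - h), truncated after h^6/6
        y -= sum(h ** n / n for n in range(1, 7))
    return y

print(log_householder_plain(50.0))  # ≈ ln(50) ≈ 3.91202
```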
```
th.Tensor.refresh = lambda x:x
benchmark(
th.log,
log_householder,
interval = (0.1, 250),
n_points=20,
approx_kwargs={'iterations': 2, 'exp_iterations': 8}
)
```
Results are much better with this approximation, right?
What about adding AdditiveSharingTensors in the loop?
## 2.B Approximation with AdditiveSharingTensors
_We re-instantiate refresh as we work with AdditiveSharingTensors_
```
th.Tensor.refresh = th.Tensor.native_refresh
benchmark(
th.log,
log_householder,
interval = (0.1, 250),
n_points=20,
approx_kwargs={'iterations': 2, 'exp_iterations': 8},
forward_transformer=lambda x: x.fix_precision(precision_fractional=5).share(alice, bob, crypto_provider=crypto),
backward_transformer=lambda x: x.get().float_precision()
)
```
This is still very good!
One interesting question now is to see how the initialisation provided influences the global approximation. We'll investigate in the following part how to find the best initialisation.
# 3. Optimisation of the initialisation
```
import torch as th
import torch.nn as nn
from funcs import exp_limit
class ApproxModel(nn.Module):
def __init__(self):
super(ApproxModel, self).__init__()
self.w1 = nn.Parameter(th.tensor(1/120.))
self.b1 = nn.Parameter(th.tensor(3.))
self.alpha = nn.Parameter(th.tensor(-20.))
self.w2 = nn.Parameter(th.tensor(-2.))
self.b2 = nn.Parameter(th.tensor(-1.))
def forward(self, x):
y = x * self.w1 + self.b1 + self.alpha * exp_limit(x * self.w2 + self.b2)
for i in range(2):
h = [1 - x * exp_limit(-y)]
for i in range(1, 5):
h.append(h[-1] * h[0])
y -= h[0] * (1 + h[0] / 2 + h[1] / 3 + h[2] / 4 + h[3] / 5 + h[4] / 6)
return y
# Training settings
model = ApproxModel()
optimizer = th.optim.Adam(params=model.parameters(), lr=0.001)
n_points = 1000
batch_size = 100
# 1. Build the training set
# np.logspace(-3, 2.4) is a range from 0.001 to 250
data = th.tensor(np.logspace(-3, 2.4, num=n_points))
# permute data and reshape
data = data[th.randperm(n_points)].view(-1, 1)
# 2. compute the target
target = th.log(data)
for epoch in range(10000):
# randomly shuffle at each epoch
rand_idx = th.randperm(n_points)
for i in range(int(n_points/batch_size)):
if i == 1 and epoch % 100 == 0:
print(
round(1/model.w1.item(), 2),
round(model.b1.item(), 2),
round(model.alpha.item(), 2),
round(model.w2.item(), 2),
round(model.b2.item(), 2),
loss.item()
)
data_batch = data[rand_idx[i:i+batch_size]]
target_batch = target[rand_idx[i:i+batch_size]]
optimizer.zero_grad()
        pred = model(data_batch)
        # the loss chosen is a normalized MSE
        loss = (((pred - target_batch)/(pred + target_batch))**2).mean()
loss.backward()
optimizer.step()
```
The parameters seem to be converging; we will keep these for our implementation. Note that the relative error is very small, close to $10^{-3}$.
| github_jupyter |
```
%matplotlib inline
```
Tensor
=======
Tensors in PyTorch operate almost identically to tensors in Torch.
Create an uninitialized tensor of size (5 x 7):
```
import torch
a = torch.empty(5, 7, dtype=torch.float)
```
Initialize a double tensor with random numbers drawn from a normal distribution with mean 0 and variance 1:
```
a = torch.randn(5, 7, dtype=torch.double)
print(a)
print(a.size())
```
<div class="alert alert-info"><h4>Note</h4><p>``torch.Size`` is a tuple, so it supports all tuple operations.</p></div>
In-place / Out-of-place
------------------------
The first difference is that all in-place operations on a tensor have a ``_``
suffix. For example, ``add`` is the out-of-place version that returns the
result of the operation, while ``add_`` performs it in place.
```
a.fill_(3.5)
# a is filled with the value 3.5
b = a.add(4.0)
# a is still 3.5
# the value 3.5 + 4.0 = 7.5 is returned as a new tensor b
print(a, b)
```
Some operations like ``narrow`` do not have an in-place version, so
``.narrow_`` does not exist. Likewise, some operations like ``fill_`` do not
have an out-of-place version, so ``.fill`` does not exist either.
Zero Indexing
-----------------------
Another difference is that tensors are zero-indexed.
(In Lua, tensors are one-indexed.)
```
b = a[0, 3] # select 1st row, 4th column from a
```
Tensors can also be indexed with Python's slicing.
```
b = a[:, 3:5] # selects all rows, 4th column and 5th column from a
```
No Camel Case
----------------------------
One more small difference is that camel case is not used.
For example, ``indexAdd`` is written as ``index_add_``.
```
x = torch.ones(5, 5)
print(x)
z = torch.empty(5, 2)
z[:, 0] = 10
z[:, 1] = 100
print(z)
x.index_add_(1, torch.tensor([4, 0], dtype=torch.long), z)
print(x)
```
NumPy Bridge
------------------
Converting a Torch Tensor to a NumPy array, and vice versa, is very easy.
The Torch Tensor and the NumPy array share their underlying storage, so
changing one will change the other.
Converting a Torch Tensor to a NumPy array
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
a = torch.ones(5)
print(a)
b = a.numpy()
print(b)
a.add_(1)
print(a)
print(b) # see how the numpy array changed in value
```
Converting a NumPy array to a Torch Tensor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b) # see how changing the np array changed the torch Tensor automatically
```
All tensors on the CPU except CharTensor support conversion to NumPy and
back (from NumPy to Tensor).
CUDA Tensors
------------
CUDA tensors are nice and easy in PyTorch, and moving a CUDA tensor from the
CPU to the GPU preserves its underlying type.
```
# This code runs only in an environment where CUDA is available.
if torch.cuda.is_available():
    # create a LongTensor and move it to the GPU as a torch.cuda.LongTensor
a = torch.full((10,), 3, device=torch.device("cuda"))
print(type(a))
b = a.to(torch.device("cpu"))
    # transferring back to the CPU returns a torch.LongTensor
```
| github_jupyter |
# Tabular data
```
from fastai.gen_doc.nbdoc import *
from fastai.tabular.models import *
```
[`tabular`](/tabular.html#tabular) contains all the necessary classes to deal with tabular data, across two modules:
- [`tabular.transform`](/tabular.transform.html#tabular.transform): defines the [`TabularTransform`](/tabular.transform.html#TabularTransform) class to help with preprocessing;
- [`tabular.data`](/tabular.data.html#tabular.data): defines the [`TabularDataset`](/tabular.data.html#TabularDataset) that handles that data, as well as the methods to quickly get a [`TabularDataBunch`](/tabular.data.html#TabularDataBunch).
To create a model, you'll need to use [`models.tabular`](/tabular.html#tabular). See below for an end-to-end example using all these modules.
## Preprocessing tabular data
First, let's import everything we need for the tabular application.
```
from fastai import *
from fastai.tabular import *
```
Tabular data usually comes in the form of a delimited file (such as .csv) containing variables of different kinds: text/category, numbers, and perhaps some missing values. The example we'll work with in this section is a sample of the [adult dataset](https://archive.ics.uci.edu/ml/datasets/adult) which has some census information on individuals. We'll use it to train a model to predict whether salary is greater than \$50k or not.
```
path = untar_data(URLs.ADULT_SAMPLE)
path
df = pd.read_csv(path/'adult.csv')
df.head()
```
Here all the information that will form our input is in the 14 first columns, and the dependent variable is the last column. We will split our input between two types of variables: categorical and continuous.
- Categorical variables will be replaced by a category - a unique id that identifies them - before they are passed through an embedding layer.
- Continuous variables will be normalized and then directly fed to the model.
Another thing we need to handle is the missing values: our model isn't going to like receiving NaNs, so we should remove them in a smart way. All of this preprocessing is done by [`TabularTransform`](/tabular.transform.html#TabularTransform) objects and [`TabularDataset`](/tabular.data.html#TabularDataset).
We can define a bunch of Transforms that will be applied to our variables. Here we transform all categorical variables into categories. We also replace missing values for continuous variables by the median column value and normalize those.
```
procs = [FillMissing, Categorify, Normalize]
```
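As a hedged, plain-pandas sketch of what those three processors do conceptually (this is not fastai's implementation; among other things, fastai also adds `#na#` categories and missing-value indicator columns):

```
import numpy as np
import pandas as pd

df = pd.DataFrame({"workclass": ["Private", "State-gov", "Private"],
                   "age": [39.0, np.nan, 50.0]})

# FillMissing: replace missing continuous values with the column median
df["age"] = df["age"].fillna(df["age"].median())

# Categorify: map each category to an integer id
df["workclass"] = df["workclass"].astype("category").cat.codes

# Normalize: standardize continuous columns
df["age"] = (df["age"] - df["age"].mean()) / df["age"].std()
print(df)
```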
To split our data into training and validation sets, we use validation indexes:
```
valid_idx = range(len(df)-2000, len(df))
```
Then let's manually split our variables into categorical and continuous variables (we can ignore the dependent variable at this stage). fastai will assume all variables that aren't dependent or categorical are continuous, unless we explicitly pass a list to the `cont_names` parameter when constructing our [`DataBunch`](/basic_data.html#DataBunch).
```
dep_var = '>=50k'
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country']
```
Now we're ready to pass this information to [`TabularDataBunch.from_df`](/tabular.data.html#TabularDataBunch.from_df) to create the [`DataBunch`](/basic_data.html#DataBunch) that we'll use for training.
```
data = TabularDataBunch.from_df(path, df, dep_var, valid_idx=valid_idx, procs=procs, cat_names=cat_names)
print(data.train_ds.cont_names) # `cont_names` defaults to: set(df)-set(cat_names)-{dep_var}
```
We can grab a mini-batch of data and take a look (note that [`to_np`](/torch_core.html#to_np) here converts from pytorch tensor to numpy):
```
(cat_x,cont_x),y = next(iter(data.train_dl))
for o in (cat_x, cont_x, y): print(to_np(o[:5]))
```
After being processed in [`TabularDataset`](/tabular.data.html#TabularDataset), the categorical variables are replaced by ids and the continuous variables are normalized. The codes corresponding to categorical variables are all put together, as are all the continuous variables.
## Defining a model
Once we have our data ready in a [`DataBunch`](/basic_data.html#DataBunch), we just need to create a model to then define a [`Learner`](/basic_train.html#Learner) and start training. The fastai library has a flexible and powerful [`TabularModel`](/tabular.models.html#TabularModel) in [`models.tabular`](/tabular.html#tabular). To use that function, we just need to specify the embedding sizes for each of our categorical variables.
```
learn = tabular_learner(data, layers=[200,100], emb_szs={'native-country': 10}, metrics=accuracy)
learn.fit_one_cycle(1, 1e-2)
```
As usual, we can use the [`Learner.predict`](/basic_train.html#Learner.predict) method to get predictions. In this case, we need to pass the row of a dataframe that has the same names of categorical and continuous variables as our training or validation dataframe.
```
learn.predict(df.iloc[0])
```
| github_jupyter |
```
import sys
sys.path.insert(0,"/home/nico/Documents/TEAR/Codes_TEAR/PythonCodes/LibFolder")
from Lib_GeneralFunctions import *
from Lib_GeneralSignalProcNAnalysis import *
from Lib_SigmoidProcessing import *
import pandas as pd
from matplotlib.gridspec import GridSpec
# Save the reference solution (slip and slip-rate time series) into a class
class SSCreference:
def __init__(self, filename, coordinates, RefSource="SEM2DPACK"):
line = pd.read_csv(filename.format("slip"), header=None)
self.Time = line[0]
self.Slip = line[1]
line = pd.read_csv(filename.format("sr"), header=None)
self.SlipRate = line[1]
self.Coord = coordinates #Only used for labels and printing
self.RefSource = RefSource
#end __init__
# Default object printing information
def __repr__(self):
return "The TPV3reference object was generated from: {} and the receiver is located at {}".format(self.RefSource, self.Coord)
#end __repr__
def __str__(self):
return "The TPV3reference object was generated from: {} and the receiver is located at {}".format(self.RefSource, self.Coord)
#end __str__
def PlotReference(self, ax, SlipSlipRate, filtering=True, **kwargs):
if SlipSlipRate=="Slip":
if(filtering):
ax.plot(self.Time, Butterworth(self.Slip, **kwargs), label = "", c = "k", ls = "--", zorder=1)
else:
ax.plot(self.Time, self.Slip, label = "", c = "k", ls = "--", zorder=1)
elif SlipSlipRate=="SlipRate":
if(filtering):
ax.plot(self.Time, Butterworth(self.SlipRate, **kwargs), label = "", c = "k", ls = "--", zorder=1)
else:
ax.plot(self.Time, self.SlipRate, label = "", c = "k", ls = "--", zorder=1)
return ax
def GenericFigAxis():
fig = plt.figure(figsize=[15,5])
gs = GridSpec(1, 2)
ax1 = fig.add_subplot(gs[0, 0])
ax2 = fig.add_subplot(gs[0, 1])
return fig, [ax1, ax2]
def format_axes(fig):
"""
Format a figure and 4 equidistant receivers' lines from a single file. Receiver distance defines the color.
"""
for i, ax in enumerate(fig.axes):
ax.set_xlim(-0.5,4)
ax.set_ylim(-0.5,8)
ax.set_xlabel("time(s)")
Lines = fig.axes[-1].get_lines()
legend2 = fig.axes[-1].legend(Lines, ['2km','4km', '6km', '8km'], loc=1)
fig.axes[-1].add_artist(legend2)
fig.axes[-1].set_ylabel("Slip Rate (m/s)")
fig.axes[0].set_ylabel("Slip (m)")
def Multi_format_axes(fig,cmap, LabelsPerColor):
"""
Format a figure that contains different files with
information from several receivers for simulations under sets of blending parameters.
"""
ColorDict = dict(enumerate(LabelsPerColor))
for i, ax in enumerate(fig.axes):
ax.set_xlim(-0.5,4)
ax.set_ylim(-0.5,8)
ax.set_xlabel("time(s)")
Lines = []
for idx,colcol in enumerate(cmap.colors):
Lines.append(mlines.Line2D([], [], color = colcol,
linewidth = 3, label = ColorDict.get(idx)))
legend2 = fig.axes[-1].legend(Lines, LabelsPerColor, loc = 2)
fig.axes[-1].add_artist(legend2)
fig.axes[-1].set_ylabel("Slip Rate (m/s)")
fig.axes[0].set_ylabel("Slip (m)")
path = "/home/nico/Documents/TEAR/Codes_TEAR/ProfilePicking/Output/"
# Reference saved into a list of objects
RefList = [SSCreference(path + "Reference/sem2dpack/sem2d-{}-1.txt", "2km"),
SSCreference(path + "Reference/sem2dpack/sem2d-{}-2.txt", "4km"),
SSCreference(path + "Reference/sem2dpack/sem2d-{}-3.txt", "6km"),
SSCreference(path + "Reference/sem2dpack/sem2d-{}-4.txt", "8km"),
]
FolderSigmoidPath = "/home/nico/Documents/TEAR/Codes_TEAR/PythonCodes/[SSC]Sigmoid/ProcessedData/"
FolderTiltedPath = "/home/nico/Documents/TEAR/Codes_TEAR/PythonCodes/[SSC]Sigmoid/ProcessedData/20210129-Tilting/"
SigmoidFile = LoadPickleFile(FolderSigmoidPath, "20210118-T2-50x50-P3-100.05")
TiltedFile = LoadPickleFile(Filename = "TPList_t4090_d100.05.pickle",FolderPath = FolderTiltedPath)
from matplotlib.colors import ListedColormap
import matplotlib.lines as mlines
from palettable.scientific.sequential import Oslo_3
cmap = ListedColormap(Oslo_3.mpl_colors[:])
fig, axis = GenericFigAxis()
# Sigmoid case plotting
iidx = 0
for Test1 in SigmoidFile:
axis[0].plot(Test1.Time, Test1.Slip, color= cmap.colors[iidx],linewidth=2,zorder=iidx)
axis[1].plot(Test1.Time, Test1.SlipRate, color= cmap.colors[iidx],linewidth=2,zorder=iidx)
# Tilted case plotting
iidx = 1
for Test1 in TiltedFile[:-1]:
axis[0].plot(Test1.Time, Test1.DispX, color= cmap.colors[iidx],linewidth=2,zorder=iidx)
axis[1].plot(Test1.Time, Test1.VelX, color= cmap.colors[iidx],linewidth=2,zorder=iidx)
LabelsPerColor= ["Sigmoid", "Tilted"]
Multi_format_axes(fig, cmap, LabelsPerColor)
fig.suptitle("Cell Dims:50x50 - P3 - $\delta$100.05 \nA:6.0p/$\delta$ , $\phi_o$:0.65$\delta$")
[item.PlotReference(axis[0], "Slip", filtering=False) for item in RefList]
[item.PlotReference(axis[1], "SlipRate", filtering=False) for item in RefList]
fig, axis = GenericFigAxis()
FolderTiltedPath = "/home/nico/Documents/TEAR/Codes_TEAR/PythonCodes/[SSC]Sigmoid/ProcessedData/20210201-Tilting/"
TiltedFile = LoadPickleFile(Filename = "TPList_t8180_d50.025.pickle",FolderPath = FolderTiltedPath)
# Tilted case plotting
iidx = 0
for Test1 in TiltedFile[:-1]:
axis[0].plot(Test1.Time, Test1.DispX, color= cmap.colors[iidx],linewidth=2,zorder=iidx)
axis[1].plot(Test1.Time, Test1.VelX, color= cmap.colors[iidx],linewidth=2,zorder=iidx)
LabelsPerColor= ["Tilted"]
Multi_format_axes(fig, cmap, LabelsPerColor)
fig.suptitle("Cell Dims:25x25 - P3 - $\delta$50.025 \nA:6.0p/$\delta$ , $\phi_o$:0.65$\delta$")
[item.PlotReference(axis[0], "Slip", filtering=False) for item in RefList]
[item.PlotReference(axis[1], "SlipRate", filtering=False) for item in RefList]
fig, axis = GenericFigAxis()
FolderTiltedPath = "/home/nico/Documents/TEAR/Codes_TEAR/PythonCodes/[SSC]Sigmoid/ProcessedData/20210203-Tilting/"
TiltedFile = LoadPickleFile(Filename = "TPList_t8180_d50.025.pickle",FolderPath = FolderTiltedPath)
# Tilted case plotting
iidx = 0
for Test1 in TiltedFile[:-1]:
axis[0].plot(Test1.Time, Test1.DispX, color= cmap.colors[iidx],linewidth=2,zorder=iidx)
axis[1].plot(Test1.Time, Test1.VelX, color= cmap.colors[iidx],linewidth=2,zorder=iidx)
LabelsPerColor= ["Tilted"]
Multi_format_axes(fig, cmap, LabelsPerColor)
fig.suptitle("Cell Dims:25x25 - P3 - $\delta$50.025 \nA:4.0p/$\delta$ , $\phi_o$:0.65$\delta$")
[item.PlotReference(axis[0], "Slip", filtering=False) for item in RefList]
[item.PlotReference(axis[1], "SlipRate", filtering=False) for item in RefList]
fig, axis = GenericFigAxis()
FolderSigmoidPath = "/home/nico/Documents/TEAR/Codes_TEAR/PythonCodes/[SSC]Sigmoid/ProcessedData/"
SigmoidFile = LoadPickleFile(FolderSigmoidPath, "20210209-T1-25x25-P3-50.025")
# Tilted case plotting
iidx = 0
for Test1 in SigmoidFile:
axis[0].plot(Test1.Time, Test1.Slip, color= cmap.colors[iidx],linewidth=2,zorder=iidx)
axis[1].plot(Test1.Time, Test1.SlipRate, color= cmap.colors[iidx],linewidth=2,zorder=iidx)
LabelsPerColor= ["Sigmoid"]
Multi_format_axes(fig, cmap, LabelsPerColor)
fig.suptitle("Cell Dims:25x25 - P3 - $\delta$50.025 \nA:6.0p/$\delta$ , $\phi_o$:0.65$\delta$")
[item.PlotReference(axis[0], "Slip", filtering=False) for item in RefList]
[item.PlotReference(axis[1], "SlipRate", filtering=False) for item in RefList]
```
| github_jupyter |