# SD211 TP2: Logistic Regression
*Author: Pengfei Mi*
*Date: 12/05/2017*
```
import numpy as np
import matplotlib.pyplot as plt
from cervicalcancerutils import load_cervical_cancer
from scipy.optimize import check_grad
from time import time
from sklearn.metrics import classification_report
```
## Part 1: Tikhonov Regularization
$\textbf{Question 1.1}\quad\text{Compute the gradient and the Hessian matrix.}$
<div class="alert alert-success">
<p>
Let $\tilde{X} = (\tilde{\mathbf{x}}_1,\dots,\tilde{\mathbf{x}}_n)^T$, where $\tilde{\mathbf{x}}_i = \begin{pmatrix}1\\
\mathbf{x}_i\end{pmatrix}\in \mathbb{R}^{p+1}$, $\tilde{\mathbf{\omega}} = \begin{pmatrix}
\omega_0\\\mathbf{\omega}\end{pmatrix}\in \mathbb{R}^{p+1}$, and define the matrix
$$A = \mathrm{diag}(0,1,\dots,1) =
\begin{pmatrix}
0&0&\cdots&0\\
0&1&&0\\
\vdots&&\ddots&\vdots\\
0&0&\cdots&1
\end{pmatrix}
$$
</p>
<p>
We have:
$$
\begin{aligned}
f_1(\omega_0, \omega) &= \frac{1}{n}\sum_{i=1}^{n}\text{log}\big(1+e^{-y_i(x_i^T\omega+\omega_0)}\big)+\frac{\rho}{2}\|\omega\|_2^2 \\
& = \frac{1}{n}\sum_{i=1}^{n}\text{log}\big(1+e^{-y_i\tilde x_i^T \tilde \omega}\big)+\frac{\rho}{2}\tilde{\omega}^TA\tilde{\omega}
\end{aligned}
$$
</p>
<p>
Thus we obtain the gradient:
$$
\begin{aligned}
\nabla{f_1}(\omega_0, \omega) &= \frac{1}{n}\sum_{i=1}^{n}\frac{-e^{-y_i\tilde x_i^T \tilde \omega}y_i\tilde{\mathbf{x}}_i}{1+e^{-y_i\tilde x_i^T \tilde \omega}} + \rho A\tilde{\mathbf{\omega}} \\
&= \frac{1}{n}\sum_{i=1}^{n}\frac{-y_i\tilde{\mathbf{x}}_i}{1+e^{y_i\tilde x_i^T \tilde \omega}} +
\rho A\tilde{\mathbf{\omega}}
\end{aligned}
$$
</p>
<p>
and the Hessian matrix:
$$
\begin{aligned}
\mathbf{H} = \nabla^2f_1(\omega_0, \omega) &= \frac{1}{n}\sum_{i=1}^{n}\frac{e^{y_i\tilde x_i^T \tilde \omega}(y_i\tilde{\mathbf{x}}_i)(y_i\tilde{\mathbf{x}}_i)^T}{(1+e^{y_i\tilde x_i^T \tilde \omega})^2} + \rho A \\
& = \frac{1}{n}\sum_{i=1}^{n}\frac{(y_i\tilde{\mathbf{x}}_i)(y_i\tilde{\mathbf{x}}_i)^T}{(1+e^{y_i\tilde x_i^T \tilde \omega})(1+e^{-y_i\tilde x_i^T \tilde \omega})} + \rho A
\end{aligned}
$$
</p>
</div>
<div class="alert alert-success">
<p>
For any $v \in \mathbb{R}^{p+1}$, we have (using $A^2 = A$):
$$
\begin{aligned}
v^T H v &= \frac{1}{n}\sum_{i=1}^{n}\frac{v^T (y_i\tilde{\mathbf{x}}_i)(y_i\tilde{\mathbf{x}}_i)^T v}{(1+e^{y_i\tilde x_i^T \tilde \omega})(1+e^{-y_i\tilde x_i^T \tilde \omega})} + \rho v^T A v \\
&= \frac{1}{n}\sum_{i=1}^{n}\frac{(y_i\tilde{\mathbf{x}}_i^T v)^2}{(1+e^{y_i\tilde x_i^T \tilde \omega})(1+e^{-y_i\tilde x_i^T \tilde \omega})} + \rho \|Av\|_2^2 \geq 0
\end{aligned}
$$
</p>
<p>Hence the Hessian matrix is positive semi-definite, and the function $f_1$ is convex.</p>
</div>
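As a quick numerical sanity check (a sketch on random data, not part of the assignment), we can build the Hessian derived above directly — using $y_i^2 = 1$, so $(y_i\tilde{\mathbf{x}}_i)(y_i\tilde{\mathbf{x}}_i)^T = \tilde{\mathbf{x}}_i\tilde{\mathbf{x}}_i^T$ — and verify that its eigenvalues are indeed non-negative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 4
X_tilde = np.c_[np.ones(n), rng.normal(size=(n, p))]   # rows are the x~_i
y = rng.choice([-1.0, 1.0], size=n)
w = rng.normal(size=p + 1)
rho = 0.1

A = np.diag(np.r_[0.0, np.ones(p)])
t = y * (X_tilde @ w)                                  # t_i = y_i x~_i^T w~
coef = 1.0 / ((1.0 + np.exp(t)) * (1.0 + np.exp(-t)))  # per-sample weights
H = (X_tilde * coef[:, None]).T @ X_tilde / n + rho * A

print(np.linalg.eigvalsh(H).min() >= -1e-12)  # -> True: H is PSD
```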
$\textbf{Question 1.2}\quad\text{Write a function that returns the objective value, its gradient, and its Hessian.}$
<div class="alert alert-success">
<p>We insert a column of $1$s on the left of $X$ to simplify the computation.</p>
</div>
```
X, y = load_cervical_cancer("riskfactorscervicalcancer.csv")
print("Before the insertion:")
print(X.shape, y.shape)
n, p = X.shape
X = np.c_[np.ones(n), X]
print("After the insertion:")
print(X.shape, y.shape)
def objective(w_, X, y, rho, return_grad=True, return_H=True):
"""
X: matrix of size n*(p+1)
y: vector of size n
w0: real number
w: vector of size p
"""
    # Precompute intermediate quantities (vectorized over the n samples)
    n, p = X.shape
    w = w_[1:]
    y_x = y[:, None] * X                      # rows: y_i * x_i
    yx_w = y_x.dot(w_)                        # y_i * x_i^T w
    exp_yxw_1 = np.exp(yx_w) + 1
    exp_neg_yxw_1 = np.exp(-yx_w) + 1
    # Function value
    val = np.mean(np.log(exp_neg_yxw_1)) + np.sum(w**2)*rho/2.
    if not return_grad:
        return val
    # Gradient
    grad = np.mean(-y_x / exp_yxw_1[:, None], axis=0) + rho*np.r_[0, w]
    if not return_H:
        return val, grad
    # Hessian matrix
    weights = 1. / (exp_yxw_1 * exp_neg_yxw_1)
    H = (y_x * weights[:, None]).T.dot(y_x) / n + rho*np.diag(np.r_[0, np.ones(p-1)])
    return val, grad, H
def funcMask(w_, X, y, rho):
val, grad = objective(w_, X, y, rho, return_H=False)
return val
def gradMask(w_, X, y, rho):
val, grad = objective(w_, X, y, rho, return_H=False)
return grad
rho = 1./n
t0 = time()
print("The gradient check error is: %0.12f" % check_grad(funcMask, gradMask, np.zeros(p+1), X, y, rho))
print("Done in %0.3fs." % (time()-t0))
# To check the Hessian, apply check_grad to the summed gradient:
# the gradient of sum_j grad_j(w) is the vector of row sums of H
def gradMask(w_, X, y, rho):
    val, grad = objective(w_, X, y, rho, return_H=False)
    return grad.sum()
def hessianMask(w_, X, y, rho):
    val, grad, H = objective(w_, X, y, rho)
    return np.sum(H, axis=1)
t0 = time()
rho = 1./n
print("The Hessian check error is: %0.12f" % check_grad(gradMask, hessianMask, np.zeros(p+1), X, y, rho))
print("Done in %0.3fs." % (time()-t0))
```
<div class="alert alert-success">
<p>We have verified the computation of the gradient and of the Hessian matrix.</p>
</div>
$\textbf{Question 1.3}\quad\text{Implement Newton's method.}$
<div class="alert alert-success">
<p>
By definition of Newton's method, the update is:
$$\omega^{k+1} = \omega^k - (\nabla^2f_1(\omega^k))^{-1}\nabla f_1(\omega^k)$$
</p>
</div>
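As a toy illustration of the update (a one-dimensional sketch, not the assignment's objective), Newton's iteration on $f(w) = \log(1+e^{-w}) + \frac{\rho}{2}w^2$ drives the gradient to machine precision in a handful of steps:

```python
import numpy as np

rho = 1.0
fprime = lambda w: -1.0 / (1.0 + np.exp(w)) + rho * w        # f'(w)
fsecond = lambda w: np.exp(w) / (1.0 + np.exp(w))**2 + rho   # f''(w) > 0

w = 0.0
for k in range(6):
    w = w - fprime(w) / fsecond(w)                           # Newton step
    print(k, abs(fprime(w)))                                 # quadratic decrease
```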
```
def minimize_Newton(func, w_, X, y, rho, tol=1e-10, max_iter=500):
    n, p = X.shape
    val, grad, H = func(w_, X, y, rho)
    grad_norm = np.sqrt(np.sum(grad**2))
    norms = [grad_norm]
    cnt = 0
    while grad_norm > tol and cnt < max_iter:
        # Newton step: solve H d = grad rather than forming the inverse
        w_ = w_ - np.linalg.solve(H, grad)
        val, grad, H = func(w_, X, y, rho)
        grad_norm = np.sqrt(np.sum(grad**2))
        norms.append(grad_norm)
        cnt = cnt + 1
    return val, w_, cnt, norms
t0 = time()
rho = 1./n
val, w, cnt, grad_norms = minimize_Newton(objective, np.zeros(p+1), X, y, rho, tol=1e-10)
print("The minimal value of the objective function is: %0.12f" % val)
print("Done in %0.3fs, number of iterations: %d" % (time()-t0, cnt))
print(w)
plt.figure(1, figsize=(8,6))
plt.title(r"The norm of the gradient, $\omega^0 = 0$")
plt.semilogy(range(0, len(grad_norms)), grad_norms)
plt.xlabel("Iteration")
plt.ylabel("Norm of the gradient")
plt.xlim(0, len(grad_norms))
plt.show()
```
$\textbf{Question 1.4}\quad\text{Run with the initial condition }(\omega_0^0,\omega^0) = 0.3e\text{, where }e_i=1\text{ for all }i.$
```
t0 = time()
val, w, cnt, grad_norms = minimize_Newton(objective, 0.3*np.ones(p+1), X, y, rho, tol=1e-10)
print("The minimal value of the objective function is: %0.12f" % val)
print("Done in %0.3fs, number of iterations: %d" % (time()-t0, cnt))
```
<div class="alert alert-success">
<p>With this initial condition, Newton's method does not converge: the initial point lies outside the basin of convergence of the method.</p>
</div>
$\textbf{Question 1.5}\quad\text{Implement the Armijo line-search method.}$
<div class="alert alert-success">
<p>Let $\omega^+(\gamma_k)=\omega^k - \gamma_k(\nabla^2 f_1(\omega^k))^{-1}\nabla f_1(\omega^k)$. Given $a \in (0,1)$, $b>0$ and $\beta \in (0,1)$, we look for the first non-negative integer $l$ such that:</p>
$$f_1(\omega^+(ba^l)) \leq f_1(\omega^k) + \beta\langle\nabla_{f_1}(\omega^k),\,\omega^+(ba^l)-\omega^k\rangle$$
</div>
<div class="alert alert-success">
<p>Here we take $\beta = 0.5$, for which the Armijo line search becomes equivalent to the Taylor line search.</p>
<p>We set $b_0 = 1$ and $b_k = 2\gamma_{k-1}$, which is a classical choice.</p>
<p>We set $a = 0.5$, as a compromise between the accuracy of the search and the speed of convergence.</p>
</div>
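The backtracking loop itself can be sketched on a toy quadratic (the constants $a=0.5$, $b=1$, $\beta=0.5$ match the choices above; the plain gradient is used here as a stand-in for the Newton direction, and everything else is illustrative):

```python
import numpy as np

f = lambda w: 0.5 * np.dot(w, w)       # toy objective
grad = lambda w: w

w = np.array([3.0, -2.0])
d = grad(w)                            # search direction (gradient here)
a, b, beta = 0.5, 1.0, 0.5

gamma = b                              # start from gamma = b * a**0
# first non-negative l such that f(w - b a^l d) <= f(w) - beta * b a^l <d, grad f(w)>
while f(w - gamma * d) > f(w) - beta * gamma * np.dot(d, grad(w)):
    gamma = gamma * a
w = w - gamma * d
print(gamma, f(w))  # -> 1.0 0.0 (the full step already satisfies the condition)
```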
```
def minimize_Newton_Armijo(func, w_, X, y, rho, a, b, beta, tol=1e-10, max_iter=500):
    n, p = X.shape
    val, grad, H = func(w_, X, y, rho)
    grad_norm = np.sqrt(np.sum(grad**2))
    norms = [grad_norm]
    d = np.linalg.solve(H, grad)            # Newton direction
    gamma = b / 2.
    cnt = 0
    while grad_norm > tol and cnt < max_iter:
        gamma = 2*gamma                     # b_k = 2*gamma_{k-1}
        val_ = func(w_ - gamma*d, X, y, rho, return_grad=False)
        # Backtracking: shrink gamma until the Armijo condition holds
        while val_ > val - beta*gamma*np.sum(d*grad):
            gamma = gamma*a
            val_ = func(w_ - gamma*d, X, y, rho, return_grad=False)
        w_ = w_ - gamma*d
        val, grad, H = func(w_, X, y, rho)
        d = np.linalg.solve(H, grad)
        grad_norm = np.sqrt(np.sum(grad**2))
        norms.append(grad_norm)
        cnt = cnt + 1
    return val, w_, cnt, norms
t0 = time()
rho = 1./n
a = 0.5
b = 1
beta = 0.5
val_nls, w_nls, cnt_nls, grad_norms_nls = minimize_Newton_Armijo(objective, 0.3*np.ones(p+1), X, y, rho, a, b, beta, tol=1e-10, max_iter=500)
print("The minimal value of the objective function is: %0.12f" % val_nls)
t_nls = time()-t0
print("Done in %0.3fs, number of iterations: %d" % (t_nls, cnt_nls))
print(w_nls)
plt.figure(2, figsize=(8,6))
plt.title("The norm of the gradient, Newton's method with line search")
plt.semilogy(range(0, len(grad_norms_nls)), grad_norms_nls)
plt.xlabel("Iteration")
plt.ylabel("Norm of the gradient")
plt.xlim(0, len(grad_norms_nls))
plt.show()
```
## Part 2: Regularization for Sparsity
$\textbf{Question 2.1}\quad\text{Why can't Newton's method be used to solve this problem?}$
<div class="alert alert-success">
<p>Because the objective function is no longer differentiable (the $\ell_1$ term is not), the gradient and the Hessian are not defined everywhere, so Newton's method cannot be applied.</p>
</div>
$\textbf{Question 2.2}\quad\text{Write the objective function in the form }F_2 = f_2 + g_2\text{ where }f_2\text{ is differentiable and the proximal operator of }g_2\text{ is simple.}$
<div class="alert alert-success">
<p>
$$
\begin{aligned}
F_2(\omega_0,\omega) &= \frac{1}{n}\sum_{i=1}^{n}\text{log}\big(1+e^{-y_i(x_i^T\omega+\omega_0)}\big)+\rho\|\omega\|_1 \\
&= f_2+g_2
\end{aligned}
$$
where $f_2 = \frac{1}{n}\sum_{i=1}^{n}\text{log}\big(1+e^{-y_i(x_i^T\omega+\omega_0)}\big)$ is differentiable, and $g_2 = \rho\|\omega\|_1$ has a simple proximal operator.
</p>
</div>
<div class="alert alert-success">
<p>
The gradient of $f_2$ is:
$$
\begin{aligned}
\nabla{f_2}(\omega_0, \omega) &= \frac{1}{n}\sum_{i=1}^{n}\frac{-e^{-y_i\tilde x_i^T \tilde \omega}y_i\tilde{\mathbf{x}}_i}{1+e^{-y_i\tilde x_i^T \tilde \omega}} \\
&= \frac{1}{n}\sum_{i=1}^{n}\frac{-y_i\tilde{\mathbf{x}}_i}{1+e^{y_i\tilde x_i^T \tilde \omega}}
\end{aligned}
$$
</p>
<p>
and the proximal operator of $g_2$ is:
$$
\begin{aligned}
\text{prox}_{g_2}(x) &= \text{arg}\,\underset{y \in \mathbb{R}^p}{\text{min}}\, \big(g_2(y) + \frac{1}{2}\|y-x\|^2 \big) \\
&= \text{arg}\,\underset{y \in \mathbb{R}^p}{\text{min}}\, \big(\rho\|y\|_1 + \frac{1}{2}\|y-x\|^2 \big) \\
&= \text{arg}\,\underset{y \in \mathbb{R}^p}{\text{min}}\, \sum_{i=1}^{p}\big(\rho |y_i| + \frac{1}{2}(y_i-x_i)^2\big)
\end{aligned}
$$
</p>
<p>
For $1 \leq i \leq p$, we obtain the componentwise solution (soft-thresholding):
$$
y_i^* = \begin{cases}
x_i - \rho, &\text{if } x_i > \rho \\
x_i + \rho, &\text{if } x_i < -\rho \\
0, &\text{if } -\rho \leq x_i \leq \rho
\end{cases}
$$
</p>
</div>
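The closed form above can be checked numerically against the definition of the prox (a brute-force sketch on a fine grid; the tolerance is loose because the grid minimizer is only accurate to the grid step):

```python
import numpy as np

rho = 0.7

def soft_threshold(x, rho):
    # componentwise closed form of prox_{rho |.|}
    return np.sign(x) * np.maximum(np.abs(x) - rho, 0.0)

# minimize rho*|y| + 0.5*(y - x)^2 over a dense grid and compare
grid = np.linspace(-4.0, 4.0, 800001)          # step 1e-5
for x in [-2.0, -0.3, 0.0, 0.5, 1.8]:
    y_star = grid[np.argmin(rho * np.abs(grid) + 0.5 * (grid - x)**2)]
    assert abs(y_star - soft_threshold(x, rho)) < 1e-4
print("closed form matches the brute-force minimization")
```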
<div class="alert alert-success">
<p>
$$
\begin{aligned}
\mathbf{H_2} = \nabla^2f_2(\omega_0, \omega) &= \frac{1}{n}\sum_{i=1}^{n}\frac{e^{y_i\tilde x_i^T \tilde \omega}(y_i\tilde{\mathbf{x}}_i)(y_i\tilde{\mathbf{x}}_i)^T}{(1+e^{y_i\tilde x_i^T \tilde \omega})^2} \\
& = \frac{1}{n}\sum_{i=1}^{n}\frac{(y_i\tilde{\mathbf{x}}_i)(y_i\tilde{\mathbf{x}}_i)^T}{(1+e^{y_i\tilde x_i^T \tilde \omega})(1+e^{-y_i\tilde x_i^T \tilde \omega})}
\end{aligned}
$$
</p>
<p>
For any $v \in \mathbb{R}^{p+1}$, we have:
$$
\begin{aligned}
v^T H_2 v &= \frac{1}{n}\sum_{i=1}^{n}\frac{v^T (y_i\tilde{\mathbf{x}}_i)(y_i\tilde{\mathbf{x}}_i)^T v}{(1+e^{y_i\tilde x_i^T \tilde \omega})(1+e^{-y_i\tilde x_i^T \tilde \omega})} \\
&= \frac{1}{n}\sum_{i=1}^{n}\frac{(y_i\tilde{\mathbf{x}}_i^T v)^2}{(1+e^{y_i\tilde x_i^T \tilde \omega})(1+e^{-y_i\tilde x_i^T \tilde \omega})} \geq 0
\end{aligned}
$$
</p>
<p>Hence the Hessian matrix of $f_2$ is positive semi-definite, and the function $f_2$ is convex.</p>
<p>
$$
\begin{aligned}
g_2(\omega_0, \omega) &= \rho\|\omega\|_1 \\
&= \rho \sum_{i=1}^{p}|\omega_i|
\end{aligned}
$$
</p>
<p>The absolute value is convex in each component of $\omega$, so $g_2$ is convex for any $\rho \geq 0$.</p>
<p>Hence $F_2 = f_2 + g_2$ is convex for $\rho \geq 0$.</p>
</div>
$\textbf{Question 2.3}\quad\text{Implement the proximal gradient method with line search.}$
<div class="alert alert-success">
<p>We add the Taylor line search.</p>
<p>We take $a = 0.5$, $b_0 = 1$ and $b_k = 2\gamma_{k-1}$. We look for the first non-negative integer $l$ such that:</p>
$$f_2(\omega^+(ba^l)) \leq f_2(\omega^k) + \langle\nabla_{f_2}(\omega^k),\,\omega^+(ba^l)-\omega^k\rangle + \frac{1}{2ba^l}\|\omega^k - \omega^+(ba^l)\|^2$$
</div>
<div class="alert alert-success">
A threshold on the decrease of the objective value between two iterations can be used as a stopping test.
</div>
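Before running the scheme on the data, here it is on a one-dimensional toy problem (everything in this cell is illustrative; for simplicity the stopping test below is on the change of the iterate rather than of the objective value):

```python
import numpy as np

# proximal gradient on F(w) = 0.5*(w - 1)^2 + rho*|w|,
# whose minimizer is the soft threshold of 1 at level rho
rho, gamma = 0.3, 0.5
grad_f = lambda w: w - 1.0
prox = lambda x, t: np.sign(x) * max(abs(x) - t, 0.0)

w, prev = 5.0, np.inf
while abs(prev - w) > 1e-12:            # stop when the iterate stalls
    prev, w = w, prox(w - gamma * grad_f(w), gamma * rho)
print(w)  # -> close to 0.7, the soft threshold of 1 at level 0.3
```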
```
def objective_proximal(w_, X, y, rho):
"""
X: matrix of size n*(p+1)
y: vector of size n
w0: real number
w: vector of size p
"""
# Initialize elementary intermediate variables;
n, p = X.shape
w = w_[1:]
y_x = np.array([y[i] * X[i, :] for i in range(n)])
yx_w = np.array([np.sum(y_x[i, :]*w_) for i in range(n)])
exp_neg_yxw_1 = np.array([np.exp(-yx_w[i]) for i in range(n)]) + 1
# Compute function value
val = np.mean(np.log(exp_neg_yxw_1)) + rho*np.sum(np.fabs(w))
return val
def f(w_, X, y, return_grad=True):
"""
X: matrix of size n*(p+1)
y: vector of size n
w0: real number
w: vector of size p
"""
# Initialize elementary intermediate variables;
n, p = X.shape
w = w_[1:]
y_x = np.array([y[i] * X[i, :] for i in range(n)])
yx_w = np.array([np.sum(y_x[i, :]*w_) for i in range(n)])
exp_yxw_1 = np.array([np.exp(yx_w[i]) for i in range(n)]) + 1
exp_neg_yxw_1 = np.array([np.exp(-yx_w[i]) for i in range(n)]) + 1
# Compute function value
val = np.mean(np.log(exp_neg_yxw_1))
if return_grad == False:
return val
else:
# Compute gradient
grad = np.mean(-np.array([y_x[i]/exp_yxw_1[i] for i in range(n)]), axis=0)
return val, grad
def Soft_Threshold(w, rho):
    # Componentwise soft-thresholding; the intercept w[0] is left unpenalized
    w_ = np.zeros_like(w)
    w_[w > rho] = w[w > rho] - rho
    w_[w < -rho] = w[w < -rho] + rho
    w_[0] = w[0]
    return w_
def minimize_prox_grad_Taylor(func, f, w_, X, y, rho, a, b, tol=1e-10, max_iter=500):
    n, p = X.shape
    val = func(w_, X, y, rho)
    val_f, grad_f = f(w_, X, y)
    gamma = b / 2.
    delta_val = tol*2
    cnt = 0
    while delta_val > tol and cnt < max_iter:
        gamma = 2*gamma                      # b_k = 2*gamma_{k-1}
        w_new = Soft_Threshold(w_ - gamma*grad_f, gamma*rho)
        val_f_ = f(w_new, X, y, return_grad=False)
        # Backtracking: shrink gamma until the Taylor upper bound
        # f(w+) <= f(w) + <grad f(w), w+ - w> + ||w+ - w||^2 / (2*gamma) holds
        while val_f_ > val_f + np.sum(grad_f*(w_new-w_)) + np.sum((w_new-w_)**2)/(2*gamma):
            gamma = gamma*a
            w_new = Soft_Threshold(w_ - gamma*grad_f, gamma*rho)
            val_f_ = f(w_new, X, y, return_grad=False)
        w_ = w_new
        val_f, grad_f = f(w_, X, y)
        val_ = func(w_, X, y, rho)
        delta_val = val - val_
        val = val_
        cnt = cnt + 1
    return func(w_, X, y, rho), w_, cnt
t0 = time()
rho = 0.1
a = 0.5
b = 1
val_pgls, w_pgls, cnt_pgls = minimize_prox_grad_Taylor(objective_proximal, f, 0.3*np.ones(p+1), X, y, rho, a, b, tol=1e-8, max_iter=500)
print("The minimal value of the objective function is: %0.12f" % val_pgls)
t_pgls = time()-t0
print("Done in %0.3fs, number of iterations: %d" % (t_pgls, cnt_pgls))
print(w_pgls)
```
## Part 3: Comparison
$\textbf{Question 3.1}\quad\text{Compare the properties of the two optimization problems.}$
<div class="alert alert-success">
<p>1. Both objective functions are convex; the Tikhonov-regularized one is differentiable, while the sparsity-regularized one is not.</p>
<p>2. Judging from the two solutions $\omega$ we obtained, Tikhonov regularization uses all the explanatory variables, whereas the sparsity regularization uses only a subset of them.</p>
</div>
$\textbf{Question 3.2}\quad\text{Compare the solutions obtained with the two types of regularization.}$
```
y_pred_nls = np.sign(X.dot(w_nls))
y_pred_pgls = np.sign(X.dot(w_pgls))
print("The chance level is: %f" % max(np.mean(y == 1), 1-np.mean(y == 1)))
print("The score of Newton's method with line search is: %f" % np.mean(y == y_pred_nls))
print("The score of the proximal gradient method with line search is: %f" % np.mean(y == y_pred_pgls))
print("-"*60)
print("Classification report for Newton's method")
print(classification_report(y, y_pred_nls))
print("-"*60)
print("Classification report for the proximal gradient method")
print(classification_report(y, y_pred_pgls))
```
<div class="alert alert-success">
<p>Comparing the scores and the classification reports:</p>
<p>1. The score obtained with Newton's method is better than that of the proximal gradient method.</p>
<p>2. According to the f1-score, Newton's method is also better.</p>
<p>3. With the proximal gradient method, the precision for class 1 is 1.0 while the recall is only 0.1: the method predicts class 1 rarely but reliably.</p>
</div>
# Preprocessing for BraTS, NFBS, and COVIDx8B datasets
#### Change file paths in this notebook to match your system!
Preprocessing steps taken:
- BraTS and NFBS: load images with SimpleITK -> z-score intensity normalization -> break into patches
- COVIDx8B: clean up file names and unzip compressed images
```
import pandas as pd
import numpy as np
import SimpleITK as sitk
import matplotlib.pyplot as plt
from tqdm import trange
import os
import subprocess
```
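The z-score intensity normalization mentioned above is done below via `sitk.Normalize`; as a sketch (not how the pipeline itself calls it), that step amounts to the following on a raw NumPy volume:

```python
import numpy as np

def zscore(volume):
    # subtract the mean intensity and divide by the standard deviation,
    # giving a volume with mean 0 and unit variance
    return (volume - volume.mean()) / volume.std()

vol = np.random.default_rng(0).normal(loc=100.0, scale=20.0, size=(8, 16, 16))
normed = zscore(vol)
print(abs(normed.mean()) < 1e-9, abs(normed.std() - 1.0) < 1e-9)  # -> True True
```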
### Helper functions
```
# Create binary masks for BraTS dataset
def binarize_brats(base_dir):
def get_files(path):
files_list = list()
for root, _, files in os.walk(path, topdown = False):
for name in files:
files_list.append(os.path.join(root, name))
return files_list
files_list = get_files(base_dir)
for file in files_list:
if 'seg' in file:
binary_mask_name = file[0:-7] + '_binary.nii.gz'
binarize_cmd = ['c3d', file, '-binarize', '-o', binary_mask_name]
subprocess.call(binarize_cmd)
################# End of function #################
# Create a CSV with file paths for a dataset
def get_paths_csv(base_dir, name_dict, output_csv):
def get_files(path):
files_list = list()
for root, _, files in os.walk(path, topdown = False):
for name in files:
files_list.append(os.path.join(root, name))
return files_list
cols = ['id'] + list(name_dict.keys())
df = pd.DataFrame(columns = cols)
row_dict = dict.fromkeys(cols)
ids = os.listdir(base_dir)
for i in ids:
row_dict['id'] = i
path = os.path.join(base_dir, i)
files = get_files(path)
for file in files:
for img_type in name_dict.keys():
for img_string in name_dict[img_type]:
if img_string in file:
row_dict[img_type] = file
df = df.append(row_dict, ignore_index = True)
df.to_csv(output_csv, index = False)
################# End of function #################
# Read a nifti file from a given path and return it as a 3D numpy array
def ReadImagesSITK(images_list, dims):
# Read image, normalize, and get numpy array
def GetArray(path):
arr = sitk.ReadImage(path)
arr = sitk.Normalize(arr)
arr = sitk.GetArrayFromImage(arr)
return arr
image = np.empty((*dims, len(images_list)))
for i in range(len(images_list)):
image[..., i] = GetArray(images_list[i])
return image
# Read a segmentation mask from a given path and return one hot representation of mask
def ReadMaskSITK(path, classes):
num_classes = len(classes)
mask = sitk.ReadImage(path)
mask = sitk.GetArrayFromImage(mask)
mask_onehot = np.empty((*mask.shape, num_classes))
for i in range(num_classes):
mask_onehot[..., i] = mask == classes[i]
return mask_onehot
# Write slices of data from csv
def write_slices(input_csv, image_dest, mask_dest, output_csv, image_dims):
input_df = pd.read_csv(input_csv)
num_pats = len(input_df)
slice_thickness = 5
output_cols = ['id', 'image', 'mask']
output_df = pd.DataFrame(columns = output_cols)
for i in trange(num_pats):
# Get row of input dataframe
current_pat = input_df.iloc[i].to_dict()
# Read in images and masks
images_list = list(current_pat.values())[2:len(current_pat)]
img = ReadImagesSITK(images_list, dims = image_dims)
mask_binary = ReadMaskSITK(current_pat['mask'], classes = [0, 1])
img_depth = image_dims[0]
for k in range(img_depth - slice_thickness + 1):
mask_binary_slice = mask_binary[k:(k + slice_thickness), ...]
# Only take slices with foreground in them - this is for training only
if np.sum(mask_binary_slice[..., 1]) > 25:
# Get corresponding image slices
img_slice = img[k:(k + slice_thickness), ...]
# Name the slices and write them to disk
slice_name = current_pat['id'] + '_' + str(k) + '.npy'
img_slice_name = image_dest + slice_name
mask_binary_slice_name = mask_dest + slice_name
np.save(img_slice_name, img_slice)
np.save(mask_binary_slice_name, mask_binary_slice)
# Track slices with output dataframe
output_df = output_df.append({'id': current_pat['id'],
'image': img_slice_name,
'mask': mask_binary_slice_name},
ignore_index = True)
# Save dataframe to .csv and use the .csv for training the BraTS model
output_df.to_csv(output_csv, index = False)
############## END OF FUNCTION ##############
```
### Create file path CSVs for BraTS and NFBS datasets
```
################# Create binary masks for BraTS #################
# Change this to the appropriate folder on your system
brats_base_dir = '/rsrch1/ip/aecelaya/data/brats_2020/raw/train/'
binarize_brats(brats_base_dir)
################# Create CSV with BraTS file paths #################
brats_names_dict = {'mask': ['seg_binary.nii.gz'],
't1': ['t1.nii.gz'],
't2': ['t2.nii.gz'],
'tc': ['t1ce.nii.gz'],
'fl': ['flair.nii.gz']}
brats_output_csv = 'brats_paths.csv'
get_paths_csv(brats_base_dir, brats_names_dict, brats_output_csv)
################# Create CSV with NFBS file paths #################
nfbs_names_dict = {'mask': ['brainmask.nii.gz'],
't1': ['T1w.nii.gz']}
# Change this to the appropriate folder on your system
nfbs_base_dir = '/rsrch1/ip/aecelaya/data/nfbs/raw/'
nfbs_output_csv = 'nfbs_paths.csv'
get_paths_csv(nfbs_base_dir, nfbs_names_dict, nfbs_output_csv)
```
### Preprocess BraTS and NFBS and write slices to disk
```
################# Preprocess and write BraTS slices to disk #################
brats_input_csv = 'brats_paths.csv'
# Change these to the appropriate folder on your system
brats_image_dest = '/rsrch1/ip/aecelaya/github/NecrosisRecurrence/pocketnet/brats/test/images/'
brats_mask_dest = '/rsrch1/ip/aecelaya/github/NecrosisRecurrence/pocketnet/brats/test/masks/'
brats_output_csv = 'brats_slices_paths.csv'
write_slices(brats_input_csv, brats_image_dest, brats_mask_dest, brats_output_csv, image_dims = (155, 240, 240))
################# Preprocess and write NFBS slices to disk #################
nfbs_input_csv = 'nfbs_paths.csv'
# Change these to the appropriate folder on your system
nfbs_image_dest = '/rsrch1/ip/aecelaya/github/NecrosisRecurrence/pocketnet/brats/test2/images/'
nfbs_mask_dest = '/rsrch1/ip/aecelaya/github/NecrosisRecurrence/pocketnet/brats/test2/masks/'
nfbs_output_csv = 'nfbs_slices_paths.csv'
write_slices(nfbs_input_csv, nfbs_image_dest, nfbs_mask_dest, nfbs_output_csv, image_dims = (192, 256, 256))
```
### Clean up file names for COVIDx8B
```
'''
Clean up the COVIDx dataset. There are a few glitches in it. This script corrects them.
1) Some files in the COVIDx training set are compressed (i.e., end with .gz). Keras can't read
zipped files with its native image data generators. This script goes through each file
and checks to see if its compressed and unzips it if it is.
2) The original train.csv file that comes with the COVIDx dataset has incorrect file names for rows
725 - 1667. These rows only contain numbers and not the name of an image. For example, row 725 has
the entry 1 but it should be COVID1.png.
Before running this, please change the file paths in this code to match your system.
'''
def get_files(dir_name):
list_of_files = list()
for (dirpath, dirnames, filenames) in os.walk(dir_name):
list_of_files += [os.path.join(dirpath, file) for file in filenames]
return list_of_files
files_list = get_files('/rsrch1/ip/aecelaya/data/covidx/processed/train')
for file in files_list:
if '(' in file:
new_file = file.replace('(', '')
new_file = new_file.replace(')', '')
print('Renaming ' + file + ' to ' + new_file)
os.rename(file, new_file)
file = new_file
if '.gz' in file:
# Unzip files with gunzip
print('Unzipping ' + file)
subprocess.call('gunzip ' + file, shell = True)
train_df = pd.read_csv('/rsrch1/ip/aecelaya/data/covidx/raw/data/train.csv')
for i in range(724, 1667):
number = train_df.iloc[i]['image']
train_df.at[i, 'image'] = 'COVID' + number + '.png'
for i in range(len(train_df)):
file = '/rsrch1/ip/aecelaya/data/covidx/train/' + train_df.iloc[i]['image']
if not os.path.isfile(file):
print('Does not exist: ' + file + ', row = ' + str(i))
train_df.to_csv('covidx_train_clean.csv', index = False)
```
## Smithsonian OpenAccess Collection Data API
Let's use requests to scrape some data from an API endpoint. In this case, we can use the Smithsonian's [Open Access API](https://edan.si.edu/openaccess/apidocs/#api-_), which is a REST API that responds to HTTP requests. See the documentation at [https://edan.si.edu/openaccess/apidocs/#api-_footer](https://edan.si.edu/openaccess/apidocs/#api-_footer)
The documentation for requests can be found here: http://docs.python-requests.org/en/master/
The endpoint for the "content" API, which provides
information about an individual item, is `https://api.si.edu/openaccess/api/v1.0/content/:id`.
To use the Smithsonian APIs, you will need an API key from the data.gov
API key generator. Register with [https://api.data.gov/signup/](https://api.data.gov/signup/) to get a key.
```
import requests
statsEndpoint = 'https://api.si.edu/openaccess/api/v1.0/stats'
API_Key = 'YOUR_API_KEY'  # replace with the key you obtained from api.data.gov
```
The content API fetches metadata about objects in the Smithsonian's
collections using the ID or URL of the object.
For example, in this case to get information about an album in
the Folkways Records Collection, we will use the id `edanmdm:siris_arc_231998`.
To pass in the parameters, we can use a dictionary! Let's try using `params`
```
key = {
'api_key': API_Key
}
```
First, let's try a basic call to the stats API, to see if things are working:
```
r = requests.get(statsEndpoint, params = key)
print('You requested:',r.url)
print('HTTP server response code:',r.status_code)
print('HTTP response headers',r.headers)
# notice that the headers method returns a dictionary, too?
# We could ask what sort of content it's returning:
print('\nYour request has this content type:\n',r.headers['content-type'])
```
So the request has returned a JSON object! Access the response body using the `.text` attribute.
```
r.text[:500]
type(r.text)
```
#### API Call question
We want to make a request to the Smithsonian API. Can you fill in the following & explain the missing elements?
```
https://api.si.edu/openaccess/api/v1.0/content/:_____
```
What other items might you use after the `?`...
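As a sketch of what goes after the `?` (the object id is the one used later in this notebook; the key value here is just a placeholder), the query string is simply URL-encoded key/value pairs appended to the endpoint:

```python
from urllib.parse import urlencode

endpoint = 'https://api.si.edu/openaccess/api/v1.0/content/'
object_id = 'edanmdm:siris_arc_231998'
query = urlencode({'api_key': 'YOUR_API_KEY'})   # extra pairs would be joined with '&'
url = endpoint + object_id + '?' + query
print(url)  # -> ...content/edanmdm:siris_arc_231998?api_key=YOUR_API_KEY
```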
## Object information
Now, let's try using the "content" API to get information about individual objects:
```
contentEndpoint = 'https://api.si.edu/openaccess/api/v1.0/content/'
object_id = 'edanmdm:siris_arc_231998' # Smithsonian Folkways Music of Hungary
parameters = {
'api_key' : API_Key
}
requestURL = contentEndpoint + object_id
r = requests.get(requestURL, params = parameters)
print('You requested:',r.url)
print('HTTP server response code:',r.status_code)
print('HTTP response headers',r.headers)
# notice that the headers method returns a dictionary, too?
# We could ask what sort of content it's returning:
print('\nYour request has this content type:\n',r.headers['content-type'])
```
Take a look at the response information:
```
r.text[:500]
```
Use the built-in `.json()` decoder in requests
```
object_json = r.json()
for element in object_json['response']:
print(element)
obj = object_json['response']  # avoid shadowing the built-in name `object`
for k, v in obj.items():
    print(k,':',v)
```
#### Resources
* [Real Python working with JSON data](https://realpython.com/python-json/)
* [Python json module documentation](https://docs.python.org/3/library/json.html)
### Parsing the Data from the API using json module
Now that we have the response, let's save it to a file. To do this, use the `json` module.
```
import json
data = json.loads(r.text)
# what are the keys?
for element in data:
print(element)
for key, val in data['response'].items():
print(key,':',val)
print(len(data['response']))
object_id = data['response']['id']
print(object_id)
```
Compare to the online display.
See https://collections.si.edu/search/detail/edanmdm:siris_arc_231998
Is it possible to extract each result into its own file?
```
# Write the fetched result to its own JSON file
data = json.loads(r.text)
objectInfo = data['response']
print(len(objectInfo))
# Object ids contain ':' which is awkward in file names, so replace it
fname = 'object-result-' + objectInfo['id'].replace(':', '_') + '.json'
with open(fname, 'w') as f:
    f.write(json.dumps(objectInfo))
print('wrote', fname)
```
How could we extract the image URLs?
```
for key in objectInfo['content']:
print(key)
for info in objectInfo['content']['indexedStructured']:
print(info)
# there doesn't seem to be an image URL list ...
```
---
This section explores using a different object to uncover other properties. Namely, Alexander Graham Bell's 1885 Mary Had a Little Lamb recording done at Volta Labs (`edanmdm:nmah_852778`).
```
object_id = 'edanmdm:nmah_852778' # Alexander Graham Bell's 1885 Mary Had a Little Lamb from Volta Labs
parameters
request_URL = contentEndpoint + object_id
r = requests.get(request_URL, params=parameters)
print(r.url,
'\n',
r.status_code,
'\n',
r.headers)
for element in r.json():
print(element)
for element in r.json()['response']:
print(element)
object_info = json.loads(r.text)
print(json.dumps(object_info, indent=2))
object_url = 'https://collections.si.edu/search/detail/edanmdm:nmah_852778'
# see the following for information on working with the Image Delivery Service
# https://sirismm.si.edu/siris/ImageDisplay.htm
# possible to find slideshows?
# eg https://edan.si.edu/slideshow/viewer/?damspath=/Public_Sets/NMAH/NMAH-AC/AC0300/S01
```
```
# %matplotlib widget
from util import get_path
import pandas as pd
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
from extract_graph import generate_nx_graph, transform_list, generate_skeleton, generate_nx_graph_from_skeleton, from_connection_tab
from node_id import whole_movement_identification, second_identification
import ast
from plotutil import plot_t_tp1, compress_skeleton
from scipy import sparse
from sparse_util import dilate, zhangSuen
from realign import realign
from datetime import datetime,timedelta
import cv2
import imageio
import scipy.io as sio
plate = 13
date_init = datetime(2020,7,1,19,57)
dates_datetime = [date_init+timedelta(hours=4)*i for i in range(24)]
dates = [date.strftime('%m%d_%H%M') for date in dates_datetime]
tabs_labeled=[]
for date in dates:
tabs_labeled.append(pd.read_csv(get_path(date,plate,True,extension='_full_labeled.csv'),
converters={'origin_pos' : transform_list,'end_pos' : transform_list,'pixel_list' : ast.literal_eval}))
from_tip_growth_pattern=[]
for date in dates[:-1]:
from_tip_growth_pattern.append(from_connection_tab(pd.read_csv(get_path(date,plate,True,extension='_connection.csv'))))
tabs=[]
for date in dates:
tabs.append(pd.read_csv(get_path(date,plate,True,extension='_full_labeled_matlab.csv'),
converters={'origin_pos' : transform_list,'end_pos' : transform_list,'pixel_list' : ast.literal_eval}))
tabs_raw=[]
for date in dates:
tabs_raw.append(pd.read_csv(get_path(date,plate,True,extension='_raw_aligned_skeleton.csv'),
converters={'origin_pos' : transform_list,'end_pos' : transform_list,'pixel_list' : ast.literal_eval}))
for i, date in enumerate(dates):
tabs_labeled[i].to_csv(f'Data/graph_{date}_{plate}_full_labeled.csv')
tabs[i].to_csv(f'Data/graph_{date}_{plate}_full_labeled_matlab.csv')
tabs_raw[i].to_csv(f'Data/graph_{date}_{plate}_raw_aligned_skeleton.csv')
sio.savemat(f'Data/graph_{date}_{plate}_full_labeled.mat', {name: col.values for name, col in tabs_labeled[i].items()})
from_tip_growth_pattern_tab=[]
for date in dates[:-1]:
from_tip_growth_pattern_tab.append(pd.read_csv(get_path(date,plate,True,extension='_connection.csv')))
for i, date in enumerate(dates[:-1]):
from_tip_growth_pattern_tab[i].to_csv(f'Data/connection_{date}_{plate}.csv')
# from_tip_growth_pattern=[]
# for i in range(len(from_tip_growth_pattern_tab)):
# from_tip_growth_pattern.append(from_connection_tab(from_tip_growth_pattern_tab[i]))
tabs_labeled=[]
for date in dates:
tabs_labeled.append(pd.read_csv(f'Data/graph_{date}_{plate}_full_labeled.csv',
converters={'origin_pos' : transform_list,'end_pos' : transform_list,'pixel_list' : ast.literal_eval}))
nx_graphs=[]
poss=[]
for tab in tabs_labeled:
nx_graph,pos=generate_nx_graph(tab,labeled=True)
nx_graphs.append(nx_graph)
poss.append(pos)
nx_graph_clean=[]
for graph in nx_graphs:
S = [graph.subgraph(c).copy() for c in nx.connected_components(graph)]
len_connected=[len(nx_graph.nodes) for nx_graph in S]
nx_graph_clean.append(S[np.argmax(len_connected)])
skeletons=[]
for nx_graph in nx_graph_clean:
skeletons.append(generate_skeleton(nx_graph,dim=(20800, 46000)))
factor = 5
final_pictures = [compress_skeleton(skeletons[i],factor) for i in range(len(skeletons))]
connections = [c[0] for c in from_tip_growth_pattern]
growth_patterns = [c[1] for c in from_tip_growth_pattern]
growths = [{tip : sum([len(branch) for branch in growth_pattern[tip]]) for tip in growth_pattern.keys()} for growth_pattern in growth_patterns]
def pinpoint_anastomosis(nx_graph_tm1,nx_grapht,from_tip):
anastomosis=[]
origins=[]
tips = [node for node in nx_graph_tm1.nodes if nx_graph_tm1.degree(node)==1]
def count_neighbors_is_from_root(equ_list,nx_graph,root):
count=0
for neighbor in nx_graph.neighbors(root):
if neighbor in equ_list:
count+=1
return(count)
for tip in tips:
# print(tip)
consequence = from_tip[tip]
for node in consequence:
if node in nx_grapht.nodes and nx_grapht.degree(node)>=3 and count_neighbors_is_from_root(consequence,nx_grapht,node)<2:
# if node==2753:
# print(count_neighbors_is_from_root(consequence,nx_grapht,node))
# print(list(nx_grapht.neighbors(node)))
anastomosis.append(node)
origins.append(tip)
return(anastomosis,origins)
def find_origin_tip(node,from_tip):
for tip in from_tip.keys():
if node in from_tip[tip]:
return(tip)
anastomosiss=[pinpoint_anastomosis(nx_graph_clean[i],nx_graph_clean[i+1], connections[i])[0] for i in range (len(dates)-1)]
origins=[pinpoint_anastomosis(nx_graph_clean[i],nx_graph_clean[i+1], connections[i])[1] for i in range (len(dates)-1)]
growing_tips=[[node for node in growths[i].keys() if growths[i][node]>=20] for i in range(len(growths))]
degree3_nodes = [[node for node in nx_graph.nodes if nx_graph.degree(node)>=3] for nx_graph in nx_graph_clean]
t=1
tp1=t+1
plot_t_tp1(degree3_nodes[t],degree3_nodes[tp1],poss[t],poss[tp1],final_pictures[t],final_pictures[tp1],compress=5)
plot_t_tp1(origins[t],anastomosiss[t],poss[t],poss[tp1],final_pictures[t],final_pictures[tp1],compress=5,)
plot_t_tp1(growing_tips[t],growing_tips[t],poss[t],poss[tp1],final_pictures[t],final_pictures[tp1],compress=5,)
t=3
tp1=t+1
plot_t_tp1([2180],[2180],poss[t],poss[tp1],final_pictures[t],final_pictures[tp1],compress=5,)
plot_t_tp1(degree3_nodes[t],degree3_nodes[tp1],poss[t],poss[tp1],final_pictures[t],final_pictures[tp1],compress=5)
def make_growth_picture_per_tip(pixels_from_tip,pos,shape=(20700,45600),factor=10,max_growth=200,min_growth=10,per_tip=True):
final_picture = np.zeros(shape=(shape[0]//factor,shape[1]//factor))
number_tips = np.zeros(shape=(shape[0]//factor,shape[1]//factor))
for tip in pixels_from_tip.keys():
growth=pixels_from_tip[tip]
x=min(round(pos[tip][0]/factor),shape[0]//factor-1)
y=min(round(pos[tip][1]/factor),shape[1]//factor-1)
if growth<=max_growth:
# print(number_tips)
if growth>=min_growth:
number_tips[x,y]+=1
final_picture[x,y]+=growth
# print(growth,beginx,endx)
# for x in range(shape[0]//factor):
# if x%1==0:
# print(x/2070)
# for y in range(shape[1]//factor):
# beginx = x*factor
# endx=(x+1)*factor
# beginy = y*factor
# endy=(y+1)*factor
# tips_in_frame = [tip for tip in pixels_from_tip.keys() if (beginx<pos[tip][0]<endx) and (beginy<pos[tip][1]<endy)]
#             #should be improved, len is not a good indicator of actual length...
# growth_in_frame = [len(pixels_from_tip[tip]) for tip in tips_in_frame]
# final_picture[x,y]=np.mean(growth_in_frame)
if per_tip:
        return(final_picture/(number_tips+(number_tips==0).astype(int)),number_tips)
else:
return(final_picture,number_tips)
final_pictures_growth = [np.log(make_growth_picture_per_tip(growths[i],poss[i],factor=500,max_growth=4000,per_tip=True,min_growth=0)[0]+1) for i in range (len(growths))]
images = []
for i,picture in enumerate(final_pictures_growth):
fig = plt.figure(figsize=(14,12))
ax = fig.add_subplot(111)
ax.imshow(picture)
bbox_time = dict(boxstyle="square", fc="black")
ax.text(0.90, 0.90, f'{4*i}h',
horizontalalignment='right',
verticalalignment='bottom',
transform=ax.transAxes,color='white',size=10*1.5,bbox=bbox_time)
plt.savefig(f'Data/video_test/growth_timestep_{i}.png')
plt.close(fig)
images.append(imageio.imread(f'Data/video_test/growth_timestep_{i}.png'))
imageio.mimsave('Data/video_test/movie_growth.gif', images,duration=1)
paths=[]
i=5
for node in origins[i]:
node_interest=node
pos_problem=poss[i][node_interest]
xbegin=pos_problem[0]-500
ybegin=pos_problem[1]-500
xend=pos_problem[0]+500
yend=pos_problem[1]+500
kernel = np.ones((5,5),np.uint8)
skeleton_small1=skeletons[i][xbegin:xend,ybegin:yend]
skeleton_small1=cv2.dilate(skeleton_small1.todense().astype(np.uint8),kernel,iterations = 1)
skeleton_small2=skeletons[i+1][xbegin:xend,ybegin:yend]
skeleton_small2=cv2.dilate(skeleton_small2.todense().astype(np.uint8),kernel,iterations = 1)
path = f'Data/video_test/network_timestep_{i}_{node}'
    paths.append(path)
plot_t_tp1(origins[i],anastomosiss[i],poss[i],poss[i+1],skeleton_small1,skeleton_small2,
relabel_tp1=lambda node : find_origin_tip(node,connections[i]), shift=(xbegin,ybegin), save=path,time=f't={4*i}h')
images = []
for path in paths:
images.append(imageio.imread(path+'.png'))
imageio.mimsave(f'Data/video_test/{plate}_anastomosi_movie{i}.gif', images,duration=2)
node_interest=60
pos_problem=poss[0][node_interest]
xbegin=pos_problem[0]-1500
ybegin=pos_problem[1]-1500
xend=pos_problem[0]+1500
yend=pos_problem[1]+1500
skeletons_small=[]
for skeleton in skeletons:
skeletons_small.append(skeleton[xbegin:xend,ybegin:yend])
node_smalls=[]
for i,nx_graph in enumerate(nx_graph_clean):
node_smalls.append([node for node in nx_graph.nodes if (xbegin<poss[i][node][0]<xend and ybegin<poss[i][node][1]<yend and nx_graph.degree(node)>=1)])
kernel = np.ones((5,5),np.uint8)
skeletons_small_dilated=[cv2.dilate(skeleton.todense().astype(np.uint8),kernel,iterations = 1) for skeleton in skeletons_small]
for tp1 in range(len(growths)):
plot_t_tp1(node_smalls[tp1],node_smalls[tp1],poss[tp1],poss[tp1],skeletons_small_dilated[tp1],skeletons_small_dilated[tp1],shift=(xbegin,ybegin),
save=f'Data/video_test/network_timestep_{tp1}',time=f't={4*tp1}h')
images = []
for t in range(len(growths)):
images.append(imageio.imread(f'Data/video_test/network_timestep_{t}.png'))
imageio.mimsave(f'Data/video_test/{node_interest}movie.gif', images,duration=1)
node_interest=60
pos_problem=[poss[i][node_interest] for i in range(len(poss))]
xbegin=[pos_problem[i][0]-1500 for i in range(len(poss))]
ybegin=[pos_problem[i][1]-1500 for i in range(len(poss))]
xend=[pos_problem[i][0]+1500 for i in range(len(poss))]
yend=[pos_problem[i][1]+1500 for i in range(len(poss))]
skeletons_small=[]
for i,skeleton in enumerate(skeletons):
skeletons_small.append(skeleton[xbegin[i]:xend[i],ybegin[i]:yend[i]])
node_smalls=[]
for i,nx_graph in enumerate(nx_graph_clean):
node_smalls.append([node for node in nx_graph.nodes if (xbegin[i]<poss[i][node][0]<xend[i] and ybegin[i]<poss[i][node][1]<yend[i] and nx_graph.degree(node)>=1)])
kernel = np.ones((5,5),np.uint8)
skeletons_small_dilated=[cv2.dilate(skeleton.todense().astype(np.uint8),kernel,iterations = 1) for skeleton in skeletons_small]
for tp1 in range(len(growths)):
plot_t_tp1(node_smalls[tp1],node_smalls[tp1],poss[tp1],poss[tp1],skeletons_small_dilated[tp1],skeletons_small_dilated[tp1],shift=(xbegin[tp1],ybegin[tp1]),save=f'Data/video_test/network_timestep_{tp1}',time=f't={4*tp1}h')
images = []
for t in range(len(growths)):
images.append(imageio.imread(f'Data/video_test/network_timestep_{t}.png'))
imageio.mimsave(f'Data/video_test/{node_interest}movie_track.gif', images,duration=1)
def plot_t_tp1(node_list_t,node_list_tp1,pos_t,pos_tp1,imt,imtp1,relabel_t=lambda x:x,relabel_tp1=lambda x:x, shift=(0,0),compress=1,save='',time=None):
left, width = .25, .5
bottom, height = .25, .5
right = 0.90
top = 0.90
if len(save)>=1:
fig=plt.figure(figsize=(14,12))
size = 10
else:
fig = plt.figure()
size = 5
ax = fig.add_subplot(111)
ax.imshow(imtp1, cmap='gray',interpolation='none')
ax.imshow(imt, cmap='jet', alpha=0.5,interpolation='none')
bbox_time = dict(boxstyle="square", fc="black")
bbox_props1 = dict(boxstyle="circle", fc="grey")
bbox_props2 = dict(boxstyle="circle", fc="white")
for node in node_list_t:
t = ax.text((pos_t[node][1]-shift[1])//compress, (pos_t[node][0]-shift[0])//compress, str(relabel_t(node)), ha="center", va="center",
size=size,
bbox=bbox_props1)
for node in node_list_tp1:
if node in pos_tp1.keys():
t = ax.text((pos_tp1[node][1]-shift[1])//compress, (pos_tp1[node][0]-shift[0])//compress, str(relabel_tp1(node)), ha="center", va="center",
size=size,
bbox=bbox_props2)
ax.text(right, top, time,
horizontalalignment='right',
verticalalignment='bottom',
transform=ax.transAxes,color='white',size=size*1.5,bbox=bbox_time)
if len(save)>=1:
plt.savefig(save)
plt.close(fig)
else:
plt.show()
growths = [[np.log(len(growth)+1) for growth in growth_pat.values() if len(growth)+1>=10] for growth_pat in growth_patterns]
fig=plt.figure()
ax = fig.add_subplot(111)
ax.hist(growths,10)
```
```
import dask
from dask.distributed import Client
import dask_jobqueue
import discretize
from discretize.utils import mkvc
# import deepdish as dd
import h5py
import json
import matplotlib.pyplot as plt
from matplotlib import cm as cmap
from matplotlib.colors import LogNorm, Normalize
import numpy as np
import os
import pandas as pd
import scipy.sparse as sp
import xarray as xr
import zarr
import casingSimulations as casing_sim
from SimPEG import maps
from SimPEG.electromagnetics import time_domain as tdem
from pymatsolver import Pardiso
np.random.seed(29)
directory = "test"
if not os.path.isdir(directory):
os.makedirs(directory, exist_ok=True)
from matplotlib import rcParams
rcParams["font.size"] = 16
nsamples = 2
# set bounds for the distributions of the model parameters
sigma_background_bounds = np.r_[1e-4, 1]
sigma_casing_bounds = np.r_[1e4, 1e7]
d_casing_bounds = np.r_[5e-2, 30e-2]
t_casing_bounds = np.r_[0.5e-2, 2e-2]
l_casing_bounds = np.r_[20, 4e3]
# constants
sigma_air = 1e-4
sigma_inside = 1 # fluid inside the casing
mur_casing = 1 # permeability is the same as free space
src_a = np.r_[0., 0., 0.] # the radius will be updated to connect to the casing
src_b = np.r_[1000., 0, 0]
csz = 2.5 # cell-size in the z-direction
hy = np.ones(12)
hy = hy*2*np.pi / hy.sum()
# areas to compare data
z_compare = np.linspace(-100, 0, 128)
def generate_random_variables(bounds, n_samples, sig_digs=None):
min_value = bounds.min()
max_value = bounds.max()
v = np.random.rand(n_samples)
v = min_value + (v*(max_value - min_value))
if sig_digs is not None:
v = np.round((v*10**(sig_digs)))/10**(sig_digs)
return v
log10_sigma_background_dist = generate_random_variables(np.log10(sigma_background_bounds), nsamples, 2)
log10_sigma_casing_dist = generate_random_variables(np.log10(sigma_casing_bounds), nsamples, 2)
d_casing_dist = generate_random_variables(d_casing_bounds, nsamples, 2)
t_casing_dist = generate_random_variables(t_casing_bounds, nsamples, 2)
l_casing_dist = np.r_[1000, 1000] #generate_random_variables(l_casing_bounds/csz, nsamples, 0) * csz # generate by ncells
parameters = {
"log10_sigma_background":log10_sigma_background_dist,
"log10_sigma_casing":log10_sigma_casing_dist,
"d_casing":d_casing_dist,
"t_casing":t_casing_dist,
"l_casing":l_casing_dist,
}
df = pd.DataFrame(parameters)
df
df.to_hdf(f"{directory}/trial_data.h5", 'data') #for key in df.keys()
fig, ax = plt.subplots(1,5, figsize=(20, 4))
for i, key in enumerate(parameters.keys()):
ax[i].hist(df[key])
ax[i].set_title(f"{key}".replace("_", " "))
plt.tight_layout()
time_steps = [
(1e-6, 20), (1e-5, 30), (3e-5, 30), (1e-4, 40), (3e-4, 30), (1e-3, 20), (1e-2, 15)
]
df2 = pd.read_hdf(f"{directory}/trial_data.h5", 'data', start=1, stop=2)
df2["log10_sigma_background"]
i = 0
trial_directory = f"{directory}/trial_{i}/"
if not os.path.isdir(trial_directory):
os.makedirs(trial_directory, exist_ok=True)
cd = parameters["d_casing"][i]
ct = parameters["t_casing"][i]
cl = parameters["l_casing"][i]
sc = 10**(parameters["log10_sigma_casing"][i])
sb = 10**(parameters["log10_sigma_background"][i])
model = casing_sim.model.CasingInHalfspace(
directory=trial_directory,
casing_d = cd - ct, # I use diameter to the center of the casing wall
casing_l = cl,
casing_t = ct,
mur_casing = mur_casing,
sigma_air = sigma_air,
sigma_casing = sc,
sigma_back = sb,
sigma_inside = sb,
src_a = src_a,
src_b = src_b,
timeSteps = time_steps
)
model.filename = "casing.json"
np.sum(model.timeSteps)
sigmaA = model.sigma_casing * (model.casing_b**2 - model.casing_a**2)/model.casing_b**2
print(f"The approximate conductivity of the solid we use is {sigmaA:1.1e}")
model_approx_casing = model.copy()
model_approx_casing.casing_t = cd / 2.
model_approx_casing.casing_d = cd - model_approx_casing.casing_t
model_approx_casing.sigma_inside = sigmaA
model_approx_casing.sigma_casing = sigmaA
model_approx_casing.filename = "approx_casing.json"
def generate_mesh(model):
csx1 = model.casing_t/4
csx2 = 100
csz = 2.5
    # ensure padding goes sufficiently far in the x direction
pad_to = 1e4
npad_x = 0
npad_z = 0
padding_x = cl
padding_z = cl
pfx2 = 1.5
pfz = 1.5
# csx2 = 10
while padding_x < pad_to:
npad_x += 1
padding_x = cl + np.sum((csx2 * (np.ones(npad_x)*pfx2)**np.arange(1, npad_x+1)))
while padding_z < pad_to:
npad_z += 1
padding_z = cl + np.sum((csz * (np.ones(npad_z)*pfz)**np.arange(1, npad_z+1)))
meshGen = casing_sim.mesh.CasingMeshGenerator(
modelParameters = model,
csx1 = csx1,
csx2 = csx2,
domain_x = cl,
hy = hy,
npadx = npad_x,
npadz = npad_z,
csz = csz,
_ncx1 = np.ceil(cd / csx1)
)
mesh = meshGen.mesh
return meshGen, mesh
meshGen, mesh = generate_mesh(model)
# meshGen_approx, mesh_approx = meshGen, mesh
meshGen_approx, mesh_approx = generate_mesh(model_approx_casing)
print(model.diffusion_distance(t=0.1))
ax = mesh.plotGrid()
# ax[1].set_xlim([0, 1100])
ax2 = mesh_approx.plotGrid()
print(mesh.nC, mesh_approx.nC)
def get_source(model, mesh, meshGen):
src_theta = np.pi/2. + mesh.hy[0]/2.
model.src_a[1] = src_theta
model.src_b[1] = src_theta
src_top = casing_sim.sources.TopCasingSrc(
modelParameters=model,
meshGenerator=meshGen,
src_a=model.src_a,
src_b=model.src_b,
physics="TDEM",
filename="top_casing",
)
source_list = src_top.srcList
return source_list
source_list = get_source(model, mesh, meshGen)
source_list_approx = get_source(model_approx_casing, mesh_approx, meshGen_approx)
physprops = casing_sim.model.PhysicalProperties(modelParameters=model, meshGenerator=meshGen)
physprops_approx = casing_sim.model.PhysicalProperties(modelParameters=model_approx_casing, meshGenerator=meshGen_approx)
model.casing_b, model_approx_casing.casing_b
fig, ax = plt.subplots(1, 2, figsize=(12, 6))
xlim = 0.5 * np.r_[-1, 1]
zlim = np.r_[-model.casing_l*1.1, 10]
physprops.plot_sigma(ax=ax[0], pcolorOpts={'norm':LogNorm()})
physprops_approx.plot_sigma(ax=ax[1], pcolorOpts={'norm':LogNorm()})
for a in ax:
a.set_xlim(xlim)
a.set_ylim(zlim)
plt.tight_layout()
np.save(f"{trial_directory}casing.npy", model.sigma(mesh))
np.save(f"{trial_directory}approx_casing.npy", model_approx_casing.sigma(mesh_approx))
survey = tdem.Survey(source_list)
survey_approx = tdem.Survey(source_list_approx)
sim = tdem.Problem3D_j(mesh=mesh, survey=survey, solver=Pardiso, time_steps=time_steps)
sim_approx = tdem.Problem3D_j(mesh=mesh_approx, survey=survey_approx, solver=Pardiso, time_steps=time_steps)
with open(f"{trial_directory}simulation.json", 'w') as outfile:
json.dump(sim.serialize(), outfile)
with open(f"{trial_directory}simulation_approx.json", 'w') as outfile:
json.dump(sim_approx.serialize(), outfile)
def compute_fields(model, simulation, trial_directory):
import deepdish as dd
import discretize
import casingSimulations as casing_sim
from SimPEG.electromagnetics import time_domain as tdem
from SimPEG import maps
from pymatsolver import Pardiso
# simulation_params = dd.io.load(f"{trial_directory}simulation.h5")
# print(f"{trial_directory}simulation.json")
with open(f"{trial_directory}{simulation}.json") as f:
simulation_params = json.load(f)
sim = tdem.Problem3D_j.deserialize(simulation_params, trusted=True)
mesh = sim.mesh
sim.solver = Pardiso
sim.sigmaMap=maps.IdentityMap(mesh)
sim.verbose=True
m = np.load(f"{trial_directory}{model}.npy")
fields = sim.fields(m)
f = fields[:, '{}Solution'.format(sim._fieldType), :]
filename = f"{model}_fields.npy"
tosave = os.path.sep.join([trial_directory, filename])
print(f"saving {tosave}")
np.save(tosave, f)
return tosave
cluster = dask_jobqueue.SLURMCluster(
cores=nsamples,
processes=nsamples*2, memory=f'{120*nsamples}GB',
job_cpu=1,
project="m3384",
job_extra = ['--constraint=haswell', '--qos=debug',],
death_timeout=360,
)
print(cluster.job_script())
client = Client(cluster)
client
# client = Client(threads_per_worker=1, n_workers=2)
# client
f = {}
for m, sim in zip(["casing", "approx_casing"], ["simulation", "simulation_approx"]):
# f[m] = compute_fields(m, trial_directory)
f[m] = dask.delayed(compute_fields)(m, sim, trial_directory)
cluster.scale(1)
fields_files = dask.compute(f)[0]
ndata = 32
ntimes = 128
xsample = np.linspace(25, 1000, ndata)
zsample = np.linspace(-cl, 0, ndata)
xz_grid = discretize.utils.ndgrid(xsample, np.r_[0], zsample)
tsample = np.logspace(-6, -2, 128)
currents = {}
for m in ["casing", "approx_casing"]:
currents[m] = np.load(f"{trial_directory}{m}_fields.npy")
def get_matching_indices(grid="x"):
vnF = getattr(mesh, f"vnF{grid}")
vnF_approx = getattr(mesh_approx, f"vnF{grid}")
x0 = np.ones(vnF[0], dtype=bool)
x0[:vnF[0] - vnF_approx[0]] = False
return np.kron(np.ones(vnF[2], dtype=bool), np.kron(np.ones(vnF[1], dtype=bool), x0))
indsFx = get_matching_indices("x")
indsFy = get_matching_indices("y")
indsFz = get_matching_indices("z")
inds = np.hstack([indsFx, indsFy, indsFz])
# compute jd
jd = currents["casing"][inds] - currents["approx_casing"]
jdx = mkvc(jd[:mesh_approx.vnF[0], :]).reshape(tuple(mesh_approx.vnFx)+(sim_approx.nT+1,), order="F")
jdz = mkvc(jd[np.sum(mesh_approx.vnF[:2]):, :]).reshape(tuple(mesh_approx.vnFz)+(sim_approx.nT+1,), order="F")
# take mean in theta-dimension jdx.mean(1)
jdx = jdx.mean(1)
jdz = jdz.mean(1)
jdxz = np.hstack([mkvc(jdx), mkvc(jdz)])
hx1a = discretize.utils.meshTensor([(meshGen.csx1, meshGen.ncx1)])
# pad to second uniform region
hx1b = discretize.utils.meshTensor([(meshGen.csx1, meshGen.npadx1, meshGen.pfx1)])
# scale padding so it matches cell size properly
dx1 = np.sum(hx1a)+np.sum(hx1b)
dx1 = 3 #np.floor(dx1/meshGen.csx2)
hx1b *= (dx1*meshGen.csx2 - np.sum(hx1a))/np.sum(hx1b)
# second uniform chunk of mesh
ncx2 = np.ceil((meshGen.domain_x - dx1)/meshGen.csx2)
hx2a = discretize.utils.meshTensor([(meshGen.csx2, ncx2)])
# pad to infinity
hx2b = discretize.utils.meshTensor([(meshGen.csx2, meshGen.npadx, meshGen.pfx2)])
hx = np.hstack([hx1a, hx1b, hx2a, hx2b])
hx1a_a = discretize.utils.meshTensor([(meshGen_approx.csx1, meshGen_approx.ncx1)])
# pad to second uniform region
hx1b_a = discretize.utils.meshTensor([(meshGen_approx.csx1, meshGen_approx.npadx1, meshGen_approx.pfx1)])
# scale padding so it matches cell size properly
dx1_a = np.sum(hx1a_a)+np.sum(hx1b_a)
dx1_a = 3 #np.floor(dx1_a/meshGen_approx.csx2)
hx1b_a *= (dx1_a*meshGen_approx.csx2 - np.sum(hx1a_a))/np.sum(hx1b_a)
# second uniform chunk of mesh
ncx2_a = np.ceil((meshGen_approx.domain_x - dx1_a)/meshGen_approx.csx2)
hx2a_a = discretize.utils.meshTensor([(meshGen_approx.csx2, ncx2_a)])
# pad to infinity
hx2b_a = discretize.utils.meshTensor([(meshGen_approx.csx2, meshGen_approx.npadx, meshGen_approx.pfx2)])
hx2 = np.hstack([hx1a_a, hx1b_a, hx2a_a, hx2b_a])
x1 = np.cumsum(np.hstack([np.r_[0], hx]))
x2 = np.cumsum(np.hstack([np.r_[0], hx2]))
mesh.vectorNx[mesh.vectorNx > 25]
mesh_approx.vectorNx[mesh_approx.vectorNx > 25]
tind = 0
print(f"{sim_approx.timeMesh.vectorNx[tind]*1e3} ms")
# 2D mesh for plotting and interpolation (defined before first use)
mesh2d = discretize.CylMesh([mesh_approx.hx, 1, mesh_approx.hz], x0=mesh_approx.x0)
plt.colorbar(mesh2d.plotImage(
    # mesh2d.aveF2CCV * currents["approx_casing"],
    mesh2d.aveF2CCV * np.hstack([mkvc(jdx[:, :, tind]), mkvc(jdz[:, :, tind])]),
    view="vec",
    vType="CCv", range_x=np.r_[25, 100], range_y=[-200, 10], pcolorOpts={"norm": LogNorm()},
    clim = np.r_[1e-10, 1e2],
    stream_threshold=1e-10,
)[0])
# build projection matrices for data
Px = mesh2d.getInterpolationMat(xz_grid, 'Fx')
Pz = mesh2d.getInterpolationMat(xz_grid, 'Fz')
Pt = sim_approx.time_mesh.getInterpolationMat(tsample, 'N')
Pxt = sp.kron(Pt, Px)
Pzt = sp.kron(Pt, Pz)
P = sp.vstack([Pxt, Pzt])
jdata = P * jdxz
np.save(f"{trial_directory}j_difference.npy", jdata)
a = np.r_[0, 0.5, 1.]
a.astype(bool)
# compute current inside casing
ind_casing_Fz = (mesh_approx.aveFz2CC.T * model_approx_casing.ind_casing(mesh_approx)).astype(bool)
I = discretize.utils.sdiag(mesh_approx.area) * currents["approx_casing"]
Iz = I[mesh_approx.vnF[:2].sum():, :]
Iz[~ind_casing_Fz, :] = 0
Iz = Iz.reshape(tuple(mesh_approx.vnFz) + (sim_approx.nT+1,), order="F")
Iz_casing = (Iz.sum(0)).sum(0)
fig, ax = plt.subplots(1, 2, figsize=(12, 6))
cm = plt.get_cmap('viridis')
c_norm = LogNorm(vmin=sim_approx.timeMesh.vectorCCx[0], vmax=sim_approx.timeMesh.vectorNx[-1])
scalar_map = cmap.ScalarMappable(norm=c_norm, cmap=cm)
scalar_map.set_array([])
for i in range(sim_approx.nT):
ax[0].plot(
mesh_approx.vectorNz, -Iz_casing[:, i],
color=scalar_map.to_rgba(sim_approx.timeMesh.vectorNx[i]+1e-7)
)
ax[1].semilogy(
mesh_approx.vectorNz, np.abs(-Iz_casing[:, i]),
color=scalar_map.to_rgba(sim_approx.timeMesh.vectorNx[i]+1e-7)
)
for a in ax:
a.set_xlim([5., -1.25*model.casing_l])
a.grid(which="both", color="k", lw=0.4, alpha=0.4)
ax[1].set_ylim([1e-8, 1])
cb = plt.colorbar(scalar_map)
cb.set_label("time (s)")
plt.tight_layout()
fig, ax = plt.subplots(1, 2, figsize=(12, 6))
cm = plt.get_cmap('viridis')
c_norm = Normalize(vmin=0, vmax=model.casing_l)
scalar_map = cmap.ScalarMappable(norm=c_norm, cmap=cm)
scalar_map.set_array([])
for i in range(mesh_approx.vnFz[2]):
ax[0].semilogx(sim_approx.timeMesh.vectorNx+1e-7, -Iz_casing[i, :], color=scalar_map.to_rgba(-mesh_approx.vectorNz[i]))
ax[1].loglog(sim_approx.timeMesh.vectorNx+1e-7, np.abs(-Iz_casing[i, :]), color=scalar_map.to_rgba(-mesh_approx.vectorNz[i]))
for a in ax:
# a.set_xlim([5., -1.25*model.casing_l])
a.grid(which="both", color="k", lw=0.4, alpha=0.4)
ax[1].set_ylim([1e-8, 1])
cb=plt.colorbar(scalar_map)
cb.set_label("depth (m)")
n_z_currents = 128
z_sample = np.linspace(-model_approx_casing.casing_l, 0, n_z_currents)
Pz_casing_currents = discretize.TensorMesh([mesh_approx.hz], [mesh_approx.x0[2]]).getInterpolationMat(
z_sample, 'N'
)
P_casing_currents = sp.kron(Pt, Pz_casing_currents)
I_casing_data = -1*P_casing_currents*discretize.utils.mkvc(Iz_casing)
np.save(f"{trial_directory}casing_currents.npy", I_casing_data)
plt.plot(I_casing_data)
```
# Dictionaries and DataFrames
Today we are going to build dictionaries. Dictionaries are data structures that store objects under keys rather than under positional indices.
Dictionaries take the general form:
> my_dictionary = {key: obj}
To call the object that is linked to a key,
> *my_dictionary[key]* will output the object, *obj*.
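The bracket-access pattern above can be sketched as a short runnable snippet (the keys and values here are only illustrative). Square brackets raise a `KeyError` for a missing key, while `.get` lets you supply a default:

```python
# Build a small dictionary and look objects up by key.
my_dictionary = {"run": "to move swiftly by foot",
                 "walk": "to move slowly or leisurely by foot"}

# my_dictionary[key] outputs the object linked to the key.
print(my_dictionary["run"])                    # to move swiftly by foot

# A missing key raises KeyError with brackets; .get returns a default instead.
print(my_dictionary.get("jog", "not defined"))  # not defined
```

We will use this same key-to-object mapping below, first with strings as the stored objects and then with nested dictionaries.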
```
dct = {}
dct
dct = {"run":"to move swiftly by foot",
"walk": "to move slowly or leisurely by foot"}
dct
dct = {"run":{},
"walk":{}}
dct
dct = {"run":{
"verb":"to move swiftly by foot",
"noun":"a period of time while one was running"},
"walk":{
"verb":"to move slowly or leisurely by foot",
"noun":"a period of time while one was walking"}
}
dct
import pandas as pd
df = pd.DataFrame(dct)
df
# call a column in the df
df["run"]
# call a row in the df
df.loc["verb"]
# call a particular cell
df.loc["verb", "run"]
dct = {"Caden":{"Age":19,
"Interesting Fact":"Played hockey in highschool"},
"Jacob P":{"Age":21,
"Interesting Fact":"Dr. Caton thought my last name was Keterson"},
"Finnian":{"Age":21,
"Interesting Fact":"Wrestled in highschool"},
"Genesis":{"Age":20,
"Interesting Fact":"Tore both ACLs"},
"Sam":{"Age":23,
"Interesting Fact":"Favorite color beige"},
"Proma":{"Age":24,
"Interesting Fact":"Learned classical dancing for 10 years"},
"Zach":{"Age":20,
"Interesting Fact":"On the track and field team for long-distance"},
"Jacob R":{"Age":20,
"Interesting Fact":"Plays classic rock on the guitar"},
"Brandon":{"Age":23,
"Interesting Fact":"Used play baseball in highschool and other leagues"},
"Gabe":{"Age":23,
"Interesting Fact":"A double major MIS and Accounting for undergrad"},
"Drew":{"Age":49,
"Interesting Fact":"Was in the Air Force and freed Keiko (Free Willy)"},
"Isaac":{"Age":21,
"Interesting Fact":"Traveling to Europe in the Summer 2022"},
"Kodjo":{"Age":30,
"Interesting Fact":"Wife is a soldier!"}}
dct
# transpose a dataframe by calling df.T
class_df = pd.DataFrame(dct).T
class_df
class_df.index
class_df["Age"]
class_df["Interesting Fact"]
dct["Dr. Caton"] = {}
dct["Dr. Caton"]["Interesting Fact"] = "I used to ride dirbikes"
dct
dct["Joe Biden"] = {"Age":78,
"Interesting Fact":"Plays Mario Kart with his grandchildren",
"Job":"President of the United States"}
dct
class_df = pd.DataFrame(dct).T
class_df
class_df.dtypes
faculty_dict = {"William Nganje":{"Website":"https://www.ndsu.edu/agecon/faculty/william_nganje/#c622350",
"Areas of Specialization":"Risk management; financial analysis; economics of obesity, food safety and food terrorism; experimental economics; and consumer choice theory",
"Bio":"NA"},
"David Bullock": {"Website":"https://www.ndsu.edu/agecon/faculty/bullock/#c622728",
"Areas of Specialization": "futures and options markets, over-the-counter derivatives, trading, risk management, agrifinance, Monte Carlo simulation, and Big Data",
"Bio":"Dr. David W. Bullock is a Research Associate Professor affiliated with the Center for Trading and Risk at NDSU. His research interests include futures and options markets, over-the-counter derivatives, trading, risk management, agrifinance, Monte Carlo simulation, and Big Data applications in agriculture. His academic research in option portfolio theory has been published in both the Journal of Economics and Business and the International Review of Economics and Finance. Additionally, he was the primary contributor behind the AgriBank Insights publication series which won a National AgriMarketing Association (NAMA) award for the best company publication in 2016. Before coming to NDSU in January 2018, Dr. Bullock held numerous positions for over 25 years in the government and private sectors including the Senior Economist at AgriBank FCB – the regional Farm Credit System funding bank for the Upper Midwest region, Director of Research and Senior Foods Economist at Fortune 500 commodity risk management firm INTL FCStone Inc., the Senior Dairy Analyst at Informa Economics, a Risk Management Specialist with the Minnesota Department of Agriculture, and the Senior Economist at the Minneapolis Grain Exchange. David began his academic career as an Assistant Professor and Extension Marketing Economist at Montana State University after graduating from Iowa State University with a Ph.D. in agricultural economics with fields in agricultural price analysis and econometrics in 1989. Prior to entering ISU, he received bachelor’s (1982) and master’s (1984) degrees in agricultural economics from Northwest Missouri State University. Dr. Bullock is originally from the small northwestern Missouri farming community of Lathrop which is located 40 miles north of the Kansas City metropolitan area. While in high school, he served as a regional state Vice-President in the Future Farmers of America (FFA) during his senior year."},
"James Caton": {"Website":"https://www.ndsu.edu/centers/pcpe/about/directory/james_caton/",
"Areas of Specialization": "Entrepreneurship, Institutions, Macroeconomics, Computation",
"Bio":"James Caton is a faculty fellow at the NDSU Center for the Study of Public Choice and Private Enterprise (PCPE) and an assistant professor in the NDSU Department of Agribusiness and Applied Economics. He teaches undergraduate courses in the areas of macroeconomics, international trade, and computation. He specializes in research related to entrepreneurship, agent-based computational economics, market process theory, and monetary economics. His research has been published in the Southern Economic Journal, Erasmus Journal for Philosophy and Economics, Journal of Entrepreneurship and Public Policy and other academic publications. He co-edited Macroeconomics, a two volume set of essays and primary sources that represent the core of macroeconomic thought. He is also a regular contributor to the American Institute for Economic Research's Sound Money Project, which conducts research and promotes awareness about monetary stability and financial privacy. He resides in Fargo with his wife, Ingrid, and their children."},
"David Englund": {"Website":"https://www.ndsu.edu/agecon/faculty/englund/#c622903",
"Areas of Specialization": "Teaches Economic Principles, Led NDSU NAMA to National Champions",
"Bio":"David Englund is a lecturer in the department. He came to the department with 16 years of teaching experience, having taught Principles of Microeconomics, Principles of Macroeconomics, Money and Banking, Consumer Behavior, Selected Topics in Business, and several other classes. He also had 10 years’ experience advising student NAMA chapters, having been awarded the Outstanding Advisor of the Year for a Developing Chapter in 2002, and the Outstanding Advisor of the Year award in 2009.\nDavid primarily teaches Survey of Economics, Principles of Microeconomics, Skills for Academic Success, Agricultural Marketing, and NAMA (co-teaches). He joined the NAMA team in the 2014-2015 school year as a co-advisor and helped coach the student team to a 3rd place finish in the national student marketing plan competition at the national conference.\nSome of David’s outside interests are jogging, photography, and writing fiction novels. His latest release, Camouflaged Encounters has received positive reviews."},
"Erik Hanson": {"Website":"https://www.ndsu.edu/agecon/faculty/hanson/#c622905",
"Areas of Specialization": "Ag Management, Ag Finance",
"Bio":"Erik Hanson is an Assistant Professor in the Department of Agricultural and Applied Economics. He teaches courses on agribusiness management and agricultural finance. Erik completed his Ph.D. at the University of Minnesota in 2016. Prior to that, Erik completed a master’s degree at the University of Illinois (2013) and a bachelor’s degree at Minnesota State University Moorhead (2011)."},
"Ronald Haugen": {"Website":"https://www.ndsu.edu/agecon/about_us/faculty/ron_haugen/#c654700",
"Areas of Specialization": "Farm management including: crop budgets, crop insurance, farm programs, custom farm rates, land rents, machinery economics, commodity price projections and agricultural income taxes. ",
"Bio":"Ron Haugen is an Extension Farm Management Specialist. He has been in the department since 1991. He computes the North Dakota Land Valuation Model."},
"Robert Hearne": {"Website":"https://www.ndsu.edu/agecon/faculty/hearne/#c622909",
"Areas of Specialization": "water resources management institutions, water markets, protected area management, and the economic valuation of environmental goods and services.",
"Bio":"Dr. Bob Hearne has been in the Department of Agribusiness and Applied Economics since 2002. He has professional experience in Europe, Asia, Latin America, and Asia."},
"Jeremy Jackson": {"Website":"https://www.ndsu.edu/centers/pcpe/about/directory/jeremy_jackson/",
"Areas of Specialization": "public choice and the political economy; the social consequences of economic freedom; happiness and well-being; and philanthropy and nonprofits.",
"Bio":" Jeremy Jackson is director of the Center for the Study of Public Choice and Private Enterprise, scholar at the Challey Institute for Global Innovation and Growth, and professor of economics in the Department of Agribusiness and Applied Economics at North Dakota State University.. He teaches undergraduate and graduate courses in the areas of microeconomics, public economics, and game theory and strategy. His research has been published in Applied Economics, The Independent Review, Public Choice, Contemporary Economic Policy, Journal of Happiness Studies, and other refereed and non-refereed sources. "},
"Prithviraj Lakkakula":{"Website":"https://www.ndsu.edu/agecon/faculty/prithviraj_lakkakula/#c623441",
"Areas of Specialization":"Blockchain, Agricultural Economics",
"Bio":""},
"Siew Lim": {"Website":"https://www.ndsu.edu/agecon/faculty/lim/#c624837",
"Areas of Specialization": "applied microeconomics, production economics, industrial organization, transportation and regional development, transportation and regional development",
"Bio":"Siew Hoon Lim is an associate professor of economics."},
"Raymond March": {"Website":"https://www.ndsu.edu/centers/pcpe/about/directory/raymond_march/",
"Areas of Specialization": "public and private provision and governance of health care in the United States, particularly in pharmaceutical markets",
"Bio":"Raymond March is a scholar at the Challey Institute for Global Innovation and Growth with the Center for the Study of Public Choice and Private Enterprise and an assistant professor of economics in the Department of Agribusiness and Applied Economics at North Dakota State University. He teaches courses in microeconomics, the history of economic thought, and health economics."},
"Dragan Miljkovic": {"Website":"https://www.ndsu.edu/agecon/faculty/miljkovic/#c625001",
"Areas of Specialization": "agricultural price analysis, international economics, and agricultural and food policy including human nutrition, obesity, and food safety",
"Bio":"Dragan Miljkovic is professor of agricultural economics in the Department of Agribusiness & Applied Economics at North Dakota State University. Dr. Miljkovic holds B.S. and M.S. degrees in Economics from the University of Belgrade, and Ph.D. in Agricultural Economics from the University of Illinois at Urbana-Champaign. Dr. Miljkovic authored over sixty peer reviewed journal articles and book chapters, and edited three books. He has more than 60 selected and invited presentations at various domestic and international conferences and universities in North America, Europe, New Zealand, and Australia. Dr. Miljkovic teaches undergraduate class in agricultural prices and graduate advanced econometrics class. Dr. Miljkovic is the Founding Editor and Editor-In-Chief of the Journal of International Agricultural Trade and Development (JIATD), and has also served as the Associate Editor of the Journal of Agricultural and Applied Economics (JAAE). He is an active member of numerous professional organizations and associations including the International Agricultural Trade Research Consortium (IATRC), the AAEA, the SAEA, WAEA, NAREA, AARES, and regional projects NCCC-134, WERA-72, and S-1016."},
"Frayne Olson": {"Website":"https://www.ndsu.edu/agecon/faculty/olson/#c625016",
"Areas of Specialization": " crop marketing strategies, crop outlook and price analysis, and the economics of crop contracting",
"Bio":"Dr. Frayne Olson is the Crop Economist/Marketing Specialist with the North Dakota State University Extension and Director of the Quentin Burdick Center for Cooperatives. Dr. Olson conducts educational programs. As Director of the Center for Cooperatives, he teaches a senior level course on cooperative business management and coordinates the Center’s research and outreach activities. Dr. Olson received his PhD from the University of Missouri in Agricultural Economics, and his M.S. and B.S. in Agricultural Economics from North Dakota State University."},
"Bryon Parman": {"Website":"https://www.ndsu.edu/agecon/faculty/parman/#c654590",
"Areas of Specialization": "",
"Bio":""},
"Tim Petry": {"Website":"https://www.ndsu.edu/agecon/faculty/petry/#c625018",
"Areas of Specialization": "Price Forecasting",
"Bio":"Tim Petry was raised on a livestock ranch in Northwestern North Dakota. He graduated from North Dakota State University with a major in Agricultural Economics in 1969 and served two years in the US Army. Petry returned to NDSU and completed a Master’s Degree in Agricultural Economics with an agricultural marketing emphasis in 1973. He was a member of the teaching/research staff in the Department of Agricultural Economics for 30 years. During his teaching tenure, he taught many marketing courses including several livestock marketing classes, and a very popular introduction to agricultural marketing class. In 2002, Petry retired from teaching and joined the NDSU Extension Service as a Livestock Marketing Economist. He travels extensively in North Dakota and the surrounding area conducting meetings on livestock marketing educational topics. Petry writes a popular monthly “Market Advisor” column on current livestock marketing issues. Copies of his presentations, columns, and other current information affecting the livestock industry are available on his web site: www.ag.ndsu.edu/livestockeconomics. Tim Petry and his wife Shirley have three grown daughters."},
"Xudong Rao": {"Website":"https://www.ndsu.edu/agecon/faculty/rao/#c629066",
"Areas of Specialization": "Farm and Agribusiness Management, Risk Analysis, Efficiency and Productivity, Technology Adoption, Food and Agricultural Policy, International Agricultural Development",
"Bio":"Rao is an assistant professor of agricultural economics at NDSU"},
"Veeshan Rayamajhee": {"Website":"https://www.ndsu.edu/centers/pcpe/about/directory/veeshan_rayamajhee/",
"Areas of Specialization": "Public Choice and New Institutional Economics",
"Bio":"Veeshan Rayamajhee is a scholar at the Challey Institute for Global Innovation and Growth with the Center for the Study of Public Choice and Private Enterprise and an assistant professor of economics in the Department of Agribusiness and Applied Economics at North Dakota State University. His research combines insights from Public Choice and New Institutional Economics to understand individual and collective responses to covariate shocks. He uses a range of empirical tools to study issues related to disasters, climate adaptation, food and energy security, and environmental externalities. His research has appeared in peer-reviewed journals such as Journal of Institutional Economics, Disasters, Economics of Disasters and Climate Change, Journal of International Development, Food Security, and Renewable Energy. For updates on his research, please visit: veeshan.rayamajhee.com/research"},
"David Ripplinger": {"Website":"https://www.ndsu.edu/agecon/faculty/ripplinger/#c629078",
"Areas of Specialization": "Bioenergy",
"Bio":"David Ripplinger is an Associate Professor in the Department of Agribusiness and Applied Economics at North Dakota State University and bioproducts/bioenergy economics specialist with NDSU Extension. In these roles, David conducts research and provides support to farmers and the bioenergy industry. "},
"David Roberts": {"Website":"https://www.ndsu.edu/agecon/faculty/roberts/#c629137",
"Areas of Specialization": "agricultural production methods on the environment and natural resources",
"Bio":"David Roberts is an Assistant Professor of Agribusiness and Applied Economics at North Dakota State University. His research focuses on the impacts of agricultural production methods on the environment and natural resources. David is particularly interested in the economics of precision agriculture technologies and the response of cropping patterns and land use change to emerging biofuels policy at the Federal level. His doctoral dissertation research investigated the relative profitability of several different mid-season optimal nitrogen rate prediction systems for winter wheat in Oklahoma, and investigated how incorporation of uncertainty in estimated and predicted production functions can increase the profitability of the prediction systems. David’s MS thesis investigated the suitability of water quality trading as a policy instrument for dealing with nutrient runoff from agricultural operations in Tennessee. Results showed conditions in polluted watersheds in Tennessee likely will not support robust trading in water quality allowances or offsets."},
"Kristi Schweiss": {"Website":"https://www.ndsu.edu/agecon/faculty/schweiss/#c629139",
"Areas of Specialization": "",
"Bio":"Assistant Director, QBCC"},
"Anupa Sharma": {"Website":"https://www.ndsu.edu/agecon/faculty/sharma/#c629150",
"Areas of Specialization": "International Trade Agreements, Trade Patterns, Distance and Missing Globalization, International Trade and Development",
"Bio":"Anupa Sharma is an Assistant Professor in the Department of Agribusiness and Applied Economics at North Dakota State University. She also serves as the Assistant Director for the Center for Agricultural Policy and Trade Studies (CAPTS). She develops quantitative methods to address issues pertinent to International Trade."},
"Cheryl Wachenheim": {"Website":"https://www.ndsu.edu/agecon/faculty/wachenheim/#c629162",
"Bio": "Cheryl Wachenheim is a Professor in the Department of Agribusiness and Applied Economics at North Dakota State University. She holds an undergraduate degree in animal sciences from the University of Minnesota, and a Master’s and doctorate in Agricultural Economics and an MBA from Michigan State University. She began her academic career at Illinois State University in Central Illinois and has been on the faculty at NDSU since 1998. She regularly teaches classes in economics, agricultural sales, agricultural finance, agricultural marketing, and strategic marketing and management. Research interests focus on eliciting perceptions and valuations from consumers, firms, students and other stakeholders and decision makers. Analysis then allows for identification of high-value marketing and management strategies. Cheryl has been a member of the MN Army National Guard since 1998. She is currently the Commander of the 204th Area Medical Support Company in Cottage Grove, Minnesota.",
"Areas of Specialization":" eliciting perceptions and valuations from consumers, firms, students and other stakeholders and decision makers"},
"William Wilson": {"Website":"https://www.ndsu.edu/agecon/faculty/wilson/#c629178",
"Bio": "Dr. William W. Wilson received his PhD in Agricultural Economics from the University of Manitoba in 1980. Since then he has been a Professor at North Dakota State University in Agribusiness and Applied Economics with periodic sabbaticals at Stanford University. Recently, he was named as a University Distinguished Professor at NDSU which an honorary position is, and a great achievement. And, in 2016 he was named the CHS Chair in Risk Management and Trading at NDSU which is an endowed position. In 2017 he was awarded the AAEA 2016 Distinguished Teaching Award (Chicago July 2017) His focus is risk and strategy as applied to agriculture and agribusiness with a particular focus on agtechnology development and commercialization, procurement, transportation and logistics, international marketing and competition. He teaches classes in Commodity Trading, Risk and AgriBusiness Strategy and has taught his Risk Class at Purdue University; and is a visiting scholar at Melbourne University where he visits 2 times/year and advises PhD students in risk and agbiotechnology. Finally, he has now created the NDSU Commodity Trading Room which is a state of art facility for teaching and research in commodity marketing, logistics and trading. He routinely has projects and/or overseas clients and travels internationally 1 week per month. He led a project for the United States on privatization of the grain marketing system in Russia in the early 1990’s. He currently has projects and/or clients in US, Russia, Ukraine, Mexico, Argentina and Australia. He regularly advises a number of large Agribusiness firms, several major railroads, and several major food and beverage companies and/or governments in other countries. 
He served as a Board member of the Minneapolis Grain Exchange for 12 years, on the FGIS Advisory Board, and currently serves as a Board member of several regional firms in agtechnology and venture capital (AMITY, BUSHEL), in addition to NCH Capital (New York City which is one of the largest investors in world agriculture). He regularly consults with major agribusiness firms on topics related to above and has worked extensively in the following industries: agtechnology, logistics, procurement strategy, railroads, barges, ocean shipping, elevators (shuttle development), and processed products (malting and beer, durum and pasta, wheat and bread). He was recognized as one of the top 10 Agricultural Economists in 1995 and more recently as one of the top 1% of agricultural economists by RePEc (Research Papers in Economics). Finally, he has students who are in senior positions in a number of the large agribusinesses including commodity companies, railroads and food and beverage companies.",
"Areas of Specialization":"commodity marketing, logistics and trading"},
}
faculty_dict
```
## Getting Familiar with pandas DataFrames
```
faculty_df = pd.DataFrame(faculty_dict).T
faculty_df.to_csv("facultyInfo.csv")
faculty_df.loc["Jeremy Jackson"]["Bio"]
faculty_df.index
faculty_df.keys()
faculty_df.loc["William Nganje"]
names = faculty_df.index
for name in names:
    print(name)
for name in names:
    print(faculty_df.loc[name])
keys = faculty_df.keys()
keys
for name in names:
    print(name)
    for key in keys:
        print(key)#, faculty_df.loc[name][key])
    print()
for name in names:
    print(name)
    for key in keys:
        print(key+":", faculty_df.loc[name, key])
    print()
faculty_df[faculty_df["Areas of Specialization"].str.contains("Risk")]
names = ["Jeremy Jackson", "Raymond March", "Veeshan Rayamajhee"]
faculty_df[faculty_df.index.isin(names)]
lst = ["a", "b", "c"]
"a" in lst
```
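One caveat on the `str.contains` filter used above: if a specialization cell were missing (`NaN`) rather than an empty string, the boolean mask would contain `NaN` and indexing with it would fail. Passing `na=False` guards against that. A minimal sketch on toy data (not the real faculty table):

```python
import pandas as pd

toy = pd.DataFrame(
    {"Areas of Specialization": ["Risk Analysis", None, "Trade"]},
    index=["A", "B", "C"],
)
# na=False maps missing cells to False instead of NaN in the mask
mask = toy["Areas of Specialization"].str.contains("Risk", na=False)
print(toy[mask].index.tolist())  # ['A']
```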
| github_jupyter |
# Aim of this notebook
* To construct the universal singular curve, which finalizes the solution of the optimal control problem
# Preamble
```
from sympy import *
init_printing(use_latex='mathjax')
# Plotting
%matplotlib inline
## Make inline plots raster graphics
from IPython.display import set_matplotlib_formats
## Import modules for plotting and data analysis
import matplotlib.pyplot as plt
from matplotlib import gridspec,rc,colors
import matplotlib.ticker as plticker
## Parameters for seaborn plots
import seaborn as sns
sns.set(style='white',font_scale=1.25,
        rc={"xtick.major.size": 6, "ytick.major.size": 6,
            'text.usetex': False, 'font.family': 'serif', 'font.serif': ['Times']})
import pandas as pd
pd.set_option('mode.chained_assignment',None)
import numpy as np
from scipy.optimize import fsolve, root
from scipy.integrate import ode
backend = 'dopri5'
import warnings
# Timer
import time
from copy import deepcopy
from itertools import cycle
palette_size = 10;
clrs = sns.color_palette("Reds",palette_size)
iclrs = cycle(clrs) # iterated colors
# Suppress warnings
import warnings
warnings.filterwarnings("ignore")
```
# Parameter values
* Birth rate and cost of downregulation are defined below in order to fit some experimental data
```
d = .13 # death rate
c = .04 # cost of resistance
α = .3 # low equilibrium point at expression of the main pathway (high equilibrium is at one)
θ = .45 # threshold value for the expression of the main pathway
κ = 40 # robustness parameter
L = .2 # parameter used to model the effect of treatment (see the line below)
```
* Symbolic variables - the list includes μ & μbar, because they will be varied later
```
σ, φ0, φ, x, μ, μbar = symbols('sigma, phi0, phi, x, mu, mubar')
```
* Main functions
```
A = 1-σ*(1-θ)*(1-L)
Θ = θ+σ*(1-θ)*L
Eminus = (α*A-Θ)**2/2
ΔE = A*(1-α)*((1+α)*A/2-Θ)
ΔEf = lambdify(σ,ΔE)
```
* Birth rate and cost of downregulation
```
b = (0.1*(exp(κ*(ΔEf(1)))+1)-0.14*(exp(κ*ΔEf(0))+1))/(exp(κ*ΔEf(1))-exp(κ*ΔEf(0))) # birth rate
χ = 1-(0.14*(exp(κ*ΔEf(0))+1)-b*exp(κ*ΔEf(0)))/b
b, χ
```
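As far as I can tell, these expressions pin b and χ so that the net per-capita rate $b\,(1-\chi/(e^{\kappa\Delta E(\sigma)}+1))$ equals 0.14 without treatment (σ = 0) and 0.10 under full treatment (σ = 1). A plain-Python sanity check of that reading (re-implementing ΔE numerically with the parameter values above; variable names here are mine):

```python
import math

# Parameter values from the notebook
alpha, theta, kappa, L = 0.3, 0.45, 40, 0.2

def delta_E(sigma):
    A = 1 - sigma * (1 - theta) * (1 - L)        # effective activity
    Theta = theta + sigma * (1 - theta) * L      # effective threshold
    return A * (1 - alpha) * ((1 + alpha) * A / 2 - Theta)

e0, e1 = math.exp(kappa * delta_E(0)), math.exp(kappa * delta_E(1))
b = (0.1 * (e1 + 1) - 0.14 * (e0 + 1)) / (e1 - e0)
chi = 1 - (0.14 * (e0 + 1) - b * e0) / b

# Net per-capita rate implied by b and chi
rate = lambda s: b * (1 - chi / (math.exp(kappa * delta_E(s)) + 1))
print(rate(0), rate(1))  # ≈ 0.14 and ≈ 0.10
```

Note also that ΔE changes sign: it is positive at σ = 0 and negative at σ = 1, which is what makes full treatment suppress growth.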
* Hamiltonian *H* and a part of it ρ that includes the control variable σ
```
h = b*(χ/(exp(κ*ΔE)+1)*(1-x)+c*x)
H = -φ0 + φ*(b*(χ/(exp(κ*ΔE)+1)-c)*x*(1-x)+μ*(1-x)/(exp(κ*ΔE)+1)-μbar*exp(-κ*Eminus)*x) + h
ρ = (φ*(b*χ*x+μ)+b*χ)/(exp(κ*ΔE)+1)*(1-x)-φ*μbar*exp(-κ*Eminus)*x
H, ρ
```
* Same but for no treatment (σ = 0)
```
h0 = h.subs(σ,0)
H0 = H.subs(σ,0)
ρ0 = ρ.subs(σ,0)
H0, ρ0
```
* Machinery: definition of the Poisson brackets
```
PoissonBrackets = lambda H1, H2: diff(H1,x)*diff(H2,φ)-diff(H1,φ)*diff(H2,x)
```
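The bracket above is taken with respect to the state x and the costate φ. As a quick check that the definition behaves like a Poisson bracket (in particular {f, f} = 0 by antisymmetry), here is a self-contained finite-difference version with toy functions; the helper name and test point are mine:

```python
def pbracket(f, g, x, phi, h=1e-6):
    """{f, g} = f_x * g_phi - f_phi * g_x via central differences."""
    f_x   = (f(x + h, phi) - f(x - h, phi)) / (2 * h)
    f_phi = (f(x, phi + h) - f(x, phi - h)) / (2 * h)
    g_x   = (g(x + h, phi) - g(x - h, phi)) / (2 * h)
    g_phi = (g(x, phi + h) - g(x, phi - h)) / (2 * h)
    return f_x * g_phi - f_phi * g_x

f = lambda x, p: x * x * p       # f_x = 2xp, f_phi = x^2
g = lambda x, p: x + p * p       # g_x = 1,   g_phi = 2p
print(pbracket(f, g, 0.5, 2.0))  # 2*4 - 0.25*1 = 7.75
print(pbracket(f, f, 0.5, 2.0))  # ≈ 0 by antisymmetry
```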
* Necessary functions, and definition of the right-hand side of the dynamical equations
```
ρf = lambdify((x,φ,σ,μ,μbar),ρ)
ρ0f = lambdify((x,φ,μ,μbar),ρ0)
dxdτ = lambdify((x,φ,σ,μ,μbar),-diff(H,φ))
dφdτ = lambdify((x,φ,σ,μ,μbar),diff(H,x))
dVdτ = lambdify((x,σ),h)
dρdσ = lambdify((σ,x,φ,μ,μbar),diff(ρ,σ))
dδρdτ = lambdify((x,φ,σ,μ,μbar),-PoissonBrackets(ρ0-ρ,H))
def ode_rhs(t,state,μ,μbar):
    x, φ, V, δρ = state
    σs = [0,1]
    if (dρdσ(1.,x,φ,μ,μbar)<0) and (dρdσ(θ,x,φ,μ,μbar)>0):
        σstar = fsolve(dρdσ,.8,args=(x,φ,μ,μbar,))[0]
    else:
        σstar = 1.;
    if ρf(x,φ,σstar,μ,μbar) < ρ0f(x,φ,μ,μbar):
        sgm = 0
    else:
        sgm = σstar
    return [dxdτ(x,φ,sgm,μ,μbar),dφdτ(x,φ,sgm,μ,μbar),dVdτ(x,sgm),dδρdτ(x,φ,σstar,μ,μbar)]
def σstarf(x,φ,μ,μbar):
    if (dρdσ(1.,x,φ,μ,μbar)<0) and (dρdσ(θ,x,φ,μ,μbar)>0):
        σstar = fsolve(dρdσ,.8,args=(x,φ,μ,μbar,))[0]
    else:
        σstar = 1.;
    if ρf(x,φ,σstar,μ,μbar) < ρ0f(x,φ,μ,μbar):
        sgm = 0
    else:
        sgm = σstar
    return sgm
def get_primary_field(name, experiment,μ,μbar):
    solutions = {}
    solver = ode(ode_rhs).set_integrator(backend)
    τ0 = experiment['τ0']
    tms = np.linspace(τ0,experiment['T_end'],int(1e3)+1)  # num must be an integer
    for x0 in experiment['x0']:
        δρ0 = ρ0.subs(x,x0).subs(φ,0)-ρ.subs(x,x0).subs(φ,0).subs(σ,1.)
        solver.set_initial_value([x0,0,0,δρ0],0.).set_f_params(μ,μbar)
        sol = []; k = 0;
        while (solver.t < experiment['T_end']) and (solver.y[0]<=1.) and (solver.y[0]>=0.):
            solver.integrate(tms[k])
            sol.append([solver.t]+list(solver.y))
            k += 1
        solutions[x0] = {'solution': sol}
    for x0, entry in solutions.items():
        entry['τ'] = [entry['solution'][j][0] for j in range(len(entry['solution']))]
        entry['x'] = [entry['solution'][j][1] for j in range(len(entry['solution']))]
        entry['φ'] = [entry['solution'][j][2] for j in range(len(entry['solution']))]
        entry['V'] = [entry['solution'][j][3] for j in range(len(entry['solution']))]
        entry['δρ'] = [entry['solution'][j][4] for j in range(len(entry['solution']))]
    return solutions
def get_δρ_value(tme,x0,μ,μbar):
    solver = ode(ode_rhs).set_integrator(backend)
    δρ0 = ρ0.subs(x,x0).subs(φ,0)-ρ.subs(x,x0).subs(φ,0).subs(σ,1.)
    solver.set_initial_value([x0,0,0,δρ0],0.).set_f_params(μ,μbar)
    while (solver.t < tme) and (solver.y[0]<=1.) and (solver.y[0]>=0.):
        solver.integrate(tme)
        sol = [solver.t]+list(solver.y)
    return solver.y[3]
def get_δρ_ending(params,μ,μbar):
    tme, x0 = params
    solver = ode(ode_rhs).set_integrator(backend)
    δρ0 = ρ0.subs(x,x0).subs(φ,0)-ρ.subs(x,x0).subs(φ,0).subs(σ,1.)
    solver.set_initial_value([x0,0,0,δρ0],0.).set_f_params(μ,μbar)
    δτ = 1.0e-8; tms = [tme,tme+δτ]
    _k = 0; sol = []
    while (_k<len(tms)):# and (solver.y[0]<=1.) and (solver.y[0]>=0.):
        solver.integrate(tms[_k])
        sol.append(solver.y)
        _k += 1
    #print(sol)
    return(sol[0][3],(sol[1][3]-sol[0][3])/δτ)
def get_state(tme,x0,μ,μbar):
    solver = ode(ode_rhs).set_integrator(backend)
    δρ0 = ρ0.subs(x,x0).subs(φ,0)-ρ.subs(x,x0).subs(φ,0).subs(σ,1.)
    solver.set_initial_value([x0,0,0,δρ0],0.).set_f_params(μ,μbar)
    δτ = 1.0e-8; tms = [tme,tme+δτ]
    _k = 0; sol = []
    while (solver.t < tms[-1]) and (solver.y[0]<=1.) and (solver.y[0]>=0.):
        solver.integrate(tms[_k])
        sol.append(solver.y)
        _k += 1
    return(list(sol[0])+[(sol[1][3]-sol[0][3])/δτ])
```
# Machinery for the universal line
* To find the universal singular curve we need to define two auxiliary quantities, the iterated brackets γ0 and γ1
```
γ0 = PoissonBrackets(PoissonBrackets(H,H0),H)
γ1 = PoissonBrackets(PoissonBrackets(H0,H),H0)
```
* The dynamics
```
dxdτSingExpr = -(γ0*diff(H0,φ)+γ1*diff(H,φ))/(γ0+γ1)
dφdτSingExpr = (γ0*diff(H0,x)+γ1*diff(H,x))/(γ0+γ1)
dVdτSingExpr = (γ0*h0+γ1*h)/(γ0+γ1)
σSingExpr = γ1*σ/(γ0+γ1)
```
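* In other words (my reading of the expressions above): on the singular arc the state moves along a convex combination of the untreated ($\sigma=0$) and treated ($\sigma=\sigma^*$) characteristic fields, with weights given by the iterated brackets,
$$\dot{x}_{\text{sing}} = \frac{\gamma_0\,\dot{x}\big|_{\sigma=0}+\gamma_1\,\dot{x}\big|_{\sigma=\sigma^*}}{\gamma_0+\gamma_1},\qquad \dot{x}\big|_{\sigma} = -\frac{\partial H}{\partial \varphi},\qquad \sigma_{\text{sing}} = \frac{\gamma_1\,\sigma^*}{\gamma_0+\gamma_1}$$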
* Machinery for Python: lambdify the functions above
```
dxdτSing = lambdify((x,φ,σ,μ,μbar),dxdτSingExpr)
dφdτSing = lambdify((x,φ,σ,μ,μbar),dφdτSingExpr)
dVdτSing = lambdify((x,φ,σ,μ,μbar),dVdτSingExpr)
σSing = lambdify((x,φ,σ,μ,μbar),σSingExpr)
def ode_rhs_Sing(t,state,μ,μbar):
    x, φ, V = state
    if (dρdσ(1.,x,φ,μ,μbar)<0) and (dρdσ(θ,x,φ,μ,μbar)>0):
        σstar = fsolve(dρdσ,.8,args=(x,φ,μ,μbar,))[0]
    else:
        σstar = 1.;
    #print([σstar,σSing(x,φ,σstar,μ,μbar)])
    return [dxdτSing(x,φ,σstar,μ,μbar),dφdτSing(x,φ,σstar,μ,μbar),dVdτSing(x,φ,σstar,μ,μbar)]
# def ode_rhs_Sing(t,state,μ,μbar):
#     x, φ, V = state
#     if (dρdσ(1.,x,φ,μ,μbar)<0) and (dρdσ(θ,x,φ,μ,μbar)>0):
#         σstar = fsolve(dρdσ,.8,args=(x,φ,μ,μbar,))[0]
#     else:
#         σstar = 1.;
#     σTrav = fsolve(lambda σ: dxdτ(x,φ,σ,μ,μbar)-dxdτSing(x,φ,σstar,μ,μbar),.6)[0]
#     print([σstar,σTrav])
#     return [dxdτSing(x,φ,σstar,μ,μbar),dφdτSing(x,φ,σstar,μ,μbar),dVdτ(x,σTrav)]
def get_universal_curve(end_point,tmax,Nsteps,μ,μbar):
    tms = np.linspace(end_point[0],tmax,Nsteps);
    solver = ode(ode_rhs_Sing).set_integrator(backend)
    solver.set_initial_value(end_point[1:4],end_point[0]).set_f_params(μ,μbar)
    _k = 0; sol = []
    while (solver.t < tms[-1]):
        solver.integrate(tms[_k])
        sol.append([solver.t]+list(solver.y))
        _k += 1
    return sol
def get_σ_universal(tme,end_point,μ,μbar):
    δτ = 1.0e-8; tms = [tme,tme+δτ]
    solver = ode(ode_rhs_Sing).set_integrator(backend)
    solver.set_initial_value(end_point[1:4],end_point[0]).set_f_params(μ,μbar)
    _k = 0; sol = []
    while (solver.t < tme+δτ):
        solver.integrate(tms[_k])
        sol.append([solver.t]+list(solver.y))
        _k += 1
    x, φ = sol[0][:2]
    sgm = fsolve(lambda σ: dxdτ(x,φ,σ,μ,μbar)-(sol[1][0]-sol[0][0])/δτ,θ/2)[0]
    return sgm
def get_state_universal(tme,end_point,μ,μbar):
    solver = ode(ode_rhs_Sing).set_integrator(backend)
    solver.set_initial_value(end_point[1:4],end_point[0]).set_f_params(μ,μbar)
    solver.integrate(tme)
    return [solver.t]+list(solver.y)
def ode_rhs_with_σstar(t,state,μ,μbar):
    x, φ, V = state
    if (dρdσ(1.,x,φ,μ,μbar)<0) and (dρdσ(θ,x,φ,μ,μbar)>0):
        σ = fsolve(dρdσ,.8,args=(x,φ,μ,μbar,))[0]
    else:
        σ = 1.;
    return [dxdτ(x,φ,σ,μ,μbar),dφdτ(x,φ,σ,μ,μbar),dVdτ(x,σ)]
def ode_rhs_with_given_σ(t,state,σ,μ,μbar):
    x, φ, V = state
    return [dxdτ(x,φ,σ,μ,μbar),dφdτ(x,φ,σ,μ,μbar),dVdτ(x,σ)]
def get_trajectory_with_σstar(starting_point,tmax,Nsteps,μ,μbar):
    tms = np.linspace(starting_point[0],tmax,Nsteps)
    solver = ode(ode_rhs_with_σstar).set_integrator(backend)
    solver.set_initial_value(starting_point[1:],starting_point[0]).set_f_params(μ,μbar)
    sol = []; _k = 0;
    while solver.t < max(tms) and (solver.y[0]<=1.) and (solver.y[0]>=0.):
        solver.integrate(tms[_k])
        sol.append([solver.t]+list(solver.y))
        _k += 1
    return sol
def get_trajectory_with_given_σ(starting_point,tmax,Nsteps,σ,μ,μbar):
    tms = np.linspace(starting_point[0],tmax,100)
    solver = ode(ode_rhs_with_given_σ).set_integrator(backend)
    solver.set_initial_value(starting_point[1:],starting_point[0]).set_f_params(σ,μ,μbar)
    sol = []; _k = 0;
    while solver.t < max(tms) and (solver.y[0]<=1.) and (solver.y[0]>=0.):
        solver.integrate(tms[_k])
        sol.append([solver.t]+list(solver.y))
        _k += 1
    return sol
def get_state_with_σstar(tme,starting_point,μ,μbar):
    solver = ode(ode_rhs_with_σstar).set_integrator(backend)
    solver.set_initial_value(starting_point[1:4],starting_point[0]).set_f_params(μ,μbar)
    solver.integrate(tme)
    return [solver.t]+list(solver.y)
def get_finalizing_point_from_universal_curve(tme,tmx,end_point,μ,μbar):
    unv_point = get_state_universal(tme,end_point,μ,μbar)
    return get_state_with_σstar(tmx,unv_point,μ,μbar)[1]
```
# Field of optimal trajectories as the solution of the Bellman equation
* μ & μbar are parameterized by the periods $T$ and $\bar T$ ($\mu=1/T$ and $\bar\mu=1/\bar{T}$)
```
tmx = 720.
end_switching_curve = {'t': 24., 'x': .9/.8}
# for Τ, Τbar in zip([28]*5,[14,21,28,35,60]):
for Τ, Τbar in zip([28],[60]):
    μ = 1./Τ; μbar = 1./Τbar
    print("Parameters: μ = %.5f, μbar = %.5f"%(μ,μbar))
    end_switching_curve['t'], end_switching_curve['x'] = fsolve(get_δρ_ending,(end_switching_curve['t'],.8*end_switching_curve['x']),args=(μ,μbar),xtol=1.0e-12)
    end_point = [end_switching_curve['t']]+get_state(end_switching_curve['t'],end_switching_curve['x'],μ,μbar)
    print("Ending point for the switching line: τ = %.1f days, x = %.1f%%" % (end_point[0], end_point[1]*100))
    print("Checking the solution - should give zero values: ")
    print(get_δρ_ending([end_switching_curve['t'],end_switching_curve['x']],μ,μbar))
    print("* Constructing the primary field")
    experiments = {
        'sol1': { 'T_end': tmx, 'τ0': 0., 'x0': list(np.linspace(0,end_switching_curve['x']-(1e-3),10))+
                  list(np.linspace(end_switching_curve['x']+(1e-6),1.,10)) } }
    primary_field = []
    for name, values in experiments.items():
        primary_field.append(get_primary_field(name,values,μ,μbar))
    print("* Constructing the switching curve")
    switching_curve = []
    x0s = np.linspace(end_switching_curve['x'],1,21); _y = end_switching_curve['t']
    for x0 in x0s:
        tme = fsolve(get_δρ_value,_y,args=(x0,μ,μbar))[0]
        if (tme>0):
            switching_curve = switching_curve+[[tme,get_state(tme,x0,μ,μbar)[0]]]
            _y = tme
    print("* Constructing the universal curve")
    universal_curve = get_universal_curve(end_point,tmx,25,μ,μbar)
    print("* Finding the last characteristic")
    #time0 = time.time()
    tuniv = fsolve(get_finalizing_point_from_universal_curve,tmx-40.,args=(tmx,end_point,μ,μbar,))[0]
    #print("The process to find the last characteristic took %0.1f minutes" % ((time.time()-time0)/60.))
    univ_point = get_state_universal(tuniv,end_point,μ,μbar)
    print("The last point on the universal line:")
    print(univ_point)
    last_trajectory = get_trajectory_with_σstar(univ_point,tmx,50,μ,μbar)
    print("Final state:")
    final_state = get_state_with_σstar(tmx,univ_point,μ,μbar)
    print(final_state)
    print("Fold-change in tumor size: %.2f"%(exp((b-d)*tmx-final_state[-1])))
    # Plotting
    plt.rcParams['figure.figsize'] = (6.75, 4)
    _k = 0
    for solutions in primary_field:
        for x0, entry in solutions.items():
            plt.plot(entry['τ'], entry['x'], 'k-', linewidth=.9, color=clrs[_k%palette_size])
            _k += 1
    plt.plot([x[0] for x in switching_curve],[x[1] for x in switching_curve],linewidth=2,color="red")
    plt.plot([end_point[0]],[end_point[1]],marker='o',color="red")
    plt.plot([x[0] for x in universal_curve],[x[1] for x in universal_curve],linewidth=2,color="red")
    plt.plot([x[0] for x in last_trajectory],[x[1] for x in last_trajectory],linewidth=.9,color="black")
    plt.xlim([0,tmx]); plt.ylim([0,1]);
    plt.xlabel("time, days"); plt.ylabel("fraction of resistant cells")
    plt.show()
    print()
import csv
from numpy.linalg import norm
File = open("../figures/draft/sensitivity_mu-example.csv", 'w')
File.write("T,Tbar,mu,mubar,sw_end_t,sw_end_x,univ_point_t,univ_point_x,outcome,err_sw_t,err_sw_x\n")
writer = csv.writer(File,lineterminator='\n')
tmx = 720.
end_switching_curve0 = {'t': 23.36, 'x': .9592}
end_switching_curve_prev_t = end_switching_curve['t']
tuniv = tmx-30.
Τbars = np.arange(120,110,-2) #need to change here if more
for Τ in Τbars:
    μ = 1./Τ
    end_switching_curve = deepcopy(end_switching_curve0)
    for Τbar in Τbars:
        μbar = 1./Τbar
        print("* Parameters: T = %.1f, Tbar = %.1f (μ = %.5f, μbar = %.5f)"%(Τ,Τbar,μ,μbar))
        success = False; err = 1.
        while (not success)|(norm(err)>1e-6):
            end_switching_curve = {'t': 2*end_switching_curve['t']-end_switching_curve_prev_t-.001,
                                   'x': end_switching_curve['x']-0.002}
            sol = root(get_δρ_ending,(end_switching_curve['t'],end_switching_curve['x']),args=(μ,μbar))
            end_switching_curve_prev_t = end_switching_curve['t']
            end_switching_curve_prev_x = end_switching_curve['x']
            end_switching_curve['t'], end_switching_curve['x'] = sol.x
            success = sol.success
            err = get_δρ_ending([end_switching_curve['t'],end_switching_curve['x']],μ,μbar)
            if (not success):
                print("! Trying again...", sol.message)
            elif (norm(err)>1e-6):
                print("! Trying again... Convergence is not sufficient")
            else:
                end_point = [end_switching_curve['t']]+get_state(end_switching_curve['t'],end_switching_curve['x'],μ,μbar)
                print("Ending point: t = %.2f, x = %.2f%%"%(end_switching_curve['t'],100*end_switching_curve['x'])," Checking the solution:",err)
        universal_curve = get_universal_curve(end_point,tmx,25,μ,μbar)
        tuniv = root(get_finalizing_point_from_universal_curve,tuniv,args=(tmx,end_point,μ,μbar)).x
        err_tuniv = get_finalizing_point_from_universal_curve(tuniv,tmx,end_point,μ,μbar)
        univ_point = get_state_universal(tuniv,end_point,μ,μbar)
        print("tuniv = %.2f"%tuniv," Checking the solution: ",err_tuniv)
        final_state = get_state_with_σstar(tmx,univ_point,μ,μbar)
        outcome = exp((b-d)*tmx-final_state[-1])
        print("Fold-change in tumor size: %.2f"%(outcome))
        output = [Τ,Τbar,μ,μbar,end_switching_curve['t'],end_switching_curve['x']]+list(univ_point[0:2])+[outcome]+list(err)+[err_tuniv]
        writer.writerow(output)
        if (Τbar==Τ):
            end_switching_curve0 = deepcopy(end_switching_curve)
File.close()
```
* Here I investigate how $\mathrm{FoldChange}(T,\bar T)$ depends on its two arguments. I fix $T$ at 15, 30, 45, or 60 days, and then vary $\bar T$ between zero and $4T$. The example below is a simulation for just one given value of $T$.
```
import csv
from numpy.linalg import norm
File = open("../results/sensitivity1.csv", 'w')
File.write("T,Tbar,mu,mubar,sw_end_t,sw_end_x,univ_point_t,univ_point_x,outcome,err_sw_t,err_sw_x\n")
writer = csv.writer(File,lineterminator='\n')
tmx = 720.
end_switching_curve = {'t': 23.36, 'x': .9592}
end_switching_curve_prev_t = end_switching_curve['t']
tuniv = tmx-30.
Τ = 15
Τbars_step = .5; Tbars = np.arange(Τ*4,0,-Τbars_step)
for Τbar in Tbars:
    μ = 1./Τ; μbar = 1./Τbar
    print("* Parameters: T = %.1f, Tbar = %.1f (μ = %.5f, μbar = %.5f)"%(Τ,Τbar,μ,μbar))
    success = False; err = 1.
    while (not success)|(norm(err)>1e-6):
        end_switching_curve = {'t': 2*end_switching_curve['t']-end_switching_curve_prev_t-.001,
                               'x': end_switching_curve['x']-0.002}
        sol = root(get_δρ_ending,(end_switching_curve['t'],end_switching_curve['x']),args=(μ,μbar))
        end_switching_curve_prev_t = end_switching_curve['t']
        end_switching_curve_prev_x = end_switching_curve['x']
        end_switching_curve['t'], end_switching_curve['x'] = sol.x
        success = sol.success
        err = get_δρ_ending([end_switching_curve['t'],end_switching_curve['x']],μ,μbar)
        if (not success):
            print("! Trying again...", sol.message)
        elif (norm(err)>1e-6):
            print("! Trying again... Convergence is not sufficient")
        else:
            end_point = [end_switching_curve['t']]+get_state(end_switching_curve['t'],end_switching_curve['x'],μ,μbar)
            print("Ending point: t = %.2f, x = %.2f%%"%(end_switching_curve['t'],100*end_switching_curve['x'])," Checking the solution:",err)
    universal_curve = get_universal_curve(end_point,tmx,25,μ,μbar)
    tuniv = root(get_finalizing_point_from_universal_curve,tuniv,args=(tmx,end_point,μ,μbar)).x
    err_tuniv = get_finalizing_point_from_universal_curve(tuniv,tmx,end_point,μ,μbar)
    univ_point = get_state_universal(tuniv,end_point,μ,μbar)
    print("tuniv = %.2f"%tuniv," Checking the solution: ",err_tuniv)
    final_state = get_state_with_σstar(tmx,univ_point,μ,μbar)
    outcome = exp((b-d)*tmx-final_state[-1])
    print("Fold-change in tumor size: %.2f"%(outcome))
    output = [Τ,Τbar,μ,μbar,end_switching_curve['t'],end_switching_curve['x']]+list(univ_point[0:2])+[outcome]+list(err)+[err_tuniv]
    writer.writerow(output)
File.close()
```
* The results are aggregated in a file **sensitivity1_agg.csv**.
```
# pd.DataFrame.from_csv was removed from pandas; read_csv(index_col=0) is the modern equivalent
df = pd.read_csv("../figures/draft/sensitivity1_agg.csv", index_col=0).reset_index().drop(columns=['err_sw_t','err_sw_x','err_tuniv'])
df['Tratio'] = df['Tbar']/df['T']
df.head()
```
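As a hypothetical next step (the column names `T`, `Tratio`, and `outcome` are assumed from the aggregated frame above), the fold-change could be plotted against the ratio $\bar T/T$, one curve per fixed $T$:

```python
import matplotlib.pyplot as plt

def plot_foldchange(df):
    # df: pandas DataFrame with columns 'T', 'Tratio', 'outcome' (assumed names)
    fig, ax = plt.subplots()
    for T, group in df.groupby("T"):
        group = group.sort_values("Tratio")
        ax.plot(group["Tratio"], group["outcome"], label="T = %g days" % T)
    ax.set_xlabel(r"$\bar{T}/T$")
    ax.set_ylabel("Fold-change in tumor size")
    ax.legend()
    return ax
```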
# Logistic Regression (scikit-learn) with HDFS/Spark Data Versioning
This example is based on our [basic census income classification example](census-end-to-end.ipynb), using local setups of ModelDB and its client, and [HDFS/Spark data versioning](https://docs.verta.ai/en/master/api/api/versioning.html#verta.dataset.HDFSPath).
```
!pip install /path/to/verta-0.15.10-py2.py3-none-any.whl
HOST = "localhost:8080"
PROJECT_NAME = "Census Income Classification - HDFS Data"
EXPERIMENT_NAME = "Logistic Regression"
```
## Imports
```
from __future__ import print_function
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
import itertools
import os
import numpy as np
import pandas as pd
import sklearn
from sklearn import model_selection
from sklearn import linear_model
```
---
# Log Workflow
This section demonstrates logging model metadata and training artifacts to ModelDB.
## Instantiate Client
```
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
```
## Prepare Data
```
from pyspark import SparkContext
sc = SparkContext("local")
from verta.dataset import HDFSPath
hdfs = "hdfs://HOST:PORT"
dataset = client.set_dataset(name="Census Income S3")
blob = HDFSPath.with_spark(sc, "{}/data/census/*".format(hdfs))
version = dataset.create_version(blob)
version
csv = sc.textFile("{}/data/census/census-train.csv".format(hdfs)).collect()
from verta.external.six import StringIO
df_train = pd.read_csv(StringIO('\n'.join(csv)))
X_train = df_train.iloc[:,:-1]
y_train = df_train.iloc[:, -1]
df_train.head()
```
## Prepare Hyperparameters
```
hyperparam_candidates = {
'C': [1e-6, 1e-4],
'solver': ['lbfgs'],
'max_iter': [15, 28],
}
# expand the grid: one hyperparameter dict per combination of candidate values
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
                   for values
                   in itertools.product(*hyperparam_candidates.values())]
```
## Train Models
```
def run_experiment(hyperparams):
# create object to track experiment run
run = client.set_experiment_run()
# create validation split
(X_val_train, X_val_test,
y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,
test_size=0.2,
shuffle=True)
# log hyperparameters
run.log_hyperparameters(hyperparams)
print(hyperparams, end=' ')
# create and train model
model = linear_model.LogisticRegression(**hyperparams)
model.fit(X_val_train, y_val_train)  # fit on the training split so val_acc measures held-out performance
# calculate and log validation accuracy
val_acc = model.score(X_val_test, y_val_test)
run.log_metric("val_acc", val_acc)
print("Validation accuracy: {:.4f}".format(val_acc))
# save and log model
run.log_model(model)
# log dataset snapshot as version
run.log_dataset_version("train", version)
for hyperparams in hyperparam_sets:
run_experiment(hyperparams)
```
---
# Revisit Workflow
This section demonstrates querying and retrieving runs via the Client.
## Retrieve Best Run
```
best_run = expt.expt_runs.sort("metrics.val_acc", descending=True)[0]
print("Validation Accuracy: {:.4f}".format(best_run.get_metric("val_acc")))
best_hyperparams = best_run.get_hyperparameters()
print("Hyperparameters: {}".format(best_hyperparams))
```
## Train on Full Dataset
```
model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)
model.fit(X_train, y_train)
```
## Calculate Accuracy on Full Training Set
```
train_acc = model.score(X_train, y_train)
print("Training accuracy: {:.4f}".format(train_acc))
```
---
## Contour deformation
In the context of the GW method, the contour deformation (CD) technique is used in conjunction with the resolution of identity (RI) to reduce the formal scaling of the self-energy calculation. Compared to the widely used analytic continuation approach, it provides a means to evaluate the self-energy directly on the real axis without employing Padé approximants or non-linear least-squares fits, potentially offering superior accuracy. Here, we provide a brief outline of the theory behind CD and give an example of a self-energy calculation within CD without invoking RI, in order to facilitate comparison with the results presented above.
A detailed discussion of CD can be found in the following papers:
1. Golze, D., Wilhelm, J., van Setten, M. J., & Rinke, P. (2018). Core-Level Binding Energies from GW : An Efficient Full-Frequency Approach within a Localized Basis. Journal of Chemical Theory and Computation, 14(9), 4856–4869. https://doi.org/10.1021/acs.jctc.8b00458
2. Giantomassi, M., Stankovski, M., Shaltaf, R., Grüning, M., Bruneval, F., Rinke, P., & Rignanese, G.-M. (2011). Electronic properties of interfaces and defects from many-body perturbation theory: Recent developments and applications. Physica Status Solidi (B), 248(2), 275–289. https://doi.org/10.1002/pssb.201046094
CD is used to recast the convolution in the GW expression for the self-energy as a difference between two integrals, one of which can be performed analytically while the other can be evaluated numerically on a relatively small grid. This is achieved by closing the integration contour as shown below [2]:

$$
\Sigma(r_1,r_2, \omega) = \frac{i}{2\pi} \int_{-\infty}^{+\infty} e^{i\omega^{\prime} \eta} G(r_1, r_2, \omega + \omega^{\prime}) W(r_1, r_2, \omega^{\prime}) d\omega^{\prime}\\
= \frac{i}{2\pi} \oint_{\Gamma} G(r_1, r_2, \omega + z) W(r_1, r_2, z) dz - \frac{1}{2\pi} \int_{-\infty}^{+\infty} G(r_1, r_2, \omega + i\omega^{\prime}) W(r_1, r_2, i\omega^{\prime}) d\omega^{\prime}
$$
Depending on the $\omega$ value, the lower-left and upper-right loops of the contour can enclose one or several poles of the zero-order Green's function, whereas the poles of the screened Coulomb interaction never fall within the contour. This allows the contour integral to be evaluated as a sum of the corresponding residues with appropriate signs (note that the upper-right loop is traversed counter-clockwise, while the lower-left loop is traversed clockwise). The imaginary-axis contribution is calculated using a Gauss-Legendre grid. Importantly, the integrals over the arcs vanish only if the screened Coulomb interaction does not contain the exchange contribution.
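The imaginary-axis integrand is smooth and decays quickly, which is why a modest Gauss-Legendre grid suffices. Below is a minimal, self-contained sketch (the test function is illustrative, not the GW integrand) of evaluating a semi-infinite integral by mapping the standard Gauss-Legendre nodes from $(-1,1)$ onto $(0,\infty)$:

```python
import numpy as np

def gauss_legendre_semi_infinite(f, n=200):
    """Approximate the integral of f over (0, inf) with an n-point
    Gauss-Legendre rule mapped via omega = (1 + t) / (1 - t)."""
    t, w = np.polynomial.legendre.leggauss(n)
    omega = (1.0 + t) / (1.0 - t)
    jacobian = 2.0 / (1.0 - t) ** 2   # d(omega)/dt of the mapping
    return float(np.sum(w * jacobian * f(omega)))

# sanity check on a known integral: the integral of 1/(1+x^2) over (0, inf) is pi/2
val = gauss_legendre_semi_infinite(lambda x: 1.0 / (1.0 + x ** 2))
```

This mapped-node grid is presumably what the `gl_npoint` option in the calculations below controls.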
```
import psi4
import numpy as np
import scipy as sp
from matplotlib import pyplot as plt
%matplotlib inline
from IPython.display import display, HTML
display(HTML("<style>.container {width:95% !important;}</style>"))
psi4.set_options({'basis' : 'cc-pvdz', 'd_convergence' : 1e-7,'scf_type' : 'out_of_core', 'dft_spherical_points' : 974, 'dft_radial_points' : 150 })
c2h2 = psi4.geometry("""
C 0.0000 0.0000 0.6015
C 0.0000 0.0000 -0.6015
H 0.0000 0.0000 1.6615
H 0.0000 0.0000 -1.6615
symmetry c1
units angstrom
""")
psi4.set_output_file('c2h2_ccpvdz.out')
scf_e, scf_wfn = psi4.energy('PBE', return_wfn=True)
print("DFT energy is %16.10f" % scf_e)
epsilon = np.asarray(scf_wfn.epsilon_a())
print(epsilon*psi4.constants.hartree2ev)
```
``` SCF Total Energy (Ha): -77.2219432068 (MOLGW) ```
```
import GW
gw_par = {'no_qp' : 7, 'nv_qp' : 0, 'nomega_sigma' : 501, 'step_sigma' : 0.01, 'gl_npoint' : 200, 'low_mem' : True }
gw_c2h2_dz_cd1 = GW.GW_DFT(scf_wfn, c2h2, gw_par)
gw_c2h2_dz_cd1.print_summary()
```
```
GW eigenvalues (eV) RI
# E0 SigX-Vxc SigC Z E_qp^lin E_qp^graph
1 -269.503377 -35.463486 11.828217 0.724328 -286.623075 -326.542284
2 -269.449587 -35.412335 11.798952 0.725633 -286.584227 -326.514902
3 -18.425273 -9.085843 4.032739 0.740744 -22.168328 -21.438530
4 -13.915903 -6.453950 1.756727 0.797034 -17.659749 -17.729721
5 -11.997810 -5.869987 1.145594 0.873449 -16.124327 -15.984958
6 -6.915552 -3.811111 -0.355345 0.897341 -10.654285 -10.639366
7 -6.915552 -3.811111 -0.355345 0.897341 -10.654285 -10.639366
```
```
gw_par = {'no_qp' : 7, 'nv_qp' : 0, 'nomega_sigma' : 501, 'step_sigma' : 0.01, 'analytic_W': True, 'gl_npoint' : 200, 'debug' : False, 'low_mem' : False }
gw_c2h2_dz_cd2 = GW.GW_DFT(scf_wfn, c2h2, gw_par)
gw_c2h2_dz_cd2.print_summary()
```
```
Analytic vs approximate W (contour deformation algorithm)
Analytic
E^lin, eV E^graph, eV Z
-286.589767 -326.503147 0.724323
-286.550907 -326.475732 0.725630
-22.169264 -21.436806 0.740752
-17.660393 -17.728667 0.797120
-16.125682 -15.984765 0.873439
-10.631926 -10.639259 0.897342
-10.680195 -10.639259 0.897342
Approximate
E^lin, eV E^graph, eV Z
-286.587831 -326.503140 0.724323
-286.548967 -326.475725 0.725630
-22.168472 -21.436808 0.740752
-17.660116 -17.728666 0.797120
-16.125265 -15.984765 0.873439
-10.631349 -10.639259 0.897342
-10.679617 -10.639259 0.897342
MOLGW reference
GW eigenvalues (eV)
# E0 SigX-Vxc SigC Z E_qp^lin E_qp^graph
1 -269.503377 -35.463486 11.828217 0.724328 -286.623075 -326.542284
2 -269.449587 -35.412335 11.798952 0.725633 -286.584227 -326.514902
3 -18.425273 -9.085843 4.032739 0.740744 -22.168328 -21.438530
4 -13.915903 -6.453950 1.756727 0.797034 -17.659749 -17.729721
5 -11.997810 -5.869987 1.145594 0.873449 -16.124327 -15.984958
6 -6.915552 -3.811111 -0.355345 0.897341 -10.654285 -10.639366
7 -6.915552 -3.811111 -0.355345 0.897341 -10.654285 -10.639366
```
# <center>Models and Pricing of Financial Derivatives HW_01</center>
**<center>11510691 程远星</center>**
## Question 1
$\DeclareMathOperator*{\argmin}{argmin}
\DeclareMathOperator*{\argmax}{argmax}
\newcommand{\using}[1]{\stackrel{\mathrm{#1}}{=}}
\newcommand{\ffrac}{\displaystyle \frac}
\newcommand{\space}{\text{ }}
\newcommand{\bspace}{\;\;\;\;}
\newcommand{\QQQ}{\boxed{?\:}}
\newcommand{\void}{\left.\right.}
\newcommand{\CB}[1]{\left\{ #1 \right\}}
\newcommand{\SB}[1]{\left[ #1 \right]}
\newcommand{\P}[1]{\left( #1 \right)}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\Tran}[1]{{#1}^{\mathrm{T}}}
\newcommand{\d}[1]{\displaystyle{#1}}
\newcommand{\EE}[2][\,\!]{\mathbb{E}_{#1}\left[#2\right]}
\newcommand{\Var}[2][\,\!]{\mathrm{Var}_{#1}\left[#2\right]}
\newcommand{\Cov}[2][\,\!]{\mathrm{Cov}_{#1}\left(#2\right)}
\newcommand{\Corr}[2][\,\!]{\mathrm{Corr}_{#1}\left(#2\right)}
\newcommand{\I}[1]{\mathrm{I}\left( #1 \right)}
\newcommand{\N}[1]{\mathrm{N} \left( #1 \right)}
\newcommand{\ow}{\text{otherwise}}\bspace$Selling a call option: As the writer of the call option, I give the holder the right to buy an asset at a specified time $T$ for a specified price $K$. My payoff would be $-\max\P{S_T - K,0}$ for a European call option. If I sold an American call option, the holder could exercise at any time up to $T$.
$\bspace$Buying a put option: As the holder of the put option, I am granted the right to sell an asset at a specified time $T$ for a specified price $K$. My payoff would be $\max\P{K - S_T, 0}$, or I could exercise before $T$ if what I bought is an American put option.
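The two payoffs described above can be written as one-line functions (a minimal sketch; `K` is the strike and `S_T` the asset price at expiry):

```python
def short_call_payoff(S_T, K):
    # payoff to the writer of a call: -max(S_T - K, 0)
    return -max(S_T - K, 0)

def long_put_payoff(S_T, K):
    # payoff to the holder of a put: max(K - S_T, 0)
    return max(K - S_T, 0)

print(short_call_payoff(120, 100), long_put_payoff(80, 100))  # -20 20
```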
## Question 2
$\bspace$We can write their profit functions in terms of the stock price $S_t$.
- Stock: $100\P{S_t - 94}$
- Option: $2000\big(\max\P{S_t - 95,0} - 4.7\big)$
$\bspace$They intersect at two points, $\P{0,-9400}$ and $\P{100,600}$. It is generally acknowledged that small price moves are more likely than large ones, so I personally think that holding the stock rather than the options gives a better chance to profit.
$\bspace$As for the second question, having found the intersection points, we can say that when the stock price rises above $100$, the options earn more.
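A quick numerical check of the two profit functions and the crossing at $S_t = 100$:

```python
def stock_profit(S):
    # 100 shares bought at 94
    return 100 * (S - 94)

def option_profit(S):
    # 2000 calls with strike 95, premium 4.7 each
    return 2000 * (max(S - 95, 0) - 4.7)

# both strategies are worth about 600 at S = 100,
# and the options dominate for any price above that
for S in (95, 100, 110):
    print(S, stock_profit(S), option_profit(S))
```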
## Question 3
$\bspace$The trader is now in the call holder's position: he has paid $c$ for the right to buy the underlying asset for $K$ at time $T$. He is also in the put writer's position: he has received $p$ and granted someone else the right to sell him the asset for $K$ at time $T$.
$\bspace$For the two prices to be equal, put-call parity gives $S_0 = Ke^{-rT}$: the present value of $K$ equals the initial price of the asset.
## Question 4
$\bspace$We first write its payoff function in terms of the stock price:
$$\begin{align}
p &= 100\P{S_T - 40} + 100\SB{5 - \max\P{S_T - 50, 0}} + 100\SB{\max\P{30 - S_T,0} - 7}\\
&= \begin{cases}
800, &\text{if } S_T \geq 50 \\
100S_T - 4200, &\text{if } 50 \geq S_T \geq 30 \\
-1200, &\text{if } 30 \geq S_T \geq 0
\end{cases}
\end{align}
$$

After that, the payoff would change to:
$$\begin{align}
p &= 100\P{S_T - 40} + 200\SB{5 - \max\P{S_T - 50, 0}} + 200\SB{\max\P{30 - S_T,0} - 7}\\
&= \begin{cases}
5600 - 100S_T, &\text{if } S_T \geq 50 \\
100S_T - 4400, &\text{if } 50 \geq S_T \geq 30 \\
1600 - 100S_T, &\text{if } 30 \geq S_T \geq 0
\end{cases}
\end{align}
$$

## Question 5
$\bspace$The lower bound of the (European put) option price can be obtained using the formula
$\bspace\begin{align}
p &\geq K e^{-rT} - S_0 \\
&= 15 \cdot e^{-6\% \times 1/12} - 12 \\
&\approx 2.93
\end{align}$
## Question 6
$\bspace$Early exercise of an American put option means selling the stock to the writer at the strike price $K$ before the expiration date $T$. Suppose he exercises at time $t$: receiving $K$ at $t$ is worth more than receiving it at $T$, whose value discounted back to $t$ is only $Ke^{-r\P{T-t}}$. But then he can no longer sell the stock for $K$ at time $T$.
## Question 7
$\bspace$By the put-call parity, we have: $1 + 20 \times e^{-4\% \times 0.25} = p + 19$ thus $p = 1.80$
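A one-line numerical confirmation of the parity computation above:

```python
from math import exp

c, K, r, T, S0 = 1.0, 20.0, 0.04, 0.25, 19.0
p = c + K * exp(-r * T) - S0   # put-call parity rearranged for p
print(round(p, 2))  # 1.8
```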
## Question 8
$\bspace$Based on put-call parity, $c + Ke^{-rT} = S_0 + p \Longrightarrow c + 49.75 = 47+2.5$, which would force $c < 0$, so there is an arbitrage opportunity. One strategy is to buy a stock and a put option using borrowed money, $49.5$ at interest rate $6\%$.
$\bspace$Then he wins $50 - 49.5e^{0.06\times 1/12} \approx 0.25$ if the stock price ends below $50$, or more if the stock price ends above $50$.
## Question 9
$\P{1}$
$\bspace P \geq p = c + Ke^{-rT}-S_0 = C + Ke^{-rT} - S_0 = 4 + 30 e^{-8\%\times1/4} -31 \approx 2.41$
$\bspace$For an upper bound, a portfolio of a European call plus $K$ in cash always dominates an American put plus the stock, so $c + K \geq P + S_0$. Therefore, $P \leq c + K - S_0 = C + K - S_0 = 4 + 30 - 31 = 3$.
$\P{2}$
$\bspace$If the American put price were greater than $3$, i.e. $P > C + K - S_0$, we could write an American put and sell it, use the proceeds to buy an American call, short-sell a stock to receive $S_0$, and deposit $K$ in the bank. Whenever the put holder exercises, we immediately pay $K = 30$ for the stock and return it to the stock lender. Starting from nothing, we end up with an American call, a positive payoff, and some interest.
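The two bounds derived in part $\P{1}$ can be checked numerically:

```python
from math import exp

C, K, r, T, S0 = 4.0, 30.0, 0.08, 0.25, 31.0
lower = C + K * exp(-r * T) - S0   # P >= C + K*e^{-rT} - S0
upper = C + K - S0                 # P <= C + K - S0
print(round(lower, 2), upper)  # 2.41 3.0
```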
## Question 10
$\P{1}$
$\bspace$If not, then $2c_2 > c_1 + c_3$, so we have an arbitrage opportunity. First write two call options with strike price $K_2$, then use the proceeds to buy one call option with strike price $K_1$ and one with strike price $K_3$; some money is left over.
$\bspace$Then, whatever happens at the exercise time, the payoff $\max\P{S_T-K_1,0} + \max\P{S_T-K_3,0} - 2\max\P{S_T-K_2,0}$ is nonnegative because $2K_2 = K_1 + K_3$, meaning we gained money from nothing. Thus $2c_2 \leq c_1 + c_3$.
$\P{2}$
$$p_2 \leq 0.5\P{p_1 + p_3}$$
$\bspace$The proof is obvious, similar to the preceding one.
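Both parts rest on the butterfly spread having a nonnegative payoff when $2K_2 = K_1 + K_3$ (the call version is checked below with hypothetical strikes; the put case is analogous):

```python
import numpy as np

K1, K2, K3 = 90.0, 100.0, 110.0        # illustrative strikes with 2*K2 = K1 + K3
S = np.linspace(0.0, 200.0, 2001)
# long one K1 call, short two K2 calls, long one K3 call
butterfly = np.maximum(S - K1, 0) - 2 * np.maximum(S - K2, 0) + np.maximum(S - K3, 0)
print(butterfly.min() >= 0)  # True: the spread never pays out a negative amount
```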
# RadarCOVID-Report
## Data Extraction
```
import datetime
import json
import logging
import os
import shutil
import tempfile
import textwrap
import uuid
import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np
import pandas as pd
import pycountry
import retry
import seaborn as sns
%matplotlib inline
current_working_directory = os.environ.get("PWD")
if current_working_directory:
os.chdir(current_working_directory)
sns.set()
matplotlib.rcParams["figure.figsize"] = (15, 6)
extraction_datetime = datetime.datetime.utcnow()
extraction_date = extraction_datetime.strftime("%Y-%m-%d")
extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)
extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d")
extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H")
current_hour = datetime.datetime.utcnow().hour
are_today_results_partial = current_hour != 23
```
### Constants
```
from Modules.ExposureNotification import exposure_notification_io
spain_region_country_code = "ES"
germany_region_country_code = "DE"
default_backend_identifier = spain_region_country_code
backend_generation_days = 7 * 2
daily_summary_days = 7 * 4 * 3
daily_plot_days = 7 * 4
tek_dumps_load_limit = daily_summary_days + 1
```
### Parameters
```
environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER")
if environment_backend_identifier:
report_backend_identifier = environment_backend_identifier
else:
report_backend_identifier = default_backend_identifier
report_backend_identifier
environment_enable_multi_backend_download = \
os.environ.get("RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD")
if environment_enable_multi_backend_download:
report_backend_identifiers = None
else:
report_backend_identifiers = [report_backend_identifier]
report_backend_identifiers
environment_invalid_shared_diagnoses_dates = \
os.environ.get("RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES")
if environment_invalid_shared_diagnoses_dates:
invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(",")
else:
invalid_shared_diagnoses_dates = []
invalid_shared_diagnoses_dates
```
### COVID-19 Cases
```
report_backend_client = \
exposure_notification_io.get_backend_client_with_identifier(
backend_identifier=report_backend_identifier)
@retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10))
def download_cases_dataframe():
return pd.read_csv("https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv")
confirmed_df_ = download_cases_dataframe()
confirmed_df_.iloc[0]
confirmed_df = confirmed_df_.copy()
confirmed_df = confirmed_df[["date", "new_cases", "iso_code"]]
confirmed_df.rename(
columns={
"date": "sample_date",
"iso_code": "country_code",
},
inplace=True)
def convert_iso_alpha_3_to_alpha_2(x):
try:
return pycountry.countries.get(alpha_3=x).alpha_2
except Exception as e:
logging.info(f"Error converting country ISO Alpha 3 code '{x}': {repr(e)}")
return None
confirmed_df["country_code"] = confirmed_df.country_code.apply(convert_iso_alpha_3_to_alpha_2)
confirmed_df.dropna(inplace=True)
confirmed_df["sample_date"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True)
confirmed_df["sample_date"] = confirmed_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_df.sort_values("sample_date", inplace=True)
confirmed_df.tail()
confirmed_days = pd.date_range(
start=confirmed_df.iloc[0].sample_date,
end=extraction_datetime)
confirmed_days_df = pd.DataFrame(data=confirmed_days, columns=["sample_date"])
confirmed_days_df["sample_date_string"] = \
confirmed_days_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_days_df.tail()
def sort_source_regions_for_display(source_regions: list) -> list:
if report_backend_identifier in source_regions:
source_regions = [report_backend_identifier] + \
list(sorted(set(source_regions).difference([report_backend_identifier])))
else:
source_regions = list(sorted(source_regions))
return source_regions
report_source_regions = report_backend_client.source_regions_for_date(
date=extraction_datetime.date())
report_source_regions = sort_source_regions_for_display(
source_regions=report_source_regions)
report_source_regions
def get_cases_dataframe(source_regions_for_date_function, columns_suffix=None):
source_regions_at_date_df = confirmed_days_df.copy()
source_regions_at_date_df["source_regions_at_date"] = \
source_regions_at_date_df.sample_date.apply(
lambda x: source_regions_for_date_function(date=x))
source_regions_at_date_df.sort_values("sample_date", inplace=True)
source_regions_at_date_df["_source_regions_group"] = source_regions_at_date_df. \
source_regions_at_date.apply(lambda x: ",".join(sort_source_regions_for_display(x)))
source_regions_at_date_df.tail()
#%%
source_regions_for_summary_df_ = \
source_regions_at_date_df[["sample_date", "_source_regions_group"]].copy()
source_regions_for_summary_df_.rename(columns={"_source_regions_group": "source_regions"}, inplace=True)
source_regions_for_summary_df_.tail()
#%%
confirmed_output_columns = ["sample_date", "new_cases", "covid_cases"]
confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns)
for source_regions_group, source_regions_group_series in \
source_regions_at_date_df.groupby("_source_regions_group"):
source_regions_set = set(source_regions_group.split(","))
confirmed_source_regions_set_df = \
confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy()
confirmed_source_regions_group_df = \
confirmed_source_regions_set_df.groupby("sample_date").new_cases.sum() \
.reset_index().sort_values("sample_date")
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df.merge(
confirmed_days_df[["sample_date_string"]].rename(
columns={"sample_date_string": "sample_date"}),
how="right")
confirmed_source_regions_group_df["new_cases"] = \
confirmed_source_regions_group_df["new_cases"].clip(lower=0)
confirmed_source_regions_group_df["covid_cases"] = \
confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round()
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[confirmed_output_columns]
confirmed_source_regions_group_df = confirmed_source_regions_group_df.replace(0, np.nan)
confirmed_source_regions_group_df.fillna(method="ffill", inplace=True)
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[
confirmed_source_regions_group_df.sample_date.isin(
source_regions_group_series.sample_date_string)]
confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df)
result_df = confirmed_output_df.copy()
result_df.tail()
#%%
result_df.rename(columns={"sample_date": "sample_date_string"}, inplace=True)
result_df = confirmed_days_df[["sample_date_string"]].merge(result_df, how="left")
result_df.sort_values("sample_date_string", inplace=True)
result_df.fillna(method="ffill", inplace=True)
result_df.tail()
#%%
result_df[["new_cases", "covid_cases"]].plot()
if columns_suffix:
result_df.rename(
columns={
"new_cases": "new_cases_" + columns_suffix,
"covid_cases": "covid_cases_" + columns_suffix},
inplace=True)
return result_df, source_regions_for_summary_df_
confirmed_eu_df, source_regions_for_summary_df = get_cases_dataframe(
report_backend_client.source_regions_for_date)
confirmed_es_df, _ = get_cases_dataframe(
lambda date: [spain_region_country_code],
columns_suffix=spain_region_country_code.lower())
```
### Extract API TEKs
```
raw_zip_path_prefix = "Data/TEKs/Raw/"
base_backend_identifiers = [report_backend_identifier]
multi_backend_exposure_keys_df = \
exposure_notification_io.download_exposure_keys_from_backends(
backend_identifiers=report_backend_identifiers,
generation_days=backend_generation_days,
fail_on_error_backend_identifiers=base_backend_identifiers,
save_raw_zip_path_prefix=raw_zip_path_prefix)
multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"]
multi_backend_exposure_keys_df.rename(
columns={
"generation_datetime": "sample_datetime",
"generation_date_string": "sample_date_string",
},
inplace=True)
multi_backend_exposure_keys_df.head()
early_teks_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.rolling_period < 144].copy()
early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6
early_teks_df[early_teks_df.sample_date_string != extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
early_teks_df[early_teks_df.sample_date_string == extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[
"sample_date_string", "region", "key_data"]]
multi_backend_exposure_keys_df.head()
active_regions = \
multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
active_regions
multi_backend_summary_df = multi_backend_exposure_keys_df.groupby(
["sample_date_string", "region"]).key_data.nunique().reset_index() \
.pivot(index="sample_date_string", columns="region") \
.sort_index(ascending=False)
multi_backend_summary_df.rename(
columns={"key_data": "shared_teks_by_generation_date"},
inplace=True)
multi_backend_summary_df.rename_axis("sample_date", inplace=True)
multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int)
multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days)
multi_backend_summary_df.head()
def compute_keys_cross_sharing(x):
teks_x = x.key_data_x.item()
common_teks = set(teks_x).intersection(x.key_data_y.item())
common_teks_fraction = len(common_teks) / len(teks_x)
return pd.Series(dict(
common_teks=common_teks,
common_teks_fraction=common_teks_fraction,
))
multi_backend_exposure_keys_by_region_df = \
multi_backend_exposure_keys_df.groupby("region").key_data.unique().reset_index()
multi_backend_exposure_keys_by_region_df["_merge"] = True
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_df.merge(
multi_backend_exposure_keys_by_region_df, on="_merge")
multi_backend_exposure_keys_by_region_combination_df.drop(
columns=["_merge"], inplace=True)
if multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1:
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_combination_df[
multi_backend_exposure_keys_by_region_combination_df.region_x !=
multi_backend_exposure_keys_by_region_combination_df.region_y]
multi_backend_exposure_keys_cross_sharing_df = \
multi_backend_exposure_keys_by_region_combination_df \
.groupby(["region_x", "region_y"]) \
.apply(compute_keys_cross_sharing) \
.reset_index()
multi_backend_cross_sharing_summary_df = \
multi_backend_exposure_keys_cross_sharing_df.pivot_table(
values=["common_teks_fraction"],
columns="region_x",
index="region_y",
aggfunc=lambda x: x.item())
multi_backend_cross_sharing_summary_df
multi_backend_without_active_region_exposure_keys_df = \
multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier]
multi_backend_without_active_region = \
multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
multi_backend_without_active_region
exposure_keys_summary_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.region == report_backend_identifier]
exposure_keys_summary_df.drop(columns=["region"], inplace=True)
exposure_keys_summary_df = \
exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame()
exposure_keys_summary_df = \
exposure_keys_summary_df.reset_index().set_index("sample_date_string")
exposure_keys_summary_df.sort_index(ascending=False, inplace=True)
exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True)
exposure_keys_summary_df.head()
```
### Dump API TEKs
```
tek_list_df = multi_backend_exposure_keys_df[
["sample_date_string", "region", "key_data"]].copy()
tek_list_df["key_data"] = tek_list_df["key_data"].apply(str)
tek_list_df.rename(columns={
"sample_date_string": "sample_date",
"key_data": "tek_list"}, inplace=True)
tek_list_df = tek_list_df.groupby(
["sample_date", "region"]).tek_list.unique().reset_index()
tek_list_df["extraction_date"] = extraction_date
tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour
tek_list_path_prefix = "Data/TEKs/"
tek_list_current_path = tek_list_path_prefix + f"/Current/RadarCOVID-TEKs.json"
tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json"
tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json"
for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]:
os.makedirs(os.path.dirname(path), exist_ok=True)
tek_list_base_df = tek_list_df[tek_list_df.region == report_backend_identifier]
tek_list_base_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json(
tek_list_current_path,
lines=True, orient="records")
tek_list_base_df.drop(columns=["extraction_date_with_hour"]).to_json(
tek_list_daily_path,
lines=True, orient="records")
tek_list_base_df.to_json(
tek_list_hourly_path,
lines=True, orient="records")
tek_list_base_df.head()
```
### Load TEK Dumps
```
import glob
def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame:
extracted_teks_df = pd.DataFrame(columns=["region"])
file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json"))))
if limit:
file_paths = file_paths[:limit]
for file_path in file_paths:
logging.info(f"Loading TEKs from '{file_path}'...")
iteration_extracted_teks_df = pd.read_json(file_path, lines=True)
extracted_teks_df = extracted_teks_df.append(
iteration_extracted_teks_df, sort=False)
extracted_teks_df["region"] = \
extracted_teks_df.region.fillna(spain_region_country_code).copy()
if region:
extracted_teks_df = \
extracted_teks_df[extracted_teks_df.region == region]
return extracted_teks_df
daily_extracted_teks_df = load_extracted_teks(
mode="Daily",
region=report_backend_identifier,
limit=tek_dumps_load_limit)
daily_extracted_teks_df.head()
exposure_keys_summary_df_ = daily_extracted_teks_df \
.sort_values("extraction_date", ascending=False) \
.groupby("sample_date").tek_list.first() \
.to_frame()
exposure_keys_summary_df_.index.name = "sample_date_string"
exposure_keys_summary_df_["tek_list"] = \
exposure_keys_summary_df_.tek_list.apply(len)
exposure_keys_summary_df_ = exposure_keys_summary_df_ \
.rename(columns={"tek_list": "shared_teks_by_generation_date"}) \
.sort_index(ascending=False)
exposure_keys_summary_df = exposure_keys_summary_df_
exposure_keys_summary_df.head()
```
### Daily New TEKs
```
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply(
lambda x: set(sum(x, []))).reset_index()
tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True)
tek_list_df.head()
def compute_teks_by_generation_and_upload_date(date):
day_new_teks_set_df = tek_list_df.copy().diff()
try:
day_new_teks_set = day_new_teks_set_df[
day_new_teks_set_df.index == date].tek_list.item()
except ValueError:
day_new_teks_set = None
if pd.isna(day_new_teks_set):
day_new_teks_set = set()
day_new_teks_df = daily_extracted_teks_df[
daily_extracted_teks_df.extraction_date == date].copy()
day_new_teks_df["shared_teks"] = \
day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set))
day_new_teks_df["shared_teks"] = \
day_new_teks_df.shared_teks.apply(len)
day_new_teks_df["upload_date"] = date
day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True)
day_new_teks_df = day_new_teks_df[
["upload_date", "generation_date", "shared_teks"]]
day_new_teks_df["generation_to_upload_days"] = \
(pd.to_datetime(day_new_teks_df.upload_date) -
pd.to_datetime(day_new_teks_df.generation_date)).dt.days
day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0]
return day_new_teks_df
shared_teks_generation_to_upload_df = pd.concat([
    compute_teks_by_generation_and_upload_date(date=upload_date)
    for upload_date in daily_extracted_teks_df.extraction_date.unique()])
shared_teks_generation_to_upload_df \
.sort_values(["upload_date", "generation_date"], ascending=False, inplace=True)
shared_teks_generation_to_upload_df.tail()
today_new_teks_df = \
shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.upload_date == extraction_date].copy()
today_new_teks_df.tail()
if not today_new_teks_df.empty:
today_new_teks_df.set_index("generation_to_upload_days") \
.sort_index().shared_teks.plot.bar()
generation_to_upload_period_pivot_df = \
shared_teks_generation_to_upload_df[
["upload_date", "generation_to_upload_days", "shared_teks"]] \
.pivot(index="upload_date", columns="generation_to_upload_days") \
.sort_index(ascending=False).fillna(0).astype(int) \
.droplevel(level=0, axis=1)
generation_to_upload_period_pivot_df.head()
new_tek_df = tek_list_df.diff().tek_list.apply(
lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()
new_tek_df.rename(columns={
"tek_list": "shared_teks_by_upload_date",
"extraction_date": "sample_date_string",}, inplace=True)
new_tek_df.tail()
shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \
[["upload_date", "shared_teks"]].rename(
columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_teks_uploaded_on_generation_date",
})
shared_teks_uploaded_on_generation_date_df.head()
estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \
.groupby(["upload_date"]).shared_teks.max().reset_index() \
.sort_values(["upload_date"], ascending=False) \
.rename(columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_diagnoses",
})
invalid_shared_diagnoses_dates_mask = \
estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates)
estimated_shared_diagnoses_df.loc[
    invalid_shared_diagnoses_dates_mask, "shared_diagnoses"] = 0
estimated_shared_diagnoses_df.head()
```
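The `tek_list_df.copy().diff()` call above works because pandas applies `-` elementwise to object columns, and for Python `set`s `a - b` is the set difference, i.e. the keys first seen in each extraction. A more explicit, shift-based sketch of the same logic (toy keys, illustrative names):

```python
import pandas as pd

# Cumulative TEK sets per extraction date (toy data).
tek_sets = pd.Series(
    [{"k1", "k2"}, {"k1", "k2", "k3", "k4"}],
    index=["2020-10-01", "2020-10-02"], name="tek_list")

# DataFrame.diff() on a column of sets computes elementwise set
# difference; this shift-based version spells the same logic out.
new_per_day = pd.Series(
    [cur - prev if isinstance(prev, set) else None
     for cur, prev in zip(tek_sets, tek_sets.shift())],
    index=tek_sets.index)
print(sorted(new_per_day.iloc[1]))  # ['k3', 'k4']
```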
### Hourly New TEKs
```
hourly_extracted_teks_df = load_extracted_teks(
mode="Hourly", region=report_backend_identifier, limit=25)
hourly_extracted_teks_df.head()
hourly_new_tek_count_df = hourly_extracted_teks_df \
.groupby("extraction_date_with_hour").tek_list. \
apply(lambda x: set(sum(x, []))).reset_index().copy()
hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \
.sort_index(ascending=True)
hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff()
hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply(
lambda x: len(x) if not pd.isna(x) else 0)
hourly_new_tek_count_df.rename(columns={
"new_tek_count": "shared_teks_by_upload_date"}, inplace=True)
hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[
"extraction_date_with_hour", "shared_teks_by_upload_date"]]
hourly_new_tek_count_df.head()
hourly_summary_df = hourly_new_tek_count_df.copy()
hourly_summary_df.set_index("extraction_date_with_hour", inplace=True)
hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index()
hourly_summary_df["datetime_utc"] = pd.to_datetime(
hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H")
hourly_summary_df.set_index("datetime_utc", inplace=True)
hourly_summary_df = hourly_summary_df.tail(-1)
hourly_summary_df.head()
```
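The hourly index uses a custom `YYYY-MM-DD@HH` stamp; `pd.to_datetime` parses it because literal characters such as `@` are allowed between directives in a `format` string:

```python
import pandas as pd

# "@" is treated as a literal separator in the format string.
stamps = pd.Series(["2020-10-01@13", "2020-10-01@14"])
parsed = pd.to_datetime(stamps, format="%Y-%m-%d@%H")
print(parsed.dt.hour.tolist())  # [13, 14]
```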
### Official Statistics
```
import requests
import pandas.io.json
official_stats_response = requests.get("https://radarcovid.covid19.gob.es/kpi/statistics/basics")
official_stats_response.raise_for_status()
official_stats_df_ = pandas.io.json.json_normalize(official_stats_response.json())
official_stats_df = official_stats_df_.copy()
official_stats_df["date"] = pd.to_datetime(official_stats_df["date"], dayfirst=True)
official_stats_df.head()
official_stats_column_map = {
"date": "sample_date",
"applicationsDownloads.totalAcummulated": "app_downloads_es_accumulated",
"communicatedContagions.totalAcummulated": "shared_diagnoses_es_accumulated",
}
accumulated_suffix = "_accumulated"
accumulated_values_columns = \
list(filter(lambda x: x.endswith(accumulated_suffix), official_stats_column_map.values()))
interpolated_values_columns = \
list(map(lambda x: x[:-len(accumulated_suffix)], accumulated_values_columns))
official_stats_df = \
official_stats_df[official_stats_column_map.keys()] \
.rename(columns=official_stats_column_map)
official_stats_df["extraction_date"] = extraction_date
official_stats_df.head()
official_stats_path = "Data/Statistics/Current/RadarCOVID-Statistics.json"
previous_official_stats_df = pd.read_json(official_stats_path, orient="records", lines=True)
previous_official_stats_df["sample_date"] = pd.to_datetime(previous_official_stats_df["sample_date"], dayfirst=True)
official_stats_df = pd.concat([official_stats_df, previous_official_stats_df])
official_stats_df.head()
official_stats_df = official_stats_df[~(official_stats_df.shared_diagnoses_es_accumulated == 0)]
official_stats_df.sort_values("extraction_date", ascending=False, inplace=True)
official_stats_df.drop_duplicates(subset=["sample_date"], keep="first", inplace=True)
official_stats_df.head()
official_stats_stored_df = official_stats_df.copy()
official_stats_stored_df["sample_date"] = official_stats_stored_df.sample_date.dt.strftime("%Y-%m-%d")
official_stats_stored_df.to_json(official_stats_path, orient="records", lines=True)
official_stats_df.drop(columns=["extraction_date"], inplace=True)
official_stats_df = confirmed_days_df.merge(official_stats_df, how="left")
official_stats_df.sort_values("sample_date", ascending=False, inplace=True)
official_stats_df.head()
official_stats_df[accumulated_values_columns] = \
official_stats_df[accumulated_values_columns] \
.astype(float).interpolate(limit_area="inside")
official_stats_df[interpolated_values_columns] = \
official_stats_df[accumulated_values_columns].diff(periods=-1)
official_stats_df.drop(columns="sample_date", inplace=True)
official_stats_df.head()
```
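The official statistics arrive as accumulated totals with gaps. With the frame sorted newest-first, `interpolate(limit_area="inside")` fills only interior gaps and `diff(periods=-1)` (each row minus the next, i.e. older, row) recovers per-day values. A toy illustration of the two steps:

```python
import numpy as np
import pandas as pd

# Accumulated totals, newest first, with an interior gap.
acc = pd.Series([60.0, np.nan, 30.0, 10.0],
                index=["d4", "d3", "d2", "d1"])
filled = acc.interpolate(limit_area="inside")   # gap -> 45.0
daily = filled.diff(periods=-1)                 # row minus next (older) row
print(daily.tolist())  # [15.0, 15.0, 20.0, nan]
```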
### Data Merge
```
result_summary_df = exposure_keys_summary_df.merge(
new_tek_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
official_stats_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = confirmed_eu_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df = confirmed_es_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string)
result_summary_df = result_summary_df.merge(source_regions_for_summary_df, how="left")
result_summary_df.set_index(["sample_date", "source_regions"], inplace=True)
result_summary_df.drop(columns=["sample_date_string"], inplace=True)
result_summary_df.sort_index(ascending=False, inplace=True)
result_summary_df.head()
with pd.option_context("mode.use_inf_as_na", True):
result_summary_df = result_summary_df.fillna(0).astype(int)
result_summary_df["teks_per_shared_diagnosis"] = \
(result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case"] = \
(result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(result_summary_df.shared_diagnoses_es / result_summary_df.covid_cases_es).fillna(0)
result_summary_df.head(daily_plot_days)
def compute_aggregated_results_summary(days) -> pd.DataFrame:
aggregated_result_summary_df = result_summary_df.copy()
aggregated_result_summary_df["covid_cases_for_ratio"] = \
aggregated_result_summary_df.covid_cases.mask(
aggregated_result_summary_df.shared_diagnoses == 0, 0)
aggregated_result_summary_df["covid_cases_for_ratio_es"] = \
aggregated_result_summary_df.covid_cases_es.mask(
aggregated_result_summary_df.shared_diagnoses_es == 0, 0)
aggregated_result_summary_df = aggregated_result_summary_df \
.sort_index(ascending=True).fillna(0).rolling(days).agg({
"covid_cases": "sum",
"covid_cases_es": "sum",
"covid_cases_for_ratio": "sum",
"covid_cases_for_ratio_es": "sum",
"shared_teks_by_generation_date": "sum",
"shared_teks_by_upload_date": "sum",
"shared_diagnoses": "sum",
"shared_diagnoses_es": "sum",
}).sort_index(ascending=False)
with pd.option_context("mode.use_inf_as_na", True):
aggregated_result_summary_df = aggregated_result_summary_df.fillna(0).astype(int)
aggregated_result_summary_df["teks_per_shared_diagnosis"] = \
(aggregated_result_summary_df.shared_teks_by_upload_date /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case"] = \
(aggregated_result_summary_df.shared_diagnoses /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(aggregated_result_summary_df.shared_diagnoses_es /
aggregated_result_summary_df.covid_cases_for_ratio_es).fillna(0)
return aggregated_result_summary_df
aggregated_result_with_7_days_window_summary_df = compute_aggregated_results_summary(days=7)
aggregated_result_with_7_days_window_summary_df.head()
last_7_days_summary = aggregated_result_with_7_days_window_summary_df.to_dict(orient="records")[1]
last_7_days_summary
aggregated_result_with_14_days_window_summary_df = compute_aggregated_results_summary(days=14)
last_14_days_summary = aggregated_result_with_14_days_window_summary_df.to_dict(orient="records")[1]
last_14_days_summary
```
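Inside `compute_aggregated_results_summary`, `Series.mask` zeroes `covid_cases` on days with no shared diagnoses before the rolling sum, so the ratio denominator only counts days that could actually contribute. A minimal sketch of that pattern with made-up numbers:

```python
import pandas as pd

cases = pd.Series([100, 120, 80, 90])
diagnoses = pd.Series([3, 0, 2, 0])

# Days with no shared diagnoses are excluded from the denominator.
cases_for_ratio = cases.mask(diagnoses == 0, 0)
window_cases = cases_for_ratio.rolling(2).sum()
window_diag = diagnoses.rolling(2).sum()
ratio = (window_diag / window_cases).fillna(0)
print(cases_for_ratio.tolist())  # [100, 0, 80, 0]
```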
## Report Results
```
display_column_name_mapping = {
"sample_date": "Sample\u00A0Date\u00A0(UTC)",
"source_regions": "Source Countries",
"datetime_utc": "Timestamp (UTC)",
"upload_date": "Upload Date (UTC)",
"generation_to_upload_days": "Generation to Upload Period in Days",
"region": "Backend",
"region_x": "Backend\u00A0(A)",
"region_y": "Backend\u00A0(B)",
"common_teks": "Common TEKs Shared Between Backends",
"common_teks_fraction": "Fraction of TEKs in Backend (A) Available in Backend (B)",
"covid_cases": "COVID-19 Cases (Source Countries)",
"shared_teks_by_generation_date": "Shared TEKs by Generation Date (Source Countries)",
"shared_teks_by_upload_date": "Shared TEKs by Upload Date (Source Countries)",
"shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date (Source Countries)",
"shared_diagnoses": "Shared Diagnoses (Source Countries – Estimation)",
"teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis (Source Countries)",
"shared_diagnoses_per_covid_case": "Usage Ratio (Source Countries)",
"covid_cases_es": "COVID-19 Cases (Spain)",
"app_downloads_es": "App Downloads (Spain – Official)",
"shared_diagnoses_es": "Shared Diagnoses (Spain – Official)",
"shared_diagnoses_per_covid_case_es": "Usage Ratio (Spain)",
}
summary_columns = [
"covid_cases",
"shared_teks_by_generation_date",
"shared_teks_by_upload_date",
"shared_teks_uploaded_on_generation_date",
"shared_diagnoses",
"teks_per_shared_diagnosis",
"shared_diagnoses_per_covid_case",
"covid_cases_es",
"app_downloads_es",
"shared_diagnoses_es",
"shared_diagnoses_per_covid_case_es",
]
summary_percentage_columns = [
"shared_diagnoses_per_covid_case_es",
"shared_diagnoses_per_covid_case",
]
```
### Daily Summary Table
```
result_summary_df_ = result_summary_df.copy()
result_summary_df = result_summary_df[summary_columns]
result_summary_with_display_names_df = result_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
result_summary_with_display_names_df
```
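The display renaming above uses one mapping dict twice: `rename(columns=...)` renames column labels, while `rename_axis(index=...)` renames the index *level names* (not the row labels). A small sketch:

```python
import pandas as pd

mapping = {"sample_date": "Sample Date (UTC)", "covid_cases": "COVID-19 Cases"}
df = pd.DataFrame({"covid_cases": [5]},
                  index=pd.Index(["2020-10-01"], name="sample_date"))
# The same dict renames the index level name and the column label.
pretty = df.rename_axis(index=mapping).rename(columns=mapping)
print(pretty.index.name, "|", list(pretty.columns))
```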
### Daily Summary Plots
```
result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \
.droplevel(level=["source_regions"]) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar(
title=f"Daily Summary",
rot=45, subplots=True, figsize=(15, 30), legend=False)
ax_ = summary_ax_list[0]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.95)
_ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist()))
for percentage_column in summary_percentage_columns:
percentage_column_index = summary_columns.index(percentage_column)
summary_ax_list[percentage_column_index].yaxis \
.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))
```
### Daily Generation to Upload Period Table
```
display_generation_to_upload_period_pivot_df = \
generation_to_upload_period_pivot_df \
.head(backend_generation_days)
display_generation_to_upload_period_pivot_df \
.rename_axis(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping)
fig, generation_to_upload_period_pivot_table_ax = plt.subplots(
figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df)))
generation_to_upload_period_pivot_table_ax.set_title(
"Shared TEKs Generation to Upload Period Table")
sns.heatmap(
data=display_generation_to_upload_period_pivot_df
.rename_axis(columns=display_column_name_mapping)
.rename_axis(index=display_column_name_mapping),
fmt=".0f",
annot=True,
ax=generation_to_upload_period_pivot_table_ax)
generation_to_upload_period_pivot_table_ax.get_figure().tight_layout()
```
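The table above comes from `DataFrame.pivot` called without a `values` argument, which leaves a two-level column index (the value column name on top); `droplevel(level=0, axis=1)` then strips that extra level. A toy reconstruction:

```python
import pandas as pd

df = pd.DataFrame({
    "upload_date": ["2020-10-01", "2020-10-01", "2020-10-02"],
    "generation_to_upload_days": [0, 1, 0],
    "shared_teks": [5, 2, 7],
})
# Without values=, "shared_teks" becomes the top column level; drop it.
pivot = df.pivot(index="upload_date", columns="generation_to_upload_days") \
    .fillna(0).astype(int).droplevel(level=0, axis=1)
print(pivot.loc["2020-10-02", 1])  # 0  (missing cell filled with 0)
```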
### Hourly Summary Plots
```
hourly_summary_ax_list = hourly_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.plot.bar(
title=f"Last 24h Summary",
rot=45, subplots=True, legend=False)
ax_ = hourly_summary_ax_list[-1]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.9)
_ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist()))
```
### Publish Results
```
github_repository = os.environ.get("GITHUB_REPOSITORY")
if github_repository is None:
github_repository = "pvieito/Radar-STATS"
github_project_base_url = "https://github.com/" + github_repository
display_formatters = {
display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case_es"]: lambda x: f"{x:.2%}" if x != 0 else "",
}
general_columns = \
list(filter(lambda x: x not in display_formatters, display_column_name_mapping.values()))
general_formatter = lambda x: f"{x}" if x != 0 else ""
display_formatters.update(dict(map(lambda x: (x, general_formatter), general_columns)))
daily_summary_table_html = result_summary_with_display_names_df \
.head(daily_plot_days) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.to_html(formatters=display_formatters)
multi_backend_summary_table_html = multi_backend_summary_df \
.head(daily_plot_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(formatters=display_formatters)
def format_multi_backend_cross_sharing_fraction(x):
if pd.isna(x):
return "-"
elif round(x * 100, 1) == 0:
return ""
else:
return f"{x:.1%}"
multi_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(
classes="table-center",
formatters=display_formatters,
float_format=format_multi_backend_cross_sharing_fraction)
multi_backend_cross_sharing_summary_table_html = \
multi_backend_cross_sharing_summary_table_html \
.replace("<tr>","<tr style=\"text-align: center;\">")
extraction_date_result_summary_df = \
result_summary_df[result_summary_df.index.get_level_values("sample_date") == extraction_date]
extraction_date_result_hourly_summary_df = \
hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour]
covid_cases = \
extraction_date_result_summary_df.covid_cases.item()
shared_teks_by_generation_date = \
extraction_date_result_summary_df.shared_teks_by_generation_date.item()
shared_teks_by_upload_date = \
extraction_date_result_summary_df.shared_teks_by_upload_date.item()
shared_diagnoses = \
extraction_date_result_summary_df.shared_diagnoses.item()
teks_per_shared_diagnosis = \
extraction_date_result_summary_df.teks_per_shared_diagnosis.item()
shared_diagnoses_per_covid_case = \
extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item()
shared_teks_by_upload_date_last_hour = \
extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int)
display_source_regions = ", ".join(report_source_regions)
if len(report_source_regions) == 1:
display_brief_source_regions = report_source_regions[0]
else:
display_brief_source_regions = f"{len(report_source_regions)} 🇪🇺"
def get_temporary_image_path() -> str:
return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png")
def save_temporary_plot_image(ax):
if isinstance(ax, np.ndarray):
ax = ax[0]
media_path = get_temporary_image_path()
ax.get_figure().savefig(media_path)
return media_path
def save_temporary_dataframe_image(df):
import dataframe_image as dfi
df = df.copy()
df_styler = df.style.format(display_formatters)
media_path = get_temporary_image_path()
dfi.export(df_styler, media_path)
return media_path
summary_plots_image_path = save_temporary_plot_image(
ax=summary_ax_list)
summary_table_image_path = save_temporary_dataframe_image(
df=result_summary_with_display_names_df)
hourly_summary_plots_image_path = save_temporary_plot_image(
ax=hourly_summary_ax_list)
multi_backend_summary_table_image_path = save_temporary_dataframe_image(
df=multi_backend_summary_df)
generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image(
ax=generation_to_upload_period_pivot_table_ax)
```
### Save Results
```
report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-"
result_summary_df.to_csv(
report_resources_path_prefix + "Summary-Table.csv")
result_summary_df.to_html(
report_resources_path_prefix + "Summary-Table.html")
hourly_summary_df.to_csv(
report_resources_path_prefix + "Hourly-Summary-Table.csv")
multi_backend_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Summary-Table.csv")
multi_backend_cross_sharing_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Cross-Sharing-Summary-Table.csv")
generation_to_upload_period_pivot_df.to_csv(
report_resources_path_prefix + "Generation-Upload-Period-Table.csv")
_ = shutil.copyfile(
summary_plots_image_path,
report_resources_path_prefix + "Summary-Plots.png")
_ = shutil.copyfile(
summary_table_image_path,
report_resources_path_prefix + "Summary-Table.png")
_ = shutil.copyfile(
hourly_summary_plots_image_path,
report_resources_path_prefix + "Hourly-Summary-Plots.png")
_ = shutil.copyfile(
multi_backend_summary_table_image_path,
report_resources_path_prefix + "Multi-Backend-Summary-Table.png")
_ = shutil.copyfile(
generation_to_upload_period_pivot_table_image_path,
report_resources_path_prefix + "Generation-Upload-Period-Table.png")
```
### Publish Results as JSON
```
def generate_summary_api_results(df: pd.DataFrame) -> list:
api_df = df.reset_index().copy()
api_df["sample_date_string"] = \
api_df["sample_date"].dt.strftime("%Y-%m-%d")
api_df["source_regions"] = \
api_df["source_regions"].apply(lambda x: x.split(","))
return api_df.to_dict(orient="records")
summary_api_results = \
generate_summary_api_results(df=result_summary_df)
today_summary_api_results = \
generate_summary_api_results(df=extraction_date_result_summary_df)[0]
summary_results = dict(
backend_identifier=report_backend_identifier,
source_regions=report_source_regions,
extraction_datetime=extraction_datetime,
extraction_date=extraction_date,
extraction_date_with_hour=extraction_date_with_hour,
last_hour=dict(
shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour,
shared_diagnoses=0,
),
today=today_summary_api_results,
last_7_days=last_7_days_summary,
last_14_days=last_14_days_summary,
daily_results=summary_api_results)
summary_results = \
json.loads(pd.Series([summary_results]).to_json(orient="records"))[0]
with open(report_resources_path_prefix + "Summary-Results.json", "w") as f:
json.dump(summary_results, f, indent=4)
```
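The `pd.Series([summary_results]).to_json(...)` round-trip just above is a serialization trick: pandas' JSON writer knows how to encode `numpy` scalars (and `Timestamp`s, as epoch values by default) that the standard `json` module rejects, so the loaded result is plain-JSON-safe. A sketch with an illustrative payload:

```python
import json

import numpy as np
import pandas as pd

summary = {"shared_teks": np.int64(42), "ratio": np.float64(0.25)}
# json.dumps(summary) would raise TypeError on the numpy scalars;
# the pandas round-trip coerces them to plain JSON values first.
clean = json.loads(pd.Series([summary]).to_json(orient="records"))[0]
print(clean)
```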
### Publish on README
```
with open("Data/Templates/README.md", "r") as f:
readme_contents = f.read()
readme_contents = readme_contents.format(
extraction_date_with_hour=extraction_date_with_hour,
github_project_base_url=github_project_base_url,
daily_summary_table_html=daily_summary_table_html,
multi_backend_summary_table_html=multi_backend_summary_table_html,
multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html,
display_source_regions=display_source_regions)
with open("README.md", "w") as f:
f.write(readme_contents)
```
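The README is rendered with plain `str.format` over a template whose `{placeholder}` names match the keyword arguments. A minimal sketch (the template text below is illustrative, not the real one):

```python
# Placeholder names must match the keyword arguments exactly.
template = "# Report {extraction_date}\n\nSource regions: {regions}\n"
rendered = template.format(extraction_date="2020-10-01@13", regions="ES, EU")
print(rendered)
```

One gotcha with this approach: any literal `{` or `}` in the template (e.g. inline CSS in the HTML tables) must be doubled as `{{` / `}}`, or `format` raises a `KeyError`.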
### Publish on Twitter
```
enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER")
github_event_name = os.environ.get("GITHUB_EVENT_NAME")
if enable_share_to_twitter and github_event_name == "schedule" and \
(shared_teks_by_upload_date_last_hour or not are_today_results_partial):
import tweepy
twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"]
twitter_api_auth_keys = twitter_api_auth_keys.split(":")
auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1])
auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3])
api = tweepy.API(auth)
summary_plots_media = api.media_upload(summary_plots_image_path)
summary_table_media = api.media_upload(summary_table_image_path)
generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path)
media_ids = [
summary_plots_media.media_id,
summary_table_media.media_id,
generation_to_upload_period_pivot_table_image_media.media_id,
]
if are_today_results_partial:
today_addendum = " (Partial)"
else:
today_addendum = ""
def format_shared_diagnoses_per_covid_case(value) -> str:
if value == 0:
return "–"
return f"≤{value:.2%}"
display_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=shared_diagnoses_per_covid_case)
display_last_14_days_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case"])
display_last_14_days_shared_diagnoses_per_covid_case_es = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case_es"])
status = textwrap.dedent(f"""
#RadarCOVID – {extraction_date_with_hour}
Today{today_addendum}:
- Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour)
- Shared Diagnoses: ≤{shared_diagnoses:.0f}
- Usage Ratio: {display_shared_diagnoses_per_covid_case}
Last 14 Days:
- Usage Ratio (Estimation): {display_last_14_days_shared_diagnoses_per_covid_case}
- Usage Ratio (Official): {display_last_14_days_shared_diagnoses_per_covid_case_es}
Info: {github_project_base_url}#documentation
""")
status = status.encode(encoding="utf-8")
api.update_status(status=status, media_ids=media_ids)
```
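The tweet body above is written as an indented triple-quoted f-string and cleaned with `textwrap.dedent`, which removes the longest common leading whitespace and normalizes whitespace-only lines:

```python
import textwrap

shared_teks = 42
# dedent strips the 4-space indent shared by all non-blank lines.
status = textwrap.dedent(f"""
    #RadarCOVID demo
    Today:
    - Uploaded TEKs: {shared_teks:.0f}
    """)
print(status)
```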
# `git`, `GitHub`, `GitKraken` (continued)
<img style="float: left; margin: 15px 15px 15px 15px;" src="http://conociendogithub.readthedocs.io/en/latest/_images/Git.png" width="180" height="50" />
<img style="float: left; margin: 15px 15px 15px 15px;" src="https://c1.staticflickr.com/3/2238/13158675193_2892abac95_z.jpg" title="github" width="180" height="50" />
<img style="float: left; margin: 15px 15px 15px 15px;" src="https://www.gitkraken.com/downloads/brand-assets/gitkraken-keif-teal-sq.png" title="gitkraken" width="180" height="50" />
___
## Recap of the last class
In the last class we saw how to synchronize the remote repository with the changes we have made and documented locally. *In practice, this happens when we ourselves edit some part of a project we are working on*.
Now we will learn to do the opposite, that is, how to synchronize the local repository with changes that have been made in the remote repository. *In practice, this happens when other collaborators on the project make a change and we want to see those changes*.
We continue to follow the `YouTube` video below.
```
from IPython.display import YouTubeVideo
YouTubeVideo('f0y_xCeM1Rk')
```
### Recipe (continued)
1. On `GitHub`, in the `hello-world` repository, click *Create new file*.
    - People do not normally create or edit files on `GitHub`; however, this will be our way of emulating that someone added a new file to our project.
    - Give the file a name and put something in the body of the text.
    - Write a message describing that a new file was added.
    - Click *Commit new file*.
    - Note that the new file now exists in the remote repository on `GitHub`, but not yet in the local repository.
2. Check the commit tree in `GitKraken`. We see that the icon showing the state on `GitHub` is now one step ahead of the icon showing the state of the local repository.
3. To incorporate the changes from the remote repository into the local one, click *Pull* at the top. Once again, the icons should come together.
4. Check the local repository to verify that the new file is now there.
### So far...
We have learned basic management of remote repositories with `GitKraken`:
1. With a remote repository stored on `GitHub`, we *pulled* those files to our local disk to work on them. The operations we carried out were:
    1. Clone: <font color= red >description</font>.
    2. Pull: <font color= red >description</font>.
2. We also did the opposite: after making changes in our local repository, we were able to update our `GitHub` repository. The operation we carried out was:
    1. Push: <font color= red >description</font>.
___
## What if we make a mistake?
Mistakes are inherent to our human condition, so it is quite likely that we will make one while developing a project.
One of the advantages of managing versions with `git` (and hence with `GitKraken`) is that we can go back to an earlier commit if we made a mistake.
<font color= red >Show how to do this in GitKraken.</font>
___
## Branching
When we did the *hello-world* exercise while opening the `GitHub` account, we were given a short introduction to *branching* (we created an editing branch, edited the `README` file, and finally merged the changes into the *master* branch).
*Branching*:
- Is a <font color=green>safe</font> way to make significant changes to a project with `git`.
- Consists of creating additional branches on which to modify the project without touching the *master* branch until you are completely sure of the modifications.
- Once you are sure the modifications are correct, they are merged into the *master* branch.
### Example
1. In `GitKraken`, in the *hello-world* repository, create a branch called *add_file*.
    - Right-click the master icon and click *Create branch here*.
    - Name it *add_file* and press Enter.
    - Note that `GitKraken` automatically switches us to the newly created branch.
2. Go to the repository's local directory and add a new file.
3. *Stage* and *commit* on the branch.
4. Check what happens to the directory when we switch branches (to switch branches, double-click the branch you want to move to).
5. Merge the changes into the *master* branch (drag one branch onto the other and click the *Merge add_file into master* option).
6. Switch to the *master* branch and delete the *add_file* branch.
7. *Push* to update the remote repository.
___
## Forking
A fork is a copy of a repository. Forking a repository lets you experiment with changes freely without affecting the original project.
*Forking* has several uses:
### Following someone else's project
As an example, you will follow the repository of the course **SimMat2018-1**.
The following steps show how to keep our local repository up to date with the course repository.
1. Go to the repository https://github.com/esjimenezro/SimMat2018-1.
2. In the upper-right corner, click *fork* and wait a moment. This action copies into your `GitHub` account a repository identical to the course one (with the same name).
3. From `GitKraken`, clone the repository (the one that is already in your account).
4. In the *REMOTE* tab, click the `+` sign.
    - Click `GitHub`.
    - Open the drop-down list and choose esjimenezro/SimMat2018-1.
    - Click *Add remote*.
5. <font color=red>I will add a new file to the course repository and you will watch what happens in `GitKraken`</font>.
6. Drag the other remote repository onto the *master* branch and click the *Merge esjimenezro/master into master* option. The local repository is now up to date.
7. To update your own remote repository, do a *push*.
### Collaborative projects
Normally, *forks* are used to propose changes to someone else's project, that is, to work on collaborative projects.
<font color=red>Make a change in your own repository and show how to do the *pull request* and the *merge*</font>.
**References:**
- https://help.github.com/articles/fork-a-repo/
- https://guides.github.com/activities/forking/
> <font color=blue>**Homework**</font>: in pairs, you will carry out a collaborative project. I will upload to Moodle a step-by-step guide and a description of what must be submitted.
**Remember the homework due today, and a recap of the class**
<img src="https://raw.githubusercontent.com/louim/in-case-of-fire/master/in_case_of_fire.png" title="In case of fire (https://github.com/louim/in-case-of-fire)" width="200" height="50" align="center">
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Esteban Jiménez Rodríguez.
</footer>
https://github.com/scikit-learn/scikit-learn/issues/18305

Thomas' example with Logistic regression:
https://scikit-learn.org/stable/auto_examples/compose/plot_column_transformer_mixed_types.html#sphx-glr-auto-examples-compose-plot-column-transformer-mixed-types-py

```
import watermark
%load_ext watermark
#!pip install --upgrade scikit-learn
#!pip install watermark
import sklearn
from sklearn import set_config
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.compose import make_column_transformer
from sklearn.linear_model import LogisticRegression
sklearn.__version__
# see version of system, python and libraries
%watermark -n -v -m -g -iv
#sklearn.set_config(display='diagram')
set_config(display='diagram')

num_proc = make_pipeline(SimpleImputer(strategy='median'), StandardScaler())

cat_proc = make_pipeline(
    SimpleImputer(strategy='constant', fill_value='missing'),
    OneHotEncoder(handle_unknown='ignore'))

preprocessor = make_column_transformer((num_proc, ('feat1', 'feat3')),
                                       (cat_proc, ('feat0', 'feat2')))

clf = make_pipeline(preprocessor, LogisticRegression())
clf
from sklearn.linear_model import LogisticRegression
# Author: Pedro Morales <part.morales@gmail.com>
#
# License: BSD 3 clause

import numpy as np

from sklearn.compose import ColumnTransformer
from sklearn.datasets import fetch_openml
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, GridSearchCV

np.random.seed(0)

# Load data from https://www.openml.org/d/40945
X, y = fetch_openml("titanic", version=1, as_frame=True, return_X_y=True)

# Alternatively X and y can be obtained directly from the frame attribute:
# X = titanic.frame.drop('survived', axis=1)
# y = titanic.frame['survived']
numeric_features = ['age', 'fare']
numeric_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler())])

categorical_features = ['embarked', 'sex', 'pclass']
categorical_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
    ('onehot', OneHotEncoder(handle_unknown='ignore'))])

preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, numeric_features),
        ('cat', categorical_transformer, categorical_features)])

# Append classifier to preprocessing pipeline.
# Now we have a full prediction pipeline.
clf = Pipeline(steps=[('preprocessor', preprocessor),
                      ('classifier', LogisticRegression())])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

clf.fit(X_train, y_train)
print("model score: %.3f" % clf.score(X_test, y_test))
from sklearn import set_config
set_config(display='diagram')
clf
from sklearn import svm
np.random.seed(0)

# Load data from https://www.openml.org/d/40945
X, y = fetch_openml("titanic", version=1, as_frame=True, return_X_y=True)
numeric_features = ['age', 'fare']
numeric_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler())])

categorical_features = ['embarked', 'sex', 'pclass']
categorical_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
    ('onehot', OneHotEncoder(handle_unknown='ignore'))])

preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, numeric_features),
        ('cat', categorical_transformer, categorical_features)])

# Append classifier to preprocessing pipeline.
# Now we have a full prediction pipeline.
clf = Pipeline(steps=[('preprocessor', preprocessor),
                      ('classifier', svm.SVC())])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

clf.fit(X_train, y_train)
print("model score: %.3f" % clf.score(X_test, y_test))
from sklearn import set_config
set_config(display='diagram')
clf
```
```
import pandas as pd
import numpy as np
from tqdm.auto import tqdm
train = pd.read_csv("../dataset/validation/train_complete.csv")
test = pd.read_csv("../dataset/original/test_complete.csv")
# TODO: also add the rest of the train set
train
columns = ['queried_record_id', 'predicted_record_id', 'predicted_record_id_record', 'cosine_score',
'name_cosine', 'email_cosine', 'phone_cosine']
c_train = train[columns].drop_duplicates('queried_record_id', keep='first')
c_test = test[columns].drop_duplicates('queried_record_id', keep='first')
```
Label
```
c_train['split_record_id'] = c_train.queried_record_id.str.split('-')
c_train['linked_id'] = [x[0] for x in c_train.split_record_id]
c_train = c_train.drop('split_record_id', axis=1)
c_test['split_record_id'] = c_test.queried_record_id.str.split('-')
c_test['linked_id'] = [x[0] for x in c_test.split_record_id]
c_test = c_test.drop('split_record_id', axis=1)
c_train['linked_id'] = c_train.linked_id.astype(int)
c_test['linked_id'] = c_test.linked_id.astype(int)
# Get in_train set
train_train = pd.read_csv("../dataset/validation/train.csv", escapechar="\\")
train_test = pd.read_csv("../dataset/original/train.csv", escapechar="\\")
train_train = train_train.sort_values(by='record_id').reset_index(drop=True)
train_test = train_test.sort_values(by='record_id').reset_index(drop=True)
train_train['linked_id'] = train_train.linked_id.astype(int)
train_test['linked_id'] = train_test.linked_id.astype(int)
intrain_train = set(train_train.linked_id.values)
intrain_test = set(train_test.linked_id.values)
c_train['label'] = np.isin(c_train.linked_id.values, list(intrain_train))
c_test['label'] = np.isin(c_test.linked_id.values, list(intrain_test))
# 1 if it is in train, 0 if not in train
c_train['label'] = np.where(c_train.label.values == True, 1, 0)
c_test['label'] = np.where(c_test.label.values == True, 1, 0)
c_test
original_test = pd.read_csv("../dataset/original/test.csv", escapechar="\\")
validation_test = pd.read_csv("../dataset/validation/test.csv", escapechar="\\")
original_test = original_test.sort_values(by='record_id')
validation_test = validation_test.sort_values(by='record_id')
original_test['split'] = original_test.record_id.str.split('-')
original_test['linked_id'] = [x[0] for x in original_test.split]
original_test['linked_id'] = original_test.linked_id.astype(int)
original_test[~original_test.linked_id.isin(intrain_test)]
```
# Feature Extraction
```
# Number of null fields in the row
# Popularity of the name
# How many times the first recommendation appears among the top-10 recommendations
# How many different recommendations we make in the top 10 (possibly scaled by the total number
# of recommendations of the first recommended element)
#
# check how many elements not in the train set we catch if we threshold at 0
```
## Null fields in each row
```
validation_test['nan_field'] = validation_test.isnull().sum(axis=1)
original_test['nan_field'] = original_test.isnull().sum(axis=1)
c_train = c_train.merge(validation_test[['record_id', 'nan_field']], how='left', left_on='queried_record_id', right_on='record_id').drop('record_id', axis=1)
c_test = c_test.merge(original_test[['record_id', 'nan_field']], how='left', left_on='queried_record_id', right_on='record_id').drop('record_id', axis=1)
c_train
```
# Scores
```
s_train = pd.read_csv("../lgb_predictions_validation.csv")
s_test = pd.read_csv("../lgb_predictions_full.csv")
from ast import literal_eval  # safer than eval() for parsing list literals read from CSV
s_train.ordered_scores = [literal_eval(x) for x in s_train.ordered_scores]
s_test.ordered_scores = [literal_eval(x) for x in s_test.ordered_scores]
s_train.ordered_linked = [literal_eval(x) for x in s_train.ordered_linked]
s_test.ordered_linked = [literal_eval(x) for x in s_test.ordered_linked]
def first_scores(df):
    new_df = []
    for (q, s) in tqdm(zip(df.queried_record_id, df.ordered_scores)):
        new_df.append((q, s[0], s[1]))
    new_df = pd.DataFrame(new_df, columns=['queried_record_id', 'score1', 'score2'])
    return new_df
s_train_first = first_scores(s_train)
s_test_first = first_scores(s_test)
c_train = c_train.merge(s_train_first, how='left', on='queried_record_id')
c_test = c_test.merge(s_test_first, how='left', on='queried_record_id')
```
## How many linked_id equal to the first are in the top 10 / how many record_id the first predicted linked_id has overall
```
from collections import Counter
group_val = train_train.groupby('linked_id').size()
group_test = train_test.groupby('linked_id').size()
group_val = group_val.reset_index().rename(columns={0:'size'})
group_test = group_test.reset_index().rename(columns={0:'size'})
train_complete_list = train[['queried_record_id', 'predicted_record_id']].groupby('queried_record_id').apply(lambda x: list(x['predicted_record_id']))
test_complete_list = test[['queried_record_id', 'predicted_record_id']].groupby('queried_record_id').apply(lambda x: list(x['predicted_record_id']))
train_complete_list = train_complete_list.reset_index().rename(columns={0:'record_id'})
test_complete_list = test_complete_list.reset_index().rename(columns={0:'record_id'})
train_complete_list['size'] = [Counter(x[:10])[x[0]] for x in train_complete_list.record_id]
test_complete_list['size'] = [Counter(x[:10])[x[0]] for x in test_complete_list.record_id]
train_complete_list['first_pred'] = [x[0] for x in train_complete_list.record_id]
test_complete_list['first_pred'] = [x[0] for x in test_complete_list.record_id]
train_complete_list = train_complete_list.merge(group_val, how='left', left_on='first_pred', right_on='linked_id', suffixes=('_pred', '_real')).drop('linked_id', axis=1)
test_complete_list = test_complete_list.merge(group_test, how='left', left_on='first_pred', right_on='linked_id', suffixes=('_pred', '_real')).drop('linked_id', axis=1)
train_complete_list['pred_over_all'] = [ p/r for (p,r) in zip(train_complete_list.size_pred, train_complete_list.size_real)]
test_complete_list['pred_over_all'] = [ p/r for (p,r) in zip(test_complete_list.size_pred, test_complete_list.size_real)]
train_complete_list
c_train = c_train.merge(train_complete_list[['queried_record_id', 'pred_over_all']], how='left', on='queried_record_id')
c_test = c_test.merge(test_complete_list[['queried_record_id', 'pred_over_all']], how='left', on='queried_record_id')
```
## Number of different linked_id predicted in the top 10 & size of the group in train identified by the first recommended item (this is the `size_real` introduced earlier)
```
train_complete_list['n_diff_linked_id'] = [len(set(x[:10])) for x in train_complete_list.record_id]
test_complete_list['n_diff_linked_id'] = [len(set(x[:10])) for x in test_complete_list.record_id]
c_train = c_train.merge(train_complete_list[['queried_record_id', 'size_real', 'n_diff_linked_id']], how='left', on='queried_record_id')
c_test = c_test.merge(test_complete_list[['queried_record_id', 'size_real', 'n_diff_linked_id']], how='left', on='queried_record_id')
```
## Number of original fields equal
## TODO Take the linked_id of the first predicted record, get the corresponding group of records in train, and check how similar the queried record is to the whole group
## Difference between the first score and the second
```
def first_second_score_difference(scores):
    res = np.empty(len(scores))
    for i in range(len(scores)):
        res[i] = scores[i][0] - scores[i][1]
    return res
c_train['s0_minus_s1'] = first_second_score_difference(s_train.ordered_scores.values)
c_test['s0_minus_s1'] = first_second_score_difference(s_test.ordered_scores.values)
```
## How many records in restricted_df
```
def restricted_df(s):
    restricted_pred = []
    max_delta = 2.0
    for (q, sc, rec, l) in tqdm(zip(s.queried_record_id, s.ordered_scores, s.ordered_record, s.ordered_linked)):
        for x in range(len(sc)):
            if x == 0:  # the first predicted element is always included [the one with the highest score]
                restricted_pred.append((q, sc[x], rec[x], l[x]))
            else:
                if x >= 10:
                    continue
                elif (sc[0] - sc[x] < max_delta) or (l[0] == l[x]):  # predictions whose score is within max_delta of the first are included
                    restricted_pred.append((q, sc[x], rec[x], l[x]))
                else:
                    continue
    restricted_df = pd.DataFrame(restricted_pred, columns=['queried_record_id', 'scores', 'predicted_record_id', 'predicted_linked_id'])
    return restricted_df
from ast import literal_eval  # safer than eval() for parsing list literals read from CSV
s_train.ordered_record = [literal_eval(x) for x in s_train.ordered_record]
s_test.ordered_record = [literal_eval(x) for x in s_test.ordered_record]
restricted_train = restricted_df(s_train)
restricted_test = restricted_df(s_test)
restricted_train = restricted_train.groupby('queried_record_id').size().reset_index().rename(columns={0:'restricted_size'})
restricted_test = restricted_test.groupby('queried_record_id').size().reset_index().rename(columns={0:'restricted_size'})
c_train = c_train.merge(restricted_train, how='left', on='queried_record_id')
c_test = c_test.merge(restricted_test, how='left', on='queried_record_id')
```
## How many linked_id with the same score as the first
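This last feature is only described, not computed above. A minimal sketch of one way to compute it, with a hypothetical helper that assumes per-query `ordered_scores` and `ordered_linked` lists like the ones parsed from `s_train`, could be:

```python
def count_ties_with_first(ordered_scores, ordered_linked):
    """For one query: how many predictions share the top score,
    and how many of those have the same linked_id as the first prediction."""
    top_score = ordered_scores[0]
    tied = [l for s, l in zip(ordered_scores, ordered_linked) if s == top_score]
    same_linked = sum(1 for l in tied if l == ordered_linked[0])
    return len(tied), same_linked

# toy example: two predictions tied at 5.0, one of them repeats linked_id 10
print(count_ties_with_first([5.0, 5.0, 3.2], [10, 11, 10]))  # (2, 1)
```

The per-query counts could then be merged into `c_train`/`c_test` on `queried_record_id`, like the other features.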
# Fit and Predict
```
c_train
# change name of similarities
r = {'cosine_score': 'hybrid_score', 'name_cosine':'name_jaccard', 'email_cosine':'email_jaccard', 'phone_cosine':'phone_jaccard'}
c_train = c_train.rename(columns=r)
c_test = c_test.rename(columns=r)
import lightgbm as lgb
classifier = lgb.LGBMClassifier(max_depth=8, n_estimators=500,reg_alpha=0.2)
cols = ['hybrid_score', 'name_jaccard', 'email_jaccard', 'phone_jaccard', 'nan_field', 'pred_over_all', 'size_real', 'n_diff_linked_id', 'score1']
classifier.fit(c_train[cols], c_train['label'])
preds = classifier.predict(c_test[cols])
preds
c_test['predictions'] = preds
c_test['correct_preds'] = np.where(c_test.label.values == c_test.predictions.values, 1, 0)
acc = c_test['correct_preds'].sum() / c_test.shape[0]
acc
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(c_test.label, c_test.predictions)
cm = cm /c_test.shape[0]
cm
from lightgbm import plot_importance
from matplotlib import pyplot
# plot feature importance
plot_importance(classifier)
pyplot.show()
```
Train a simple deep CNN on the CIFAR10 small images dataset.
It gets to 75% validation accuracy in 25 epochs, and 79% after 50 epochs.
(it's still underfitting at that point, though)
```
# https://gist.github.com/deep-diver
import warnings;warnings.filterwarnings('ignore')
from tensorflow import keras
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.optimizers import RMSprop
import os
batch_size = 32
num_classes = 10
epochs = 100
num_predictions = 20
save_dir = os.path.join(os.getcwd(), 'saved_models')
model_name = 'keras_cifar10_trained_model.h5'
# The data, split between train and test sets:
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same',
input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
# initiate RMSprop optimizer ('lr' is deprecated in recent tf.keras;
# on TF >= 2.11 the 'decay' argument requires tf.keras.optimizers.legacy.RMSprop)
opt = RMSprop(learning_rate=0.0001, decay=1e-6)
# Let's train the model using RMSprop
model.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
# Save model and weights
if not os.path.isdir(save_dir):
    os.makedirs(save_dir)
model_path = os.path.join(save_dir, model_name)
model.save(model_path)
print('Saved trained model at %s ' % model_path)
# Score trained model.
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
```
##### Copyright 2020 The TensorFlow IO Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Reading PostgreSQL database from TensorFlow IO
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/io/tutorials/postgresql"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/io/blob/master/docs/tutorials/postgresql.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/io/blob/master/docs/tutorials/postgresql.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/io/docs/tutorials/postgresql.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Overview
This tutorial shows how to create a `tf.data.Dataset` from a PostgreSQL database server, so that the created `Dataset` can be passed to `tf.keras` for training or inference.
A SQL database is an important source of data for data scientists. As one of the most popular open source SQL databases, [PostgreSQL](https://www.postgresql.org) is widely used in enterprises for storing critical and transactional data across the board. Creating a `Dataset` directly from a PostgreSQL database server and passing it to `tf.keras` for training or inference can greatly simplify the data pipeline and help data scientists focus on building machine learning models.
## Setup and usage
### Install required tensorflow-io packages, and restart runtime
```
try:
    %tensorflow_version 2.x
except Exception:
    pass
!pip install tensorflow-io
```
### Install and setup PostgreSQL (optional)
**Warning: This notebook is designed to be run in a Google Colab only**. *It installs packages on the system and requires sudo access. If you want to run it in a local Jupyter notebook, please proceed with caution.*
In order to demo the usage on Google Colab, you will install a PostgreSQL server. A password and an empty database are also needed.
If you are not running this notebook on Google Colab, or you prefer to use an existing database, please skip the following setup and proceed to the next section.
```
# Install postgresql server
!sudo apt-get -y -qq update
!sudo apt-get -y -qq install postgresql
!sudo service postgresql start
# Setup a password `postgres` for username `postgres`
!sudo -u postgres psql -U postgres -c "ALTER USER postgres PASSWORD 'postgres';"
# Setup a database with name `tfio_demo` to be used
!sudo -u postgres psql -U postgres -c 'DROP DATABASE IF EXISTS tfio_demo;'
!sudo -u postgres psql -U postgres -c 'CREATE DATABASE tfio_demo;'
```
### Setup necessary environmental variables
The following environmental variables are based on the PostgreSQL setup in the last section. If you have a different setup or you are using an existing database, they should be changed accordingly:
```
%env TFIO_DEMO_DATABASE_NAME=tfio_demo
%env TFIO_DEMO_DATABASE_HOST=localhost
%env TFIO_DEMO_DATABASE_PORT=5432
%env TFIO_DEMO_DATABASE_USER=postgres
%env TFIO_DEMO_DATABASE_PASS=postgres
```
### Prepare data in PostgreSQL server
For demo purposes, this tutorial will create a database and populate it with some data. The data used in this tutorial is from the [Air Quality Data Set](https://archive.ics.uci.edu/ml/datasets/Air+Quality), available from the [UCI Machine Learning Repository](http://archive.ics.uci.edu/ml).
Below is a sneak preview of a subset of the Air Quality Data Set:
Date|Time|CO(GT)|PT08.S1(CO)|NMHC(GT)|C6H6(GT)|PT08.S2(NMHC)|NOx(GT)|PT08.S3(NOx)|NO2(GT)|PT08.S4(NO2)|PT08.S5(O3)|T|RH|AH|
----|----|------|-----------|--------|--------|-------------|----|----------|-------|------------|-----------|-|--|--|
10/03/2004|18.00.00|2,6|1360|150|11,9|1046|166|1056|113|1692|1268|13,6|48,9|0,7578|
10/03/2004|19.00.00|2|1292|112|9,4|955|103|1174|92|1559|972|13,3|47,7|0,7255|
10/03/2004|20.00.00|2,2|1402|88|9,0|939|131|1140|114|1555|1074|11,9|54,0|0,7502|
10/03/2004|21.00.00|2,2|1376|80|9,2|948|172|1092|122|1584|1203|11,0|60,0|0,7867|
10/03/2004|22.00.00|1,6|1272|51|6,5|836|131|1205|116|1490|1110|11,2|59,6|0,7888|
More information about the Air Quality Data Set and the UCI Machine Learning Repository is available in the [References](#references) section.
To help simplify the data preparation, a sql version of the Air Quality Data Set has been prepared and is available as [AirQualityUCI.sql](https://github.com/tensorflow/io/blob/master/docs/tutorials/postgresql/AirQualityUCI.sql).
The statement to create the table is:
```
CREATE TABLE AirQualityUCI (
  Date DATE,
  Time TIME,
  CO REAL,
  PT08S1 INT,
  NMHC REAL,
  C6H6 REAL,
  PT08S2 INT,
  NOx REAL,
  PT08S3 INT,
  NO2 REAL,
  PT08S4 INT,
  PT08S5 INT,
  T REAL,
  RH REAL,
  AH REAL
);
```
The complete commands to create the table in the database and populate it with the data are:
```
!curl -s -OL https://github.com/tensorflow/io/raw/master/docs/tutorials/postgresql/AirQualityUCI.sql
!PGPASSWORD=$TFIO_DEMO_DATABASE_PASS psql -q -h $TFIO_DEMO_DATABASE_HOST -p $TFIO_DEMO_DATABASE_PORT -U $TFIO_DEMO_DATABASE_USER -d $TFIO_DEMO_DATABASE_NAME -f AirQualityUCI.sql
```
### Create Dataset from PostgreSQL server and use it in TensorFlow
Creating a `Dataset` from a PostgreSQL server is as easy as calling `tfio.experimental.IODataset.from_sql` with `query` and `endpoint` arguments. The `query` is the SQL query that selects columns from tables, and the `endpoint` argument is the address and database name:
```
import os
import tensorflow_io as tfio
endpoint="postgresql://{}:{}@{}?port={}&dbname={}".format(
os.environ['TFIO_DEMO_DATABASE_USER'],
os.environ['TFIO_DEMO_DATABASE_PASS'],
os.environ['TFIO_DEMO_DATABASE_HOST'],
os.environ['TFIO_DEMO_DATABASE_PORT'],
os.environ['TFIO_DEMO_DATABASE_NAME'],
)
dataset = tfio.experimental.IODataset.from_sql(
query="SELECT co, pt08s1 FROM AirQualityUCI;",
endpoint=endpoint)
print(dataset.element_spec)
```
As you can see from the output of `dataset.element_spec` above, each element of the created `Dataset` is a python dict object with the column names of the database table as keys.
It is quite convenient to apply further operations. For example, you could select both the `nox` and `no2` fields of the `Dataset` and calculate the difference:
```
dataset = tfio.experimental.IODataset.from_sql(
query="SELECT nox, no2 FROM AirQualityUCI;",
endpoint=endpoint)
dataset = dataset.map(lambda e: (e['nox'] - e['no2']))
# check only the first 20 record
dataset = dataset.take(20)
print("NOx - NO2:")
for difference in dataset:
    print(difference.numpy())
```
The created `Dataset` is now ready to be passed to `tf.keras` directly for either training or inference.
## References
- Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
- S. De Vito, E. Massera, M. Piga, L. Martinotto, G. Di Francia, On field calibration of an electronic nose for benzene estimation in an urban pollution monitoring scenario, Sensors and Actuators B: Chemical, Volume 129, Issue 2, 22 February 2008, Pages 750-757, ISSN 0925-4005
### Neural Machine Translation by Jointly Learning to Align and Translate
In this notebook we will implement the model from [Neural Machine Translation by Jointly Learning to Align and Translate](https://arxiv.org/abs/1409.0473), which improves perplexity (PPL) compared to the previous notebook.
Here is the general encoder-decoder model that we have used in the past.
<p align="center"><img src="https://github.com/bentrevett/pytorch-seq2seq/raw/49df8404d938a6edbf729876405558cc2c2b3013/assets/seq2seq1.png"/></p>
In the previous model, our architecture was set up in a way to reduce "information compression" by explicitly passing the context vector, $z$, to the decoder at every time-step and by passing both the context vector and embedded input word, $d(y_t)$, along with the hidden state, $s_t$, to the linear layer, $f$, to make a prediction.
<p align="center"><img src="https://github.com/bentrevett/pytorch-seq2seq/raw/49df8404d938a6edbf729876405558cc2c2b3013/assets/seq2seq7.png"/></p>
Even though we have reduced some of this compression, our context vector still needs to contain all of the information about the source sentence. The model implemented in this notebook avoids this compression by allowing the decoder to look at the entire source sentence (via its hidden states) at each decoding step! How does it do this? It uses **attention**.
### Attention
Attention works by first calculating an attention vector, $a$, that is the length of the source sentence. The attention vector has the property that each element is between 0 and 1, and the entire vector sums to 1. We then calculate a weighted sum of our source sentence hidden states, $H$, to get a weighted source vector, $w$.
$$w = \sum_{i}a_ih_i$$
We calculate a new weighted source vector every time-step when decoding, using it as input to our decoder RNN as well as the linear layer to make a prediction.
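As a toy numerical sketch (plain NumPy here, standing in for the PyTorch tensors used below), the weighted source vector is just a convex combination of the encoder hidden states:

```python
import numpy as np

# toy encoder hidden states H: src_len = 3 tokens, hid_dim = 2
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# unnormalized attention scores -> softmax, so each a_i is in [0, 1] and sum(a) == 1
scores = np.array([2.0, 0.5, 0.1])
a = np.exp(scores) / np.exp(scores).sum()

# weighted source vector: w = sum_i a_i * h_i
w = a @ H

print(w.shape)  # (2,)
```

The same computation happens inside the decoder at every time-step, with a new $a$ each time.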
### Data Preparation
Again, we prepare the data just like in the previous notebooks.
```
import torch
from torch import nn
from torch.nn import functional as F
import spacy, math, random
import numpy as np
from torchtext.legacy import datasets, data
```
### Setting seeds
```
SEED = 42
np.random.seed(SEED)
torch.manual_seed(SEED)
random.seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
```
### Loading the German and English models.
```
import spacy
import spacy.cli
spacy.cli.download('de_core_news_sm')
import de_core_news_sm, en_core_web_sm
spacy_de = spacy.load('de_core_news_sm')
spacy_en = spacy.load('en_core_web_sm')
```
### Preprocessing functions that tokenize sentences.
```
def tokenize_de(sent):
    return [tok.text for tok in spacy_de.tokenizer(sent)]

def tokenize_en(sent):
    return [tok.text for tok in spacy_en.tokenizer(sent)]
```
### Creating the `Fields`
```
SRC = data.Field(
tokenize = tokenize_de,
lower= True,
init_token = "<sos>",
eos_token = "<eos>"
)
TRG = data.Field(
tokenize = tokenize_en,
lower= True,
init_token = "<sos>",
eos_token = "<eos>"
)
```
### Loading `Multi30k` dataset.
```
train_data, valid_data, test_data = datasets.Multi30k.splits(
exts=('.de', '.en'),
fields = (SRC, TRG)
)
```
### Checking if we have loaded the data correctly.
```
from prettytable import PrettyTable
def tabulate(column_names, data):
    table = PrettyTable(column_names)
    table.title = "VISUALIZING SETS EXAMPLES"
    table.align[column_names[0]] = 'l'
    table.align[column_names[1]] = 'r'
    for row in data:
        table.add_row(row)
    print(table)
column_names = ["SUBSET", "EXAMPLE(s)"]
row_data = [
["training", len(train_data)],
['validation', len(valid_data)],
['test', len(test_data)]
]
tabulate(column_names, row_data)
```
### Checking a single example of the `SRC` and the `TRG`.
```
print(vars(train_data[0]))
```
### Building the vocabulary.
Just like in the previous notebook, all tokens that appear fewer than 2 times are automatically converted to the unknown token `<unk>`.
```
SRC.build_vocab(train_data, min_freq=2)
TRG.build_vocab(train_data, min_freq=2)
```
### Device
```
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
```
### Creating Iterators.
Just like in the previous notebook, we are going to use the `BucketIterator` to create the train, validation and test iterators.
```
BATCH_SIZE = 128
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
device = device,
batch_size = BATCH_SIZE
)
```
### Encoder.
First, we'll build the encoder. Similar to the previous model, we only use a single layer GRU, however we now use a bidirectional RNN. With a bidirectional RNN, we have two RNNs in each layer. A forward RNN going over the embedded sentence from left to right (shown below in green), and a backward RNN going over the embedded sentence from right to left (teal). All we need to do in code is set bidirectional = True and then pass the embedded sentence to the RNN as before.
<p align="center">
<img src="https://github.com/bentrevett/pytorch-seq2seq/raw/49df8404d938a6edbf729876405558cc2c2b3013/assets/seq2seq8.png"/>
</p>
We now have:
$$\begin{align*}
h_t^\rightarrow &= \text{EncoderGRU}^\rightarrow(e(x_t^\rightarrow),h_{t-1}^\rightarrow)\\
h_t^\leftarrow &= \text{EncoderGRU}^\leftarrow(e(x_t^\leftarrow),h_{t-1}^\leftarrow)
\end{align*}$$
Where $x_0^\rightarrow = \text{<sos>}, x_1^\rightarrow = \text{guten}$ and $x_0^\leftarrow = \text{<eos>}, x_1^\leftarrow = \text{morgen}$.
As before, we only pass an input (embedded) to the RNN, which tells PyTorch to initialize both the forward and backward initial hidden states ($h_0^\rightarrow$ and $h_0^\leftarrow$, respectively) to a tensor of all zeros. We'll also get two context vectors, one from the forward RNN after it has seen the final word in the sentence, $z^\rightarrow=h_T^\rightarrow$, and one from the backward RNN after it has seen the first word in the sentence, $z^\leftarrow=h_T^\leftarrow$.
The RNN returns outputs and hidden.
outputs is of size [src len, batch size, hid dim * num directions] where the first hid_dim elements in the third axis are the hidden states from the top layer forward RNN, and the last hid_dim elements are hidden states from the top layer backward RNN. We can think of the third axis as being the forward and backward hidden states concatenated together, i.e. $h_1 = [h_1^\rightarrow; h_{T}^\leftarrow]$, $h_2 = [h_2^\rightarrow; h_{T-1}^\leftarrow]$ and we can denote all encoder hidden states (forward and backwards concatenated together) as $H=\{ h_1, h_2, ..., h_T\}$.
hidden is of size [n layers * num directions, batch size, hid dim], where [-2, :, :] gives the top layer forward RNN hidden state after the final time-step (i.e. after it has seen the last word in the sentence) and [-1, :, :] gives the top layer backward RNN hidden state after the final time-step (i.e. after it has seen the first word in the sentence).
As the decoder is not bidirectional, it only needs a single context vector, $z$, to use as its initial hidden state, $s_0$, and we currently have two, a forward and a backward one ($z^\rightarrow=h_T^\rightarrow$ and $z^\leftarrow=h_T^\leftarrow$, respectively). We solve this by concatenating the two context vectors together, passing them through a linear layer, $g$, and applying the $\tanh$ activation function.
$$z=\tanh(g(h_T^\rightarrow, h_T^\leftarrow)) = \tanh(g(z^\rightarrow, z^\leftarrow)) = s_0$$
**Note:** this is actually a deviation from the paper. Instead, they feed only the first backward RNN hidden state through a linear layer to get the context vector/decoder initial hidden state. *This doesn't seem to make sense to me, so we have changed it.*
As we want our model to look back over the whole of the source sentence we return outputs, the stacked forward and backward hidden states for every token in the source sentence. We also return hidden, which acts as our initial hidden state in the decoder.
```
class Encoder(nn.Module):
    def __init__(self, input_dim, emb_dim, enc_hid_dim, dec_hid_dim, dropout):
        super(Encoder, self).__init__()
        self.embedding = nn.Embedding(input_dim, embedding_dim=emb_dim)
        self.gru = nn.GRU(emb_dim, enc_hid_dim, bidirectional=True)
        self.fc = nn.Linear(enc_hid_dim * 2, dec_hid_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, src):  # src = [src len, batch size]
        embedded = self.dropout(self.embedding(src))  # embedded = [src len, batch size, emb dim]
        outputs, hidden = self.gru(embedded)
        """
        outputs = [src len, batch size, hid dim * num directions]
        hidden = [n layers * num directions, batch size, hid dim]
        hidden is stacked [forward_1, backward_1, forward_2, backward_2, ...]
        outputs are always from the last layer
        hidden [-2, :, :] is the last of the forwards RNN
        hidden [-1, :, :] is the last of the backwards RNN
        initial decoder hidden is final hidden state of the forwards and backwards
        encoder RNNs fed through a linear layer
        """
        hidden = torch.tanh(self.fc(torch.cat((hidden[-2, :, :], hidden[-1, :, :]), dim=1)))
        """
        outputs = [src len, batch size, enc hid dim * 2]
        hidden = [batch size, dec hid dim]
        """
        return outputs, hidden
```
### Attention Layer
Next up is the attention layer. This will take in the previous hidden state of the decoder, $s_{t-1}$, and all of the stacked forward and backward hidden states from the encoder, $H$. The layer will output an attention vector, $a_t$, that is the length of the source sentence, each element is between 0 and 1 and the entire vector sums to 1.
Intuitively, this layer takes what we have decoded so far, $s_{t-1}$, and all of what we have encoded, $H$, to produce a vector, $a_t$, that represents which words in the source sentence we should pay the most attention to in order to correctly predict the next word to decode, $\hat{y}_{t+1}$.
First, we calculate the energy between the previous decoder hidden state and the encoder hidden states. As our encoder hidden states are a sequence of $T$ tensors, and our previous decoder hidden state is a single tensor, the first thing we do is repeat the previous decoder hidden state $T$ times. We then calculate the energy, $E_t$, between them by concatenating them together and passing them through a linear layer (attn) and a $\tanh$ activation function.
$$E_t = \tanh(\text{attn}(s_{t-1}, H))$$
This can be thought of as calculating how well each encoder hidden state "matches" the previous decoder hidden state.
We currently have a [dec hid dim, src len] tensor for each example in the batch. We want this to be [src len] for each example in the batch as the attention should be over the length of the source sentence. This is achieved by multiplying the energy by a [1, dec hid dim] tensor, $v$.
$$\hat{a}_t = v E_t$$
We can think of $v$ as the weights for a weighted sum of the energy across all encoder hidden states. These weights tell us how much we should attend to each token in the source sequence. The parameters of $v$ are initialized randomly, but learned with the rest of the model via backpropagation. Note how $v$ is not dependent on time, and the same $v$ is used for each time-step of the decoding. We implement $v$ as a linear layer without a bias.
Finally, we ensure the attention vector fits the constraints of having all elements between 0 and 1 and the vector summing to 1 by passing it through a $\text{softmax}$ layer.
$$a_t = \text{softmax}(\hat{a_t})$$
This gives us the attention over the source sentence!
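As a quick sanity check of those constraints, here is a minimal pure-Python sketch; the energies are made-up numbers standing in for $\hat{a}_t$, not real model outputs:

```python
import math

def softmax(xs):
    # subtract the max for numerical stability
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical unnormalised energies for a 4-token source sentence
a_hat = [0.5, 2.0, -1.0, 0.1]
a = softmax(a_hat)
# every element lies in [0, 1] and the vector sums to 1
assert all(0.0 <= w <= 1.0 for w in a)
assert abs(sum(a) - 1.0) < 1e-9
```

The largest energy (index 1 here) gets the largest attention weight, which is exactly the "how well does each encoder state match" behaviour described above.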
Graphically, this looks something like below. This is for calculating the very first attention vector, where $s_{t-1} = s_0 = z$. The green/teal blocks represent the hidden states from both the forward and backward RNNs, and the attention computation is all done within the pink block.
<p align="center"><img src="https://github.com/bentrevett/pytorch-seq2seq/raw/49df8404d938a6edbf729876405558cc2c2b3013/assets/seq2seq9.png"/></p>
```
class Attention(nn.Module):
def __init__(self, enc_hid_dim, dec_hid_dim):
super(Attention, self).__init__()
self.attn = nn.Linear((enc_hid_dim * 2) + dec_hid_dim, dec_hid_dim)
self.v = nn.Linear(dec_hid_dim, 1, bias = False)
def forward(self, hidden, encoder_outputs):
"""
hidden = [batch size, dec hid dim]
encoder_outputs = [src len, batch size, enc hid dim * 2]
"""
batch_size = encoder_outputs.shape[1]
src_len = encoder_outputs.shape[0]
# repeat decoder hidden state src_len times
hidden = hidden.unsqueeze(1).repeat(1, src_len, 1)
encoder_outputs = encoder_outputs.permute(1, 0, 2)
"""
hidden = [batch size, src len, dec hid dim]
encoder_outputs = [batch size, src len, enc hid dim * 2]
"""
energy = torch.tanh(self.attn(torch.cat((hidden, encoder_outputs), dim = 2))) # energy = [batch size, src len, dec hid dim]
attention = self.v(energy).squeeze(2) # attention= [batch size, src len]
return F.softmax(attention, dim=1)
```
### Decoder.
The decoder contains the attention layer, attention, which takes the previous hidden state, $s_{t-1}$, all of the encoder hidden states, $H$, and returns the attention vector, $a_t$.
We then use this attention vector to create a weighted source vector, $w_t$, denoted by weighted, which is a weighted sum of the encoder hidden states, $H$, using $a_t$ as the weights.
$$w_t = a_t H$$
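In plain Python this weighted sum is just a convex combination of the encoder hidden states; a tiny sketch with made-up numbers (the batched implementation in the decoder code uses `torch.bmm` instead):

```python
# hypothetical attention weights over a 3-token source (they sum to 1)
a_t = [0.2, 0.5, 0.3]
# one 2-dimensional encoder hidden state per source token
H = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]

# w_t = a_t H: weight each hidden state by its attention weight and sum
w_t = [sum(a * h[d] for a, h in zip(a_t, H)) for d in range(len(H[0]))]
# w_t is close to [0.8, 1.1]
```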
The embedded input word, $d(y_t)$, the weighted source vector, $w_t$, and the previous decoder hidden state, $s_{t-1}$, are then all passed into the decoder RNN, with $d(y_t)$ and $w_t$ being concatenated together.
$$s_t = \text{DecoderGRU}(d(y_t), w_t, s_{t-1})$$
We then pass $d(y_t)$, $w_t$ and $s_t$ through the linear layer, $f$, to make a prediction of the next word in the target sentence, $\hat{y}_{t+1}$. This is done by concatenating them all together.
$$\hat{y}_{t+1} = f(d(y_t), w_t, s_t)$$
The image below shows decoding the first word in an example translation.
<p align="center">
<img src="https://github.com/bentrevett/pytorch-seq2seq/raw/49df8404d938a6edbf729876405558cc2c2b3013/assets/seq2seq10.png"/>
</p>
The green/teal blocks show the forward/backward encoder RNNs which output $H$, the red block shows the context vector, $z = h_T = \tanh(g(h^\rightarrow_T,h^\leftarrow_T)) = \tanh(g(z^\rightarrow, z^\leftarrow)) = s_0$, the blue block shows the decoder RNN which outputs $s_t$, the purple block shows the linear layer, $f$, which outputs $\hat{y}_{t+1}$ and the orange block shows the calculation of the weighted sum over $H$ by $a_t$ and outputs $w_t$. Not shown is the calculation of $a_t$.
```
class Decoder(nn.Module):
def __init__(self, output_dim, emb_dim, enc_hid_dim, dec_hid_dim, dropout, attention):
super(Decoder, self).__init__()
self.output_dim = output_dim
self.attention = attention
self.embedding = nn.Embedding(output_dim, emb_dim)
self.gru = nn.GRU((enc_hid_dim * 2) + emb_dim, dec_hid_dim)
self.fc_out = nn.Linear((enc_hid_dim * 2) + dec_hid_dim + emb_dim, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, input, hidden, encoder_outputs):
"""
input = [batch size]
hidden = [batch size, dec hid dim]
encoder_outputs = [src len, batch size, enc hid dim * 2]
"""
input = input.unsqueeze(0) # input = [1, batch size]
embedded = self.dropout(self.embedding(input)) # embedded = [1, batch size, emb dim]
a = self.attention(hidden, encoder_outputs)# a = [batch size, src len]
a = a.unsqueeze(1) # a = [batch size, 1, src len]
encoder_outputs = encoder_outputs.permute(1, 0, 2) # encoder_outputs = [batch size, src len, enc hid dim * 2]
weighted = torch.bmm(a, encoder_outputs) # weighted = [batch size, 1, enc hid dim * 2]
weighted = weighted.permute(1, 0, 2) # weighted = [1, batch size, enc hid dim * 2]
rnn_input = torch.cat((embedded, weighted), dim = 2) # rnn_input = [1, batch size, (enc hid dim * 2) + emb dim]
output, hidden = self.gru(rnn_input, hidden.unsqueeze(0))
"""
output = [seq len, batch size, dec hid dim * n directions]
hidden = [n layers * n directions, batch size, dec hid dim]
seq len, n layers and n directions will always be 1 in this decoder, therefore:
output = [1, batch size, dec hid dim]
hidden = [1, batch size, dec hid dim]
this also means that output == hidden
"""
assert (output == hidden).all()
embedded = embedded.squeeze(0)
output = output.squeeze(0)
weighted = weighted.squeeze(0)
prediction = self.fc_out(torch.cat((output, weighted, embedded), dim = 1)) # prediction = [batch size, output dim]
return prediction, hidden.squeeze(0)
```
### Seq2Seq Model
This is the first model where the encoder RNN and decoder RNN do not need to have the same hidden dimensions; however, the encoder must be bidirectional. This requirement can be removed by changing all occurrences of `enc_dim * 2` to `enc_dim * 2 if encoder_is_bidirectional else enc_dim`.
This seq2seq encapsulator is similar to the last two. The only difference is that the encoder returns both the final hidden state (the final hidden states of the forward and backward encoder RNNs passed through a linear layer), used as the initial hidden state for the decoder, and every hidden state (the forward and backward hidden states stacked on top of each other). We also need to ensure that hidden and encoder_outputs are passed to the decoder.
**Briefly going over all of the steps:**
* the outputs tensor is created to hold all predictions, $\hat{Y}$
* the source sequence, $X$, is fed into the encoder to receive $z$ and $H$
* the initial decoder hidden state is set to be the context vector, $s_0 = z = h_T$
* we use a batch of `<sos>` tokens as the first input, $y_1$
* **we then decode within a loop:**
* inserting the input token $y_t$, previous hidden state, $s_{t-1}$, and all encoder outputs, $H$, into the decoder
* receiving a prediction, $\hat{y}_{t+1}$, and a new hidden state, $s_t$
* we then decide if we are going to teacher force or not, setting the next input as appropriate
```
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.device = device
def forward(self, src, trg, teacher_forcing_ratio = 0.5):
"""
src = [src len, batch size]
trg = [trg len, batch size]
teacher_forcing_ratio is probability to use teacher forcing
e.g. if teacher_forcing_ratio is 0.75 we use teacher forcing 75% of the time
"""
trg_len, batch_size = trg.shape
trg_vocab_size = self.decoder.output_dim
# tensor to store decoder outputs
outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device)
# encoder_outputs is all hidden states of the input sequence, back and forwards
# hidden is the final forward and backward hidden states, passed through a linear layer
encoder_outputs, hidden = self.encoder(src)
# first input to the decoder is the <sos> tokens
input = trg[0,:]
for t in range(1, trg_len):
# insert input token embedding, previous hidden state and all encoder hidden states
# receive output tensor (predictions) and new hidden state
output, hidden = self.decoder(input, hidden, encoder_outputs)
# place predictions in a tensor holding predictions for each token
outputs[t] = output
# decide if we are going to use teacher forcing or not
teacher_force = random.random() < teacher_forcing_ratio
# get the highest predicted token from our predictions
top1 = output.argmax(1)
# if teacher forcing, use actual next token as next input
# if not, use predicted token
input = trg[t] if teacher_force else top1
return outputs
```
### Training the Seq2Seq Model
The rest of the code is similar to the previous notebooks; where there are changes, I will highlight them.
### Hyperparameters
```
INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
ENC_EMB_DIM = DEC_EMB_DIM = 256
ENC_HID_DIM = DEC_HID_DIM = 512
ENC_DROPOUT = DEC_DROPOUT = 0.5
attn = Attention(ENC_HID_DIM, DEC_HID_DIM)
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, DEC_DROPOUT, attn)
model = Seq2Seq(enc, dec, device).to(device)
model
```
### Initializing the weights
Here, we will initialize all biases to zero and all weights from $\mathcal{N}(0, 0.01)$.
```
def init_weights(m):
for name, param in m.named_parameters():
if 'weight' in name:
nn.init.normal_(param.data, mean=0, std=0.01)
else:
nn.init.constant_(param.data, 0)
model.apply(init_weights)
```
### Counting model parameters.
The number of model parameters has increased by roughly 50% compared to the previous notebook.
```
def count_trainable_params(model):
return sum(p.numel() for p in model.parameters()), sum(p.numel() for p in model.parameters() if p.requires_grad)
n_params, trainable_params = count_trainable_params(model)
print(f"Total number of parameters: {n_params:,}\nTotal trainable parameters: {trainable_params:,}")
```
### Optimizer
```
optimizer = torch.optim.Adam(model.parameters())
```
### Loss Function
Our loss function calculates the average loss per token; however, by passing the index of the `<pad>` token as the `ignore_index` argument, we ignore the loss whenever the target token is a padding token.
```
TRG_PAD_IDX = TRG.vocab.stoi[TRG.pad_token]
criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX)
```
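What `ignore_index` does can be sketched in a few lines of plain Python; this is an illustrative stand-in with a made-up pad index and log-probabilities, not the actual `CrossEntropyLoss` implementation:

```python
PAD = 0  # hypothetical <pad> index

def avg_nll(log_probs, targets, ignore_index=PAD):
    """Average negative log-likelihood per token, skipping pad targets."""
    losses = [-lp[t] for lp, t in zip(log_probs, targets) if t != ignore_index]
    return sum(losses) / len(losses)

# two real tokens plus one pad token; the pad position contributes nothing
log_probs = [[-0.1, -2.3], [-1.6, -0.2], [-0.7, -0.7]]
targets = [1, 1, PAD]
loss = avg_nll(log_probs, targets)  # averages over the 2 non-pad tokens only
```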
### Training and Evaluating Functions
```
def train(model, iterator, optimizer, criterion, clip):
model.train()
epoch_loss = 0
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
optimizer.zero_grad()
output = model(src, trg)
# trg = [trg len, batch size]
# output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
# trg = [(trg len - 1) * batch size]
# output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
output = model(src, trg, 0) # turn off teacher forcing
# trg = [trg len, batch size]
# output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
# trg = [(trg len - 1) * batch size]
# output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
```
### Train Loop.
Below is a function that tells us how long each epoch took to complete.
```
import time
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
N_EPOCHS = 10
CLIP = 1
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
valid_loss = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'best-model.pt')
print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
```
### Evaluating the best model.
```
model.load_state_dict(torch.load('best-model.pt'))
test_loss = evaluate(model, test_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
```
We've improved on the previous model, but this came at the cost of roughly doubling the training time.
In the next notebook, we'll be using the same architecture but applying a few tricks that are applicable to all RNN architectures: **packed padded** sequences and **masking**. We'll also implement code which will allow us to look at what words in the input the RNN is paying attention to when decoding the output.
### Credits.
* [bentrevett](https://github.com/bentrevett/pytorch-seq2seq/blob/master/3%20-%20Neural%20Machine%20Translation%20by%20Jointly%20Learning%20to%20Align%20and%20Translate.ipynb)
<a href="https://colab.research.google.com/github/yasirabd/solver-society-job-data/blob/main/2_0_Ekstrak_job_position.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Objective
Extract job_position; the fewer unique values, the better.
Required input data:
1. jobstreet_clean_tahap1.csv
Output data produced:
1. jobstreet_clean_tahap2.csv
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import re
data = pd.read_csv("/content/drive/My Drive/Data Loker/jobstreet_clean_tahap1.csv")
data.head()
# check null values
data.isnull().sum()
```
## Explore data
```
# check null values
data['job_position'].isnull().sum()
# convert to lower case
job_position = data['job_position'].str.lower()
job_position.value_counts()
```
> There are 10056 unique values; the data is very messy.
```
# top 50 job_position
job_position.value_counts()[:50]
# convert to a list
temp = job_position.tolist()
```
## Cleaning qualifier text
```
# find entries that contain brackets () and []
brackets = ['(', ')', '[', ']']
# remove data in brackets
# reason: there are entries like "marketing executive (jakarta/cikarang)"
# the brackets contain a location, which is not needed
for idx, pos in enumerate(temp):
for b in brackets:
if b in pos:
pos = re.sub("[\(\[].*?[\)\]]", "", pos)
temp[idx] = pos
# remove text after "-"
# reason: there are entries like "tukang bubut manual - cikarang"
# text after '-' indicates a location or qualifier
for idx, pos in enumerate(temp):
if '-' in pos:
temp[idx] = pos.split('-')[0]
# remove text after ":"
# reason: there are entries like "finance supervisor: bandar lampung"
# text after ':' indicates a location or qualifier
for idx, pos in enumerate(temp):
if ':' in pos:
temp[idx] = pos.split(':')[0]
# remove extra space
for idx, pos in enumerate(temp):
temp[idx] = ' '.join(pos.split())
```
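The three cleaning passes above (brackets, dash, colon) plus the whitespace fix can be collapsed into a single helper. Here is a sketch of such a consolidation; `clean_position` is a name introduced here for illustration, not part of the notebook:

```python
import re

def clean_position(pos):
    # strip bracketed text, e.g. "(jakarta/cikarang)" or "[urgent]"
    pos = re.sub(r"[\(\[].*?[\)\]]", "", pos)
    # keep only the part before "-" or ":" (locations/qualifiers follow them)
    pos = pos.split('-')[0].split(':')[0]
    # collapse extra whitespace
    return ' '.join(pos.split())

cleaned = [clean_position(p) for p in [
    "marketing executive (jakarta/cikarang)",
    "tukang bubut manual - cikarang",
    "finance supervisor: bandar lampung",
]]
```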
## Group similar job_positions
### Sales Executive
```
# check the different variants of sales executive job_position
sales_exec = list()
for pos in temp:
if 'sales executive' in pos:
sales_exec.append(pos)
pd.Series(sales_exec).unique()
# replace with just 'sales executive'
for idx,pos in enumerate(temp):
if 'sales executive' in pos:
temp[idx] = 'sales executive'
```
### Accounting Staff
```
# check the different variants of accounting staff
acc_staff = list()
for pos in temp:
if 'accounting staff' in pos and 'finance' not in pos:
acc_staff.append(pos)
pd.Series(acc_staff).unique()
# replace with just 'accounting staff'
for idx, pos in enumerate(temp):
if 'accounting staff' in pos and 'finance' not in pos:
temp[idx] = 'accounting staff'
```
### Digital Marketing
```
# check the different variants of digital marketing
digital_marketing = list()
for pos in temp:
if 'digital marketing' in pos:
digital_marketing.append(pos)
pd.Series(digital_marketing).unique()
# replace with just 'digital marketing'
for idx,pos in enumerate(temp):
if 'digital marketing' in pos:
temp[idx] = 'digital marketing'
```
### Sales Manager
```
# check the different variants of sales manager
sales_manager = list()
for pos in temp:
if 'sales manager' in pos:
sales_manager.append(pos)
pd.Series(sales_manager).unique()
# replace with just 'sales manager'
for idx,pos in enumerate(temp):
if 'sales manager' in pos:
temp[idx] = 'sales manager'
```
### Project Manager
```
# check the different variants of project manager
proj_manager = list()
for pos in temp:
if 'project manager' in pos:
proj_manager.append(pos)
pd.Series(proj_manager).unique()
# replace with just 'project manager'
for idx,pos in enumerate(temp):
if 'project manager' in pos:
temp[idx] = 'project manager'
```
### Graphic Designer
```
# check the different variants of graphic designer
graph_designer = list()
for pos in temp:
if 'graphic designer' in pos:
graph_designer.append(pos)
pd.Series(graph_designer).unique()
# replace with just 'graphic designer'
for idx,pos in enumerate(temp):
if 'graphic designer' in pos:
temp[idx] = 'graphic designer'
```
### Staff Accounting
```
# check the different variants of staff accounting
staff_acc = list()
for pos in temp:
if 'staff accounting' in pos:
staff_acc.append(pos)
pd.Series(staff_acc).unique()
# replace with just 'staff accounting'
for idx,pos in enumerate(temp):
if 'staff accounting' in pos:
temp[idx] = 'staff accounting'
```
### Marketing Manager
```
# check the different variants of marketing manager
mark_manager = list()
for pos in temp:
if 'marketing manager' in pos:
mark_manager.append(pos)
pd.Series(mark_manager).unique()
# replace with just 'marketing manager'
for idx,pos in enumerate(temp):
if 'marketing manager' in pos:
temp[idx] = 'marketing manager'
```
### Marketing Executive
```
# check the different variants of marketing executive
mark_exec = list()
for pos in temp:
if 'marketing executive' in pos:
mark_exec.append(pos)
pd.Series(mark_exec).unique()
# replace with just 'marketing executive'
for idx,pos in enumerate(temp):
if 'marketing executive' in pos:
temp[idx] = 'marketing executive'
```
### Sales Engineer
```
# check the different variants of sales engineer
sales_engineer = list()
for pos in temp:
if 'sales engineer' in pos:
sales_engineer.append(pos)
pd.Series(sales_engineer).unique()
# replace with just 'sales engineer'
for idx,pos in enumerate(temp):
if 'sales engineer' in pos:
temp[idx] = 'sales engineer'
```
### Web Developer
```
# check the different variants of web developer
web_dev = list()
for pos in temp:
if 'web developer' in pos:
web_dev.append(pos)
pd.Series(web_dev).unique()
# replace with just 'web developer'
for idx,pos in enumerate(temp):
if 'web developer' in pos:
temp[idx] = 'web developer'
```
### Accounting Supervisor
```
# check the different variants of accounting supervisor
job_list = list()
for pos in temp:
if 'accounting supervisor' in pos:
job_list.append(pos)
pd.Series(job_list).unique()
# replace with just 'accounting supervisor'
for idx,pos in enumerate(temp):
if 'accounting supervisor' in pos:
temp[idx] = 'accounting supervisor'
```
### Account Executive
```
# check the different variants of account executive
job_list = list()
for pos in temp:
if 'account executive' in pos:
job_list.append(pos)
pd.Series(job_list).unique()
# replace with just 'account executive'
for idx,pos in enumerate(temp):
if 'account executive' in pos:
temp[idx] = 'account executive'
```
### Sales Marketing
```
# check the different variants of sales marketing
job_list = list()
for pos in temp:
if 'sales marketing' in pos:
job_list.append(pos)
pd.Series(job_list).unique()
# replace with just 'sales marketing'
for idx,pos in enumerate(temp):
if 'sales marketing' in pos:
temp[idx] = 'sales marketing'
```
### Finance Staff
```
# check the different variants of finance staff
job_list = list()
for pos in temp:
if 'finance staff' in pos:
job_list.append(pos)
pd.Series(job_list).unique()
# replace with just 'finance staff'
for idx,pos in enumerate(temp):
if 'finance staff' in pos:
temp[idx] = 'finance staff'
```
### Drafter
```
# check the different variants of drafter
job_list = list()
for pos in temp:
if 'drafter' in pos:
job_list.append(pos)
pd.Series(job_list).unique()
# replace with just 'drafter'
for idx,pos in enumerate(temp):
if 'drafter' in pos:
temp[idx] = 'drafter'
```
### Sales Supervisor
```
# check the different variants of sales supervisor
job_list = list()
for pos in temp:
if 'sales supervisor' in pos:
job_list.append(pos)
pd.Series(job_list).unique()
# replace with just 'sales supervisor'
for idx,pos in enumerate(temp):
if 'sales supervisor' in pos:
temp[idx] = 'sales supervisor'
```
### Purchasing Staff
```
# check the different variants of purchasing staff
job_list = list()
for pos in temp:
if 'purchasing staff' in pos:
job_list.append(pos)
pd.Series(job_list).unique()
# replace with just 'purchasing staff'
for idx,pos in enumerate(temp):
if 'purchasing staff' in pos:
temp[idx] = 'purchasing staff'
```
### IT Programmer
```
# check the different variants of it programmer
job_list = list()
for pos in temp:
if 'it programmer' in pos:
job_list.append(pos)
pd.Series(job_list).unique()
# replace with just 'it programmer'
for idx,pos in enumerate(temp):
if 'it programmer' in pos:
temp[idx] = 'it programmer'
```
### IT Support
```
# check the different variants of it support
job_list = list()
for pos in temp:
if 'it support' in pos:
job_list.append(pos)
pd.Series(job_list).unique()
# replace with just 'it support'
for idx,pos in enumerate(temp):
if 'it support' in pos:
temp[idx] = 'it support'
```
### Account Manager
```
# check the different variants of account manager
job_list = list()
for pos in temp:
if 'account manager' in pos:
job_list.append(pos)
pd.Series(job_list).unique()
# replace with just 'account manager'
for idx,pos in enumerate(temp):
if 'account manager' in pos:
temp[idx] = 'account manager'
```
### Customer Service
```
# check the different variants of customer service
job_list = list()
for pos in temp:
if 'customer service' in pos:
job_list.append(pos)
print(pd.Series(job_list).unique())
# replace with just 'customer service'
for idx,pos in enumerate(temp):
if 'customer service' in pos:
temp[idx] = 'customer service'
```
### Brand Manager
```
# check the different variants of brand manager
job_list = list()
for pos in temp:
if 'brand manager' in pos:
job_list.append(pos)
print(pd.Series(job_list).unique())
# replace with just 'brand manager'
for idx,pos in enumerate(temp):
if 'brand manager' in pos:
temp[idx] = 'brand manager'
```
### Programmer
```
# check the different variants of programmer
job_list = list()
for pos in temp:
if 'programmer' in pos:
job_list.append(pos)
print(pd.Series(job_list).unique())
# replace with just 'programmer'
for idx,pos in enumerate(temp):
if 'programmer' in pos:
temp[idx] = 'programmer'
```
### Supervisor Produksi
```
# check the different variants of supervisor produksi
job_list = list()
for pos in temp:
if 'supervisor produksi' in pos:
job_list.append(pos)
print(pd.Series(job_list).unique())
# replace with just 'supervisor produksi'
for idx,pos in enumerate(temp):
if 'supervisor produksi' in pos:
temp[idx] = 'supervisor produksi'
```
### Personal Assistant
```
# check the different variants of personal assistant
job_list = list()
for pos in temp:
if 'personal assistant' in pos:
job_list.append(pos)
print(pd.Series(job_list).unique())
# replace with just 'personal assistant'
for idx,pos in enumerate(temp):
if 'personal assistant' in pos:
temp[idx] = 'personal assistant'
```
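Each of the sections above repeats the same check-then-replace pattern with a different keyword. One way to avoid the duplication is a small helper plus an optional exclusion (as used for accounting staff vs. finance); `group_positions` is a name introduced here for illustration:

```python
def group_positions(positions, keyword, exclude=None):
    """Replace any position containing `keyword` (and not `exclude`) with `keyword`."""
    return [
        keyword if keyword in pos and (exclude is None or exclude not in pos) else pos
        for pos in positions
    ]

sample = ['senior sales executive', 'accounting staff & finance', 'driver']
sample = group_positions(sample, 'sales executive')
sample = group_positions(sample, 'accounting staff', exclude='finance')
```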
## Group job_position by word count
Note: this step has not yet been carried out, given the tight timeline.
```
job_pos_satu = list()
job_pos_dua = list()
job_pos_banyak = list()
for job in temp:
split = job.split()
if len(split) == 1:
job_pos_satu.append(job)
elif len(split) == 2:
job_pos_dua.append(job)
else:
job_pos_banyak.append(job)
len(job_pos_satu), len(job_pos_dua), len(job_pos_banyak)
# view the unique one-word values
pd.Series(job_pos_satu).value_counts()
# view the unique two-word values
pd.Series(job_pos_dua).value_counts()
# view the unique multi-word values
pd.Series(job_pos_banyak).value_counts()
```
### Mapping one-word jobs
```
# pd.Series(job_pos_satu).value_counts().to_csv('job_pos_satu.csv')
# pd.Series(job_pos_dua).value_counts().to_csv('job_pos_dua.csv')
# pd.Series(job_pos_banyak).value_counts().to_csv('job_pos_banyak.csv')
```
## Final result
```
pd.Series(temp).value_counts()
# pd.Series(temp).value_counts().to_csv('job_position_distinct.csv')
```
> What was originally 10078 distinct values has been reduced to 7414 distinct values.
```
# insert into the dataframe and check null values
data['job_position'] = temp
data['job_position'].isnull().sum()
data.head()
```
# Export csv
```
data.shape
data.to_csv('jobstreet_clean_tahap2.csv', index=False)
```
# Tutorial: DESI spectral fitting with `provabgs`
```
# let's install `provabgs`, a Python package for generating the PRObabilistic Value-Added BGS (PROVABGS)
!pip install git+https://github.com/changhoonhahn/provabgs.git --upgrade --user
!pip install zeus-mcmc --user
import numpy as np
from provabgs import infer as Infer
from provabgs import models as Models
from provabgs import flux_calib as FluxCalib
# -- plotting --
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['axes.linewidth'] = 1.5
mpl.rcParams['axes.xmargin'] = 1
mpl.rcParams['xtick.labelsize'] = 'x-large'
mpl.rcParams['xtick.major.size'] = 5
mpl.rcParams['xtick.major.width'] = 1.5
mpl.rcParams['ytick.labelsize'] = 'x-large'
mpl.rcParams['ytick.major.size'] = 5
mpl.rcParams['ytick.major.width'] = 1.5
mpl.rcParams['legend.frameon'] = False
# read in DESI Cascades spectra from TILE 80612
from desispec.io import read_spectra
spectra = read_spectra('/global/cfs/cdirs/desi/spectro/redux/cascades/tiles/80612/deep/coadd-0-80612-deep.fits')
igal = 10
from astropy.table import Table
zbest = Table.read('/global/cfs/cdirs/desi/spectro/redux/cascades/tiles/80612/deep/zbest-0-80612-deep.fits', hdu=1)
zred = zbest['Z'][igal]
print('z=%f' % zred)
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(111)
sub.plot(spectra.wave['b'], spectra.flux['b'][igal])
sub.plot(spectra.wave['r'], spectra.flux['r'][igal])
sub.plot(spectra.wave['z'], spectra.flux['z'][igal])
sub.set_xlim(spectra.wave['b'].min(), spectra.wave['z'].max())
sub.set_ylim(0, 5)
# declare prior
priors = Infer.load_priors([
Infer.UniformPrior(9., 12, label='sed'),
Infer.FlatDirichletPrior(4, label='sed'),
Infer.UniformPrior(np.array([6.9e-5, 6.9e-5, 0., 0., -2.2]), np.array([7.3e-3, 7.3e-3, 3., 4., 0.4]), label='sed'),
Infer.UniformPrior(np.array([0.9, 0.9, 0.9]), np.array([1.1, 1.1, 1.1]), label='flux_calib') # flux calibration variables
])
# declare model
m_nmf = Models.NMF(burst=False, emulator=True)
# declare flux calibration
fluxcalib = FluxCalib.constant_flux_DESI_arms
desi_mcmc = Infer.desiMCMC(
model=m_nmf,
flux_calib=fluxcalib,
prior=priors
)
mcmc = desi_mcmc.run(
wave_obs=[spectra.wave['b'], spectra.wave['r'], spectra.wave['z']],
flux_obs=[spectra.flux['b'][igal], spectra.flux['r'][igal], spectra.flux['z'][igal]],
flux_ivar_obs=[spectra.ivar['b'][igal], spectra.ivar['r'][igal], spectra.ivar['z'][igal]],
zred=zred,
sampler='zeus',
nwalkers=100,
burnin=100,
opt_maxiter=10000,
niter=1000,
debug=True)
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(111)
sub.plot(spectra.wave['b'], spectra.flux['b'][igal])
sub.plot(spectra.wave['r'], spectra.flux['r'][igal])
sub.plot(spectra.wave['z'], spectra.flux['z'][igal])
sub.plot(mcmc['wavelength_obs'], mcmc['flux_spec_model'], c='k', ls='--')
sub.set_xlim(spectra.wave['b'].min(), spectra.wave['z'].max())
sub.set_ylim(0, 5)
```
# Human Rights Considered NLP
### **Overview**
This notebook creates a training dataset using data sourced from the [Police Brutality 2020 API](https://github.com/2020PB/police-brutality) by adding category labels for types of force and the people involved in incidents using [Snorkel](https://www.snorkel.org/) for NLP.
Built on the original notebook by [Axel Corro](https://github.com/axefx), sourced from the HRF Team C DS [repository](https://github.com/Lambda-School-Labs/Labs25-Human_Rights_First-TeamC-DS/blob/main/notebooks/snorkel_hrf.ipynb).
# Imports
```
!pip install snorkel
import pandas as pd
from snorkel.labeling import labeling_function
from snorkel.labeling.model import LabelModel
from snorkel.labeling import PandasLFApplier
import sys
from google.colab import files
# using our cleaned processed data
df = pd.read_csv('https://raw.githubusercontent.com/Lambda-School-Labs/Labs25-Human_Rights_First-TeamC-DS/main/Data/pv_incidents.csv', na_values=False)
df2 = df.filter(['text'], axis=1)
df2['text'] = df2['text'].astype(str)
```
# Use of Force Tags
### Categories of force:
- **Presence**: Police show up and their presence is enough to de-escalate. This is ideal.
- **Verbalization**: Police use voice commands; force is non-physical.
- **Empty-hand control, soft technique**: Officers use grabs, holds, and joint locks to restrain an individual. Related keywords: shove, chase, spit, raid, push.
- **Empty-hand control, hard technique**: Officers use punches and kicks to restrain an individual.
- **Blunt impact**: Officers may use a baton to immobilize a combative person. Related keywords: struck, shield, beat.
- **Projectiles**: Projectiles shot or launched by police at civilians. Includes "less lethal" munitions such as rubber bullets, bean bag rounds, water hoses, and flash grenades, as well as deadly weapons such as firearms.
- **Chemical**: Officers use chemical sprays or projectiles embedded with chemicals to restrain an individual (e.g., pepper spray).
- **Conducted energy devices**: Officers may use CEDs to immobilize an individual. CEDs discharge a high-voltage, low-amperage jolt of electricity at a distance.
- **Miscellaneous**: LRAD (long-range acoustic device), sound cannon, sonic weapon.
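Each labeling function below follows the same keyword-match-or-abstain shape. That shape can be captured by a small factory; this is a plain-Python sketch of the idea (in the notebook itself, each function is additionally wrapped with Snorkel's `@labeling_function()` decorator and receives a row `x` with a `.text` attribute):

```python
ABSTAIN = -1
PRESENCE = 1

def make_keyword_lf(keyword, label):
    """Return a function emitting `label` when `keyword` appears in the text."""
    def lf(text):
        return label if keyword in text.lower() else ABSTAIN
    return lf

lf_swarm = make_keyword_lf('swarm', PRESENCE)
```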
## Presence category
Police presence is enough to de-escalate. This is ideal.
```
PRESENCE = 1
NOT_PRESENCE = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_swarm(x):
return PRESENCE if 'swarm' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_show(x):
return PRESENCE if 'show' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_arrive(x):
return PRESENCE if 'arrive' in x.text.lower() else ABSTAIN
from snorkel.labeling.model import LabelModel
from snorkel.labeling import PandasLFApplier
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_swarm, lf_keyword_show, lf_keyword_arrive]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2["presence_label"] = label_model.predict(L=L_train, tie_break_policy="abstain")
```
## Verbalization Category
Police use voice commands; force is non-physical.
```
VERBAL = 1
NOT_VERBAL = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_shout(x):
return VERBAL if 'shout' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_order(x):
return VERBAL if 'order' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_loudspeaker(x):
return VERBAL if 'loudspeaker' in x.text.lower() else ABSTAIN
from snorkel.labeling.model import LabelModel
from snorkel.labeling import PandasLFApplier
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_shout, lf_keyword_order,lf_keyword_loudspeaker]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2["verbal_label"] = label_model.predict(L=L_train, tie_break_policy="abstain")
# LF coverage = fraction of examples each LF labeled; use new names so the
# labeling functions themselves are not overwritten by floats
coverage_shout, coverage_order, coverage_loudspeaker = (L_train != ABSTAIN).mean(axis=0)
print(f"lf_keyword_shout coverage: {coverage_shout * 100:.1f}%")
print(f"lf_keyword_order coverage: {coverage_order * 100:.1f}%")
print(f"lf_keyword_loudspeaker coverage: {coverage_loudspeaker * 100:.1f}%")
df2[df2['verbal_label']==1]
```
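The coverage numbers above generalize to any label matrix: coverage is the fraction of examples an LF did not abstain on, and overlap is the fraction hit by more than one LF. A numpy sketch on a toy matrix (the 3×3 `L` below stands in for `L_train`):

```python
import numpy as np

ABSTAIN = -1
# Toy label matrix standing in for L_train: rows = examples, columns = LFs.
L = np.array([[ 1, -1,  1],
              [-1, -1, -1],
              [ 1,  1, -1]])

coverage = (L != ABSTAIN).mean(axis=0)               # fraction labeled, per LF
overlap = ((L != ABSTAIN).sum(axis=1) > 1).mean()    # examples hit by >1 LF
```

Snorkel's `LFAnalysis(L=L_train, lfs=lfs).lf_summary()` reports the same statistics (plus conflicts) per named LF.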
## Empty-hand Control - Soft Technique
Officers use grabs, holds, and joint locks to restrain an individual. Keywords: shove, chase, spit, raid, push.
```
EHCSOFT = 1
NOT_EHCSOFT = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_shove(x):
return EHCSOFT if 'shove' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_grabs(x):
return EHCSOFT if 'grabs' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_holds(x):
return EHCSOFT if 'holds' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_arrest(x):
return EHCSOFT if 'arrest' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_spit(x):
return EHCSOFT if 'spit' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_raid(x):
return EHCSOFT if 'raid' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_push(x):
return EHCSOFT if 'push' in x.text.lower() else ABSTAIN
from snorkel.labeling.model import LabelModel
from snorkel.labeling import PandasLFApplier
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_shove, lf_keyword_grabs, lf_keyword_spit, lf_keyword_raid,
lf_keyword_push, lf_keyword_holds, lf_keyword_arrest]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2["ehc-soft_technique"] = label_model.predict(L=L_train, tie_break_policy="abstain")
df2[df2['ehc-soft_technique']==1]
```
## Empty-hand Control - Hard Technique
Officers use bodily force (punches and kicks or asphyxiation) to restrain an individual.
```
EHCHARD = 1
NOT_EHCHARD = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_beat(x):
return EHCHARD if 'beat' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_tackle(x):
return EHCHARD if 'tackle' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_punch(x):
return EHCHARD if 'punch' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_assault(x):
return EHCHARD if 'assault' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_choke(x):
return EHCHARD if 'choke' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_kick(x):
return EHCHARD if 'kick' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_kneel(x):
return EHCHARD if 'kneel' in x.text.lower() else ABSTAIN
from snorkel.labeling.model import LabelModel
from snorkel.labeling import PandasLFApplier
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_beat, lf_keyword_tackle, lf_keyword_choke,
lf_keyword_kick, lf_keyword_punch, lf_keyword_assault,
lf_keyword_kneel]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2["ehc-hard_technique"] = label_model.predict(L=L_train, tie_break_policy="abstain")
df2[df2['ehc-hard_technique']==1]
```
## Blunt Impact Category
Officers may use tools like batons to immobilize a person.
```
BLUNT = 1
NOT_BLUNT = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_baton(x):
return BLUNT if 'baton' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_club(x):
return BLUNT if 'club' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_shield(x):
return BLUNT if 'shield' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_bike(x):
return BLUNT if 'bike' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_horse(x):
return BLUNT if 'horse' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_vehicle(x):
return BLUNT if 'vehicle' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_car(x):
return BLUNT if 'car' in x.text.lower() else ABSTAIN
from snorkel.labeling.model import LabelModel
from snorkel.labeling import PandasLFApplier
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_baton, lf_keyword_club, lf_keyword_horse, lf_keyword_vehicle,
lf_keyword_car, lf_keyword_shield, lf_keyword_bike]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2["blunt_impact"] = label_model.predict(L=L_train, tie_break_policy="abstain")
df2[df2['blunt_impact']==1]
```
## Projectiles category
Projectiles shot or launched by police at civilians. Includes "less lethal" munitions such as rubber bullets, bean bag rounds, water hoses, and flash grenades, as well as deadly weapons such as firearms.
```
PROJECTILE = 1
NOT_PROJECTILE = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_pepper(x):
return PROJECTILE if 'pepper' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_rubber(x):
return PROJECTILE if 'rubber' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_bean(x):
return PROJECTILE if 'bean' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_shoot(x):
return PROJECTILE if 'shoot' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_shot(x):
return PROJECTILE if 'shot' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_fire(x):
return PROJECTILE if 'fire' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_grenade(x):
return PROJECTILE if 'grenade' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_bullet(x):
return PROJECTILE if 'bullet' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_throw(x):
return PROJECTILE if 'throw' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_discharge(x):
return PROJECTILE if 'discharge' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_projectile(x):
return PROJECTILE if 'projectile' in x.text else ABSTAIN
from snorkel.labeling.model import LabelModel
from snorkel.labeling import PandasLFApplier
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_pepper, lf_keyword_rubber, lf_keyword_bean,
lf_keyword_shoot, lf_keyword_shot, lf_keyword_fire, lf_keyword_grenade,
lf_keyword_bullet, lf_keyword_throw, lf_keyword_discharge,
lf_keyword_projectile]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2["projectile"] = label_model.predict(L=L_train, tie_break_policy="abstain")
df2[df2['projectile'] == 1]
```
## Chemical Agents
Police use chemical agents, including pepper spray and tear gas, on civilians.
```
CHEMICAL = 1
NOT_CHEMICAL = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_pepper(x):
return CHEMICAL if 'pepper' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_gas(x):
return CHEMICAL if 'gas' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_smoke(x):
return CHEMICAL if 'smoke' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_mace(x):
return CHEMICAL if 'mace' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_spray(x):
return CHEMICAL if 'spray' in x.text else ABSTAIN
from snorkel.labeling.model import LabelModel
from snorkel.labeling import PandasLFApplier
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_pepper, lf_keyword_gas, lf_keyword_smoke,
lf_keyword_spray, lf_keyword_mace]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2["chemical"] = label_model.predict(L=L_train, tie_break_policy="abstain")
df2[df2['chemical']==1]
```
## Conducted energy devices
Officers may use CEDs to immobilize an individual. CEDs discharge a high-voltage, low-amperage jolt of electricity at a distance. Most commonly tasers.
```
CED = 1
NOT_CED = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_taser(x):
return CED if 'taser' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_stun(x):
return CED if 'stun' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_stungun(x):
return CED if 'stungun' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_taze(x):
return CED if 'taze' in x.text else ABSTAIN
from snorkel.labeling.model import LabelModel
from snorkel.labeling import PandasLFApplier
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_taser, lf_keyword_stun, lf_keyword_stungun, lf_keyword_taze]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2["ced_category"] = label_model.predict(L=L_train, tie_break_policy="abstain")
df2[df2['ced_category']==1]
```
# Add force tags to dataframe
```
df2.columns
def add_force_labels(row):
tags = []
if row['presence_label'] == 1:
tags.append('Presence')
if row['verbal_label'] == 1:
tags.append('Verbalization')
if row['ehc-soft_technique'] == 1:
tags.append('EHC Soft Technique')
if row['ehc-hard_technique'] == 1:
tags.append('EHC Hard Technique')
if row['blunt_impact'] == 1:
tags.append('Blunt Impact')
    if row['projectile'] == 1:
tags.append('Projectiles')
if row['chemical'] == 1:
tags.append('Chemical')
if row['ced_category'] == 1:
tags.append('Conductive Energy')
if not tags:
tags.append('Other/Unknown')
return tags
# apply force tags to incident data
df2['force_tags'] = df2.apply(add_force_labels,axis=1)
# take a peek
df2[['text','force_tags']].head(3)
# clean the tags column by separating tags
def join_tags(content):
return ', '.join(content)
# add column to main df
df['force_tags'] = df2['force_tags'].apply(join_tags)
df['force_tags'].value_counts()
```
# Human Categories
### Police Categories:
police, officer, deputy, PD, cop
federal, agent
```
POLICE = 1
NOT_POLICE = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_police(x):
return POLICE if 'police' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_officer(x):
return POLICE if 'officer' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_deputy(x):
return POLICE if 'deputy' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_pd(x):
return POLICE if 'PD' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_cop(x):
return POLICE if 'cop' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_enforcement(x):
return POLICE if 'enforcement' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_leo(x):
return POLICE if 'LEO' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_swat(x):
return POLICE if 'SWAT' in x.text else ABSTAIN
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_police, lf_keyword_officer, lf_keyword_deputy, lf_keyword_pd,
lf_keyword_cop, lf_keyword_enforcement, lf_keyword_swat, lf_keyword_leo]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2['police_label'] = label_model.predict(L=L_train, tie_break_policy='abstain')
df2[df2['police_label']==1]
```
### Federal Agent Category
```
FEDERAL = 1
NOT_FEDERAL = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_federal(x):
return FEDERAL if 'federal' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_feds(x):
return FEDERAL if 'feds' in x.text else ABSTAIN
# national guard
@labeling_function()
def lf_keyword_guard(x):
return FEDERAL if 'guard' in x.text else ABSTAIN
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_federal, lf_keyword_feds, lf_keyword_guard]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2['federal_label'] = label_model.predict(L=L_train, tie_break_policy='abstain')
df2[df2['federal_label']==1]
```
### Civilian Categories:
protesters, medic,
reporter, journalist,
minor, child
```
PROTESTER = 1
NOT_PROTESTER = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_protester(x):
return PROTESTER if 'protester' in x.text else ABSTAIN
# also match the variant spelling 'protestor'
@labeling_function()
def lf_keyword_protestor(x):
return PROTESTER if 'protestor' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_medic(x):
return PROTESTER if 'medic' in x.text else ABSTAIN
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_protester, lf_keyword_protestor, lf_keyword_medic]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2['protester_label'] = label_model.predict(L=L_train, tie_break_policy='abstain')
df2[df2['protester_label']==1]
```
Press
```
PRESS = 1
NOT_PRESS = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_reporter(x):
return PRESS if 'reporter' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_press(x):
return PRESS if 'press' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_journalist(x):
return PRESS if 'journalist' in x.text else ABSTAIN
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_reporter, lf_keyword_press, lf_keyword_journalist]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2['press_label'] = label_model.predict(L=L_train, tie_break_policy='abstain')
df2[df2['press_label']==1]
```
Minors
```
MINOR = 1
NOT_MINOR = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_minor(x):
return MINOR if 'minor' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_underage(x):
return MINOR if 'underage' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_teen(x):
return MINOR if 'teen' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_child(x):
return MINOR if 'child' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_baby(x):
return MINOR if 'baby' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_toddler(x):
return MINOR if 'toddler' in x.text else ABSTAIN
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_minor, lf_keyword_child, lf_keyword_baby,
lf_keyword_underage, lf_keyword_teen, lf_keyword_toddler]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2['minor_label'] = label_model.predict(L=L_train, tie_break_policy='abstain')
df2[df2['minor_label']==1]
```
# Add human tags to Dataframe
```
df2.columns
def add_human_labels(row):
tags = []
    if row['police_label'] == 1:
tags.append('Police')
if row['federal_label'] == 1:
tags.append('Federal')
if row['protester_label'] == 1:
tags.append('Protester')
if row['press_label'] == 1:
tags.append('Press')
if row['minor_label'] == 1:
tags.append('Minor')
if not tags:
tags.append('Other/Unknown')
return tags
# apply human tags to incident data
df2['human_tags'] = df2.apply(add_human_labels,axis=1)
# take a peek
df2[['text','force_tags', 'human_tags']].head(3)
# clean the tags column by separating tags
def join_tags(content):
return ', '.join(content)
# add column to main df
df['human_tags'] = df2['human_tags'].apply(join_tags)
df['human_tags'].value_counts()
# last check
df = df.drop('date_text', axis=1)
df = df.drop('Unnamed: 0', axis=1)
df = df.drop_duplicates(subset=['id'], keep='last')
df.head(3)
print(df.shape)
# export the dataframe
df.to_csv('training_data.csv')
from google.colab import files  # Colab download helper (assumes a Colab runtime)
files.download('training_data.csv')
```
<a href="https://colab.research.google.com/github/patrickcgray/deep_learning_ecology/blob/master/basic_cnn_minst.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Training a Convolutional Neural Network on the MNIST dataset.
### import all necessary python modules
```
'''Trains a simple convnet on the MNIST dataset.
Gets to 99.25% test accuracy after 12 epochs
(there is still a lot of margin for parameter tuning).
16 seconds per epoch on a GRID K520 GPU.
'''
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
import numpy as np # linear algebra
import os
import matplotlib.pyplot as plt
%matplotlib inline
```
### set hyperparameters and get training and testing data formatted
```
batch_size = 128
num_classes = 10
epochs = 12
# input image dimensions
img_rows, img_cols = 28, 28
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
```
### build the model and take a look at the model summary
```
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
```
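The parameter counts reported by `model.summary()` can be checked by hand: a Conv2D layer has one bias per filter plus a kernel per input channel. A quick sketch for the two conv layers above:

```python
# Conv2D parameter count: (kernel_h * kernel_w * in_channels + 1) * filters
def conv2d_params(kh, kw, in_ch, out_ch):
    return (kh * kw * in_ch + 1) * out_ch  # +1 per filter for the bias

p1 = conv2d_params(3, 3, 1, 32)   # first Conv2D on the 1-channel MNIST input
p2 = conv2d_params(3, 3, 32, 64)  # second Conv2D
```

These match the 320 and 18,496 parameters the summary prints for this architecture.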
### compile and train/fit the model
```
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
```
### evaluate the model on the testing dataset
```
score = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
### compare predictions to the input data
```
w=10
h=10
fig=plt.figure(figsize=(8, 8))
columns = 9
rows = 1
indices = np.random.randint(len(x_test), size=(10))
labels = np.argmax(model.predict(x_test[indices]), axis=1)
for i in range(1, columns*rows+1):
fig.add_subplot(rows, columns, i)
plt.imshow(x_test[indices[i-1]].reshape((28, 28)), cmap = 'gray')
plt.axis('off')
plt.text(15,45, labels[i-1], horizontalalignment='center', verticalalignment='center')
plt.show()
```
### code that will allow us to visualize the convolutional filters
```
layer_dict = dict([(layer.name, layer) for layer in model.layers])
# util function to convert a tensor into a valid image
def deprocess_image(x):
# normalize tensor: center on 0., ensure std is 0.1
x -= x.mean()
x /= (x.std() + 1e-5)
x *= 0.1
# clip to [0, 1]
x += 0.5
x = np.clip(x, 0, 1)
# convert to RGB array
x *= 255
#x = x.transpose((1, 2, 0))
x = np.clip(x, 0, 255).astype('uint8')
return x
def vis_img_in_filter(img = np.array(x_train[0]).reshape((1, 28, 28, 1)).astype(np.float64),
layer_name = 'conv2d_2'):
layer_output = layer_dict[layer_name].output
img_ascs = list()
for filter_index in range(layer_output.shape[3]):
# build a loss function that maximizes the activation
# of the nth filter of the layer considered
loss = K.mean(layer_output[:, :, :, filter_index])
# compute the gradient of the input picture wrt this loss
grads = K.gradients(loss, model.input)[0]
# normalization trick: we normalize the gradient
grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5)
# this function returns the loss and grads given the input picture
iterate = K.function([model.input], [loss, grads])
# step size for gradient ascent
step = 5.
img_asc = np.array(img)
# run gradient ascent for 20 steps
for i in range(20):
loss_value, grads_value = iterate([img_asc])
img_asc += grads_value * step
img_asc = img_asc[0]
img_ascs.append(deprocess_image(img_asc).reshape((28, 28)))
if layer_output.shape[3] >= 35:
plot_x, plot_y = 6, 6
elif layer_output.shape[3] >= 23:
plot_x, plot_y = 4, 6
elif layer_output.shape[3] >= 11:
plot_x, plot_y = 2, 6
else:
plot_x, plot_y = 1, 2
fig, ax = plt.subplots(plot_x, plot_y, figsize = (12, 12))
ax[0, 0].imshow(img.reshape((28, 28)), cmap = 'gray')
ax[0, 0].set_title('Input image')
fig.suptitle('Input image and %s filters' % (layer_name,))
fig.tight_layout(pad = 0.3, rect = [0, 0, 0.9, 0.9])
for (x, y) in [(i, j) for i in range(plot_x) for j in range(plot_y)]:
if x == 0 and y == 0:
continue
ax[x, y].imshow(img_ascs[x * plot_y + y - 1], cmap = 'gray')
ax[x, y].set_title('filter %d' % (x * plot_y + y - 1))
ax[x, y].set_axis_off()
#plt.axis('off')
```
### convolutional filters for the first element in the training dataset for the first convolutional layer
```
vis_img_in_filter(img = np.array(x_train[0]).reshape((1, 28, 28, 1)).astype(np.float64), layer_name = 'conv2d')
```
### convolutional filters for the first element in the training dataset for the second convolutional layer
```
vis_img_in_filter(img = np.array(x_train[0]).reshape((1, 28, 28, 1)).astype(np.float64), layer_name = 'conv2d_1')
```
**Sentiment analysis code for Turkish**
```
from google.colab import drive
drive.mount('/content/drive')
import os
os.chdir('./drive/My Drive/')
# read the data with pandas
import pandas as pd
data = pd.read_csv("sentiment_data.csv")
df = data.copy()
df.head()
# 0 -> negative label
# 1 -> positive label
df['Rating'].unique().tolist()
# import the libraries needed for the model
import numpy as np
import pandas as pd
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, Dropout
from tensorflow.python.keras.preprocessing.text import Tokenizer
from tensorflow.python.keras.preprocessing.sequence import pad_sequences
# convert all data and labels to lists
target = df['Rating'].values.tolist()  # negative=0, positive=1
data = df['Review'].values.tolist()  # text data
# split the data into train and test sets
separation = int(len(data) * 0.80)
x_train, x_test = data[:separation], data[separation:]
y_train, y_test = target[:separation], target[separation:]
# number of rows and columns in the dataset
df.shape
# keep the 10,000 most frequent words in the dataset
num_words = 10000
# define a tokenizer with Keras
tokenizer = Tokenizer(num_words=num_words)
# fit the tokenizer on the text data
tokenizer.fit_on_texts(data)
# save the tokenizer
import pickle
with open('turkish_tokenizer_hack.pickle', 'wb') as handle:
    pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
# load the tokenizer
with open('turkish_tokenizer_hack.pickle', 'rb') as handle:
    turkish_tokenizer = pickle.load(handle)
# tokenize the training split
x_train_tokens = turkish_tokenizer.texts_to_sequences(x_train)
x_train[100]
x_train_tokens[100]
# tokenize the test split
x_test_tokens = turkish_tokenizer.texts_to_sequences(x_test)
# Pad the text sequences: RNNs expect a fixed input length, so every
# sequence is padded with zeros up to that fixed size.
num_tokens = [len(tokens) for tokens in x_train_tokens + x_test_tokens]
num_tokens = np.array(num_tokens)
num_tokens.shape
# choose a cutoff length that covers most texts: mean + 2 standard deviations
max_tokens = np.mean(num_tokens) + 2*np.std(num_tokens)
max_tokens = int(max_tokens)
max_tokens
# pad all sequences so every input has the same length
x_train_pad = pad_sequences(x_train_tokens, maxlen=max_tokens)
x_test_pad = pad_sequences(x_test_tokens, maxlen=max_tokens)
# size
print(x_train_pad.shape)
print(x_test_pad.shape)
model = Sequential()  # define the Keras model
embedding_size = 50  # embedding vector size of 50 per word
# Add an embedding layer (vectors are randomly initialized);
# embedding matrix size = num_words * embedding_size -> 10,000 * 50
model.add(Embedding(input_dim=num_words,
                    output_dim=embedding_size,
                    input_length=max_tokens,
                    name='embedding_layer'))
# 3-layer stacked LSTM
model.add(LSTM(units=16, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=8, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=4, return_sequences=False))
model.add(Dropout(0.2))
# dense output layer: a single neuron with sigmoid activation
model.add(Dense(1, activation='sigmoid'))
# Adam optimizer
from tensorflow.python.keras.optimizers import Adam
optimizer = Adam(lr=1e-3)
# use the configured optimizer (passing the string 'adam' instead would
# ignore the learning rate set above)
model.compile(loss='binary_crossentropy',
              optimizer=optimizer,
              metrics=['accuracy'])
# model summary
model.summary()
# epochs -> number of passes over the training data
# batch_size -> number of samples per gradient update
model.fit(x_train_pad, y_train, epochs=10, batch_size=256)
# model results
result = model.evaluate(x_test_pad, y_test)
result
# accuracy percentage
accuracy = (result[1]) * 100
accuracy
# test reviews (inputs)
text1 = "böyle bir şeyi kabul edemem"
text2 = "tasarımı güzel ancak ürün açılmış tavsiye etmem"
text3 = "bu işten çok sıkıldım artık"
text4 = "kötü yorumlar gözümü korkutmuştu ancak hiçbir sorun yaşamadım teşekkürler"
text5 = "yaptığın işleri hiç beğenmiyorum"
text6 = "tam bir fiyat performans ürünü beğendim"
text7 = "Bu ürünü beğenmedim"
texts = [text1, text2, text3, text4, text5, text6, text7]
tokens = turkish_tokenizer.texts_to_sequences(texts)
tokens
# padding
tokens_pad = pad_sequences(tokens, maxlen=max_tokens)
# the model predicts which sentiment each review is closest to
model.predict(tokens_pad)
for i in model.predict(tokens_pad):
    if i < 0.5:
        print("negatif")  # negative review
    else:
        print("pozitif")  # positive review
from keras.models import load_model
model.save('hack_model.h5')  # save the model
```
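The mean + 2·std cutoff used for `max_tokens` trades truncation against padding waste; for roughly bell-shaped length distributions it keeps about 95% of texts untruncated. A sanity-check sketch on a toy length distribution (the array below stands in for the notebook's `num_tokens`):

```python
import numpy as np

# Toy token-length distribution standing in for num_tokens above.
num_tokens = np.array([5, 8, 12, 7, 30, 9, 11, 6, 10, 8])
max_tokens = int(np.mean(num_tokens) + 2 * np.std(num_tokens))
covered = (num_tokens <= max_tokens).mean()  # fraction of texts not truncated
```

Here the single 30-token outlier is the only text that would be truncated.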
```
import numpy as np
import matplotlib as mpl
from matplotlib import pyplot as plt
import h5py
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
```
## Set Training Label
```
label = 'NewSim_Type2'
```
## Find all files
```
from glob import glob
files_loc = "/gpfs/slac/atlas/fs1/u/rafaeltl/Muon/toy_sim/si-mu-lator/out_files/"
files_bkg = glob(files_loc+'*Muon*bkgr*.h5')
```
## Open files
```
import dataprep
dmat, Y, Y_mu, Y_hit = dataprep.make_data_matrix(files_bkg, max_files=100)
fig, axs = plt.subplots(3, 4, figsize=(20,10))
axs = axs.flatten()
my_mask = dmat[:,:,8] > 0
print(my_mask.shape)
for ivar in range(dmat.shape[2]):
this_var = dmat[:,:,ivar]
this_max = np.max(this_var)
this_min = np.min(this_var)
if ivar == 0:
this_min = -0.02
this_max = 0.02
if ivar == 1:
this_min = -1
this_max = 10
if ivar == 2:
this_min = -0.01
this_max = 0.01
if ivar == 3:
this_min = -1
this_max = 10
if ivar == 4:
this_min = -1
this_max = 10
if ivar == 5:
this_min = -1
this_max = 10
if ivar == 6:
this_min = -1
this_max = 10
if ivar == 7:
this_min = -1
this_max = 10
if ivar == 8:
this_min = -1
this_max = 9
if ivar == 9:
this_min = -150
this_max = 150
if this_min == -99:
this_min = -1
axs[ivar].hist( this_var[(Y_mu == 0)].flatten()[this_var[(Y_mu == 0)].flatten() != -99], histtype='step', range=(this_min, this_max), bins=50 )
axs[ivar].hist( this_var[(Y_mu == 1)].flatten()[this_var[(Y_mu == 1)].flatten() != -99], histtype='step', range=(this_min, this_max), bins=50 )
plt.show()
vars_of_interest = np.zeros(11, dtype=bool)
vars_of_interest[0] = 1
vars_of_interest[2] = 1
vars_of_interest[8] = 1
vars_of_interest[9] = 1
X = dmat[:,:,vars_of_interest]
Y_mu.sum()
```
## Define network
```
import sys
sys.path.insert(0, '../')
import models
lambs = [0, 1, 10]
mymods = []
for ll in lambs:
mymodel = models.muon_nn_type2( (X.shape[1],X.shape[2]), ll)
# mymodel = models.muon_nn_selfatt( (X.shape[1],X.shape[2]), ll)
mymods.append(mymodel)
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
histories = []
for mod,ll in zip(mymods, lambs):
history = mod.fit( X_train, Y_train,
callbacks = [
EarlyStopping(monitor='val_loss', patience=1000, verbose=1),
ModelCheckpoint(f'weights/{label}_ll_{ll}.h5', monitor='val_loss', verbose=True, save_best_only=True) ],
epochs=3000,
validation_split = 0.3,
batch_size=1024*100,
verbose=0
)
mod.load_weights(f'weights/{label}_ll_{ll}.h5')
histories.append(history)
for history,ll in zip(histories,lambs):
plt.Figure()
for kk in history.history.keys():
plt.plot(history.history[kk], label=kk)
plt.legend()
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.title(f'Lambda = {ll}')
plt.savefig(f'plots/{label}_loss_ll_{ll}.pdf')
plt.show()
Y_test_hits = Y_test[:,1:]
Y_test_mu = Y_test[:,0]
Y_test_hits_f_mu = Y_test_hits[Y_test_mu==1].flatten()
Y_test_hits_f_nomu = Y_test_hits[Y_test_mu==0].flatten()
from sklearn.metrics import roc_curve
for mod,ll in zip(mymods,lambs):
Y_pred = mod.predict(X_test, verbose=1)
Y_pred_hits = Y_pred[:,1:]
Y_pred_mu = Y_pred[:,0]
Y_pred_hits_f_mu = Y_pred_hits[Y_test_mu==1].flatten()
Y_pred_hits_f_nomu = Y_pred_hits[Y_test_mu==0].flatten()
plt.figure()
plt.hist(Y_pred_hits_f_mu[Y_test_hits_f_mu==0], histtype='step', bins=50, range=(0,1),
label='Muon present, non-muon hits')
plt.hist(Y_pred_hits_f_mu[Y_test_hits_f_mu==1], histtype='step', bins=50, range=(0,1),
label='Muon present, muon hits')
plt.yscale('log')
plt.title(f'Lambda = {ll}')
plt.legend()
plt.savefig(f'plots/{label}_hits_pred_ll_{ll}.pdf')
plt.show()
plt.figure()
plt.hist(Y_pred_mu[Y_test_mu==0], histtype='step', bins=50, range=(0,1))
plt.hist(Y_pred_mu[Y_test_mu==1], histtype='step', bins=50, range=(0,1))
plt.yscale('log')
plt.title(f'Lambda = {ll}')
plt.savefig(f'plots/{label}_muon_pred_ll_{ll}.pdf')
plt.show()
fig, axs = plt.subplots(1, 2, figsize=(16, 8) )
axs = axs.flatten()
coli = 3
icol = 0
for mod,ll in zip(mymods,lambs):
Y_pred = mod.predict(X_test, verbose=1)
Y_pred_hits = Y_pred[:,1:]
Y_pred_mu = Y_pred[:,0]
Y_pred_hits_f_mu = Y_pred_hits[Y_test_mu==1].flatten()
Y_pred_hits_f_nomu = Y_pred_hits[Y_test_mu==0].flatten()
fpr_hits, tpr_hits, _ = roc_curve(Y_test_hits_f_mu[Y_test_hits_f_mu>-90], Y_pred_hits_f_mu[Y_test_hits_f_mu>-90])
axs[0].semilogy(tpr_hits, 1./fpr_hits, color=f'C{coli+icol}', label=f'lambda = {ll}')
fpr_mus, tpr_mus, _ = roc_curve(Y_test_mu, Y_pred_mu)
axs[1].semilogy(tpr_mus, 1./fpr_mus, color=f'C{coli+icol}', label=f'lambda = {ll}')
icol+=1
axs[0].set_ylabel('Background hits rejection')
axs[0].set_xlabel('Signal hits efficiency')
axs[0].legend()
axs[0].set_xlim(-0.01, 1.01)
axs[0].set_ylim(0.5, 1e6)
axs[1].set_ylabel('Rejection of events with no muons')
axs[1].set_xlabel('Efficiency of events with muons')
axs[1].set_xlim(0.9,1.01)
axs[1].set_ylim(0.5, 1e5)
axs[1].legend()
plt.savefig(f'plots/{label}_ROCs.pdf', transparent=True)
plt.show()
```
| github_jupyter |
# Vectors, Matrices, and Arrays
# Loading Data
## Loading a Sample Dataset
```
# Load scikit-learn's datasets
from sklearn import datasets
# Load digit dataset
digits = datasets.load_digits()
# Create features matrix
features = digits.data
# Create target matrix
target = digits.target
# View first observation
print(features[0])
```
## Creating a Simulated Dataset
```
# For Regression
# Load library
from sklearn.datasets import make_regression
# Generate features matrix, target vector, and the true coefficients
features, target, coefficients = make_regression(n_samples = 100,
n_features = 3,
n_informative = 3,
n_targets = 1,
noise = 0.0,
coef = True,
random_state = 1)
# View feature matrix and target vector
print("Feature Matrix\n",features[:3])
print("Target Vector\n",target[:3])
# For Classification
# Load library
from sklearn.datasets import make_classification
# Generate features matrix and target vector
features, target = make_classification(n_samples = 100,
n_features = 3,
n_informative = 3,
n_redundant = 0,
n_classes = 2,
weights = [.25, .75],
random_state = 1)
# View feature matrix and target vector
print("Feature Matrix\n",features[:3])
print("Target Vector\n",target[:3])
# For Clustering
# Load library
from sklearn.datasets import make_blobs
# Generate features matrix and target vector
features, target = make_blobs(n_samples = 100,
n_features = 2,
centers = 3,
cluster_std = 0.5,
shuffle = True,
random_state = 1)
# View feature matrix and target vector
print("Feature Matrix\n",features[:3])
print("Target Vector\n",target[:3])
# Load library
import matplotlib.pyplot as plt
%matplotlib inline
# View scatterplot
plt.scatter(features[:,0], features[:,1], c=target)
plt.show()
```
## Loading a CSV File
```
# Load a library
import pandas as pd
# Create URL
url = 'https://people.sc.fsu.edu/~jburkardt/data/csv/airtravel.csv'
# Load dataset
dataframe = pd.read_csv(url)
# View first two rows
dataframe.head(2)
```
## Loading an Excel File
```
# Load a library
import pandas as pd
# Create URL
url = 'https://dornsife.usc.edu/assets/sites/298/docs/ir211wk12sample.xls'
# Load dataset
dataframe = pd.read_excel(url, sheet_name=0, header=1)
# View first two rows
dataframe.head(2)
```
## Loading a JSON File
```
# Load a library
import pandas as pd
# Create URL
url = 'http://ergast.com/api/f1/2004/1/results.json'
# Load dataset
dataframe = pd.read_json(url, orient = 'columns')
# View first two rows
dataframe.head(2)
# semistructured JSON to a pandas DataFrame
#pd.json_normalize
```
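The `pd.json_normalize` note above deserves a small demonstration; a sketch with made-up nested records (the field names are illustrative, not taken from the Ergast API):

```python
# Flatten nested ("semistructured") JSON-like records into a DataFrame.
# The records below are hypothetical, for illustration only.
import pandas as pd

records = [
    {"id": 1, "driver": {"name": "Schumacher", "points": 10}},
    {"id": 2, "driver": {"name": "Barrichello", "points": 8}},
]

# Nested keys become dot-separated column names
flat = pd.json_normalize(records)
print(sorted(flat.columns))  # ['driver.name', 'driver.points', 'id']
```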
## Querying a SQL Database
```
# Load a library
import pandas as pd
from sqlalchemy import create_engine
# Create a connection to the database (SQLite URLs need three slashes)
database_connection = create_engine('sqlite:///sample.db')
# Load dataset
dataframe = pd.read_sql_query('SELECT * FROM data', database_connection)
# View first two rows
dataframe.head(2)
```
| github_jupyter |

## Welcome to The QuantConnect Research Page
#### Refer to this page for documentation https://www.quantconnect.com/docs#Introduction-to-Jupyter
#### Contribute to this template file https://github.com/QuantConnect/Lean/blob/master/Jupyter/BasicQuantBookTemplate.ipynb
## QuantBook Basics
### Start QuantBook
- Add the references and imports
- Create a QuantBook instance
```
%matplotlib inline
# Imports
from clr import AddReference
AddReference("System")
AddReference("QuantConnect.Common")
AddReference("QuantConnect.Jupyter")
AddReference("QuantConnect.Indicators")
from System import *
from QuantConnect import *
from QuantConnect.Data.Market import TradeBar, QuoteBar
from QuantConnect.Jupyter import *
from QuantConnect.Indicators import *
from datetime import datetime, timedelta
import matplotlib.pyplot as plt
import pandas as pd
# Create an instance
qb = QuantBook()
```
### Selecting Asset Data
Checkout the QuantConnect [docs](https://www.quantconnect.com/docs#Initializing-Algorithms-Selecting-Asset-Data) to learn how to select asset data.
```
spy = qb.AddEquity("SPY")
eur = qb.AddForex("EURUSD")
```
### Historical Data Requests
We can use the QuantConnect API to make Historical Data Requests. The data will be presented as a multi-index pandas.DataFrame where the first index is the Symbol.
For more information, please follow the [link](https://www.quantconnect.com/docs#Historical-Data-Historical-Data-Requests).
```
# Gets historical data from the subscribed assets, the last 360 datapoints with daily resolution
h1 = qb.History(360, Resolution.Daily)
# Plot closing prices from "SPY"
h1.loc["SPY"]["close"].plot()
# Gets historical data from the subscribed assets, from the last 30 days with daily resolution
h2 = qb.History(timedelta(30), Resolution.Daily)
# Plot high prices from "EURUSD"
h2.loc["EURUSD"]["high"].plot()
# Gets historical data from the subscribed assets, between two dates with daily resolution
h3 = qb.History(spy.Symbol, datetime(2014,1,1), datetime.now(), Resolution.Daily)
# Only fetches historical data from a desired symbol
h4 = qb.History(spy.Symbol, 360, Resolution.Daily)
# or qb.History("SPY", 360, Resolution.Daily)
# Only fetches historical data from a desired symbol
# When we are not dealing with equity, we must use the generic method
h5 = qb.History[QuoteBar](eur.Symbol, timedelta(30), Resolution.Daily)
# or qb.History[QuoteBar]("EURUSD", timedelta(30), Resolution.Daily)
```
### Historical Options Data Requests
- Select the option data
- Set the filter; otherwise the default `SetFilter(-1, 1, timedelta(0), timedelta(35))` will be used
- Get the OptionHistory, an object that has information about the historical options data
```
goog = qb.AddOption("GOOG")
goog.SetFilter(-2, 2, timedelta(0), timedelta(180))
option_history = qb.GetOptionHistory(goog.Symbol, datetime(2017, 1, 4))
print(option_history.GetStrikes())
print(option_history.GetExpiryDates())
h6 = option_history.GetAllData()
```
### Get Fundamental Data
- *GetFundamental([symbol], selector, start_date = datetime(1998,1,1), end_date = datetime.now())*
We will get a pandas.DataFrame with fundamental data.
```
data = qb.GetFundamental(["AAPL","AIG","BAC","GOOG","IBM"], "ValuationRatios.PERatio")
data
```
### Indicators
We can easily get the indicator of a given symbol with QuantBook.
For all indicators, please checkout QuantConnect Indicators [Reference Table](https://www.quantconnect.com/docs#Indicators-Reference-Table)
```
# Example with BB, it is a datapoint indicator
# Define the indicator
bb = BollingerBands(30, 2)
# Gets historical data of indicator
bbdf = qb.Indicator(bb, "SPY", 360, Resolution.Daily)
# drop undesired fields
bbdf = bbdf.drop('standarddeviation', axis=1)
# Plot
bbdf.plot()
# For EURUSD
bbdf = qb.Indicator(bb, "EURUSD", 360, Resolution.Daily)
bbdf = bbdf.drop('standarddeviation', axis=1)
bbdf.plot()
# Example with ADX, it is a bar indicator
adx = AverageDirectionalIndex("adx", 14)
adxdf = qb.Indicator(adx, "SPY", 360, Resolution.Daily)
adxdf.plot()
# For EURUSD
adxdf = qb.Indicator(adx, "EURUSD", 360, Resolution.Daily)
adxdf.plot()
# SMA cross:
symbol = "EURUSD"
# Get History
hist = qb.History[QuoteBar](symbol, 500, Resolution.Daily)
# Get the fast moving average
fast = qb.Indicator(SimpleMovingAverage(50), symbol, 500, Resolution.Daily)
# Get the slow moving average
slow = qb.Indicator(SimpleMovingAverage(200), symbol, 500, Resolution.Daily)
# Remove undesired columns and rename others
fast = fast.drop('rollingsum', axis=1).rename(columns={'simplemovingaverage': 'fast'})
slow = slow.drop('rollingsum', axis=1).rename(columns={'simplemovingaverage': 'slow'})
# Concatenate the information and plot
df = pd.concat([hist.loc[symbol]["close"], fast, slow], axis=1).dropna(axis=0)
df.plot()
# Get indicator defining a lookback period in terms of timedelta
ema1 = qb.Indicator(ExponentialMovingAverage(50), "SPY", timedelta(100), Resolution.Daily)
# Get indicator defining a start and end date
ema2 = qb.Indicator(ExponentialMovingAverage(50), "SPY", datetime(2016,1,1), datetime(2016,10,1), Resolution.Daily)
ema = pd.concat([ema1, ema2], axis=1)
ema.plot()
rsi = RelativeStrengthIndex(14)
# Selects which field we want to use in our indicator (default is Field.Close)
rsihi = qb.Indicator(rsi, "SPY", 360, Resolution.Daily, Field.High)
rsilo = qb.Indicator(rsi, "SPY", 360, Resolution.Daily, Field.Low)
rsihi = rsihi.rename(columns={'relativestrengthindex': 'high'})
rsilo = rsilo.rename(columns={'relativestrengthindex': 'low'})
rsi = pd.concat([rsihi['high'], rsilo['low']], axis=1)
rsi.plot()
```
| github_jupyter |
<a href="https://colab.research.google.com/github/jonkrohn/ML-foundations/blob/master/notebooks/7-algos-and-data-structures.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Algorithms & Data Structures
This class, *Algorithms & Data Structures*, introduces the most important computer science topics for machine learning, enabling you to design and deploy computationally efficient data models.
Through the measured exposition of theory paired with interactive examples, you’ll develop a working understanding of all of the essential data structures across the list, dictionary, tree, and graph families. You’ll also learn the key algorithms for working with these structures, including those for searching, sorting, hashing, and traversing data.
The content covered in this class is itself foundational for the *Optimization* class of the *Machine Learning Foundations* series.
Over the course of studying this topic, you'll:
* Use “Big O” notation to characterize the time efficiency and space efficiency of a given algorithm, enabling you to select or devise the most sensible approach for tackling a particular machine learning problem with the hardware resources available to you.
* Get acquainted with the entire range of the most widely-used Python data structures, including list-, dictionary-, tree-, and graph-based structures.
* Develop an understanding of all of the essential algorithms for working with data, including those for searching, sorting, hashing, and traversing.
**Note that this Jupyter notebook is not intended to stand alone. It is the companion code to a lecture or to videos from Jon Krohn's [Machine Learning Foundations](https://github.com/jonkrohn/ML-foundations) series, which offer detail on the following:**
*Segment 1: Introduction to Data Structures and Algorithms*
* A Brief History of Data and Data Structures
* A Brief History of Algorithms
* “Big O” Notation for Time and Space Complexity
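The notebook's code is still to come, but as a minimal preview of what Big O means in practice, one might compare membership tests on a list (O(n), a linear scan) and a set (O(1) on average, hash-based):

```python
# A tiny illustration of Big O in practice: membership testing is O(n)
# for a Python list but O(1) on average for a set, so the gap grows with n.
import timeit

n = 100_000
as_list = list(range(n))
as_set = set(as_list)

# Search for the last element, forcing the list to scan everything
t_list = timeit.timeit(lambda: n - 1 in as_list, number=100)
t_set = timeit.timeit(lambda: n - 1 in as_set, number=100)
print(t_list > t_set)  # the linear scan is slower
```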
*Segment 2: Lists and Dictionaries*
* List-Based Data Structures: Arrays, Linked Lists, Stacks, Queues, and Deques
* Searching and Sorting: Binary, Bubble, Merge, and Quick
* Dictionaries: Sets and Maps
* Hashing: Hash Tables and Hash Maps
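As a small, self-contained preview of the searching material, a sketch of binary search on a sorted list (O(log n) comparisons):

```python
# A minimal binary search on a sorted list, O(log n) comparisons.
def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # target is in the upper half
        else:
            hi = mid - 1   # target is in the lower half
    return -1  # not found

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
print(binary_search([1, 3, 5, 7, 9], 4))  # -1
```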
*Segment 3: Trees and Graphs*
* Trees: Binary Search, Heaps, and Self-Balancing
* Graphs: Terminology, Coded Representations, Properties, Traversals, and Paths
* Resources for Further Study of Data Structures & Algorithms
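And as a preview of the traversal material, a minimal breadth-first search over an adjacency-list graph:

```python
# A minimal breadth-first traversal over an adjacency-list graph.
from collections import deque

def bfs_order(graph, start):
    seen = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs_order(graph, "A"))  # ['A', 'B', 'C', 'D']
```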
**Code coming in July 2020... Watch this space**
| github_jupyter |
```
import itertools as it
import sys
import os
#if len(sys.argv) != 3:
# print("Usage: python3 " + sys.argv[0] + " cluster.xyz" + " mode")
# print("mode=1: no cp")
# print("mode=2, whole cluster cp")
# print("mode=3, individual cluster cp")
# sys.exit(1)
#fxyz = sys.argv[1]
#mode = int(sys.argv[2])
fxyz = "cluster.xyz"
atlist = [3,3,3,3,3]
chglist = [0,0,0,0,0]
mode = 3
#atlist = [3,3]
# Function to write the different combinations from an xyz file
def write_combs(xyz, use_cp, atlist):
f = open(xyz,'r')
nat = f.readline().split()[0]
f.readline()
mons = []
for i in range(len(atlist)):
m = []
for j in range(atlist[i]):
line = f.readline()
m.append(line)
mons.append(m)
comb = []
monsN = range(len(atlist))
for i in range(1,len(atlist) + 1):
comb.append(list(it.combinations(monsN,i)))
if not use_cp:
for i in range(len(atlist)):
fname = str(i + 1) + "b.xyz"
ff = open(fname,'w')
for j in range(len(comb[i])):
inat = 0
w=" "
for k in range(len(comb[i][j])):
inat += atlist[comb[i][j][k]]
w += " " + str(comb[i][j][k] + 1)
ff.write(str(inat) + "\n")
ff.write(w + "\n")
for k in range(len(comb[i][j])):
for l in mons[comb[i][j][k]]:
ff.write(l)
ff.close()
else:
for i in range(len(atlist)):
# Counterpoise
fname = str(i + 1) + "b.xyz"
ff = open(fname,'w')
for j in range(len(comb[i])):
w=" "
for k in range(len(comb[i][j])):
w += " " + str(comb[i][j][k] + 1)
ff.write(str(nat) + "\n")
ff.write(w + "\n")
for k in range(len(atlist)):
if not k in comb[i][j]:
for l in mons[k]:
line = l.strip().split()
line[0] = line[0] + "1"
for mm in range(len(line)):
ff.write(line[mm] + " ")
ff.write("\n")
for k in range(len(comb[i][j])):
for l in mons[comb[i][j][k]]:
ff.write(l)
ff.close()
return comb
def write_xyz(xyz,atlist,chglist,comb):
f = open(xyz,'r')
nat = f.readline().split()[0]
f.readline()
mons = []
for i in range(len(atlist)):
m = []
for j in range(atlist[i]):
line = f.readline()
m.append(line)
mons.append(m)
monsN = range(len(atlist))
for i in range(len(atlist)):
fname = str(i + 1) + "b.xyz"
ff = open(fname,'r')
foldname = str(i + 1) + "b"
os.mkdir(foldname)
for j in range(len(comb[i])):
os.mkdir(foldname + "/" + str(j + 1))
inat = int(ff.readline().split()[0])
mns = ff.readline()
fx = open(foldname + "/" + str(j + 1) + "/input.xyz", 'w')
fx.write(str(inat) + "\n")
fx.write(mns)
for k in range(inat):
fx.write(ff.readline())
fx.close()
fx = open(foldname + "/" + str(j + 1) + "/input.charge", 'w')
c = 0
for k in range(len(comb[i][j])):
c += chglist[comb[i][j][k]]
fx.write(str(c) + '\n')
fx.close()
ff.close()
# Obtain the different configurations
if mode == 1:
comb = write_combs(fxyz, False, atlist)
write_xyz(fxyz,atlist,chglist,comb)
elif mode == 2:
comb = write_combs(fxyz, True, atlist)
write_xyz(fxyz,atlist,chglist,comb)
elif mode == 3:
comb = write_combs(fxyz, False, atlist)
write_xyz(fxyz,atlist,chglist,comb)
for i in range(len(atlist)):
foldname = str(i + 1) + "b"
for j in range(len(comb[i])):
fi = foldname + "/" + str(j + 1)
os.chdir(fi)
atl = []
chgl = []
for k in range(len(comb[i][j])):
atl.append(atlist[comb[i][j][k]])
chgl.append(chglist[comb[i][j][k]])
cmb = write_combs("input.xyz",True,atl)
write_xyz("input.xyz",atl,chgl,cmb)
os.chdir("../../")
else:
print("Mode " + str(mode) + " not defined")
sys.exit(1)
if mode == 1:
print("\nYou ran the XYZ preparation without counterpoise correction\n")
elif mode == 2:
print("\nYou ran the XYZ preparation with counterpoise correction for the whole cluster\n")
elif mode == 3:
print("\nYou ran the XYZ preparation with counterpoise correction for individual clusters\n")
if mode == 1 or mode == 2:
a = """
Now you have all the XYZ in the 1b/1, 1b/2 ... , 2b... folders
Please generate appropriate inputs, run the calculations, and save
the TOTAL ENERGY in the file input.energy inside each folder.
Then run the second part of the script.
"""
elif mode == 3:
a = """
Now you have all the XYZ in the 1b/1, 1b/2 ... , 2b... folders
Please generate appropriate inputs, run the calculations, and save
the TOTAL ENERGY in the file input.energy inside each folder.
Inside each one of the folders, there is a new tree that contains
the coordinates with individual cluster counterpoise correction.
Please generate appropriate inputs, run the calculations, and save
the TOTAL ENERGY in the file input.energy inside each folder.
Then run the second part of the script.
"""
print(a)
```
| github_jupyter |
### ***needs cleaning***
```
import pandas as pd
import numpy as np
import sys
import os
import itertools
import time
import random
#import utils
sys.path.insert(0, '../utils/')
from utils_preprocess_v3 import *
from utils_modeling_v9 import *
from utils_plots_v2 import *
#sklearn
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# solvers and parallelism used below
import scipy.linalg
import cvxpy as cp
import pycasso
from joblib import Parallel, delayed
start_time = time.time()
data = pd.read_csv('../data/datasets_processed/OpenPBTA_data_mean.csv', index_col='Unnamed: 0', dtype = 'unicode')
data = data.T.reset_index().rename(columns = {'index' : 'node'})
response = pd.read_csv('../data/datasets_processed/OpenPBTA_response.csv', index_col='Kids_First_Biospecimen_ID')
interactome = pd.read_csv('../data/interactomes/inbiomap_processed.txt', sep = '\t')
# get nodes from data and graph
data_nodes = data['node'].tolist()
interactome_nodes = list(set(np.concatenate((interactome['node1'], interactome['node2']))))
# organize data
organize = Preprocessing()
save_location = '../data/reduced_interactomes/reduced_interactome_OpenPBTA.txt'
organize.transform(data_nodes, interactome_nodes, interactome, data, save_location, load_graph = True)
# extract info from preprocessing
X = organize.sorted_X.T.values
y = response.values.reshape(-1,1)
L_norm = organize.L_norm
L = organize.L
g = organize.g
num_to_node = organize.num_to_node
# split for training
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
# scaling X
scaler_X = StandardScaler()
X_train = scaler_X.fit_transform(X_train)
X_test = scaler_X.transform(X_test)
# scaling y
scaler_y = StandardScaler()
y_train = scaler_y.fit_transform(y_train).reshape(-1)
y_test = scaler_y.transform(y_test).reshape(-1)
val_1, vec_1 = scipy.linalg.eigh(L_norm.toarray())
# shift the spectrum to be strictly positive so the Cholesky factorization succeeds
val_zeroed = val_1 - min(val_1) + 1e-8
L_rebuild = vec_1.dot(np.diag(val_zeroed)).dot(np.linalg.inv(vec_1))
X_train_lower = np.linalg.cholesky(L_rebuild)
X_train_lower.dot(X_train_lower.T).sum()
L_norm.sum()
np.save('L_half.npy', X_train_lower)
```
# Lasso + LapRidge
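Spelled out, the objective encoded in `loss_fn` below combines a least-squares fit, a lasso ($\ell_1$) penalty, and a Laplacian ridge penalty:

$$\min_{\beta}\;\frac{1}{2n}\|X\beta - y\|_2^2 \;+\; \alpha_1\|\beta\|_1 \;+\; \alpha_2\,\beta^{T} L \beta$$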
```
# hyperparameters
alpha1_list = np.logspace(-1,0,15)
alpha2_list = np.logspace(-1,2,15)
threshold_list = np.logspace(-3,-1,10)
max_features = 10
alpha_pairs = list(itertools.product(alpha1_list, alpha2_list))
def loss_fn(X,Y, L, alpha1, alpha2, beta):
return 0.5/(len(X)) * cp.norm2(cp.matmul(X, beta) - Y)**2 + \
alpha1 * cp.norm1(beta) + \
alpha2 * cp.sum(cp.quad_form(beta,L))
def run(pair, X_train, y_train, L_norm):
beta = cp.Variable(X_train.shape[1])
alpha1 = cp.Parameter(nonneg=True)
alpha2 = cp.Parameter(nonneg=True)
alpha1.value = pair[0]
alpha2.value = pair[1]
problem = cp.Problem(cp.Minimize(loss_fn(X_train, y_train, L_norm, alpha1, alpha2, beta )))
problem.solve(solver=cp.SCS, verbose=True, max_iters=50000)
np.save('openpbta/' + str(pair) + '.npy', beta.value)
return beta.value
betas = Parallel(n_jobs=8, verbose=10)(delayed(run)(alpha_pairs[i],
X_train,
y_train,
L_norm) for i in range(len(alpha_pairs)))
# load betas
beta_order = [str(i) + '.npy' for i in alpha_pairs]
betas = [np.load('openpbta/' + i) for i in beta_order]
feats = [getFeatures(None, i, threshold=0.001, max_features=10) for i in betas]
regr = LinearRegression()
scores = [getScoring(regr, X_train, y_train, X_test, y_test, i, None) for i in feats]
train_scores = [i[0] for i in scores]
test_scores = [i[1] for i in scores]
gridsearch_results = pd.DataFrame(np.array(test_scores), columns = ['Test MSE'])
getGridsearchPlot(gridsearch_results, alpha1_list, alpha2_list, save_location = None)
np.where(test_scores == min(test_scores))
min(test_scores)
getTranslatedNodes(feats[17], betas[17][feats[17]], num_to_node, g, )
```
# MCP + LapRidge
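pycasso handles the nonconvex penalty itself, while the Laplacian ridge term is folded into the least-squares part by row-augmenting the design matrix with a scaled Cholesky factor and zero-padding the target. The stacking trick used below relies on the identity (with $L = CC^{T}$ from the Cholesky factorization computed earlier):

$$\left\|\begin{pmatrix}X\\ \sqrt{\alpha_2}\,C^{T}\end{pmatrix}\beta - \begin{pmatrix}y\\ 0\end{pmatrix}\right\|_2^2 = \|X\beta - y\|_2^2 + \alpha_2\,\beta^{T}L\beta$$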
```
# define training params
alpha1_list = np.logspace(-3,-2,15)
alpha2_list = np.logspace(-1,2,15)
threshold_list = np.logspace(-3,-1,10)
max_features = 10
alpha_list_pairs = list(itertools.product(alpha1_list, alpha2_list))
results = {}
feats_list = []
betas = []
for i in alpha2_list:
X_train_new = np.vstack((X_train, np.sqrt(i)*X_train_lower.T))  # rows C^T give beta^T C C^T beta = beta^T L beta
y_train_new = np.concatenate((y_train, np.zeros(len(X_train_lower))))
s = pycasso.Solver(X_train_new, y_train_new, lambdas=alpha1_list, penalty = 'mcp')
s.train()
beta = s.coef()['beta']
betas += [i for i in beta]
feats = [getFeatures(None, i, threshold=0.001, max_features=10) for i in beta]
feats_list += feats
print([len(i) for i in feats])
regr = LinearRegression()
scores = [getScoring(regr, X_train, y_train, X_test, y_test, i, None) for i in feats]
results[i] = scores
train_scores = []
test_scores = []
for k,v in results.items():
train_scores += [i[0] for i in v]
test_scores += [i[1] for i in v]
gridsearch_results = pd.DataFrame(np.array(test_scores), columns = ['Test MSE'])
getGridsearchPlot(gridsearch_results, alpha1_list, alpha2_list, save_location = None)
min(test_scores)
np.where(test_scores == min(test_scores))
getTranslatedNodes(feats_list[75], betas[75][feats_list[75]], num_to_node, g)
```
# SCAD + LapRidge
```
# define training params
alpha1_list = np.logspace(-3,-2,15)
alpha2_list = np.logspace(-1,2,15)
threshold_list = np.logspace(-3,-1,10)
max_features = 10
alpha_list_pairs = list(itertools.product(alpha1_list, alpha2_list))
results = {}
feats_list = []
betas = []
for i in alpha2_list:
X_train_new = np.vstack((X_train, np.sqrt(i)*X_train_lower.T))  # rows C^T give beta^T C C^T beta = beta^T L beta
y_train_new = np.concatenate((y_train, np.zeros(len(X_train_lower))))
s = pycasso.Solver(X_train_new, y_train_new, lambdas=alpha1_list, penalty = 'scad')
s.train()
beta = s.coef()['beta']
betas += [i for i in beta]
feats = [getFeatures(None, i, threshold=0.001, max_features=10) for i in beta]
feats_list += feats
print([len(i) for i in feats])
regr = LinearRegression()
scores = [getScoring(regr, X_train, y_train, X_test, y_test, i, None) for i in feats]
results[i] = scores
train_scores = []
test_scores = []
for k,v in results.items():
train_scores += [i[0] for i in v]
test_scores += [i[1] for i in v]
gridsearch_results = pd.DataFrame(np.array(test_scores), columns = ['Test MSE'])
getGridsearchPlot(gridsearch_results, alpha1_list, alpha2_list, save_location = None)
min(test_scores)
np.where(test_scores == min(test_scores))
getTranslatedNodes(feats_list[201], betas[201][feats_list[201]], num_to_node, g)
```
| github_jupyter |
```
import numpy as np
import pandas as pd
import xarray as xr
import zarr
import math
import glob
import pickle
import statistics
import scipy.stats as stats
from sklearn.neighbors import KernelDensity
import dask
import seaborn as sns
import matplotlib.pyplot as plt
def getrange(numbers):
return max(numbers) - min(numbers)
def get_files():
models = glob.glob("/terra/data/cmip5/global/historical/*")
avail={}
for model in models:
zg = glob.glob(str(model)+"/r1i1p1/day/native/zg*")
try:
test = zg[0]
avail[model.split('/')[-1]] = zg
except:
pass
return avail
files = get_files()
files['NOAA'] = glob.glob("/terra/data/reanalysis/global/reanalysis/NOAA/20thC/r1/day/native/z_day*")
files['ERA5'] = glob.glob("/terra/data/reanalysis/global/reanalysis/ECMWF/ERA5/6hr/native/zg*")
results={}
for model in files.keys():
print(model)
x = xr.open_mfdataset(files[model])
if model == 'NOAA':
x = x.rename({'hgt':'zg'})
x = x.rename({'level':'plev'})
x = x.sel(plev=850)
x = x.sel(time=slice('1950','2005'))
elif model == 'ERA5':
x = x.rename({'latitude':'lat'})
x = x.rename({'longitude':'lon'})
x = x.rename({'level':'plev'})
x = x.sel(plev=850)
x = x.sel(time=slice('1979','2005'))
else:
x = x.sel(plev=85000)
x = x.sel(time=slice('1950','2005'))
x = x.load()
if model == 'ERA5':
x = x.sel(lat=slice(0,-60))
else:
x = x.sel(lat=slice(-60,0))
x = x[['zg']]
x = x.assign_coords(lon=(((x.lon + 180) % 360) - 180))
with dask.config.set(**{'array.slicing.split_large_chunks': True}):
x = x.sortby(x.lon)
x = x.sel(lon=slice(-50,20))
x = x.resample(time="QS-DEC").mean(dim="time",skipna=True)
x = x.load()
x['maxi']=x.zg
for i in range(len(x.time)):
x.maxi[i] = x.zg[i].where((x.zg[i]==np.max(x.zg[i])))
east=[]
south=[]
pres=[]
for i in range(len(x.time)):
ids = np.argwhere(~np.isnan(x.maxi[i].values))
latsid = [item[0] for item in ids]
lonsid = [item[1] for item in ids]
east.append(x.lon.values[np.max(lonsid)])
south.append(x.lat.values[np.max(latsid)])
pres.append(x.maxi.values[i][np.max(latsid)][np.max(lonsid)])
results[model]=pd.DataFrame(np.array([x.time.values,east,south,pres]).T,columns=['time','east','south','pres'])
x.close()
for model in results:
l = len(results[model])
bottom = results[model].south.mean() - 3*(results[model].south.std())
top = results[model].south.mean() + 3*(results[model].south.std())
bottom_e = results[model].east.mean() - 3*(results[model].east.std())
top_e = results[model].east.mean() + 3*(results[model].east.std())
results[model] = results[model].where((results[model].south > bottom) & (results[model].south<top))
results[model] = results[model].where((results[model].east > bottom_e) & (results[model].east < top_e)).dropna()
print(model,l-len(results[model]))
results.pop('MIROC-ESM') #no variability
scores = pd.DataFrame([],columns=['Model','Meridional','Zonal','Pressure'])
i = 1000
for model in results:
#longitude
x = np.linspace(min([np.min(results[key].east) for key in results]) , max([np.max(results[key].east) for key in results]) , int(i) )
bw = 1.059*np.min([np.std(results['NOAA'].east.values),stats.iqr(results['NOAA'].east.values)/1.34])*216**(-1/5.)
kde = KernelDensity(kernel='gaussian', bandwidth=bw).fit(np.array(results['NOAA'].east.values)[:, np.newaxis]) # replicates sns
ref = np.exp(kde.score_samples(x[:, np.newaxis]))
#
bw = 1.059*np.min([np.std(results[model].east.values),stats.iqr(results[model].east.values)/1.34])*216**(-1/5.)
kde = KernelDensity(kernel='gaussian', bandwidth=bw).fit(np.array(results[model].east.values)[:, np.newaxis]) # replicates sns
cmip = np.exp(kde.score_samples(x[:, np.newaxis]))
#
score = []
scale = getrange(x)/i
for j in range(len(ref)):
score.append(abs(ref[j]-cmip[j])*scale)
meridional = np.sum(score)
#latitude
x = np.linspace(min([np.min(results[key].south) for key in results]) , max([np.max(results[key].south) for key in results]) , int(i) )
bw = 1.059*np.min([np.std(results['NOAA'].south.values),stats.iqr(results['NOAA'].south.values)/1.34])*216**(-1/5.)
kde = KernelDensity(kernel='gaussian', bandwidth=bw).fit(np.array(results['NOAA'].south.values)[:, np.newaxis]) # replicates sns
ref = np.exp(kde.score_samples(x[:, np.newaxis]))
#
bw = 1.059*np.min([np.std(results[model].south.values),stats.iqr(results[model].south.values)/1.34])*216**(-1/5.)
kde = KernelDensity(kernel='gaussian', bandwidth=bw).fit(np.array(results[model].south.values)[:, np.newaxis]) # replicates sns
cmip = np.exp(kde.score_samples(x[:, np.newaxis]))
#
score = []
scale = getrange(x)/i
for j in range(len(ref)):
score.append(abs(ref[j]-cmip[j])*scale)
zonal = np.sum(score)
#pressure
x = np.linspace(min([np.min(results[key].pres) for key in results]) , max([np.max(results[key].pres) for key in results]) , int(i) )
bw = 1.059*np.min([np.std(results['NOAA'].pres.values),stats.iqr(results['NOAA'].pres.values)/1.34])*216**(-1/5.)
kde = KernelDensity(kernel='gaussian', bandwidth=bw).fit(np.array(results['NOAA'].pres.values)[:, np.newaxis]) # replicates sns
ref = np.exp(kde.score_samples(x[:, np.newaxis]))
#
bw = 1.059*np.min([np.std(results[model].pres.values),stats.iqr(results[model].pres.values)/1.34])*216**(-1/5.)
kde = KernelDensity(kernel='gaussian', bandwidth=bw).fit(np.array(results[model].pres.values)[:, np.newaxis]) # replicates sns
cmip = np.exp(kde.score_samples(x[:, np.newaxis]))
#
score = []
scale = getrange(x)/i
for j in range(len(ref)):
score.append(abs(ref[j]-cmip[j])*scale)
pres = np.sum(score)
scores.loc[len(scores)] = [model,meridional,zonal,pres]
inttype = type(results['NOAA'].time[1])
for index in results:
if isinstance(results[index].time[1], inttype):
results[index].time = pd.to_datetime(results[index].time)
for index in results:
results[index].east = pd.to_numeric(results[index].east)
results[index].south = pd.to_numeric(results[index].south)
results[index].pres = pd.to_numeric(results[index].pres)
pickle.dump( scores, open( "../HIGH_OUT/scores_1D.p", "wb" ) )
pickle.dump( results, open( "../HIGH_OUT/tracker_1D.p", "wb" ) )
out = pickle.load( open( "../HIGH_OUT/tracker_1D.p", "rb" ) )
#for index in out:
for index in ['ERA5']:
if index == 'NOAA':
pass
else:
df = out['NOAA']
df['model'] = 'NOAA'
df2 = out[index]
df2['model'] = str(index)
df = pd.concat([df, df2])  # DataFrame.append is deprecated
g = sns.jointplot(data= df,x='east',y = 'south', hue="model",kind="kde",fill=True, palette=["blue","red"],joint_kws={'alpha': 0.6} )
#g.plot_joint(sns.scatterplot, s=30, alpha=.5)
g.ax_joint.set_xlabel('Longitude')
g.ax_joint.set_ylabel('Latitude')
plt.savefig('../HIGH_OUT/jointplots/jointplot_'+str(index)+'.png',dpi=100)
plt.savefig('../HIGH_OUT/jointplots/jointplot_'+str(index)+'.pdf')
#plt.close()
plt.show()
NOAA = out['NOAA']
seasons =[]
for i in range(len(NOAA.time)):
if NOAA.iloc[i].time.month == 12:
seasons.append('Summer')
elif NOAA.iloc[i].time.month == 3:
seasons.append('Autumn')
elif NOAA.iloc[i].time.month == 6:
seasons.append('Winter')
else:
seasons.append('Spring')
NOAA['Season'] = seasons
NOAA
df = NOAA
g = sns.jointplot(data= df,x='east',y = 'south',hue='Season',kind="kde",fill=True, palette=['r','y','b','g'],joint_kws={'alpha': 0.35})
g.ax_joint.set_xlabel('Longitude')
g.ax_joint.set_ylabel('Latitude')
#plt.savefig('../HIGH_OUT/NOAA_seasonality_jointplot.png',dpi=1000)
plt.savefig('../HIGH_OUT/NOAA_seasonality_jointplot.pdf')
f = open("../HIGH_OUT/out_dict.txt","w") # IPython pickles can't be read by .py scripts
f.write( str(out) )
f.close()
results_df = pd.DataFrame([],columns=["model", "Mean Latitude" ,"Latitude Difference","Latitude std.","Latitude Range", "Mean Longitude" ,"Longitude Difference" ,"longitude std.", "Longitude Range"])
for index in out:
results_df.loc[len(results_df)] = [index,round(np.mean(out[index].south),2),round(np.mean(out[index].south-np.mean(out['NOAA'].south)),2), round(np.std(out[index].south),2),round(getrange(out[index].south),2),round(np.mean(out[index].east),2),round(np.mean(out[index].east-np.mean(out['NOAA'].east)),2),round(np.std(out[index].east),2),round(getrange(out[index].east),2)]
results_df.to_csv('../HIGH_OUT/results_table.csv',float_format='%.2f')
fig = sns.kdeplot(data=NOAA,y='pres',hue='Season',fill=True,alpha=0.35, palette=['r','y','b','g'])
plt.ylabel('Pressure (gpm)')
plt.savefig('../HIGH_OUT/NOAA_seasonality_pressure.png',dpi=1000)
plt.savefig('../HIGH_OUT/NOAA_seasonality_pressure.pdf')
results_df=pd.DataFrame([],columns=('Model','Mean','Difference', 'Std.','Range','Mean', 'Difference', 'Std.','Range','Mean','Difference', 'Std.','Range'))
for index in out.keys():
results_df.loc[len(results_df)] = [index,round(np.mean(out[index].south),2),round(np.mean(out[index].south-np.mean(out['NOAA'].south)),2), round(np.std(out[index].south),2),round(getrange(out[index].south),2),round(np.mean(out[index].east),2),round(np.mean(out[index].east-np.mean(out['NOAA'].east)),2),round(np.std(out[index].east),2),round(getrange(out[index].east),2),round(np.mean(out[index].pres),2),round(np.mean(out[index].pres-np.mean(out['NOAA'].pres)),2),round(np.std(out[index].pres),2),round(getrange(out[index].pres),2)]
results_df.head()
results_df.to_csv('../HIGH_OUT/results_table_1D.csv')
```
# 2D Isostatic gravity inversion - Inverse Problem
This [IPython Notebook](http://ipython.org/videos.html#the-ipython-notebook) uses the open-source library [Fatiando a Terra](http://fatiando.org/)
```
%matplotlib inline
import numpy as np
from scipy.misc import derivative
import scipy as spy
from scipy import interpolate
import matplotlib
#matplotlib.use('TkAgg', force=True)
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
import math
import cPickle as pickle
import datetime
import string as st
from scipy.misc import imread
from __future__ import division
from fatiando import gravmag, mesher, utils, gridder
from fatiando.mesher import Prism, Polygon
from fatiando.gravmag import prism
from fatiando.utils import ang2vec, si2nt, contaminate
from fatiando.gridder import regular
from fatiando.vis import mpl
from numpy.testing import assert_almost_equal
from numpy.testing import assert_array_almost_equal
from pytest import raises
plt.rc('font', size=16)
import functions as fc
```
## Observation coordinates.
```
# Model's limits
ymin = 0.0
ymax = 383000.0
zmin = -1000.0
zmax = 45000.0
xmin = -100000.0
xmax = 100000.0
area = [ymin, ymax, zmax, zmin]
ny = 150 # number of observation points and number of prisms along the profile
# coordinates defining the horizontal boundaries of the
# adjacent columns along the profile
y = np.linspace(ymin, ymax, ny)
# coordinates of the center of the columns forming the
# interpretation model
n = ny - 1
dy = (ymax - ymin)/n
ycmin = ymin + 0.5*dy
ycmax = ymax - 0.5*dy
yc = np.reshape(np.linspace(ycmin, ycmax, n),(n,1))
x = np.zeros_like(yc)
z = np.zeros_like(yc)-150.0
## Edge extension (observation coordinates)
sigma = 2.0
edge = sigma*dy*n
```
## Model parameters
```
# Model densities
# Indices and polygons relationship:
# cc = continental crust layer
# oc = ocean crust layer
# w = water layer
# s = sediment layer
# m = mantle layer
dw = np.array([1030.0])
ds0 = np.array([2350.0])
ds1 = np.array([2600.0])
dcc = np.array([2870.0])
doc = np.array([2885.0])
dm = np.array([3300.0])
#dc = dcc
# coordinate defining the horizontal boundaries of the continent-ocean boundary
COT = 350000.0
# list defining crust density variance
dc = np.zeros_like(yc)
aux = yc <= COT
for i in range(len(yc[aux])):
dc[i] = dcc
for i in range(len(yc[aux]),n):
dc[i] = doc
# defining sediments layers density matrix
ds = np.vstack((np.reshape(np.repeat(ds0,n),(1,n)),np.reshape(np.repeat(ds1,n),(1,n))))
# S0 => isostatic compensation surface (Airy's model)
S0 = np.array([44000.0]) #original
```
## Synthetic data
```
gsyn = np.reshape(np.loadtxt('../data/magma-poor-margin-synthetic-gravity-data.txt'),(n,1))
```
## Water bottom
```
bathymetry = np.reshape(np.loadtxt('../data/etopo1-pelotas.txt'),(n,1))
tw = 0.0 - bathymetry
```
## True surfaces
```
toi = np.reshape(np.loadtxt('../data/volcanic-margin-true-toi-surface.txt'),(n,1))
true_basement = np.reshape(np.loadtxt('../data/volcanic-margin-true-basement-surface.txt'),(n,1))
true_moho = np.reshape(np.loadtxt('../data/volcanic-margin-true-moho-surface.txt'),(n,1))
# True reference moho surface (SR = S0+dS0)
true_S0 = np.array([44000.0])
true_dS0 = np.array([2200.0]) #original
# True first layer sediments thickness
ts0 = toi - tw
# True second layer sediments thickness
true_ts1 = true_basement - toi
# True thickness sediments vector
true_ts = np.vstack((np.reshape(ts0,(1,n)),np.reshape(true_ts1,(1,n))))
# True layer anti-root thickness
true_tm = S0 - true_moho
# true parameters vector
ptrue = np.vstack((true_ts1, true_tm, true_dS0))
```
## Initial guess surfaces
```
# initial guess basement surface
ini_basement = np.reshape(np.loadtxt('../data/volcanic-margin-initial-basement-surface.txt'),(n,1))
# initial guess moho surface
ini_moho = np.reshape(np.loadtxt('../data/volcanic-margin-initial-moho-surface.txt'),(n,1))
# initial guess reference moho surface (SR = S0+dS0)
ini_dS0 = np.array([11500.0])
ini_RM = S0 + ini_dS0
# initial guess layer igneous thickness
ini_ts1 = ini_basement - toi
# initial guess anti-root layer thickness
ini_tm = S0 - ini_moho
# initial guess parameters vector
p0 = np.vstack((ini_ts1, ini_tm, ini_dS0))
```
## Known depths
```
# Known values: basement and moho surfaces
base_known = np.loadtxt('../data/volcanic-margin-basement-known-depths.txt')
#base_known = np.loadtxt('../data/volcanic-margin-basement-more-known-depths.txt')
#base_known_new = np.loadtxt('../data/volcanic-margin-basement-new-known-depths.txt')
#base_known = np.loadtxt('../data/volcanic-margin-basement-few-more-known-depths.txt')
#base_known_new = np.loadtxt('../data/volcanic-margin-basement-few-new-known-depths.txt')
#base_known_old = np.loadtxt('../data/volcanic-margin-basement-known-depths.txt')
moho_known = np.loadtxt('../data/volcanic-margin-moho-known-depths.txt')
(rs,index_rs) = fc.base_known_function(dy,tw,yc,base_known,ts0,two_layers=True)
(rm,index_rm) = fc.moho_known_function(dy,yc,S0,moho_known)
index_base = index_rs
index_moho = index_rm - n
assert_almost_equal(base_known[:,0], yc[index_base][:,0], decimal=6)
assert_almost_equal(moho_known[:,0], yc[index_moho][:,0], decimal=6)
assert_almost_equal(true_ts1[index_base][:,0], rs[:,0], decimal=6)
assert_almost_equal((true_tm[index_moho][:,0]), rm[:,0], decimal=6)
```
## Initial guess data
```
g0 = np.reshape(np.loadtxt('../data/magma-poor-margin-initial-guess-gravity-data.txt'),(n,1))
```
### parameters vector box limits
```
# true thickness vector limits
print 'ts =>', np.min(ptrue[0:n]),'-', np.max(ptrue[0:n])
print 'tm =>', np.min(ptrue[n:n+n]),'-', np.max(ptrue[n:n+n])
print 'dS0 =>', ptrue[n+n]
# initial guess thickness vector limits
print 'ts =>', np.min(p0[0:n]),'-', np.max(p0[0:n])
print 'tm =>', np.min(p0[n:n+n]),'-', np.max(p0[n:n+n])
print 'dS0 =>', p0[n+n]
```
```
# defining parameters values limits
pjmin = np.zeros((len(ptrue),1))
pjmax = np.zeros((len(ptrue),1))
pjmin[0:n] = 0.0
pjmax[0:n] = 25000.
pjmin[n:n+n] = 2000.0
pjmax[n:n+n] = 28000.
pjmin[n+n] = 0.0
pjmax[n+n] = 12000.
```
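The box limits above are enforced in the inversion loop below through a logistic change of variables. A small sanity check (illustrative values, not from the notebook data) shows that the transform and its inverse round-trip, so estimated parameters always honor the box limits:

```python
import numpy as np

# p_hat = -log((pjmax - p)/(p - pjmin)) maps p in (pjmin, pjmax) onto the
# whole real line; the logistic p = pjmin + (pjmax - pjmin)/(1 + exp(-p_hat))
# maps it back. Illustrative limits matching the sediment thickness box:
pjmin_, pjmax_ = 0.0, 25000.0
p = np.array([100.0, 12000.0, 24000.0])
p_hat = -np.log((pjmax_ - p) / (p - pjmin_))
p_back = pjmin_ + (pjmax_ - pjmin_) / (1.0 + np.exp(-p_hat))
print(np.allclose(p, p_back))  # the round trip recovers p
```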
### Inversion code
```
# internal parameters of the inversion (convergence tolerance, number of iterations, etc.)
beta = 10**(-3)
itmax = 50
itmax_marq = 10
lamb = 1.
mi = 10**(-3)
dmi = 10.
dp1 = 1.
dp2 = 1.
# variable initialization
ymin = area[0]
ymax = area[1]
x = np.zeros_like(yc)
z = np.zeros_like(yc)-150.0
n = len(yc) # number of observed data
m = 2*n+1 # number of parameters to invert
# contribution of the prisms forming the water layer
prism_w = fc.prism_w_function(xmax,xmin,dy,edge,dw,dcc,tw,yc)
gzw = prism.gz(np.reshape(x,(n,)),np.reshape(yc,(n,)),np.reshape(z,(n,)),prism_w)
# matrices
I = np.identity(m)
W0 = np.identity(n-1)
R0 = fc.R_matrix_function(n,isostatic=True)
R = fc.R_matrix_function(n)
C = fc.C_matrix_function(ds,dm,dc,two_layers=True)
D = fc.D_matrix_function(dw,dc,ds,two_layers=True)
A = fc.A_matrix_function(n,rs,index_rs)
B = fc.B_matrix_function(n,rm,index_rm)
G0 = fc.G_matrix_function(xmax,xmin,dy,edge,dp1,dp2,S0,dw,ds,dm,dcc,dc,tw,p0,yc,ts0,two_layers=True)
# Hessians
Hess_phi = (2/n)*G0.T.dot(G0)
Hess_psi0 = 2*C.T.dot(R0.T.dot(W0.T.dot(W0.dot(R0.dot(C)))))
Hess_psi1 = 2*R.T.dot(R)
Hess_psi2 = 2*A.T.dot(A)
Hess_psi3 = 2*B.T.dot(B)
# constraint normalization
diag_phi = np.diag(Hess_phi)
diag_psi0 = np.diag(Hess_psi0)
diag_psi1 = np.diag(Hess_psi1)
diag_psi2 = np.diag(Hess_psi2)
diag_psi3 = np.diag(Hess_psi3)
f_phi = np.median(diag_phi)
f_psi0 = np.median(diag_psi0)
f_psi1 = np.median(diag_psi1)
#f_psi2 = np.median(diag_psi2)
#f_psi3 = np.median(diag_psi3)
f_psi2 = 2.
f_psi3 = 2.
print f_phi, f_psi0, f_psi1, f_psi2, f_psi3
# constraint weights
alpha0 = (f_phi/f_psi0)*10**(1) # isostatic constraint
alpha1 = (f_phi/f_psi1)*10**(0) # smoothness constraint
alpha2 = (f_phi/f_psi2)*10**(0) # sediment thickness equality constraint
alpha3 = (f_phi/f_psi3)*10**(1) # thickness (S0 - tm) equality constraint
print alpha0, alpha1, alpha2, alpha3
p1 = p0.copy()
g1 = g0.copy()
gama1 = fc.gama_function(alpha0,alpha1,alpha2,alpha3,lamb,S0,tw,gsyn,g1,p1,rs,rm,W0,R0,C,D,R,A,B,ts0,two_layers=True)
gama_list = [gama1]
k0=0
k1=0
# main inversion loop
for it in range (itmax):
p1_hat = - np.log((pjmax - p1)/(p1-pjmin))
G1 = fc.G_matrix_function(xmax,xmin,dy,edge,dp1,dp2,S0,dw,ds,dm,dcc,dc,tw,p1,yc,ts0,two_layers=True)
grad_phi = (-2/n)*G1.T.dot(gsyn - g1)
Hess_phi = (2/n)*G1.T.dot(G1)
grad_psi0 = fc.grad_ps0_function(S0,tw,p1,W0,R0,C,D,ts0,two_layers=True)
grad_psi1 = fc.grad_psi1_function(p1,R)
grad_psi2 = fc.grad_psi2_function(p1,rs,A)
grad_psi3 = fc.grad_psi2_function(p1,rm,B)
grad_gama = grad_phi + lamb*(alpha0*grad_psi0+alpha1*grad_psi1+alpha2*grad_psi2+alpha3*grad_psi3)
Hess_gama = Hess_phi+lamb*(alpha0*Hess_psi0+alpha1*Hess_psi1+alpha2*Hess_psi2+alpha3*Hess_psi3)
T = fc.T_matrix_function(pjmin, pjmax, p1)
for it_marq in range(itmax_marq):
deltap = np.linalg.solve((Hess_gama.dot(T) + mi*I), -grad_gama)
p2_hat = p1_hat + deltap
p2 = pjmin + ((pjmax - pjmin)/(1 + np.exp(-p2_hat)))
# compute the predicted data vector and the phi function
prism_s = fc.prism_s_function(xmax,xmin,dy,edge,ds,dcc,tw,p2,yc,ts0,two_layers=True)
prism_c = fc.prism_c_function(xmax,xmin,dy,edge,S0,dcc,dc,tw,p2,yc,ts0,two_layers=True)
prism_m = fc.prism_m_function(xmax,xmin,dy,edge,S0,dcc,dm,p2,yc)
g2 = np.reshape(fc.g_function(np.reshape(x,(n,)),np.reshape(yc,(n,)),np.reshape(z,(n,)),gzw,prism_s,prism_c,prism_m),(n,1))
gama2 = fc.gama_function(alpha0,alpha1,alpha2,alpha3,lamb,S0,tw,gsyn,g2,p2,rs,rm,W0,R0,C,D,R,A,B,ts0,two_layers=True)
# check whether the phi function is decreasing
dgama = gama2 - gama1
if dgama > 0.:
mi *= dmi
print 'k0=',k0
k0 += 1
else:
mi /= dmi
break
# test convergence of the phi function
if (dgama < 0.) & (abs(gama1 - gama2) < beta):
#if fc.convergence_function(gama1, gama2, beta):
print 'convergence achieved'
break
# update variables
else:
print 'k1=',k1
k1 += 1
#gama1 = gama2.copy()
print gama1
gama_list.append(gama1)
thicknesses = tw + ts0 + p2[0:n] + p2[n:n+n]
print 'thicknesses=', np.max(thicknesses)
if np.alltrue(thicknesses <= S0):
p = p1.copy()
g = g1.copy()
p1 = p2.copy()
g1 = g2.copy()
gama1 = gama2.copy()
assert np.alltrue(thicknesses <= S0), 'sum of the thicknesses shall be less than or equal to isostatic compensation surface'
p = p2.copy()
g = g2.copy()
gama_list.append(gama2)
it = [i for i in range(len(gama_list))]
#plt.figure(figsize=(8,8))
ax = plt.figure(figsize=(8,8)).gca()
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
plt.plot(gama_list,'ko')
plt.yscale('log')
plt.xlabel('$k$', fontsize=18)
plt.ylabel('$\Gamma(\mathbf{p})$', fontsize=18)
plt.grid()
#plt.xlim(-1,50)
#plt.xlim(-1, len(gama_list)+5)
plt.ylim(np.min(gama_list)-3*np.min(gama_list),np.max(gama_list)+3*np.min(gama_list))
#mpl.savefig('../manuscript/figures/magma-poor-margin-gama-list-alphas_1_0_0_1.png', dpi='figure', bbox_inches='tight')
#mpl.savefig('../manuscript/figures/magma-poor-margin-gama-list-alphas_2_1_0_1_more-known-depths.png', dpi='figure', bbox_inches='tight')
plt.show()
```
## Lithostatic Stress
```
sgm_true = 9.81*(10**(-6))*(dw*tw + ds0*ts0 + ds1*true_ts1 + dc*(S0-tw-ts0-true_ts1-true_tm)+dm*true_tm)
sgm = 9.81*(10**(-6))*(dw*tw + ds0*ts0 + ds1*p[0:n] + dc*(S0-tw-ts0-p[0:n]-p[n:n+n])+dm*p[n:n+n])
```
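A note on units (illustrative, with assumed values): the factor $9.81 \times 10^{-6}$ in the expressions above is gravity in m/s$^2$ times the Pa-to-MPa conversion, so each $\rho \cdot t$ term yields a pressure in MPa:

```python
# Illustrative unit check with an assumed 4 km water column (not from the
# original data): rho*t in kg/m^2 times g gives Pa, and the 10**(-6) factor
# converts that to MPa.
g = 9.81            # m/s^2
rho_w = 1030.0      # kg/m^3, water density
t_w = 4000.0        # m, assumed water column thickness
sigma_pa = g * rho_w * t_w                  # pressure in Pa
sigma_mpa = 9.81*(10**(-6)) * rho_w * t_w   # same pressure in MPa
print(sigma_mpa)  # ~40.4 MPa
```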
## Inversion model plot
```
# Inversion results
RM = S0 + p[n+n]
basement = tw + ts0 + p[0:n]
moho = S0 - p[n:n+n]
print ptrue[n+n], p[n+n]
polygons_water = []
for (yi, twi) in zip(yc, tw):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_water.append(Polygon(np.array([[y1, y2, y2, y1],
[0.0, 0.0, twi, twi]]).T,
props={'density': dw - dcc}))
polygons_sediments0 = []
for (yi, twi, s0i) in zip(yc, np.reshape(tw,(n,)), np.reshape(toi,(n,))):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_sediments0.append(Polygon(np.array([[y1, y2, y2, y1],
[twi, twi, s0i, s0i]]).T,
props={'density': ds0 - dcc}))
polygons_sediments1 = []
for (yi, s0i, s1i) in zip(yc, np.reshape(toi,(n,)), np.reshape(basement,(n,))):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_sediments1.append(Polygon(np.array([[y1, y2, y2, y1],
[s0i, s0i, s1i, s1i]]).T,
props={'density': ds1 - dcc}))
polygons_crust = []
for (yi, si, Si, dci) in zip(yc, np.reshape(basement,(n,)), np.reshape(moho,(n,)), dc):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_crust.append(Polygon(np.array([[y1, y2, y2, y1],
[si, si, Si, Si]]).T,
props={'density': dci - dcc}))
polygons_mantle = []
for (yi, Si) in zip(yc, np.reshape(moho,(n,))):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_mantle.append(Polygon(np.array([[y1, y2, y2, y1],
[Si, Si, S0+p[n+n], S0+p[n+n]]]).T,
props={'density': dm - dcc}))
%matplotlib inline
plt.close('all')
fig = plt.figure(figsize=(12,16))
import matplotlib.gridspec as gridspec
heights = [8, 8, 8, 1]
gs = gridspec.GridSpec(4, 1, height_ratios=heights)
ax1 = plt.subplot(gs[0])
ax2 = plt.subplot(gs[1])
ax3 = plt.subplot(gs[2])
ax4 = plt.subplot(gs[3])
ax1.axhline(y=0.0, xmin=ymin, xmax=ymax, color='k', linestyle='--', linewidth=1)
ax1.plot(0.001*yc, gsyn, 'or', mfc='none', markersize=8, label='simulated data')
ax1.plot(0.001*yc, g0, '-b', linewidth=2, label='initial guess data')
ax1.plot(0.001*yc, g, '-g', linewidth=2, label='predicted data')
ax1.set_xlim(0.001*ymin, 0.001*ymax)
ax1.set_ylabel('gravity disturbance (mGal)', fontsize=16)
ax1.set_xticklabels(['%g'% (l) for l in ax1.get_xticks()], fontsize=14)
ax1.set_yticklabels(['%g'% (l) for l in ax1.get_yticks()], fontsize=14)
ax1.legend(loc='best', fontsize=14, facecolor='silver')
ax2.plot(0.001*yc, sgm_true, 'or', mfc='none', markersize=8, label='simulated lithostatic stress')
ax2.plot(0.001*yc, sgm, '-g', linewidth=2, label='evaluated lithostatic stress')
ax2.set_xlim(0.001*ymin, 0.001*ymax)
ax2.set_ylabel('lithostatic stress (MPa)', fontsize=16)
ax2.set_xticklabels(['%g'% (l) for l in ax2.get_xticks()], fontsize=14)
ax2.set_yticklabels(['%g'% (l) for l in ax2.get_yticks()], fontsize=14)
ax2.legend(loc='best', fontsize=14, facecolor='silver')
ax3.axhline(y=0.0, xmin=ymin, xmax=ymax, color='k', linestyle='-', linewidth=1)
aux = yc <= COT
for (pwi) in (polygons_water):
tmpx = [x for x in pwi.x]
tmpx.append(pwi.x[0])
tmpy = [y for y in pwi.y]
tmpy.append(pwi.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='lightskyblue')
for (ps0i) in (polygons_sediments0):
tmpx = [x for x in ps0i.x]
tmpx.append(ps0i.x[0])
tmpy = [y for y in ps0i.y]
tmpy.append(ps0i.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='tan')
for (ps1i) in (polygons_sediments1):
tmpx = [x for x in ps1i.x]
tmpx.append(ps1i.x[0])
tmpy = [y for y in ps1i.y]
tmpy.append(ps1i.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='rosybrown')
for (pci) in (polygons_crust[:len(yc[aux])]):
tmpx = [x for x in pci.x]
tmpx.append(pci.x[0])
tmpy = [y for y in pci.y]
tmpy.append(pci.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='orange')
for (pcoi) in (polygons_crust[len(yc[aux]):n]):
tmpx = [x for x in pcoi.x]
tmpx.append(pcoi.x[0])
tmpy = [y for y in pcoi.y]
tmpy.append(pcoi.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='olive')
for (pmi) in (polygons_mantle):
tmpx = [x for x in pmi.x]
tmpx.append(pmi.x[0])
tmpy = [y for y in pmi.y]
tmpy.append(pmi.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='pink')
#ax3.axhline(y=S0, xmin=ymin, xmax=ymax, color='w', linestyle='--', linewidth=3)
ax3.plot(yc, tw, '-k', linewidth=3)
ax3.plot(yc, toi, '-k', linewidth=3)
ax3.plot(yc, true_basement, '-k', linewidth=3, label='true surfaces')
ax3.plot(yc, true_moho, '-k', linewidth=3)
ax3.plot(yc, ini_basement, '-.b', linewidth=3, label='initial guess surfaces')
ax3.plot(yc, ini_moho, '-.b', linewidth=3)
ax3.plot(yc, basement, '--w', linewidth=3, label='estimated surfaces')
ax3.plot(yc, moho, '--w', linewidth=3)
ax3.axhline(y=true_S0+true_dS0, xmin=ymin, xmax=ymax, color='k', linestyle='-', linewidth=3)
ax3.axhline(y=S0+ini_dS0, xmin=ymin, xmax=ymax, color='b', linestyle='-.', linewidth=3)
ax3.axhline(y=S0+p[n+n], xmin=ymin, xmax=ymax, color='w', linestyle='--', linewidth=3)
ax3.plot(base_known[:,0], base_known[:,1], 'v', color = 'yellow', markersize=15, label='known depths (basement)')
#ax3.plot(base_known_old[:,0], base_known_old[:,1], 'v', color = 'yellow', markersize=15, label='known depths (basement)')
#ax3.plot(base_known_new[:,0], base_known_new[:,1], 'v', color = 'magenta', markersize=15, label='more known depths (basement)')
ax3.plot(moho_known[:,0], moho_known[:,1], 'D', color = 'lime', markersize=15, label='known depths (moho)')
#ax3.set_ylim((S0+p[n+n]), zmin)
ax3.set_ylim((56000.0), zmin)
ax3.set_xlim(ymin, ymax)
ax3.set_xlabel('y (km)', fontsize=16)
ax3.set_ylabel('z (km)', fontsize=16)
ax3.set_xticklabels(['%g'% (0.001*l) for l in ax3.get_xticks()], fontsize=14)
ax3.set_yticklabels(['%g'% (0.001*l) for l in ax3.get_yticks()], fontsize=14)
ax3.legend(loc='lower right', fontsize=14, facecolor='silver')
X, Y = fig.get_dpi()*fig.get_size_inches()
plt.title('Density contrast (kg/m$^{3}$)', fontsize=17)
#plt.title('Density (kg/m$^{3}$)', fontsize=17)
ax4.axis('off')
layers_list1 = ['water', 'sediment 1', 'sediment 2', 'continental', 'oceanic', 'mantle']
layers_list2 = ['', '', '', 'crust', 'crust', '']
colors_list = ['lightskyblue', 'tan', 'rosybrown', 'orange', 'olive', 'pink']
density_list = ['-1840', '-520', '-270', '0', '15', '430'] #original
#density_list = ['1030', '2350', '2600', '2870', '2885', '3300']
ncols = len(colors_list)
nrows = 1
h = Y / nrows
w = X / (ncols + 1)
i=ncols-1
for color, density, layers1, layers2 in zip(colors_list, density_list, layers_list1, layers_list2):
col = i // nrows
row = i % nrows
x = X - (col*w) - w
yi_line = Y
yf_line = Y - Y*0.15
yi_text1 = Y - Y*0.2
yi_text2 = Y - Y*0.28
yi_text3 = Y - Y*0.08
i-=1
poly = Polygon(np.array([[x, x+w*0.75, x+w*0.75, x], [yi_line, yi_line, yf_line, yf_line]]).T)
tmpx = [x for x in poly.x]
tmpx.append(poly.x[0])
tmpy = [y for y in poly.y]
tmpy.append(poly.y[0])
ax4.plot(tmpx, tmpy, linestyle='-', color='k', linewidth=1)
ax4.fill(tmpx, tmpy, color=color)
ax4.text(x+w*0.375, yi_text1, layers1, fontsize=(w*0.14), horizontalalignment='center', verticalalignment='top')
ax4.text(x+w*0.375, yi_text2, layers2, fontsize=(w*0.14), horizontalalignment='center', verticalalignment='top')
ax4.text(x+w*0.375, yi_text3, density, color = 'k', fontsize=(w*0.14), horizontalalignment='center', verticalalignment='center')
plt.tight_layout()
#mpl.savefig('../manuscript/figures/magma-poor-margin-grafics-estimated-model-alphas_1_0_0_1.png', dpi='figure', bbox_inches='tight')
#mpl.savefig('../data/fig/volcanic-margin-grafics-estimated-model-alphas_2_1_1_2_more-known-depths.png', dpi='figure', bbox_inches='tight')
plt.show()
```
# **Fraud Detection & Model Evaluation** (SOLUTION)
Source: [https://github.com/d-insight/code-bank.git](https://github.com/dsfm-org/code-bank.git)
License: [MIT License](https://opensource.org/licenses/MIT). See open source [license](LICENSE) in the Code Bank repository.
-------------
## Overview
In this project, we will explore different model evaluation metrics in the context of fraud detection.
Data source: [Kaggle](https://www.kaggle.com/c/ieee-fraud-detection/overview).
| Feature name | Variable Type | Description
|------------------|---------------|--------------------------------------------------------
|isFraud | Categorical | Target variable, where 1 = fraud and 0 = no fraud
|TransactionAMT | Continuous | Transaction payment amount in USD
|ProductCD | Categorical | Product code, the product for each transaction
|card1 - card6 | Continuous | Payment card information, such as card type, card category, issue bank, country, etc.
|dist | Continuous | Distance
|C1 - C14 | Continuous | Counting, such as how many addresses are found to be associated with the payment card, etc. The actual meaning is masked.
|D1 - D15 | Continuous | Timedelta, such as days between previous transaction, etc.
-------------
## **Part 0**: Setup
### Import Packages
```
# Import packages
import lightgbm
import pandas as pd
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [8, 6]
import seaborn as sn
import warnings
warnings.filterwarnings("ignore", category=RuntimeWarning)
# Constants
TRAIN_PATH = 'data/train.csv'
TEST_PATH = 'data/test.csv'
THRESHOLDS = [i/100 for i in range(0, 101)]
SEED = 1234
```
## **Part 1**: Exploratory data analysis
```
# Load data
train_df = pd.read_csv(TRAIN_PATH)
test_df = pd.read_csv(TEST_PATH)
train_df.shape
# Target distribution in training data
train_df['isFraud'].value_counts() / len(train_df)
# Split into training and testing data
feature_names = [col for col in train_df.columns if col not in ['isFraud', 'addr1', 'addr2', 'P_emaildomain', 'R_emaildomain']]
X_train, y_train = train_df[feature_names], train_df['isFraud']
X_test, y_test = test_df[feature_names], test_df['isFraud']
```
## **Part 2**: Fit model
```
# Train model
model = lightgbm.LGBMClassifier(random_state=SEED)
model.fit(X_train, y_train)
# Evaluate model
y_test_pred = model.predict_proba(X_test)[:, 1]
print('The first 10 predictions ...')
y_test_pred[:10]
```
## **Part 3**: Compare evaluation metrics
### __a)__: Accuracy
$\frac{TP + TN}{TP + FP + TN + FN}$
Proportion of binary predictions that are correct, for a given threshold.
```
scores = []
for t in THRESHOLDS:
y_test_pred_binary = [int(i >= t) for i in y_test_pred]
scores.append(accuracy_score(y_test, y_test_pred_binary))
max_t = THRESHOLDS[np.argmax(scores)]
y_test_pred_binary = [int(i >= max_t) for i in y_test_pred]
print('Maximum accuracy of {}% at threshold {}'.format(round(max(scores), 2)*100, max_t))
# Plot scores
plt.plot(THRESHOLDS, scores)
plt.vlines(max_t, ymin=0, ymax=1, color='r')
plt.xlabel('Thresholds')
plt.ylabel('Accuracy')
plt.show()
```
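One caveat worth keeping in mind (illustrative note, not in the original solution): fraud data is heavily imbalanced, so a trivial "always predict no fraud" classifier already scores high accuracy. With an assumed fraud rate `q` (read the actual rate off the `value_counts` output in Part 1):

```python
# Baseline for an imbalanced problem: if only a fraction q of transactions
# are fraudulent, predicting "no fraud" for everything achieves accuracy
# 1 - q. The q below is a hypothetical value for illustration.
q = 0.035                 # assumed fraction of fraudulent transactions
baseline_accuracy = 1 - q
print(baseline_accuracy)  # 0.965
```

A model's accuracy is only informative relative to this baseline, which is one reason the later threshold-dependent metrics matter.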
---
<img src="images/accuracy.png" width="800" height="800" align="center"/>
### __b)__: Confusion matrix
```
# Plot the confusion matrix
sn.heatmap(confusion_matrix(y_test, y_test_pred_binary), annot=True, fmt='.0f')
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show()
```
### __c)__: True positive rate (recall, sensitivity)
$\frac{TP}{TP+FN}$
"Collect them all!" – High recall might give you some bad items, but it'll also return most of the good items.
How many fraudulent transactions do we “recall” out of all actual fraudulent transactions, for a given threshold.
```
scores = []
for t in THRESHOLDS:
y_test_pred_binary = [int(i >= t) for i in y_test_pred]
tn, fp, fn, tp = confusion_matrix(y_test, y_test_pred_binary).ravel()
true_positive_rate = tp / (tp + fn)
scores.append(true_positive_rate)
max_t = THRESHOLDS[np.argmax(scores)]
y_test_pred_binary = [int(i >= max_t) for i in y_test_pred]
print('Maximum TPR of {}% at threshold {}'.format(round(max(scores), 4)*100, max_t))
# Plot scores
plt.plot(THRESHOLDS, scores)
plt.vlines(max_t, ymin=0, ymax=1, color='r')
plt.xlabel('Thresholds')
plt.ylabel('TPR')
plt.show()
```
---
<img src="images/tpr.png" width="800" height="800" align="center"/>
### __d)__: False positive rate (Type I error)
$\frac{FP}{FP+TN}$
"False alarm!"
The fraction of false alarms raised by the model, for a given threshold.
```
scores = []
for t in THRESHOLDS:
y_test_pred_binary = [int(i >= t) for i in y_test_pred]
tn, fp, fn, tp = confusion_matrix(y_test, y_test_pred_binary).ravel()
false_positive_rate = fp / (fp + tn)
scores.append(false_positive_rate)
min_t = THRESHOLDS[np.argmin(scores)]
y_test_pred_binary = [int(i >= min_t) for i in y_test_pred]
print('Minimum FPR of {}% at threshold {}'.format(round(min(scores), 4)*100, min_t))
# Plot scores
plt.plot(THRESHOLDS, scores)
plt.vlines(min_t, ymin=0, ymax=1, color='r')
plt.xlabel('Thresholds')
plt.ylabel('FPR')
plt.show()
```
---
<img src="images/fpr.png" width="800" height="800" align="center"/>
### __e)__: False negative rate (Type II error)
$\frac{FN}{FN+TP}$
"Dammit, we missed it!"
The fraction of fraudulent transactions missed by the model, for a given threshold.
```
scores = []
for t in THRESHOLDS:
y_test_pred_binary = [int(i >= t) for i in y_test_pred]
tn, fp, fn, tp = confusion_matrix(y_test, y_test_pred_binary).ravel()
false_negative_rate = fn / (fn + tp)
scores.append(false_negative_rate)
min_t = THRESHOLDS[np.argmin(scores)]
y_test_pred_binary = [int(i >= min_t) for i in y_test_pred]
print('Minimum FNR of {}% at threshold {}'.format(round(min(scores), 4)*100, min_t))
# Plot scores
plt.plot(THRESHOLDS, scores)
plt.vlines(min_t, ymin=0, ymax=1, color='r')
plt.xlabel('Thresholds')
plt.ylabel('FNR')
plt.show()
```
---
<img src="images/fnr.png" width="800" height="800" align="center"/>
### __f)__: Positive predictive value (precision)
$\frac{TP}{TP+FP}$
"Don't waste my time!"
High precision might leave some good ideas out, but what it returns is of high quality (i.e. very precise).
```
scores = []
for t in THRESHOLDS:
y_test_pred_binary = [int(i >= t) for i in y_test_pred]
tn, fp, fn, tp = confusion_matrix(y_test, y_test_pred_binary).ravel()
ppv = tp / (tp + fp)
scores.append(ppv)
max_t = THRESHOLDS[np.argmax(scores)]
y_test_pred_binary = [int(i >= max_t) for i in y_test_pred]
print('Maximum PPV of {}% at threshold {}'.format(round(max(scores), 4)*100, max_t))
# Plot scores
plt.plot(THRESHOLDS, scores)
plt.vlines(max_t, ymin=0, ymax=1, color='r')
plt.xlabel('Thresholds')
plt.ylabel('PPV')
plt.show()
```
### __g)__: F1 score
$\frac{2TP}{2TP+FP+FN}$
"Let's just have one metric."
The harmonic mean of precision (accuracy over cases predicted to be positive) and recall (actual positive cases that correctly get a positive prediction), for a given threshold.
```
scores = []
for t in THRESHOLDS:
y_test_pred_binary = [int(i >= t) for i in y_test_pred]
f1 = f1_score(y_test, y_test_pred_binary)
scores.append(f1)
max_t = THRESHOLDS[np.argmax(scores)]
y_test_pred_binary = [int(i >= max_t) for i in y_test_pred]
print('Maximum F1 of {}% at threshold {}'.format(round(max(scores), 4)*100, max_t))
# Plot scores
plt.plot(THRESHOLDS, scores)
plt.vlines(max_t, ymin=0, ymax=1, color='r')
plt.xlabel('Thresholds')
plt.ylabel('F1 score')
plt.show()
```
---
<img src="images/f1.png" width="800" height="800" align="center"/>
## Bonus materials
- Google's Cassie Kozyrkov on precision and recall [on Youtube](https://www.youtube.com/watch?v=O4joFUqvz40)
## How-to
1. You need to use [modeling.py](modeling.py) from the extractive-summarization folder, a modified BERT model that accepts text longer than 512 tokens.
```
import tensorflow as tf
import numpy as np
import pickle
with open('dataset-bert.pkl', 'rb') as fopen:
dataset = pickle.load(fopen)
dataset.keys()
BERT_VOCAB = 'uncased_L-12_H-768_A-12/vocab.txt'
BERT_INIT_CHKPNT = 'uncased_L-12_H-768_A-12/bert_model.ckpt'
BERT_CONFIG = 'uncased_L-12_H-768_A-12/bert_config.json'
import bert
from bert import run_classifier
from bert import optimization
from bert import tokenization
import modeling
tokenization.validate_case_matches_checkpoint(True,BERT_INIT_CHKPNT)
tokenizer = tokenization.FullTokenizer(
vocab_file=BERT_VOCAB, do_lower_case=True)
bert_config = modeling.BertConfig.from_json_file(BERT_CONFIG)
epoch = 20
batch_size = 8
warmup_proportion = 0.1
num_train_steps = int(len(dataset['train_texts']) / batch_size * epoch)
num_warmup_steps = int(num_train_steps * warmup_proportion)
class Model:
def __init__(
self,
learning_rate = 2e-5,
):
self.X = tf.placeholder(tf.int32, [None, None])
self.segment_ids = tf.placeholder(tf.int32, [None, None])
self.input_masks = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.float32, [None, None])
self.mask = tf.placeholder(tf.int32, [None, None])
self.clss = tf.placeholder(tf.int32, [None, None])
mask = tf.cast(self.mask, tf.float32)
model = modeling.BertModel(
config=bert_config,
is_training=True,
input_ids=self.X,
input_mask=self.input_masks,
token_type_ids=self.segment_ids,
use_one_hot_embeddings=False)
outputs = tf.gather(model.get_sequence_output(), self.clss, axis = 1, batch_dims = 1)
self.logits = tf.layers.dense(outputs, 1)
self.logits = tf.squeeze(self.logits, axis=-1)
self.logits = self.logits * mask
crossent = tf.nn.sigmoid_cross_entropy_with_logits(logits=self.logits, labels=self.Y)
crossent = crossent * mask
crossent = tf.reduce_sum(crossent)
total_size = tf.reduce_sum(mask)
self.cost = tf.div_no_nan(crossent, total_size)
self.optimizer = optimization.create_optimizer(self.cost, learning_rate,
num_train_steps, num_warmup_steps, False)
l = tf.round(tf.sigmoid(self.logits))
self.accuracy = tf.reduce_mean(tf.cast(tf.boolean_mask(l, tf.equal(self.Y, 1)), tf.float32))
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(learning_rate = 1e-5)
sess.run(tf.global_variables_initializer())
var_lists = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope = 'bert')
saver = tf.train.Saver(var_list = var_lists)
saver.restore(sess, BERT_INIT_CHKPNT)
def pad_sentence_batch(sentence_batch, pad_int):
padded_seqs = []
seq_lens = []
max_sentence_len = max([len(sentence) for sentence in sentence_batch])
for sentence in sentence_batch:
padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
seq_lens.append(len(sentence))
return padded_seqs, seq_lens
train_X = dataset['train_texts']
test_X = dataset['test_texts']
train_clss = dataset['train_clss']
test_clss = dataset['test_clss']
train_Y = dataset['train_labels']
test_Y = dataset['test_labels']
train_segments = dataset['train_segments']
test_segments = dataset['test_segments']
import tqdm
for e in range(epoch):
pbar = tqdm.tqdm(
range(0, len(train_X), batch_size), desc = 'minibatch loop')
train_loss, train_acc, test_loss, test_acc = [], [], [], []
for i in pbar:
index = min(i + batch_size, len(train_X))
batch_x, _ = pad_sentence_batch(train_X[i : index], 0)
batch_y, _ = pad_sentence_batch(train_Y[i : index], 0)
batch_segments, _ = pad_sentence_batch(train_segments[i : index], 0)
batch_clss, _ = pad_sentence_batch(train_clss[i : index], -1)
batch_clss = np.array(batch_clss)
batch_x = np.array(batch_x)
batch_mask = 1 - (batch_clss == -1)
batch_clss[batch_clss == -1] = 0
mask_src = 1 - (batch_x == 0)
feed = {model.X: batch_x,
model.Y: batch_y,
model.mask: batch_mask,
model.clss: batch_clss,
model.segment_ids: batch_segments,
model.input_masks: mask_src}
accuracy, loss, _ = sess.run([model.accuracy,model.cost,model.optimizer],
feed_dict = feed)
train_loss.append(loss)
train_acc.append(accuracy)
pbar.set_postfix(cost = loss, accuracy = accuracy)
pbar = tqdm.tqdm(
range(0, len(test_X), batch_size), desc = 'minibatch loop')
for i in pbar:
index = min(i + batch_size, len(test_X))
batch_x, _ = pad_sentence_batch(test_X[i : index], 0)
batch_y, _ = pad_sentence_batch(test_Y[i : index], 0)
batch_segments, _ = pad_sentence_batch(test_segments[i : index], 0)
batch_clss, _ = pad_sentence_batch(test_clss[i : index], -1)
batch_clss = np.array(batch_clss)
batch_x = np.array(batch_x)
batch_mask = 1 - (batch_clss == -1)
batch_clss[batch_clss == -1] = 0
mask_src = 1 - (batch_x == 0)
feed = {model.X: batch_x,
model.Y: batch_y,
model.mask: batch_mask,
model.clss: batch_clss,
model.segment_ids: batch_segments,
model.input_masks: mask_src}
accuracy, loss = sess.run([model.accuracy,model.cost],
feed_dict = feed)
test_loss.append(loss)
test_acc.append(accuracy)
pbar.set_postfix(cost = loss, accuracy = accuracy)
print('epoch %d, training avg loss %f, training avg acc %f'%(e+1,
np.mean(train_loss),np.mean(train_acc)))
print('epoch %d, testing avg loss %f, testing avg acc %f'%(e+1,
np.mean(test_loss),np.mean(test_acc)))
```
<a href="https://colab.research.google.com/github/GuysBarash/ML_Workshop/blob/main/Bayesian_Agent.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
from scipy.optimize import minimize_scalar
from scipy.stats import beta
from scipy.stats import binom
from scipy.stats import bernoulli
from matplotlib import animation
from IPython.display import HTML, clear_output
from matplotlib import rc
matplotlib.use('Agg')
agent_truth_p = 0.8 #@param {type: "slider", min: 0.0, max: 1.0, step:0.01}
repeats = 700
starting_guess_for_b = 1 # Agent's incorrect answers
starting_guess_for_a = 1 # Agent's correct answers
```
# Example
```
def plotPrior(a, b):
    fig = plt.figure()
    ax = plt.axes()
    plt.xlim(0, 1)
    x = np.linspace(0, 1, 1000)
    y = beta.pdf(x, a, b)
    x_guess = x[y.argmax()]
    ax.plot(x, y)
    maximal_point = ax.axvline(x=x_guess, label=f'Best guess for prior: {x_guess:>.2f}')
    ax.legend()
    return
```
The agent has a chance of "p" of telling the truth, and a chance of 1-p of randomly selecting an answer
```
def agentDecision(real_answer, options, agent_truth_p):
    choice = bernoulli.rvs(agent_truth_p)
    if choice == 1:
        return real_answer
    else:
        choice = bernoulli.rvs(0.5)
        if choice == 1:
            return options[0]
        else:
            return options[1]

b = starting_guess_for_b
a = starting_guess_for_a
```
Prior before any testing takes place. You can see it's balanced.
```
print("p = ", a / (a + b))
plotPrior(a, b)
agent_log = pd.DataFrame(index=range(repeats),columns=['a','b','Real type','Agent answer','Agent is correct'])
data_validity_types = ["BAD","GOOD"]
for i in range(repeats):
    data_is_valid = np.random.choice(data_validity_types)
    agent_response_on_the_data = agentDecision(data_is_valid, data_validity_types, agent_truth_p)
    agent_is_correct = data_is_valid == agent_response_on_the_data
    agent_log.loc[i, ['Real type', 'Agent answer', 'Agent is correct']] = data_is_valid, agent_response_on_the_data, agent_is_correct
    # a and b update dynamically each step
    a += int(agent_is_correct)
    b += int(not agent_is_correct)
    agent_log.loc[i, ['a', 'b']] = a, b
correct_answers = agent_log['Agent is correct'].sum()
total_answers = agent_log['Agent is correct'].count()
percentage = 0
if total_answers > 0:
    percentage = float(correct_answers) / total_answers
print(f"Agent was right {correct_answers}/{total_answers} ({100 * percentage:>.2f} %) of the times.")
plotPrior(a, b)
```
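As a side note, the peak read off the plotted grid is simply the mode of the Beta posterior, which has a closed form for a, b > 1. A quick numerical check (the counts below are made up for illustration):

```python
import numpy as np
from scipy.stats import beta

a, b = 57, 15  # hypothetical counts of correct / incorrect answers
x = np.linspace(0, 1, 1000)
mode_numeric = x[beta.pdf(x, a, b).argmax()]   # what plotPrior reads off the grid
mode_closed = (a - 1) / (a + b - 2)            # mode of Beta(a, b) for a, b > 1
print(mode_closed)  # 0.8
```

The grid estimate agrees with the closed form up to the grid spacing of 1/999.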
# Dynamic example
```
# create a figure and axes
fig = plt.figure(figsize=(12,5));
ax = plt.subplot(1,1,1);
# set up the subplots as needed
ax.set_xlim(( 0, 1));
ax.set_ylim((0, 10));
# create objects that will change in the animation. These are
# initially empty, and will be given new values for each frame
# in the animation.
txt_title = ax.set_title('');
maximal_point = ax.axvline(x=0, label='line at x = {}'.format(0));
line1, = ax.plot([], [], 'b', lw=2); # ax.plot returns a list of 2D line objects
clear_output()
plt.close('all')
def getPriorFrame(frame_n):
global agent_log
a = agent_log.loc[frame_n,'a']
b = agent_log.loc[frame_n,'b']
x = np.linspace(0, 1, 1000)
y = beta.pdf(x, a, b)
x_guess = x[y.argmax()]
ax.legend()
maximal_point.set_xdata(x_guess)
maximal_point.set_label(f'Best guess for prior: {x_guess:>.2f}')
line1.set_data(x, y)
txt_title.set_text(f'Agent step = {frame_n:4d}, a = {a}, b= {b}')
return line1,
num_of_steps = 50
frames =[0]+ list(range(0, len(agent_log), int(len(agent_log) / num_of_steps))) + [agent_log.index[-1]]
ani = animation.FuncAnimation(fig, getPriorFrame, frames,
interval=100, blit=True)
rc('animation', html='html5')
ani
```
```
%matplotlib inline
import pymc3 as pm
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'
plt.style.use(['seaborn-colorblind', 'seaborn-darkgrid'])
```
#### Code 2.1
```
ways = np.array([0, 3, 8, 9, 0])
ways / ways.sum()
```
#### Code 2.2
$$Pr(w \mid n, p) = \frac{n!}{w!(n − w)!} p^w (1 − p)^{n−w}$$
The probability of observing six W’s in nine tosses—under a value of p=0.5
```
stats.binom.pmf(6, n=9, p=0.5)
```
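The same number can be computed directly from the formula above with the standard library, without SciPy:

```python
from math import comb

# Pr(w | n, p) = n! / (w! (n - w)!) * p^w * (1 - p)^(n - w)
w, n, p = 6, 9, 0.5
pr = comb(n, w) * p**w * (1 - p)**(n - w)
print(pr)  # 0.1640625, matching stats.binom.pmf(6, n=9, p=0.5)
```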
#### Code 2.3 and 2.5
Computing the posterior using a grid approximation.
In the book the following code is not wrapped in a function, but doing so makes it easier to experiment with different parameters
```
def posterior_grid_approx(grid_points=5, success=6, tosses=9):
    """Grid approximation of the posterior for the globe-tossing model."""
    # define grid
    p_grid = np.linspace(0, 1, grid_points)
    # define prior
    prior = np.repeat(5, grid_points)  # uniform
    # prior = (p_grid >= 0.5).astype(int)  # truncated
    # prior = np.exp(-5 * abs(p_grid - 0.5))  # double exp
    # compute likelihood at each point in the grid
    likelihood = stats.binom.pmf(success, tosses, p_grid)
    # compute product of likelihood and prior
    unstd_posterior = likelihood * prior
    # standardize the posterior, so it sums to 1
    posterior = unstd_posterior / unstd_posterior.sum()
    return p_grid, posterior
```
#### Code 2.3
```
points = 20
w, n = 6, 9
p_grid, posterior = posterior_grid_approx(points, w, n)
plt.plot(p_grid, posterior, 'o-', label='success = {}\ntosses = {}'.format(w, n))
plt.xlabel('probability of water', fontsize=14)
plt.ylabel('posterior probability', fontsize=14)
plt.title('{} points'.format(points))
plt.legend(loc=0);
```
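With a flat prior, the grid posterior can be sanity-checked against the analytic result, since the posterior is then Beta(w+1, n−w+1). This check is not in the book; it recomputes the grid inline so it stands alone:

```python
import numpy as np
from scipy import stats

w, n = 6, 9
p_grid = np.linspace(0, 1, 1000)
posterior = stats.binom.pmf(w, n, p_grid)  # flat prior: posterior proportional to likelihood
posterior = posterior / posterior.sum()
grid_mean = (p_grid * posterior).sum()
analytic_mean = (w + 1) / (n + 2)          # mean of Beta(w+1, n-w+1) = 7/11
```

With 1000 grid points the two means agree to well under 0.001.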
#### Code 2.6
Computing the posterior using the quadratic approximation
```
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_approximation:
    p = pm.Uniform('p', 0, 1)
    w = pm.Binomial('w', n=len(data), p=p, observed=data.sum())
    mean_q = pm.find_MAP()
    std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
mean_q['p'], std_q
norm = stats.norm(mean_q['p'], std_q)
prob = .89
z = stats.norm.ppf([(1-prob)/2, (1+prob)/2])
pi = mean_q['p'] + std_q * z
pi
```
#### Code 2.7
```
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x , w+1, n-w+1),
label='True posterior')
# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q['p'], std_q),
label='Quadratic approximation')
plt.legend(loc=0, fontsize=13)
plt.title('n = {}'.format(n), fontsize=14)
plt.xlabel('Proportion water', fontsize=14)
plt.ylabel('Density', fontsize=14);
import sys, IPython, scipy, matplotlib, platform
print("This notebook was createad on a computer %s running %s and using:\nPython %s\nIPython %s\nPyMC3 %s\nNumPy %s\nSciPy %s\nMatplotlib %s\n" % (platform.machine(), ' '.join(platform.linux_distribution()[:2]), sys.version[:5], IPython.__version__, pm.__version__, np.__version__, scipy.__version__, matplotlib.__version__))
```
# Rolling Window Features
The following notebook showcases an example workflow of creating rolling window features and building a model to predict which customers will buy in the next 4 weeks.
This uses dummy sales data but the idea can be implemented on actual sales data and can also be expanded to include other available data sources such as click-stream data, call center data, email contacts data, etc.
***
<b>Spark 3.1.2</b> (with Python 3.8) has been used for this notebook.<br>
Refer to [spark documentation](https://spark.apache.org/docs/3.1.2/api/sql/index.html) for help with <b>data ops functions</b>.<br>
Refer to [this article](https://medium.com/analytics-vidhya/installing-and-using-pyspark-on-windows-machine-59c2d64af76e) to <b>install and use PySpark on Windows machine</b>.
### Building a spark session
To create a SparkSession, use the following builder pattern:
`spark = SparkSession\
.builder\
.master("local")\
.appName("Word Count")\
.config("spark.some.config.option", "some-value")\
.getOrCreate()`
```
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql import Window
from pyspark.sql.types import FloatType
# initiating spark session (stop any existing one first; ignore if none is running yet)
try:
    spark.stop()
except NameError:
    pass
spark = SparkSession\
.builder\
.appName("rolling_window")\
.config("spark.executor.memory", "1536m")\
.config("spark.driver.memory", "2g")\
.getOrCreate()
spark
```
## Data prep
We will be using window functions to compute relative features for all dates. We will first aggregate the data to customer x week level so it is easier to handle.
<mark>The week level date that we create will serve as the 'reference date' from which everything will be relative.</mark>
All the required dimension tables have to be joined with the sales table prior to aggregation so that we can create all required features.
### Read input datasets
```
import pandas as pd
df_sales = spark.read.csv('./data/rw_sales.csv',inferSchema=True,header=True)
df_customer = spark.read.csv('./data/clustering_customer.csv',inferSchema=True,header=True)
df_product = spark.read.csv('./data/clustering_product.csv',inferSchema=True,header=True)
df_payment = spark.read.csv('./data/clustering_payment.csv',inferSchema=True,header=True)
```
<b>Quick exploration of the datasets:</b>
1. We have sales data that captures date, customer id, product, quantity, dollar amount & payment type at order x item level. `order_item_id` refers to each unique product in each order
2. We have corresponding dimension tables for customer info, product info, and payment tender info
```
df_sales.show(5)
# order_item_id is the primary key
(df_sales.count(),
df_sales.selectExpr('count(Distinct order_item_id)').collect()[0][0],
df_sales.selectExpr('count(Distinct order_id)').collect()[0][0])
df_sales.printSchema()
# fix date type for tran_dt
df_sales = df_sales.withColumn('tran_dt', F.to_date('tran_dt'))
df_customer.show(5)
# we have 1k unique customers in sales data with all their info in customer dimension table
(df_sales.selectExpr('count(Distinct customer_id)').collect()[0][0],
df_customer.count(),
df_customer.selectExpr('count(Distinct customer_id)').collect()[0][0])
# product dimension table provides category and price for each product
df_product.show(5)
(df_product.count(),
df_product.selectExpr('count(Distinct product_id)').collect()[0][0])
# payment type table maps the payment type id from sales table
df_payment.show(5)
```
### Join all dim tables and add week_end column
```
df_sales = df_sales.join(df_product.select('product_id','category'), on=['product_id'], how='left')
df_sales = df_sales.join(df_payment, on=['payment_type_id'], how='left')
```
<b>week_end column: Saturday of every week</b>
`dayofweek()` returns 1-7 corresponding to Sun-Sat for a date.
Using this, we will convert each date to the date corresponding to the Saturday of that week (week: Sun-Sat) using below logic:<br/>
`date + 7 - dayofweek()`
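The arithmetic can be sanity-checked in plain Python with hypothetical dates. Note that Spark's `dayofweek()` counts Sunday as 1 through Saturday as 7, while Python's `weekday()` counts Monday as 0, so a small conversion is needed:

```python
from datetime import date, timedelta

def week_end(d):
    # convert Python's Mon=0..Sun=6 to Spark's Sun=1..Sat=7
    dow = (d.weekday() + 1) % 7 + 1
    return d + timedelta(days=7 - dow)  # date + 7 - dayofweek()

print(week_end(date(2020, 11, 9)))   # a Monday -> 2020-11-14 (Saturday)
print(week_end(date(2020, 11, 14)))  # a Saturday maps to itself
```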
```
df_sales.printSchema()
df_sales = df_sales.withColumn('week_end',
F.col('tran_dt') + 7 - F.dayofweek('tran_dt'))
df_sales.show(5)
```
### customer_id x week_end aggregation
We will be creating following features at weekly level. These will then be aggregated for multiple time frames using window functions for the final dataset.
1. Sales
2. No. of orders
3. No. of units
4. Sales split by category
5. Sales split by payment type
```
df_sales_agg = df_sales.groupBy('customer_id','week_end').agg(
F.sum('dollars').alias('sales'),
F.countDistinct('order_id').alias('orders'),
F.sum('qty').alias('units'))
# category split pivot
df_sales_cat_agg = df_sales.withColumn('category', F.concat(F.lit('cat_'), F.col('category')))
df_sales_cat_agg = df_sales_cat_agg.groupBy('customer_id','week_end').pivot('category').agg(F.sum('dollars'))
# payment type split pivot
# clean-up values in payment type column
df_payment_agg = df_sales.withColumn(
'payment_type',
F.concat(F.lit('pay_'), F.regexp_replace(F.col('payment_type'),' ','_')))
df_payment_agg = df_payment_agg.groupby('customer_id','week_end').pivot('payment_type').agg(F.sum('dollars'))  # sum, consistent with the category pivot above
# join all together
df_sales_agg = df_sales_agg.join(df_sales_cat_agg, on=['customer_id','week_end'], how='left')
df_sales_agg = df_sales_agg.join(df_payment_agg, on=['customer_id','week_end'], how='left')
df_sales_agg = df_sales_agg.persist()
df_sales_agg.count()
df_sales_agg.show(5)
```
### Fill Missing weeks
```
# cust level min and max weeks
df_cust = df_sales_agg.groupBy('customer_id').agg(
F.min('week_end').alias('min_week'),
F.max('week_end').alias('max_week'))
# function to get a dataframe with 1 row per date in provided range
def pandas_date_range(start, end):
    dt_rng = pd.date_range(start=start, end=end, freq='W-SAT')  # W-SAT required as we want all Saturdays
    df_date = pd.DataFrame(dt_rng, columns=['date'])
    return df_date
# use the cust level table and create a df with all Saturdays in our range
date_list = df_cust.selectExpr('min(min_week)', 'max(max_week)').collect()[0]
min_date = date_list[0]
max_date = date_list[1]
# use the function and create df
df_date_range = spark.createDataFrame(pandas_date_range(min_date, max_date))
# date format
df_date_range = df_date_range.withColumn('date',F.to_date('date'))
df_date_range = df_date_range.repartition(1).persist()
df_date_range.count()
```
<b>Cross join date list df with cust table to create filled base table</b>
```
df_base = df_cust.crossJoin(F.broadcast(df_date_range))
# filter to keep only week_end since first week per customer
df_base = df_base.where(F.col('date')>=F.col('min_week'))
# rename date to week_end
df_base = df_base.withColumnRenamed('date','week_end')
```
<b>Join with the aggregated week level table to create full base table</b>
```
df_base = df_base.join(df_sales_agg, on=['customer_id','week_end'], how='left')
df_base = df_base.fillna(0)
df_base = df_base.persist()
df_base.count()
# write base table as parquet
df_base.repartition(8).write.parquet('./data/rw_base/', mode='overwrite')
df_base = spark.read.parquet('./data/rw_base/')
```
## y-variable
Determining whether a customer buys something in the 4 weeks following the current week.
```
# flag 1/0 for weeks with purchases
df_base = df_base.withColumn('purchase_flag', F.when(F.col('sales')>0,1).otherwise(0))
# window to aggregate the flag over next 4 weeks
df_base = df_base.withColumn(
'purchase_flag_next_4w',
F.max('purchase_flag').over(
Window.partitionBy('customer_id').orderBy('week_end').rowsBetween(1,4)))
```
## Features
We will be aggregating the features columns over various time intervals (1/4/13/26/52 weeks) to create a rich set of look-back features. We will also create derived features post aggregation.
```
# we can create and keep Window() objects that can be referenced in multiple formulas
# we don't need a window definition for 1w features as these are already present
window_4w = Window.partitionBy('customer_id').orderBy('week_end').rowsBetween(-3,Window.currentRow)
window_13w = Window.partitionBy('customer_id').orderBy('week_end').rowsBetween(-12,Window.currentRow)
window_26w = Window.partitionBy('customer_id').orderBy('week_end').rowsBetween(-25,Window.currentRow)
window_52w = Window.partitionBy('customer_id').orderBy('week_end').rowsBetween(-51,Window.currentRow)
df_base.columns
```
<b>Direct features</b>
```
cols_skip = ['customer_id','week_end','min_week','max_week','purchase_flag_next_4w']
for cols in df_base.drop(*cols_skip).columns:
    df_base = df_base.withColumn(cols + '_4w', F.sum(F.col(cols)).over(window_4w))
    df_base = df_base.withColumn(cols + '_13w', F.sum(F.col(cols)).over(window_13w))
    df_base = df_base.withColumn(cols + '_26w', F.sum(F.col(cols)).over(window_26w))
    df_base = df_base.withColumn(cols + '_52w', F.sum(F.col(cols)).over(window_52w))
```
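For intuition, `rowsBetween(-3, Window.currentRow)` is a trailing 4-row window (the current week plus the 3 preceding ones), analogous to a pandas rolling sum with `min_periods=1`. A sketch on toy data:

```python
import pandas as pd

weekly_sales = pd.Series([1, 2, 3, 4, 5])  # toy weekly values for one customer
trailing_4w = weekly_sales.rolling(window=4, min_periods=1).sum()
print(trailing_4w.tolist())  # [1.0, 3.0, 6.0, 10.0, 14.0]
```

As in the Spark version, early rows aggregate over however many weeks exist so far.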
<b>Derived features</b>
```
# aov, aur, upt at each time cut ('' covers the 1w columns, which have no suffix)
for time_cuts in ['', '_4w', '_13w', '_26w', '_52w']:
    df_base = df_base.withColumn('aov' + time_cuts, F.col('sales' + time_cuts) / F.col('orders' + time_cuts))
    df_base = df_base.withColumn('aur' + time_cuts, F.col('sales' + time_cuts) / F.col('units' + time_cuts))
    df_base = df_base.withColumn('upt' + time_cuts, F.col('units' + time_cuts) / F.col('orders' + time_cuts))
# % split of category and payment type for 26w (can be extended to other time-frames as well)
for cat in ['A', 'B', 'C', 'D', 'E']:
    df_base = df_base.withColumn('cat_' + cat + '_26w_perc', F.col('cat_' + cat + '_26w') / F.col('sales_26w'))
for pay in ['cash', 'credit_card', 'debit_card', 'gift_card', 'others']:
    df_base = df_base.withColumn('pay_' + pay + '_26w_perc', F.col('pay_' + pay + '_26w') / F.col('sales_26w'))
# all columns
df_base.columns
```
<b>Derived features: trend vars</b>
```
# we will take ratio of sales for different time-frames to estimate trend features
# that depict whether a customer has an increasing trend or not
df_base = df_base.withColumn('sales_1w_over_4w', F.col('sales')/ F.col('sales_4w'))
df_base = df_base.withColumn('sales_4w_over_13w', F.col('sales_4w')/ F.col('sales_13w'))
df_base = df_base.withColumn('sales_13w_over_26w', F.col('sales_13w')/F.col('sales_26w'))
df_base = df_base.withColumn('sales_26w_over_52w', F.col('sales_26w')/F.col('sales_52w'))
```
<b>Time elements</b>
```
# extract year, month, and week of year from week_end to be used as features
df_base = df_base.withColumn('year', F.year('week_end'))
df_base = df_base.withColumn('month', F.month('week_end'))
df_base = df_base.withColumn('weekofyear', F.weekofyear('week_end'))
```
<b>More derived features</b>:<br/>
We can add many more derived features as well, as required.
e.g. lag variables of existing features, trend ratios for other features, % change (Q-o-Q, M-o-M type) using lag variables, etc.
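A minimal sketch of lag and % change features, shown with pandas for brevity and on made-up data; in Spark the equivalents would be `F.lag(...).over(Window.partitionBy('customer_id').orderBy('week_end'))` and a ratio against that lag:

```python
import pandas as pd

df = pd.DataFrame({
    'customer_id': [1, 1, 1, 2, 2],
    'week_end': pd.to_datetime(['2020-01-04', '2020-01-11', '2020-01-18',
                                '2020-01-04', '2020-01-11']),
    'sales_4w': [10.0, 20.0, 30.0, 5.0, 10.0],
}).sort_values(['customer_id', 'week_end'])

grp = df.groupby('customer_id')['sales_4w']
df['sales_4w_lag1'] = grp.shift(1)     # previous week's value per customer
df['sales_4w_wow'] = grp.pct_change()  # week-over-week % change
```

The grouping ensures the lag never leaks across customers: each customer's first week gets a null lag.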
```
# save sample rows to csv for checks
df_base.limit(50).toPandas().to_csv('./files/rw_features_qc.csv',index=False)
# save features dataset as parquet
df_base.repartition(8).write.parquet('./data/rw_features/', mode='overwrite')
df_features = spark.read.parquet('./data/rw_features/')
```
## Model Build
### Dataset for modeling
<b>Sample one week_end per month</b>
```
df_wk_sample = df_features.select('week_end').withColumn('month', F.substring(F.col('week_end'), 1,7))
df_wk_sample = df_wk_sample.groupBy('month').agg(F.max('week_end').alias('week_end'))
df_wk_sample = df_wk_sample.repartition(1).persist()
df_wk_sample.count()
df_wk_sample.sort('week_end').show(5)
count_features = df_features.count()
# join back to filter
df_model = df_features.join(F.broadcast(df_wk_sample.select('week_end')), on=['week_end'], how='inner')
count_wk_sample = df_model.count()
```
<b>Eligibility filter</b>: Customer should be active in last year w.r.t the reference date
```
# use sales_52w for elig. filter
df_model = df_model.where(F.col('sales_52w')>0)
count_elig = df_model.count()
# count of rows at each stage
print(count_features, count_wk_sample, count_elig)
```
<b>Removing latest 4 week_end dates</b>: As we have a look-forward period of 4 weeks, the latest 4 week_end dates in the data cannot be used for our model, as they do not have 4 weeks of data ahead of them for the y-variable.
```
# see latest week_end dates (in the dataframe prior to monthly sampling)
df_features.select('week_end').drop_duplicates().sort(F.col('week_end').desc()).show(5)
# filter
df_model = df_model.where(F.col('week_end')<'2020-11-14')
count_4w_rm = df_model.count()
# count of rows at each stage
print(count_features, count_wk_sample, count_elig, count_4w_rm)
```
### Model Dataset Summary
Let's look at event rate for our dataset and also get a quick summary of all features.
The y-variable is balanced here because it is a dummy dataset. <mark>In most actual scenarios, this will not be balanced and the model build exercise will involve sampling for balancing.</mark>
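One common balancing approach, sketched here with pandas on made-up labels, is to downsample the majority class to the size of the minority class:

```python
import pandas as pd

# hypothetical imbalanced binary target: 10 positives, 90 negatives
df = pd.DataFrame({'y': [1] * 10 + [0] * 90})
n_pos = (df['y'] == 1).sum()
negatives_down = df[df['y'] == 0].sample(n=n_pos, random_state=125)
balanced = pd.concat([df[df['y'] == 1], negatives_down])
print(balanced['y'].mean())  # 0.5 -> event rate is now balanced
```

In Spark the same idea is usually expressed with `sampleBy` on the label column; remember to evaluate on an unsampled hold-out set so metrics reflect the true event rate.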
```
df_model.groupBy('purchase_flag_next_4w').count().sort('purchase_flag_next_4w').show()
df_model.groupBy().agg(F.avg('purchase_flag_next_4w').alias('event_rate'), F.avg('purchase_flag').alias('wk_evt_rt')).show()
```
<b>Saving summary of all numerical features as a csv</b>
```
summary_metrics =\
('count','mean','stddev','min','0.10%','1.00%','5.00%','10.00%','20.00%','25.00%','30.00%',
'40.00%','50.00%','60.00%','70.00%','75.00%','80.00%','90.00%','95.00%','99.00%','99.90%','max')
df_summary_numeric = df_model.summary(*summary_metrics)
df_summary_numeric.toPandas().T.to_csv('./files/rw_features_summary.csv')
# fillna
df_model = df_model.fillna(0)
```
### Train-Test Split
80-20 split
```
train, test = df_model.randomSplit([0.8, 0.2], seed=125)
train.columns
```
### Data Prep
Spark Models require a vector of features as input. Categorical columns also need to be String Indexed before they can be used.
As we don't have any categorical columns currently, we will directly go with VectorAssembler.
<b>We will add it to a pipeline model that can be saved to be used on test & scoring datasets.</b>
```
# model related imports (RF)
from pyspark.ml.classification import RandomForestClassifier, RandomForestClassificationModel
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.feature import VectorAssembler, StringIndexer
from pyspark.ml.evaluation import BinaryClassificationEvaluator
# list of features: remove identifier columns and the y-var
col_list = df_model.drop('week_end','customer_id','min_week','max_week','purchase_flag_next_4w').columns
stages = []
assembler = VectorAssembler(inputCols=col_list, outputCol='features')
stages.append(assembler)
pipe = Pipeline(stages=stages)
pipe_model = pipe.fit(train)
pipe_model.write().overwrite().save('./files/model_objects/rw_pipe/')
pipe_model = PipelineModel.load('./files/model_objects/rw_pipe/')
```
<b>Apply the transformation pipeline</b>
Also keep the identifier columns and y-var in the transformed dataframe.
```
train_pr = pipe_model.transform(train)
train_pr = train_pr.select('customer_id','week_end','purchase_flag_next_4w','features')
train_pr = train_pr.persist()
train_pr.count()
test_pr = pipe_model.transform(test)
test_pr = test_pr.select('customer_id','week_end','purchase_flag_next_4w','features')
test_pr = test_pr.persist()
test_pr.count()
```
### Model Training
We will train one iteration of Random Forest model as showcase.
In an actual scenario, you will have to iterate through the training step multiple times for feature selection and model hyperparameter tuning to get a good final model.
```
train_pr.show(5)
model_params = {
'labelCol': 'purchase_flag_next_4w',
'numTrees': 128,
'maxDepth': 12,
'featuresCol': 'features',
'minInstancesPerNode': 25,
'maxBins': 128,
'minInfoGain': 0.0,
'subsamplingRate': 0.7,
'featureSubsetStrategy': '0.3',
'impurity': 'gini',
'seed': 125,
'cacheNodeIds': False,
'maxMemoryInMB': 256
}
clf = RandomForestClassifier(**model_params)
trained_clf = clf.fit(train_pr)
```
### Feature Importance
We will save feature importance as a csv.
```
# Feature importance
feature_importance_list = trained_clf.featureImportances
feature_list = pd.DataFrame(train_pr.schema['features'].metadata['ml_attr']['attrs']['numeric']).sort_values('idx')
feature_importance_list = pd.DataFrame(
data=feature_importance_list.toArray(),
columns=['relative_importance'],
index=feature_list['name'])
feature_importance_list = feature_importance_list.sort_values('relative_importance', ascending=False)
feature_importance_list.to_csv('./files/rw_rf_feat_imp.csv')
```
### Predict on train and test
```
secondelement = F.udf(lambda v: float(v[1]), FloatType())
train_pred = trained_clf.transform(train_pr).withColumn('score',secondelement(F.col('probability')))
test_pred = trained_clf.transform(test_pr).withColumn('score', secondelement(F.col('probability')))
test_pred.show(5)
```
### Test Set Evaluation
```
evaluator = BinaryClassificationEvaluator(
rawPredictionCol='rawPrediction',
labelCol='purchase_flag_next_4w',
metricName='areaUnderROC')
# areaUnderROC
evaluator.evaluate(train_pred)
evaluator.evaluate(test_pred)
# cm
test_pred.groupBy('purchase_flag_next_4w','prediction').count().sort('purchase_flag_next_4w','prediction').show()
# accuracy
test_pred.where(F.col('purchase_flag_next_4w')==F.col('prediction')).count()/test_pred.count()
```
### Save Model
```
trained_clf.write().overwrite().save('./files/model_objects/rw_rf_model/')
trained_clf = RandomForestClassificationModel.load('./files/model_objects/rw_rf_model/')
```
## Scoring
We will take the records for latest week_end from df_features and score it using our trained model.
```
df_features = spark.read.parquet('./data/rw_features/')
max_we = df_features.selectExpr('max(week_end)').collect()[0][0]
max_we
df_scoring = df_features.where(F.col('week_end')==max_we)
df_scoring.count()
# fillna
df_scoring = df_scoring.fillna(0)
# transformation pipeline
pipe_model = PipelineModel.load('./files/model_objects/rw_pipe/')
# apply
df_scoring = pipe_model.transform(df_scoring)
df_scoring = df_scoring.select('customer_id','week_end','features')
# rf model
trained_clf = RandomForestClassificationModel.load('./files/model_objects/rw_rf_model/')
#apply
secondelement = F.udf(lambda v: float(v[1]), FloatType())
df_scoring = trained_clf.transform(df_scoring).withColumn('score',secondelement(F.col('probability')))
df_scoring.show(5)
# save scored output
df_scoring.repartition(8).write.parquet('./data/rw_scored/', mode='overwrite')
```
# Hyperparameter Tuning using Your Own Keras/Tensorflow Container
This notebook shows how to build your own Keras(Tensorflow) container, test it locally using SageMaker Python SDK local mode, and bring it to SageMaker for training, leveraging hyperparameter tuning.
The model used for this notebook is a ResNet model, trained with the CIFAR-10 dataset. The example is based on https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py
## Set up the notebook instance to support local mode
Currently you need to install docker-compose in order to use local mode (i.e., testing the container in the notebook instance without pushing it to ECR).
```
!/bin/bash setup.sh
```
## Permissions
Running this notebook requires permissions in addition to the normal `SageMakerFullAccess` permissions. This is because it creates new repositories in Amazon ECR. The easiest way to add these permissions is simply to add the managed policy `AmazonEC2ContainerRegistryFullAccess` to the role that you used to start your notebook instance. There's no need to restart your notebook instance when you do this; the new permissions will be available immediately.
## Set up the environment
We will set up a few things before starting the workflow.
1. get the execution role which will be passed to sagemaker for accessing your resources such as s3 bucket
2. specify the s3 bucket and prefix where training data set and model artifacts are stored
```
import os
import numpy as np
import tempfile
import tensorflow as tf
import sagemaker
import boto3
from sagemaker.estimator import Estimator
region = boto3.Session().region_name
sagemaker_session = sagemaker.Session()
smclient = boto3.client("sagemaker")
bucket = (
sagemaker.Session().default_bucket()
) # s3 bucket name, must be in the same region as the one specified above
prefix = "sagemaker/DEMO-hpo-keras-cifar10"
role = sagemaker.get_execution_role()
NUM_CLASSES = 10 # the data set has 10 categories of images
```
## Complete source code
- [trainer/start.py](trainer/start.py): Keras model
- [trainer/environment.py](trainer/environment.py): Contain information about the SageMaker environment
## Building the image
We will build the docker image using the Tensorflow versions on dockerhub. The full list of Tensorflow versions can be found at https://hub.docker.com/r/tensorflow/tensorflow/tags/
```
import shlex
import subprocess
def get_image_name(ecr_repository, tensorflow_version_tag):
    return "%s:tensorflow-%s" % (ecr_repository, tensorflow_version_tag)

def build_image(name, version):
    cmd = "docker build -t %s --build-arg VERSION=%s -f Dockerfile ." % (name, version)
    subprocess.check_call(shlex.split(cmd))
# version tag can be found at https://hub.docker.com/r/tensorflow/tensorflow/tags/
# e.g., latest cpu version is 'latest', while latest gpu version is 'latest-gpu'
tensorflow_version_tag = "1.10.1"
account = boto3.client("sts").get_caller_identity()["Account"]
domain = "amazonaws.com"
if region == "cn-north-1" or region == "cn-northwest-1":
    domain = "amazonaws.com.cn"
ecr_repository = "%s.dkr.ecr.%s.%s/test" % (
account,
region,
domain,
) # your ECR repository, which you should have been created before running the notebook
image_name = get_image_name(ecr_repository, tensorflow_version_tag)
print("building image:" + image_name)
build_image(image_name, tensorflow_version_tag)
```
## Prepare the data
```
def upload_channel(channel_name, x, y):
    y = tf.keras.utils.to_categorical(y, NUM_CLASSES)
    file_path = tempfile.mkdtemp()
    np.savez_compressed(os.path.join(file_path, "cifar-10-npz-compressed.npz"), x=x, y=y)
    return sagemaker_session.upload_data(
        path=file_path, bucket=bucket, key_prefix="data/DEMO-keras-cifar10/%s" % channel_name
    )

def upload_training_data():
    # The data, split between train and test sets:
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
    train_data_location = upload_channel("train", x_train, y_train)
    test_data_location = upload_channel("test", x_test, y_test)
    return {"train": train_data_location, "test": test_data_location}
channels = upload_training_data()
```
## Testing the container locally (optional)
You can test the container locally using local mode of SageMaker Python SDK. A training container will be created in the notebook instance based on the docker image you built. Note that we have not pushed the docker image to ECR yet since we are only running local mode here. You can skip to the tuning step if you want but testing the container locally can help you find issues quickly before kicking off the tuning job.
### Setting the hyperparameters
```
hyperparameters = dict(
batch_size=32,
data_augmentation=True,
learning_rate=0.0001,
width_shift_range=0.1,
height_shift_range=0.1,
epochs=1,
)
hyperparameters
```
### Create a training job using local mode
```
%%time
output_location = "s3://{}/{}/output".format(bucket, prefix)
estimator = Estimator(
image_name,
role=role,
output_path=output_location,
train_instance_count=1,
train_instance_type="local",
hyperparameters=hyperparameters,
)
estimator.fit(channels)
```
## Pushing the container to ECR
Now that we've tested the container locally and it works fine, we can move on to run the hyperparameter tuning. Before kicking off the tuning job, you need to push the docker image to ECR first.
The cell below will create the ECR repository, if it does not exist yet, and push the image to ECR.
```
# The name of our algorithm
algorithm_name = 'test'
# If the repository doesn't exist in ECR, create it.
exist_repo = !aws ecr describe-repositories --repository-names {algorithm_name} > /dev/null 2>&1
if not exist_repo:
    !aws ecr create-repository --repository-name {algorithm_name} > /dev/null
# Get the login command from ECR and execute it directly
!$(aws ecr get-login --region {region} --no-include-email)
!docker push {image_name}
```
## Specify hyperparameter tuning job configuration
*Note, with the default setting below, the hyperparameter tuning job can take 20~30 minutes to complete. You can customize the code in order to get better result, such as increasing the total number of training jobs, epochs, etc., with the understanding that the tuning time will be increased accordingly as well.*
Now you configure the tuning job by defining a JSON object that you pass as the value of the TuningJobConfig parameter to the create_tuning_job call. In this JSON object, you specify:
* The ranges of hyperparameters you want to tune
* The limits of the resource the tuning job can consume
* The objective metric for the tuning job
```
import json
from time import gmtime, strftime
tuning_job_name = "BYO-keras-tuningjob-" + strftime("%d-%H-%M-%S", gmtime())
print(tuning_job_name)
tuning_job_config = {
"ParameterRanges": {
"CategoricalParameterRanges": [],
"ContinuousParameterRanges": [
{
"MaxValue": "0.001",
"MinValue": "0.0001",
"Name": "learning_rate",
}
],
"IntegerParameterRanges": [],
},
"ResourceLimits": {"MaxNumberOfTrainingJobs": 9, "MaxParallelTrainingJobs": 3},
"Strategy": "Bayesian",
"HyperParameterTuningJobObjective": {"MetricName": "loss", "Type": "Minimize"},
}
```
## Specify training job configuration
Now you configure the training jobs the tuning job launches by defining a JSON object that you pass as the value of the TrainingJobDefinition parameter to the create_hyper_parameter_tuning_job call.
In this JSON object, you specify:
* Metrics that the training jobs emit
* The container image for the algorithm to train
* The input configuration for your training and test data
* Configuration for the output of the algorithm
* The values of any algorithm hyperparameters that are not tuned in the tuning job
* The type of instance to use for the training jobs
* The stopping condition for the training jobs
This example defines one metric that the TensorFlow container emits: loss.
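As a quick sanity check, the metric regex can be tested against the kind of progress line Keras prints — this is the pattern SageMaker applies to the container's log output (the log line below is illustrative):

```python
import re

# The metric regex from the training job definition below.
metric_regex = r"loss: ([0-9\.]+)"

# A typical Keras progress line (illustrative).
log_line = "1563/1563 [==============================] - 12s - loss: 1.8472 - acc: 0.3301"

match = re.search(metric_regex, log_line)
print(match.group(1))  # the captured metric value: "1.8472"
```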
```
training_image = image_name
print("training artifacts will be uploaded to: {}".format(output_location))
training_job_definition = {
"AlgorithmSpecification": {
"MetricDefinitions": [{"Name": "loss", "Regex": "loss: ([0-9\\.]+)"}],
"TrainingImage": training_image,
"TrainingInputMode": "File",
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": channels["train"],
"S3DataDistributionType": "FullyReplicated",
}
},
"CompressionType": "None",
"RecordWrapperType": "None",
},
{
"ChannelName": "test",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": channels["test"],
"S3DataDistributionType": "FullyReplicated",
}
},
"CompressionType": "None",
"RecordWrapperType": "None",
},
],
"OutputDataConfig": {"S3OutputPath": "s3://{}/{}/output".format(bucket, prefix)},
"ResourceConfig": {"InstanceCount": 1, "InstanceType": "ml.m4.xlarge", "VolumeSizeInGB": 50},
"RoleArn": role,
"StaticHyperParameters": {
"batch_size": "32",
"data_augmentation": "True",
"height_shift_range": "0.1",
"width_shift_range": "0.1",
"epochs": "1",
},
"StoppingCondition": {"MaxRuntimeInSeconds": 43200},
}
```
## Create and launch a hyperparameter tuning job
Now you can launch a hyperparameter tuning job by calling the create_hyper_parameter_tuning_job API. Pass the name and the JSON objects you created in the previous steps as the values of the parameters. After the tuning job is created, you should be able to describe the tuning job to see its progress in the next step, and you can go to the SageMaker console → Jobs to check the progress of each training job that has been created.
```
smclient.create_hyper_parameter_tuning_job(
HyperParameterTuningJobName=tuning_job_name,
HyperParameterTuningJobConfig=tuning_job_config,
TrainingJobDefinition=training_job_definition,
)
```
Let's just run a quick check of the hyperparameter tuning jobs status to make sure it started successfully and is `InProgress`.
```
smclient.describe_hyper_parameter_tuning_job(HyperParameterTuningJobName=tuning_job_name)[
"HyperParameterTuningJobStatus"
]
```
## Analyze tuning job results - after tuning job is completed
Please refer to "HPO_Analyze_TuningJob_Results.ipynb" to see example code to analyze the tuning job results.
## Deploy the best model
Now that we have got the best model, we can deploy it to an endpoint. Please refer to other SageMaker sample notebooks or SageMaker documentation to see how to deploy a model.
### Scroll Down Below to start from Exercise 8.04
```
# Removes Warnings
import warnings
warnings.filterwarnings('ignore')
#import the necessary packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
```
## Reading the data using pandas
```
data= pd.read_csv('Churn_Modelling.csv')
data.head(5)
len(data)
data.shape
```
## Scrubbing the data
```
data.isnull().values.any()
# It seems we have some missing values; let's find which columns contain them
data.isnull().any()
# It seems that we have missing values in Gender, Age and EstimatedSalary
data[["EstimatedSalary","Age"]].describe()
data.describe()
#### It seems that HasCrCard has values 0 and 1 and hence should be changed to a category
data['HasCrCard'].value_counts()
## No of missing Values present
data.isnull().sum()
## Percentage of missing Values present
round(data.isnull().sum()/len(data)*100,2)
## Checking the datatype of the missing columns
data[["Gender","Age","EstimatedSalary"]].dtypes
```
### There are three ways to handle missing values:
1. Dropping the rows with missing values
2. Filling missing values with a summary statistic (e.g., mean, median, or mode)
3. Predicting the missing values using an ML algorithm
```
mean_value=data['EstimatedSalary'].mean()
data['EstimatedSalary']=data['EstimatedSalary']\
.fillna(mean_value)
data['Gender'].value_counts()
data['Gender']=data['Gender'].fillna(data['Gender']\
.value_counts().idxmax())
mode_value=data['Age'].mode()
data['Age']=data['Age'].fillna(mode_value[0])
##checking for any missing values
data.isnull().any()
```
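The third strategy listed above — predicting missing values with an ML algorithm — is not demonstrated in this notebook. A minimal sketch using scikit-learn's `KNNImputer` on a toy frame (the data and column names here are illustrative, not the churn data):

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

# Toy frame with one missing salary (illustrative).
toy = pd.DataFrame({
    "Age":             [25, 30, 35, 40],
    "EstimatedSalary": [40000.0, 50000.0, np.nan, 70000.0],
})

# Each missing value is predicted from the 2 rows nearest on the
# remaining (non-missing) features -- here, Age.
imputer = KNNImputer(n_neighbors=2)
filled = pd.DataFrame(imputer.fit_transform(toy), columns=toy.columns)
print(filled["EstimatedSalary"].tolist())  # the NaN becomes the mean of its 2 neighbours
```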
### Renaming the columns
```
# We would want to rename some of the columns
data = data.rename(columns={'CredRate': 'CreditScore',\
'ActMem' : 'IsActiveMember',\
'Prod Number': 'NumOfProducts',\
'Exited':'Churn'})
data.columns
```
### We would also like to move the churn column to the extreme right and drop the customer ID
```
data.drop(labels=['CustomerId'], axis=1,inplace = True)
column_churn = data['Churn']
data.drop(labels=['Churn'], axis=1,inplace = True)
data.insert(len(data.columns), 'Churn', column_churn.values)
data.columns
```
### Changing the data type
```
data["Geography"] = data["Geography"].astype('category')
data["Gender"] = data["Gender"].astype('category')
data["HasCrCard"] = data["HasCrCard"].astype('category')
data["Churn"] = data["Churn"].astype('category')
data["IsActiveMember"] = data["IsActiveMember"]\
.astype('category')
data.dtypes
```
# Exploring the data
## Statistical Overview
```
data['Churn'].value_counts()
data['Churn'].value_counts(normalize=True)*100
data['IsActiveMember'].value_counts(normalize=True)*100
data.describe()
summary_churn = data.groupby('Churn')
summary_churn.mean()
summary_churn.median()
corr = data.corr()
plt.figure(figsize=(15,8))
sns.heatmap(corr, \
xticklabels=corr.columns.values,\
yticklabels=corr.columns.values,\
annot=True,cmap='Greys_r')
corr
```
## Visualization
```
f, axes = plt.subplots(ncols=3, figsize=(15, 6))
sns.distplot(data.EstimatedSalary, kde=True, color="gray", \
ax=axes[0]).set_title('EstimatedSalary')
axes[0].set_ylabel('No of Customers')
sns.distplot(data.Age, kde=True, color="gray", \
ax=axes[1]).set_title('Age')
axes[1].set_ylabel('No of Customers')
sns.distplot(data.Balance, kde=True, color="gray", \
ax=axes[2]).set_title('Balance')
axes[2].set_ylabel('No of Customers')
plt.figure(figsize=(15,4))
p=sns.countplot(y="Gender", hue='Churn', data=data,\
palette="Greys_r")
legend = p.get_legend()
legend_txt = legend.texts
legend_txt[0].set_text("No Churn")
legend_txt[1].set_text("Churn")
p.set_title('Customer Churn Distribution by Gender')
plt.figure(figsize=(15,4))
p=sns.countplot(x='Geography', hue='Churn', data=data, \
palette="Greys_r")
legend = p.get_legend()
legend_txt = legend.texts
legend_txt[0].set_text("No Churn")
legend_txt[1].set_text("Churn")
p.set_title('Customer Geography Distribution')
plt.figure(figsize=(15,4))
p=sns.countplot(x='NumOfProducts', hue='Churn', data=data, \
palette="Greys_r")
legend = p.get_legend()
legend_txt = legend.texts
legend_txt[0].set_text("No Churn")
legend_txt[1].set_text("Churn")
p.set_title('Customer Distribution by Product')
plt.figure(figsize=(15,4))
ax=sns.kdeplot(data.loc[(data['Churn'] == 0),'Age'] , \
color=sns.color_palette("Greys_r")[0],\
shade=True,label='no churn')
ax=sns.kdeplot(data.loc[(data['Churn'] == 1),'Age'] , \
color=sns.color_palette("Greys_r")[1],\
shade=True, label='churn')
ax.set(xlabel='Customer Age', ylabel='Frequency')
plt.title('Customer Age - churn vs no churn')
plt.figure(figsize=(15,4))
ax=sns.kdeplot(data.loc[(data['Churn'] == 0),'Balance'] , \
color=sns.color_palette("Greys_r")[0],\
shade=True,label='no churn')
ax=sns.kdeplot(data.loc[(data['Churn'] == 1),'Balance'] , \
color=sns.color_palette("Greys_r")[1],\
shade=True, label='churn')
ax.set(xlabel='Customer Balance', ylabel='Frequency')
plt.title('Customer Balance - churn vs no churn')
plt.figure(figsize=(15,4))
ax=sns.kdeplot(data.loc[(data['Churn'] == 0),'CreditScore'] , \
color=sns.color_palette("Greys_r")[0],\
shade=True,label='no churn')
ax=sns.kdeplot(data.loc[(data['Churn'] == 1),'CreditScore'] , \
color=sns.color_palette("Greys_r")[1],\
shade=True, label='churn')
ax.set(xlabel='CreditScore', ylabel='Frequency')
plt.title('Customer CreditScore - churn vs no churn')
plt.figure(figsize=(16,4))
p=sns.barplot(x='NumOfProducts',y='Balance',hue='Churn',\
data=data, palette="Greys_r")
p.legend(loc='upper right')
legend = p.get_legend()
legend_txt = legend.texts
legend_txt[0].set_text("No Churn")
legend_txt[1].set_text("Churn")
p.set_title('Number of Product VS Balance')
```
## Feature selection
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
data.dtypes
### Encoding the categorical variables
data["Geography"] = data["Geography"].astype('category')\
.cat.codes
data["Gender"] = data["Gender"].astype('category').cat.codes
data["HasCrCard"] = data["HasCrCard"].astype('category')\
.cat.codes
data["Churn"] = data["Churn"].astype('category').cat.codes
target = 'Churn'
X = data.drop('Churn', axis=1)
y=data[target]
X_train, X_test, y_train, y_test = train_test_split\
(X,y,test_size=0.15, \
random_state=123, \
stratify=y)
forest=RandomForestClassifier(n_estimators=500,random_state=1)
forest.fit(X_train,y_train)
importances=forest.feature_importances_
features = data.drop(['Churn'],axis=1).columns
indices = np.argsort(importances)[::-1]
plt.figure(figsize=(15,4))
plt.title("Feature importances using Random Forest")
plt.bar(range(X_train.shape[1]), importances[indices],\
color="gray", align="center")
plt.xticks(range(X_train.shape[1]), features[indices], \
rotation='vertical',fontsize=15)
plt.xlim([-1, X_train.shape[1]])
plt.show()
feature_importance_df = pd.DataFrame({"Feature":features,\
"Importance":importances})
print(feature_importance_df)
```
## Model Fitting
```
import statsmodels.api as sm
top5_features = ['Age','EstimatedSalary','CreditScore',\
'Balance','NumOfProducts']
logReg = sm.Logit(y_train, X_train[top5_features])
logistic_regression = logReg.fit()
logistic_regression.summary()
logistic_regression.params
# Create a function to compute the linear predictor from the coefficients
coef = logistic_regression.params
def y(coef, Age, EstimatedSalary, CreditScore, Balance, \
      NumOfProducts):
    return coef[0]*Age + coef[1]*EstimatedSalary \
        + coef[2]*CreditScore + coef[3]*Balance \
        + coef[4]*NumOfProducts
import numpy as np
#A customer having below attributes
#Age: 50
#EstimatedSalary: 100,000
#CreditScore: 600
#Balance: 100,000
#NumOfProducts: 2
#would have 38% chance of churn
y1 = y(coef, 50, 100000, 600,100000,2)
p = np.exp(y1) / (1+np.exp(y1))
p
```
## Logistic regression using scikit-learn
```
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(random_state=0, solver='lbfgs')\
.fit(X_train[top5_features], y_train)
clf.predict(X_test[top5_features])
clf.predict_proba(X_test[top5_features])
clf.score(X_test[top5_features], y_test)
```
## Exercise 8.04
# Performing standardization
```
from sklearn import preprocessing
X_train[top5_features].head()
scaler = preprocessing.StandardScaler().fit(X_train[top5_features])
scaler.mean_
scaler.scale_
X_train_scalar=scaler.transform(X_train[top5_features])
X_train_scalar
X_test_scalar=scaler.transform(X_test[top5_features])
```
## Exercise 8.05
# Performing Scaling
```
min_max = preprocessing.MinMaxScaler().fit(X_train[top5_features])
min_max.min_
min_max.scale_
X_train_min_max=min_max.transform(X_train[top5_features])
X_test_min_max=min_max.transform(X_test[top5_features])
```
## Exercise 8.06
# Normalization
```
normalize = preprocessing.Normalizer().fit(X_train[top5_features])
normalize
X_train_normalize=normalize.transform(X_train[top5_features])
X_test_normalize=normalize.transform(X_test[top5_features])
np.sqrt(np.sum(X_train_normalize**2, axis=1))
np.sqrt(np.sum(X_test_normalize**2, axis=1))
```
## Exercise 8.07
# Model Evaluation
```
from sklearn.model_selection import StratifiedKFold
skf = StratifiedKFold(n_splits=10)\
.split(X_train[top5_features].values,y_train.values)
results=[]
for i, (train,test) in enumerate(skf):
clf.fit(X_train[top5_features].values[train],\
y_train.values[train])
fit_result=clf.score(X_train[top5_features].values[test],\
y_train.values[test])
results.append(fit_result)
print('k-fold: %2d, Class Ratio: %s, Accuracy: %.4f'\
% (i,np.bincount(y_train.values[train]),fit_result))
print('accuracy for CV is:%.3f' % np.mean(results))
```
### Using Scikit Learn cross_val_score
```
from sklearn.model_selection import cross_val_score
results_cross_val_score=cross_val_score\
(estimator=clf,\
X=X_train[top5_features].values,\
y=y_train.values,cv=10,n_jobs=1)
print('accuracy for CV is:%.3f '\
% np.mean(results_cross_val_score))
results_cross_val_score
print('accuracy for CV is:%.3f' % np.mean(results_cross_val_score))
```
## Exercise 8.08
# Fine Tuning of Model Using Grid Search
```
from sklearn import svm
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import StratifiedKFold
parameters = [ {'kernel': ['linear'], 'C':[0.1, 1]}, \
{'kernel': ['rbf'], 'C':[0.1, 1]}]
clf = GridSearchCV(svm.SVC(), parameters, \
cv = StratifiedKFold(n_splits = 3),\
verbose=4,n_jobs=-1)
clf.fit(X_train[top5_features], y_train)
print('best score train:', clf.best_score_)
print('best parameters train: ', clf.best_params_)
```
# Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
```
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
```
## Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership, and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
```
rides[:24*10].plot(x='dteday', y='cnt')
```
### Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.
```
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
```
### Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
```
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
```
### Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
```
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
```
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
```
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
```
## Time to build the network
Below you'll build your network. We've built out the structure. You'll implement both the forward pass and backwards pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input to the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.
> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.
2. Implement the forward pass in the `train` method.
3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.
4. Implement the forward pass in the `run` method.
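For reference while implementing, the sigmoid and its derivative can be checked numerically. This is a generic sketch, not the project solution — your graded implementation still belongs in `my_answers.py`:

```python
import numpy as np

# Sigmoid activation and its derivative, expressed via the sigmoid itself.
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1 - s)

# The output activation f(x) = x has derivative 1 everywhere, so the
# output-layer error term needs no extra derivative factor.
print(sigmoid(0.0))        # 0.5
print(sigmoid_prime(0.0))  # 0.25
```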
```
#############
# In the my_answers.py file, fill out the TODO sections as specified
#############
from my_answers import NeuralNetwork
def MSE(y, Y):
return np.mean((y-Y)**2)
```
## Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
```
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
```
## Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
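The batch-sampling idea can be sketched on a toy one-weight regression — illustrative only, not the project network:

```python
import numpy as np

# Fit y = w * x by taking a gradient step on a small random batch each pass.
rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, 1000)
y = 3.0 * x  # true weight is 3.0

w, lr = 0.0, 0.5
for _ in range(200):
    batch = rng.choice(1000, size=32)      # random sample, not the whole set
    error = y[batch] - w * x[batch]
    w += lr * np.mean(error * x[batch])    # gradient step on the batch only

print(round(w, 2))  # converges near the true weight, 3.0
```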
### Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.
### Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
### Choose the number of hidden nodes
In a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data.
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
```
import sys
####################
### Set the hyperparameters in your my_answers.py file ###
####################
from my_answers import iterations, learning_rate, hidden_nodes, output_nodes
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
```
## Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
```
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index, 'dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
```
## OPTIONAL: Thinking about your results (this question will not be evaluated in the rubric).
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
> **Note:** You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
#### Your answer below
## Submitting:
Open up the 'jwt' file in the first-neural-network directory (which also contains this notebook) for submission instructions
# ALIGN Tutorial Notebook: DEVIL'S ADVOCATE
This notebook provides an introduction to **ALIGN**,
a tool for quantifying multi-level linguistic similarity
between speakers, using the "Devil's Advocate" transcript data reported in Duran, Paxton, and Fusaroli: "ALIGN: Analyzing Linguistic Interactions with Generalizable techNiques - a Python Library". This method was also introduced in Duran, Paxton, and Fusaroli (2019), which can be accessed here for reference: https://osf.io/kx8ur/.
## Tutorial Overview
The Devil's Advocate ("DA") study examines interpersonal linguistic alignment between dyads across two conversations where participants either agreed or disagreed with each other (as a randomly assigned between-dyads condition) and where one of the conversations involved the truth and the other deception (as a within-subjects condition), with order of conversations counterbalanced across dyads.
**Transcript Data:**
The complete de-identified dataset of raw conversational transcripts is hosted on a secure protected-access data repository provided by the Inter-university Consortium for Political and Social Research (ICPSR). These transcripts need to be downloaded to use this tutorial. Please follow the link to the ICPSR repository to access them: http://dx.doi.org/10.3886/ICPSR37124.v1.
**Analysis:**
To replicate the results reported in Duran, Paxton, and Fusaroli (2019), or for an example of the R code used to analyze the ALIGN output for this dataset, please visit the OSF repository for this project: https://osf.io/3TGUF/
***
## Table of Contents
* [Getting Started](#Getting-Started)
* [Prerequisites](#Prerequisites)
* [Preparing input data](#Preparing-input-data)
* [Filename conventions](#Filename-conventions)
* [Highest-level functions](#Highest-level-functions)
* [Setup](#Setup)
* [Import libraries](#Import-libraries)
* [Specify ALIGN path settings](#Specify-ALIGN-path-settings)
* [Phase 1: Prepare transcripts](#Phase-1:-Prepare-transcripts)
* [Preparation settings](#Preparation-settings)
* [Run preparation phase](#Run-preparation-phase)
* [Phase 2: Calculate alignment](#Phase-2:-Calculate-alignment)
* [For real data: Alignment calculation settings](#For-real-data:-Alignment-calculation-settings)
* [For real data: Run alignment calculation](#For-real-data:-Run-alignment-calculation)
* [For surrogate data: Alignment calculation settings](#For-surrogate-data:-Alignment-calculation-settings)
* [For surrogate data: Run alignment calculation](#For-surrogate-data:-Run-alignment-calculation)
* [ALIGN output overview](#ALIGN-output-overview)
* [Speed calculations](#Speed-calculations)
* [Printouts!](#Printouts!)
***
# Getting Started
## Preparing input data
**The transcript data used for this analysis adheres to the following requirements:**
* Each input text file contains a single conversation organized in an `N x 2` matrix
* Text file must be tab-delimited.
* Each row corresponds to a single conversational turn from a speaker.
* Rows must be temporally ordered based on their occurrence in the conversation.
* Rows must alternate between speakers.
* Speaker identifier and content for each turn are divided across two columns.
* Column 1 must have the header `participant`.
* Each cell specifies the speaker.
* Each speaker must have a unique label (e.g., `P1` and `P2`, `0` and `1`).
* **NOTE: For the DA dataset, a label with a value of 0 indicates the speaker did not receive any special assignment at the start of the experiment; a value of 1 indicates the speaker was assigned the role of deceiver (i.e., “devil’s advocate”) at the start of the experiment.**
* Column 2 must have the header `content`.
* Each cell corresponds to the transcribed utterance from the speaker.
* Each cell must end with a newline character: `\n`
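A minimal file meeting these requirements might look like this — the utterances below are invented for illustration:

```python
import csv, io

# Hypothetical transcript in the required tab-delimited, two-column,
# speaker-alternating format.
sample = (
    "participant\tcontent\n"
    "0\tso tell me what you think about the topic\n"
    "1\twell honestly i see it a little differently\n"
    "0\tokay walk me through your reasoning\n"
)

rows = list(csv.DictReader(io.StringIO(sample), delimiter="\t"))
print([r["participant"] for r in rows])  # speakers alternate: ['0', '1', '0']
```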
## Filename conventions
* Each conversation text file must follow a regular naming pattern: a prefix for the dyad and a prefix for the conversation, each followed by its identifier, with the two parts separated by a unique character. By default, ALIGN looks for filenames that follow this convention: `dyad1-condA.txt`
* However, users may choose to include any label for dyad or condition so long as the two labels are distinct from one another and are not subsets of any possible dyad or condition labels. Users may also use any character as a separator so long as it does not occur anywhere else in the filename.
* The chosen file format **must** be used when saving **all** files for this analysis.
**NOTE: For the DA dataset, each conversation text file is saved in the format dyadN_condX-Y-Z (e.g., dyad11_cond1-0-2).**
The X, Y, and Z condition codes are:
* X = Indicates whether the conversation involved dyads who agreed or disagreed with each other: value of 1 indicates a disagreement conversation, value of 2 indicates an agreement conversation (e.g., “cond1”)
* Y = Indicates whether the conversation involved deception: value of 0 indicates truth, value of 1 indicates deception.
* Z = Indicates conversation order. Given each dyad had two conversations: value of 2 indicates the conversation occurred first, value of 3 indicates the conversation occurred last.
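For illustration, the DA filename scheme above can be decoded with a small regex-based parser (a sketch of our own, not an ALIGN function):

```python
import re

def parse_da_filename(name):
    """Parse a DA filename like 'dyad11_cond1-0-2.txt' into its codes (sketch)."""
    m = re.match(r'dyad(\d+)_cond(\d)-(\d)-(\d)(?:\.txt)?$', name)
    dyad, x, y, z = m.groups()
    return {
        'dyad': int(dyad),
        'agreement': 'disagree' if x == '1' else 'agree',  # X code
        'deception': bool(int(y)),                         # Y code
        'order': 'first' if z == '2' else 'last',          # Z code
    }

info = parse_da_filename('dyad11_cond1-0-2.txt')
```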
## Highest-level functions
Given appropriately prepared transcript files, ALIGN can be run in 3 high-level functions:
**`prepare_transcripts`**: Pre-process each standardized
conversation, checking it conforms to the requirements.
Each utterance is tokenized and lemmatized and has
POS tags added.
**`calculate_alignment`**: Generates turn-level and
conversation-level alignment scores (lexical,
conceptual, and syntactic) across a range of
*n*-gram sequences.
**`calculate_baseline_alignment`**: Generate a surrogate corpus
and run alignment analysis (using identical specifications
from `calculate_alignment`) on it to produce a baseline.
***
# Setup
## Import libraries
Install ALIGN if you have not already.
```
import sys
!{sys.executable} -m pip install align
```
Import packages we'll need to run ALIGN.
```
import align, os
import pandas as pd
```
Import `time` so that we can get a sense of how
long the ALIGN pipeline takes.
```
import time
```
Import `warnings` to flag us if required files aren't provided.
```
import warnings
```
## Install additional NLTK packages
Download some additional `nltk` packages that `align` needs to work.
```
import nltk
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('wordnet')
```
## Specify ALIGN path settings
ALIGN will need to know where the raw transcripts are stored, where to store the processed data, and where to read in any additional files needed for optional ALIGN parameters.
### Required directories
For the sake of this tutorial, specify a base path that will serve as our jumping-off point for our saved data. All of the shipped data will be called from the package directory but the DA transcripts will need to be added manually.
**`BASE_PATH`**: Containing directory for this tutorial.
```
BASE_PATH = os.getcwd()
```
**`DA_EXAMPLE`**: Subdirectories for output and other
files for this tutorial. (We'll create a default directory
if one doesn't already exist.)
```
DA_EXAMPLE = os.path.join(BASE_PATH,
'DA/')
if not os.path.exists(DA_EXAMPLE):
os.makedirs(DA_EXAMPLE)
```
**`TRANSCRIPTS`**: Transcript text files must be first downloaded from the ICPSR repository.
Next, set variable for folder name (as string) for relative location of folder into which the downloaded transcript files need to be manually added. (We'll create a default directory if one doesn't already exist.)
```
TRANSCRIPTS = os.path.join(DA_EXAMPLE,
'DA-transcripts/')
if not os.path.exists(TRANSCRIPTS):
os.makedirs(TRANSCRIPTS)
if not os.listdir(TRANSCRIPTS) :
warnings.warn('DA text files not found at the specified '
'location. Please download from '
'http://dx.doi.org/10.3886/ICPSR37124.v1 '
'and add to directory.')
```
**`PREPPED_TRANSCRIPTS`**: Set variable for folder name
(as string) for relative location of folder into which
prepared transcript files will be saved. (We'll create
a default directory if one doesn't already exist.)
```
PREPPED_TRANSCRIPTS = os.path.join(DA_EXAMPLE,
'DA-prepped/')
if not os.path.exists(PREPPED_TRANSCRIPTS):
os.makedirs(PREPPED_TRANSCRIPTS)
```
**`ANALYSIS_READY`**: Set variable for folder name
(as string) for relative location of folder into
which analysis-ready dataframe files will be saved.
(We'll create a default directory if one doesn't
already exist.)
```
ANALYSIS_READY = os.path.join(DA_EXAMPLE,
'DA-analysis/')
if not os.path.exists(ANALYSIS_READY):
os.makedirs(ANALYSIS_READY)
```
**`SURROGATE_TRANSCRIPTS`**: Set variable for folder name
(as string) for relative location of folder into which all
prepared surrogate transcript files will be saved. (We'll
create a default directory if one doesn't already exist.)
```
SURROGATE_TRANSCRIPTS = os.path.join(DA_EXAMPLE,
'DA-surrogate/')
if not os.path.exists(SURROGATE_TRANSCRIPTS):
os.makedirs(SURROGATE_TRANSCRIPTS)
```
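The create-if-missing pattern repeated in the cells above can be condensed into a small helper, sketched here under the hypothetical name `ensure_dir` (demonstrated against a temporary directory so it is safe to run anywhere):

```python
import os
import tempfile

def ensure_dir(base, name):
    """Create a subdirectory under `base` if needed and return its path."""
    path = os.path.join(base, name)
    os.makedirs(path, exist_ok=True)  # no-op when the directory already exists
    return path

# usage sketch against a throwaway base directory
base = tempfile.mkdtemp()
prepped = ensure_dir(base, 'DA-prepped')
```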
### Paths for optional parameters
**`OPTIONAL_PATHS`**: If using Stanford POS tagger or
pretrained vectors, the path to these files. If these
files are provided in other locations, be sure to
change the file paths for them. (We'll create a default
directory if one doesn't already exist.)
```
OPTIONAL_PATHS = os.path.join(DA_EXAMPLE,
'optional_directories/')
if not os.path.exists(OPTIONAL_PATHS):
os.makedirs(OPTIONAL_PATHS)
```
#### Stanford POS Tagger
The Stanford POS tagger **will not be used** by
default in this example. However, you may use it
by uncommenting and providing the requested file
paths in the cells in this section and then changing
the relevant parameters in the ALIGN calls below.
If desired, we could use the Stanford part-of-speech
tagger along with the Penn part-of-speech tagger
(which is always used in ALIGN). To do so, the files
must be downloaded separately:
https://nlp.stanford.edu/software/tagger.shtml#Download
**`STANFORD_POS_PATH`**: If using Stanford POS tagger
with the Penn POS tagger, path to Stanford directory.
```
# STANFORD_POS_PATH = os.path.join(OPTIONAL_PATHS,
# 'stanford-postagger-full-2018-10-16/')
# if os.path.exists(STANFORD_POS_PATH) == False:
# warnings.warn('Stanford POS directory not found at the specified '
# 'location. Please update the file path with '
# 'the folder that can be directly downloaded here: '
# 'https://nlp.stanford.edu/software/stanford-postagger-full-2018-10-16.zip '
# '- Alternatively, comment out the '
# '`STANFORD_POS_PATH` information.')
```
**`STANFORD_LANGUAGE`**: If using Stanford tagger,
set language model to be used for POS tagging.
```
# STANFORD_LANGUAGE = os.path.join('models/english-left3words-distsim.tagger')
# if os.path.exists(STANFORD_POS_PATH + STANFORD_LANGUAGE) == False:
# warnings.warn('Stanford tagger language not found at the specified '
# 'location. Please update the file path or comment '
# 'out the `STANFORD_POS_PATH` information.')
```
#### Google News pretrained vectors
The Google News pretrained vectors **will be used**
by default in this example. The file is available for
download here: https://code.google.com/archive/p/word2vec/
If desired, researchers may choose to read in pretrained
`word2vec` vectors rather than creating a semantic space
from the corpus provided. This may be especially useful
for small corpora (i.e., fewer than 30k unique words),
although the choice of semantic space corpus should be
made with careful consideration about the nature of the
linguistic context (for further discussion, see Duran,
Paxton, & Fusaroli, 2019).
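Conceptual alignment from pretrained vectors ultimately rests on cosine similarity between word vectors. The toy sketch below illustrates that quantity with hand-made three-dimensional vectors; in practice the Google News `.bin` would be loaded with a word2vec reader (e.g., gensim's `KeyedVectors.load_word2vec_format(path, binary=True)`, assuming gensim is installed):

```python
import numpy as np

# Tiny toy vectors standing in for real pretrained word2vec embeddings
vectors = {
    'dog': np.array([0.9, 0.1, 0.0]),
    'puppy': np.array([0.8, 0.2, 0.1]),
    'economy': np.array([0.0, 0.1, 0.9]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_close = cosine(vectors['dog'], vectors['puppy'])
sim_far = cosine(vectors['dog'], vectors['economy'])
```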
**`PRETRAINED_INPUT_FILE`**: If using pretrained vectors, path
to pretrained vector files. You may choose to download the file
directly to this path or change the path to a different one.
```
PRETRAINED_INPUT_FILE = os.path.join(OPTIONAL_PATHS,
'GoogleNews-vectors-negative300.bin')
if not os.path.exists(PRETRAINED_INPUT_FILE):
warnings.warn('Google News vector not found at the specified '
'location. Please update the file path with '
'the .bin file that can be accessed here: '
'https://code.google.com/archive/p/word2vec/ '
'- Alternatively, comment out the `PRETRAINED_INPUT_FILE` information')
```
***
# Phase 1: Prepare transcripts
In Phase 1, we take our raw transcripts and get them ready
for later ALIGN analysis.
## Preparation settings
There are a number of parameters that we can set for the
`prepare_transcripts()` function:
```
print(align.prepare_transcripts.__doc__)
```
For the sake of this demonstration, we'll keep everything as
defaults. Among other parameters, this means that:
* any turns fewer than 2 words will be removed from the corpus
(`minwords=2`),
* we'll be using regex to strip out any filler words
(e.g., "uh," "um," "huh"; `use_filler_list=None`),
* if you like, you can supply additional filler words as `use_filler_list=["string1", "string2"]` but be sure to set `filler_regex_and_list=True`
* we'll be using the Project Gutenberg corpus to create our
spell-checker algorithm (`training_dictionary=None`),
* we'll rely only on the Penn POS tagger
(`add_stanford_tags=False`), and
* our data will be saved both as individual conversation files
and as a master dataframe of all conversation outputs
(`save_concatenated_dataframe=True`).
## Run preparation phase
First, we prepare our transcripts by reading in individual `.txt`
files for each conversation, cleaning up undesired text and turns,
spell-checking, tokenizing, lemmatizing, and adding POS tags.
```
start_phase1 = time.time()
model_store = align.prepare_transcripts(
input_files=TRANSCRIPTS,
output_file_directory=PREPPED_TRANSCRIPTS,
minwords=2,
use_filler_list=None,
filler_regex_and_list=False,
training_dictionary=None,
add_stanford_tags=False,
### if you want to run the Stanford POS tagger, be sure to uncomment the next two lines
# stanford_pos_path=STANFORD_POS_PATH,
# stanford_language_path=STANFORD_LANGUAGE,
save_concatenated_dataframe=True)
end_phase1 = time.time()
```
***
# Phase 2: Calculate alignment
## For real data: Alignment calculation settings
There are a number of parameters that we can set for the
`calculate_alignment()` function:
```
print(align.calculate_alignment.__doc__)
```
For the sake of this tutorial, we'll keep everything as
defaults. Among other parameters, this means that we'll:
* use only unigrams and bigrams for our *n*-grams
(`maxngram=2`),
* use pretrained vectors instead of creating our own
semantic space, since our tutorial corpus is quite
small (`use_pretrained_vectors=True` and
`pretrained_input_file=PRETRAINED_INPUT_FILE`),
* ignore exact lexical duplicates when calculating
syntactic alignment,
* we'll rely only on the Penn POS tagger
(`add_stanford_tags=False`), and
* implement high- and low-frequency cutoffs to clean
our transcript data (`high_sd_cutoff=3` and
`low_n_cutoff=1`).
Whenever we calculate a baseline level of alignment,
we need to include the same parameter values for any
parameters that are present in both `calculate_alignment()`
(this step) and `calculate_baseline_alignment()`
(next step). As a result, we'll specify these here:
```
# set standards to be used for real and surrogate
INPUT_FILES = PREPPED_TRANSCRIPTS
MAXNGRAM = 2
USE_PRETRAINED_VECTORS = True
SEMANTIC_MODEL_INPUT_FILE = os.path.join(DA_EXAMPLE,
'align_concatenated_dataframe.txt')
PRETRAINED_FILE_DIRECTORY = PRETRAINED_INPUT_FILE
ADD_STANFORD_TAGS = False
IGNORE_DUPLICATES = True
HIGH_SD_CUTOFF = 3
LOW_N_CUTOFF = 1
```
## For real data: Run alignment calculation
```
start_phase2real = time.time()
[turn_real,convo_real] = align.calculate_alignment(
input_files=INPUT_FILES,
maxngram=MAXNGRAM,
use_pretrained_vectors=USE_PRETRAINED_VECTORS,
pretrained_input_file=PRETRAINED_INPUT_FILE,
semantic_model_input_file=SEMANTIC_MODEL_INPUT_FILE,
output_file_directory=ANALYSIS_READY,
add_stanford_tags=ADD_STANFORD_TAGS,
ignore_duplicates=IGNORE_DUPLICATES,
high_sd_cutoff=HIGH_SD_CUTOFF,
low_n_cutoff=LOW_N_CUTOFF)
end_phase2real = time.time()
```
## For surrogate data: Alignment calculation settings
For the surrogate or baseline data, we have many of the same
parameters for `calculate_baseline_alignment()` as we do for
`calculate_alignment()`:
```
print(align.calculate_baseline_alignment.__doc__)
```
As mentioned above, when calculating the baseline, it is **vital**
to include the *same* parameter values for any parameters that
are included in both `calculate_alignment()` and
`calculate_baseline_alignment()`. As a result, we re-use those
values here.
We also demonstrate how to generate a subset of surrogate
pairings rather than all possible pairings.
In addition to the parameters that we're re-using from
the `calculate_alignment()` values (see above), we'll
keep most parameters at their defaults by:
* preserving the turn order when creating surrogate
pairs (`keep_original_turn_order=True`), and
* specifying dyad and condition with the `dyad` and
`cond` prefixes (`dyad_label='dyad'`,
`condition_label='cond'`).
However, we will also change some of these defaults,
including:
* using an underscore rather than a hyphen to separate
the dyad and condition identifiers, matching the DA
filename format (`id_separator='\_'`), and
* generating only a subset of surrogate data equal
to the size of the real data (`all_surrogates=False`).
## For surrogate data: Run alignment calculation
```
start_phase2surrogate = time.time()
[turn_surrogate,convo_surrogate] = align.calculate_baseline_alignment(
input_files=INPUT_FILES,
maxngram=MAXNGRAM,
use_pretrained_vectors=USE_PRETRAINED_VECTORS,
pretrained_input_file=PRETRAINED_INPUT_FILE,
semantic_model_input_file=SEMANTIC_MODEL_INPUT_FILE,
output_file_directory=ANALYSIS_READY,
add_stanford_tags=ADD_STANFORD_TAGS,
ignore_duplicates=IGNORE_DUPLICATES,
high_sd_cutoff=HIGH_SD_CUTOFF,
low_n_cutoff=LOW_N_CUTOFF,
surrogate_file_directory=SURROGATE_TRANSCRIPTS,
all_surrogates=False,
keep_original_turn_order=True,
id_separator='\_',
dyad_label='dyad',
condition_label='cond')
convo_surrogate
end_phase2surrogate = time.time()
```
***
# ALIGN output overview
## Speed calculations
As promised, let's take a look at how long it takes to run each section. Time is given in seconds.
**Phase 1:**
```
end_phase1 - start_phase1
```
**Phase 2, real data:**
```
end_phase2real - start_phase2real
```
**Phase 2, surrogate data:**
```
end_phase2surrogate - start_phase2surrogate
```
**All phases:**
```
end_phase2surrogate - start_phase1
```
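If you prefer a human-readable printout, a tiny helper (our own sketch, not part of ALIGN) can format the second differences above:

```python
def fmt_elapsed(seconds):
    """Render an elapsed time in seconds as 'Xm Ys'."""
    minutes, secs = divmod(int(round(seconds)), 60)
    return f'{minutes}m {secs}s'

# e.g. fmt_elapsed(end_phase1 - start_phase1) with the timers above
example = fmt_elapsed(125.4)
```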
## Printouts!
And that's it! Before we go, let's take a look at the output from the real data analyzed at the turn level for each conversation (`turn_real`) and at the conversation level for each dyad (`convo_real`). We'll then look at our surrogate data, analyzed both at the turn level (`turn_surrogate`) and at the conversation level (`convo_surrogate`). In our next step, we would then take these data and plug them into our statistical model of choice. As an example of how this was done for Duran, Paxton, and Fusaroli (2019, *Psychological Methods*, https://doi.org/10.1037/met0000206) please visit: https://osf.io/3TGUF/
```
turn_real.head(10)
convo_real.head(10)
turn_surrogate.head(10)
convo_surrogate.head(10)
```
### Install Required Packages
```
! pip install numpy pandas scikit-learn matplotlib
```
### Imports
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_validate
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer, ENGLISH_STOP_WORDS
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
```
### Read data
```
train_df = pd.read_csv('../assist_material/datasets/extracted/q1/train.csv', sep=',')
train_df.columns = ['id', 'title', 'content', 'label']
```
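Before benchmarking, it is worth checking the class balance, since we report macro-averaged metrics. A sketch on a toy frame (with the real data you would call `train_df['label'].value_counts()`):

```python
import pandas as pd

# toy frame standing in for train_df; value_counts shows the class balance
toy = pd.DataFrame({'label': ['sport', 'sport', 'politics', 'tech']})
counts = toy['label'].value_counts()
```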
### Benchmark Models
For benchmarking we use the following combinations: SVM with TF-IDF, Random Forest with TF-IDF, SVM with SVD, and
Random Forest with SVD. As the bag-of-words representation, a TF-IDF variant is used to vectorize the datasets.
```
vectorizer = TfidfVectorizer(max_features=50000)
svd = TruncatedSVD(n_components=300)
svm = SVC(kernel='linear')
random_forest = RandomForestClassifier(n_estimators=1000, max_features='sqrt', n_jobs=-1)
svm_tfidf = make_pipeline(vectorizer,svm)
random_forest_tfidf = make_pipeline(vectorizer, random_forest)
svm_tfidf_svd = make_pipeline(vectorizer, svd, svm)
random_forest_tfidf_svd = make_pipeline(vectorizer, svd, random_forest)
```
### Initialize data with labels in order to seed the classifiers
```
X = train_df['title'] + ' ' + train_df['content']
y = train_df['label']
```
### SVM with TF-IDF
```
scores_svm_tfidf = cross_validate(svm_tfidf, X, y,
scoring=['accuracy', 'precision_macro', 'recall_macro', 'f1_macro'],
cv=5,
n_jobs=-1,
return_train_score=False)
print('SVM + tfidf', scores_svm_tfidf)
```
### Random Forest with TF-IDF
```
scores_random_forest_tfidf = cross_validate(random_forest_tfidf, X, y,
scoring=['accuracy', 'precision_macro', 'recall_macro', 'f1_macro'],
cv=5,
return_train_score=False)
print('Random Forest + tfidf', scores_random_forest_tfidf)
```
### SVM with SVD
```
scores_svm_tfidf_svd = cross_validate(svm_tfidf_svd, X, y,
scoring=['accuracy', 'precision_macro', 'recall_macro', 'f1_macro'],
cv=5,
n_jobs=-1,
return_train_score=False)
print('SVM + tfidf + SVD', scores_svm_tfidf_svd)
```
### Random Forest with SVD
```
scores_random_forest_tfidf_svd = cross_validate(random_forest_tfidf_svd, X, y,
scoring=['accuracy', 'precision_macro', 'recall_macro', 'f1_macro'],
cv=5,
return_train_score=False)
print('Random Forest + tfidf + SVD', scores_random_forest_tfidf_svd)
```
## Beat the Benchmark classifier
To achieve the best performance in terms of accuracy and execution time, the best choice is Random Forest with
SVD. Tuning this model further, we can reach 96% accuracy. The hyper-parameters below came out of the tuning
phase, a time-consuming process in which approximately 20 different combinations were tried. A note on
preprocessing: the input text is first cleaned of stopwords, lower-cased, and finally vectorized with TF-IDF.
```
# Preprocess
# Give a small gain to titles
X = (train_df['title'] + ' ') * 3 + train_df['content']
stop_words = ENGLISH_STOP_WORDS.union(['will', 's', 't', 'one', 'new', 'said', 'say', 'says', 'year'])
vectorizer_tuned = TfidfVectorizer(lowercase=True, stop_words=stop_words, ngram_range=(1,1), max_features=50000)
svd_tuned = TruncatedSVD(n_components=1000)
random_forest_tuned = RandomForestClassifier(n_estimators=1000, max_features='sqrt', n_jobs=-1)
random_forest_tfidf_svd_tuned = make_pipeline(vectorizer_tuned, svd_tuned, random_forest_tuned)
scores_random_forest_tfidf_svd_tuned = cross_validate(random_forest_tfidf_svd_tuned, X, y,
scoring=['accuracy', 'precision_macro', 'recall_macro', 'f1_macro'],
cv=5,
return_train_score=False)
print('Random Forest + tfidf + SVD', scores_random_forest_tfidf_svd_tuned)
```
### Generating stats table
```
data_table = [[np.mean(scores_svm_tfidf['test_accuracy'], dtype='float64'),
np.mean(scores_random_forest_tfidf['test_accuracy'], dtype='float64'),
np.mean(scores_svm_tfidf_svd['test_accuracy'], dtype='float64'),
np.mean(scores_random_forest_tfidf_svd['test_accuracy'], dtype='float64'),
np.mean(scores_random_forest_tfidf_svd_tuned['test_accuracy'], dtype='float64')],
[np.mean(scores_svm_tfidf['test_precision_macro'], dtype='float64'),
np.mean(scores_random_forest_tfidf['test_precision_macro'], dtype='float64'),
np.mean(scores_svm_tfidf_svd['test_precision_macro'], dtype='float64'),
np.mean(scores_random_forest_tfidf_svd['test_precision_macro'], dtype='float64'),
np.mean(scores_random_forest_tfidf_svd_tuned['test_precision_macro'], dtype='float64')],
[np.mean(scores_svm_tfidf['test_recall_macro'], dtype='float64'),
np.mean(scores_random_forest_tfidf['test_recall_macro'], dtype='float64'),
np.mean(scores_svm_tfidf_svd['test_recall_macro'], dtype='float64'),
np.mean(scores_random_forest_tfidf_svd['test_recall_macro'], dtype='float64'),
np.mean(scores_random_forest_tfidf_svd_tuned['test_recall_macro'], dtype='float64')],
[np.mean(scores_svm_tfidf['test_f1_macro'], dtype='float64'),
np.mean(scores_random_forest_tfidf['test_f1_macro'], dtype='float64'),
np.mean(scores_svm_tfidf_svd['test_f1_macro'], dtype='float64'),
np.mean(scores_random_forest_tfidf_svd['test_f1_macro'], dtype='float64'),
np.mean(scores_random_forest_tfidf_svd_tuned['test_f1_macro'], dtype='float64')]
]
cell_text = []
for row in data_table:
cell_text.append([f'{x:1.5f}' for x in row])
plt.figure(dpi=150)
ax = plt.gca()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.box(on=None)
plt.subplots_adjust(left=0.2, bottom=0.2)
the_table = plt.table(cellText=cell_text,
rowLabels=['Accuracy', 'Precision', 'Recall', 'F1-Score'],
colLabels=['SVM (BoW)', 'Random Forest (BoW)', 'SVM (SVD)', 'Random Forest (SVD)', 'My Method'],
colColours=['lightsteelblue'] * 5,
rowColours=['lightsteelblue'] * 4,
loc='center')
the_table.scale(1, 1.5)
fig = plt.gcf()
plt.show()
```
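As an aside, the same summary can be assembled more compactly as a pandas DataFrame; the sketch below uses dummy score dicts in place of the real `cross_validate` outputs above:

```python
import numpy as np
import pandas as pd

# dummy cross_validate-style output standing in for the real score dicts
dummy_scores = {'test_accuracy': np.array([0.95, 0.96]),
                'test_f1_macro': np.array([0.94, 0.95])}
summary = pd.DataFrame({'My Method': {metric.replace('test_', ''): values.mean()
                                      for metric, values in dummy_scores.items()}})
```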
# SETUP
```
!pip install -r requirements_colab.txt -q
```
# DATA
> To speed up the review process, I provide the ***drive IDs*** of the data I created with the notebooks in the Train creation folder.
---
> Each dataset's Drive link is also listed in the README PDF attached with this solution.
```
!gdown --id 1hNRbtcqd9F6stMOK1xAZApDITwAjiSDJ
!gdown --id 1-QCmWsNGREXuWArifN0nD_Sp4hJxf0tu
!gdown --id 1-47L_1NKLeVgW1vWmqXXXCuWZ3gwZWsS
!gdown --id 1-aO4FEtv5CF-ZOcxDSO3jGEzPcIFdxgP
!gdown --id 1-8J_xFgI0WKT5UXFnfH4q1KUw_KgNY37
!gdown --id 1-a55a7N6a4SoqolPF_wI4C6Q70u_d7Hj
!gdown --id 1-BgXQwmXqBuk_P8VtvLfdLqy83dv56Kz
!gdown --id 1-hQGF2TNBbsy3jsGNtndmK55egbdFDjs
!gdown --id 1VE3L15uXRbP0kzmDzyuYac3Mi9yZY3YI
!gdown --id 1wDzl_QHgKtW2-FoDJs_U-qfbBTBUqNSA
```
## LIBRARIES
```
#import necessary dependecies
import os
import numpy as np
import pandas as pd
import random
from tqdm import tqdm
import copy
import lightgbm as lgb
try:  # only needed for the optional catboost / xgboost branches of DefineModel below
    import xgboost as xgb
    from catboost import CatBoostClassifier
except ImportError:
    pass
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import log_loss
from sklearn.preprocessing import QuantileTransformer
import warnings
warnings.filterwarnings('ignore')
# fix seed
np.random.seed(111)
random.seed(111)
```
## Train Creation
```
def _load_obs(path, is_train):
    """Read one observation CSV; for training data, aggregate to one row per
    field via the median and cast the label back to int."""
    df = pd.read_csv(path)
    if is_train:
        df = df.groupby('field_id').median().reset_index().sort_values('field_id')
        df.label = df.label.astype('int')
    return df

def create_train():     return _load_obs("S2TrainObs1.csv", is_train=True)
def create_test():      return _load_obs("S2TestObs1.csv",  is_train=False)
def createObs2_train(): return _load_obs("S2TrainObs2.csv", is_train=True)
def createObs2_test():  return _load_obs("S2TestObs2.csv",  is_train=False)
def createObs3_train(): return _load_obs("S2TrainObs3.csv", is_train=True)
def createObs3_test():  return _load_obs("S2TestObs3.csv",  is_train=False)
def createObs4_train(): return _load_obs("S2TrainObs4.csv", is_train=True)
def createObs4_test():  return _load_obs("S2TestObs4.csv",  is_train=False)
def createObs5_train(): return _load_obs("S2TrainObs5.csv", is_train=True)
def createObs5_test():  return _load_obs("S2TestObs5.csv",  is_train=False)
```
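The train loaders above collapse the per-pixel rows to a single row per field via the median before modeling. A toy demonstration of that aggregation:

```python
import pandas as pd

# toy stand-in for one observation CSV: two pixels for field 1, one for field 2
pixels = pd.DataFrame({'field_id': [1, 1, 2],
                       'B4_Month4': [0.2, 0.4, 0.6],
                       'label': [3, 3, 5]})
fields = pixels.groupby('field_id').median().reset_index().sort_values('field_id')
fields.label = fields.label.astype('int')  # median turns the int labels into floats
```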
## Feature Engineering
```
def process(T) :
# process bands
Bcols = T.filter(like='B').columns.tolist()
Vcols = T.filter(like='V').columns.tolist()
Obs1 = T.filter(like='Month4').columns.tolist()
Obs2 = T.filter(like='Month5').columns.tolist()
Obs3 = T.filter(like='Month6').columns.tolist()
Obs4 = T.filter(like='Month7').columns.tolist()
Obs5 = T.filter(like='Month8').columns.tolist()
Obs6 = T.filter(like='Month9').columns.tolist()
Obs7 = T.filter(like='Month10').columns.tolist()
Obs8 = T.filter(like='Month11').columns.tolist()
# vegetation indexes
B8cols = T.filter(like='B8_').columns.tolist()
B8cols = [x for x in B8cols if 'std' not in x]
B4cols = T.filter(like='B4_').columns.tolist()
B4cols = [x for x in B4cols if 'std' not in x]
B3cols = T.filter(like='B3_').columns.tolist()
B3cols = [x for x in B3cols if 'std' not in x]
B5cols = T.filter(like='B5_').columns.tolist()
B5cols = [x for x in B5cols if 'std' not in x]
B3cols = T.filter(like='B3_').columns.tolist()
B3cols = [x for x in B3cols if 'std' not in x]
B2cols = T.filter(like='B2_').columns.tolist()
B2cols = [x for x in B2cols if 'std' not in x]
B7cols = T.filter(like='B7_').columns.tolist()
B7cols = [x for x in B7cols if 'std' not in x]
B8Acols = T.filter(like='B8A_').columns.tolist()
B8Acols = [x for x in B8Acols if 'std' not in x]
B6cols = T.filter(like='B6_').columns.tolist()
B6cols = [x for x in B6cols if 'std' not in x]
B12cols = T.filter(like='B12_').columns.tolist()
B12cols = [x for x in B12cols if 'std' not in x]
B11cols = T.filter(like='B11_').columns.tolist()
B11cols = [x for x in B11cols if 'std' not in x]
B1cols = T.filter(like='B1_').columns.tolist()
B1cols = [x for x in B1cols if 'std' not in x]
B9cols = T.filter(like='B9_').columns.tolist()
B9cols = [x for x in B9cols if 'std' not in x]
L = 0.725
for b1,b2 ,b3 ,b4, b5 , b6, b7, b8 ,b8a ,b9,b11,b12 in zip(B1cols,B2cols,B3cols,B4cols,B5cols,B6cols,B7cols,B8cols,B8Acols,B9cols,B11cols,B12cols) :
T[f'NDVI_{b8.split("_")[1]}'] = ((T[b8] - T[b4]) / (T[b8] + T[b4]))
T[f'SAVI_{b8.split("_")[1]}'] = ((T[b8] - T[b4]) / (T[b8] + T[b4]+L) * (1.0 + L))
T[f'GRNDVI_{b8.split("_")[1]}'] = ((T[b8] - (T[b3]+T[b4])) / (T[b8] + (T[b3]+T[b4])))
T[f'GNDVI_{b8.split("_")[1]}'] = ((T[b8] - T[b3] ) / (T[b8] + T[b3]))
T[f'NDRE_{b8.split("_")[1]}'] = ((T[b5] - T[b4])/ (T[b5] + T[b4]))
T[f'EVI_{b8.split("_")[1]}'] = (2.5 * (T[b8] - T[b4] ) / ((T[b8] + 6.0 * T[b4] - 7.5 * T[b2]) + 1.0)).values.clip(min=-5,max=5)
T[f'WDRVI_{b8.split("_")[1]}'] = (((8 * T[b8]) - T[b4])/ ((8* T[b8]) + T[b4]))
T[f'ExBlue_{b8.split("_")[1]}'] = ((2 * T[b2]) - (T[b3]+T[b4]))
T[f'ExGreen_{b8.split("_")[1]}'] = ((2 * T[b3]) - (T[b2]+T[b4]) )
T[f'NDRE7_{b8.split("_")[1]}'] = ((T[b7] - T[b4])/ (T[b7] + T[b4]))
T[f'MTCI_{b8.split("_")[1]}'] = ((T[b8a] - T[b6])/ (T[b7] + T[b6]))
T[f'VARI_{b8.split("_")[1]}'] = ((T[b3] - T[b4])/ (T[b3] + T[b4] - T[b2]))
T[f'ARVI_{b8.split("_")[1]}'] = ( ((T[b8] - T[b4])-(T[b4] - T[b2])) / ((T[b8] + T[b4])-(T[b4] - T[b2])) )
# Bands Relations
T[f'b7b5_{b8.split("_")[1]}'] = (T[b7] - T[b5])/ (T[b7] + T[b5]) # B7 / B5
T[f'b7b6_{b8.split("_")[1]}'] = (T[b7] - T[b6])/ (T[b7] + T[b6]) # B7 / B6
T[f'b8ab5_{b8.split("_")[1]}'] = (T[b8a] - T[b5])/ (T[b8a] + T[b5]) # B8A / B5
T[f'b6b5_{b8.split("_")[1]}'] = (T[b6] - T[b5])/ (T[b6] + T[b5]) # B6 / B5
# ASSAZZIN bands relations
T[f'b3b1_{b8.split("_")[1]}'] = (T[b3] - T[b1])/ (T[b3] + T[b1])
T[f'b11b8_{b8.split("_")[1]}'] = (T[b11] - T[b8])/ (T[b11] + T[b8])
T[f'b12b11_{b8.split("_")[1]}'] = (T[b12] - T[b11])/ (T[b12] + T[b11])
T[f'b3b4_{b8.split("_")[1]}'] = (T[b3] - T[b4])/ (T[b3] + T[b4])
T[f'b9b4_{b8.split("_")[1]}'] = (T[b9] - T[b4])/ (T[b9] + T[b4])
T[f'b5b3_{b8.split("_")[1]}'] = (T[b5] - T[b3])/ (T[b5] + T[b3])
T[f'b12b3_{b8.split("_")[1]}'] = (T[b12] - T[b3])/ (T[b12] + T[b3])
T[f'b2b1_{b8.split("_")[1]}'] = (T[b2] - T[b1])/ (T[b2] + T[b1])
T[f'b4b1_{b8.split("_")[1]}'] = (T[b4] - T[b1])/ (T[b4] + T[b1])
T[f'b11b3_{b8.split("_")[1]}'] = (T[b11] - T[b3])/ (T[b11] + T[b3])
T[f'b12b8_{b8.split("_")[1]}'] = (T[b12] - T[b8])/ (T[b12] + T[b8])
T[f'b3b2_{b8.split("_")[1]}'] = (T[b3] - T[b2])/ (T[b3] + T[b2])
T[f'b8ab3_{b8.split("_")[1]}'] = (T[b8a] - T[b3])/ (T[b8a] + T[b3])
T[f'b8ab2_{b8.split("_")[1]}'] = (T[b8a] - T[b2])/ (T[b8a] + T[b2])
T[f'b8b1_{b8.split("_")[1]}'] = (T[b8] - T[b1])/ (T[b8] + T[b1])
T[f'ARVI2_{b8.split("_")[1]}'] = ( ((T[b3] - T[b4])-(T[b4] - T[b2])) / ((T[b3] + T[b4])+(T[b4] + T[b2])) )
T[f'ARVI3_{b8.split("_")[1]}'] = ( ((T[b5] - T[b3])-(T[b3] - T[b2])) / ((T[b5] + T[b3])+(T[b3] + T[b2])) )
T[f'b8b9_{b8.split("_")[1]}'] = (T[b8] - T[b9])/ (T[b8] + T[b9])
T[f'b3b9_{b8.split("_")[1]}'] = (T[b3] - T[b9])/ (T[b3] + T[b9])
T[f'b2b9_{b8.split("_")[1]}'] = (T[b2] - T[b9])/ (T[b2] + T[b9])
T[f'b12b9_{b8.split("_")[1]}'] = (T[b12] - T[b9])/ (T[b12] + T[b9])
T[f'b12b8_{b8.split("_")[1]}'] = (T[b12] - T[b8])/ (T[b12] + T[b8])
for col in Bcols :
T[col] = np.sqrt(T[col])
for b2 ,b3 ,b4 in zip(B2cols,B3cols,B4cols) :
T[f'RGB_STD_{b3.split("_")[1]}'] = T[[b2,b3,b4]].std(axis=1)
T[f'RGB_MEAN_{b3.split("_")[1]}'] = T[[b2,b3,b4]].mean(axis=1)
for col in Vcols :
T[col] = np.sqrt(T[col])
for col1,col2,col3,col4,col5,col6,col7,col8 in zip(Obs1,Obs2,Obs3,Obs4,Obs5,Obs6,Obs7,Obs8) :
T[f'{col1.split("_")[0]}_std'] = T[[col1,col2,col3,col4,col5,col6,col7,col8]].std(axis=1)
# process Vegetation indexes
ObsN = T.filter(like='NDVI_').columns.tolist()
ObsSA = T.filter(like='SAVI_').columns.tolist()
ObsCC = T.filter(like='CCCI_').columns.tolist()
ObsWDR = T.filter(like='WDRVI_').columns.tolist()
ObsNDRE7 = T.filter(like='NDRE7_').columns.tolist()
T['NDVI_max'] = T[ObsN].max(axis=1)
T['NDVI_min'] = T[ObsN].min(axis=1)
T['SAVI_max'] = T[ObsSA].max(axis=1)
T['SAVI_min'] = T[ObsSA].min(axis=1)
T['WDRVI_max'] = T[ObsWDR].max(axis=1)
T['WDRVI_min'] = T[ObsWDR].min(axis=1)
T['NDRE7_max'] = T[ObsNDRE7].max(axis=1)
T['NDRE7_min'] = T[ObsNDRE7].min(axis=1)
return T
Train = create_train()
Test = create_test()
Train2 = createObs2_train()
Test2 = createObs2_test()
Train3 = createObs3_train()
Test3 = createObs3_test()
Train4 = createObs4_train()
Test4 = createObs4_test()
Train5 = createObs5_train()
Test5 = createObs5_test()
Train.shape , Test.shape
Train2.shape , Test2.shape
Train3.shape , Test3.shape
Train4.shape , Test4.shape
Train5.shape , Test5.shape
Train = process(Train)
Test = process(Test)
Train2 = process(Train2)
Test2 = process(Test2)
Train3 = process(Train3)
Test3 = process(Test3)
Train4 = process(Train4)
Test4 = process(Test4)
Train5 = process(Train5)
Test5 = process(Test5)
Train.shape , Test.shape
Train2.shape , Test2.shape
Train3.shape , Test3.shape
Train4.shape , Test4.shape
Train5.shape , Test5.shape
Train = pd.concat([Train,Train2.drop(columns=['field_id','label']),Train3.drop(columns=['field_id','label']),
Train4.drop(columns=['field_id','label']),Train5.drop(columns=['field_id','label'])],axis=1)
Train.shape
Test = pd.concat([Test,Test2.drop(columns=['field_id']),Test3.drop(columns=['field_id'])],axis=1)
Test = pd.merge(Test,Test4,on='field_id',how='left')
Test = pd.merge(Test,Test5,on='field_id',how='left')
Test.shape
import gc ; gc.collect()
```
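Most of the indices above follow the normalized-difference form `(a - b) / (a + b)`, which is bounded in [-1, 1]. A quick numeric check of NDVI, computed from toy NIR (B8) and red (B4) reflectances:

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red), as computed from B8 and B4 above
nir = np.array([0.8, 0.5])
red = np.array([0.1, 0.5])
ndvi = (nir - red) / (nir + red)
```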
# MODELING
```
X = Train.replace(np.inf,50).drop(['field_id','label'], axis=1)
COLUMNS = X.columns.tolist()
y = Train.label
TEST = Test.replace(np.inf,50).drop(['field_id'], axis=1)
TEST.columns = X.columns.tolist()
data = pd.concat([X,TEST])
qt=QuantileTransformer(output_distribution="normal",random_state=42)
data= pd.DataFrame(qt.fit_transform(data),columns=X.columns)
X = data[:X.shape[0]].values
TEST = data[X.shape[0]:].values
X.shape , TEST.shape
##############################################################################################################################################################################
seed = 47  # must be defined before kfold_split is called below

def kfold_split(Train,y):
Train["folds"]=-1
kf = StratifiedKFold(n_splits= 10,random_state=seed,shuffle=True)
for fold, (_, val_index) in enumerate(kf.split(Train,y)):
Train.loc[val_index, "folds"] = fold
return Train
Train = kfold_split(Train,y)
```
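The `QuantileTransformer` step above maps each feature onto an approximately standard-normal distribution via its ranks. The numpy-plus-stdlib sketch below reproduces the idea on a skewed toy feature (an illustration of the mechanism, not sklearn's exact implementation):

```python
import numpy as np
from statistics import NormalDist

# rank-based mapping to a normal distribution, the idea behind
# QuantileTransformer(output_distribution="normal")
rng = np.random.RandomState(0)
skewed = rng.exponential(size=1000)          # heavily right-skewed feature
ranks = skewed.argsort().argsort()           # ranks 0 .. n-1
quantiles = (ranks + 0.5) / len(skewed)      # map ranks into (0, 1)
normalish = np.array([NormalDist().inv_cdf(q) for q in quantiles])
```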
### Cross Validation
```
seed = 47
sk = StratifiedKFold(n_splits= 10,random_state=seed,shuffle=True)
def DefineModel(name='lgbm') :
if name =='lgbm':
return lgb.LGBMClassifier(learning_rate = 0.1,n_estimators = 3000,
objective ='multiclass',random_state = 111,
num_leaves = 80,max_depth = 6,
metric = 'multi_logloss',
colsample_bytree = 0.5 ,
bagging_freq= 5, bagging_fraction= 0.75,
lambda_l2 = 100 ,
)
elif name =='catboost' :
cat_params = {"loss_function": "MultiClass","eval_metric": "MultiClass","learning_rate": 0.1,
"random_seed": 42,"l2_leaf_reg": 3,"bagging_temperature": 1,
"depth": 6,"od_type": "Iter","od_wait": 50,"thread_count": 16,"iterations": 50000,
"use_best_model": True,'task_type':"GPU",'devices':'0:1'}
return CatBoostClassifier(**cat_params
)
else :
return xgb.XGBClassifier(objective = 'multi:softmax',
base_score = np.mean(y),eval_metric ="mlogloss",
subsample= 0.8,n_estimators = 2000,
seed=seed,random_state = seed,num_class = 9,
)
def Run5fold(name,X,y,TEST,COLUMNS) :
print(f'TRAINING {name}')
cv_score_ = 0
oof_preds = np.zeros((Train.shape[0],9))
final_predictions = np.zeros((Test.shape[0],9))
    for fold in [7,8,9]:  # resuming run: only folds 7-9 of the 10-fold split are trained here
print()
print(f'######### FOLD {fold+1} / {sk.n_splits} ')
train_idx = Train[Train['folds'] !=fold].index.tolist()
test_idx = Train[Train['folds'] ==fold].index.tolist()
X_train,y_train = X[train_idx,:],y[train_idx]
X_test,y_test = X[test_idx,:] ,y[test_idx]
model = DefineModel(name=name)
model.fit(X_train,y_train,
eval_set = [(X_test,y_test)],
early_stopping_rounds = 100,
verbose = 100
)
oof_prediction = model.predict_proba(X_test)
np.save(f'LGBM_oof_{fold}',oof_prediction)
cv_score_ += log_loss(y_test,oof_prediction) / sk.n_splits
print(f'Log Loss Fold {fold} : {log_loss(y_test,oof_prediction) }')
oof_preds[test_idx] = oof_prediction
test_prediction = model.predict_proba(TEST)
np.save(f'LGBM_testPred_{fold}',test_prediction)
final_predictions += test_prediction / sk.n_splits
del X_train,y_train , X_test,y_test , model
gc.collect()
# return feats,oof_preds , final_predictions
import gc ; gc.collect()
Run5fold(name='lgbm',X=X,y=y,TEST=TEST,COLUMNS=COLUMNS) #Log Loss Fold 0 : 0.625915534181997
```
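The per-fold `log_loss` values reported above can be reproduced by hand. A minimal multiclass log-loss sketch (with probability clipping, as scikit-learn applies):

```
import numpy as np

def multiclass_log_loss(y_true, proba, eps=1e-15):
    """Mean negative log-probability assigned to the true class of each row."""
    proba = np.clip(np.asarray(proba, dtype=float), eps, 1 - eps)
    return float(np.mean(-np.log(proba[np.arange(len(y_true)), y_true])))

proba = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
loss = multiclass_log_loss([0, 1], proba)
assert abs(loss + (np.log(0.7) + np.log(0.8)) / 2) < 1e-12
```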
---
## save into drive
```
from google.colab import drive
drive.mount('/content/drive')
os.makedirs('/content/drive/MyDrive/RadiantEarth/LGBMS2',exist_ok=True)
!cp LGBM_oof_* '/content/drive/MyDrive/RadiantEarth/LGBMS2/'
!cp LGBM_testPred_* '/content/drive/MyDrive/RadiantEarth/LGBMS2/'
```
## oof
```
OOF = np.empty((0, 9),dtype=np.float16)
for i in range(10) :
oof_pred = np.load(f'/content/drive/MyDrive/RadiantEarth/LGBMS2/LGBM_oof_{i}.npy')
OOF = np.append(OOF, oof_pred, axis=0)
OOF.shape
Y = np.empty((0,),dtype=np.float16)
for i in range(10) :
Y_ = Train[Train['folds'].isin([i])]['label'].values
Y = np.append(Y, Y_, axis=0)
Y.shape
predictions_lgbm = []
for i in range(10) :
test_pred = np.load(f'/content/drive/MyDrive/RadiantEarth/LGBMS2/LGBM_testPred_{i}.npy')
predictions_lgbm.append(test_pred)
print('LGBM LOG LOSS :',log_loss(Y,OOF))
Field = np.empty((0,),dtype=np.float16)
for i in range(10) :
Field_ = Train[Train['folds'].isin([i])]['field_id'].values
Field = np.append(Field, Field_, axis=0)
Field.shape
DLGBM = pd.DataFrame(Field,columns=['field_id'])
cols = ['oof'+str(i) for i in range(9)]
for col in cols :
DLGBM[col] =0
DLGBM[cols] = OOF
oof_lgbm = pd.merge(Train[['field_id']],DLGBM,on='field_id',how='left')[cols].values
print('LGBM LOG LOSS :',log_loss(y,oof_lgbm))
# In this part we format the DataFrame to have column names and order similar to the sample submission file.
pred_df = pd.DataFrame(np.mean(predictions_lgbm,axis=0))
pred_df = pred_df.rename(columns={
0:'Crop_ID_1',
1:'Crop_ID_2',
2:'Crop_ID_3',
3:'Crop_ID_4',
4:'Crop_ID_5',
5:'Crop_ID_6',
6:'Crop_ID_7',
7:'Crop_ID_8',
8:'Crop_ID_9'
})
pred_df['field_id'] = Test['field_id'].astype('int').values
pred_df = pred_df[['field_id', 'Crop_ID_1', 'Crop_ID_2', 'Crop_ID_3', 'Crop_ID_4', 'Crop_ID_5', 'Crop_ID_6', 'Crop_ID_7', 'Crop_ID_8', 'Crop_ID_9']]
pred_df.head()
pred_df.shape
# Write the predicted probabilites to a csv for submission
pred_df.to_csv('S2_LightGBM.csv', index=False)
np.save('S2_oof_lgbm.npy',oof_lgbm)
```
```
from google.colab import drive
drive.mount('/content/drive')
import torch.nn as nn
import torch.nn.functional as F
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import torch
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
from matplotlib import pyplot as plt
import copy
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
foreground_classes = {'bird', 'cat', 'deer'}
fg_used = '234'
fg1, fg2, fg3 = 2,3,4
all_classes = {'plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'}
background_classes = all_classes - foreground_classes
background_classes
# print(type(foreground_classes))
train = trainset.data
label = trainset.targets
train.shape
train = np.reshape(train, (50000,3072))
train.shape
from numpy import linalg as LA
u, s, vh = LA.svd(train, full_matrices= False)
u.shape , s.shape, vh.shape
s
vh
# vh = vh.T
vh
dir = vh[1062:1072,:]
dir
u1 = dir[7,:]
u2 = dir[8,:]
u3 = dir[9,:]
u1
u2
u3
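# Side note (self-contained numpy sketch, independent of the pipeline above):
# the rows of vh returned by LA.svd are orthonormal right singular vectors, so
# each selected direction u1, u2, u3 is unit-norm and they are mutually
# orthogonal. A quick check on a small random matrix:
_demo = np.random.RandomState(0).randn(20, 10)
_, _, _vh = np.linalg.svd(_demo, full_matrices=False)
assert np.allclose(_vh @ _vh.T, np.eye(10))   # rows are orthonormal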
len(label)
cnt=0
for i in range(50000):
if(label[i] == fg1):
# print(train[i])
# print(LA.norm(train[i]))
# print(u1)
train[i] = train[i] + 0.1 * LA.norm(train[i]) * u1
# print(train[i])
cnt+=1
if(label[i] == fg2):
train[i] = train[i] + 0.1 * LA.norm(train[i]) * u2
cnt+=1
if(label[i] == fg3):
train[i] = train[i] + 0.1 * LA.norm(train[i]) * u3
cnt+=1
if(i%10000 == 9999):
print("partly over")
print(cnt)
train.shape, trainset.data.shape
train = np.reshape(train, (50000,32, 32, 3))
train.shape
trainset.data = train
test = testset.data
label = testset.targets
test.shape
test = np.reshape(test, (10000,3072))
test.shape
len(label)
cnt=0
for i in range(10000):
if(label[i] == fg1):
# print(train[i])
# print(LA.norm(train[i]))
# print(u1)
test[i] = test[i] + 0.1 * LA.norm(test[i]) * u1
# print(train[i])
cnt+=1
if(label[i] == fg2):
test[i] = test[i] + 0.1 * LA.norm(test[i]) * u2
cnt+=1
if(label[i] == fg3):
test[i] = test[i] + 0.1 * LA.norm(test[i]) * u3
cnt+=1
if(i%1000 == 999):
print("partly over")
print(cnt)
test.shape, testset.data.shape
test = np.reshape(test, (10000,32, 32, 3))
test.shape
testset.data = test
fg = [fg1,fg2,fg3]
bg = list(set([0,1,2,3,4,5,6,7,8,9])-set(fg))
fg,bg
trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle=False)
dataiter = iter(trainloader)
background_data=[]
background_label=[]
foreground_data=[]
foreground_label=[]
batch_size=10
for i in range(5000):
    images, labels = next(dataiter)  # the .next() method was removed from newer PyTorch iterators
for j in range(batch_size):
if(classes[labels[j]] in background_classes):
img = images[j].tolist()
background_data.append(img)
background_label.append(labels[j])
else:
img = images[j].tolist()
foreground_data.append(img)
foreground_label.append(labels[j])
foreground_data = torch.tensor(foreground_data)
foreground_label = torch.tensor(foreground_label)
background_data = torch.tensor(background_data)
background_label = torch.tensor(background_label)
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img#.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
img1 = torch.cat((background_data[0],background_data[1],background_data[2]),1)
imshow(img1)
img2 = torch.cat((foreground_data[27],foreground_data[3],foreground_data[43]),1)
imshow(img2)
img3 = torch.cat((img1,img2),2)
imshow(img3)
print(img2.size())
def create_mosaic_img(bg_idx,fg_idx,fg):
"""
bg_idx : list of indexes of background_data[] to be used as background images in mosaic
fg_idx : index of image to be used as foreground image from foreground data
fg : at what position/index foreground image has to be stored out of 0-8
"""
image_list=[]
j=0
for i in range(9):
if i != fg:
image_list.append(background_data[bg_idx[j]].type("torch.DoubleTensor"))
j+=1
else:
image_list.append(foreground_data[fg_idx].type("torch.DoubleTensor"))
    label = foreground_label[fg_idx] - fg1 # minus fg1 because our foreground classes are fg1,fg2,fg3 but we have to store them as 0,1,2
#image_list = np.concatenate(image_list ,axis=0)
image_list = torch.stack(image_list)
return image_list,label
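# Quick self-contained check of the slot logic above: for any fg in 0..8 the
# mosaic fills exactly 8 background slots and places the single foreground
# image at position fg.
for _fg in range(9):
    _slots = ['bg' if _i != _fg else 'fg' for _i in range(9)]
    assert _slots.count('bg') == 8 and _slots.index('fg') == _fg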
desired_num = 30000
mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as a list of 9 images
fore_idx =[] # list of positions (0 to 8) at which the foreground image is placed in each mosaic image
mosaic_label=[] # label of mosaic image = foreground class present in that mosaic
list_set_labels = []
for i in range(desired_num):
set_idx = set()
bg_idx = np.random.randint(0,35000,8)
set_idx = set(background_label[bg_idx].tolist())
fg_idx = np.random.randint(0,15000)
set_idx.add(foreground_label[fg_idx].item())
fg = np.random.randint(0,9)
fore_idx.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
mosaic_list_of_images.append(image_list)
mosaic_label.append(label)
list_set_labels.append(set_idx)
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label, fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx], self.fore_idx[idx]
batch = 250
msd = MosaicDataset(mosaic_list_of_images, mosaic_label , fore_idx)
train_loader = DataLoader( msd,batch_size= batch ,shuffle=True)
class Module1(nn.Module):
def __init__(self):
super(Module1, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
self.fc4 = nn.Linear(10,1)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = self.fc4(x)
return x
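# Sanity check (plain arithmetic, no torch needed): the 16 * 5 * 5 flatten size
# above follows from two (5x5 conv, stride 1) + (2x2 max-pool) stages on a
# 32x32 input.
def _conv_pool_out(size, k=5, pool=2):
    return (size - k + 1) // pool
_s = _conv_pool_out(_conv_pool_out(32))   # 32 -> 28 -> 14 -> 10 -> 5
assert _s == 5 and 16 * _s * _s == 400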
class Module2(nn.Module):
def __init__(self):
super(Module2, self).__init__()
self.module1 = Module1().double()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
self.fc4 = nn.Linear(10,3)
def forward(self,z): #z batch of list of 9 images
y = torch.zeros([batch,3, 32,32], dtype=torch.float64)
x = torch.zeros([batch,9],dtype=torch.float64)
x = x.to("cuda")
y = y.to("cuda")
for i in range(9):
x[:,i] = self.module1.forward(z[:,i])[:,0]
x = F.softmax(x,dim=1)
        # weighted average of the 9 tiles, each tile weighted by its softmax score
for i in range(9):
x1 = x[:,i]
y = y + torch.mul(x1[:,None,None,None],z[:,i])
y = y.contiguous()
y1 = self.pool(F.relu(self.conv1(y)))
y1 = self.pool(F.relu(self.conv2(y1)))
y1 = y1.contiguous()
y1 = y1.reshape(-1, 16 * 5 * 5)
y1 = F.relu(self.fc1(y1))
y1 = F.relu(self.fc2(y1))
y1 = F.relu(self.fc3(y1))
y1 = self.fc4(y1)
return y1 , x, y
fore_net = Module2().double()
fore_net = fore_net.to("cuda")
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(fore_net.parameters(), lr=0.01, momentum=0.9)
nos_epochs = 600
for epoch in range(nos_epochs): # loop over the dataset multiple times
running_loss = 0.0
cnt=0
mini_loss = []
iteration = desired_num // batch
#training data set
for i, data in enumerate(train_loader):
inputs , labels , fore_idx = data
inputs, labels, fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
# zero the parameter gradients
# optimizer_what.zero_grad()
# optimizer_where.zero_grad()
optimizer.zero_grad()
# avg_images , alphas = where_net(inputs)
# avg_images = avg_images.contiguous()
# outputs = what_net(avg_images)
outputs, alphas, avg_images = fore_net(inputs)
_, predicted = torch.max(outputs.data, 1)
# print(outputs)
# print(outputs.shape,labels.shape , torch.argmax(outputs, dim=1))
loss = criterion(outputs, labels)
loss.backward()
# optimizer_what.step()
# optimizer_where.step()
optimizer.step()
running_loss += loss.item()
mini = 40
if cnt % mini == mini - 1: # print every 40 mini-batches
print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / mini))
mini_loss.append(running_loss / mini)
running_loss = 0.0
cnt=cnt+1
if(np.average(mini_loss) <= 0.05):
break
print('Finished Training')
torch.save(fore_net.state_dict(),"/content/drive/My Drive/Research/mosaic_from_CIFAR_involving_bottop_eigen_vectors/fore_net_epoch"+str(epoch)+"_fg_used"+str(fg_used)+".pt")
```
# Train summary on Train mosaic made from Trainset of 50k CIFAR
```
fg = [fg1,fg2,fg3]
bg = list(set([0,1,2,3,4,5,6,7,8,9])-set(fg))
from tabulate import tabulate
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
outputs, alphas, avg_images = fore_net(inputs)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
count += 1
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 30000 train images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half",argmax_more_than_half)
print("argmax_less_than_half",argmax_less_than_half)
print(count)
print("="*100)
table3 = []
entry = [1,'fg = '+ str(fg),'bg = '+str(bg),30000]
entry.append((100 * focus_true_pred_true / total))
entry.append( (100 * focus_false_pred_true / total))
entry.append( ( 100 * focus_true_pred_false / total))
entry.append( ( 100 * focus_false_pred_false / total))
entry.append( argmax_more_than_half)
train_entry = entry
table3.append(entry)
print(tabulate(table3, headers=['S.No.', 'fg_class','bg_class','data_points','FTPT', 'FFPT', 'FTPF', 'FFPF', 'avg_img > 0.5'] ) )
test_images =[] #list of mosaic images, each mosaic image is saved as a list of 9 images
fore_idx_test =[] #list of indexes at which foreground image is present in a mosaic image
test_label=[] # label of mosaic image = foreground class present in that mosaic
test_set_labels = []
for i in range(10000):
set_idx = set()
bg_idx = np.random.randint(0,35000,8)
set_idx = set(background_label[bg_idx].tolist())
fg_idx = np.random.randint(0,15000)
set_idx.add(foreground_label[fg_idx].item())
fg = np.random.randint(0,9)
fore_idx_test.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
test_images.append(image_list)
test_label.append(label)
test_set_labels.append(set_idx)
test_data = MosaicDataset(test_images,test_label,fore_idx_test)
test_loader = DataLoader( test_data,batch_size= batch ,shuffle=False)
```
# Test summary on Test mosaic made from Trainset of 50k CIFAR
```
fg = [fg1,fg2,fg3]
bg = list(set([0,1,2,3,4,5,6,7,8,9])-set(fg))
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
outputs, alphas, avg_images = fore_net(inputs)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total test set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half",argmax_more_than_half)
print("argmax_less_than_half",argmax_less_than_half)
print("="*100)
# table4 = []
entry = [2,'fg = '+ str(fg),'bg = '+str(bg),10000]
entry.append((100 * focus_true_pred_true / total))
entry.append( (100 * focus_false_pred_true / total))
entry.append( ( 100 * focus_true_pred_false / total))
entry.append( ( 100 * focus_false_pred_false / total))
entry.append( argmax_more_than_half)
test_entry = entry
table3.append(entry)
print(tabulate(table3, headers=['S.No.', 'fg_class','bg_class','data_points','FTPT', 'FFPT', 'FTPF', 'FFPF', 'avg_img > 0.5'] ) )
dataiter = iter(testloader)
background_data=[]
background_label=[]
foreground_data=[]
foreground_label=[]
batch_size=10
for i in range(1000):
    images, labels = next(dataiter)  # the .next() method was removed from newer PyTorch iterators
for j in range(batch_size):
if(classes[labels[j]] in background_classes):
img = images[j].tolist()
background_data.append(img)
background_label.append(labels[j])
else:
img = images[j].tolist()
foreground_data.append(img)
foreground_label.append(labels[j])
foreground_data = torch.tensor(foreground_data)
foreground_label = torch.tensor(foreground_label)
background_data = torch.tensor(background_data)
background_label = torch.tensor(background_label)
test_images =[] #list of mosaic images, each mosaic image is saved as a list of 9 images
fore_idx_test =[] #list of indexes at which foreground image is present in a mosaic image
test_label=[] # label of mosaic image = foreground class present in that mosaic
test_set_labels = []
for i in range(10000):
set_idx = set()
bg_idx = np.random.randint(0,7000,8)
set_idx = set(background_label[bg_idx].tolist())
fg_idx = np.random.randint(0,3000)
set_idx.add(foreground_label[fg_idx].item())
fg = np.random.randint(0,9)
fore_idx_test.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
test_images.append(image_list)
test_label.append(label)
test_set_labels.append(set_idx)
test_data = MosaicDataset(test_images,test_label,fore_idx_test)
unseen_test_loader = DataLoader( test_data,batch_size= batch ,shuffle=False)
```
# Test summary on Test mosaic made from Testset of 10k CIFAR
```
fg = [fg1,fg2,fg3]
bg = list(set([0,1,2,3,4,5,6,7,8,9])-set(fg))
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in unseen_test_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
outputs, alphas, avg_images = fore_net(inputs)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total test set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half",argmax_more_than_half)
print("argmax_less_than_half",argmax_less_than_half)
print("="*100)
# table4 = []
entry = [3,'fg = '+ str(fg),'bg = '+str(bg),10000]
entry.append((100 * focus_true_pred_true / total))
entry.append( (100 * focus_false_pred_true / total))
entry.append( ( 100 * focus_true_pred_false / total))
entry.append( ( 100 * focus_false_pred_false / total))
entry.append( argmax_more_than_half)
test_entry = entry
table3.append(entry)
print(tabulate(table3, headers=['S.No.', 'fg_class','bg_class','data_points','FTPT', 'FFPT', 'FTPF', 'FFPF', 'avg_img > 0.5'] ) )
```
# Natural Language Processing - Problems
**Author:** Ties de Kok ([Personal Website](https://www.tiesdekok.com)) <br>
**Last updated:** June 2021 <br>
**Python version:** Python 3.6+ <br>
**License:** MIT License <br>
**Recommended environment: `researchPython`**
```
import os
recommendedEnvironment = 'researchPython'
if os.environ['CONDA_DEFAULT_ENV'] != recommendedEnvironment:
print('Warning: it does not appear you are using the {0} environment, did you run "conda activate {0}" before starting Jupyter?'.format(recommendedEnvironment))
```
<div style='border-style: solid; padding: 10px; border-color: black; border-width:5px; text-align: left; margin-top:20px; margin-bottom: 20px;'>
<span style='color:black; font-size: 30px; font-weight:bold;'>Introduction</span>
</div>
<div style='border-style: solid; padding: 5px; border-color: darkred; border-width:5px; text-align: center; margin-left: 100px; margin-right:100px;'>
<span style='color:black; font-size: 20px; font-weight:bold;'> Make sure to open up the respective tutorial notebook(s)! <br> That is what you are expected to use as primary reference material. </span>
</div>
### Relevant tutorial notebooks:
1) [`0_python_basics.ipynb`](https://nbviewer.jupyter.org/github/TiesdeKok/LearnPythonforResearch/blob/master/0_python_basics.ipynb)
2) [`2_handling_data.ipynb`](https://nbviewer.jupyter.org/github/TiesdeKok/LearnPythonforResearch/blob/master/2_handling_data.ipynb)
3) [`NLP_Notebook.ipynb`](https://nbviewer.jupyter.org/github/TiesdeKok/Python_NLP_Tutorial/blob/master/NLP_Notebook.ipynb)
## Import required packages
```
import os, re
import pandas as pd
import numpy as np
import en_core_web_lg
nlp = en_core_web_lg.load()
```
<div style='border-style: solid; padding: 10px; border-color: black; border-width:5px; text-align: center; margin-top:20px; margin-bottom: 20px;'>
<span style='color:black; font-size: 30px; font-weight:bold;'>Part 1 </span>
</div>
<div style='border-style: solid; padding: 5px; border-color: darkred; border-width:5px; text-align: center; margin-left: 100px; margin-right:100px;'>
<span style='color:black; font-size: 15px; font-weight:bold;'> Note: feel free to add as many cells as you'd like to answer these problems, you don't have to fit it all in one cell. </span>
</div>
## 1) Perform basic operations on a sample earnings transcript text file
### 1a) Load the following text file: `data > example_transcript.txt` into Python
### 1b) Print the first 400 characters of the text file you just loaded
### 1c) Count the number of times the words `Alex` and `Angie` are mentioned
### 1c) Use the provided Regular Expression to capture all numbers prior to a "%"
Use this regular expression: `\W([\.\d]{,})%`
**You can play around with this regular expression here: <a href='https://bit.ly/3heIqoG'>Test on Pythex.org</a>**
**Hint:** use the `re.findall()` function
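As a toy illustration (the sample string below is made up, not taken from the transcript), the pattern captures the digits and dots that directly precede a `%`:

```
import re

pattern = r'\W([\.\d]{,})%'
text = "Revenue grew 5.3% while margin fell 10%"
assert re.findall(pattern, text) == ['5.3', '10']
```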
### Extra: try to explain to a neighbour / group member what the regular expression is doing
You can use the cheatsheet on Pythex.org for reference.
### 1d) Load the text into a Spacy object and split it into a list of sentences
Make sure to evaluate how well it worked by inspecting various elements of the sentence list.
#### What is the 150th sentence?
### Why is there a difference between showing a string and printing a string? See the illustration below:
```
demo_sentence = "This is a test sentence, the keyword:\x20\nSeattle"
demo_sentence
print(demo_sentence)
```
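Evaluating a bare string in a notebook displays its `repr()`, which keeps escape sequences such as `\n` visible, while `print()` renders them. A quick check:

```
s = "line one\nline two"
assert repr(s) == "'line one\\nline two'"  # the repr keeps \n visible
assert len(s.splitlines()) == 2            # print(s) would render two lines
```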
### 1e) Parse out the following 3 blocks of text:
* The meta data at the top
* The presentation portion
* The Q&A portion
**Note 1:** I recommend to do this based on the full text (i.e., the raw string as you loaded it) not the list of sentences.
**Note 2:** Don't use the location (e.g., `[:123]`), that wouldn't work if you had more than 1 transcript.
### 1f) How many characters, sentences, and words (tokens) do the presentation portion and the Q&A portion have?
<div style='border-style: solid; padding: 5px; border-color: darkred; border-width:5px; text-align: left; margin-left: 0px; margin-right:100px;'>
<span style='color:black; font-size: 20px; font-weight:bold;'> Note: problems 1g and 1h are quite challenging, it might make sense to skip them until the end and move on to questions 2 and 3 first.</span>
</div>
### 1g) Create a list of all the questions during the Q&A and include the person that asked the question
You should end up with 20 questions.
### 1h) Modify the Q&A list by adding in the answer + answering person
This is what the first entry should (rougly) look like:
```python
qa_list[0] =
{
'q_person': 'Christopher McGratty ',
'question': 'Great, thanks, good afternoon. Kevin maybe you could start -- or Alex on the margin, obviously the environment has got a little bit tougher for the banks. But you have this -- the ability to bring down deposit costs, which you talked about in your prepared remarks. I appreciate in the guidance for the first quarter, but if the rate outlook remains steady, how do we think about ultimate stability in the flow and the margin, where and kind of when?',
'answers': [{
'name': 'Alex Ko ',
'answer': 'Sure, sure. As I indicated, we would expect to have continued compression next quarter given the rate cuts that we have experienced especially October rate cut, it will continue next quarter. But as we indicated, our proactive deposit initiative as well as very disciplined pricing on the deposit, even though we have a very competitive -- competition on the loan rate is very still severe. We would expect to stabilize in the second quarter of 2020 in terms of net interest margin and then second half of the year, we would expect to start to increase.'
}]
}
```
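One way to build such a structure (a pure-Python sketch on made-up toy turns, not the actual transcript parser) is to walk the speaker turns in order, opening a new entry at each analyst question and attaching executive turns to the most recent one:

```
turns = [
    ("Chris", "analyst", "How is the margin?"),
    ("Alex", "executive", "We expect compression next quarter."),
    ("Kevin", "executive", "And stabilization after that."),
    ("Dana", "analyst", "What about deposits?"),
    ("Alex", "executive", "Costs are coming down."),
]

qa_list = []
for name, role, text in turns:
    if role == "analyst":                 # a new question opens a new entry
        qa_list.append({"q_person": name, "question": text, "answers": []})
    elif qa_list:                         # answers attach to the last question
        qa_list[-1]["answers"].append({"name": name, "answer": text})

assert len(qa_list) == 2
assert len(qa_list[0]["answers"]) == 2
assert qa_list[1]["q_person"] == "Dana"
```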
<div style='border-style: solid; padding: 10px; border-color: black; border-width:5px; text-align: left; margin-top:20px; margin-bottom: 20px;'>
<span style='color:black; font-size: 30px; font-weight:bold;'>Part 2:</span>
</div>
## 2) Extract state name counts from MD&As
Follow Garcia and Norli (2012) and extract state name counts from MD&As.
#### References
Garcia, D., & Norli, Ø. (2012). Geographic dispersion and stock returns. Journal of Financial Economics, 106(3), 547-565.
#### Data to use
I have included a random selection of 20 pre-processed MDA filings in the `data > MDA_files` folder. The filename is the unique identifier.
You will also find a file called `MDA_META_DF.xlsx` in the "data" folder, this contains the following meta-data for each MD&A:
* filing date
* cik
* company name
* link to filing
### 2a) Load data into a dictionary with as key the filename and as value the content of the text file
The files should all be in the following folder:
```
os.path.join('data', 'MDA_files')
```
### 2b) Load state name data into a DataFrame
**Note:** state names are provided in the `state_names.xlsx` file in the "data" folder.
### 2c) Count the number of times that each U.S. state name is mentioned in each MD&A
**Hint:** save the counts to a list where each entry is a list that contains the following three items: [*filename*, *state_name*, *count*], like this:
> [
['21344_0000021344-16-000050.txt', 'Alabama', 0],
['21344_0000021344-16-000050.txt', 'Alaska', 0],
['21344_0000021344-16-000050.txt', 'Arizona', 0],
> ....
>['49071_0000049071-16-000117.txt', 'West Virginia', 0],
['49071_0000049071-16-000117.txt', 'Wisconsin', 0],
['49071_0000049071-16-000117.txt', 'Wyoming', 0]
]
You can verify that it worked by checking whether the 80th element (i.e. `list[79]`) equals:
> ['21510_0000021510-16-000074.txt', 'New Jersey', 2]
(I looped over the companies first, and then over the states)
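A minimal counting sketch, under the assumption that a plain substring count is sufficient here (it handles multi-word state names like "New Jersey"; the data below is made up for illustration):

```
mda_texts = {'a.txt': 'We operate in Texas and New Jersey. New Jersey is key.'}
state_names = ['Texas', 'New Jersey', 'Alaska']

# one [filename, state, count] row per (document, state) pair
rows = [[fname, state, text.count(state)]
        for fname, text in mda_texts.items()
        for state in state_names]

assert rows[1] == ['a.txt', 'New Jersey', 2]
assert rows[2] == ['a.txt', 'Alaska', 0]
```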
### 2d) Convert the list you created in `2c` into a Pandas DataFrame and save it as an Excel sheet
**Hint:** Use the `columns=[...]` parameter to name the columns
## 3) Create sentiment score based on Loughran and McDonald (2011)
Create a sentiment score for MD&As based on the Loughran and McDonald (2011) word lists.
#### References
Loughran, T., & McDonald, B. (2011). When is a liability not a liability? Textual analysis, dictionaries, and 10‐Ks. The Journal of Finance, 66(1), 35-65.
#### Data to use
I have included a random selection of 20 pre-processed MDA filings in the `data > MDA_files` folder. The filename is the unique identifier.
You will also find a file called `MDA_META_DF.xlsx` in the "data" folder, this contains the following meta-data for each MD&A:
* filing date
* cik
* company name
* link to filing
### 3a) Load the Loughran and McDonald master dictionary
**Note:** The Loughran and McDonald dictionary is included in the "data" folder: `LoughranMcDonald_MasterDictionary_2014.xlsx `
### 3b) Create two lists: one containing all the negative words and the other one containing all the positive words
**Tip:** I recommend to change all words to lowercase in this step so that you don't need to worry about that later
### 3c) For each MD&A calculate the *total* number of times negative and positive words are mentioned
**Note:** make sure you also convert the text to lowercase!
**Hint:** save the counts to a list where each entry is a list that contains the following three items: [*filename*, *total pos count*, *total neg count*], like this:
> [
['21344_0000021344-16-000050.txt', 1166, 2318],
['21510_0000021510-16-000074.txt', 606, 1078],
['21665_0001628280-16-011343.txt', 516, 1058],
> ....
['47217_0000047217-16-000093.txt', 544, 928],
['47518_0001214659-16-014806.txt', 482, 974],
['49071_0000049071-16-000117.txt', 954, 1636]
]
You can verify that it worked by checking whether the 16th element (i.e. `list[15]`) equals:
> ['43920_0000043920-16-000025.txt', 558, 1568]
### 3d) Convert the list created in 3c into a Pandas DataFrame
**Hint:** Use the `columns=[...]` parameter to name the columns
### 3e) Create a new column with a "sentiment score" for each MD&A
Use the following imaginary sentiment score:
$$\frac{\text{Num Positive Words} - \text{Num Negative Words}}{\text{Num Positive Words} + \text{Num Negative Words}}$$
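Steps 3d and 3e together, sketched on two rows of the example output above (the column names here are illustrative, not prescribed by the assignment):

```python
import pandas as pd

results = [["21344_0000021344-16-000050.txt", 1166, 2318],
           ["21510_0000021510-16-000074.txt", 606, 1078]]
sentiment_df = pd.DataFrame(results, columns=["filename", "pos_count", "neg_count"])

# The imaginary score: (pos - neg) / (pos + neg)
sentiment_df["sentiment"] = ((sentiment_df["pos_count"] - sentiment_df["neg_count"])
                             / (sentiment_df["pos_count"] + sentiment_df["neg_count"]))
```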
### 3f) Use the `MDA_META_DF` file to add the company name, filing date, and CIK to the sentiment dataframe
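A sketch of the merge, with toy frames standing in for the real files; the shared `filename` key and the meta column names are assumptions to check against `MDA_META_DF.xlsx`:

```python
import pandas as pd

sentiment_df = pd.DataFrame({"filename": ["a.txt"], "sentiment": [-0.33]})
meta_df = pd.DataFrame({"filename": ["a.txt"], "company name": ["Acme"],
                        "filing date": ["2016-02-01"], "cik": [21344]})

# In the assignment: meta_df = pd.read_excel("data/MDA_META_DF.xlsx")
merged = sentiment_df.merge(meta_df, on="filename", how="left")
```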
<div style='border-style: solid; padding: 10px; border-color: black; border-width:5px; text-align: left; margin-top:20px; margin-bottom: 20px;'>
<span style='color:black; font-size: 30px; font-weight:bold;'>Part 3: "Ties, I am bored, please give me a challenge"</span>
</div>
**Note:** You don't have to complete part 3 if you are handing in the problems for credit.
------
## 1) Visualize the entities in the following sentences:
```python
example_string = "John Smith is a Professor at the University of Washington. Which is located in Seattle."
```
What you should get: a rendering with the detected entities highlighted (the original screenshot did not survive extraction).
## 2) For each sentence in `data > example_transcript.txt`, find the sentence that is closest in semantic similarity
Use the built-in word vectors that come with Spacy. Limit your sample to sentences with more than 100 characters.
This is what you should get: a list pairing each sentence with its most similar counterpart (the original screenshot did not survive extraction).
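spaCy's `similarity` is cosine similarity over averaged word vectors; the pairing logic itself can be sketched (and checked) with plain NumPy. With spaCy you would feed it `[s.vector for s in doc.sents if len(s.text) > 100]`:

```python
import numpy as np

def nearest_by_cosine(vecs):
    """For each row vector, return the index of the most cosine-similar other row."""
    V = np.asarray(vecs, dtype=float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)  # unit-normalize rows
    sims = V @ V.T                                    # pairwise cosine similarities
    np.fill_diagonal(sims, -np.inf)                   # exclude self-matches
    return sims.argmax(axis=1)
```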
| github_jupyter |
# Nearest Neighbors
When exploring a large set of documents -- such as Wikipedia, news articles, StackOverflow, etc. -- it can be useful to get a list of related material. To find relevant documents you typically
* Decide on a notion of similarity
* Find the documents that are most similar
In the assignment you will
* Gain intuition for different notions of similarity and practice finding similar documents.
* Explore the tradeoffs with representing documents using raw word counts and TF-IDF
* Explore the behavior of different distance metrics by looking at the Wikipedia pages most similar to President Obama’s page.
**Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook.
## Import necessary packages
As usual we need to first import the Python packages that we will need.
```
import graphlab
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
```
## Load Wikipedia dataset
We will be using the same dataset of Wikipedia pages that we used in the Machine Learning Foundations course (Course 1). Each element of the dataset consists of a link to the wikipedia article, the name of the person, and the text of the article (in lowercase).
```
wiki = graphlab.SFrame('people_wiki.gl')
wiki
```
## Extract word count vectors
As we have seen in Course 1, we can extract word count vectors using a GraphLab utility function. We add this as a column in `wiki`.
```
wiki['word_count'] = graphlab.text_analytics.count_words(wiki['text'])
wiki
```
## Find nearest neighbors
Let's start by finding the nearest neighbors of the Barack Obama page, using the word count vectors to represent the articles and Euclidean distance to measure distance. For this, we will again use a GraphLab Create implementation of nearest neighbor search.
```
model = graphlab.nearest_neighbors.create(wiki, label='name', features=['word_count'],
method='brute_force', distance='euclidean')
```
Let's look at the top 10 nearest neighbors by performing the following query:
```
model.query(wiki[wiki['name']=='Barack Obama'], label='name', k=10)
```
All of the 10 people are politicians, but about half of them have rather tenuous connections with Obama, other than the fact that they are politicians.
* Francisco Barrio is a Mexican politician, and a former governor of Chihuahua.
* Walter Mondale and Don Bonker are Democrats who made their careers in the late 1970s.
* Wynn Normington Hugh-Jones is a former British diplomat and Liberal Party official.
* Andy Anstett is a former politician in Manitoba, Canada.
Nearest neighbors with raw word counts got some things right, showing all politicians in the query result, but missed finer and important details.
For instance, let's find out why Francisco Barrio was considered a close neighbor of Obama. To do this, let's look at the most frequently used words in each of Barack Obama and Francisco Barrio's pages:
```
def top_words(name):
    """
    Get a table of the most frequent words in the given person's wikipedia page.
    """
    row = wiki[wiki['name'] == name]
    word_count_table = row[['word_count']].stack('word_count', new_column_name=['word','count'])
    return word_count_table.sort('count', ascending=False)
obama_words = top_words('Barack Obama')
obama_words
barrio_words = top_words('Francisco Barrio')
barrio_words
```
Let's extract the list of most frequent words that appear in both Obama's and Barrio's documents. We've so far sorted all words from Obama and Barrio's articles by their word frequencies. We will now use a dataframe operation known as **join**. The **join** operation is very useful when it comes to playing around with data: it lets you combine the content of two tables using a shared column (in this case, the word column). See [the documentation](https://dato.com/products/create/docs/generated/graphlab.SFrame.join.html) for more details.
For instance, running
```
obama_words.join(barrio_words, on='word')
```
will extract the rows from both tables that correspond to the common words.
```
combined_words = obama_words.join(barrio_words, on='word')
combined_words
```
Since both tables contained the column named `count`, SFrame automatically renamed one of them to prevent confusion. Let's rename the columns to tell which one is for which. By inspection, we see that the first column (`count`) is for Obama and the second (`count.1`) for Barrio.
```
combined_words = combined_words.rename({'count':'Obama', 'count.1':'Barrio'})
combined_words
```
**Note**. The **join** operation does not enforce any particular ordering on the shared column. So to obtain, say, the five common words that appear most often in Obama's article, sort the combined table by the Obama column. Don't forget `ascending=False` to display largest counts first.
```
combined_words.sort('Obama', ascending=False)
```
**Quiz Question**. Among the words that appear in both Barack Obama and Francisco Barrio, take the 5 that appear most frequently in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
Hint:
* Refer to the previous paragraph for finding the words that appear in both articles. Sort the common words by their frequencies in Obama's article and take the largest five.
* Each word count vector is a Python dictionary. For each word count vector in SFrame, you'd have to check if the set of the 5 common words is a subset of the keys of the word count vector. Complete the function `has_top_words` to accomplish the task.
- Convert the list of top 5 words into set using the syntax
```
set(common_words)
```
where `common_words` is a Python list. See [this link](https://docs.python.org/2/library/stdtypes.html#set) if you're curious about Python sets.
- Extract the list of keys of the word count dictionary by calling the [`keys()` method](https://docs.python.org/2/library/stdtypes.html#dict.keys).
- Convert the list of keys into a set as well.
- Use [`issubset()` method](https://docs.python.org/2/library/stdtypes.html#set) to check if all 5 words are among the keys.
* Now apply the `has_top_words` function on every row of the SFrame.
* Compute the sum of the result column to obtain the number of articles containing all the 5 top words.
```
common_words = set(combined_words.sort('Obama', ascending=False)['word'][:5])
def has_top_words(word_count_vector):
    # extract the keys of word_count_vector and convert it to a set
    unique_words = set(dict(word_count_vector).keys())
    # return True if common_words is a subset of unique_words
    # return False otherwise
    return common_words.issubset(unique_words)
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
wiki['has_top_words'].sum()
```
**Checkpoint**. Check your `has_top_words` function on two random articles:
```
print 'Output from your function:', has_top_words(wiki[32]['word_count'])
print 'Correct output: True'
print 'Also check the length of unique_words. It should be 167'
print 'Output from your function:', has_top_words(wiki[33]['word_count'])
print 'Correct output: False'
print 'Also check the length of unique_words. It should be 188'
```
**Quiz Question**. Measure the pairwise distance between the Wikipedia pages of Barack Obama, George W. Bush, and Joe Biden. Which of the three pairs has the smallest distance?
Hint: To compute the Euclidean distance between two dictionaries, use `graphlab.toolkits.distances.euclidean`. Refer to [this link](https://dato.com/products/create/docs/generated/graphlab.toolkits.distances.euclidean.html) for usage.
```
def get_wc_dict(name):
    return wiki[wiki['name'] == name]['word_count'][0]
graphlab.toolkits.distances.euclidean(get_wc_dict('Barack Obama'), get_wc_dict('George W. Bush'))
graphlab.toolkits.distances.euclidean(get_wc_dict('Barack Obama'), get_wc_dict('Joe Biden'))
graphlab.toolkits.distances.euclidean(get_wc_dict('George W. Bush'), get_wc_dict('Joe Biden'))
```
**Quiz Question**. Collect all words that appear both in Barack Obama and George W. Bush pages. Out of those words, find the 10 words that show up most often in Obama's page.
```
bush_words = top_words('George W. Bush')
obama_words.join(bush_words, on='word').rename({'count':'Obama', 'count.1':'Bush'}).sort('Obama', ascending=False)
```
**Note.** Even though common words are swamping out important subtle differences, commonalities in rarer political words still matter on the margin. This is why politicians are being listed in the query result instead of musicians, for example. In the next subsection, we will introduce a different metric that will place greater emphasis on those rarer words.
## TF-IDF to the rescue
Much of the perceived commonalities between Obama and Barrio were due to occurrences of extremely frequent words, such as "the", "and", and "his". So nearest neighbors is recommending plausible results sometimes for the wrong reasons.
To retrieve articles that are more relevant, we should focus more on rare words that don't happen in every article. **TF-IDF** (term frequency–inverse document frequency) is a feature representation that penalizes words that are too common. Let's use GraphLab Create's implementation of TF-IDF and repeat the search for the 10 nearest neighbors of Barack Obama:
```
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['word_count'])
model_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='euclidean')
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
```
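GraphLab Create is no longer maintained; for readers following along today, a roughly equivalent pipeline with scikit-learn (an assumption on my part, not part of the original course material) looks like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Tiny corpus standing in for the wiki['text'] column.
docs = ["obama was president of the united states",
        "bush was president of the united states",
        "a page about a musician and his albums"]

X = TfidfVectorizer().fit_transform(docs)            # replaces graphlab tf_idf
nn = NearestNeighbors(n_neighbors=2, metric="euclidean").fit(X)
dist, idx = nn.kneighbors(X[0])                      # query the first document
```

The nearest neighbor of a document is always itself (distance 0); the second column of `idx` gives the closest other document.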
Let's determine whether this list makes sense.
* With the notable exception of Roland Grossenbacher, the other 8 are all American politicians who are contemporaries of Barack Obama.
* Phil Schiliro, Jesse Lee, Samantha Power, and Eric Stern worked for Obama.
Clearly, the results are more plausible with the use of TF-IDF. Let's take a look at the word vectors for Obama's and Schiliro's pages. Notice that the TF-IDF representation assigns a weight to each word. This weight captures the relative importance of that word in the document. Let us sort the words in Obama's article by their TF-IDF weights; we do the same for Schiliro's article as well.
```
def top_words_tf_idf(name):
    row = wiki[wiki['name'] == name]
    word_count_table = row[['tf_idf']].stack('tf_idf', new_column_name=['word','weight'])
    return word_count_table.sort('weight', ascending=False)
obama_tf_idf = top_words_tf_idf('Barack Obama')
obama_tf_idf
schiliro_tf_idf = top_words_tf_idf('Phil Schiliro')
schiliro_tf_idf
```
Using the **join** operation we learned earlier, try your hands at computing the common words shared by Obama's and Schiliro's articles. Sort the common words by their TF-IDF weights in Obama's document.
```
obama_tf_idf.join(schiliro_tf_idf, 'word')
```
The first 10 words should say: Obama, law, democratic, Senate, presidential, president, policy, states, office, 2011.
**Quiz Question**. Among the words that appear in both Barack Obama and Phil Schiliro, take the 5 that have largest weights in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
```
common_words = set(obama_tf_idf.join(schiliro_tf_idf, 'word').sort('weight', ascending=False)['word'][:5])
def has_top_words(word_count_vector):
    # extract the keys of word_count_vector and convert it to a set
    unique_words = set(dict(word_count_vector).keys())
    # return True if common_words is a subset of unique_words
    # return False otherwise
    return common_words.issubset(unique_words)
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
wiki['has_top_words'].sum()
```
Notice the huge difference in this calculation using TF-IDF scores instead of raw word counts. We've eliminated noise arising from extremely common words.
## Choosing metrics
You may wonder why Joe Biden, Obama's running mate in two presidential elections, is missing from the query results of `model_tf_idf`. Let's find out why. First, compute the distance between TF-IDF features of Obama and Biden.
**Quiz Question**. Compute the Euclidean distance between TF-IDF features of Obama and Biden. Hint: When using Boolean filter in SFrame/SArray, take the index 0 to access the first match.
```
def get_tfidf_dict(name):
    return wiki[wiki['name'] == name]['tf_idf'][0]
graphlab.toolkits.distances.euclidean(get_tfidf_dict('Barack Obama'), get_tfidf_dict('Joe Biden'))
```
The distance is larger than the distances we found for the 10 nearest neighbors, which we repeat here for readability:
```
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
```
But one may wonder, is Biden's article that different from Obama's, more so than, say, Schiliro's? It turns out that, when we compute nearest neighbors using the Euclidean distances, we unwittingly favor short articles over long ones. Let us compute the length of each Wikipedia document, and examine the document lengths for the 100 nearest neighbors to Obama's page.
```
def compute_length(row):
    return len(row['text'])
wiki['length'] = wiki.apply(compute_length)
nearest_neighbors_euclidean = model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_euclidean = nearest_neighbors_euclidean.join(wiki[['name', 'length']], on={'reference_label':'name'})
nearest_neighbors_euclidean.sort('rank')
```
To see how these document lengths compare to the lengths of other documents in the corpus, let's make a histogram of the document lengths of Obama's 100 nearest neighbors and compare to a histogram of document lengths for all documents.
```
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([1000, 5500, 0, 0.004])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
```
Relative to the rest of Wikipedia, nearest neighbors of Obama are overwhelmingly short, most of them being shorter than 2000 words. The bias towards short articles is not appropriate in this application as there is really no reason to favor short articles over long articles (they are all Wikipedia articles, after all). Many Wikipedia articles are 2500 words or more, and both Obama and Biden are over 2500 words long.
**Note:** Both word-count features and TF-IDF are proportional to word frequencies. While TF-IDF penalizes very common words, longer articles tend to have longer TF-IDF vectors simply because they have more words in them.
To remove this bias, we turn to **cosine distances**:
$$
d(\mathbf{x},\mathbf{y}) = 1 - \frac{\mathbf{x}^T\mathbf{y}}{\|\mathbf{x}\| \|\mathbf{y}\|}
$$
Cosine distances let us compare word distributions of two articles of varying lengths.
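Concretely, with the vectors as NumPy arrays the distance looks like this; note that scaling either argument leaves it unchanged, which is exactly why document length stops mattering:

```python
import numpy as np

def cosine_distance(x, y):
    """1 - cos(angle between x and y); invariant to vector (document) length."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return 1.0 - (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

# Doubling a vector (an article "twice as long" with the same word
# distribution) does not change the distance:
same = cosine_distance([1.0, 2.0], [2.0, 4.0])        # ~0.0
orthogonal = cosine_distance([1.0, 0.0], [0.0, 1.0])  # 1.0
```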
Let us train a new nearest neighbor model, this time with cosine distances. We then repeat the search for Obama's 100 nearest neighbors.
```
model2_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='cosine')
nearest_neighbors_cosine = model2_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_cosine = nearest_neighbors_cosine.join(wiki[['name', 'length']], on={'reference_label':'name'})
nearest_neighbors_cosine.sort('rank')
```
From a glance at the above table, things look better. For example, we now see Joe Biden as Barack Obama's nearest neighbor! We also see Hillary Clinton on the list. This list looks even more plausible as nearest neighbors of Barack Obama.
Let's make a plot to better visualize the effect of having used cosine distance in place of Euclidean on our TF-IDF vectors.
```
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.hist(nearest_neighbors_cosine['length'], 50, color='b', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (cosine)', zorder=11, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([1000, 5500, 0, 0.004])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
```
Indeed, the 100 nearest neighbors using cosine distance provide a sampling across the range of document lengths, rather than just short articles like Euclidean distance provided.
**Moral of the story**: In deciding the features and distance measures, check if they produce results that make sense for your particular application.
# Problem with cosine distances: tweets vs. long articles
Happily ever after? Not so fast. Cosine distances ignore all document lengths, which may be great in certain situations but not in others. For instance, consider the following (admittedly contrived) example.
```
+--------------------------------------------------------+
| +--------+ |
| One that shall not be named | Follow | |
| @username +--------+ |
| |
| Democratic governments control law in response to |
| popular act. |
| |
| 8:05 AM - 16 May 2016 |
| |
| Reply Retweet (1,332) Like (300) |
| |
+--------------------------------------------------------+
```
How similar is this tweet to Barack Obama's Wikipedia article? Let's transform the tweet into TF-IDF features, using an encoder fit to the Wikipedia dataset. (That is, let's treat this tweet as an article in our Wikipedia dataset and see what happens.)
```
sf = graphlab.SFrame({'text': ['democratic governments control law in response to popular act']})
sf['word_count'] = graphlab.text_analytics.count_words(sf['text'])
encoder = graphlab.feature_engineering.TFIDF(features=['word_count'], output_column_prefix='tf_idf')
encoder.fit(wiki)
sf = encoder.transform(sf)
sf
```
Let's look at the TF-IDF vectors for this tweet and for Barack Obama's Wikipedia entry, just to visually see their differences.
```
tweet_tf_idf = sf[0]['tf_idf.word_count']
tweet_tf_idf
```
Now, compute the cosine distance between the Barack Obama article and this tweet:
```
obama = wiki[wiki['name'] == 'Barack Obama']
obama_tf_idf = obama[0]['tf_idf']
graphlab.toolkits.distances.cosine(obama_tf_idf, tweet_tf_idf)
```
Let's compare this distance to the distance between the Barack Obama article and all of its Wikipedia 10 nearest neighbors:
```
model2_tf_idf.query(obama, label='name', k=10)
```
With cosine distances, the tweet is "nearer" to Barack Obama than everyone else, except for Joe Biden! This probably is not something we want. If someone is reading the Barack Obama Wikipedia page, would you want to recommend they read this tweet? Ignoring article lengths completely resulted in nonsensical results. In practice, it is common to enforce maximum or minimum document lengths. After all, when someone is reading a long article from _The Atlantic_, you wouldn't recommend a tweet to them.
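A sketch of such a length screen; the thresholds here are illustrative, not from the course:

```python
MIN_LEN, MAX_LEN = 500, 50_000   # illustrative character bounds

def within_length_bounds(text):
    """Drop tweets/stubs and extremely long pages before the similarity query."""
    return MIN_LEN <= len(text) <= MAX_LEN

corpus = ["a short tweet-like snippet", "x" * 1000]
eligible = [doc for doc in corpus if within_length_bounds(doc)]
```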
| github_jupyter |
# Azure Cosmos DB Live TV
```
## Imports, client initialization, and object creation
# Not needed here: `cosmos_client` is assumed to be pre-initialized by the
# hosting environment. Uncomment for a standalone run:
#from azure.cosmos import CosmosClient
#import os
#url = os.environ['ACCOUNT_URI']
#key = os.environ['ACCOUNT_KEY']
#cosmos_client = CosmosClient(url, credential=key)
db_name = 'AzureCosmosDBLiveTVTestDB'
database_client = cosmos_client.create_database_if_not_exists(db_name)
print('Database with id \'{0}\' created'.format(db_name))
#Creating a container with analytical store
from azure.cosmos.partition_key import PartitionKey
container_name = "AzureTVData"
partition_key_value = "/id"
offer = 400
container_client = database_client.create_container_if_not_exists(
    id=container_name,
    partition_key=PartitionKey(path=partition_key_value),
    offer_throughput=offer,
    analytical_storage_ttl=-1)
print('Container with id \'{0}\' created'.format(container_name))
#Properties
import json
properties = database_client.read()
print(json.dumps(properties))
print(" ")
print(" ")
properties = container_client.read()
print(json.dumps(properties))
#DB Offer (raises "offer not found" unless the database has dedicated throughput)
# Dedicated throughput only. Will return error "offer not found" for Objects without dedicated throughput
# Database
db_offer = database_client.read_offer()
print('Found Offer \'{0}\' for Database \'{1}\' and its throughput is \'{2}\''.format(db_offer.properties['id'], db_name, db_offer.properties['content']['offerThroughput']))
#Container Offer
# Dedicated throughput only. Will return error "offer not found" for Objects without dedicated throughput
container_offer = container_client.read_offer()
print('Found Offer \'{0}\' for Container \'{1}\' and its throughput is \'{2}\''.format(container_offer.properties['id'], container_name, container_offer.properties['content']['offerThroughput']))
#Query 1 (fails unless cross-partition querying is enabled)
import json
db = cosmos_client.get_database_client('TestDB')
container = db.get_container_client('Families')
for item in container.query_items(query='SELECT * FROM Families',
                                  enable_cross_partition_query=True):
    print(json.dumps(item, indent=True))
#Query 2
import json
db = cosmos_client.get_database_client('TestDB')
container = db.get_container_client('Families')
for item in container.query_items(query='SELECT * FROM Families.children',
                                  enable_cross_partition_query=True):
    print(json.dumps(item, indent=True))
# Boolean Test
for i in range(1, 10):
    container_client.upsert_item({
        'id': 'item{0}'.format(i),
        'productName': 'Widget',
        'productModel': 'Model {0}'.format(i),
        'isEnabled': True
    })
import json
for item in container_client.query_items(
        query='SELECT * FROM AzureTVData',
        enable_cross_partition_query=True):
    print(json.dumps(item, indent=True))
for item in container_client.query_items(
        query='SELECT * FROM AzureTVData',
        enable_cross_partition_query=True):
    print(item)
```
| github_jupyter |
# Approximations and Round-off Errors
_Prof. Dr. Tito Dias Júnior_
## **Round-off Errors**
### Machine Epsilon
```
#Compute the machine epsilon
epsilon = 1
while (epsilon+1) > 1:
    epsilon = epsilon/2
epsilon = 2 * epsilon
print(epsilon)
```
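The loop above halves epsilon until `1 + eps` is no longer distinguishable from `1`; the result can be checked against the constant NumPy stores for IEEE-754 double precision:

```python
import numpy as np

# Same halving loop as above, then compare with NumPy's stored constant.
eps_loop = 1.0
while (eps_loop + 1.0) > 1.0:
    eps_loop /= 2.0
eps_loop *= 2.0

assert eps_loop == np.finfo(np.float64).eps  # 2**-52, about 2.22e-16
```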
Approximating a function with a Taylor series
```
import numpy as np
import matplotlib.pyplot as plt
def f(x):
    return -0.1*x**4 -0.15*x**3 -0.5*x**2 -0.25*x +1.2
def df(x):
    return -0.4*x**3 -0.45*x**2 -1.0*x -0.25
def ddf(x):
    return -1.2*x**2 -0.9*x -1.0
def dddf(x):
    return -2.4*x -0.9
def d4f(x):
    return -2.4
x1 = 0
x2 = 1
# Zero-order approximation
fO_0 = f(x1) # predicted value
erroO_0 = f(x2) - fO_0 # exact value minus predicted value
# First-order approximation
fO_1 = f(x1) + df(x1)*(x2-x1) # predicted value
erroO_1 = f(x2) - fO_1 # exact value minus predicted value
# Second-order approximation
fO_2 = f(x1) + df(x1)*(x2-x1) + (ddf(x1)/2)*(x2-x1)**2 # predicted value
erroO_2 = f(x2) - fO_2 # exact value minus predicted value
# Third-order approximation
fO_3 = f(x1) + df(x1)*(x2-x1) + (ddf(x1)/2)*(x2-x1)**2 + (dddf(x1)/6)*(x2-x1)**3 # predicted value; 3! = 3*2*1 = 6
erroO_3 = f(x2) - fO_3 # exact value minus predicted value
# Fourth-order approximation
fO_4 = f(x1) + df(x1)*(x2-x1) + (ddf(x1)/2)*(x2-x1)**2 + (dddf(x1)/6)*(x2-x1)**3 + (d4f(x1)/24)*(x2-x1)**4 # predicted value; 4! = 4*3*2*1 = 24
erroO_4 = f(x2) - fO_4 # exact value minus predicted value
print('Order    ~f(x)      Error')
print('0     {0:8f}   {1:3f}'.format(fO_0, erroO_0))
print('1     {0:8f}   {1:8f}'.format(fO_1, erroO_1))
print('2     {0:8f}   {1:8f}'.format(fO_2, erroO_2))
print('3     {0:8f}   {1:8f}'.format(fO_3, erroO_3))
print('4     {0:8f}   {1:8f}'.format(fO_4, erroO_4))
# Plot the results
xx = np.linspace(-2,2.0,40)
yy = f(xx)
plt.plot(xx,yy,'b',x2,fO_0,'*',x2,fO_1,'*r', x2, fO_2, '*g', x2, fO_3,'*y', x2, fO_4, '*r')
plt.savefig('exemplo1.png')
plt.show()
plt.show()
# Exercise from the class of 17/08/2020
import numpy as np
import matplotlib.pyplot as plt
def f(x):
    return np.sin(x)
def df(x):
    return np.cos(x)
def ddf(x):
    return -np.sin(x)
def dddf(x):
    return -np.cos(x)
def d4f(x):
    return np.sin(x)
x1 = np.pi/2
x2 = 3*np.pi/4 # equal to pi/2 + pi/4
# Zero-order approximation
fO_0 = f(x1) # predicted value
erroO_0 = f(x2) - fO_0 # exact value minus predicted value
# First-order approximation
fO_1 = f(x1) + df(x1)*(x2-x1) # predicted value
erroO_1 = f(x2) - fO_1 # exact value minus predicted value
# Second-order approximation
fO_2 = f(x1) + df(x1)*(x2-x1) + (ddf(x1)/2)*(x2-x1)**2 # predicted value
erroO_2 = f(x2) - fO_2 # exact value minus predicted value
# Third-order approximation
fO_3 = f(x1) + df(x1)*(x2-x1) + (ddf(x1)/2)*(x2-x1)**2 + (dddf(x1)/6)*(x2-x1)**3 # predicted value; 3! = 3*2*1 = 6
erroO_3 = f(x2) - fO_3 # exact value minus predicted value
# Fourth-order approximation
fO_4 = f(x1) + df(x1)*(x2-x1) + (ddf(x1)/2)*(x2-x1)**2 + (dddf(x1)/6)*(x2-x1)**3 + (d4f(x1)/24)*(x2-x1)**4 # predicted value; 4! = 4*3*2*1 = 24
erroO_4 = f(x2) - fO_4 # exact value minus predicted value
print('Order    ~f(x)      Error')
print('0     {0:8f}   {1:3f}'.format(fO_0, erroO_0))
print('1     {0:8f}   {1:8f}'.format(fO_1, erroO_1))
print('2     {0:8f}   {1:8f}'.format(fO_2, erroO_2))
print('3     {0:8f}   {1:8f}'.format(fO_3, erroO_3))
print('4     {0:8f}   {1:8f}'.format(fO_4, erroO_4))
# Plot the results
xx = np.linspace(0,2.*np.pi,40)
yy = f(xx)
plt.plot(xx,yy,'b',x2,fO_0,'*',x2,fO_1,'*b', x2, fO_2, '*g', x2, fO_3,'*y', x2, fO_4, '*r')
plt.savefig('exemplo2.png')
plt.show()
```
### Exercise - Class of 17/08/2020
Following the previous example, compute Taylor expansions of the sine function, from order zero up to order 4, around $x = \pi/2$ with $h = \pi/4$, i.e., to estimate the value of the function at $x_{i+1} = 3 \pi/4$. Then answer the check questions on the AVA (virtual learning environment):
1. Check: What is the error of the zero-order estimate?
2. Check: What is the error of the fourth-order estimate?
### Exercise - Class of 24/08/2020
Using the previous examples and exercises, plot the Taylor expansions of the functions studied, from order zero up to order 4, save the file in PNG format, and upload it to the AVA.
## References
Kiusalaas, J. (2013). **Numerical Methods in Engineering With Python 3**. Cambridge: Cambridge.<br>
Brasil, R.M.L.R.F, Balthazar, J.M., Góis, W. (2015) **Métodos Numéricos e Computacionais na Prática de Engenharias e Ciências**, São Paulo: Edgar Blucher
| github_jupyter |
# Shashank V. Sonar
## Task 3: Perform ‘Exploratory Data Analysis’ on dataset ‘SampleSuperstore’
### ● As a business manager, try to find out the weak areas where you can work to make more profit.
### ● What all business problems can you derive by exploring the data?
```
#importing libraries
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_style("whitegrid")
#load data
my_data=pd.read_csv(r"C:\Users\91814\Desktop\GRIP\Task 3\SampleSuperstore.csv")
#displaying the data
my_data
# To Check first 5 rows of Dataset
my_data.head(5)
# To Check last 5 rows of Dataset
my_data.tail(5)
```
### Exploratory Data Analysis
```
# Information of the Dataset
my_data.info()
# Shape Of the Dataset
my_data.shape
# Columns Of the Dataset
my_data.columns
# Datatype of each Attribute
my_data.dtypes
# Checking for any Null Values in the columns and duplicates values
my_data.isnull().sum()
# Checking of Duplicated data
my_data.duplicated().sum()
# Deleting Duplicates if any
my_data.drop_duplicates(inplace=True)
# finding out any duplicates left from the sample file
my_data.duplicated().sum()
# Displaying the unique data
my_data.nunique()
# Dropping of Irrelevant columns like we have postal code in the sample file
my_data.drop(['Postal Code'],axis=1, inplace=True)
my_data
# To Check first 5 rows of Dataset
my_data.head()
# Correlation between the Attributes
my_data.corr()
```
### Data Visualization
```
# Pairplot the data
sns.pairplot(my_data,hue='Segment')
#ploting against various attributes
plt.figure(figsize=(20,10))
plt.bar('Sub-Category','Category', data=my_data)
plt.title('Category vs Sub-Category')
plt.xlabel('Sub-Category')
plt.ylabel('Category')
plt.xticks(rotation=50)
plt.show()
# Visualizing the correlation between the Attributes
sns.heatmap(my_data.corr(), annot=True)
print("1 represents strong positive correlation")
print("-0.22 represents negative correlation")
# Countplot each attribute
fig,axs=plt.subplots(nrows=2,ncols=2,figsize=(10,7));
sns.countplot(my_data['Category'],ax=axs[0][0])
axs[0][0].set_title('Category',fontsize=20)
sns.countplot(my_data['Segment'],ax=axs[0][1])
axs[0][1].set_title('Segment',fontsize=20)
sns.countplot(my_data['Ship Mode'],ax=axs[1][0])
axs[1][0].set_title('Ship Mode',fontsize=20)
sns.countplot(my_data['Region'],ax=axs[1][1])
axs[1][1].set_title('Region',fontsize=20)
plt.tight_layout()
# Countplot State wise shipments
plt.figure(figsize=(12,7))
sns.countplot(x=my_data['State'])
plt.xticks(rotation=90)
plt.title('Count of State wise shipments')
# State vs Profit
plt.figure(figsize =(20,12))
my_data.groupby(by ='State')['Profit'].sum().sort_values(ascending = True).plot(kind = 'bar')
# Count of Sub-Category
plt.figure(figsize=(12,7))
sns.countplot(x=my_data['Sub-Category'])
plt.xticks(rotation=90)
plt.title('Count of Sub-categories')
# Discount Vs Profit
sns.lineplot(x='Discount',y='Profit',label='Profit',data=my_data)
plt.legend()
# Discount VS Sales
sns.lineplot(x='Discount',y='Sales',label='Profit',data=my_data)
plt.legend()
# Count of Ship Mode by Region
plt.figure(figsize=(9,5))
sns.countplot(x='Region',hue='Ship Mode',data=my_data)
plt.ylabel('Count of Ship Mode')
plt.title('Count of Ship Mode by Region')
```
Thus we can conclude that product sales increase as discounts increase, but profit decreases.
Conclusion:
1) The cities/states that give larger discounts on products show more sales but very little profit.
2) The West region shows the most sales and profit, whereas the South region shows the least of both, so we should try to increase sales and profits in the South region.
3) We should limit the discount on Standard Class shipments and try to increase their sales and profits.
4) The Technology category shows the most profit, so we should increase its sales and reduce sales of Furniture due to its low profit.
5) The Copiers sub-category shows the most profit and Tables the least, so we should reduce sales of Tables and increase sales of sub-categories like Accessories and Phones.
6) The states with the highest profit ratios are California and New York, whereas those with the lowest are Texas and Ohio.
7) The cities with the highest profit ratios are New York and Los Angeles, whereas those with the lowest are Philadelphia and Houston.
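The "profit ratio" figures in points 6 and 7 can be reproduced with a simple groupby; a minimal pandas sketch, assuming (as in this dataset) columns named `State`, `Sales`, and `Profit` — the helper name `profit_ratio_by` and the tiny demo frame are ours, for illustration only:

```python
import pandas as pd

def profit_ratio_by(df, key):
    """Total profit divided by total sales for each group, sorted descending."""
    grouped = df.groupby(key)[["Profit", "Sales"]].sum()
    return (grouped["Profit"] / grouped["Sales"]).sort_values(ascending=False)

# Tiny illustrative frame (not the real Superstore data)
demo = pd.DataFrame({
    "State": ["California", "Texas", "California", "Texas"],
    "Sales": [100.0, 80.0, 50.0, 20.0],
    "Profit": [30.0, -5.0, 20.0, -10.0],
})
print(profit_ratio_by(demo, "State"))
```

On the real data, `profit_ratio_by(my_data, "State")` and `profit_ratio_by(my_data, "City")` give the rankings summarized above.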
# TensorFlow Reproducibility
```
from __future__ import division, print_function, unicode_literals
import numpy as np
import tensorflow as tf
from tensorflow import keras
```
## Checklist
1. Do not run TensorFlow on the GPU.
2. Beware of multithreading, and make TensorFlow single-threaded.
3. Set all the random seeds.
4. Eliminate any other source of variability.
## Do Not Run TensorFlow on the GPU
Some operations (like `tf.reduce_sum()`) favor performance over precision, and their outputs may vary slightly across runs. To get reproducible results, make sure TensorFlow runs on the CPU:
```
import os
os.environ["CUDA_VISIBLE_DEVICES"]=""
```
## Beware of Multithreading
Because floats have limited precision, the order of execution matters:
```
2. * 5. / 7.
2. / 7. * 5.
```
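The same point can be seen in plain Python, with no TensorFlow involved — floating-point addition is not associative, so the grouping of operations changes the result:

```python
a = (0.1 + 0.2) + 0.3  # left-to-right grouping
b = 0.1 + (0.2 + 0.3)  # different grouping of the same numbers
print(a)       # 0.6000000000000001
print(b)       # 0.6
print(a == b)  # False
```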
You should make sure TensorFlow runs your ops on a single thread:
```
config = tf.ConfigProto(intra_op_parallelism_threads=1,
inter_op_parallelism_threads=1)
with tf.Session(config=config) as sess:
#... this will run single threaded
pass
```
The thread pools for all sessions are created when you create the first session, so all sessions in the rest of this notebook will be single-threaded:
```
with tf.Session() as sess:
#... also single-threaded!
pass
```
## Set all the random seeds!
### Python's built-in `hash()` function
```
print(set("Try restarting the kernel and running this again"))
print(set("Try restarting the kernel and running this again"))
```
Since Python 3.3, the result will be different every time, unless you start Python with the `PYTHONHASHSEED` environment variable set to `0`:
```shell
PYTHONHASHSEED=0 python
```
```pycon
>>> print(set("Now the output is stable across runs"))
{'n', 'b', 'h', 'o', 'i', 'a', 'r', 't', 'p', 'N', 's', 'c', ' ', 'l', 'e', 'w', 'u'}
>>> exit()
```
```shell
PYTHONHASHSEED=0 python
```
```pycon
>>> print(set("Now the output is stable across runs"))
{'n', 'b', 'h', 'o', 'i', 'a', 'r', 't', 'p', 'N', 's', 'c', ' ', 'l', 'e', 'w', 'u'}
```
Alternatively, you could set this environment variable system-wide, but that's probably not a good idea, because this automatic randomization was [introduced for security reasons](http://ocert.org/advisories/ocert-2011-003.html).
Unfortunately, setting the environment variable from within Python (e.g., using `os.environ["PYTHONHASHSEED"]="0"`) will not work, because Python reads it upon startup. For Jupyter notebooks, you have to start the Jupyter server like this:
```shell
PYTHONHASHSEED=0 jupyter notebook
```
```
if os.environ.get("PYTHONHASHSEED") != "0":
raise Exception("You must set PYTHONHASHSEED=0 when starting the Jupyter server to get reproducible results.")
```
### Python Random Number Generators (RNGs)
```
import random
random.seed(42)
print(random.random())
print(random.random())
print()
random.seed(42)
print(random.random())
print(random.random())
```
### NumPy RNGs
```
import numpy as np
np.random.seed(42)
print(np.random.rand())
print(np.random.rand())
print()
np.random.seed(42)
print(np.random.rand())
print(np.random.rand())
```
### TensorFlow RNGs
TensorFlow's behavior is more complex because of two things:
* you create a graph, and then you execute it. The random seed must be set before you create the random operations.
* there are two seeds: one at the graph level, and one at the individual random operation level.
```
import tensorflow as tf
tf.set_random_seed(42)
rnd = tf.random_uniform(shape=[])
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
print()
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
```
Every time you reset the graph, you need to set the seed again:
```
tf.reset_default_graph()
tf.set_random_seed(42)
rnd = tf.random_uniform(shape=[])
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
print()
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
```
If you create your own graph, it will ignore the default graph's seed:
```
tf.reset_default_graph()
tf.set_random_seed(42)
graph = tf.Graph()
with graph.as_default():
rnd = tf.random_uniform(shape=[])
with tf.Session(graph=graph):
print(rnd.eval())
print(rnd.eval())
print()
with tf.Session(graph=graph):
print(rnd.eval())
print(rnd.eval())
```
Instead, you must set that graph's own seed:
```
graph = tf.Graph()
with graph.as_default():
tf.set_random_seed(42)
rnd = tf.random_uniform(shape=[])
with tf.Session(graph=graph):
print(rnd.eval())
print(rnd.eval())
print()
with tf.Session(graph=graph):
print(rnd.eval())
print(rnd.eval())
```
If you set the seed after the random operation is created, the seed has no effect:
```
tf.reset_default_graph()
rnd = tf.random_uniform(shape=[])
tf.set_random_seed(42) # BAD, NO EFFECT!
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
print()
tf.set_random_seed(42) # BAD, NO EFFECT!
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
```
#### A note about operation seeds
You can also set a seed for each individual random operation. When you do, it is combined with the graph seed into the final seed used by that op. The following table summarizes how this works:
| Graph seed | Op seed | Resulting seed |
|------------|---------|--------------------------------|
| None | None | Random |
| graph_seed | None | f(graph_seed, op_index) |
| None | op_seed | f(default_graph_seed, op_seed) |
| graph_seed | op_seed | f(graph_seed, op_seed) |
* `f()` is a deterministic function.
* `op_index = graph._last_id`, so when there is a graph seed, different random ops without op seeds will have different outputs. However, each of them will have the same sequence of outputs at every run.
In eager mode, there is a global seed instead of graph seed (since there is no graph in eager mode).
```
tf.reset_default_graph()
rnd1 = tf.random_uniform(shape=[], seed=42)
rnd2 = tf.random_uniform(shape=[], seed=42)
rnd3 = tf.random_uniform(shape=[])
with tf.Session() as sess:
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print()
with tf.Session() as sess:
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
```
In the following example, you may think that all random ops will have the same random seed, but `rnd3` will actually have a different seed:
```
tf.reset_default_graph()
tf.set_random_seed(42)
rnd1 = tf.random_uniform(shape=[], seed=42)
rnd2 = tf.random_uniform(shape=[], seed=42)
rnd3 = tf.random_uniform(shape=[])
with tf.Session() as sess:
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print()
with tf.Session() as sess:
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
```
#### Estimators API
**Tip**: in a Jupyter notebook, you probably want to set the random seeds regularly so that you can come back and run the notebook from there (instead of from the beginning) and still get reproducible outputs.
```
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)
```
If you use the Estimators API, make sure to create a `RunConfig` and set its `tf_random_seed`, then pass it to the constructor of your estimator:
```
my_config = tf.estimator.RunConfig(tf_random_seed=42)
feature_cols = [tf.feature_column.numeric_column("X", shape=[28 * 28])]
dnn_clf = tf.estimator.DNNClassifier(hidden_units=[300, 100], n_classes=10,
feature_columns=feature_cols,
config=my_config)
```
Let's try it on MNIST:
```
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train = X_train.astype(np.float32).reshape(-1, 28*28) / 255.0
y_train = y_train.astype(np.int32)
```
Unfortunately, the `numpy_input_fn` does not allow us to set the seed when `shuffle=True`, so we must shuffle the data ourselves and set `shuffle=False`.
```
indices = np.random.permutation(len(X_train))
X_train_shuffled = X_train[indices]
y_train_shuffled = y_train[indices]
input_fn = tf.estimator.inputs.numpy_input_fn(
x={"X": X_train_shuffled}, y=y_train_shuffled, num_epochs=10, batch_size=32, shuffle=False)
dnn_clf.train(input_fn=input_fn)
```
The final loss should be exactly 0.46282205.
Instead of using the `numpy_input_fn()` function (which cannot reproducibly shuffle the dataset at each epoch), you can create your own input function using the Data API and set its shuffling seed:
```
def create_dataset(X, y=None, n_epochs=1, batch_size=32,
buffer_size=1000, seed=None):
dataset = tf.data.Dataset.from_tensor_slices(({"X": X}, y))
dataset = dataset.repeat(n_epochs)
dataset = dataset.shuffle(buffer_size, seed=seed)
return dataset.batch(batch_size)
input_fn=lambda: create_dataset(X_train, y_train, seed=42)
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)
my_config = tf.estimator.RunConfig(tf_random_seed=42)
feature_cols = [tf.feature_column.numeric_column("X", shape=[28 * 28])]
dnn_clf = tf.estimator.DNNClassifier(hidden_units=[300, 100], n_classes=10,
feature_columns=feature_cols,
config=my_config)
dnn_clf.train(input_fn=input_fn)
```
The final loss should be exactly 1.0556093.
#### Keras API
If you use the Keras API, all you need to do is set the random seed any time you clear the session:
```
keras.backend.clear_session()
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)
model = keras.models.Sequential([
keras.layers.Dense(300, activation="relu"),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(10, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10)
```
You should get exactly 97.16% accuracy on the training set at the end of training.
## Eliminate other sources of variability
For example, `os.listdir()` returns file names in an order that depends on how the files were indexed by the file system:
```
for i in range(10):
with open("my_test_foo_{}".format(i), "w"):
pass
[f for f in os.listdir() if f.startswith("my_test_foo_")]
for i in range(10):
with open("my_test_bar_{}".format(i), "w"):
pass
[f for f in os.listdir() if f.startswith("my_test_bar_")]
```
You should sort the file names before you use them:
```
filenames = os.listdir()
filenames.sort()
[f for f in filenames if f.startswith("my_test_foo_")]
for f in os.listdir():
if f.startswith("my_test_foo_") or f.startswith("my_test_bar_"):
os.remove(f)
```
I hope you enjoyed this notebook. If you do not get reproducible results, or if they are different than mine, then please [file an issue](https://github.com/ageron/handson-ml/issues) on github, specifying what version of Python, TensorFlow, and NumPy you are using, as well as your O.S. version. Thank you!
If you want to learn more about Deep Learning and TensorFlow, check out my book [Hands-On Machine Learning with Scikit-Learn and TensorFlow](http://homl.info/amazon), O'Reilly. You can also follow me on Twitter [@aureliengeron](https://twitter.com/aureliengeron) or watch my videos on YouTube at [youtube.com/c/AurelienGeron](https://www.youtube.com/c/AurelienGeron).
## next_permutation
Implement next permutation, which rearranges numbers into the lexicographically next greater permutation of numbers.
If such an arrangement is not possible, it must be rearranged into the lowest possible order (i.e., sorted in ascending order).
The replacement must be in-place and use only constant extra memory.
Here are some examples. Inputs are in the left-hand column and its corresponding outputs are in the right-hand column.
<b>1,2,3 → 1,3,2</b><br>
<b>3,2,1 → 1,2,3</b><br>
<b>1,1,5 → 1,5,1</b>
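Any implementation of this problem can be sanity-checked against a brute-force reference built on the standard library's `itertools.permutations` (fine for short inputs only); the helper name `next_perm_reference` is ours, not part of the exercise:

```python
from itertools import permutations

def next_perm_reference(ls):
    """Brute-force reference: sort all distinct permutations of ls and
    return the one after ls, wrapping around to the smallest."""
    perms = sorted(set(permutations(ls)))
    i = perms.index(tuple(ls))
    return list(perms[(i + 1) % len(perms)])

print(next_perm_reference([1, 2, 3]))  # [1, 3, 2]
print(next_perm_reference([3, 2, 1]))  # [1, 2, 3]
print(next_perm_reference([1, 1, 5]))  # [1, 5, 1]
```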


To illustrate the algorithm with an example, consider nums = [2,3,1,5,4,2]. <br>
It is easy to see that i = 2 is the first i (from the right) such that nums[i] < nums[i+1].<br>
Then we swap nums[2] = 1 with the smallest number in nums[3:] that is larger than 1, which is nums[5] = 2, after which we get nums = [2,3,2,5,4,1]. <br>
To get the lexicographically next greater permutation of nums, we just need to sort nums[3:] = [5,4,1] in-place. <br>
Finally, we reach nums = [2,3,2,1,4,5].
```
#SOLUTION WITHOUT COMMENTS
def next_perm(ls):
n = len(ls)
for i in range(n-1, 0, -1):
if ls[i] > ls[i-1]:
j = i
while j < n and ls[j] > ls[i-1]:
idx = j
j += 1
ls[idx], ls[i-1] = ls[i-1], ls[idx]
for k in range((n-i)//2):
ls[i+k], ls[n-1-k] = ls[n-1-k], ls[i+k]
break
else:
ls.reverse()
return ls
print(next_perm([2,3,1,5,4,2]))
print(next_perm([3,2,1]))
print(next_perm([1,2,3]))
print(next_perm([1,1,5]))
#SOLUTION WITH COMMENTS
def next_perm(ls):
# Find the length of the list
n = len(ls)
print("Length of the list :", len(ls))
    # Decrement i from the last index down to 1
for i in range(n-1, 0, -1):
        # If ls[i] > ls[i-1], then ls[i-1] is the pivot: the first element
        # (scanning from the right) that can be increased
if ls[i] > ls[i-1]:
print("Found the first decreasing element ls[i]",ls[i])
j = i
print("Reset both the pointers",ls[i], i,j)
            # The pointers have been reset; now find the rightmost element
            # greater than the pivot ls[i-1] by incrementing j while ls[j] > ls[i-1]
while j < n and ls[j] > ls[i-1]:
idx = j
j += 1
            # Swap the pivot ls[i-1] with that rightmost greater element
ls[idx], ls[i-1] = ls[i-1], ls[idx]
# double-slash for “floor” division (rounds down to nearest whole number)
for k in range((n-i)//2):
ls[i+k], ls[n-1-k] = ls[n-1-k], ls[i+k]
break
else:
ls.reverse()
return ls
print(next_perm([2,3,1,5,4,2]))
print(next_perm([3,2,1]))
print(next_perm([1,1,5]))
#If all values are in descending order then we need to only reverse
print(next_perm([4,3,2,1]))
```
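The final loop in `next_perm` reverses the suffix `ls[i:]` in place by swapping `(n-i)//2` symmetric pairs. A tiny standalone check of that idiom (the helper name `reverse_suffix` is ours):

```python
def reverse_suffix(ls, i):
    # Reverse ls[i:] in place by swapping (n-i)//2 symmetric pairs,
    # exactly as done inside next_perm above
    n = len(ls)
    for k in range((n - i) // 2):
        ls[i + k], ls[n - 1 - k] = ls[n - 1 - k], ls[i + k]
    return ls

print(reverse_suffix([2, 3, 2, 5, 4, 1], 3))  # [2, 3, 2, 1, 4, 5]
print(reverse_suffix([1, 2, 3, 4], 0))        # [4, 3, 2, 1]
```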

```
def next_perm(ls):
# Find the length of the list
print("ls",ls)
n = len(ls)
print("Length of the list :", len(ls))
    # Decrement i from the last index down to 1
for i in range(n-1, 0, -1):
        # If ls[i] > ls[i-1], then ls[i-1] is the pivot:
        # the first decreasing element scanning from the right
if ls[i] > ls[i-1]:
print("\nFIND FIRST DECREASING ELEMENT")
print("First decreasing element ls[i]:i",ls[i-1],i-1)
j = i
print("Reset both the pointers",i,j)
print("ls",ls)
            # The pointers have been reset; now find the rightmost element
            # greater than the pivot ls[i-1] by incrementing j while ls[j] > ls[i-1]
while j < n and ls[j] > ls[i-1]:
print("\nFIND NEXT GREATEST ELEMENT")
print("j : {},len n :{},ls[j]:{},ls[i-1] {}".format(j,n,ls[j],ls[i-1]))
idx = j
j += 1
print("idx is the pointer next greater num, idx {} ls[idx] {}".format(idx,ls[idx]))
            # Swap the pivot ls[i-1] with that rightmost greater element
ls[idx], ls[i-1] = ls[i-1], ls[idx]
print("After swap idx and ls[i-1](this was 2)".format(ls[idx],ls[i-1]))
print("\n",ls)
# double-slash for “floor” division (rounds down to nearest whole number)
print("\nfind (n-i)//2) => ({} - {} )// 2 = {}".format(n,i,(n-i)//2))
for k in range((n-i)//2):
print("\nREVERSE")
print("n",n)
print("i",i)
print("ls[i]",ls[i])
print("(n-i)//2",(n-i)//2)
print("k",k)
print("ls[i+k]",ls[i+k])
print("ls[n-1]",ls[n-1])
print("ls[n-1-k]",ls[n-1-k])
ls[i+k], ls[n-1-k] = ls[n-1-k], ls[i+k]
break
else:
ls.reverse()
return ls
print(next_perm([2,3,1,5,4,2]))
"""print(next_perm([3,2,1]))
print(next_perm([1,1,5]))"""
#BRUTE FORCE. DOES NOT SOLVE ALL CASES
def next_perm(ls):
# Check the max element in the list
max_num=max(ls)
min_num=min(ls)
head =0
tail =len(ls)-1
#We know the max for the first element has been reached then swap the max element
if ls[0] == max_num:
temp= ls[-1]
ls[-1]=ls[0]
ls[0] =temp
return ls
while head <=tail:
if ls[tail] > ls[tail-1]:
temp= ls[tail]
ls[tail]=ls[tail-1]
ls[tail-1] =temp
return ls
head +=1
tail -=1
return ls
print(next_perm([3,2,1]))
print(next_perm([1,2,3]))
print(next_perm([1,1,5]))
```
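To see concretely why the brute-force version fails, compare it with the full algorithm on the worked example from above; both functions are re-defined here (without debug prints) so this cell runs on its own:

```python
def next_perm(ls):
    # Correct algorithm (same as the solution above)
    n = len(ls)
    for i in range(n - 1, 0, -1):
        if ls[i] > ls[i - 1]:
            j = i
            while j < n and ls[j] > ls[i - 1]:
                idx = j
                j += 1
            ls[idx], ls[i - 1] = ls[i - 1], ls[idx]
            for k in range((n - i) // 2):
                ls[i + k], ls[n - 1 - k] = ls[n - 1 - k], ls[i + k]
            break
    else:
        ls.reverse()
    return ls

def next_perm_brute(ls):
    # Brute-force version from above: only ever swaps one adjacent pair
    if ls[0] == max(ls):
        ls[0], ls[-1] = ls[-1], ls[0]
        return ls
    head, tail = 0, len(ls) - 1
    while head <= tail:
        if ls[tail] > ls[tail - 1]:
            ls[tail], ls[tail - 1] = ls[tail - 1], ls[tail]
            return ls
        head += 1
        tail -= 1
    return ls

print(next_perm([2, 3, 1, 5, 4, 2]))        # [2, 3, 2, 1, 4, 5] (correct)
print(next_perm_brute([2, 3, 1, 5, 4, 2]))  # [2, 3, 5, 1, 4, 2] (wrong: skips ahead)
```

The brute-force version never sorts the suffix after swapping, so it can jump past the lexicographically next permutation.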
<a href="https://colab.research.google.com/github/partha1189/machine_learning/blob/master/CONV1D_LSTM_time_series.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(4 * 365 + 1, dtype="float32")
baseline = 10
series = trend(time, 0.1)
baseline = 10
amplitude = 40
slope = 0.05
noise_level = 5
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=42)
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000
def windowed_dataset(series, window_size, batch_size, shuffle_buffer_size):
series = tf.expand_dims(series, axis=-1)
dataset = tf.data.Dataset.from_tensor_slices(series)
dataset = dataset.window(window_size + 1, shift= 1, drop_remainder =True)
dataset = dataset.flat_map(lambda window:window.batch(window_size+1))
dataset = dataset.shuffle(shuffle_buffer_size)
dataset = dataset.map(lambda window: (window[:-1], window[-1:]))
return dataset.batch(batch_size).prefetch(1)
def model_forecast(model, series, window_size):
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size))
ds = ds.batch(32).prefetch(1)
forecast = model.predict(ds)
return forecast
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
window_size = 30
train_set = windowed_dataset(x_train, window_size, batch_size=128, shuffle_buffer_size=shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=32, kernel_size=5, strides=1, padding='causal', activation='relu', input_shape=[None, 1]),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x : x * 200)
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(lambda epoch : 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(lr=1e-8, momentum=0.9)
model.compile(loss= tf.keras.losses.Huber(),
optimizer = optimizer,
metrics = ['mae'])
history = model.fit(train_set, epochs = 100, callbacks = [lr_schedule])
plt.semilogx(history.history['lr'], history.history['loss'])
plt.axis([1e-8, 1e-4, 0, 30])
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
#batch_size = 16
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=32, kernel_size=3,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.LSTM(32, return_sequences=True),
tf.keras.layers.LSTM(32, return_sequences=True),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 200)
])
optimizer = tf.keras.optimizers.SGD(lr=1e-5, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(dataset,epochs=500)
rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size)
rnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, rnn_forecast)
tf.keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
mae=history.history['mae']
loss=history.history['loss']
epochs=range(len(loss)) # Get number of epochs
#------------------------------------------------
# Plot MAE and Loss
#------------------------------------------------
plt.plot(epochs, mae, 'r')
plt.plot(epochs, loss, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("MAE / Loss")
plt.legend(["MAE", "Loss"])
plt.figure()
epochs_zoom = epochs[200:]
mae_zoom = mae[200:]
loss_zoom = loss[200:]
#------------------------------------------------
# Plot Zoomed MAE and Loss
#------------------------------------------------
plt.plot(epochs_zoom, mae_zoom, 'r')
plt.plot(epochs_zoom, loss_zoom, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("MAE / Loss")
plt.legend(["MAE", "Loss"])
plt.figure()
```
**This notebook is an exercise in the [Time Series](https://www.kaggle.com/learn/time-series) course. You can reference the tutorial at [this link](https://www.kaggle.com/ryanholbrook/hybrid-models).**
---
# Introduction #
Run this cell to set everything up!
```
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.time_series.ex5 import *
# Setup notebook
from pathlib import Path
from learntools.time_series.style import * # plot style settings
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import LabelEncoder
from statsmodels.tsa.deterministic import DeterministicProcess
from xgboost import XGBRegressor
comp_dir = Path('../input/store-sales-time-series-forecasting')
data_dir = Path("../input/ts-course-data")
store_sales = pd.read_csv(
comp_dir / 'train.csv',
usecols=['store_nbr', 'family', 'date', 'sales', 'onpromotion'],
dtype={
'store_nbr': 'category',
'family': 'category',
'sales': 'float32',
},
parse_dates=['date'],
infer_datetime_format=True,
)
store_sales['date'] = store_sales.date.dt.to_period('D')
store_sales = store_sales.set_index(['store_nbr', 'family', 'date']).sort_index()
family_sales = (
store_sales
.groupby(['family', 'date'])
.mean()
.unstack('family')
.loc['2017']
)
```
-------------------------------------------------------------------------------
In the next two questions, you'll create a boosted hybrid for the *Store Sales* dataset by implementing a new Python class. Run this cell to create the initial class definition. You'll add `fit` and `predict` methods to give it a scikit-learn like interface.
```
# You'll add fit and predict methods to this minimal class
class BoostedHybrid:
def __init__(self, model_1, model_2):
self.model_1 = model_1
self.model_2 = model_2
self.y_columns = None # store column names from fit method
```
# 1) Define fit method for boosted hybrid
Complete the `fit` definition for the `BoostedHybrid` class. Refer back to steps 1 and 2 from the **Hybrid Forecasting with Residuals** section in the tutorial if you need.
```
def fit(self, X_1, X_2, y):
# YOUR CODE HERE: fit self.model_1
self.model_1.fit(X_1, y)
y_fit = pd.DataFrame(
# YOUR CODE HERE: make predictions with self.model_1
self.model_1.predict(X_1),
index=X_1.index, columns=y.columns,
)
# YOUR CODE HERE: compute residuals
y_resid = y - y_fit
y_resid = y_resid.stack().squeeze() # wide to long
# YOUR CODE HERE: fit self.model_2 on residuals
self.model_2.fit(X_2, y_resid)
# Save column names for predict method
self.y_columns = y.columns
# Save data for question checking
self.y_fit = y_fit
self.y_resid = y_resid
# Add method to class
BoostedHybrid.fit = fit
# Check your answer
q_1.check()
# Lines below will give you a hint or solution code
#q_1.hint()
q_1.solution()
```
-------------------------------------------------------------------------------
# 2) Define predict method for boosted hybrid
Now define the `predict` method for the `BoostedHybrid` class. Refer back to step 3 from the **Hybrid Forecasting with Residuals** section in the tutorial if you need.
```
def predict(self, X_1, X_2):
y_pred = pd.DataFrame(
# YOUR CODE HERE: predict with self.model_1
self.model_1.predict(X_1),
index=X_1.index, columns=self.y_columns,
)
y_pred = y_pred.stack().squeeze() # wide to long
# YOUR CODE HERE: add self.model_2 predictions to y_pred
y_pred += self.model_2.predict(X_2)
return y_pred.unstack() # long to wide
# Add method to class
BoostedHybrid.predict = predict
# Check your answer
q_2.check()
# Lines below will give you a hint or solution code
#q_2.hint()
q_2.solution()
```
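The `# wide to long` and `# long to wide` comments above refer to pandas reshaping via `stack`/`unstack`; a minimal standalone sketch of what those calls do (the toy dates and family names are ours):

```python
import pandas as pd

# Wide format: one column per series (here, pretend one column per family)
wide = pd.DataFrame({"GROCERY": [1.0, 2.0], "DAIRY": [3.0, 4.0]},
                    index=pd.Index(["d1", "d2"], name="date"))

long = wide.stack()    # long format: one row per (date, family) pair
print(long.shape)      # (4,)

back = long.unstack()  # back to wide: one column per family again
print(back.shape)      # (2, 2)
```

This is why `fit` stacks the residuals before training `model_2`, and `predict` unstacks at the end to return one column per family.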
-------------------------------------------------------------------------------
Now you're ready to use your new `BoostedHybrid` class to create a model for the *Store Sales* data. Run the next cell to set up the data for training.
```
# Target series
y = family_sales.loc[:, 'sales']
# X_1: Features for Linear Regression
dp = DeterministicProcess(index=y.index, order=1)
X_1 = dp.in_sample()
# X_2: Features for XGBoost
X_2 = family_sales.drop('sales', axis=1).stack() # onpromotion feature
# Label encoding for 'family'
le = LabelEncoder() # from sklearn.preprocessing
X_2 = X_2.reset_index('family')
X_2['family'] = le.fit_transform(X_2['family'])
# Label encoding for seasonality
X_2["day"] = X_2.index.day # values are day of the month
```
# 3) Train boosted hybrid
Create the hybrid model by initializing a `BoostedHybrid` class with `LinearRegression()` and `XGBRegressor()` instances.
```
# YOUR CODE HERE: Create LinearRegression + XGBRegressor hybrid with BoostedHybrid
model = BoostedHybrid(
model_1=LinearRegression(),
model_2=XGBRegressor(),
)
# YOUR CODE HERE: Fit and predict
model.fit(X_1, X_2, y)
y_pred = model.predict(X_1, X_2)
y_pred = y_pred.clip(0.0)
# Check your answer
q_3.check()
# Lines below will give you a hint or solution code
#q_3.hint()
q_3.solution()
```
-------------------------------------------------------------------------------
Depending on your problem, you might want to use other hybrid combinations than the linear regression + XGBoost hybrid you've created in the previous questions. Run the next cell to try other algorithms from scikit-learn.
```
# Model 1 (trend)
from pyearth import Earth
from sklearn.linear_model import ElasticNet, Lasso, Ridge
# Model 2
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
# Boosted Hybrid
# YOUR CODE HERE: Try different combinations of the algorithms above
model = BoostedHybrid(
model_1=Ridge(),
model_2=KNeighborsRegressor(),
)
```
These are just some suggestions. You might discover other algorithms you like in the scikit-learn [User Guide](https://scikit-learn.org/stable/supervised_learning.html).
Use the code in this cell to see the predictions your hybrid makes.
```
y_train, y_valid = y[:"2017-07-01"], y["2017-07-02":]
X1_train, X1_valid = X_1[: "2017-07-01"], X_1["2017-07-02" :]
X2_train, X2_valid = X_2.loc[:"2017-07-01"], X_2.loc["2017-07-02":]
# Some of the algorithms above do best with certain kinds of
# preprocessing on the features (like standardization), but this is
# just a demo.
model.fit(X1_train, X2_train, y_train)
y_fit = model.predict(X1_train, X2_train).clip(0.0)
y_pred = model.predict(X1_valid, X2_valid).clip(0.0)
families = y.columns[0:6]
axs = y.loc(axis=1)[families].plot(
subplots=True, sharex=True, figsize=(11, 9), **plot_params, alpha=0.5,
)
_ = y_fit.loc(axis=1)[families].plot(subplots=True, sharex=True, color='C0', ax=axs)
_ = y_pred.loc(axis=1)[families].plot(subplots=True, sharex=True, color='C3', ax=axs)
for ax, family in zip(axs, families):
ax.legend([])
ax.set_ylabel(family)
```
# 4) Fit with different learning algorithms
Once you're ready to move on, run the next cell for credit on this question.
```
# View the solution (Run this cell to receive credit!)
q_4.check()
```
# Keep Going #
[**Convert any forecasting task**](https://www.kaggle.com/ryanholbrook/forecasting-with-machine-learning) to a machine learning problem with four ML forecasting strategies.
---
*Have questions or comments? Visit the [course discussion forum](https://www.kaggle.com/learn/time-series/discussion) to chat with other learners.*
# Lab 2: Importing and plotting data
**Data Science for Biologists** • University of Washington • BIOL 419/519 • Winter 2019
Course design and lecture material by [Bingni Brunton](https://github.com/bwbrunton) and [Kameron Harris](https://github.com/kharris/). Lab design and materials by [Eleanor Lutz](https://github.com/eleanorlutz/), with helpful comments and suggestions from Bing and Kam.
### Table of Contents
1. Review of Numpy arrays
2. Importing data from a file into a Numpy array
3. Examining and plotting data in a Numpy array
4. Bonus exercise
### Helpful Resources
- [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas
- [Python Basics Cheat Sheet](https://datacamp-community-prod.s3.amazonaws.com/e30fbcd9-f595-4a9f-803d-05ca5bf84612) by Python for Data Science
- [Jupyter Notebook Cheat Sheet](https://datacamp-community-prod.s3.amazonaws.com/48093c40-5303-45f4-bbf9-0c96c0133c40) by Python for Data Science
- [Matplotlib Cheat Sheet](https://datacamp-community-prod.s3.amazonaws.com/28b8210c-60cc-4f13-b0b4-5b4f2ad4790b) by Python for Data Science
- [Numpy Cheat Sheet](https://datacamp-community-prod.s3.amazonaws.com/e9f83f72-a81b-42c7-af44-4e35b48b20b7) by Python for Data Science
### Data
- The data in this lab is from the [Palmer Penguin Project](https://github.com/allisonhorst/palmerpenguins) by Dr. Kristen Gorman. The data was edited for teaching purposes.
## Lab 2 Part 1: Review of Numpy arrays
In lecture this week we used Numpy arrays to generate random numbers, look at data, and make patterns. In this first lab section we'll review how to create, access, and edit parts of a Numpy array.
To use the Numpy library we need to first import it using the command `import numpy as np`. We'll also import Matplotlib in this same code block, since we'll use this library later in the lab. It's good practice to import all of your libraries at the very beginning of your code file, so that anyone can quickly see what external libraries are necessary to run your code.
```
import numpy as np
import matplotlib.pyplot as plt
# Magic command to turn on in-line plotting
# (show plots within the Jupyter Notebook)
%matplotlib inline
```
### Creating a Numpy array from existing data
To review some important concepts about Numpy arrays, let's make a small 3x3 array called `alphabet_data`, filled with different letters of the alphabet:
```
row_A = ["A", "B", "C"]
row_D = ["D", "E", "F"]
row_G = ["G", "H", "I"]
alphabet_data = np.array([row_A, row_D, row_G])
print(alphabet_data)
```
We can use the `print` command to look at the entire `alphabet_data` Numpy array. But often we'll work with very large arrays full of data, and we'll want to pick small subsets of the data to look at. Therefore, it's useful to know how to ask Python to give you just a section of any Numpy array.
### Selecting subsets of Numpy arrays
In lab 1, we talked about how index values describe where to find a specific item within a Python list or array. For example, the variable `example_list` is a list with one row, containing three items. To print the first item in the list we would print `example_list[0]`, or *the value in the variable example_list at index 0*. Remember that the first item in a Python list corresponds to the *index* 0.
```
example_list = ["avocado", "tomato", "onion"]
print("example_list is:", example_list)
print("example_list[0] is:", example_list[0])
```
### Selecting a single value in a Numpy array
`alphabet_data` is a little more complicated since it has rows *and* columns, but the general principle of indexing is still the same. Each value in a Numpy array has a unique index value for its row location, and a separate unique index value for its column location. We can ask Numpy to give us just the value we want by using the syntax `alphabet_data[row index, column index]`.

**Exercise 1:** Use indexing to print the second item in the first row of `alphabet_data`:
```
print(alphabet_data[0, 1])
```
### Selecting a range of values in a Numpy array
In addition to selecting just one value, we can use the syntax `lower index range : upper index range` to select a range of values. Remember that ranges in Python are *exclusive* - the last index in the range is not included. Below is an example of range indexing syntax used on `example_list`:
```
example_list = ["avocado", "tomato", "onion"]
print("example_list is:", example_list)
print("example_list[0:2] is:", example_list[0:2])
```
We can use exactly the same notation in a Numpy array. However, since we have both row *and* column indices, we can declare one range for the rows and one range for the columns. For example, the following code prints all rows from index 0 to index 3, and all columns from index 0 to index 2. Note that index 3 doesn't actually exist - but since the upper index range is not included in a Python range, we need to use an index of 3 to print everything up to index 2.
```
print(alphabet_data[0:3, 0:2])
```
**Exercise 2:** Print the first two rows of the first two columns in `alphabet_data`.
```
print(alphabet_data[0:2, 0:2])
```
**Exercise 3:** Print the last two rows of the last two columns in `alphabet_data`.
```
print(alphabet_data[1:, 1:])
```
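The solution above leaves the upper bounds blank so the ranges run to the end of the array. As a complementary sketch (not part of the original exercise), negative indices offer another way to grab the "last two" of anything, counting backwards from the end:

```python
import numpy as np

alphabet_data = np.array([["A", "B", "C"],
                          ["D", "E", "F"],
                          ["G", "H", "I"]])

# Negative indices count backwards from the end, so -2: selects the
# last two rows (or columns) no matter how large the array is.
print(alphabet_data[-2:, -2:])
```

This is handy when you don't know the array's size in advance.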
Once we know how to select subsets of arrays, we can use this knowledge to *change* the items in these selections. For example, in a list we can assign a value found at a specific index to be something else. In this example we use indexing to reference the first item in `example_list`, and then change it.
```
example_list = ["avocado", "tomato", "onion"]
print("before assignment, the example_list is:", example_list)
example_list[0] = "banana"
print("after assignment, the example_list is:", example_list)
```
Similarly, we can change items in a Numpy array using indexing:
```
print("before assignment, alphabet_data is:")
print(alphabet_data)
alphabet_data[0, 0] = "Z"
print("after assignment, alphabet_data is:")
print(alphabet_data)
```
**Exercise 4:** Replace the item in the third row and second column of `alphabet_data` with `"V"`.
```
alphabet_data[2, 1] = "V"
print(alphabet_data)
```
**Exercise 5:** Replace the entire second row of `alphabet_data` with a new row: `["X", "Y", "X"]`
```
alphabet_data[1] = ["X", "Y", "X"]
print(alphabet_data)
```
## Lab 2 Part 2: Importing data from a file into a Numpy array
Let's apply these principles of Numpy arrays to some real biological data. In the `Lab_02` data folder there are three data files:
- `./data/Lab_02/Adelie_Penguin.csv`
- `./data/Lab_02/Chinstrap_Penguin.csv`
- `./data/Lab_02/Gentoo_Penguin.csv`
These files contain data collected by Dr. Kristen Gorman and the Palmer Station, Antarctica LTER - a member of the Long Term Ecological Research Network.

*Credit:* Artwork by @allison_horst
The data is formatted as a large table, with one file for each species of penguin. The files contain 50 rows, each representing one individual, and four columns, which represent culmen length and depth, flipper length, and body mass. For example, `Adelie_Penguin.csv` corresponds to the column and row labels shown below:
| Penguin ID | Culmen Length (mm) | Culmen Depth (mm) | Flipper Length (mm) | Body Mass (g) |
| --- | ----------- | ----------- | ----------- | ----------- |
| Individual 1 | 39.1 | 18.7 | 181 |3750 |
| Individual 2 | 39.5 |17.4 |186| 3800|
| ... | ... | ... | ... | ... |
| Individual 50 | 39.6 |17.7| 186|3500|

*Credit:* Artwork by @allison_horst
We'll use the Numpy command `loadtxt` to read in our first file, `Adelie_Penguin.csv`. We will save this data in a Numpy array called `adelie_data`.
```
# Load our file data from "filename" into a variable called adelie_data
filename = "./data/Lab_02/Adelie_Penguin.csv"
adelie_data = np.loadtxt(fname=filename, delimiter=",")
```
The data description above tells us that `adelie_data` should contain 50 rows and 4 columns, so let's use the Numpy `shape` command to double check that's the case. Numpy `shape` will print two numbers in the format `(number of rows, number of columns)`.
**Exercise 6:** Right now, the code below prints a warning if we don't have the expected 50 rows. Edit the code so that the warning is also printed if the number of columns is not 4.
```
# Print the shape of the loaded dataset
data_shape = adelie_data.shape
print("Adelie data shape is:", data_shape)
# Print a warning if the data shape is not what we expect
if (data_shape[0] != 50) or (data_shape[1] != 4):
print("Unexpected data shape!")
else:
print("Correct data shape of 50 rows, 4 columns!")
```
It looks like our `adelie_data` Numpy array is the shape we expect. Now let's look at a subset of data to see what kind of data we're working with.
**Exercise 7:** Use Python array indexing to print the first three rows, first four columns of `adelie_data`. Check to make sure that the printed data matches what is given to you in the data description above.
```
print(adelie_data[0:3, 0:4])
```
## Lab 2 Part 3: Examining and plotting data in a Numpy array
### Calculate interesting characteristics of a Numpy array
Now that we have loaded our Adelie penguin data into a Numpy array, there are several interesting commands we can use to find out more about our data. First let's look at the culmen length column (the first column in the dataset). Using array indexing, we will put this entire first column into a new variable called `culmen_lengths`. When indexing between a range of values, leaving the upper range bound blank causes Python to include everything until the end of the array:
```
# put the culmen lengths for this dataset in a variable called culmen_lengths
culmen_lengths = adelie_data[0:, 0]
print("The culmen lengths in this dataset are:")
print(culmen_lengths)
```
Numpy contains many useful functions for finding out different characteristics of a dataset. The code below shows some examples:
```
# Print some interesting characteristics of the data
print("Mean:", np.mean(culmen_lengths))
print("Standard deviation:", np.std(culmen_lengths))
print("Median:", np.median(culmen_lengths))
print("Minimum:", np.min(culmen_lengths))
print("Maximum:", np.max(culmen_lengths))
```
We can use our `culmen_lengths` variable and the useful characteristics we found above to make a histogram of our data. In the below code we've created a histogram, and added a line that shows where the mean of the dataset is.
**Exercise 8:** Edit the code block below to plot the maximum and minimum data values as two additional vertical lines.
```
# Create a histogram with an opacity of 50% (alpha=0.5)
plt.hist(culmen_lengths, alpha=0.5)
# Add a vertical line to the plot showing the mean.
plt.axvline(np.mean(culmen_lengths), label="mean")
# Your code here!
plt.axvline(np.max(culmen_lengths), label="maximum")
plt.axvline(np.min(culmen_lengths), label="minimum")
# Don't forget to label the axes!
plt.xlabel("Culmen length (mm)")
plt.ylabel("Frequency (number of penguins)")
# Add a legend to the plot
plt.legend()
# Show the plot in our jupyter notebook
plt.show()
```
#### Review of for loops using indexing
Last week in lab we went over an example of a `for` loop that uses indices to loop through a list. Let's pretend that in this Adelie penguin dataset, we have marked in our lab notebook that the first, 12th, 26th, and 44th penguins we sampled seemed suspiciously small. Let's use a `for` loop to print out the culmen length of each of these penguins.
```
# First let's make a list of all of the indexes where we can find suspicious penguins.
interesting_indices = [0, 11, 25, 43]
# Now we'll look at every single index in the list of suspicious indices.
for index in interesting_indices:
# Because we are looking at indices, we need to use indexing to find the
# value in culmen_lengths that we're interested in.
culmen = culmen_lengths[index]
print("The culmen length at index", index, ":", culmen)
```
**Exercise 9:** Instead of using a `for` loop to look at just the indices in `interesting_indices`, use a `for` loop to look at *all* indices in the `culmen_lengths` dataset. Remember that you can use the command `len(culmen_lengths)` to find out how many values are in the data. Print the culmen length and index if the culmen length is larger than the mean culmen length.
```
all_indices = np.arange(0, len(culmen_lengths))
for index in all_indices:
culmen = culmen_lengths[index]
if culmen > culmen_lengths.mean():
print("The culmen length at index", index, ":", culmen)
```
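As an aside, Numpy can also express this kind of "find everything above the mean" question without an explicit loop, using boolean masking. A small sketch with made-up culmen lengths (not the real Adelie data):

```python
import numpy as np

culmen_lengths = np.array([39.1, 39.5, 40.3, 36.7, 39.3])  # made-up sample values

# Boolean masking: compare the whole array at once, then use np.where
# to recover the indices where the comparison is True.
above_mean = culmen_lengths > culmen_lengths.mean()
indices = np.where(above_mean)[0]
for index in indices:
    print("The culmen length at index", index, ":", culmen_lengths[index])
```

The loop version from the exercise and this masked version print the same values; masking just lets Numpy do the comparison in one step.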
So far we've only looked at the culmen lengths in this dataset. Let's use a `for` loop to also look at the culmen depths, flipper lengths, and body mass. Remember that the columns in this dataset stand for:
```
culmen_lengths = adelie_data[0:, 0]
culmen_depths = adelie_data[0:, 1]
flipper_lengths = adelie_data[0:, 2]
body_mass = adelie_data[0:, 3]
morphologies = [culmen_lengths, culmen_depths, flipper_lengths, body_mass]
for morphology in morphologies:
# Create a histogram
plt.hist(morphology)
# Show the plot in our jupyter notebook
plt.show()
```
Notice that the code in the above box is doing the same action for every column in the array. So instead of re-assigning every column in the array to a new variable called `culmen_lengths`, `body_mass`, etc, let's use array indexing to loop through the data instead. Notice that the only thing changing when looking at different columns is the *column index*.
**Exercise 10:** Change the following code so that it creates a histogram for all columns in the Adelie penguin data, like in the previous block. However, instead of making a new variable for each column called `culmen_lengths`, `body_mass`, etc, use indexing instead.
```
column_indices = [0, 1, 2, 3]
for index in column_indices:
data_subset = adelie_data[0:, index]
# Create a histogram
plt.hist(data_subset)
# Show the plot in our jupyter notebook
plt.show()
```
### Putting it all together: Using a for loop to load, analyze, and plot multiple data files
We've now found some interesting things about Adelie penguins. But our original dataset included three different species - Adelie penguins, Chinstrap penguins, and Gentoo penguins. We probably want to run these exact same analyses for each species, and this is a great opportunity to use a `for` loop to make our lives easier. Because all three of our datasets are exactly the same shape and format, we can reuse all of our code that we've already written.
```
# First, make a list of each filename that we're interested in analyzing
filenames = ["./data/Lab_02/Adelie_Penguin.csv",
"./data/Lab_02/Chinstrap_Penguin.csv",
"./data/Lab_02/Gentoo_Penguin.csv"]
```
Now that we have a list of filenames to analyze, we can turn this into a `for` loop that loads each file and then runs analyses on the file. The code block below has started the process - for each filename, we load in the file data as a variable called `penguin_data`. Note that we're not actually doing anything with the data yet, so we don't see many interesting things being printed.
```
for filename in filenames:
# Load our file data from "filename" into a variable called penguin_data
penguin_data = np.loadtxt(fname=filename, delimiter=",")
print("NOW ANALYZING DATASET: ", filename)
```
The data loading doesn't seem to have caused any errors, so we'll continue to copy and paste the code we've already written to work with the data. Note that everything we've copied and pasted is code we've already written - but now we're asking Python to run this same code on *all* the data files, instead of just Adelie penguins. For the purposes of this exercise, we'll analyze just the culmen lengths of the dataset, so that we end up with a manageable number of output plots.
```
for filename in filenames:
# Load our file data from "filename" into a variable called penguin_data
penguin_data = np.loadtxt(fname=filename, delimiter=",")
print("----")
print("NOW ANALYZING DATASET: ", filename)
# Print the shape of the loaded dataset
data_shape = penguin_data.shape
print("Penguin data shape is:", data_shape)
# Print a warning if the data shape is not what we expect
if (data_shape[0] != 50) or (data_shape[1] != 4):
print("Unexpected data shape!")
else:
print("Correct data shape of 50 rows, 4 columns!")
# put the culmen lengths for this dataset in a variable called culmen_lengths
culmen_lengths = penguin_data[0:, 0]
print("The culmen lengths in this dataset are:")
print(culmen_lengths)
```
**Exercise 11:** Similarly, add in the code you've already written to print the interesting characteristics of the data (mean, median, max, etc.) and create a histogram for each data file that includes the mean and median. Run your final for loop. Which penguin species has the longest mean culmen length? Smallest minimum culmen length?
```
for filename in filenames:
# Load our file data from "filename" into a variable called penguin_data
penguin_data = np.loadtxt(fname=filename, delimiter=",")
print("----")
print("NOW ANALYZING DATASET: ", filename)
# Print the shape of the loaded dataset
data_shape = penguin_data.shape
print("Penguin data shape is:", data_shape)
# Print a warning if the data shape is not what we expect
if (data_shape[0] != 50) or (data_shape[1] != 4):
print("Unexpected data shape!")
else:
print("Correct data shape of 50 rows, 4 columns!")
# put the culmen lengths for this dataset in a variable called culmen_lengths
culmen_lengths = penguin_data[0:, 0]
print("The culmen lengths in this dataset are:")
print(culmen_lengths)
# Print some interesting characteristics of the data
print("Mean:", np.mean(culmen_lengths))
print("Standard deviation:", np.std(culmen_lengths))
print("Median:", np.median(culmen_lengths))
print("Minimum:", np.min(culmen_lengths))
print("Maximum:", np.max(culmen_lengths))
# Create a histogram with an opacity of 50% (alpha=0.5)
plt.hist(culmen_lengths, alpha=0.5)
# Add a vertical line to the plot showing the mean.
plt.axvline(np.mean(culmen_lengths), label="mean")
plt.axvline(np.median(culmen_lengths), label="median")
# Your code here!
# Don't forget to label the axes!
plt.xlabel("Culmen length (mm)")
plt.ylabel("Frequency (number of penguins)")
# Add a legend to the plot
plt.legend()
# Show the plot in our jupyter notebook
plt.show()
```
## Lab 2 Bonus exercise
**Bonus Exercise 1:** Now take the above code and edit it so that we analyze all of the 4 penguin morphology variables, for all of the species. Label the plot axis and title with the appropriate information (penguin species for title, and the morphological variable on the x axis).
```
for filename in filenames:
# Load our file data from "filename" into a variable called penguin_data
penguin_data = np.loadtxt(fname=filename, delimiter=",")
print("----")
print("NOW ANALYZING DATASET: ", filename)
# Print the shape of the loaded dataset
data_shape = penguin_data.shape
print("Penguin data shape is:", data_shape)
# Print a warning if the data shape is not what we expect
if (data_shape[0] != 50) or (data_shape[1] != 4):
print("Unexpected data shape!")
else:
print("Correct data shape of 50 rows, 4 columns!")
# the number of columns is the same as the number of items in the first row
num_columns = len(penguin_data[0])
axis_labels = ["Culmen length (mm)",
"Culmen depth (mm)",
"Flipper length (mm)",
"Body mass (g)"]
# THIS IS CALLED A NESTED FOR LOOP!
# A NESTED FOR LOOP HAS A FOR LOOP INSIDE OF ANOTHER FOR LOOP.
for index in np.arange(0, num_columns):
data_subset = penguin_data[0:, index]
# Print some interesting characteristics of the data
print("Mean:", np.mean(data_subset))
print("Standard deviation:", np.std(data_subset))
print("Median:", np.median(data_subset))
print("Minimum:", np.min(data_subset))
print("Maximum:", np.max(data_subset))
# Create a histogram with an opacity of 50% (alpha=0.5)
plt.hist(data_subset, alpha=0.5)
# Add a vertical line to the plot showing the mean.
plt.axvline(np.mean(data_subset), label="mean")
plt.axvline(np.median(data_subset), label="median")
# Your code here!
# Don't forget to label the axes!
plt.xlabel(axis_labels[index])
plt.ylabel("Frequency (number of penguins)")
# str.strip removes *characters*, not a suffix, so use split/replace instead
species_name = filename.split('/')[-1].replace('.csv', '')
plt.title(species_name)
# Add a legend to the plot
plt.legend()
# Show the plot in our jupyter notebook
plt.show()
```
# Solution for Ex 5 of the ibmqc 2021
This solution is from the point of view from someone who has just started to explore Quantum Computing, but is familiar with the physics behind it and has some experience with programming and optimization problems.
I did not create this solution entirely by myself: I adapted the provided tutorial solution for the H-H molecule.
The goal was to create an ansatz with the lowest possible number of CNOT gates.
```
from qiskit_nature.drivers import PySCFDriver
molecule = 'Li 0.0 0.0 0.0; H 0.0 0.0 1.5474'
driver = PySCFDriver(atom=molecule)
qmolecule = driver.run()
```
There were many hints on how to reduce the problem to a manageable size, in particular by reducing the number of qubits, resulting in smaller circuits with fewer operations. A first hint was to freeze the core, since Li has the configuration of 2 electrons in the 1s orbital and 1 in the 2s orbital (which forms bonds with other atoms). The electrons in orbitals nearer the core can therefore be frozen.
Li : 1s, 2s, and px, py, pz orbitals --> 5 orbitals
H : 1s --> 1 orbital
```
from qiskit_nature.transformers import FreezeCoreTransformer
trafo = FreezeCoreTransformer(freeze_core=True)
q_molecule_reduced = trafo.transform(qmolecule)
```
There are 5 properties to consider to better understand the task. Note that there was already a transformation. Before this transformation the properties would have been (in this order: 4, 6, 12, 12, 1.0259348796432726)
```
n_el = q_molecule_reduced.num_alpha + q_molecule_reduced.num_beta
print("Number of electrons in the system: ", n_el)
n_mo = q_molecule_reduced.num_molecular_orbitals
print("Number of molecular orbitals: ", n_mo)
n_so = 2 * q_molecule_reduced.num_molecular_orbitals
print("Number of spin orbitals: ", n_so)
n_q = 2 * q_molecule_reduced.num_molecular_orbitals
print("Number of qubits one would need with Jordan-Wigner mapping:", n_q)
e_nn = q_molecule_reduced.nuclear_repulsion_energy
print("Nuclear repulsion energy", e_nn)
```
#### Electronic structure problem
One can then create an `ElectronicStructureProblem` that can produce the list of fermionic operators before mapping them to qubits (Pauli strings).
In the following cell one could also use a quantum molecule transformer to remove orbitals which do not contribute to the ground state - for example px and py in this problem. Why they correspond to orbitals 3 and 4 I'm not really sure; maybe one has to look through the documentation a bit more closely than I did, but since there were only very limited combinations, I tried them at random and kept an eye on the ground state energy.
```
from qiskit_nature.problems.second_quantization.electronic import ElectronicStructureProblem
problem= ElectronicStructureProblem(driver, q_molecule_transformers=[FreezeCoreTransformer(freeze_core=True,
remove_orbitals=[3,4])])
second_q_ops = problem.second_q_ops()
# Hamiltonian
main_op = second_q_ops[0]
```
### QubitConverter
This allows you to define the mapping that you will use in the simulation. For the LiH problem the parity mapper is chosen, because it allows the "TwoQubitReduction" setting, which further simplifies the problem.
If I understand the paper correctly - referenced as [Bravyi *et al*, 2017](https://arxiv.org/abs/1701.08213v1) - symmetries from particle number operators, such as eq. 52 of the paper, are used to reduce the number of qubits. The only challenging thing was to understand what [1] means when you pass it as the z2symmetry_reduction parameter.
```
from qiskit_nature.mappers.second_quantization import ParityMapper, BravyiKitaevMapper, JordanWignerMapper
from qiskit_nature.converters.second_quantization.qubit_converter import QubitConverter
# Setup the mapper and qubit converter
mapper = ParityMapper()
converter = QubitConverter(mapper=mapper, two_qubit_reduction=True, z2symmetry_reduction=[1])
# The fermionic operators are mapped to qubit operators
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
qubit_op = converter.convert(main_op, num_particles=num_particles)
```
#### Initial state
One has to choose an initial state for the system, which is reduced to 4 qubits from the original 12. You may choose the initialisation yourself or stick to the one proposed by the Hartree-Fock function (i.e. $|\Psi_{HF} \rangle = |1100 \rangle$). For the exercise it is recommended to stick to the function!
```
from qiskit_nature.circuit.library import HartreeFock
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
init_state = HartreeFock(num_spin_orbitals, num_particles, converter)
init_state.draw('mpl')
```
### Ansatz
Playing with the ansatz was really fun. I found the TwoLocal ansatz very interesting for gaining some knowledge and insight into how to compose an ansatz for the problem. Later on I tried to create my own ansatz and converged to one quite similar to a TwoLocal ansatz.
It's obvious you have to entangle the qubits somehow with CNOTs. But to give the optimization algorithm a chance to find a minimum, you have to make sure to change the states of the qubits before and afterwards independently of one another.
```
# Imports for the ansatz options used below
from qiskit.circuit.library import TwoLocal
from qiskit_nature.circuit.library import UCCSD, PUCCD, SUCCD
# Choose the ansatz
ansatz_type = "Custom"
# Parameters for q-UCC ansatz
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
# Put arguments for twolocal
if ansatz_type == "TwoLocal":
# Single qubit rotations that are placed on all qubits with independent parameters
rotation_blocks = ['ry']
# Entangling gates
entanglement_blocks = ['cx']
# How the qubits are entangled
entanglement = 'linear'
# Repetitions of rotation_blocks + entanglement_blocks with independent parameters
repetitions = 1
# Skip the final rotation_blocks layer
skip_final_rotation_layer = False
ansatz = TwoLocal(qubit_op.num_qubits, rotation_blocks, entanglement_blocks, reps=repetitions,
entanglement=entanglement, skip_final_rotation_layer=skip_final_rotation_layer)
# Add the initial state
ansatz.compose(init_state, front=True, inplace=True)
elif ansatz_type == "UCCSD":
ansatz = UCCSD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "PUCCD":
ansatz = PUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "SUCCD":
ansatz = SUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "Custom":
# Example of how to write your own circuit
from qiskit.circuit import Parameter, QuantumCircuit, QuantumRegister
from qiskit.circuit.random import random_circuit
# Define the variational parameter
param_names_theta = ['a', 'b', 'c', 'd']
thetas = [Parameter(param_names_theta[i]) for i in range(len(param_names_theta))]
param_names_eta = ['e', 'f', 'g', 'h']
etas = [Parameter(param_names_eta[i]) for i in range(len(param_names_eta))]
n = qubit_op.num_qubits
# Make an empty quantum circuit
qc = QuantumCircuit(qubit_op.num_qubits)
qubit_label = 0
# Place a CNOT ladder
for i in range(n):
qc.ry(thetas[i], i)
for i in range(n-1):
qc.cx(i, i+1)
for i in range(n):
qc.ry(etas[n-i-1], i)
# Visual separator
ansatz = qc
ansatz.compose(init_state, front=True, inplace=True)
ansatz.draw('mpl')
```
### Backend
This is where you specify the simulator or device where you want to run your algorithm.
We will focus on the `statevector_simulator` in this challenge.
```
from qiskit import Aer
backend = Aer.get_backend('statevector_simulator')
```
### Optimizer
The optimizer guides the evolution of the parameters of the ansatz so it is very important to investigate the energy convergence as it would define the number of measurements that have to be performed on the QPU.
A clever choice might reduce drastically the number of needed energy evaluations.
Some of the optimizers do not seem to reach the minimum, so the choice of optimizer and its parameters is important. I did not reach the minimum with any optimizer other than SLSQP.
I found a very nice and short explanation of how the optimizer works on stackoverflow:
> The algorithm described by Dieter Kraft is a quasi-Newton method (using BFGS) applied to a lagrange function consisting of loss function and equality- and inequality-constraints. Because at each iteration some of the inequality constraints are active, some not, the inactive inequalities are omitted for the next iteration. An equality constrained problem is solved at each step using the active subset of constraints in the lagrange function.

(https://stackoverflow.com/questions/59808494/how-does-the-slsqp-optimization-algorithm-work)
```
from qiskit.algorithms.optimizers import SLSQP
optimizer = SLSQP(maxiter=4000)
```
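As a purely classical illustration of what the optimizer does, scipy's `minimize` exposes the same Kraft SLSQP algorithm. This toy constrained problem (the objective, constraint, and starting point are made up for the example, and have nothing to do with the VQE energy landscape) shows the active-set behaviour described in the quote above:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize the squared distance to the point (1, 2)
# subject to the inequality constraint x + y <= 2.
objective = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2
constraint = {"type": "ineq", "fun": lambda v: 2.0 - v[0] - v[1]}  # fun(v) >= 0

result = minimize(objective, x0=[0.0, 0.0], method="SLSQP", constraints=[constraint])
print(result.x)  # the constrained optimum lies on the line x + y = 2
```

The unconstrained minimum (1, 2) violates the constraint, so SLSQP keeps the inequality active and lands on the boundary point (0.5, 1.5).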
### Exact eigensolver
In the exercise we got the following exact diagonalizer function to compare the results.
```
from qiskit_nature.algorithms.ground_state_solvers.minimum_eigensolver_factories import NumPyMinimumEigensolverFactory
from qiskit_nature.algorithms.ground_state_solvers import GroundStateEigensolver
import numpy as np
def exact_diagonalizer(problem, converter):
solver = NumPyMinimumEigensolverFactory()
calc = GroundStateEigensolver(converter, solver)
result = calc.solve(problem)
return result
result_exact = exact_diagonalizer(problem, converter)
exact_energy = np.real(result_exact.eigenenergies[0])
print("Exact electronic energy", exact_energy)
```
### VQE and initial parameters for the ansatz
Now we can import the VQE class and run the algorithm. This code was also provided. Everything I have done so far is plugged in.
```
from qiskit.algorithms import VQE
from IPython.display import display, clear_output
def callback(eval_count, parameters, mean, std):
# Overwrites the same line when printing
display("Evaluation: {}, Energy: {}, Std: {}".format(eval_count, mean, std))
clear_output(wait=True)
counts.append(eval_count)
values.append(mean)
params.append(parameters)
deviation.append(std)
counts = []
values = []
params = []
deviation = []
# Set initial parameters of the ansatz
# We choose a fixed small displacement
# So all participants start from similar starting point
try:
initial_point = [0.01] * len(ansatz.ordered_parameters)
except:
initial_point = [0.01] * ansatz.num_parameters
algorithm = VQE(ansatz,
optimizer=optimizer,
quantum_instance=backend,
callback=callback,
initial_point=initial_point)
result = algorithm.compute_minimum_eigenvalue(qubit_op)
print(result)
# Store results in a dictionary
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import Unroller
# Unroller transpile your circuit into CNOTs and U gates
pass_ = Unroller(['u', 'cx'])
pm = PassManager(pass_)
ansatz_tp = pm.run(ansatz)
cnots = ansatz_tp.count_ops()['cx']
score = cnots
accuracy_threshold = 4.0 # in mHa
energy = result.optimal_value
if ansatz_type == "TwoLocal":
result_dict = {
'optimizer': optimizer.__class__.__name__,
'mapping': converter.mapper.__class__.__name__,
'ansatz': ansatz.__class__.__name__,
'rotation blocks': rotation_blocks,
'entanglement_blocks': entanglement_blocks,
'entanglement': entanglement,
'repetitions': repetitions,
'skip_final_rotation_layer': skip_final_rotation_layer,
'energy (Ha)': energy,
'error (mHa)': (energy-exact_energy)*1000,
'pass': (energy-exact_energy)*1000 <= accuracy_threshold,
'# of parameters': len(result.optimal_point),
'final parameters': result.optimal_point,
'# of evaluations': result.optimizer_evals,
'optimizer time': result.optimizer_time,
'# of qubits': int(qubit_op.num_qubits),
'# of CNOTs': cnots,
'score': score}
else:
result_dict = {
'optimizer': optimizer.__class__.__name__,
'mapping': converter.mapper.__class__.__name__,
'ansatz': ansatz.__class__.__name__,
'rotation blocks': None,
'entanglement_blocks': None,
'entanglement': None,
'repetitions': None,
'skip_final_rotation_layer': None,
'energy (Ha)': energy,
'error (mHa)': (energy-exact_energy)*1000,
'pass': (energy-exact_energy)*1000 <= accuracy_threshold,
'# of parameters': len(result.optimal_point),
'final parameters': result.optimal_point,
'# of evaluations': result.optimizer_evals,
'optimizer time': result.optimizer_time,
'# of qubits': int(qubit_op.num_qubits),
'# of CNOTs': cnots,
'score': score}
# Plot the results
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
ax.set_xlabel('Iterations')
ax.set_ylabel('Energy')
ax.grid()
fig.text(0.7, 0.75, f'Energy: {result.optimal_value:.3f}\nScore: {score:.0f}')
plt.title(f"{result_dict['optimizer']}-{result_dict['mapping']}\n{result_dict['ansatz']}")
ax.plot(counts, values)
ax.axhline(exact_energy, linestyle='--')
fig_title = f"\
{result_dict['optimizer']}-\
{result_dict['mapping']}-\
{result_dict['ansatz']}-\
Energy({result_dict['energy (Ha)']:.3f})-\
Score({result_dict['score']:.0f})\
.png"
fig.savefig(fig_title, dpi=300)
# Display and save the data
import pandas as pd
import os.path
filename = 'results_h2.csv'
if os.path.isfile(filename):
result_df = pd.read_csv(filename)
result_df = result_df.append([result_dict])
else:
result_df = pd.DataFrame.from_dict([result_dict])
result_df.to_csv(filename)
result_df[['optimizer','ansatz', '# of qubits', '# of parameters','rotation blocks', 'entanglement_blocks',
'entanglement', 'repetitions', 'error (mHa)', 'pass', 'score']]
# Check your answer using following code
from qc_grader import grade_ex5
freeze_core = True # change to True if you froze core electrons
grade_ex5(ansatz,qubit_op,result,freeze_core)
```
Thank you very much for this awesome challenge. Without the outline, explanations, examples and hints I would have never been able to solve this in a reasonable time.
I will definitely save this notebook along with the other exercises as a blueprint for the future.
```
import numpy as np
import theano
import theano.tensor as T
import lasagne
import os
#thanks @keskarnitish
```
# Generate names
* Struggling to find a name for a variable? Let's see how you'd come up with a name for your son or daughter. Surely no human has expertise in what makes a good child name, so let us train a NN instead.
* The dataset contains ~8k human names from different cultures [in Latin transcript]
* Objective (toy problem): learn a generative model over names.
```
start_token = " "
with open("names") as f:
names = f.read()[:-1].split('\n')
names = [start_token+name for name in names]
print 'n samples = ',len(names)
for x in names[::1000]:
print x
```
# Text processing
```
#all unique characters go here
token_set = set()
for name in names:
for letter in name:
token_set.add(letter)
tokens = list(token_set)
print('n_tokens =', len(tokens))
#!token_to_id = <dictionary of symbol -> its identifier (index in tokens list)>
token_to_id = {t:i for i,t in enumerate(tokens) }
#!id_to_token = < dictionary of symbol identifier -> symbol itself>
id_to_token = {i:t for i,t in enumerate(tokens)}
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(list(map(len, names)), bins=25);
# truncate names longer than MAX_LEN characters.
MAX_LEN = min([60,max(list(map(len,names)))])
#ADJUST IF YOU ARE UP TO SOMETHING SERIOUS
```
### Cast everything from symbols into identifiers
```
names_ix = list(map(lambda name: list(map(token_to_id.get,name)),names))
#crop long names and pad short ones
for i in range(len(names_ix)):
names_ix[i] = names_ix[i][:MAX_LEN] #crop too long
if len(names_ix[i]) < MAX_LEN:
names_ix[i] += [token_to_id[" "]]*(MAX_LEN - len(names_ix[i])) #pad too short
assert len(set(map(len,names_ix)))==1
names_ix = np.array(names_ix)
```
# Input variables
```
input_sequence = T.matrix('token sequence','int32')
target_values = T.matrix('actual next token','int32')
```
# Build NN
You will be building a model that takes a token sequence and predicts the next token
* input sequence
* one-hot / embedding
* recurrent layer(s)
* output layer(s) that predict output probabilities
```
from lasagne.layers import InputLayer,DenseLayer,EmbeddingLayer
from lasagne.layers import RecurrentLayer,LSTMLayer,GRULayer,CustomRecurrentLayer
l_in = lasagne.layers.InputLayer(shape=(None, None),input_var=input_sequence)
#!<Your neural network>
l_emb = lasagne.layers.EmbeddingLayer(l_in, len(tokens), 40)
l_rnn = lasagne.layers.RecurrentLayer(l_emb,40,nonlinearity=lasagne.nonlinearities.tanh)
#flatten batch and time to be compatible with feedforward layers (will un-flatten later)
l_rnn_flat = lasagne.layers.reshape(l_rnn, (-1,l_rnn.output_shape[-1]))
l_out = lasagne.layers.DenseLayer(l_rnn_flat,len(tokens), nonlinearity=lasagne.nonlinearities.softmax)
# Model weights
weights = lasagne.layers.get_all_params(l_out,trainable=True)
print(weights)
network_output = lasagne.layers.get_output(l_out)
#If you use dropout do not forget to create deterministic version for evaluation
predicted_probabilities_flat = network_output
correct_answers_flat = target_values.ravel()
loss = T.mean(lasagne.objectives.categorical_crossentropy(predicted_probabilities_flat, correct_answers_flat))
#<Loss function - a simple categorical crossentropy will do, maybe add some regularizer>
updates = lasagne.updates.adam(loss,weights)
```
# Compiling it
```
#training
train = theano.function([input_sequence, target_values], loss, updates=updates, allow_input_downcast=True)
#computing loss without training
compute_cost = theano.function([input_sequence, target_values], loss, allow_input_downcast=True)
```
# generation
Simple:
* get the initial context (seed),
* predict next token probabilities,
* sample next token,
* add it to the context
* repeat from step 2
You'll get more detailed info on how this works in the homework section.
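The sample-and-append loop above can be sketched framework-free with numpy. This is illustrative only: `fake_probs` is a stand-in for the compiled `probs` function, and the token inventory here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = list(" abcdefghijklmnopqrstuvwxyz")  # hypothetical token inventory

def fake_probs(context_ids):
    """Stand-in for the compiled `probs` function: returns a
    distribution over the next token given the context so far."""
    logits = rng.normal(size=len(tokens))
    e = np.exp(logits - logits.max())
    return e / e.sum()

def sample_name(n_chars=10, t=1.0):
    context = [0]  # start token id
    for _ in range(n_chars):
        p = fake_probs(context)
        p = p**t / np.sum(p**t)            # temperature-adjust the distribution
        ix = rng.choice(len(tokens), p=p)  # sample the next token
        context.append(ix)                 # add it to the context, repeat
    return ''.join(tokens[i] for i in context[1:])

name = sample_name()
```

Swapping `fake_probs` for the trained model's next-token distribution gives exactly the generation procedure described above.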
```
#compile the function that computes probabilities for next token given previous text.
#reshape back into original shape
next_word_probas = network_output.reshape((input_sequence.shape[0],input_sequence.shape[1],len(tokens)))
#predictions for next tokens (after sequence end)
last_word_probas = next_word_probas[:,-1]
probs = theano.function([input_sequence],last_word_probas,allow_input_downcast=True)
def generate_sample(seed_phrase=None,N=MAX_LEN,t=1,n_snippets=1):
'''
Generates text from the model given an optional seed phrase.
parameters:
seed_phrase - the initial context; defaults to the start token
N - number of characters to generate
t - exponent applied to the predicted probabilities (t>1 sharpens the distribution, t<1 flattens it)
n_snippets - number of independent samples to generate
'''
if seed_phrase is None:
seed_phrase=start_token
if len(seed_phrase) > MAX_LEN:
seed_phrase = seed_phrase[-MAX_LEN:]
assert type(seed_phrase) is str
snippets = []
for _ in range(n_snippets):
sample_ix = []
x = list(map(lambda c: token_to_id.get(c,0), seed_phrase))
x = np.array([x])
for i in range(N):
# sample the next character from the temperature-adjusted distribution
p = probs(x).ravel()
p = p**t / np.sum(p**t)
ix = np.random.choice(np.arange(len(tokens)),p=p)
sample_ix.append(ix)
x = np.hstack((x[:, -MAX_LEN+1:],[[ix]])) # keep at most MAX_LEN-1 columns of context
random_snippet = seed_phrase + ''.join(id_to_token[ix] for ix in sample_ix)
snippets.append(random_snippet)
print("----\n %s \n----" % '; '.join(snippets))
```
# Model training
Here you can tweak parameters or insert your generation function
__Once something word-like starts generating, try increasing MAX_LEN__
```
def sample_batch(data, batch_size):
rows = data[np.random.randint(0,len(data),size=batch_size)]
return rows[:,:-1],rows[:,1:]
print("Training ...")
#total N iterations
n_epochs=100
# how many minibatches are there in the epoch
batches_per_epoch = 500
#how many training sequences are processed in a single function call
batch_size=10
for epoch in range(n_epochs):
print("Generated names")
generate_sample(n_snippets=10)
avg_cost = 0;
for _ in range(batches_per_epoch):
x,y = sample_batch(names_ix,batch_size)
avg_cost += train(x, y)
print("Epoch {} average loss = {}".format(epoch, avg_cost / batches_per_epoch))
generate_sample(n_snippets=10,t=1.5)
generate_sample(seed_phrase=" Putin",n_snippets=100)
```
# And now,
* try lstm/gru
* try several layers
* try mtg cards
* try your own dataset of any kind
| github_jupyter |
# Training Models
```
import numpy as np
import pandas as pd
import os
import sys
import matplotlib as mpl
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings(action="ignore", message="^internal gelsd")
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "training_linear_models"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
```
## Linear regression using the Normal Equation
```
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([0, 2, 0, 15])
save_fig("generated_data_plot")
plt.show()
X_b = np.c_[np.ones((100, 1)), X]
theta = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
theta
X_new = np.array([[0], [2]])
X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance
y_predict = X_new_b.dot(theta)
y_predict
plt.plot(X_new, y_predict, 'r-')
plt.plot(X, y, 'b.')
plt.axis([0, 2, 0, 15])
plt.show()
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
lin_reg.intercept_, lin_reg.coef_
lin_reg.predict(X_new)
```
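As a sanity check, the closed-form θ above can be cross-validated against `np.linalg.lstsq`, which solves the same least-squares problem more stably. This sketch reuses the same synthetic data-generation recipe (not the exact arrays above):

```python
import numpy as np

rng = np.random.default_rng(42)
X = 2 * rng.random((100, 1))
y = 4 + 3 * X + rng.standard_normal((100, 1))
X_b = np.c_[np.ones((100, 1)), X]  # add x0 = 1 to each instance

# Normal equation: theta = (X^T X)^{-1} X^T y
theta_ne = np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y
# Same problem via a numerically stabler least-squares solver
theta_ls, *_ = np.linalg.lstsq(X_b, y, rcond=None)
```

Both routes recover the same intercept and slope up to floating-point error; `lstsq` (or `np.linalg.pinv`) is preferable when `X_b.T @ X_b` is ill-conditioned.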
### Linear regression using batch gradient descent
```
eta = 0.1
n_iterations = 100
m = 100
theta = np.random.randn(2, 1)
for i in range(n_iterations):
gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)
theta = theta - eta*gradients
theta
```
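With a suitable learning rate, batch gradient descent converges to the same θ as the normal equation. A self-contained check, using the same data recipe and assuming η = 0.1 and 1000 iterations:

```python
import numpy as np

rng = np.random.default_rng(42)
X = 2 * rng.random((100, 1))
y = 4 + 3 * X + rng.standard_normal((100, 1))
X_b = np.c_[np.ones((100, 1)), X]

# closed-form solution for comparison
theta_ne = np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y

eta, m = 0.1, 100
theta = rng.standard_normal((2, 1))  # random initialization
for _ in range(1000):
    gradients = 2 / m * X_b.T @ (X_b @ theta - y)  # gradient of the MSE
    theta = theta - eta * gradients
```

After enough iterations the iterate matches the closed-form solution to several decimal places; with a learning rate above 2/L (L the largest Hessian eigenvalue) it would diverge instead.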
### Stochastic Gradient Descent
```
n_epochs = 50
t0, t1 = 5, 50 # learning schedule hyperparameters
def learning_schedule(t):
return t0 / (t + t1)
theta = np.random.randn(2,1) # random initialization
for epoch in range(n_epochs):
for i in range(m):
random_index = np.random.randint(m)
xi = X_b[random_index:random_index+1]
yi = y[random_index:random_index+1]
gradients = 2*xi.T.dot(xi.dot(theta) - yi)
eta = learning_schedule(epoch*m+i)
theta = theta - eta * gradients
theta
from sklearn.linear_model import SGDRegressor
sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1)
sgd_reg.fit(X, y.ravel())
sgd_reg.intercept_, sgd_reg.coef_
```
### Mini-batch Gradient Descent
```
theta_path_mgd = []
n_iterations = 50
minibatch_size = 20
np.random.seed(42)
theta = np.random.randn(2,1) # random initialization
t0, t1 = 200, 1000
def learning_schedule(t):
return t0 / (t + t1)
t = 0
for epoch in range(n_iterations):
shuffled_indices = np.random.permutation(m)
X_b_shuffled = X_b[shuffled_indices]
y_shuffled = y[shuffled_indices]
for i in range(0, m, minibatch_size):
t += 1
xi = X_b_shuffled[i:i+minibatch_size]
yi = y_shuffled[i:i+minibatch_size]
gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi)
eta = learning_schedule(t)
theta = theta - eta * gradients
theta_path_mgd.append(theta)
theta
```
### Polynomial Regression
```
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1)
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([-3, 3, 0, 10])
save_fig("quadratic_data_plot")
plt.show()
from sklearn.preprocessing import PolynomialFeatures
poly_features = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly_features.fit_transform(X)
X[0]
X_poly[0]
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
lin_reg.intercept_, lin_reg.coef_
X_new=np.linspace(-3, 3, 100).reshape(100, 1)
X_new_poly = poly_features.transform(X_new)
y_new = lin_reg.predict(X_new_poly)
plt.plot(X, y, "b.")
plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.legend(loc="upper left", fontsize=14)
plt.axis([-3, 3, 0, 10])
save_fig("quadratic_predictions_plot")
plt.show()
```
### Learning Curves
```
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
def plot_learning_curves(model, X, y):
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)
train_errors, val_errors = [], []
for m in range(1, len(X_train)):
model.fit(X_train[:m], y_train[:m])
y_train_pred = model.predict(X_train[:m])
y_val_pred = model.predict(X_val)
train_errors.append(mean_squared_error(y_train[:m], y_train_pred))
val_errors.append(mean_squared_error(y_val, y_val_pred))
plt.plot(np.sqrt(train_errors), 'r-+', linewidth=2, label='train')
plt.plot(np.sqrt(val_errors), 'b-', linewidth=3, label='val')
plt.show()
lin_reg = LinearRegression()
plot_learning_curves(lin_reg, X, y)
```
### Ridge Regression
```
from sklearn.linear_model import Ridge
ridge_reg = Ridge(alpha=1, solver='cholesky')
ridge_reg.fit(X, y)
ridge_reg.predict([[1.5]])
```
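`Ridge` with `solver='cholesky'` solves a regularized normal equation in closed form. A numpy sketch of the idea (illustrative, not scikit-learn's exact code path; like scikit-learn it leaves the intercept unpenalized, here by centering the data first):

```python
import numpy as np

rng = np.random.default_rng(0)
X = 6 * rng.random((100, 1)) - 3
y = (0.5 * X**2 + X + 2 + rng.standard_normal((100, 1))).ravel()

def ridge_closed_form(X, y, alpha):
    # center so that the intercept is not penalized
    X_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - X_mean, y - y_mean
    # w = (Xc^T Xc + alpha I)^{-1} Xc^T yc
    w = np.linalg.solve(Xc.T @ Xc + alpha * np.eye(X.shape[1]), Xc.T @ yc)
    b = y_mean - X_mean @ w
    return w, b

w_ols, _ = ridge_closed_form(X, y, alpha=0.0)
w_ridge, _ = ridge_closed_form(X, y, alpha=10.0)
```

Increasing `alpha` shrinks the weight vector toward zero relative to the unregularized solution, which is the point of the penalty.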
### Lasso Regression
```
from sklearn.linear_model import Lasso
lasso_reg = Lasso(alpha=0.1)
lasso_reg.fit(X, y)
lasso_reg.predict([[1.5]])
```
### Elastic Net
```
from sklearn.linear_model import ElasticNet
elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5)
elastic_net.fit(X, y)
elastic_net.predict([[1.5]])
```
### Logistic Regression
```
from sklearn import datasets
iris = datasets.load_iris()
list(iris.keys())
X = iris['data'][:, 3:]
y = (iris['target'] == 2).astype(int)
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression()
log_reg.fit(X, y)
X_new = np.linspace(0, 3, 1000).reshape(-1, 1)
y_proba = log_reg.predict_proba(X_new)
y_proba
plt.plot(X_new, y_proba[:,1], 'g-', label='Iris-Virginica')
plt.plot(X_new, y_proba[:,0], 'b--', label='Not Iris-Virginica')
plt.xlabel("Petal width", fontsize=18)
plt.ylabel("Probability", fontsize=18)
plt.legend()
plt.show()
```
### Softmax Regression
```
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10)
softmax_reg.fit(X, y)
softmax_reg.predict([[3, 4]])
softmax_reg.predict_proba([[3, 4]])
```
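Under the hood, `multi_class="multinomial"` turns per-class scores $s_k = \mathbf{w}_k \cdot \mathbf{x} + b_k$ into probabilities with the softmax function. A minimal numpy version of that final step (the score matrix here is made up for illustration):

```python
import numpy as np

def softmax(scores):
    # subtract the row-wise max for numerical stability
    z = scores - scores.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

scores = np.array([[2.0, 1.0, 0.1],
                   [0.0, 0.0, 0.0]])
probas = softmax(scores)  # each row sums to 1
```

Each output row is a probability distribution over classes, and adding a constant to every score in a row leaves the probabilities unchanged.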
| github_jupyter |
## Convolutional Neural Network for MNIST image classification
```
import numpy as np
# from sklearn.utils.extmath import softmax
from matplotlib import pyplot as plt
import re
from tqdm import trange
from sklearn import metrics
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from mpl_toolkits.axes_grid1 import make_axes_locatable
import pandas as pd
from sklearn.datasets import fetch_openml
import matplotlib.gridspec as gridspec
from sklearn.decomposition import PCA
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = ['Times New Roman'] + plt.rcParams['font.serif']
```
## Alternating Least Squares for Matrix Factorization
```
def coding_within_radius(X, W, H0,
r=None,
a1=0, #L1 regularizer
a2=0, #L2 regularizer
sub_iter=[5],
stopping_grad_ratio=0.0001,
nonnegativity=True,
subsample_ratio=1):
"""
Find \hat{H} = argmin_H ( || X - WH||_{F}^2 + a1*|H| + a2*|H|_{F}^{2} ) within radius r from H0
Use row-wise projected gradient descent
"""
H1 = H0.copy()
i = 0
dist = 1
idx = np.arange(X.shape[1])
if subsample_ratio>1: # subsample columns of X and solve reduced problem (like in SGD)
idx = np.random.randint(X.shape[1], size=X.shape[1]//subsample_ratio)
A = W.T @ W ## Needed for gradient computation
B = W.T @ X[:,idx]
while (i < np.random.choice(sub_iter)):
if_continue = np.ones(H0.shape[0]) # indexed by rows of H
H1_old = H1.copy()
for k in np.arange(H0.shape[0]):
grad = (np.dot(A[k, :], H1[:,idx]) - B[k,:] + a1 * np.ones(len(idx))) + a2 * 2 * H1[k, idx] # gradient of a2*|H|_F^2 is 2*a2*H
grad_norm = np.linalg.norm(grad, 2)
step_size = (1 / (((i + 1) ** (1)) * (A[k, k] + 1)))
if r is not None: # restrict the update so it stays within radius r of H0
d = step_size * grad_norm
step_size = (r / max(r, d)) * step_size
if step_size * grad_norm / np.linalg.norm(H1_old, 2) > stopping_grad_ratio:
H1[k, idx] = H1[k, idx] - step_size * grad
if nonnegativity:
H1[k,idx] = np.maximum(H1[k,idx], np.zeros(shape=(len(idx),))) # nonnegativity constraint
i = i + 1
return H1
def ALS(X,
n_components = 10, # number of columns in the dictionary matrix W
n_iter=100,
a0 = 0, # L1 regularizer for H
a1 = 0, # L1 regularizer for W
a12 = 0, # L2 regularizer for W
H_nonnegativity=True,
W_nonnegativity=True,
compute_recons_error=False,
subsample_ratio = 10):
'''
Given data matrix X, use alternating least squares to find factors W,H so that
|| X - WH ||_{F}^2 + a0*|H|_{1} + a1*|W|_{1} + a12 * |W|_{F}^{2}
is minimized (at least locally)
'''
d, n = X.shape
r = n_components
#normalization = np.linalg.norm(X.reshape(-1,1),1)/np.product(X.shape) # avg entry of X
#print('!!! avg entry of X', normalization)
#X = X/normalization
# Initialize factors
W = np.random.rand(d,r)
H = np.random.rand(r,n)
# H = H * np.linalg.norm(X) / np.linalg.norm(H)
for i in trange(n_iter):
H = coding_within_radius(X, W.copy(), H.copy(), a1=a0, nonnegativity=H_nonnegativity, subsample_ratio=subsample_ratio)
W = coding_within_radius(X.T, H.copy().T, W.copy().T, a1=a1, a2=a12, nonnegativity=W_nonnegativity, subsample_ratio=subsample_ratio).T
if compute_recons_error and (i % 10 == 0) :
print('iteration %i, reconstruction error %f' % (i, np.linalg.norm(X-W@H)**2))
return W, H
# Simulated Data and its factorization
W0 = np.random.rand(10,5)
H0 = np.random.rand(5,20)
X0 = W0 @ H0
W, H = ALS(X=X0,
n_components=5,
n_iter=100,
a0 = 0, # L1 regularizer for H
a1 = 1, # L1 regularizer for W
a12 = 0, # L2 regularizer for W
H_nonnegativity=True,
W_nonnegativity=True,
compute_recons_error=True,
subsample_ratio=1)
print('reconstruction error (relative) = %f' % (np.linalg.norm(X0-W@H)**2/np.linalg.norm(X0)**2))
print('Dictionary error (relative) = %f' % (np.linalg.norm(W0 - W)**2/np.linalg.norm(W0)**2))
print('Code error (relative) = %f' % (np.linalg.norm(H0-H)**2/np.linalg.norm(H0)**2))
```
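For the unconstrained, unregularized case, each half-step of ALS has an exact least-squares solution, so the projected-gradient inner loop above can be swapped for `np.linalg.lstsq`. A minimal sketch of that variant:

```python
import numpy as np

def als_lstsq(X, r, n_iter=50, seed=0):
    """Unconstrained ALS: alternate exact least-squares solves for H and W."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    W = rng.random((d, r))
    for _ in range(n_iter):
        H, *_ = np.linalg.lstsq(W, X, rcond=None)       # argmin_H ||X - WH||_F
        Wt, *_ = np.linalg.lstsq(H.T, X.T, rcond=None)  # argmin_W ||X - WH||_F
        W = Wt.T
    return W, H

rng = np.random.default_rng(1)
X0 = rng.random((10, 5)) @ rng.random((5, 20))  # exactly rank 5
W, H = als_lstsq(X0, r=5)
rel_err = np.linalg.norm(X0 - W @ H)**2 / np.linalg.norm(X0)**2
```

On an exactly rank-5 matrix this variant reaches essentially zero reconstruction error; the projected-gradient version is needed once nonnegativity or sparsity constraints come into play.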
# Learn dictionary of MNIST images
```
def display_dictionary(W, save_name=None, score=None, grid_shape=None):
k = int(np.sqrt(W.shape[0]))
rows = int(np.sqrt(W.shape[1]))
cols = int(np.sqrt(W.shape[1]))
if grid_shape is not None:
rows = grid_shape[0]
cols = grid_shape[1]
figsize0=(6, 6)
if (score is None) and (grid_shape is not None):
figsize0=(cols, rows)
if (score is not None) and (grid_shape is not None):
figsize0=(cols, rows+0.2)
fig, axs = plt.subplots(nrows=rows, ncols=cols, figsize=figsize0,
subplot_kw={'xticks': [], 'yticks': []})
for ax, i in zip(axs.flat, range(100)):
if score is not None:
idx = np.argsort(score)
idx = np.flip(idx)
ax.imshow(W.T[idx[i]].reshape(k, k), cmap="viridis", interpolation='nearest')
ax.set_xlabel('%1.2f' % score[i], fontsize=13) # get the largest first
ax.xaxis.set_label_coords(0.5, -0.05)
else:
ax.imshow(W.T[i].reshape(k, k), cmap="viridis", interpolation='nearest')
if score is not None:
ax.set_xlabel('%1.2f' % score[i], fontsize=13) # get the largest first
ax.xaxis.set_label_coords(0.5, -0.05)
plt.tight_layout()
# plt.suptitle('Dictionary learned from patches of size %d' % k, fontsize=16)
plt.subplots_adjust(0.08, 0.02, 0.92, 0.85, 0.08, 0.23)
if save_name is not None:
plt.savefig( save_name, bbox_inches='tight')
plt.show()
def display_dictionary_list(W_list, label_list, save_name=None, score_list=None):
# Make plot
# outer gridspec
nrows=1
ncols=len(W_list)
fig = plt.figure(figsize=(16, 5), constrained_layout=False)
outer_grid = gridspec.GridSpec(nrows=nrows, ncols=ncols, wspace=0.1, hspace=0.05)
# make nested gridspecs
for i in range(1 * ncols):
k = int(np.sqrt(W_list[i].shape[0]))
sub_rows = int(np.sqrt(W_list[i].shape[1]))
sub_cols = int(np.sqrt(W_list[i].shape[1]))
idx = np.arange(W_list[i].shape[1])
if score_list is not None:
idx = np.argsort(score_list[i])
idx = np.flip(idx)
inner_grid = outer_grid[i].subgridspec(sub_rows, sub_cols, wspace=0.05, hspace=0.05)
for j in range(sub_rows*sub_cols):
a = j // sub_cols
b = j % sub_cols #sub-lattice indices
ax = fig.add_subplot(inner_grid[a, b])
ax.imshow(W_list[i].T[idx[j]].reshape(k, k), cmap="viridis", interpolation='nearest')
ax.set_xticks([])
if (b>0):
ax.set_yticks([])
if (a < sub_rows-1):
ax.set_xticks([])
if (a == 0) and (b==2):
#ax.set_title("W_nonnegativity$=$ %s \n H_nonnegativity$=$ %s"
# % (str(nonnegativity_list[i][0]), str(nonnegativity_list[i][1])), y=1.2, fontsize=14)
ax.set_title(label_list[i], y=1.2, fontsize=14)
if (score_list is not None) and (score_list[i] is not None):
ax.set_xlabel('%1.2f' % score_list[i][idx[j]], fontsize=13) # get the largest first
ax.xaxis.set_label_coords(0.5, -0.07)
# plt.suptitle('Dictionary learned from patches of size %d' % k, fontsize=16)
plt.subplots_adjust(0.08, 0.02, 0.92, 0.85, 0.08, 0.23)
plt.savefig(save_name, bbox_inches='tight')
# Load data from https://www.openml.org/d/554
X, y = fetch_openml('mnist_784', version=1, return_X_y=True)
# X = X.values ### Uncomment this line if you are having type errors in plotting. It is loading as a pandas dataframe, but our indexing is for numpy array.
X = X / 255.
print('X.shape', X.shape)
print('y.shape', y.shape)
'''
Each row of X is a vectorization of an image of 28 x 28 = 784 pixels.
The corresponding row of y holds the true class label from {0,1, .. , 9}.
'''
# Unconstrained matrix factorization and dictionary images
idx = np.random.choice(np.arange(X.shape[0]), 100) # sample 100 images (rows of X)
X0 = X[idx,:].T
W, H = ALS(X=X0,
n_components=25,
n_iter=50,
subsample_ratio=1,
W_nonnegativity=False,
H_nonnegativity=False,
compute_recons_error=True)
display_dictionary(W)
# PCA and dictionary images (principal components)
pca = PCA(n_components=24)
pca.fit(X)
W = pca.components_.T
s = pca.singular_values_
display_dictionary(W, score=s, save_name = "MNIST_PCA_ex1.pdf", grid_shape=[1,24])
idx = np.random.choice(np.arange(X.shape[0]), 100)
X0 = X[idx,:].T
n_iter = 10
W_list = []
nonnegativitiy = [[False, False], [False, True], [True, True]]
for i in np.arange(3):
W, H = ALS(X=X0,
n_components=25,
n_iter=n_iter,
subsample_ratio=1,
W_nonnegativity=nonnegativitiy[i][0],
H_nonnegativity=nonnegativitiy[i][1],
compute_recons_error=True)
W_list.append(W)
label_list = []
for i in np.arange(len(nonnegativitiy)):
label = "W_nonnegativity = %s" % nonnegativitiy[i][0] + "\n" + "H_nonnegativity = %s" % nonnegativitiy[i][1]
label_list.append(label)
display_dictionary_list(W_list=W_list, label_list = label_list, save_name = "MNIST_NMF_ex1.pdf")
# MF and PCA on MNIST
idx = np.random.choice(np.arange(X.shape[0]), 100)
X0 = X[idx,:].T
n_iter = 100
W_list = []
H_list = []
nonnegativitiy = ['PCA', [False, False], [False, True], [True, True]]
#PCA
pca = PCA(n_components=25)
pca.fit(X)
W = pca.components_.T
s = pca.singular_values_
W_list.append(W)
H_list.append(s)
# MF
for i in np.arange(1,len(nonnegativitiy)):
print('!!! nonnegativitiy[i]', nonnegativitiy[i])
W, H = ALS(X=X0,
n_components=25,
n_iter=n_iter,
subsample_ratio=1,
W_nonnegativity=nonnegativitiy[i][0],
H_nonnegativity=nonnegativitiy[i][1],
compute_recons_error=True)
W_list.append(W)
H_list.append(H)
label_list = []
for i in np.arange(len(nonnegativitiy)):
if i == 0:
label = nonnegativitiy[0]
else:
label = "W_nonnegativity = %s" % nonnegativitiy[i][0] + "\n" + "H_nonnegativity = %s" % nonnegativitiy[i][1]
label_list.append(label)
score_list = []
for i in np.arange(len(nonnegativitiy)):
if i == 0:
score_list.append(H_list[0])
else:
H = H_list[i]
score = np.sum(abs(H), axis=1) # sum of the coefficients of each column of W = overall usage
score_list.append(score)
display_dictionary_list(W_list=W_list,
label_list = label_list,
score_list = score_list,
save_name = "MNIST_PCA_NMF_ex1.pdf")
def random_padding(img, thickness=1):
# img = a x b image
[a,b] = img.shape
Y = np.zeros(shape=[a+thickness, b+thickness])
r_loc = np.random.choice(np.arange(thickness+1))
c_loc = np.random.choice(np.arange(thickness+1))
Y[r_loc:r_loc+a, c_loc:c_loc+b] = img
return Y
def list2onehot(y, list_classes):
"""
y = list of class labels of length n
output = n x k array, i th row = one-hot encoding of y[i] (e.g., [0,0,1,0,0])
"""
Y = np.zeros(shape = [len(y), len(list_classes)], dtype=int)
for i in np.arange(Y.shape[0]):
for j in np.arange(len(list_classes)):
if y[i] == list_classes[j]:
Y[i,j] = 1
return Y
def onehot2list(y, list_classes=None):
"""
y = n x k array, i th row = one-hot encoding of y[i] (e.g., [0,0,1,0,0])
output = list of class labels of length n
"""
if list_classes is None:
list_classes = np.arange(y.shape[1])
y_list = []
for i in np.arange(y.shape[0]):
idx = np.where(y[i,:]==1)
idx = idx[0][0]
y_list.append(list_classes[idx])
return y_list
def sample_multiclass_MNIST_padding(list_digits=['0','1', '2'], full_MNIST=[X,y], padding_thickness=10):
# get train and test set from MNIST of given digits
# e.g., list_digits = ['0', '1', '2']
# pad each 28 x 28 image with zeros so that it has now "padding_thickness" more rows and columns
# The original image is superimposed at a uniformly chosen location
if full_MNIST is not None:
X, y = full_MNIST
else:
X, y = fetch_openml('mnist_784', version=1, return_X_y=True)
X = X / 255.
Y = list2onehot(y.tolist(), list_digits)
idx = [i for i in np.arange(len(y)) if y[i] in list_digits] # list of indices where the label y is in list_digits
X01 = X[idx,:]
y01 = Y[idx,:]
X_train = []
X_test = []
y_test = [] # list of one-hot encodings (indicator vectors) of each label
y_train = [] # list of one-hot encodings (indicator vectors) of each label
for i in trange(X01.shape[0]):
# for each example i, put it in the train set with probability 0.8 and in the test set otherwise
U = np.random.rand() # Uniform([0,1]) variable
img_padded = random_padding(X01[i,:].reshape(28,28), thickness=padding_thickness)
img_padded_vec = img_padded.reshape(1,-1)
if U<0.8:
X_train.append(img_padded_vec[0,:].copy())
y_train.append(y01[i,:].copy())
else:
X_test.append(img_padded_vec[0,:].copy())
y_test.append(y01[i,:].copy())
X_train = np.asarray(X_train)
X_test = np.asarray(X_test)
y_train = np.asarray(y_train)
y_test = np.asarray(y_test)
return X_train, X_test, y_train, y_test
# Simple MNIST multiclass classification experiments
list_digits=['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
X_train, X_test, y_train, y_test = sample_multiclass_MNIST_padding(list_digits=list_digits,
full_MNIST=[X,y],
padding_thickness=20)
idx = np.random.choice(np.arange(X_train.shape[0]), 100)
X0 = X_train[idx,:].T
n_iter = 100
W_list = []
nonnegativitiy = [[False, False], [False, True], [True, True]]
for i in np.arange(3):
W, H = ALS(X=X0,
n_components=25,
n_iter=n_iter,
subsample_ratio=1,
W_nonnegativity=nonnegativitiy[i][0],
H_nonnegativity=nonnegativitiy[i][1],
compute_recons_error=True)
W_list.append(W)
label_list = []
for i in np.arange(len(nonnegativitiy)):
label = "W_nonnegativity = %s" % nonnegativitiy[i][0] + "\n" + "H_nonnegativity = %s" % nonnegativitiy[i][1]
label_list.append(label)
display_dictionary_list(W_list=W_list, label_list = label_list, save_name = "MNIST_NMF_ex2.pdf")
# MF and PCA on MNIST + padding
list_digits=['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
X_train, X_test, y_train, y_test = sample_multiclass_MNIST_padding(list_digits=list_digits,
full_MNIST=[X,y],
padding_thickness=20)
idx = np.random.choice(np.arange(X_train.shape[0]), 100)
X0 = X_train[idx,:].T
n_iter = 100
W_list = []
H_list = []
nonnegativitiy = ['PCA', [False, False], [False, True], [True, True]]
#PCA
pca = PCA(n_components=25)
pca.fit(X)
W = pca.components_.T
s = pca.singular_values_
W_list.append(W)
H_list.append(s)
# MF
for i in np.arange(1,len(nonnegativitiy)):
print('!!! nonnegativitiy[i]', nonnegativitiy[i])
W, H = ALS(X=X0,
n_components=25,
n_iter=n_iter,
subsample_ratio=1,
W_nonnegativity=nonnegativitiy[i][0],
H_nonnegativity=nonnegativitiy[i][1],
compute_recons_error=True)
W_list.append(W)
H_list.append(H)
label_list = []
for i in np.arange(len(nonnegativitiy)):
if i == 0:
label = nonnegativitiy[0]
else:
label = "W_nonnegativity = %s" % nonnegativitiy[i][0] + "\n" + "H_nonnegativity = %s" % nonnegativitiy[i][1]
label_list.append(label)
score_list = []
for i in np.arange(len(nonnegativitiy)):
if i == 0:
score_list.append(H_list[0])
else:
H = H_list[i]
score = np.sum(abs(H), axis=1) # sum of the coefficients of each column of W = overall usage
score_list.append(score)
display_dictionary_list(W_list=W_list,
label_list = label_list,
score_list = score_list,
save_name = "MNIST_PCA_NMF_ex2.pdf")
```
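The `list2onehot` / `onehot2list` helpers above are inverses of each other on clean inputs. A vectorized standalone version of the pair (equivalent behavior, sketched here with numpy broadcasting rather than the double loop):

```python
import numpy as np

def list2onehot(y, list_classes):
    """n labels -> n x k one-hot array (row i encodes y[i])."""
    y = np.asarray(y)
    classes = np.asarray(list_classes)
    return (y[:, None] == classes[None, :]).astype(int)

def onehot2list(Y, list_classes=None):
    """n x k one-hot array -> list of n labels."""
    if list_classes is None:
        list_classes = np.arange(Y.shape[1])
    return [list_classes[i] for i in Y.argmax(axis=1)]

labels = ['2', '0', '1', '2']
Y = list2onehot(labels, ['0', '1', '2'])
```

Each row of `Y` has exactly one nonzero entry, and decoding with the same class list recovers the original labels.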
## Dictionary Learning for Face datasets
```
from sklearn.datasets import fetch_olivetti_faces
faces, _ = fetch_olivetti_faces(return_X_y=True, shuffle=True,
random_state=0)
n_samples, n_features = faces.shape
# global centering
#faces_centered = faces - faces.mean(axis=0)
# local centering
#faces_centered -= faces_centered.mean(axis=1).reshape(n_samples, -1)
print("Dataset consists of %d faces" % n_samples)
print("faces_centered.shape", faces.shape)
# Plot some sample images
ncols = 10
nrows = 4
fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=[15, 6.5])
for j in np.arange(ncols):
for i in np.arange(nrows):
ax[i,j].imshow(faces[i*ncols + j].reshape(64,64), cmap="gray")
#if i == 0:
# ax[i,j].set_title("label$=$%s" % y[idx_subsampled[i]], fontsize=14)
# ax[i].legend()
plt.subplots_adjust(wspace=0.3, hspace=-0.1)
plt.savefig('Faces_ex1.pdf', bbox_inches='tight')
# PCA and dictionary images (principal components)
X0 = faces.T
pca = PCA(n_components=24)
pca.fit(X0.T)
W = pca.components_.T
s = pca.singular_values_
display_dictionary(W, score=s, save_name = "Faces_PCA_ex1.pdf", grid_shape=[2,12])
# Variable nonnegativity constraints
X0 = faces.T
#X0 /= 100 * np.linalg.norm(X0)
n_iter = 200
W_list = []
nonnegativitiy = [[False, False], [False, True], [True, True]]
for i in np.arange(3):
W, H = ALS(X=X0,
n_components=25,
n_iter=n_iter,
subsample_ratio=1,
W_nonnegativity=nonnegativitiy[i][0],
H_nonnegativity=nonnegativitiy[i][1],
compute_recons_error=True)
W_list.append(W)
label_list = []
for i in np.arange(len(nonnegativitiy)):
label = "W_nonnegativity = %s" % nonnegativitiy[i][0] + "\n" + "H_nonnegativity = %s" % nonnegativitiy[i][1]
label_list.append(label)
display_dictionary_list(W_list=W_list, label_list = label_list, save_name = "Face_NMF_ex1.pdf")
n_iter = 200
W_list = []
H_list = []
X0 = faces.T
#X0 /= 100 * np.linalg.norm(X0)
nonnegativitiy = ['PCA', [False, False], [False, True], [True, True]]
#PCA
pca = PCA(n_components=25)
pca.fit(X0.T)
W = pca.components_.T
s = pca.singular_values_
W_list.append(W)
H_list.append(s)
# MF
for i in np.arange(1,len(nonnegativitiy)):
print('!!! nonnegativitiy[i]', nonnegativitiy[i])
W, H = ALS(X=X0,
n_components=25,
n_iter=n_iter,
subsample_ratio=1,
W_nonnegativity=nonnegativitiy[i][0],
H_nonnegativity=nonnegativitiy[i][1],
compute_recons_error=True)
W_list.append(W)
H_list.append(H)
label_list = []
for i in np.arange(len(nonnegativitiy)):
if i == 0:
label = nonnegativitiy[0]
else:
label = "W_nonnegativity = %s" % nonnegativitiy[i][0] + "\n" + "H_nonnegativity = %s" % nonnegativitiy[i][1]
label_list.append(label)
score_list = []
for i in np.arange(len(nonnegativitiy)):
if i == 0:
score_list.append(H_list[0])
else:
H = H_list[i]
score = np.sum(abs(H), axis=1) # sum of the coefficients of each column of W = overall usage
score_list.append(score)
display_dictionary_list(W_list=W_list,
label_list = label_list,
score_list = score_list,
save_name = "Faces_PCA_NMF_ex1.pdf")
# Variable regularizer for W
X0 = faces.T
print('X0.shape', X0.shape)
n_iter = 200
W_list = []
W_sparsity = [[0, 0], [0.5, 0], [0, 3]]
for i in np.arange(3):
W, H = ALS(X=X0,
n_components=25,
n_iter=n_iter,
subsample_ratio=1,
a1 = W_sparsity[i][0], # L1 regularizer for W
a12 = W_sparsity[i][1], # L2 regularizer for W
W_nonnegativity=True,
H_nonnegativity=True,
compute_recons_error=True)
W_list.append(W)
label_list = []
for i in np.arange(len(W_sparsity)):
label = "W_$L_{1}$-regularizer = %.2f" % W_sparsity[i][0] + "\n" + "W_$L_{2}$-regularizer = %.2f" % W_sparsity[i][1]
label_list.append(label)
display_dictionary_list(W_list=W_list, label_list = label_list, save_name = "Face_NMF_ex2.pdf")
n_iter = 200
W_list = []
H_list = []
X0 = faces.T
#X0 /= 100 * np.linalg.norm(X0)
W_sparsity = ['PCA', [0, 0], [0.5, 0], [0, 3]]
#PCA
pca = PCA(n_components=25)
pca.fit(X0.T)
W = pca.components_.T
s = pca.singular_values_
W_list.append(W)
H_list.append(s)
# MF
for i in np.arange(1,len(W_sparsity)):
print('!!! W_sparsity[i]', W_sparsity[i])
W, H = ALS(X=X0,
n_components=25,
n_iter=n_iter,
subsample_ratio=1,
a1 = W_sparsity[i][0], # L1 regularizer for W
a12 = W_sparsity[i][1], # L2 regularizer for W
W_nonnegativity=True,
H_nonnegativity=True,
compute_recons_error=True)
W_list.append(W)
H_list.append(H)
label_list = []
for i in np.arange(len(W_sparsity)):
if i == 0:
label = W_sparsity[0]
else:
label = "W_$L_{1}$-regularizer = %.2f" % W_sparsity[i][0] + "\n" + "W_$L_{2}$-regularizer = %.2f" % W_sparsity[i][1]
label_list.append(label)
score_list = []
for i in np.arange(len(W_sparsity)):
if i == 0:
score_list.append(H_list[0])
else:
H = H_list[i]
score = np.sum(abs(H), axis=1) # sum of the coefficients of each column of W = overall usage
score_list.append(score)
display_dictionary_list(W_list=W_list,
label_list = label_list,
score_list = score_list,
save_name = "Faces_PCA_NMF_ex2.pdf")
```
## Topic modeling for 20Newsgroups dataset
```
from nltk.corpus import stopwords
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from wordcloud import WordCloud, STOPWORDS
from scipy.stats import entropy
import pandas as pd
def list2onehot(y, list_classes):
"""
y = list of class labels of length n
output = n x k array, i th row = one-hot encoding of y[i] (e.g., [0,0,1,0,0])
"""
Y = np.zeros(shape = [len(y), len(list_classes)], dtype=int)
for i in np.arange(Y.shape[0]):
for j in np.arange(len(list_classes)):
if y[i] == list_classes[j]:
Y[i,j] = 1
return Y
def onehot2list(y, list_classes=None):
"""
y = n x k array, i th row = one-hot encoding of y[i] (e.g., [0,0,1,0,0])
output = list of class labels of length n
"""
if list_classes is None:
list_classes = np.arange(y.shape[1])
y_list = []
for i in np.arange(y.shape[0]):
idx = np.where(y[i,:]==1)
idx = idx[0][0]
y_list.append(list_classes[idx])
return y_list
remove = ('headers','footers','quotes')
stopwords_list = stopwords.words('english')
stopwords_list.extend(['thanks','edu','also','would','one','could','please','really','many','anyone','good','right','get','even','want','must','something','well','much','still','said','stay','away','first','looking','things','try','take','look','make','may','include','thing','like','two','or','etc','phone','oh','email'])
categories = [
'comp.graphics',
'comp.sys.mac.hardware',
'misc.forsale',
'rec.motorcycles',
'rec.sport.baseball',
'sci.med',
'sci.space',
'talk.politics.guns',
'talk.politics.mideast',
'talk.religion.misc'
]
newsgroups_train = fetch_20newsgroups(subset='train', categories=categories, remove=remove)
newsgroups_labels = newsgroups_train.target
# remove numbers
data_cleaned = [re.sub(r'\d+','', file) for file in newsgroups_train.data]
# print 10 random documents
#for i in np.arange(10):
# idx = np.random.choice(len(data_cleaned))
# print('>>>> %i th doc \n\n %s \n\n' % (idx, data_cleaned[idx]))
print('len(newsgroups_labels)', len(newsgroups_labels))
print('newsgroups_labels', newsgroups_labels)
print('data_cleaned[1]', data_cleaned[1])
print('newsgroups_labels[1]', newsgroups_labels[1])
# vectorizer = TfidfVectorizer(stop_words=stopwords_list)
vectorizer_BOW = CountVectorizer(stop_words=stopwords_list)
vectors_BOW = vectorizer_BOW.fit_transform(data_cleaned).transpose() # words x docs # in the form of sparse matrix
vectorizer = TfidfVectorizer(stop_words=stopwords_list)
vectors = vectorizer.fit_transform(data_cleaned).transpose() # words x docs # in the form of sparse matrix
idx_to_word = np.array(vectorizer.get_feature_names()) # list of words that corresponds to feature coordinates
print('>>>> vectors.shape', vectors.shape)
i = 4257
print('newsgroups_labels[i]', newsgroups_labels[i])
print('>>>> data_cleaned[i]', data_cleaned[i])
# print('>>>> vectors[:,i] \n', vectors[:,i])
a = vectors[:,i].todense()
I = np.where(a>0)
count_list = []
word_list = []
for j in np.arange(len(I[0])):
# idx = np.random.choice(I[0])
idx = I[0][j]
# print('>>>> %i th coordinate <===> %s, count %i' % (idx, idx_to_word[idx], vectors[idx, i]))
count_list.append([idx, vectors_BOW[idx, i], vectors[idx, i]])
word_list.append(idx_to_word[idx])
d = pd.DataFrame(data=np.asarray(count_list).T, columns=word_list).T
d.columns = ['Coordinate', 'Bag-of-words', 'tf-idf']
cols = ['Coordinate', 'Bag-of-words']
d[cols] = d[cols].applymap(np.int64)
print(d)
def sample_multiclass_20NEWS(list_classes=[0, 1], full_data=None, vectorizer = 'tf-idf', verbose=True):
# get train and test set from 20NewsGroups of given categories
# vectorizer \in ['tf-idf', 'bag-of-words']
# documents are loaded up from the following 10 categories
categories = [
'comp.graphics',
'comp.sys.mac.hardware',
'misc.forsale',
'rec.motorcycles',
'rec.sport.baseball',
'sci.med',
'sci.space',
'talk.politics.guns',
'talk.politics.mideast',
'talk.religion.misc'
]
data_dict = {}
data_dict.update({'categories': categories})
if full_data is None:
remove = ('headers','footers','quotes')
stopwords_list = stopwords.words('english')
stopwords_list.extend(['thanks','edu','also','would','one','could','please','really','many','anyone','good','right','get','even','want','must','something','well','much','still','said','stay','away','first','looking','things','try','take','look','make','may','include','thing','like','two','or','etc','phone','oh','email'])
newsgroups_train_full = fetch_20newsgroups(subset='train', categories=categories, remove=remove) # raw documents
newsgroups_train = [re.sub(r'\d+','', file) for file in newsgroups_train_full.data] # remove numbers (we are only interested in words)
y = newsgroups_train_full.target # document class labels
Y = list2onehot(y.tolist(), list_classes)
if vectorizer == 'tf-idf': # match the spelling documented above and used by callers
vectorizer = TfidfVectorizer(stop_words=stopwords_list)
else:
vectorizer = CountVectorizer(stop_words=stopwords_list)
X = vectorizer.fit_transform(newsgroups_train) # words x docs # in the form of sparse matrix
X = np.asarray(X.todense())
print('!! X.shape', X.shape)
idx2word = np.array(vectorizer.get_feature_names()) # list of words that corresponds to feature coordinates
data_dict.update({'newsgroups_train': newsgroups_train})
data_dict.update({'newsgroups_labels': y})
data_dict.update({'feature_matrix': X})
data_dict.update({'idx2word': idx2word})
else:
X, y = full_data
Y = list2onehot(list(y), list_classes) # one-hot labels are needed below in both branches
idx = [i for i in np.arange(len(y)) if y[i] in list_classes] # list of indices where the label y is in list_classes
X01 = X[idx,:]
Y01 = Y[idx,:]
X_train = []
X_test = []
y_test = [] # list of one-hot encodings (indicator vectors) of each label
y_train = [] # list of one-hot encodings (indicator vectors) of each label
for i in np.arange(X01.shape[0]):
# for each example i, put it into the train set with probability 0.8 and into the test set otherwise
U = np.random.rand() # Uniform([0,1]) variable
if U<0.8:
X_train.append(X01[i,:])
y_train.append(Y01[i,:].copy())
else:
X_test.append(X01[i,:])
y_test.append(Y01[i,:].copy())
X_train = np.asarray(X_train)
X_test = np.asarray(X_test)
y_train = np.asarray(y_train)
y_test = np.asarray(y_test)
data_dict.update({'X_train': X_train})
data_dict.update({'X_test': X_test})
data_dict.update({'y_train': y_train})
data_dict.update({'y_test': y_test})
return X_train, X_test, y_train, y_test, data_dict
# test
X_train, X_test, y_train, y_test, data_dict = sample_multiclass_20NEWS(list_classes=[0, 1, 2,3,4,5,6,7,8,9],
vectorizer = 'tf-idf',
full_data=None)
print('X_train.shape', X_train.shape)
print('X_test.shape', X_test.shape)
print('y_train.shape', y_train.shape)
print('y_test.shape', y_test.shape)
print('y_test', y_test)
#print('y_list', onehot2list(y_test))
idx2word = data_dict.get('idx2word')
categories = data_dict.get('categories')
import random
def grey_color_func(word, font_size, position, orientation, random_state=None,
**kwargs):
return "hsl(0, 0%%, %d%%)" % random.randint(60, 100)
def plot_topic_wordcloud(W, idx2word, num_keywords_in_topic=5, save_name=None, grid_shape = [2,5]):
# plot each topic (column of W) as a wordcloud of its top keywords
# W = (p x r) (words x topics)
# idx2word = list of words that corresponds to feature coordinates
fig, axs = plt.subplots(nrows=grid_shape[0], ncols=grid_shape[1], figsize=(15, 6), subplot_kw={'xticks': [], 'yticks': []})
for ax, i in zip(axs.flat, np.arange(W.shape[1])):
# dist = W[:,i]/np.sum(W[:,i])
### Take top k keywords in each topic (top k coordinates in each column of W)
### to generate text data corresponding to the ith topic, and then generate its wordcloud
list_words = []
idx = np.argsort(W[:,i])
idx = np.flip(idx)
for j in range(num_keywords_in_topic):
list_words.append(idx2word[idx[j]])
Y = " ".join(list_words)
#stopwords = STOPWORDS
#stopwords.update(["’", "“", "”", "000", "000 000", "https", "co", "19", "2019", "coronavirus",
# "virus", "corona", "covid", "ncov", "covid19", "amp"])
wc = WordCloud(background_color="black",
relative_scaling=0,
width=400,
height=400).generate(Y)
ax.imshow(wc.recolor(color_func=grey_color_func, random_state=3),
interpolation="bilinear")
# ax.set_xlabel(categories[i], fontsize='20')
# ax.axis("off")
plt.tight_layout()
plt.subplots_adjust(0.08, 0.02, 0.92, 0.85, 0.08, 0.08)
if save_name is not None:
plt.savefig(save_name, bbox_inches='tight')
X0 = X_train.T
print('X0.shape', X0.shape)
W, H = ALS(X=X0,
n_components=10,
n_iter=20,
subsample_ratio=1,
a1 = 0, # L1 regularizer for W
a12 = 0, # L2 regularizer for W
W_nonnegativity=True,
H_nonnegativity=True,
compute_recons_error=True)
plot_topic_wordcloud(W, idx2word=idx2word, num_keywords_in_topic=7, grid_shape=[2,5], save_name="20NEWS_topic1.pdf")
# Topic modeling by NMF
X0 = X_train.T
W, H = ALS(X=X0,
n_components=10,
n_iter=20,
subsample_ratio=1,
a1 = 0, # L1 regularizer for W
a12 = 0, # L2 regularizer for W
W_nonnegativity=True,
H_nonnegativity=False,
compute_recons_error=True)
plot_topic_wordcloud(W, idx2word=idx2word, num_keywords_in_topic=7, grid_shape = [2,5], save_name="20NEWS_topic2.pdf")
```
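The bag-of-words vs. tf-idf comparison above can be distilled into a toy, pure-Python tf-idf (a sketch only; `TfidfVectorizer` additionally smooths the idf and L2-normalizes each document, so its numbers differ):

```python
import math

def tf_idf(docs):
    # tf = raw count in the document; idf = log(n_docs / doc_frequency)
    n = len(docs)
    vocab = sorted(set(w for d in docs for w in d))
    df = {w: sum(w in d for d in docs) for w in vocab}
    return [{w: d.count(w) * math.log(n / df[w]) for w in set(d)} for d in docs]

docs = [["space", "orbit", "orbit"], ["space", "baseball"]]
weights = tf_idf(docs)
print(weights[0]["space"])  # 0.0 -- "space" appears in every doc, so idf = log(1) = 0
print(weights[0]["orbit"])  # 2*log(2): frequent in doc 0, absent elsewhere
```

This is why common filler words get near-zero tf-idf weight while topic-specific words dominate, which in turn makes the NMF topics above cleaner than with raw counts.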
## EM algorithm for PCA
```
# Gram-Schmidt Orthogonalization of a given matrix
def orthogonalize(U, eps=1e-15):
"""
Orthogonalizes the matrix U (d x n) using Gram-Schmidt Orthogonalization.
If the columns of U are linearly dependent with rank(U) = r, the last n-r columns
will be 0.
Args:
U (numpy.array): A d x n matrix with columns that need to be orthogonalized.
eps (float): Threshold value below which numbers are regarded as 0 (default=1e-15).
Returns:
(numpy.array): A d x n orthogonal matrix. If the input matrix U's cols were
not linearly independent, then the last n-r cols are zeros.
"""
n = len(U[0])
# numpy can readily reference rows using indices, so work with transpose(U);
# take a copy so the input matrix U is not mutated in place
V = U.T.copy()
for i in range(n):
prev_basis = V[0:i] # orthonormal basis before V[i]
coeff_vec = np.dot(prev_basis, V[i].T) # each entry is np.dot(V[j], V[i]) for all j < i
# subtract projections of V[i] onto already determined basis V[0:i]
V[i] -= np.dot(coeff_vec, prev_basis).T
if np.linalg.norm(V[i]) < eps:
V[i][:] = 0. # linearly dependent column: zero it out entirely
else:
V[i] /= np.linalg.norm(V[i])
return V.T
# Example: the orthogonalized matrix Q should satisfy Q.T @ Q = I (up to numerical error)
A = np.random.rand(2,2)
print('A \n', A)
Q = orthogonalize(A)
print('orthogonalize(A) \n', Q)
print('Q.T @ Q \n', Q.T @ Q)
def EM_PCA(X,
n_components = 10, # number of columns in the dictionary matrix W
n_iter=10,
W_ini=None,
subsample_ratio=1,
n_workers = 1):
r'''
Given data matrix X of shape (d x n), compute its rank r=n_components PCA:
\hat{W} = \argmax_{W} var(Proj_{W}(X))
= \argmin_{W} || X - Proj_{W}(X) ||_{F}^{2}
where W is a (d x r) matrix of rank r.
(Raw string so the LaTeX backslashes are not treated as escape sequences.)
'''
d, n = X.shape
r = n_components
X_mean = np.mean(X, axis=1).reshape(-1,1)
X_centered = X - np.repeat(X_mean, X.shape[1], axis=1) # use X, not the global X0
print('subsample_size:', n//subsample_ratio)
# Initialize factors
W_list = []
loss_list = []
for i in trange(n_workers):
W = np.random.rand(d,r)
if W_ini is not None:
W = W_ini
A = np.zeros(shape=[r, n//subsample_ratio]) # aggregate matrix for code H
# Perform EM updates
for j in np.arange(n_iter):
idx_data = np.random.choice(np.arange(X.shape[1]), X.shape[1]//subsample_ratio, replace=False)
X1 = X_centered[:,idx_data]
H = np.linalg.inv(W.T @ W) @ (W.T @ X1) # E-step
# A = (1-(1/(j+1)))*A + (1/(j+1))*H # Aggregation
W = X1 @ H.T @ np.linalg.inv(H @ H.T) # M-step
# W = X1 @ A.T @ np.linalg.inv(A @ A.T) # M-step
# W = orthogonalize(W)
#if compute_recons_error and (j > n_iter-2) :
# print('iteration %i, reconstruction error %f' % (j, np.linalg.norm(X_centered-W@(W.T @ X_centered))))
W_list.append(W.copy())
loss_list.append(np.linalg.norm(X_centered-W@(W.T @ X_centered)))
idx = np.argsort(loss_list)[0]
W = W_list[idx]
print('loss_list',np.asarray(loss_list)[np.argsort(loss_list)])
return orthogonalize(W)
# Load Olivetti Face dataset
from sklearn.datasets import fetch_olivetti_faces
faces, _ = fetch_olivetti_faces(return_X_y=True, shuffle=True,
random_state=0) # np.random.seed(0) returns None, so pass the seed directly
n_samples, n_features = faces.shape
# global centering
#faces_centered = faces - faces.mean(axis=0)
# local centering
#faces_centered -= faces_centered.mean(axis=1).reshape(n_samples, -1)
print("Dataset consists of %d faces" % n_samples)
print("faces_centered.shape", faces.shape)
# EM_PCA and dictionary images (principal components)
X0 = faces.T
W = EM_PCA(X0, W_ini = None, n_workers=10, n_iter=200, subsample_ratio=2, n_components=24)
display_dictionary(W, score=None, save_name = "Faces_EM_PCA_ex1.pdf", grid_shape=[2,12])
cov = np.cov(X0)
print('(cov @ W)[:,0] / W[:,0]', (cov @ W)[:,0] / W[:,0])
print('var coeff', np.std((cov @ W)[:,0] / W[:,0]))
print('var coeff exact', np.std((cov @ W0)[:,0] / W0[:,0]))
# plot coefficients of Cov @ W / W for exact PCA and EM PCA
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 3))
pca = PCA(n_components=24)
pca.fit(X0.T)
W0 = pca.components_.T
axs[0].plot((cov @ W0)[:,0] / W0[:,0], label='Exact PCA, 1st comp.')
axs[0].legend(fontsize=13)
axs[1].plot((cov @ W)[:,0] / W[:,0], label='EM PCA, 1st comp.')
axs[1].legend(fontsize=13)
plt.savefig("EM_PCA_coeff_plot1.pdf", bbox_inches='tight')
X0 = faces.T
pca = PCA(n_components=24)
pca.fit(X0.T)
W0 = pca.components_.T
s = pca.singular_values_
cov = np.cov(X0)
print('(cov @ W)[:,0] / W[:,0]', (cov @ W0)[:,0] / W0[:,0])
display_dictionary(W0, score=s, save_name = "Faces_PCA_ex1.pdf", grid_shape=[2,12])
X_mean = np.sum(X0, axis=1).reshape(-1,1)/X0.shape[1]
X_centered = X0 - np.repeat(X_mean, X0.shape[1], axis=1)
Cov = (X_centered @ X_centered.T) / X0.shape[1]
(Cov @ W)[:,0] / W[:,0]
cov = np.cov(X0)
(cov @ W0)[:,0] / W0[:,0]
x = np.array([
[0.387,4878, 5.42],
[0.723,12104,5.25],
[1,12756,5.52],
[1.524,6787,3.94],
])
#centering the data
x0 = x - np.mean(x, axis = 0)
cov = np.cov(x0, rowvar = False)
print('cov', cov)
print('cov', np.cov(x, rowvar = False))
evals , evecs = np.linalg.eigh(cov)
evals
```
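The E-step/M-step alternation in `EM_PCA` above can be sanity-checked on noiseless low-rank data, where the fitted subspace should reconstruct the centered data to machine precision (a minimal sketch with illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
# noiseless rank-2 data in R^5
X = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 200))
Xc = X - X.mean(axis=1, keepdims=True)

W = rng.standard_normal((5, 2))  # random initialization
for _ in range(20):
    H = np.linalg.solve(W.T @ W, W.T @ Xc)   # E-step: codes given dictionary
    W = Xc @ H.T @ np.linalg.inv(H @ H.T)    # M-step: dictionary given codes

# projection of Xc onto span(W) should recover Xc up to numerical error
P = W @ np.linalg.solve(W.T @ W, W.T)
print(np.linalg.norm(Xc - P @ Xc))  # ~0 (machine precision)
```

On exactly rank-r data a single M-step already pulls the columns of W into the data subspace; with noise, as in the face images above, the iterations instead converge toward the leading principal subspace.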
<img src="https://www.ibm.com/watson/health/ai-stories/assets/images/ibm-watson-health-logo.png" style="float: left; width: 40%; margin-bottom: 0.5em;">
## Develop a neuropathy onset predictive model using the FHIR diabetic patient data (prepared in Notebook 2)
**FHIR Dev Day Notebook 3**
Author: **Gigi Yuen-Reed** <gigiyuen@us.ibm.com>
[Section 1: Environment setup and credentials](#section_1)
[Section 2: Data ingestion](#section_2)
[Section 3: Data understanding](#section_3)
[Section 4: Construct analysis cohort](#section_4)
[Section 5: Model development and validation](#section_5)
<a id='section_1'></a>
## 1. Environment setup and credentials
```
import types
import pandas as pd
import ibm_boto3
import glob
from ibm_botocore.client import Config
from pprint import pprint
import shutil
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from numpy import set_printoptions
import ibmos2spark
from pyspark.sql.functions import *
from pyspark.sql.types import *
# import ML packages
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn import tree
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif, chi2
from sklearn.metrics import classification_report
from sklearn.metrics import classification_report, confusion_matrix, roc_curve, auc, roc_auc_score
pd.set_option('display.max_columns', None)
pd.set_option('display.max_colwidth', 500)
spark.conf.set("spark.sql.repl.eagerEval.enabled",True)
# Temporary credentials provided for DevDays only
synthetic_mass_read_only = \
{
"apikey": "HNJj8lVRmT-wX-n3ns2d8A8_iLFITob7ibC6aH66GZQX",
"endpoints": "https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints",
"iam_apikey_description": "Auto-generated for key 418c8c60-5c31-4ed0-8a08-0f6641a01d46",
"iam_apikey_name": "dev_days",
"iam_role_crn": "crn:v1:bluemix:public:iam::::serviceRole:Reader",
"iam_serviceid_crn": "crn:v1:bluemix:public:iam-identity::a/f0dfe396162db060e2e2a53ff465dfa0::serviceid:ServiceId-e13864d8-8b73-4901-8060-b84123e5ca1c",
"resource_instance_id": "crn:v1:bluemix:public:cloud-object-storage:global:a/f0dfe396162db060e2e2a53ff465dfa0:3067bed7-8108-4d6e-ba32-5d5f643700e5::"
}
cos_api_key = synthetic_mass_read_only
input_bucket = 'whc-save-fhir'
credentials = {
'service_id': cos_api_key['iam_serviceid_crn'],
'api_key': cos_api_key['apikey'],
'endpoint': 'https://s3.private.us-south.cloud-object-storage.appdomain.cloud',
'iam_service_endpoint': 'https://iam.ng.bluemix.net/oidc/token'
}
configuration_name = 'syntheticmass-write' #Must be unique for each bucket / configuration!
spark_cos = ibmos2spark.CloudObjectStorage(sc, credentials, configuration_name, 'bluemix_cos')
# COS API setup
client = ibm_boto3.client(
service_name='s3',
ibm_api_key_id=cos_api_key['apikey'],
ibm_auth_endpoint=credentials['iam_service_endpoint'],
config=Config(signature_version='oauth'),
endpoint_url=credentials['endpoint'])
# explore what is inside the COS bucket
# client.list_objects(Bucket=input_bucket).get('Contents')
```
<a id='section_2'></a>
## 2. Data ingestion
Read in the LPR generated in Notebook 2. Recall that LPR_Row contains diabetic patient data where each row represents a unique patient-comorbidity combination.
```
# verify COS bucket location
input_file = 'lpr/lpr_row'
spark_cos.url(input_file, input_bucket)
# read in LPR_row
%time lprRow = spark.read.parquet(spark_cos.url(input_file, input_bucket))
lprRow.count()
lprRow.limit(20)
# load into pandas (no longer distributed)
%time df = lprRow.toPandas()
df.info()
print("panda dataframe size: ", df.shape)
# store snomed code to name mapping => next iteration, upgrade to storing mapping in dictionary
from pandas import DataFrame
snomed_map = df.groupby(['snomed_code', 'snomed_name']).first().index.tolist() #a list
mapdf = DataFrame(snomed_map,columns=['snomed_code','snomed_name']) #a dataframe
mapdf
```
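As the comment above suggests, the code-to-name mapping can be stored directly as a dictionary; a minimal sketch on toy rows (the codes and names here are illustrative):

```python
import pandas as pd

toy = pd.DataFrame({"snomed_code": ["38341003", "38341003", "368581000119106"],
                    "snomed_name": ["Hypertension", "Hypertension", "Neuropathy due to T2D"]})
# one row per code, index by code -> Series, then -> dict for O(1) lookups
code2name = dict(toy.drop_duplicates("snomed_code")
                    .set_index("snomed_code")["snomed_name"])
print(code2name["368581000119106"])  # Neuropathy due to T2D
```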
<a id='section_3'></a>
## 3. Data Understanding
Data exploration and use-case validation. Note that the patient data characteristics are likely artifacts of Synthea's data generation engine; they may not reflect the nuance and diversity of disease progression one would observe in practice.
**Clinical references**
Top diabetic comorbidities, see example in Nowakowska et al (2019) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6659216/
Common diabetic complications, see the US CDC list https://www.cdc.gov/diabetes/library/features/prevent-complications.html
```
# unique patient count
n = len(pd.unique(df['patient_id']))
print("number of unique patients = ", n)
# review prevalence of co-morbidities
df.snomed_name.value_counts()
```
**Observation**: Hypertension, diabetic renal disease, and neuropathy due to T2D are the top 3 co-morbidities for diabetic patients. About 40% of our diabetic population has neuropathy at some point in their lives.
```
# set date format
df['birth_date'] = pd.to_datetime(df['birth_date'])
df['target_first_date'] = pd.to_datetime(df['target_first_date'])
df['first_observation_date'] = pd.to_datetime(df['first_observation_date'])
# calculate patient age at diabetes onset
df['onsetAge'] = (df['target_first_date'] - df['birth_date'])/ pd.to_timedelta(1, unit='D') / 365
df.head()
# review patient age at diabetes onset
firstdf = df.groupby('patient_id').first().reset_index()
onsetAgePlot = firstdf.hist(column='onsetAge', bins=25, grid=False)
# review co-morbidities onset timing in comparison to diabetic onset
# positive means co-morbidity was first reported AFTER onset; negative means BEFORE)
df['yearDiff'] = (df['first_observation_date'] - df['target_first_date']) / pd.to_timedelta(1, unit='D') / 365
df.head()
# plot histogram of onset time difference (in years) by comorbidity
onsetDiffPlot = df.hist(column='yearDiff', by='snomed_name', bins=10, grid=False, figsize=(15,30), layout=(22,1), sharex=True)
```
**Observation**: The majority of co-morbidities were first reported AFTER diabetic onset; recall this is a synthetic dataset.
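The `onsetAge` and `yearDiff` computations above divide a pandas Timedelta by one day and then by 365; a small sketch of that conversion (dates are illustrative):

```python
import pandas as pd

birth = pd.to_datetime("1960-03-15")
onset = pd.to_datetime("2005-03-15")
# Timedelta -> float days -> approximate years (a 365-day year ignores leap days)
years = (onset - birth) / pd.to_timedelta(1, unit="D") / 365
print(round(years, 2))  # 45.03: the 11 leap days since 1960 push it just past 45
```

The small leap-day drift is harmless for cohort age features, but worth knowing if exact ages ever matter.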
```
# review data availability dates
print("diabetic onset date ranges", df.target_first_date.min(), df.target_first_date.max())
print("comorbidity start date ranges", df.first_observation_date.min(), df.first_observation_date.max())
print("comorbidity end date ranges",df.last_observation_date.min(), df.last_observation_date.max())
```
<a id='section_4'></a>
## 4. Construct analysis cohort
Model Objective: What is the likelihood of developing neuropathy 5 years after onset?
Observation period: Demographic and known comorbidities as of diabetic onset
Prediction period: 5 years after diabetic onset
### 4.1 Define patient cohort with inclusion/exclusion criteria
```
# cohort exclusion 1: remove patients who have a neuropathy-due-to-diabetes diagnosis prior to the diabetes onset date
e1 = df[(df["snomed_code"]=="368581000119106") & (df["yearDiff"]<0)][['patient_id']]
print("number of unique patients in exclusion 1: ", len(pd.unique(e1['patient_id'])))
# cohort exclusion 2: remove patients who have fewer than 3 years of data prior to diabetes onset
e2 = df[df["target_first_date"] < pd.Timestamp('19301009')][['patient_id']]
print("number of unique patients in exclusion 2: ", len(pd.unique(e2['patient_id'])))
# cohort exclusion 3: remove patients who have fewer than 5 years of data in the prediction window
e3 = df[df["target_first_date"] > pd.Timestamp('20140417')][['patient_id']]
print("number of unique patients in exclusion 3: ", len(pd.unique(e3['patient_id'])))
# cohort exclusion 4: remove patients younger than 18 at diabetic onset (keep onsetAge >= 18)
e4 = df[df["onsetAge"] < 18][['patient_id']]
print("number of unique patients in exclusion 4: ", len(pd.unique(e4['patient_id'])))
# construct cohort - remove all records for patients that meet any of the 4 exclusion criteria
# total patient count prior to filtering
# n = len(pd.unique(df['patient_id'])) #repeat from earlier
print("number of unique patients prior to filtering = ", n)
cohort = df[ (~df['patient_id'].isin(e1['patient_id'])) & (~df['patient_id'].isin(e2['patient_id'])) & (~df['patient_id'].isin(e3['patient_id'])) & (~df['patient_id'].isin(e4['patient_id']))]
print(cohort.shape)
print(len(pd.unique(cohort['patient_id'])))
# alternative 1:
# e4 = df[df["onsetAge"] < 18][['patient_id']]
# update_df = df.drop(e4.index, axis=0)
# alternative 2:
# e4 = df[df["onsetAge"] < 18][['patient_id']].index.tolist()
# update_df = df.drop(e4)
```
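The combined `isin` filter above keeps only patients outside every exclusion set; the same pattern in miniature with toy IDs:

```python
import pandas as pd

df_toy = pd.DataFrame({"patient_id": [1, 1, 2, 3, 4], "value": [10, 11, 20, 30, 40]})
e_a = pd.Series([2])   # e.g. excluded for a prior diagnosis
e_b = pd.Series([4])   # e.g. excluded for insufficient history
# negate each membership test and AND them: keep rows matching no exclusion
kept = df_toy[~df_toy["patient_id"].isin(e_a) & ~df_toy["patient_id"].isin(e_b)]
print(sorted(kept["patient_id"].unique().tolist()))  # [1, 3]
```

Filtering on `patient_id` (rather than row index) drops every record of an excluded patient, which is exactly what the cohort construction needs.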
### 4.2 Construct model input features and prediction target
Develop model inputs, codify prediction target, normalize data
```
# step 1: transpose table such that each row is a patient
temp1 = cohort.loc[:,['patient_id','gender','onsetAge']]
temp1 = temp1.drop_duplicates(subset=['patient_id'])
print("patient demographic block size: ", temp1.shape)
temp2 = cohort.pivot_table(index=["patient_id"], columns='snomed_code', values='yearDiff')
print("patient condition block size: ", temp2.shape)
cohort1 = temp1.merge(temp2, left_on="patient_id", right_on="patient_id")
print("combined patient block size: ", cohort1.shape)
cohort1.head()
# set target: neuropathy (368581000119106)
cohort1.rename(columns={"368581000119106":"target"}, inplace=True)
# demographic features
demographic = ["onsetAge", "gender"]
# condition features
temp3 = cohort1.drop(["patient_id", "onsetAge", "gender"], axis=1)
condition_list = list(temp3)
print (condition_list)
# prepare condition features: value = 1 if condition was pre-existing to diabetes onset
for column in cohort1[condition_list]:
cohort1[column] = cohort1[column].apply(lambda x: 1 if x <= 0 else 0)
# prepare gender input features, female = 1 male =0
cohort1["gender"] = cohort1["gender"].apply(lambda x: 1 if x == "male" else 0)
# prepare target value = 1 if neuropathy occurs within 5 years AFTER diabetes onset
cohort1["target"] = cohort1["target"].apply(lambda x: 1 if (x > 0) & (x <=5) else 0)
# construct new feature: number of known comorbidities as of diabetic onset
cohort1['comorbid_ct'] = cohort1[condition_list].sum(axis=1)
cohort1.head(10)
# verify cohort characteristics
cohort1.mean()
cohort1.groupby("target").describe()
# normalize non-sparse data
cohort1["onsetAge"] = (cohort1["onsetAge"] - np.min(cohort1["onsetAge"])) / (np.max(cohort1["onsetAge"]) - np.min(cohort1["onsetAge"]))
cohort1["comorbid_ct"] = (cohort1["comorbid_ct"] - np.min(cohort1["comorbid_ct"])) / (np.max(cohort1["comorbid_ct"]) - np.min(cohort1["comorbid_ct"]))
```
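The normalization above is plain min-max scaling to [0, 1]; in miniature:

```python
import numpy as np

x = np.array([18.0, 36.0, 54.0])              # e.g. onset ages
x_scaled = (x - x.min()) / (x.max() - x.min())  # maps min -> 0, max -> 1
print(x_scaled)
```

scikit-learn's `MinMaxScaler` does the same thing while remembering the train-set min/max, which matters if the scaler must later be applied to unseen data.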
<a id='section_5'></a>
## 5. Model development
[Section 5.1: Prepare train/test data](#section_5.1)
[Section 5.2 Univariate analysis](#section_5.2)
[Section 5.3.1: Logistic regression with all features](#section_5.3.1)
[Section 5.3.2: Logistic regression with only 5 features](#section_5.3.2)
[Section 5.4: Non-linear Support Vector Machine (SVM)](#section_5.4)
[Section 5.5: Decision Tree Classifier](#section_5.5)
<a id='section_5.1'></a>
### 5.1 Prepare train/test data
```
# prepare train/test data
y = cohort1["target"]
x = cohort1.drop(["patient_id","target"], axis = 1)
#x = cohort1.drop(["patient_id","target", "comorbid_ct"], axis = 1)
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size = 0.3,random_state=0)
print("whole data shape", cohort1.shape)
print("training input data shape:", x_train.shape)
print("test input data shape:",x_test.shape)
print("training output data shape:", len(y_train))
print("test output data shape:",len(y_test))
```
<a id='section_5.2'></a>
### 5.2 Univariate analysis
```
#capture a list of input feature IDs
feature_id = list(x)
print(feature_id)
# univariate analysis - analysis all 24 features
feature_selector = SelectKBest(score_func=chi2)
fit = feature_selector.fit(x_train, y_train)
pvalue = DataFrame(list(zip(feature_id, fit.pvalues_)),columns=['feature_id','pvalues'])
pv_df = pd.merge(pvalue, mapdf, left_on='feature_id', right_on='snomed_code', how = 'outer').drop(["snomed_code"], axis=1).dropna(subset=['feature_id'])
#pv_df = pv_df.rename(columns = {'snomed_name':'description'})
#pv_df.loc[df.feature_id == "onsetAge", "description"] = "onsetAge"
pv_df['pvalue < 0.1'] = pv_df['pvalues'].apply(lambda x: 1 if x <= 0.1 else 0) #create indicator for pvalue < 0.1
pv_df = pv_df.reindex(columns=['feature_id', 'snomed_name', 'pvalues', 'pvalue < 0.1']) # rearrange columns
pv_df
```
**Observation**: Comorbidity count is highly significant (very low p-value). Beyond that, only a few features have sufficiently low p-values (hypertension, CAD). A "nan" p-value occurs when all values of a comorbidity feature are 0.
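For intuition on the chi-square scores behind `SelectKBest`, here is a hand-computed chi-square statistic on a toy 2x2 contingency table (note: scikit-learn's `chi2` is computed from per-class sums of nonnegative feature values rather than a full contingency table, so this only illustrates the underlying idea):

```python
import numpy as np

# rows: feature present / absent; cols: target = 1 / 0 (toy counts)
obs = np.array([[30.0, 10.0],
                [20.0, 40.0]])
row = obs.sum(axis=1, keepdims=True)
col = obs.sum(axis=0, keepdims=True)
expected = row @ col / obs.sum()               # counts expected under independence
chi2_stat = ((obs - expected) ** 2 / expected).sum()
print(round(chi2_stat, 3))  # 16.667 -- large value = feature and target look dependent
```

A large statistic (small p-value) means the feature's distribution differs between classes, which is why the low-p-value comorbidities above survive selection.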
<a id='section_5.3.1'></a>
### 5.3.1 Logistic regression with all features
```
# train Logistic Regression with all features
lr = LogisticRegression(solver='lbfgs').fit(x_train,y_train)
acc_train = lr.score(x_train,y_train)*100
acc_test = lr.score(x_test,y_test)*100
print("Test Accuracy for train set {:.2f}%".format(acc_train))
print("Test Accuracy for test set {:.2f}%".format(acc_test))
```
**Observation** no overfitting
```
# review model intercept and coefficients
print("model intercept ", lr.intercept_, "\n")
print("model coeff ", lr.coef_, "\n")
# review model importance
#list(zip(feature_id, lr.coef_[0]))
# put together feature importance for LR, based on coefficient value
importance = DataFrame(list(zip(feature_id, lr.coef_[0])),columns=['feature_id','coeff'])
importance = pd.merge(importance, mapdf, left_on='feature_id', right_on='snomed_code', how = 'outer').drop(["snomed_code"], axis=1).dropna(subset=['feature_id']) #merge in snomed description
importance = importance[importance['coeff'] != 0] #remove feature with coeff = 0, i.e., no impact
# sort by importance
#importance = importance.reindex(importance.coeff.abs().sort_values().index) #does not support descending sort
importance['abs_coeff']= importance.coeff.abs()
importance = importance.sort_values(by='abs_coeff', ascending=False).drop(["abs_coeff"], axis=1) # sort by absolute coefficient value
# add descriptions
importance = importance.reindex(columns=['feature_id', 'snomed_name', 'coeff']) # rearrange columns
importance.loc[importance.feature_id == "gender", "snomed_name"] = "male = 1, female = 0"
importance.loc[importance.feature_id == "comorbid_ct", "snomed_name"] = "# of known conditions at time of diabetic onset"
importance.loc[importance.feature_id == "onsetAge", "snomed_name"] = "Age at diabetic onset, normalized from (18-54) to (0-1)"
importance = importance.rename(columns = {'snomed_name':'description'})
# show feature selection count
print("Feature importance from LR model (selected ", importance.shape[0], " out of ", len(feature_id), " features)")
importance
```
**Observation**: comorbid_ct is the single most important feature
Note that scikit-learn's LogisticRegression has built-in regularization (features with coefficients of 0 are effectively dropped). It does not natively generate p-values; consider the statsmodels package instead if deeper statistical output is desired.
```
# show confusion matrix
y_lr = lr.predict(x_test)
cm_lr = confusion_matrix(y_test,y_lr)
ax = plt.subplot()
sns.heatmap(cm_lr,annot=True,cmap="Blues",fmt="d",cbar=False,annot_kws={"size": 30}, ax=ax)
ax.set_xlabel('Predicted Results');ax.set_ylabel('Ground Truth');
ax.set_title('Confusion Matrix for Logistic Regression Model with built in regularizer');
# show classification report
target_names = ['class 0', 'class 1']
print(classification_report(y_test, y_lr, target_names=target_names))
```
Note that recall of the positive class is also known as “sensitivity”; recall of the negative class is “specificity”.
```
# generate a no skill prediction (majority class)
ns_probs = [0 for _ in range(len(y_test))]
# calculate roc scores
ns_auc = roc_auc_score(y_test, ns_probs)
lr_auc = roc_auc_score(y_test, y_lr)
# summarize scores
print('No Skill: ROC AUC=%.3f' % (ns_auc))
print('Logistic: ROC AUC=%.3f' % (lr_auc))
# # calculate roc curves
ns_fpr, ns_tpr, _ = roc_curve(y_test, ns_probs)
lr_fpr, lr_tpr, _ = roc_curve(y_test, y_lr)
# plot the roc curve for the model
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='No Skill')
plt.plot(lr_fpr, lr_tpr, marker='.', label='Logistic')
# axis labels
plt.xlabel('False Positive Rate'); plt.ylabel('True Positive Rate')
plt.title('ROC for Logistic Regression with built in regularizer');
plt.legend() # show the legend
plt.show() # show the plot
```
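One caveat on the ROC code above: it scores the hard 0/1 predictions `y_lr`, which collapses the curve to a single operating point; passing `lr.predict_proba(x_test)[:, 1]` would trace the full curve. For intuition, AUC equals the probability that a randomly chosen positive outranks a randomly chosen negative, which a small sketch (function name is illustrative) can compute directly:

```python
import numpy as np

def auc_from_scores(y_true, scores):
    # AUC = P(score of random positive > score of random negative); ties count 1/2
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

print(auc_from_scores([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```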
<a id='section_5.3.2'></a>
### 5.3.2 Logistic regression with only 5 features
```
# Feature selection: only keep top 5 features
feature_selector = SelectKBest(score_func=chi2, k=5)
fit2 = feature_selector.fit(x_train, y_train)
# create training dataset with selected features
x_train2 = feature_selector.fit_transform(x_train, y_train)
print(x_train2.shape)
# get selected feature names
mask = feature_selector.get_support() # boolean mask over all features
new_features = [f for keep, f in zip(mask, feature_id) if keep] # names of the k best features
#cols = fit.get_support(indices=True) #get selected feature indexx_
#print(cols)
# make pvalue table
# set_printoptions(precision=3)
pvalue = list(zip(new_features, fit2.pvalues_[mask])) # pair each selected feature with its own p-value
from pprint import pprint
print("(features, p-value)")
pprint(pvalue)
# trim test set to match selected features
x_test2 = x_test[new_features]
print(x_test2.shape)
# train Logistic Regression with selected features
lr2 = LogisticRegression(solver='lbfgs').fit(x_train2,y_train)
acc_train = lr2.score(x_train2,y_train)*100
acc_test = lr2.score(x_test2,y_test)*100
print("Test Accuracy for train set {:.2f}%".format(acc_train))
print("Test Accuracy for test set {:.2f}%".format(acc_test))
y_lr2 = lr2.predict(x_test2)
cm_lr2 = confusion_matrix(y_test,y_lr2)
ax = plt.subplot()
sns.heatmap(cm_lr2,annot=True,cmap="Blues",fmt="d",cbar=False,annot_kws={"size": 30}, ax=ax)
ax.set_xlabel('Predicted Results');ax.set_ylabel('Ground Truth');
ax.set_title('Confusion Matrix for Logistic Regression with 5 selected features');
# show classification report
# target_names = ['class 0', 'class 1']
print(classification_report(y_test, y_lr2, target_names=target_names))
```
**Observation**: low sensitivity but high specificity; not bad performance given the small number of features.
```
# generate a no skill prediction (majority class)
#ns_probs = [0 for _ in range(len(y_test))]
# calculate roc scores
#ns_auc = roc_auc_score(y_test, ns_probs)
lr2_auc = roc_auc_score(y_test, y_lr2)
# summarize scores
print('No Skill: ROC AUC=%.3f' % (ns_auc))
print('Logistic (5 features): ROC AUC=%.3f' % (lr2_auc))
# # calculate roc curves
#ns_fpr, ns_tpr, _ = roc_curve(y_test, ns_probs)
lr_fpr2, lr_tpr2, _ = roc_curve(y_test, y_lr2)
# plot the roc curve for the model
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='No Skill')
plt.plot(lr_fpr2, lr_tpr2, marker='.', label='Logistic')
# axis labels
plt.xlabel('False Positive Rate'); plt.ylabel('True Positive Rate')
plt.title('ROC for Logistic Regression with 5 selected features');
plt.legend() # show the legend
plt.show() # show the plot
```
<a id='section_5.4'></a>
### 5.4 Non-linear Support Vector Machine (SVM) model
```
# train SVM
svm = SVC(random_state = 4, gamma='auto').fit(x_train, y_train)
acc_train = svm.score(x_train,y_train)*100
acc_test = svm.score(x_test,y_test)*100
print("Train Accuracy of SVM Algorithm: {:.2f}%".format(acc_train))
print("Test Accuracy of SVM Algorithm: {:.2f}%".format(acc_test))
# show confusion matrix
y_svm = svm.predict(x_test)
cm_svm = confusion_matrix(y_test,y_svm)
ax = plt.subplot()
sns.heatmap(cm_svm,annot=True,cmap="Blues",fmt="d",cbar=False,annot_kws={"size": 30}, ax=ax)
ax.set_xlabel('Predicted Results');ax.set_ylabel('Ground Truth');
ax.set_title('Confusion Matrix for SVM');
```
**Observation**: SVM predicts that no patient develops neuropathy within 5 years of onset, giving very low sensitivity. Performance is significantly worse than logistic regression (with regularization), possibly due to the small amount of training data; this is worth further investigation.
<a id='section_5.5'></a>
### 5.5 Decision Tree Classifier
```
dtree = tree.DecisionTreeClassifier().fit(x_train,y_train)
acc_train = dtree.score(x_train,y_train)*100
acc_test = dtree.score(x_test,y_test)*100
print("Train Accuracy of Decision Tree Classifier: {:.2f}%".format(acc_train))
print("Test Accuracy of Decision Tree Classifier: {:.2f}%".format(acc_test))
# show confusion matrix
y_tree = dtree.predict(x_test)
cm_tree = confusion_matrix(y_test,y_tree)
ax = plt.subplot()
sns.heatmap(cm_tree,annot=True,cmap="Blues",fmt="d",cbar=False,annot_kws={"size": 30}, ax=ax)
ax.set_xlabel('Predicted Results');ax.set_ylabel('Ground Truth');
ax.set_title('Confusion Matrix for Decision Tree Classifier');
# show classification report
target_names = ['class 0', 'class 1']
print(classification_report(y_test, y_tree, target_names=target_names))
```
**Observation**: 0.90 sensitivity and 1.00 specificity, best performing model yet.
```
# generate a no skill prediction (majority class)
ns_probs = [0 for _ in range(len(y_test))]
# calculate roc scores
ns_auc = roc_auc_score(y_test, ns_probs)
tree_auc = roc_auc_score(y_test, y_tree)
# summarize scores
print('No Skill: ROC AUC=%.3f' % (ns_auc))
print('Decision Tree: ROC AUC=%.3f' % (tree_auc))
# calculate roc curves
ns_fpr, ns_tpr, _ = roc_curve(y_test, ns_probs)
tree_fpr, tree_tpr, _ = roc_curve(y_test, y_tree)
# plot the roc curve for the model
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='No Skill')
plt.plot(tree_fpr, tree_tpr, marker='.', label='Decision Tree')
# axis labels
plt.xlabel('False Positive Rate'); plt.ylabel('True Positive Rate')
plt.title('ROC for Decision Tree Classifier');
plt.legend() # show the legend
plt.show() # show the plot
text_representation = tree.export_text(dtree)
print(text_representation)
fig = plt.figure(figsize=(25,20))
_ = tree.plot_tree(dtree,
feature_names=feature_id,
class_names=target_names,
filled=True)
```
## Synthea metabolic syndrome disease (which includes diabetes) data generation logic
<img src="https://synthetichealth.github.io/synthea/graphviz/metabolic_syndrome_disease.png" style="float: left; width: 40%; margin-bottom: 0.5em;">
https://synthetichealth.github.io/synthea/graphviz/metabolic_syndrome_disease.png
# Get Target
```
!pip install -r requirements_colab.txt -q
```
> To speed up the review process, I have provided the ***drive id*** of the data created by the notebooks in the Train creation folder.
---
> I have also added each data drive link in the Readme PDF file attached with this solution.
---
> The data used in this notebook is **S2TrainObs1**
```
!gdown --id 1hNRbtcqd9F6stMOK1xAZApDITwAjiSDJ
# import necessary dependencies
import os
import warnings
import numpy as np
import pandas as pd
import random
import gc
from sklearn.metrics import log_loss
from sklearn.model_selection import StratifiedKFold
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier
warnings.filterwarnings('ignore')
np.random.seed(111)
random.seed(111)
def create_target():
    train = pd.read_csv("S2TrainObs1.csv")
    train = train.groupby('field_id').median().reset_index().sort_values('field_id')
    train.label = train.label.astype('int')
    return train[['field_id', 'label']]
```
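The `groupby('field_id').median()` step above collapses the multiple observations of each field into a single row; a toy sketch of the same aggregation (the column names and values below are made up for illustration):

```python
import pandas as pd

# toy frame: two observations of field 1, one of field 2
obs = pd.DataFrame({'field_id': [1, 1, 2],
                    'ndvi':     [0.2, 0.4, 0.9],
                    'label':    [3, 3, 5]})

# one row per field, median over its observations, sorted by field_id
agg = obs.groupby('field_id').median().reset_index().sort_values('field_id')
agg.label = agg.label.astype('int')
print(agg)
```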
# Optimized Blend
```
y_true = pd.get_dummies(create_target()['label']).values
test_dict = {'Catboost': 'S1_Catboost.csv',
'LightGbm': 'S1_LGBM.csv',
'Xgboost': 'S1_XGBOOST.csv',
'NN Attention': 'S1_NNAttention.csv' ,
'Neural Network' : 'S1_NN.csv',
'CatboostS2': 'S2_Catboost.csv',
'XgboostS2' : 'S2_Xgboost.csv',
'NN_AttentionS2': 'S2_NNAttention.csv' ,
'Neural_NetworkS2' : 'S2_NN.csv',
}
BlendTest = np.zeros((len(test_dict), 35295,y_true.shape[1]))
for i in range(BlendTest.shape[0]):
BlendTest[i] = pd.read_csv(list(test_dict.values())[i]).values[:,1:]
BlendPreds = np.tensordot([0.175 , 0.2275, 0.1225, 0.0875, 0.0875,0.0975, 0.0825, 0.06 , 0.06 ], BlendTest, axes = ((0), (0)))
BlendPreds.shape
pred_df = pd.DataFrame(BlendPreds)
pred_df = pred_df.rename(columns={
0:'Crop_ID_1',
1:'Crop_ID_2',
2:'Crop_ID_3',
3:'Crop_ID_4',
4:'Crop_ID_5',
5:'Crop_ID_6',
6:'Crop_ID_7',
7:'Crop_ID_8',
8:'Crop_ID_9'
})
pred_df['field_id'] = pd.read_csv("S1_10FoldsCatboost_V5.csv")['field_id'].astype('int').values
pred_df = pred_df[['field_id', 'Crop_ID_1', 'Crop_ID_2', 'Crop_ID_3', 'Crop_ID_4', 'Crop_ID_5', 'Crop_ID_6', 'Crop_ID_7', 'Crop_ID_8', 'Crop_ID_9']]
pred_df.head()
```
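`np.tensordot` with `axes=((0), (0))` contracts the model axis, so the blend above is just a per-model weighted sum of the stacked prediction matrices (note the nine weights used sum to 1, which keeps each row a probability distribution). A toy sketch with two hypothetical models, three rows and two classes:

```python
import numpy as np

preds = np.stack([
    np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]),  # model A predictions
    np.array([[0.7, 0.3], [0.4, 0.6], [0.6, 0.4]]),  # model B predictions
])
weights = [0.6, 0.4]  # should sum to 1 so blended rows stay normalized

# contract the leading (model) axis: weighted sum of the prediction matrices
blend = np.tensordot(weights, preds, axes=((0), (0)))
print(blend.shape)  # (3, 2)
```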
# Optimized Stacking
```
y_true = create_target()
oof_dict = {'Catboost': 'S1_oof_cat.npy',
'LightGbm': 'S1_oof_lgbm.npy',
'Xgboost': 'S1_oof_XGBOOST.npy',
'NN Attention': 'S1_oof_NNAttention.npy' ,
'Neural Network' : 'S1_oof_NN.npy',
'CatboostS2': 'S2_oof_cat.npy',
'XgboostS2' : 'S2_oof_xgb.npy',
'NN_AttentionS2' : 'S2_oof_NNAttention.npy' ,
'Neural_NetworkS2' : 'S2_NN.npy',
}
oof_data = y_true.copy()
for i in range(len(list(oof_dict.values())) ):
local = pd.DataFrame(np.load(list(oof_dict.values())[i]),columns=[f'{list(oof_dict.keys())[i]}_oof_{j}' for j in range(1,10)] )
oof_data = pd.concat([oof_data,local],axis=1)
oof_data.head()
test_dict = {'Catboost': 'S1_Catboost.csv',
'LightGbm': 'S1_LGBM.csv',
'Xgboost': 'S1_XGBOOST.csv',
'NN Attention': 'S1_NNAttention.csv' ,
'Neural Network' : 'S1_NN.csv',
'CatboostS2': 'S2_Catboost.csv',
'XgboostS2' : 'S2_Xgboost.csv',
'NN_AttentionS2': 'S2_NNAttention.csv' ,
'Neural_NetworkS2' : 'S2_NN.csv',
}
test_data = pd.read_csv("S1_10FoldsCatboost_V5.csv")[['field_id']]
for i in range(len(list(test_dict.values()))):
local = pd.DataFrame(pd.read_csv(list(test_dict.values())[i]).iloc[:,1:].values,columns=[f'{list(test_dict.keys())[i]}_oof_{j}' for j in range(1,10)] )
test_data = pd.concat([test_data,local],axis=1)
test_data.head()
```
# Stacker-Model
```
class AzerStacking :
def __init__(self,y) :
self.y = y
def StackingRegressor(self ,KFOLD,stacking_train : pd.DataFrame , stacking_test : pd.DataFrame) :
cols = stacking_train.drop(['field_id', 'label'], axis=1).columns.tolist()
X , y , Test = stacking_train[cols] , stacking_train['label'] , stacking_test[cols]
final_preds = [] ; err_cb = []
oof_stack = np.zeros((len(X),9)) ;
for fold,(train_index, test_index) in enumerate(KFOLD.split(X,y)):
X_train, X_test = X.values[train_index], X.values[test_index]
y_train, y_test = y.values[train_index], y.values[test_index]
model1 = LGBMClassifier(verbose=10)
model2 = CatBoostClassifier(iterations=50,verbose=0)
model1.fit(X_train,y_train)
model2.fit(X_train,y_train)
preds1=model1.predict_proba(X_test)
preds2=model2.predict_proba(X_test)
preds = preds1*0.7+preds2*0.3
oof_stack[test_index] = preds
err_cb.append(log_loss(y_test,preds))
print(f'logloss fold-{fold+1}/10',log_loss(y_test,preds))
test_pred1 = model1.predict_proba(Test.values)
test_pred2 = model2.predict_proba(Test.values)
test_pred = test_pred1*0.7+test_pred2*0.3
final_preds.append(test_pred)
print(2*'--------------------------------------')
print('STACKING Log Loss',log_loss(y, oof_stack))
return oof_stack,np.mean(final_preds,axis=0)
folds = StratifiedKFold(n_splits=10,shuffle=True,random_state=47)
AzerStacker = AzerStacking(y=oof_data['label'])
oof_stack,stack_preds = AzerStacker.StackingRegressor(KFOLD=folds ,stacking_train=oof_data ,stacking_test=test_data)
pred_df = pd.DataFrame(stack_preds)
pred_df = pred_df.rename(columns={
0:'Crop_ID_1',
1:'Crop_ID_2',
2:'Crop_ID_3',
3:'Crop_ID_4',
4:'Crop_ID_5',
5:'Crop_ID_6',
6:'Crop_ID_7',
7:'Crop_ID_8',
8:'Crop_ID_9'
})
pred_df['field_id'] = pd.read_csv("S1_10FoldsCatboost_V5.csv")['field_id'].astype('int').values
pred_df = pred_df[['field_id', 'Crop_ID_1', 'Crop_ID_2', 'Crop_ID_3', 'Crop_ID_4', 'Crop_ID_5', 'Crop_ID_6', 'Crop_ID_7', 'Crop_ID_8', 'Crop_ID_9']]
pred_df.head()
```
# Stacking - Blend
```
pred_df = pd.DataFrame(stack_preds*0.7+ BlendPreds*0.3)
pred_df = pred_df.rename(columns={
0:'Crop_ID_1',
1:'Crop_ID_2',
2:'Crop_ID_3',
3:'Crop_ID_4',
4:'Crop_ID_5',
5:'Crop_ID_6',
6:'Crop_ID_7',
7:'Crop_ID_8',
8:'Crop_ID_9'
})
pred_df['field_id'] = pd.read_csv("S1_10FoldsCatboost_V5.csv")['field_id'].astype('int').values
pred_df = pred_df[['field_id', 'Crop_ID_1', 'Crop_ID_2', 'Crop_ID_3', 'Crop_ID_4', 'Crop_ID_5', 'Crop_ID_6', 'Crop_ID_7', 'Crop_ID_8', 'Crop_ID_9']]
pred_df.head()
# Write the predicted probabilites to a csv for submission
pred_df.to_csv('S1_Stacking_Blending.csv', index=False)
```
# About: Changing a Configuration File -- httpd.conf
---
Change the configuration of the Apache HTTP server in the Moodle container.
## Overview
As an example of changing a configuration file, we edit the configuration file of the Apache HTTP server in the Moodle container and change the number of server processes started at boot.

The procedure is as follows.
1. Fetch the configuration file placed in the host environment into the Notebook environment
2. Create a backup of the fetched file
3. Edit the configuration file using the Notebook's editing feature
4. Place the modified configuration file in the host environment
5. Restart the container to apply the configuration file change
In the Moodle environment built from the application template, configuration files that users may want to change, such as `httpd.conf`, are placed in the host environment. The host-side files are made visible inside the container via [bind mounts](https://docs.docker.com/storage/bind-mounts/). Therefore, when editing a configuration file, you need to be aware of whether you are specifying the in-container path or the host path, and where the bind-mount mount points are.
The mapping between the two for the Moodle container's configuration files is shown in the table below.
<table>
<tr>
<th style="text-align:left;">Path inside the container</th>
<th style="text-align:left;">Path in the host environment</th>
</tr>
<tr>
<td style="text-align:left;">/etc/php.ini</td>
<td style="text-align:left;">/srv/moodle/moodle/conf/php.ini</td>
</tr>
<tr>
<td style="text-align:left;">/etc/php.d/</td>
<td style="text-align:left;">/srv/moodle/moodle/conf/php.d/</td>
</tr>
<tr>
<td style="text-align:left;">/etc/httpd/conf/httpd.conf</td>
<td style="text-align:left;">/srv/moodle/moodle/conf/httpd/conf/httpd.conf</td>
</tr>
<tr>
<td style="text-align:left;">/etc/httpd/conf.d/</td>
<td style="text-align:left;">/srv/moodle/moodle/conf/httpd/conf.d/</td>
</tr>
<tr>
<td style="text-align:left;">/etc/httpd/conf.modules.d/</td>
<td style="text-align:left;">/srv/moodle/moodle/conf/httpd/conf.modules.d/</td>
</tr>
<tr>
<td style="text-align:left;">/etc/pki/ca-trust/source/anchors/</td>
<td style="text-align:left;">/srv/moodle/moodle/conf/ca-trust/</td>
</tr>
</table>
## Parameter Settings
Specify the target container name, configuration file name, and other parameters.
### Specifying the Group Name
Set the Ansible group name that this Notebook will operate on.
```
# (example)
# target_group = 'Moodle'
target_group =
```
#### Check
Verify that the specified `target_group` value is valid.
Confirm that a `group_vars` file corresponding to `target_group` exists.
```
from pathlib import Path
if not (Path('group_vars') / (target_group + '.yml')).exists():
raise RuntimeError(f"ERROR: not exists {target_group + '.yml'}")
```
Confirm that the hosts specified by `target_group` are reachable via Ansible.
```
!ansible {target_group} -m ping
```
### Specifying the Container
Specify the container whose configuration file will be changed.
Check the list of currently running containers.
```
!ansible {target_group} -a 'chdir=/srv/moodle docker-compose ps --services'
```
From the container list displayed above, specify the name of the target container.
```
target_container = 'moodle'
```
### Specifying the Configuration File
Specify the path of the configuration file to change. Use the path inside the container here.
```
target_file = '/etc/httpd/conf.modules.d/00-mpm.conf'
```
## Editing the Configuration File
Fetch the configuration file of the Moodle container into the local environment and edit it using Jupyter Notebook's editing feature.

Running the next cell performs the following steps:
1. Fetch the Apache HTTP Server configuration file placed in the host environment into the local environment
2. Create a backup of the fetched configuration file
3. Display a link for editing the configuration file with Jupyter Notebook's editing feature
```
%run scripts/edit_conf.py
fetch_conf(target_group, target_container, target_file)
```
Click the link shown in the output of the cell above to edit the configuration file.
In this example, add the following content to the end of the file.
```
<IfModule mpm_prefork_module>
StartServers 20
MinSpareServers 20
MaxSpareServers 20
ServerLimit 20
MaxRequestsPerChild 50
</IfModule>
```
Adding this setting increases the number of Apache HTTP Server processes at startup from the default of 5 to 20.
> After editing the file, **always** save it by selecting [File]-[Save] from the menu.
The file fetched into the local environment is stored at the following path:
`./edit/{target_group}/{YYYYMMDDHHmmssffffff}/00-mpm.conf`
`{target_group}` is the Ansible group name, and `{YYYYMMDDHHmmssffffff}` is the timestamp when the file was fetched.
The backup file is stored at the following path:
`./edit/{target_group}/{YYYYMMDDHHmmssffffff}/00-mpm.conf.orig`
Running the next cell shows the differences between the file before and after editing.
```
show_local_conf_diff(target_group, target_container, target_file)
```
## Applying the Edited Configuration File
Place the edited file in the host environment and apply the configuration change to the container.

### Checking the State Before the Change
Check the state before the configuration change is applied.
Since we changed the setting for the number of Apache HTTP Server processes, check the process list of the Apache HTTP Server container.
Display the process list.
```
!ansible {target_group} -a 'chdir=/srv/moodle docker-compose exec -T {target_container} ps ax '
```
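A quick way to compare the before/after listings is to count the `httpd` lines in the `ps` output; a minimal Python sketch (the sample output below is fabricated for illustration):

```python
# count Apache worker processes in a ps listing (toy sample, not a real listing)
ps_output = """  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:00 /usr/sbin/httpd -DFOREGROUND
    9 ?        S      0:00 /usr/sbin/httpd -DFOREGROUND
   10 ?        S      0:00 /usr/sbin/httpd -DFOREGROUND
   11 ?        S      0:00 ps ax"""

n_httpd = sum('httpd' in line for line in ps_output.splitlines())
print(n_httpd)  # 3
```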
### Applying the Edits
Apply the configuration file edited in the previous section to the Apache HTTP Server container.
Running the next cell performs the following steps:
1. Display the differences between the configuration file before and after editing
2. Place the edited configuration file in the host environment
3. Restart the container to apply the changed configuration file
```
apply_conf(target_group, target_container, target_file)
```
### Checking the State After the Change
Check the state after the configuration change has been applied.
Check the process list of the Apache HTTP Server container.
```
!ansible {target_group} -a 'chdir=/srv/moodle docker-compose exec -T {target_container} ps ax '
```
## Reverting the Change
Restore the configuration file to its state before editing.

Running the next cell performs the following steps:
1. Display the differences between the edited and original configuration files
2. Place the original configuration file in the host environment
3. Restart the container to apply the restored settings
```
revert_conf(target_group, target_container, target_file)
```
Check the state after restoring the configuration file. Check the process list of the Apache HTTP Server container.
```
!ansible {target_group} -a 'chdir=/srv/moodle docker-compose exec -T {target_container} ps ax '
```
# HU Extension --- Final Project --- S89A DL for NLP
# Michael Lee & Micah Nickerson
# PART 2B - ADVERSARIAL ATTACK GENERATOR
This is a notebook used to create the different adversarial attack **word perturbations**.
```
import random
import pandas as pd

adversarial_dir = "Data Sets/adversarial_asap"
test_set_file = adversarial_dir+"/valid_set_plus_ADVERSARIAL_ESSAYS-ML.xls"
# verify data paths
print(test_set_file)
# Attack 1: Shuffling Words
#load excel into dataframe
test_set_shuffle = pd.read_excel(test_set_file, sheet_name='valid_set')
#remove empty n/a cells
test_set_shuffle = test_set_shuffle.drop(['domain2_predictionid'], axis=1)
for i in test_set_shuffle.index:
words= test_set_shuffle.at[i, 'essay'].split()
random.shuffle(words)
new_sentence = ' '.join(words)
test_set_shuffle.at[i,'essay'] = new_sentence
test_set_shuffle.to_excel(adversarial_dir+"/valid_set_plus_ADVERSARIAL_ESSAYS-SHUFFLE.xls")
```
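The shuffling perturbation above can be written as a small standalone helper; this sketch adds a `seed` parameter (not in the original) for reproducibility:

```python
import random

def shuffle_words(essay, seed=None):
    """Return the essay with its words in random order (bag of words preserved)."""
    rng = random.Random(seed)  # local RNG so a seed gives repeatable shuffles
    words = essay.split()
    rng.shuffle(words)
    return ' '.join(words)

perturbed = shuffle_words("the quick brown fox", seed=0)
print(perturbed)
```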
### Anchor: Library
```
# Attack 2a: Appending - "Library"
#load excel into dataframe
test_set_append = pd.read_excel(test_set_file, sheet_name='valid_set')
#remove empty n/a cells
test_set_append = test_set_append.drop(['domain2_predictionid'], axis=1)
for i in test_set_append.index:
words= test_set_append.at[i, 'essay'].split()
words.append("library")
new_sentence = ' '.join(words)
test_set_append.at[i,'essay'] = new_sentence
test_set_append.to_excel(adversarial_dir+"/valid_set_plus_ADVERSARIAL_ESSAYS-APPEND_LIBRARY.xls")
# Attack 3a: Progressive Overload - "Library"
#load excel into dataframe
test_set_progressive = pd.read_excel(test_set_file, sheet_name='valid_set')
#remove empty n/a cells
test_set_progressive = test_set_progressive.drop(['domain2_predictionid'], axis=1)
for i in test_set_progressive.index:
words= test_set_progressive.at[i, 'essay'].split()
if i < 591:
continue
if i < 641:
for x in range(0,i-590):
words[x] = "library"
new_sentence = ' '.join(words)
test_set_progressive.at[i,'essay'] = new_sentence
test_set_progressive.to_excel(adversarial_dir+"/valid_set_plus_ADVERSARIAL_ESSAYS-PROGRESSIVE_LIBRARY.xls")
# Attack 4a: Single Substitution - "Library"
#load excel into dataframe
test_set_single = pd.read_excel(test_set_file, sheet_name='valid_set')
#remove empty n/a cells
test_set_single = test_set_single.drop(['domain2_predictionid'], axis=1)
for i in test_set_single.index:
words= test_set_single.at[i, 'essay'].split()
if i < 591:
continue
if i < 641:
words[i-591] = "library"
new_sentence = ' '.join(words)
test_set_single.at[i,'essay'] = new_sentence
test_set_single.to_excel(adversarial_dir+"/valid_set_plus_ADVERSARIAL_ESSAYS-SINGLE_LIBRARY.xls")
# Attack 5a: Insertion of anchor in random locations
#load excel into dataframe
test_set_insertion = pd.read_excel(test_set_file, sheet_name='valid_set')
#remove empty n/a cells
test_set_insertion = test_set_insertion.drop(['domain2_predictionid'], axis=1)
for i in test_set_insertion.index:
words= test_set_insertion.at[i, 'essay'].split()
x = random.randint(0,len(words))
words.insert(x, 'library')
new_sentence = ' '.join(words)
test_set_insertion.at[i,'essay'] = new_sentence
test_set_insertion.to_excel(adversarial_dir+"/valid_set_plus_ADVERSARIAL_ESSAYS-INSERTION_LIBRARY.xls")
```
### Anchor: Censorship
```
# Attack 2b: Appending - "Censorship"
#load excel into dataframe
test_set_append = pd.read_excel(test_set_file, sheet_name='valid_set')
#remove empty n/a cells
test_set_append = test_set_append.drop(['domain2_predictionid'], axis=1)
for i in test_set_append.index:
words= test_set_append.at[i, 'essay'].split()
words.append("censorship")
new_sentence = ' '.join(words)
test_set_append.at[i,'essay'] = new_sentence
test_set_append.to_excel(adversarial_dir+"/valid_set_plus_ADVERSARIAL_ESSAYS-APPEND_CENSORSHIP.xls")
# Attack 3b: Progressive Overload - "Censorship"
#load excel into dataframe
test_set_progressive = pd.read_excel(test_set_file, sheet_name='valid_set')
#remove empty n/a cells
test_set_progressive = test_set_progressive.drop(['domain2_predictionid'], axis=1)
for i in test_set_progressive.index:
words= test_set_progressive.at[i, 'essay'].split()
if i < 591:
continue
if i < 641:
for x in range(0,i-590):
words[x] = "censorship"
new_sentence = ' '.join(words)
test_set_progressive.at[i,'essay'] = new_sentence
test_set_progressive.to_excel(adversarial_dir+"/valid_set_plus_ADVERSARIAL_ESSAYS-PROGRESSIVE_CENSORSHIP.xls")
# Attack 4b: Single Substitution - "Censorship"
#load excel into dataframe
test_set_single = pd.read_excel(test_set_file, sheet_name='valid_set')
#remove empty n/a cells
test_set_single = test_set_single.drop(['domain2_predictionid'], axis=1)
for i in test_set_single.index:
words= test_set_single.at[i, 'essay'].split()
if i < 591:
continue
if i < 641:
words[i-591] = "censorship"
new_sentence = ' '.join(words)
test_set_single.at[i,'essay'] = new_sentence
test_set_single.to_excel(adversarial_dir+"/valid_set_plus_ADVERSARIAL_ESSAYS-SINGLE_CENSORSHIP.xls")
# Attack 5b: Insertion of "censorship" in random locations
#load excel into dataframe
test_set_insertion = pd.read_excel(test_set_file, sheet_name='valid_set')
#remove empty n/a cells
test_set_insertion = test_set_insertion.drop(['domain2_predictionid'], axis=1)
for i in test_set_insertion.index:
words= test_set_insertion.at[i, 'essay'].split()
x = random.randint(0,len(words))
words.insert(x, 'censorship')
new_sentence = ' '.join(words)
test_set_insertion.at[i,'essay'] = new_sentence
test_set_insertion.to_excel(adversarial_dir+"/valid_set_plus_ADVERSARIAL_ESSAYS-INSERTION_CENSORSHIP.xls")
```
### Anchor: The
```
# Attack 2c: Appending - "The"
#load excel into dataframe
test_set_append = pd.read_excel(test_set_file, sheet_name='valid_set')
#remove empty n/a cells
test_set_append = test_set_append.drop(['domain2_predictionid'], axis=1)
for i in test_set_append.index:
words= test_set_append.at[i, 'essay'].split()
words.append("the")
new_sentence = ' '.join(words)
test_set_append.at[i,'essay'] = new_sentence
test_set_append.to_excel(adversarial_dir+"/valid_set_plus_ADVERSARIAL_ESSAYS-APPEND_THE.xls")
# Attack 3c: Progressive Overload - "The"
#load excel into dataframe
test_set_progressive = pd.read_excel(test_set_file, sheet_name='valid_set')
#remove empty n/a cells
test_set_progressive = test_set_progressive.drop(['domain2_predictionid'], axis=1)
for i in test_set_progressive.index:
words= test_set_progressive.at[i, 'essay'].split()
if i < 591:
continue
if i < 641:
for x in range(0,i-590):
words[x] = "the"
new_sentence = ' '.join(words)
test_set_progressive.at[i,'essay'] = new_sentence
test_set_progressive.to_excel(adversarial_dir+"/valid_set_plus_ADVERSARIAL_ESSAYS-PROGRESSIVE_THE.xls")
# Attack 4c: Single Substitution - "The"
#load excel into dataframe
test_set_single = pd.read_excel(test_set_file, sheet_name='valid_set')
#remove empty n/a cells
test_set_single = test_set_single.drop(['domain2_predictionid'], axis=1)
for i in test_set_single.index:
words= test_set_single.at[i, 'essay'].split()
if i < 591:
continue
if i < 641:
words[i-591] = "the"
new_sentence = ' '.join(words)
test_set_single.at[i,'essay'] = new_sentence
test_set_single.to_excel(adversarial_dir+"/valid_set_plus_ADVERSARIAL_ESSAYS-SINGLE_THE.xls")
# Attack 5c: Insertion of "the" in random locations
#load excel into dataframe
test_set_insertion = pd.read_excel(test_set_file, sheet_name='valid_set')
#remove empty n/a cells
test_set_insertion = test_set_insertion.drop(['domain2_predictionid'], axis=1)
for i in test_set_insertion.index:
words= test_set_insertion.at[i, 'essay'].split()
x = random.randint(0,len(words))
words.insert(x, 'the')
new_sentence = ' '.join(words)
test_set_insertion.at[i,'essay'] = new_sentence
test_set_insertion.to_excel(adversarial_dir+"/valid_set_plus_ADVERSARIAL_ESSAYS-INSERTION_THE.xls")
```
# Facial Keypoint Detection
This project will be all about defining and training a convolutional neural network to perform facial keypoint detection, and using computer vision techniques to transform images of faces. The first step in any challenge like this will be to load and visualize the data you'll be working with.
Let's take a look at some examples of images and corresponding facial keypoints.
<img src='images/key_pts_example.png' width="500" height="500"/>
Facial keypoints (also called facial landmarks) are the small magenta dots shown on each of the faces in the image above. In each training and test image, there is a single face and **68 keypoints, with coordinates (x, y), for that face**. These keypoints mark important areas of the face: the eyes, corners of the mouth, the nose, etc. These keypoints are relevant for a variety of tasks, such as face filters, emotion recognition, pose recognition, and so on. Here they are, numbered, and you can see that specific ranges of points match different portions of the face.
<img src='images/landmarks_numbered.jpg' width="300" height="300"/>
---
## Load and Visualize Data
The first step in working with any dataset is to become familiar with your data; you'll need to load in the images of faces and their keypoints and visualize them! This set of image data has been extracted from the [YouTube Faces Dataset](https://www.cs.tau.ac.il/~wolf/ytfaces/), which includes videos of people in YouTube videos. These videos have been fed through some processing steps and turned into sets of image frames containing one face and the associated keypoints.
#### Training and Testing Data
This facial keypoints dataset consists of 5770 color images. All of these images are separated into either a training or a test set of data.
* 3462 of these images are training images, for you to use as you create a model to predict keypoints.
* 2308 are test images, which will be used to test the accuracy of your model.
The information about the images and keypoints in this dataset are summarized in CSV files, which we can read in using `pandas`. Let's read the training CSV and get the annotations in an (N, 2) array where N is the number of keypoints and 2 is the dimension of the keypoint coordinates (x, y).
---
First, before we do anything, we have to load in our image data. This data is stored in a zip file and in the below cell, we access it by its URL and unzip the data into a `/data/` directory that is separate from the workspace home directory.
```
# -- DO NOT CHANGE THIS CELL -- #
!mkdir /data
!wget -P /data/ https://s3.amazonaws.com/video.udacity-data.com/topher/2018/May/5aea1b91_train-test-data/train-test-data.zip
!unzip -n /data/train-test-data.zip -d /data
# import the required libraries
import glob
import os
import numpy as np
import pandas as pd
import copy
import random
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import torch
import cv2
```
Then, let's load in our training data and display some stats about that data to make sure it's been loaded in correctly!
```
key_pts_frame = pd.read_csv('/data/training_frames_keypoints.csv')
n = 0
image_name = key_pts_frame.iloc[n, 0]
key_pts = key_pts_frame.iloc[n, 1:].values
key_pts = key_pts.astype('float').reshape(-1, 2)
print('Image name: ', image_name)
print('Landmarks shape: ', key_pts.shape)
print('First 4 key pts: {}'.format(key_pts[:4]))
```
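The `reshape(-1, 2)` above reinterprets the flat row of 136 CSV values as 68 (x, y) pairs; a tiny sketch with 3 points:

```python
import numpy as np

flat = np.array([10, 20, 30, 40, 50, 60])   # x0, y0, x1, y1, x2, y2
pts = flat.astype('float').reshape(-1, 2)   # one (x, y) row per keypoint
print(pts.shape)  # (3, 2)
```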
## Look at some images
Below, is a function `show_keypoints` that takes in an image and keypoints and displays them. As you look at this data, **note that these images are not all of the same size**, and neither are the faces! To eventually train a neural network on these images, we'll need to standardize their shape.
```
def show_keypoints(image, key_pts, gt_pts=None):
"""Show image with keypoints"""
plt.imshow(image)
plt.scatter(key_pts[:, 0], key_pts[:, 1], s=20, marker='.', c='m')
# Display a few different types of images by changing the index n
# select an image by index in our data frame
plt.figure()
show_keypoints(mpimg.imread(os.path.join('/data/training/', image_name)),
key_pts)
plt.show()
```
## Dataset class and Transformations
To prepare our data for training, we'll be using PyTorch's Dataset class. Much of this code is a modified version of what can be found in the [PyTorch data loading tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html).
#### Dataset class
``torch.utils.data.Dataset`` is an abstract class representing a
dataset. This class will allow us to load batches of image/keypoint data, and uniformly apply transformations to our data, such as rescaling and normalizing images for training a neural network.
Your custom dataset should inherit ``Dataset`` and override the following
methods:
- ``__len__`` so that ``len(dataset)`` returns the size of the dataset.
- ``__getitem__`` to support the indexing such that ``dataset[i]`` can
be used to get the i-th sample of image/keypoint data.
Let's create a dataset class for our face keypoints dataset. We will
read the CSV file in ``__init__`` but leave the reading of images to
``__getitem__``. This is memory efficient because all the images are not
stored in the memory at once but read as required.
A sample of our dataset will be a dictionary
``{'image': image, 'keypoints': key_pts}``. Our dataset will take an
optional argument ``transform`` so that any required processing can be
applied on the sample. We will see the usefulness of ``transform`` in the
next section.
```
from torch.utils.data import Dataset, DataLoader, TensorDataset
class FacialKeypointsDataset(Dataset):
"""Face Landmarks dataset."""
def __init__(self, csv_file, root_dir, transform=None):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.key_pts_frame = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(self.key_pts_frame)
def __getitem__(self, idx):
image_name = os.path.join(self.root_dir,
self.key_pts_frame.iloc[idx, 0])
image = mpimg.imread(image_name)
# if image has an alpha color channel, get rid of it
if(image.shape[2] == 4):
image = image[:,:,0:3]
key_pts = self.key_pts_frame.iloc[idx, 1:].values
key_pts = key_pts.astype('float').reshape(-1, 2)
sample = {'image': image, 'keypoints': key_pts}
if self.transform:
sample = self.transform(sample)
return sample
```
Now that we've defined this class, let's instantiate the dataset and display some images.
```
# Construct the dataset
face_dataset = FacialKeypointsDataset(csv_file='/data/training_frames_keypoints.csv',
root_dir='/data/training/')
# print some stats about the dataset
print('Length of dataset: ', len(face_dataset))
# Display a few of the images from the dataset
num_to_display = 3
for i in range(num_to_display):
# define the size of images
fig = plt.figure(figsize=(20,10))
# randomly select a sample
rand_i = np.random.randint(0, len(face_dataset))
sample = face_dataset[rand_i]
# print the shape of the image and keypoints
print(i, sample['image'].shape, sample['keypoints'].shape)
ax = plt.subplot(1, num_to_display, i + 1)
ax.set_title('Sample #{}'.format(i))
# Using the same display function, defined earlier
show_keypoints(sample['image'], sample['keypoints'])
```
## Transforms
Now, the images above are not of the same size, and neural networks often expect images that are standardized; a fixed size, with a normalized range for color ranges and coordinates, and (for PyTorch) converted from numpy lists and arrays to Tensors.
Therefore, we will need to write some pre-processing code.
Let's create four transforms:
- ``Normalize``: to convert a color image to grayscale values with a range of [0,1] and normalize the keypoints to be in a range of about [-1, 1]
- ``Rescale``: to rescale an image to a desired size.
- ``RandomCrop``: to crop an image randomly.
- ``ToTensor``: to convert numpy images to torch images.
We will write them as callable classes instead of simple functions so
that parameters of the transform need not be passed every time it is
called. For this, we just need to implement the ``__call__`` method and
(if we require parameters to be passed in), the ``__init__`` method.
We can then use a transform like this:
tx = Transform(params)
transformed_sample = tx(sample)
Observe below how these transforms are generally applied to both the image and its keypoints.
```
from torchvision import datasets, transforms, models, utils
# tranforms
#Normalize
class Normalize(object):
"""Convert a color image to grayscale and normalize the color range to [0,1]."""
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
image_copy = np.copy(image)
key_pts_copy = np.copy(key_pts)
# convert image to grayscale
image_copy = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
# scale color range from [0, 255] to [0, 1]
image_copy= image_copy/255.0
# scale keypoints to be centered around 0 with a range of [-1, 1]
# approximate mean = 100, std = 50, so pts should be (pts - 100)/50
key_pts_copy = (key_pts_copy - 100)/50.0
return {'image': image_copy, 'keypoints': key_pts_copy}
#Rescale
class Rescale(object):
"""Rescale the image in a sample to a given size.
Args:
output_size (tuple or int): Desired output size. If tuple, output is
matched to output_size. If int, smaller of image edges is matched
to output_size keeping aspect ratio the same.
"""
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
self.output_size = output_size
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
h, w = image.shape[:2]
if isinstance(self.output_size, int):
if h > w:
new_h, new_w = self.output_size * h / w, self.output_size
else:
new_h, new_w = self.output_size, self.output_size * w / h
else:
new_h, new_w = self.output_size
new_h, new_w = int(new_h), int(new_w)
img = cv2.resize(image, (new_w, new_h))
# scale the pts, too
key_pts = key_pts * [new_w / w, new_h / h]
return {'image': img, 'keypoints': key_pts}
#Random Crop
class RandomCrop(object):
"""Crop randomly the image in a sample.
Args:
output_size (tuple or int): Desired output size. If int, square crop
is made.
"""
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
if isinstance(output_size, int):
self.output_size = (output_size, output_size)
else:
assert len(output_size) == 2
self.output_size = output_size
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
h, w = image.shape[:2]
new_h, new_w = self.output_size
top = np.random.randint(0, h - new_h)
left = np.random.randint(0, w - new_w)
image = image[top: top + new_h,
left: left + new_w]
key_pts = key_pts - [left, top]
return {'image': image, 'keypoints': key_pts}
#Convert to tensor
class ToTensor(object):
"""Convert ndarrays in sample to Tensors."""
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
# if image has no grayscale color channel, add one
if(len(image.shape) == 2):
# add that third color dim
image = image.reshape(image.shape[0], image.shape[1], 1)
# swap color axis because
# numpy image: H x W x C
# torch image: C X H X W
image = image.transpose((2, 0, 1))
return {'image': torch.from_numpy(image),
'keypoints': torch.from_numpy(key_pts)}
```
## Test out the transforms
Let's test these transforms out to make sure they behave as expected. As you look at each transform, note that, in this case, **order does matter**. For example, you cannot crop an image using a value larger than the original image (and the original images vary in size!), but, if you first rescale the original image, you can then crop it to any size smaller than the rescaled size.
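The size arithmetic inside `Rescale` (match the smaller image edge to the target, keep the aspect ratio) can be checked on its own, without OpenCV. The helper below is just an illustrative sketch of that logic, not part of the notebook's pipeline:

```python
def rescale_size(h, w, output_size):
    """Mirror Rescale's logic: an int output_size is matched to the smaller edge."""
    if isinstance(output_size, int):
        if h > w:
            new_h, new_w = output_size * h / w, output_size
        else:
            new_h, new_w = output_size, output_size * w / h
    else:
        new_h, new_w = output_size
    return int(new_h), int(new_w)

# a 96x192 image rescaled with int 100: the smaller edge (96) becomes 100
print(rescale_size(96, 192, 100))   # (100, 200)
print(rescale_size(192, 96, 100))   # (200, 100)
```

Both results keep the original 1:2 aspect ratio, which is why a later `RandomCrop` to a square size is safe once the rescaled size is large enough.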
```
# test out some of these transforms
rescale = Rescale(100)
crop = RandomCrop(50)
composed = transforms.Compose([Rescale(250),
RandomCrop(224)])
# apply the transforms to a sample image
test_num = 100
sample = face_dataset[test_num]
fig = plt.figure()
for i, tx in enumerate([rescale, crop, composed]):
transformed_sample = tx(sample)
ax = plt.subplot(1, 3, i + 1)
plt.tight_layout()
ax.set_title(type(tx).__name__)
show_keypoints(transformed_sample['image'], transformed_sample['keypoints'])
plt.show()
```
## Create the transformed dataset
Apply the transforms in order to get grayscale images of the same shape. Verify that your transform works by printing out the shape of the resulting data (printing out a few examples should show you a consistent tensor size).
```
# define the data transform
# order matters! i.e. rescaling should come before a smaller crop
data_transform = transforms.Compose([Rescale(250),
RandomCrop(224),
Normalize(),
ToTensor()])
# create the transformed dataset
transformed_dataset = FacialKeypointsDataset(csv_file='/data/training_frames_keypoints.csv',
root_dir='/data/training/',
transform=data_transform)
# print some stats about the transformed data
print('Number of images: ', len(transformed_dataset))
# make sure the sample tensors are the expected size
for i in range(5):
sample = transformed_dataset[i]
print(i, sample['image'].size(), sample['keypoints'].size())
```
## Data Iteration and Batching
Right now, we are iterating over this data using a ``for`` loop, but we are missing out on a lot of PyTorch's dataset capabilities, specifically the abilities to:
- Batch the data
- Shuffle the data
- Load the data in parallel using ``multiprocessing`` workers.
``torch.utils.data.DataLoader`` is an iterator which provides all these
features, and we'll see this in use in the *next* notebook, Notebook 2, when we load data in batches to train a neural network!
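To make the batching and shuffling ideas concrete before Notebook 2, here is a tiny pure-Python sketch of what a data loader does. The name `simple_loader` is illustrative; the real `torch.utils.data.DataLoader` additionally handles collation into tensors and parallel workers:

```python
import random

def simple_loader(dataset, batch_size, shuffle=False, seed=None):
    """Yield lists of samples, mimicking DataLoader's batching and shuffling."""
    indices = list(range(len(dataset)))
    if shuffle:
        random.Random(seed).shuffle(indices)
    for start in range(0, len(indices), batch_size):
        yield [dataset[i] for i in indices[start:start + batch_size]]

data = list(range(10))
batches = list(simple_loader(data, batch_size=4))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Note the last batch is smaller; `DataLoader` offers a `drop_last` option for exactly this situation.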
---
## Ready to Train!
Now that you've seen how to load and transform our data, you're ready to build a neural network to train on this data.
In the next notebook, you'll be tasked with creating a CNN for facial keypoint detection.
| github_jupyter |
# About Notebook
- [**Kaggle Housing Dataset**](https://www.kaggle.com/ananthreddy/housing)
- Implement linear regression using:
1. **Batch** Gradient Descent
2. **Stochastic** Gradient Descent
3. **Mini-batch** Gradient Descent
**Note**: _Trying to implement using **PyTorch** instead of numpy_
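As a compact reference for the three variants implemented below with PyTorch, here is a NumPy sketch of the update rules. The `step_*` names and the toy data are illustrative, not from the notebook:

```python
import numpy as np

def step_batch(theta, X, y, alpha):
    # full-dataset gradient: (1/m) * X^T (X theta - y)
    m = X.shape[0]
    return theta - alpha / m * X.T @ (X @ theta - y)

def step_stochastic(theta, X, y, alpha):
    # one update per sample
    for i in range(X.shape[0]):
        theta = theta - alpha * (X[i] @ theta - y[i]) * X[i]
    return theta

def step_minibatch(theta, X, y, alpha, batch_size):
    # one update per small slice of the data
    for s in range(0, X.shape[0], batch_size):
        Xb, yb = X[s:s + batch_size], y[s:s + batch_size]
        theta = theta - alpha / len(Xb) * Xb.T @ (Xb @ theta - yb)
    return theta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta
theta = np.zeros(3)
for _ in range(500):
    theta = step_batch(theta, X, y, alpha=0.1)
print(np.round(theta, 3))  # close to [1., -2., 0.5]
```

The stochastic and mini-batch variants trade gradient accuracy per step for more frequent updates, which is the trade-off timed at the end of this notebook.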
```
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import torch
def banner(msg, _verbose=1):
if not _verbose:
return
print("-"*80)
print(msg.upper())
print("-"*80)
```
# Data import and preprocessing
```
df = pd.read_csv('Housing.csv', index_col=0)
def convert_to_binary(string):
return int('yes' in string)
for col in df.columns:
if df[col].dtype == 'object':
df[col] = df[col].apply(convert_to_binary)
data = df.values
scaler = StandardScaler()
data = scaler.fit_transform(data)
X = data[:, 1:]
y = data[:, 0]
print("X: ", X.shape)
print("y: ", y.shape)
X_train, X_valid, y_train, y_valid = map(torch.from_numpy, train_test_split(X, y, test_size=0.2))
print("X_train: ", X_train.shape)
print("y_train: ", y_train.shape)
print("X_valid: ", X_valid.shape)
print("y_valid: ", y_valid.shape)
class LinearRegression:
def __init__(self, X_train, y_train, X_valid, y_valid):
self.X_train = X_train
self.y_train = y_train
self.X_valid = X_valid
self.y_valid = y_valid
self.Theta = torch.randn((X_train.shape[1]+1)).type(type(X_train))
def _add_bias(self, tensor):
bias = torch.ones((tensor.shape[0], 1)).type(type(tensor))
return torch.cat((bias, tensor), 1)
def _forward(self, tensor):
return torch.matmul(
self._add_bias(tensor),
self.Theta
).view(-1)
def forward(self, train=True):
if train:
return self._forward(self.X_train)
else:
return self._forward(self.X_valid)
def _cost(self, X, y):
y_hat = self._forward(X)
mse = torch.sum(torch.pow(y_hat - y, 2))/2/X.shape[0]
return mse
def cost(self, train=True):
if train:
return self._cost(self.X_train, self.y_train)
else:
return self._cost(self.X_valid, self.y_valid)
def batch_update_vectorized(self):
m, _ = self.X_train.size()
return torch.matmul(
self._add_bias(self.X_train).transpose(0, 1),
(self.forward() - self.y_train)
) / m
def batch_update_iterative(self):
m, _ = self.X_train.size()
update_theta = None
X = self._add_bias(self.X_train)
for i in range(m):
if update_theta is None:
update_theta = (self._forward(self.X_train[i].view(1, -1)) - self.y_train[i]) * X[i]
else:
update_theta += (self._forward(self.X_train[i].view(1, -1)) - self.y_train[i]) * X[i]
return update_theta/m
def batch_train(self, tolerance=0.01, alpha=0.01):
converged = False
prev_cost = self.cost()
init_cost = prev_cost
num_epochs = 0
while not converged:
self.Theta = self.Theta - alpha * self.batch_update_vectorized()
cost = self.cost()
if (prev_cost - cost) < tolerance:
converged = True
prev_cost = cost
num_epochs += 1
banner("Batch")
print("\tepochs: ", num_epochs)
print("\tcost before optim: ", init_cost)
print("\tcost after optim: ", cost)
print("\ttolerance: ", tolerance)
print("\talpha: ", alpha)
def stochastic_train(self, tolerance=0.01, alpha=0.01):
converged = False
m, _ = self.X_train.size()
X = self._add_bias(self.X_train)
init_cost = self.cost()
num_epochs=0
while not converged:
prev_cost = self.cost()
for i in range(m):
self.Theta = self.Theta - alpha * (self._forward(self.X_train[i].view(1, -1)) - self.y_train[i]) * X[i]
cost = self.cost()
if prev_cost-cost < tolerance:
converged=True
num_epochs += 1
banner("Stochastic")
print("\tepochs: ", num_epochs)
print("\tcost before optim: ", init_cost)
print("\tcost after optim: ", cost)
print("\ttolerance: ", tolerance)
print("\talpha: ", alpha)
def mini_batch_train(self, tolerance=0.01, alpha=0.01, batch_size=8):
converged = False
m, _ = self.X_train.size()
X = self._add_bias(self.X_train)
init_cost = self.cost()
num_epochs=0
while not converged:
prev_cost = self.cost()
for i in range(0, m, batch_size):
self.Theta = self.Theta - alpha / batch_size * torch.matmul(
X[i:i+batch_size].transpose(0, 1),
self._forward(self.X_train[i: i+batch_size]) - self.y_train[i: i+batch_size]
)
cost = self.cost()
if prev_cost-cost < tolerance:
converged=True
num_epochs += 1
banner("Mini-batch")
print("\tepochs: ", num_epochs)
print("\tcost before optim: ", init_cost)
print("\tcost after optim: ", cost)
print("\ttolerance: ", tolerance)
print("\talpha: ", alpha)
%%time
l = LinearRegression(X_train, y_train, X_valid, y_valid)
l.mini_batch_train()
%%time
l = LinearRegression(X_train, y_train, X_valid, y_valid)
l.stochastic_train()
%%time
l = LinearRegression(X_train, y_train, X_valid, y_valid)
l.batch_train()
```
| github_jupyter |
# How to handle WelDX files
In this notebook we will demonstrate how to create, read, and update ASDF files created by WelDX. All the needed functionality is contained in a single class named `WeldxFile`. We are going to show different modes of operation: working with physical files on your hard drive and with in-memory files, in both read-only and read-write mode.
## Imports
The `WeldxFile` class is imported from the top level of the weldx package.
```
from datetime import datetime
import numpy as np
from weldx import WeldxFile
```
## Basic operations
Now we create our first file by invoking the `WeldxFile` constructor without any additional arguments. Doing so creates an in-memory file. This means that your changes will be temporary until you write them to an actual file on your hard drive. The `file_handle` attribute points to the actual underlying file; in this case it is the in-memory buffer, as shown below.
```
file = WeldxFile()
file.file_handle
```
Next we assign some dictionary-like data to the file by storing it under a key name enclosed in square brackets.
Then we look at the representation of the file header or contents. This will depend on the execution environment.
In JupyterLab you will see an interactive tree like structure, which can be expanded and searched.
The root of the tree is denoted as "root", followed by the children "asdf_library" and "history" created by the ASDF library. We attached the additional child "some_data" with our assignment.
```
data = {"data_sets": {"first": np.random.random(100), "time": datetime.now()}}
file["some_data"] = data
file
```
Note that here we are using some very common types, namely a NumPy array and a timestamp. For specialized weldx types like the coordinate system manager, (welding) measurements, etc., the weldx package provides ASDF extensions that handle those types automatically when loading and saving ASDF data. You do not need to worry about them. If you try to save types which cannot be handled by ASDF, you will trigger an error.
We could also have created the same structure in one step:
```
file = WeldxFile(tree=data, mode="rw")
file
```
You might have noticed that we got a warning about the in-memory operation when showing the file in Jupyter.
This time we passed the additional argument `mode="rw"`, which indicates that we want to perform write operations, either in memory or on a passed physical file. So this warning went away.
We can use all dictionary operations on the data we like, e.g. update, assign, and delete items.
```
file["data_sets"]["second"] = {"data": np.random.random(100), "time": datetime.now()}
# delete the first data set again:
del file["data_sets"]["first"]
file
```
We can also iterate over all keys as usual. You can also have a look at the documentation of the builtin type `dict` for a complete overview of its features.
```
for key, value in file.items():
print(key, value)
```
### Access to data by attributes
The access by key names can become tedious when deeply nested dictionaries are involved. We provide handling via attributes like this:
```
accessible_by_attribute = file.as_attr()
accessible_by_attribute.data_sets.second
```
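The idea behind `as_attr` can be illustrated with a few lines of plain Python. This is only a conceptual sketch, not weldx's actual implementation:

```python
from types import SimpleNamespace

def as_attr(obj):
    """Recursively wrap nested dicts so their keys become attributes."""
    if isinstance(obj, dict):
        return SimpleNamespace(**{k: as_attr(v) for k, v in obj.items()})
    return obj

tree = {"data_sets": {"second": {"data": [1, 2, 3]}}}
wrapped = as_attr(tree)
print(wrapped.data_sets.second.data)  # [1, 2, 3]
```

The dotted access replaces chains of bracketed lookups like `tree["data_sets"]["second"]["data"]`.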
## Writing files to disk
In order to make your changes persistent, we are going to save the memory-backed file to disk by invoking `WeldxFile.write_to`.
```
file.write_to("example.asdf")
```
This newly created file can be opened up again, in read-write mode like by passing the appropriate arguments.
```
example = WeldxFile("example.asdf", mode="rw")
example["updated"] = True
example.close()
```
Note that we closed the file here explicitly. Before closing, we wrote a simple item to the tree. But let's see what happens if we open the file once again.
```
example = WeldxFile("example.asdf", mode="rw")
display(example)
example.close()
```
As you can see, the `updated` state has been written because we closed the file properly. If we had omitted closing the file,
our changes would have been lost when the object ran out of scope or Python terminated.
## Handling updates within a context manager
To ensure you will not forget to update your file after making changes,
we are able to enclose our file-changing operations within a context manager.
This ensures that all operations done in this context (the `with` block) are being written to the file, once the context is left.
Note that the underlying file is also closed after the context ends. This is useful when you have to update lots of files, as there is a limited number of file handles an operating system can deal with.
```
with WeldxFile("example.asdf", mode="rw") as example:
example["updated"] = True
fh = example.file_handle
# now the context ends, and the file is being saved to disk again.
# lets check the file handle has been closed, after the context ended.
assert fh.closed
```
Let us inspect the file once again, to see whether our `updated` item has been correctly written.
```
WeldxFile("example.asdf")
```
In case an error is triggered (e.g. an exception is raised) inside the context, the underlying file is still updated. You can prevent this behavior by passing `sync=False` during file construction.
```
try:
with WeldxFile("example.asdf", mode="rw") as file:
file["updated"] = False
raise Exception("oh no")
except Exception as e:
print("expected error:", e)
WeldxFile("example.asdf")
```
## Keeping a log of changes when manipulating a file
It can become quite handy to know what has been done to a file in the past. Weldx files provide a history log, in which arbitrary strings can be stored along with time stamps and the software used. We will quickly run you through the process of adding history entries to your file.
```
filename_hist = "example_history.asdf"
with WeldxFile(filename_hist, mode="rw") as file:
file["some"] = "changes"
file.add_history_entry("added some changes")
WeldxFile(filename_hist)["history"]
```
You may also want to describe custom software, say a library or tool used to generate or modify the data in the file; we pass it when creating our WeldxFile.
```
software = dict(
name="my_tool", version="1.0", homepage="https://my_tool.org", author="the crowd"
)
with WeldxFile(filename_hist, mode="rw", software_history_entry=software) as file:
file["some"] = "changes"
file.add_history_entry("added more changes")
```
Let's now inspect the history we just wrote.
```
WeldxFile(filename_hist)["history"]["entries"][-1]
```
The `entries` key is a list of all log entries, to which new entries are appended. We have proper time stamps indicating when each change happened, the actual log entry, and optionally the custom software used to make the change.
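Conceptually, such a history is just a list of timestamped records. A minimal stand-alone sketch of that structure (not weldx's internal representation) could look like this:

```python
from datetime import datetime, timezone

def add_history_entry(history, description, software=None):
    """Append a timestamped log record, mirroring the structure shown above."""
    entry = {"time": datetime.now(timezone.utc).isoformat(),
             "description": description}
    if software is not None:
        entry["software"] = software
    history.setdefault("entries", []).append(entry)

history = {}
add_history_entry(history, "added some changes")
add_history_entry(history, "added more changes",
                  software={"name": "my_tool", "version": "1.0"})
print(len(history["entries"]))                      # 2
print(history["entries"][-1]["software"]["name"])   # my_tool
```

New entries always land at the end of the list, so the last element reflects the most recent change, just as in the weldx output above.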
## Handling of custom schemas
An important aspect of WelDX or ASDF files is, that you can validate them to comply with a defined schema. A schema defines required and optional attributes a tree structure has to provide to pass the schema validation. Further the types of these attributes can be defined, e.g. the data attribute should be a NumPy array, or a timestamp should be of type `pandas.Timestamp`.
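The essence of schema validation, required keys plus type constraints, can be sketched without ASDF at all. The toy checker below only illustrates the concept; it has nothing to do with the actual YAML schemas and validators used by weldx:

```python
def validate(tree, schema):
    """schema maps each required key to an expected type; raise on violations."""
    errors = []
    for key, expected_type in schema.items():
        if key not in tree:
            errors.append(f"missing required key: {key}")
        elif not isinstance(tree[key], expected_type):
            errors.append(f"{key}: expected {expected_type.__name__}, "
                          f"got {type(tree[key]).__name__}")
    if errors:
        raise ValueError("; ".join(errors))

schema = {"workpiece": dict, "welding_current": list}
validate({"workpiece": {}, "welding_current": [1.0, 2.0]}, schema)  # passes silently
try:
    validate({"welding_current": [1.0]}, schema)
except ValueError as e:
    print(e)  # missing required key: workpiece
```

A real schema language additionally supports nesting, optional attributes, and tagged custom types, but the pass/fail behavior demonstrated later in this section follows the same pattern.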
There are several schemas provided by WelDX, which can be used by passing them to the `custom_schema` argument. It is expected to be a path-like type, so a string (`str`) or `pathlib.Path` is accepted. The provided utility function `get_schema_path` returns the path to a named schema, so its output can be passed directly as `WeldxFile(custom_schema=...)`.
```
from weldx.asdf.util import get_schema_path
schema = get_schema_path("single_pass_weld-0.1.0")
schema
```
This schema defines a complete experimental setup with measurement data and, for example, requires the following attributes to be defined in our tree:
- workpiece
- TCP
- welding_current
- welding_voltage
- measurements
- equipment
We use a testing function to provide this data now, and validate it against the schema by passing the `custom_schema` during WeldxFile creation.
Here we just have a look at the process parameters sub-dictionary.
```
from weldx.asdf.cli.welding_schema import single_pass_weld_example
_, single_pass_weld_data = single_pass_weld_example(out_file=None)
display(single_pass_weld_data["process"])
```
That is a lot of data, containing complex data structures and objects describing the whole experiment including measurement data.
We can now create a new `WeldxFile` and validate the data against the schema.
```
WeldxFile(tree=single_pass_weld_data, custom_schema=schema, mode="rw")
```
But what would happen if we forgot an important attribute? Let's have a closer look...
```
# simulate we forgot something important, so we delete the workpiece:
del single_pass_weld_data["workpiece"]
# now create the file again, and see what happens:
try:
WeldxFile(tree=single_pass_weld_data, custom_schema=schema, mode="rw")
except Exception as e:
display(e)
```
We receive a ValidationError from the ASDF library, which tells us exactly what information is missing. The same happens if we accidentally pass the wrong type.
```
# simulate a wrong type by changing it to a NumPy array.
single_pass_weld_data["welding_current"] = np.zeros(10)
# now create the file again, and see what happens:
try:
WeldxFile(tree=single_pass_weld_data, custom_schema=schema, mode="rw")
except Exception as e:
display(e)
```
Here we see that a `signal` tag is expected, but an `asdf/core/ndarray-1.0.0` was received.
The ASDF library assigns tags to certain types to handle their storage in the file format.
As shown, the `signal` tag is contained in the `weldx/measurement` container, provided by `weldx.bam.de`. The tags and schemas also carry a version number, so future updates of the software remain manageable.
Custom schemas can be used to define your own protocols or standards describing your data.
## Summary
In this tutorial we have encountered how to easily open, inspect, manipulate, and update ASDF files created by WelDX. We've learned that these files can store a variety of different data types and structures.
Discussed features:
* Opening in read/write mode `WeldxFile(mode="rw")`.
* Creating files in memory (passing no file name to `WeldxFile()` constructor).
* Writing to disk (`WeldxFile.write_to`).
* Keeping log of changes (`WeldxFile.history`, `WeldxFile.add_history_entry`).
* Validation against a schema `WeldxFile(custom_schema="/path/my_schema.yaml")`
| github_jupyter |
```
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import os
import os.path as path
import itertools
from sklearn.model_selection import train_test_split
import tensorflow.keras as keras
from tensorflow.keras.layers import Input,InputLayer, Dense, Activation, BatchNormalization, Flatten, Conv2D
from tensorflow.keras.layers import MaxPooling2D, Dropout
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.optimizers import SGD, Adam
from tensorflow.keras.callbacks import ModelCheckpoint,LearningRateScheduler, \
EarlyStopping
from tensorflow.keras import backend as K
from tensorflow.keras.utils import to_categorical, multi_gpu_model, Sequence
from tensorflow.keras.preprocessing.image import ImageDataGenerator
os.environ['CUDA_VISIBLE_DEVICES'] = '2'
data_dir = 'data/'
# train_data = np.load(path.join(data_dir, 'imagenet_6_class_172_train_data.npz'))
# val_data = np.load(path.join(data_dir, 'imagenet_6_class_172_val_data.npz'))
x_train = np.load(path.join(data_dir, 'imagenet_6_class_172_x_train.npy'))
y_train = np.load(path.join(data_dir, 'imagenet_6_class_172_y_train.npy'))
x_val = np.load(path.join(data_dir, 'imagenet_6_class_172_x_val.npy'))
y_val = np.load(path.join(data_dir, 'imagenet_6_class_172_y_val.npy'))
y_list = np.load(path.join(data_dir, 'imagenet_6_class_172_y_list.npy'))
# x_train = train_data['x_data']
# y_train = train_data['y_data']
# x_val = val_data['x_data']
# y_val = val_data['y_data']
x_test = x_val
y_test = y_val
# y_list = val_data['y_list']
x_train.shape, y_train.shape, x_val.shape, y_val.shape, x_test.shape, y_test.shape, y_list.shape
y_train = to_categorical(y_train)
y_val = to_categorical(y_val)
y_test = y_val
x_train.shape, y_train.shape, x_val.shape, y_val.shape, x_test.shape, y_test.shape
input_shape = x_train[0].shape
output_size = len(y_list)
def build_2d_cnn_custom_ch_32_DO(conv_num=1):
input_layer = Input(shape=input_shape)
x = input_layer
for i in range(conv_num):
x = Conv2D(kernel_size=5, filters=32*(2**(i//2)), strides=(1,1), padding='same')(x)
# x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=2, strides=(2,2), padding='same')(x)
x = Flatten()(x)
x = Dropout(0.75)(x)
output_layer = Dense(output_size, activation='softmax')(x)
model = Model(inputs=input_layer, outputs=output_layer)
return model
for i in range(1, 8):
model = build_2d_cnn_custom_ch_32_DO(conv_num=i)
model.summary()
del model
class BalanceDataGenerator(Sequence):
def __init__(self, x_data, y_data, batch_size, shuffle=True):
self.x_data = x_data
self.y_data = y_data
self.batch_size = batch_size
self.shuffle = shuffle
self.sample_size = int(np.sum(y_data, axis=0).min())
self.data_shape = x_data.shape[1:]
self.y_label = self.y_data.argmax(axis=1)
self.labels = np.unique(self.y_label)
self.on_epoch_end()
def __len__(self):
return int(np.ceil(len(self.labels) * self.sample_size / self.batch_size))
def on_epoch_end(self):
self.indexes = np.zeros((len(self.labels), self.sample_size))
for i, label in enumerate(self.labels):
y_index = np.argwhere(self.y_label==label).squeeze()
if self.shuffle == True:
self.indexes[i] = np.random.choice(y_index,
self.sample_size,
replace=False)
else:
self.indexes[i] = y_index[:self.sample_size]
self.indexes = self.indexes.flatten().astype(np.int32)
if self.shuffle == True:
np.random.shuffle(self.indexes)
def __getitem__(self, batch_idx):
indices = self.indexes[batch_idx*self.batch_size: (batch_idx+1)*self.batch_size]
return self.x_data[indices], self.y_data[indices]
batch_size = 40
data_generator = BalanceDataGenerator(x_train, y_train,
batch_size=batch_size)
for i in range(6, 8):
base = 'vis_imagenet_6_class_2D_CNN_custom_ch_32_DO_075_DO'
model_name = base+'_{}_conv'.format(i)
# with tf.device('/cpu:1'):
model = build_2d_cnn_custom_ch_32_DO(conv_num=i)
# model = multi_gpu_model(model, gpus=2)
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=1e-4),
metrics=['accuracy'])
model_path = 'model/checkpoint/'+model_name+'_checkpoint/'
os.makedirs(model_path, exist_ok=True)
model_filename = model_path+'{epoch:03d}-{val_loss:.4f}.hdf5'
checkpointer = ModelCheckpoint(filepath = model_filename, monitor = "val_loss",
verbose=1, save_best_only=True)
early_stopping = EarlyStopping(monitor='val_loss', patience=50)
hist = model.fit_generator(data_generator,
steps_per_epoch=len(x_train)//batch_size,
epochs=10000,
validation_data=(x_val, y_val),
callbacks = [checkpointer, early_stopping],
workers=8,
use_multiprocessing=True
)
print()
print(model_name, 'Model')
fig, ax = plt.subplots()
ax.plot(hist.history['loss'], 'y', label='train loss')
ax.plot(hist.history['val_loss'], 'r', label='val loss')
ax.plot(hist.history['acc'], 'b', label='train acc')
ax.plot(hist.history['val_acc'], 'g', label='val acc')
ax.set_xlabel('epoch')
ax.set_ylabel('loss')
ax.legend(loc='upper left')
plt.show()
png_path = 'visualization/learning_curve/'
filename = model_name+'.png'
os.makedirs(png_path, exist_ok=True)
fig.savefig(png_path+filename, transparent=True)
model.save(model_path+'000_last.hdf5')
del(model)
model_path = 'model/checkpoint/'+model_name+'_checkpoint/'
model_filename = model_path + sorted(os.listdir(model_path))[-1]
model = load_model(model_filename)
[loss, accuracy] = model.evaluate(x_test, y_test)
print('Loss:', loss, 'Accuracy:', accuracy)
print()
del(model)
log_dir = 'log'
os.makedirs(log_dir, exist_ok=True)
base = 'vis_imagenet_6_class_2D_CNN_custom_ch_32_DO_075_DO'
with open(path.join(log_dir, base), 'w') as log_file:
for i in range(6, 8):
model_name = base+'_{}_conv'.format(i)
print()
print(model_name, 'Model')
model_path = 'model/checkpoint/'+model_name+'_checkpoint/'
model_filename = model_path + sorted(os.listdir(model_path))[-1]
model = load_model(model_filename)
model.summary()
[loss, accuracy] = model.evaluate(x_test, y_test)
print('Loss:', loss, 'Accuracy:', accuracy)
del(model)
log_file.write('\t'.join([model_name, str(accuracy), str(loss)])+'\n')
for i in range(6, 8):
model_name = base+'_{}_conv'.format(i)
print()
print(model_name, 'Model')
model_path = 'model/checkpoint/'+model_name+'_checkpoint/'
model_filename = model_path + '000_last.hdf5'
model = load_model(model_filename)
model.summary()
[loss, accuracy] = model.evaluate(x_test, y_test)
print('Loss:', loss, 'Accuracy:', accuracy)
del(model)
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report, confusion_matrix
i = 6
model_name = base+'_{}_conv'.format(i)
print()
print(model_name, 'Model')
model_path = 'model/checkpoint/'+model_name+'_checkpoint/'
model_filename = model_path + sorted(os.listdir(model_path))[-1]
model = load_model(model_filename)
model.summary()
[loss, accuracy] = model.evaluate(x_test, y_test)
print('Loss:', loss, 'Accuracy:', accuracy)
Y_pred = model.predict(x_test)
y_pred = np.argmax(Y_pred, axis=1)
y_real = np.argmax(y_test, axis=1)
confusion_mat = confusion_matrix(y_real, y_pred)
print('Confusion Matrix')
print(confusion_mat)
print()
print('Classification Report')
print(classification_report(y_real, y_pred))
print()
# labels = y_table.T[0]
plt.figure(figsize=(4,4), dpi=100)
plt.xticks(np.arange(len(y_list)), y_list)
plt.yticks(np.arange(len(y_list)), y_list)
plt.imshow(confusion_mat, interpolation='nearest', cmap=plt.cm.bone_r)
del(model)
```
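The balanced index sampling used by `BalanceDataGenerator` above can be checked in isolation. The NumPy sketch below (with an illustrative `balanced_indices` helper, not taken from the notebook) undersamples every class to the size of the rarest one:

```python
import numpy as np

def balanced_indices(labels, rng=None):
    """Undersample every class to the rarest class's size,
    mirroring the idea behind BalanceDataGenerator."""
    rng = rng or np.random.default_rng(0)
    classes, counts = np.unique(labels, return_counts=True)
    sample_size = counts.min()
    picks = [rng.choice(np.flatnonzero(labels == c), sample_size, replace=False)
             for c in classes]
    return np.concatenate(picks)

labels = np.array([0] * 50 + [1] * 20 + [2] * 30)
idx = balanced_indices(labels)
print(np.bincount(labels[idx]))  # [20 20 20]
```

Every epoch the generator redraws such a subset, so over many epochs all majority-class samples are eventually seen while each single epoch stays class-balanced.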
| github_jupyter |
# This is the EDA file
## The Main Findings I found were as follows
1. The main table in the database is the "Results" table
2. There are 28 columns in total in the "Results" table
3. There are 17 ID columns in total, where each ID column refers to some other table in the database
4. Row number 46360 contains all null values, so we can drop it
5. There are 7 rows with Activity_ID = ######### where every corresponding feature is blank, and we can drop those
6. Lab_ID contains ["None", "Unknown", "LabID", "none", "NONE", "None "] style null values, and we can map them to NaNs
7. Date_Collected has no null values
8. Dates in the Time_Collected column look wrong, as they are from the years 1899-1902
9. Also, all times in the Date_Collected column are 00:00:00, so maybe we can just take the time from Time_Collected and the date from Date_Collected and join them together to get an accurate datetime
10. Actual_Result has >, <, *, and '.' as special characters
11. More findings coming in the future
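Finding 9 can be sketched with the standard library: take the date from one timestamp and the time-of-day from the other. The sample values below are made up for illustration:

```python
from datetime import datetime

def combine_date_time(date_collected, time_collected):
    """Merge the date of one timestamp with the time-of-day of another."""
    d = datetime.fromisoformat(date_collected)
    t = datetime.fromisoformat(time_collected)
    return datetime.combine(d.date(), t.time())

# Date_Collected carries the real date (its time is always 00:00:00),
# Time_Collected carries the real time (its 1899-1902 year is bogus)
print(combine_date_time("2015-06-01 00:00:00", "1899-12-30 14:35:00"))
# 2015-06-01 14:35:00
```

In pandas the same effect could be achieved column-wise, but the logic per row is exactly this `datetime.combine` call.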
```
import pandas as pd
import os
import datetime
pd.set_option("display.max_rows", 999)
os.getcwd()
data_folder = "../data/charles_river_samples_csv"
os.chdir(data_folder)
os.getcwd()
os.listdir()
#Results is the central table where all the measurements are taken, it's encoded in latin-1
results = pd.read_csv("Results.csv",encoding="latin-1")
results.columns.value_counts().sum()
#There are 28 columns in total
results.columns
# We can separately treat ID columns as I think most of them are categorical or level types
# There are total 17 ID columns
id_columns = [col for col in results.columns if "ID" in col]
sum([1 for i in id_columns])
results.shape
#There are 46,458 rows and 28 columns out of which 17 are ID columns and 11 are non_ID columns
results.head()
results.isnull().sum()
#Every row contains a null value
results.index[results.isnull().all(1)]
# So row number 46360 contains all null values so we can drop it
results = results.drop(46360)
results.isnull().sum()
results.shape
results["Activity_ID"].value_counts()
# There is a row with Activity_ID = ######### which is wrong
# Let's find which rows those are
results[results["Activity_ID"] == "################"]
# So these are the rows with incorrect values in them. We can drop them
results.drop(results.loc[results['Activity_ID']=="################"].index, inplace=True)
(results["Activity_ID"].value_counts()>1).sum()
# Now I think we can safely assume that Activity_ID uniquely describes each row in the dataset
#Checking out percentages of null values in dataset
results.isnull().sum()*100/results.shape[0]
# Most of the highly null columns(>70%) are comments and we can drop those columns
drop_columns = ["Associated_ID", "Result_Comment","Field_Comment","Event_Comment","QAQC_Comment","Percent_RPD"]
results.drop(drop_columns, axis=1,inplace=True)
results.isnull().sum()*100/results.shape[0]
results.isna().sum()*100/results.shape[0]
# Now let's concentrate on Lab_ID
for k,v in results["Lab_ID"].value_counts(sort=True).to_dict().items():
print(k,v)
# Change first few rows to nans
nans = ["None", "Unknown", "LabID", "none", "NONE", "None "]
results["Lab_ID"] = results["Lab_ID"].map(lambda x: "nan" if x in nans else x)
# Date_Collected has no null values
# However, the dates in the Time_Collected column look wrong, as they are from 1899-1902 and
# don't match the dates in the Date_Collected column
# Also, all times in the Date_Collected column are 00:00:00, so
# maybe we can just take the time from Time_Collected and the date from Date_Collected and join them together
# to get an accurate datetime
results["Time_Collected"].value_counts(sort=True)
# Extracting date and time from Date_Collected and Time_Collected; we can merge them or use them as features separately
results["Date_Collected"] = pd.to_datetime(results["Date_Collected"])
results["Year"] = pd.DatetimeIndex(results["Date_Collected"]).year
results["Month"] = pd.DatetimeIndex(results["Date_Collected"]).month
results["Day"] = pd.DatetimeIndex(results["Date_Collected"]).day
results["Hour"] = pd.DatetimeIndex(results["Time_Collected"]).hour
results["Minute"] = pd.DatetimeIndex(results["Time_Collected"]).minute
results["Second"] = pd.DatetimeIndex(results["Time_Collected"]).second
## Site ID
for k,v in results["Actual_Result"].value_counts(sort=True).to_dict().items():
print(k,v)
```
| github_jupyter |
# Gaussian Mixture Model with ADVI
Here, we describe how to use ADVI for inference of a Gaussian mixture model. First, we will show that inference with ADVI does not require modifying the stochastic model; we just call a function. Then, we will show how to use mini-batches, which are useful for large datasets. In that case, the model needs to be changed slightly.
First, create artificial data from a mixture of two Gaussian components.
```
%matplotlib inline
%env THEANO_FLAGS=device=cpu,floatX=float32
import theano
import pymc3 as pm
from pymc3 import Normal, Metropolis, sample, MvNormal, Dirichlet, \
DensityDist, find_MAP, NUTS, Slice
import theano.tensor as tt
from theano.tensor.nlinalg import det
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
n_samples = 100
rng = np.random.RandomState(123)
ms = np.array([[-1, -1.5], [1, 1]])
ps = np.array([0.2, 0.8])
zs = np.array([rng.multinomial(1, ps) for _ in range(n_samples)]).T
xs = [z[:, np.newaxis] * rng.multivariate_normal(m, np.eye(2), size=n_samples)
for z, m in zip(zs, ms)]
data = np.sum(np.dstack(xs), axis=2)
plt.figure(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], c='g', alpha=0.5)
plt.scatter(ms[0, 0], ms[0, 1], c='r', s=100)
plt.scatter(ms[1, 0], ms[1, 1], c='b', s=100)
```
Gaussian mixture models are usually constructed with categorical random variables. However, discrete random variables cannot be handled by ADVI. Here, the class assignment variables are marginalized out, giving a weighted sum of the probabilities of the Gaussian components. The log-likelihood of the total probability is calculated using logsumexp, which is a standard technique for making this kind of calculation numerically stable.
In the code below, the DensityDist class is used as the likelihood term. Its second argument, logp_gmix(mus, pi, np.eye(2)), is a Python function which receives observations (denoted by 'value') and returns the tensor representation of the log-likelihood.
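The logsumexp trick mentioned above is easy to demonstrate in NumPy: the naive computation overflows while the max-shifted version stays finite. This is illustrative only; the notebook itself uses `pymc3.math.logsumexp` on theano tensors:

```python
import numpy as np

def logsumexp(a, axis=None):
    """log(sum(exp(a))) computed stably by shifting with the maximum."""
    a_max = np.max(a, axis=axis, keepdims=True)
    out = np.log(np.sum(np.exp(a - a_max), axis=axis, keepdims=True)) + a_max
    return np.squeeze(out, axis=axis) if axis is not None else out.item()

logps = np.array([1000.0, 1000.0])
with np.errstate(over="ignore"):
    print(np.log(np.sum(np.exp(logps))))  # inf: the naive version overflows
print(logsumexp(logps))                   # 1000.693... = 1000 + log(2)
```

Because mixture log-likelihoods sum terms like log(pi_i) + log N(x | mu_i), which can be very negative, this shift is what keeps the total log-probability finite.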
```
from pymc3.math import logsumexp
# Log likelihood of normal distribution
def logp_normal(mu, tau, value):
# log probability of individual samples
k = tau.shape[0]
delta = lambda mu: value - mu
return (-1 / 2.) * (k * tt.log(2 * np.pi) + tt.log(1./det(tau)) +
(delta(mu).dot(tau) * delta(mu)).sum(axis=1))
# Log likelihood of Gaussian mixture distribution
def logp_gmix(mus, pi, tau):
def logp_(value):
logps = [tt.log(pi[i]) + logp_normal(mu, tau, value)
for i, mu in enumerate(mus)]
return tt.sum(logsumexp(tt.stacklists(logps)[:, :n_samples], axis=0))
return logp_
with pm.Model() as model:
mus = [MvNormal('mu_%d' % i,
mu=pm.floatX(np.zeros(2)),
tau=pm.floatX(0.1 * np.eye(2)),
shape=(2,))
for i in range(2)]
pi = Dirichlet('pi', a=pm.floatX(0.1 * np.ones(2)), shape=(2,))
xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data)
```
For comparison with ADVI, run MCMC.
```
with model:
start = find_MAP()
step = Metropolis()
trace = sample(1000, step, start=start)
```
Check the posteriors of the component means and weights. The MCMC samples of the mean of the lower-left component vary more than those of the upper-right one, because that cluster contains fewer samples.
```
plt.figure(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')
mu_0, mu_1 = trace['mu_0'], trace['mu_1']
plt.scatter(mu_0[:, 0], mu_0[:, 1], c="r", s=10)
plt.scatter(mu_1[:, 0], mu_1[:, 1], c="b", s=10)
plt.xlim(-6, 6)
plt.ylim(-6, 6)
sns.barplot([1, 2], np.mean(trace['pi'][:], axis=0),
palette=['red', 'blue'])
```
We can use the same model with ADVI as follows.
```
with pm.Model() as model:
mus = [MvNormal('mu_%d' % i, mu=pm.floatX(np.zeros(2)), tau=pm.floatX(0.1 * np.eye(2)), shape=(2,))
for i in range(2)]
pi = Dirichlet('pi', a=pm.floatX(0.1 * np.ones(2)), shape=(2,))
xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data)
with model:
%time approx = pm.fit(n=4500, obj_optimizer=pm.adagrad(learning_rate=1e-1))
means = approx.bij.rmap(approx.mean.eval())
cov = approx.cov.eval()
sds = approx.bij.rmap(np.diag(cov)**.5)
```
From the fitted approximation we extract 'means' and 'sds', the means and standard deviations of the variational posterior. Note that these values live in the transformed space, not the original space. For random variables defined on the whole real line, e.g. the means of the Gaussian components, no transformation is applied, so we can inspect their variational posterior directly in the original space.
```
from copy import deepcopy
mu_0, sd_0 = means['mu_0'], sds['mu_0']
mu_1, sd_1 = means['mu_1'], sds['mu_1']
def logp_normal_np(mu, tau, value):
# log probability of individual samples
k = tau.shape[0]
delta = lambda mu: value - mu
return (-1 / 2.) * (k * np.log(2 * np.pi) + np.log(1./np.linalg.det(tau)) +
(delta(mu).dot(tau) * delta(mu)).sum(axis=1))
def threshold(zz):
zz_ = deepcopy(zz)
zz_[zz < np.max(zz) * 1e-2] = None
return zz_
def plot_logp_normal(ax, mu, sd, cmap):
f = lambda value: np.exp(logp_normal_np(mu, np.diag(1 / sd**2), value))
g = lambda mu, sd: np.arange(mu - 3, mu + 3, .1)
xx, yy = np.meshgrid(g(mu[0], sd[0]), g(mu[1], sd[1]))
zz = f(np.vstack((xx.reshape(-1), yy.reshape(-1))).T).reshape(xx.shape)
ax.contourf(xx, yy, threshold(zz), cmap=cmap, alpha=0.9)
fig, ax = plt.subplots(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')
plot_logp_normal(ax, mu_0, sd_0, cmap='Reds')
plot_logp_normal(ax, mu_1, sd_1, cmap='Blues')
plt.xlim(-6, 6)
plt.ylim(-6, 6)
```
TODO: We need to backward-transform 'pi', which is transformed by 'stick_breaking'.
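For reference, a stick-breaking backward map can be sketched in NumPy as below. The `-log(K - k - 1)` offset is chosen so that an all-zero unconstrained vector maps to the uniform simplex; PyMC3's internal `StickBreaking` transform may differ in parameterization details, so treat this as an illustrative sketch rather than a drop-in replacement.

```python
import numpy as np

def expit(y):
    return 1.0 / (1.0 + np.exp(-y))

def stick_breaking_backward(y):
    # Map an unconstrained vector y in R^(K-1) onto the K-simplex.
    Km1 = len(y)
    k = np.arange(Km1)
    eq_share = -np.log(Km1 - k)    # logit(1 / (K - k)) with K = Km1 + 1
    z = expit(y + eq_share)        # fraction of the remaining stick to break off
    x = np.empty(Km1 + 1)
    remaining = 1.0
    for i in range(Km1):
        x[i] = z[i] * remaining
        remaining -= x[i]
    x[Km1] = remaining
    return x

print(stick_breaking_backward(np.zeros(2)))  # uniform: [1/3, 1/3, 1/3]
```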
The ELBO trace (`approx.hist`) shows the stochastic convergence of the algorithm.
```
plt.plot(approx.hist)
```
To demonstrate that ADVI works on a large dataset with mini-batches, let's create 100,000 samples from the same mixture distribution.
```
n_samples = 100000
zs = np.array([rng.multinomial(1, ps) for _ in range(n_samples)]).T
xs = [z[:, np.newaxis] * rng.multivariate_normal(m, np.eye(2), size=n_samples)
for z, m in zip(zs, ms)]
data = np.sum(np.dstack(xs), axis=2)
plt.figure(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], c='g', alpha=0.5)
plt.scatter(ms[0, 0], ms[0, 1], c='r', s=100)
plt.scatter(ms[1, 0], ms[1, 1], c='b', s=100)
plt.xlim(-6, 6)
plt.ylim(-6, 6)
```
MCMC took 55 seconds, 20 times longer than on the small dataset.
```
with pm.Model() as model:
mus = [MvNormal('mu_%d' % i, mu=pm.floatX(np.zeros(2)), tau=pm.floatX(0.1 * np.eye(2)), shape=(2,))
for i in range(2)]
pi = Dirichlet('pi', a=pm.floatX(0.1 * np.ones(2)), shape=(2,))
xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data)
start = find_MAP()
step = Metropolis()
trace = sample(1000, step, start=start)
```
Posterior samples are concentrated at the true means, so they look like a single point for each component.
```
plt.figure(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')
mu_0, mu_1 = trace['mu_0'], trace['mu_1']
plt.scatter(mu_0[-500:, 0], mu_0[-500:, 1], c="r", s=50)
plt.scatter(mu_1[-500:, 0], mu_1[-500:, 1], c="b", s=50)
plt.xlim(-6, 6)
plt.ylim(-6, 6)
```
For ADVI with mini-batches, put a Theano tensor (here a `pm.Minibatch`) on the observed variable of the ObservedRV; during optimization this tensor is replaced with successive mini-batches. Because a mini-batch is smaller than the whole dataset, its log-likelihood term must be scaled up accordingly, which is what the `total_size` argument does below.
```
minibatch_size = 200
# In memory Minibatches for better speed
data_t = pm.Minibatch(data, minibatch_size)
with pm.Model() as model:
mus = [MvNormal('mu_%d' % i, mu=pm.floatX(np.zeros(2)), tau=pm.floatX(0.1 * np.eye(2)), shape=(2,))
for i in range(2)]
pi = Dirichlet('pi', a=pm.floatX(0.1 * np.ones(2)), shape=(2,))
xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data_t, total_size=len(data))
```
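Why the scaling matters: multiplying the mini-batch log-likelihood by (total size / batch size) makes it an unbiased estimator of the full-data log-likelihood, which is what ADVI optimizes in expectation. A self-contained NumPy illustration of this (synthetic data, independent of the model above):

```python
import numpy as np

rng_mb = np.random.RandomState(0)
full = rng_mb.normal(size=100000)

def loglik(x):
    # standard-normal log-likelihood with the additive constant dropped
    return float(np.sum(-0.5 * x ** 2))

N, n = len(full), 200
estimates = [N / n * loglik(full[rng_mb.choice(N, n, replace=False)])
             for _ in range(500)]

# The scaled mini-batch estimate is unbiased for the full-data value.
print(loglik(full), np.mean(estimates))
```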
Run ADVI. It's much faster than MCMC, though the problem here is simple and it's not a fair comparison.
```
# Used only to write the function call in single line for using %time
# is there more smart way?
def f():
approx = pm.fit(n=1500, obj_optimizer=pm.adagrad(learning_rate=1e-1), model=model)
means = approx.bij.rmap(approx.mean.eval())
sds = approx.bij.rmap(approx.std.eval())
return means, sds, approx.hist
%time means, sds, elbos = f()
```
The result is almost the same.
```
from copy import deepcopy
mu_0, sd_0 = means['mu_0'], sds['mu_0']
mu_1, sd_1 = means['mu_1'], sds['mu_1']
fig, ax = plt.subplots(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')
plt.scatter(mu_0[0], mu_0[1], c="r", s=50)
plt.scatter(mu_1[0], mu_1[1], c="b", s=50)
plt.xlim(-6, 6)
plt.ylim(-6, 6)
```
The trace of the ELBO is noisier than in the full-data run because each update subsamples from the whole dataset.
```
plt.plot(elbos);
```
# EXAMPLE: Personal Workout Tracking Data
This notebook provides an example of how to import data downloaded from a specific service, Apple Health.
NOTE: This is still a work in progress.
# Dependencies and Libraries
```
from datetime import date, datetime as dt, timedelta as td
import pytz
import numpy as np
import pandas as pd
# functions to convert UTC to Eastern time zone and extract date/time elements
convert_tz = lambda x: x.to_pydatetime().replace(tzinfo=pytz.utc).astimezone(pytz.timezone('US/Eastern'))
get_year = lambda x: convert_tz(x).year
get_month = lambda x: '{}-{:02}'.format(convert_tz(x).year, convert_tz(x).month) #inefficient
get_date = lambda x: '{}-{:02}-{:02}'.format(convert_tz(x).year, convert_tz(x).month, convert_tz(x).day) #inefficient
get_day = lambda x: convert_tz(x).day
get_hour = lambda x: convert_tz(x).hour
get_day_of_week = lambda x: convert_tz(x).weekday()
```
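For example, `convert_tz` shifts a naive UTC timestamp into US/Eastern. A self-contained check (re-declaring the same lambda so the snippet runs on its own):

```python
import pandas as pd
import pytz

convert_tz = lambda x: x.to_pydatetime().replace(tzinfo=pytz.utc).astimezone(pytz.timezone('US/Eastern'))

ts = pd.Timestamp('2021-01-02 03:04:05')  # naive timestamp, treated as UTC
local = convert_tz(ts)
print(local)  # 2021-01-01 22:04:05-05:00 (EST is UTC-5 in January)
```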
# Import Data
# Workouts
```
# apple health
workouts = pd.read_csv("C:/Users/brand/Desktop/Healthcare Info Systems/90day_workouts.csv")
workouts.head()
```
# Drop unwanted metrics
```
new_workouts = workouts.drop(['Average Pace','Average Speed','Average Cadence','Elevation Ascended','Elevation Descended','Weather Temperature','Weather Humidity'], axis=1)
new_workouts.head()
age = input("Enter your age: ")
print(new_workouts.dtypes)
#new_workouts["Duration"] = pd.to_numeric(new_workouts.Duration, errors='coerce')
display(new_workouts.describe())
```
# Create Avg HR Intensity
```
# age was entered above as a string, so cast it to int before dividing
new_workouts['Avg Heart Rate Intensity'] = new_workouts['Average Heart Rate'] / int(age)
new_workouts.tail()
```
# Exercise Guidelines
```
# Minutes of Weekly Exercise
def getExer():
    global ex_time
    ex_time = input("Enter weekly exercise time in minutes: ")
    print("For more educational information on recommended weekly exercise for adults, visit",
          "\nhttps://health.gov/paguidelines/second-edition/pdf/Physical_Activity_Guidelines_2nd_edition.pdf#page=55")
    print()
    if int(ex_time) <= 149:
        print("Your weekly exercise time of", ex_time, "minutes is less than recommended. Consider increasing it to at least 150 minutes per week to improve your health.")
    elif int(ex_time) >= 150 and int(ex_time) <= 300:
        print("Your weekly exercise time of", ex_time, "minutes is within the recommended range. Achieving 150-300 minutes per week will continue to improve your health.")
    elif int(ex_time) >= 301:
        print("Your weekly exercise time of", ex_time, "minutes exceeds the recommended amount. Your weekly total should benefit your health.")
    else:
        print("Invalid entry for minutes of weekly exercise")
getExer()
```
<a href="https://colab.research.google.com/github/wisrovi/pyimagesearch-buy/blob/main/skin_detection.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Skin Detection: A Step-by-Step Example using Python and OpenCV
### by [PyImageSearch.com](http://www.pyimagesearch.com)
## Welcome to **[PyImageSearch Plus](http://pyimg.co/plus)** Jupyter Notebooks!
This notebook is associated with the [Skin Detection: A Step-by-Step Example using Python and OpenCV](https://www.pyimagesearch.com/2014/08/18/skin-detection-step-step-example-using-python-opencv/) blog post published on 2014-08-18.
Only the code for the blog post is here. Most codeblocks have a 1:1 relationship with what you find in the blog post with two exceptions: (1) Python classes are not separate files as they are typically organized with PyImageSearch projects, and (2) Command Line Argument parsing is replaced with an `args` dictionary that you can manipulate as needed.
We recommend that you execute (press ▶️) the code block-by-block, as-is, before adjusting parameters and `args` inputs. Once you've verified that the code is working, you are welcome to hack with it and learn from manipulating inputs, settings, and parameters. For more information on using Jupyter and Colab, please refer to these resources:
* [Jupyter Notebook User Interface](https://jupyter-notebook.readthedocs.io/en/stable/notebook.html#notebook-user-interface)
* [Overview of Google Colaboratory Features](https://colab.research.google.com/notebooks/basic_features_overview.ipynb)
As a reminder, these PyImageSearch Plus Jupyter Notebooks are not for sharing; please refer to the **Copyright** directly below and **Code License Agreement** in the last cell of this notebook.
Happy hacking!
*Adrian*
<hr>
***Copyright:*** *The contents of this Jupyter Notebook, unless otherwise indicated, are Copyright 2020 Adrian Rosebrock, PyimageSearch.com. All rights reserved. Content like this is made possible by the time invested by the authors. If you received this Jupyter Notebook and did not purchase it, please consider making future content possible by joining PyImageSearch Plus at http://pyimg.co/plus/ today.*
### Download the code zip file
```
!wget https://www.pyimagesearch.com/wp-content/uploads/2014/08/skin-detection.zip
!unzip -qq skin-detection.zip
%cd skin-detection
```
## Blog Post Code
### Import Packages
```
# import the necessary packages
from pyimagesearch import imutils
import numpy as np
import argparse
import cv2
```
### Detecting Skin in Images & Video Using Python and OpenCV
```
# construct the argument parse and parse the arguments
#ap = argparse.ArgumentParser()
#ap.add_argument("-v", "--video",
# help = "path to the (optional) video file")
#args = vars(ap.parse_args())
# since we are using Jupyter Notebooks we can replace our argument
# parsing code with *hard coded* arguments and values
args = {
"video": "video/skin_example.mov",
"output" : "output.avi"
}
# define the upper and lower boundaries of the HSV pixel
# intensities to be considered 'skin'
lower = np.array([0, 48, 80], dtype = "uint8")
upper = np.array([20, 255, 255], dtype = "uint8")
# if a video path was not supplied, grab the reference
# to the webcam
if not args.get("video", False):
camera = cv2.VideoCapture(0)
# otherwise, load the video
else:
camera = cv2.VideoCapture(args["video"])
# initialize pointer to output video file
writer = None
# keep looping over the frames in the video
while True:
# grab the next frame
frame = camera.read()[1]
# if we did not grab a frame then we have reached the end of the
# video
if frame is None:
break
# resize the frame, convert it to the HSV color space,
# and determine the HSV pixel intensities that fall into
# the specified upper and lower boundaries
frame = imutils.resize(frame, width = 400)
converted = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
skinMask = cv2.inRange(converted, lower, upper)
# apply a series of erosions and dilations to the mask
# using an elliptical kernel
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
skinMask = cv2.erode(skinMask, kernel, iterations = 2)
skinMask = cv2.dilate(skinMask, kernel, iterations = 2)
# blur the mask to help remove noise, then apply the
# mask to the frame
skinMask = cv2.GaussianBlur(skinMask, (3, 3), 0)
skin = cv2.bitwise_and(frame, frame, mask = skinMask)
# if the video writer is None *AND* we are supposed to write
# the output video to disk initialize the writer
if writer is None and args["output"] is not None:
fourcc = cv2.VideoWriter_fourcc(*"MJPG")
writer = cv2.VideoWriter(args["output"], fourcc, 20,
(skin.shape[1], skin.shape[0]), True)
# if the writer is not None, write the frame with recognized
# faces to disk
if writer is not None:
writer.write(skin)
# do a bit of cleanup
camera.release()
# check to see if the video writer point needs to be released
if writer is not None:
writer.release()
!ffmpeg -i output.avi output.mp4
#@title Display video inline
from IPython.display import HTML
from base64 import b64encode
mp4 = open("output.mp4", "rb").read()
dataURL = "data:video/mp4;base64," + b64encode(mp4).decode()
HTML("""
<video width=400 controls>
<source src="%s" type="video/mp4">
</video>
""" % dataURL)
```
For a detailed walkthrough of the concepts and code, be sure to refer to the full tutorial, [*Skin Detection: A Step-by-Step Example using Python and OpenCV*](https://www.pyimagesearch.com/2014/08/18/skin-detection-step-step-example-using-python-opencv/) published on 2014-08-18.
# Code License Agreement
```
Copyright (c) 2020 PyImageSearch.com
SIMPLE VERSION
Feel free to use this code for your own projects, whether they are
purely educational, for fun, or for profit. THE EXCEPTION BEING if
you are developing a course, book, or other educational product.
Under *NO CIRCUMSTANCE* may you use this code for your own paid
educational or self-promotional ventures without written consent
from Adrian Rosebrock and PyImageSearch.com.
LONGER, FORMAL VERSION
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files
(the "Software"), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of the Software,
and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
Notwithstanding the foregoing, you may not use, copy, modify, merge,
publish, distribute, sublicense, create a derivative work, and/or
sell copies of the Software in any work that is designed, intended,
or marketed for pedagogical or instructional purposes related to
programming, coding, application development, or information
technology. Permission for such use, copying, modification, and
merger, publication, distribution, sub-licensing, creation of
derivative works, or sale is expressly withheld.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
# Granger Causality with Google Trends - Did `itaewon class` cause `โคชูจัง`?
We will give an example of a Granger causality test using the interest over time of `itaewon class` and `โคชูจัง` in Thailand from 2020-01 to 2020-04. During that period, gochujang went out of stock in many supermarkets, supposedly because people were mimicking the show. We examine the hypothesis that the interest over time of `itaewon class` Granger causes that of `โคชูจัง`.
$x_t$ Granger causes $y_t$ means that the past values of $x_t$ could contain information that is useful in predicting $y_t$.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tqdm
import warnings
warnings.filterwarnings('ignore')
import matplotlib
matplotlib.rc('font', family='Ayuthaya') # MacOS
```
## 1. Get Trend Objects with Thailand Offset
We can get interest over time of a keyword with the unofficial `pytrends` library.
```
from pytrends.request import TrendReq
#get trend objects with thailand offset 7*60 = 420 minutes
trend = TrendReq(hl='th-TH', tz=420)
#compare 2 keywords
kw_list = ['โคชูจัง','itaewon class']
trend.build_payload(kw_list, geo='TH',timeframe='2020-01-01 2020-04-30')
df = trend.interest_over_time().iloc[:,:2]
df.head()
df.plot()
```
## 2. Stationarity Check: Augmented Dickey-Fuller Test
Stationarity is a pre-requisite for Granger causality test. We first use [augmented Dickey-Fuller test](https://en.wikipedia.org/wiki/Augmented_Dickey%E2%80%93Fuller_test) to detect stationarity. For the following model:
$$\Delta y_t = \alpha + \beta t + \gamma y_{t-1} + \delta_1 \Delta y_{t-1} + \cdots + \delta_{p-1} \Delta y_{t-p+1} + \varepsilon_t$$
where $\alpha$ is a constant, $\beta$ the coefficient on a time trend and $p$ the lag order of the autoregressive process. The null hypothesis is that $\gamma$ is 0; that is, $y_{t-1}$ does not have any valuable contribution to predicting $y_t$. If we reject the null hypothesis, that means $y_t$ does not have a unit root.
```
from statsmodels.tsa.stattools import adfuller
#test for stationarity with augmented dickey fuller test
def unit_root(name,series):
signif=0.05
r = adfuller(series, autolag='AIC')
output = {'test_statistic':round(r[0],4),'pvalue':round(r[1],4),'n_lags':round(r[2],4),'n_obs':r[3]}
p_value = output['pvalue']
def adjust(val, length=6):
    return str(val).ljust(length)
print(f'Augmented Dickey-Fuller Test on "{name}"')
print('-'*47)
print(f'Null Hypothesis: Data has unit root. Non-Stationary.')
print(f'Observation = {output["n_obs"]}')
print(f'Significance Level = {signif}')
print(f'Test Statistic = {output["test_statistic"]}')
print(f'No. Lags Chosen = {output["n_lags"]}')
for key,val in r[4].items():
print(f'Critical value {adjust(key)} = {round(val,3)}')
if p_value <= signif:
print(f'=> P-Value = {p_value}. Rejecting null hypothesis.')
print(f'=> Series is stationary.')
else:
print(f'=> P-Value = {p_value}. Cannot reject the null hypothesis.')
print(f'=> "{name}" is non-stationary.')
```
2.1. `โคชูจัง` unit root test
```
name = 'โคชูจัง'
series = df.iloc[:,0]
unit_root(name,series)
```
2.2. `itaewon class` unit root test
```
name = 'itaewon class'
series = df.iloc[:,1]
unit_root(name,series)
```
We could not reject the null hypothesis of the augmented Dickey-Fuller test for either time series. This should be evident just by looking at the plot: neither series keeps the same mean, variance, and autocorrelation over time, so they are not stationary.
## 3. Taking 1st Difference
One of the most commonly used ways to "stationarize" a time series is to take its first difference, $\Delta y_t = y_t - y_{t-1}$.
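`pandas.DataFrame.diff` computes exactly this lagged subtraction:

```python
import pandas as pd

s = pd.Series([10.0, 12.0, 9.0, 15.0])
d = s.diff(1)        # y_t - y_{t-1}; the first element has no predecessor, so NaN
print(d.tolist())    # [nan, 2.0, -3.0, 6.0]
```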
```
diff_df = df.diff(1).dropna()
```
3.1. 1st Difference of `โคชูจัง` unit root test
```
name = 'โคชูจัง'
series = diff_df.iloc[:,0]
unit_root(name,series)
```
3.2. 1st Difference of `itaewon class` unit root test
```
name = 'itaewon class'
series = diff_df.iloc[:,1]
unit_root(name,series)
diff_df.plot()
```
- 1st Difference of `itaewon class` is stationary at 5% significance.
- 1st Difference of `โคชูจัง` is not stationary, but its p-value of 0.0564 is close enough to the 5% significance level that we make an exception for this example.
## 4. Find Lag Length
Note that `maxlag` is an important hyperparameter that determines whether your Granger test comes out significant. There are criteria you can use to choose this lag, but as with any frequentist statistical test, you need to understand what assumptions you are making.
```
import statsmodels.tsa.api as smt
df_test = diff_df.copy()
df_test.head()
# make a VAR model
model = smt.VAR(df_test)
res = model.select_order(maxlags=None)
print(res.summary())
```
Note that this hyperparameter affects the conclusion of the test. The best solution is a strong theoretical justification for the lag; failing that, the empirical selection criteria above are the next best thing.
```
from statsmodels.tsa.stattools import grangercausalitytests

#find the optimal lag
lags = list(range(1,23))
res = grangercausalitytests(df_test, maxlag=lags, verbose=False)
p_values = []
for i in lags:
p_values.append({'maxlag':i,
'ftest':res[i][0]['ssr_ftest'][1],
'chi2':res[i][0]['ssr_chi2test'][1],
'lr':res[i][0]['lrtest'][1],
'params_ftest':res[i][0]['params_ftest'][1],})
p_df = pd.DataFrame(p_values)
p_df.iloc[:,1:].plot()
```
## 5. Granger Causality Test
The Granger causality test evaluates the null hypothesis that $x_t$ **DOES NOT** Granger cause $y_t$ using the following models:
$$y_{t}=a_{0}+a_{1}y_{t-1}+a_{2}y_{t-2}+\cdots +a_{m}y_{t-m}+{\text{error}}_{t}$$
and
$$y_{t}=a_{0}+a_{1}y_{t-1}+a_{2}y_{t-2}+\cdots +a_{m}y_{t-m}+b_{p}x_{t-p}+\cdots +b_{q}x_{t-q}+{\text{error}}_{t}$$
An F-statistic is then calculated from the ratio of the residual sums of squares of these two models.
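This F-statistic can be reproduced by hand: fit the restricted model (own lags only) and the unrestricted model (own lags plus lags of x), then compare residual sums of squares. A sketch with $m = p = q = 1$ on hypothetical synthetic data:

```python
import numpy as np

rng = np.random.RandomState(1)
n = 300
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.6 * x[t - 1] + rng.normal()

def ols_rss(X, target):
    # Ordinary least squares; return the residual sum of squares.
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    return float(resid @ resid)

target = y[1:]
ones = np.ones_like(target)
rss_restricted = ols_rss(np.column_stack([ones, y[:-1]]), target)            # y_t ~ y_{t-1}
rss_unrestricted = ols_rss(np.column_stack([ones, y[:-1], x[:-1]]), target)  # + x_{t-1}

q = 1  # number of restrictions (lagged-x terms dropped)
k = 3  # parameters in the unrestricted model
F = ((rss_restricted - rss_unrestricted) / q) / (rss_unrestricted / (len(target) - k))
print(F)  # large F favors "x Granger causes y"
```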
```
from statsmodels.tsa.stattools import grangercausalitytests
def granger_causation_matrix(data, variables,test,verbose=False):
x = pd.DataFrame(np.zeros((len(variables),len(variables))), columns=variables,index=variables)
for c in x.columns:
for r in x.index:
test_result = grangercausalitytests(data[[r,c]], maxlag=maxlag, verbose=False)
p_values = [round(test_result[i+1][0][test][1],4) for i in range(maxlag)]
if verbose:
print(f'Y = {r}, X= {c},P Values = {p_values}')
min_p_value = np.min(p_values)
x.loc[r,c] = min_p_value
x.columns = [var + '_x' for var in variables]
x.index = [var + '_y' for var in variables]
return x
# rule-of-thumb maximum lag: 12 * (nobs / 100) ** 0.25 (Schwert's heuristic)
nobs = len(df_test.index)
maxlag = round(12*(nobs/100.)**(1/4.))
maxlag
data = df_test
variables = df_test.columns
```
#### 5.1. SSR based F test
```
test = 'ssr_ftest'
ssr_ftest = granger_causation_matrix(data, variables, test)
ssr_ftest['test'] = 'ssr_ftest'
```
#### 5.2. SSR based chi2 test
```
test = 'ssr_chi2test'
ssr_chi2test = granger_causation_matrix(data, variables, test)
ssr_chi2test['test'] = 'ssr_chi2test'
```
#### 5.3. Likelihood ratio test
```
test = 'lrtest'
lrtest = granger_causation_matrix(data, variables, test)
lrtest['test'] = 'lrtest'
```
#### 5.4. Parameter F test
```
test = 'params_ftest'
params_ftest = granger_causation_matrix(data, variables, test)
params_ftest['test'] = 'params_ftest'
frames = [ssr_ftest, ssr_chi2test, lrtest, params_ftest]
all_test = pd.concat(frames)
all_test
```
We may conclude that `itaewon class` Granger caused `โคชูจัง`, but `โคชูจัง` did not Granger cause `itaewon class`.
# What About Chicken and Eggs?
We use the annual chicken and eggs data from [Thurman and Fisher (1988) hosted by UIUC](http://www.econ.uiuc.edu/~econ536/Data/).
## 1. Get data from csv file
```
#chicken and eggs
chickeggs = pd.read_csv('chickeggs.csv')
chickeggs
#normalize for 1930 to be 1
df = chickeggs.iloc[:,1:]
df['chic'] = df.chic / df.chic[0]
df['egg'] = df.egg / df.egg[0]
df = df[['chic','egg']]
df
df.plot()
```
## 2. Stationarity check: Augmented Dickey-Fuller Test
2.1. `egg` unit root test
```
name = 'egg'
series = df.iloc[:,0]
unit_root(name,series)
```
2.2. `chic` unit root test
```
name = 'chic'
series = df.iloc[:,1]
unit_root(name,series)
```
## 3. Taking 1st Difference
```
diff_df = df.diff(1).dropna()
diff_df
diff_df.plot()
```
3.1. 1st Difference of `egg` unit root test
```
name = 'egg'
series = diff_df.iloc[:,0]
unit_root(name,series)
```
3.2. 1st Difference of `chic` unit root test
```
name = 'chic'
series = diff_df.iloc[:,1]
unit_root(name,series)
```
## 4. Find Lag Length
```
# make a VAR model
model = smt.VAR(diff_df)
res = model.select_order(maxlags=None)
print(res.summary())
#find the optimal lag
lags = list(range(1,23))
res = grangercausalitytests(diff_df, maxlag=lags, verbose=False)
p_values = []
for i in lags:
p_values.append({'maxlag':i,
'ftest':res[i][0]['ssr_ftest'][1],
'chi2':res[i][0]['ssr_chi2test'][1],
'lr':res[i][0]['lrtest'][1],
'params_ftest':res[i][0]['params_ftest'][1],})
p_df = pd.DataFrame(p_values)
print('Eggs Granger cause Chickens')
p_df.iloc[:,1:].plot()
#find the optimal lag
lags = list(range(1,23))
res = grangercausalitytests(diff_df[['egg','chic']], maxlag=lags, verbose=False)
p_values = []
for i in lags:
p_values.append({'maxlag':i,
'ftest':res[i][0]['ssr_ftest'][1],
'chi2':res[i][0]['ssr_chi2test'][1],
'lr':res[i][0]['lrtest'][1],
'params_ftest':res[i][0]['params_ftest'][1],})
p_df = pd.DataFrame(p_values)
print('Chickens Granger cause Eggs')
p_df.iloc[:,1:].plot()
```
## 5. Granger Causality Test
```
# nobs is the number of observations
nobs = len(diff_df.index)
# rule-of-thumb maximum lag: 12 * (nobs / 100) ** 0.25 (Schwert's heuristic)
maxlag = round(12*(nobs/100.)**(1/4.))
data = diff_df
variables = diff_df.columns
```
#### 5.1. SSR based F test
```
test = 'ssr_ftest'
ssr_ftest = granger_causation_matrix(data, variables, test)
ssr_ftest['test'] = 'ssr_ftest'
```
#### 5.2. SSR based chi2 test
```
test = 'ssr_chi2test'
ssr_chi2test = granger_causation_matrix(data, variables, test)
ssr_chi2test['test'] = 'ssr_chi2test'
```
#### 5.3. Likelihood ratio test
```
test = 'lrtest'
lrtest = granger_causation_matrix(data, variables, test)
lrtest['test'] = 'lrtest'
```
#### 5.4. Parameter F test
```
test = 'params_ftest'
params_ftest = granger_causation_matrix(data, variables, test)
params_ftest['test'] = 'params_ftest'
frames = [ssr_ftest, ssr_chi2test, lrtest, params_ftest]
all_test = pd.concat(frames)
all_test
```
With this we can conclude that eggs Granger cause chickens!
# CAM Methods Benchmark
**Goal:** to compare CAM methods using standard explainability metrics.
**Author:** lucas.david@ic.unicamp.br
Use GPUs if you are running the *Score-CAM* or *Quantitative Results* sections.
```
#@title
from google.colab import drive
drive.mount('/content/drive')
import tensorflow as tf
# base_dir = '/content/drive/MyDrive/'
base_dir = '/home/ldavid/Workspace'
# data_dir = '/root/tensorflow_datasets'
data_dir = '/home/ldavid/Workspace/datasets/'
class Config:
seed = 218402
class data:
path = '/root/tensorflow_datasets/amazon-from-space'
size = (256, 256)
shape = (*size, 3)
batch_size = 32
shuffle_buffer_size = 8 * batch_size
prefetch_buffer_size = tf.data.experimental.AUTOTUNE
train_shuffle_seed = 120391
shuffle = False
class model:
backbone = tf.keras.applications.ResNet101V2
last_spatial_layer = 'post_relu'
# backbone = tf.keras.applications.EfficientNetB6
# last_spatial_layer = 'eb6'
# backbone = tf.keras.applications.VGG16
# last_spatial_layer = 'block5_pool'
gap_layer_name = 'avg_pool'
include_top = False
classifier_activation = None
custom = True
fine_tune_layers = 0.6
freeze_batch_norm = False
weights = f'{base_dir}/logs/amazon-from-space/resnet101-sw-ce-fine-tune/weights.h5'
class training:
valid_size = 0.3
class explaining:
noise = tf.constant(.2)
repetitions = tf.constant(8)
score_cam_activations = 'all'
λ_pos = tf.constant(1.)
λ_neg = tf.constant(1.)
λ_bg = tf.constant(1.)
report = f'{base_dir}/logs/amazon-from-space/resnet101-sw-ce-fine-tune/cam-score.txt'
preprocess = tf.keras.applications.resnet_v2.preprocess_input
deprocess = lambda x: (x + 1) * 127.5
# preprocess = tf.keras.applications.res.preprocess_input
# deprocess = lambda x: x
# preprocess = tf.keras.applications.vgg16.preprocess_input
# deprocess = lambda x: x[..., ::-1] + [103.939, 116.779, 123.68]
to_image = lambda x: tf.cast(tf.clip_by_value(deprocess(x), 0, 255), tf.uint8)
masked = lambda x, maps: x * tf.image.resize(maps, Config.data.size)
```
## Setup
```
! pip -qq install tensorflow_addons
import os
import shutil
from time import time
from math import ceil
import numpy as np
import pandas as pd
import tensorflow_addons as tfa
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
import seaborn as sns
from tensorflow.keras import callbacks
for d in tf.config.list_physical_devices('GPU'):
print(d)
print(f'Setting device {d} to memory-growth mode.')
try:
tf.config.experimental.set_memory_growth(d, True)
except Exception as e:
print(e)
R = tf.random.Generator.from_seed(Config.seed, alg='philox')
C = np.asarray(sns.color_palette("Set1", 21))
CMAP = sns.color_palette("Set1", 21, as_cmap=True)
sns.set_style("whitegrid", {'axes.grid' : False})
np.set_printoptions(linewidth=120)
def normalize(x, reduce_min=True, reduce_max=True):
if reduce_min: x -= tf.reduce_min(x, axis=(-3, -2), keepdims=True)
if reduce_max: x = tf.math.divide_no_nan(x, tf.reduce_max(x, axis=(-3, -2), keepdims=True))
return x
def visualize(
image,
title=None,
rows=2,
cols=None,
i0=0,
figsize=(9, 4),
cmap=None,
full=True
):
if image is not None:
if isinstance(image, (list, tuple)) or len(image.shape) > 3: # many images
if full: plt.figure(figsize=figsize)
cols = cols or ceil(len(image) / rows)
for ix in range(len(image)):
plt.subplot(rows, cols, i0+ix+1)
visualize(image[ix],
cmap=cmap,
title=title[ix] if title is not None and len(title) > ix else None)
if full: plt.tight_layout()
return
if isinstance(image, tf.Tensor): image = image.numpy()
if image.shape[-1] == 1: image = image[..., 0]
plt.imshow(image, cmap=cmap)
if title is not None: plt.title(title)
plt.axis('off')
def observe_labels(probs, labels, ix):
p = probs[ix]
l = labels[ix]
s = tf.argsort(p, direction='DESCENDING')
d = pd.DataFrame({
'idx': s,
'label': tf.gather(CLASSES, s).numpy().astype(str),
'predicted': tf.gather(p, s).numpy().round(2),
'ground-truth': tf.gather(l, s).numpy()
})
return d[(d['ground-truth']==1) | (d['predicted'] > 0.05)]
def plot_heatmap(i, m):
plt.imshow(i)
plt.imshow(m, cmap='jet', alpha=0.5)
plt.axis('off')
```
## Related Work
### Summary
```
#@title
d = [
['1512.04150', 'CAM', 'GoogleNet', 'ILSVRC-15 val', 56.4, 43.00, None, None, None, None],
['1512.04150', 'CAM', 'VGG-16', 'ILSVRC-15 val', 57.2, 45.14, None, None, None, None],
['1512.04150', 'Backprop', 'GoogleNet', 'ILSVRC-15 val', 61.31, 50.55, None, None, None, None],
['1512.04150', 'Backprop', 'GoogleNet', 'ILSVRC-15 test', None, 37.10, None, None, None, None],
['1610.02391', 'CAM', 'VGG-16', 'ILSVRC-15 val', 57.2, 45.14, None, None, None, None, None],
['1610.02391', 'Grad-CAM', 'VGG-16', 'ILSVRC-15 val', 56.51, 46.41, None, None, None, None, None],
['1610.02391', 'CAM', 'GoogleNet', 'ILSVRC-15 val', 60.09, 49.34, None, None, None, None, None],
['1610.02391', 'Grad-CAM', 'GoogleNet', 'ILSVRC-15 val', 60.09, 49.34, None, None, None, None, None],
['1710.11063', 'Grad-CAM', 'VGG-16', 'ILSVRC-2012 val', None, None, 46.56, 13.42, 29.28, None, None],
['1710.11063', 'Grad-CAM++', 'VGG-16', 'ILSVRC-2012 val', None, None, 36.84, 17.05, 70.72, None, None],
['1710.11063', 'Grad-CAM', 'AlexNet', 'ILSVRC-2012 val', None, None, 82.86, 3.16, 13.44, None, None],
['1710.11063', 'Grad-CAM++', 'AlexNet', 'ILSVRC-2012 val', None, None, 62.75, 8.24, 86.56, None, None],
['1710.11063', 'Grad-CAM', 'VGG-16', 'Pascal 2007 val', None, None, 28.54, 21.43, 39.44, None, None],
['1710.11063', 'Grad-CAM++', 'VGG-16', 'Pascal 2007 val', None, None, 19.53, 18.96, 61.47, None, None],
['1710.11063', 'Grad-CAM', 'AlexNet', 'Pascal 2007 val', None, None, 45.82, 14.38, 27.21, None, None],
['1710.11063', 'Grad-CAM++', 'AlexNet', 'Pascal 2007 val', None, None, 29.16, 19.76, 72.79, None, None],
['1710.11063', 'Grad-CAM', 'ResNet-50', 'ILSVRC-2012 val', None, None, 30.36, 22.11, 39.49, None, None],
['1710.11063', 'Grad-CAM++', 'ResNet-50', 'ILSVRC-2012 val', None, None, 28.90, 22.16, 60.51, None, None],
['1710.11063', 'Grad-CAM', 'ResNet-50', 'Pascal 2007 val', None, None, 20.86, 21.99, 41.39, None, None],
['1710.11063', 'Grad-CAM++', 'ResNet-50', 'Pascal 2007 val', None, None, 16.19, 19.52, 58.61, None, None],
['1710.11063', 'Grad-CAM', 'VGG-16', 'Pascal 2012 val', None, None, None, None, None, 0.33, None],
['1710.11063', 'Grad-CAM++', 'VGG-16', 'Pascal 2012 val', None, None, None, None, None, 0.34, None],
['1910.01279', 'Backprop Vanilla', 'VGG-16', 'ILSVRC-2012 val', None, None, None, None, None, None, 41.3],
['1910.01279', 'Backprop Smooth', 'VGG-16', 'ILSVRC-2012 val', None, None, None, None, None, None, 42.4],
['1910.01279', 'Backprop Integraded', 'VGG-16', 'ILSVRC-2012 val', None, None, None, None, None, None, 44.7],
['1910.01279', 'Grad-CAM', 'VGG-16', 'ILSVRC-2012 val', None, None, 47.80, 19.60, None, None, 48.1],
['1910.01279', 'Grad-CAM++', 'VGG-16', 'ILSVRC-2012 val', None, None, 45.50, 18.90, None, None, 49.3],
['1910.01279', 'Score-CAM', 'VGG-16', 'ILSVRC-2012 val', None, None, 31.50, 30.60, None, None, 63.7],
]
d = pd.DataFrame(
d,
columns=[
'Source', 'Method', 'Arch', 'Dataset',
'Loc_Error_T-1', 'Loc_Error_T-5',
'Avg_Drop', 'Incr_in_confidence', 'Win_%', 'mLoc_I^c(s=0)',
'E-Pointing_Game'
]
)
#@title
(d.groupby(['Dataset', 'Method'], as_index=False)
.mean()
.replace(np.nan, '', regex=True))
#@title Full Report
d.replace(np.nan, '', regex=True)
```
**Localization Error**
As defined in http://image-net.org/challenges/LSVRC/2015/index#maincomp:
Let $d(c_i,C_k)=0$ if $c_i=C_k$ and 1 otherwise. Let $f(b_i,B_k)=0$ if $b_i$ and $B_k$ have more than 50% overlap, and 1 otherwise. The error of the algorithm on an individual image will be computed using:
$$e=\frac{1}{n} \cdot \sum_k \min_{i} \min_{m} \max \{d(c_i,C_k), f(b_i,B_{km})\}$$
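Under the definitions above, the per-image error can be sketched in plain numpy (a toy sketch assuming axis-aligned boxes `(x1, y1, x2, y2)` and IoU > 0.5 as the overlap criterion; `loc_error` and `iou` are hypothetical helper names, not part of this notebook):

```python
import numpy as np

def iou(a, b):
    # Boxes as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def loc_error(preds, truths):
    """preds: [(class, box)] top-5 guesses; truths: [(class, [boxes])]."""
    errors = []
    for C_k, boxes in truths:
        e = min(
            max(float(c_i != C_k),             # d(c_i, C_k)
                min(float(iou(b_i, B) <= 0.5)  # f(b_i, B_km)
                    for B in boxes))
            for c_i, b_i in preds)
        errors.append(e)
    return float(np.mean(errors))
```

A perfect guess (right class, overlapping box) contributes 0; a wrong class contributes 1 regardless of the box.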
**Pixel Perturbation** (Full-Gradient)
First form: remove $k$ most salient pixels from the image and measure impact on output confidence (high impact expected for good saliency methods, similar to Avg. Drop %). This might add artifacts (edges).
Second form: remove $k$ least salient pixels from the image and measure output confidence (low impact expected for good saliency methods).
**Average Drop %** (Grad-CAM++, Score-CAM)
The average percentage drop in the model's confidence for a particular image $x_i$ and class $c$, when only the highlighted region is provided ($M_i\circ x_i$):
$$\frac{1}{N}∑_i^N \frac{\max(0, Y_i^c − O_i^c)}{Y_i^c} 100$$
* $Y_i^c = f(x_i)^c$
* $O_i^c = f(M_i\circ x_i)^c$
**Increase in confidence %** (Grad-CAM++, Score-CAM)
Measures how often removing background noise improves classification confidence.
$$\frac{1}{N}∑^N_i [Y^c_i < O^c_i]\, 100$$
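Both metrics can be sketched in numpy (a hedged toy version, assuming `Y` holds the original confidences $Y_i^c$ and `O` the confidences on the masked inputs $O_i^c$; the function names are mine):

```python
import numpy as np

def average_drop_pct(Y, O):
    # Mean relative confidence drop when only the masked region is kept.
    Y, O = np.asarray(Y, float), np.asarray(O, float)
    return float(np.mean(np.maximum(0.0, Y - O) / Y) * 100)

def increase_in_confidence_pct(Y, O):
    # Fraction of samples whose confidence rose after masking out background.
    Y, O = np.asarray(Y, float), np.asarray(O, float)
    return float(np.mean(Y < O) * 100)
```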
### CAM
$M(f, x)^u_{ij} = \text{relu}(\sum_k w^u_k A_{ij}^k)$
```
@tf.function
def sigmoid_cam(x, y):
print(f'CAM tracing x:{x.shape} y:{y.shape}')
    l, a = nn_s(x, training=False)
    maps = tf.einsum('bhwk,ku->buhw', a, sW)
    return l, maps[..., tf.newaxis]
```
### Grad-CAM
$M(f, x)^u_{ij} = \text{relu}(\sum_k \sum_{lm}\frac{\partial S_u}{\partial A_{lm}^k} A_{ij}^k)$
```
@tf.function
def sigmoid_gradcam(x, y):
print(f'Grad-CAM tracing x:{x.shape} y:{y.shape}')
with tf.GradientTape(watch_accessed_variables=False) as t:
t.watch(x)
l, a = nn_s(x, training=False)
dlda = t.batch_jacobian(l, a)
weights = tf.reduce_sum(dlda, axis=(-3, -2)) # bc(hw)k -> bck
maps = tf.einsum('bhwc,buc->buhw', a, weights)
return l, maps[..., tf.newaxis]
```
Note that for a fully-convolutional network with a single densely-connected softmax classifier at its end:
$S_u = \sum_k w^k_u [\frac{1}{hw}\sum_{lm}^{hw} A^k_{lm}]$
Then:
\begin{align}
\frac{\partial S_u}{\partial A_{lm}^k} &= \frac{\partial}{\partial A_{lm}^k} \sum_{k'} w^{k'}_u [\frac{1}{hw}\sum_{l'm'}^{hw} A^{k'}_{l'm'}] \\
&= w^k_u [\frac{1}{hw}\frac{\partial A^k_{lm}}{\partial A^k_{lm}}] \\
&= \frac{w^k_u}{hw}
\end{align}
All constants (including $\frac{1}{hw}$) are removed when we apply normalization.
Therefore, under these conditions, this method is equivalent to `CAM`.
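The equivalence can be checked numerically: for $S_u = \sum_k w^k_u\,\text{mean}(A^k)$ the gradient at every spatial position is $w^k_u/hw$, so Grad-CAM's spatial sum recovers $w^k_u$ exactly. A small numpy finite-difference sketch (toy shapes, not the notebook's TF graph):

```python
import numpy as np

h = w = 4
k, u = 3, 2
rng = np.random.default_rng(0)
A = rng.normal(size=(h, w, k))   # activations
W = rng.normal(size=(k, u))      # dense classifier weights

def S(A):
    return A.mean(axis=(0, 1)) @ W   # GAP followed by a dense layer, shape (u,)

# Finite-difference gradient of S_0 w.r.t. A, then Grad-CAM pooling (sum over h, w).
eps = 1e-6
grad = np.zeros_like(A)
for idx in np.ndindex(A.shape):
    Ap = A.copy()
    Ap[idx] += eps
    grad[idx] = (S(Ap)[0] - S(A)[0]) / eps
gradcam_weights = grad.sum(axis=(0, 1))   # shape (k,)

# CAM uses the dense weights for unit 0 directly.
assert np.allclose(gradcam_weights, W[:, 0], atol=1e-4)
```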
### Grad-CAM++
$M(f, x)^u_{ij} = \sum_k \sum_{lm} \alpha_{lm}^{ku} \text{relu}(\frac{\partial S_u}{\partial A_{lm}^k}) A_{ij}^k$
Where
$\alpha_{lm}^{ku} = \frac{(\frac{\partial S_u}{\partial A_{lm}^k})^2}{2(\frac{\partial S_u}{\partial A_{lm}^k})^2 + \sum_{ab} A_{ab}^k(\frac{\partial S_u}{\partial A_{lm}^k})^3}$
```
@tf.function
def sigmoid_gradcampp(x, y):
print(f'Grad-CAM++ tracing x:{x.shape} y:{y.shape}')
with tf.GradientTape(watch_accessed_variables=False) as tape:
tape.watch(x)
s, a = nn_s(x, training=False)
dsda = tape.batch_jacobian(s, a)
dyda = tf.einsum('bu,buhwk->buhwk', tf.exp(s), dsda)
d2 = dsda**2
d3 = dsda**3
aab = tf.reduce_sum(a, axis=(1, 2)) # (BK)
akc = tf.math.divide_no_nan(
d2,
2.*d2 + tf.einsum('bk,buhwk->buhwk', aab, d3)) # (2*(BUHWK) + (BK)*BUHWK)
weights = tf.einsum('buhwk,buhwk->buk', akc, tf.nn.relu(dyda)) # w: buk
maps = tf.einsum('buk,bhwk->buhw', weights, a) # a:bhwk, m: buhw
return s, maps[..., tf.newaxis]
```
### Score CAM
$M(f, x)^u_{ij} = \text{relu}(∑_k \text{softmax}(C(A_l^k)_u) A_l^k)$
Where
$
C(A_l^k)_u = f(X_b \circ \psi(\text{up}(A_l^k)))_u - f(X_b)_u
$
This algorithm has gone through several updates. I followed the most recent implementation in [haofanwang/Score-CAM](https://github.com/haofanwang/Score-CAM/blob/master/cam/scorecam.py#L47).
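A shape-level numpy sketch of the formula above (toy sizes; in the actual method the activations are first upsampled and normalized before masking, and the channel scores $C(A^k)_u$ come from forward passes of the masked inputs):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def score_cam_map(acts, scores):
    """acts: (h, w, k) activations; scores: (k,) increases C(A^k)_u for one class u."""
    weights = softmax(scores)                  # softmax over the k channel scores
    m = np.einsum('hwk,k->hw', acts, weights)  # weighted channel combination
    return np.maximum(0.0, m)                  # relu
```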
```
def sigmoid_scorecam(x, y):
acts_used = Config.explaining.score_cam_activations
l, a = nn_s(x, training=False)
if acts_used == 'all' or acts_used is None:
acts_used = a.shape[-1]
# Sorting kernels from highest to lowest variance.
std = tf.math.reduce_std(a, axis=(1, 2))
a_high_std = tf.argsort(std, axis=-1, direction='DESCENDING')[:, :acts_used]
a = tf.gather(a, a_high_std, axis=3, batch_dims=1)
s = tf.Variable(tf.zeros((x.shape[0], y.shape[1], *Config.data.size)), name='sc_maps')
for ix in range(acts_used):
a_ix = a[..., ix:ix+1]
if tf.reduce_min(a_ix) == tf.reduce_max(a_ix):
break
s.assign_add(_scorecam_feed(x, a_ix))
return l, s[..., tf.newaxis]
@tf.function
def _scorecam_feed(x, a_ix):
print('Score-CAM feed tracing')
a_ix = tf.image.resize(a_ix, Config.data.size)
b = normalize(a_ix)
fm = nn(x * b, training=False)
fm = tf.nn.sigmoid(fm)
fm = tf.einsum('bc,bhw->bchw', fm, a_ix[..., 0])
return fm
```
Vectorized implementation:
The number of activation kernels used is defined in `Config.explaining.score_cam_activations`.
For each batch of 16 samples (`100 MB = 16 × 512×512×3 × 8÷1000÷1000`):
1. The 16 samples are fed forward, generating the logits `(16, 20)` and activations `(16, 16, 16, score_cam_activations)`
2. The masked input `(16 × score_cam_activations, 512, 512, 3)` is created (`score_cam_activations × 100 MB`).
3. The masked input is fed forward (batching is used to prevent the GPU from running out of memory).
```python
def sigmoid_scorecam(x, y):
acts_used = Config.explaining.score_cam_activations
l, a = nn_s(x, training=False)
std = tf.math.reduce_std(a, axis=(1, 2))
a_high_std = tf.argsort(std, axis=-1, direction='DESCENDING')[:, :acts_used]
  a = tf.gather(a, a_high_std, axis=-1, batch_dims=1)
a = tf.image.resize(a, Config.data.size)
b = normalize(a)
b = tf.einsum('bhwc,bhwk->bkhwc', x, b) # outer product over 2 ranks
b = tf.reshape(b, (-1, *Config.data.shape)) # batchify (B*A, H, W, C)
fm = nn.predict(b, batch_size=Config.data.batch_size)
fm = tf.nn.sigmoid(fm)
fm = tf.reshape(fm, (x.shape[0], acts_used, fm.shape[1])) # unbatchify
s = tf.einsum('bhwk,bkc->bchw', a, fm)
s = tf.nn.relu(s)
s = s[..., tf.newaxis]
return l, s
```
## Dataset
### Augmentation Policy
```
def default_policy_fn(image):
# image = tf.image.resize_with_crop_or_pad(image, *Config.data.size)
# mask = tf.image.resize_with_crop_or_pad(mask, *Config.data.size)
return image
```
### Preparing and Performance Settings
```
data_path = Config.data.path
%%bash
if [ ! -d /root/tensorflow_datasets/amazon-from-space/ ]; then
mkdir -p /root/tensorflow_datasets/amazon-from-space/
# gdown --id 12wCmah0FFPIjI78YJ2g_YWFy97gaA5S9 --output /root/tensorflow_datasets/amazon-from-space/train-jpg.tfrecords
cp /content/drive/MyDrive/datasets/amazon-from-space/train-jpg.tfrecords \
/root/tensorflow_datasets/amazon-from-space/
else
echo "Dir $data_path found. Skipping."
fi
class AmazonFromSpace:
num_train_samples = 40479
num_test_samples = 61191
classes_ = np.asarray(
['agriculture', 'artisinal_mine', 'bare_ground', 'blooming', 'blow_down',
'clear', 'cloudy', 'conventional_mine', 'cultivation', 'habitation', 'haze',
'partly_cloudy', 'primary', 'road', 'selective_logging', 'slash_burn', 'water'])
@classmethod
def int2str(cls, indices):
return cls.classes_[indices]
@staticmethod
def _bytes_feature(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value.tobytes()]))
@staticmethod
def _int64_feature(value):
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
@staticmethod
def decode_fn(record_bytes):
r = tf.io.parse_single_example(record_bytes, {
'filename': tf.io.FixedLenFeature([], tf.string),
'image': tf.io.FixedLenFeature([], tf.string),
'height': tf.io.FixedLenFeature([], tf.int64, default_value=[256]),
'width': tf.io.FixedLenFeature([], tf.int64, default_value=[256]),
'channels': tf.io.FixedLenFeature([], tf.int64, default_value=[3]),
'label': tf.io.VarLenFeature(tf.int64),
})
r['image'] = tf.reshape(tf.io.decode_raw(r['image'], tf.uint8),
(r['height'], r['width'], r['channels']))
r['label'] = tf.sparse.to_dense(r['label'])
return r
@classmethod
def load(cls, tfrecords_path):
return tf.data.TFRecordDataset(tfrecords_path).map(cls.decode_fn, num_parallel_calls=tf.data.AUTOTUNE)
CLASSES = AmazonFromSpace.classes_
int2str = AmazonFromSpace.int2str
num_samples = AmazonFromSpace.num_train_samples
num_train_samples = int((1-Config.training.valid_size)*num_samples)
num_valid_samples = int(Config.training.valid_size*num_samples)
from functools import partial
@tf.function
def load_fn(d, augment=False):
image = d['image']
labels = d['label']
image = tf.cast(image, tf.float32)
image = tf.ensure_shape(image, Config.data.shape)
# image, _ = adjust_resolution(image)
image = (augment_policy_fn(image)
if augment
else default_policy_fn(image))
image = preprocess(image)
return image, labels_to_one_hot(labels)
def adjust_resolution(image):
es = tf.constant(Config.data.size, tf.float32)
xs = tf.cast(tf.shape(image)[:2], tf.float32)
ratio = tf.reduce_min(es / xs)
xsn = tf.cast(tf.math.ceil(ratio * xs), tf.int32)
image = tf.image.resize(image, xsn, preserve_aspect_ratio=True, method='nearest')
return image, ratio
def labels_to_one_hot(labels):
return tf.reduce_max(
tf.one_hot(labels, depth=CLASSES.shape[0]),
axis=0)
def prepare(ds, batch_size, cache=False, shuffle=False, augment=False):
if cache: ds = ds.cache()
if shuffle: ds = ds.shuffle(Config.data.shuffle_buffer_size, reshuffle_each_iteration=True, seed=Config.data.train_shuffle_seed)
return (ds.map(partial(load_fn, augment=augment), num_parallel_calls=tf.data.AUTOTUNE)
.batch(batch_size, drop_remainder=True)
.prefetch(Config.data.prefetch_buffer_size))
train_dataset = AmazonFromSpace.load(f'{data_dir}/amazon-from-space/train-jpg.tfrecords')
valid_dataset = train_dataset.take(num_valid_samples)
train_dataset = train_dataset.skip(num_valid_samples)
# train = prepare(train_dataset, Config.data.batch_size)
valid = prepare(valid_dataset, Config.data.batch_size)
```
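`labels_to_one_hot` above reduces a variable-length list of label indices to a single multi-hot vector via `reduce_max` over stacked one-hot rows; the same idea in plain numpy (17 classes, as in `AmazonFromSpace.classes_`):

```python
import numpy as np

NUM_CLASSES = 17

def labels_to_multi_hot(labels):
    # Equivalent of tf.reduce_max over stacked one-hot rows.
    v = np.zeros(NUM_CLASSES, np.float32)
    v[np.asarray(labels, int)] = 1.0
    return v
```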
### Examples in The Dataset
```
#@title
for stage, batches, samples in zip(('validation',),
(valid,),
(num_valid_samples,)):
print(stage)
print(f' {batches}')
print(f' samples: {samples}')
print(f' steps : {samples // Config.data.batch_size}')
print()
#@title
for images, labels in valid.take(1):
gt = [' '.join((e[:3] for e in CLASSES[l].astype(str))) for l in labels.numpy().astype(bool)]
visualize(to_image(images[:16]), gt, rows=4, figsize=(12, 10));
```
## Network
```
print(f'Loading {Config.model.backbone.__name__}')
backbone = Config.model.backbone(
input_shape=Config.data.shape,
include_top=Config.model.include_top,
classifier_activation=Config.model.classifier_activation,
)
from tensorflow.keras.layers import Conv2D, Dense, Dropout, GlobalAveragePooling2D
class DenseKur(Dense):
"""Dense with Softmax Weights.
"""
def call(self, inputs):
kernel = self.kernel
ag = kernel # tf.abs(kernel)
ag = ag - tf.reduce_max(ag, axis=-1, keepdims=True)
ag = tf.nn.softmax(ag)
outputs = inputs @ (ag*kernel)
if self.use_bias:
outputs = tf.nn.bias_add(outputs, self.bias)
if self.activation is not None:
outputs = self.activation(outputs)
return outputs
def build_specific_classifier(
backbone,
classes,
dropout_rate=0.5,
name=None,
gpl='avg_pool',
):
x = tf.keras.Input(Config.data.shape, name='images')
y = backbone(x)
y = GlobalAveragePooling2D(name='avg_pool')(y)
# y = Dense(classes, name='predictions')(y)
y = DenseKur(classes, name='predictions')(y)
return tf.keras.Model(x, y, name=name)
backbone.trainable = False
nn = build_specific_classifier(backbone, len(CLASSES), name='resnet101_afs')
if Config.model.fine_tune_layers:
print(f'Unfreezing {Config.model.fine_tune_layers:.0%} layers.')
backbone.trainable = True
frozen_layer_ix = int((1-Config.model.fine_tune_layers) * len(backbone.layers))
for ix, l in enumerate(backbone.layers):
l.trainable = (ix > frozen_layer_ix and
(not isinstance(l, tf.keras.layers.BatchNormalization) or
not Config.model.freeze_batch_norm))
print(f'Loading weights from {Config.model.weights}')
nn.load_weights(Config.model.weights)
backbone.trainable = False
nn_s = tf.keras.Model(
inputs=nn.inputs,
outputs=[nn.output, nn.get_layer(Config.model.gap_layer_name).input],
name='nn_spatial')
sW, sb = nn_s.get_layer('predictions').weights
sW = sW * tf.nn.softmax(sW - tf.reduce_max(sW, axis=-1, keepdims=True))
```
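`DenseKur` gates its kernel with a softmax over the units axis before the usual matmul (the same transform re-applied to `sW` above). A minimal numpy sketch of that `call` (names are mine; bias and activation handling simplified):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dense_kur(inputs, kernel, bias=None):
    """Sketch of DenseKur.call: softmax-gated kernel, then an ordinary dense op."""
    gated = softmax(kernel) * kernel   # gate each input feature's weights across units
    out = inputs @ gated
    if bias is not None:
        out = out + bias
    return out
```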
## Saliency Methods
```
DT = 0.5
λ_pos = Config.explaining.λ_pos
λ_neg = Config.explaining.λ_neg
λ_bg = Config.explaining.λ_bg
```
### Min-Max CAM (ours, test ongoing)
$M(f, x)^u_{ij} = \sum_k [w^u_k - \frac{1}{|N_x|} \sum_{n\in N_x} w_k^n] A_{i,j}^k$
Where
$N_x = C_x\setminus \{u\}$
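The core weight transform can be sketched in numpy: for each class $u$, subtract the mean weight vector of the remaining classes (the $\lambda$ factors are set to 1 here for brevity; `min_max_weights` is a hypothetical name):

```python
import numpy as np

def min_max_weights(W):
    """W: (k, classes). For each class u: w_u minus the mean over the other classes."""
    k, c = W.shape
    others_mean = (W.sum(axis=1, keepdims=True) - W) / (c - 1)
    return W - others_mean
```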
```
@tf.function
def min_max_sigmoid_cam(x, y):
print(f'Min-Max CAM (tracing x:{x.shape} p:{y.shape})')
l, a = nn_s(x, training=False)
c = len(CLASSES)
s_n = tf.reduce_sum(sW, axis=-1, keepdims=True)
s_n = s_n - sW
w = λ_pos*sW - λ_neg*s_n/(c-1)
maps = tf.einsum('bhwk,ku->buhw', a, w)
return l, maps[..., tf.newaxis]
@tf.function
def contextual_min_max_sigmoid_cam(x, y):
print(f'Contextual Min-Max CAM (tracing x:{x.shape} p:{y.shape})')
l, a = nn_s(x, training=False)
p = tf.nn.sigmoid(l)
d = tf.cast(p > DT, tf.float32)
c = tf.reduce_sum(d, axis=-1)
    c = tf.reshape(c, (-1, 1, 1)) # detections (b, 1, 1)
w = d[:, tf.newaxis, :] * sW[tf.newaxis, ...] # expand kernels in d and batches in sW
w_n = tf.reduce_sum(w, axis=-1, keepdims=True)
w_n = w_n - w
w = λ_pos*sW - λ_neg*w_n / tf.maximum(c-1, 1)
maps = tf.einsum('bhwk,bku->buhw', a, w)
return l, maps[..., tf.newaxis]
```
#### Contextual ReLU MinMax
$M(f, x)^u_{ij} = \sum_k [w^{u+}_k - \frac{1}{|N_x|} \sum_{n\in N_x} w^{n+}_k +\frac{1}{|C_i|}\sum_{n\in C_i} w^{n-}_k] A_{i,j}^k$
Where
$N_x = C_x\setminus \{u\}$
```
@tf.function
def contextual_relu_min_max_sigmoid_cam(x, y):
print(f'Contextual ReLU Min-Max CAM (tracing x:{x.shape} p:{y.shape})')
l, a = nn_s(x, training=False)
p = tf.nn.sigmoid(l)
d = tf.cast(p > DT, tf.float32)
c = tf.reshape(tf.reduce_sum(d, axis=-1), (-1, 1, 1)) # select only detected
w = d[:, tf.newaxis, :] * sW[tf.newaxis, ...] # expand kernels in d and batches in sW
wa = tf.reduce_sum(w, axis=-1, keepdims=True)
wn = wa - w
w = ( λ_pos * tf.nn.relu(sW)
- λ_neg * tf.nn.relu(wn) / tf.maximum(c-1, 1)
+ λ_bg * tf.minimum(0., wa) / tf.maximum(c, 1))
maps = tf.einsum('bhwk,bku->buhw', a, w)
return l, maps[..., tf.newaxis]
@tf.function
def contextual_relu_min_max_sigmoid_cam_2(x, y):
l, a = nn_s(x, training=False)
p = tf.nn.sigmoid(l)
aw = tf.einsum('bhwk,ku->buhw', a, sW)
d = p > DT
c = tf.reduce_sum(tf.cast(d, tf.float32), axis=-1)
c = tf.reshape(c, (-1, 1, 1, 1))
e = tf.repeat(d[:, tf.newaxis, ...], d.shape[1], axis=1)
e = tf.linalg.set_diag(e, tf.fill(e.shape[:-1], False))
z = tf.fill(aw.shape, -np.inf)
an = tf.where(e[..., tf.newaxis, tf.newaxis], aw[:, tf.newaxis, ...], z[:, tf.newaxis, ...])
ab = tf.where(d[..., tf.newaxis, tf.newaxis], aw, z)
an = tf.reduce_max(an, axis=2)
ab = tf.reduce_max(ab, axis=2, keepdims=True)
maps = (
tf.maximum(0., aw)
- tf.maximum(0., an)
+ tf.minimum(0., ab)
)
return l, maps[..., tf.newaxis]
```
### Min-Max Grad-CAM (ours, test ongoing)
$M(f, x)^u_{ij} = \text{relu}(\sum_k \sum_{l,m}\frac{\partial J_u}{\partial A_{l,m}^k} A_{i,j}^k)$
Where
$J_u = S_u - \frac{1}{|N_x|} \sum_{n\in N_x} S_n$
$N_x = C_x\setminus \{u\}$
```
def min_max_activation_gain(y, s):
c = len(CLASSES)
# shape(s) == (b, c)
    s_n = tf.reduce_sum(s, axis=-1, keepdims=True) # shape(s_n) == (b, 1)
s_n = s_n-s # shape(s_n) == (b, c)
return λ_pos*s - λ_neg*s_n / (c-1)
@tf.function
def min_max_sigmoid_gradcam(x, y):
print(f'Min-Max Grad-CAM (tracing x:{x.shape} p:{y.shape})')
with tf.GradientTape(watch_accessed_variables=False) as t:
t.watch(x)
l, a = nn_s(x, training=False)
loss = min_max_activation_gain(y, l)
dlda = t.batch_jacobian(loss, a)
weights = tf.reduce_sum(dlda, axis=(-3, -2))
maps = tf.einsum('bhwc,buc->buhw', a, weights)
return l, maps[..., tf.newaxis]
def contextual_min_max_activation_gain(y, s, p):
d = tf.cast(p > DT, tf.float32)
c = tf.reduce_sum(d, axis=-1, keepdims=True) # only detections
sd = s*d
s_n = tf.reduce_sum(sd, axis=-1, keepdims=True) # sum logits detected (b, 1)
return λ_pos*s - λ_neg*(s_n - sd)/tf.maximum(c-1, 1)
@tf.function
def contextual_min_max_sigmoid_gradcam(x, y):
print(f'Contextual Min-Max Grad-CAM (tracing x:{x.shape} p:{y.shape})')
with tf.GradientTape(watch_accessed_variables=False) as t:
t.watch(x)
l, a = nn_s(x, training=False)
p = tf.nn.sigmoid(l)
loss = contextual_min_max_activation_gain(y, l, p)
dlda = t.batch_jacobian(loss, a)
weights = tf.reduce_sum(dlda, axis=(-3, -2))
maps = tf.einsum('bhwc,buc->buhw', a, weights) # a*weights
return l, maps[..., tf.newaxis]
def contextual_relu_min_max_activation_gain(y, s):
p = tf.nn.sigmoid(s)
d = tf.cast(p > DT, tf.float32)
c = tf.reduce_sum(d, axis=-1, keepdims=True)
sd = s*d # only detections
sa = tf.reduce_sum(sd, axis=-1, keepdims=True) # sum logits detected (b, 1)
sn = sa - sd
return tf.stack((
λ_pos * s,
λ_neg * sn / tf.maximum(c-1, 1),
λ_bg * (sn+sd) / tf.maximum(c, 1)
), axis=1)
@tf.function
def contextual_relu_min_max_sigmoid_gradcam(x, y):
print(f'Contextual ReLU Min-Max Grad-CAM (tracing x:{x.shape} y:{y.shape})')
with tf.GradientTape(watch_accessed_variables=False) as t:
t.watch(x)
l, a = nn_s(x, training=False)
loss = contextual_relu_min_max_activation_gain(y, l)
dlda = t.batch_jacobian(loss, a)
w, wn, wa = dlda[:, 0], dlda[:, 1], dlda[:, 2]
w = ( tf.nn.relu(w)
- tf.nn.relu(wn)
+ tf.minimum(0., wa))
weights = tf.reduce_sum(w, axis=(-3, -2))
maps = tf.einsum('bhwc,buc->buhw', a, weights)
return l, maps[..., tf.newaxis]
```
## Qualitative Analysis
```
#@title
def visualize_explaining_many(
x,
y,
p,
maps,
N=None,
max_detections=3
):
N = N or len(x)
plt.figure(figsize=(16, 2*N))
rows = N
cols = 1+2*max_detections
actual = [','.join(CLASSES[_y]) for _y in y.numpy().astype(bool)]
for ix in range(N):
detections = p[ix] > DT
visualize_explaining(
x[ix],
actual[ix],
detections,
p[ix],
maps[ix],
i0=cols*ix,
rows=rows,
cols=cols,
full=False,
max_detections=max_detections
)
plt.tight_layout()
def visualize_explaining(image,
labels,
detections,
probs,
cams,
full=True,
i0=0,
rows=2,
cols=None,
max_detections=3):
detections = detections.numpy()
im = to_image(image)
_maps = tf.boolean_mask(cams, detections)
_maps = tf.image.resize(_maps, Config.data.size)
_masked = to_image(masked(image, _maps))
plots = [im, *_maps[:max_detections]]
title = [labels] + [f'{d} {p:.0%}' for d, p in zip(CLASSES[detections], probs.numpy()[detections])]
visualize(plots, title, full=full, rows=rows, cols=cols, i0=i0)
for ix, s in enumerate(_maps[:max_detections]):
plt.subplot(rows, cols, i0+len(plots)+ix+1)
plot_heatmap(im, s[..., 0])
cams = {}
for x, y in valid.take(1):
l = tf.convert_to_tensor(nn.predict(x))
p = tf.nn.sigmoid(l)
# Only samples with two or more objects:
s = tf.reduce_sum(tf.cast(p > DT, tf.int32), axis=1) > 1
x, y, l, p = x[s], y[s], l[s], p[s]
```
#### CAM
```
_, maps = sigmoid_cam(x, y)
maps = tf.nn.relu(maps)
maps = normalize(maps)
cams['cam'] = maps
visualize_explaining_many(x, y, p, maps)
```
#### Grad-CAM
```
_, maps = sigmoid_gradcam(x, y)
maps = tf.nn.relu(maps)
maps = normalize(maps)
# visualize_explaining_many(x, y, p, maps)
cams['gradcam'] = maps
```
#### Grad-CAM++
```
_, maps = sigmoid_gradcampp(x, y)
maps = tf.nn.relu(maps)
maps = normalize(maps)
# visualize_explaining_many(x, y, p, maps)
cams['gradcampp'] = maps
```
### Score-CAM
```
_, maps = sigmoid_scorecam(x, y)
maps = tf.nn.relu(maps)
maps = normalize(maps)
visualize_explaining_many(x, y, p, maps)
cams['scorecam'] = maps
```
### Min-Max CAM
#### Vanilla
```
%%time
_, maps = min_max_sigmoid_cam(x, y)
maps = tf.nn.relu(maps)
maps = normalize(maps)
# visualize_explaining_many(x, y, p, maps)
cams['minmax_cam'] = maps
```
#### Contextual
```
%%time
_, maps = contextual_min_max_sigmoid_cam(x, y)
maps = tf.nn.relu(maps)
maps = normalize(maps)
# visualize_explaining_many(x, y, p, maps)
cams['contextual_minmax_cam'] = maps
```
#### Contextual ReLU
```
%%time
_, maps = contextual_relu_min_max_sigmoid_cam(x, y)
maps = tf.nn.relu(maps)
maps = normalize(maps)
# visualize_explaining_many(x, y, p, maps)
cams['contextual_relu_minmax_cam'] = maps
%%time
_, maps = contextual_relu_min_max_sigmoid_cam_2(x, y)
maps = tf.nn.relu(maps)
maps = normalize(maps)
# visualize_explaining_many(x, y, p, maps)
cams['contextual_relu_minmax_cam_2'] = maps
```
### Min-Max Grad-CAM
#### Vanilla
```
_, maps = min_max_sigmoid_gradcam(x, y)
maps = tf.nn.relu(maps)
maps = normalize(maps)
visualize_explaining_many(x, y, p, maps)
cams['minmax_gradcam'] = maps
```
#### Contextual
```
_, maps = contextual_min_max_sigmoid_gradcam(x, y)
maps = tf.nn.relu(maps)
maps = normalize(maps)
visualize_explaining_many(x, y, p, maps)
cams['contextual_minmax_gradcam'] = maps
```
#### Contextual ReLU
```
_, maps = contextual_relu_min_max_sigmoid_gradcam(x, y)
maps = tf.nn.relu(maps)
maps = normalize(maps)
visualize_explaining_many(x, y, p, maps)
cams['contextual_relu_minmax_gradcam'] = maps
```
### Summary
```
observing = 'cam minmax_cam contextual_minmax_cam contextual_relu_minmax_cam'.split()
titles = 'Input CAM MM C-MM CG-MM'.split()
print('Results selected for vis:', *observing, sep='\n ')
#@title
detections = p > DT
indices = tf.where(detections)
sample_ix, label_ix = indices[:, 0], indices[:, 1]
visualize(
sum(zip(to_image(tf.gather(x, sample_ix)[:48]).numpy(),
*(tf.image.resize(cams[c][detections][:48], Config.data.size).numpy()
for c in observing)),
()),
title=titles,
rows=sample_ix[:48].shape[0],
figsize=(12, 80)
);
#@title
plt.figure(figsize=(12, 80))
selected_images = to_image(tf.gather(x, sample_ix)[:48]).numpy()
rows = len(selected_images)
cols = len(observing) + 1
for ix, im in enumerate(selected_images):
plt.subplot(rows, cols, ix*cols + 1)
plt.imshow(im)
plt.axis('off')
for j, method in enumerate(observing):
map = cams[method][detections][ix]
map = tf.image.resize(map, Config.data.size)
plt.subplot(rows, cols, ix*cols + j + 2)
plot_heatmap(im, map[..., 0])
if ix == 0:
plt.title(titles[j+1])
plt.tight_layout()
```
## Quantitative Analysis
#### Metrics
##### **Increase in Confidence %**
> *Removing background noise should improve confidence (higher=better)*
$\frac{1}{∑ |C_i|} ∑^N_i∑^{C_i}_c [Y^c_i < O_{ic}^c] 100$
Thought: this probably works better for *softmax* classifiers.
```
def increase_in_confidence(
p, # f(x) (batch, classes)
y, # f(x)[f(x) > 0.5] (detections, 1)
o, # f(x*mask(x, m)) (detections, classes)
samples_ix,
units_ix
):
oc = tf.gather(o, units_ix, axis=1, batch_dims=1) # (detections, 1)
incr = np.zeros(p.shape, np.uint32)
incr[samples_ix, units_ix] = tf.cast(y < oc, tf.uint32).numpy()
return incr.sum(axis=0)
```
##### **Average Drop %**
> *Masking with an accurate mask should not decrease confidence (lower=better)*
$\frac{1}{∑ |C_i|} ∑_i^N ∑_c^{C_i} \frac{max(0, Y_i^c − O_{ic}^c)}{Y_i^c} 100$
Measures if your mask is correctly positioned on top of the important regions that determine the class of interest.
```
def average_drop(p, y, o, samples_ix, units_ix):
oc = tf.gather(o, units_ix, axis=1, batch_dims=1)
drop = np.zeros(p.shape)
drop[samples_ix, units_ix] = (tf.nn.relu(y - oc) / y).numpy()
return drop.sum(axis=0)
```
##### **Average Retention % (ours, testing ongoing)**
> *Masking the input with an accurate complement mask for class $c$ should decrease confidence in class $c$ (higher=better)*
$\frac{1}{∑ |C_i|} ∑_i^N ∑_c^{C_i} \frac{max(0, Y_i^c − \bar{O}_{ic}^c)}{Y_i^c} 100$
Where $\bar{O}_{ic}^c = f(x_i \circ (1-\psi(M(f, x_i)_{hw}^c)))^c$
Masking the input $x_i$ for all classes except $c$ should cause the model's confidence in $c$ to drop.
###### **Average Drop & Retention % (ours)**
\begin{align}
\frac{\text{drop} + (1-\text{retention})}{2}
&= \frac{1}{2 ∑ |C_i|} ∑_i^N ∑_c^{C_i} [\frac{max(0, Y_i^c − O_{ic}^c)}{Y_i^c} + (1-\frac{max(0, Y_i^c − \bar{O}_{ic}^c)}{Y_i^c})] 100 \\
&= \frac{1}{2 ∑ |C_i|} ∑_i^N ∑_c^{C_i} [1 + \frac{Y_i^c − O_{ic}^c - (Y_i^c − \bar{O}_{ic}^c)}{Y_i^c}] 100 \\
&= \frac{1}{2 ∑ |C_i|} ∑_i^N ∑_c^{C_i} [1 + \frac{\bar{O}_{ic}^c − O_{ic}^c}{Y_i^c}] 100
\end{align}
Where
* $O_{ic}^c = f(x_i \circ \psi(M(f, x_i)_{hw}^c))^c$
* $\bar{O}_{ic}^c = f(x_i \circ (1-\psi(M(f, x_i)_{hw}^c)))^c$
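The identity can be checked numerically. The middle step drops the $\max(0,\cdot)$ clamps, so the check assumes $Y \ge O$ and $Y \ge \bar{O}$, as the derivation implicitly does (random confidences, numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.uniform(0.5, 1.0, size=100)         # original confidences
O = Y * rng.uniform(0.0, 1.0, size=100)     # masked-input confidences, O <= Y
Obar = Y * rng.uniform(0.0, 1.0, size=100)  # complement-mask confidences, Obar <= Y

drop = np.mean((Y - O) / Y) * 100
retention = np.mean((Y - Obar) / Y) * 100
combined = np.mean(1 + (Obar - O) / Y) * 100 / 2

# Left-hand side of the derivation equals the simplified right-hand side.
assert np.isclose((drop + (100 - retention)) / 2, combined)
```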
##### **Average Drop of Others % (ours, testing ongoing)**
> *An ideal mask for class $c$ is not retaining any objects of other classes (higher=better)*
$\frac{1}{∑ |C_i|} ∑_i^N ∑_c^{C_i} \frac{1}{|D_i|} ∑_d^{D_i} \frac{max(0, Y_i^d − O_{ic}^d)}{Y_i^d} 100$
Masking the input $x_i$ for a given class $c$ should cause the confidence in other classes to drop.
I.e., $f(x_i\circ \psi(M(f, x_i)^c_{hw}))^d \sim 0, \forall d\in D_i = C_i\setminus \{c\}$.
For single-label problems, $D_i = \emptyset$ and Average Drop of Others is not defined. How to solve this?
```
def average_drop_of_others(p, s, y, o, samples_ix, units_ix):
# Drop of all units, for all detections
d = tf.gather(p, samples_ix)
d = tf.nn.relu(d - o) / d
# Remove drop of class `c` and non-detected classes
detected = tf.cast(tf.gather(s, samples_ix), tf.float32)
d = d*detected
d = tf.reduce_sum(d, axis=-1) - tf.gather(d, units_ix, axis=1, batch_dims=1)
c = tf.reduce_sum(detected, axis=-1)
# Normalize by the number of peer labels for detection `c`
    d = d / tf.maximum(1., c - 1)
drop = np.zeros(p.shape)
drop[samples_ix, units_ix] = d.numpy()
return drop.sum(axis=0)
```
##### **Average Retention of Others % (ours, testing ongoing)**
> *An ideal mask complement for class $c$ should cover all objects of the other classes (lower=better)*
$\frac{1}{∑ |C_i|} ∑_i^N ∑_c^{C_i} \frac{1}{|D_i|} ∑_d^{D_i} \frac{max(0, Y_i^d − \bar{O}_{ic}^d)}{Y_i^d} 100$
Masking the input $x_i$ for all classes except $c$ should cause the confidence in other classes to stay the same or increase.
I.e., $f(x_i\circ (1-\psi(M(f, x_i)^c_{ij})))^d \approx f(x_i)^d, \forall d\in D_i = C_i\setminus \{c\}$.
#### Experiments
```
#@title Testing Loop
def experiment_with(dataset, cam_method, cam_modifier):
print(f'Testing {cam_method.__name__}')
t = time()
r = cam_evaluation(nn, dataset, cam_method=cam_method, cam_modifier=cam_modifier)
print(f'elapsed: {(time() - t)/60:.1f} minutes', end='\n\n')
return r.assign(method=cam_method.__name__)
def cam_evaluation(nn, dataset, cam_method, cam_modifier):
metric_names = ('increase %', 'avg drop %', 'avg retention %',
'avg drop of others %', 'avg retention of others %',
'detections')
metrics = (np.zeros(len(CLASSES), np.uint16),
np.zeros(len(CLASSES)),
np.zeros(len(CLASSES)),
np.zeros(len(CLASSES)),
np.zeros(len(CLASSES)),
np.zeros(len(CLASSES), np.uint16))
try:
for step, (x, y) in enumerate(dataset):
p, maps = cam_method(x, y)
p = tf.nn.sigmoid(p)
maps = cam_modifier(maps)
for e, f in zip(metrics, cam_evaluation_step(nn, x, p, maps)):
e += f
print('.', end='' if (step+1) % 80 else '\n')
print()
except KeyboardInterrupt:
print('interrupted')
metrics, detections = metrics[:-1], metrics[-1]
results = {n: 100*m/detections for n, m in zip(metric_names, metrics)}
results['label'] = CLASSES
results['detections'] = detections
results = pd.DataFrame(results)
print(f'Average Drop %: {results["avg drop %"].mean():.4}%')
print(f'Average Increase %: {results["increase %"].mean():.4}%')
return results
def cam_evaluation_step(nn, x, p, m):
s = p > DT
w = tf.where(s)
samples_ix, units_ix = w[:, 0], w[:, 1]
md = tf.image.resize(m[s], Config.data.size)
detections = tf.reduce_sum(tf.cast(s, tf.uint32), axis=0)
y = p[s] # (batch, c) --> (detections)
xs = tf.gather(x, samples_ix) # (batch, 300, 300, 3) --> (detections, 300, 300, 3)
o = nn.predict(masked(xs, md), batch_size=Config.data.batch_size)
o = tf.nn.sigmoid(o)
    co = nn.predict(masked(xs, 1 - md), batch_size=Config.data.batch_size)
co = tf.nn.sigmoid(co)
samples_ix, units_ix = samples_ix.numpy(), units_ix.numpy()
incr = increase_in_confidence(p, y, o, samples_ix, units_ix)
drop = average_drop(p, y, o, samples_ix, units_ix)
rete = average_drop(p, y, co, samples_ix, units_ix)
drop_of_others = average_drop_of_others(p, s, y, o, samples_ix, units_ix)
rete_of_others = average_drop_of_others(p, s, y, co, samples_ix, units_ix)
return incr, drop, rete, drop_of_others, rete_of_others, detections.numpy()
methods_being_tested = (
# Baseline
# sigmoid_cam,
# sigmoid_gradcam,
# Best solutions
# sigmoid_gradcampp,
sigmoid_scorecam,
# Ours
# min_max_sigmoid_cam,
# contextual_min_max_sigmoid_cam,
# contextual_relu_min_max_sigmoid_cam,
# contextual_relu_min_max_sigmoid_cam_2,
)
relu_and_normalize = lambda c: normalize(tf.nn.relu(c))
results = pd.concat(
[
experiment_with(valid, m, relu_and_normalize)
for m in methods_being_tested
]
)
# if os.path.exists(Config.report):
# raise FileExistsError('You are asking me to override a report file.')
results.to_csv(Config.report, index=False)
```
#### Report
```
results = pd.read_csv(Config.report)
methods = (
'sigmoid_cam',
'sigmoid_gradcampp',
'sigmoid_scorecam',
'min_max_sigmoid_cam',
'contextual_min_max_sigmoid_cam',
'contextual_relu_min_max_sigmoid_cam',
)
methods_detailed = (
'sigmoid_cam',
'sigmoid_scorecam',
'contextual_min_max_sigmoid_cam',
'contextual_relu_min_max_sigmoid_cam'
)
metric_names = (
'increase %',
'avg drop %',
'avg retention %',
'avg drop of others %',
'avg retention of others %',
'f1 score',
'f1 score negatives'
)
minimizing_metrics = {'avg drop %', 'avg retention of others %', 'f1 score negatives'}
def fb_score(a, b, beta=1):
beta2 = beta**2
denom = (beta2 * b + a)
denom[denom == 0.] = 1
return (1+beta2) * a * b / denom
results['f1/2 score'] = fb_score(results['avg retention %'], results['avg drop of others %'], beta=1/2)
results['f1 score'] = fb_score(results['avg retention %'], results['avg drop of others %'], beta=1)
results['f2 score'] = fb_score(results['avg retention %'], results['avg drop of others %'], beta=2)
results['f1/2 score negatives'] = fb_score(results['avg drop %'], results['avg retention of others %'], beta=1/2)
results['f1 score negatives'] = fb_score(results['avg drop %'], results['avg retention of others %'], beta=1)
results['f2 score negatives'] = fb_score(results['avg drop %'], results['avg retention of others %'], beta=2)
(results
.drop('detections', axis=1)
.groupby('method')
.mean()
.round(4)/100)
#@title Macro Average (Class-Balanced)
macro_avg = (
results
.groupby('method')
.mean()
.reindex(methods)[list(metric_names)]
) / 100
macro_avg_hm = macro_avg.copy()
for m in minimizing_metrics:
macro_avg_hm[m] = 1-macro_avg_hm[m]
macro_avg_hm -= macro_avg_hm.min(axis=0)
macro_avg_hm /= macro_avg_hm.max(axis=0) + 1e-07
plt.figure(figsize=(6, 6))
sns.heatmap(
macro_avg_hm,
fmt='.2%',
annot=macro_avg,
cmap='RdPu',
cbar=False,
xticklabels=[c.replace('_', ' ') for c in macro_avg.columns],
yticklabels=[i.replace('_', ' ') for i in macro_avg.index],
);
#@title Weighted Average (Class Frequency Weighted)
total_detections = (
results
.groupby('method')
.agg({'detections': 'sum'})
.rename(columns={'detections': 'total_detections'})
)
w_avg = results.merge(total_detections, how='left', left_on='method', right_index=True)
metric_results = {
m: w_avg[m] * w_avg.detections / w_avg.total_detections
for m in metric_names
}
metric_results['method'] = w_avg.method
metric_results['label'] = w_avg.label
w_avg = (
pd.DataFrame(metric_results)
.groupby('method')
.sum()
.reindex(methods) / 100
)
hm = w_avg.copy()
for m in minimizing_metrics:
hm[m] = 1-hm[m]
hm -= hm.min(axis=0)
hm /= hm.max(axis=0) + 1e-07
plt.figure(figsize=(6, 6))
sns.heatmap(
hm,
fmt='.2%',
annot=w_avg,
cmap='RdPu',
cbar=False,
xticklabels=[c.replace('_', ' ') for c in w_avg.columns],
yticklabels=[i.replace('_', ' ') for i in w_avg.index]
);
#@title Detailed Results per Class
plt.figure(figsize=(16, 6))
sns.boxplot(
data=results[results.method.isin(methods_detailed)]
.melt(('method', 'label'), metric_names, 'metric'),
hue='method',
x='metric',
y='value'
);
#@title F1 Score by Label and CAM Method
plt.figure(figsize=(16, 6))
sns.barplot(
data=results.sort_values('f1 score', ascending=False),
hue='method',
y='f1 score',
x='label'
)
plt.xticks(rotation=-45);
```
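The `fb_score` helper above is the standard F-beta combination of two complementary percentage metrics. A minimal standalone sketch (with hypothetical input values) shows that for $\beta = 1$ it reduces to the harmonic mean, and that the zero-denominator guard returns 0 rather than NaN:

```python
import numpy as np

def fb_score(a, b, beta=1.0):
    # F-beta combination of two metric arrays; a zero denominator is
    # replaced by 1 so the score becomes 0 instead of NaN.
    beta2 = beta ** 2
    denom = beta2 * b + a
    denom = np.where(denom == 0.0, 1.0, denom)
    return (1 + beta2) * a * b / denom

retention = np.array([80.0, 0.0])    # hypothetical avg retention %
drop_others = np.array([60.0, 0.0])  # hypothetical avg drop of others %
print(fb_score(retention, drop_others))  # harmonic mean 2*80*60/(80+60) ≈ 68.57, and 0
```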
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Running TFLite models
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/TensorFlow%20Deployment/Course%202%20-%20TensorFlow%20Lite/Week%201/Examples/TFLite_Week1_Linear_Regression.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/lmoroney/dlaicourse/blob/master/TensorFlow%20Deployment/Course%202%20-%20TensorFlow%20Lite/Week%201/Examples/TFLite_Week1_Linear_Regression.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
## Setup
```
try:
%tensorflow_version 2.x
except:
pass
import pathlib
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
print('\u2022 Using TensorFlow Version:', tf.__version__)
```
## Create a Basic Model of the Form y = mx + c
```
# Create a simple Keras model.
x = [-1, 0, 1, 2, 3, 4]
y = [-3, -1, 1, 3, 5, 7]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(units=1, input_shape=[1])
])
model.compile(optimizer='sgd',
loss='mean_squared_error')
model.fit(x, y, epochs=200)
```
## Generate a SavedModel
```
export_dir = 'saved_model/1'
tf.saved_model.save(model, export_dir)
```
## Convert the SavedModel to TFLite
```
# Convert the model.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()
tflite_model_file = pathlib.Path('model.tflite')
tflite_model_file.write_bytes(tflite_model)
```
## Initialize the TFLite Interpreter To Try It Out
```
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test the TensorFlow Lite model on random input data.
input_shape = input_details[0]['shape']
inputs, outputs = [], []
for _ in range(100):
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
tflite_results = interpreter.get_tensor(output_details[0]['index'])
# Test the TensorFlow model on random input data.
tf_results = model(tf.constant(input_data))
output_data = np.array(tf_results)
inputs.append(input_data[0][0])
outputs.append(output_data[0][0])
```
## Visualize the Model
```
%matplotlib inline
plt.plot(inputs, outputs, 'r')
plt.show()
```
## Download the TFLite Model File
If you are running this notebook in a Colab, you can run the cell below to download the tflite model to your local disk.
**Note**: If the file does not download when you run the cell, try running the cell a second time.
```
try:
from google.colab import files
files.download(tflite_model_file)
except:
pass
```
## Facial keypoints detection
In this task you will create a facial keypoint detector based on a CNN regressor.

### Load and preprocess data
The script `get_data.py` unpacks the data — images and labelled points. 6000 images are located in the `images` folder and the keypoint coordinates are in the `gt.csv` file. Run the cell below to unpack the data.
```
from get_data import unpack
unpack('facial-keypoints-data.zip')
```
Now you have to read the `gt.csv` file and the images from the `images` directory. The file `gt.csv` contains a header and the ground-truth points for every image in the `images` folder. It has 29 columns: the first column is the filename and the next 28 columns are the `x` and `y` coordinates of 14 facepoints. We will perform the following preprocessing:
1. Scale all images to resolution $100 \times 100$ pixels.
2. Scale all coordinates to range $[-0.5; 0.5]$. To obtain that, divide all x's by width (or number of columns) of image, and divide all y's by height (or number of rows) of image and subtract 0.5 from all values.
The function `load_imgs_and_keypoints` should return a tuple of two numpy arrays: `imgs` of shape `(N, 100, 100, 3)`, where `N` is the number of images, and `points` of shape `(N, 28)`.
```
### Useful routines for preparing data
from numpy import array, zeros
from os.path import join
from skimage.color import gray2rgb
from skimage.io import imread
from skimage.transform import resize
def load_imgs_and_keypoints(dirname='facial-keypoints'):
# Write your code for loading images and points here
pass
imgs, points = load_imgs_and_keypoints()
# Example of output
%matplotlib inline
from skimage.io import imshow
imshow(imgs[0])
points[0]
```
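The coordinate scaling in step 2 can be sketched as a small pure-NumPy helper (a sketch only; `normalize_points` is a hypothetical name, not part of the required interface):

```python
import numpy as np

def normalize_points(points, width, height):
    # Map pixel coordinates to [-0.5, 0.5]: x / width - 0.5, y / height - 0.5.
    # points is a flat array (x1, y1, x2, y2, ...).
    pts = np.asarray(points, dtype=float).copy()
    pts[0::2] = pts[0::2] / width - 0.5
    pts[1::2] = pts[1::2] / height - 0.5
    return pts

print(normalize_points([100, 50, 0, 100], width=200, height=100))
# [ 0.   0.  -0.5  0.5]
```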
### Visualize data
Let's prepare a function to visualize points on an image. The function takes two arguments, an image and a vector of point coordinates, and draws the points on the image (just like the first image in this notebook).
```
import matplotlib.pyplot as plt
# Circle may be useful for drawing points on face
# See matplotlib documentation for more info
from matplotlib.patches import Circle
def visualize_points(img, points):
# Write here function which obtains image and normalized
# coordinates and visualizes points on image
pass
visualize_points(imgs[1], points[1])
```
### Train/val split
Run the following code to obtain train/validation split for training neural network.
```
from sklearn.model_selection import train_test_split
imgs_train, imgs_val, points_train, points_val = train_test_split(imgs, points, test_size=0.1)
```
### Simple data augmentation
For better training we will use simple data augmentation — flipping an image together with its points. Implement a function `flip_img` which flips an image and its points. Make sure that the points are flipped correctly! For instance, points on the right eye should become points on the left eye (i.e. you have to mirror the coordinates and swap the corresponding points on the left and right sides of the face). Visualize an example of an original and a flipped image.
```
def flip_img(img, points):
# Write your code for flipping here
pass
f_img, f_points = flip_img(imgs[1], points[1])
visualize_points(f_img, f_points)
```
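One possible sketch of `flip_img` in pure NumPy. Note that the left/right pairing table below is hypothetical — the actual index pairs depend on the dataset's keypoint ordering and must be checked against it:

```python
import numpy as np

# Hypothetical left/right keypoint pairs that must swap after mirroring;
# verify them against the dataset's actual keypoint ordering.
FLIP_PAIRS = [(0, 3), (1, 2), (4, 9), (5, 8), (6, 7), (11, 13)]

def flip_img(img, points):
    # Mirror the image horizontally by reversing the column axis.
    f_img = img[:, ::-1]
    f_points = np.asarray(points, dtype=float).copy()
    f_points[0::2] *= -1          # x is in [-0.5, 0.5], so mirroring negates x
    pts = f_points.reshape(-1, 2)
    for i, j in FLIP_PAIRS:       # swap symmetric keypoints (right eye <-> left eye)
        pts[[i, j]] = pts[[j, i]]
    return f_img, pts.reshape(-1)
```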
Time to augment our training sample. Apply the flip to every image in the training sample. As a result you should obtain two arrays, `aug_imgs_train` and `aug_points_train`, which contain the original images and points along with the flipped ones.
```
# Write your code here
visualize_points(aug_imgs_train[2], aug_points_train[2])
visualize_points(aug_imgs_train[3], aug_points_train[3])
```
### Network architecture and training
Now let's define the neural network regressor. It will have 28 outputs, 2 numbers per point. The precise architecture is up to you. We recommend adding 2-3 (`Conv2D` + `MaxPooling2D`) pairs, then `Flatten` and 2-3 `Dense` layers. Don't forget ReLU activations. We also recommend adding `Dropout` to every `Dense` layer (with p from 0.2 to 0.5) to prevent overfitting.
```
from keras.models import Sequential
from keras.layers import (
Conv2D, MaxPooling2D, Flatten,
Dense, Dropout
)
model = Sequential()
# Define here your model
```
Time to train! Since we are training a regressor, make sure that you use mean squared error (MSE) as the loss. Feel free to experiment with the optimization method (SGD, Adam, etc.) and its parameters.
```
# ModelCheckpoint can be used for saving model during training.
# Saved models are useful for finetuning your model
# See keras documentation for more info
from keras.callbacks import ModelCheckpoint
from keras.optimizers import SGD, Adam
# Choose optimizer, compile model and run training
```
### Visualize results
Now visualize the neural network's results on several images from the validation sample. Make sure that your network outputs different points for different images (i.e. it doesn't output some constant).
```
# Example of output
```
```
import os
import math
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
```
### A note on resources in Spark
Spark manages resources through the driver, while the resources themselves are the distributed executors.

An `executor` is a worker process that runs individual tasks within an overall Spark job. Executors are launched as processes, carry out their tasks and send the results back to the `driver`.
The `driver` is the process that controls a running Spark job. It converts your code into tasks and schedules those tasks onto the `executors`.
The workflow they carry out for your code can be pictured as follows:
- the driver requests resources from the cluster manager
- executors are launched once the requested resources are available
- the driver plans the work and transforms your code into an execution plan
- executors run their tasks and report the results of their work back to the driver
- every task must complete and the driver must receive its result (report); otherwise the driver must restart the task
- spark.stop() shuts down the job and releases the resources.

#### Comparing how Hadoop and Spark work
`Hadoop workflow:`




-----------------------------------------
`Spark workflow (client mode):`


`Spark workflow (cluster mode):`


**Cluster mode requires integration with YARN.**
This requires configuring Spark to interact with Hadoop:
- set the HADOOP_CONF_DIR environment variable
- set the SPARK_HOME environment variable
- in SPARK_HOME/conf/spark-defaults.conf set the spark.master parameter to yarn: `spark.master | yarn`
- copy SPARK_HOME/jars to HDFS so the nodes can share them: `hdfs dfs -put *.jar /user/spark/share/lib`
Example `.bashrc`:
```bash
export HADOOP_CONF_DIR=/<path of hadoop dir>/etc/hadoop
export YARN_CONF_DIR=/<path of hadoop dir>/etc/hadoop
export SPARK_HOME=/<path of spark dir>
export LD_LIBRARY_PATH=/<path of hadoop dir>/lib/native:$LD_LIBRARY_PATH
```
Add the changes to the `spark-defaults.conf` config:
```bash
spark.master yarn
spark.yarn.jars hdfs://path_to_master:9000/user/spark/share/lib/*.jar
spark.executor.memory 1g
spark.driver.memory 512m
spark.yarn.am.memory 512m
```
## Calculating the Spark parameters
```
# set the cluster parameters
# how many executors you want to use
# (any number from 1 to +inf; it is only needed as an index range for the calculations)
max_executor = 300
# number of nodes in the cluster (remember: 1 node = 1 worker process)
# unit: count
nodes = 10
nodes = nodes - 1
# number of CPUs in the cluster (summed over all nodes) (in this example you will use 1/3 of the cluster)
# unit: count
cpu = 80
cpu = int((cpu - 1) / 3)
# amount of RAM in the cluster (in this example you will use 1/3 of the cluster)
# unit: megabytes
memo = 1048576
memo = int((memo - 1024) / 3)
# set the memory overhead coefficient (the default value here)
overhead_coef = 0.1
# degree of parallelism - the base number of partitions in an RDD
parallelizm_per_core = 5
# partitioning factor in HDFS
partition_factor = 3
```
### Remember!
We are allocating resources for Spark, so keep in mind that Spark performs the following operations:

```
# create a DataFrame with the settings (the executor count is an increasing range)
df = pd.DataFrame(dict(executors=np.arange(1, max_executor)))
# calculate the amount of memory per executor
# you could use the formula NUM_CORES * ((EXECUTOR_MEMORY + MEMORY_OVERHEAD) / EXECUTOR_CORES) (result in GB per executor)
# but here is another calculation method
df['total_memo_per_executor'] = np.floor((memo / df.executors) * 0.9)
# overhead is 10% of the executor memory by default
df['total_memooverhead_per_executor'] = df['total_memo_per_executor'] * 0.10
# memory left unused on the node
df['unused_memo_per_node'] = memo - (df.executors * df['total_memo_per_executor'])
# how many CPUs will be taken
df['total_core_per_executor'] = np.floor(cpu / df.executors)
# CPUs left unused on the node
df['unused_core_per_nodes'] = cpu - (df.executors * df['total_core_per_executor'])
# memory breakdown per executor
df['overhead_memo'] = (df['total_memo_per_executor'] * overhead_coef)
df['executor_memo'] = df['total_memo_per_executor'] - df['overhead_memo']
# number of cores per executor
df['executor_cores'] = np.floor(cpu / df.executors)
# minus 1 for the driver
df['executor_instance'] = (df.executors * df['executor_cores']) - 1
# compute, or adjust manually in the code (you can start parallelizm_per_core at 2)
df['parallelism'] = df['executor_instance'] * df['executor_cores'] * parallelizm_per_core
df['num_partitions'] = df['executor_instance'] * df['executor_cores'] * partition_factor
# % of memory used on the cluster
df['used_memo_persentage'] = (1 - ((df['overhead_memo'] + df['executor_memo']) / memo)) * 100
# % of CPUs used on the cluster
df['used_cpu_persentage'] = ((cpu - df['unused_core_per_nodes']) / cpu) * 100
df.head(10)
# plot the resource distribution
(df[(df['used_memo_persentage'] > 0) & \
(df['used_cpu_persentage'] > 0) & \
(df['used_memo_persentage'] <= 100) & \
(df['used_cpu_persentage'] <= 100) & \
(df['executor_instance'] > 0) & \
(df['parallelism'] > 0) & \
(df['num_partitions'] > 0)
])[['executors',
'executor_cores',
'executor_instance',
'used_cpu_persentage'
]].plot(kind='box', figsize=(6,6), title='Distribution of values per parameter')
# plot the resource distribution
fig, ax = plt.subplots()
tdf1 = (df[(df['used_memo_persentage'] > 0) & \
(df['used_cpu_persentage'] > 0) & \
(df['used_memo_persentage'] <= 100) & \
(df['used_cpu_persentage'] <= 100) & \
(df['executor_instance'] > 0) & \
(df['parallelism'] > 0) & \
(df['num_partitions'] > 0)
])[['executors',
'executor_cores',
'executor_instance',
'used_cpu_persentage',
'used_memo_persentage'
]].plot(ax = ax, figsize=(10, 6))
tdf2 = (df[(df['used_memo_persentage'] > 0) & \
(df['used_cpu_persentage'] > 0) & \
(df['used_memo_persentage'] <= 100) & \
(df['used_cpu_persentage'] <= 100) & \
(df['executor_instance'] > 0) & \
(df['parallelism'] > 0) & \
(df['num_partitions'] > 0)
])[[
'executor_memo'
]]
ax2 = ax.twinx()
rspine = ax2.spines['right']
rspine.set_position(('axes', 1.15))
ax2.set_frame_on(True)
ax2.patch.set_visible(False)
fig.subplots_adjust(right=0.7)
tdf2.plot(ax=ax2, color='black')
ax2.legend(bbox_to_anchor=(1.5, 0.5))
# set the amount of memory and CPUs to use
# for example: in the calculation you used 1/3 of the available resources (since 3 people share the cluster)
# now select all of the resources available to you
df_opt = df[(df['used_memo_persentage'] == df['used_memo_persentage'].max())]
df_opt.head()
# you can see that taking max() over RAM alone exceeds the available CPUs, so you cannot use those parameters
# filter on the other columns instead (to get sensible parameters)
# executor_instance
# parallelism
# num_partitions
# maximize the use of the allocated resources.
df_opt = df[(df['executor_instance'] > 0) & \
(df['parallelism'] > 0) & \
(df['num_partitions'] > 0)]
df_opt = df_opt[(df_opt['used_memo_persentage'] == df_opt['used_memo_persentage'].max())]
df_opt
# store the parameters in variables
sparkYarnExecutorMemoryOverhead = "{}Mb".format(df_opt['overhead_memo'].astype('int').values[0])
sparkExecutorsMemory = "{}Mb".format(df_opt['executor_memo'].astype('int').values[0])
sparkDriverMemory = "{}Mb".format(df_opt['executor_memo'].astype('int').values[0])
sparkDriverMaxResultSize = "{}Mb".format(df_opt['executor_memo'].astype('int').values[0] if df_opt['executor_memo'].astype('int').values[0] <= 4080 else 4080)
sparkExecutorCores = "{}".format(df_opt['executor_cores'].astype('int').values[0])
sparkDriverCores = "{}".format(df_opt['executor_cores'].astype('int').values[0])
defParallelism = "{}".format(df_opt['parallelism'].astype('int').values[0])
sparkDriverMemory
# Add more parameters
# when sparkDynamicAllocationEnabled is "true", resources are allocated dynamically up to the full available amount
sparkDynamicAllocationEnabled = "true"
sparkShuffleServiceEnabled = "true"
# how long before idle resources are released
sparkDynamicAllocationExecutorIdleTimeout = "60s"
# how long before cached data is released
sparkDynamicAllocationCachedExecutorIdleTimeout = "600s"
# limit the amount of resources for a single Spark session
sparkDynamicAllocationMaxExecutors = "{}".format(df_opt['executors'].astype('int').values[0])
sparkDynamicAllocationMinExecutors = "{}".format(int(df_opt['executors'].values[0] / 10))
# set the serializer
sparkSerializer = "org.apache.spark.serializer.KryoSerializer"
# try several ports for the connection in case one is busy
sparkPortMaxRetries = "10"
# zip the spark directory and upload it to HDFS to speed up startup
sparkYarnArchive = "hdfs:///nes/spark.zip"
# configure the GC options
sparkExecutorExtraJavaOptions = "-XX:+UseG1GC -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark -XX:InitiatingHeapOccupancyPercent=35 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:OnOutOfMemoryError='kill -9 %p'"
sparkDriverExtraJavaOptions = "-XX:+UseG1GC -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark -XX:InitiatingHeapOccupancyPercent=35 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:OnOutOfMemoryError='kill -9 %p'"
yarnNodemanagerVmem_check_enabled = "false"
yarnNodemanagerPmem_check_enabled = "false"
```
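The computed values are typically handed to `spark-submit` as `--conf` key/value pairs. A minimal sketch of assembling such a command line — the literal values below are hypothetical stand-ins for the variables computed above (`sparkExecutorsMemory`, `sparkExecutorCores`, `defParallelism`, ...), and `job.py` is a placeholder script name:

```python
# Hypothetical values standing in for the df_opt-derived variables above.
conf = {
    "spark.executor.memory": "19660Mb",
    "spark.yarn.executor.memoryOverhead": "2184Mb",
    "spark.executor.cores": "4",
    "spark.default.parallelism": "500",
    "spark.dynamicAllocation.enabled": "true",
    "spark.shuffle.service.enabled": "true",
    "spark.serializer": "org.apache.spark.serializer.KryoSerializer",
}
# Render the settings as a spark-submit invocation (sorted for stable output).
cmd = "spark-submit " + " ".join(
    f"--conf {k}={v}" for k, v in sorted(conf.items())
) + " job.py"
print(cmd)
```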
<table width=60% >
<tr style="background-color: white;">
<td><img src='https://www.creativedestructionlab.com/wp-content/uploads/2018/05/xanadu.jpg'></td>
</tr>
</table>
---
<img src='https://raw.githubusercontent.com/XanaduAI/strawberryfields/master/doc/_static/strawberry-fields-text.png'>
---
<br>
<center> <h1> Gaussian boson sampling tutorial </h1></center>
To get a feel for how Strawberry Fields works, let's try coding a quantum program, Gaussian boson sampling.
## Background information: Gaussian states
A Gaussian state is one that can be described by a [Gaussian function](https://en.wikipedia.org/wiki/Gaussian_function) in phase space. For example, a single-mode Gaussian state squeezed in the $x$ quadrature by the squeezing operator $S(r)$ can be described by the following [Wigner quasiprobability distribution](https://en.wikipedia.org/wiki/Wigner_quasiprobability_distribution):
$$W(x,p) = \frac{2}{\pi}e^{-2\sigma^2(x-\bar{x})^2 - 2(p-\bar{p})^2/\sigma^2}$$
where $\sigma$ represents the **squeezing**, and $\bar{x}$ and $\bar{p}$ are the mean **displacements** in $x$ and $p$, respectively. For multimode states containing $N$ modes, this can be generalised: Gaussian states are uniquely defined by a [multivariate Gaussian function](https://en.wikipedia.org/wiki/Multivariate_normal_distribution), defined in terms of the **vector of means** $\mu$ and a **covariance matrix** $\sigma$.
### The position and momentum basis
For example, consider a single mode in the position and momentum quadrature basis (the default for Strawberry Fields). Assuming a Gaussian state with displacement $\alpha = \bar{x}+i\bar{p}$ and squeezing $\xi = r e^{i\phi}$ in the phase space, it has a vector of means and a covariance matrix given by:
$$ \mu = (\bar{x},\bar{p}),~~~~~~\sigma = SS^\dagger=R(\phi/2)\begin{bmatrix}e^{-2r} & 0 \\0 & e^{2r} \\\end{bmatrix}R(\phi/2)^T$$
where $S$ is the squeezing operator, and $R(\phi)$ is the standard two-dimensional rotation matrix. For multiple modes, in Strawberry Fields we use the convention
$$ \mu = (\bar{x}_1,\bar{x}_2,\dots,\bar{x}_N,\bar{p}_1,\bar{p}_2,\dots,\bar{p}_N)$$
and therefore, considering $\phi=0$ for convenience, the multimode covariance matrix is simply
$$\sigma = \text{diag}(e^{-2r_1},\dots,e^{-2r_N},e^{2r_1},\dots,e^{2r_N})\in\mathbb{C}^{2N\times 2N}$$
If a continuous-variable state *cannot* be represented in the above form (for example, a single photon Fock state or a cat state), then it is non-Gaussian.
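A quick numerical sketch of the $(x_1,\dots,x_N,p_1,\dots,p_N)$-ordered covariance matrix above for $\phi=0$ (pure NumPy; `xp_covariance` is a hypothetical helper name, not a Strawberry Fields function):

```python
import numpy as np

def xp_covariance(rs):
    # Covariance of N independently squeezed vacuum modes (phi = 0):
    # diag(e^{-2 r_1}, ..., e^{-2 r_N}, e^{2 r_1}, ..., e^{2 r_N})
    rs = np.asarray(rs, dtype=float)
    return np.diag(np.concatenate([np.exp(-2 * rs), np.exp(2 * rs)]))

cov = xp_covariance([0.5, 1.0])
print(np.round(np.diag(cov), 4))  # [0.3679 0.1353 2.7183 7.3891]
```

Squeezing shrinks the $x$ variances below the vacuum level and stretches the $p$ variances by the reciprocal factor, as the diagonal shows.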
### The annihilation and creation operator basis
If we are instead working in the creation and annihilation operator basis, we can use the transformation of the single mode squeezing operator
$$ S(\xi) \left[\begin{matrix}\hat{a}\\\hat{a}^\dagger\end{matrix}\right] = \left[\begin{matrix}\cosh(r)&-e^{i\phi}\sinh(r)\\-e^{-i\phi}\sinh(r)&\cosh(r)\end{matrix}\right] \left[\begin{matrix}\hat{a}\\\hat{a}^\dagger\end{matrix}\right]$$
resulting in
$$\sigma = SS^\dagger = \left[\begin{matrix}\cosh(2r)&-e^{i\phi}\sinh(2r)\\-e^{-i\phi}\sinh(2r)&\cosh(2r)\end{matrix}\right]$$
For multiple Gaussian states with non-zero squeezing, the covariance matrix in this basis simply generalises to
$$\sigma = \text{diag}(S_1S_1^\dagger,\dots,S_NS_N^\dagger)\in\mathbb{C}^{2N\times 2N}$$
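The single-mode relation $\sigma = SS^\dagger$ can be checked numerically (a sketch; `squeeze_symplectic` is a hypothetical helper name):

```python
import numpy as np

def squeeze_symplectic(r, phi=0.0):
    # Bogoliubov matrix of S(xi) acting on (a, a^dagger), xi = r e^{i phi}
    return np.array([
        [np.cosh(r), -np.exp(1j * phi) * np.sinh(r)],
        [-np.exp(-1j * phi) * np.sinh(r), np.cosh(r)],
    ])

r = 0.7
S = squeeze_symplectic(r)
cov = S @ S.conj().T
# diagonal cosh(2r), off-diagonal -sinh(2r) for phi = 0
print(np.allclose(cov, [[np.cosh(2 * r), -np.sinh(2 * r)],
                        [-np.sinh(2 * r), np.cosh(2 * r)]]))  # True
```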
## Introduction to Gaussian boson sampling
<div class="alert alert-info">
“If you need to wait exponential time for \[your single photon sources to emit simultaneously\], then there would seem to be no advantage over classical computation. This is the reason why so far, boson sampling has only been demonstrated with 3-4 photons. When faced with these problems, until recently, all we could do was shrug our shoulders.” - [Scott Aaronson](https://www.scottaaronson.com/blog/?p=1579)
</div>
While [boson sampling](https://en.wikipedia.org/wiki/Boson_sampling) allows the experimental implementation of a quantum sampling problem that is computationally hard classically, one of the main issues it has in experimental setups is **scalability**, due to its dependence on an array of simultaneously emitting single photon sources.
Currently, most physical implementations of boson sampling make use of a process known as [Spontaneous Parametric Down-Conversion](http://en.wikipedia.org/wiki/Spontaneous_parametric_down-conversion) to generate the single photon source inputs. Unfortunately, this method is non-deterministic - as the number of modes in the apparatus increases, the average time required until every photon source emits a simultaneous photon increases *exponentially*.
In order to simulate a *deterministic* single photon source array, several variations on boson sampling have been proposed; the most well known being scattershot boson sampling ([Lund, 2014](https://link.aps.org/doi/10.1103/PhysRevLett.113.100502)). However, a recent boson sampling variation by [Hamilton et al.](https://link.aps.org/doi/10.1103/PhysRevLett.119.170501) negates the need for single photon Fock states altogether, by showing that **incident Gaussian states** - in this case, single mode squeezed states - can produce problems in the same computational complexity class as boson sampling. Even more significantly, this negates the scalability problem with single photon sources, as single mode squeezed states can be easily simultaneously generated experimentally.
Aside from changing the input states from single photon Fock states to Gaussian states, the Gaussian boson sampling scheme appears quite similar to that of boson sampling:
1. $N$ single mode squeezed states $\left|{\xi_i}\right\rangle$, with squeezing parameters $\xi_i=r_ie^{i\phi_i}$, enter an $N$ mode linear interferometer with unitary $U$.
<br>
2. The output of the interferometer is denoted $\left|{\psi'}\right\rangle$. Each output mode is then measured in the Fock basis, $\bigotimes_i n_i\left|{n_i}\middle\rangle\middle\langle{n_i}\right|$.
Without loss of generality, we can absorb the squeezing parameter $\phi$ into the interferometer, and set $\phi=0$ for convenience. The covariance matrix **in the creation and annihilation operator basis** at the output of the interferometer is then given by:
$$\sigma_{out} = \frac{1}{2} \left[ \begin{matrix}U&0\\0&U^*\end{matrix} \right]\sigma_{in} \left[ \begin{matrix}U^\dagger&0\\0&U^T\end{matrix} \right]$$
Using phase space methods, [Hamilton et al.](https://link.aps.org/doi/10.1103/PhysRevLett.119.170501) showed that the probability of measuring a Fock state is given by
$$\left|\left\langle{n_1,n_2,\dots,n_N}\middle|{\psi'}\right\rangle\right|^2 = \frac{\left|\text{Haf}[(U\bigoplus_i\tanh(r_i)U^T)]_{st}\right|^2}{n_1!n_2!\cdots n_N!\sqrt{|\sigma_{out}+I/2|}},$$
i.e. the sampled single photon probability distribution is proportional to the **Hafnian** of a submatrix of $U\bigoplus_i\tanh(r_i)U^T$, dependent upon the output covariance matrix.
<div class="alert alert-success" style="border: 0px; border-left: 3px solid #119a68; color: black; background-color: #daf0e9">
<p style="color: #119a68;">**The Hafnian**</p>
The Hafnian of a matrix is defined by
<br><br>
$$\text{Haf}(A) = \frac{1}{N!2^N}\sum_{\sigma \in S_{2N}}\prod_{i=1}^N A_{\sigma(2i-1)\sigma(2i)}$$
<br>
$S_{2N}$ is the set of all permutations of $2N$ elements. In graph theory, the Hafnian calculates the number of perfect <a href="https://en.wikipedia.org/wiki/Matching_(graph_theory)">matchings</a> in an **arbitrary graph** with adjacency matrix $A$.
<br>
Compare this to the permanent, which calculates the number of perfect matchings on a *bipartite* graph - the Hafnian turns out to be a generalisation of the permanent, with the relationship
$$\begin{align}
\text{Per(A)} = \text{Haf}\left(\left[\begin{matrix}
0&A\\
A^T&0
\end{matrix}\right]\right)
\end{align}$$
As any algorithm that could calculate (or even approximate) the Hafnian could also calculate the permanent - a #P problem - it follows that calculating or approximating the Hafnian must also be a classically hard problem.
</div>
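The permanent–Hafnian relationship can be verified directly on a small matrix. This is a brute-force sketch: both functions enumerate all permutations, so it only works for tiny inputs.

```python
import math
from itertools import permutations
import numpy as np

def hafnian(M):
    # Direct-definition Hafnian; each perfect matching is counted
    # m! * 2^m times by the permutation sum, hence the normalization.
    n = len(M)
    m = n // 2
    total = 0.0
    for p in permutations(range(n)):
        prod = 1.0
        for j in range(m):
            prod *= M[p[2 * j]][p[2 * j + 1]]
        total += prod
    return total / (math.factorial(m) * 2 ** m)

def permanent(A):
    # Permanent as a Leibniz-style sum over column permutations.
    n = len(A)
    return sum(
        math.prod(A[i][p[i]] for i in range(n))
        for p in permutations(range(n))
    )

A = np.array([[1.0, 2.0], [3.0, 4.0]])
# Embed A in the bipartite adjacency block matrix [[0, A], [A^T, 0]]
block = np.block([[np.zeros((2, 2)), A], [A.T, np.zeros((2, 2))]])
print(permanent(A), hafnian(block))  # 10.0 10.0
```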
### Equally squeezed input states
In the case where all the input states are squeezed equally with squeezing factor $\xi=r$ (i.e. so $\phi=0$), we can simplify the denominator into a much nicer form. It can be easily seen that, due to the unitarity of $U$,
$$\left[ \begin{matrix}U&0\\0&U^*\end{matrix} \right] \left[ \begin{matrix}U^\dagger&0\\0&U^T\end{matrix} \right] = \left[ \begin{matrix}UU^\dagger&0\\0&U^*U^T\end{matrix} \right] =I$$
Thus, we have
$$\begin{align}
\sigma_{out} +\frac{1}{2}I &= \sigma_{out} + \frac{1}{2} \left[ \begin{matrix}U&0\\0&U^*\end{matrix} \right] \left[ \begin{matrix}U^\dagger&0\\0&U^T\end{matrix} \right] = \left[ \begin{matrix}U&0\\0&U^*\end{matrix} \right] \frac{1}{2} \left(\sigma_{in}+I\right) \left[ \begin{matrix}U^\dagger&0\\0&U^T\end{matrix} \right]
\end{align}$$
where we have substituted in the expression for $\sigma_{out}$. Taking the determinants of both sides, the two block diagonal matrices containing $U$ are unitary, and thus have determinant 1, resulting in
$$\left|\sigma_{out} +\frac{1}{2}I\right| =\left|\frac{1}{2}\left(\sigma_{in}+I\right)\right|=\left|\frac{1}{2}\left(SS^\dagger+I\right)\right| $$
By expanding out the right hand side, and using various trig identities, it is easy to see that this simply reduces to $\cosh^{2N}(r)$ where $N$ is the number of modes; thus the Gaussian boson sampling problem in the case of equally squeezed input modes reduces to
$$\left|\left\langle{n_1,n_2,\dots,n_N}\middle|{\psi'}\right\rangle\right|^2 = \frac{\left|\text{Haf}[(UU^T\tanh(r))]_{st}\right|^2}{n_1!n_2!\cdots n_N!\cosh^N(r)},$$
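The determinant simplification in the denominator can be checked numerically for a single mode ($N=1$, $\phi=0$), a quick sketch:

```python
import numpy as np

r = 0.8
# S S^dagger for one squeezed mode (phi = 0) in the (a, a^dagger) basis
SSd = np.array([[np.cosh(2 * r), -np.sinh(2 * r)],
                [-np.sinh(2 * r), np.cosh(2 * r)]])
lhs = np.linalg.det(0.5 * (SSd + np.eye(2)))
print(np.isclose(lhs, np.cosh(r) ** 2))  # True: |(SS^dagger + I)/2| = cosh^2(r)
```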
## The Gaussian boson sampling circuit
The multimode linear interferometer can be decomposed into two-mode beamsplitters (`BSgate`) and single-mode phase shifters (`Rgate`) (<a href="https://doi.org/10.1103/physrevlett.73.58">Reck, 1994</a>), allowing for an almost trivial translation into a continuous-variable quantum circuit.
For example, in the case of a 4 mode interferometer, with arbitrary $4\times 4$ unitary $U$, the continuous-variable quantum circuit for Gaussian boson sampling is given by
<img src="https://s3.amazonaws.com/xanadu-img/gaussian_boson_sampling.svg" width=70%/>
In the above,
* the single mode squeeze states all apply identical squeezing $\xi=r$,
* the detectors perform Fock state measurements (i.e. measuring the photon number of each mode),
* the parameters of the beamsplitters and the rotation gates determines the unitary $U$.
For $N$ input modes, we must have a minimum of $N$ columns in the beamsplitter array ([Clements, 2016](https://arxiv.org/abs/1603.08788)).
## Simulating boson sampling in Strawberry Fields
```
import strawberryfields as sf
from strawberryfields.ops import *
from strawberryfields.utils import random_interferometer
```
Strawberry Fields makes this easy; there is an `Interferometer` quantum operation, and a utility function that allows us to generate the matrix representing a random interferometer.
```
U = random_interferometer(4)
```
The lack of Fock states and non-linear operations means we can use the Gaussian backend to simulate Gaussian boson sampling. In this example program, we are using input states with squeezing parameter $\xi=1$, and the randomly chosen interferometer generated above.
```
eng, q = sf.Engine(4)
with eng:
# prepare the input squeezed states
S = Sgate(1)
All(S) | q
# interferometer
Interferometer(U) | q
state = eng.run('gaussian')
```
We can see the decomposed beamsplitters and rotation gates, by calling `eng.print_applied()`:
```
eng.print_applied()
```
<div class="alert alert-success" style="border: 0px; border-left: 3px solid #119a68; color: black; background-color: #daf0e9">
<p style="color: #119a68;">**Available decompositions**</p>
Check out our <a href="https://strawberryfields.readthedocs.io/en/stable/conventions/decompositions.html">documentation</a> to see the available CV decompositions available in Strawberry Fields.
</div>
## Analysis
Let's now verify the Gaussian boson sampling result, by comparing the output Fock state probabilities to the Hafnian, using the relationship
$$\left|\left\langle{n_1,n_2,\dots,n_N}\middle|{\psi'}\right\rangle\right|^2 = \frac{\left|\text{Haf}[(UU^T\tanh(r))]_{st}\right|^2}{n_1!n_2!\cdots n_N!\cosh^N(r)}$$
### Calculating the Hafnian
For the right hand side numerator, we first calculate the submatrix $[(UU^T\tanh(r))]_{st}$:
```
B = (np.dot(U, U.T) * np.tanh(1))
```
In Gaussian boson sampling, we determine the submatrix by taking the rows and columns corresponding to the measured Fock state. For example, to calculate the submatrix in the case of the output measurement $\left|{1,1,0,0}\right\rangle$,
```
B[:,[0,1]][[0,1]]
```
To calculate the Hafnian in Python, we can use the direct definition
$$\text{Haf}(A) = \frac{1}{n!2^n} \sum_{\sigma \in S_{2n}} \prod_{j=1}^n A_{\sigma(2j - 1), \sigma(2j)}$$
Notice that this function counts each term in the definition multiple times, and renormalizes to remove the multiple counts by dividing by $n!\,2^n$. **This function is extremely slow!**
```
from itertools import permutations
from scipy.special import factorial
def Haf(M):
    n = len(M)
    m = int(n / 2)
    haf = 0.0
    for i in permutations(range(n)):
        prod = 1.0
        for j in range(m):
            prod *= M[i[2*j], i[2*j+1]]
        haf += prod
    return haf / (factorial(m) * (2**m))
```
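As a quick sanity check on this implementation (re-stated here with `math.factorial` so the cell stands alone), the Hafnian of a $2\times 2$ symmetric matrix is just its off-diagonal entry, and for a $4\times 4$ symmetric matrix it is the sum over the three perfect matchings:

```python
import numpy as np
from itertools import permutations
from math import factorial

def Haf(M):
    n = len(M)
    m = int(n / 2)
    haf = 0.0
    for i in permutations(range(n)):
        prod = 1.0
        for j in range(m):
            prod *= M[i[2*j], i[2*j+1]]
        haf += prod
    return haf / (factorial(m) * (2**m))

# 2x2 case: Haf(A) = A[0, 1]
A2 = np.array([[0., 5.], [5., 0.]])
print(Haf(A2))  # 5.0

# 4x4 case: Haf(A) = A01*A23 + A02*A13 + A03*A12
A4 = np.arange(16, dtype=float).reshape(4, 4)
A4 = (A4 + A4.T) / 2  # symmetrize
expected = A4[0, 1]*A4[2, 3] + A4[0, 2]*A4[1, 3] + A4[0, 3]*A4[1, 2]
print(np.isclose(Haf(A4), expected))  # True
```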
## Comparing to the SF result
In Strawberry Fields, both Fock and Gaussian states have the method `fock_prob()`, which returns the probability of measuring that particular Fock state.
#### Let's compare the case of measuring at the output state $\left|0,1,0,1\right\rangle$:
```
B = (np.dot(U,U.T) * np.tanh(1))[:, [1,3]][[1,3]]
np.abs(Haf(B))**2 / np.cosh(1)**4
state.fock_prob([0,1,0,1])
```
#### For the measurement result $\left|2,0,0,0\right\rangle$:
```
B = (np.dot(U,U.T) * np.tanh(1))[:, [0,0]][[0,0]]
np.abs(Haf(B))**2 / (2*np.cosh(1)**4)
state.fock_prob([2,0,0,0])
```
#### For the measurement result $\left|1,1,0,0\right\rangle$:
```
B = (np.dot(U,U.T) * np.tanh(1))[:, [0,1]][[0,1]]
np.abs(Haf(B))**2 / np.cosh(1)**4
state.fock_prob([1,1,0,0])
```
#### For the measurement result $\left|1,1,1,1\right\rangle$, this corresponds to the full matrix $B$:
```
B = (np.dot(U,U.T) * np.tanh(1))
np.abs(Haf(B))**2 / np.cosh(1)**4
state.fock_prob([1,1,1,1])
```
#### For the measurement result $\left|0,0,0,0\right\rangle$, this corresponds to a **null** submatrix, which has a Hafnian of 1:
```
1/np.cosh(1)**4
state.fock_prob([0,0,0,0])
```
As you can see, like in the boson sampling tutorial, they agree with almost negligible difference.
<div class="alert alert-success" style="border: 0px; border-left: 3px solid #119a68; color: black; background-color: #daf0e9">
<p style="color: #119a68;">**Exercises**</p>
Repeat this notebook with
<ol>
<li> A Fock backend, such as the NumPy-based 'fock' backend, instead of the Gaussian backend</li>
<li> Different beamsplitter and rotation parameters</li>
<li> Input states with *differing* squeezed values $r_i$. You will need to modify the code to take into account the fact that the output covariance matrix determinant must now be calculated!</li>
</ol>
</div>
# Dynamic factors and coincident indices
Factor models generally try to find a small number of unobserved "factors" that influence a substantial portion of the variation in a larger number of observed variables, and they are related to dimension-reduction techniques such as principal components analysis. Dynamic factor models explicitly model the transition dynamics of the unobserved factors, and so are often applied to time-series data.
Macroeconomic coincident indices are designed to capture the common component of the "business cycle"; such a component is assumed to simultaneously affect many macroeconomic variables. Although the estimation and use of coincident indices (for example the [Index of Coincident Economic Indicators](http://www.newyorkfed.org/research/regional_economy/coincident_summary.html)) pre-dates dynamic factor models, in several influential papers Stock and Watson (1989, 1991) used a dynamic factor model to provide a theoretical foundation for them.
Below, we follow the treatment found in Kim and Nelson (1999), of the Stock and Watson (1991) model, to formulate a dynamic factor model, estimate its parameters via maximum likelihood, and create a coincident index.
## Macroeconomic data
The coincident index is created by considering the comovements in four macroeconomic variables (versions of these variables are available on [FRED](https://research.stlouisfed.org/fred2/); the ID of the series used below is given in parentheses):
- Industrial production (IPMAN)
- Real aggregate income (excluding transfer payments) (W875RX1)
- Manufacturing and trade sales (CMRMTSPL)
- Employees on non-farm payrolls (PAYEMS)
In all cases, the data is at the monthly frequency and has been seasonally adjusted; the time-frame considered below is 1979 - 2014.
```
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
np.set_printoptions(precision=4, suppress=True, linewidth=120)
from pandas_datareader.data import DataReader
# Get the datasets from FRED
start = '1979-01-01'
end = '2014-12-01'
indprod = DataReader('IPMAN', 'fred', start=start, end=end)
income = DataReader('W875RX1', 'fred', start=start, end=end)
sales = DataReader('CMRMTSPL', 'fred', start=start, end=end)
emp = DataReader('PAYEMS', 'fred', start=start, end=end)
# dta = pd.concat((indprod, income, sales, emp), axis=1)
# dta.columns = ['indprod', 'income', 'sales', 'emp']
```
**Note**: in a recent update on FRED (8/12/15) the time series CMRMTSPL was truncated to begin in 1997; this is probably a mistake due to the fact that CMRMTSPL is a spliced series, so the earlier period is from the series HMRMT and the latter period is defined by CMRMT.
This has since (02/11/16) been corrected, however the series could also be constructed by hand from HMRMT and CMRMT, as shown below (process taken from the notes in the Alfred xls file).
```
# HMRMT = DataReader('HMRMT', 'fred', start='1967-01-01', end=end)
# CMRMT = DataReader('CMRMT', 'fred', start='1997-01-01', end=end)
# HMRMT_growth = HMRMT.diff() / HMRMT.shift()
# sales = pd.Series(np.zeros(emp.shape[0]), index=emp.index)
# # Fill in the recent entries (1997 onwards)
# sales[CMRMT.index] = CMRMT
# # Backfill the previous entries (pre 1997)
# idx = sales.loc[:'1997-01-01'].index
# for t in range(len(idx)-1, 0, -1):
# month = idx[t]
# prev_month = idx[t-1]
# sales.loc[prev_month] = sales.loc[month] / (1 + HMRMT_growth.loc[prev_month].values)
dta = pd.concat((indprod, income, sales, emp), axis=1)
dta.columns = ['indprod', 'income', 'sales', 'emp']
dta.loc[:, 'indprod':'emp'].plot(subplots=True, layout=(2, 2), figsize=(15, 6));
```
Stock and Watson (1991) report that for their datasets, they could not reject the null hypothesis of a unit root in each series (so the series are integrated), but they did not find strong evidence that the series were co-integrated.
As a result, they suggest estimating the model using the first differences (of the logs) of the variables, demeaned and standardized.
```
# Create log-differenced series
dta['dln_indprod'] = (np.log(dta.indprod)).diff() * 100
dta['dln_income'] = (np.log(dta.income)).diff() * 100
dta['dln_sales'] = (np.log(dta.sales)).diff() * 100
dta['dln_emp'] = (np.log(dta.emp)).diff() * 100
# De-mean and standardize
dta['std_indprod'] = (dta['dln_indprod'] - dta['dln_indprod'].mean()) / dta['dln_indprod'].std()
dta['std_income'] = (dta['dln_income'] - dta['dln_income'].mean()) / dta['dln_income'].std()
dta['std_sales'] = (dta['dln_sales'] - dta['dln_sales'].mean()) / dta['dln_sales'].std()
dta['std_emp'] = (dta['dln_emp'] - dta['dln_emp'].mean()) / dta['dln_emp'].std()
```
## Dynamic factors
A general dynamic factor model is written as:
$$
\begin{align}
y_t & = \Lambda f_t + B x_t + u_t \\
f_t & = A_1 f_{t-1} + \dots + A_p f_{t-p} + \eta_t \qquad \eta_t \sim N(0, I)\\
u_t & = C_1 u_{t-1} + \dots + C_q u_{t-q} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \Sigma)
\end{align}
$$
where $y_t$ are observed data, $f_t$ are the unobserved factors (evolving as a vector autoregression), $x_t$ are (optional) exogenous variables, and $u_t$ is the error, or "idiosyncratic", process ($u_t$ is also optionally allowed to be autocorrelated). The $\Lambda$ matrix is often referred to as the matrix of "factor loadings". The variance of the factor error term is set to the identity matrix to ensure identification of the unobserved factors.
This model can be cast into state space form, and the unobserved factor estimated via the Kalman filter. The likelihood can be evaluated as a byproduct of the filtering recursions, and maximum likelihood estimation used to estimate the parameters.
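To make the filtering step concrete, here is a toy, purely illustrative scalar version: one observed series, one AR(1) factor, and hypothetical parameter values. This is a sketch of the idea only, not the statsmodels implementation, which handles the general multivariate case.

```python
import numpy as np

# Toy scalar state-space model (hypothetical, for illustration only):
#   y_t = lam * f_t + eps_t,   eps_t ~ N(0, s2_eps)
#   f_t = a * f_{t-1} + eta_t, eta_t ~ N(0, s2_eta)
def kalman_loglike(y, lam=1.0, a=0.8, s2_eps=0.5, s2_eta=1.0):
    f, P = 0.0, s2_eta / (1 - a**2)   # stationary initialization
    ll = 0.0
    for obs in y:
        f_pred, P_pred = a * f, a**2 * P + s2_eta      # predict
        v = obs - lam * f_pred                         # innovation
        F = lam**2 * P_pred + s2_eps                   # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * F) + v**2 / F)
        K = P_pred * lam / F                           # Kalman gain
        f, P = f_pred + K * v, (1 - K * lam) * P_pred  # update
    return ll

# Simulate from the model and check that the likelihood is well-behaved
rng = np.random.default_rng(0)
f_true, ys = 0.0, []
for _ in range(200):
    f_true = 0.8 * f_true + rng.normal()
    ys.append(f_true + rng.normal() * 0.5**0.5)

ll_true = kalman_loglike(np.array(ys), a=0.8)
ll_wrong = kalman_loglike(np.array(ys), a=0.0)
print(ll_true > ll_wrong)  # the true persistence fits better
```

Maximum likelihood estimation then amounts to maximizing this log-likelihood over the parameters; statsmodels does the analogous (matrix-valued) computation inside `fit()`.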
## Model specification
The specific dynamic factor model in this application has 1 unobserved factor which is assumed to follow an AR(2) process. The innovations $\varepsilon_t$ are assumed to be independent (so that $\Sigma$ is a diagonal matrix), and the error term associated with each equation, $u_{i,t}$, is assumed to follow an independent AR(2) process.
Thus the specification considered here is:
$$
\begin{align}
y_{i,t} & = \lambda_i f_t + u_{i,t} \\
u_{i,t} & = c_{i,1} u_{i,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \\
f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\\
\end{align}
$$
where $i$ is one of: `[indprod, income, sales, emp ]`.
This model can be formulated using the `DynamicFactor` model built-in to Statsmodels. In particular, we have the following specification:
- `k_factors = 1` - (there is 1 unobserved factor)
- `factor_order = 2` - (it follows an AR(2) process)
- `error_var = False` - (the errors evolve as independent AR processes rather than jointly as a VAR - note that this is the default option, so it is not specified below)
- `error_order = 2` - (the errors are autocorrelated of order 2: i.e. AR(2) processes)
- `error_cov_type = 'diagonal'` - (the innovations are uncorrelated; this is again the default)
Once the model is created, the parameters can be estimated via maximum likelihood; this is done using the `fit()` method.
**Note**: recall that we have de-meaned and standardized the data; this will be important in interpreting the results that follow.
**Aside**: in their empirical example, Kim and Nelson (1999) actually consider a slightly different model in which the employment variable is allowed to also depend on lagged values of the factor - this model does not fit into the built-in `DynamicFactor` class, but can be accommodated by using a subclass to implement the required new parameters and restrictions - see Appendix 1, below.
## Parameter estimation
Multivariate models can have a relatively large number of parameters, and it may be difficult to escape from local minima to find the maximized likelihood. In an attempt to mitigate this problem, I perform an initial maximization step (from the model-defined starting parameters) using the modified Powell method available in Scipy (see the minimize documentation for more information). The resulting parameters are then used as starting parameters in the standard LBFGS optimization method.
```
# Get the endogenous data
endog = dta.loc['1979-02-01':, 'std_indprod':'std_emp']
# Create the model
mod = sm.tsa.DynamicFactor(endog, k_factors=1, factor_order=2, error_order=2)
initial_res = mod.fit(method='powell', disp=False)
res = mod.fit(initial_res.params, disp=False)
```
## Estimates
Once the model has been estimated, there are two components that we can use for analysis or inference:
- The estimated parameters
- The estimated factor
### Parameters
The estimated parameters can be helpful in understanding the implications of the model, although in models with a larger number of observed variables and / or unobserved factors they can be difficult to interpret.
One reason for this difficulty is due to identification issues between the factor loadings and the unobserved factors. One easy-to-see identification issue is the sign of the loadings and the factors: an equivalent model to the one displayed below would result from reversing the signs of all factor loadings and the unobserved factor.
Here, one of the easy-to-interpret implications in this model is the persistence of the unobserved factor: we find that it exhibits substantial persistence.
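The sign indeterminacy is easy to verify numerically: flipping the signs of both the loadings and the factor path leaves the common component, and hence the likelihood, unchanged. The loadings and factor values below are hypothetical stand-ins, not model output.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([0.9, 0.5, 0.7, 0.8])   # hypothetical factor loadings
f = rng.normal(size=100)               # hypothetical factor path

# (f, lam) and (-f, -lam) produce identical common components
print(np.allclose(np.outer(f, lam), np.outer(-f, -lam)))  # True
```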
```
print(res.summary(separate_params=False))
```
### Estimated factors
While it can be useful to plot the unobserved factors, it is less useful here than one might think for two reasons:
1. The sign-related identification issue described above.
2. Since the data was differenced, the estimated factor explains the variation in the differenced data, not the original data.
It is for these reasons that the coincident index is created (see below).
With these reservations, the unobserved factor is plotted below, along with the NBER indicators for US recessions. It appears that the factor is successful at picking up some degree of business cycle activity.
```
fig, ax = plt.subplots(figsize=(13,3))
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, res.factors.filtered[0], label='Factor')
ax.legend()
# Retrieve and also plot the NBER recession indicators
rec = DataReader('USREC', 'fred', start=start, end=end)
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:-4,0], facecolor='k', alpha=0.1);
```
## Post-estimation
Although here we will be able to interpret the results of the model by constructing the coincident index, there is a useful and generic approach for getting a sense for what is being captured by the estimated factor. By taking the estimated factors as given and regressing each of the observed variables (one at a time) on a constant and each factor, then recording the coefficients of determination ($R^2$ values), we can get a sense of the variables for which each factor explains a substantial portion of the variance and the variables for which it does not.
In models with more variables and more factors, this can sometimes lend interpretation to the factors (for example sometimes one factor will load primarily on real variables and another on nominal variables).
In this model, with only four endogenous variables and one factor, it is easy to digest a simple table of the $R^2$ values, but in larger models it is not. For this reason, a bar plot is often employed; from the plot we can easily see that the factor explains most of the variation in the industrial production index and a large portion of the variation in sales and employment, but it is less helpful in explaining income.
```
res.plot_coefficients_of_determination(figsize=(8,2));
```
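The $R^2$ values behind this plot can be reproduced by hand: regress an observed series on a constant and the factor, and compute $1 - \text{Var(resid)}/\text{Var}(y)$. A toy sketch with simulated data (the loading and noise level are hypothetical, not the model output above):

```python
import numpy as np

rng = np.random.default_rng(1)
factor = rng.normal(size=300)                            # stand-in for the estimated factor
series = 0.9 * factor + rng.normal(scale=0.5, size=300)  # stand-in for one observed variable

# OLS of the series on [1, factor], then R^2 from the residual variance
X = np.column_stack([np.ones_like(factor), factor])
beta, *_ = np.linalg.lstsq(X, series, rcond=None)
resid = series - X @ beta
r2 = 1 - resid.var() / series.var()
print(r2)  # roughly 0.81 / (0.81 + 0.25), i.e. around 0.76
```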
## Coincident Index
As described above, the goal of this model was to create an interpretable series which could be used to understand the current status of the macroeconomy. This is what the coincident index is designed to do. It is constructed below. For readers interested in an explanation of the construction, see Kim and Nelson (1999) or Stock and Watson (1991).
In essence, what is done is to reconstruct the mean of the (differenced) factor. We will compare it to the coincident index published by the Federal Reserve Bank of Philadelphia (USPHCI on FRED).
```
usphci = DataReader('USPHCI', 'fred', start='1979-01-01', end='2014-12-01')['USPHCI']
usphci.plot(figsize=(13,3));
dusphci = usphci.diff()[1:].values
def compute_coincident_index(mod, res):
    # Estimate W(1)
    spec = res.specification
    design = mod.ssm['design']
    transition = mod.ssm['transition']
    ss_kalman_gain = res.filter_results.kalman_gain[:, :, -1]
    k_states = ss_kalman_gain.shape[0]

    W1 = np.linalg.inv(np.eye(k_states) - np.dot(
        np.eye(k_states) - np.dot(ss_kalman_gain, design),
        transition
    )).dot(ss_kalman_gain)[0]

    # Compute the factor mean vector
    factor_mean = np.dot(W1, dta.loc['1972-02-01':, 'dln_indprod':'dln_emp'].mean())

    # Normalize the factors
    factor = res.factors.filtered[0]
    factor *= np.std(usphci.diff()[1:]) / np.std(factor)

    # Compute the coincident index
    coincident_index = np.zeros(mod.nobs + 1)
    # The initial value is arbitrary; here it is set to
    # facilitate comparison
    coincident_index[0] = usphci.iloc[0] * factor_mean / dusphci.mean()
    for t in range(0, mod.nobs):
        coincident_index[t+1] = coincident_index[t] + factor[t] + factor_mean

    # Attach dates
    coincident_index = pd.Series(coincident_index, index=dta.index).iloc[1:]

    # Normalize to use the same base year as USPHCI
    coincident_index *= (usphci.loc['1992-07-01'] / coincident_index.loc['1992-07-01'])

    return coincident_index
```
Below we plot the calculated coincident index along with the US recessions and the comparison coincident index USPHCI.
```
fig, ax = plt.subplots(figsize=(13,3))
# Compute the index
coincident_index = compute_coincident_index(mod, res)
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, coincident_index, label='Coincident index')
ax.plot(usphci.index._mpl_repr(), usphci, label='USPHCI')
ax.legend(loc='lower right')
# Retrieve and also plot the NBER recession indicators
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:-4,0], facecolor='k', alpha=0.1);
```
## Appendix 1: Extending the dynamic factor model
Recall that the previous specification was described by:
$$
\begin{align}
y_{i,t} & = \lambda_i f_t + u_{i,t} \\
u_{i,t} & = c_{i,1} u_{i,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \\
f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\\
\end{align}
$$
Written in state space form, the previous specification of the model had the following observation equation:
$$
\begin{bmatrix}
y_{\text{indprod}, t} \\
y_{\text{income}, t} \\
y_{\text{sales}, t} \\
y_{\text{emp}, t} \\
\end{bmatrix} = \begin{bmatrix}
\lambda_\text{indprod} & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\lambda_\text{income} & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
\lambda_\text{sales} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
\lambda_\text{emp} & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
\end{bmatrix}
\begin{bmatrix}
f_t \\
f_{t-1} \\
u_{\text{indprod}, t} \\
u_{\text{income}, t} \\
u_{\text{sales}, t} \\
u_{\text{emp}, t} \\
u_{\text{indprod}, t-1} \\
u_{\text{income}, t-1} \\
u_{\text{sales}, t-1} \\
u_{\text{emp}, t-1} \\
\end{bmatrix}
$$
and transition equation:
$$
\begin{bmatrix}
f_t \\
f_{t-1} \\
u_{\text{indprod}, t} \\
u_{\text{income}, t} \\
u_{\text{sales}, t} \\
u_{\text{emp}, t} \\
u_{\text{indprod}, t-1} \\
u_{\text{income}, t-1} \\
u_{\text{sales}, t-1} \\
u_{\text{emp}, t-1} \\
\end{bmatrix} = \begin{bmatrix}
a_1 & a_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & c_{\text{indprod}, 1} & 0 & 0 & 0 & c_{\text{indprod}, 2} & 0 & 0 & 0 \\
0 & 0 & 0 & c_{\text{income}, 1} & 0 & 0 & 0 & c_{\text{income}, 2} & 0 & 0 \\
0 & 0 & 0 & 0 & c_{\text{sales}, 1} & 0 & 0 & 0 & c_{\text{sales}, 2} & 0 \\
0 & 0 & 0 & 0 & 0 & c_{\text{emp}, 1} & 0 & 0 & 0 & c_{\text{emp}, 2} \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
\end{bmatrix}
\begin{bmatrix}
f_{t-1} \\
f_{t-2} \\
u_{\text{indprod}, t-1} \\
u_{\text{income}, t-1} \\
u_{\text{sales}, t-1} \\
u_{\text{emp}, t-1} \\
u_{\text{indprod}, t-2} \\
u_{\text{income}, t-2} \\
u_{\text{sales}, t-2} \\
u_{\text{emp}, t-2} \\
\end{bmatrix}
+ R \begin{bmatrix}
\eta_t \\
\varepsilon_{t}
\end{bmatrix}
$$
the `DynamicFactor` model handles setting up the state space representation and, in the `DynamicFactor.update` method, it fills in the fitted parameter values into the appropriate locations.
The extended specification is the same as in the previous example, except that we also want to allow employment to depend on lagged values of the factor. This creates a change to the $y_{\text{emp},t}$ equation. Now we have:
$$
\begin{align}
y_{i,t} & = \lambda_i f_t + u_{i,t} \qquad & i \in \{\text{indprod}, \text{income}, \text{sales} \}\\
y_{i,t} & = \lambda_{i,0} f_t + \lambda_{i,1} f_{t-1} + \lambda_{i,2} f_{t-2} + \lambda_{i,3} f_{t-3} + u_{i,t} \qquad & i = \text{emp} \\
u_{i,t} & = c_{i,1} u_{i,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \\
f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\\
\end{align}
$$
Now, the corresponding observation equation should look like the following:
$$
\begin{bmatrix}
y_{\text{indprod}, t} \\
y_{\text{income}, t} \\
y_{\text{sales}, t} \\
y_{\text{emp}, t} \\
\end{bmatrix} = \begin{bmatrix}
\lambda_\text{indprod} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\lambda_\text{income} & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
\lambda_\text{sales} & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
\lambda_\text{emp,1} & \lambda_\text{emp,2} & \lambda_\text{emp,3} & \lambda_\text{emp,4} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
\end{bmatrix}
\begin{bmatrix}
f_t \\
f_{t-1} \\
f_{t-2} \\
f_{t-3} \\
u_{\text{indprod}, t} \\
u_{\text{income}, t} \\
u_{\text{sales}, t} \\
u_{\text{emp}, t} \\
u_{\text{indprod}, t-1} \\
u_{\text{income}, t-1} \\
u_{\text{sales}, t-1} \\
u_{\text{emp}, t-1} \\
\end{bmatrix}
$$
Notice that we have introduced two new state variables, $f_{t-2}$ and $f_{t-3}$, which means we need to update the transition equation:
$$
\begin{bmatrix}
f_t \\
f_{t-1} \\
f_{t-2} \\
f_{t-3} \\
u_{\text{indprod}, t} \\
u_{\text{income}, t} \\
u_{\text{sales}, t} \\
u_{\text{emp}, t} \\
u_{\text{indprod}, t-1} \\
u_{\text{income}, t-1} \\
u_{\text{sales}, t-1} \\
u_{\text{emp}, t-1} \\
\end{bmatrix} = \begin{bmatrix}
a_1 & a_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & c_{\text{indprod}, 1} & 0 & 0 & 0 & c_{\text{indprod}, 2} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & c_{\text{income}, 1} & 0 & 0 & 0 & c_{\text{income}, 2} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & c_{\text{sales}, 1} & 0 & 0 & 0 & c_{\text{sales}, 2} & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & c_{\text{emp}, 1} & 0 & 0 & 0 & c_{\text{emp}, 2} \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
\end{bmatrix}
\begin{bmatrix}
f_{t-1} \\
f_{t-2} \\
f_{t-3} \\
f_{t-4} \\
u_{\text{indprod}, t-1} \\
u_{\text{income}, t-1} \\
u_{\text{sales}, t-1} \\
u_{\text{emp}, t-1} \\
u_{\text{indprod}, t-2} \\
u_{\text{income}, t-2} \\
u_{\text{sales}, t-2} \\
u_{\text{emp}, t-2} \\
\end{bmatrix}
+ R \begin{bmatrix}
\eta_t \\
\varepsilon_{t}
\end{bmatrix}
$$
This model cannot be handled out-of-the-box by the `DynamicFactor` class, but it can be handled by creating a subclass that alters the state space representation in the appropriate way.
First, notice that if we had set `factor_order = 4`, we would almost have what we wanted. In that case, the last line of the observation equation would be:
$$
\begin{bmatrix}
\vdots \\
y_{\text{emp}, t} \\
\end{bmatrix} = \begin{bmatrix}
\vdots & & & & & & & & & & & \vdots \\
\lambda_\text{emp,1} & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
\end{bmatrix}
\begin{bmatrix}
f_t \\
f_{t-1} \\
f_{t-2} \\
f_{t-3} \\
\vdots
\end{bmatrix}
$$
and the first line of the transition equation would be:
$$
\begin{bmatrix}
f_t \\
\vdots
\end{bmatrix} = \begin{bmatrix}
a_1 & a_2 & a_3 & a_4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\vdots & & & & & & & & & & & \vdots \\
\end{bmatrix}
\begin{bmatrix}
f_{t-1} \\
f_{t-2} \\
f_{t-3} \\
f_{t-4} \\
\vdots
\end{bmatrix}
+ R \begin{bmatrix}
\eta_t \\
\varepsilon_{t}
\end{bmatrix}
$$
Relative to what we want, we have the following differences:
1. In the above situation, the $\lambda_{\text{emp}, j}$ are forced to be zero for $j > 0$, and we want them to be estimated as parameters.
2. We only want the factor to transition according to an AR(2), but under the above situation it is an AR(4).
Our strategy will be to subclass `DynamicFactor`, and let it do most of the work (setting up the state space representation, etc.) where it assumes that `factor_order = 4`. The only things we will actually do in the subclass will be to fix those two issues.
First, here is the full code of the subclass; it is discussed below. It is important to note at the outset that none of the methods defined below could have been omitted. In fact, the methods `__init__`, `start_params`, `param_names`, `transform_params`, `untransform_params`, and `update` form the core of all state space models in Statsmodels, not just the `DynamicFactor` class.
```
from statsmodels.tsa.statespace import tools

class ExtendedDFM(sm.tsa.DynamicFactor):
    def __init__(self, endog, **kwargs):
        # Setup the model as if we had a factor order of 4
        super(ExtendedDFM, self).__init__(
            endog, k_factors=1, factor_order=4, error_order=2,
            **kwargs)

        # Note: `self.parameters` is an ordered dict with the
        # keys corresponding to parameter types, and the values
        # the number of parameters of that type.
        # Add the new parameters
        self.parameters['new_loadings'] = 3

        # Cache a slice for the location of the 4 factor AR
        # parameters (a_1, ..., a_4) in the full parameter vector
        offset = (self.parameters['factor_loadings'] +
                  self.parameters['exog'] +
                  self.parameters['error_cov'])
        self._params_factor_ar = np.s_[offset:offset+2]
        self._params_factor_zero = np.s_[offset+2:offset+4]

    @property
    def start_params(self):
        # Add three new loading parameters to the end of the parameter
        # vector, initialized to zeros (for simplicity; they could
        # be initialized any way you like)
        return np.r_[super(ExtendedDFM, self).start_params, 0, 0, 0]

    @property
    def param_names(self):
        # Add the corresponding names for the new loading parameters
        # (the name can be anything you like)
        return super(ExtendedDFM, self).param_names + [
            'loading.L%d.f1.%s' % (i, self.endog_names[3]) for i in range(1, 4)]

    def transform_params(self, unconstrained):
        # Perform the typical DFM transformation (w/o the new parameters)
        constrained = super(ExtendedDFM, self).transform_params(
            unconstrained[:-3])

        # Redo the factor AR constraint, since we only want an AR(2),
        # and the previous constraint was for an AR(4)
        ar_params = unconstrained[self._params_factor_ar]
        constrained[self._params_factor_ar] = (
            tools.constrain_stationary_univariate(ar_params))

        # Return all the parameters
        return np.r_[constrained, unconstrained[-3:]]

    def untransform_params(self, constrained):
        # Perform the typical DFM untransformation (w/o the new parameters)
        unconstrained = super(ExtendedDFM, self).untransform_params(
            constrained[:-3])

        # Redo the factor AR unconstraint, since we only want an AR(2),
        # and the previous unconstraint was for an AR(4)
        ar_params = constrained[self._params_factor_ar]
        unconstrained[self._params_factor_ar] = (
            tools.unconstrain_stationary_univariate(ar_params))

        # Return all the parameters
        return np.r_[unconstrained, constrained[-3:]]

    def update(self, params, transformed=True, complex_step=False):
        # Perform the transformation, if required
        if not transformed:
            params = self.transform_params(params)
        params[self._params_factor_zero] = 0

        # Now perform the usual DFM update, but exclude our new parameters
        super(ExtendedDFM, self).update(params[:-3], transformed=True, complex_step=complex_step)

        # Finally, set our new parameters in the design matrix
        self.ssm['design', 3, 1:4] = params[-3:]
```
So what did we just do?
#### `__init__`
The important step here was specifying the base dynamic factor model which we were operating with. In particular, as described above, we initialize with `factor_order=4`, even though we will only end up with an AR(2) model for the factor. We also performed some general setup-related tasks.
#### `start_params`
`start_params` are used as initial values in the optimizer. Since we are adding three new parameters, we need to pass those in. If we hadn't done this, the optimizer would use the default starting values, which would be three elements short.
#### `param_names`
`param_names` are used in a variety of places, but especially in the results class. Below we get a full result summary, which is only possible when all the parameters have associated names.
#### `transform_params` and `untransform_params`
The optimizer selects parameter values in an unconstrained way. That's not usually desired (since variances can't be negative, for example), and `transform_params` is used to transform the unconstrained values used by the optimizer to constrained values appropriate to the model. Variance terms are typically squared (to force them to be positive), and AR lag coefficients are often constrained to lead to a stationary model. `untransform_params` is used for the reverse operation (and is important because starting parameters are usually specified in terms of values appropriate to the model, and we need to convert them to parameters appropriate to the optimizer before we can begin the optimization routine).
Even though we don't need to transform or untransform our new parameters (the loadings can in theory take on any values), we still need to modify this function for two reasons:
1. The version in the `DynamicFactor` class is expecting 3 fewer parameters than we have now. At a minimum, we need to handle the three new parameters.
2. The version in the `DynamicFactor` class constrains the factor lag coefficients to be stationary as though it was an AR(4) model. Since we actually have an AR(2) model, we need to re-do the constraint. We also set the last two autoregressive coefficients to be zero here.
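The variance transformation mentioned above can be illustrated in isolation. This is a schematic sketch of the general idea (squaring and its inverse), not the statsmodels code:

```python
import numpy as np

def transform_variance(unconstrained):
    # anything the optimizer proposes maps to a valid, non-negative variance
    return unconstrained ** 2

def untransform_variance(constrained):
    # inverse map, used to convert model-space starting values for the optimizer
    return np.sqrt(constrained)

x = -1.3                # an arbitrary value the optimizer might try
var = transform_variance(x)
print(var >= 0)                                                        # True
print(np.isclose(transform_variance(untransform_variance(var)), var))  # True: round-trips
```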
#### `update`
The most important reason we need to specify a new `update` method is because we have three new parameters that we need to place into the state space formulation. In particular, we let the parent `DynamicFactor.update` method handle placing all the parameters except the three new ones into the state space representation, and then we put the last three in manually.
```
# Create the model
extended_mod = ExtendedDFM(endog)
initial_extended_res = extended_mod.fit(maxiter=1000, disp=False)
extended_res = extended_mod.fit(initial_extended_res.params, method='nm', maxiter=1000)
print(extended_res.summary(separate_params=False))
```
Although this model increases the likelihood, it is not preferred by the AIC and BIC measures which penalize the additional three parameters.
Furthermore, the qualitative results are unchanged, as we can see from the updated $R^2$ chart and the new coincident index, both of which are practically identical to the previous results.
```
extended_res.plot_coefficients_of_determination(figsize=(8,2));
fig, ax = plt.subplots(figsize=(13,3))
# Compute the index
extended_coincident_index = compute_coincident_index(extended_mod, extended_res)
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, coincident_index, '-', linewidth=1, label='Basic model')
ax.plot(dates, extended_coincident_index, '--', linewidth=3, label='Extended model')
ax.plot(usphci.index._mpl_repr(), usphci, label='USPHCI')
ax.legend(loc='lower right')
ax.set(title='Coincident indices, comparison')
# Retrieve and also plot the NBER recession indicators
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:-4,0], facecolor='k', alpha=0.1);
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# AutoML 02: Regression with local compute
In this example we use scikit-learn's [diabetes dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html) to showcase how you can use AutoML for a simple regression problem.
Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.
In this notebook you will see
1. Creating an Experiment using an existing Workspace
2. Instantiating AutoMLConfig
3. Training the Model using local compute
4. Exploring the results
5. Testing the fitted model
## Create Experiment
As part of the setup you have already created a <b>Workspace</b>. For AutoML you will need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments.
```
import logging
import os
import random
from matplotlib import pyplot as plt
from matplotlib.pyplot import imshow
import numpy as np
import pandas as pd
from sklearn import datasets
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
from azureml.train.automl.run import AutoMLRun
ws = Workspace.from_config()
# choose a name for the experiment
experiment_name = 'automl-local-regression'
# project folder
project_folder = './sample_projects/automl-local-regression'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace Name'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in newer pandas versions
pd.DataFrame(data = output, index = ['']).T
```
## Diagnostics
Opt in to diagnostics for a better experience, and for the quality and security of future releases.
```
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
```
### Read Data
```
# load diabetes dataset, a well-known built-in small dataset that comes with scikit-learn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
X, y = load_diabetes(return_X_y = True)
columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
```
## Instantiate Auto ML Config
Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.
|Property|Description|
|-|-|
|**task**|classification or regression|
|**primary_metric**|This is the metric that you want to optimize.<br> Regression supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i><br><i>normalized_root_mean_squared_log_error</i>|
|**max_time_sec**|Time limit in seconds for each iteration|
|**iterations**|Number of iterations. In each iteration Auto ML trains a specific pipeline with the data|
|**n_cross_validations**|Number of cross validation splits|
|**X**|(sparse) array-like, shape = [n_samples, n_features]|
|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers. |
|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|
```
automl_config = AutoMLConfig(task='regression',
max_time_sec = 600,
iterations = 10,
primary_metric = 'spearman_correlation',
n_cross_validations = 5,
debug_log = 'automl.log',
verbosity = logging.INFO,
X = X_train,
y = y_train,
path=project_folder)
```
## Training the Model
You can call the submit method on the experiment object and pass the run configuration. For local runs the execution is synchronous. Depending on the data and the number of iterations this can run for a while.
You will see the currently running iterations printing to the console.
```
local_run = experiment.submit(automl_config, show_output=True)
local_run
```
## Exploring the results
#### Widget for monitoring runs
The widget will sit on "loading" until the first iteration completes, then you will see an auto-updating graph and table. It refreshes once per minute, so you should see the graph update as child runs complete.
NOTE: The widget displays a link at the bottom. This links to a web UI where you can explore the individual run details.
```
from azureml.train.widgets import RunDetails
RunDetails(local_run).show()
```
#### Retrieve All Child Runs
You can also use sdk methods to fetch all the child runs and see individual metrics that we log.
```
children = list(local_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(axis=1)
rundata
```
### Retrieve the Best Model
Below we select the best pipeline from our iterations. The *get_output* method on the run object returns the best run and the fitted model for the last *fit* invocation. There are overloads on *get_output* that allow you to retrieve the best run and fitted model for *any* logged metric or a particular *iteration*.
```
best_run, fitted_model = local_run.get_output()
print(best_run)
print(fitted_model)
```
#### Best Model based on any other metric
Show the run and model that has the smallest `root_mean_squared_error` (which turned out to be the same as the one with largest `spearman_correlation` value):
```
lookup_metric = "root_mean_squared_error"
best_run, fitted_model = local_run.get_output(metric=lookup_metric)
print(best_run)
print(fitted_model)
```
#### Model from a specific iteration
Simply show the run and model from the 3rd iteration:
```
iteration = 3
third_run, third_model = local_run.get_output(iteration = iteration)
print(third_run)
print(third_model)
```
### Testing the Fitted Model
Predict on training and test set, and calculate residual values.
```
y_pred_train = fitted_model.predict(X_train)
y_residual_train = y_train - y_pred_train
y_pred_test = fitted_model.predict(X_test)
y_residual_test = y_test - y_pred_test
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets
from sklearn.metrics import mean_squared_error, r2_score
# set up a multi-plot chart
f, (a0, a1) = plt.subplots(1, 2, gridspec_kw = {'width_ratios':[1, 1], 'wspace':0, 'hspace': 0})
f.suptitle('Regression Residual Values', fontsize = 18)
f.set_figheight(6)
f.set_figwidth(16)
# plot residual values of training set
a0.axis([0, 360, -200, 200])
a0.plot(y_residual_train, 'bo', alpha = 0.5)
a0.plot([-10,360],[0,0], 'r-', lw = 3)
a0.text(16,170,'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_train, y_pred_train))), fontsize = 12)
a0.text(16,140,'R2 score = {0:.2f}'.format(r2_score(y_train, y_pred_train)), fontsize = 12)
a0.set_xlabel('Training samples', fontsize = 12)
a0.set_ylabel('Residual Values', fontsize = 12)
# plot histogram
a0.hist(y_residual_train, orientation = 'horizontal', color = 'b', bins = 10, histtype = 'step');
a0.hist(y_residual_train, orientation = 'horizontal', color = 'b', alpha = 0.2, bins = 10);
# plot residual values of test set
a1.axis([0, 90, -200, 200])
a1.plot(y_residual_test, 'bo', alpha = 0.5)
a1.plot([-10,360],[0,0], 'r-', lw = 3)
a1.text(5,170,'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_test, y_pred_test))), fontsize = 12)
a1.text(5,140,'R2 score = {0:.2f}'.format(r2_score(y_test, y_pred_test)), fontsize = 12)
a1.set_xlabel('Test samples', fontsize = 12)
a1.set_yticklabels([])
# plot histogram
a1.hist(y_residual_test, orientation = 'horizontal', color = 'b', bins = 10, histtype = 'step');
a1.hist(y_residual_test, orientation = 'horizontal', color = 'b', alpha = 0.2, bins = 10);
plt.show()
```
# Linear Regression
https://jakevdp.github.io/PythonDataScienceHandbook/
A useful book that I forgot to add last time.
# Today's plan:
1. How do we compare different solutions to a regression problem?
2. How do we fit the parameters of a linear model?
3. How do we recover nonlinear relationships using a linear model?
4. What do we do if we have many features?
5. The problem of overfitting
```
# Let's start by looking at a simple example
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
plt.rc('font', **{'size':18})
```
Let's generate a dataset. Our data has one feature $x$ and one target variable $y$;
we then add some noise, normally distributed as $N(0, 10)$, to the target:
$$y = f(x) = 3 x + 6 + N(0, 10), \quad x \in [0, 30]$$
```
random = np.random.RandomState(4242)
X = np.linspace(0, 30, 60)
y = X * 3 + 6
y_noisy = y + random.normal(scale=10, size=X.shape)
plt.figure(figsize=(15,5))
plt.plot(X, y, label='Y = 3x + 6', color='tab:red', lw=0.5);
plt.scatter(X, y_noisy, label='Y + $\epsilon$', color='tab:blue', alpha=0.5, s=100)
plt.title('True law (red line) vs Observations (blue points)')
plt.xlabel('X, feature')
plt.ylabel('Y, target')
plt.legend();
```
Regression is the problem of recovering the law (function) $f(x)$ from a set of observations $(x, y)$. Given a new value of $x$ that I have never seen before, can I predict the corresponding value of $y$?
```
plt.figure(figsize=(15,5))
plt.scatter(X, y_noisy, label='Y + $\epsilon$', color='tab:blue', alpha=0.5, s=100)
plt.scatter(40, -5, marker='x', s=200, label='(X=40, Y=?)', color='tab:red')
plt.plot([40,40], [-7, 250], ls='--', color='tab:red');
plt.text(35, 150, 'Y = ???');
plt.xlabel('X, feature')
plt.ylabel('Y, target')
plt.legend(loc=2);
```
The linear regression model proposes fitting a straight line through this cloud of points,
i.e. searching for $f(x)$ in the form $f(x) = ax + b$, which reduces the problem to finding the two
coefficients $a$ and $b$. Two important questions arise here:
1. Suppose we have somehow found two lines $(a_1, b_1)$ and $(a_2, b_2)$.
How do we decide which of the two is better? And what does "better" even mean?
2. How do we find the coefficients `a` and `b`?
## 1. Which line is better?
```
plt.figure(figsize=(20,20))
plot1(plt.subplot(221), 2, 4, 'tab:blue')
plot1(plt.subplot(222), 2.5, 15, 'tab:green')
plot1(plt.subplot(223), 3, 6, 'tab:orange')
axes = plt.subplot(224)
axes.scatter(X, y_noisy, c='tab:red', alpha=0.5, s=100)
y_hat = X * 2 + 4
axes.plot(X, y_hat, color='tab:blue', label='$f_1(x)=2x+4$')
y_hat = X * 2.5 + 15
axes.plot(X, y_hat, color='tab:green', label='$f_2(x)=2.5x+15$')
y_hat = X * 3 + 6
axes.plot(X, y_hat, color='tab:orange', label='$f_3(x)=3x+6$');
axes.legend();
```
It seems that $f_1$ (the blue line) can be ruled out right away, but how do we choose between the remaining two?
The intuitive answer is: compute the prediction error. That is, for every point in the
set $X$ (for which we know the value of $y$) we can use the function $f(x)$
to compute the corresponding $y_{pred}$, and then compare $y$ and $y_{pred}$.
```
plt.figure(figsize=(10,10))
plt.scatter(X, y_noisy, s=100, c='tab:blue', alpha=0.1)
y_hat = X * 3 + 6
plt.plot(X, y_hat, label='$y_{pred} = 3x+6$')
plt.scatter(X[2:12], y_noisy[2:12], s=100, c='tab:blue', label='y')
for _x, _y in zip(X[2:12], y_noisy[2:12]):
plt.plot([_x, _x], [_y, 3*_x+6], c='b')
plt.legend();
```
How do we measure this difference? There are many ways:
$$
MSE(\hat{f}, x) = \frac{1}{N} \sum_{i=1}^{N}\left(\ y_i - \hat{f}(x_i)\ \right) ^ 2
$$
$$
MAE(\hat{f}, x) = \frac{1}{N} \sum_{i=1}^{N}\left|\ y_i - \hat{f}(x_i)\ \right|
$$
$$
RMSLE(\hat{f}, x) = \sqrt{\frac{1}{N} \sum_{i=1}^{N}\left(\ \log(y_i + 1) - \log(\hat{f}(x_i) + 1)\ \right)^2}
$$
$$
MAPE (\hat{f}, x) = \frac{100}{N} \sum_{i=1}^{N}\left| \frac{y_i - \hat{f}(x_i)}{y_i} \right|
$$
and others.
---
**Question 1.** Why not compute the error like this:
$$
ERROR(\hat{f}, x) = \frac{1}{N} \sum_{i=1}^{N}\left(\ y_i - \hat{f}(x_i)\ \right)
$$
---
**Question 2.** How do `MSE`, `MAE`, `RMSLE`, and `MAPE` differ? Is it true that a model that is best according to one metric is always best according to the others?
---
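Both questions can be probed numerically. Here is a small sketch (the helper names `mse_metric` and `mae_metric` are mine, not from the lecture): the signed error from Question 1 can be zero for a terrible model because positive and negative residuals cancel, and for Question 2 two prediction vectors can be ranked differently by MSE and MAE.

```
import numpy as np

def mse_metric(y, y_hat):
    return np.mean((y - y_hat) ** 2)

def mae_metric(y, y_hat):
    return np.mean(np.abs(y - y_hat))

y_true = np.zeros(4)
pred_a = np.array([3.0, 0.0, 0.0, 0.0])   # one large error
pred_b = np.array([1.4, 1.4, 1.4, 1.4])   # many small errors

# MSE punishes the single large error more heavily, so it prefers pred_b...
print(mse_metric(y_true, pred_a), mse_metric(y_true, pred_b))   # 2.25 vs ~1.96
# ...while MAE prefers pred_a
print(mae_metric(y_true, pred_a), mae_metric(y_true, pred_b))   # 0.75 vs 1.4

# Question 1: the signed error of a terrible model can still be 0,
# because positive and negative residuals cancel out
pred_c = np.array([100.0, -100.0, 100.0, -100.0])
print(np.mean(y_true - pred_c))   # 0.0
```

So the answer to Question 2 is no: the model that is best under one metric need not be best under another.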
For now we will stick with MSE. Let's compare our lines using MSE.
```
def mse(y1, y2, prec=2):
return np.round(np.mean((y1 - y2)**2),prec)
def plot2(axes, a, b, color='b', X=X, y=y_noisy):
    axes.plot(X, y, 'r.')
    y_hat = X * a + b
    axes.plot(X, y_hat, color=color, label='y = {}x + {}'.format(a, b))
    axes.set_title('MSE = {:.2f}'.format(mse(y_hat, y)))
    axes.legend()
plt.figure(figsize=(20,12))
plot2(plt.subplot(221), 2.5, 15, 'g')
plot2(plt.subplot(222), 3, 6, 'orange')
plot2(plt.subplot(223), 2, 4, 'b')
```
Clearly, the smaller the MSE, the smaller the prediction error, so we should pick the model with the smallest MSE. In our case that is $f_3(x) = 3x+6$. Great, we have answered the first question, how to pick one line among many; now let's try to answer the second one.
# 2. How do we find the parameters of the line?
Let's fix what we know so far.
1. We have data in the form of a set of pairs $X$ and $y$: $\{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$
2. We want to find a function $\hat{f}(x)$ that minimizes
$$MSE(\hat{f}, x) = \frac{1}{N} \sum_{i=1}^{N}\left(\ y_i - \hat{f}(x_i)\ \right) ^ 2 \rightarrow \text{min}$$
3. We will look for $\hat{f}(x)$ under the assumption that it is a linear function:
$$\hat{f}(x) = ax + b$$
----
Substituting $\hat{f}(x)$ into the expression for MSE, we get:
$$
\frac{1}{N} \sum_{i=1}^{N}\left(\ y_i - ax_i - b\ \right) ^ 2 \rightarrow \text{min}_{a,b}
$$
This can be minimized in at least two ways:
1. Analytically: rewrite the expression in vector form, compute the first derivative, set it to 0, and solve for the parameters.
2. Numerically: compute the partial derivatives with respect to a and b and use gradient descent.
A detailed analytic derivation can be found, for example, here: https://youtu.be/Y_Ac6KiQ1t0?list=PL221E2BBF13BECF6C
(after watching it, it will also become clear where MSE comes from). For what follows, it will be useful to do it head-on, without dwelling too much on the reasons.
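Option 2 (the numerical route) can be sketched in a few lines. This is a minimal batch gradient descent of my own, not the lecture's code; the learning rate and iteration count are ad-hoc choices that happen to work for data on this scale:

```
import numpy as np

def fit_line_gd(x, y, lr=1e-3, n_iter=20000):
    """Minimize the MSE of f(x) = a*x + b by batch gradient descent."""
    a, b = 0.0, 0.0
    n = len(x)
    for _ in range(n_iter):
        resid = y - (a * x + b)                  # current residuals
        a += lr * (2.0 / n) * np.sum(resid * x)  # step along -dMSE/da
        b += lr * (2.0 / n) * np.sum(resid)      # step along -dMSE/db
    return a, b

x = np.linspace(0, 30, 60)
a, b = fit_line_gd(x, 3 * x + 6)   # noise-free data: should recover a ~ 3, b ~ 6
```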
-----
The vector $y$ has dimension $n \times 1$, and so does the vector $x$. We apply the following trick: turn the vector $x$ into a matrix $X$ of size $n \times 2$ whose first column consists entirely of 1s. Then, denoting $\theta = [b, a]$, we obtain the MSE in vector form:
$$
\frac{1}{n}(y - X \theta)^{T}(y - X \theta) \rightarrow min_{\theta}
$$
Taking the derivative with respect to $\theta$ and setting it to 0, we get:
$$
X^T (y - X \theta) = 0
$$
(note that we cannot simply solve $y = X\theta$ directly, since the matrix $X$ is not square and has no inverse; this is exactly why $X^T$ appears here). Rearranging:
$$
X^T y = X^T X \theta
$$
The matrix $X^T X$ is, with rare exceptions (which ones?), invertible, so we finally obtain the expression for $\theta$:
$$
\theta = (X^T X)^{-1} X^T y
$$
Let's now carry out these steps with our data. (The formula giving this expression for $\theta$ is called the Normal equation.)
```
print(X.shape, y.shape)
print('----------')
print('First few values of X: ', np.round(X[:5],2))
print('First few values of Y: ', np.round(y[:5],2))
X_new = np.ones((60, 2))
X_new[:, 1] = X
y_new = y.reshape(-1,1)
print(X_new.shape, y_new.shape)
print('----------')
print('First few values of X:\n', np.round(X_new[:5],2))
print('First few values of Y:\n', np.round(y_new[:5],2))
theta = np.linalg.inv((X_new.T.dot(X_new))).dot(X_new.T).dot(y_new)
print(theta)
```
Thus we recovered the function $f(x) = 3 x + 6$ (which, entirely by coincidence, matches $f_3(x)$).
Great, that was a nice win! What's next?
Next, two questions interest us:
1. What if the original function came from a nonlinear source (for example $y = 3 x^2 + 1$)?
2. What do we do when there is more than one feature? (i.e. the matrix $X$ has size $n \times (m+1)$ rather than $n \times 2$, where $m$ is the number of features)
# 3. What if we need to recover a nonlinear relationship?
```
plt.figure(figsize=(10,10))
x = np.linspace(-3, 5, 60).reshape(-1,1)
y = 3*x**2 + 1 + random.normal(scale=5, size=x.shape)
y_model = 3*x**2 + 1
plt.scatter(x, y, label='$y = 3x^2 + 1 +$ N(0,5)')
plt.plot(x, y_model, label='$y = 3x^2 + 1$')
plt.legend();
```
For this, let's use the linear regression implementation from **sklearn**.
```
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(x, y)
print('y = {} X + {}'.format(np.round(model.coef_[0][0],2), np.round(model.intercept_[0], 2)))
```
Note that we did not add a column of 1s to the matrix X, because the
LinearRegression class has a fit_intercept parameter (True by default).
Let's now see what this looks like.
```
plt.figure(figsize=(20,15))
ax1 = plt.subplot(221)
ax1.scatter(x, y, )
ax1.plot(x, y_model, label=f'True source: $y = 3x^2 + 1$\nMSE={mse(y, y_model)}')
ax1.legend();
y_pred = model.coef_[0][0] * x + model.intercept_[0]
ax2 = plt.subplot(222)
ax2.scatter(x, y,)
ax2.plot(x, y_pred, label='Predicted curve: $y = {} x + {}$\nMSE={}'.format(np.round(model.coef_[0][0],2),
np.round(model.intercept_[0],2),
mse(y, y_pred)), c='r')
ax2.legend();
```
It seems that predicting with a straight line is not the best idea here. So what should we do?
If linear features do not give the desired result, we should add nonlinear ones!
For example, let's look for the parameters $a$ and $b$ of the following function:
$$
f(x) = ax^2 + b
$$
```
x_new = x**2
model.fit(x_new, y)
print('y = {} x^2 + {}'.format(np.round(model.coef_[0][0],2), np.round(model.intercept_[0], 2)))
plt.figure(figsize=(20,15))
ax1 = plt.subplot(221)
ax1.scatter(x, y, )
ax1.plot(x, y_model, label='True source: $y = 3x^2 + 1$\nMSE={}'.format(mse(y, y_model)))
ax1.legend();
y_pred = model.coef_[0][0] * x_new + model.intercept_[0]
ax2 = plt.subplot(222)
ax2.scatter(x, y,)
ax2.plot(x, y_pred, label='Predicted curve: $y = {} x^2 + {}$\nMSE={}'.format(np.round(model.coef_[0][0],2),
np.round(model.intercept_[0],2),
mse(y, y_pred)), c='r')
ax2.legend();
```
A few remarks:
1. The resulting function is even better in terms of MSE than the source (the reason is the noise).
2. The regression is still called linear (it is called linear not with respect to the features X, but with respect to the parameters $\theta$). The regression is linear because the predicted value is a **linear combination of features**; the algorithm has no idea that we squared something or raised it to some other power.
3. How did I know that I should add exactly a quadratic feature? (I didn't; I just guessed. We will see later how to do this properly.)
### 3.1 Exercise: use the Normal equation to fit the parameters a and b
Normal equation:
$$
\theta = (X^T X)^{-1} X^T y
$$
Clarification: the parameters a and b should be fitted for a function of the form $f(x) = ax^2 + b$
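One possible solution sketch (my own, regenerating similar noise-free quadratic data locally rather than reusing the notebook's variables):

```
import numpy as np

x = np.linspace(-3, 5, 60)
y = 3 * x**2 + 1                      # noise-free stand-in for the data above

# Design matrix with a column of ones and a column of x^2, so theta = [b, a]
X = np.column_stack([np.ones_like(x), x**2])
theta = np.linalg.inv(X.T @ X) @ X.T @ y
b, a = theta
print(a, b)   # a ~ 3.0, b ~ 1.0
```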
# 4. What do we do when there is more than one feature?
Great, now we know what to do when we have one feature and one target variable (for example, predicting weight from height, apartment price from floor area, or taxi arrival time from the time of day). But what if there are several factors? Let's look at the Normal equation once more:
$$
\theta_{m\times 1} = (X^T_{m\times n} X_{n\times m})^{-1} X^T_{m\times n} y_{n\times 1}
$$
Having computed $\theta$, how do we make predictions for a new observation $x$?
$$
y = x_{1\times m} \times \theta_{m\times 1}
$$
What if we now have several features (say $m$) instead of one? How do the dimensions change?
The dimension of X becomes $n\times (m+1)$: $n$ rows and $(m+1)$ columns (the dimension of $y$ does not change). Substituting this into the Normal equation, the dimension of $\theta$ becomes $(m+1)\times 1$, and we make predictions in exactly the same way: $y = x \times \theta$, or, expanding $\theta$:
$$
y = \theta_0 + x^{[1]}\theta_1 + x^{[2]}\theta_2 + \ldots + x^{[m]}\theta_m
$$
Here the superscripts denote the feature index (the column number in the matrix $X$), not the observation index and not an arithmetic power.
-----
Great, so we can also build a linear regression with several features. What's next?
Next we need to answer (yet another) two questions:
1. How do we actually decide which features to generate?
2. How do we do this with functions from **sklearn**?
We will answer both questions, but first let's go through an example that demonstrates
the **interpretability** of a linear model.
# Example: predicting real-estate prices.
The **interpretability** of a linear model lies in the fact that **increasing the value of a feature by 1** increases the target variable by the corresponding value of **theta** (that feature's coefficient in the linear model):
$$
f(x_i) = \theta_0 + \theta_1 x_i^{[1]} + \ldots + \theta_j x_i^{[j]} + \ldots + \theta_m x_i^{[m]}
$$
Increase the value of feature $j$ in observation $x_i$:
$$
\bar{x}_i^{[j]} = x_i^{[j]} + 1
$$
The change in the function value is then:
$$
\Delta(f(x)) = f(\bar{x}_i) - f(x_i) = \theta_j
$$
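The claim $\Delta(f(x)) = \theta_j$ is easy to verify numerically. A sketch with made-up data (the names and numbers are mine, not from the lecture):

```
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(100, 3)                       # three made-up features
theta_true = np.array([2.0, -1.0, 0.5])
y = 4.0 + X @ theta_true                   # linear target with intercept 4

# Fit by least squares (prepend a column of ones for the intercept)
Xd = np.column_stack([np.ones(len(X)), X])
theta, *_ = np.linalg.lstsq(Xd, y, rcond=None)

x_i = np.array([1.0, 0.3, 0.2, 0.7])       # some observation (leading 1 for theta_0)
x_bar = x_i.copy()
x_bar[2] += 1                              # increase feature j = 2 by one
delta = x_bar @ theta - x_i @ theta
print(delta, theta[2])                     # both ~ -1.0 (= theta_j)
```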
```
import pandas as pd
from sklearn.metrics import mean_squared_log_error
# the data can be taken from here ---> https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data
# house_data = pd.read_csv('train.csv', index_col=0)
# trunc_data = house_data[['LotArea', '1stFlrSF', 'BedroomAbvGr', 'SalePrice']]
# trunc_data.to_csv('train_house.csv')
house_data = pd.read_csv('train_house.csv', index_col=0)
house_data.head()
X = house_data[['LotArea', '1stFlrSF', 'BedroomAbvGr']].values
y = house_data['SalePrice'].values
model = LinearRegression()
model.fit(X, y)
y_pred = model.predict(X)
print('Linear coefficients: ', list(model.coef_), 'Intercept: ', model.intercept_)
print('RMSLE: ', np.sqrt(mean_squared_log_error(y, y_pred)))
print('MSE: ', mse(y, y_pred))
for y_t, y_p in zip(y[:5], y_pred[:5]):
print(y_t, np.round(y_p, 3), np.round(mean_squared_log_error([y_t], [y_p]), 6))
plt.figure(figsize=(7,7))
plt.scatter(y, y_pred);
plt.plot([0, 600000], [0, 600000], c='r');
plt.text(200000, 500000, 'Overestimated\narea')
plt.text(450000, 350000, 'Underestimated\narea')
plt.xlabel('True value')
plt.ylabel('Predicted value');
```
Back to our questions:
1. How do we actually decide which features to generate?
2. How do we do this with functions from **sklearn**?
## 5. Feature generation.
```
X = np.array([0.76923077, 1.12820513, 1.48717949, 1.84615385, 2.20512821,
2.56410256, 2.92307692, 3.28205128, 3.64102564, 4.]).reshape(-1,1)
y = np.array([9.84030322, 26.33596415, 16.68207941, 12.43191433, 28.76859577,
32.31335979, 35.26001044, 31.73889375, 45.28107096, 46.6252025]).reshape(-1,1)
plt.scatter(X, y);
```
Let's try a simple model with one feature:
$$
f(x) = ax + b
$$
```
lr = LinearRegression()
lr.fit(X, y)
y_pred = lr.predict(X)
plt.scatter(X, y);
plt.plot(X, y_pred);
plt.title('MSE: {}'.format(mse(y, y_pred)));
```
Let's add quadratic features
```
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_squared_error
poly = PolynomialFeatures(degree=2)
X_2 = poly.fit_transform(X)
print(X_2[:3])
lr = LinearRegression()
lr.fit(X_2, y)
y_pred_2 = lr.predict(X_2)
plt.scatter(X, y);
plt.plot(X, y_pred_2);
plt.title('MSE: {}'.format(mse(y, y_pred_2)));
```
Let's add cubic features
```
poly = PolynomialFeatures(degree=3)
X_3 = poly.fit_transform(X)
print(X_3[:3])
lr = LinearRegression()
lr.fit(X_3, y)
y_pred_3 = lr.predict(X_3)
plt.scatter(X, y);
plt.plot(X, y_pred_3);
plt.title('MSE: {}'.format(mse(y, y_pred_3)));
```
We need to go deeper..
```
def plot3(ax, degree):
poly = PolynomialFeatures(degree=degree)
_X = poly.fit_transform(X)
lr = LinearRegression()
lr.fit(_X, y)
y_pred = lr.predict(_X)
ax.scatter(X, y);
ax.plot(X, y_pred, label='MSE={}'.format(mse(y,y_pred)));
ax.set_title('Polynom degree: {}'.format(degree));
ax.legend()
plt.figure(figsize=(30,15))
plot3(plt.subplot(231), 4)
plot3(plt.subplot(232), 5)
plot3(plt.subplot(233), 6)
plot3(plt.subplot(234), 7)
plot3(plt.subplot(235), 8)
plot3(plt.subplot(236), 9)
```
### Moving to a multidimensional nonlinear space
#### How do we make the regression linear when the relationship is nonlinear?
- $\mathbf{y}$ may depend on $\mathbf{x}$ in a not-quite-linear way.
- Move to a new space, $\phi(\mathbf{x})$, where $\phi(\cdot)$ is a nonlinear function of $\mathbf{x}$.
- Our examples only use polynomials, but in general the nonlinear transformation can be anything: exponentials, logarithms, trigonometric functions, and so on.
- Take a linear combination of these nonlinear functions: $$f(\mathbf{x}) = \sum_{j=1}^k w_j \phi_j(\mathbf{x}).$$
- Take some basis of functions (for example, the quadratic basis)
$$\boldsymbol{\phi} = [1, x, x^2].$$
- Our function then takes the form
$$f(\mathbf{x}_i) = \sum_{j=1}^k w_j \phi_{j}(\mathbf{x}_i).$$
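The recipe above (choose a basis $\phi$, build the design matrix, then run ordinary least squares on the transformed features) fits in a few lines of NumPy. A sketch assuming the quadratic basis $[1, x, x^2]$:

```
import numpy as np

def design_matrix(x, basis):
    # Columns are the basis functions evaluated at each sample
    return np.column_stack([phi(x) for phi in basis])

basis = [np.ones_like, lambda x: x, lambda x: x**2]   # phi = [1, x, x^2]
x = np.linspace(-3, 5, 40)
y = 3 * x**2 + 1                                      # noise-free target

Phi = design_matrix(x, basis)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(np.round(w, 6))   # ~ [1, 0, 3]  ->  f(x) = 1 + 0*x + 3*x^2
```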
Well then, does that mean a degree-9 polynomial is the best we can do here? Or maybe not...
```
a = 5
b = 10
n_points = 40
x_min = 0.5
x_max = 4
x = np.linspace(x_min, x_max, n_points)[:, np.newaxis]
completely_random_number = 33
rs = np.random.RandomState(completely_random_number)
noise = rs.normal(0, 5, (n_points, 1))
y = a + b * x + noise
idx = np.arange(3,40,4)
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
plt.scatter(x,y, s=80, c ='tab:blue', edgecolors='k', linewidths=0.3);
plt.scatter(x[idx],y[idx], s=80, c='tab:red');
plt.subplot(1,2,2)
plt.scatter(x[idx],y[idx], s=80, c ='tab:red', edgecolors='k', linewidths=0.3);
x_train = x[idx]
y_train = y[idx]
lr_linear = LinearRegression(fit_intercept=True)
lr_linear.fit(x_train, y_train)
y_linear = lr_linear.predict(x_train)
# Cubic
cubic = PolynomialFeatures(degree=3)
x_cubic = cubic.fit_transform(x_train)
lr_3 = LinearRegression(fit_intercept=False)
lr_3.fit(x_cubic, y_train)
y_cubic = lr_3.predict(x_cubic)
# 9'th fit
poly = PolynomialFeatures(degree=9)
x_poly = poly.fit_transform(x_train)
lr_9 = LinearRegression(fit_intercept=False)
lr_9.fit(x_poly, y_train)
y_poly = lr_9.predict(x_poly)
xx = np.linspace(0.75,4,50).reshape(-1,1)
xx_poly = poly.fit_transform(xx)
yy_poly = lr_9.predict(xx_poly)
# PREDICTION ON WHOLE DATA
# linear prediction
y_pred_linear = lr_linear.predict(x)
# cubic prediction
x_cubic_test = cubic.transform(x)
y_pred_cubic = lr_3.predict(x_cubic_test)
# poly 9 prediction
x_poly_test = poly.transform(x)
y_pred_poly = lr_9.predict(x_poly_test)
def plot4(ax, x, y, y_regression, test_idx=None):
ax.scatter(x,y, s=80, c ='tab:red', edgecolors='k', linewidths=0.3, label='Test');
ax.plot(x,y_regression);
if test_idx is not None:
ax.scatter(x[test_idx], y[test_idx], s=80, c ='tab:blue',
edgecolors='k',
linewidths=0.3,
label ='Train');
ax.legend()
ax.set_title('MSE = {}'.format(np.round(mse(y, y_regression), 2)));
# PLOT PICTURES
plt.figure(figsize=(24,12))
plot4(plt.subplot(231), x_train,y_train,y_linear)
plot4(plt.subplot(232), x_train,y_train,y_cubic)
plot4(plt.subplot(233), x_train,y_train,y_poly)
plot4(plt.subplot(234), x,y,y_pred_linear, test_idx=idx)
plot4(plt.subplot(235), x,y,y_pred_cubic, test_idx=idx)
plot4(plt.subplot(236), x[3:],y[3:],y_pred_poly[3:], test_idx=idx-3)
print('FIRST ROW is the TRAIN data set, SECOND ROW is the WHOLE data set')
```
#### Question: Why does the behavior of the function in the last column differ between the TRAIN and TEST data? (regions of increase, decrease, and curvature)
**Answer:**
```
mse_train = []
mse_test = []
for degree in range(1,10):
idx_train = [3, 7, 11, 15, 19, 23, 27, 31, 35, 39]
    idx_test = [0, 1, 2, 4, 5, 6, 8, 9, 10, 12, 13, 14,
                16, 17, 18, 20, 21, 22, 24, 25, 26, 28,
                29, 30, 32, 33, 34, 36, 37, 38]  # fixed: 37 was missing and 39 appeared in both splits
x_train, x_test = x[idx_train], x[idx_test]
y_train, y_test = y[idx_train], y[idx_test]
poly = PolynomialFeatures(degree=degree)
lr = LinearRegression(fit_intercept=True)
x_train = poly.fit_transform(x_train)
x_test = poly.transform(x_test)
lr.fit(x_train, y_train)
y_pred_train = lr.predict(x_train)
y_pred_test = lr.predict(x_test)
mse_train.append(mse(y_train, y_pred_train))
mse_test.append(mse(y_test, y_pred_test))
plt.figure(figsize=(15,10))
plt.plot(list(range(1,6)), mse_train[:5], label='Train error')
plt.plot(list(range(1,6)), mse_test[:5], label='Test error')
plt.legend();
```

1. http://scott.fortmann-roe.com/docs/BiasVariance.html
2. http://www.inf.ed.ac.uk/teaching/courses/mlsc/Notes/Lecture4/BiasVariance.pdf
```
import numpy as np
from pandas import Series, DataFrame
import pandas as pd
from sklearn import preprocessing, tree
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split, KFold  # sklearn.cross_validation was removed in modern scikit-learn
from sklearn.neighbors import KNeighborsClassifier
df=pd.read_json('../01_Preprocessing/First.json').sort_index()
df.head(2)
def mydist(x, y):
return np.sum((x-y)**2)
def jaccard(a, b):
intersection = float(len(set(a) & set(b)))
union = float(len(set(a) | set(b)))
return 1.0 - (intersection/union)
# http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.DistanceMetric.html
dist=['braycurtis','canberra','chebyshev','cityblock','correlation','cosine','euclidean','dice','hamming','jaccard','kulsinski','matching','rogerstanimoto','russellrao','sokalsneath','yule']
algorithm=['ball_tree', 'kd_tree', 'brute']
len(dist)
```
## On country (only MS)
```
df.columns
oldDf=df.copy()
df=df[['countryCoded','degreeCoded','engCoded', 'fieldGroup','fund','gpaBachelors','gre', 'highLevelBachUni', 'paper']]
df=df[df.degreeCoded==0]
del df['degreeCoded']
bestAvg=[]
for alg in algorithm:
for dis in dist:
        k_fold = KFold(n_splits=5)
        scores = []
        try:
            clf = KNeighborsClassifier(n_neighbors=3, weights='distance', algorithm=alg, metric=dis)
        except Exception as err:
            # print(alg, dis, 'err')
            continue
        for train_indices, test_indices in k_fold.split(df):
xtr = df.iloc[train_indices,(df.columns != 'countryCoded')]
ytr = df.iloc[train_indices]['countryCoded']
xte = df.iloc[test_indices, (df.columns != 'countryCoded')]
yte = df.iloc[test_indices]['countryCoded']
clf.fit(xtr, ytr)
ypred = clf.predict(xte)
acc=accuracy_score(list(yte),list(ypred))
scores.append(acc*100)
print(alg,dis,np.average(scores))
bestAvg.append(np.average(scores))
print('>>>>>>>Best: ',np.max(bestAvg))
```
## On Fund (only MS)
```
bestAvg=[]
for alg in algorithm:
for dis in dist:
        k_fold = KFold(n_splits=5)
        scores = []
        try:
            clf = KNeighborsClassifier(n_neighbors=3, weights='distance', algorithm=alg, metric=dis)
        except Exception as err:
            continue
        for train_indices, test_indices in k_fold.split(df):
xtr = df.iloc[train_indices, (df.columns != 'fund')]
ytr = df.iloc[train_indices]['fund']
xte = df.iloc[test_indices, (df.columns != 'fund')]
yte = df.iloc[test_indices]['fund']
clf.fit(xtr, ytr)
ypred = clf.predict(xte)
acc=accuracy_score(list(yte),list(ypred))
score=acc*100
scores.append(score)
if (len(bestAvg)>1) :
if(score > np.max(bestAvg)) :
bestClf=clf
bestAvg.append(np.average(scores))
print (alg,dis,np.average(scores))
print('>>>>>>>Best: ',np.max(bestAvg))
```
### Best : ('brute', 'cityblock', 76.894261294261298)
```
me=[1,2,0,2.5,False,False,1.5]
n=bestClf.kneighbors([me])
n
for i in n[1]:
print(xtr.iloc[i])
```
# Koopman Training and Validation for 2D Tail-Actuated Robotic Fish
This notebook uses experimental measurements from a 2D tail-actuated robotic fish to train an approximate Koopman operator. Starting from the initial conditions of each experiment, the data-driven model is then used to predict the system forward in time and compared against the real experimental measurements. All fitness plots are generated at the end.
## Import Data
```
%%capture
# suppress this cell's shell output
!git clone https://github.com/giorgosmamakoukas/DataSet.git # Import data from user location
!mv DataSet/* ./ # Move 'DataSet' folder to main directory
```
## Import User Functions
```
# This file includes all user-defined functions
from math import atan, copysign, pi, sqrt, sin, cos
from numpy import empty, sign, dot, zeros
from scipy import io, linalg

def Psi_k(s, u): # Creates a vector of basis functions using states s and control u
    x, y, psi, v_x, v_y, omega = s # Store states in local variables
    if v_x == 0:
        # atan(v_y/v_x) is undefined here (0/0 gives NaN, and v_y != 0 would divide
        # by zero), so use the limiting value of atan instead; the four psi terms
        # below all carry a factor of v_x or v_x**2 and vanish in this case
        atanvXvY = 0.0 if v_y == 0 else copysign(pi / 2, v_y)
        psi37 = 0
        psi40 = 0
        psi52 = 0
        psi56 = 0
    else:
        atanvXvY = atan(v_y / v_x)
        psi37 = v_x * pow(v_y, 2) * omega / sqrt(pow(v_x, 2) + pow(v_y, 2))
        psi40 = pow(v_x, 2) * v_y * omega / sqrt(pow(v_x, 2) + pow(v_y, 2)) * atanvXvY
        psi52 = pow(v_x, 2) * v_y * omega / sqrt(pow(v_x, 2) + pow(v_y, 2))
        psi56 = v_x * pow(v_y, 2) * omega * atanvXvY / sqrt(pow(v_x, 2) + pow(v_y, 2))
Psi = empty([62,1]); # declare memory to store psi vector
# System States
Psi[0,0] = x;
Psi[1,0] = y;
Psi[2,0] = psi;
Psi[3,0] = v_x;
Psi[4,0] = v_y;
Psi[5,0] = omega;
# f(t): terms that appear in dynamics
Psi[6,0] = v_x * cos(psi) - v_y * sin(psi);
Psi[7,0] = v_x * sin(psi) + v_y * cos(psi);
Psi[8,0] = v_y * omega;
Psi[9,0] = pow(v_x,2);
Psi[10,0] = pow(v_y,2);
Psi[11,0] = v_x * omega;
Psi[12,0] = v_x * v_y;
Psi[13,0] = sign(omega) * pow(omega,2);
# df(t)/dt: terms that appear in derivative of dynamics
Psi[14,0] = v_y * omega * cos(psi);
Psi[15,0] = pow(v_x,2) * cos(psi);
Psi[16,0] = pow(v_y,2) * cos(psi);
Psi[17,0] = v_x * omega * sin(psi);
Psi[18,0] = v_x * v_y * sin(psi);
Psi[19,0] = v_y * omega * sin(psi);
Psi[20,0] = pow(v_x,2) * sin(psi);
Psi[21,0] = pow(v_y,2) * sin(psi);
Psi[22,0] = v_x * omega * cos(psi);
Psi[23,0] = v_x * v_y * cos(psi);
Psi[24,0] = v_x * pow(omega,2);
Psi[25,0] = v_x * v_y * omega;
Psi[26,0] = v_x * pow(v_y,2);
Psi[27,0] = v_y * sign(omega) * pow(omega,2);
Psi[28,0] = pow(v_x,3);
Psi[29,0] = v_y * pow(omega,2);
Psi[30,0] = v_x * omega * sqrt(pow(v_x,2) + pow(v_y,2));
Psi[31,0] = v_y * omega * sqrt(pow(v_x,2) + pow(v_y,2)) * atanvXvY;
Psi[32,0] = pow(v_x,2) * v_y;
Psi[33,0] = v_x * sign(omega) * pow(omega,2);
Psi[34,0] = pow(v_y,3);
Psi[35,0] = pow(v_x,3) * atanvXvY;
Psi[36,0] = v_x * pow(v_y,2) * atanvXvY;
Psi[37,0] = psi37;
Psi[38,0] = pow(v_x,2) * v_y * pow(atanvXvY,2);
Psi[39,0] = pow(v_y,3) * pow(atanvXvY,2);
Psi[40,0] = psi40;
Psi[41,0] = pow(v_y,2) * omega;
Psi[42,0] = v_x * v_y * sqrt(pow(v_x,2) + pow(v_y,2));
Psi[43,0] = pow(v_y,2) * sqrt(pow(v_x,2) + pow(v_y,2)) * atanvXvY;
Psi[44,0] = pow(v_x,2) * omega;
Psi[45,0] = pow(v_x,2) * sqrt(pow(v_x,2) + pow(v_y,2)) * atanvXvY;
Psi[46,0] = v_x * v_y * sign(omega) * omega;
Psi[47,0] = pow(omega, 3);
Psi[48,0] = v_y * omega * sqrt(pow(v_x,2) + pow(v_y,2));
Psi[49,0] = pow(v_x,3);
Psi[50,0] = v_x * pow(v_y,2);
Psi[51,0] = pow(v_x,2) * v_y * atanvXvY;
Psi[52,0] = psi52;
Psi[53,0] = v_x * omega * sqrt(pow(v_x,2) + pow(v_y,2)) * atanvXvY;
Psi[54,0] = pow(v_x,3) * pow(atanvXvY,2);
Psi[55,0] = v_x * pow(v_y,2) * pow(atanvXvY,2);
Psi[56,0] = psi56;
Psi[57,0] = pow(v_y, 3) * atanvXvY;
Psi[58,0] = v_x * pow(omega,2);
Psi[59,0] = v_y * sign(omega) * pow(omega,2);
# add control inputs
Psi[60,0] = u[0];
Psi[61,0] = u[1];
return Psi
def A_and_G(s_1, s_2, u): # Uses measurements s(t_k) & s(t_{k+1}) to calculate A and G
A = dot(Psi_k(s_2, u), Psi_k(s_1, u).transpose());
G = dot(Psi_k(s_1, u), Psi_k(s_1, u).transpose());
return A, G
def TrainKoopman(): # Train an approximate Koopman operator
######## 1. IMPORT DATA ########
mat = io.loadmat('InterpolatedData_200Hz.mat', squeeze_me=True)
positions = mat['Lengths'] - 1 # subtract 1 to convert MATLAB indices to python
x = mat['x_int_list']
y = mat['y_int_list']
psi = mat['psi_int_list']
v_x = mat['v1_int_list']
v_y = mat['v2_int_list']
omega = mat['omega_int_list']
u1 = mat['u1_list']
u2 = mat['u2_list']
######## 2. INITIALIZE A and G matrices
A = zeros((62, 62)) # 62 is the size of the Ψ basis functions
G = zeros((62, 62))
######## 3. TRAINING KOOPMAN ########
for i in range(x.size-1):
# print('{:.2f} % completed'.format(i/x.size*100))
        if i in positions:
            continue # skip the measurement pair that straddles two trials
# Create pair of state measurements
s0 = [x[i], y[i], psi[i], v_x[i], v_y[i], omega[i]]
sn = [x[i+1], y[i+1], psi[i+1], v_x[i+1], v_y[i+1], omega[i+1]]
Atemp, Gtemp = A_and_G(s0,sn,[u1[i],u2[i]])
A = A+Atemp;
G = G+Gtemp;
    Koopman_d = dot(A, linalg.pinv(G)) # pseudo-inverse (scipy.linalg.pinv2 was removed in SciPy 1.9)
# Koopman_d = dot(A,numpy.linalg.pinv(G))
# io.savemat('SavedData.mat', {'A' : A, 'G': G, 'Kd': Koopman_d}) # save variables to Matlab file
return Koopman_d
```
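The training loop above is extended dynamic mode decomposition (EDMD): it accumulates A = Σ Ψ(s_{k+1})Ψ(s_k)ᵀ and G = Σ Ψ(s_k)Ψ(s_k)ᵀ over all measurement pairs and forms the Koopman estimate K = A·G⁺. A minimal sketch of the same computation on a toy linear system, where the basis is just the state itself and the true operator is known (all names here are illustrative, not part of the robot code):

```python
import numpy as np

# Toy linear system s_{k+1} = K_true @ s_k. With the identity basis
# Psi(s) = s, EDMD should recover K_true exactly.
K_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])

rng = np.random.default_rng(0)
A = np.zeros((2, 2))
G = np.zeros((2, 2))
for _ in range(50):
    s0 = rng.standard_normal(2)   # state at t_k
    s1 = K_true @ s0              # state at t_{k+1}
    A += np.outer(s1, s0)         # accumulate Psi(s1) Psi(s0)^T
    G += np.outer(s0, s0)         # accumulate Psi(s0) Psi(s0)^T

K_est = A @ np.linalg.pinv(G)     # least-squares Koopman estimate
print(np.allclose(K_est, K_true)) # → True
```

With the identity basis and enough independent samples, G is full rank and the least-squares estimate recovers the true transition matrix, which is a useful sanity check before moving to the 62-dimensional basis.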
## Train Koopman & Test Fitness
```
# This file trains and tests the accuracy of approximate Koopman operator
######## 0. IMPORT PYTHON FUNCTIONS ########
import matplotlib.pyplot as plt
from numpy import arange, insert, linspace
######## 1. IMPORT EXPERIMENTAL DATA ########
mat = io.loadmat('InterpolatedData_200Hz.mat', squeeze_me=True)
positions = mat['Lengths'] - 1 # subtract 1 to convert MATLAB indices to python
# positions includes indices with last measurement of each experiment
x = mat['x_int_list']
y = mat['y_int_list']
psi = mat['psi_int_list']
v_x = mat['v1_int_list']
v_y = mat['v2_int_list']
omega = mat['omega_int_list']
u1 = mat['u1_list']
u2 = mat['u2_list']
positions = insert(positions, 0, -1) # insert -1 as index that precedes the 1st experiment
######## 2. PREDICT DATA USING TRAINED KOOPMAN ########
Kd = TrainKoopman() # Train Koopman
for exp_i in range(0, positions.size - 1): # for each experiment (positions carries one extra leading entry)
indx = positions[exp_i]+1 # beginning index of each trial
Psi_predicted = empty((positions[exp_i+1]-(indx), 62))
s0 = [x[indx], y[indx], psi[indx], v_x[indx], v_y[indx], omega[indx]]
Psi_predicted[0,:] = Psi_k(s0, [u1[indx], u2[indx]]).transpose() # Initialize with same initial conditions as experiment
for j in range(0, positions[exp_i+1]-1-(indx)):
Psi_predicted[j+1, :] = dot(Kd,Psi_predicted[j, :])
######## 3. PLOT EXPERIMENTAL VS PREDICTED DATA ########
    ylabels = ['x (m)', 'y (m)', 'ψ (rad)', r'$\mathregular{v_x (m/s)}$', r'$\mathregular{v_y (m/s)}$', 'ω (rad/s)']
exp_data = [x, y, psi, v_x, v_y, omega]
time = linspace(0, 1./200*(j+2), j+2) # create time vector
fig = plt.figure()
for states_i in range(6):
        plt.subplot(2, 3, states_i + 1) # 2 rows, 3 columns
plt.plot(time, Psi_predicted[:, states_i])
plt.plot(time, exp_data[states_i][indx:positions[exp_i+1]])
plt.ylabel(ylabels[states_i])
plt.gca().legend(('Predicted','Experimental'))
Amp_values = [15, 20, 25, 30]
Bias_values = [-20, -30, -40, -50, 0, 20, 30, 40, 50]
titles = 'Amp: ' + str(Amp_values[(exp_i)//18]) + ' Bias: ' + str(Bias_values[(exp_i % 18) //2])
fig.suptitle(titles)
plt.show(block=False)
```
```
import pandas as pd
import numpy as np
from datascience import *
# Table.interactive()
import matplotlib
# from ipywidgets import interact, Dropdown
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('fivethirtyeight')
```
# Project 2: Topic
## Table of Contents
<a href='#section 0'>Background Knowledge: Topic</a>
1. <a href='#section 1'> The Data Science Life Cycle</a>
a. <a href='#subsection 1a'>Formulating a question or problem</a>
b. <a href='#subsection 1b'>Acquiring and cleaning data</a>
c. <a href='#subsection 1c'>Conducting exploratory data analysis</a>
d. <a href='#subsection 1d'>Using prediction and inference to draw conclusions</a>
<br><br>
### Background Knowledge <a id='section 0'></a>
Anecdote / example that is applicable to their everyday.
Where do have they seen this topic before? What can catch their interests?
Add relative image if available: <#img src="..." width = 700/>
# The Data Science Life Cycle <a id='section 1'></a>
## Formulating a question or problem <a id='subsection 1a'></a>
It is important to ask questions that will be informative and that will avoid misleading results. There are many different questions we could ask about Covid-19, for example, many researchers use data to predict the outcomes based on intervention techniques such as social distancing.
<div class="alert alert-warning">
<b>Question:</b> Take some time to formulate questions you have about this **TOPIC** and the data you would need to answer them. In addition, add the link to an article you found interesting, with a description of why it interested you.
</div>
You can find [resources](...)(**ADD LINK**) here to choose from.
Your questions: *here*
Data you would need: *here*
Article: *link*
## Acquiring and cleaning data <a id='subsection 1b'></a>
We'll be looking at the Data from (...). You can find the raw data [here](...)(**ADD LINK**). We've cleaned up the datasets a bit, and we will be investigating the (**ADD DESCRIPTION OF DATA**).
The following table, `...`, contains the (**DESCRIPTION**).
**ADD CODE BOOK**
```
# data = Table().read_table("...csv")
# data.show(10)
# In this cell, we are RELABELING the columns ...
# BASIC CLEANING WE MAY WANT THEM TO DO
```
<div class="alert alert-warning">
<b>Question:</b> It's important to evaluate our data source. What do you know about the source? What motivations do they have for collecting this data? What data is missing?
</div>
*Insert answer*
<div class="alert alert-warning">
<b>Question:</b> Do you see any missing (nan) values? Why might they be there?
</div>
*Insert answer here*
<div class="alert alert-warning">
<b>Question:</b> We want to learn more about the dataset. First, how many total rows are in this table? What does each row represent?
</div>
```
total_rows = ...
```
*Insert answer here*
## Conducting exploratory data analysis <a id='subsection 1c'></a>
Visualizations help us to understand what the dataset is telling us. **OVERVIEW OF WHAT CHARTS WE WILL REVIEW / WHAT QUESTION WE ARE WORKING TOWARD**.
### Part 1: One Branch of analysis (Understanding ratios / patterns in the data / grouping / filtering)
<div class="alert alert-warning">
<b>Question:</b> ...
</div>
<div class="alert alert-warning">
<b>Question:</b> Next, visualize ...
</div>
<div class="alert alert-warning">
<b>Question:</b> Compare ...
</div>
<div class="alert alert-warning">
<b>Question:</b> Now make another bar chart...
</div>
```
...
```
<div class="alert alert-warning">
<b>Question:</b> What are some possible reasons for the disparities between charts? Hint: Think about ...
</div>
*Insert answer here.*
### Part 2: Other Data / Second Branch of analysis (Understanding ratios / patterns in the data / grouping / filtering)
Is there additional data we need to understand or answer our question? Are we starting another branch of analysis, or focusing on a different column or topic than in the last section?
```
# possible other data
# other_data = Table().read_table("...csv")
# other_data.show(10)
```
<div class="alert alert-warning">
<b>Question:</b> Grouping
</div>
<div class="alert alert-warning">
<b>Question:</b> Compare
</div>
<div class="alert alert-warning">
<b>Question:</b> Adding to existing tables
</div>
<div class="alert alert-warning">
<b>Question:</b> Compare & visualize
</div>
<div class="alert alert-warning">
<b>Question:</b> What differences do you see from the visualizations? What do they imply for the broader world?
</div>
*Insert answer here.*
## Using prediction and inference to draw conclusions <a id='subsection 1a'></a>
Now that we have some experience making these visualizations, let's go back to **BACKGROUND INFO / PERSONAL EXPERIENCES / KNOWLEDGE**. We know that... From the previous section, we also know that we need to take into account ...
Now we will read in two tables, Covid by State and Population by State, in order to look at the percentage of cases and their growth.
```
# possible other data
# other_data = Table().read_table("...csv")
# other_data.show(10)
```
#### MAPPING / MORE COMPLICATED VISUAL / SUMMARIZING VISUAL TO PULL TOGETHER CONCEPTS THROUGHOUT THE WHOLE PROJECT
#### WHAT NARRATIVE DO WE WANT TO END ON? FINAL POINT FOR THEIR OWN PRESENTATIONS
Look at the VISUAL (...) and try to explain using your knowledge and other sources. Tell a story. (Presentation)
Tell us what you learned FROM THE PROJECT.
### BRING BACK ETHICS & CONTEXT
Tell us something interesting about this data
Source: ....
Notebook Authors: Alleanna Clark, Ashley Quiterio, Karla Palos Castellanos
# Solving Equations
## A Simple Linear Equation in One Variable
\begin{equation}x + 16 = -25\end{equation}
\begin{equation}x + 16 - 16 = -25 - 16\end{equation}
\begin{equation}x = -25 - 16\end{equation}
\begin{equation}x = -41\end{equation}
```
x = -41 # verify the solution of the equation
x + 16 == -25
```
## Equations with Coefficients
\begin{equation}3x - 2 = 10 \end{equation}
\begin{equation}3x - 2 + 2 = 10 + 2 \end{equation}
\begin{equation}3x = 12 \end{equation}
\begin{equation}x = 4\end{equation}
```
x = 4 # substitute x = 4
3 * x - 2 == 10
```
## Equations with Fractional Coefficients
\begin{equation}\frac{x}{3} + 1 = 16 \end{equation}
\begin{equation}\frac{x}{3} = 15 \end{equation}
\begin{equation}\frac{3}{1} \cdot \frac{x}{3} = 15 \cdot 3 \end{equation}
\begin{equation}x = 45 \end{equation}
```
x = 45
x/3 + 1 == 16
```
## An Example Requiring Combining Like Terms
\begin{equation}3x + 2 = 5x - 1 \end{equation}
\begin{equation}3x + 3 = 5x \end{equation}
\begin{equation}3 = 2x \end{equation}
\begin{equation}\frac{3}{2} = x \end{equation}
\begin{equation}x = \frac{3}{2} \end{equation}
\begin{equation}x = 1\frac{1}{2} \end{equation}
```
x = 1.5
3*x + 2 == 5*x -1
```
## Practice with One-Variable Equations
\begin{equation}\textbf{4(x + 2)} + \textbf{3(x - 2)} = 16 \end{equation}
\begin{equation}4x + 8 + 3x - 6 = 16 \end{equation}
\begin{equation}7x + 2 = 16 \end{equation}
\begin{equation}7x = 14 \end{equation}
\begin{equation}\frac{7x}{7} = \frac{14}{7} \end{equation}
\begin{equation}x = 2 \end{equation}
```
x = 2
4 * (x + 2) + 3 * (x - 2) == 16
```
# Linear Equations
## Example
\begin{equation}2y + 3 = 3x - 1 \end{equation}
\begin{equation}2y + 4 = 3x \end{equation}
\begin{equation}2y = 3x - 4 \end{equation}
\begin{equation}y = \frac{3x - 4}{2} \end{equation}
```
import numpy as np
from tabulate import tabulate
x = np.array(range(-10, 11)) # 21 data points from -10 to 10
y = (3 * x - 4) / 2 # the corresponding function values
print(tabulate(np.column_stack((x,y)), headers=['x', 'y']))
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "last_expr"
# %matplotlib inline
from matplotlib import pyplot as plt
plt.plot(x, y, color="grey", marker = "o")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.show()
```
## Intercepts
```
plt.plot(x, y, color="grey")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.axhline() # draw the coordinate axes
plt.axvline()
plt.show()
```
- The x-intercept
\begin{equation}0 = \frac{3x - 4}{2} \end{equation}
\begin{equation}\frac{3x - 4}{2} = 0 \end{equation}
\begin{equation}3x - 4 = 0 \end{equation}
\begin{equation}3x = 4 \end{equation}
\begin{equation}x = \frac{4}{3} \end{equation}
\begin{equation}x = 1\frac{1}{3} \end{equation}
- The y-intercept
\begin{equation}y = \frac{3\cdot0 - 4}{2} \end{equation}
\begin{equation}y = \frac{-4}{2} \end{equation}
\begin{equation}y = -2 \end{equation}
```
plt.plot(x, y, color="grey")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.axhline()
plt.axvline()
plt.annotate('x',(1.333, 0)) # mark the intercept points
plt.annotate('y',(0,-2))
plt.show()
```
> The usefulness of the intercepts is clear: two points determine a line, so connecting the intercepts lets us draw the graph of the function.
## Slope
\begin{equation}slope = \frac{\Delta{y}}{\Delta{x}} \end{equation}
\begin{equation}m = \frac{y_{2} - y_{1}}{x_{2} - x_{1}} \end{equation}
\begin{equation}m = \frac{7 - -2}{6 - 0} \end{equation}
\begin{equation}m = \frac{7 + 2}{6 - 0} \end{equation}
\begin{equation}m = 1.5 \end{equation}
```
plt.plot(x, y, color="grey")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.axhline()
plt.axvline()
m = 1.5
xInt = 4 / 3
yInt = -2
mx = [0, xInt]
my = [yInt, yInt + m * xInt]
plt.plot(mx, my, color='red', lw=5) # highlight in red
plt.show()
plt.grid() # zoomed-in view
plt.axhline()
plt.axvline()
m = 1.5
xInt = 4 / 3
yInt = -2
mx = [0, xInt]
my = [yInt, yInt + m * xInt]
plt.plot(mx, my, color='red', lw=5)
plt.show()
```
### Slope-Intercept Form of a Line
\begin{equation}y = mx + b \end{equation}
\begin{equation}y = \frac{3x - 4}{2} \end{equation}
\begin{equation}y = 1\frac{1}{2}x + -2 \end{equation}
```
m = 1.5
yInt = -2
x = np.array(range(-10, 11))
y2 = m * x + yInt # slope-intercept form
plt.plot(x, y2, color="grey")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.axhline()
plt.axvline()
plt.annotate('y', (0, yInt))
plt.show()
```
# Systems of Linear Equations
> What a solution means: the point where the lines intersect
\begin{equation}x + y = 16 \end{equation}
\begin{equation}10x + 25y = 250 \end{equation}
```
l1p1 = [16, 0] # line 1, point 1
l1p2 = [0, 16] # line 1, point 2
l2p1 = [25,0] # line 2, point 1
l2p2 = [0,10] # line 2, point 2
plt.plot(l1p1,l1p2, color='blue')
plt.plot(l2p1, l2p2, color="orange")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.show()
```
### Solving a System of Linear Equations (Elimination)
\begin{equation}x + y = 16 \end{equation}
\begin{equation}10x + 25y = 250 \end{equation}
\begin{equation}-10(x + y) = -10(16) \end{equation}
\begin{equation}10x + 25y = 250 \end{equation}
\begin{equation}-10x + -10y = -160 \end{equation}
\begin{equation}10x + 25y = 250 \end{equation}
\begin{equation}15y = 90 \end{equation}
\begin{equation}y = \frac{90}{15} \end{equation}
\begin{equation}y = 6 \end{equation}
\begin{equation}x + 6 = 16 \end{equation}
\begin{equation}x = 10 \end{equation}
```
x = 10
y = 6
print ((x + y == 16) & ((10 * x) + (25 * y) == 250))
```
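The elimination result can be cross-checked by writing the system in matrix form and using NumPy's solver (a sketch; the matrix layout follows the two equations above):

```python
import numpy as np

# x + y = 16 and 10x + 25y = 250, written as M @ [x, y] = b
M = np.array([[1.0, 1.0],
              [10.0, 25.0]])
b = np.array([16.0, 250.0])
x, y = np.linalg.solve(M, b)      # x ≈ 10, y ≈ 6
print(abs(x - 10) < 1e-9 and abs(y - 6) < 1e-9)  # → True
```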
# Exponents, Roots, and Logarithms
## Exponents
\begin{equation}2^{2} = 2 \cdot 2 = 4\end{equation}
\begin{equation}2^{3} = 2 \cdot 2 \cdot 2 = 8\end{equation}
```
x = 5**3
print(x)
```
## Roots
\begin{equation}?^{2} = 9 \end{equation}
\begin{equation}\sqrt{9} = 3 \end{equation}
\begin{equation}\sqrt[3]{64} = 4 \end{equation}
```
import math
x = math.sqrt(9) # square root
print (x)
cr = round(64 ** (1. / 3)) # cube root
print(cr)
```
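The `round` call above is needed because fractional powers are computed in floating point and land slightly off the exact value; a quick sketch of why:

```python
import math

cube = 64 ** (1.0 / 3)        # floating-point result lands just below 4
print(round(cube))            # → 4
print(math.isclose(cube, 4))  # → True
```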
### Roots as Fractional Exponents
\begin{equation} 8^{\frac{1}{3}} = \sqrt[3]{8} = 2 \end{equation}
\begin{equation} 9^{\frac{1}{2}} = \sqrt{9} = 3 \end{equation}
```
print (9**0.5)
print (math.sqrt(9))
```
## Logarithms
> A logarithm is the inverse of exponentiation
\begin{equation}4^{?} = 16 \end{equation}
\begin{equation}log_{4}(16) = 2 \end{equation}
```
x = math.log(16, 4)
print(x)
```
### Base-10 Logarithms
\begin{equation}log(64) = 1.8061 \end{equation}
### Natural Logarithms
\begin{equation}log_{e}(64) = ln(64) = 4.1589 \end{equation}
```
print(math.log10(64))
print (math.log(64))
```
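The different bases are linked by the change-of-base identity log_b(x) = ln(x) / ln(b), which is easy to verify numerically:

```python
import math

lhs = math.log(16, 4)             # log base 4 of 16
rhs = math.log(16) / math.log(4)  # change of base via natural logs
print(math.isclose(lhs, rhs))     # → True
print(math.isclose(math.log10(64), math.log(64) / math.log(10)))  # → True
```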
## Operations with Powers (Combining Like Terms)
\begin{equation}2y = 2x^{4} ( \frac{x^{2} + 2x^{2}}{x^{3}} ) \end{equation}
\begin{equation}2y = 2x^{4} ( \frac{3x^{2}}{x^{3}} ) \end{equation}
\begin{equation}2y = 2x^{4} ( 3x^{-1} ) \end{equation}
\begin{equation}2y = 6x^{3} \end{equation}
\begin{equation}y = 3x^{3} \end{equation}
```
x = np.array(range(-10, 11))
y3 = 3 * x ** 3
print(tabulate(np.column_stack((x, y3)), headers=['x', 'y']))
plt.plot(x, y3, color="magenta") # y3 is a curve
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.axhline()
plt.axvline()
plt.show()
```
### An Example of Exponential Growth
\begin{equation}y = 2^{x} \end{equation}
```
y4 = 2.0**x
plt.plot(x, y4, color="magenta")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.axhline()
plt.axvline()
plt.show()
```
### Compound Interest
> Deposit 100 at 5% annual interest; what is the balance after 20 years, with compounding?
\begin{equation}y1 = 100 + (100 \cdot 0.05) \end{equation}
\begin{equation}y1 = 100 \cdot 1.05 \end{equation}
\begin{equation}y2 = 100 \cdot 1.05 \cdot 1.05 \end{equation}
\begin{equation}y2 = 100 \cdot 1.05^{2} \end{equation}
\begin{equation}y20 = 100 \cdot 1.05^{20} \end{equation}
```
year = np.array(range(1, 21)) # years
balance = 100 * (1.05 ** year) # balance
plt.plot(year, balance, color="green")
plt.xlabel('Year')
plt.ylabel('Balance')
plt.show()
```
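The closed form 100·1.05²⁰ and the year-by-year iteration agree, which is a quick way to sanity-check the compounding formula (a sketch):

```python
balance = 100.0
for _ in range(20):          # apply 5% interest once per year
    balance *= 1.05

closed_form = 100 * 1.05 ** 20
print(round(balance, 2))                  # → 265.33
print(abs(balance - closed_form) < 1e-9)  # → True
```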
# Polynomials
\begin{equation}12x^{3} + 2x - 16 \end{equation}
Three terms:
- 12x<sup>3</sup>
- 2x
- -16
- Two coefficients (12 and 2) and one constant, -16
- The variable x
- The exponent <sup>3</sup>
## Standard Form
Arranged by increasing exponent of x (ascending powers):
\begin{equation}3x + 4xy^{2} - 3 + x^{3} \end{equation}
Highest-exponent term first (descending powers):
\begin{equation}x^{3} + 4xy^{2} + 3x - 3 \end{equation}
## Simplifying Polynomials
\begin{equation}x^{3} + 2x^{3} - 3x - x + 8 - 3 \end{equation}
\begin{equation}3x^{3} - 4x + 5 \end{equation}
```
from random import randint
x = randint(1,100) # verify the simplification with a random value
(x**3 + 2*x**3 - 3*x - x + 8 - 3) == (3*x**3 - 4*x + 5)
```
## Adding Polynomials
\begin{equation}(3x^{3} - 4x + 5) + (2x^{3} + 3x^{2} - 2x + 2) \end{equation}
\begin{equation}3x^{3} + 2x^{3} + 3x^{2} - 4x -2x + 5 + 2 \end{equation}
\begin{equation}5x^{3} + 3x^{2} - 6x + 7 \end{equation}
```
x = randint(1,100)
(3*x**3 - 4*x + 5) + (2*x**3 + 3*x**2 - 2*x + 2) == 5*x**3 + 3*x**2 - 6*x + 7
```
## Subtracting Polynomials
\begin{equation}(2x^{2} - 4x + 5) - (x^{2} - 2x + 2) \end{equation}
\begin{equation}(2x^{2} - 4x + 5) + (-x^{2} + 2x - 2) \end{equation}
\begin{equation}2x^{2} + -x^{2} + -4x + 2x + 5 + -2 \end{equation}
\begin{equation}x^{2} - 2x + 3 \end{equation}
```
from random import randint
x = randint(1,100)
(2*x**2 - 4*x + 5) - (x**2 - 2*x + 2) == x**2 - 2*x + 3
```
## Multiplying Polynomials
1. Multiply each term of the first polynomial by the second polynomial
2. Combine like terms in the result
\begin{equation}(x^{4} + 2)(2x^{2} + 3x - 3) \end{equation}
\begin{equation}2x^{6} + 3x^{5} - 3x^{4} + 4x^{2} + 6x - 6 \end{equation}
```
x = randint(1,100)
(x**4 + 2)*(2*x**2 + 3*x - 3) == 2*x**6 + 3*x**5 - 3*x**4 + 4*x**2 + 6*x - 6
```
## Dividing Polynomials
### A Simple Example
\begin{equation}(4x + 6x^{2}) \div 2x \end{equation}
\begin{equation}\frac{4x + 6x^{2}}{2x} \end{equation}
\begin{equation}\frac{4x}{2x} + \frac{6x^{2}}{2x}\end{equation}
\begin{equation}2 + 3x\end{equation}
```
x = randint(1,100)
(4*x + 6*x**2) / (2*x) == 2 + 3*x
```
### Long Division
\begin{equation}(x^{2} + 2x - 3) \div (x - 2) \end{equation}
\begin{equation} x - 2 |\overline{x^{2} + 2x - 3} \end{equation}
\begin{equation} \;\;\;\;x \end{equation}
\begin{equation}x - 2 |\overline{x^{2} + 2x - 3} \end{equation}
\begin{equation} \;x^{2} -2x \end{equation}
\begin{equation} \;\;\;\;x \end{equation}
\begin{equation}x - 2 |\overline{x^{2} + 2x - 3} \end{equation}
\begin{equation}- (x^{2} -2x) \end{equation}
\begin{equation}\;\;\;\;\;\overline{\;\;\;\;\;\;\;\;\;\;4x -3} \end{equation}
\begin{equation} \;\;\;\;\;\;\;\;x + 4 \end{equation}
\begin{equation}x - 2 |\overline{x^{2} + 2x - 3} \end{equation}
\begin{equation}- (x^{2} -2x) \end{equation}
\begin{equation}\;\;\;\;\;\overline{\;\;\;\;\;\;\;\;\;\;4x -3} \end{equation}
\begin{equation}- (\;\;\;\;\;\;\;\;\;\;\;\;4x -8) \end{equation}
\begin{equation}\;\;\;\;\;\overline{\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;5} \end{equation}
\begin{equation}x + 4 + \frac{5}{x-2} \end{equation}
```
x = randint(3,100)
(x**2 + 2*x -3)/(x-2) == x + 4 + (5/(x-2))
```
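NumPy can perform the same long division: `np.polydiv` takes coefficient lists (highest power first) and returns the quotient and remainder, matching the hand computation above:

```python
import numpy as np

# (x^2 + 2x - 3) ÷ (x - 2); coefficients are listed highest power first
quotient, remainder = np.polydiv([1, 2, -3], [1, -2])
print(quotient)   # quotient coefficients for x + 4
print(remainder)  # remainder 5
```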
# Factors
16 can be expressed as:
- 1 x 16
- 2 x 8
- 4 x 4
So 1, 2, 4, and 8 are factors of 16.
## Expressing a Polynomial as a Product of Polynomials
\begin{equation}-6x^{2}y^{3} \end{equation}
\begin{equation}(2xy^{2})(-3xy) \end{equation}
Another example:
\begin{equation}(x + 2)(2x^{2} - 3y + 2) = 2x^{3} + 4x^{2} - 3xy + 2x - 6y + 4 \end{equation}
So **x+2** and **2x<sup>2</sup> - 3y + 2** are both factors of **2x<sup>3</sup> + 4x<sup>2</sup> - 3xy + 2x - 6y + 4**.
## Greatest Common Factor
| 16 | 24 |
|--------|--------|
| 1 x 16 | 1 x 24 |
| 2 x 8 | 2 x 12 |
| 2 x **8** | 3 x **8** |
| 4 x 4 | 4 x 6 |
8 is the greatest common factor of 16 and 24.
\begin{equation}15x^{2}y\;\;\;\;\;\;\;\;9xy^{3}\end{equation}
What is the greatest common factor of these two monomials?
## Greatest Common Factor
First look at the coefficients: both contain a factor of **3**.
- 3 x 5 = 15
- 3 x 3 = 9
Next look at the ***x*** terms: x<sup>2</sup> and x.
Finally look at the ***y*** terms: y and y<sup>3</sup>.
The greatest common factor is:
\begin{equation}3xy\end{equation}
## Greatest Common Factor
So the greatest common factor always consists of:
- the greatest common divisor of the coefficients
- the minimum exponent of each variable
Verify by polynomial division:
\begin{equation}\frac{15x^{2}y}{3xy}\;\;\;\;\;\;\;\;\frac{9xy^{3}}{3xy}\end{equation}
\begin{equation}3xy(5x) = 15x^{2}y\end{equation}
\begin{equation}3xy(3y^{2}) = 9xy^{3}\end{equation}
```
x = randint(1,100)
y = randint(1,100)
print((3*x*y)*(5*x) == 15*x**2*y)
print((3*x*y)*(3*y**2) == 9*x*y**3)
```
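The recipe behind this — the gcd of the coefficients together with the minimum exponent of each variable — can be spelled out directly (a sketch; the variable names are illustrative):

```python
from math import gcd

coeff = gcd(15, 9)   # gcd of the coefficients
x_exp = min(2, 1)    # smallest power of x across 15x^2y and 9xy^3
y_exp = min(1, 3)    # smallest power of y
print(coeff, x_exp, y_exp)  # → 3 1 1, i.e. the GCF is 3xy
```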
## Factoring Out the Greatest Common Divisor of the Coefficients
\begin{equation}6x + 15y \end{equation}
\begin{equation}6x + 15y = 3(2x) + 3(5y) \end{equation}
\begin{equation}6x + 15y = 3(2x) + 3(5y) = \mathbf{3(2x + 5y)} \end{equation}
```
x = randint(1,100)
y = randint(1,100)
(6*x + 15*y) == (3*(2*x) + 3*(5*y)) == (3*(2*x + 5*y))
```
## Factoring Out the Greatest Common Factor
\begin{equation}15x^{2}y + 9xy^{3}\end{equation}
\begin{equation}3xy(5x) = 15x^{2}y\end{equation}
\begin{equation}3xy(3y^{2}) = 9xy^{3}\end{equation}
\begin{equation}15x^{2}y + 9xy^{3} = \mathbf{3xy(5x + 3y^{2})}\end{equation}
```
x = randint(1,100)
y = randint(1,100)
(15*x**2*y + 9*x*y**3) == (3*x*y*(5*x + 3*y**2))
```
## Factoring with the Difference of Squares
\begin{equation}x^{2} - 9\end{equation}
\begin{equation}x^{2} - 3^{2}\end{equation}
\begin{equation}(x - 3)(x + 3)\end{equation}
```
x = randint(1,100)
(x**2 - 9) == (x - 3)*(x + 3)
```
## Factoring Perfect Squares
\begin{equation}x^{2} + 10x + 25\end{equation}
\begin{equation}(x + 5)(x + 5)\end{equation}
\begin{equation}(x + 5)^{2}\end{equation}
In general:
\begin{equation}(a + b)^{2} = a^{2} + b^{2}+ 2ab \end{equation}
```
a = randint(1,100)
b = randint(1,100)
a**2 + b**2 + (2*a*b) == (a + b)**2
```
# Quadratic Equations
\begin{equation}y = 2(x - 1)(x + 2)\end{equation}
\begin{equation}y = 2x^{2} + 2x - 4\end{equation}
```
x = np.array(range(-9, 9))
y = 2 * x **2 + 2 * x - 4
plt.plot(x, y, color="grey") # plot the quadratic curve (a parabola)
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.axhline()
plt.axvline()
plt.show()
```
> What shape will this one be?
\begin{equation}y = -2x^{2} + 6x + 7\end{equation}
```
x = np.array(range(-8, 12))
y = -2 * x ** 2 + 6 * x + 7
plt.plot(x, y, color="grey")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.axhline()
plt.axvline()
plt.show()
```
## The Vertex of a Parabola
\begin{equation}y = ax^{2} + bx + c\end{equation}
***a***, ***b***, and ***c*** are coefficients.
The resulting parabola has a vertex, which is either its highest or its lowest point.
```
def plot_parabola(a, b, c):
    vx = (-1*b)/(2*a) # vertex x-coordinate
vy = a*vx**2 + b*vx + c
    minx = int(vx - 10) # plotting range
maxx = int(vx + 11)
x = np.array(range(minx, maxx))
y = a * x ** 2 + b * x + c
miny = y.min()
maxy = y.max()
    plt.plot(x, y, color="grey") # plot the curve
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.axhline()
plt.axvline()
    sx = [vx, vx] # draw the axis of symmetry
sy = [miny, maxy]
plt.plot(sx, sy, color='magenta')
    plt.scatter(vx,vy, color="red") # mark the vertex
plot_parabola(2, 2, -4)
plt.show()
plot_parabola(-2, 3, 5)
plt.show()
```
## The x-Intercepts of a Parabola (Solutions of the Quadratic Equation)
\begin{equation}y = 2(x - 1)(x + 2)\end{equation}
\begin{equation}2(x - 1)(x + 2) = 0\end{equation}
\begin{equation}x = 1\end{equation}
\begin{equation}x = -2\end{equation}
```
# plot the curve
plot_parabola(2, 2, -4)
# mark the intersection points
x1 = -2
x2 = 1
plt.scatter([x1,x2],[0,0], color="green")
plt.annotate('x1',(x1, 0))
plt.annotate('x2',(x2, 0))
plt.show()
```
## The Quadratic Formula
\begin{equation}ax^{2} + bx + c = 0\end{equation}
\begin{equation}x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}\end{equation}
```
def plot_parabola_from_formula (a, b, c):
    plot_parabola(a, b, c) # plot the curve
x1 = (-b + (b*b - 4*a*c)**0.5)/(2 * a)
x2 = (-b - (b*b - 4*a*c)**0.5)/(2 * a)
    plt.scatter([x1, x2], [0, 0], color="green") # mark the solutions
plt.annotate('x1', (x1, 0))
plt.annotate('x2', (x2, 0))
plt.show()
plot_parabola_from_formula (2, -16, 2)
```
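The formula can be cross-checked against NumPy's polynomial root finder; for 2x² − 16x + 2 both approaches must agree, and by Vieta's formulas the product of the roots is c/a = 1:

```python
import numpy as np

a, b, c = 2, -16, 2
disc = b * b - 4 * a * c              # discriminant, positive here
x1 = (-b + disc ** 0.5) / (2 * a)
x2 = (-b - disc ** 0.5) / (2 * a)

roots = np.roots([a, b, c])           # eigenvalue-based root finder
print(np.allclose(sorted([x1, x2]), sorted(roots.real)))  # → True
print(round(x1 * x2, 9))              # → 1.0 (Vieta: product of roots = c/a)
```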
# Functions
\begin{equation}f(x) = x^{2} + 2\end{equation}
\begin{equation}f(3) = 11\end{equation}
```
def f(x):
return x**2 + 2
f(3)
x = np.array(range(-100, 101))
plt.xlabel('x')
plt.ylabel('f(x)')
plt.grid()
plt.plot(x, f(x), color='purple')
plt.show()
```
## The Domain of a Function
\begin{equation}f(x) = x + 1, \{x \in \rm I\!R\}\end{equation}
\begin{equation}g(x) = (\frac{12}{2x})^{2}, \{x \in \rm I\!R\;\;|\;\; x \ne 0 \}\end{equation}
Simplified form:
\begin{equation}g(x) = (\frac{12}{2x})^{2},\;\; x \ne 0\end{equation}
```
def g(x):
if x != 0:
return (12/(2*x))**2
x = range(-100, 101)
y = [g(a) for a in x]
print(g(0.1))
plt.xlabel('x')
plt.ylabel('g(x)')
plt.grid()
plt.plot(x, y, color='purple')
# mark the excluded point; if values close to 0 were plotted, the function's shape near the point would become invisible
plt.plot(0, g(1), color='purple', marker='o', markerfacecolor='w', markersize=8)
plt.show()
```
\begin{equation}h(x) = 2\sqrt{x}, \{x \in \rm I\!R\;\;|\;\; x \ge 0 \}\end{equation}
```
def h(x):
if x >= 0:
return 2 * np.sqrt(x)
x = range(-100, 101)
y = [h(a) for a in x]
plt.xlabel('x')
plt.ylabel('h(x)')
plt.grid()
plt.plot(x, y, color='purple')
# mark the boundary point
plt.plot(0, h(0), color='purple', marker='o', markerfacecolor='purple', markersize=8)
plt.show()
```
\begin{equation}j(x) = x + 2,\;\; x \ge 0 \text{ and } x \le 5\end{equation}
\begin{equation}\{x \in \rm I\!R\;\;|\;\; 0 \le x \le 5 \}\end{equation}
```
def j(x):
if x >= 0 and x <= 5:
return x + 2
x = range(-100, 101)
y = [j(a) for a in x]
plt.xlabel('x')
plt.ylabel('j(x)')
plt.grid()
plt.plot(x, y, color='purple')
# the two boundary points
plt.plot(0, j(0), color='purple', marker='o', markerfacecolor='purple', markersize=8)
plt.plot(5, j(5), color='purple', marker='o', markerfacecolor='purple', markersize=8)
plt.show()
```
### Step Functions
\begin{equation}
k(x) = \begin{cases}
0, & \text{if } x = 0, \\
1, & \text{if } x = 100
\end{cases}
\end{equation}
```
def k(x):
if x == 0:
return 0
elif x == 100:
return 1
x = range(-100, 101)
y = [k(a) for a in x]
plt.xlabel('x')
plt.ylabel('k(x)')
plt.grid()
plt.scatter(x, y, color='purple')
plt.show()
```
### The Range of a Function
\begin{equation}p(x) = x^{2} + 1\end{equation}
\begin{equation}\{p(x) \in \rm I\!R\;\;|\;\; p(x) \ge 1 \}\end{equation}
```
def p(x):
return x**2 + 1
x = np.array(range(-100, 101))
plt.xlabel('x')
plt.ylabel('p(x)')
plt.grid()
plt.plot(x, p(x), color='purple')
plt.show()
```
# Introduction to Seaborn
***
We got a good glimpse of the data. But that's the thing with data science: the more involved you get, the harder it is to stop exploring.
Now we want to **analyze** the data in order to extract some insights. We can use the Seaborn library for that.
We can use Seaborn to do both **univariate and multivariate analysis**. How? We will see soon.
## So what is Seaborn? (1/2)
***
Seaborn is a Python visualization library based on matplotlib.
It provides a high-level interface for drawing attractive statistical graphics.
Some of the features that seaborn offers are :
* Several built-in themes for styling matplotlib graphics
* Tools for choosing color palettes to make beautiful plots that reveal patterns in your data
* Functions for visualizing univariate and bivariate distributions or for comparing them between subsets of data
## So what is Seaborn? (2/2)
***
* Tools that fit and visualize linear regression models for different kinds of independent and dependent variables
* Functions that visualize matrices of data and use clustering algorithms to discover structure in those matrices
* A function to plot statistical timeseries data with flexible estimation and representation of uncertainty around the estimate
* High-level abstractions for structuring grids of plots that let you easily build complex visualizations
<div class="alert alert-block alert-success">**You can import Seaborn as below:**</div>
```
import seaborn as sns
```
# **Scikit-learn**
---
Scikit-learn provides a range of supervised and unsupervised learning algorithms via a consistent interface in Python.
It features various algorithms like support vector machine, random forests, and k-neighbours, and it also supports Python numerical and scientific libraries like NumPy and SciPy.
Some popular groups of models provided by scikit-learn include:
**Clustering**: for grouping unlabeled data such as KMeans.
**Cross Validation:** for estimating the performance of supervised models on unseen data.
**Datasets:** for test datasets and for generating datasets with specific properties for investigating model behavior.
**Dimensionality Reduction:** for reducing the number of attributes in data for summarization, visualization and feature selection such as Principal component analysis.
**Ensemble methods:** for combining the predictions of multiple supervised models.
**Feature extraction:** for defining attributes in image and text data.
**Feature selection:** for identifying meaningful attributes from which to create supervised models.
**Parameter Tuning:** for getting the most out of supervised models.
**Manifold Learning:** For summarizing and depicting complex multi-dimensional data.
**Supervised Models:** a vast array not limited to generalized linear models, discriminant analysis, naive Bayes, lazy methods, neural networks, support vector machines and decision trees.
## **Linear Regression for Machine Learning**
Linear regression attempts to model the relationship between two variables by fitting a linear equation to observed data. One variable is considered to be an explanatory variable, and the other is considered to be a dependent variable. For example, a modeler might want to relate the weights of individuals to their heights using a linear regression model.
Before attempting to fit a linear model to observed data, a modeler should first determine whether or not there is a relationship between the variables of interest. This does not necessarily imply that one variable causes the other (for example, higher GATE scores do not cause higher college grades), but that there is some significant association between the two variables.
A linear regression line has an equation of the form **Y = a + bX**, where **X** is the explanatory variable and **Y** is the dependent variable. The slope of the line is b, and a is the intercept (the value of y when x = 0).
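Before fitting the house data below, here is a minimal sketch of the Y = a + bX fit with scikit-learn on synthetic data (the numbers are made up purely to illustrate the API, so the recovered slope and intercept are known in advance):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data drawn from y = 1 + 2x, so the right answer is known
X = np.arange(10, dtype=float).reshape(-1, 1)  # scikit-learn expects 2-D features
y = 1 + 2 * X.ravel()

model = LinearRegression().fit(X, y)
print(round(model.intercept_, 6))  # → 1.0  (the intercept 'a')
print(round(model.coef_[0], 6))    # → 2.0  (the slope 'b')
```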
```
# Necessary Imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
from sklearn.linear_model import LinearRegression
```
## Dataset
***
Let's start by loading the dataset. We'll be using two `.csv` files: one having only one predictor and the other having multiple predictors. Since the target variable (we'll find out what target variables and predictors are below) is **quantitative/continuous**, this is best suited for regression problems.
Let's start loading the data for univariate analysis.
```
data = pd.read_csv("house_prices.csv")
data.info()
data.head()
```
In order to learn to make predictions, it is important to learn what a Predictor is.
## So what is a predictor?
***
How could you tell whether a person went to a tier 1, 2, or 3 business college in India?
Simple: if someone is determined to pursue an MBA degree, higher CAT scores (or GPA) lead to more college admissions!
So the CAT score here is known as the predictor, and the variable of interest is known as the target variable.
## Predictors & Target Variable for our dataset
***
Here, our target variable would be as mentioned above ____________
**What could be the predictors for our target variable?**
Let's go with the **LotArea**.
We want to see whether the price of a house is really affected by the area of the house.
Intuitively, we all know the outcome, but let's try to understand why we're doing this.
## Plotting our data
***
- Starting simple, let's just check how our data looks like in a scatter plot where:
- Area is taken along the X-axis
- Price is taken along the Y-axis
```
plt.scatter(data['LotArea'], data['SalePrice'])
plt.title('House pricing')
plt.xlabel('Area')
plt.ylabel('Price')
plt.show()
```
## Is there a relation ?
***
- Looking at the plot above, we can see an upward trend in house prices as the area of the house increases
- We can say that as the area of a house increases, its price increases too.
<br/>
Now, let's say we want to predict the price of the house whose area is 14000 sq feet, how should we go about it?
```
plt.scatter(data['LotArea'], data['SalePrice'])
plt.axvline(x=14000,linewidth='1',color='r')
plt.title('House pricing')
plt.xlabel('Area')
plt.ylabel('Price')
plt.show()
```
## Which line to choose?
***
As you saw, there are many lines that seem to fit reasonably well.
Consider the following lines:
$$ price = 30000 + 15∗area\\
price=10000 + 17 ∗ area\\
price= 50000 + 12 ∗ area
$$
<div class="alert alert-block alert-success">**Let's try and plot them and see if they are a good fit**</div>
```
import matplotlib.pyplot as plt
plt.scatter(data.LotArea, data.SalePrice)
plt.plot(data.LotArea, 30000 + 15*data.LotArea, "r-")
plt.plot(data.LotArea, 10000 + 17*data.LotArea, "y-")
plt.plot(data.LotArea, 50000 + 12*data.LotArea, "c-")
plt.title(' House pricing')
plt.xlabel('Area')
plt.ylabel('Price')
plt.show()
```
## Which line to choose?
***
As you can see, although all three seemed like a good fit, they are quite different from each other. And in the end, they will result in very different predictions.
For example, for house area = 9600, the predictions for the red, yellow and cyan lines are computed below.
```
# red line:
print("red line:", 30000 + 15*9600)     # value 9600 inserted in place of LotArea
# yellow line:
print('yellow line:', 10000 + 17*9600)  # value 9600 inserted in place of LotArea
# cyan line:
print('cyan line:', 50000 + 12*9600)    # value 9600 inserted in place of LotArea
```
## Which line to choose?
***
As you can see, the price predictions vary significantly from one another. So how do we choose the best line?
Well, we can define a function that measures how near or far a prediction is from the actual value.
If we consider the actual and predicted values as points in space, we can just calculate the distance between these two points!
This function is defined as:
$$(Y_{pred}-Y_{actual})^2$$
The farther apart the points, the greater the distance and the larger the value of the function!
It is known as the **cost function** and since this function captures square of distance, it is known as the **least-squares cost function**.
The idea is to **minimize** the cost function to get the best fitting line.
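As a sketch of this idea, the least-squares cost can be coded directly (assuming `numpy`); the two (area, price) pairs below are illustrative sample points, not the full dataset:

```python
import numpy as np

def least_squares_cost(y_pred, y_actual):
    """Mean of squared differences between predicted and actual values."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_actual = np.asarray(y_actual, dtype=float)
    return np.mean((y_pred - y_actual) ** 2)

# Hypothetical (area, price) sample points; the second pair is the
# training example quoted later in the notebook: (9600, 181500)
area = np.array([8450.0, 9600.0])
price = np.array([208500.0, 181500.0])

# Evaluate each candidate line's cost; the line with the lowest cost fits best
for label, a, b in [("red", 30000, 15), ("yellow", 10000, 17), ("cyan", 50000, 12)]:
    print(label, least_squares_cost(a + b * area, price))
```

On the full dataset, minimizing this cost over all (a, b) pairs yields the best-fitting line.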
## Introducing *Linear Regression* :
***
Linear regression using the least-squares cost function is known as **Ordinary Least Squares (OLS) Linear Regression**.
This allows us to analyze the relationship between two quantitative variables and derive some meaningful insights.
## Notations !
***
We will start to use the following notation, as it helps us represent the problem in a concise way.
* $x^{(i)}$ denotes the predictor(s) - in our case it's the Area
* $y^{(i)}$ denotes the target variable (Price)
A pair ($x^{(i)}$ , $y^{(i)}$) is called a training example.
Let's consider that any given dataset contains **m** training examples, or observations.
{($x^{(i)}$ , $y^{(i)}$) ; i = 1, . . . , m} is called a **training set**.
In this example, m = 1326 (the number of rows).
For example, the 2nd training example, ($x^{(2)}$ , $y^{(2)}$), corresponds to **(9600, 181500)**.
## **Cost Function - Why is it needed ?**
* An ideal case would be when all the individual points in the scatter plot fall directly on the line OR a straight line passes through all the points in our plot, but in reality, that rarely happens
* We can see that for a Particular Area, there is a difference between Price given by our data point (which is the correct observation) and the line (predicted observation or Fitted Value)
So how can we Mathematically capture such differences and represent it?
### **Cost Function - Mathematical Representation**
We choose θs so that predicted values are as close to the actual values as possible
We can define a mathematical function to capture the difference between the predicted and actual values.
This function is known as the cost function and denoted by J(θ)
$$J(θ) = \frac{1}{2m} \sum _{i=1}^m (h_\theta(X^{(i)})-Y^{(i)})^2$$
Intuitively, θ is the coefficient of $x$ in our linear model: it measures how much a unit change in $x$ affects $y$.
Here, we need to figure out the values of intercept and coefficients so that the cost function is minimized.
We do this by a very important and widely used Algorithm: **Gradient Descent**
---
## **Optimizing using gradient descent**
---
- Gradient Descent is an iterative method that starts with some “initial random value” for θ, and that repeatedly changes θ to make J(θ) smaller, until hopefully it converges to a value of θ that minimizes J(θ)
- It repeatedly performs an update on θ as shown:
$$ \theta_{j} := \theta_{j}-\alpha \frac{\partial }{\partial \theta_{j}}J(\theta) $$
- Here α is called the learning rate. This is a very natural algorithm that repeatedly takes a step in the direction of steepest decrease of J(θ)
## Gradient Descent Algorithm
***
To get the optimal value of θ, perform the following algorithm, known as the **Batch Gradient Descent Algorithm**:
1. Assume an initial θ
2. Calculate $h_{\theta}(x^{(i)})$ for i = 1 to m
3. Calculate J(θ); stop when J(θ) reaches its global/local minimum
4. Calculate $\sum_{i=1}^{m}(y^{(i)}-h_{\theta}(x^{(i)}))\,x_{j}^{(i)}$ for all $\theta_{j}$'s
5. Calculate the new $\theta_{j}$'s
6. Go to step 2
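The steps above can be sketched in plain `numpy` for the one-predictor case. Standardizing $x$ first is an assumption made here (so a single learning rate converges quickly), not part of the algorithm as stated:

```python
import numpy as np

def batch_gradient_descent(x, y, alpha=0.1, n_iters=1000):
    """Minimal batch gradient descent for h_theta(x) = theta0 + theta1 * x.

    A teaching sketch, not a production implementation: x is standardized
    so one fixed learning rate alpha works.
    """
    x = (x - x.mean()) / x.std()               # standardize for stable steps
    m = len(y)
    theta0, theta1 = 0.0, 0.0                  # step 1: initial theta
    for _ in range(n_iters):
        h = theta0 + theta1 * x                # step 2: predictions
        grad0 = (1 / m) * np.sum(h - y)        # dJ/dtheta0
        grad1 = (1 / m) * np.sum((h - y) * x)  # dJ/dtheta1
        theta0 -= alpha * grad0                # step 5: simultaneous update
        theta1 -= alpha * grad1
    return theta0, theta1

# Tiny synthetic example: y = 3 + 2*x exactly
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 3 + 2 * x
t0, t1 = batch_gradient_descent(x, y)
```

Because x was standardized inside the function, the fitted parameters apply to the standardized feature; predictions are recovered with `t0 + t1 * (x - x.mean()) / x.std()`.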
## Linear Regression in `sklearn`
***
**`sklearn`** provides an easy API to fit a linear regression and predict values using it.
Let's see how it works
```
X = data.LotArea.values[:, np.newaxis] # Reshape to an (m, 1) column, as sklearn expects a 2-D feature array
y = data.SalePrice
X
# Fitting Simple Linear Regression to the Training set
regressor = LinearRegression()
regressor.fit(X, y)
# Predicting the Test set results
y_pred = regressor.predict(X)
```
## Plotting the Best Fitting Line
```
# Train Part
plt.scatter(X, y)
plt.plot(X, y_pred, "r-")
plt.title('Housing Price ')
plt.xlabel('Area')
plt.ylabel('Price')
plt.show()
```
## Prediction made Easy
***
- Visually, we now have a nice approximation of how Area affects the Price
- We can also make a prediction, the easy way of course!
- For example: If we want to buy a house of 14,000 sq. ft, we can simply draw a vertical line from 14,000 up to our Approximated Trend line and continue that line towards the y-axis
```
# Train Part
plt.scatter(X, y)
plt.plot(X, y_pred, "r-")
plt.title('Housing Price ')
plt.xlabel('Area')
plt.ylabel('Price')
plt.axvline(x=14000,c='g');
```
- We can see that for a house whose area is ~14,000 sq. ft, we would need to pay roughly 200,000-225,000
## Multivariate Linear Regression
***
- In univariate linear regression we used only two variables: the dependent variable (Price) and a single independent variable.
- Now we will use multiple independent variables instead of one to predict the dependent variable, i.e. the Price.
- The equation for multivariate linear regression is modified as below:
$$ y = \theta_{0}+\theta_{1}x_{1}+\theta_{2}x_{2}+\cdots +\theta_{n}x_{n} $$
- So, along with Area we will consider other variables, such as Pool, etc.
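A quick sketch of how this equation is evaluated in vectorized form; the parameter values and feature columns below are hypothetical, purely to show the shape of the computation:

```python
import numpy as np

# Hypothetical fitted parameters for three predictors (area, pool, year built)
theta0 = 25000.0                          # intercept
theta = np.array([12.0, 80.0, 15.0])      # one coefficient per predictor

# Each row of X is one house, each column one predictor (made-up values)
X = np.array([
    [8450.0, 0.0, 2003.0],
    [9600.0, 1.0, 1976.0],
])

y_hat = theta0 + X @ theta                # vectorized theta0 + sum_j theta_j * x_j
print(y_hat)
```

This is exactly what `LinearRegression.predict` does internally with its learned `intercept_` and `coef_`.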
```
#Loading the data
NY_Housing = pd.read_csv("house_prices_multivariate.csv")
NY_Housing.head()
# making Independent and Dependent variables from the dataset
X = NY_Housing.iloc[:,:-1] # Selecting everything except the last column
y = NY_Housing.SalePrice
# Fitting Multiple Linear Regression
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X, y)
print("intercept:", regressor.intercept_) # This is the y-intercept
print("coefficients of predictors:", regressor.coef_) # These are the weights or regression coefficients.
```
## Predicting the price
***
Now let's say I want to predict the price of a house with following specifications.
```
my_house = X.iloc[155]
my_house
pred_my_house = regressor.predict(my_house.values.reshape(1, -1))
print("predicted value:", pred_my_house[0])
print("actual value:", y[155])
```
As you can see the predicted value is not very far away from the actual value.
Now let's try to predict the price for all the houses in the dataset.
```
# Predicting the results
y_pred = regressor.predict(X)
y_pred[:10]
```
<div class="alert alert-block alert-success">Great! now, let's put the predicted values next to the actual values and see how good a job have we done!</div>
```
prices = pd.DataFrame({"actual": y,
"predicted": y_pred})
prices.head(10)
```
## Measuring the goodness of fit
***
Must say we have done a reasonably good job of predicting the house prices.
However, as the number of predictions increase it would be difficult to manually check the goodness of fit. In such a case, we can use the cost function to check the goodness of fit.
<div class="alert alert-block alert-success">Let's first start by finding the mean squared error (MSE)</div>
```
from sklearn.metrics import mean_squared_error
mean_squared_error(y, y_pred)  # sklearn convention: true values first
```
<div class="alert alert-block alert-warning">**What do you think about the error value?**</div>
As you may have noticed, the error value seems very high (in the billions!). Why has this happened?
MSE is a relative measure of goodness of fit: its value depends on the unit of the target (here, squared dollars).
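One common workaround is to report RMSE (which is back in the target's own unit) or the unitless R² alongside MSE. A minimal sketch in plain `numpy`, with hypothetical actual/predicted prices:

```python
import numpy as np

def rmse(y_actual, y_pred):
    """Root mean squared error -- same unit as the target (dollars here)."""
    y_actual = np.asarray(y_actual, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_actual - y_pred) ** 2))

def r_squared(y_actual, y_pred):
    """Proportion of variance in y explained by the model (unitless, <= 1)."""
    y_actual = np.asarray(y_actual, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_actual - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_actual - y_actual.mean()) ** 2) # total sum of squares
    return 1 - ss_res / ss_tot

# Hypothetical prices, just to show the calls
y_true = np.array([208500.0, 181500.0, 223500.0])
y_hat = np.array([200000.0, 190000.0, 220000.0])
print(rmse(y_true, y_hat), r_squared(y_true, y_hat))
```

`sklearn.metrics` provides equivalent `mean_squared_error` and `r2_score` helpers.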
As it turns out, linear regression depends on certain underlying assumptions. Violating these assumptions produces poor results.
Hence, it would be a good idea to understand them.
## Assumptions in Linear Regression
***
There are some key assumptions that are made whilst dealing with Linear Regression
These are pretty intuitive and very essential to understand as they play an important role in finding out some relationships in our dataset too!
Let's discuss these assumptions, their importance and mainly **how we validate these assumptions**!
### Assumption - 1
***
1) **Linear Relationship Assumption:**
Relationship between response (Dependent Variables) and feature variables (Independent Variables) should be linear.
- **Why it is important:**
<div class="alert alert-block alert-info">Linear regression only captures the linear relationship, as it's trying to fit a linear model to the data.</div>
- **How do we validate it:**
<div class="alert alert-block alert-success">The linearity assumption can be tested using scatter plots.</div>
### Assumption - 2
***
2) **Little or No Multicollinearity Assumption:**
It is assumed that there is little or no multicollinearity in the data.
- **Why it is important:**
<div class="alert alert-block alert-info">It results in unstable parameter estimates which makes it very difficult to assess the effect of independent variables.</div>
- **How to validate it:**
Multicollinearity occurs when the features (or independent variables) are not independent from each other. <div class="alert alert-block alert-success">Pair plots or a correlation matrix of the features help validate this.</div>
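A minimal sketch of this check using a pairwise correlation matrix. The feature columns below are synthetic stand-ins (in the notebook you would pass the columns of `X`); `garage_area` is deliberately constructed to track `lot_area`:

```python
import numpy as np

rng = np.random.default_rng(0)
lot_area = rng.uniform(5000, 20000, size=100)
garage_area = 0.05 * lot_area + rng.normal(0, 50, size=100)  # strongly tied to lot_area
year_built = rng.uniform(1950, 2010, size=100)               # unrelated feature

features = np.column_stack([lot_area, garage_area, year_built])
corr = np.corrcoef(features, rowvar=False)  # pairwise correlation matrix

# Off-diagonal entries near +/-1 flag multicollinear feature pairs
print(np.round(corr, 2))
```

A more formal diagnostic is the variance inflation factor (VIF), available in `statsmodels`.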
### Assumption - 3
***
3) **Homoscedasticity Assumption:**
Homoscedasticity describes a situation in which the error term (that is, the “noise” or random disturbance in the relationship between the independent variables and the dependent variable) is the same across all values of the independent variables.
- **Why it is important:**
<div class="alert alert-block alert-info">Generally, non-constant variance arises in presence of outliers or extreme leverage values.</div>
- **How to validate:**
<div class="alert alert-block alert-success">Plot the dependent variable vs. the error.</div>
***
In an ideal plot the variance around the regression line is the same for all values of the predictor variable.
In this plot we can see that the variance around our regression line is nearly the same and hence it satisfies the condition of homoscedasticity.

<br/><br/>
Image Source : https://en.wikipedia.org/wiki/Homoscedasticity
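A rough numeric version of this check, assuming only `numpy`: compare the residual spread in the lower and upper halves of the fitted values. The residuals below are synthetic and homoscedastic by construction:

```python
import numpy as np

def residual_spread_by_half(fitted, residuals):
    """Compare residual standard deviation in the lower vs upper half of fitted values.

    Roughly equal spreads are consistent with homoscedasticity; a large ratio
    suggests the residual-vs-fitted plot would show a funnel shape.
    """
    fitted = np.asarray(fitted, dtype=float)
    residuals = np.asarray(residuals, dtype=float)
    cut = np.median(fitted)
    low = residuals[fitted <= cut].std()
    high = residuals[fitted > cut].std()
    return low, high

# Synthetic example with constant-variance noise (homoscedastic)
rng = np.random.default_rng(1)
fitted = np.linspace(100000, 300000, 200)
residuals = rng.normal(0, 10000, size=200)
low, high = residual_spread_by_half(fitted, residuals)
print(round(low), round(high))
```

In practice you would pass the model's fitted values and `y - y_pred` and also inspect the scatter plot itself.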
### Assumption - 4
***
4) **Little or No autocorrelation in residuals:**
There should be little or no autocorrelation in the data.
Autocorrelation occurs when the residual errors are not independent from each other.
- **Why it is important:**
<div class="alert alert-block alert-info">The presence of correlation in error terms drastically reduces model's accuracy.
This usually occurs in time series models. If the error terms are correlated, the estimated standard errors tend to underestimate the true standard error.</div>
- **How to validate:**
<div class="alert alert-block alert-success">Use a residual-vs-time plot; look for seasonal or correlated patterns in the residual values.</div>
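One standard numeric check is the Durbin-Watson statistic, sketched here in plain `numpy` on synthetic residuals (the variable names are illustrative):

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic: values near 2 indicate no autocorrelation,
    near 0 positive autocorrelation, near 4 negative autocorrelation."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(2)
independent = rng.normal(size=500)           # uncorrelated residuals
trending = np.cumsum(rng.normal(size=500))   # random walk: strongly autocorrelated
print(durbin_watson(independent), durbin_watson(trending))
```

`statsmodels` ships an equivalent `durbin_watson` function if you prefer not to hand-roll it.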
### Assumption - 5
***
5) **Normal Distribution of error terms**
- **Why it is important:**
<div class="alert alert-block alert-info">Due to the Central Limit Theorem, we may assume that there are many underlying factors affecting the process, and the sum of these individual errors tends to behave like a zero-mean normal distribution. In practice, this often seems to hold.
</div>
- **How to validate:**
<div class="alert alert-block alert-success">You can look at a Q-Q plot.</div>
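A minimal sketch of building the Q-Q points by hand, using only `numpy` and the standard library's `statistics.NormalDist` (the residuals are synthetic):

```python
import numpy as np
from statistics import NormalDist

def normal_qq_points(residuals):
    """Return (theoretical, observed) quantile pairs for a normal Q-Q plot.

    Points lying close to a straight line indicate roughly normal residuals.
    """
    e = np.sort(np.asarray(residuals, dtype=float))
    n = len(e)
    # Standard-normal quantiles at plotting positions (i - 0.5) / n
    theoretical = np.array([NormalDist().inv_cdf((i - 0.5) / n)
                            for i in range(1, n + 1)])
    return theoretical, e

rng = np.random.default_rng(3)
residuals = rng.normal(0, 10000, size=200)
theo, obs = normal_qq_points(residuals)
# For normal residuals the correlation between the two axes is close to 1
print(round(np.corrcoef(theo, obs)[0, 1], 3))
```

In practice, `scipy.stats.probplot` or `statsmodels.api.qqplot` does this (plus the plotting) in one call.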
# Scaling up ML using Cloud AI Platform
In this notebook, we take a previously developed TensorFlow model to predict taxifare rides and package it up so that it can be run in Cloud AI Platform. For now, we'll run this on a small dataset. The model that was developed is rather simplistic, and therefore, the accuracy of the model is not great either. However, this notebook illustrates *how* to package up a TensorFlow model to run it within Cloud AI Platform.
Later in the course, we will look at ways to make a more effective machine learning model.
## Environment variables for project and bucket
Note that:
<ol>
<li> Your project id is the *unique* string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads: <b>Project ID:</b> cloud-training-demos </li>
<li> Cloud training often involves saving and restoring model files. If you don't have a bucket already, I suggest that you create one from the GCP console (because it will dynamically check whether the bucket name you want is available). A common pattern is to prefix the bucket name by the project id, so that it is unique. Also, for cost reasons, you might want to use a single region bucket. </li>
</ol>
<b>Change the cell below</b> to reflect your Project ID and bucket name.
```
import os
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Python Code
# Model Info
MODEL_NAME = 'taxifare'
# Model Version
MODEL_VERSION = 'v1'
# Training Directory name
TRAINING_DIR = 'taxi_trained'
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['MODEL_NAME'] = MODEL_NAME
os.environ['MODEL_VERSION'] = MODEL_VERSION
os.environ['TRAINING_DIR'] = TRAINING_DIR
os.environ['TFVERSION'] = '1.14' # Tensorflow version
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
```
### Create the bucket to store model and training data for deploying to Google Cloud Machine Learning Engine Component
```
%%bash
# The bucket needs to exist for the gsutil commands in next cell to work
gsutil mb -p ${PROJECT} gs://${BUCKET}
```
### Enable the Cloud Machine Learning Engine API
The next command works with Cloud AI Platform API. In order for the command to work, you must enable the API using the Cloud Console UI. Use this [link.](https://console.cloud.google.com/project/_/apis/library) Then search the API list for Cloud Machine Learning and enable the API before executing the next cell.
Allow the Cloud AI Platform service account to read/write to the bucket containing training data.
```
%%bash
# This command will fail if the Cloud Machine Learning Engine API is not enabled using the link above.
echo "Getting the service account email associated with the Cloud AI Platform API"
AUTH_TOKEN=$(gcloud auth print-access-token)
SVC_ACCOUNT=$(curl -X GET -H "Content-Type: application/json" \
-H "Authorization: Bearer $AUTH_TOKEN" \
https://ml.googleapis.com/v1/projects/${PROJECT}:getConfig \
| python -c "import json; import sys; response = json.load(sys.stdin); \
print (response['serviceAccount'])") # If this command fails, the Cloud Machine Learning Engine API has not been enabled above.
echo "Authorizing the Cloud AI Platform account $SVC_ACCOUNT to access files in $BUCKET"
gsutil -m defacl ch -u $SVC_ACCOUNT:R gs://$BUCKET
gsutil -m acl ch -u $SVC_ACCOUNT:R -r gs://$BUCKET # error message (if bucket is empty) can be ignored.
gsutil -m acl ch -u $SVC_ACCOUNT:W gs://$BUCKET
```
## Packaging up the code
Take your code and put it into a standard Python package structure. <a href="taxifare/trainer/model.py">model.py</a> and <a href="taxifare/trainer/task.py">task.py</a> contain the TensorFlow code from earlier (explore the <a href="taxifare/trainer/">directory structure</a>).
```
%%bash
find ${MODEL_NAME}
%%bash
cat ${MODEL_NAME}/trainer/model.py
```
## Find absolute paths to your data
Note the absolute paths below.
```
%%bash
echo "Working Directory: ${PWD}"
echo "Head of taxi-train.csv"
head -1 $PWD/taxi-train.csv
echo "Head of taxi-valid.csv"
head -1 $PWD/taxi-valid.csv
```
## Running the Python module from the command-line
#### Clean model training dir/output dir
```
%%bash
# This is so that the trained model is started fresh each time. However, this needs to be done before
# tensorboard is started
rm -rf $PWD/${TRAINING_DIR}
%%bash
# Setup python so it sees the task module which controls the model.py
export PYTHONPATH=${PYTHONPATH}:${PWD}/${MODEL_NAME}
# Currently set for python 2. To run with python 3
# 1. Replace 'python' with 'python3' in the following command
# 2. Edit trainer/task.py to reflect proper module import method
python -m trainer.task \
--train_data_paths="${PWD}/taxi-train*" \
--eval_data_paths=${PWD}/taxi-valid.csv \
--output_dir=${PWD}/${TRAINING_DIR} \
--train_steps=1000 --job-dir=./tmp
%%bash
ls $PWD/${TRAINING_DIR}/export/exporter/
%%writefile ./test.json
{"pickuplon": -73.885262,"pickuplat": 40.773008,"dropofflon": -73.987232,"dropofflat": 40.732403,"passengers": 2}
%%bash
# This model dir is the model exported after training and is used for prediction
#
model_dir=$(ls ${PWD}/${TRAINING_DIR}/export/exporter | tail -1)
# predict using the trained model
gcloud ai-platform local predict \
--model-dir=${PWD}/${TRAINING_DIR}/export/exporter/${model_dir} \
--json-instances=./test.json
```
## Monitor training with TensorBoard
To activate TensorBoard within the JupyterLab UI navigate to "<b>File</b>" - "<b>New Launcher</b>". Then double-click the 'Tensorboard' icon on the bottom row.
TensorBoard 1 will appear in the new tab. Navigate through the three tabs to see the active TensorBoard. The 'Graphs' and 'Projector' tabs offer very interesting information including the ability to replay the tests.
You may close the TensorBoard tab when you are finished exploring.
#### Clean model training dir/output dir
```
%%bash
# This is so that the trained model is started fresh each time. However, this needs to be done before
# tensorboard is started
rm -rf $PWD/${TRAINING_DIR}
```
## Running locally using gcloud
```
%%bash
# Use Cloud Machine Learning Engine to train the model in local file system
gcloud ai-platform local train \
--module-name=trainer.task \
--package-path=${PWD}/${MODEL_NAME}/trainer \
-- \
--train_data_paths=${PWD}/taxi-train.csv \
--eval_data_paths=${PWD}/taxi-valid.csv \
--train_steps=1000 \
--output_dir=${PWD}/${TRAINING_DIR}
```
Use TensorBoard to examine results. When I ran it (due to random seeds, your results will be different), the ```average_loss``` (Mean Squared Error) on the evaluation dataset was 187, meaning that the RMSE was around 13.
```
%%bash
ls $PWD/${TRAINING_DIR}
```
## Submit training job using gcloud
First copy the training data to the cloud. Then, launch a training job.
After you submit the job, go to the cloud console (http://console.cloud.google.com) and select <b>AI Platform | Jobs</b> to monitor progress.
<b>Note:</b> Don't be concerned if the notebook stalls (with a blue progress bar) or returns with an error about being unable to refresh auth tokens. This is a long-lived Cloud job and work is going on in the cloud. Use the Cloud Console link (above) to monitor the job.
```
%%bash
# Clear Cloud Storage bucket and copy the CSV files to Cloud Storage bucket
echo $BUCKET
gsutil -m rm -rf gs://${BUCKET}/${MODEL_NAME}/smallinput/
gsutil -m cp ${PWD}/*.csv gs://${BUCKET}/${MODEL_NAME}/smallinput/
%%bash
OUTDIR=gs://${BUCKET}/${MODEL_NAME}/smallinput/${TRAINING_DIR}
JOBNAME=${MODEL_NAME}_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
# Clear the Cloud Storage Bucket used for the training job
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/${MODEL_NAME}/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC \
--runtime-version=$TFVERSION \
-- \
--train_data_paths="gs://${BUCKET}/${MODEL_NAME}/smallinput/taxi-train*" \
--eval_data_paths="gs://${BUCKET}/${MODEL_NAME}/smallinput/taxi-valid*" \
--output_dir=$OUTDIR \
--train_steps=10000
```
<b>Use the Cloud Console link to monitor the job and do NOT proceed until the job is done.</b>
## Deploy model
Find out the actual name of the subdirectory where the model is stored and use it to deploy the model. Deploying model will take up to <b>5 minutes</b>.
```
%%bash
gsutil ls gs://${BUCKET}/${MODEL_NAME}/smallinput/${TRAINING_DIR}/export/exporter
```
#### Deploy model : step 1 - remove version info
Before an existing cloud model can be removed, any version info must first be removed. If the model does not exist, this command will generate an error, but that is OK.
```
%%bash
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/${MODEL_NAME}/smallinput/${TRAINING_DIR}/export/exporter | tail -1)
echo "MODEL_LOCATION = ${MODEL_LOCATION}"
gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
```
#### Deploy model: step 2 - remove existing model
Now that the version info is removed from an existing model, the actual model can be removed. If an existing model is not deployed, this command will generate an error but that is ok. It just means the model with the given name is not deployed.
```
%%bash
gcloud ai-platform models delete ${MODEL_NAME}
```
#### Deploy model: step 3 - deploy new model
```
%%bash
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
```
#### Deploy model: step 4 - add version info to the new model
```
%%bash
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/${MODEL_NAME}/smallinput/${TRAINING_DIR}/export/exporter | tail -1)
echo "MODEL_LOCATION = ${MODEL_LOCATION}"
gcloud ai-platform versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version $TFVERSION
```
## Prediction
```
%%bash
gcloud ai-platform predict --model=${MODEL_NAME} --version=${MODEL_VERSION} --json-instances=./test.json
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import json
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json')
request_data = {'instances':
[
{
'pickuplon': -73.885262,
'pickuplat': 40.773008,
'dropofflon': -73.987232,
'dropofflat': 40.732403,
'passengers': 2,
}
]
}
parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, MODEL_NAME, MODEL_VERSION)
response = api.projects().predict(body=request_data, name=parent).execute()
print ("response={0}".format(response))
```
## Train on larger dataset
I have already followed the steps below and the files are already available. <b>You don't need to repeat these steps.</b> In the next chapter (on feature engineering), we will avoid all this manual processing by using Cloud Dataflow.
Go to http://bigquery.cloud.google.com/ and type the query:
<pre>
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'nokeyindata' AS key
FROM
[nyc-tlc:yellow.trips]
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
AND ABS(HASH(pickup_datetime)) % 1000 == 1
</pre>
Note that this is now 1,000,000 rows (i.e. 100x the original dataset). Export this to CSV using the following steps (Note that <b>I have already done this and made the resulting GCS data publicly available</b>, so you don't need to do it.):
<ol>
<li> Click on the "Save As Table" button and note down the name of the dataset and table.
<li> On the BigQuery console, find the newly exported table in the left-hand-side menu, and click on the name.
<li> Click on "Export Table"
<li> Supply your bucket name and give it the name train.csv (for example: gs://cloud-training-demos-ml/taxifare/ch3/train.csv). Note down what this is. Wait for the job to finish (look at the "Job History" on the left-hand-side menu)
<li> In the query above, change the final "== 1" to "== 2" and export this to Cloud Storage as valid.csv (e.g. gs://cloud-training-demos-ml/taxifare/ch3/valid.csv)
<li> Download the two files, remove the header line and upload it back to GCS.
</ol>
<p/>
<p/>
## Run Cloud training on 1-million row dataset
This took 60 minutes and uses as input 1-million rows. The model is exactly the same as above. The only changes are to the input (to use the larger dataset) and to the Cloud MLE tier (to use STANDARD_1 instead of BASIC -- STANDARD_1 is approximately 10x more powerful than BASIC). At the end of the training the loss was 32, but the RMSE (calculated on the validation dataset) was stubbornly at 9.03. So, simply adding more data doesn't help.
```
%%bash
XXXXX this takes 60 minutes. if you are sure you want to run it, then remove this line.
OUTDIR=gs://${BUCKET}/${MODEL_NAME}/${TRAINING_DIR}
JOBNAME=${MODEL_NAME}_$(date -u +%y%m%d_%H%M%S)
CRS_BUCKET=cloud-training-demos # use the already exported data
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/${MODEL_NAME}/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=$TFVERSION \
-- \
--train_data_paths="gs://${CRS_BUCKET}/${MODEL_NAME}/ch3/train.csv" \
--eval_data_paths="gs://${CRS_BUCKET}/${MODEL_NAME}/ch3/valid.csv" \
--output_dir=$OUTDIR \
--train_steps=100000
```
## Challenge Exercise
Modify your solution to the challenge exercise in d_trainandevaluate.ipynb appropriately. Make sure that you implement training and deployment. Increase the size of your dataset by 10x since you are running on the cloud. Does your accuracy improve?
### Clean-up
#### Delete Model : step 1 - remove version info
Before an existing cloud model can be removed, it must have any version info removed.
```
%%bash
gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
```
#### Delete model: step 2 - remove existing model
Now that the version info is removed from an existing model, the actual model can be removed.
```
%%bash
gcloud ai-platform models delete ${MODEL_NAME}
```
Copyright 2016 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
```
import spacy
from IPython.display import SVG, YouTubeVideo
from spacy import displacy
```
# Intro to Clinical NLP
### Instructor: Alec Chapman
### Email: abchapman93@gmail.com
Welcome to the NLP module! We'll start this module by watching a short introduction of the instructor and of Natural Language Processing (NLP) in medicine. Then we'll learn how to perform clinical NLP in spaCy and will end by applying an NLP system to several clinical tasks and datasets.
### Introduction videos:
- [Meet the Instructor: Dr. Wendy Chapman](https://youtu.be/piJc8RXCZW4)
- [Intro to Clinical NLP / Meet the Instructor: Alec Chapman](https://youtu.be/suVOm0CFX7A)
Slides: [Intro-to-NLP.pdf](https://github.com/Melbourne-BMDS/mimic34md2020_materials/blob/master/slides/Intro-to-NLP.pdf)
```
# Introduction to the instructor: Wendy Chapman
YouTubeVideo("piJc8RXCZW4")
YouTubeVideo("suVOm0CFX7A")
```
# Intro to spaCy
```
YouTubeVideo("agmaqyUMAkI")
```
One very popular tool for NLP is [spaCy](https://spacy.io). SpaCy offers many out-of-the-box tools for processing and analyzing text, and the spaCy framework allows users to extend the models for their own purposes. SpaCy consists mostly of **statistical NLP** models. In statistical models, a large corpus of text is processed and mathematical methods are used to identify patterns in the corpus. This process is called **training**. Once a model has been trained, we can use it to analyze new text. But as we'll see, we can also use spaCy to implement sophisticated rules and custom logic.
SpaCy comes with several pre-trained models, meaning that we can quickly load a model which has been trained on large amounts of data. This way, we can take advantage of work which has already been done by spaCy developers and focus on our own NLP tasks. Additionally, members of the open-source spaCy community can train and publish their own models.
<img alt="SpaCy logo" height="100" width="250" src="https://spacy.io/static/social_default-1d3b50b1eba4c2b06244425ff0c49570.jpg">
# Agenda
- We'll start by looking at the basic usage of spaCy
- Next, we'll focus on a specific NLP task, **named entity recognition (NER)**, and see how it works in spaCy, as well as some of its limitations on clinical data
- Since spaCy's built-in statistical models don't accomplish the tasks we need in clinical NLP, we'll use spaCy's pattern matchers to write rules to extract clinical concepts
- We will then download and use a statistical model to extract clinical concepts from text
- Some of these limitations can be addressed by writing our own rules for concept extraction, and we'll practice that with some clinical texts. We'll then go a little deeper into how spaCy's models are implemented and how we can modify them. Finally, we'll end the day by looking at spaCy models which were designed specifically for use in the biomedical domain.
# spaCy documentation
spaCy has great documentation. As we're going along today, try browsing through their documentation to find examples and instructions. Start by opening up these two pages and navigating through the documentation:
[Basic spaCy usage](https://spacy.io/usage/models)
[API documentation](https://spacy.io/api)
spaCy also has a really good, free online class. If you want to dig deeper into spaCy after this class, it's a great resource for using this library:
https://course.spacy.io/
It's also available on DataCamp (the first two chapters will be assigned for homework): https://learn.datacamp.com/courses/advanced-nlp-with-spacy
# Basic usage of spaCy
In this notebook, we'll look at the basic fundamentals of spaCy:
- Main classes in spaCy
- Linguistic attributes
- Named entity recognition (NER)
## How to use spaCy
At a high-level, here are the steps for using spaCy:
- Start by loading a pre-trained NLP model
- Process a string of text with the model
- Use the attributes in our processed documents for downstream NLP tasks like NER or document classification
For example, here's a very short example of how this works. For the sake of demonstration, we'll use this snippet of a business news article:
```
# First, load a pre-trained model
nlp = spacy.load("en_core_web_sm")
# Process a string of text with the model
text = """Taco Bell’s latest marketing venture, a pop-up hotel, opened at 10 a.m. Pacific Time Thursday.
The rooms sold out within two minutes.
The resort has been called “The Bell: A Taco Bell Hotel and Resort.” It’s located in Palm Springs, California."""
doc = nlp(text)
doc
# Use the attributes in our processed documents for downstream NLP tasks
# Here, we'll visualize the entities in this text identified through NER
displacy.render(doc, style="ent")
```
Let's dive a little deeper into how spaCy is structured and what we have to work with.
## SpaCy Architecture
The [spaCy documentation](https://spacy.io/api) offers a detailed description of the package's architecture. In this notebook, we'll focus on these 5 classes:
- `Language`: The NLP model used to process text
- `Doc`: A sequence of text which has been processed by a `Language` object
- `Token`: A single word or symbol in a Doc
- `Span`: A slice from a Doc
- `EntityRecognizer`: A model which extracts mentions of **named entities** from text
# `nlp`
The `nlp` object in spaCy is the linguistic model which will be used for processing text. We instantiate a `Language` class by providing the name of a pre-trained model which we wish to use. We typically name this object `nlp`, and this will be our primary entry point.
```
nlp = spacy.load("en_core_web_sm")
nlp
```
The `nlp` model we instantiated above is a **small** ("sm"), **English** ("en")-language model trained on **web** ("web") data, but there are currently 16 different models for 9 different languages. See the [spaCy documentation](https://spacy.io/usage/models) for more information on each of the models.
# Documents, spans and tokens
The `nlp` object is what we'll be using to process text. The next few classes represent the output of our NLP model.
## `Doc` class
The `doc` object represents a single document of text. To create a `doc` object, we call `nlp` on a string of text. This runs that text through a spaCy pipeline, which we'll learn more about in a future notebook.
```
text = 'Taco Bell’s latest marketing venture, a pop-up hotel, opened at 10 a.m. Pacific Time Thursday.'
doc = nlp(text)
type(doc)
print(doc)
```
## Tokens and Spans
### Token
A `Token` is a single word, symbol, or whitespace in a `doc`. When we create a `doc` object, the text is broken up into individual tokens. This is called **"tokenization"**.
**Discussion**: Look at the tokens generated from this text snippet. What can you say about the tokenization method? Is it as simple as splitting up into words every time we reach a whitespace?
```
token = doc[0]
token
type(token)
doc
for token in doc:
print(token)
```
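To see why tokenization is more involved than splitting on whitespace, here is a naive pure-Python tokenizer for comparison (an illustrative sketch, not how spaCy works internally):

```python
# A naive whitespace tokenizer, to contrast with spaCy's tokenization.
# Purely illustrative -- spaCy's tokenizer is rule-based and language-aware.
text = "Taco Bell's latest marketing venture, a pop-up hotel, opened at 10 a.m."

naive_tokens = text.split()
print(naive_tokens[:6])
# Note how punctuation stays glued to words ("venture," and "hotel,"),
# whereas spaCy splits "Bell's" into "Bell" and "'s" and separates the commas.
```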
### Span
A `Span` is a slice of a document, or a consecutive sequence of tokens.
```
span = doc[1:4]
span
type(span)
```
## Linguistic Attributes
Because spaCy comes with pre-trained linguistic models, when we call `nlp` on a text we have access to a number of linguistic attributes in the `doc` or `token` objects.
### POS Tagging
Parts of speech are categories of words. For example, "nouns", "verbs", and "adjectives" are all examples of parts of speech. Assigning parts of speech to words is useful for downstream NLP tasks such as word sense disambiguation and named entity recognition.
**Discussion**: What do the POS tags below mean?
```
print(f"Token -> POS\n")
for token in doc:
print(f"{token.text} -> {token.pos_}")
spacy.explain("PROPN")
```
### Lemma
The **lemma** of a word refers to the **root form** of a word. For example, "eat", "eats", and "ate" are all different inflections of the lemma "eat".
```
print(f"Token -> Lemma\n")
for token in doc:
print(f"{token.text} -> {token.lemma_}")
```
### Dependency Parsing
In dependency parsing, we analyze the grammatical structure of a sentence. We won't spend too much time on this, but here is a nice visualization of what a dependency parse looks like. Take a minute to look at the arrows between words and try to figure out what they mean.
```
doc = nlp("The cat sat on the green mat")
displacy.render(doc, style='dep')
```
### Other attributes
Look at spaCy's [Token class documentation](https://spacy.io/api/token) for a full list of additional attributes available for each token in a document.
# NER with spaCy
**"Named Entity Recognition"** is a subtask of NLP where we extract specific named entities from the text. The definition of a "named entity" changes depending on the domain we're working on. We'll look at clinical NER later, but first we'll look at some examples in more general domains.
NER is often performed using news articles as source texts. In this case, named entities are typically proper nouns, such as:
- People
- Geopolitical entities, like countries
- Organizations
We won't go into the details of how NER is implemented in spaCy. If you want to learn more about NER and the various ways it's implemented, a great resource is [Chapter 17.1 of Jurafsky and Martin's textbook "Speech and Language Processing."](https://web.stanford.edu/~jurafsky/slp3/17.pdf)
Here is an excerpt from an article in the Guardian. We'll process this document with our nlp object and then look at what entities are extracted. One way to do this is using spaCy's `displacy` package, which visualizes the results of a spaCy pipeline.
```
text = """Germany will fight to the last hour to prevent the UK crashing out of the EU without a deal and is willing
to hear any fresh ideas for the Irish border backstop, the country’s ambassador to the UK has said.
Speaking at a car manufacturers’ summit in London, Peter Wittig said Germany cherished its relationship
with the UK and was ready to talk about solutions the new prime minister might have for the Irish border problem."""
doc = nlp(text)
displacy.render(doc, style="ent")
```
We can use spaCy's `explain` function to see definitions of what an entity type is. Look up any entity types that you're not familiar with:
```
spacy.explain("GPE")
```
The last example comes from a political news article, which is pretty typical for what NER is often trained on and used for. Let's look at another news article, this one with a business focus:
```
# Example 2
text = """Taco Bell’s latest marketing venture, a pop-up hotel, opened at 10 a.m. Pacific Time Thursday.
The rooms sold out within two minutes.
The resort has been called “The Bell: A Taco Bell Hotel and Resort.” It’s located in Palm Springs, California."""
doc = nlp(text)
displacy.render(doc, style="ent")
```
## Discussion
Compare how the NER performs on each of these texts. Can you see any errors? Why do you think it might make those errors?
Once we've processed a text with `nlp`, we can iterate through the entities via the `doc.ents` attribute. Each entity is a spaCy `Span`. You can see the label of the entity through `ent.label_`.
```
for ent in doc.ents:
print(ent, ent.label_)
```
# spaCy Processing Pipelines
How does spaCy generate information like POS tags and entities? Under the hood, the `nlp` object goes through a number of sequential steps to process the text. This is called a **pipeline** and it allows us to create modular, independent processing steps when analyzing text. The model we loaded comes with a default **pipeline** which helps extract linguistic attributes from the text. We can see the names of our pipeline components through the `nlp.pipe_names` attribute:
```
nlp.pipe_names
```
The image below shows a visual representation of this. In this default spaCy pipeline,
- We pass the text into the pipeline by calling `nlp(text)`
- The text is split into **tokens** by the `tokenizer`
- POS tags are assigned by the `tagger`
- A dependency parse is generated by the `parser`
- Entities are extracted by the `ner`
- A `Doc` object is returned
These are the steps taken in the default pipeline. However, as we'll see later we can add our own processing **components** and add them to our pipeline to do additional analysis.
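The pipeline idea, where each component reads a shared document object and adds its own annotations, can be sketched in plain Python. Everything below (the dict-based "doc", the fake tagger and NER rules) is invented for illustration; spaCy's real components are trained statistical models:

```python
# Toy pipeline: each component takes a "doc" dict and adds an annotation layer.
def tokenizer(doc):
    doc["tokens"] = doc["text"].split()
    return doc

def tagger(doc):
    # Pretend tagger: mark capitalized tokens as proper nouns.
    doc["pos"] = ["PROPN" if t[0].isupper() else "X" for t in doc["tokens"]]
    return doc

def ner(doc):
    # Pretend NER: PROPN tokens become entities.
    doc["ents"] = [t for t, p in zip(doc["tokens"], doc["pos"]) if p == "PROPN"]
    return doc

pipeline = [tokenizer, tagger, ner]

doc = {"text": "Peter Wittig spoke in London"}
for component in pipeline:
    doc = component(doc)

print(doc["ents"])  # ['Peter', 'Wittig', 'London']
```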
<img alt="SpaCy pipeline" src="https://d33wubrfki0l68.cloudfront.net/16b2ccafeefd6d547171afa23f9ac62f159e353d/48b91/pipeline-7a14d4edd18f3edfee8f34393bff2992.svg">
# Clinical Text
Let's now try using spaCy's built-in NER model on clinical text and see what information we can extract.
```
clinical_text = "76 year old man with hypotension, CKD Stage 3, status post RIJ line placement and Swan. "
doc = nlp(clinical_text)
displacy.render(doc, style="ent")
```
### Discussion
- How did spaCy do with this sentence?
- What do you think caused it to make errors in the classifications?
General purpose NER models are typically made for extracting entities out of news articles. As we saw before, this includes mainly people, organizations, and geopolitical entities. We can see which labels are available in spaCy's NER model by looking at the NER component (for example, via `nlp.get_pipe("ner").labels`). Not many of these are very useful for clinical text extraction.
### Discussion
- What are some entity types we are interested in in clinical domain?
- Does spaCy's out-of-the-box NER handle any of these types?
# Next Steps
Since spaCy's model doesn't extract the information we need by default, we'll need to do some additional work to extract clinical concepts. In the next notebook, we'll look at how spaCy allows **rule-based NLP** through **pattern matching**.
[nlp-02-medspacy-concept-extraction.ipynb](nlp-02-medspacy-concept-extraction.ipynb)
# Pre-trained embeddings for Text
```
import gzip
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
glove_path = '../data/embeddings/glove.6B.50d.txt.gz'
with gzip.open(glove_path, 'r') as fin:
line = fin.readline().decode('utf-8')
line
def parse_line(line):
values = line.decode('utf-8').strip().split()
word = values[0]
vector = np.asarray(values[1:], dtype='float32')
return word, vector
embeddings = {}
word_index = {}
word_inverted_index = []
with gzip.open(glove_path, 'r') as fin:
for idx, line in enumerate(fin):
word, vector = parse_line(line) # parse a line
embeddings[word] = vector # add word vector
word_index[word] = idx # add idx
word_inverted_index.append(word) # append word
word_index['good']
word_inverted_index[219]
embeddings['good']
embedding_size = len(embeddings['good'])
embedding_size
plt.plot(embeddings['good']);
plt.subplot(211)
plt.plot(embeddings['two'])
plt.plot(embeddings['three'])
plt.plot(embeddings['four'])
plt.title("A few numbers")
plt.ylim(-2, 5)
plt.subplot(212)
plt.plot(embeddings['cat'])
plt.plot(embeddings['dog'])
plt.plot(embeddings['rabbit'])
plt.title("A few animals")
plt.ylim(-2, 5)
plt.tight_layout()
vocabulary_size = len(embeddings)
vocabulary_size
```
## Loading pre-trained embeddings in Keras
```
from keras.models import Sequential
from keras.layers import Embedding
embedding_weights = np.zeros((vocabulary_size,
embedding_size))
for word, index in word_index.items():
embedding_weights[index, :] = embeddings[word]
emb_layer = Embedding(input_dim=vocabulary_size,
output_dim=embedding_size,
weights=[embedding_weights],
mask_zero=False,
trainable=False)
word_inverted_index[0]
model = Sequential()
model.add(emb_layer)
embeddings['cat']
cat_index = word_index['cat']
cat_index
model.predict([[cat_index]])
```
## Gensim
```
import gensim
from gensim.scripts.glove2word2vec import glove2word2vec
glove_path = '../data/embeddings/glove.6B.50d.txt.gz'
glove_w2v_path = '../data/embeddings/glove.6B.50d.txt.vec'
glove2word2vec(glove_path, glove_w2v_path)
from gensim.models import KeyedVectors
glove_model = KeyedVectors.load_word2vec_format(
glove_w2v_path, binary=False)
glove_model.most_similar(positive=['good'], topn=5)
glove_model.most_similar(positive=['two'], topn=5)
glove_model.most_similar(positive=['king', 'woman'],
negative=['man'], topn=3)
```
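Under the hood, `most_similar` ranks words by the cosine similarity between their vectors. A minimal pure-Python sketch with made-up 3-dimensional vectors (the real GloVe vectors used here have 50 dimensions):

```python
import math

# Toy 3-d "embeddings" -- invented for illustration, not real GloVe values.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.85, 0.75, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of norms.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def most_similar(word, topn=1):
    # Score every other word against the query and return the top matches.
    scores = {w: cosine(vectors[word], v) for w, v in vectors.items() if w != word}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:topn]

print(most_similar("king"))  # "queen" scores higher than "apple"
```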
## Visualization
```
import os
model_dir = '/tmp/ztdl_models/embeddings/'
from shutil import rmtree
rmtree(model_dir, ignore_errors=True)
os.makedirs(model_dir)
n_viz = 4000
emb_layer_viz = Embedding(n_viz,
embedding_size,
weights=[embedding_weights[:n_viz]],
mask_zero=False,
trainable=False)
model = Sequential([emb_layer_viz])
word_embeddings = emb_layer_viz.weights[0]
word_embeddings
import keras.backend as K
import tensorflow as tf
sess = K.get_session()
saver = tf.train.Saver([word_embeddings])
saver.save(sess, os.path.join(model_dir, 'model.ckpt'), 1)
os.listdir(model_dir)
fname = os.path.join(model_dir, 'metadata.tsv')
with open(fname, 'w', encoding="utf-8") as fout:
for index in range(0, n_viz):
word = word_inverted_index[index]
fout.write(word + '\n')
config = """embeddings {{
tensor_name: "{tensor}"
metadata_path: "{metadata}"
}}""".format(tensor=word_embeddings.name,
metadata='metadata.tsv')
print(config)
fname = os.path.join(model_dir, 'projector_config.pbtxt')
with open(fname, 'w', encoding="utf-8") as fout:
fout.write(config)
```
<div class="alert alert-info">
[Launch in Binder](https://mybinder.org/v2/gh/esowc/UNSEEN-open/master?filepath=doc%2FNotebooks%2Fexamples%2FCalifornia_Fires.ipynb)
<!-- Or launch an [Rstudio instance](https://mybinder.org/v2/gh/esowc/UNSEEN-open/master?urlpath=rstudio?filepath=doc%2Fexamples%2FCalifornia_Fires.ipynb)
-->
</div>
# California fires
In August 2020 in California, wildfires have burned more than [a million acres of land](https://edition.cnn.com/2020/10/06/us/gigafire-california-august-complex-trnd/index.html).
This year's fire season was also unique in the number of houses destroyed.
Here we retrieve average August temperatures over California within ERA5 1979-2020 and show how anomalous August 2020 was.

We furthermore create an UNSEEN ensemble and show that these kinds of fire seasons can be expected to occur more often in our present climate, since we find a clear trend in temperature extremes over the last decades.
### Retrieve data
<div class="alert alert-info">
Note
In this notebook you cannot use the python functions under Retrieve and Preprocess (they are here only for documentation of the entire workflow; see [retrieve](../1.Download/1.Retrieve.ipynb) if you want to download your own dataset). The resulting preprocessed dataset is provided so you can perform statistical analysis on the dataset and rerun the evaluation and examples provided.
</div>
The main functions to retrieve all forecasts (SEAS5) and reanalysis (ERA5) are `retrieve_SEAS5` and `retrieve_ERA5`. We want to download 2m temperature for August over California. By default, the hindcast years of 1981-2016 are downloaded for SEAS5. We include the years 1981-2020. The folder indicates where the files will be stored, in this case outside of the UNSEEN-open repository, in a 'California_example' directory. For more explanation, see [retrieve](../1.Download/1.Retrieve.ipynb).
```
import os
import sys
sys.path.insert(0, os.path.abspath('../../../'))
os.chdir(os.path.abspath('../../../'))
import src.cdsretrieve as retrieve
import src.preprocess as preprocess
import numpy as np
import xarray as xr
retrieve.retrieve_SEAS5(
variables=['2m_temperature', '2m_dewpoint_temperature'],
target_months=[8],
area=[70, -130, 20, -70],
years=np.arange(1981, 2021),
folder='E:/PhD/California_example/SEAS5/')
retrieve.retrieve_ERA5(variables=['2m_temperature', '2m_dewpoint_temperature'],
target_months=[8],
area=[70, -130, 20, -70],
folder='E:/PhD/California_example/ERA5/')
```
### Preprocess
In the preprocessing step, we first merge all downloaded files into one xarray dataset, then take the spatial average over the domain and a temporal average over August. Read the docs on [preprocessing](../2.Preprocess/2.Preprocess.ipynb) for more info.
```
SEAS5_California = preprocess.merge_SEAS5(folder ='E:/PhD/California_example/SEAS5/', target_months = [8])
```
And for ERA5:
```
ERA5_California = xr.open_mfdataset('E:/PhD/California_example/ERA5/ERA5_????.nc',combine='by_coords')
```
We calculate the [standardized anomaly of the 2020 event](../California_august_temperature_anomaly.ipynb) and select the 2m temperature over the region where 2 standard deviations from the 1979-2010 average was exceeded. This is a simple average; an area-weighted average would be more appropriate, since grid cell area decreases with latitude, see [preprocess](../2.Preprocess/2.Preprocess.ipynb).
```
ERA5_anomaly = ERA5_California['t2m'] - ERA5_California['t2m'].sel(time=slice('1979','2010')).mean('time')
ERA5_sd_anomaly = ERA5_anomaly / ERA5_California['t2m'].std('time')
ERA5_California_events = (
ERA5_California['t2m'].sel( # Select 2 metre temperature
longitude = slice(-125,-100), # Select the longitude
latitude = slice(45,20)). # And the latitude
where(ERA5_sd_anomaly.sel(time = '2020').squeeze('time') > 2). ##Mask the region where 2020 sd >2.
mean(['longitude', 'latitude'])) #And take the mean
```
Plot the August temperatures over the defined California domain:
```
ERA5_California_events.plot()
```
Select the same domain for SEAS5 and extract the events.
```
SEAS5_California_events = (
SEAS5_California['t2m'].sel(
longitude = slice(-125,-100), # Select the longitude
latitude = slice(45,20)).
where(ERA5_sd_anomaly.sel(time = '2020').squeeze('time') > 2).
mean(['longitude', 'latitude']))
```
And here we store the data in the Data section so the rest of the analysis in R can be reproduced.
```
SEAS5_California_events.to_dataframe().to_csv('Data/SEAS5_California_events.csv')
ERA5_California_events.to_dataframe().to_csv('Data/ERA5_California_events.csv')
```
### Evaluate
<div class="alert alert-info">
Note
From here onward we use R and not python!
We switch to R since we believe R has a better functionality in extreme value statistics.
</div>
```
setwd('../../..')
getwd()
SEAS5_California_events <- read.csv("Data/SEAS5_California_events.csv", stringsAsFactors=FALSE)
ERA5_California_events <- read.csv("Data/ERA5_California_events.csv", stringsAsFactors=FALSE)
## Convert Kelvin to Celsius
SEAS5_California_events$t2m <- SEAS5_California_events$t2m - 273.15
ERA5_California_events$t2m <- ERA5_California_events$t2m - 273.15
## Convert character time to Date format
ERA5_California_events$time <- lubridate::ymd(ERA5_California_events$time)
SEAS5_California_events$time <- lubridate::ymd(SEAS5_California_events$time)
```
*Is the UNSEEN ensemble realistic?*
To answer this question, we perform three statistical tests: independence, model stability and model fidelity tests.
These statistical tests are available through the [UNSEEN R package](https://github.com/timokelder/UNSEEN).
See [evaluation](../3.Evaluate/3.Evaluate.ipynb) for more info.
```
require(UNSEEN)
```
#### Timeseries
<a id='Timeseries'></a>
We plot the timeseries of SEAS5 (UNSEEN) and ERA5 (OBS) for August California temperatures.
You can call the documentation of the function with `?unseen_timeseries`
```
timeseries = unseen_timeseries(
ensemble = SEAS5_California_events,
obs = ERA5_California_events,
ensemble_yname = "t2m",
ensemble_xname = "time",
obs_yname = "t2m",
obs_xname = "time",
ylab = "August California temperature (C)")
timeseries
ggsave(timeseries, height = 5, width = 6, filename = "graphs/Calif_timeseries.png")
```
The timeseries consist of **hindcast (years 1982-2016)** and **archived forecasts (years 2017-2020)**. The datasets are slightly different: the hindcasts contain 25 members whereas operational forecasts contain 51 members, the native resolution is different, and the dataset from which the forecasts are initialized is different.
**For the evaluation of the UNSEEN ensemble we want to only use the SEAS5 hindcasts for a consistent dataset**. Note, 2017 is not used in either the hindcast nor the operational dataset, since it contains forecasts both initialized in 2016 (hindcast) and 2017 (forecast), see [retrieve](../1.Download/1.Retrieve.ipynb).
We split SEAS5 into hindcast and operational forecasts:
```
SEAS5_California_events_hindcast <- SEAS5_California_events[
SEAS5_California_events$time < '2017-02-01' &
SEAS5_California_events$number < 25,]
SEAS5_California_events_forecasts <- SEAS5_California_events[
SEAS5_California_events$time > '2017-02-01',]
```
And we select the same years for ERA5.
```
ERA5_California_events_hindcast <- ERA5_California_events[
ERA5_California_events$time > '1981-02-01' &
ERA5_California_events$time < '2017-02-01',]
```
Which results in the following timeseries:
```
unseen_timeseries(
ensemble = SEAS5_California_events_hindcast,
obs = ERA5_California_events_hindcast,
ensemble_yname = "t2m",
ensemble_xname = "time",
obs_yname = "t2m",
obs_xname = "time",
ylab = "August California temperature (C)")
```
#### Evaluation tests
With the hindcast dataset we evaluate the independence, stability and fidelity. Here, we plot the results for the fidelity test, for more detail on the other tests see the [evaluation section](../3.Evaluate/3.Evaluate.ipynb).
The fidelity test shows us how consistent the model simulations of UNSEEN (SEAS5) are with the observed (ERA5). The UNSEEN dataset is much larger than the observed -- hence they cannot simply be compared. For example, what if we had faced a few more or a few less heatwaves purely by chance?
This would influence the observed mean, but not so much influence the UNSEEN ensemble because of the large data sample. Therefore we express the UNSEEN ensemble as a range of plausible means, for data samples of the same length as the observed. We do the same for higher order [statistical moments](https://en.wikipedia.org/wiki/Moment_(mathematics)).
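The resampling idea behind the fidelity test can be sketched in a few lines. The sketch below is in Python for brevity (the actual test lives in the UNSEEN R package) and uses invented Gaussian toy data:

```python
import random

random.seed(42)

# Toy data: a long "ensemble" and a short "observed" record (invented numbers).
ensemble = [random.gauss(20.0, 1.5) for _ in range(2500)]
n_obs = 36  # same length as a 36-year observed record

# Repeatedly draw samples of length n_obs from the ensemble and store their
# means: the spread of these means is the range of "plausible" observed means.
boot_means = [
    sum(random.sample(ensemble, n_obs)) / n_obs
    for _ in range(1000)
]
lo, hi = min(boot_means), max(boot_means)
print(f"plausible range of the mean: [{lo:.2f}, {hi:.2f}]")
```

If the observed mean falls outside this range, the ensemble is biased; the same idea extends to the standard deviation, skewness, and kurtosis.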
```
Eval = fidelity_test(
obs = ERA5_California_events_hindcast$t2m,
ensemble = SEAS5_California_events_hindcast$t2m,
units = 'C',
biascor = FALSE,
fontsize = 14
)
Eval
ggsave(Eval, filename = "graphs/Calif_fidelity.png")
```
The fidelity test shows that the mean of the UNSEEN ensemble is too low compared to the observed -- the blue line falls outside of the model range in a. To correct for this low bias, we can apply an additive bias correction, which only corrects the mean of the simulations.
Lets apply the additive biascor:
```
obs = ERA5_California_events_hindcast$t2m
ensemble = SEAS5_California_events_hindcast$t2m
ensemble_biascor = ensemble + (mean(obs) - mean(ensemble))
fidelity_test(
obs = obs,
ensemble = ensemble_biascor,
units = 'C',
biascor = FALSE
)
```
This shows us what we expected: the mean bias is corrected because the model simulations are shifted up (the blue line is still the same, the axis has just shifted along with the histogram), but the other statistical moments are the same.
### Illustrate
```
source('src/evt_plot.r')
```
We apply extreme value theory to analyze the trend in 100-year temperature extremes. There are different extreme value distributions that can be used to fit to the data. First, we fit a stationary Gumbel and a GEV distribution (including a shape parameter) to the observed extremes. Then we fit a nonstationary GEV distribution to the observed temperatures and show that this better describes the data, because the p-values of 0.006 and 0.002 are very small (well below 0.05, i.e. significant at the 5% level with the likelihood ratio test).
<!-- How about nonstationarity?
Here I fit nonstationary distributions to the observed and to UNSEEN, and test whether those distributions fit better than stationary distributions. With a p value of 0.006, the nonstationary distribution is clearly a better fit. -->
```
## Fit stationary distributions
fit_obs_Gumbel <- fevd(x = obs,
type = "Gumbel"
)
fit_obs_GEV <- fevd(x = obs,
type = "GEV"
)
## And the nonstationary distribution
fit_obs_GEV_nonstat <- fevd(x = obs,
type = "GEV",
location.fun = ~ c(1:36), ##Fitting the gev with a location and scale parameter linearly correlated to the covariate (years)
scale.fun = ~ c(1:36),
use.phi = TRUE
)
#And test the fit
##1. Stationary Gumbel vs stationary GEV
lr.test(fit_obs_Gumbel, fit_obs_GEV)
##2. Stationary GEV vs Nonstationary GEV
lr.test(fit_obs_GEV, fit_obs_GEV_nonstat)
```
For the unseen ensemble this analysis is slightly more complicated since we need a covariate that has the same length as the ensemble:
```
#Create the ensemble covariate
year_vector = as.integer(format(SEAS5_California_events_hindcast$time, format="%Y"))
covariate_ens = year_vector - 1980
# Fit the stationary distribution
fit_unseen_GEV <- fevd(x = ensemble_biascor,
type = 'GEV',
use.phi = TRUE)
fit_unseen_Gumbel <- fevd(x = ensemble_biascor,
type = 'Gumbel',
use.phi = TRUE)
# Fit the nonstationary distribution
fit_unseen_GEV_nonstat <- fevd(x = ensemble_biascor,
type = 'GEV',
location.fun = ~ covariate_ens, ##Fitting the gev with a location and scale parameter linearly correlated to the covariate (years)
scale.fun = ~ covariate_ens,
use.phi = TRUE)
```
And the likelihood ratio test tells us that the nonstationary GEV distribution is the best fit, both p-values < 2.2e-16:
```
#And test the fit
##1. Stationary Gumbel vs stationary GEV
lr.test(fit_unseen_Gumbel,fit_unseen_GEV)
##2. Stationary GEV vs Nonstationary GEV
lr.test(fit_unseen_GEV, fit_unseen_GEV_nonstat)
```
We plot unseen trends in 100-year extremes. For more info on the methods see [this paper](https://doi.org/10.31223/osf.io/hyxeq)
```
p1 <- unseen_trends1(ensemble = ensemble_biascor,
x_ens = year_vector,
x_obs = 1981:2016,
rp = 100,
obs = obs,
covariate_ens = covariate_ens,
covariate_obs = c(1:36),
GEV_type = 'GEV',
ylab = 'August temperature (c)')
p1
p2 <- unseen_trends2(ensemble = ensemble_biascor,
obs = obs,
covariate_ens = covariate_ens,
covariate_obs = c(1:36),
GEV_type = 'GEV',
ylab = 'August temperature (c)')
p2
```
**Applications:**
We have seen the worst fire season over California this year. Such fires are likely part of a chain of impacts, from droughts to heatwaves to fires, with feedbacks between them. Here we assess August temperatures and show that the 2020 August average temperature was very anomalous. We furthermore use SEAS5 forecasts to analyze the trend in rare extremes. Evaluation metrics show that the model simulations have a low bias, which we correct for using an additive bias correction. UNSEEN trend analysis shows a clear trend over time, both in the model and in the observed temperatures. Based on this analysis, temperature extremes that you would expect to occur once in 100 years in 1981 might occur once in 10 years in 2015 -- and even more frequently now!
**Note**
Our analysis shows the results of a *linear* trend analysis of August temperature averages over 1981-2015. Other time windows, different trends than linear, and spatial domains could (should?) be investigated, as well as drought estimates in addition to temperature extremes.
# Handwritten Digit Detection
#### Helia Rasooli
#### Zahra Bakhtiar
#### Bahareh Behroozi
#### Seyyedeh Zahra Fallah MirMousavi Ajdad
# MNIST
#### The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST.
##### http://yann.lecun.com/exdb/mnist/
```
import warnings
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
from scipy import ndimage
#import itertools
warnings.filterwarnings("ignore")
def calculateDigitsAccuracy(predicted, actual):
correct = 0
for i in range(len(predicted)):
if predicted[i] == actual[i][0]:
correct += 1
return correct / len(actual)
def calculateLettersAccuracy(predicted, actual):
correct = 0
for i in range(len(predicted)):
if predicted[i] == actual[i]:
correct += 1
return correct / len(actual)
def showImage(data):
plt.imshow(np.reshape(data, (28, 28)), cmap='gray_r')
plt.show()
def showImage_L(data):
rotated_img = ndimage.rotate(np.reshape(data, (28, 28)), 90)
plt.imshow(rotated_img, cmap='gray_r',origin='lower')
plt.show()
def showPlot(points, xLabel, yLabel):
X = [x for (x, y) in points]
Y = [y for (x, y) in points]
plt.plot(X, Y)
plt.ylabel(yLabel)
plt.xlabel(xLabel)
plt.show()
def compareScores(X, trainScores, testScores, xlabel, ylabel):
fig, ax = plt.subplots()
for scores, label, style in [(trainScores, 'Train Data', ':ob'), (testScores, 'Test Data', ':or')]:
ax.plot(X, scores, style, label=label)
best_xy = max([(n, score) for n, score in zip(X, scores)], key=lambda x: x[1])
ax.annotate((best_xy[0], round(best_xy[1], 3)), xy=best_xy, xytext=(best_xy[0] + 5, best_xy[1]), arrowprops=dict(arrowstyle="->"))
ax.legend()
ax.set(xlabel=xlabel, ylabel=ylabel)
fig.show()
trainData = pd.read_csv('./MNIST_data/train_data.csv', header=None).values
trainLabels = pd.read_csv('./MNIST_data/train_label.csv', header=None).values
testData = pd.read_csv('./MNIST_data/test_data.csv', header=None).values
testLabels = pd.read_csv('./MNIST_data/test_label.csv', header=None).values
```
An example of the number 6 in the dataset:
```
showImage(trainData[1310])
```
# K-Nearest Neighbors
#### 1.
In the KNN algorithm, the output for each test example depends on the k closest training examples in the feature space.
* In k-NN classification, the output is a class membership. An object is classified by a plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor.
* In k-NN regression, the output is the property value for the object. This value is the average of the values of its k nearest neighbors.
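The classification variant can be sketched by hand on a toy 2-d dataset (purely illustrative; below we use scikit-learn's implementation on MNIST):

```python
import math
from collections import Counter

# Toy 2-d training points with class labels (invented for illustration).
train = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
         ((6, 6), "B"), ((6, 7), "B"), ((7, 6), "B")]

def knn_predict(point, k=3):
    # Sort training points by Euclidean distance to the query point...
    nearest = sorted(train, key=lambda p: math.dist(point, p[0]))[:k]
    # ...and take a majority vote among the k closest labels.
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_predict((2, 2)))  # -> A
print(knn_predict((6, 5)))  # -> B
```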
#### 2.
```
from sklearn import neighbors
clf = neighbors.KNeighborsClassifier(n_neighbors=12)
clf.fit(trainData, trainLabels)
predictedTrain = clf.predict(trainData)
predictedTest = clf.predict(testData)
trainAcc = calculateDigitsAccuracy(predictedTrain, trainLabels)
testAcc = calculateDigitsAccuracy(predictedTest, testLabels)
print('train data accuracy:', trainAcc)
print('test data accuracy:', testAcc)
```
#### 3.
```
trainScores = []
testScores = []
X = [x for x in range(5, 15)]
for k in X:
clf = neighbors.KNeighborsClassifier(n_neighbors=k)
clf.fit(trainData, trainLabels)
predictedTrain = clf.predict(trainData)
predictedTest = clf.predict(testData)
trainAcc = calculateDigitsAccuracy(predictedTrain, trainLabels)
testAcc = calculateDigitsAccuracy(predictedTest, testLabels)
trainScores.append(trainAcc)
testScores.append(testAcc)
compareScores(X, trainScores, testScores, 'K Neighbors', 'Accuracy')
clf = neighbors.KNeighborsClassifier(n_neighbors=20)
clf.fit(trainData, trainLabels)
nearests = clf.kneighbors([trainData[1042]], return_distance=False)
print(nearests)
fig, ax = plt.subplots(4, 5, subplot_kw=dict(xticks=[], yticks=[]))
for (i, axi) in enumerate(ax.flat):
axi.imshow(np.reshape(trainData[nearests[0][i]], (28, 28)), cmap='gray_r')
```
#### 6.
* Doesn't work well with high-dimensional data
* Doesn't work well with categorical features
* Computation- and memory-heavy, since every prediction scans the whole training set
# Decision Tree
#### 7.
A decision tree is a flowchart-like structure in which each internal node represents a "test" on an attribute (e.g. whether a coin flip comes up heads or tails), each branch represents the outcome of the test, and each leaf node represents a class label (decision taken after computing all attributes). The paths from root to leaf represent classification rules.
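The "test on an attribute" at each internal node is typically chosen to minimize an impurity measure. Below is a minimal sketch of picking a split threshold on a single numeric feature by weighted Gini impurity; the `gini`/`best_split` helpers and toy data are illustrative assumptions, not sklearn's internals:

```python
import numpy as np

def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(x, y):
    """Threshold on one feature minimizing the weighted Gini of the two children."""
    best_t, best_score = None, float('inf')
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([0, 0, 0, 1, 1, 1])
print(best_split(x, y))  # perfectly separable at threshold 3.0 (weighted Gini 0)
```

A real tree applies this search recursively over all features, stopping at a depth limit such as the `max_depth` used below.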
```
from sklearn import tree
clf = tree.DecisionTreeClassifier(max_depth=22)
clf.fit(trainData, trainLabels)
predictedTrain = clf.predict(trainData)
predictedTest = clf.predict(testData)
trainAcc = calculateDigitsAccuracy(predictedTrain, trainLabels)
testAcc = calculateDigitsAccuracy(predictedTest, testLabels)
print('train data accuracy:', trainAcc)
print('test data accuracy:', testAcc)
```
#### 9.
```
trainScores = []
testScores = []
X = range(5, 30)
for depth in X:
clf = tree.DecisionTreeClassifier(max_depth=depth)
clf.fit(trainData, trainLabels)
predictedTrain = clf.predict(trainData)
predictedTest = clf.predict(testData)
trainAcc = calculateDigitsAccuracy(predictedTrain, trainLabels)
testAcc = calculateDigitsAccuracy(predictedTest, testLabels)
trainScores.append(trainAcc)
testScores.append(testAcc)
compareScores(X, trainScores, testScores, 'Max Depth', 'Accuracy')
```
## Logistic Regression
#### 10.
Logistic Regression is used when the dependent variable (target) is categorical. It uses the sigmoid hypothesis function σ(z) = 1 / (1 + e^(-z)) for prediction.
Types of logistic regression:
* Binary Logistic Regression: the categorical response has only two possible outcomes. Example: spam or not spam
* Multinomial Logistic Regression: three or more categories without ordering. Example: predicting which food is preferred (veg, non-veg, vegan)
* Ordinal Logistic Regression: three or more categories with ordering. Example: movie rating from 1 to 5
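A minimal sketch of binary logistic regression with the sigmoid hypothesis and plain batch gradient descent on toy 1-D data; the `fit_logistic` helper is an illustration under those assumptions, not sklearn's lbfgs solver used below:

```python
import numpy as np

def sigmoid(z):
    # sigmoid hypothesis: sigma(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=1000):
    """Batch gradient descent on the log-loss; a bias column is prepended to X."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / len(y)  # gradient of the mean log-loss
    return w

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
w = fit_logistic(X, y)
preds = (sigmoid(np.hstack([np.ones((4, 1)), X]) @ w) >= 0.5).astype(int)
print(preds)  # → [0 0 1 1]
```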
#### 11.
```
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(solver='lbfgs')
clf.fit(trainData, trainLabels)
predictedTrain = clf.predict(trainData)
predictedTest = clf.predict(testData)
trainAcc = calculateDigitsAccuracy(predictedTrain, trainLabels)
testAcc = calculateDigitsAccuracy(predictedTest, testLabels)
print('train data accuracy:', trainAcc)
print('test data accuracy:', testAcc)
```
# LETTER DETECTION
```
train_z = pd.read_csv('./MNIST_data/emnist-letters-train.csv', header=None).values
test_z = pd.read_csv('./MNIST_data/emnist-letters-test.csv', header=None).values
trainData_L = []
trainLabels_L = []
testData_L = []
testLabels_L = []
# Keep only the first 19 letter classes (labels < 20)
for i in range(60000):
    if train_z[i][0] < 20:
        trainData_L.append(train_z[i][1:785])
        trainLabels_L.append(train_z[i][0])
for i in range(10000):
    testData_L.append(test_z[i][1:785])
    testLabels_L.append(test_z[i][0])
print(len(trainData_L), len(testData_L))
```
An example of the letter 'e' in the dataset:
```
showImage_L(trainData_L[10])
```
## Logistic Regression
```
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(solver='lbfgs', max_iter=500, multi_class='auto')
clf.fit(trainData_L, trainLabels_L)
predictedTrain = clf.predict(trainData_L)
predictedTest = clf.predict(testData_L)
trainAcc = calculateLettersAccuracy(predictedTrain, trainLabels_L)
testAcc = calculateLettersAccuracy(predictedTest, testLabels_L)
print('train data accuracy:', trainAcc)
print('test data accuracy:', testAcc)
```
# Handwritten Digit Detection (Using a Neural Network: MLP)
```
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
mlp = MLPClassifier(hidden_layer_sizes=(200,), shuffle=True, momentum=0.9, activation='logistic', max_iter=1000, learning_rate_init=0.001)
mlp.fit(trainData, trainLabels)
from sklearn.metrics import classification_report
predicted = mlp.predict(testData)
print(classification_report(testLabels,predicted))
```

```
calculateDigitsAccuracy(predicted, testLabels)
```
# Digit Detection using Neural Network
```
import time
# Global Variables
training_size = 60000
testing_size = 200
alpha = 0.01
iterations = 2000
epochs = 15
labels = 10
# --------------------------------------------------------
def predict(weights, testData):
print(testing_size)
print(len(testData))
testData = np.hstack((np.ones((testing_size, 1)), testData))
predicted_labels = np.dot(weights, testData.T)
# signum activation function
predicted_labels = signum(predicted_labels)
predicted_labels = np.argmax(predicted_labels, axis=0)
return predicted_labels.T
def signum(x):
x[x > 0] = 1
x[x <= 0] = -1
return x
def learning(trainData, trainLabels, weights):
epochs_values = []
error_values = []
for k in range(epochs):
missclassified = 0
for t, l in zip(trainData, trainLabels):
h = np.dot(t, weights)
h = signum(h)
if h[0] != l[0]:
missclassified += 1
gradient = t * (h - l)
# reshape gradient
gradient = gradient.reshape(gradient.shape[0], -1)
weights = weights - (gradient * alpha)
error_values.append(missclassified / training_size)
epochs_values.append(k)
return weights
"""Find optimal weights for each logistic binary classifier"""
def train(trainData, trainLabels):
# add 1's as x0
trainData = np.hstack((np.ones((training_size, 1)), trainData))
# add w0 as 0 initially
all_weights = np.zeros((labels, trainData.shape[1]))
trainLabels = trainLabels.reshape((training_size, 1))
trainLabels_copy = np.copy(trainLabels)
for j in range(labels):
print("Training Classifier: ", j+1)
trainLabels = np.copy(trainLabels_copy)
# initialize all weights to zero
weights = np.zeros((trainData.shape[1], 1))
for k in range(training_size):
if trainLabels[k, 0] == j:
trainLabels[k, 0] = 1
else:
trainLabels[k, 0] = -1
weights = learning(trainData, trainLabels, weights)
all_weights[j, :] = weights.T
return all_weights
# --------------------------------------------------------
def run(trainData, trainLabels, testData, testLabels):
print("------------------------------------------------------------------------------------")
print("Running Experiment using Perceptron Learning Rule for Thresholded Unit")
print("------------------------------------------------------------------------------------")
print("Training ...")
start_time = time.perf_counter()  # time.clock() was removed in Python 3.8
all_weights = train(trainData, trainLabels)
print("Training Time: %.2f seconds" % (time.perf_counter() - start_time))
print("Weights Learned!")
print("Classifying Test Images ...")
start_time = time.perf_counter()
predicted_labels = predict(all_weights, testData)
print("Prediction Time: %.2f seconds" % (time.perf_counter() - start_time))
print("Test Images Classified!")
accuracy = calculateDigitsAccuracy(predicted_labels, testLabels) * 100
print("Accuracy: %f" % accuracy, "%")
print("---------------------\n")
# --------------------------------------------------------
def main():
# load data
trainData = []
trainLabels = []
train_z = pd.read_csv('./MNIST_data/mnist_train.csv', header=None).values
for i in range(60000):
trainData.append(train_z[i][1:785])
trainLabels.append(train_z[i][0])
testData = pd.read_csv('./MNIST_data/test_data.csv', header=None).values
testLabels = pd.read_csv('./MNIST_data/test_label.csv', header=None).values
print(len(trainData))
trainData = np.array(trainData[0:training_size])
trainLabels = np.array(trainLabels[0:training_size])
testData = np.array(testData[0:testing_size])
testLabels = np.array(testLabels[0:testing_size])
run(trainData, trainLabels, testData, testLabels)
# --------------------------------------------------------
main()
```
# Letter Detection using Neural Network
```
import time
# Global Variables
training_size = 43762
testing_size = 10000
alpha = 0.01
iterations = 2000
epochs = 15
labels = 19
# --------------------------------------------------------
def predict(weights, testData_L):
print(testing_size)
print(len(testData_L))
testData_L = np.hstack((np.ones((testing_size, 1)), testData_L))
predicted_labels = np.dot(weights, testData_L.T)
predicted_labels = signum(predicted_labels)
predicted_labels = np.argmax(predicted_labels, axis=0)
return predicted_labels.T
def signum(x):
x[x > 0] = 1
x[x <= 0] = -1
return x
def learning(trainData_L, trainLabels_L, weights):
epochs_values = []
error_values = []
for k in range(epochs):
missclassified = 0
for t, l in zip(trainData_L, trainLabels_L):
h = np.dot(t, weights)
h = signum(h)
if h[0] != l[0]:
missclassified += 1
gradient = t * (h - l)
# reshape gradient
gradient = gradient.reshape(gradient.shape[0], -1)
weights = weights - (gradient * alpha)
error_values.append(missclassified / training_size)
epochs_values.append(k)
return weights
"""Find optimal weights for each logistic binary classifier"""
def train(trainData_L, trainLabels_L):
# add 1's as x0
trainData_L = np.hstack((np.ones((training_size, 1)), trainData_L))
# add w0 as 0 initially
all_weights = np.zeros((labels, trainData_L.shape[1]))
trainLabels_L = trainLabels_L.reshape((training_size, 1))
trainLabels_L_copy = np.copy(trainLabels_L)
for j in range(labels):
print("Training Classifier: ", j+1)
trainLabels_L = np.copy(trainLabels_L_copy)
# initialize all weights to zero
weights = np.zeros((trainData_L.shape[1], 1))
for k in range(training_size):
if trainLabels_L[k, 0] == j:
trainLabels_L[k, 0] = 1
else:
trainLabels_L[k, 0] = -1
weights = learning(trainData_L, trainLabels_L, weights)
all_weights[j, :] = weights.T
return all_weights
# --------------------------------------------------------
def run(trainData_L, trainLabels_L, testData_L, testLabels_L):
print("------------------------------------------------------------------------------------")
print("Running Experiment using Perceptron Learning Rule for Thresholded Unit")
print("------------------------------------------------------------------------------------")
print("Training ...")
start_time = time.perf_counter()  # time.clock() was removed in Python 3.8
all_weights = train(trainData_L, trainLabels_L)
print("Training Time: %.2f seconds" % (time.perf_counter() - start_time))
print("Weights Learned!")
print("Classifying Test Images ...")
start_time = time.perf_counter()
predicted_labels = predict(all_weights, testData_L)
print("Prediction Time: %.2f seconds" % (time.perf_counter() - start_time))
print("Test Images Classified!")
accuracy = calculateLettersAccuracy(predicted_labels, testLabels_L) * 100
print("Accuracy: %f" % accuracy, "%")
print("---------------------\n")
# --------------------------------------------------------
def main():
# load data
train_z = pd.read_csv('./MNIST_data/emnist-letters-train.csv', header=None).values
test_z = pd.read_csv('./MNIST_data/emnist-letters-test.csv', header=None).values
trainData_L = []
testData_L = []
trainLabels_L = []
testLabels_L = []
for i in range(60000):
if(train_z[i][0] < 20):
trainData_L.append(train_z[i][1:785])
trainLabels_L.append(train_z[i][0])
for i in range(10000):
testData_L.append(test_z[i][1:785])
testLabels_L.append(test_z[i][0])
trainData_L = np.array(trainData_L[:training_size])
trainLabels_L = np.array(trainLabels_L[:training_size])
testData_L = np.array(testData_L[:testing_size])
testLabels_L = np.array(testLabels_L[:testing_size])
run(trainData_L, trainLabels_L, testData_L, testLabels_L)
# --------------------------------------------------------
main()
```
```
######################################CONSTANTS######################################
METRIC = 'calibration_error'
MODE = 'max'
HOLDOUT_RATIO = 0.1
RUNS = 100
LOG_FREQ = 100
threshold = 0.98 # threshold for x-axis cutoff
COLOR = {'non-active_no_prior': '#1f77b4',
'ts_uniform': 'red',#'#ff7f0e',
'ts_informed': 'green',
'epsilon_greedy_no_prior': 'tab:pink',
'bayesian_ucb_no_prior': 'cyan'
}
COLOR = {'non-active': '#1f77b4',
'ts': '#ff7f0e',
'epsilon_greedy': 'pink',
'bayesian_ucb': 'cyan'
}
METHOD_NAME_DICT = {'non-active': 'Non-active',
'epsilon_greedy': 'Epsilon greedy',
'bayesian_ucb': 'Bayesian UCB',
'ts': 'TS'}
TOPK_METHOD_NAME_DICT = {'non-active': 'Non-active',
'epsilon_greedy': 'Epsilon greedy',
'bayesian_ucb': 'Bayesian UCB',
'ts': 'MP-TS'}
LINEWIDTH = 13.97
######################################CONSTANTS######################################
import sys
sys.path.insert(0, '..')
import argparse
from typing import Dict, Any
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import numpy as np
from data_utils import DATASIZE_DICT, FIGURE_DIR, RESULTS_DIR
from data_utils import DATASET_NAMES, TOPK_DICT
import matplotlib
matplotlib.rcParams['font.family'] = 'serif'
RESULTS_DIR = RESULTS_DIR + 'active_learning_topk/'
def plot_topk_ece(ax: mpl.axes.Axes,
experiment_name: str,
topk: int,
eval_metric: str,
pool_size: int,
threshold: float) -> None:
benchmark = 'ts'
for method in METHOD_NAME_DICT:
metric_eval = np.load(
RESULTS_DIR + experiment_name + ('%s_%s.npy' % (eval_metric, method))).mean(axis=0)
x = np.arange(len(metric_eval)) * LOG_FREQ / pool_size
if topk == 1:
label = METHOD_NAME_DICT[method]
else:
label = TOPK_METHOD_NAME_DICT[method]
linestyle = '-'  # same style for all methods
ax.plot(x, metric_eval, linestyle, color=COLOR[method], label=label)
if method == benchmark:
if max(metric_eval) > threshold:
cutoff = list(map(lambda i: i > threshold, metric_eval.tolist()[10:])).index(True) + 10
cutoff = min(int(cutoff * 1.5), len(metric_eval) - 1)
else:
cutoff = len(metric_eval) - 1
ax.set_xlim(0, cutoff * LOG_FREQ / pool_size)
ax.set_ylim(0, 1.0)
xmin, xmax = ax.get_xlim()
step = ((xmax - xmin) / 4.0001)
ax.xaxis.set_major_formatter(ticker.PercentFormatter(xmax=1))
ax.xaxis.set_ticks(np.arange(xmin, xmax + 0.001, step))
ax.yaxis.set_ticks(np.arange(0, 1.01, 0.20))
ax.tick_params(pad=0.25, length=1.5)
return ax
def main(eval_metric: str, top1: bool, pseudocount: int, threshold: float) -> None:
fig, axes = plt.subplots(ncols=len(TOPK_DICT), dpi=300, sharey=True)
idx = 0
for dataset in TOPK_DICT:
print(dataset)
if top1:
topk = 1
else:
topk = TOPK_DICT[dataset]
experiment_name = '%s_%s_%s_top%d_runs%d_pseudocount%.2f/' % \
(dataset, METRIC, MODE, topk, RUNS, pseudocount)
plot_kwargs = {}
plot_topk_ece(axes[idx],
experiment_name,
topk,
eval_metric,
int(DATASIZE_DICT[dataset] * (1 - HOLDOUT_RATIO)),
threshold=threshold)
if topk == 1:
axes[idx].set_title(DATASET_NAMES[dataset])
else:
axes[idx].set_xlabel("#queries")
if idx > 0:
axes[idx].tick_params(left=False)
idx += 1
axes[-1].legend()
if topk == 1:
axes[0].set_ylabel("MRR, top-1")
else:
axes[0].set_ylabel("MRR, top-m")
fig.tight_layout()
fig.set_size_inches(LINEWIDTH, 2.5)
fig.subplots_adjust(bottom=0.05, wspace=0.20)
if top1:
figname = FIGURE_DIR + '%s_%s_%s_top1_pseudocount%d.pdf' % (METRIC, MODE, eval_metric, pseudocount)
else:
figname = FIGURE_DIR + '%s_%s_%s_topk_pseudocount%d.pdf' % (METRIC, MODE, eval_metric, pseudocount)
fig.savefig(figname, bbox_inches='tight', pad_inches=0)
for pseudocount in [2, 5, 10]:
for eval_metric in ['avg_num_agreement', 'mrr']:
for top1 in [True, False]:
main(eval_metric, top1, pseudocount, threshold)
```
```
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID";
# The GPU id to use, usually either "0" or "1";
os.environ["CUDA_VISIBLE_DEVICES"]="1";
import numpy as np
import tensorflow as tf
import random as rn
# The below is necessary for starting Numpy generated random numbers
# in a well-defined initial state.
np.random.seed(42)
# The below is necessary for starting core Python generated random numbers
# in a well-defined state.
rn.seed(12345)
# Force TensorFlow to use single thread.
# Multiple threads are a potential source of non-reproducible results.
# For further details, see: https://stackoverflow.com/questions/42022950/
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1,
inter_op_parallelism_threads=1)
from tensorflow.keras import backend as K
# The below tf.set_random_seed() will make random number generation
# in the TensorFlow backend have a well-defined initial state.
# For further details, see:
# https://www.tensorflow.org/api_docs/python/tf/set_random_seed
tf.set_random_seed(1234)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)
import networkx as nx
import pandas as pd
import numpy as np
import os
import random
import h5py
import matplotlib.pyplot as plt
from tqdm import tqdm
from scipy.spatial import cKDTree as KDTree
from tensorflow.keras.utils import to_categorical
import stellargraph as sg
from stellargraph.data import EdgeSplitter
from stellargraph.mapper import GraphSAGELinkGenerator
from stellargraph.layer import GraphSAGE, link_classification
from stellargraph.layer.graphsage import AttentionalAggregator
from stellargraph.data import UniformRandomWalk
from stellargraph.data import UnsupervisedSampler
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow import keras
from sklearn import preprocessing, feature_extraction, model_selection
from sklearn.linear_model import LogisticRegressionCV, LogisticRegression
from sklearn.metrics import accuracy_score
from stellargraph import globalvar
from numpy.random import seed
seed(1)
from tensorflow import set_random_seed
set_random_seed(2)
```
## Load Data
```
import numpy as np
import h5py
from collections import OrderedDict
pixel_per_um = 15.3846 # from BioRxiv paper
um_per_pixel = 1.0 / pixel_per_um
f = h5py.File("../data/osmFISH_Codeluppi_et_al/mRNA_coords_raw_counting.hdf5", 'r')
keys = list(f.keys())
pos_dic = OrderedDict()
genes = []
# Exclude bad quality data, according to the supplementary material of osmFISH paper
blacklists = ['Cnr1_Hybridization4', 'Plp1_Hybridization4', 'Vtn_Hybridization4',
'Klk6_Hybridization5', 'Lum_Hybridization9', 'Tbr1_Hybridization11']
barcodes_df = pd.DataFrame({'Gene':[], 'Centroid_X':[], 'Centroid_Y':[]})
for k in keys:
if k in blacklists:
continue
gene = k.split("_")[0]
# Correct wrong gene labels
if gene == 'Tmem6':
gene = 'Tmem2'
elif gene == 'Kcnip':
gene = 'Kcnip2'
points = np.array(f[k]) * um_per_pixel
if gene in pos_dic:
pos_dic[gene] = np.vstack((pos_dic[gene], points))
else:
pos_dic[gene] = points
genes.append(gene)
barcodes_df = barcodes_df.append(pd.DataFrame({'Gene':[gene]*points.shape[0], 'Centroid_X':points[:,0], 'Centroid_Y':points[:,1]}),ignore_index=True)
# Gene panel taglist
tagList_df = pd.DataFrame(sorted(genes),columns=['Gene'])
# Spot dataframe from Codeluppi et al.
barcodes_df.reset_index(drop=True, inplace=True)
import matplotlib.pyplot as plt
X = -barcodes_df.Centroid_X
Y = -barcodes_df.Centroid_Y
plt.figure(figsize=(10,10))
plt.scatter(X,Y,s=1)
plt.axis('scaled')
```
## Build Graph
```
# Auxiliary function to compute d_max
def plotNeighbor(barcodes_df):
barcodes_df.reset_index(drop=True, inplace=True)
kdT = KDTree(np.array([barcodes_df.Centroid_X.values,barcodes_df.Centroid_Y.values]).T)
d,i = kdT.query(np.array([barcodes_df.Centroid_X.values,barcodes_df.Centroid_Y.values]).T,k=2)
plt.hist(d[:,1],bins=200);
plt.axvline(x=np.percentile(d[:,1],97),c='r')
print(np.percentile(d[:,1],97))
d_th = np.percentile(d[:,1],97)
return d_th
# Compute d_max for generating spatial graph
d_th = plotNeighbor(barcodes_df)
# Auxiliary function to build spatial gene expression graph
def buildGraph(barcodes_df, d_th, tagList_df):
G = nx.Graph()
features =[]
barcodes_df.reset_index(drop=True, inplace=True)
gene_list = tagList_df.Gene.values
# Generate node categorical features
one_hot_encoding = dict(zip(tagList_df.Gene.unique(),to_categorical(np.arange(tagList_df.Gene.unique().shape[0]),num_classes=tagList_df.Gene.unique().shape[0]).tolist()))
barcodes_df["feature"] = barcodes_df['Gene'].map(one_hot_encoding).tolist()
features.append(np.vstack(barcodes_df.feature.values))
kdT = KDTree(np.array([barcodes_df.Centroid_X.values,barcodes_df.Centroid_Y.values]).T)
res = kdT.query_pairs(d_th)
res = [(x[0],x[1]) for x in list(res)]
# Add nodes to graph
G.add_nodes_from((barcodes_df.index.values), test=False, val=False, label=0)
# Add node features to graph
nx.set_node_attributes(G,dict(zip((barcodes_df.index.values), barcodes_df.feature)), 'feature')
# Add edges to graph
G.add_edges_from(res)
return G, barcodes_df
# Build spatial gene expression graph
G, barcodes_df = buildGraph(barcodes_df, d_th, tagList_df)
# Remove components with less than N nodes
N=3
for component in tqdm(list(nx.connected_components(G))):
if len(component)<N:
for node in component:
G.remove_node(node)
```
#### 1. Create the Stellargraph with node features.
```
G = sg.StellarGraph(G, node_features="feature")
print(G.info())
```
#### 2. Specify the other optional parameter values: root nodes, the number of walks to take per node, the length of each walk, and random seed.
```
nodes = list(G.nodes())
number_of_walks = 1
length = 2
```
#### 3. Create the UnsupervisedSampler instance with the relevant parameters passed to it.
```
unsupervised_samples = UnsupervisedSampler(G, nodes=nodes, length=length, number_of_walks=number_of_walks, seed=42)
```
#### 4. Create a node pair generator:
```
batch_size = 50
epochs = 10
num_samples = [20,10]
train_gen = GraphSAGELinkGenerator(G, batch_size, num_samples, seed=42).flow(unsupervised_samples)
```
#### 5. Create neural network model
```
layer_sizes = [50,50]
assert len(layer_sizes) == len(num_samples)
graphsage = GraphSAGE(
layer_sizes=layer_sizes, generator=train_gen, aggregator=AttentionalAggregator, bias=True, dropout=0.0, normalize="l2", kernel_regularizer='l1'
)
# Build the model and expose input and output sockets of graphsage, for node pair inputs:
x_inp, x_out = graphsage.build()
prediction = link_classification(
output_dim=1, output_act="sigmoid", edge_embedding_method='ip'
)(x_out)
import os, datetime
logdir = os.path.join("logs", datetime.datetime.now().strftime("osmFISH-%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir)
earlystop_callback = tf.keras.callbacks.EarlyStopping(monitor='loss', mode='min', verbose=1, patience=0)
model = keras.Model(inputs=x_inp, outputs=prediction)
model.compile(
optimizer=keras.optimizers.Adam(lr=0.5e-4),
loss=keras.losses.binary_crossentropy,
metrics=[keras.metrics.binary_accuracy]
)
model.summary()
```
#### 6. Train neural network model
```
import tensorflow as tf
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
history = model.fit_generator(
train_gen,
epochs=epochs,
verbose=1,
use_multiprocessing=True,
workers=6,
shuffle=True,
callbacks=[tensorboard_callback,earlystop_callback]
)
```
### Extracting node embeddings
```
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from stellargraph.mapper import GraphSAGENodeGenerator
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
x_inp_src = x_inp[0::2]
x_out_src = x_out[0]
embedding_model = keras.Model(inputs=x_inp_src, outputs=x_out_src)
# Save the model
embedding_model.save('../models/osmFISH_Codeluppi_et_al/nn_model.h5')
# Recreate the exact same model purely from the file
embedding_model = keras.models.load_model('../models/osmFISH_Codeluppi_et_al/nn_model.h5', custom_objects={'AttentionalAggregator':AttentionalAggregator})
embedding_model.compile(
optimizer=keras.optimizers.Adam(lr=0.5e-4),
loss=keras.losses.binary_crossentropy,
metrics=[keras.metrics.binary_accuracy]
)
node_gen = GraphSAGENodeGenerator(G, batch_size, num_samples, seed=42).flow(nodes)
node_embeddings = embedding_model.predict_generator(node_gen, workers=12, verbose=1)
node_embeddings.shape
np.save('../results/osmFISH_et_al/embedding_osmFISH.npy',node_embeddings)
quit()
```
# Full-time Scores in the Premier League
```
import pandas as pd
import numpy as np
df = pd.read_csv("../data/fivethirtyeight/spi_matches.csv")
# df = df[(df['league_id'] == 2412) | (df['league_id'] == 2411)]
df = df[df['league_id'] == 2411]
df = df[["season", "league_id", "team1", "team2", "score1", "score2", "date"]].dropna()
```
## Exploratory Data Analysis
```
df[["score1", "score2"]].mean()
df[df['season'] == 2020][["score1", "score2"]].mean()
```
While considerably more goals are scored at home on average, the 2020-21 season seems exempt from this advantage. The COVID-19 pandemic, which kept fans out of stadiums, likely accounts for much of the difference.
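As a quick sketch, the per-season home edge behind this claim can be computed with a groupby; the toy scores below are hypothetical stand-ins for the FiveThirtyEight data:

```python
import pandas as pd

# Hypothetical matches: season, home goals (score1), away goals (score2)
toy = pd.DataFrame({
    'season': [2019, 2019, 2020, 2020],
    'score1': [2, 3, 1, 1],
    'score2': [1, 0, 1, 2],
})
per_season = toy.groupby('season')[['score1', 'score2']].mean()
edge = per_season['score1'] - per_season['score2']  # positive = home advantage
print(edge)
```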
```
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.dpi'] = 300
from highlight_text import fig_text
body_font = "Open Sans"
watermark_font = "DejaVu Sans"
text_color = "w"
background = "#282B2F"
title_font = "DejaVu Sans"
mpl.rcParams['xtick.color'] = text_color
mpl.rcParams['ytick.color'] = text_color
mpl.rcParams['text.color'] = text_color
mpl.rcParams['axes.edgecolor'] = text_color
mpl.rcParams['xtick.labelsize'] = 5
mpl.rcParams['ytick.labelsize'] = 6
from scipy.stats import poisson
fig, ax = plt.subplots(tight_layout=True)
fig.set_facecolor(background)
ax.patch.set_alpha(0)
max_goals = 8
_, _, _ = ax.hist(
df[df['season'] != 2020][["score1", "score2"]].values, label=["Home", "Away"],
bins=np.arange(0, max_goals)-.5, density=True,
color=['#016DBA', '#B82A2A'], edgecolor='w', linewidth=0.25, alpha=1)
home_poisson = poisson.pmf(range(max_goals), df[df['season'] != 2020]["score1"].mean())
away_poisson = poisson.pmf(range(max_goals), df[df['season'] != 2020]["score2"].mean())
ax.plot(
[i for i in range(0, max_goals)],
home_poisson,
linestyle="-",
color="#01497c",
label="Home Poisson",
)
ax.plot(
[i for i in range(0, max_goals)],
away_poisson,
linestyle="-",
color="#902121",
label="Away Poisson",
)
ax.set_xticks(np.arange(0, max_goals), minor=False)
ax.set_xlabel(
"Goals", fontfamily=title_font,
fontweight="bold", fontsize=8, color=text_color)
ax.set_ylabel(
"Proportion of matches", fontfamily=title_font,
fontweight="bold", fontsize=8, color=text_color)
fig_text(
x=0.1, y=1.025,
s="Number of Goals Scored Per Match at <Home> and <Away>.",
highlight_textprops=[
{"color": '#016DBA'},
{"color": '#B82A2A'},
],
fontweight="regular", fontsize=12, fontfamily=title_font,
color=text_color, alpha=1)
fig_text(
x=0.8, y=-0.02,
s="Created by <Paul Fournier>",
highlight_textprops=[{"fontstyle": "italic"}],
fontsize=6, fontfamily=watermark_font,
color=text_color)
plt.show()
fig, ax = plt.subplots(tight_layout=True)
fig.set_facecolor(background)
ax.patch.set_alpha(0)
max_goals = 8
_, _, _ = ax.hist(
df[df['season'] == 2020][["score1", "score2"]].values, label=["Home", "Away"],
bins=np.arange(0, max_goals)-.5, density=True,
color=['#016DBA', '#B82A2A'], edgecolor='w', linewidth=0.25, alpha=1)
home_poisson = poisson.pmf(range(max_goals), df[df['season'] == 2020]["score1"].mean())
away_poisson = poisson.pmf(range(max_goals), df[df['season'] == 2020]["score2"].mean())
ax.plot(
[i for i in range(0, max_goals)],
home_poisson,
linestyle="-",
color="#01497c",
label="Home Poisson",
)
ax.plot(
[i for i in range(0, max_goals)],
away_poisson,
linestyle="-",
color="#902121",
label="Away Poisson",
)
ax.set_xticks(np.arange(0, max_goals), minor=False)
ax.set_xlabel(
"Goals", fontfamily=title_font,
fontweight="bold", fontsize=8, color=text_color)
ax.set_ylabel(
"Proportion of matches", fontfamily=title_font,
fontweight="bold", fontsize=8, color=text_color)
fig_text(x=0.1, y=1.025,
s="Goals Scored at <Home> and <Away> during the 2020-21 Season.",
highlight_textprops=[
{"color": '#016DBA'},
{"color": '#B82A2A'},
],
fontweight="regular", fontsize=12, fontfamily=title_font,
color=text_color, alpha=1)
fig_text(
x=0.8, y=-0.02,
s="Created by <Paul Fournier>",
highlight_textprops=[{"fontstyle": "italic"}],
fontsize=6, fontfamily=watermark_font,
color=text_color)
plt.show()
mpl.rcParams['xtick.labelsize'] = 6
mpl.rcParams['ytick.labelsize'] = 6
fig, ax = plt.subplots(tight_layout=True)
fig.set_facecolor(background)
ax.patch.set_alpha(0)
heat = np.zeros((7, 7))
for i in range(7):
for j in range(7):
heat[6 - i, j] = df[(df["score1"] == i) & (df["score2"] == j)].shape[0]
for i in range(7):
for j in range(7):
text = ax.text(j, i, np.round(heat[i, j]/np.sum(heat), 2),
ha="center", va="center")
plt.imshow(heat, cmap='magma_r', interpolation='nearest')
ax.set_xticks(np.arange(0, 7))
ax.set_yticks(np.arange(0, 7))
ax.set_xticklabels(np.arange(0, 7))
ax.set_yticklabels(np.flip(np.arange(0, 7)))
ax.set_xlabel(
"Away Goals", fontfamily=title_font,
fontweight="bold", fontsize=7, color=text_color)
ax.set_ylabel(
"Home Goals", fontfamily=title_font,
fontweight="bold", fontsize=7, color=text_color)
fig_text(x=0.22, y=1.04,
s="Distribution of historical scorelines",
fontweight="regular", fontsize=12,
fontfamily=title_font, color=text_color, alpha=1)
fig_text(
x=0.6, y=-0.02,
s="Created by <Paul Fournier>",
highlight_textprops=[{"fontstyle": "italic"}],
fontsize=6, fontfamily=watermark_font,
color=text_color)
plt.show()
```
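As a design note, the nested loops used above to fill `heat` can be replaced by a single `pd.crosstab`; a sketch with toy scores (not the notebook's `df`):

```python
import pandas as pd

toy = pd.DataFrame({'score1': [0, 1, 1, 2], 'score2': [0, 0, 1, 0]})
# rows = home goals, columns = away goals, cells = match counts
heat = pd.crosstab(toy['score1'], toy['score2'])
print(heat.values)
```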
```
!pip install exetera
# Copyright 2020 KCL-BMEIS - King's College London
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from datetime import datetime, timezone
import time
from collections import defaultdict
import numpy as np
import numba
import h5py
from exeteracovid.algorithms.age_from_year_of_birth import calculate_age_from_year_of_birth_fast
from exeteracovid.algorithms.weight_height_bmi import weight_height_bmi_fast_1
from exeteracovid.algorithms.inconsistent_symptoms import check_inconsistent_symptoms_1
from exeteracovid.algorithms.temperature import validate_temperature_1
from exeteracovid.algorithms.combined_healthcare_worker import combined_hcw_with_contact
from exetera.core import persistence
from exetera.core.persistence import DataStore
from exetera.core.session import Session
from exetera.core import readerwriter as rw
from exetera.core import utils
def log(*a, **kwa):
print(*a, **kwa)
def postprocess(dataset, destination, timestamp=None, flags=None):
if flags is None:
flags = set()
do_daily_asmts = 'daily' in flags
has_patients = 'patients' in dataset.keys()
has_assessments = 'assessments' in dataset.keys()
has_tests = 'tests' in dataset.keys()
has_diet = 'diet' in dataset.keys()
sort_enabled = lambda x: True
process_enabled = lambda x: True
sort_patients = sort_enabled(flags) and True
sort_assessments = sort_enabled(flags) and True
sort_tests = sort_enabled(flags) and True
sort_diet = sort_enabled(flags) and True
make_assessment_patient_id_fkey = process_enabled(flags) and True
year_from_age = process_enabled(flags) and True
clean_weight_height_bmi = process_enabled(flags) and True
health_worker_with_contact = process_enabled(flags) and True
clean_temperatures = process_enabled(flags) and True
check_symptoms = process_enabled(flags) and True
create_daily = process_enabled(flags) and do_daily_asmts
make_patient_level_assessment_metrics = process_enabled(flags) and True
make_patient_level_daily_assessment_metrics = process_enabled(flags) and do_daily_asmts
make_new_test_level_metrics = process_enabled(flags) and True
make_diet_level_metrics = True
make_healthy_diet_index = True
ds = DataStore(timestamp=timestamp)
s = Session()
# patients ================================================================
sorted_patients_src = None
if has_patients:
patients_src = dataset['patients']
write_mode = 'write'
if 'patients' not in destination.keys():
patients_dest = ds.get_or_create_group(destination, 'patients')
sorted_patients_src = patients_dest
# Patient sort
# ============
if sort_patients:
duplicate_filter = \
persistence.filter_duplicate_fields(ds.get_reader(patients_src['id'])[:])
for k in patients_src.keys():
t0 = time.time()
r = ds.get_reader(patients_src[k])
w = r.get_writer(patients_dest, k)
ds.apply_filter(duplicate_filter, r, w)
print(f"'{k}' filtered in {time.time() - t0}s")
print(np.count_nonzero(duplicate_filter == True),
np.count_nonzero(duplicate_filter == False))
sort_keys = ('id',)
ds.sort_on(
patients_dest, patients_dest, sort_keys, write_mode='overwrite')
# Patient processing
# ==================
if year_from_age:
log("year of birth -> age; 18 to 90 filter")
t0 = time.time()
age = ds.get_numeric_writer(patients_dest, 'age', 'uint32',
write_mode)
age_filter = ds.get_numeric_writer(patients_dest, 'age_filter',
'bool', write_mode)
age_16_to_90 = ds.get_numeric_writer(patients_dest, '16_to_90_years',
'bool', write_mode)
print('year_of_birth:', patients_dest['year_of_birth'])
for k in patients_dest['year_of_birth'].attrs.keys():
print(k, patients_dest['year_of_birth'].attrs[k])
calculate_age_from_year_of_birth_fast(
ds, 16, 90,
patients_dest['year_of_birth'], patients_dest['year_of_birth_valid'],
age, age_filter, age_16_to_90,
2020)
log(f"completed in {time.time() - t0}")
print('age_filter count:', np.sum(patients_dest['age_filter']['values'][:]))
print('16_to_90_years count:', np.sum(patients_dest['16_to_90_years']['values'][:]))
if clean_weight_height_bmi:
log("height / weight / bmi; standard range filters")
t0 = time.time()
weights_clean = ds.get_numeric_writer(patients_dest, 'weight_kg_clean',
'float32', write_mode)
weights_filter = ds.get_numeric_writer(patients_dest, '40_to_200_kg',
'bool', write_mode)
heights_clean = ds.get_numeric_writer(patients_dest, 'height_cm_clean',
'float32', write_mode)
heights_filter = ds.get_numeric_writer(patients_dest, '110_to_220_cm',
'bool', write_mode)
bmis_clean = ds.get_numeric_writer(patients_dest, 'bmi_clean',
'float32', write_mode)
bmis_filter = ds.get_numeric_writer(patients_dest, '15_to_55_bmi',
'bool', write_mode)
weight_height_bmi_fast_1(ds, 40, 200, 110, 220, 15, 55,
None, None, None, None,
patients_dest['weight_kg'], patients_dest['weight_kg_valid'],
patients_dest['height_cm'], patients_dest['height_cm_valid'],
patients_dest['bmi'], patients_dest['bmi_valid'],
weights_clean, weights_filter, None,
heights_clean, heights_filter, None,
bmis_clean, bmis_filter, None)
log(f"completed in {time.time() - t0}")
if health_worker_with_contact:
with utils.Timer("health_worker_with_contact field"):
#writer = ds.get_categorical_writer(patients_dest, 'health_worker_with_contact', 'int8')
combined_hcw_with_contact(ds,
ds.get_reader(patients_dest['healthcare_professional']),
ds.get_reader(patients_dest['contact_health_worker']),
ds.get_reader(patients_dest['is_carer_for_community']),
patients_dest, 'health_worker_with_contact')
# assessments =============================================================
sorted_assessments_src = None
if has_assessments:
assessments_src = dataset['assessments']
if 'assessments' not in destination.keys():
assessments_dest = ds.get_or_create_group(destination, 'assessments')
sorted_assessments_src = assessments_dest
if sort_assessments:
sort_keys = ('patient_id', 'created_at')
with utils.Timer("sorting assessments"):
ds.sort_on(
assessments_src, assessments_dest, sort_keys)
if has_patients:
if make_assessment_patient_id_fkey:
print("creating 'assessment_patient_id_fkey' foreign key index for 'patient_id'")
t0 = time.time()
patient_ids = ds.get_reader(sorted_patients_src['id'])
assessment_patient_ids =\
ds.get_reader(sorted_assessments_src['patient_id'])
assessment_patient_id_fkey =\
ds.get_numeric_writer(assessments_dest, 'assessment_patient_id_fkey', 'int64')
ds.get_index(patient_ids, assessment_patient_ids, assessment_patient_id_fkey)
print(f"completed in {time.time() - t0}s")
if clean_temperatures:
print("clean temperatures")
t0 = time.time()
temps = ds.get_reader(sorted_assessments_src['temperature'])
temp_units = ds.get_reader(sorted_assessments_src['temperature_unit'])
temps_valid = ds.get_reader(sorted_assessments_src['temperature_valid'])
dest_temps = temps.get_writer(assessments_dest, 'temperature_c_clean', write_mode)
dest_temps_valid =\
temps_valid.get_writer(assessments_dest, 'temperature_35_to_42_inclusive', write_mode)
dest_temps_modified =\
temps_valid.get_writer(assessments_dest, 'temperature_modified', write_mode)
validate_temperature_1(35.0, 42.0,
temps, temp_units, temps_valid,
dest_temps, dest_temps_valid, dest_temps_modified)
print(f"temperature cleaning done in {time.time() - t0}")
if check_symptoms:
print('check inconsistent health_status')
t0 = time.time()
check_inconsistent_symptoms_1(ds, sorted_assessments_src, assessments_dest)
print(time.time() - t0)
# tests ===================================================================
if has_tests:
if sort_tests:
tests_src = dataset['tests']
tests_dest = ds.get_or_create_group(destination, 'tests')
sort_keys = ('patient_id', 'created_at')
ds.sort_on(tests_src, tests_dest, sort_keys)
# diet ====================================================================
if has_diet:
diet_src = dataset['diet']
if 'diet' not in destination.keys():
diet_dest = ds.get_or_create_group(destination, 'diet')
sorted_diet_src = diet_dest
if sort_diet:
sort_keys = ('patient_id', 'display_name', 'id')
ds.sort_on(diet_src, diet_dest, sort_keys)
if has_assessments:
if do_daily_asmts:
daily_assessments_dest = ds.get_or_create_group(destination, 'daily_assessments')
# post process patients
# TODO: need a transaction table
print(patients_src.keys())
print(dataset['assessments'].keys())
print(dataset['tests'].keys())
# write_mode = 'overwrite'
write_mode = 'write'
# Daily assessments
# =================
if has_assessments:
if create_daily:
print("generate daily assessments")
patient_ids = ds.get_reader(sorted_assessments_src['patient_id'])
created_at_days = ds.get_reader(sorted_assessments_src['created_at_day'])
raw_created_at_days = created_at_days[:]
if 'assessment_patient_id_fkey' in assessments_src.keys():
patient_id_index = assessments_src['assessment_patient_id_fkey']
else:
patient_id_index = assessments_dest['assessment_patient_id_fkey']
patient_id_indices = ds.get_reader(patient_id_index)
raw_patient_id_indices = patient_id_indices[:]
print("Calculating patient id index spans")
t0 = time.time()
patient_id_index_spans = ds.get_spans(fields=(raw_patient_id_indices,
raw_created_at_days))
print(f"Calculated {len(patient_id_index_spans)-1} spans in {time.time() - t0}s")
print("Applying spans to 'health_status'")
t0 = time.time()
default_behavour_overrides = {
'id': ds.apply_spans_last,
'patient_id': ds.apply_spans_last,
'patient_index': ds.apply_spans_last,
'created_at': ds.apply_spans_last,
'created_at_day': ds.apply_spans_last,
'updated_at': ds.apply_spans_last,
'updated_at_day': ds.apply_spans_last,
'version': ds.apply_spans_max,
'country_code': ds.apply_spans_first,
'date_test_occurred': None,
'date_test_occurred_guess': None,
'date_test_occurred_day': None,
'date_test_occurred_set': None,
}
for k in sorted_assessments_src.keys():
t1 = time.time()
reader = ds.get_reader(sorted_assessments_src[k])
if k in default_behavour_overrides:
apply_span_fn = default_behavour_overrides[k]
if apply_span_fn is not None:
apply_span_fn(patient_id_index_spans, reader,
reader.get_writer(daily_assessments_dest, k))
print(f" Field {k} aggregated in {time.time() - t1}s")
else:
print(f" Skipping field {k}")
else:
if isinstance(reader, rw.CategoricalReader):
ds.apply_spans_max(
patient_id_index_spans, reader,
reader.get_writer(daily_assessments_dest, k))
print(f" Field {k} aggregated in {time.time() - t1}s")
elif isinstance(reader, rw.IndexedStringReader):
ds.apply_spans_concat(
patient_id_index_spans, reader,
reader.get_writer(daily_assessments_dest, k))
print(f" Field {k} aggregated in {time.time() - t1}s")
elif isinstance(reader, rw.NumericReader):
ds.apply_spans_max(
patient_id_index_spans, reader,
reader.get_writer(daily_assessments_dest, k))
print(f" Field {k} aggregated in {time.time() - t1}s")
else:
print(f" No function for {k}")
print(f"apply_spans completed in {time.time() - t0}s")
if has_patients and has_assessments:
if make_patient_level_assessment_metrics:
if 'assessment_patient_id_fkey' in assessments_dest:
src = assessments_dest['assessment_patient_id_fkey']
else:
src = assessments_src['assessment_patient_id_fkey']
assessment_patient_id_fkey = ds.get_reader(src)
# generate spans from the assessment-space patient_id foreign key
spans = ds.get_spans(field=assessment_patient_id_fkey)
ids = ds.get_reader(patients_dest['id'])
print('calculate assessment counts per patient')
t0 = time.time()
writer = ds.get_numeric_writer(patients_dest, 'assessment_count', 'uint32')
aggregated_counts = ds.aggregate_count(fkey_index_spans=spans)
ds.join(ids, assessment_patient_id_fkey, aggregated_counts, writer, spans)
print(f"calculated assessment counts per patient in {time.time() - t0}")
print('calculate first assessment days per patient')
t0 = time.time()
reader = ds.get_reader(sorted_assessments_src['created_at_day'])
writer = ds.get_fixed_string_writer(patients_dest, 'first_assessment_day', 10)
aggregated_counts = ds.aggregate_first(fkey_index_spans=spans, reader=reader)
ds.join(ids, assessment_patient_id_fkey, aggregated_counts, writer, spans)
print(f"calculated first assessment days per patient in {time.time() - t0}")
print('calculate last assessment days per patient')
t0 = time.time()
reader = ds.get_reader(sorted_assessments_src['created_at_day'])
writer = ds.get_fixed_string_writer(patients_dest, 'last_assessment_day', 10)
aggregated_counts = ds.aggregate_last(fkey_index_spans=spans, reader=reader)
ds.join(ids, assessment_patient_id_fkey, aggregated_counts, writer, spans)
print(f"calculated last assessment days per patient in {time.time() - t0}")
print('calculate maximum assessment test result per patient')
t0 = time.time()
reader = ds.get_reader(sorted_assessments_src['tested_covid_positive'])
writer = reader.get_writer(patients_dest, 'max_assessment_test_result')
max_result_value = ds.aggregate_max(fkey_index_spans=spans, reader=reader)
ds.join(ids, assessment_patient_id_fkey, max_result_value, writer, spans)
print(f"calculated maximum assessment test result in {time.time() - t0}")
if has_assessments and do_daily_asmts and make_patient_level_daily_assessment_metrics:
print("creating 'daily_assessment_patient_id_fkey' foreign key index for 'patient_id'")
t0 = time.time()
patient_ids = ds.get_reader(sorted_patients_src['id'])
daily_assessment_patient_ids =\
ds.get_reader(daily_assessments_dest['patient_id'])
daily_assessment_patient_id_fkey =\
ds.get_numeric_writer(daily_assessments_dest, 'daily_assessment_patient_id_fkey',
'int64')
ds.get_index(patient_ids, daily_assessment_patient_ids,
daily_assessment_patient_id_fkey)
print(f"completed in {time.time() - t0}s")
spans = ds.get_spans(
field=ds.get_reader(daily_assessments_dest['daily_assessment_patient_id_fkey']))
print('calculate daily assessment counts per patient')
t0 = time.time()
writer = ds.get_numeric_writer(patients_dest, 'daily_assessment_count', 'uint32')
aggregated_counts = ds.aggregate_count(fkey_index_spans=spans)
daily_assessment_patient_id_fkey =\
ds.get_reader(daily_assessments_dest['daily_assessment_patient_id_fkey'])
ds.join(ids, daily_assessment_patient_id_fkey, aggregated_counts, writer, spans)
print(f"calculated daily assessment counts per patient in {time.time() - t0}")
# TODO - new test count per patient:
if has_tests and make_new_test_level_metrics:
print("creating 'test_patient_id_fkey' foreign key index for 'patient_id'")
t0 = time.time()
patient_ids = ds.get_reader(sorted_patients_src['id'])
test_patient_ids = ds.get_reader(tests_dest['patient_id'])
test_patient_id_fkey = ds.get_numeric_writer(tests_dest, 'test_patient_id_fkey',
'int64')
ds.get_index(patient_ids, test_patient_ids, test_patient_id_fkey)
test_patient_id_fkey = ds.get_reader(tests_dest['test_patient_id_fkey'])
spans = ds.get_spans(field=test_patient_id_fkey)
print(f"completed in {time.time() - t0}s")
print('calculate test_counts per patient')
t0 = time.time()
writer = ds.get_numeric_writer(patients_dest, 'test_count', 'uint32')
aggregated_counts = ds.aggregate_count(fkey_index_spans=spans)
ds.join(ids, test_patient_id_fkey, aggregated_counts, writer, spans)
print(f"calculated test counts per patient in {time.time() - t0}")
print('calculate test_result per patient')
t0 = time.time()
test_results = ds.get_reader(tests_dest['result'])
writer = test_results.get_writer(patients_dest, 'max_test_result')
aggregated_results = ds.aggregate_max(fkey_index_spans=spans, reader=test_results)
ds.join(ids, test_patient_id_fkey, aggregated_results, writer, spans)
print(f"calculated max_test_result per patient in {time.time() - t0}")
if has_diet and make_diet_level_metrics:
with utils.Timer("Making patient-level diet questions count", new_line=True):
d_pids_ = s.get(diet_dest['patient_id']).data[:]
d_pid_spans = s.get_spans(d_pids_)
d_distinct_pids = s.apply_spans_first(d_pid_spans, d_pids_)
d_pid_counts = s.apply_spans_count(d_pid_spans)
p_diet_counts = s.create_numeric(patients_dest, 'diet_counts', 'int32')
s.merge_left(left_on=s.get(patients_dest['id']).data[:], right_on=d_distinct_pids,
right_fields=(d_pid_counts,), right_writers=(p_diet_counts,))
generate_daily = False  # set to True to generate aggregated daily assessments
input_filename = 'input.hdf5'  # placeholder: file name of the imported dataset
output_filename = 'output.hdf5'  # placeholder: file name to write the processed dataset to
timestamp = str(datetime.now(timezone.utc)) # Override with a specific timestamp if required
flags = set()
if generate_daily is True:
flags.add('daily')
with h5py.File(input_filename, 'r') as ds:
with h5py.File(output_filename, 'w') as ts:
postprocess(ds, ts, timestamp, flags)
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Model Development with Custom Weights
This example shows how to retrain a model with custom weights, fine-tune it with quantization, and then deploy it to run on an FPGA. Only Windows is supported. We use TensorFlow and Keras to build our model. We are going to use transfer learning, with ResNet50 as a featurizer: we drop the last layer of ResNet50 and add our own classification layer using Keras.
The custom weights were trained on ImageNet with ResNet50. We will use the Kaggle Cats and Dogs dataset to retrain and fine-tune the model. The dataset can be downloaded [here](https://www.microsoft.com/en-us/download/details.aspx?id=54765). Download the zip and extract it to a directory named 'catsanddogs' under your user directory ("~/catsanddogs").
Please set up your environment as described in the [quick start](project-brainwave-quickstart.ipynb).
```
import os
import sys
import tensorflow as tf
import numpy as np
from keras import backend as K
```
## Setup Environment
After you train your model in float32, you'll write the weights to a place on disk. We also need a location to store the models that get downloaded.
```
custom_weights_dir = os.path.expanduser("~/custom-weights")
saved_model_dir = os.path.expanduser("~/models")
```
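Note that the cell above only defines the paths; nothing creates the directories yet. A minimal sketch, assuming write access to the home directory, creates them up front:

```python
import os

# Same placeholder paths as in the cell above; exist_ok makes the call a
# no-op when the directory is already present.
custom_weights_dir = os.path.expanduser("~/custom-weights")
saved_model_dir = os.path.expanduser("~/models")

for d in (custom_weights_dir, saved_model_dir):
    os.makedirs(d, exist_ok=True)
```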
## Prepare Data
Load the files we are going to use for training and testing. By default this notebook uses only a very small subset of the Cats and Dogs dataset. That makes it run relatively quickly.
```
import glob
import imghdr
datadir = os.path.expanduser("~/catsanddogs")
cat_files = glob.glob(os.path.join(datadir, 'PetImages', 'Cat', '*.jpg'))
dog_files = glob.glob(os.path.join(datadir, 'PetImages', 'Dog', '*.jpg'))
# Limit the data set to make the notebook execute quickly.
cat_files = cat_files[:64]
dog_files = dog_files[:64]
# The data set has a few images that are not jpeg. Remove them.
cat_files = [f for f in cat_files if imghdr.what(f) == 'jpeg']
dog_files = [f for f in dog_files if imghdr.what(f) == 'jpeg']
if(not len(cat_files) or not len(dog_files)):
print("Please download the Kaggle Cats and Dogs dataset from https://www.microsoft.com/en-us/download/details.aspx?id=54765 and extract the zip to " + datadir)
raise ValueError("Data not found")
else:
print(cat_files[0])
print(dog_files[0])
# Construct a numpy array as labels
image_paths = cat_files + dog_files
total_files = len(cat_files) + len(dog_files)
labels = np.zeros(total_files)
labels[len(cat_files):] = 1
# Split images data as training data and test data
from sklearn.model_selection import train_test_split
onehot_labels = np.array([[0,1] if i else [1,0] for i in labels])
img_train, img_test, label_train, label_test = train_test_split(image_paths, onehot_labels, random_state=42, shuffle=True)
print(len(img_train), len(img_test), label_train.shape, label_test.shape)
```
## Construct Model
We use ResNet50 as the featurizer and build our own classifier using Keras layers. We train the featurizer and the classifier as one model. The weights trained on ImageNet are used as the starting point for retraining our featurizer; they are loaded from TensorFlow checkpoint files.
Before passing image dataset to the ResNet50 featurizer, we need to preprocess the input file to get it into the form expected by ResNet50. ResNet50 expects float tensors representing the images in BGR, channel last order. We've provided a default implementation of the preprocessing that you can use.
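As a rough, hypothetical sketch of the kind of work that preprocessing does (channel reversal plus per-channel mean subtraction; the ImageNet BGR means below are an assumption for illustration, not values taken from the library):

```python
import numpy as np

# Assumed ImageNet per-channel BGR means, commonly used for ResNet50.
IMAGENET_BGR_MEANS = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def to_resnet_input(rgb_image):
    """Reverse RGB to BGR (channel-last) and subtract per-channel means."""
    bgr = rgb_image[..., ::-1].astype(np.float32)  # swap RGB -> BGR
    return bgr - IMAGENET_BGR_MEANS

out = to_resnet_input(np.zeros((224, 224, 3), dtype=np.uint8))
print(out.shape, out.dtype)  # (224, 224, 3) float32
```

The actual transformation used in this notebook is the one implemented by `utils.preprocess_array` below.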
```
import azureml.contrib.brainwave.models.utils as utils
def preprocess_images():
# Convert images to 3D tensors [width,height,channel] - channels are in BGR order.
in_images = tf.placeholder(tf.string)
image_tensors = utils.preprocess_array(in_images)
return in_images, image_tensors
```
We use Keras layer APIs to construct the classifier. Because we're using the tensorflow backend, we can train this classifier in one session with our Resnet50 model.
```
def construct_classifier(in_tensor):
from keras.layers import Dropout, Dense, Flatten
K.set_session(tf.get_default_session())
FC_SIZE = 1024
NUM_CLASSES = 2
x = Dropout(0.2, input_shape=(1, 1, 2048,))(in_tensor)
x = Dense(FC_SIZE, activation='relu', input_dim=(1, 1, 2048,))(x)
x = Flatten()(x)
preds = Dense(NUM_CLASSES, activation='softmax', input_dim=FC_SIZE, name='classifier_output')(x)
return preds
```
Now that every component of the model is defined, we can construct it. Constructing the model with the Project Brainwave models takes two steps: first we import the graph definition, then we restore the weights of the model into a TensorFlow session. Because the quantized graph definition and the float32 graph definition share the same node names, we can initially train the weights in float32, and then reload them with the quantized operations (which take longer) to fine-tune the model.
```
def construct_model(quantized, starting_weights_directory = None):
from azureml.contrib.brainwave.models import Resnet50, QuantizedResnet50
# Convert images to 3D tensors [width,height,channel]
in_images, image_tensors = preprocess_images()
# Construct featurizer using quantized or unquantized ResNet50 model
if not quantized:
featurizer = Resnet50(saved_model_dir)
else:
featurizer = QuantizedResnet50(saved_model_dir, custom_weights_directory = starting_weights_directory)
features = featurizer.import_graph_def(input_tensor=image_tensors)
# Construct classifier
preds = construct_classifier(features)
# Initialize weights
sess = tf.get_default_session()
tf.global_variables_initializer().run()
featurizer.restore_weights(sess)
return in_images, image_tensors, features, preds, featurizer
```
## Train Model
First we train the model with custom weights but without quantization. Training is done with native float precision (32-bit floats). We load the training data set and train in batches for up to 10 epochs. When performance reaches the desired level or starts to degrade, we stop training and save the weights as TensorFlow checkpoint files.
```
def read_files(files):
""" Read files to array"""
contents = []
for path in files:
with open(path, 'rb') as f:
contents.append(f.read())
return contents
def train_model(preds, in_images, img_train, label_train, is_retrain = False, train_epoch = 10):
""" training model """
from keras.objectives import binary_crossentropy
from tqdm import tqdm
learning_rate = 0.001 if is_retrain else 0.01
# Specify the loss function
in_labels = tf.placeholder(tf.float32, shape=(None, 2))
cross_entropy = tf.reduce_mean(binary_crossentropy(in_labels, preds))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
def chunks(a, b, n):
"""Yield successive n-sized chunks from a and b."""
if (len(a) != len(b)):
print("a and b are not equal in chunks(a,b,n)")
raise ValueError("Parameter error")
for i in range(0, len(a), n):
yield a[i:i + n], b[i:i + n]
chunk_size = 16
chunk_num = len(label_train) / chunk_size
sess = tf.get_default_session()
for epoch in range(train_epoch):
avg_loss = 0
for img_chunk, label_chunk in tqdm(chunks(img_train, label_train, chunk_size)):
contents = read_files(img_chunk)
_, loss = sess.run([optimizer, cross_entropy],
feed_dict={in_images: contents,
in_labels: label_chunk,
K.learning_phase(): 1})
avg_loss += loss / chunk_num
print("Epoch:", (epoch + 1), "loss = ", "{:.3f}".format(avg_loss))
# Reach desired performance
if (avg_loss < 0.001):
break
def test_model(preds, in_images, img_test, label_test):
"""Test the model"""
from keras.metrics import categorical_accuracy
in_labels = tf.placeholder(tf.float32, shape=(None, 2))
accuracy = tf.reduce_mean(categorical_accuracy(in_labels, preds))
contents = read_files(img_test)
accuracy = accuracy.eval(feed_dict={in_images: contents,
in_labels: label_test,
K.learning_phase(): 0})
return accuracy
# Launch the training
tf.reset_default_graph()
sess = tf.Session(graph=tf.get_default_graph())
with sess.as_default():
in_images, image_tensors, features, preds, featurizer = construct_model(quantized=False)
train_model(preds, in_images, img_train, label_train, is_retrain=False, train_epoch=10)
accuracy = test_model(preds, in_images, img_test, label_test)
print("Accuracy:", accuracy)
featurizer.save_weights(custom_weights_dir + "/rn50", tf.get_default_session())
```
## Test Model
After training, we evaluate the trained model's accuracy on the test dataset with quantization enabled, so that we know how the model will perform once deployed on the FPGA.
```
tf.reset_default_graph()
sess = tf.Session(graph=tf.get_default_graph())
with sess.as_default():
print("Testing trained model with quantization")
in_images, image_tensors, features, preds, quantized_featurizer = construct_model(quantized=True, starting_weights_directory=custom_weights_dir)
accuracy = test_model(preds, in_images, img_test, label_test)
print("Accuracy:", accuracy)
```
## Fine-Tune Model
Sometimes, the model's accuracy can drop significantly after quantization. In those cases, we need to retrain the model with quantization enabled to recover accuracy.
```
if (accuracy < 0.93):
with sess.as_default():
print("Fine-tuning model with quantization")
train_model(preds, in_images, img_train, label_train, is_retrain=True, train_epoch=10)
accuracy = test_model(preds, in_images, img_test, label_test)
print("Accuracy:", accuracy)
```
## Service Definition
As in the QuickStart notebook, our service definition pipeline consists of three stages.
```
from azureml.contrib.brainwave.pipeline import ModelDefinition, TensorflowStage, BrainWaveStage
model_def_path = os.path.join(saved_model_dir, 'model_def.zip')
model_def = ModelDefinition()
model_def.pipeline.append(TensorflowStage(sess, in_images, image_tensors))
model_def.pipeline.append(BrainWaveStage(sess, quantized_featurizer))
model_def.pipeline.append(TensorflowStage(sess, features, preds))
model_def.save(model_def_path)
print(model_def_path)
```
## Deploy
Go to our [GitHub repo](https://aka.ms/aml-real-time-ai) "docs" folder to learn how to create a Model Management Account and find the required information below.
```
from azureml.core import Workspace
ws = Workspace.from_config()
```
The first time the code below runs, it will create a new service running your model. If you want to change the model, you can make changes above in this notebook and save a new service definition. Then this code will update the running service in place to run the new model.
```
from azureml.core.model import Model
from azureml.core.image import Image
from azureml.core.webservice import Webservice
from azureml.contrib.brainwave import BrainwaveWebservice, BrainwaveImage
from azureml.exceptions import WebserviceException
model_name = "catsanddogs-resnet50-model"
image_name = "catsanddogs-resnet50-image"
service_name = "modelbuild-service"
registered_model = Model.register(ws, model_def_path, model_name)
image_config = BrainwaveImage.image_configuration()
deployment_config = BrainwaveWebservice.deploy_configuration()
try:
service = Webservice(ws, service_name)
service.delete()
service = Webservice.deploy_from_model(ws, service_name, [registered_model], image_config, deployment_config)
service.wait_for_deployment(True)
except WebserviceException:
service = Webservice.deploy_from_model(ws, service_name, [registered_model], image_config, deployment_config)
service.wait_for_deployment(True)
```
The service is now running in Azure and ready to serve requests. We can check the address and port.
```
print(service.ipAddress + ':' + str(service.port))
```
## Client
There is a simple test client at amlrealtimeai.PredictionClient which can be used for testing. We'll use this client to score an image with our new service.
```
from azureml.contrib.brainwave.client import PredictionClient
client = PredictionClient(service.ipAddress, service.port)
```
You can adapt the client [code](../../pythonlib/amlrealtimeai/client.py) to meet your needs. There is also an example C# [client](../../sample-clients/csharp).
The service provides an API that is compatible with TensorFlow Serving. There are instructions to download a sample client [here](https://www.tensorflow.org/serving/setup).
## Request
Let's see how our service does on a few images. It may get a few wrong.
```
# Specify an image to classify
print('CATS')
for image_file in cat_files[:8]:
results = client.score_image(image_file)
result = 'CORRECT ' if results[0] > results[1] else 'WRONG '
print(result + str(results))
print('DOGS')
for image_file in dog_files[:8]:
results = client.score_image(image_file)
result = 'CORRECT ' if results[1] > results[0] else 'WRONG '
print(result + str(results))
```
## Cleanup
Run the cell below to delete your service.
```
service.delete()
```
```
import draftfast
import pandas as pd
df=pd.read_csv('full_old_NFL.csv')
del df['Unnamed: 0']
from draftfast import rules
from draftfast.optimize import run, run_multi
from draftfast.orm import Player
from draftfast.csv_parse import salary_download
df = df.dropna()
# for year, week in zip(df['Year'], df['Week']):
# player_pool = []
# segment = df[(df['Year'] == year) & (df['Week'] == week)].as_matrix()
# for player in segment:
# player_pool.append(Player(name=player[3], cost=player[9], proj=player[8], pos=player[4],average_score=player[11], team=player[5], matchup=player[7].upper()))
# roster = run(
# rule_set=rules.FD_NFL_RULE_SET,
# player_pool=player_pool,
# verbose=False,
# )
df['Pos'].replace('Def','D', inplace=True)
df['Pos'].replace('PK','K', inplace=True)
def optimal_count_from_segment(segment):
player_pool = []
for player in segment.as_matrix():
player_pool.append(Player(name=player[3], cost=player[9], proj=player[8], pos=player[4],average_score=player[11], team=player[5], matchup=player[7]))
counted_list = count_list(get_optimal_roster_list(player_pool))
counted_df = pd.DataFrame(counted_list, index=['count']).T.reset_index()
counted_df.rename(columns={'index':'Name'}, inplace=True)
segment_ = segment.merge(counted_df,
how='left', on ='Name')
segment_['count'].fillna(0, inplace=True)
return segment_
def get_optimal_roster_list(player_pool):
rosters = run_multi(
iterations=10,
rule_set=rules.FD_NFL_RULE_SET,
player_pool=player_pool,
verbose=False,
)
players = []
for roster in rosters[0]:
players += roster.players
p_names = [p.name for p in players]
return p_names
def count_list(list_):
counted_list = {}
for item in list_:
if item in counted_list.keys():
counted_list[item] += 1
else:
counted_list[item] = 1
return counted_list
incidence_df = pd.DataFrame()
for year, week in zip(df.groupby(['Year','Week']).sum()[[]].reset_index().as_matrix().T[0],df.groupby(['Year','Week']).sum()[[]].reset_index().as_matrix().T[1]):
segment = df[(df['Year'] == year) & (df['Week'] == week) & (df['Pos'] != 'K')]
new_segment = optimal_count_from_segment(segment)
incidence_df = incidence_df.append(new_segment)
incidence_df.to_csv('old_NFL_with_count.csv')
```
# The Rise of GitHub
GitHub has become the dominant channel that development teams use to collaborate on code. Wikipedia's [Timeline of GitHub](https://en.wikipedia.org/wiki/Timeline_of_GitHub) documents GitHub's rise to dominance as a *business*. We will use a `mirror` crawl to analyze GitHub's rise as a *platform*.
## Dataset
Any analysis must start with data. The dataset we use here is the result of a crawl of public GitHub repositories conducted using [`mirror`](https://github.com/simiotics/mirror).
You can build the same dataset by using `mirror github crawl` to build up the raw dataset of basic repository information and then `mirror github sync` to create a SQLite database of the type used in this notebook.
If you do create your own dataset, change the variable below to point at the SQLite database you generate.
```
from datetime import datetime
import json
import math
import os
%matplotlib notebook
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from tqdm import tqdm
import requests
GITHUB_SQLITE = os.path.expanduser('~/data/mirror/github.sqlite')
```
Let us explore the structure of the dataset before we dive into our analysis.
Basic repository metadata (extracted by crawling the GitHub [`/repositories`](https://developer.github.com/v3/repos/#list-all-public-repositories) endpoint) is stored in the `repositories` table of this database. This is its schema:
```
import sqlite3
conn = sqlite3.connect(GITHUB_SQLITE)
c = conn.cursor()
r = c.execute('select sql from sqlite_master where name="repositories";')
repositories_schema = r.fetchone()[0]
print(repositories_schema)
```
These columns do not provide comprehensive repository information, but they already allow us to understand some interesting things.
To speed up these preliminary analyses, since `mirror` does not automatically create indices in the database, let us create some of our own:
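The promised index creation can be sketched as follows (a minimal sketch against an in-memory database; the index names are our own, and the column names come from the schema printed above - swap `':memory:'` for the `GITHUB_SQLITE` path to index the real crawl):

```python
import sqlite3

# Sketch: indices on the columns the queries below filter on.
# The table definition here only mirrors the columns used in this notebook.
conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute('CREATE TABLE IF NOT EXISTS repositories '
          '(github_id INTEGER, owner TEXT, is_fork INTEGER, api_url TEXT)')
c.execute('CREATE INDEX IF NOT EXISTS idx_repos_owner ON repositories (owner)')
c.execute('CREATE INDEX IF NOT EXISTS idx_repos_is_fork ON repositories (is_fork)')
index_names = [row[0] for row in
               c.execute("SELECT name FROM sqlite_master WHERE type = 'index'")]
print(index_names)
```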
### Number of public repositories on GitHub
```
r = c.execute('select count(*) from repositories;')
result = r.fetchone()
print(result[0])
```
### Proportion of repositories that are forks
```
results = c.execute('select is_fork from repositories;')
total = 0
forks = 0
for row in tqdm(results):
    total += 1
    forks += (row[0] == 1)
print('Proportion of repositories that are forks:', forks/total)
```
### Number of users who have created public repositories
```
results = c.execute('select owner from repositories;')
owners = set([])
for row in tqdm(results):
    owners.add(row[0])
print('Number of users who have created public repositories:', len(owners))
# The owners set takes over a gigabyte of memory on only the first full crawl.
# Better to free this memory up.
del owners
```
## Rise
The basic repository metadata that GitHub's `/repositories` endpoint provides does not give any information about when a repository was created. We will rely on the [`/repos`](https://developer.github.com/v3/repos/#get) endpoint.
The `api_url` column of the Mirror crawl `repositories` table conveniently populates the endpoint URL with repository information, so we will use this column.
### Sampling
We have to make GitHub API calls to retrieve creation time information for repositories in our dataset. With over 120 million repositories in the dataset and with a rate limit of 5000 requests per hour (authenticated) to the GitHub API, it would take a long time to collect the creation time for every repository in our database. We will have to sample.
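A quick back-of-envelope calculation with the figures just quoted shows why: fetching every repository would take on the order of a thousand days.

```python
# Rough arithmetic with the figures quoted above: ~120 million repositories,
# 5000 authenticated requests per hour.
total_repos = 120_000_000
requests_per_hour = 5_000
hours_needed = total_repos / requests_per_hour
print(hours_needed, hours_needed / 24)  # 24000.0 hours, i.e. 1000.0 days
```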
For sampling, we use the following parameters:
+ `NUM_SAMPLES` - Number of total repositories that should be sampled for creation time analysis
+ `GITHUB_TOKEN_FILE` - File containing the token for authentication against the GitHub API ([instructions](https://github.com/settings/tokens) to create an API token)
```
NUM_SAMPLES = 500
headers = {}
# Comment the following lines out if you do not want to make authenticated calls against the GitHub API
# If you do authenticate, make sure that the path below points at a file with your token.
GITHUB_TOKEN_FILE = os.path.expanduser('~/.secrets/github-mirror.txt')
with open(GITHUB_TOKEN_FILE, 'r') as ifp:
    GITHUB_TOKEN = ifp.read().strip()
headers['Authorization'] = f'token {GITHUB_TOKEN}'
gap = int(total/NUM_SAMPLES)
results = c.execute('select github_id, api_url from repositories;')
sample = []
for i, row in tqdm(enumerate(results)):
    if i % gap == 0:
        sample.append((i, row[0], row[1]))
len(sample)
# Change file name as per your use case
SAMPLE_METADATA_FILE = os.path.expanduser('~/data/mirror/rise-of-github-sample.jsonl')
sample_metadata = []
if os.path.isfile(SAMPLE_METADATA_FILE):
    with open(SAMPLE_METADATA_FILE, 'r') as ifp:
        for line in ifp:
            sample_metadata.append(json.loads(line.strip()))
else:
    for i, github_id, api_url in tqdm(sample):
        response = requests.get(api_url, headers=headers)
        entry = {
            'position': i,
            'github_id': github_id,
            'repository': response.json(),
        }
        sample_metadata.append(entry)
    with open(SAMPLE_METADATA_FILE, 'w') as ofp:
        for entry in sample_metadata:
            print(json.dumps(entry), file=ofp)
```
This file is available as a Gist. [Download here](https://gist.github.com/nkashy1/c4ca78c5d6c2c2da2b03b4a730f6e194).
### Analysis
The sample repository metadata is the subject of this analysis, with the goal of building a simple histogram to visualize the growth of GitHub as a platform.
Some of the repositories have bad metadata - either because the repository has been deleted or because access to repository metadata has been blocked. We define "bad" metadata as metadata that doesn't have a `created_at` key. Let us count how many such repositories there are:
```
num_bad = 0
for entry in tqdm(sample_metadata):
    try:
        entry.get('repository', {})['created_at']
    except KeyError:
        num_bad += 1
print(num_bad)
```
That still leaves us enough samples to build a meaningful histogram, once we filter down to the valid entries, `valid_sample_metadata`:
```
valid_sample_metadata = [entry for entry in sample_metadata if entry.get('repository', {}).get('created_at') is not None]
len(valid_sample_metadata)
```
Conveniently, the results of the Mirror crawl are sorted by repository id. This allows us to estimate the number of repositories created between samples by simply taking the difference of the `github_id` keys of consecutive entries in `sample_metadata`.
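A toy illustration of this differencing idea (the ids below are made up, not real GitHub ids): the gap between consecutive sampled ids estimates how many repositories lie between the two sample points.

```python
# With a sorted id sequence, the difference between consecutive sampled ids
# estimates how many repositories were created between those sample points.
sampled_ids = [10, 250, 600, 1400]   # illustrative only
weights = [b - a for a, b in zip(sampled_ids, sampled_ids[1:])]
print(weights)  # [240, 350, 800]
```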
```
x = []
bins = []
weights = []
labels = []
for i, entry in tqdm(enumerate(valid_sample_metadata[:-1])):
    created_at = datetime.strptime(entry['repository']['created_at'], '%Y-%m-%dT%H:%M:%SZ')
    labels.append(created_at.strftime('%Y-%m-%d'))
    x.append(created_at.timestamp())
    bins.append(created_at.timestamp())
    weights.append(valid_sample_metadata[i+1]['github_id'] - entry['github_id'])
ticks = [bins[0]]
ticklabels = [labels[0]]
log = int(math.log(len(bins), 2))
for i in range(log-1, 0, -1):
    idx = int(len(bins)/(2**i))
    ticks.append(bins[idx])
    ticklabels.append(labels[idx])
ticks.append(bins[-1])
ticklabels.append(labels[-1])
timeline_ticks = [
    datetime(2010, 7, 24).timestamp(),
    datetime(2011, 4, 20).timestamp(),
    datetime(2012, 1, 17).timestamp(),
    datetime(2013, 1, 14).timestamp(),
    datetime(2013, 12, 23).timestamp(),
    datetime(2014, 3, 17).timestamp(),
    datetime(2014, 10, 7).timestamp(),
    datetime(2015, 3, 26).timestamp(),
    datetime(2015, 9, 1).timestamp(),
    datetime(2015, 12, 3).timestamp(),
] + ticks[-3:]
timeline_ticklabels = [
    '1M repos',
    '2M repos',
    'Google joins',
    '3M users',
    '10M repos',
    'Harassment',
    'Student Pack',
    'DDoS',
    '~10M users',
    'Apple joins',
] + ticklabels[-3:]
fig, ax = plt.subplots(1, 2, figsize=(12,9))
_ = ax[0].hist(x=x, bins=bins, weights=weights, cumulative=False)
ax[0].set_xlabel('Date')
ax[0].set_ylabel('Number of repositories')
ax[0].set_xticks(timeline_ticks)
ax[0].set_xticklabels(timeline_ticklabels, rotation='vertical')
_ = ax[1].hist(x=x, bins=bins, weights=weights, cumulative=True)
ax[1].set_xlabel('Date')
ax[1].set_ylabel('Number of repositories (cumulative)')
ax[1].set_xticks(ticks)
ax[1].set_xticklabels(ticklabels, rotation='vertical')
plt.show()
fig.savefig('rise-of-github.png')
```
(Timeline events taken from https://en.wikipedia.org/wiki/Timeline_of_GitHub)
| github_jupyter |
# Problem Set 02 - Probability + Statistics
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from numpy.testing import *
from scipy import stats as ss
plt.style.use('seaborn-colorblind')
plt.ion()
```
# Exercise 01:
Suppose the heights of adult women in some regions follow a normal distribution with $\mu = 162$ centimeters and $\sigma = 8$. Under this assumption, answer the questions below:
(a) Given that a woman is 180 centimeters tall, what is the probability that someone chosen at random is taller than she is? To answer the question, write a function a(), with no parameters, that returns the answer with a precision of 4 decimal places.
__Hint__:
1. the function round(var, n) returns the value of the variable var with a precision of n decimal places.
1. the class imported with `from scipy.stats.distributions import norm` implements a normal distribution and already provides a cdf method and a ppf method (the inverse of the cdf).
```
# Write the function a() here - with this name and no parameters -
# to return the answer with a precision of 4 decimal places!
from scipy.stats.distributions import norm

def a():
    # P(X > 180) for X ~ N(162, 8): the survival function (1 - cdf)
    return round(norm.sf(180, loc=162, scale=8), 4)
a()
```
(b) A coach from this region wants to put together a basketball team. To do so, she wants to set a minimum height $h$ that the players must have. She wants $h$ to be greater than at least $90\%$ of the heights of women from that region. What is the value of $h$? To answer the question, write a function _b()_, with no parameters, that returns the answer with a precision of 4 decimal places.
__Hint:__
the function _round(var, n)_ or _np.round(var, n)_ returns the value of the variable var with a precision of n decimal places.
```
# Write the function b() here - with this name and no parameters -
# to return the answer with a precision of 4 decimal places!
# YOUR CODE HERE
raise NotImplementedError()
```
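One possible solution sketch for _b()_ (not part of the assignment template): following the hint, `norm.ppf` gives the 90th percentile of N(162, 8) directly.

```python
from scipy.stats.distributions import norm

# Sketch of one possible b(): the height exceeded by at most 10% of the
# population is the 90th percentile of N(162, 8).
def b():
    return round(norm.ppf(0.9, loc=162, scale=8), 4)

print(b())
```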
# Exercise 02:
The following samples were generated from a normal distribution N($\mu$, $\sigma$), where $\mu$ and $\sigma$ are not necessarily the same for both. The generated histograms let you visualize these distributions.
```
dados1 = [3.8739066,4.4360658,3.0235970,6.1573843,3.7793704,3.6493491,7.2910457,3.7489513,5.9306145,5.3897872,
5.9091607,5.2491517,7.1163771,4.1930465,-0.1994626,3.2583011,5.9229948,1.8548338,4.8335581,5.2329008,
1.5683191,5.8756518,3.4215138,4.7900996,5.9530234,4.4550699,3.3868535,5.3060581,4.2124300,7.0123823,
4.9790184,2.2368825,3.9182012,5.4449732,5.7594690,5.4159924,3.5914275,3.4382886,4.0706780,6.9489863,
6.3269462,2.8740986,7.4210664,4.6413206,4.2209699,4.2009752,6.2509627,4.9137823,4.9171593,6.3367493]
dados2 = [2.291049832,5.092164483,3.287501109,4.152289011,4.534256822,5.513028947,2.696660244,3.270482741,
5.435338467,6.244110011,1.363583509,5.385855994,6.069527998,2.148361858,6.471584096,4.953202949,
6.827787432,4.695468536,2.047598339,8.858080081,5.436394723,7.849470791,4.053545595,3.204185038,
2.400954454,-0.002092845,3.571868529,6.202897955,5.224842718,4.958476608,6.708545254,-0.115002497,
5.106492712,3.343396551,5.984204841,3.552744920,4.041155327,5.709103288,3.137316917,2.100906915,
4.379147487,0.536031040,4.777440348,5.610527663,3.802506385,3.484180306,7.316861806,2.965851553,
3.640560731,4.765175164,7.047545215,5.683723446,5.048988000,6.891720033,3.619091771,8.396155189,
5.317492252,2.376071049,4.383045321,7.386186468,6.554626718,5.020433071,3.577328839,5.534419417,
3.600534876,2.172314745,4.632719037,4.361328042,4.292156420,1.102889101,4.621840612,4.946746104,
6.182937650,5.415993589,4.346608293,2.896446739,3.516568382,6.972384719,3.233811405,4.048606672,
1.663547342,4.607297335,-0.753490459,3.205353052,1.269307121,0.962428478,4.718627886,4.686076530,
2.919118501,6.204058666,4.803050149,4.670632749,2.811395731,7.214950058,3.275492976,2.336357937,
8.494097155,6.473022507,8.525715511,4.364707111]
plt.hist(dados1)
plt.show()
plt.hist(dados2)
plt.show()
```
__a)__ From the histograms, try to fit a normal to each of them, drawing it over the histogram. To do so, you must estimate values of $\mu$ and $\sigma$. Do not forget to normalize the data, i.e., the y axis must be on a scale from 0 to (at most) 1!
```
from scipy.stats.distributions import norm

mu1, std1 = norm.fit(dados1)
mu2, std2 = norm.fit(dados2)
# Normalized histograms with the fitted normal drawn on top
xs1 = np.linspace(min(dados1), max(dados1), 200)
plt.hist(dados1, density=True)
plt.plot(xs1, norm.pdf(xs1, mu1, std1))
plt.show()
xs2 = np.linspace(min(dados2), max(dados2), 200)
plt.hist(dados2, density=True)
plt.plot(xs2, norm.pdf(xs2, mu2, std2))
plt.show()
```
# Exercise 03:
Given a table with information about a sample of 20 students, containing their grades in some courses and the difficulty levels of those courses, write a function that returns the conditional probability estimated from the data for two given events, also reporting whether the events are independent or not. That is, given the table shown in the example (a list of lists) and two events A and B, return the conditional probability of A given B (P(A|B)) with a precision of 4 decimal places. The return value of the function, however, must be a sentence (string) written as follows: _str: val_, where _str_ is the string "Independentes" if events A and B are independent and "Dependentes" otherwise, and _val_ is the value of the conditional probability P(A|B) with a precision of 4 decimal places.
__Hint:__
the function format(var, '.nf') returns a string with the value of the variable var with a precision of exactly n decimal places.
```
# These data are the grades (A-E) of 20 students according to the difficulty of the course (easy or hard)
# Column 1: student id
# Column 2: course difficulty ('Facil' or 'Dificil')
# Column 3: student grade (A-E)
data = [[1, 'Facil', 'C'],
[2, 'Facil', 'A'],
[3, 'Dificil', 'E'],
[4, 'Dificil', 'B'],
[5, 'Dificil', 'B'],
[6, 'Dificil', 'A'],
[7, 'Facil', 'D'],
[8, 'Dificil', 'C'],
[9, 'Facil', 'D'],
[10, 'Facil', 'C'],
[11, 'Facil', 'A'],
[12, 'Facil', 'A'],
[13, 'Dificil', 'B'],
[14, 'Dificil', 'C'],
[15, 'Dificil', 'E'],
[16, 'Dificil', 'C'],
[17, 'Facil', 'A'],
[18, 'Dificil', 'D'],
[19, 'Facil', 'B'],
[20, 'Facil', 'A']]
data = pd.DataFrame(data, columns=['id', 'dificuldade', 'nota'])
data = data.set_index('id')
print(data)
def prob_cond(df,
              valor_nota: 'treat as A in Bayes',
              valor_dificuldade: 'treat as B in Bayes'):
    # YOUR CODE HERE
    raise NotImplementedError()
"""Check that prob_cond returns the correct output for several inputs"""
assert_equal(prob_cond(data, 'A', 'Facil'), 'Dependentes: 0.5000')
assert_equal(prob_cond(data, 'E', 'Facil'), 'Dependentes: 0.0000')
assert_equal(prob_cond(data, 'A', 'Dificil'), 'Dependentes: 0.1000')
assert_equal(prob_cond(data, 'E', 'Dificil'), 'Dependentes: 0.2000')
```
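A possible implementation sketch for `prob_cond` (one of many, not the official solution): estimate P(A) and P(A|B) by counting rows, and judge independence by comparing the two estimates.

```python
import pandas as pd

# Sketch: P(nota = A | dificuldade = B), labelled "Independentes" when the
# estimated P(A|B) equals the estimated P(A), "Dependentes" otherwise.
def prob_cond(df, valor_nota, valor_dificuldade):
    p_a = (df['nota'] == valor_nota).mean()
    dado_b = df[df['dificuldade'] == valor_dificuldade]
    p_a_dado_b = (dado_b['nota'] == valor_nota).mean()
    label = 'Independentes' if abs(p_a_dado_b - p_a) < 1e-9 else 'Dependentes'
    return '{}: {}'.format(label, format(p_a_dado_b, '.4f'))
```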
# Exercise 04:
Using data on fatal accidents of United States airlines from 1985 to 1999, compute some basic statistics. You must return a __list__ with the values of the computed statistics, in the following order: minimum value, maximum value, mean, median, variance and standard deviation. To answer the question, write a function _estat(acidentes)_ that returns the list with the corresponding values, as integers when they are integers or with a precision of 4 decimal places otherwise.
__Test:__
`assert_equal(estat(acidentes), ans)`, where `ans` is a list containing the correct values for the statistics this exercise asks for.
__Hints:__
1) The function round(var, n) returns the value of the variable var with a precision of n decimal places.
2) Run the test `assert_equal(estat(lista_boba), ans_bobo)` for some toy list `lista_boba` whose statistics you can compute on paper.
__Source:__ https://aviation-safety.net/
```
# Write the function estat(acidentes) here - with this name and parameter -
# the function must return the list of answers with a precision of 4 decimal places!
# YOUR CODE HERE
raise NotImplementedError()
```
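A possible sketch of `estat` (assuming the sample, Bessel-corrected, variance and standard deviation are wanted; this is one interpretation of the statement, not the official solution):

```python
import numpy as np

# Sketch: [min, max, mean, median, sample variance, sample std],
# keeping integer-valued results as integers and rounding the rest
# to 4 decimal places.
def estat(acidentes):
    valores = [np.min(acidentes), np.max(acidentes), np.mean(acidentes),
               np.median(acidentes), np.var(acidentes, ddof=1),
               np.std(acidentes, ddof=1)]
    return [int(v) if float(v).is_integer() else round(float(v), 4)
            for v in valores]

print(estat([1, 2, 3, 4]))  # [1, 4, 2.5, 2.5, 1.6667, 1.291]
```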
# Exercise 05:
Look for interesting spurious correlations and present an example you found. That is, present two datasets that have a high correlation (strongly positive or strongly negative) without one actually being the cause of the other. In addition, record the plots with the distribution of the data and a scatter plot as a way to visualize the correlation between the data. Compute the covariance and correlation between the data and, finally, if possible, try to explain what the true cause of the observations might be. Use the last cell of this notebook for this.
__Note:__
For ideas of spurious correlations, see the following sites:
http://tylervigen.com/spurious-correlations
https://en.wikipedia.org/wiki/Spurious_relationship#Other_relationships
```
from IPython.display import SVG, display
display(SVG(filename='chart.svg'))
```
| github_jupyter |
This notebook is part of https://github.com/AudioSceneDescriptionFormat/splines, see also https://splines.readthedocs.io/.
[back to rotation splines](index.ipynb)
# Barry--Goldman Algorithm
We can try to use the
[Barry--Goldman algorithm for non-uniform Euclidean Catmull--Rom splines](../euclidean/catmull-rom-barry-goldman.ipynb)
using [Slerp](slerp.ipynb) instead of linear interpolations,
just as we have done with [De Casteljau's algorithm](de-casteljau.ipynb).
```
def slerp(one, two, t):
    return (two * one.inverse())**t * one

def barry_goldman(rotations, times, t):
    q0, q1, q2, q3 = rotations
    t0, t1, t2, t3 = times
    return slerp(
        slerp(
            slerp(q0, q1, (t - t0) / (t1 - t0)),
            slerp(q1, q2, (t - t1) / (t2 - t1)),
            (t - t0) / (t2 - t0)),
        slerp(
            slerp(q1, q2, (t - t1) / (t2 - t1)),
            slerp(q2, q3, (t - t2) / (t3 - t2)),
            (t - t1) / (t3 - t1)),
        (t - t1) / (t2 - t1))
```
Example:
```
import numpy as np
```
[helper.py](helper.py)
```
from helper import angles2quat, animate_rotations, display_animation
q0 = angles2quat(0, 0, 0)
q1 = angles2quat(90, 0, 0)
q2 = angles2quat(90, 90, 0)
q3 = angles2quat(90, 90, 90)
t0 = 0
t1 = 1
t2 = 3
t3 = 3.5
frames = 50
ani = animate_rotations({
    'Barry–Goldman (q0, q1, q2, q3)': [
        barry_goldman([q0, q1, q2, q3], [t0, t1, t2, t3], t)
        for t in np.linspace(t1, t2, frames)
    ],
    'Slerp (q1, q2)': slerp(q1, q2, np.linspace(0, 1, frames)),
}, figsize=(5, 2))
display_animation(ani, default_mode='once')
```
[splines.quaternion.BarryGoldman](../python-module/splines.quaternion.rst#splines.quaternion.BarryGoldman) class
```
from splines.quaternion import BarryGoldman
import numpy as np
```
[helper.py](helper.py)
```
from helper import angles2quat, animate_rotations, display_animation
rotations = [
    angles2quat(0, 0, 180),
    angles2quat(0, 45, 90),
    angles2quat(90, 45, 0),
    angles2quat(90, 90, -90),
    angles2quat(180, 0, -180),
    angles2quat(-90, -45, 180),
]
grid = np.array([0, 0.5, 2, 5, 6, 7, 9])
bg = BarryGoldman(rotations, grid)
```
For comparison ... [Catmull--Rom-like quaternion spline](catmull-rom-non-uniform.ipynb)
[splines.quaternion.CatmullRom](../python-module/splines.quaternion.rst#splines.quaternion.CatmullRom) class
```
from splines.quaternion import CatmullRom
cr = CatmullRom(rotations, grid, endconditions='closed')
def evaluate(spline, samples=200):
    times = np.linspace(spline.grid[0], spline.grid[-1], samples, endpoint=False)
    return spline.evaluate(times)
ani = animate_rotations({
    'Barry–Goldman': evaluate(bg),
    'Catmull–Rom-like': evaluate(cr),
}, figsize=(5, 2))
display_animation(ani, default_mode='loop')
rotations = [
    angles2quat(90, 0, -45),
    angles2quat(179, 0, 0),
    angles2quat(181, 0, 0),
    angles2quat(270, 0, -45),
    angles2quat(0, 90, 90),
]
s_uniform = BarryGoldman(rotations)
s_chordal = BarryGoldman(rotations, alpha=1)
s_centripetal = BarryGoldman(rotations, alpha=0.5)
ani = animate_rotations({
    'uniform': evaluate(s_uniform, samples=300),
    'chordal': evaluate(s_chordal, samples=300),
    'centripetal': evaluate(s_centripetal, samples=300),
}, figsize=(7, 2))
display_animation(ani, default_mode='loop')
```
## Constant Angular Speed
This is not very efficient; De Casteljau's algorithm is faster because it directly provides the tangent.
```
from splines import ConstantSpeedAdapter
class BarryGoldmanWithDerivative(BarryGoldman):
    delta_t = 0.000001

    def evaluate(self, t, n=0):
        """Evaluate quaternion or angular velocity."""
        if not np.isscalar(t):
            return np.array([self.evaluate(t, n) for t in t])
        if n == 0:
            return super().evaluate(t)
        elif n == 1:
            # NB: We move the interval around because
            # we cannot access times before and after
            # the first and last time, respectively.
            fraction = (t - self.grid[0]) / (self.grid[-1] - self.grid[0])
            before = super().evaluate(t - fraction * self.delta_t)
            after = super().evaluate(t + (1 - fraction) * self.delta_t)
            # NB: Double angle
            return (after * before.inverse()).log_map() * 2 / self.delta_t
        else:
            raise ValueError('Unsupported n: {!r}'.format(n))
s = ConstantSpeedAdapter(BarryGoldmanWithDerivative(rotations, alpha=0.5))
```
Takes a long time!
```
ani = animate_rotations({
    'non-constant speed': evaluate(s_centripetal),
    'constant speed': evaluate(s),
}, figsize=(5, 2))
display_animation(ani, default_mode='loop')
```
| github_jupyter |
```
from urllib.request import urlopen
from bs4 import BeautifulSoup
def getNgrams(content, n):
    content = content.split(' ')
    output = []
    for i in range(len(content)-n+1):
        output.append(content[i:i+n])
    return output
html = urlopen('http://en.wikipedia.org/wiki/Python_(programming_language)')
bs = BeautifulSoup(html, 'html.parser')
content = bs.find('div', {'id':'mw-content-text'}).get_text()
ngrams = getNgrams(content, 2)
print(ngrams)
print('2-grams count is: '+str(len(ngrams)))
import re
def getNgrams(content, n):
    # Strip newlines and bracketed reference markers such as [42]
    content = re.sub(r'\n|\[\d+\]', ' ', content)
    content = bytes(content, 'UTF-8')
    content = content.decode('ascii', 'ignore')
    content = content.split(' ')
    content = [word for word in content if word != '']
    output = []
    for i in range(len(content)-n+1):
        output.append(content[i:i+n])
    return output
html = urlopen('http://en.wikipedia.org/wiki/Python_(programming_language)')
bs = BeautifulSoup(html, 'html.parser')
content = bs.find('div', {'id':'mw-content-text'}).get_text()
ngrams = getNgrams(content, 2)
print(ngrams)
print('2-grams count is: '+str(len(ngrams)))
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
import string
def cleanSentence(sentence):
    sentence = sentence.split(' ')
    sentence = [word.strip(string.punctuation+string.whitespace) for word in sentence]
    sentence = [word for word in sentence if len(word) > 1 or (word.lower() == 'a' or word.lower() == 'i')]
    return sentence

def cleanInput(content):
    content = content.upper()
    # Strip newlines and bracketed reference markers such as [42]
    content = re.sub(r'\n|\[\d+\]', ' ', content)
    content = bytes(content, "UTF-8")
    content = content.decode("ascii", "ignore")
    sentences = content.split('. ')
    return [cleanSentence(sentence) for sentence in sentences]

def getNgramsFromSentence(content, n):
    output = []
    for i in range(len(content)-n+1):
        output.append(content[i:i+n])
    return output

def getNgrams(content, n):
    content = cleanInput(content)
    ngrams = []
    for sentence in content:
        ngrams.extend(getNgramsFromSentence(sentence, n))
    return ngrams
html = urlopen('http://en.wikipedia.org/wiki/Python_(programming_language)')
bs = BeautifulSoup(html, 'html.parser')
content = bs.find('div', {'id':'mw-content-text'}).get_text()
print(len(getNgrams(content, 2)))
from collections import Counter
def getNgrams(content, n):
    content = cleanInput(content)
    ngrams = Counter()
    ngrams_list = []
    for sentence in content:
        newNgrams = [' '.join(ngram) for ngram in getNgramsFromSentence(sentence, n)]
        ngrams_list.extend(newNgrams)
        ngrams.update(newNgrams)
    return ngrams
print(getNgrams(content, 2))
```
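To make the cleaning step concrete: the substitution used in the cleaning functions above is meant to replace newlines and bracketed Wikipedia reference markers such as `[42]` with spaces. A minimal illustration:

```python
import re

# Replace newlines and citation markers like "[1]" or "[23]" with spaces.
sample = 'Python is popular[1]\nand widely used[23].'
cleaned = re.sub(r'\n|\[\d+\]', ' ', sample)
print(cleaned)
```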
| github_jupyter |
# Stroop effect investigation
[Cédric Campguilhem](https://github.com/ccampguilhem/Udacity-DataAnalyst), September 2017
<a id='Top'/>
## Table of contents
- [Introduction](#Introduction)
- [Stroop effect experiment](#Stroop effect experiment)
- [Descriptive statistics](#Descriptive statistics)
- [Inferential statistics](#Inferential statistics)
- [Hypothesis](#Hypothesis)
- [Checking assumptions](#Checking assumptions)
- [Critical t-value](#Critical t-value)
- [t-statistic and decision](#t-statistic)
- [Additional information](#Additional information)
- [Conclusion](#Conclusion)
- [Appendix](#Appendix)
<a id='Introduction'/>
## Introduction
This project is related to Inferential Statistics course for Udacity Data Analyst Nanodegree program. The purpose of this project is to investigate a phenomenon from the experimental psychology called [Stroop effect](https://en.wikipedia.org/wiki/Stroop_effect).
The project aims at investigating experimental results on a sample. We will use both descriptive and inferential statistics (hypothesis formulation and decision).
The project makes use of the Python language with [pandas](http://pandas.pydata.org/) for the calculation parts and the [seaborn](https://seaborn.pydata.org/) library for plotting.
<a id='Stroop effect experiment'/>
## Stroop effect experiment *[top](#Top)*
The experiment consists in saying out loud the ink color in which the words in a list are printed. The participants are given two different lists of words naming colors: a **congruent** one, in which the words match the ink color, and an **incongruent** list, where the words differ from the ink color:
<div style="width: 100%; display: flex; flex-wrap: wrap">
<div style="width: 50%; padding: 5px">
<img alt="Congruent list" src="./stroopa.gif"/>
</div>
<div style="width: 50%; padding: 5px">
<img alt="Incongruent list" src="./stroopb.gif"/>
</div>
</div>
<div style="width: 100%; display: flex; flex-wrap: wrap">
<div style="width: 50%; padding: 5px; text-align: center">
Congruent list
</div>
<div style="width: 50%; padding: 5px; text-align: center">
Incongruent list
</div>
</div>
<div style="width: 100%; display: flex; flex-wrap: wrap">
<div style="width: 100%; padding: 5px; text-align: center">
<a href="https://faculty.washington.edu/chudler/java/ready.html">Source: faculty.washington.edu</a>
</div>
</div>
The time (in seconds) it takes to each participant to enumerate ink colors is recorded for each list. The type of list (congruent or incongruent) is the **independent** variable in the experiment. The time it takes to go through the list is the **dependent** variable.
This experiment is a **dependent** sample experiment: the same participants are given both lists.
```
#Import required libraries for project
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
#Read dataset
df = pd.read_csv('./dataset/stroopdata.csv')
```
As an example, here are the first ten participants recorded times for each list:
```
df[:10].transpose().head()
```
<a id='Descriptive statistics'/>
## Descriptive statistics *[top](#Top)*
We first make a descriptive analysis of our groups. The following box plot shows distribution of recorded time for each list:
```
#We re-organize table
df2 = df.stack().reset_index(level=1, name='time').rename(columns={"level_1": "list"})
df2 = df2.reset_index().rename(columns={"index": "participant"})
figure = plt.figure(figsize=(12, 6))
ax = figure.add_subplot(111)
ax = sns.boxplot(x="list", y="time", data=df2, ax=ax);
texts = ax.set(xlabel='Type of list', ylabel='Time (seconds)', title='Distribution of recorded times for each list')
texts[0].set_fontsize(14)
texts[1].set_fontsize(14)
texts[2].set_fontsize(20)
```
Each box represents the three quartiles (25%, median and 75%) in addition to 1.5 times the interquartile range past the low and high quartiles. Participant times outside this range are reported as outliers (diamond markers).
On average, the time it takes to go through the list seems higher for the incongruent list (below 15 seconds on average for the congruent list and above 20 seconds for the incongruent list). At this point, there is no evidence that this observation is statistically significant. To establish that, we need to formulate a hypothesis. This will be the object of the [next](#Inferential statistics) section.
For the incongruent list, two recorded times are beyond 1.5 times the interquartile range and are reported as outliers.
We can have a closer look at distributions:
```
figure = plt.figure(figsize=(12, 6))
ax = figure.add_subplot(111)
ax = sns.distplot(df2[df2["list"] == "Congruent"]["time"], label="Congruent list", ax=ax);
ax = sns.distplot(df2[df2["list"] == "Incongruent"]["time"], label="Incongruent list", ax=ax);
ax.legend()
texts = ax.set(xlabel='Time (seconds)', ylabel='Proportion', title='Distribution of recorded times for each list')
texts[0].set_fontsize(14)
texts[1].set_fontsize(14)
texts[2].set_fontsize(20)
```
If we refer to the kernel density estimates, we can see that the distribution of recorded times for the congruent list looks like a normal distribution. On the contrary, the results for the incongruent list differ from a normal distribution because of the outliers whose response times lie between 32 and 35 seconds.
The actual values for mean, standard deviation and quartiles are reported in the table below:
```
df2.groupby("list").describe()["time"]
```
In the above table, the standard deviation reported is a sample standard deviation taking into account [Bessel's correction](https://en.wikipedia.org/wiki/Bessel%27s_correction):
\begin{align}
\sigma = \sqrt{\frac{\sum_{i=1}^n (x_i - \bar{x})^2}{n - 1}}
\end{align}
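The corrected and uncorrected estimators can be compared directly with numpy (a small illustration with made-up numbers):

```python
import numpy as np

# Bessel's correction in practice: pandas' .std() and np.std(..., ddof=1)
# divide the squared deviations by n - 1, while np.std's default ddof=0
# divides by n.
x = np.array([12.0, 14.0, 19.0, 23.0])
print(np.std(x, ddof=1))  # sample standard deviation (Bessel-corrected)
print(np.std(x, ddof=0))  # uncorrected (population) standard deviation
```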
<a id='Inferential statistics'/>
## Inferential statistics *[top](#Top)*
<a id='Hypothesis'/>
### Hypothesis *[inferential](#Inferential statistics)*
In the [previous](#Descriptive statistics) section, we observed that the average response time for the incongruent list seems higher than the one recorded for the congruent list. In this section, we want to know if this observation is statistically significant.
Before doing a t-test, we need to check several assumptions ([source](https://statistics.laerd.com/stata-tutorials/paired-t-test-using-stata.php)):
- The dependent variable should be continuous, which holds here because we use time as the dependent variable.
- The independent variable should consist of two categorical, related groups or matched pairs. In our case we have related groups: each participant is given the two different lists.
- There should be no significant outliers in the difference between the two related groups.
- The distribution of the differences in the dependent variable between the two related groups should be approximately normally distributed.
The last two assumptions need to be checked.
Let's formulate the problem this way. Our **null hypothesis** is that population mean response times for congruent ($\mu_{congruent}$) and incongruent ($\mu_{incongruent}$) lists are the same. Our **alternative** is that the population mean response time for the incongruent list is higher.
For this one-tailed, dependant sample [t-test](https://en.wikipedia.org/wiki/Student%27s_t-test), we choose to use an alpha level of 0.05:
\begin{equation}
\mathtt{H}_{0}: \mathtt{\mu}_{congruent} = \mathtt{\mu}_{incongruent} \\
\mathtt{H}_{A}: \mathtt{\mu}_{congruent} < \mathtt{\mu}_{incongruent} \\
\alpha = 0.05
\end{equation}
Let's set $\mathtt{\mu}_D = \mathtt{\mu}_{incongruent} - \mathtt{\mu}_{congruent}$ . The hypotheses may be re-written as follow:
\begin{equation}
\mathtt{H}_{0}: \mathtt{\mu}_{D} = 0 \\
\mathtt{H}_{A}: \mathtt{\mu}_{D} > 0 \\
\alpha = 0.05
\end{equation}
The problem is then a one-tailed t-test in positive direction.
In this problem, the degree of freedom is $n - 1$:
\begin{equation}
dof = n - 1 = 23
\end{equation}
where $n$ is the sample size (24).
<a id='Checking assumptions'/>
### Checking assumptions *[inferential](#Inferential statistics)*
Before performing t-test, we need to check the following two remaining assumptions:
- There should be no significant outliers in the differences between the two related groups.
- The distribution of the differences in the dependent variable between the two groups should be approximately normally distributed.
```
#We first calculate the difference:
df['difference'] = df["Incongruent"] - df["Congruent"]
df.head()
#Now we can make a boxplot to find outliers and a distplot
figure = plt.figure(figsize=(12, 6))
figure.suptitle("Assumption check with differences of response time (incongruent - congruent)", fontsize=20)
ax = figure.add_subplot(121)
ax = sns.boxplot(x="difference", data=df, ax=ax);
texts = ax.set(xlabel='Difference of response time (seconds)', title='Outliers in difference')
ax = figure.add_subplot(122)
ax = sns.distplot(df["difference"], label="Response time (seconds)", ax=ax);
texts = ax.set(xlabel='Difference of response time (seconds)', ylabel='Proportion',
               title='Distribution of response time differences')
```
There is one difference of response time identified as an outlier (above the limit of 1.5 times the interquartile range past the high quartile). In the right-hand picture, we can also see that the distribution differs from a normal distribution because of a bump around 20 seconds. We cannot formally validate all assumptions with this dataset.
<a id='Critical t-value'>
### Critical t-value *[inferential](#Inferential statistics)*
From [t-table](https://s3.amazonaws.com/udacity-hosted-downloads/t-table.jpg) we can get the critical value for $dof = 23$ and $\alpha = 0.05$ for a one-tailed t-test:
\begin{equation}
t_{critical} = 1.714
\end{equation}
If the t-statistic of our sample is greater than this value, then we may reject the null hypothesis.
<a id='t-statistic'/>
## t-statistic and decision *[inferential](#Inferential statistics)*
t-statistic may be calculated this way:
\begin{equation}
t = \frac{\bar{x} - \bar{\mu}_{E}}{\frac{s}{\sqrt{n}}}
\end{equation}
Where $\bar{x}$ is the sample mean of the differences, $s$ is the sample standard deviation of the differences, $n$ is the sample size and $\bar{\mu}_E$ is the expected mean difference under the null hypothesis. In our case we have $\bar{\mu}_E = 0$.
The sample standard deviation of the differences is calculated with:
\begin{equation}
s = \sqrt{\frac{\sum_{i=1}^n (x_i - \bar{x})^2}{n-1}}
\end{equation}
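For example, this estimator can be computed by hand on a toy list (illustrative values only, not the Stroop differences):

```python
import math

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # toy data
x_bar = sum(xs) / len(xs)
# divide by n - 1 (Bessel's correction) for the sample standard deviation
s = math.sqrt(sum((x - x_bar) ** 2 for x in xs) / (len(xs) - 1))
print(round(s, 3))  # → 2.138
```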
```
df["difference"].describe()
import math
t = df["difference"].mean() / (df["difference"].std() / math.sqrt(24))
print t
```
Our sample has the following statistics: $M = 7.96$, $SD = 4.86$ (rounded to two decimal places).
The t-statistic is $t(23) = 8.02, p < .0001$ one-tailed positive direction.
As the p-value is below the selected alpha level of 0.05, we **reject the null hypothesis**. This means that the mean response time with the incongruent list is **extremely significantly** higher than the mean response time with the congruent list.
As this is an experiment, we can also state that the incongruent list **causes** higher response times.
<a id='Additional information'/>
### Additional information *[inferential](#Inferential statistics)*
In this section, we provide a 95% confidence interval for mean of differences of response time, and effect size parameters: Cohen's $d$ and $r^2$.
Cohen's d is the ratio of the mean of the response-time differences to their sample standard deviation:
\begin{equation}
d = \frac{\bar{x}}{s}
\end{equation}
$r^2$ parameter is:
\begin{equation}
r^2 = \frac{t^2}{t^2 + dof}
\end{equation}
Where $dof$ is the number of degrees of freedom. In our case we have $d = 1.64, r^2 = .74$. This means that the list condition explains 74% of the variance in response-time differences.
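These values can be cross-checked with the standard library alone, using the rounded sample statistics reported above (a sketch, not the notebook's pandas pipeline):

```python
import math

n = 24                 # sample size
mean_diff = 7.96       # M, mean of the differences (rounded)
sd_diff = 4.86         # SD, sample standard deviation of the differences (rounded)

t = mean_diff / (sd_diff / math.sqrt(n))   # t-statistic
d = mean_diff / sd_diff                    # Cohen's d
r2 = t**2 / (t**2 + (n - 1))               # proportion of variance explained

print(round(t, 2), round(d, 2), round(r2, 2))  # → 8.02 1.64 0.74
```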
The margin of error is calculated with:
\begin{equation}
margin = t_{critical} \cdot s
\end{equation}
Where $s$ is the standard deviation of differences of response time and $t_{critical}$ is the t value for $\alpha = 0.05$ two-tailed (95% confidence interval): 2.069.
Margin of error is 10.07 seconds. The 95% confidence interval:
\begin{equation}
95\% CI = \bar{x} \pm margin
\end{equation}
where $\bar{x}$ is the mean of differences of response time, is then:
95% CI = (-2.10, 18.03)
This means that if the population takes the Stroop test, we expect the average difference in response time between the incongruent and congruent lists to lie between -2.10 and 18.03 seconds.
```
#Cohen's d
d = df["difference"].mean() / (df["difference"].std())
print "d = ", d
#r2
r2 = t**2 / (t**2 + 23)
print "r2 = ", r2
#Margin of error
margin = 2.069 * df["difference"].std()
print "margin = ", margin
#95% confidence interval
CI = (df["difference"].mean() - margin, df["difference"].mean() + margin)
print "95% CI = ({:.2f}, {:.2f})".format(CI[0], CI[1])
```
<a id='Conclusion'/>
## Conclusion *[top](#Top)*
In this project, we have conducted a t-test with a paired (dependent) sample. As this was a Stroop effect experiment, we have concluded that using an incongruent list causes slower response times compared to a congruent list.
However, two assumptions regarding our sample (the distribution of differences should be approximately normal, and there should be no significant outliers) have been violated. The presence of an outlier affects both the sample mean and the sample standard deviation, and therefore the t-statistic. In this case a nonparametric test may be [preferred](http://www.basic.northwestern.edu/statguidefiles/ttest_unpaired_ass_viol.html).
The Stroop test highlights how the human brain processes words and colors differently. In his experiment, John Ridley Stroop also used a third, neutral list where the words named things other than colors (animals, for example). This third list showed faster response times than the incongruent list, demonstrating the reduced level of interference.
The Stroop test has multiple applications, one of which is to gauge children's brain development (it has been shown that interference decreases from childhood to adulthood). Higher levels of interference are often associated with brain disorders such as Attention-Deficit Hyperactivity Disorder.
Sources: (https://en.wikipedia.org/wiki/Stroop_effect), (https://powersthatbeat.wordpress.com/2012/09/16/what-are-the-different-tests-for-the-stroop-effect-autismaid/), (https://imotions.com/blog/the-stroop-effect/).
<a id='Appendix'/>
## Appendix *[top](#Top)*
### References
Re-organization of panda dataframe on [StackOverflow](https://stackoverflow.com/questions/38241933/how-to-convert-column-names-into-column-values-in-pandas-python).<hr>
Paired t-test [assumptions](https://statistics.laerd.com/stata-tutorials/paired-t-test-using-stata.php).<hr>
Hypothesis testing on [Stat Trek](http://stattrek.com/hypothesis-test/hypothesis-testing.aspx).<hr>
One-tailed and two-tailed tests.<hr>
Violation of [t-test assumptions](http://www.basic.northwestern.edu/statguidefiles/ttest_unpaired_ass_viol.html).<hr>
Statistical calculations with [GraphPad](http://www.graphpad.com/quickcalcs/).<hr>
```
#Export to html
!jupyter nbconvert --to html --template html_minimal.tpl inferential_statistics.ipynb
```
| github_jupyter |
# Kotelite dataset maker
**Kotlite** (Angkot Elite) is an application that allows drivers to pick up passengers travelling along the same route. The application is expected to ease existing congestion through ridesharing: passengers get the experience of riding in a private car or taxi, but at a fairly cheap price, close to that of public transportation. A machine learning algorithm makes it possible to match drivers and passengers who share the same routes.
In this case the dataset used is the NYC Taxi Trip Duration dataset obtained from [Kaggle](https://www.kaggle.com/debanjanpaul/new-york-city-taxi-trip-distance-matrix). It contains pickup and dropoff locations that will be used to match drivers and passengers. The existing data will be manipulated and split into driver data and passenger data.
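Matching by route ultimately comes down to comparing coordinates. As an illustration only (this helper is not part of the original pipeline), the great-circle distance between two (lat, lng) points can be computed with the haversine formula:

```python
import math

def haversine_km(lat1, lng1, lat2, lng2):
    """Great-circle distance in kilometres between two (lat, lng) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lng2 - lng1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# one degree of longitude at the equator is roughly 111 km
print(round(haversine_km(0.0, 0.0, 0.0, 1.0), 1))
```

A threshold on such distances between pickup and dropoff points is one simple way to decide whether a driver and a passenger share a lane.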
```
import requests
import json
import pandas as pd
import numpy as np
import random
from datetime import datetime
import warnings
warnings.filterwarnings('ignore')
df = pd.read_csv('/content/drive/Shareddrives/Brillante Workspace/ML Corner/Dataset/NYC_dataset/train_distance_matrix.csv')
df.describe(include='all')
def filtering_dataset(dataframe):
selected_ft = ['id', 'pickup_datetime', 'pickup_latitude', 'pickup_longitude',
'dropoff_latitude', 'dropoff_longitude']
dataframe = dataframe[selected_ft]
return dataframe
dfs = filtering_dataset(df)
dfs
def change_second_dt(dataframe):
for i in range(len(dataframe)):
        dtime = datetime.strptime(dataframe.loc[i,'pickup_datetime'], "%m/%d/%Y %H:%M")  # %d (not %D) for zero-padded day
_delta = dtime - datetime(1970, 1, 1)
dataframe.loc[i,'datetime'] = _delta.total_seconds()
return dataframe
def driver_and_passanger(dataframe, driver_sum=None, passanger_sum=None):
df = dataframe.copy()
if driver_sum == None:
total = random.randint(10,500)
driver_data = df.sample(total)
else:
driver_data = df.sample(driver_sum)
    if passanger_sum == None:
        total = random.randint(10,500)
        passanger_data = df.sample(total)
else:
passanger_data = df.sample(passanger_sum)
driver_data = driver_data.reset_index(drop=True)
passanger_data = passanger_data.reset_index(drop=True)
return driver_data, passanger_data
driver_dump, passanger_dump = driver_and_passanger(dfs, driver_sum=100, passanger_sum=1000)
driver_dump.describe(include='all')
passanger_dump.describe(include='all')
def route_parsing(response):
data = response.json()
routes = data['routes'][0]['legs'][0]['steps']
    routes_pair = []
    for route in routes:
        start = [route['start_location']['lat'], route['start_location']['lng']]
        end = [route['end_location']['lat'], route['end_location']['lng']]
        # deduplicate whole (lat, lng) pairs so latitudes and longitudes stay aligned
        if start not in routes_pair:
            routes_pair.append(start)
        if end not in routes_pair:
            routes_pair.append(end)
return routes_pair
def get_routes(dataframe, API_KEY):
df = dataframe.copy()
for i in range(len(df)):
start_lat = df.loc[i,'pickup_latitude']
start_long = df.loc[i,'pickup_longitude']
end_lat = df.loc[i,'dropoff_latitude']
end_long = df.loc[i,'dropoff_longitude']
# request data from Direction API
response = requests.get(f'https://maps.googleapis.com/maps/api/directions/json?origin={start_lat},{start_long}&destination={end_lat},{end_long}&key={API_KEY}')
# parse response to get routes data
routes = route_parsing(response)
# change list to string
routes = json.dumps(routes)
# pop data to dataframe
df.loc[i, 'routes'] = routes
return df
API_KEY = 'YOUR_API_KEY'  # use your own Google Maps Directions API key; never commit real keys
driver_dump = get_routes(driver_dump, API_KEY)
driver_dump
driver_dump.to_csv('/content/drive/Shareddrives/Brillante Workspace/ML Corner/Dataset/kotlite_dataset/kotlite_driver_dataset.csv', index=False, header=True)
passanger_dump.to_csv('/content/drive/Shareddrives/Brillante Workspace/ML Corner/Dataset/kotlite_dataset/kotlite_passanger_dataset.csv', index=False, header=True)
```
| github_jupyter |
# Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
## Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
#target_text
```
## Explore the Data
Play around with view_sentence_range to view different parts of the data.
```
#view_sentence_range = (0, 10)
view_sentence_range = (31, 40)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
```
## Implement Preprocessing Function
### Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of each sentence from `target_text`. This will help the neural network predict when the sentence should end.
You can get the `<EOS>` word id by doing:
```python
target_vocab_to_int['<EOS>']
```
You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`.
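As a toy sketch of the expected behavior (the vocabularies below are made up for illustration, not the project's):

```python
# hypothetical vocabularies for illustration only
source_vocab_to_int = {'<UNK>': 0, 'new': 1, 'jersey': 2}
target_vocab_to_int = {'<UNK>': 0, '<EOS>': 1, 'new': 2, 'jersey': 3}

# unknown words fall back to the <UNK> id; targets get <EOS> appended
source_ids = [source_vocab_to_int.get(w, source_vocab_to_int['<UNK>'])
              for w in 'new jersey'.split()]
target_ids = [target_vocab_to_int.get(w, target_vocab_to_int['<UNK>'])
              for w in 'new jersey'.split()] + [target_vocab_to_int['<EOS>']]

print(source_ids, target_ids)  # → [1, 2] [2, 3, 1]
```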
```
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
# TODO: Implement Function
source_sentences = source_text.split('\n')
target_sentences = target_text.split('\n')
#print(source_vocab_to_int)
source_id_text = []
for sentence in source_sentences:
words = sentence.split()
mysentence = []
for word in words:
            mysentence.append(source_vocab_to_int.get(word,0)) # return 0 if the word is not in the dict
#mysentence.append(source_vocab_to_int[word])
#print(source_vocab_to_int[word])
#print(source_vocab_to_int.get(word,0))
source_id_text.append(mysentence)
target_id_text = []
for sentence in target_sentences:
words = sentence.split()
mysentence = []
for word in words:
            mysentence.append(target_vocab_to_int.get(word,0)) # return 0 if the word doesn't exist in the dict
mysentence.append(target_vocab_to_int['<EOS>'])
target_id_text.append(mysentence)
# print(source_id_text[0])
# print(target_id_text[0])
#
# use list comprehension is more efficient
#
#target_ids = [[target_vocab_to_int.get(word) for word in line.split()] + [target_vocab_to_int['<EOS>']] for line in target_text.split('\n')]
return source_id_text, target_id_text
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
```
### Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
```
# Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
```
### Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
```
## Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- `model_inputs`
- `process_decoding_input`
- `encoding_layer`
- `decoding_layer_train`
- `decoding_layer_infer`
- `decoding_layer`
- `seq2seq_model`
### Input
Implement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
- Targets placeholder with rank 2.
- Learning rate placeholder with rank 0.
- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders as the following tuple: (Input, Targets, Learning Rate, Keep Probability)
```
def model_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
"""
# TODO: Implement Function
inputs = tf.placeholder(dtype = tf.int32,
shape=(None, None), name='input')
targets = tf.placeholder(dtype = tf.int32,
shape=(None, None), name='targets')
learning_rate = tf.placeholder(dtype = tf.float32,
name='learning_rate')
keep_prob = tf.placeholder(dtype = tf.float32,
name='keep_prob')
return (inputs, targets, learning_rate, keep_prob)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
```
### Process Decoding Input
Implement `process_decoding_input` using TensorFlow to remove the last word id from each batch in `target_data` and concat the GO ID to the beginning of each batch.
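Conceptually, the operation looks like this pure-Python sketch (with a hypothetical `<GO>` id of 7; TensorFlow does the same thing with `tf.strided_slice` and `tf.concat`):

```python
GO = 7  # hypothetical <GO> id
batch = [[4, 5, 6, 2],   # 2 stands in for <EOS>
         [8, 9, 2, 0]]   # 0 stands in for padding

# drop the last id of each row, prepend <GO>
dec_input = [[GO] + row[:-1] for row in batch]
print(dec_input)  # → [[7, 4, 5, 6], [7, 8, 9, 2]]
```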
```
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for decoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# TODO: Implement Function
newbatch = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
newtarget = tf.concat([tf.fill([batch_size, 1],
target_vocab_to_int['<GO>']),
newbatch], 1)
return newtarget
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_decoding_input(process_decoding_input)
```
### Encoding
Implement `encoding_layer()` to create a Encoder RNN layer using [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn).
```
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
"""
# TODO: Implement Function
cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) # lstm cell
cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob = keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([cell] * num_layers)
output, state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32)
return state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
```
### Decoding - Training
Create training logits using [`tf.contrib.seq2seq.simple_decoder_fn_train()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_train) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder). Apply the `output_fn` to the [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder) outputs.
```
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
    :param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
"""
# TODO: Implement Function
decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
outputs, state, context = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell,
decoder_fn,
inputs = dec_embed_input,
sequence_length=sequence_length,
scope=decoding_scope)
training_logits = output_fn(outputs)
# add additional dropout
# tf.nn.dropout(training_logits, keep_prob)
return training_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
```
### Decoding - Inference
Create inference logits using [`tf.contrib.seq2seq.simple_decoder_fn_inference()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_inference) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder).
```
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
"""
# TODO: Implement Function
infer_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn,
encoder_state,
dec_embeddings,
start_of_sequence_id,
end_of_sequence_id,
maximum_length,
num_decoder_symbols = vocab_size,
dtype = tf.int32)
dp_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob = keep_prob)
outputs, state, context = tf.contrib.seq2seq.dynamic_rnn_decoder(dp_cell,
infer_fn,
sequence_length=maximum_length,
scope=decoding_scope)
return outputs
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
```
### Build the Decoding Layer
Implement `decoding_layer()` to create a Decoder RNN layer.
- Create RNN cell for decoding using `rnn_size` and `num_layers`.
- Create the output function using [`lambda`](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) to transform its input, logits, into class logits.
- Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)` function to get the training logits.
- Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob)` function to get the inference logits.
Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference.
```
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
"""
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
"""
# TODO: Implement Function
start_symb, end_symb = target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>']
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
dropout = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
stack_lstm = tf.contrib.rnn.MultiRNNCell([dropout] * num_layers)
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, activation_fn=None,scope = decoding_scope)
with tf.variable_scope('decoding') as decoding_scope:
training_logits = decoding_layer_train(encoder_state,
stack_lstm,
dec_embed_input,
sequence_length,
decoding_scope,
output_fn,
keep_prob)
with tf.variable_scope('decoding', reuse=True) as decoding_scope:
infer_logits = decoding_layer_infer(encoder_state,
stack_lstm,
dec_embeddings,
start_symb,
end_symb,
sequence_length,
vocab_size,
decoding_scope,
output_fn,
keep_prob)
# option 2: more concise
# decoding_scope.reuse_variables()
# infer_logits = decoding_layer_infer(encoder_state,
# stack_lstm,
# dec_embeddings,
# start_symb,
# end_symb,
# sequence_length,
# vocab_size,
# decoding_scope,
# output_fn,
# keep_prob)
return (training_logits, infer_logits)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
```
### Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to the input data for the encoder.
- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob)`.
- Process target data using your `process_decoding_input(target_data, target_vocab_to_int, batch_size)` function.
- Apply embedding to the target data for the decoder.
- Decode the encoded input using your `decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)`.
```
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
"""
# TODO: Implement Function
enc_embed = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
encode = encoding_layer(enc_embed, rnn_size, num_layers, keep_prob)
dec_process = process_decoding_input(target_data, target_vocab_to_int, batch_size)
dec_embed = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_input = tf.nn.embedding_lookup(dec_embed, dec_process)
train_logits, infer_logits = decoding_layer(dec_input,
dec_embed,
encode,
target_vocab_size,
sequence_length,
rnn_size,
num_layers,
target_vocab_to_int,
keep_prob)
return (train_logits, infer_logits)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
```
## Neural Network Training
### Hyperparameters
Tune the following parameters:
- Set `epochs` to the number of epochs.
- Set `batch_size` to the batch size.
- Set `rnn_size` to the size of the RNNs.
- Set `num_layers` to the number of layers.
- Set `encoding_embedding_size` to the size of the embedding for the encoder.
- Set `decoding_embedding_size` to the size of the embedding for the decoder.
- Set `learning_rate` to the learning rate.
- Set `keep_probability` to the Dropout keep probability
```
# Number of Epochs
epochs = 4
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 3
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.8
```
### Build the Graph
Build the graph using the neural network you implemented.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
```
### Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import time
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
```
### Save Parameters
Save the `batch_size` and `save_path` parameters for inference.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
```
# Checkpoint
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
```
## Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.
- Convert the sentence to lowercase
- Convert words into ids using `vocab_to_int`
- Convert words not in the vocabulary, to the `<UNK>` word id.
```
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
# TODO: Implement Function
wid_list = []
for word in sentence.lower().split():
wid_list.append(vocab_to_int.get(word, vocab_to_int['<UNK>']))
return wid_list
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
```
## Translate
This will translate `translate_sentence` from English to French.
```
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
```
## Imperfect Translation
You might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words out of the thousands in common use, you're only going to see good results using those words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.
You can train on the [WMT10 French-English corpus](http://www.statmt.org/wmt10/training-giga-fren.tar). This dataset has a larger vocabulary and is richer in the topics it covers. However, it will take days to train, so make sure you have a GPU and that the neural network performs well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.
## Submitting This Project
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_language_translation.ipynb" and save it as a HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
| github_jupyter |