# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import matplotlib.pyplot as plt
import networkx as nx
from sklearn.manifold import TSNE
inputEdge = "graph/karate.edgelist"
G = nx.read_edgelist(inputEdge, nodetype=int, create_using=nx.DiGraph())
G
import numpy as np
def loadEmbedding(file_name):
    with open(file_name, 'r') as f:
        n, d = f.readline().strip().split()
        X = np.zeros((int(n)+1, int(d)))
        for line in f:
            emb = line.strip().split()
            emb_fl = [float(emb_i) for emb_i in emb[1:]]
            X[int(emb[0]), :] = emb_fl
    return X
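# The loader above assumes a word2vec-style text format: a header line
# "<num_nodes> <dim>" followed by one "<node_id> <v1> ... <vd>" line per node
# (node ids starting at 1, hence the n+1 rows). A minimal round-trip check of
# that assumed format (the tiny file written here is illustrative only):

```python
import numpy as np

def load_embedding(file_name):
    # Same parsing logic as loadEmbedding above, restated so this sketch is self-contained
    with open(file_name) as f:
        n, d = f.readline().strip().split()
        X = np.zeros((int(n) + 1, int(d)))
        for line in f:
            parts = line.strip().split()
            X[int(parts[0]), :] = [float(v) for v in parts[1:]]
    return X

# Write a two-node, two-dimensional embedding file and read it back
with open("tiny.emb", "w") as f:
    f.write("2 2\n1 0.5 -0.5\n2 1.0 2.0\n")

X_tiny = load_embedding("tiny.emb")
assert X_tiny.shape == (3, 2)  # row 0 is unused padding for 1-based node ids
assert X_tiny[1, 0] == 0.5 and X_tiny[2, 1] == 2.0
```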
def plot_embedding2D(node_pos, node_colors=None, di_graph=None):
    node_num, embedding_dimension = node_pos.shape
    if embedding_dimension > 2:
        print("Embedding dimension greater than 2, using t-SNE to reduce it to 2")
        model = TSNE(n_components=2)
        node_pos = model.fit_transform(node_pos)
    if di_graph is None:
        # plot using plt scatter
        plt.scatter(node_pos[:, 0], node_pos[:, 1], c=node_colors)
    else:
        # plot using networkx with edge structure
        pos = {i: node_pos[i, :] for i in range(node_num)}
        if node_colors is not None:
            nx.draw_networkx(di_graph, pos, node_color=node_colors, width=0.1,
                             node_size=100, arrows=False, alpha=0.8, font_size=5)
        else:
            nx.draw_networkx(di_graph, pos, width=0.1, node_size=300,
                             arrows=False, alpha=0.8, font_size=12)
emb = loadEmbedding("emb/karate-2.emb")
emb
plot_embedding2D(emb,node_colors=None,di_graph=G)
plt.show()
emb_dfs = loadEmbedding("emb/karate-2-0.2.emb")
plot_embedding2D(emb_dfs,node_colors=None,di_graph=G)
plt.show()
emb_bfs = loadEmbedding("emb/karate-2-10.emb")
plot_embedding2D(emb_bfs,node_colors=None,di_graph=G)
plt.show()
# File: ExploreVis.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# 1. Goals:
# 2. Kolmogorov-Smirnov (K-S) Test
# 3. Mean Square Error
# 4. Histogram
# 5. Boxplot / Violin Diagram
# 6. Monte Carlo Process
# 7. Central Limit Theorem
# 8. Probability Method
# 9. The Probability Method
# 10. Additive Smoothing
# 11. Conditional Additive Smoothing
# 12. Random Forest
# https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html
# 13. Missing values, imputation, and SimpleImputer
# https://scikit-learn.org/stable/modules/impute.html
# 14. One hot encoding
# https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html
# 15. linear models with stochastic gradient descent (SGD)
# https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html
# 16. LinearSVC
# https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html
# 17. Feature selection with RFE
# https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFE.html
# Given an external estimator that assigns weights to features (e.g., the coefficients of a linear model), the goal of recursive feature elimination (RFE) is to select features by recursively considering smaller and smaller sets of features. First, the estimator is trained on the initial set of features and the importance of each feature is obtained either through a coef_ attribute or through a feature_importances_ attribute. Then, the least important features are pruned from the current set of features. That procedure is recursively repeated on the pruned set until the desired number of features to select is eventually reached.
# 18. Cross-Validation
# https://scikit-learn.org/stable/modules/cross_validation.html
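# The RFE procedure described in item 17 can be sketched on synthetic data
# (the dataset, estimator, and feature counts below are arbitrary choices for
# illustration, not part of the original notes):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Synthetic problem: 10 features, of which only 3 are informative
X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           n_redundant=0, random_state=0)

# Recursively drop the weakest feature (smallest coefficient) until 3 remain
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=3, step=1)
selector.fit(X, y)
print(selector.support_)   # boolean mask of the selected features
print(selector.ranking_)   # rank 1 marks the selected features
```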
# File: Microsoft Malware Analysis/Theory.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# # !pip install sympy==1.4
# %matplotlib inline
import matplotlib.pyplot as plt
import sympy as sym
from sympy import oo
sym.init_printing()
def fourier_transform(x):
    return sym.transforms._fourier_transform(x, t, w, 1, -1, 'Fourier')
    # return sym.integrate(x*sym.exp(-1j*w*t), (t, 0, oo)).evalf()
# -
# ## Definition
#
# The [Fourier transform](https://en.wikipedia.org/wiki/Fourier_transform) is defined by
#
# \begin{equation}
# X(j \omega) = \int_{-\infty}^{\infty} x(t) \, e^{-j \omega t} \; dt
# \end{equation}
#
# where $X(j \omega) = \mathcal{F} \{ x(t) \}$ denotes the Fourier transform of the signal $x(t)$. $X(j \omega)$ is the spectrum of the signal $x(t)$. The argument $j \omega$, as the exponent of the exponential, captures the behavior of all the oscillatory signals $cos(\omega t)$.
#
# Note that the form of the Fourier transform matches the form of the correlation. It can therefore be interpreted as the "similarity" between the signal $x(t)$ and $e^{j \omega t}$, that is, between $x(t)$ and $cos(\omega t)$.
#
#
# The inverse Fourier transform $x(t) = \mathcal{F}^{-1} \{ X(j \omega) \}$ is defined as
#
# \begin{equation}
# x(t) = \frac{1}{2 \pi} \int_{-\infty}^{\infty} X(j \omega) \, e^{j \omega t} \; d\omega
# \end{equation}
#
# ## Properties
#
# ### Invertibility
#
# \begin{equation}
# x(t) = \mathcal{F}^{-1} \left\{ \mathcal{F} \{ x(t) \} \right\}
# \end{equation}
#
# Substituting the expressions for the Fourier transform and the inverse Fourier transform gives:
#
# \begin{equation}
# \begin{split}
# x(t) &= \frac{1}{2 \pi} \int_{-\infty}^{\infty} \underbrace{\int_{-\infty}^{\infty} x(\tau) e^{-j \omega \tau} d\tau}_{X(j \omega)} \; e^{j \omega t} d\omega \\
# &= \int_{-\infty}^{\infty} x(\tau) \left( \frac{1}{2 \pi} \int_{-\infty}^{\infty} e^{-j \omega \tau} e^{j \omega t} d\omega \right) d\tau \\
# &= \int_{-\infty}^{\infty} x(\tau) \delta(t - \tau) d\tau = x(t)
# \end{split}
# \end{equation}
#
# ### Linearity
#
# \begin{equation}
# \mathcal{F} \{ A \cdot x_1(t) + B \cdot x_2(t) \} = A \cdot X_1(j \omega) + B \cdot X_2(j \omega)
# \end{equation}
#
# Starting from the expression for the Fourier transform:
#
#
# \begin{equation}
# \begin{split}
# &= \int_{-\infty}^{\infty} (A \cdot x_1(t) + B \cdot x_2(t)) \, e^{-j \omega t} \; dt \\
# &= \int_{-\infty}^{\infty} A \cdot x_1(t) \, e^{-j \omega t} \; dt + \int_{-\infty}^{\infty} B \cdot x_2(t) \, e^{-j \omega t} \; dt \\
# &= A \cdot \int_{-\infty}^{\infty} x_1(t) \, e^{-j \omega t} \; dt + B \cdot\int_{-\infty}^{\infty} x_2(t) \, e^{-j \omega t} \; dt
# \end{split}
# \end{equation}
#
# **Example - Fourier transform of a causal exponential signal**
#
#
# \begin{equation}
# x(t) = e^{- \alpha t} \cdot \epsilon(t)
# \end{equation}
#
# with $\alpha \in \mathbb{R}^+$
t,w = sym.symbols('t omega', real=True)
a = 4
x = sym.exp(-a * t)*sym.Heaviside(t)
x
X = fourier_transform(x)
X
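# As a numerical sanity check (a sketch, not part of the original derivation), the
# defining integral can be approximated on a truncated time grid and compared with
# the analytic result $\frac{1}{\alpha + j \omega}$ for $x(t) = e^{-\alpha t} \epsilon(t)$:

```python
import numpy as np

alpha = 4.0
t_grid = np.linspace(0.0, 20.0, 200001)  # x(t) = 0 for t < 0; the tail beyond t = 20 is negligible
x_t = np.exp(-alpha * t_grid)

w_test = 3.0
# Trapezoidal approximation of the Fourier integral at a single frequency
X_num = np.trapz(x_t * np.exp(-1j * w_test * t_grid), t_grid)
X_ref = 1.0 / (alpha + 1j * w_test)
assert abs(X_num - X_ref) < 1e-4
```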
# +
plt.rcParams['figure.figsize'] = 5, 2
sym.plot(x, (t,-1,10), ylabel=r'Amp',line_color='blue',legend=True, label = 'x(t)')
sym.plot(sym.re(X), (w,-20,20), ylabel=r'Real',line_color='blue',legend=True, label = 'X(w)')
sym.plot(sym.im(X), (w,-20,20), ylabel=r'Imag',line_color='blue',legend=True, label = 'X(w)')
sym.plot(sym.sqrt(
(sym.im(X)*sym.im(X)) + (sym.re(X)*sym.re(X))),
(w,-20,20), ylabel=r'Mag',line_color='blue',legend=True, label = 'X(w)')
# -
# Note that:
#
# - $X(\omega)$ is a function defined for all values of $\omega$, not only for integer multiples of some fixed value $\omega_0$.
#
# - $X(\omega)$ is a complex-valued function, i.e. it has real and imaginary parts. It can therefore be expressed in Cartesian form ($real + j \cdot imaginary$) or in polar form ($magnitude \angle angle$). The "similarity" between the signal $x(t)$ and $sin(\omega t)$ can be seen in the magnitude of $X(\omega)$.
#
# - $|X(\omega)|$ has its maximum at $\omega=0$ and decays as $\omega$ increases.
#
#
# Analyzing the magnitude of $X(\omega)$
X
X_real = sym.re(X)
X_real
X_imag = sym.im(X)
X_imag
X_magn = sym.sqrt(X_real*X_real + X_imag*X_imag).simplify()
X_magn
# The magnitude of $X(\omega)$ is symmetric about $\omega = 0$. Hence it is enough to analyze only one side of the spectrum of a continuous-time signal.
# **Example - Fourier transform of an exponential multiplied by a sinusoid**
#
#
# \begin{equation}
# x(t) = sin(\omega_0 t) \cdot e^{- \alpha t} \cdot \epsilon(t)
# \end{equation}
#
# with $\omega_0 \in \mathbb{R}^+$
t,w = sym.symbols('t omega', real=True)
w0 = 10
x1 = sym.sin(w0 * t)*sym.exp(-2*t)*sym.Heaviside(t)
x1
X1 = fourier_transform(x1)
X1
# +
plt.rcParams['figure.figsize'] = 5, 2
sym.plot(x1, (t,-2,5), ylabel=r'Amp',line_color='blue',legend=True, label = 'x1(t)')
#sym.plot(sym.re(X), (w,-20,20), ylabel=r'Real',line_color='blue',legend=True, label = 'X(w)')
#sym.plot(sym.im(X), (w,-20,20), ylabel=r'Imag',line_color='blue',legend=True, label = 'X(w)')
sym.plot(sym.sqrt(
(sym.im(X1)*sym.im(X1)) + (sym.re(X1)*sym.re(X1))),
(w,-60,60), ylabel=r'Mag',line_color='blue',legend=False, label = 'X1(w)')
# -
# Note that:
# - $x1(t)$ is $x(t)$ multiplied by a sinusoid with an angular frequency of $10$ rad/s.
#
# - As with the magnitude of $X(\omega)$, the magnitude of $X1(\omega)$ decays with $\omega$; however, there is a peak at $\omega=10$ associated with the sinusoid.
#
# **Example - Fourier transform of causal sinusoids and their combinations**
#
#
#
# \begin{equation}
# x(t) = sin(\omega_0 t) \cdot \epsilon(t)
# \end{equation}
#
# with $\omega_0 \in \mathbb{R}^+$
# +
w1 = 10
w2 = 5
x2_1 = sym.sin(w1 * t)*sym.Heaviside(t)
x2_2 = sym.sin(w2 * t)*sym.Heaviside(t)
x2 = x2_1 + x2_2
x2
# -
X2_1 = fourier_transform(x2_1)
X2_1
X2_2 = fourier_transform(x2_2)
X2_2
(X2_1+X2_2).simplify()
X2 = fourier_transform(x2)
X2
# +
plt.rcParams['figure.figsize'] = 5, 2
gt2_1 = sym.plot(x2_1, (t,-1,5), ylabel=r'Amp',line_color='blue',legend=True, label = 'x2_1(t)', show = False)
gt2_2 = sym.plot(x2_2, (t,-1,5), ylabel=r'Amp',line_color='green',legend=True, label = 'x2_2(t)', show = False)
gt2 = sym.plot(x2, (t,-1,5), ylabel=r'Amp',line_color='red',legend=True, label = 'x2(t)', show = False)
gt2.extend(gt2_1)
gt2.extend(gt2_2)
gt2.show()
# +
plt.rcParams['figure.figsize'] = 6, 3
gw2_1 = sym.plot(sym.sqrt(
(sym.im(X2_1)*sym.im(X2_1)) + (sym.re(X2_1)*sym.re(X2_1))),
(w,0,14), ylabel=r'Mag',line_color='blue',legend=False, label = 'X2_1(w)',show = False)
gw2_2 = sym.plot(sym.sqrt(
(sym.im(X2_2)*sym.im(X2_2)) + (sym.re(X2_2)*sym.re(X2_2))),
(w,0,14), ylabel=r'Mag',line_color='green',legend=False, label = 'X2_2(w)',show = False)
gw2 = sym.plot(sym.sqrt(
(sym.im(X2)*sym.im(X2)) + (sym.re(X2)*sym.re(X2))),
(w,0,14), ylabel=r'Mag',line_color='red',legend=False, label = 'X2(w)',show = False)
gw2.extend(gw2_1)
gw2.extend(gw2_2)
gw2.show()
# -
X2
# The plot above shows the effect of linearly superposing the spectra of the two sinusoidal signals.
# **Exercise**
#
# Analyze the spectrum of
#
# \begin{equation}
# x(t) = (sin(\omega_0 t) + e^{-2t}) \cdot \epsilon(t)
# \end{equation}
#
# with $\omega_0 \in \mathbb{R}^+$
#
t,w = sym.symbols('t omega', real=True)
w0 = 15
a = 2
x4 = (sym.sin(w0 * t) + sym.exp(-a*t))*sym.Heaviside(t)
x4
# ### Duality
#
# Note that the **Fourier transform** and the **inverse Fourier transform** have similar forms.
#
# \begin{align}
# X(\omega) &= \int_{-\infty}^{\infty} x(t) \, e^{-j \omega t} \; dt \\
# x(t) &= \frac{1}{2 \pi} \int_{-\infty}^{\infty} X(j \omega) \, e^{j \omega t} \; d\omega
# \end{align}
#
# The main differences are the normalization factor $2 \pi$ and the sign of the exponential.
#
# Suppose that:
#
# \begin{equation}
# x_2(\omega) = \mathcal{F} \{ x_1(t) \}
# \end{equation}
#
# It can then be considered that:
#
# \begin{equation}
# x_2(t) = x_2(\omega) \big\vert_{\omega=t}
# \end{equation}
#
# Then
#
# \begin{equation}
# \mathcal{F} \{ x_2(t) \} = \int_{-\infty}^{\infty} x_2(\omega) \big\vert_{\omega=t} \, e^{-j \omega t} \; dt
# \end{equation}
#
# This has the form of the **Fourier transform**, but the inner function has $\omega$ as its variable, which indicates that the integral acts as an **inverse transform**. Thus, to return to the **Fourier transform**, one must multiply by $2\pi$ and change the sign of the exponential in the transformation kernel.
#
# \begin{equation}
# \mathcal{F} \{ x_2(t) \} = 2 \pi \cdot x_1(- \omega)
# \end{equation}
#
# This property makes it possible to carry an analysis from the frequency domain to the time domain and vice versa.
# ## Theorems
#
# Recalling the transform
#
# \begin{equation}
# X(j \omega) = \int_{-\infty}^{\infty} x(t) \, e^{-j \omega t} \; dt
# \end{equation}
#
# ### Derivatives
#
# Given a signal $x(t)$, its time derivative $\frac{d x(t)}{dt}$, and its known **Fourier transform** $X(\omega)$:
#
#
# \begin{equation}
# \mathcal{F} \left\{ \frac{d x(t)}{dt} \right\} = \int_{-\infty}^{\infty} \frac{d x(t)}{dt} \, e^{-j \omega t} \; dt
# \end{equation}
#
# The integral can be solved by parts:
#
# \begin{equation}
# \begin{split}
# \mathcal{F} \left\{ \frac{d x(t)}{dt} \right\} &= x(t) \cdot e^{-j \omega t} \big\vert_{-\infty}^{\infty} - \int_{-\infty}^{\infty} x(t) (-j \omega) e^{-j \omega t} \; dt \\
# &= j \omega \int_{-\infty}^{\infty} x(t) e^{-j \omega t} \; dt \\
# &= j \omega X(\omega)
# \end{split}
# \end{equation}
#
# \begin{equation}
# \frac{d x(t)}{dt} = \frac{d \delta(t)}{dt} * x(t)
# \end{equation}
#
#
# The main application is in transforming differential equations.
#
# **Example**
#
# \begin{equation}
# 2y(t) + 2 \frac{dy}{dt} - x(t) = 0
# \end{equation}
#
# Applying the **Fourier transform** and its properties yields:
# \begin{equation}
# 2Y(\omega) + 2 j \omega Y(\omega) - X(\omega) = 0
# \end{equation}
#
# Note that in the time-domain model (the differential equation) it is not possible to isolate an expression equivalent to $\frac{y(t)}{x(t)}$. Using the frequency-domain model, on the other hand, one obtains:
#
#
# \begin{equation}
# Y(\omega)(2 + 2 j \omega ) = X(\omega)
# \end{equation}
#
# \begin{equation}
# \frac{Y(\omega)}{X(\omega)} = \frac{1}{2+2j\omega} = F(\omega)
# \end{equation}
#
# This ratio is known as the **transfer function** and represents the effect the system has on an input signal, which in the case of the Fourier transform is sinusoidal.
#
#
# +
# The transfer function
F = 1 / (2 +1j*2*w )
F
# -
plt.rcParams['figure.figsize'] = 5, 2
sym.plot(sym.Abs(F), (w,0,10), ylabel=r'Mag',line_color='blue',legend=True, label = 'F(w)', show = True)
# The response of a system to a sinusoidal input at a specific frequency $\omega$ is determined by the complex value that $F(\omega)$ takes. For example, at the frequency $\omega = 1$, $F(1) = \frac{1}{2+2j}$.
F1 = F.subs(w,1)
F1
magF1 = sym.Abs(F1)
magF1
# Thus, if the system is driven by a sine of frequency 1 and amplitude 1, the output will be a sine of amplitude 0.35, with a phase shift given by the angle of the transfer function.
#
# \begin{equation}
# 0.35 sin(1t + ang)
# \end{equation}
#
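# As a numerical check of this claim (a sketch: the simple Euler integration below is
# not part of the original notebook), simulating $2y + 2\frac{dy}{dt} = sin(t)$ should
# give a steady-state amplitude close to $|F(1)| = \frac{1}{2\sqrt{2}} \approx 0.35$:

```python
import numpy as np

dt = 1e-3
t_grid = np.arange(0.0, 60.0, dt)
y = np.zeros_like(t_grid)
for k in range(len(t_grid) - 1):
    # 2 y + 2 y' = sin(t)  =>  y' = (sin(t) - 2 y) / 2
    y[k + 1] = y[k] + dt * (np.sin(t_grid[k]) - 2.0 * y[k]) / 2.0

amp = np.abs(y[t_grid > 40.0]).max()  # the transient has decayed by t = 40
assert abs(amp - 1.0 / (2.0 * np.sqrt(2.0))) < 1e-2
```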
# ### Exercise
#
# Deduce how to compute the phase angle introduced by the system.
F
F.subs(w,3)
sym.re(F)
sym.im(F)
sym.sqrt(sym.re(F)**2 +
sym.im(F)**2 )
sym.atan(sym.im(F)/sym.re(F))
sym.Abs(F)
sym.arg(F)
plt.rcParams['figure.figsize'] = 5, 2
sym.plot(sym.Abs(F), (w,0,10), ylabel=r'Mag',line_color='blue',legend=True, label = 'F(w)', show = True)
sym.plot(sym.arg(F), (w,0,10), ylabel=r'arg',line_color='blue',legend=True, label = 'F(w)', show = True)
# File: 05_Transformada_Fourier.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.2 64-bit
# language: python
# name: python38264bitee8223ec65594bc885f48f30722f6205
# ---
import numpy as np
import matplotlib.pyplot as plt
from profit.sur.backend.gp_functions import invert, nll
from profit.sur.backend.kernels import kern_sqexp
from profit.util.halton import halton
# +
def f(x): return x*np.cos(10*x)
# Custom function to build GP matrix
def build_K(xa, xb, hyp, K):
    for i in np.arange(len(xa)):
        for j in np.arange(len(xb)):
            K[i, j] = kern_sqexp(xa[i], xb[j], hyp[0])
noise_train = 0.0
ntrain = 30
xtrain = halton(1, ntrain)
ftrain = f(xtrain)
np.random.seed(0)
ytrain = ftrain + noise_train*np.random.randn(ntrain, 1)
# +
# GP regression with fixed kernel hyperparameters
hyp = [0.5, 1e-6] # l and sig_noise**2
K = np.empty((ntrain, ntrain)) # train-train
build_K(xtrain, xtrain, hyp, K) # writes inside K
Ky = K + hyp[-1]*np.eye(ntrain)
Kyinv = invert(Ky, 4, 1e-6) # using gp_functions.invert
ntest = 20
xtest = np.linspace(0, 1, ntest)
ftest = f(xtest)
Ks = np.empty((ntrain, ntest)) # train-test
Kss = np.empty((ntest, ntest)) # test-test
build_K(xtrain, xtest, hyp, Ks)
build_K(xtest, xtest, hyp, Kss)
fmean = Ks.T.dot(Kyinv.dot(ytrain)) # predictive mean
# -
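# The cell above builds Kss but uses only the predictive mean; the predictive
# variance follows from the same matrices as Kss - Ks^T Ky^{-1} Ks. The sketch below
# is self-contained (it re-implements a squared-exponential kernel with numpy instead
# of reusing kern_sqexp/invert, whose exact conventions are assumed rather than
# verified here, and its data and hyperparameters are arbitrary):

```python
import numpy as np

def sqexp(xa, xb, l):
    # Squared-exponential kernel, assumed convention exp(-r^2 / (2 l^2))
    return np.exp(-0.5 * (xa - xb) ** 2 / l ** 2)

def gp_posterior(xtr, ytr, xte, l=0.1, sig2n=1e-6):
    K = sqexp(xtr[:, None], xtr[None, :], l)    # train-train
    Ks = sqexp(xtr[:, None], xte[None, :], l)   # train-test
    Kss = sqexp(xte[:, None], xte[None, :], l)  # test-test
    Kyinv = np.linalg.inv(K + sig2n * np.eye(len(xtr)))
    mean = Ks.T @ Kyinv @ ytr
    cov = Kss - Ks.T @ Kyinv @ Ks               # predictive covariance
    return mean, np.sqrt(np.maximum(np.diag(cov), 0.0))

xtr = np.linspace(0.0, 1.0, 10)
ytr = xtr * np.cos(10.0 * xtr)
xte = np.linspace(0.0, 1.0, 50)
mu, sig = gp_posterior(xtr, ytr, xte)
assert np.all(sig >= 0.0)
assert abs(mu[0] - ytr[0]) < 1e-2 and sig[0] < 1e-2  # xte[0] coincides with a training point
```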
plt.figure()
plt.plot(xtrain, ytrain, 'x')
plt.plot(xtest, ftest, '-')
plt.plot(xtest, fmean, '--')
plt.legend(('training', 'reference', 'prediction'))
# Negative log likelihood over length scale
ls = np.linspace(1e-3, 3, 50)
nlls = np.array(
[nll([l, 1e-3], xtrain, ytrain) for l in ls]
).flatten()
plt.figure()
plt.plot(ls, nlls)
plt.xlabel('l')
plt.ylabel('- log p(y|l)')
plt.title('Negative log-likelihood')
# +
from scipy.optimize import minimize
# Prior that penalizes hyperparameters outside the allowed range
def cutoff(x, xmin, xmax, slope=1e3):
    if x < xmin:
        return slope*(x - xmin)**2
    if x > xmax:
        return slope*(x - xmax)**2
    return 0.0

def nlprior(log10hyp):
    return cutoff(log10hyp[0], -2, 1) + cutoff(log10hyp[-1], -8, 0)
x = np.linspace(-10, 1, 100)
plt.figure()
plt.plot(x, [cutoff(xi, -6, 0) for xi in x])
plt.show()
def nlp_transform(log10hyp):
    hyp = 10**log10hyp
    return nll(hyp, xtrain, ytrain) + nlprior(log10hyp)
res = minimize(nlp_transform, np.array([-1, -6]), method='BFGS')
# -
print(res)
print('[l,sig2] = ', 10**res.x)
# +
nl = 50
ns2 = 40
log10l = np.linspace(res.x[0]-1, res.x[0]+1, nl)
log10s2 = np.linspace(res.x[1]-1, res.x[1]+1, ns2)
[Ll, Ls2] = np.meshgrid(log10l, log10s2)
nlls = np.array(
[nlp_transform(np.array([ll, ls2])) for ls2 in log10s2 for ll in log10l]
).reshape([ns2, nl])
# Do some cut for visualization
maxval = 0.0
nlls[nlls>maxval] = maxval
plt.figure()
plt.title('NLL')
plt.contour(Ll, Ls2, nlls, levels=50)
plt.plot(res.x[0], res.x[1], 'rx')
plt.xlabel('log10 l')
plt.ylabel('log10 sig_n^2')
plt.colorbar()
plt.legend(['optimum'])
plt.show()
# -
nlls
# File: doc/gp_custom.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# **Load functions**
# Loads libraries and automatically installs them if needed.
# Source code: https://stackoverflow.com/questions/19596359/install-package-library-if-not-installed
usePackage <- function(p) {
    if (!is.element(p, installed.packages()[, 1]))
        install.packages(p, dependencies = TRUE)
    require(p, character.only = TRUE)
}
usePackage('GGally')
usePackage('ggplot2')
usePackage('ppcor')
usePackage('openintro')
usePackage('stargazer')
usePackage('mctest')
usePackage('yhat')
# usePackage('lmtest') # This package is for the bp test further down. It did not work in Jupyter notebook for Mac but it does work in Windows.
# Look at the data dictionary
help(ncbirths)
# From here onwards we will assign the data set to a new variable so the original dataframe remains untouched and in its original condition.
df <- ncbirths
# Look at the structure of the data
# Notice how there are continuous and categorical variables
# Categorical variables are known as factor type data in R. The sample below has factors with different levels.
# These factors can be given an order. This would make them ordinal variables.
str(df)
# Look at a sample of the data
head(df)
# Summary statistics
summary(df)
# **Creating ordinal variables**
# +
# Creating levels for factor type data
df$lowbirthweight <- factor(df$lowbirthweight,
                            levels = c('low', 'not low'),
                            labels = c('low', 'not low'),
                            ordered = TRUE) # If ordered is left as FALSE then there won't be any order beyond the alphabetic order.
df$habit <- factor(df$habit,
                   levels = c('nonsmoker', 'smoker'),
                   labels = c('nonsmoker', 'smoker'),
                   ordered = TRUE) # If ordered is left as FALSE then there won't be any order beyond the alphabetic order.
# -
# Notice how the 'habit' column now has an order to the values of the data column.
# This is useful for understanding regression. We'll use the variable 'habit' later on.
str(df)
# Finally, don't be afraid of looking at the data in Excel.
# Notice the ordinal nature of the variables are not apparent.
write.csv(df, "df.csv", row.names=FALSE)
# **Cleaning up the dataset**
# Drop the missing data from the dataframe for the purpose of this exercise
# How to report and treat missing data is important in your analysis. We won't discuss this in detail this session.
# To keep things simple we will drop all rows with missing data.
# Source code: https://stackoverflow.com/questions/4862178/remove-rows-with-all-or-some-nas-missing-values-in-data-frame#4862502
df <- na.omit(df) # We are overwriting the df.
# Notice how we have 200 rows removed from the total of 1000.
str(df)
# **Plot your data**
# The ggpairs package allows for seeing the relationship of all continuous variables in two-by-two comparisons and the distribution of each variable.
# Drop columns from the data frame with non-continuous variables.
drops <- c('mature', 'premie', 'marital', 'lowbirthweight', 'gender', 'habit', 'whitemom')
df_cor <- df[ , !(names(df) %in% drops)]
# Print correlations
print(ggpairs(df_cor))
# +
# First plot the relationship you want to use in your regression
p1 <- ggplot(df,aes(y = weight,x = weeks)) +
geom_point() +
geom_smooth(method="lm")
p1
ggsave('p1.pdf', plot = p1)
# -
m1 <- lm(weight ~ weeks, data = df)
summary(m1)
# Adjusted R squared is the proportion of the variance in the data that is explained by the model.
# It ranges from 0 to 100 percent. Usually above 85 is good; such a high number is not achieved in a lot of clinical work.
# Stargazer is a useful method to create regression tables for presentation.
# You can customize it using this link: https://www.jakeruss.com/cheatsheets/stargazer/
# Simply copy/paste the output of the HTML file into Word. Or you can use R markdown.
invisible(capture.output(stargazer(m1, out = 'result1.html', model.names = FALSE))) # Change to 'TRUE' to display OLS
# **Model quality**
#
# Akaike Information Criterion
# A measure of model quality and simplicity. The lower the value the better.
# You want to select a model with the least complexity that explains the most variance.
AIC(m1)
# **Assumptions of the linear regression**
#
# 1. The relationship is linear (already checked in the first plotting)
# 2. The distribution of the residuals is normal (the plot below)
# 3. The variance (standard deviation) of the outcome y is constant over the predictor x (the test below).
# +
par(mfrow=c(2,2)) # to put the plots 2 by 2.
# The first plot is used to identify heteroscedasticity with residual plots. You want it to be random.
# The second plot shows the normal distribution of the residuals.
# You can plot these in ggplot for a better visualization as well.
plot(m1)
# Formal checking of Heteroscedasticity with the Breusch-Pagan test. The p value must be non-significant.
# bptest(m1) # Currently doesn't work in Jupyter on Mac, but worked in Windows version of Jupyter Lab's Anaconda
# Cook's distance measures the influence of a given data point.
# These influential points can be omitted if beyond a certain threshold of influence.
# -
# You can also include categorical variables such as smoking habit (yes/no).
# We made 'habit' an ordinal variable for this purpose. Non-smokers became the reference.
# If more than one category you'd need dummy variables.
m2 <- lm(weight ~ weeks + habit, data = df)
invisible(capture.output(stargazer(m2, out = 'result2.html')))
AIC(m2)
# **Tools to assess existence of multicollinearity**
# Notice how mother's age and father's age are highly correlated in the graph above. This is called collinearity, which can lead to distorted estimates.
# We will discuss this again further down.
m3 <- lm(weight ~ weeks + mage + fage, data = df)
invisible(capture.output(stargazer(m3, out = 'result3.html')))
# Drop columns from the data frame with non-continuous variables.
# We are about to assess all predictor variables that are continuous, so we will remove categorical ones and
# 'weight', which is the outcome.
drops <- c('mature', 'premie', 'marital', 'lowbirthweight', 'gender', 'habit', 'whitemom', 'weight')
df_cor <- df[ , !(names(df) %in% drops)]
# +
print(omcdiag(x = df_cor, y = df$weight))
# Error is most likely because the outcome variable is still in the dataframe.
# As such the outcome variable is also listed as a predictor.
# The error tells us one of the variables has exactly the same characteristic as the outcome variable.
# -
# VIF and TOL are the same thing but inverted. You set the threshold or leave the default values.
print(imcdiag(x = df_cor, y = df$weight, vif = 2.5, tol = 0.40, all = TRUE))
print(mc.plot(x = df_cor, y = df$weight, vif = 2.5))
# **Tools to support interpreting multiple regression in the face of multicollinearity.**
#
# Review: https://www.ncbi.nlm.nih.gov/pubmed/22457655
# Understand package
help('yhat')
# Use sample code of the package in our data set
help('plotCI.yhat')
# +
## Regression
lm.out <- m3 # you can also just type in the model.
## Calculate regression metrics
regrOut<-calc.yhat(lm.out)
## Bootstrap results
usePackage("boot")
boot.out<-boot(df,boot.yhat,100,lmOut=lm.out,regrout0=regrOut)
## Evaluate bootstrap results
result<-booteval.yhat(regrOut,boot.out,bty="perc")
## Plot results
plotCI.yhat(regrOut$PredictorMetrics[-nrow(regrOut$PredictorMetrics), ],
            result$upperCIpm, result$lowerCIpm,
            pid = which(colnames(regrOut$PredictorMetrics)
                        %in% c("Beta","rs","CD:0","CD:1","CD:2","GenDom","Pratt","RLW") == TRUE),
            nr = 3, nc = 3)
# -
regrOut
# **k-fold validation of models**
# +
# Choosing the best model with the lowest AIC. This AIC is a measure of the simplicity of the model, the lower the better.
# Source code: http://www.sthda.com/english/articles/37-model-selection-essentials-in-r/154-stepwise-regression-essentials-in-r/
model_kfold <- function(df) {
    usePackage("caret")
    # Set up repeated k-fold cross-validation
    train.control <- trainControl(method = "repeatedcv", number = 10, repeats = 3)
    # Train the model
    model <- train(weight ~ weeks, data = df, # instead of 'weight ~ .', replace the '.' with specific predictors of interest.
                   method = "lm", # replace lm with lmStepAIC and see what happens
                   trControl = train.control)
    detach(package:caret)
    return(model)
}
# When you replace the 'lm' method with 'lmStepAIC' in the function above, the model will perform stepwise selection by AIC.
# We won't be discussing how to review the different models. Our focus is on the final chosen model.
# Capture modeling
m4 <- model_kfold(df)
# Capture final model
fit4 <- m4$finalModel
# Capture RMSE
RMSE4 <- m4$results[, 2]
# Capture MAE
MAE4 <- m4$results[, 4]
# You can just click the blue bar on the left of the output to close the output of the modeling.
# -
invisible(capture.output(stargazer(fit4, out = 'result4.html')))
# Model accuracy
m4$results
# Final model coefficients
m4$finalModel
# Summary of the model
summary(m4$finalModel)
# File: lessons/r/regression-models/20190321-regression-models.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.008128, "end_time": "2021-09-25T01:30:10.706143", "exception": false, "start_time": "2021-09-25T01:30:10.698015", "status": "completed"} tags=[]
# ### Spacy classifier
# > Baseline from <NAME>. See [here](https://www.kaggle.com/mlconsult/44-3-spacy-classifier-no-matching).
# + papermill={"duration": 31.080954, "end_time": "2021-09-25T01:30:41.796347", "exception": false, "start_time": "2021-09-25T01:30:10.715393", "status": "completed"} tags=[]
# additional python packages
# !pip install datasets --no-index --find-links=file:///kaggle/input/coleridge-packages/packages/datasets
# !pip install -q ../input/coleridge-packages/seqeval-1.2.2-py3-none-any.whl
# !pip install -q ../input/coleridge-packages/tokenizers-0.10.1-cp37-cp37m-manylinux1_x86_64.whl
# !pip install -q ../input/coleridge-packages/transformers-4.5.0.dev0-py3-none-any.whl
# + papermill={"duration": 12.393822, "end_time": "2021-09-25T01:30:54.200282", "exception": false, "start_time": "2021-09-25T01:30:41.806460", "status": "completed"} tags=[]
import os
import re
import json
import simplejson
import time
import datetime
import random
import glob
import importlib
# dataset manipulation
import numpy as np
import pandas as pd
from tqdm import tqdm
import matplotlib.pyplot as plt
import seaborn as sns
# pytorch
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForLanguageModeling, \
AutoModelForMaskedLM, Trainer, TrainingArguments, pipeline
from typing import List
import string
from functools import partial
import pickle
from joblib import Parallel, delayed
from collections import defaultdict, Counter
import gc
# tf
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
# language preprocessing
import nltk
from typing import *
# spacy
import spacy
from spacy import displacy
from spacy.util import minibatch, compounding
# set seed
sns.set()
random.seed(123)
np.random.seed(456)
torch.manual_seed(42)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
print('packages loaded')
# + papermill={"duration": 0.023619, "end_time": "2021-09-25T01:30:54.235716", "exception": false, "start_time": "2021-09-25T01:30:54.212097", "status": "completed"} tags=[]
def clean_text(text: str) -> str: return re.sub('[^A-Za-z0-9]+', ' ', str(text).lower()).strip()
def clean_texts(texts: List[str]) -> List[str]: return [ clean_text(text) for text in texts ]
def read_json(index: str, test_train) -> Dict:
    filename = f"../input/coleridgeinitiative-show-us-the-data/{test_train}/{index}.json"
    with open(filename) as f:
        data = simplejson.load(f)
    return data

def json2text(index: str, test_train) -> str:
    data = read_json(index, test_train)
    texts = [
        row["section_title"] + " " + row["text"]
        for row in data
    ]
    return " ".join(texts)
def filename_to_index(filename):
    return re.sub(r"^.*/|\.[^.]+$", '', filename)

def glob_to_indices(globpath):
    return list(map(filename_to_index, glob.glob(globpath)))
# Inspired by: https://www.kaggle.com/hamditarek/merge-multiple-json-files-to-a-dataframe
def dataset_df(test_train="test"):
    indices = glob_to_indices(f"../input/coleridgeinitiative-show-us-the-data/{test_train}/*.json")
    texts = Parallel(-1)(
        delayed(json2text)(index, test_train)
        for index in indices
    )
    df = pd.DataFrame([
        {"id": index, "text": text}
        for index, text in zip(indices, texts)
    ])
    df.to_csv(f"{test_train}.json.csv", index=False)
    return df
# + papermill={"duration": 0.086662, "end_time": "2021-09-25T01:30:54.332792", "exception": false, "start_time": "2021-09-25T01:30:54.246130", "status": "completed"} tags=[]
sample_submission = pd.read_csv('../input/coleridgeinitiative-show-us-the-data/sample_submission.csv')
papers = {}
for paper_id in sample_submission['Id'].values:
    with open(f'../input/coleridgeinitiative-show-us-the-data/test/{paper_id}.json', 'r') as f:
        sections = json.load(f)
    paper = ''
    for section in sections:
        paper = paper + section['text'] + ' .'
    papers[paper_id] = paper
del paper
# + papermill={"duration": 0.125974, "end_time": "2021-09-25T01:30:54.469410", "exception": false, "start_time": "2021-09-25T01:30:54.343436", "status": "completed"} tags=[]
spacy_train_data=[("adni",{'cats':{'POSITIVE': 1}}),
("cccsl",{'cats':{'POSITIVE': 1}}),
("ibtracs",{'cats':{'POSITIVE': 1}}),
("noaa c cap",{'cats':{'POSITIVE': 1}}),
("noaa c-cap",{'cats':{'POSITIVE': 1}}),
("slosh model",{'cats':{'POSITIVE': 1}}),
("noaa tide gauge",{'cats':{'POSITIVE': 1}}),
("noaa tide station",{'cats':{'POSITIVE': 1}}),
("jh-crown registry",{'cats':{'POSITIVE': 1}}),
("jh crown registry",{'cats':{'POSITIVE': 1}}),
("our world in data",{'cats':{'POSITIVE': 1}}),
("noaa tidal station",{'cats':{'POSITIVE': 1}}),
("covid 19 death data",{'cats':{'POSITIVE': 1}}),
("covid-19 death data",{'cats':{'POSITIVE': 1}}),
("common core of data",{'cats':{'POSITIVE': 1}}),
("world ocean database",{'cats':{'POSITIVE': 1}}),
("covid-19 deaths data",{'cats':{'POSITIVE': 1}}),
("census of agriculture",{'cats':{'POSITIVE': 1}}),
("covid 19 genome sequence",{'cats':{'POSITIVE': 1}}),
("noaa water level station",{'cats':{'POSITIVE': 1}}),
("covid-19 genome sequence",{'cats':{'POSITIVE': 1}}),
("nces common core of data",{'cats':{'POSITIVE': 1}}),
("baccalaureate and beyond",{'cats':{'POSITIVE': 1}}),
("noaa world ocean database",{'cats':{'POSITIVE': 1}}),
("aging integrated database",{'cats':{'POSITIVE': 1}}),
("2019-ncov genome sequence",{'cats':{'POSITIVE': 1}}),
("covid 19 genome sequences",{'cats':{'POSITIVE': 1}}),
("covid-19 genome sequences",{'cats':{'POSITIVE': 1}}),
("2019 ncov genome sequence",{'cats':{'POSITIVE': 1}}),
("our world in data covid-19",{'cats':{'POSITIVE': 1}}),
("sars-cov-2 genome sequence",{'cats':{'POSITIVE': 1}}),
("nass census of agriculture",{'cats':{'POSITIVE': 1}}),
("2019-ncov genome sequences",{'cats':{'POSITIVE': 1}}),
("anss comprehensive catalog",{'cats':{'POSITIVE': 1}}),
("our world in data covid 19",{'cats':{'POSITIVE': 1}}),
("2019 ncov genome sequences",{'cats':{'POSITIVE': 1}}),
("sars cov 2 genome sequence",{'cats':{'POSITIVE': 1}}),
("usda census of agriculture",{'cats':{'POSITIVE': 1}}),
("covid open research dataset",{'cats':{'POSITIVE': 1}}),
("genome sequence of covid-19",{'cats':{'POSITIVE': 1}}),
("covid 19 open research data",{'cats':{'POSITIVE': 1}}),
("genome sequence of covid 19",{'cats':{'POSITIVE': 1}}),
("sars-cov-2 genome sequences",{'cats':{'POSITIVE': 1}}),
("rural urban continuum codes",{'cats':{'POSITIVE': 1}}),
("covid-19 open research data",{'cats':{'POSITIVE': 1}}),
("sars cov 2 genome sequences",{'cats':{'POSITIVE': 1}}),
("noaa storm surge inundation",{'cats':{'POSITIVE': 1}}),
("survey of earned doctorates",{'cats':{'POSITIVE': 1}}),
("rural-urban continuum codes",{'cats':{'POSITIVE': 1}}),
("genome sequence of 2019-ncov",{'cats':{'POSITIVE': 1}}),
("education longitudinal study",{'cats':{'POSITIVE': 1}}),
("genome sequence of 2019 ncov",{'cats':{'POSITIVE': 1}}),
("genome sequences of covid 19",{'cats':{'POSITIVE': 1}}),
("genome sequences of covid-19",{'cats':{'POSITIVE': 1}}),
("genome sequences of 2019 ncov",{'cats':{'POSITIVE': 1}}),
("genome sequence of sars-cov-2",{'cats':{'POSITIVE': 1}}),
("genome sequences of 2019-ncov",{'cats':{'POSITIVE': 1}}),
("genome sequence of sars cov 2",{'cats':{'POSITIVE': 1}}),
("genome sequences of sars-cov-2",{'cats':{'POSITIVE': 1}}),
("covid-19 open research dataset",{'cats':{'POSITIVE': 1}}),
("high school longitudinal study",{'cats':{'POSITIVE': 1}}),
("genome sequences of sars cov 2",{'cats':{'POSITIVE': 1}}),
("survey of doctorate recipients",{'cats':{'POSITIVE': 1}}),
("covid 19 image data collection",{'cats':{'POSITIVE': 1}}),
("covid-19 image data collection",{'cats':{'POSITIVE': 1}}),
("covid 19 open research dataset",{'cats':{'POSITIVE': 1}}),
("coastal change analysis program",{'cats':{'POSITIVE': 1}}),
("sars cov 2 full genome sequence",{'cats':{'POSITIVE': 1}}),
("beginning postsecondary student",{'cats':{'POSITIVE': 1}}),
("sars-cov-2 full genome sequence",{'cats':{'POSITIVE': 1}}),
("aging integrated database agid ",{'cats':{'POSITIVE': 1}}),
("nsf survey of earned doctorates",{'cats':{'POSITIVE': 1}}),
("sars-cov-2 full genome sequences",{'cats':{'POSITIVE': 1}}),
("aging integrated database (agid)",{'cats':{'POSITIVE': 1}}),
("beginning postsecondary students",{'cats':{'POSITIVE': 1}}),
("sars cov 2 full genome sequences",{'cats':{'POSITIVE': 1}}),
("school survey on crime and safety",{'cats':{'POSITIVE': 1}}),
("our world in data covid 19 dataset",{'cats':{'POSITIVE': 1}}),
("early childhood longitudinal study",{'cats':{'POSITIVE': 1}}),
("our world in data covid-19 dataset",{'cats':{'POSITIVE': 1}}),
("sars cov 2 complete genome sequence",{'cats':{'POSITIVE': 1}}),
("sars-cov-2 complete genome sequence",{'cats':{'POSITIVE': 1}}),
("2019-ncov complete genome sequences",{'cats':{'POSITIVE': 1}}),
("2019 ncov complete genome sequences",{'cats':{'POSITIVE': 1}}),
("north american breeding bird survey",{'cats':{'POSITIVE': 1}}),
("sars-cov-2 complete genome sequences",{'cats':{'POSITIVE': 1}}),
("ncses survey of doctorate recipients",{'cats':{'POSITIVE': 1}}),
("sars cov 2 complete genome sequences",{'cats':{'POSITIVE': 1}}),
("baltimore longitudinal study of aging",{'cats':{'POSITIVE': 1}}),
("ffrdc research and development survey",{'cats':{'POSITIVE': 1}}),
("national education longitudinal study",{'cats':{'POSITIVE': 1}}),
("national teacher and principal survey",{'cats':{'POSITIVE': 1}}),
("anss comprehensive earthquake catalog",{'cats':{'POSITIVE': 1}}),
("covid 19 open research dataset cord 19 ",{'cats':{'POSITIVE': 1}}),
("agricultural resource management survey",{'cats':{'POSITIVE': 1}}),
("agricultural resources management survey",{'cats':{'POSITIVE': 1}}),
("covid-19 open research dataset (cord-19)",{'cats':{'POSITIVE': 1}}),
("usgs north american breeding bird survey",{'cats':{'POSITIVE': 1}}),
("national water level observation network",{'cats':{'POSITIVE': 1}}),
("north american breeding bird survey bbs ",{'cats':{'POSITIVE': 1}}),
("north american breeding bird survey (bbs)",{'cats':{'POSITIVE': 1}}),
("nsf ffrdc research and development survey",{'cats':{'POSITIVE': 1}}),
("national assessment of education progress",{'cats':{'POSITIVE': 1}}),
("alzheimers disease neuroimaging initiative",{'cats':{'POSITIVE': 1}}),
("coastal change analysis program land cover",{'cats':{'POSITIVE': 1}}),
("baccalaureate and beyond longitudinal study",{'cats':{'POSITIVE': 1}}),
("baltimore longitudinal study of aging blsa ",{'cats':{'POSITIVE': 1}}),
("sea lake and overland surges from hurricanes",{'cats':{'POSITIVE': 1}}),
("baltimore longitudinal study of aging (blsa)",{'cats':{'POSITIVE': 1}}),
("noaa national water level observation network",{'cats':{'POSITIVE': 1}}),
("optimum interpolation sea surface temperature",{'cats':{'POSITIVE': 1}}),
("sea surface temperature optimum interpolation",{'cats':{'POSITIVE': 1}}),
("survey of industrial research and development",{'cats':{'POSITIVE': 1}}),
("nws storm surge risk",{'cats':{'POSITIVE': 1}}),
("storm surge risk",{'cats':{'POSITIVE': 1}}),
("cas covid 19 antiviral candidate compounds data",{'cats':{'POSITIVE': 1}}),
("sea surface temperature - optimum interpolation",{'cats':{'POSITIVE': 1}}),
("cas covid-19 antiviral candidate compounds data",{'cats':{'POSITIVE': 1}}),
("higher education research and development survey",{'cats':{'POSITIVE': 1}}),
("rsna international covid open radiology database",{'cats':{'POSITIVE': 1}}),
("alzheimer s disease neuroimaging initiative adni ",{'cats':{'POSITIVE': 1}}),
("noaa sea lake and overland surges from hurricanes",{'cats':{'POSITIVE': 1}}),
("noaa sea lake and overland surges from hurricanes",{'cats':{'POSITIVE': 1}}),
("nsf survey of industrial research and development",{'cats':{'POSITIVE': 1}}),
("arms farm financial and crop production practices",{'cats':{'POSITIVE': 1}}),
("noaa optimum interpolation sea surface temperature",{'cats':{'POSITIVE': 1}}),
("cas covid 19 antiviral candidate compounds dataset",{'cats':{'POSITIVE': 1}}),
("alzheimer's disease neuroimaging initiative (adni)",{'cats':{'POSITIVE': 1}}),
("cas covid-19 antiviral candidate compounds dataset",{'cats':{'POSITIVE': 1}}),
("cas covid 19 antiviral candidate compounds data set",{'cats':{'POSITIVE': 1}}),
("cas covid-19 antiviral candidate compounds data set",{'cats':{'POSITIVE': 1}}),
("survey of state government research and development",{'cats':{'POSITIVE': 1}}),
("rsna international covid-19 open radiology database",{'cats':{'POSITIVE': 1}}),
("beginning postsecondary students longitudinal study",{'cats':{'POSITIVE': 1}}),
("rsna international covid 19 open radiology database",{'cats':{'POSITIVE': 1}}),
("nsf higher education research and development survey",{'cats':{'POSITIVE': 1}}),
("trends in international mathematics and science study",{'cats':{'POSITIVE': 1}}),
("noaa c-cap",{'cats':{'POSITIVE': 1}}),
("noaa c cap",{'cats':{'POSITIVE': 1}}),
("survey of science and engineering research facilities",{'cats':{'POSITIVE': 1}}),
("advanced national seismic system comprehensive catalog",{'cats':{'POSITIVE': 1}}),
("complexity science hub covid-19 control strategies list",{'cats':{'POSITIVE': 1}}),
("survey of earned doctorates",{'cats':{'POSITIVE': 1}}),
("complexity science hub covid 19 control strategies list",{'cats':{'POSITIVE': 1}}),
("international best track archive for climate stewardship",{'cats':{'POSITIVE': 1}}),
("nsf survey of science and engineering research facilities",{'cats':{'POSITIVE': 1}}),
("survey of doctorate recipients",{'cats':{'POSITIVE': 1}}),
("rsna international covid 19 open radiology database ricord ",{'cats':{'POSITIVE': 1}}),
("common core of data",{'cats':{'POSITIVE': 1}}),
("rsna international covid-19 open radiology database (ricord)",{'cats':{'POSITIVE': 1}}),
("noaa international best track archive for climate stewardship",{'cats':{'POSITIVE': 1}}),
("program for the international assessment of adult competencies",{'cats':{'POSITIVE': 1}}),
("complexity science hub covid 19 control strategies list cccsl ",{'cats':{'POSITIVE': 1}}),
("complexity science hub covid-19 control strategies list (cccsl)",{'cats':{'POSITIVE': 1}}),
("covid-19 precision medicine analytics platform registry (jh-crown)",{'cats':{'POSITIVE': 1}}),
("advanced national seismic system anss comprehensive catalog comcat ",{'cats':{'POSITIVE': 1}}),
("world ocean database",{'cats':{'POSITIVE': 1}}),
("advanced national seismic system (anss) comprehensive catalog (comcat)",{'cats':{'POSITIVE': 1}}),
("survey of industrial research and development",{'cats':{'POSITIVE': 1}}),
("survey of graduate students and postdoctorates in science and engineering",{'cats':{'POSITIVE': 1}}),
("higher education research and development survey",{'cats':{'POSITIVE': 1}}),
("nsf survey of graduate students and postdoctorates in science and engineering",{'cats':{'POSITIVE': 1}}),
("characterizing health associated risks and your baseline disease in sars cov 2",{'cats':{'POSITIVE': 1}}),
("characterizing health associated risks and your baseline disease in sars-cov-2",{'cats':{'POSITIVE': 1}}),
("ncses survey of graduate students and postdoctorates in science and engineering",{'cats':{'POSITIVE': 1}}),
("genetics of alzheimer s disease data storage site",{'cats':{'POSITIVE': 1}}),
("genetics of alzheimer's disease data storage site",{'cats':{'POSITIVE': 1}}),
("survey of science and engineering research facilities",{'cats':{'POSITIVE': 1}}),
("survey of earned doctorates",{'cats':{'POSITIVE': 1}}),
("survey of doctorate recipients",{'cats':{'POSITIVE': 1}}),
("characterizing health associated risks and your baseline disease in sars cov 2 charybdis ",{'cats':{'POSITIVE': 1}}),
("characterizing health associated risks and your baseline disease in sars-cov-2 (charybdis)",{'cats':{'POSITIVE': 1}}),
("alzheimer s disease data storage site niagads ",{'cats':{'POSITIVE': 1}}),
("alzheimer's disease data storage site (niagads)",{'cats':{'POSITIVE': 1}}),
("optimum interpolation sea surface temperature",{'cats':{'POSITIVE': 1}}),
("survey of industrial research and development",{'cats':{'POSITIVE': 1}}),
("survey of graduate students and postdoctorates in science and engineering",{'cats':{'POSITIVE': 1}}),
("higher education research and development survey",{'cats':{'POSITIVE': 1}}),
("survey of science and engineering research facilities",{'cats':{'POSITIVE': 1}}),
("survey of graduate students and postdoctorates in science and engineering",{'cats':{'POSITIVE': 1}}),
("local food economic impact assessment",{'cats':{'POSITIVE': 1}}),
("tvb model",{'cats':{'POSITIVE': 1}}),
("mexican american study project",{'cats':{'POSITIVE': 1}}),
("national longitudinal study of adolescent health",{'cats':{'POSITIVE': 1}}),
("national health interview survey",{'cats':{'POSITIVE': 1}}),
("project talent longitudinal study",{'cats':{'POSITIVE': 1}}),
("sea lake and overland surges from hurricanes slosh model",{'cats':{'POSITIVE': 1}}),
("schools and staffing survey",{'cats':{'POSITIVE': 1}}),
("private school universe survey",{'cats':{'POSITIVE': 1}}),
("fast response survey system",{'cats':{'POSITIVE': 1}}),
("schools and staffing survey",{'cats':{'POSITIVE': 1}}),
("national assessment of educational progress",{'cats':{'POSITIVE': 1}}),
("coinmon core of data",{'cats':{'POSITIVE': 1}}),
("national postsecondary student aid study",{'cats':{'POSITIVE': 1}}),
("postsecondary student aid study",{'cats':{'POSITIVE': 1}}),
("national study of postsecondary faculty",{'cats':{'POSITIVE': 1}}),
("survey on vocational programs in secondary schools",{'cats':{'POSITIVE': 1}}),
("district survey of alternative schools and programs",{'cats':{'POSITIVE': 1}}),
("state nonfiscal survey of public elementary secondary education",{'cats':{'POSITIVE': 1}}),
("national public education financial survey",{'cats':{'POSITIVE': 1}}),
("annual survey of government finances school systems f 33 survey",{'cats':{'POSITIVE': 1}}),
("integrated postsecondary education data system",{'cats':{'POSITIVE': 1}}),
("ipeds",{'cats':{'POSITIVE': 1}}),
("nsopf",{'cats':{'POSITIVE': 1}}),
("bps data",{'cats':{'POSITIVE': 1}}),
("postsecondary students longitudinal study",{'cats':{'POSITIVE': 1}}),
("public elementary secondary school universe survey",{'cats':{'POSITIVE': 1}}),
("ccd local education agency universe survey",{'cats':{'POSITIVE': 1}}),
("ccd national public education financial survey",{'cats':{'POSITIVE': 1}}),
("nces national education longitudinal study of 1988",{'cats':{'POSITIVE': 1}}),
("agricultural resource management survey",{'cats':{'POSITIVE': 1}}),
("aez model",{'cats':{'POSITIVE': 1}}),
("usda corn yield data",{'cats':{'POSITIVE': 1}}),
("argonne national laboratory s greet",{'cats':{'POSITIVE': 1}}),
("greet model",{'cats':{'POSITIVE': 1}}),
("argonne national laboratory s cclub model",{'cats':{'POSITIVE': 1}}),
("national education longitudinal survey",{'cats':{'POSITIVE': 1}}),
("education longitudinal study of 2002",{'cats':{'POSITIVE': 1}}),
("progress in international reading literacy study",{'cats':{'POSITIVE': 1}}),
("usgs national water quality assessment",{'cats':{'POSITIVE': 1}}),
("ipeds fall enrollment dataset",{'cats':{'POSITIVE': 1}}),
("fall enrollment dataset",{'cats':{'POSITIVE': 1}}),
("delta cost project",{'cats':{'POSITIVE': 1}}),
("annual survey of colleges standard research compilation",{'cats':{'POSITIVE': 1}}),
("chesapeake bay watershed land cover data series",{'cats':{'POSITIVE': 1}}),
("nccpi corn and soybeans sub model",{'cats':{'POSITIVE': 1}}),
("national longitudinal survey of youth",{'cats':{'POSITIVE': 1}}),
("national survey of teachers",{'cats':{'POSITIVE': 1}}),
("high school transcript study",{'cats':{'POSITIVE': 1}}),
("clinical dementia rating",{'cats':{'POSITIVE': 0}}),
("services web feature services",{'cats':{'POSITIVE': 0}}),
("university south carolina",{'cats':{'POSITIVE': 0}}),
("latent dirichlet allocation",{'cats':{'POSITIVE': 0}}),
("montreal neurological institute",{'cats':{'POSITIVE': 0}}),
("national board professional teaching standards",{'cats':{'POSITIVE': 0}}),
("multiple endmember spectral mixture analysis",{'cats':{'POSITIVE': 0}}),
("bay katmai national park",{'cats':{'POSITIVE': 0}}),
("united nations development programme",{'cats':{'POSITIVE': 0}}),
("compact high resolution imaging spectrometer",{'cats':{'POSITIVE': 0}}),
("office management budget",{'cats':{'POSITIVE': 0}}),
("florida department natural resources",{'cats':{'POSITIVE': 0}}),
("university neuroinformatics research group",{'cats':{'POSITIVE': 0}}),
("national canadian pacific rail service",{'cats':{'POSITIVE': 0}}),
("portobello marine laboratory",{'cats':{'POSITIVE': 0}}),
("hilbert schmidt independence criterion",{'cats':{'POSITIVE': 0}}),
("genomic distance-based regression",{'cats':{'POSITIVE': 0}}),
("national weather service",{'cats':{'POSITIVE': 0}}),
("deep boltzmann machine",{'cats':{'POSITIVE': 0}}),
("multiple indicators multiple cause",{'cats':{'POSITIVE': 0}}),
("quasi maximum likelihood",{'cats':{'POSITIVE': 0}}),
("the eastern equatorial pacific",{'cats':{'POSITIVE': 0}}),
("autoregressive distributed lag",{'cats':{'POSITIVE': 0}}),
("city chesapeake mosquito control commission",{'cats':{'POSITIVE': 0}}),
("the california verbal learning test",{'cats':{'POSITIVE': 0}}),
("long short term memory",{'cats':{'POSITIVE': 0}}),
("detection computer-aided detection",{'cats':{'POSITIVE': 0}}),
("stacked sparse autoencoder",{'cats':{'POSITIVE': 0}}),
("brain tumor segmentation",{'cats':{'POSITIVE': 0}}),
("geriatric depression scale",{'cats':{'POSITIVE': 0}}),
("australian institute teaching school leadership",{'cats':{'POSITIVE': 0}}),
("glasgow coma scale",{'cats':{'POSITIVE': 0}}),
("shared environmental information system",{'cats':{'POSITIVE': 0}}),
("international conferences agricultural statistics",{'cats':{'POSITIVE': 0}}),
("least absolute shrinkage selection operator",{'cats':{'POSITIVE': 0}}),
("color trails test",{'cats':{'POSITIVE': 0}}),
("international consortium brain mapping",{'cats':{'POSITIVE': 0}}),
("hardyweinberg disequilibrium",{'cats':{'POSITIVE': 0}}),
("advanced very high resolution radiometer",{'cats':{'POSITIVE': 0}}),
("spatial empirical bayesian",{'cats':{'POSITIVE': 0}}),
("rectified linear unit",{'cats':{'POSITIVE': 0}}),
("networks generative adversarial networks",{'cats':{'POSITIVE': 0}}),
("digital database screening mammography",{'cats':{'POSITIVE': 0}}),
("rhode island emergency management agency",{'cats':{'POSITIVE': 0}}),
("national marine fisheries service",{'cats':{'POSITIVE': 0}}),
("office marine aviation operations",{'cats':{'POSITIVE': 0}}),
("national oceanic atmospheric administration",{'cats':{'POSITIVE': 0}}),
("national hurricane center",{'cats':{'POSITIVE': 0}}),
("federal emergency management administration",{'cats':{'POSITIVE': 0}}),
("the social science genetic association consortium",{'cats':{'POSITIVE': 0}}),
("usda economic research service",{'cats':{'POSITIVE': 0}}),
("world health organization",{'cats':{'POSITIVE': 0}}),
("global positioning systems",{'cats':{'POSITIVE': 0}}),
("national center education statistics",{'cats':{'POSITIVE': 0}}),
("amyotrophic lateral sclerosis",{'cats':{'POSITIVE': 0}}),
("the economic research service",{'cats':{'POSITIVE': 0}}),
("clinical advisory board",{'cats':{'POSITIVE': 0}}),
("institute education sciences",{'cats':{'POSITIVE': 0}}),
("national centers environmental prediction",{'cats':{'POSITIVE': 0}}),
("coastal environmental research committee",{'cats':{'POSITIVE': 0}}),
("non-governmental organizations",{'cats':{'POSITIVE': 0}}),
("high performance computing",{'cats':{'POSITIVE': 0}}),
("engineer research development center",{'cats':{'POSITIVE': 0}}),
("usda economic research service",{'cats':{'POSITIVE': 0}}),
("national oceanic atmospheric administration",{'cats':{'POSITIVE': 0}}),
("national hurricane center",{'cats':{'POSITIVE': 0}}),
("economic research service",{'cats':{'POSITIVE': 0}}),
("higher order statistics",{'cats':{'POSITIVE': 0}}),
("common data elements",{'cats':{'POSITIVE': 0}}),
("north east south east",{'cats':{'POSITIVE': 0}}),
("national climatic data center",{'cats':{'POSITIVE': 0}}),
("national institute aging",{'cats':{'POSITIVE': 0}}),
("genome wide",{'cats':{'POSITIVE': 0}}),
("national science foundation",{'cats':{'POSITIVE': 0}}),
("national center education statistics",{'cats':{'POSITIVE': 0}}),
("local binary pattern",{'cats':{'POSITIVE': 0}}),
("higher order statistics",{'cats':{'POSITIVE': 0}}),
("north east south east",{'cats':{'POSITIVE': 0}}),
("world health organization",{'cats':{'POSITIVE': 0}}),
("clinically diagnosed dementia alzheimer type",{'cats':{'POSITIVE': 0}}),
("u s geological survey",{'cats':{'POSITIVE': 0}}),
("c pittsburgh compound b",{'cats':{'POSITIVE': 0}}),
("principal component analysis",{'cats':{'POSITIVE': 0}}),
("u s department education",{'cats':{'POSITIVE': 0}}),
("u s trained",{'cats':{'POSITIVE': 0}}),
("coronaviruses",{'cats':{'POSITIVE': 0}}),
("ordinary least square",{'cats':{'POSITIVE': 0}}),
("materials and methods u s geological survey",{'cats':{'POSITIVE': 0}}),
("mean decrease accuracy",{'cats':{'POSITIVE': 0}}),
("undergraduate training program",{'cats':{'POSITIVE': 0}}),
("education for all",{'cats':{'POSITIVE': 0}}),
("foldcurv curvind gauscurv",{'cats':{'POSITIVE': 0}}),
("community atmosphere model",{'cats':{'POSITIVE': 0}}),
("annual percent change",{'cats':{'POSITIVE': 0}}),
("new york stock exchange",{'cats':{'POSITIVE': 0}}),
("average treatment effect",{'cats':{'POSITIVE': 0}}),
("demographic",{'cats':{'POSITIVE': 0}}),
("introduction coronaviruses",{'cats':{'POSITIVE': 0}}),
("autism brain imaging data exchange",{'cats':{'POSITIVE': 0}}),
("national cancer data base",{'cats':{'POSITIVE': 0}}),
("programa internacional avalia o alunos",{'cats':{'POSITIVE': 0}}),
("national science foundation s",{'cats':{'POSITIVE': 0}}),
("tes bakat skolastik",{'cats':{'POSITIVE': 0}}),
("oregon department environmental quality",{'cats':{'POSITIVE': 0}}),
("study variables facility oncology registry data standards",{'cats':{'POSITIVE': 0}}),
("state university entrance",{'cats':{'POSITIVE': 0}}),
("south china sea",{'cats':{'POSITIVE': 0}}),
("introduction human",{'cats':{'POSITIVE': 0}}),
("dulbecco s eagle s",{'cats':{'POSITIVE': 0}}),
("institute medical genetics cardiff",{'cats':{'POSITIVE': 0}}),
("sars cov 2 a",{'cats':{'POSITIVE': 0}}),
("disease coronaviruses",{'cats':{'POSITIVE': 0}}),
("office management budget s",{'cats':{'POSITIVE': 0}}),
("rna dependent rna",{'cats':{'POSITIVE': 0}}),
("t4 dna ligase",{'cats':{'POSITIVE': 0}}),
("u s department agriculture",{'cats':{'POSITIVE': 0}}),
("chicago parent program",{'cats':{'POSITIVE': 0}}),
("indian ocean",{'cats':{'POSITIVE': 0}}),
("turkish republic northern cyprus",{'cats':{'POSITIVE': 0}}),
("the south african schools act",{'cats':{'POSITIVE': 0}}),
("blood oxygen level dependent",{'cats':{'POSITIVE': 0}}),
("duke university s institutional review board",{'cats':{'POSITIVE': 0}}),
("international archive education data",{'cats':{'POSITIVE': 0}}),
("nonprofit technology network",{'cats':{'POSITIVE': 0}}),
("muller s opportunity tolearn",{'cats':{'POSITIVE': 0}}),
("global extreme sea level analysis",{'cats':{'POSITIVE': 0}}),
("the addenbrooke s cognitive examination",{'cats':{'POSITIVE': 0}}),
("information resource incorporated",{'cats':{'POSITIVE': 0}}),
("norwegian cognitive neurogenetics",{'cats':{'POSITIVE': 0}}),
("introduction cape hatteras national seashore",{'cats':{'POSITIVE': 0}}),
("heart study second generation cohort",{'cats':{'POSITIVE': 0}}),
("national hurricane center",{'cats':{'POSITIVE': 0}}),
("second follow up",{'cats':{'POSITIVE': 0}}),
("title iv",{'cats':{'POSITIVE': 0}}),
("jackknife ii",{'cats':{'POSITIVE': 0}}),
("statistics brief",{'cats':{'POSITIVE': 0}}),
("research cnn rnn based",{'cats':{'POSITIVE': 0}}),
("ct ct",{'cats':{'POSITIVE': 0}}),
("designed cnn",{'cats':{'POSITIVE': 0}}),
("pca and umap assisted k means",{'cats':{'POSITIVE': 0}}),
("in figure",{'cats':{'POSITIVE': 0}}),
("roundy frank",{'cats':{'POSITIVE': 0}}),
("abnormality recognition early ad",{'cats':{'POSITIVE': 0}}),
("lite ed2 5 top of atmosphere",{'cats':{'POSITIVE': 0}}),
("global forest resource assessment",{'cats':{'POSITIVE': 0}}),
("foodprint model",{'cats':{'POSITIVE': 0}}),
("local tci slr relied",{'cats':{'POSITIVE': 0}}),
("ad t1 weighted",{'cats':{'POSITIVE': 0}}),
("italy england sars cov 2",{'cats':{'POSITIVE': 0}}),
("wuhan italy",{'cats':{'POSITIVE': 0}}),
("the nsf",{'cats':{'POSITIVE': 0}}),
("comparison previous study the",{'cats':{'POSITIVE': 0}}),
("the netherlands",{'cats':{'POSITIVE': 0}}),
("international database",{'cats':{'POSITIVE': 0}}),
("assessment martin",{'cats':{'POSITIVE': 0}}),
("sas system windows",{'cats':{'POSITIVE': 0}}),
("section ii",{'cats':{'POSITIVE': 0}}),
("d cnn",{'cats':{'POSITIVE': 0}}),
("in table",{'cats':{'POSITIVE': 0}}),
("different adni 1database",{'cats':{'POSITIVE': 0}}),
("adni 1 adni 2",{'cats':{'POSITIVE': 0}}),
("results we",{'cats':{'POSITIVE': 0}}),
("whereas foland ross",{'cats':{'POSITIVE': 0}}),
("performance iq",{'cats':{'POSITIVE': 0}}),
("verbal iq",{'cats':{'POSITIVE': 0}}),
("nc flood risk information system",{'cats':{'POSITIVE': 0}}),
("nc sea level rise risk management study",{'cats':{'POSITIVE': 0}}),
("canada",{'cats':{'POSITIVE': 0}}),
("germany",{'cats':{'POSITIVE': 0}})
]
print ('training data loaded')
# + papermill={"duration": 32.378663, "end_time": "2021-09-25T01:31:26.858882", "exception": false, "start_time": "2021-09-25T01:30:54.480219", "status": "completed"} tags=[]
# spaCy v2-style setup: add a text classification component if one is not present
nlp = spacy.load('en_core_web_sm')
if 'textcat' not in nlp.pipe_names:
    textcat = nlp.create_pipe("textcat")
    nlp.add_pipe(textcat, last=True)
else:
    textcat = nlp.get_pipe("textcat")
textcat.add_label('POSITIVE')
# train only the textcat component; temporarily freeze every other pipe
other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'textcat']
n_iter = 15
with nlp.disable_pipes(*other_pipes):
    optimizer = nlp.begin_training()
    print("Training model...")
    for i in range(n_iter):
        losses = {}
        batches = minibatch(spacy_train_data, size=compounding(4, 32, 1.001))
        for batch in batches:
            texts, annotations = zip(*batch)
            nlp.update(texts, annotations, sgd=optimizer, drop=0.2, losses=losses)
with open('./spacy_model.pickle', 'wb') as f:
pickle.dump(nlp, f)
print('done')
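# The `compounding(4, 32, 1.001)` schedule used above grows the batch size geometrically from 4 toward a cap of 32. A minimal pure-Python sketch of that behavior (an assumption modeled on spaCy's documented `spacy.util.compounding`, not the library code itself):

```python
from itertools import islice

def compounding(start, stop, compound):
    # yield start, then multiply by `compound` at each step, capped at `stop`
    curr = float(start)
    while True:
        yield min(curr, stop)
        curr *= compound

sizes = list(islice(compounding(4, 32, 1.001), 3))
print([round(s, 6) for s in sizes])  # → [4.0, 4.004, 4.008004]
```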
# + papermill={"duration": 0.010879, "end_time": "2021-09-25T01:31:26.881376", "exception": false, "start_time": "2021-09-25T01:31:26.870497", "status": "completed"} tags=[]
| coleridge-spacy-model-training.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import plotly as py
import pandas as pd
# read_csv already returns a DataFrame, so no extra pd.DataFrame() wrapper is needed
df = pd.read_csv("https://raw.githubusercontent.com/BIOF309/group-project-hacking-pythonistas/master/notebooks/death.csv")
df.keys()
df_reduced = df[["Year","Cause Name",'State',"Deaths"]]
df_reduced_indexed = df_reduced.set_index(df_reduced["Cause Name"])
df_reduced_indexed
df_causename = df_reduced_indexed.loc["All causes"]
df_causename
df_causename_indexed = df_causename.set_index(df_causename["Year"])
df_causename_indexed
df_1999 = df_causename_indexed.loc[1999]
df_1999_state = df_1999[["State", "Deaths"]]
df_1999_stateindex = df_1999_state.set_index(df_1999_state["State"])
df_1999_stateindex
df_1999_noUSA = df_1999_stateindex.drop("United States")
df_1999_noUSA
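# The repeated set_index / .loc / .drop steps above can also be written as one chained boolean filter. The miniature DataFrame below is hypothetical illustration data, not values from death.csv:

```python
import pandas as pd

# hypothetical miniature of the death.csv layout, for illustration only
mini = pd.DataFrame({
    "Year": [1999, 1999, 2016],
    "Cause Name": ["All causes", "All causes", "All causes"],
    "State": ["Alabama", "United States", "Alabama"],
    "Deaths": [44806, 2391399, 52405],
})

# one chained expression equivalent to the set_index/.loc/.drop steps
deaths_1999 = (
    mini[(mini["Cause Name"] == "All causes")
         & (mini["Year"] == 1999)
         & (mini["State"] != "United States")]
    .set_index("State")["Deaths"]
)
print(deaths_1999.to_dict())  # → {'Alabama': 44806}
```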
# +
codes = ["AL","AK","AZ","AR","CA","CO","CT","DC","DE","FL","GA","HI","ID","IL","IN","IA","KS","KY","LA","ME","MD","MA",
"MI","MN","MS","MO","MT","NE","NV","NH","NJ","NM","NY","NC","ND","OH","OK","OR","PA","RI","SC","SD",
"TN","TX","UT","VT","VA","WA","WV","WI","WY"]
# +
data = [dict(type ='choropleth',
colorscale = [[0,"rgb(5, 10, 172)"],[0.35,"rgb(40, 60, 190)"],[0.5,"rgb(70, 100, 245)"],
[0.6,"rgb(90, 120, 245)"],[0.7,"rgb(106, 137, 247)"],[1,"rgb(220, 220, 220)"]],
autocolorscale=False,
reversescale = True,
locations = codes ,
z = df_1999_noUSA["Deaths"] ,
locationmode = "USA-states",
             text = df_1999_noUSA["Deaths"],  # hover text must align with z (national total excluded)
marker = dict(line = dict(color = 'rgb(255,255,255)',
width = 2)),
colorbar = dict(title = "Total Deaths"))]
layout = dict(title = 'Total Deaths in 1999 by State',
geo = dict(
scope='usa',
projection=dict(type='albers usa'),
showlakes = True,
lakecolor = 'rgb(255, 255, 255)'))
fig = dict(data = data, layout = layout)
py.offline.plot(fig, filename ='totaldeaths_1999.html')
# -
df_2016 = df_causename_indexed.loc[2016]
df_2016_state = df_2016[["State", "Deaths"]]
df_2016_stateindex = df_2016_state.set_index(df_2016_state["State"])
df_2016_stateindex
df_2016_noUSA = df_2016_stateindex.drop("United States")
df_2016_noUSA
# +
data_2 = [dict(type ='choropleth',
colorscale = [[0,"rgb(5, 10, 172)"],[0.35,"rgb(40, 60, 190)"],[0.5,"rgb(70, 100, 245)"],
[0.6,"rgb(90, 120, 245)"],[0.7,"rgb(106, 137, 247)"],[1,"rgb(220, 220, 220)"]],
autocolorscale=False,
reversescale = True,
locations = codes ,
z = df_2016_noUSA["Deaths"] ,
locationmode = "USA-states",
             text = df_2016_noUSA["Deaths"],  # hover text must align with z (national total excluded)
marker = dict(line = dict(color = 'rgb(255,255,255)',
width = 2)),
colorbar = dict(title = "Total Deaths"))]
layout_2 = dict(title = 'Total Deaths in 2016 by State',
geo = dict(
scope='usa',
projection=dict(type='albers usa'),
showlakes = True,
lakecolor = 'rgb(255, 255, 255)'))
fig = dict(data = data_2, layout = layout_2)
py.offline.plot(fig, filename ='totaldeaths_2016.html')
# +
list_diff = df_2016_noUSA["Deaths"]- df_1999_noUSA["Deaths"]
data_3 = [dict(type ='choropleth',
colorscale = [[0,"rgb(5, 10, 172)"],[0.35,"rgb(40, 60, 190)"],[0.5,"rgb(70, 100, 245)"],
[0.6,"rgb(90, 120, 245)"],[0.7,"rgb(106, 137, 247)"],[1,"rgb(220, 220, 220)"]],
autocolorscale=True,
reversescale = True,
locations = codes ,
z = list_diff ,
locationmode = "USA-states",
             text = df_2016_noUSA["Deaths"],  # hover text must align with z (national total excluded)
marker = dict(line = dict(color = 'rgb(255,255,255)',
width = 2)),
colorbar = dict(title = "Total Deaths"))]
layout_3 = dict(title = 'Increase in Deaths by State',
geo = dict(
scope='usa',
projection=dict(type='albers usa'),
showlakes = True,
lakecolor = 'rgb(255, 255, 255)'))
fig = dict(data = data_3, layout = layout_3)
py.offline.plot(fig, filename ='totaldeaths_diff.html')
# -
df_reduced_rate = df[["Year", "Cause Name", "State", "Deaths", "Age-adjusted Death Rate"]]
df_reduced_indexed_rate = df_reduced_rate.set_index(df_reduced_rate["Cause Name"])
df_causename_rate = df_reduced_indexed_rate.loc["All causes"]
df_causename_indexed_rate = df_causename_rate.set_index(df_causename_rate["Year"])
df_2016_rate = df_causename_indexed_rate.loc[2016]
df_2016_state_rate = df_2016_rate[["State", "Deaths", "Age-adjusted Death Rate"]]
df_2016_stateindex_rate = df_2016_state_rate.set_index(df_2016_state_rate["State"])
df_2016_stateindex_rate
df_2016_noUSA_rate = df_2016_stateindex_rate.drop("United States")
df_2016_noUSA_rate
import plotly.graph_objs as go
trace = [go.Scatter(x=df_2016_noUSA_rate["Deaths"], y=df_2016_noUSA_rate["Age-adjusted Death Rate"],
mode = "markers", text =df_2016_noUSA_rate['State'] )]
layout_2 = go.Layout(title = "Deaths vs. Age adjusted Death Rates", hovermode = "closest",
xaxis = dict(title ="Deaths"),
yaxis =dict(title= "Age-adjusted Death Rate"))
fig_2 = go.Figure(data = trace, layout = layout_2)
py.offline.plot(fig_2, filename ='totaldeaths_V_rate_2016.html')
| notebooks/Death_Surya_v2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # *args and **kwargs in Python
#
# ## args
# The special syntax *args in function definitions in Python is used to pass a non-keyworded, variable-length argument list to a function.
# - The syntax uses the symbol * to take in a variable number of arguments; by convention, it is paired with the name args.
# - *args lets a function take in more arguments than the number of formal parameters you defined, so any number of extra arguments (including zero) can be tacked on.
# - For example: to make a multiply function that takes any number of arguments and multiplies them all together, *args is a natural fit.
# - Inside the function, the variable associated with * is a tuple, meaning you can iterate over it, pass it to higher-order functions such as map and filter, etc.
# > Example 1
# +
def myFun(*argv):
for arg in argv:
print(arg,end=',')
myFun('Hello', 'Welcome', 'to', 'GeeksforGeeks')
# -
# > Example 2
# +
def myFun(arg1, *argv):
print ("First argument :", arg1)
for arg in argv:
print("Next argument through *argv :", arg)
myFun('Hello', 'Welcome', 'to', 'GeeksforGeeks')
# -
# > Example 3
# +
def myFun(arg1, arg2, arg3):
print("arg1:", arg1)
print("arg2:", arg2)
print("arg3:", arg3)
# Now we can use *args or **kwargs to
# pass arguments to this function :
args = ("Geeks", "for", "Geeks")
myFun(*args)
# -
# ## **kwargs
# The special syntax `**kwargs` in function definitions in Python is used to pass a keyworded, variable-length argument list. We use the name `kwargs` with the double star because the double star allows us to pass through keyword arguments (and any number of them).
# - A keyword argument is one where you provide a name for the value as you pass it into the function.
# - You can think of `kwargs` as a dictionary that maps each keyword to the value passed alongside it.
# - Historically the iteration order of `kwargs` was unspecified; since Python 3.6 it matches the order in which the arguments were passed.
# `**kwargs` allows you to pass a variable-length set of key-value pairs as arguments to a function. If you want to handle named arguments in a function, you should use `**kwargs`.
# > Example 1
# +
def greet_me(**kwargs):
for key, value in kwargs.items():
print("{0}:{1}".format(key, value))
greet_me(name='Hello',age='25',residence='France')
# -
# > Example 2
def myFun(arg1, **kwargs):
for key, value in kwargs.items():
print ("%s == %s" %(key, value))
myFun("Hi", first ='Geeks', mid ='for', last='Geeks')
# > Example 3
# +
def myFun(arg1, arg2, arg3):
print("arg1:", arg1)
print("arg2:", arg2)
print("arg3:", arg3)
kwargs = {"arg1" : "Geeks", "arg2" : "for", "arg3" : "Geeks"}
myFun(**kwargs)
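# > Example 4 (a minimal sketch, not part of the original set): `*args` and `**kwargs` can be combined in one signature — extra positional arguments land in `args`, extra keyword arguments in `kwargs`.

```python
def describe_call(required, *args, **kwargs):
    # args is a tuple of the extra positional arguments,
    # kwargs a dict of the extra keyword arguments
    return required, args, kwargs

r, a, k = describe_call("Hi", 1, 2, mode="fast")
print(r, a, k)
```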
| *args and**kwargs/args_kwargs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="E6qlUniWPXLL"
#
#
# 
#
# [](https://colab.research.google.com/github/JohnSnowLabs/nlu/blob/master/examples//colab/component_examples/multilingual/chinese_ner_pos_and_tokenization.ipynb)
#
#
# # Detect Named Entities (NER), Part of Speech Tags (POS) and Tokenize in Chinese
#
#
# + [markdown] id="qMM7GQjNRDIC"
# # Install NLU
# + id="NyzSofTuC6Wl" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1649992240978, "user_tz": -300, "elapsed": 101295, "user": {"displayName": "ah<NAME>", "userId": "02458088882398909889"}} outputId="736366c6-51a3-40e6-ccc1-c519c75e8320"
# !wget https://setup.johnsnowlabs.com/nlu/colab.sh -O - | bash
import nlu
# + [markdown] id="bY8e6Wr5RG5N"
# # Tokenize Chinese
# + colab={"base_uri": "https://localhost:8080/", "height": 655} id="9lKSMrKmCwSa" executionInfo={"status": "ok", "timestamp": 1649992294429, "user_tz": -300, "elapsed": 53484, "user": {"displayName": "<NAME>", "userId": "02458088882398909889"}} outputId="e40e6f49-3ced-461b-d379-e4ec78225cf3"
# Tokenize in chinese
import nlu
# pipe = nlu.load('zh.tokenize') This is an alias that gives you the same model
pipe = nlu.load('zh.segment_words')
# Chinese for '<NAME> and <NAME> don't share many opinions'
zh_data = ['唐纳德特朗普和安吉拉·默克尔没有太多意见']
df = pipe.predict(zh_data, output_level='token')
df
# + [markdown] id="jjKH8L9PRIIO"
# # Extract Chinese POS
# + colab={"base_uri": "https://localhost:8080/", "height": 133} id="Z5soJqWwQHeq" executionInfo={"status": "ok", "timestamp": 1649992305953, "user_tz": -300, "elapsed": 11544, "user": {"displayName": "<NAME>", "userId": "02458088882398909889"}} outputId="4e4b1778-5eaf-4166-da03-541666eea7a5"
# Extract Part of Speech
pipe = nlu.load('zh.pos')
zh_data = ['唐纳德特朗普和安吉拉·默克尔没有太多意见']
df = pipe.predict(zh_data, output_level='document')
df
# + [markdown] id="mGi5xPJGRLcc"
# # Extract Chinese NER
# + colab={"base_uri": "https://localhost:8080/", "height": 133} id="Zuc7qS_pDYsG" executionInfo={"status": "ok", "timestamp": 1649992343659, "user_tz": -300, "elapsed": 37714, "user": {"displayName": "<NAME>", "userId": "02458088882398909889"}} outputId="4b0d0099-1636-4d52-f7c1-5e191b01d343"
# Extract named chinese entities
pipe = nlu.load('zh.ner')
zh_data = ['唐纳德特朗普和安吉拉·默克尔没有太多意见']
df = pipe.predict(zh_data, output_level='document')
df
# + [markdown] id="2dFXYx8GROR0"
# # Translate Chinese extracted named entities to English
# + colab={"base_uri": "https://localhost:8080/", "height": 133} id="UHyNj4l3GXgn" executionInfo={"status": "ok", "timestamp": 1649992464586, "user_tz": -300, "elapsed": 33202, "user": {"displayName": "<NAME>", "userId": "02458088882398909889"}} outputId="42cf7107-0846-473c-d23a-7f65ef4a36e3"
# Translate Chinese extracted named entities to English
translate_pipe = nlu.load('zh.translate_to.en')
en_entities = translate_pipe.predict(df.classified_token.str.join('.').values.tolist())
en_entities
# + id="aNo9Yi9OOQWE"
| nlu/colab/component_examples/multilingual/chinese_ner_pos_and_tokenization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pickle
from tqdm.notebook import tqdm
with open('./feature_dict.pickle', 'rb') as file:
feature_dict = pickle.load(file)
name = 'train'
dataset = dict()
with open('./'+name+'.txt') as count_file:
    num_lines = sum(1 for _ in count_file)  # count lines without leaking the file handle
with tqdm(total=num_lines) as pbar:
with open('./'+name+'.txt', 'r') as file:
while True:
line = file.readline()
if not line:
break
data = line.rstrip().split(' ')
target = int(data[0])
qid = int(data[1][4:])
fts = np.zeros(len(feature_dict), dtype=np.float16)
for entry in data[2:]:
key, val = entry.split(':')
fts[feature_dict[int(key)]] = np.float16(val)
if qid in dataset.keys():
dataset[qid].append((target, fts))
else:
dataset[qid] = [(target, fts)]
pbar.update(1)
with open('./'+name+'.pickle', 'wb') as file:
pickle.dump(dataset, file)
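# The loop above assumes an SVMrank/LETOR-style line format, `<target> qid:<id> <feature>:<value> ...`.
# A single line parses like this (a minimal sketch with illustrative values):

```python
line = "2 qid:17 3:0.5 7:0.25\n"
data = line.rstrip().split(' ')
target = int(data[0])          # relevance label
qid = int(data[1][4:])         # strip the 'qid:' prefix
feats = {int(k): float(v) for k, v in (e.split(':') for e in data[2:])}
print(target, qid, feats)
```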
| dataset_to_dict.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Methods-and-Attributes" data-toc-modified-id="Methods-and-Attributes-1"><span class="toc-item-num">1 </span>Methods and Attributes</a></span><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#About-the-Author" data-toc-modified-id="About-the-Author-1.0.1"><span class="toc-item-num">1.0.1 </span>About the Author</a></span></li></ul></li></ul></li></ul></div>
# -
# # Methods and Attributes
# __Remember__
# * Methods ends with **parentheses**, while **attributes** don't
# * df.shape: Attribute
# * df.info(): Method
# import pandas
import pandas as pd
# read a dataset of top-rated IMDb movies into a DataFrame
movies = pd.read_csv('http://bit.ly/imdbratings')
# example method: show the first 5 rows
movies.head()
# example method: calculate summary statistics
movies.describe()
# example attribute: number of rows and columns
movies.shape
# example attribute: data type of each column
movies.dtypes
# use an optional parameter to the describe method to summarize only 'object' column
movies.describe(include='object')
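# The distinction can be checked directly: methods are callable, attributes are not (a minimal sketch on a toy frame rather than the IMDb data):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]})
print(callable(df.head))   # method: needs parentheses to run
print(callable(df.shape))  # attribute: accessed directly, not called
print(df.shape)
```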
# <h3>About the Author</h3>
# This repo was created by <a href="https://www.linkedin.com/in/jubayer28/" target="_blank"><NAME></a> <br>
# <a href="https://www.linkedin.com/in/jubayer28/" target="_blank"><NAME></a> is a student of Microbiology at Jagannath University and the founder of <a href="https://github.com/hdro" target="_blank">Health Data Research Organization</a>. He is also a team member of a bioinformatics research group known as Bio-Bio-1.
#
# <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
| book/_build/jupyter_execute/pandas/04-Methods and Attributes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Notebook 8: Bagging a simple binary classifier
# ## Learning Goals
# The goal of this notebook is to understand Bootstrap Aggregation or Bagging using a simple classifier. We will write code and try to gain intuition for why ensemble methods turn out to be so powerful (especially when we know features we care about).
#
# ## Overview
# In this notebook, we introduce the perceptron learning algorithm (PLA) that is often used for binary classification. Then we treat PLA as the base algorithm and demonstrate how to combine it with bootstrapping (i.e. bootstrap aggregation, bagging).
# ### Perceptron Learning algorithm (PLA): ###
#
# Suppose that we're given a set of $N$ observations each bearing $p$ features, $\textbf{x}_n=(x_1^{(n)},\cdots, x_p^{(n)})\in\mathbb{R}^p$, $n=1,\cdots, N$. The goal of binary classification is to relate these observations to their corresponding binary label $y_n \in\{+1,-1\}$. Concretely, this amounts to finding a function $h: \mathbb{R}^p\rightarrow \{+1,-1\}$ such that $h(\textbf{x}_n)$ is ideally the same as $y_n$. A perceptron accomplishes this feat by utilizing a set of weights $\textbf{w}=(w_0,w_1,\cdots, w_d)\in\mathbb{R}^{p+1}$ to construct $h$ so that labeling is done through
#
# $$
# h(\textbf{x}_n)=\text{sign }\left(w_0+\sum_{i=1}^p w_ix_i^{(n)}\right) =\text{sign }(\textbf{w}^T\tilde{\textbf{x}}_n),
# $$
# where $\tilde{\textbf{x}}_n=(1,x_1^{(n)},\cdots, x_p^{(n)}) = (1,\textbf{x}_n)$. The perceptron can be viewed as the zero-temperature limit of the logistic regression where the sigmoid (Fermi-function) becomes a step function.
#
# PLA begins with randomized weights. It then selects a point from the training set at random. If this point, say, $\textbf{x}_n$, is misclassified, namely, $y_n\neq \text{sign }(\textbf{w}^T\tilde{\textbf{x}}_n)$, weights are updated according to
# $$
# \textbf{w}\leftarrow \textbf{w}+ y_n\tilde{\textbf{x}}_n
# $$
# Otherwise, $\textbf{w}$ is preserved and PLA moves on to select another point. This procedure continues until a specified threshold is met, after which PLA outputs $h$. PLA is clearly an online algorithm, since it does not treat all available data at the same time. Instead, it learns the weights as it progresses through the training set one point at a time. The update rule is built on the intuition that whenever a mistake is encountered, the weights are corrected by moving in the right direction.
#
# The following implementation of perceptron class is adapted from [this blog](https://datasciencelab.wordpress.com/2014/01/10/machine-learning-classics-the-perceptron/). It considers 2-feature observations in $[-1,1]\times[-1,1]$. This means that we can write down the weight vector as $\textbf{w}=(w_0,w_1,w_2)$. Prediction for any point $\textbf{x}_n=(x_1^{(n)}, x_2^{(n)})$ in this domain is therefore
# $$
# h(\textbf{x}_n)=\text{sign} (w_0+w_1x_1^{(n)}+w_2x_2^{(n)})
# $$
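# A single PLA correction step can be sketched in a few lines (a minimal sketch with illustrative values; `x_tilde` is the bias-augmented point $\tilde{\textbf{x}}_n$):

```python
import numpy as np

w = np.zeros(3)                        # start from zero weights
x_tilde = np.array([1.0, 0.5, -0.2])   # (1, x1, x2) -- illustrative values
y = 1                                  # true label
if np.sign(w @ x_tilde) != y:          # sign(0) = 0 != 1, so the point is misclassified
    w = w + y * x_tilde                # PLA update rule
print(w)
```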
# +
import numpy as np
import random
import os, subprocess
from random import randrange
import matplotlib.pyplot as plt
from operator import add
print(__doc__)
#import seaborn as sns
class Perceptron:
    def __init__(self, N, bootstrap_data, inputX = None, inputS = None):
        # Random linearly separated data
        xA,yA,xB,yB = [random.uniform(-1, 1) for i in range(4)]
        self.V = np.array([xB*yA-xA*yB, yB-yA, xA-xB])
        if bootstrap_data is None:
            self.X = self.generate_points(N, inputX)
        else:
            self.X = bootstrap_data
def generate_points(self, N, inputX = None, inputS = None):
X = []
if (inputX is None) and (inputS is None):
for i in range(N):
x1,x2 = [random.uniform(-1, 1) for i in range(2)]
#x1 = random.uniform(-1,1)
#x1 = np.random.randn()
#x2 = np.sqrt(1-x1**2)+0.5*np.random.randn()
x = np.array([1,x1,x2])
s = int(np.sign(self.V.T.dot(x)))
X.append((x, s))
else:
for i in range(N):
x = inputX[i][0]
s = int(inputS[i])
X.append((x,s))
return X
def plot(self, mispts=None, vec=None, save=False):
fig = plt.figure(figsize=(5,5))
plt.xlim(-1,1)
plt.ylim(-1,1)
V = self.V
a, b = -V[1]/V[2], -V[0]/V[2]
l = np.linspace(-1,1)
plt.plot(l, a*l+b, 'k-')
cols = {1: 'r', -1: 'b'}
for x,s in self.X:
plt.plot(x[1], x[2], cols[s]+'o')
if mispts:
for x,s in mispts:
plt.plot(x[1], x[2], cols[s]+'.')
        if vec is not None:
aa, bb = -vec[1]/vec[2], -vec[0]/vec[2]
plt.plot(l, aa*l+bb, 'g-', lw=2)
if save:
if not mispts:
plt.title('N = %s' % (str(len(self.X))))
else:
plt.title('N = %s with %s test points' \
% (str(len(self.X)),str(len(mispts))))
plt.savefig('p_N%s' % (str(len(self.X))), \
dpi=200, bbox_inches='tight')
def classification_error(self, vec, pts=None):
# Error defined as fraction of misclassified points
if not pts:
pts = self.X
M = len(pts)
n_mispts = 0
for x,s in pts:
if int(np.sign(vec.T.dot(x))) != s:
n_mispts += 1
error = n_mispts / float(M)
return error
def choose_miscl_point(self, vec):
# Choose a random point among the misclassified
pts = self.X
mispts = []
for x,s in pts:
if int(np.sign(vec.T.dot(x))) != s:
mispts.append((x, s))
return mispts[random.randrange(0,len(mispts))]
def pla(self, save=False):
"""Perceptron learning algorithm"""
        # Initialize the weights to zero
w = np.zeros(3)
X, N = self.X, len(self.X)
it = 0
# Iterate until all points are correctly classified
while self.classification_error(w) != 0:
it += 1
# Pick random misclassified point
x, s = self.choose_miscl_point(w)
# Update weights
w += s*x
if save:
self.plot(vec=w)
plt.title('N = %s, Iteration %s\n' \
% (str(N),str(it)))
plt.savefig('p_N%s_it%s' % (str(N),str(it)), \
dpi=200, bbox_inches='tight')
self.w = w
def check_error(self, M, vec):
check_pts = self.generate_points(M)
return self.classification_error(vec, pts=check_pts)
# -
# The following function is not part of perceptron but will be useful in bagging.
def subsample(dataset, ratio=1.0):
sample = list()
n_sample = round(len(dataset) * ratio)
while len(sample) < n_sample:
index = randrange(len(dataset))
sample.append(dataset[index])
return sample
# To apply bagging, we first bootstrap $B$ sets from training set $\mathcal{D}$, each containing $M$ points: $\mathcal{D}_j$ with $|\mathcal{D}_j|=M,\,\forall j=1,\cdots, B$. Then we apply PLA to each bootstrap set $\mathcal{D}_j$ to learn the corresponding weights $\textbf{w}_j$. The bagging prediction is made through a majority vote:
# $$
# h(\textbf{x}_n)=\left\{
# \begin{array}{ll}
# 1 & \text{if}~ \sum_{i=1}^B \text{sign }(\textbf{w}_j^T{\tilde{\textbf{x}_n}}) \geq 0 \\
# -1 & \text{otherwise} \\
# \end{array}
# \right.
# $$
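# The majority vote above can be sketched with NumPy (a minimal sketch with random data; `W` is a B-by-3 array of per-bootstrap weights and `Xt` an N-by-3 array of augmented points — hypothetical names, not the variables used below):

```python
import numpy as np

rng = np.random.default_rng(0)
B, N = 5, 4
W = rng.standard_normal((B, 3))                                   # one weight vector per bootstrap set
Xt = np.column_stack([np.ones(N), rng.standard_normal((N, 2))])   # bias-augmented points
votes = np.sign(Xt @ W.T)                                         # N x B matrix of individual predictions
h = np.where(votes.sum(axis=1) >= 0, 1, -1)                       # majority vote, ties broken as +1
print(h)
```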
# +
n_training_samples = 100
n_bootstrap_samples = 25 # i.e. B
bootstrap_ratio = 0.1 # i.e. M = n_training_samples*bootstrap_ratio
w_blended = np.zeros([1,3])
p = Perceptron(n_training_samples, None)  # no bootstrap data: generate fresh points
for i in range(n_bootstrap_samples):
bootstrap_data = subsample(p.X, bootstrap_ratio)
pb = Perceptron(int(round(n_training_samples*bootstrap_ratio)), bootstrap_data)
pb.pla()
w_blended = np.concatenate((w_blended, [pb.w]), axis=0)
w_blended = np.delete(w_blended, 0, 0)
w_bag = np.sum(w_blended, axis = 0)/float(n_bootstrap_samples)
pts = p.X
sall = [0]*n_training_samples
for i in range(n_bootstrap_samples):
vec = w_blended[i]
stmp = list()
for x,s in pts:
stmp.append(int(np.sign(vec.T.dot(x))))
    sall = list(map(add, sall, stmp))
s_bag = np.sign(np.array(list(sall))/(float(n_bootstrap_samples)))
Xbag = p.generate_points(n_training_samples, pts, s_bag)
# +
fig, ax = plt.subplots(1, 1,figsize=(6,5))
cols = {1: 'r', -1: 'b'}
l = np.linspace(-1.5,1.5)
for i in range(n_training_samples):
plt.plot(Xbag[i][0][1], Xbag[i][0][2], cols[Xbag[i][1]]+'o')
plt.plot(Xbag[i][0][1], Xbag[i][0][2], cols[pts[i][1]]+'x', markersize=10)
for i in range(n_bootstrap_samples):
aa, bb = -w_blended[i][1]/w_blended[i][2], -w_blended[i][0]/w_blended[i][2]
plt.plot(l, aa*l+bb,'--', color = '0.75', lw= 0.5)
cc, dd = -w_bag[1]/w_bag[2], -w_bag[0]/w_bag[2]
plt.plot(l, cc*l+dd,'k--', lw= 1.5)
plt.xlim([-1.5,1.5])
plt.ylim([-1.5,1.5])
plt.xlabel('$x_1$', multialignment='left', fontweight='bold', fontsize=15)
plt.ylabel('$x_2$', multialignment='left', fontweight='bold', fontsize=15)
plt.title('Bagging: o, True label: x', multialignment='left', fontsize=20)
plt.show()
# -
# ### Exercise: ###
# <ul>
# <li> [Bagging: experiment] Try different ways to generate samples. This can be done by modifying the `generate_points()` function in the `perceptron` class. Next divide the samples into training and test sets. Does bagging actually help in making predictions on the test set? You may find the `check_error()` function useful.
# <li> [Bagging: experiment] Play with the size and number of bootstrap sets. Can you find a limit where bagging doesn't help at all in terms of out-of-sample performance?
#
#
#
# <li> [PLA: Theory] Let $\textbf{x}_n$ be the point that perceptron misclassified in the update round $t$ during which the weight used was $\textbf{w}_t$. This means that
# $$
# y_n\neq \text{sign }(\textbf{w}^T_t \tilde{\textbf{x}}_n),
# $$
# and the weight is updated according to
# $$
# \textbf{w}_{t+1}\leftarrow \textbf{w}_{t}+ y_n\tilde{\textbf{x}}_n.
# $$
# Then which of the following is true? State your reasons.
#
# **1)** $ \textbf{w}_{t+1}^T\tilde{\textbf{x}}_n=y_n$
#
# **2)** $ \text{sign }(\textbf{w}_{t+1}^T\tilde{\textbf{x}}_n)=y_n$
#
# **3)** $ y_n\textbf{w}_{t+1}^T\tilde{\textbf{x}}_n\ge y_n\textbf{w}_{t}^T\tilde{\textbf{x}}_n$
#
# **4)** $ y_n\textbf{w}_{t+1}^T\tilde{\textbf{x}}_n< y_n\textbf{w}_{t}^T\tilde{\textbf{x}}_n$
#
# <li> [PLA: Theory] Show the following: let a sequence of examples $\mathcal{D}=\{(\textbf{x}_1,y_1),(\textbf{x}_2,y_2),\cdots,(\textbf{x}_N,y_N)\}$ be given. Suppose that there exists a perfect $\textbf{w}_f$ such that $y_n=\text{sign} (\textbf{w}_f^T \tilde{\textbf{x}}_n)$, $\forall n=1,\cdots, N$. Denote $R^2=\smash{\displaystyle\max_{n}} \,||\tilde{\textbf{x}}_n||^2$ and $\rho=\smash{\displaystyle \min_{n}} \left(\,y_n \frac{\textbf{w}_f^T}{||\textbf{w}_f||} \tilde{\textbf{x}}_n\right)$. Then after $\tau$ mistake corrections through perceptron learning algorithm (PLA),
# $$
# \frac{\textbf{w}_f^T}{||\textbf{w}_f||}\frac{\textbf{w}_\tau}{||\textbf{w}_\tau||}\ge \sqrt{\tau} \left(\frac{\rho}{R}\right).
# $$
# Furthermore, the total number of mistakes that PLA makes on this sequence is at most $(R/\rho)^2$.
#
# <li> [Theory: PLA] What's the physical meaning of $R$ and $\rho$ defined above? What does the bound you showed imply? Deduce from above that PLA actually aligns $\textbf{w}_t$ to the perfect $\textbf{w}_f$. In other words, *PLA actually learns how to classify*!!
# </ul>
#
| jupyter_notebooks/notebooks/NB8_CVIII-bagging_perceptron.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="dETxryZs1RT9" executionInfo={"status": "ok", "timestamp": 1604319912324, "user_tz": -330, "elapsed": 25893, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhtNnEEs3Vpa6DcPA5XsADQsENAWaVpGXrIB3zI=s64", "userId": "16784833160241300445"}} outputId="6806eaac-8553-4953-dd4b-aead61ae2782" colab={"base_uri": "https://localhost:8080/"}
from google.colab import drive
drive.mount('/content/drive')
# + id="CCUlX7JGgcW1"
# project directory
current_dir = 'Home Credit_Kaggle'
# set the project folder as current working directory
import os
complete_path = os.path.join('/content/drive/My Drive/Colab Notebooks/',current_dir)
os.chdir(complete_path)
# create output folder for the eda if not already present
out_path = os.path.join(complete_path,'eda_basic')
if not os.path.isdir(out_path):
os.mkdir(out_path)
# + id="Jzih7JIcjAeL"
import numpy as np
import pandas as pd
# + [markdown] id="kpnyAcRKg67n"
# #High Level Data Analysis
# + [markdown] id="6ee0p3DStEPt"
# In this notebook we perform some very basic EDA on the seven data files, to get a gist of what the data is and a feel for it.
# + [markdown] id="yh_2mBXRi6WN"
# # Check for each file
# 1. which columns are categorical and which are numerical (by determining the number of unique values)
# 2. percentage of null values in each column
#
# + [markdown] id="5vJgYJDK0qJN"
##Basically, for each file we will check the following:
# 1. Which columns are categorical and which are numerical: we will print a sample of values in each field and determine the number of unique values. This helps establish which fields are numerical and which are categorical.
# 2. Percentage of null values in each column: this could be an important criterion in data selection and cleaning. Since there are seven files, each with considerable data and hence quite high dimensionality, the percentage of null values is a simple first criterion for selecting relevant features.
#
# We will print the above information in the notebook and also store it to CSV, since it could be useful in later stages.
#
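# As a compact alternative to the per-column loops used below, pandas can compute both statistics vectorised — `nunique()` for distinct values and `isna().mean()` for the null fraction (a minimal sketch on a toy frame, not the actual data files):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'cat': ['a', 'b', 'a', None], 'num': [1.0, 2.0, np.nan, 4.0]})
summary = pd.DataFrame({
    'n_unique': df.nunique(),            # NaN is excluded from the count by default
    'pct_null': df.isna().mean() * 100,  # percentage of null values per column
})
print(summary)
```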
# + [markdown] id="4_IeTMNkg_Qr"
# ## Application Train CSV file
# + id="JqwiFFhdjEcK" executionInfo={"status": "ok", "timestamp": 1604320108142, "user_tz": -330, "elapsed": 8518, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhtNnEEs3Vpa6DcPA5XsADQsENAWaVpGXrIB3zI=s64", "userId": "16784833160241300445"}} outputId="97780bd8-9603-471e-f8a0-60c5e5933b46" colab={"base_uri": "https://localhost:8080/", "height": 338}
# load application_train.csv
app_train = pd.read_csv('data/application_train.csv')
print(app_train.shape)
app_train.head()
# + id="OdsRpkS6jZrz" executionInfo={"status": "ok", "timestamp": 1604320134129, "user_tz": -330, "elapsed": 9075, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhtNnEEs3Vpa6DcPA5XsADQsENAWaVpGXrIB3zI=s64", "userId": "16784833160241300445"}} outputId="dcb30413-9a89-4a02-e162-01def9c3a76d" colab={"base_uri": "https://localhost:8080/"}
# check for unique values and null values in each column
# initialize output spool
f = open('eda_basic/eda_of_fields_app_train.csv','w')
# print header
print('Sno Column name \t No of unique values \t Percentage of Upto 2 Unique values')
print(' in columns \t\t null values')
for i,col in enumerate(app_train.columns):
# below columns have nan + string values so np.unique to find unique values, does not work
if (col not in ('NAME_TYPE_SUITE','OCCUPATION_TYPE','FONDKAPREMONT_MODE',
'HOUSETYPE_MODE','WALLSMATERIAL_MODE','EMERGENCYSTATE_MODE')):
# for homogenous columns (string/number only or nan + number)
v = np.unique(app_train[col]) # unique values
l = len(v) # no of unique values
n = (sum(app_train[col].isna())/len(app_train[col]))*100 # % of null values
# print the row on screen
print("{:3d} {:<30} {:7d} \t {:.2f}% \t {}".format(i,col,l,n,v[:2]))
# store the important data in output spool
f.write("{},{},{},{:.2f}\n".format(i,col,l,n))
else:
# for heterogenous columns (string + nan)
v = list(set(app_train[col])) # unique values
l = len(v) - 1 # no of unique values, subtracted 1 to remove count of nan
n = (sum(app_train[col].isna())/len(app_train[col])) * 100 # % of null values
# print the row on screen
print("{:3d} {:<30} {:<7d} \t {:.2f}% \t {}".format(i,col,l,n,v[:3]))
# store the important data in output spool
f.write("{},{},{},{:.2f}\n".format(i,col,l,n))
f.close()
# + [markdown] id="uSKcq5kh1XwA"
# ###Observations
#
# 1. Fields with non-numeric values can be treated as Categorical fields
# 2. Other fields can be treated as Numerical fields
# 3. A lot of fields have more than 50% values as NULL. It could be a good idea to skip such fields, or in case of categorical fields use NULL as a categorical value.
# + [markdown] id="7k-CNdCmh1cz"
# ## Bureau and Bureau balance CSV files
# + id="4NeirIDaiRZV" executionInfo={"status": "ok", "timestamp": 1604320493881, "user_tz": -330, "elapsed": 5913, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhtNnEEs3Vpa6DcPA5XsADQsENAWaVpGXrIB3zI=s64", "userId": "16784833160241300445"}} outputId="644c1976-e256-4f17-f12e-04716cf571d4" colab={"base_uri": "https://localhost:8080/", "height": 291}
# load bureau.csv
bureau = pd.read_csv('data/bureau.csv')
print(bureau.shape)
bureau.head()
# + id="EkjoFGRD4E8u" executionInfo={"status": "ok", "timestamp": 1604320502532, "user_tz": -330, "elapsed": 11055, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhtNnEEs3Vpa6DcPA5XsADQsENAWaVpGXrIB3zI=s64", "userId": "16784833160241300445"}} outputId="50ffb136-d7a0-4548-a926-be9bb5d74500" colab={"base_uri": "https://localhost:8080/"}
# check for unique values and null values in each column
# initialize output spool
f = open('eda_basic/eda_of_fields_bureau.csv','w')
# print header
print('Sno Column name \t No of unique values \t Percentage of Upto 2 Unique values')
print(' in columns \t\t null values')
for i,col in enumerate(bureau.columns):
v = np.unique(bureau[col]) # unique values
l = len(v) # no of unique values
n = (sum(bureau[col].isna())/len(bureau[col]))*100 # % of null values
# print the row on screen
print("{:3d} {:<30} {:7d} \t {:.2f}% \t {}".format(i,col,l,n,v[:5]))
# store the important data in output spool
f.write("{},{},{},{:.2f}\n".format(i,col,l,n))
f.close()
# + [markdown] id="o44m19AP2nEA"
# ###Observations
#
# 1. Fields with non-numeric values can be treated as Categorical fields
# 2. Other fields can be treated as Numerical fields
# 3. Two fields have more than 65% values as NULL (AMT_CREDIT_MAX_OVERDUE and AMT_ANNUITY). It could be a good idea to skip such fields.
# + id="Dinntx01midn" executionInfo={"status": "ok", "timestamp": 1604320512740, "user_tz": -330, "elapsed": 18181, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhtNnEEs3Vpa6DcPA5XsADQsENAWaVpGXrIB3zI=s64", "userId": "16784833160241300445"}} outputId="c7be93e9-baa8-4d24-958a-cd1642243a8a" colab={"base_uri": "https://localhost:8080/", "height": 219}
# load bureau_balance.csv
bureau_bal = pd.read_csv('data/bureau_balance.csv')
print(bureau_bal.shape)
bureau_bal.head()
# + id="WN82ylwkFA6I" executionInfo={"status": "ok", "timestamp": 1604320547909, "user_tz": -330, "elapsed": 44303, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/<KEY>", "userId": "16784833160241300445"}} outputId="3d264e88-ece0-439f-e44c-1cf67513da83" colab={"base_uri": "https://localhost:8080/"}
# check for unique values and null values in each column
# initialize output spool
f = open('eda_basic/eda_of_fields_bureau_bal.csv','w')
# print header
print('Sno Column name \t No of unique values \t Percentage of Upto 2 Unique values')
print(' in columns \t\t null values')
for i,col in enumerate(bureau_bal.columns):
v = np.unique(bureau_bal[col]) # unique values
l = len(v) # no of unique values
n = (sum(bureau_bal[col].isna())/len(bureau_bal[col]))*100 # % of null values
# print the row on screen
print("{:3d} {:<30} {:7d} \t {:.2f}% \t {}".format(i,col,l,n,v[:5]))
# store the important data in output spool
f.write("{},{},{},{:.2f}\n".format(i,col,l,n))
f.close()
# + [markdown] id="dny0Cbpu22-c"
# ###Observations
#
# 1. There are only 3 fields in this file. Other than ID field, MONTHS_BALANCE can be treated as numerical and STATUS can be treated as categorical.
# 2. This file does not have any NULL values despite being such a huge record count (27M records), which is remarkable
# + [markdown] id="-Vq277e38i9B"
# ## Previous Applications CSV file
# + id="00JAzXE-8phA" executionInfo={"status": "ok", "timestamp": 1604320558069, "user_tz": -330, "elapsed": 45645, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhtNnEEs3Vpa6DcPA5XsADQsENAWaVpGXrIB3zI=s64", "userId": "16784833160241300445"}} outputId="8bd89dbf-b3ec-4b08-ca27-150791b63f81" colab={"base_uri": "https://localhost:8080/", "height": 309}
# load previous_application.csv
prev_app = pd.read_csv('data/previous_application.csv')
print(prev_app.shape)
prev_app.head()
# + id="QlB9V2SV8zPk" executionInfo={"status": "ok", "timestamp": 1604320590660, "user_tz": -330, "elapsed": 32571, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhtNnEEs3Vpa6DcPA5XsADQsENAWaVpGXrIB3zI=s64", "userId": "16784833160241300445"}} outputId="e44a1848-b6e4-4086-c64b-7d9b981445bd" colab={"base_uri": "https://localhost:8080/"}
# check for unique values and null values in each column
# initialize output spool
f = open('eda_basic/eda_of_fields_prev_applications.csv','w')
# print header
print('Sno Column name \t No of unique values \t Percentage of Upto 2 Unique values')
print(' in columns \t\t null values')
for i,col in enumerate(prev_app.columns):
# below columns have nan + string values so np.unique does not work
if (col not in ('NAME_TYPE_SUITE','PRODUCT_COMBINATION')):
# for homogenous columns (string/number only or nan + number)
v = np.unique(prev_app[col]) # unique values
l = len(v) # no of unique values
n = (sum(prev_app[col].isna())/len(prev_app[col]))*100 # % of null values
# print the row on screen
print("{:3d} {:<30} {:7d} \t {:.2f}% \t {}".format(i,col,l,n,v[:2]))
# store the important data in output spool
f.write("{},{},{},{:.2f}\n".format(i,col,l,n))
else:
# for heterogenous columns (string + nan)
v = list(set(prev_app[col])) # unique values
l = len(v) - 1 # no of unique values, subtracted 1 to remove count of nan
n = (sum(prev_app[col].isna())/len(prev_app[col])) * 100 # % of null values
# print the row on screen
print("{:3d} {:<30} {:<7d} \t {:.2f}% \t {}".format(i,col,l,n,v[:3]))
# store the important data in output spool
f.write("{},{},{},{:.2f}\n".format(i,col,l,n))
f.close()
# + [markdown] id="p9C4GJAG9w3E"
# ###Observations
#
# 1. This file is quite similar to Application Train file in structure. Therefore, fields with non-numerical values can be treated as categorical fields.
# 2. Other fields can be treated as Numerical fields
# 3. The last 6 fields have about 40% null values. Beyond these, RATE_INTEREST_PRIMARY and RATE_INTEREST_PRIVILEGED are practically unusable since 99% of their values are null, while RATE_DOWN_PAYMENT, AMT_DOWN_PAYMENT and NAME_TYPE_SUITE have almost half of their values null.
# + [markdown] id="NIN7ulkFtgVb"
# ## POS Cash Balance CSV file
# + id="DieNlE-Dtn2R" executionInfo={"status": "ok", "timestamp": 1604320643831, "user_tz": -330, "elapsed": 9832, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhtNnEEs3Vpa6DcPA5XsADQsENAWaVpGXrIB3zI=s64", "userId": "16784833160241300445"}} outputId="11040087-f8ca-4bf9-d5c2-056e5f90d517" colab={"base_uri": "https://localhost:8080/", "height": 219}
# load POS_CASH_balance.csv
pos_cash_bal = pd.read_csv('data/POS_CASH_balance.csv')
print(pos_cash_bal.shape)
pos_cash_bal.head()
# + id="jeU97PFFt1ih" executionInfo={"status": "ok", "timestamp": 1604320697727, "user_tz": -330, "elapsed": 23118, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhtNnEEs3Vpa6DcPA5XsADQsENAWaVpGXrIB3zI=s64", "userId": "16784833160241300445"}} outputId="aa7290d2-615d-4167-8a4c-fe66a0574963" colab={"base_uri": "https://localhost:8080/"}
# check for unique values and null values in each column
# initialize output spool
f = open('eda_basic/eda_of_fields_pos_cash_bal.csv','w')
# print header
print('Sno Column name \t No of unique values \t Percentage of Upto 5 Unique values')
print(' in columns \t\t null values')
for i,col in enumerate(pos_cash_bal.columns):
v = np.unique(pos_cash_bal[col]) # unique values
l = len(v) # no of unique values
n = (sum(pos_cash_bal[col].isna())/len(pos_cash_bal[col]))*100 # % of null values
# print the row on screen
print("{:3d} {:<30} {:7d} \t {:.2f}% \t {}".format(i,col,l,n,v[:5]))
# store the important data in output spool
f.write("{},{},{},{:.2f}\n".format(i,col,l,n))
f.close()
# + [markdown] id="eRYyIvnj5B5s"
# ### Observations
#
# 1. Only one field, NAME_CONTRACT_STATUS, can be treated as a Categorical field
# 2. Other fields can be treated as Numerical fields
# 3. Only two fields have null values, and even those two fields have less than 1% of their values null.
# + [markdown] id="Bz6suh9My2OH"
# ## Instalments Payments CSV file
# + id="TefaA22Gy47t" executionInfo={"status": "ok", "timestamp": 1604320721080, "user_tz": -330, "elapsed": 19944, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhtNnEEs3Vpa6DcPA5XsADQsENAWaVpGXrIB3zI=s64", "userId": "16784833160241300445"}} outputId="0738148c-8e18-44a0-da22-9a38c7891398" colab={"base_uri": "https://localhost:8080/", "height": 219}
# load installments_payments.csv
inst_payments = pd.read_csv('data/installments_payments.csv')
print(inst_payments.shape)
inst_payments.head()
# + id="hCbGcdw6y6w0" executionInfo={"status": "ok", "timestamp": 1604320741086, "user_tz": -330, "elapsed": 36065, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhtNnEEs3Vpa6DcPA5XsADQsENAWaVpGXrIB3zI=s64", "userId": "16784833160241300445"}} outputId="2b8abe5a-025b-43e7-82a9-af507ce072c5" colab={"base_uri": "https://localhost:8080/"}
# check for unique values and null values in each column of bureau csv file
# initialize output spool
f = open('eda_basic/eda_of_fields_instalments_payments.csv','w')
# print header
print('Sno Column name \t No of unique values \t Percentage of Upto 5 Unique values')
print(' in columns \t\t null values')
for i,col in enumerate(inst_payments.columns):
v = np.unique(inst_payments[col]) # unique values
l = len(v) # no of unique values
n = (sum(inst_payments[col].isna())/len(inst_payments[col]))*100 # % of null values
# print the row on screen
print("{:3d} {:<30} {:7d} \t {:.2f}% \t {}".format(i,col,l,n,v[:5]))
# store the important data in output spool
f.write("{},{},{},{:.2f}\n".format(i,col,l,n))
f.close()
# + [markdown] id="vM3KoX9n6MtJ"
# ### Observations
#
# 1. This is the only file in which all values are numerical
# 2. The data is remarkably clean, with almost no null values in any field
# + [markdown] id="b_iUM0g2z-vM"
# ## Credit Card Balance CSV file
# + id="T19S1RN30Aee" executionInfo={"status": "ok", "timestamp": 1604320755534, "user_tz": -330, "elapsed": 45870, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhtNnEEs3Vpa6DcPA5XsADQsENAWaVpGXrIB3zI=s64", "userId": "16784833160241300445"}} outputId="c4945055-7d07-43b9-e651-506eb21772c4" colab={"base_uri": "https://localhost:8080/", "height": 239}
# load credit_card_balance.csv
credit_card_bal = pd.read_csv('data/credit_card_balance.csv')
print(credit_card_bal.shape)
credit_card_bal.head()
# + id="fTPIYvyg0Cnh" executionInfo={"status": "ok", "timestamp": 1604320771643, "user_tz": -330, "elapsed": 59662, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhtNnEEs3Vpa6DcPA5XsADQsENAWaVpGXrIB3zI=s64", "userId": "16784833160241300445"}} outputId="e4c63798-0161-47cd-bd86-da1ad9d93c9e" colab={"base_uri": "https://localhost:8080/"}
# check for unique values and null values in each column
# initialize output spool
f = open('eda_basic/eda_of_fields_credit_card_balance.csv','w')
# print header
print('Sno Column name \t No of unique values \t Percentage of Upto 5 Unique values')
print(' in columns \t\t null values')
for i,col in enumerate(credit_card_bal.columns):
v = np.unique(credit_card_bal[col]) # unique values
l = len(v) # no of unique values
n = (sum(credit_card_bal[col].isna())/len(credit_card_bal[col]))*100 # % of null values
# print the row on screen
print("{:3d} {:<30} {:7d} \t {:.2f}% \t {}".format(i,col,l,n,v[:5]))
# store the important data in output spool
f.write("{},{},{},{:.2f}\n".format(i,col,l,n))
f.close()
# + [markdown] id="3gEiph-P9Y2V"
# ### Observations
#
# 1. Only one field (NAME_CONTRACT_STATUS) can be treated as a Categorical field
# 2. Other fields can be treated as Numerical fields
# 3. The null values for this file are moderate and less than 20% for all fields. All fields can be used after imputation.
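# Since the null fractions are moderate, a minimal sketch of median imputation with pandas -- the toy series below is an illustrative stand-in for a credit-card-balance field:

```python
import numpy as np
import pandas as pd

# toy field with one missing entry (illustrative values)
col = pd.Series([10.0, 20.0, np.nan, 40.0, 30.0])

filled = col.fillna(col.median())  # median of the observed values is 25.0
print(filled.tolist())  # → [10.0, 20.0, 25.0, 40.0, 30.0]
```

# scikit-learn's SimpleImputer offers the same strategies when imputation needs to live inside a model pipeline.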
# + [markdown] id="CzDTHxgOjaaW"
# # Check whether the dataset is balanced or not
# + id="SNgdPZsLjZ9i" executionInfo={"status": "ok", "timestamp": 1604320773463, "user_tz": -330, "elapsed": 1808, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhtNnEEs3Vpa6DcPA5XsADQsENAWaVpGXrIB3zI=s64", "userId": "16784833160241300445"}} outputId="f19dc966-88af-4296-9e73-0cc6d94ae15b" colab={"base_uri": "https://localhost:8080/"}
import seaborn as sns
import matplotlib.pyplot as plt
# extract unique Y values and their counts
Y_unique, Y_counts = np.unique(app_train['TARGET'],return_counts=True)
# plot the variation
sns.barplot(x=Y_unique, y=Y_counts)
plt.show()
# + [markdown] id="l6SzL1-U243V"
# ##Conclusion
# The dataset is highly imbalanced, so it will be challenging to avoid overfitting and achieve a high AUC.
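# One common mitigation for such imbalance is class weighting. A minimal sketch of computing "balanced" weights from label counts -- the toy labels are an illustrative assumption, not the real TARGET column:

```python
import numpy as np

# toy TARGET column: heavily imbalanced, roughly like the real dataset
y = np.array([0] * 92 + [1] * 8)

classes, counts = np.unique(y, return_counts=True)
# "balanced" heuristic: weight_c = n_samples / (n_classes * count_c)
weights = {int(c): len(y) / (len(classes) * n) for c, n in zip(classes, counts)}
print(weights)  # the minority class gets the larger weight
```

# Such a dict can be passed to many classifiers via a class_weight-style parameter.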
| 3. EDA_Basic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + nbpresent={"id": "0dfe514f-a5ac-4f6e-8f15-64d89406b0ad"}
# %load_ext watermark
# %watermark -d -u -a '<NAME>, <NAME>' -v -p numpy,scipy,matplotlib
# + nbpresent={"id": "2de13356-e9ae-466c-89d1-50618945c658"}
# %matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
# + [markdown] nbpresent={"id": "c6e7fe3e-13df-4169-89bc-0dad2fc6e579"}
# # SciPy 2016 Scikit-learn Tutorial
# + [markdown] nbpresent={"id": "4a9d75ee-def8-451e-836f-707a63d8ea90"}
# # Unsupervised learning: Hierarchical and density-based clustering algorithms
# + [markdown] nbpresent={"id": "2e676319-4de0-4ee0-84ec-f525353b5195"}
# In a previous notebook, "08 Unsupervised Learning - Clustering.ipynb", we introduced one of the essential and widely used clustering algorithms, K-means. One of the advantages of K-means is that it is extremely easy to implement, and it is also computationally very efficient compared to other clustering algorithms. However, we've seen that one of the weaknesses of K-Means is that it only works well if the data can be grouped into a globular or spherical shape. Also, we have to assign the number of clusters, *k*, *a priori* -- this can be a problem if we have no prior knowledge about how many clusters we expect to find.
# + [markdown] nbpresent={"id": "7f44eab5-590f-4228-acdb-4fd1d187a441"}
# In this notebook, we will take a look at 2 alternative approaches to clustering, hierarchical clustering and density-based clustering.
# + [markdown] nbpresent={"id": "a9b317b4-49cb-47e0-8f69-5f6ad2491370"}
# # Hierarchical Clustering
# + [markdown] nbpresent={"id": "d70d19aa-a949-4942-89c0-8c4911bbc733"}
# One nice feature of hierarchical clustering is that we can visualize the results as a dendrogram, a hierarchical tree. Using the visualization, we can then decide how "deep" we want to cluster the dataset by setting a "depth" threshold. Or in other words, we don't need to make a decision about the number of clusters upfront.
#
# **Agglomerative and divisive hierarchical clustering**
#
# Furthermore, we can distinguish between 2 main approaches to hierarchical clustering: Divisive clustering and agglomerative clustering. In agglomerative clustering, we start with each sample as its own cluster and iteratively merge the closest clusters into bigger ones -- we can see it as a bottom-up approach for building the clustering dendrogram.
# In divisive clustering, however, we start with the whole dataset as one cluster, and we iteratively split it into smaller subclusters -- a top-down approach.
#
# In this notebook, we will use **agglomerative** clustering.
# + [markdown] nbpresent={"id": "d448e9d1-f80d-4bf4-a322-9af800ce359c"}
# **Single and complete linkage**
#
# Now, the next question is how we measure the similarity between samples. One approach is the familiar Euclidean distance metric that we already used via the K-Means algorithm. As a refresher, the distance between 2 m-dimensional vectors $\mathbf{p}$ and $\mathbf{q}$ can be computed as:
#
# \begin{align} \mathrm{d}(\mathbf{q},\mathbf{p}) & = \sqrt{(q_1-p_1)^2 + (q_2-p_2)^2 + \cdots + (q_m-p_m)^2} \\[8pt]
# & = \sqrt{\sum_{j=1}^m (q_j-p_j)^2}.\end{align}
#
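# The formula above translates directly into NumPy; the two sample vectors here are illustrative:

```python
import numpy as np

p = np.array([1.0, 2.0, 3.0])
q = np.array([4.0, 6.0, 3.0])

# d(q, p) = sqrt(sum_j (q_j - p_j)^2)
d = np.sqrt(np.sum((q - p) ** 2))
print(d)  # → 5.0
```

# np.linalg.norm(q - p) computes the same quantity in a single call.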
# + [markdown] nbpresent={"id": "045c17ed-c253-4b84-813b-0f3f2c4eee3a"}
# However, that's the distance between 2 samples. Now, how do we compute the similarity between subclusters of samples in order to decide which clusters to merge when constructing the dendrogram? I.e., our goal is to iteratively merge the most similar pairs of clusters until only one big cluster remains. There are many different approaches to this, for example single and complete linkage.
#
# In single linkage, we take the pair of the most similar samples (based on the Euclidean distance, for example) in each cluster, and merge the two clusters which have the most similar 2 members into one new, bigger cluster.
#
# In complete linkage, we compare the pairs of the two most dissimilar members of each cluster with each other, and we merge the 2 clusters where the distance between its 2 most dissimilar members is smallest.
#
# 
#
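# The difference between the two criteria can be seen by running SciPy's linkage with each method on the same toy points (the points below are illustrative): the root of the dendrogram sits lower under single linkage, because it only needs one close pair between the final two clusters.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# two tight pairs of points plus one point in between
pts = np.array([[0.0, 0.0], [0.0, 0.1],
                [5.0, 0.0], [5.0, 0.1],
                [2.5, 4.0]])

Z_single = linkage(pts, metric='euclidean', method='single')
Z_complete = linkage(pts, metric='euclidean', method='complete')

# column 2 of a linkage matrix holds the merge distances;
# the last row is the final (root) merge
print(Z_single[-1, 2], Z_complete[-1, 2])  # single < complete here
```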
# + [markdown] nbpresent={"id": "b6cc173c-044c-4a59-8a51-ec81eb2a1098"}
# To see the agglomerative, hierarchical clustering approach in action, let us load the familiar Iris dataset -- pretending we don't know the true class labels and want to find out how many different flower species it consists of:
# + nbpresent={"id": "b552a94c-9dc1-4c76-9d9b-90a47cd7811a"}
from sklearn import datasets
iris = datasets.load_iris()
X = iris.data[:, [2, 3]]
y = iris.target
n_samples, n_features = X.shape
plt.scatter(X[:, 0], X[:, 1], c=y);
# + [markdown] nbpresent={"id": "473764d4-3610-43e8-94a0-d62731dd5a1c"}
# First, we start with some exploratory clustering, visualizing the clustering dendrogram using SciPy's `linkage` and `dendrogram` functions:
# + nbpresent={"id": "d7f4a0e0-5b4f-4e08-9c77-fd1b1d13c877"}
from scipy.cluster.hierarchy import linkage
from scipy.cluster.hierarchy import dendrogram
clusters = linkage(X,
metric='euclidean',
method='complete')
dendr = dendrogram(clusters)
plt.ylabel('Euclidean Distance');
# + [markdown] nbpresent={"id": "68cb3270-9d4b-450f-9372-58989fe93a3d"}
# Next, let's use the `AgglomerativeClustering` estimator from scikit-learn and divide the dataset into 3 clusters. Can you guess which 3 clusters from the dendrogram it will reproduce?
# + nbpresent={"id": "4746ea9e-3206-4e5a-bf06-8e2cd49c48d1"}
from sklearn.cluster import AgglomerativeClustering
ac = AgglomerativeClustering(n_clusters=3,
affinity='euclidean',
linkage='complete')
prediction = ac.fit_predict(X)
print('Cluster labels: %s\n' % prediction)
# + nbpresent={"id": "a4e419ac-a735-442e-96bd-b90e60691f97"}
plt.scatter(X[:, 0], X[:, 1], c=prediction);
# + [markdown] nbpresent={"id": "63c6aeb6-3b8f-40f4-b1a8-b5e2526beaa5"}
# # Density-based Clustering - DBSCAN
# + [markdown] nbpresent={"id": "688a6a37-3a28-40c8-81ba-f5c92f6d7aa8"}
# Another useful approach to clustering is *Density-based Spatial Clustering of Applications with Noise* (DBSCAN). In essence, we can think of DBSCAN as an algorithm that divides the dataset into subgroups based on dense regions of points.
#
# In DBSCAN, we distinguish between 3 different "points":
#
# - Core points: A core point is a point that has at least a minimum number of other points (MinPts) in its radius epsilon.
# - Border points: A border point is a point that is not a core point, since it doesn't have enough MinPts in its neighborhood, but lies within the radius epsilon of a core point.
# - Noise points: All other points that are neither core points nor border points.
#
# 
#
# A nice feature about DBSCAN is that we don't have to specify a number of clusters upfront. However, it requires the setting of additional hyperparameters such as the value for MinPts and the radius epsilon.
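# After fitting, scikit-learn exposes this three-way split: core points via core_sample_indices_, noise via the label -1, and border points as everything else. A minimal sketch on illustrative toy data:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# a dense blob plus one far-away point that should end up as noise
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 0.1, size=(30, 2)), [[10.0, 10.0]]])

db = DBSCAN(eps=0.5, min_samples=5).fit(X)

core_mask = np.zeros(len(X), dtype=bool)
core_mask[db.core_sample_indices_] = True
noise_mask = db.labels_ == -1
border_mask = ~core_mask & ~noise_mask

print(core_mask.sum(), border_mask.sum(), noise_mask.sum())
```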
# + nbpresent={"id": "98acb13b-bbf6-412e-a7eb-cc096c34dca1"}
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=400,
noise=0.1,
random_state=1)
plt.scatter(X[:,0], X[:,1])
plt.show()
# + nbpresent={"id": "86c183f7-0889-443c-b989-219a2c9a1aad"}
from sklearn.cluster import DBSCAN
db = DBSCAN(eps=0.2,
min_samples=10,
metric='euclidean')
prediction = db.fit_predict(X)
print("Predicted labels:\n", prediction)
plt.scatter(X[:, 0], X[:, 1], c=prediction);
# + [markdown] nbpresent={"id": "84c2fb5c-a984-4a8e-baff-0eee2cbf0184"}
# # Exercise
# + [markdown] nbpresent={"id": "6881939d-0bfe-4768-9342-1fc68a0b8dbc"}
# Using the following toy dataset, two concentric circles, experiment with the three different clustering algorithms that we used so far: `KMeans`, `AgglomerativeClustering`, and `DBSCAN`.
#
# Which clustering algorithm reproduces or discovers the hidden structure (pretending we don't know `y`) best?
#
# Can you explain why this particular algorithm is a good choice while the other 2 "fail"?
# + nbpresent={"id": "4ad922fc-9e38-4d1d-b0ed-b0654c1c483a"}
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1500,
factor=.4,
noise=.05)
plt.scatter(X[:, 0], X[:, 1], c=y);
# +
# # %load solutions/21_clustering_comparison.py
from sklearn.datasets import make_circles
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
X, y = make_circles(n_samples=1500,
factor=.4,
noise=.05)
km = KMeans(n_clusters=2)
plt.figure()
plt.scatter(X[:, 0], X[:, 1], c=km.fit_predict(X))
ac = AgglomerativeClustering(n_clusters=2, affinity='euclidean', linkage='complete')
plt.figure()
plt.scatter(X[:, 0], X[:, 1], c=ac.fit_predict(X))
db = DBSCAN(eps=0.2)
plt.figure()
plt.scatter(X[:, 0], X[:, 1], c=db.fit_predict(X));
# -
| scipy-2016-sklearn/notebooks/21 Unsupervised learning - Hierarchical and density-based clustering algorithms.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Folasade21/Analysis-of-the-influence-of-Metallica-concert-on-public-transportation-service/blob/master/Copy_of_Homework_06.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="PKhkIe6LqP6j"
# <div class="alert alert-block alert-info"><b></b>
# <h1><center> <font color='black'> Homework 06</font></center></h1>
# <h2><center> <font color='black'> Brand Value Monitoring, Fairness & Interpretability</font></center></h2>
# <h2><center> <font color='black'> MTAT.03.319 - Business Data Analytics</font></center></h2>
# <h2><center> <font color='black'> University of Tartu - Spring 2021</font></center></h2>
# </div>
# + [markdown] id="_eSWkN6YqP6o"
# # Homework Instructions
# + [markdown] id="8iFA9QAWqP6s"
#
# - Please provide the names and student IDs of the team-members (Maximum 2 person) in the field "Team mates" below. If you are not working in a team please insert only your name and student ID.
#
# - The accepted submission formats are Colab links or .ipynb files. If you are submitting Colab links please make sure that the privacy settings for the file is public so we can access your code.
#
# - The submission will automatically close on <font color='red'>**16 May at 23:59**</font>, so please make sure to submit before the deadline.
#
# - ONLY one of the teammates should submit the homework. We will grade the homework once, and the marks and feedback apply to both team members, so please communicate with your teammate about the marks and feedback after you submit.
#
# - If a question is not clear, please ask us in Moodle ONLY.
#
# - After you have finished solving the Homework, please restart the Kernel and run all the cells to check if there is any persisting issues.
#
# - Plagiarism is <font color='red'>**PROHIBITED**</font>. Any form of plagiarism will be dealt according to the university policy (https://www.ut.ee/en/current-students/academic-fraud).
#
# + [markdown] id="ZSPSsMy0qP6v"
# **<h2><font color='red'>Team mates:</font></h2>**
#
#
# <font color='red'>Name: </font>  <font color='red'>Student ID: </font>
#
#
# <font color='red'>Name: </font>  <font color='red'>Student ID: </font>
# + [markdown] id="32O8nXIEsFzx"
# ### The homework is divided into four sections and the points are distributed as below:
# <pre>
# - Brand Value Monitoring -> 6 points
# - Fairness & Interpretability -> 4 points
# __________________________________________
# Total -> 10 points
# </pre>
# + [markdown] id="L_27zGUHqP6z"
# # 1. Brand Value Monitoring (6 points)
# You are going to use two annotated datasets containing tweets about Apple stock (AAPL) and the sentiment each tweet represents (positive, negative, neutral).
# + [markdown] id="Lo64-Z7HqP7G"
# **1.1 There are two datasets with 1000 rows each. Please concatenate them together so that you have 2000 rows. (0.2 Points)**
# + id="bsEGfArnqP7I"
import pandas as pd
import numpy as np
a1 = pd.read_csv('Apple1.csv', encoding='latin-1') #DO NOT change the encoding method, may give you error
a2 = pd.read_csv('Apple2.csv', encoding='latin-1') #DO NOT change the encoding method, may give you error
#Concatenate
# + [markdown] id="YyLxuY26sFz7"
# **1.2 Plot a distribution of ```sentiment``` unique values. (0.2 Points)**
# + id="sYakTVhNsF0x"
# + [markdown] id="X1leKYBZsF0x"
# **1.3 Drop the unnecessary column(s). Perform the following preprocessing steps on the ```text``` column. (1.6 Points)**
# - change all characters to lowercase
# - remove URLs
# - remove words starting with ‘@’
# - remove words starting with ‘$’
# - remove punctuation
# - remove stopwords
# - remove numbers
# - remove whitespaces
# Don’t forget to inspect the data after each step.
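# The steps above can be sketched as a single cleaning function. The regexes, the sample tweet, and the tiny stopword set are illustrative assumptions -- a real solution would use a full stopword list (e.g. NLTK's):

```python
import re
import string

STOPWORDS = {"the", "a", "is", "to", "and"}  # stand-in for a full stopword list

def clean_tweet(text):
    text = text.lower()                                    # lowercase
    text = re.sub(r"http\S+|www\.\S+", "", text)           # remove URLs
    text = re.sub(r"[@$]\w+", "", text)                    # remove @... and $... words
    text = text.translate(str.maketrans("", "", string.punctuation))  # punctuation
    text = re.sub(r"\d+", "", text)                        # remove numbers
    tokens = [w for w in text.split() if w not in STOPWORDS]  # stopwords + whitespace
    return " ".join(tokens)

print(clean_tweet("$AAPL is UP 5% today!! @trader see https://example.com"))
# → 'up today see'
```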
# + id="cyBdrUFesF0y"
# + [markdown] id="L3sHiczDsF0z"
# **1.4 Create wordclouds for each sentiment group. Find out the most frequent word for each group. You should have three plots, one per sentiment group. (1.5 Points)**
# + id="RXVTEtXvsF01"
# + [markdown] id="zEEU63x-sF02"
# **1.5 Apply TF-IDF technique on the textual data and split the dataset between train and test (80/20 ratio) (1.0 Points)**
# + id="61WFF4iksF03"
# + [markdown] id="6JZIwOdcsF04"
# **1.6 Train a random forest model with the prepared data and show the classification report on the test data (0.5 Points)**
# + id="goQNresrsF05"
# + [markdown] id="ah-WpzvZsF07"
# **1.7 Train an SVM model with the prepared data and show the classification report on the test data (0.5 Points)**
# + id="TARdONu3sF08"
# + [markdown] id="1ztT02AhsF09"
# **1.8 Which model performed better? Consider the f1 metrics and the time it took to train the model. Which model would you use in a real-life scenario and why? (0.5 Points)**
# + [markdown] id="KHYtvz_0sF0-"
# **<font color='red'>Answer:</font>**
# + [markdown] id="vXO3KzzRqP-P"
# # 2. Fairness & Interpretability ( 4 points)
# + [markdown] id="CD13zq7dqP-T"
# In this section you are going to use the standard German Credit dataset, which is used to benchmark many model interpretability techniques. The dataset contains 1000 records of loan applications, each associated with a risk score: Good or Bad.
# + [markdown] id="JtbM9PjSqP-Y"
# **2.1 The dataset has empty values. Choose an imputation method of your choice for each column. (0.5 points)**
# + id="1IQCdkK1qP-b"
# #!pip install sklearn_pandas
from sklearn_pandas import CategoricalImputer
import pandas as pd
german_data = pd.read_csv('german_credit_data.csv', sep=',')
# + [markdown] id="W-oNAEq0qP-y"
# **2.2 Plot the values of the column ```Age``` against the ```Risk``` column. Do you think there exist bias in this dataset? Choose the appropriate plot to address the problem. (0.5 points)**
# + id="yqf04FLVqP-z"
# + [markdown] id="gAHod7xQqP-9"
# <font color='red'> **Answer:**
# + [markdown] id="Fk7vkSttqP_U"
# **2.3 Perform label encoding. Split the dataset into train/test sets (80/20 ratio), keeping the random state 99. Train the XGBoost classifier below and predict the results on the test set. Plot the classification report. (0.5 points)**
# + id="A5MHWOG_qP_V"
# + [markdown] id="nKJ_FTDfqP_r"
# **2.4 Plot 3 types of feature importance (the parameter importance_type='weight', 'gain', 'cover') given by XGBoost and interpret the results. (0.5 points)**
# + id="iktnVZ7iqP_s"
#Example: xgb.plot_importance(..., importance_type='weight', ...)
#Example: xgb.plot_importance(..., importance_type='gain', ...)
#Example: xgb.plot_importance(..., importance_type='cover', ...)
# + [markdown] id="dMHWoH3mqP_-"
# <font color='red'> **Answer:**
#
# + [markdown] id="2eHQv7TAqQAA"
# **2.5 Use eli5 to analyze the most important features of a sample where the credit score is Bad and another sample where the credit score is Good. Explain briefly the results (0.5 points)**
# + id="ivA3IkjaqQAC" outputId="0605d637-1823-4074-c0ba-c3fb67521f86"
# #!pip install eli5
import eli5
# + id="5Nht9opCsF14"
# + [markdown] id="xDrSeExbqQAW"
# <font color='red'> **Answer:**
# + [markdown] id="QRn01tqLqQAY"
# **2.6 Use SHAP to explain the prediction of the model for a sample where the credit score is Bad and another sample where the credit score is Good. Explain briefly the results.(0.5 points)**
# + id="PwbnwPsNqQAa" outputId="2d50efba-6075-459e-d1de-89fac0598f93"
# #!pip install shap
import shap
# + [markdown] id="adgX29y6qQAk"
# <font color='red'> **Answer:**
#
# + [markdown] id="OvkC298cqQAm"
# **2.7 Use SHAP to explain the prediction of the model for the first 250 samples. Based on the similar patterns explain briefly the results(0.5 points)**
# + id="KxYusrBqqQAp"
# + [markdown] id="GWkDRnS2qQAy"
# <font color='red'> **Answer:**
# + [markdown] id="8dZWQAGoqQAz"
# **2.8 Plot as a bar chart of the feature importances received from SHAP. Finally compare these results with the result from 2.6 and 2.7. (0.5 points)**
# + id="ZogFdG2gqQA1"
# + [markdown] id="6l44otjAqQBD"
# <font color='red'> **Answer:**
#
#
# + [markdown] id="LW3lVqBXqQBE"
# ## How long did it take you to solve the homework?
#
# * Please answer as precisely as you can. It does not affect your points or grade in any way. It is okay, if it took 0.5 hours or 24 hours. The collected information will be used to improve future homeworks.
#
# <font color='red'> **Answer:**</font>
#
#
#
# ## What is the level of difficulty for this homework?
# you can put only number between $0:10$ ($0:$ easy, $10:$ difficult)
#
# <font color='red'> **Answer:** </font>
# + id="UUd-gbd-qQBF"
| Copy_of_Homework_06.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from collections import Counter
import random
i = 1
rez_list = []
throw_count = 10_000
while i <= throw_count:
a = random.randint(1, 6)
b = random.randint(1, 6)
c = random.randint(1, 6)
d = random.randint(1, 6)
e = random.randint(1, 6)
f = random.randint(1, 6)
rez = a+b+c+d+e+f
rez_list.append(rez)
# print(i, ".metiens > ", a, b, c, d, e, f, " > summa =", rez)
i += 1
print(rez_list[:25])
my_counter = Counter(rez_list)
my_counter.keys()
my_counter.values()
import matplotlib.pyplot as plt
plt.bar(my_counter.keys(), my_counter.values())
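# As a sanity check, the expected sum of six fair dice is 6 * 3.5 = 21, so the empirical mean of the simulated throws should land close to 21. Re-simulated here (with a fixed seed, an illustrative choice) so the sketch is self-contained:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible
sums = [sum(random.randint(1, 6) for _ in range(6)) for _ in range(10_000)]

mean = sum(sums) / len(sums)
print(round(mean, 2))  # close to the theoretical expectation of 21
```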
| Diena_13_Visualization/Random_Dice_Sep24.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# 
# + [markdown] colab_type="text" id="HpvTpUBGf6Jr"
# ### Desafio 9
#
# Write a function that returns the sum of the multiples of 3 and 5 between 0 and a limit number, which will be passed in as a parameter. \
# For example, if the limit is 20, it will return the sum of 3, 5, 6, 9, 10, 12, 15, 18, 20.
# + colab={} colab_type="code" id="195C6bw-f6Js"
def multiplos_3_e_5(limite):
    somatorio = 0
    for numero in range(limite + 1):
        if numero % 3 == 0 or numero % 5 == 0:
            somatorio += numero
    return somatorio
# + colab={} colab_type="code" id="a_6aqcKp6wrN"
multiplos_3_e_5(20)
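# As a cross-check, the same sum has a closed form via inclusion-exclusion on arithmetic series: (multiples of 3) + (multiples of 5) - (multiples of 15). The helper below is an illustrative addition, not part of the exercise:

```python
def soma_aritmetica(passo, limite):
    # sum of passo, 2*passo, ..., up to limite (arithmetic series)
    n = limite // passo
    return passo * n * (n + 1) // 2

def multiplos_3_e_5_fechado(limite):
    return (soma_aritmetica(3, limite)
            + soma_aritmetica(5, limite)
            - soma_aritmetica(15, limite))

print(multiplos_3_e_5_fechado(20))  # → 98, matching the loop version above
```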
| Awari/Desafio_09.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Last Updated: 07/09/2018
# ## Radial Velocity Orbit-fitting Tutorial
#
# #### Written by <NAME> & <NAME>, 2018
#
#
# ## Introduction
# Radial velocity measurements tell us how the velocity of a star changes along the direction of our line of sight. These measurements are made using Doppler Spectroscopy, which looks at the spectrum of a star and measures shifts in known absorption lines. Here is a nice [GIF](https://polytechexo.files.wordpress.com/2011/12/spectro.gif) showing the movement of a star due to the presence of an orbiting planet, the shift in the stellar spectrum, and the corresponding radial velocity measurements.
#
# This week, you only have one tutorial to complete (this one)! To make sure you don't get too bored, please read the following articles before starting this tutorial:
# - [Intro to the Radial Velocity Technique](http://exoplanets.astro.yale.edu/workshop/EPRV/Bibliography_files/Radial_Velocity.pdf) (focus on pgs. 1-6)
# - [Intro to Periodograms](https://arxiv.org/pdf/1703.09824.pdf) (focus on pgs. 1-30)
# - [Intro to Markov Chain Monte Carlo Methods](https://towardsdatascience.com/a-zero-math-introduction-to-markov-chain-monte-carlo-methods-dcba889e0c50)
#
#
# ## About Tutorial
# In this tutorial, you will use the California Planet Search Python package [RadVel](https://github.com/California-Planet-Search/radvel) to characterize the exoplanets orbiting the star K2-24 (EPIC 203771098) using radial velocity measurements. This tutorial is a modification of the "[K2-24 Fitting & MCMC](https://github.com/California-Planet-Search/radvel/blob/master/docs/tutorials/K2-24_Fitting%2BMCMC.ipynb)" tutorial on the RadVel GitHub page.
#
# There are several coding tasks for you to accomplish in this tutorial. Each task is indicated by a `#TODO` comment.
#
# In this tutorial, you will:
# - estimate planetary orbital periods using a periodogram
# - perform a maximum likelihood orbit fit with RadVel
# - create a residuals plot
# - perform a Markov Chain Monte Carlo (MCMC) fit to characterize orbital parameter uncertainty
#
# ## Outline:
# 1. Installation
# 2. Importing Data
# 3. Finding Period
# 4. Defining and Initializing Model
# 5. Maximum Likelihood Fit
# 6. Residuals
# 7. MCMC
# ## 1. Installation
# We will begin by making sure we have all the python packages needed for the tutorial. First, [install RadVel](http://radvel.readthedocs.io/en/latest/quickstartcli.html#installation) by typing:
#
# `pip install radvel`
#
# If you want to clone the entire RadVel GitHub repository for easy access to the RadVel source code, type:
#
# `git clone https://github.com/California-Planet-Search/radvel.git`
#
# This should also install the requirements for RadVel. Next, install the Lomb-Scargle Periodogram package by using:
#
# `pip install gatspy`
#
# If everything installed correctly, the following cell should run without errors. If you still see errors try restarting the kernel by using the tab above labeled **kernel >> restart**.
# +
# allows us to see plots on the jupyter notebook
# %matplotlib inline
# used to interact with operating system
import os
# models used by radvel for calculations, plotting, and model optimization
import matplotlib
import numpy as np
import pylab as pl
import pandas as pd
from scipy import optimize
# for corner plots
import corner
# for radial velocity analysis
import radvel
from radvel.plot import orbit_plots, mcmc_plots
# for periodogram
from gatspy.periodic import LombScargleFast
# sets font size for plots
matplotlib.rcParams['font.size'] = 18
# -
# ## 2. Importing and Plotting Data
# After downloading your data, check its file type. This tutorial focuses on importing **.csv** (comma-separated values) files. However, you may encounter data files of type **.txt** or **.xlsx**, among many others. These may require a different command to open, or would need to be converted to **.csv** before being opened with the command below.
# +
# import data
path = os.path.join(radvel.DATADIR,'epic203771098.csv') # path to data file
data = pd.read_csv(path, index_col=0) # read data into pandas DataFrame
print(data)
# TODO: print out the column names of the pandas DataFrame you just created (`data`).
# Review the pandas tutorial if you need to!
print("Column names: {}".format(list(data)))
# TODO: print out the length of `data`
print("Length: {}".format(len(data)))
# TODO: convert the "t" column of `data` to a numpy array
# (HINT: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.values.html)
time = data.t.values
# +
# TODO: plot time (data.t) vs radial velocity (data.vel) using matplotlib.pyplot
pl.figure()
pl.plot(data.t, data.vel)
# TODO: modify your plotting code from the previous TODO so that it adds error
# bars (data.errvel) to each RV measurement
pl.figure()
pl.errorbar(data.t, data.vel, data.errvel, fmt='o', linestyle='None')
# TODO: label the x- and y-axes of your plot (time is in days; radial velocity is in m/s)
pl.xlabel('time [days]')
pl.ylabel('RV [ms$^{-1}$]')
# TODO: change the color of the data in your plot
pl.figure()
pl.errorbar(data.t, data.vel, data.errvel, fmt='o', linestyle='None', color='red')
pl.xlabel('time [days]')
pl.ylabel('RV [ms$^{-1}$]')
# TODO: What do you notice about the data? Does it look like there is a planet signal?
# What orbital period would you estimate?
"""
It looks like the data goes up and down every ~20 days or so, but the data definitely doesn't look like a pure
sinusoid. Maybe there are multiple sinusoidal planet signals added together in this data.
"""
# -
# ## 3. Finding a Significant Period
#
# Now, we will find probable orbital periods using a Lomb-Scargle periodogram. Periodograms are created using a Fourier transform, which is a mathematical process that takes in continuous time-based data and decomposes it into a combination of functions with various frequencies, as seen in the image below.
#
# 
# ([wikipedia](https://upload.wikimedia.org/wikipedia/commons/6/61/FFT-Time-Frequency-View.png))
#
# The graph on the left is the continuous data, which is analogous to our radial velocity data. The three sine waves behind the graphs are the functions that are added to produce a good fit to the original data. Finally, the graph on the right is the periodogram. It shows how much each contributing function's frequency contributes to the data model. The larger the peak in the graph, the more significant that frequency is in the data. We use this frequency to get an idea of the recurring behavior in the data (for exoplanet research this is the recurring orbit). Now, we will calculate a periodogram and use it to give us an estimate of the period of the planet's orbit.
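# The decomposition idea can be checked with a plain FFT on a synthetic signal (the 20-day sinusoid below is an illustrative assumption, not the K2-24 data):

```python
import numpy as np

t = np.arange(200.0)                    # 200 days, sampled once per day
signal = np.sin(2 * np.pi * 0.05 * t)   # a pure sinusoid with a 20-day period

freqs = np.fft.rfftfreq(len(t), d=1.0)  # frequencies in cycles/day
power = np.abs(np.fft.rfft(signal)) ** 2

best = freqs[np.argmax(power[1:]) + 1]  # skip the zero-frequency (mean) term
print(best, 1.0 / best)                 # → 0.05 cycles/day, i.e. a 20-day period
```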
def periodogram(datax, datay, min_, max_, nyquist):
# setting up LombScargle Model
model = LombScargleFast().fit(datax, datay)
period, power = model.periodogram_auto(nyquist_factor=nyquist) # default 50
# plotting periodogram
pl.figure()
pl.plot(period,power)
pl.ylabel('Power')
pl.xlabel('Period') # units: days
pl.xscale('log')
# set range and find period
model.optimizer.period_range=(min_, max_)
period = model.best_period
print("period = {0}".format(period))
# TODO: add a vertical line at the value of `period` to the periodogram
pl.axvline(period, color='red')
return period
# +
nyquist = 2 # Nyquist factor: sets the highest frequency searched by the periodogram
minPer = 30 # min period to look for 1st planet (in days)
maxPer = 50 # max period to look for 1st planet (in days)
# find orbital period of first planet
period1 = periodogram(data.t, data.vel, minPer, maxPer, nyquist)
# TODO: change the values of minPer, maxPer, and nyquist. How do the results change? Why? Type your answer
# between the triple quotes below.
"""
`minPer` and `maxPer` control the period range in which the optimizer looks for significant peaks. Changing
them controls which period the searcher returns (it returns the maximum peak in the allowable range).
`nyquist` controls the frequency coverage of the periodogram: it sets the highest frequency (shortest period)
at which power is calculated, so changing it changes how much of period space the periodogram samples.
See changes in plots below for different values of `nyquist`.
"""
for nyquist in [.5, 10.]:
period_nyquist_test = periodogram(data.t, data.vel, minPer, maxPer, nyquist)
# -
# ## 4. Defining and Initializing Model
# Define a function that we will use to initialize the ``radvel.Parameters`` and ``radvel.RVModel`` objects.
# These will be our initial guesses of the planet parameters, based on the radial velocity measurements and periodogram shown above.
# +
nplanets = 1 # number of planets
def initialize_model():
time_base = 2420.
params = radvel.Parameters(nplanets,basis='per tc secosw sesinw k')
params['per1'] = radvel.Parameter(value=period1) # guess for period of first planet (from periodogram)
params['tc1'] = radvel.Parameter(value=2080.) # guess for time of transit of 1st planet
params['secosw1'] = radvel.Parameter(value=0.0) # determines eccentricity (assuming circular orbit here)
params['sesinw1'] = radvel.Parameter(value=0.0) # determines eccentriciy (assuming circular orbit here)
params['k1'] = radvel.Parameter(value=3.) # radial velocity semi-amplitude
mod = radvel.RVModel(params, time_base=time_base)
mod.params['dvdt'] = radvel.Parameter(value=-0.02) # possible acceleration of star
mod.params['curv'] = radvel.Parameter(value=0.01) # possible curvature in long-term radial velocity trend
return mod
# -
# Fit the K2-24 RV data assuming circular orbits.
#
# Set initial guesses for the parameters:
# +
mod = initialize_model() # model initialized
like = radvel.likelihood.RVLikelihood(mod, data.t.values, data.vel.values, data.errvel.values, '_HIRES') # initialize Likelihood object
# define initial guesses for instrument-related parameters
like.params['gamma_HIRES'] = radvel.Parameter(value=0.1) # zero-point radial velocity offset
like.params['jit_HIRES'] = radvel.Parameter(value=1.0) # white noise
# -
# Plot the model with our initial parameter guesses:
# +
def plot_results(like):
fig = pl.figure(figsize=(12,4))
fig = pl.gcf()
fig.set_tight_layout(True)
pl.errorbar(
like.x, like.model(data.t.values)+like.residuals(),
yerr=like.yerr, fmt='o'
)
ti = np.linspace(data.t.iloc[0] - 5, data.t.iloc[-1] + 5,100) # time array for model
pl.plot(ti, like.model(ti))
pl.xlabel('Time')
pl.ylabel('RV')
plot_results(like)
# -
# ## 5. Maximum Likelihood fit
# Well, that solution doesn't look very good! Let's optimize the parameters set to vary by maximizing the likelihood.
#
# Initialize a ``radvel.Posterior`` object.
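# As a toy illustration of what "maximizing the likelihood" means (independent of RadVel): for Gaussian data with known scatter, the negative log-likelihood is minimized at the sample mean. A minimal numpy sketch using a brute-force grid in place of a real optimizer:

```python
import numpy as np

rng = np.random.default_rng(1)
data_toy = rng.normal(5.0, 2.0, 1000)

def neg_log_like(mu, x, sigma=2.0):
    # Gaussian negative log-likelihood, up to an additive constant.
    return np.sum((x - mu) ** 2) / (2 * sigma ** 2)

grid = np.linspace(0, 10, 1001)
mu_hat = grid[np.argmin([neg_log_like(m, data_toy) for m in grid])]
print(mu_hat, data_toy.mean())  # the grid minimum sits at the sample mean
```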
post = radvel.posterior.Posterior(like) # initialize radvel.Posterior object
# Choose which parameters to change or hold fixed during a fit. By default, all `radvel.Parameter` objects will vary, so you only have to worry about setting the ones you want to hold fixed.
post.likelihood.params['secosw1'].vary = False # set as false because we are assuming circular orbit
post.likelihood.params['sesinw1'].vary = False # set as false because we are assuming circular orbit
print(like)
# Maximize the likelihood and print the updated posterior object
# +
res = optimize.minimize(
post.neglogprob_array, # objective function is negative log likelihood
post.get_vary_params(), # initial variable parameters
method='Powell', # Nelder-Mead also works
)
plot_results(like) # plot best fit model
print(post)
# -
# RadVel comes equipped with some fancy ready-made plotting routines. Check this out!
# +
matplotlib.rcParams['font.size'] = 12
RVPlot = orbit_plots.MultipanelPlot(post)
RVPlot.plot_multipanel()
matplotlib.rcParams['font.size'] = 18
# -
# ## 6. Residuals and Repeat
# Residuals are the difference of our data and our best-fit model.
#
# Next, we will plot the residuals of our optimized model to see if there is a second planet in our data. When we look at the following residuals, we will see a sinusoidal shape, so another planet may be present! Thus, we will repeat the steps shown earlier (this time using the parameters from the maximum fit for the first planet).
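# The subtract-and-repeat idea can be illustrated with synthetic data (not the K2-24 measurements): after removing the best-fit model of one sinusoidal signal, the residuals are dominated by the remaining signal.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 200, 200))
y = 5.0 * np.sin(2 * np.pi * t / 42.0) + 2.0 * np.sin(2 * np.pi * t / 20.0)

# Fit and subtract the 42-day signal (linear least squares on sin/cos terms).
phase = 2 * np.pi * t / 42.0
A = np.column_stack([np.sin(phase), np.cos(phase)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ coef

# The residual RMS roughly matches the amplitude of the remaining 20-day
# signal (RMS of a sinusoid = amplitude / sqrt(2)).
print(np.sqrt(np.mean(residuals ** 2)))  # ≈ 2 / sqrt(2) ≈ 1.4
```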
# +
residuals1 = post.likelihood.residuals()
# TODO: make a plot of data.time versus `residuals1`. What do you notice? What would you estimate the period
# of the other exoplanet in this system to be? Write your answer between the triple quotes below.
pl.figure()
pl.scatter(data.t, residuals1)
pl.xlabel('time [MJD]')
pl.ylabel('RV [ms$^{-1}$]')
"""
These residuals appear to go up and down every ~20 days or so. This looks like a more convincing version of the
period I first observed in the original radial velocity data. It's still pretty hard to tell, though! I'm
happy we have algorithms to find orbital periods more effectively than the human eye can.
"""
# -
# Let's repeat the above analysis with two planets!
# +
nyquist = 2 # Nyquist factor: sets the highest frequency searched by the periodogram
minPer = 20 # minimum period to look for 2nd planet
maxPer = 30 # max period to look for 2nd planet
# finding 2nd planet period
period2 = periodogram(data.t, data.vel, minPer, maxPer, nyquist) # finding possible periords for 2nd planet
# TODO: why doesn't the periodogram return the period of the first planet? Write your answer between the triple
# quotes below.
"""
The period of the first planet is not in the allowed period range we specified (`minPer` to `maxPer`).
"""
# -
# Repeat the RadVel analysis
# +
nplanets = 2 # number of planets
def initialize_model():
time_base = 2420
params = radvel.Parameters(nplanets,basis='per tc secosw sesinw k')
# 1st Planet
params['per1'] = post.params['per1'] # period of 1st planet
params['tc1'] = post.params['tc1'] # time transit of 1st planet
params['secosw1'] = post.params['secosw1'] # determines eccentricity (assuming circular orbit here)
params['sesinw1'] = post.params['sesinw1'] # determines eccentricity (assuming circular orbit here)
params['k1'] = post.params['k1'] # velocity semi-amplitude for 1st planet
# 2nd Planet
params['per2'] = radvel.Parameter(value=period2)
params['tc2'] = radvel.Parameter(value=2070.)
params['secosw2'] = radvel.Parameter(value=0.0)
params['sesinw2'] = radvel.Parameter(value=0.0)
params['k2'] = radvel.Parameter(value=1.1)
mod = radvel.RVModel(params, time_base=time_base)
mod.params['dvdt'] = radvel.Parameter(value=-0.02) # acceleration of star
mod.params['curv'] = radvel.Parameter(value=0.01) # curvature of radial velocity fit
return mod
# -
mod = initialize_model() # initialize radvel.RVModel object
like = radvel.likelihood.RVLikelihood(mod, data.t.values, data.vel.values, data.errvel.values, '_HIRES')
like.params['gamma_HIRES'] = radvel.Parameter(value=0.1)
like.params['jit_HIRES'] = radvel.Parameter(value=1.0)
# +
like.params['secosw1'].vary = False # set as false because we are assuming circular orbit
like.params['sesinw1'].vary = False
like.params['secosw2'].vary = False # set as false because we are assuming circular orbit
like.params['sesinw2'].vary = False
print(like)
# -
plot_results(like)
# +
post = radvel.posterior.Posterior(like) # initialize radvel.Posterior object
res = optimize.minimize(
post.neglogprob_array, # objective function is negative log likelihood
post.get_vary_params(), # initial variable parameters
method='Powell', # Nelder-Mead also works
)
plot_results(like) # plot best fit model
print(post)
# +
matplotlib.rcParams['font.size'] = 12
RVPlot = orbit_plots.MultipanelPlot(post)
RVPlot.plot_multipanel()
matplotlib.rcParams['font.size'] = 18
# +
residuals2 = post.likelihood.residuals()
# TODO: make a plot of data.time versus `residuals2`. What do you notice?
pl.figure()
pl.scatter(data.t, residuals2)
pl.xlabel('time [MJD]')
pl.ylabel('RV [ms$^{-1}$]')
# Here's the original residuals plot, for comparison purposes:
pl.figure()
pl.scatter(data.t, residuals1, color='red')
pl.xlabel('time [MJD]')
pl.ylabel('RV [ms$^{-1}$]')
"""
The residuals perhaps look a little more randomly distributed than before, but again it's pretty hard to tell
without a periodogram.
"""
# TODO: try redoing the above analysis, but this time, allow the eccentricity parameters to vary during the fit.
# How does the fit change?
like.params['secosw1'].vary = True
like.params['sesinw1'].vary = True
like.params['secosw2'].vary = True
like.params['sesinw2'].vary = True
like.params['secosw1'].value = .1
like.params['sesinw1'].value = .1
like.params['secosw2'].value = .1
like.params['sesinw2'].value = .1
post = radvel.posterior.Posterior(like)
res = optimize.minimize(
post.neglogprob_array,
post.get_vary_params(),
method='Nelder-Mead'
)
plot_results(post.likelihood)
"""
The planet RV signatures look more angular (less purely sinusoidal) now that they have a non-zero eccentricity.
The data appears to be better-fit by an eccentric orbit model (i.e. the planets probably do have non-negligible
eccentricities).
"""
# -
# K2-24 only has two known exoplanets, so we will stop this part of our analysis here. However, when analyzing an uncharacterized star system, it's important to continue the analysis until no significant signal remains in the radial velocity residuals.
# ## 7. Markov Chain Monte Carlo (MCMC)
# After reading the intro to MCMC blog post at the beginning of this tutorial, you are an expert on MCMC!
#
# MCMC is a method of exploring the parameter space of probable orbits using random walks, i.e. randomly changing
# the parameters of the fit. MCMC is used to find the most probable orbital solution and to determine the
# uncertainty (error bars) in the fit. MCMC tells you the *probability distributions* of orbital parameters
# consistent with the data.
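# The core of the method fits in a few lines. Below is a minimal random-walk Metropolis sampler (a simpler cousin of the ensemble sampler RadVel uses) estimating the mean of Gaussian data with a flat prior; everything here is a self-contained toy, not part of RadVel:

```python
import numpy as np

rng = np.random.default_rng(3)
data_mc = rng.normal(10.0, 1.0, 500)

def log_prob(mu):
    # Log-likelihood of a Gaussian with known sigma=1 (flat prior on mu).
    return -0.5 * np.sum((data_mc - mu) ** 2)

mu, lp_mu, chain = 0.0, log_prob(0.0), []
for _ in range(20000):
    proposal = mu + rng.normal(0, 0.1)        # random-walk step
    lp_prop = log_prob(proposal)
    if np.log(rng.uniform()) < lp_prop - lp_mu:
        mu, lp_mu = proposal, lp_prop         # accept the step
    chain.append(mu)

samples = np.array(chain[5000:])              # discard burn-in
print(samples.mean(), samples.std())          # posterior mean ≈ 10, std ≈ 1/sqrt(500)
```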
# +
# TODO: edit the Markdown cell immediately above this one with a 3 sentence description of the MCMC method.
# What does MCMC do? Why do you think it is important to use MCMC to characterize uncertainties in radial
# velocity fits?
# -
# Let's use RadVel to perform an MCMC fit:
# +
df = radvel.mcmc(post, nwalkers=50, nrun=1000)
# TODO: What type of data structure is `df`, the object returned by RadVel's MCMC method?
print(type(df))
"""
`df` is a pandas DataFrame.
"""
# -
# Make a fun plot!
# +
Corner = mcmc_plots.CornerPlot(post, df)
Corner.plot()
# TODO: There is a lot going on in this plot. What do you think the off-diagonal boxes are showing?
# What about the on-diagonal boxes? What is the median period of the first planet?
# What is the uncertainty on the period of the first planet? The second planet?
# TODO: Why do you think the uncertainties on the periods of planets b and c are different?
"""
The on-diagonal boxes are 1-dimensional (marginal) probability distributions over each of the parameters of the fit.
The off-diagonal boxes show 2-dimensional joint probability distributions (covariances) between pairs of parameters
(the box's row and column indicate which two parameters it corresponds to).
The median period of the first planet (for my eccentric fit) is 52.56 days. The uncertainty is +0.08 days, -0.07 days
(this corresponds to a *68% confidence interval* of [52.49, 52.64] days).
The median period of the second planet is 20.69 days, with an uncertainty of +/- 0.02 days.
The uncertainties of the two orbital periods differ because the period of the second planet is much better
constrained by the data than the period of the first planet: we see many repetitions of the second planet's period
over the ~100 day dataset, but only ~2 periods of the first planet.
"""
| Week4_rv_fitting_radvel/RadVel_Tutorial_KEY.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import jupyter_manim
COLOR = 'red'
from manimlib.scene.scene import Scene
from manimlib.mobject.geometry import Circle
from manimlib.mobject.geometry import Square
from manimlib.animation.transform import Transform
from manimlib.animation.fading import FadeOut
from manimlib.animation.creation import ShowCreation
import statistics
# +
# %%manim Shapes --low_quality
# only to demonstrate that you can use modules imported earlier
# (as well as variables defined beforehand, see COLOR)
statistics.mean([1, 2, 3])
class Shapes(Scene):
def construct(self):
circle = Circle(color=COLOR)
square = Square()
self.play(ShowCreation(circle))
self.play(Transform(circle, square))
self.play(FadeOut(circle))
| Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Do this to allow for local imports.
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
# +
# Import from required modules.
from tommy2tommy.models.transformer import TransformerLM
import tensorflow as tf
# -
# Set up the configuration hyperparameters.
config = {
# Model/data hyperparameters.
'vocab_size': 32,
'length': 10,
'num_layers': 2,
'd_model': 32,
'd_filter': 128,
'num_heads': 8,
'dropout_rate': 0.1,
'ffn_activation': 'gelu',
'layer_norm_epsilon': 1.0e-6,
# Optimizer hyperparameters.
'adam_learning_rate': 0.001,
'adam_beta_1': 0.9,
'adam_beta_2': 0.999,
'adam_epsilon': 1.0e-7,
# Training hyperparameters.
'batch_size': 32,
'num_epochs': 10,
'training_steps': 1000,
}
# +
# Prepare inputs, create the synthetic datasets.
def generate_input(vocab_size, length):
assert length % 2 == 0
half_len = (length - 2)//2
while True:
half_input = tf.random.uniform(shape=(half_len,), minval=1, maxval=vocab_size, dtype=tf.int32)
full_input = tf.concat([[0], half_input, [0], half_input], axis=0)
yield (full_input, full_input)
# Need to specify the output shapes.
training_dataset = tf.data.Dataset.from_generator(
lambda: generate_input(config['vocab_size'], config['length']),
output_types=(tf.int32, (tf.int32)),
output_shapes=((config['length'],), (config['length'],)))
# Batch the training data, must drop the remainder in order for the input sizes to be consistent.
training_dataset = training_dataset.batch(config['batch_size'], drop_remainder=True)
# +
# Set up the loss function, should only calculate loss on the copied half of outputs.
def loss_function(real, pred):
real = real[:, config['length']//2:]
pred = pred[:, config['length']//2:, :]
loss = tf.keras.losses.sparse_categorical_crossentropy(real, pred, from_logits=True)
return tf.reduce_mean(loss)
# Same as above for accuracy.
def accuracy(real, pred):
real = real[:, config['length']//2:]
pred = pred[:, config['length']//2:, :]
return tf.keras.metrics.sparse_categorical_accuracy(real, pred)
# -
# Use Adam optimizer. Works best with learning rate warmup, but this task is easy enough it's not necessary.
optimizer = tf.keras.optimizers.Adam(
config['adam_learning_rate'],
beta_1=config['adam_beta_1'],
beta_2=config['adam_beta_2'],
epsilon=config['adam_epsilon'])
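# For reference, the warmup mentioned above ramps the learning rate up linearly before decaying it. A sketch of the schedule from the original Transformer paper, written as a plain function (a hypothetical helper, not part of this repo):

```python
def transformer_lr(step, d_model=32, warmup_steps=400):
    # Rises linearly for `warmup_steps` steps, then decays as 1/sqrt(step).
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

print([transformer_lr(s) for s in (1, 100, 400, 10000)])  # peaks at step == warmup_steps
```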
# Build the language model and compile.
model = TransformerLM(config, padding_id=-1) # No padding in our synthetic data.
model.compile(optimizer=optimizer, loss=loss_function, metrics=[accuracy])
# Train the model. Doesn't really make sense to validate since the input is randomly generated.
model.fit(training_dataset,
epochs=config['num_epochs'],
steps_per_epoch=config['training_steps'])
# Example inference. Note the extra padding token from the rightward shift in the language model.
# Note also that the model only learns the second half, due to our choice of loss function.
example = tf.constant([[0, 0, 1, 2, 3, 4, 0, 0, 0, 0]])
print(tf.argmax(model.predict(x=example), axis=2).numpy())
# The correct way to do inference is with a decoder search algorithm such as greedy search or beam search.
from tommy2tommy.utils.search import greedy_search
example = tf.constant([[0, 30, 1, 2, 11, 0]])
print(greedy_search(model, prefix=example, length=config['length']).numpy())
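# Greedy search itself is simple: repeatedly run the model on the sequence so far and append the argmax token. A toy sketch with a dummy logits function (the real `greedy_search` in `tommy2tommy.utils.search` may differ in its details):

```python
import numpy as np

def dummy_model(tokens, vocab_size=32):
    # Stand-in for the language model: "predict" each token's successor.
    logits = np.zeros((len(tokens), vocab_size))
    for i, tok in enumerate(tokens):
        logits[i, (tok + 1) % vocab_size] = 1.0
    return logits

def greedy_decode(prefix, length):
    tokens = list(prefix)
    while len(tokens) < length:
        logits = dummy_model(tokens)
        tokens.append(int(np.argmax(logits[-1])))  # most likely next token
    return tokens

print(greedy_decode([0, 5], 6))  # → [0, 5, 6, 7, 8, 9]
```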
| examples/transformer_copy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
# -
# time, Quad.MagYaw, Quad.Est.Yaw, Quad.Yaw
no_code_graph1 = np.loadtxt('./data/scenario4/NoCodeGraph1.txt',delimiter=',',dtype='Float64',skiprows=1)
# time, Quad.Est.E.Yaw, Quad.Est.S.Yaw
no_code_graph2 = np.loadtxt('./data/scenario4/NoCodeGraph2.txt',delimiter=',',dtype='Float64',skiprows=1)
# +
def plotYawError(data, title):
"""
Plots the Yaw error
"""
time = data[:, 0]
values = [data[:,1], data[:, 2]]
subtitles = ['Yaw Error', 'Yaw Std']
fig, axes = plt.subplots(1, 2, figsize=[20,10])
for ax, subtitle, value in zip(axes.flat, subtitles, values):
ax.plot(time, value, 'r')
ax.set_xlabel('Time [s]')
ax.set_ylabel('[rad]')
ax.set_title(subtitle)
ax.grid()
plt.suptitle(title, fontsize=24)
plotYawError(no_code_graph2, 'Without modifications')
# -
# time, Quad.MagYaw, Quad.Est.Yaw, Quad.Yaw
graph1 = np.loadtxt('./data/scenario4/Graph1.txt',delimiter=',',dtype='Float64',skiprows=1)
# time, Quad.Est.E.Yaw, Quad.Est.S.Yaw
graph2 = np.loadtxt('./data/scenario4/Graph2.txt',delimiter=',',dtype='Float64',skiprows=1)
plotYawError(graph2, 'After implementing update')
| estimation_cpp/notes/darienmt/darienmt_estimation/visualizations/Step 4 Magnetometer Update.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Example 1 - Triangulation of arbitrary points on the sphere
#
# `stripy` provides a Python interface to STRIPACK and SSRFPACK (Renka 1997a,b) as a triangulation class that would typically be used as follows:
#
# ``` python
#
# import stripy as stripy
# spherical_triangulation = stripy.sTriangulation(lons=vertices_lon_as_radians, lats=vertices_lat_as_radians)
# s_areas = spherical_triangulation.areas()
# ```
#
# The methods of the `sTriangulation` class include interpolation, smoothing and gradients (from SSRFPACK), triangle areas, point location by simplex and nearest vertex, refinement operations by edge or centroid, and neighbourhood search / distance computations through a k-d tree algorithm suited to points on the surface of a unit sphere. `stripy` also includes template triangulated meshes with refinement operations.
#
# In this notebook we introduce the `sTriangulation` class itself.
#
# ## Notebook contents
#
# - [Icosahedron](#Triangulate-the-vertices-of-an-icosahedron)
# - [Plotting on a map](#Making-a-plot-of-the-triangulation)
# - [3D visualisation](#Lavavu-to-view-spherical-information)
# - [Predefined meshes](#Predefined-meshes)
#
# ## References
#
#
#
#
# 1. <NAME>. (1997), Algorithm 772: STRIPACK: Delaunay triangulation and Voronoi diagram on the surface of a sphere, ACM Transactions on Mathematical Software (TOMS).
#
# 2. <NAME>. (1997), Algorithm 773: SSRFPACK: interpolation of scattered data on the surface of a sphere with a surface under tension, ACM Transactions on Mathematical Software (TOMS), 23(3), 435–442, doi:10.1145/275323.275330.
#
# 3. Renka, <NAME>. (1996), Algorithm 751; TRIPACK: a constrained two-dimensional Delaunay triangulation package, ACM Transactions on Mathematical Software, 22(1), 1–8, doi:10.1145/225545.225546.
#
# 4. Renka, <NAME>. (1996), Algorithm 752; SRFPACK: software for scattered data fitting with a constrained surface under tension, ACM Transactions on Mathematical Software, 22(1), 9–17, doi:10.1145/225545.225547.
#
# ## Triangulate the vertices of an icosahedron
# +
import stripy as stripy
import numpy as np
# Vertices of an icosahedron as Lat / Lon in degrees
vertices_LatLonDeg = np.array(
[[ 90, 0.0 ],
[ 26.57, 0.0 ],
[-26.57, 36.0 ],
[ 26.57, 72.0 ],
[-26.57, 108.0 ],
[ 26.57, 144.0 ],
[-26.57, 180.0 ],
[ 26.57, 360.0-72.0 ],
[-26.57, 360.0-36.0 ],
[ 26.57, 360.0-144.0 ],
[-26.57, 360.0-108.0 ],
[-90, 0.0 ]])
vertices_lat = np.radians(vertices_LatLonDeg.T[0])
vertices_lon = np.radians(vertices_LatLonDeg.T[1])
spherical_triangulation = stripy.sTriangulation(lons=vertices_lon, lats=vertices_lat)
# -
# This creates a triangulation object constructed using the wrapped fortran code of Renka (1997). The triangulation object has a number of
# useful methods and attached data which can be listed with
#
# ``` python
#
# help(spherical_triangulation)
# ```
#
print(spherical_triangulation.areas())
print(spherical_triangulation.npoints)
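# As a sanity check on what `areas()` returns: on the unit sphere a triangle's area equals its spherical excess, and a triangulation tiles the whole sphere, so the areas should sum to 4*pi. A numpy-only sketch using L'Huilier's theorem for one octant triangle (independent of stripy):

```python
import numpy as np

def spherical_triangle_area(a, b, c):
    """Area on the unit sphere via L'Huilier's theorem; a, b, c are unit vectors."""
    # Side lengths are the great-circle angles between vertex pairs.
    sa = np.arccos(np.clip(np.dot(b, c), -1, 1))
    sb = np.arccos(np.clip(np.dot(a, c), -1, 1))
    sc = np.arccos(np.clip(np.dot(a, b), -1, 1))
    s = (sa + sb + sc) / 2
    tan_e4 = np.sqrt(np.tan(s / 2) * np.tan((s - sa) / 2)
                     * np.tan((s - sb) / 2) * np.tan((s - sc) / 2))
    return 4 * np.arctan(tan_e4)   # spherical excess = area on the unit sphere

# One octant of the sphere: its area should be 4*pi/8 = pi/2.
area = spherical_triangle_area(np.array([1., 0., 0.]),
                               np.array([0., 1., 0.]),
                               np.array([0., 0., 1.]))
print(area)  # ≈ pi/2
```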
# +
refined_spherical_triangulation = stripy.sTriangulation(lons=vertices_lon, lats=vertices_lat, refinement_levels=2)
print(refined_spherical_triangulation.npoints)
# -
# ## Making a plot of the triangulation
#
# We can make a plot of the two grids and the most straightforward way to display the information
# is through a standard map projection of the sphere to the plane.
#
# (Here we superimpose the points on a global map of coastlines using the `cartopy` map library and use the Mollweide projection.
# Other projections to try include `Robinson`, `Orthographic`, `PlateCarree`)
# +
# %matplotlib inline
import cartopy
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(20, 10), facecolor="none")
ax = plt.subplot(121, projection=ccrs.Mollweide(central_longitude=0.0, globe=None))
ax.coastlines(color="#777777")
ax.set_global()
ax2 = plt.subplot(122, projection=ccrs.Mollweide(central_longitude=0.0, globe=None))
ax2.coastlines(color="#777777")
ax2.set_global()
## Plot the vertices and the edges for the original icosahedron
lons = np.degrees(spherical_triangulation.lons)
lats = np.degrees(spherical_triangulation.lats)
ax.scatter(lons, lats, color="Red",
marker="o", s=150.0, transform=ccrs.Geodetic())
segs = spherical_triangulation.identify_segments()
for s1, s2 in segs:
ax.plot( [lons[s1], lons[s2]],
[lats[s1], lats[s2]],
linewidth=0.5, color="black", transform=ccrs.Geodetic())
## Plot the vertices and the edges for the refined icosahedron
lons = np.degrees(refined_spherical_triangulation.lons)
lats = np.degrees(refined_spherical_triangulation.lats)
ax2.scatter(lons, lats, color="Red", alpha=0.5,
marker="o", s=50.0, transform=ccrs.Geodetic())
segs = refined_spherical_triangulation.identify_segments()
for s1, s2 in segs:
ax2.plot( [lons[s1], lons[s2]],
[lats[s1], lats[s2]],
linewidth=0.5, color="black", transform=ccrs.Geodetic())
# -
# ## Predefined meshes
#
# One common use of stripy is in meshing the sphere and, to this end, we provide pre-defined meshes for icosahedral and octahedral triangulations, each of which can have mid-face centroid points included. A triangulation of the six cube-vertices is also provided as well as a 'buckyball' (or 'soccer ball') mesh. A random mesh is included as a counterpoint to the regular meshes. Each of these meshes is also an sTriangulation.
#
# The mesh classes in stripy are:
#
# ``` python
# stripy.spherical_meshes.octahedral_mesh(include_face_points=False)
# stripy.spherical_meshes.icosahedral_mesh(include_face_points=False)
# stripy.spherical_meshes.triangulated_cube_mesh()
# stripy.spherical_meshes.triangulated_soccerball_mesh()
# stripy.spherical_meshes.uniform_ring_mesh(resolution=5)
# stripy.spherical_meshes.random_mesh(number_of_points=5000)
# ```
#
# Any of the above meshes can be uniformly refined by specifying the refinement_levels parameter.
#
# ``` python
# spherical_triangulation = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=0)
# refined_spherical_triangulation = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=3)
# ```
#
#
# +
import stripy as stripy
str_fmt = "{:35} {:3}\t{:6}"
## A bunch of meshes with roughly similar overall numbers of points / triangles
octo0 = stripy.spherical_meshes.octahedral_mesh(include_face_points=False, refinement_levels=0)
octo2 = stripy.spherical_meshes.octahedral_mesh(include_face_points=False, refinement_levels=2)
octoR = stripy.spherical_meshes.octahedral_mesh(include_face_points=False, refinement_levels=5)
print(str_fmt.format("Octahedral mesh", octo0.npoints, octoR.npoints))
octoF0 = stripy.spherical_meshes.octahedral_mesh(include_face_points=True, refinement_levels=0)
octoF2 = stripy.spherical_meshes.octahedral_mesh(include_face_points=True, refinement_levels=2)
octoFR = stripy.spherical_meshes.octahedral_mesh(include_face_points=True, refinement_levels=4)
print(str_fmt.format("Octahedral mesh with faces", octoF0.npoints, octoFR.npoints))
cube0 = stripy.spherical_meshes.triangulated_cube_mesh(refinement_levels=0)
cube2 = stripy.spherical_meshes.triangulated_cube_mesh(refinement_levels=2)
cubeR = stripy.spherical_meshes.triangulated_cube_mesh(refinement_levels=5)
print(str_fmt.format("Cube mesh", cube0.npoints, cubeR.npoints))
ico0 = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=0)
ico2 = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=2)
icoR = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=4)
print(str_fmt.format("Icosahedral mesh", ico0.npoints, icoR.npoints))
icoF0 = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=0, include_face_points=True)
icoF2 = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=2, include_face_points=True)
icoFR = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=4, include_face_points=True)
print(str_fmt.format("Icosahedral mesh with faces", icoF0.npoints, icoFR.npoints))
socc0 = stripy.spherical_meshes.triangulated_soccerball_mesh(refinement_levels=0)
socc2 = stripy.spherical_meshes.triangulated_soccerball_mesh(refinement_levels=1)
soccR = stripy.spherical_meshes.triangulated_soccerball_mesh(refinement_levels=3)
print(str_fmt.format("BuckyBall mesh", socc0.npoints, soccR.npoints))
## Need a reproducible hierarchy ...
ring0 = stripy.spherical_meshes.uniform_ring_mesh(resolution=5, refinement_levels=0)
lon, lat = ring0.uniformly_refine_triangulation()
ring1 = stripy.sTriangulation(lon, lat)
lon, lat = ring1.uniformly_refine_triangulation()
ring2 = stripy.sTriangulation(lon, lat)
lon, lat = ring2.uniformly_refine_triangulation()
ring3 = stripy.sTriangulation(lon, lat)
lon, lat = ring3.uniformly_refine_triangulation()
ringR = stripy.sTriangulation(lon, lat)
# ring2 = stripy.uniform_ring_mesh(resolution=6, refinement_levels=2)
# ringR = stripy.uniform_ring_mesh(resolution=6, refinement_levels=4)
print(str_fmt.format("Ring mesh (9)", ring0.npoints, ringR.npoints))
randR = stripy.spherical_meshes.random_mesh(number_of_points=5000)
rand0 = stripy.sTriangulation(lons=randR.lons[::50],lats=randR.lats[::50])
rand2 = stripy.sTriangulation(lons=randR.lons[::25],lats=randR.lats[::25])
print(str_fmt.format("Random mesh (6)", rand0.npoints, randR.npoints))
# -
# ### Interactive viewer for meshes
# +
from xvfbwrapper import Xvfb
vdisplay = Xvfb()
vdisplay.start()
## The icosahedron with faces in 3D view
import lavavu
## or smesh = icoF0
smesh = icoFR
lv = lavavu.Viewer(border=False, background="#FFFFFF", resolution=[1000,600], near=-10.0)
tris = lv.triangles("triangulation", wireframe=True, colour="#444444", opacity=0.8)
tris.vertices(smesh.points)
tris.indices(smesh.simplices)
tris2 = lv.triangles("triangles", wireframe=False, colour="#77ff88", opacity=0.8)
tris2.vertices(smesh.points)
tris2.indices(smesh.simplices)
nodes = lv.points("nodes", pointsize=2.0, pointtype="shiny", colour="#448080", opacity=0.75)
nodes.vertices(smesh.points)
lv.control.Panel()
lv.control.Range('specular', range=(0,1), step=0.1, value=0.4)
lv.control.Checkbox(property='axis')
lv.control.ObjectList()
lv.control.show()
# -
# ### Plot and compare the predefined meshes
#
# +
def mesh_fig(mesh, meshR, name):
fig = plt.figure(figsize=(10, 10), facecolor="none")
ax = plt.subplot(111, projection=ccrs.Orthographic(central_longitude=0.0, central_latitude=0.0, globe=None))
ax.coastlines(color="lightgrey")
ax.set_global()
generator = mesh
refined = meshR
lons0 = np.degrees(generator.lons)
lats0 = np.degrees(generator.lats)
lonsR = np.degrees(refined.lons)
latsR = np.degrees(refined.lats)
ax.scatter(lons0, lats0, color="Red",
marker="o", s=150.0, transform=ccrs.Geodetic())
ax.scatter(lonsR, latsR, color="DarkBlue",
marker="o", s=50.0, transform=ccrs.Geodetic())
segs = refined.identify_segments()
for s1, s2 in segs:
ax.plot( [lonsR[s1], lonsR[s2]],
[latsR[s1], latsR[s2]],
linewidth=0.5, color="black", transform=ccrs.Geodetic())
fig.savefig(name, dpi=250, transparent=True)
return
mesh_fig(octo0, octo2, "Octagon" )
mesh_fig(octoF0, octoF2, "OctagonF" )
mesh_fig(ico0, ico2, "Icosahedron" )
mesh_fig(icoF0, icoF2, "IcosahedronF" )
mesh_fig(cube0, cube2, "Cube")
mesh_fig(socc0, socc2, "SoccerBall")
mesh_fig(ring0, ring2, "Ring")
mesh_fig(rand0, rand2, "Random")
| Notebooks/ScratchPad/Example1-Stripy-Spherical-Triangulations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
def maxSubArray(nums):
    # Prefix-sum formulation: the best subarray ending at index i is
    # prefix[i] minus the smallest prefix sum seen *before* i, so we track
    # the running minimum prefix rather than comparing global extremes
    # (which would ignore ordering and give wrong answers).
    cum_sum = 0
    min_prefix = 0  # minimum prefix sum so far (empty prefix = 0)
    max_run = float('-inf')
    for el in nums:
        cum_sum += el
        max_run = max(max_run, cum_sum - min_prefix)
        min_prefix = min(min_prefix, cum_sum)
    return max_run
def maxSubArray(nums):
    # Kadane's algorithm: current_sum is the best sum of a subarray ending
    # at the current element, reset to 0 whenever it goes negative.
    # All-negative input is handled separately by returning the largest element.
    best_sum = float('-inf')
    current_sum = 0
    all_negative = True
    largest_el = float('-inf')
    for el in nums:
        largest_el = max(largest_el, el)
        if el > 0:
            all_negative = False
        current_sum = max(0, current_sum + el)
        best_sum = max(best_sum, current_sum)
    if all_negative:
        return largest_el
    return best_sum
maxSubArray([3, -1, -2, -1, 0, 1, 2, 1, 0, -1])
maxSubArray([-1, -2])
maxSubArray([5, -10, 2])
maxSubArray([-5, 2, 0, -1])
maxSubArray([-2,1,-3,4,-1,2,1,-5,4])
maxSubArray([-1])
| 200403/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
# +
import json
import re
import collections
import numpy as np
import multiprocessing
import pandas as pd
from pyagender import PyAgender
from io import BytesIO
from PIL import Image
import requests
import cv2
# -
# In this notebook we estimate the age and gender of a person based on their picture.
# For this we consider only a subset of all images: those containing people that have a name associated with them.
#
# We do the following processing:
#
# 1. get ids of photos where only one person is in the image
# 2. get the list of images associated with each person
# 3. use py-agender to get the age and gender
#
# Finally, we do some evaluation.
# ### 1. get ids of photos where only one person is in the image
df = pd.read_pickle('data/named_subjects.pkl')
df.head()
person_per_image = df.names.map(len)
person_per_image.value_counts()
individual_portraits = person_per_image == 1
# How many pictures do we have of one person?
individual_portraits_df = df[individual_portraits].copy()
individual_portraits_df['name'] = individual_portraits_df.names.map(lambda x: x[0])
individual_portraits_df.groupby('name').id.count().sort_values()
# 49 people don't have portraits. That's okay, we focus on the people that do.
unmatched_people = set([i for x in df.names.to_list() for i in x])\
.difference(set(individual_portraits_df['name'].tolist()))
len(unmatched_people)
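# The step-1 filter above boils down to keeping rows whose `names` list has exactly one entry; a minimal, self-contained sketch on toy data (column names as in the notebook, values invented for illustration):

```python
import pandas as pd

toy = pd.DataFrame({
    'id': ['img1', 'img2', 'img3'],
    'names': [['Alice'], ['Alice', 'Bob'], ['Bob', 'Carol']],
})

# keep only images showing exactly one person
portraits = toy[toy.names.map(len) == 1].copy()
portraits['name'] = portraits.names.map(lambda x: x[0])

# people that never appear alone on any image
all_people = set(n for names in toy.names for n in names)
unmatched = all_people - set(portraits['name'])
print(sorted(unmatched))  # Bob and Carol only appear in group shots
```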
# ### 2. get the list of images associated with each person
personal_portrait_image = individual_portraits_df.groupby('name').apply(lambda x: x.id.tolist())
personal_portrait_image = personal_portrait_image.rename('id').reset_index()
personal_portrait_image
agender = PyAgender()
# ### 3. get age-gender labels
# The API not only gives us an age and gender estimate, but also the rectangle bounding the face. We keep it, as it can be used for the face map.
# +
def get_image(doc):
    url = doc + '/f1.highres.jpg'
    response = requests.get(url)
    img = Image.open(BytesIO(response.content)).convert('RGB')
    img = np.array(img)
    return img
def get_age_gender_estimates(image_docs):
i = 0
estimates = []
    # handle the case where we can't get estimates from one image of a person
while i < len(image_docs) and len(estimates) == 0:
img = get_image(image_docs[i])
retries = 0
while retries < 5 and len(estimates) == 0:
estimates = agender.detect_genders_ages(img)
retries += 1
i = i+1
if estimates:
# use first estimate as it is most likely one
result = estimates[0]
result['number'] = len(estimates)
result['id'] = image_docs[i-1]
return result
return {}
# -
if False:
age_gender_lables = personal_portrait_image.id.map(get_age_gender_estimates)
age_gender_lables = pd.DataFrame(age_gender_lables.tolist())
age_gender_lables['name'] = personal_portrait_image.name
age_gender_lables.to_json('data/age_gender_labeles.json')
age_gender_lables = pd.read_json('data/age_gender_labeles.json')
true_gender = pd.read_pickle('data/bnf_table_full.pkl')
len(true_gender) - len(age_gender_lables)
true_gender = true_gender[['name', 'gender']]
age_gender_lables = pd.merge(true_gender, age_gender_lables, on='name', suffixes=('_true', '_estimated'),how='left')
age_gender_lables.gender_true.value_counts()
age_gender_lables['gender_estimates_binary'] =\
age_gender_lables.loc[age_gender_lables.gender_estimated.notna(), 'gender_estimated'].map(lambda x: 'féminin' if x>.5 else 'masculin')
age_gender_lables.gender_estimates_binary.isna().value_counts()
age_gender_lables.gender_estimates_binary.value_counts()
age_gender_lables.to_json('data/age_gender_labeles_augmented.json')
# # evaluation of method
#
#
# The evaluation of the algorithm itself is presented on the py-agender project page: https://github.com/yu4u/age-gender-estimation
len(age_gender_lables[age_gender_lables.age.isna()])
# can't get labels for 46 people
unfound = age_gender_lables[age_gender_lables.age.isna()].name.tolist()
personal_portrait_image[personal_portrait_image.name.isin(unfound)].id.map(lambda x:x[0])
# number of faces that we got:
age_gender_lables.number.value_counts()
age_gender_lables.age.plot(kind='hist', bins=100)
plt.title('histogram age distribution')
plt.xlabel('age in years')
# mostly men
age_gender_lables[age_gender_lables.gender_estimated.notna()]
plt.title('histogram of gender estimates')
plt.ylabel('count')
age_gender_lables[(age_gender_lables.gender_true =='masculin') &\
                  age_gender_lables.gender_estimated.notna()].gender_estimated.plot(kind='hist', bins=100)
age_gender_lables[(age_gender_lables.gender_true !='masculin') &\
                  age_gender_lables.gender_estimated.notna()].gender_estimated.plot(kind='hist', bins=100)
plt.legend(['male', 'female'])
# Clearly, for some people the estimate disagrees with the gender implied by the name; something went wrong
CM = confusion_matrix(age_gender_lables.gender_true == 'masculin',
age_gender_lables.gender_estimates_binary == 'masculin')
CM
TN = CM[0][0]
FN = CM[1][0]
TP = CM[1][1]
FP = CM[0][1]
FN
# +
# Sensitivity, hit rate, recall, or true positive rate
TPR = TP/(TP+FN)
# Specificity or true negative rate
TNR = TN/(TN+FP)
# Precision or positive predictive value
PPV = TP/(TP+FP)
# Negative predictive value
NPV = TN/(TN+FN)
# Fall out or false positive rate
FPR = FP/(FP+TN)
# False negative rate
FNR = FN/(TP+FN)
# False discovery rate
FDR = FP/(TP+FP)
# Overall accuracy
ACC = (TP+TN)/(TP+FP+FN+TN)
# -
ACC
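# To sanity-check the formulas above, here is a tiny worked example on a hand-built confusion matrix (pure Python; the numbers are illustrative, not from the notebook's data):

```python
# confusion matrix laid out as sklearn returns it: rows = true class, cols = predicted
# CM = [[TN, FP],
#       [FN, TP]]
CM = [[50, 10],
      [5, 35]]
TN, FP = CM[0]
FN, TP = CM[1]

TPR = TP / (TP + FN)                    # sensitivity / recall: 35 / 40
TNR = TN / (TN + FP)                    # specificity: 50 / 60
PPV = TP / (TP + FP)                    # precision: 35 / 45
ACC = (TP + TN) / (TP + FP + FN + TN)   # overall accuracy: 85 / 100

print(TPR, TNR, PPV, ACC)
```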
# ## Example of multiple matches or mismatches
# +
font = {'family': 'serif',
'color': 'yellow',
'weight': 'normal',
'size': 16,
}
img = get_image(age_gender_lables.id[1296])
for detect in [age_gender_lables.iloc[1296]]:
gender = 'Woman' if detect['gender_estimated'] > .5 else 'Man'
plt.figure(figsize=(10, 10))
plt.text(detect['left'], detect['top']-10, str(detect['age'])[:2] + ' ' + gender, fontdict=font)
plt.imshow(cv2.rectangle(img, (int(detect['left']), int(detect['top'])), (int(detect['right']), int(detect['bottom'])), (255, 255, 0), 3))
| notebooks/extracting age and gender.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
df = pd.read_csv('dataset.csv')
df.year = df.year.astype('int32')
df.shape
df = df.dropna()
df.shape
df.head()
# # Maintypes
df.main_type.unique()
df.groupby('main_type').count().plot(kind='bar')
# # Subtypes
df.groupby(['main_type', 'sub_type']).describe()
df.groupby(['main_type', 'sub_type']).count().plot(kind='bar')
# # Years
max(df.year), min(df.year)
df.groupby('year').count().plot()
# # Create periods
periods = []
start_year = 1600
end_year = 1649
periods.append((start_year, end_year))
for i in range(5):
start_year += 50
end_year = start_year + 49
periods.append((start_year, end_year))
periods
def create_period_column(df, periods):
    for index, (start_year, end_year) in enumerate(periods):
        # end_year is inclusive (e.g. 1600-1649), hence <=
        df.loc[df[(df.year >= start_year) & (df.year <= end_year)].index, 'period'] = f'P{index + 1}'
    return df
df = create_period_column(df, periods)
df.dropna(subset=['period'], inplace=True)
df.head()
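# An equivalent, more idiomatic way to derive the period column is `pd.cut` (a sketch using the same 50-year bins and `P1`..`P6` labels as above, on a toy year series):

```python
import pandas as pd

years = pd.Series([1600, 1649, 1650, 1801, 1899])
bins = list(range(1600, 1950, 50))  # edges: 1600, 1650, ..., 1900
labels = [f'P{i + 1}' for i in range(len(bins) - 1)]

# right=False makes the bins left-inclusive: [1600, 1650) -> P1, etc.,
# which matches the inclusive 1600-1649 period above
period = pd.cut(years, bins=bins, labels=labels, right=False)
print(list(period))
```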
df.groupby(['period', 'main_type']).count().plot(kind='bar')
# # First Attempt:
# ### Fachtexte and Belletristik over all periods
df_filtered = df[(df.main_type == 'Belletristik') | (df.main_type == 'Fachtext')]
df_filtered.head()
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, classification_report
from imblearn.pipeline import make_pipeline
from imblearn.under_sampling import RandomUnderSampler
# +
from tqdm import tqdm_notebook
train_period = 'P1'
train_df = df_filtered[df_filtered.period == train_period]
tf = TfidfVectorizer(max_features=7500)
X_train = tf.fit_transform(train_df.text)
y_train = train_df.main_type.to_numpy()
clf = make_pipeline(RandomUnderSampler(), LogisticRegression(C=4))
clf.fit(X_train, y_train)
scores_belletristik = []
scores_fachtexte = []
scores = []
for test_period in tqdm_notebook(list(sorted(df.period.unique()))):
test_df = df_filtered[df_filtered.period == test_period]
X_test = tf.transform(test_df.text)
y_test = test_df.main_type.to_numpy()
y_pred = clf.predict(X_test)
score = f1_score(y_test, y_pred, average='macro')
scores.append(score)
report = classification_report(y_test, y_pred, output_dict=True)
scores_belletristik.append(report['Belletristik']['f1-score'])
scores_fachtexte.append(report['Fachtext']['f1-score'])
scores, scores_belletristik, scores_fachtexte
# -
import matplotlib.pyplot as plt
plt.plot(scores_belletristik, label='Belletristik')
plt.plot(scores_fachtexte, label='Fachtexte')
plt.legend()
plt.show()
| Exploration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from pyvista import set_plot_theme
set_plot_theme('document')
import pyvista
pyvista._wrappers['vtkPolyData'] = pyvista.PolyData
# Interpolating {#interpolate_example}
# =============
#
# Interpolate one mesh's point/cell arrays onto another mesh's nodes
# using a Gaussian kernel.
#
# sphinx_gallery_thumbnail_number = 4
import pyvista as pv
from pyvista import examples
# Simple Surface Interpolation
# ============================
#
# Resample the points' arrays onto a surface
#
# +
# Download sample data
surface = examples.download_saddle_surface()
points = examples.download_sparse_points()
p = pv.Plotter()
p.add_mesh(points, point_size=30.0, render_points_as_spheres=True)
p.add_mesh(surface)
p.show()
# -
# Run the interpolation
#
# +
interpolated = surface.interpolate(points, radius=12.0)
p = pv.Plotter()
p.add_mesh(points, point_size=30.0, render_points_as_spheres=True)
p.add_mesh(interpolated, scalars="val")
p.show()
# -
# Complex Interpolation
# =====================
#
# In this example, we will interpolate sparse points in 3D space into a
# volume. These data come from temperature probes in the subsurface, and the
# goal is to create an approximate 3D model of the temperature field in
# the subsurface.
#
# This approach is great for back-of-the-envelope estimation, but pales in
# comparison to kriging.
#
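# Conceptually, the Gaussian-kernel interpolation used here is a distance-weighted average; a minimal NumPy sketch of the idea (not PyVista's actual implementation — `radius` and `sharpness` merely mimic the parameter names used below):

```python
import numpy as np

def gaussian_interpolate(targets, points, values, radius=1.0, sharpness=2.0):
    """Weighted average of `values` at `points`, evaluated at each target.

    Weights follow a Gaussian kernel w = exp(-(sharpness * d / radius)**2);
    points farther than `radius` contribute nothing.
    """
    out = np.empty(len(targets))
    for i, t in enumerate(targets):
        d = np.linalg.norm(points - t, axis=1)
        w = np.exp(-(sharpness * d / radius) ** 2)
        w[d > radius] = 0.0
        out[i] = np.dot(w, values) / w.sum() if w.sum() > 0 else np.nan
    return out

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
vals = np.array([10.0, 20.0])
# a target sitting on a sample point (with the other point out of range)
# recovers that point's value exactly
print(gaussian_interpolate(np.array([[0.0, 0.0, 0.0]]), pts, vals, radius=0.5))
```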
# Download the sparse data
probes = examples.download_thermal_probes()
# Create the interpolation grid around the sparse data
#
grid = pv.UniformGrid()
grid.origin = (329700, 4252600, -2700)
grid.spacing = (250, 250, 50)
grid.dimensions = (60, 75, 100)
# +
dargs = dict(cmap="coolwarm", clim=[0,300], scalars="temperature (C)")
cpos = [(364280.5723737897, 4285326.164400684, 14093.431895014139),
(337748.7217949739, 4261154.45054595, -637.1092549935128),
(-0.29629216102673206, -0.23840196609932093, 0.9248651025279784)]
p = pv.Plotter()
p.add_mesh(grid.outline(), color='k')
p.add_mesh(probes, render_points_as_spheres=True, **dargs)
p.show(cpos=cpos)
# -
# Run an interpolation
#
interp = grid.interpolate(probes, radius=15000, sharpness=10, strategy='mask_points')
# Visualize the results
#
# +
vol_opac = [0, 0, .2, 0.2, 0.5, 0.5]
p = pv.Plotter(shape=(1,2), window_size=[1024*3, 768*2])
p.enable_depth_peeling()
p.add_volume(interp, opacity=vol_opac, **dargs)
p.add_mesh(probes, render_points_as_spheres=True, point_size=10, **dargs)
p.subplot(0,1)
p.add_mesh(interp.contour(5), opacity=0.5, **dargs)
p.add_mesh(probes, render_points_as_spheres=True, point_size=10, **dargs)
p.link_views()
p.show(cpos=cpos)
| notebooks/official/interpolate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <script async src="https://www.googletagmanager.com/gtag/js?id=UA-59152712-8"></script>
# <script>
# window.dataLayer = window.dataLayer || [];
# function gtag(){dataLayer.push(arguments);}
# gtag('js', new Date());
#
# gtag('config', 'UA-59152712-8');
# </script>
#
# # Start-to-Finish Example: Numerical Solution of the Scalar Wave Equation, in Curvilinear Coordinates
#
# ## Author: <NAME>
# ### Formatting improvements courtesy <NAME>
#
# ## This module solves the scalar wave equation in *spherical coordinates* (though other coordinates, including Cartesian, may be chosen).
#
# **Notebook Status:** <font color ="green"><b> Validated </b></font>
#
# **Validation Notes:** This module has been validated to converge at the expected order to the exact solution (see [plot](#convergence) at bottom).
#
# ### NRPy+ Source Code for this module:
# * [ScalarWave/ScalarWaveCurvilinear_RHSs.py](../edit/ScalarWave/ScalarWaveCurvilinear_RHSs.py) [\[**tutorial**\]](Tutorial-ScalarWaveCurvilinear.ipynb) Generates the right-hand side for the Scalar Wave Equation in curvilinear coordinates
# * [ScalarWave/InitialData.py](../edit/ScalarWave/InitialData.py) [\[**tutorial**\]](Tutorial-ScalarWave.ipynb) Generating C code for either plane wave or spherical Gaussian initial data for the scalar wave equation
#
# ## Introduction:
# As outlined in the [previous NRPy+ tutorial notebook](Tutorial-ScalarWaveCurvilinear.ipynb), we first use NRPy+ to generate initial data for the scalar wave equation, and then we use it to generate the RHS expressions for [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html) time integration based on the [explicit Runge-Kutta fourth-order scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4).
#
# The entire algorithm is outlined below, with NRPy+-based components highlighted in <font color='green'>green</font>.
#
# 1. Allocate memory for gridfunctions, including temporary storage for the RK4 time integration.
# 1. <font color='green'>Set gridfunction values to initial data.</font>
# 1. Evolve the system forward in time using RK4 time integration. At each RK4 substep, do the following:
# 1. <font color='green'>Evaluate scalar wave RHS expressions.</font>
# 1. Apply boundary conditions.
# 1. At the end of each iteration in time, output the relative error between numerical and exact solutions.
# <a id='toc'></a>
#
# # Table of Contents
# $$\label{toc}$$
#
# This notebook is organized as follows
#
# 1. [Step 1](#writec): Generate C code to solve the scalar wave equation in curvilinear coordinates
# 1. [Step 1.a](#id_rhss): C code generation: Initial data and scalar wave right-hand-sides
# 1. [Step 1.b](#boundaryconditions): C code generation: Boundary condition driver
# 1. [Step 1.c](#cparams_rfm_and_domainsize): Generate Cparameters files; set reference metric parameters, including `domain_size`
# 1. [Step 1.d](#cfl): C code generation: Finding the minimum proper distance between grid points, needed for [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673)-limited timestep
# 1. [Step 2](#mainc): `ScalarWaveCurvilinear_Playground.c`: The Main C Code
# 1. [Step 3](#compileexec): Compile generated C codes & solve the scalar wave equation
# 1. [Step 4](#convergence): Code validation: Plot the numerical error, and confirm that it converges to zero at expected rate with increasing numerical resolution (sampling)
# 1. [Step 5](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
# <a id='writec'></a>
#
# # Step 1: Using NRPy+ to generate necessary C code to solve the scalar wave equation in curvilinear, singular coordinates \[Back to [top](#toc)\]
# $$\label{writec}$$
# <a id='id_rhss'></a>
#
# ## Step 1.a: C code generation: Initial data and scalar wave RHSs \[Back to [top](#toc)\]
# $$\label{id_rhss}$$
#
#
# We choose simple plane wave initial data, which is documented in the [Cartesian scalar wave module](Tutorial-ScalarWave.ipynb). Specifically, we implement monochromatic (single-wavelength) wave traveling in the $\hat{k}$ direction with speed $c$
# $$u(\vec{x},t) = f(\hat{k}\cdot\vec{x} - c t),$$
# where $\hat{k}$ is a unit vector.
#
# The scalar wave RHSs in curvilinear coordinates (documented [in the previous module](Tutorial-ScalarWaveCurvilinear.ipynb)) are simply the right-hand sides of the scalar wave equation written in curvilinear coordinates
# \begin{align}
# \partial_t u &= v \\
# \partial_t v &= c^2 \left(\hat{g}^{ij} \partial_{i} \partial_{j} u - \hat{\Gamma}^i \partial_i u\right),
# \end{align}
# where $\hat{g}^{ij}$ is the inverse reference 3-metric (i.e., the metric corresponding to the underlying coordinate system we choose$-$spherical coordinates in our example below), and $\hat{\Gamma}^i$ is the contracted Christoffel symbol $\hat{\Gamma}^\tau = \hat{g}^{\mu\nu} \hat{\Gamma}^\tau_{\mu\nu}$.
#
# Below we generate
# + the initial data by calling `InitialData(Type="PlaneWave")` inside the NRPy+ [ScalarWave/InitialData.py](../edit/ScalarWave/InitialData.py) module (documented in [this NRPy+ Jupyter notebook](Tutorial-ScalarWave.ipynb)), and
# + the RHS expressions by calling `ScalarWaveCurvilinear_RHSs()` inside the NRPy+ [ScalarWave/ScalarWaveCurvilinear_RHSs.py](../edit/ScalarWave/ScalarWaveCurvilinear_RHSs.py) module (documented in [this NRPy+ Jupyter notebook](Tutorial-ScalarWaveCurvilinear.ipynb)).
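# Before turning to the generated C code, the Method-of-Lines structure can be illustrated with a minimal 1D Cartesian sketch (NumPy, periodic boundaries; this demonstrates RK4 integration of $\partial_t u = v$, $\partial_t v = c^2 \partial_x^2 u$, and is an illustration only, not the NRPy+-generated curvilinear code):

```python
import numpy as np

N, c = 64, 1.0
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
dt = 0.5 * dx / c  # CFL-limited timestep

def rhs(u, v):
    # second-order centered finite difference for u_xx, periodic BCs
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return v, c**2 * u_xx

def rk4_step(u, v, dt):
    k1u, k1v = rhs(u, v)
    k2u, k2v = rhs(u + 0.5 * dt * k1u, v + 0.5 * dt * k1v)
    k3u, k3v = rhs(u + 0.5 * dt * k2u, v + 0.5 * dt * k2v)
    k4u, k4v = rhs(u + dt * k3u, v + dt * k3v)
    u = u + dt / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
    v = v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return u, v

# travelling plane wave u(x, t) = sin(x - c t) is an exact solution to compare against
u, v = np.sin(x), -c * np.cos(x)
t = 0.0
for _ in range(100):
    u, v = rk4_step(u, v, dt)
    t += dt
err = np.max(np.abs(u - np.sin(x - c * t)))
print(err)  # small; dominated by the O(dx^2) spatial truncation error
```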
# +
# Step P1: Import needed NRPy+ core modules:
from outputC import lhrh,outCfunction # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("ScalarWaveCurvilinear_Playground_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three this time, and then read
# the parameter as DIM.
par.set_parval_from_str("grid::DIM",3)
DIM = par.parval_from_str("grid::DIM")
# Step 2: Set some core parameters, including CoordSystem, boundary condition,
# MoL, timestepping algorithm, FD order,
# floating point precision, and CFL factor:
# Step 2.a: Set the coordinate system for the numerical grid
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "SinhSpherical"
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric()
# Step 2.b: Set defaults for Coordinate system parameters.
# These are perhaps the most commonly adjusted parameters,
# so we enable modifications at this high level.
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
# * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 10.0 # Needed for all coordinate systems.
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.4 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05# If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 1.0 # If SymTP chosen
# Step 2.c: Set the order of spatial and temporal derivatives;
# the core data type, and the CFL factor.
# RK_method choices include: Euler, "RK2 Heun", "RK2 MP", "RK2 Ralston", RK3, "RK3 Heun", "RK3 Ralston",
# SSPRK3, RK4, DP5, DP5alt, CK5, DP6, L6, DP8
RK_method = "RK4"
FD_order = 4 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
REAL = "double" # Best to use double here.
CFL_FACTOR= 1.0
# Step 3: Generate Runge-Kutta-based (RK-based) timestepping code.
# Each RK substep involves two function calls:
# 3.A: Evaluate RHSs (RHS_string)
# 3.B: Apply boundary conditions (post_RHS_string)
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
RHS_string = "rhs_eval(&rfmstruct, ¶ms, RK_INPUT_GFS, RK_OUTPUT_GFS);"
post_RHS_string = "apply_bcs_curvilinear(¶ms, &bcstruct, NUM_EVOL_GFS, evol_gf_parity, RK_OUTPUT_GFS);"
MoL.MoL_C_Code_Generation(RK_method, RHS_string = RHS_string, post_RHS_string = post_RHS_string,
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
# Step 4: Import the ScalarWave.InitialData module.
# This command only declares ScalarWave initial data
# parameters and the InitialData() function.
import ScalarWave.InitialData as swid
# Step 5: Import ScalarWave_RHSs module.
# This command only declares ScalarWave RHS parameters
# and the ScalarWave_RHSs function (called later)
import ScalarWave.ScalarWaveCurvilinear_RHSs as swrhs
# Step 6: Set the finite differencing order to FD_order (set above).
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER",FD_order)
# Step 7: Call the InitialData() function to set up initial data.
# Options include:
# "PlaneWave": monochromatic (single frequency/wavelength) plane wave
# "SphericalGaussian": spherically symmetric Gaussian, with default stdev=3
swid.InitialData(CoordSystem=CoordSystem,Type="PlaneWave")
# Step 8: Generate SymPy symbolic expressions for
# uu_rhs and vv_rhs; the ScalarWave RHSs.
# This function also declares the uu and vv
# gridfunctions, which need to be declared
# to output even the initial data to C file.
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
swrhs.ScalarWaveCurvilinear_RHSs()
# Step 8.a: Now that we are finished with all the rfm hatted
# quantities, let's restore them to their closed-
# form expressions.
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
# Step 9: Copy SIMD/SIMD_intrinsics.h to $Ccodesdir/SIMD/SIMD_intrinsics.h
cmd.mkdir(os.path.join(Ccodesdir,"SIMD"))
shutil.copy(os.path.join("SIMD/")+"SIMD_intrinsics.h",os.path.join(Ccodesdir,"SIMD/"))
# Step 10: Generate all needed C functions
enable_FD_functions = False
par.set_parval_from_str("finite_difference::enable_FD_functions",enable_FD_functions)
desc="Part P3: Declare the function for the exact solution at a single point. time==0 corresponds to the initial data."
name="exact_solution_single_point"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params ="const REAL xx0,const REAL xx1,const REAL xx2,const paramstruct *restrict params,REAL *uu_exact,REAL *vv_exact",
body = fin.FD_outputC("returnstring",[lhrh(lhs="*uu_exact",rhs=swid.uu_ID),
lhrh(lhs="*vv_exact",rhs=swid.vv_ID)]),
loopopts = "")
desc="Part P4: Declare the function for the exact solution at all points. time==0 corresponds to the initial data."
name="exact_solution_all_points"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params ="const paramstruct *restrict params,REAL *restrict xx[3], REAL *restrict in_gfs",
body ="""exact_solution_single_point(xx[0][i0],xx[1][i1],xx[2][i2],params,
&in_gfs[IDX4S(UUGF,i0,i1,i2)],&in_gfs[IDX4S(VVGF,i0,i1,i2)]);""",
loopopts = "AllPoints")
desc="Part P5: Declare the function to evaluate the scalar wave RHSs"
includes = None
if enable_FD_functions:
includes = ["finite_difference_functions.h"]
name="rhs_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), includes=includes, desc=desc, name=name,
params ="""rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict in_gfs, REAL *restrict rhs_gfs""",
body =fin.FD_outputC("returnstring",[lhrh(lhs=gri.gfaccess("rhs_gfs","uu"),rhs=swrhs.uu_rhs),
lhrh(lhs=gri.gfaccess("rhs_gfs","vv"),rhs=swrhs.vv_rhs)],
params="enable_SIMD=True"),
loopopts = "InteriorPoints,enable_SIMD,enable_rfm_precompute")
# Step 10.b Output functions for computing all finite-difference stencils
if enable_FD_functions:
fin.output_finite_difference_functions_h(path=Ccodesdir)
# -
# <a id='boundaryconditions'></a>
#
# ## Step 1.b: Output needed C code for boundary condition driver \[Back to [top](#toc)\]
# $$\label{boundaryconditions}$$
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),
Cparamspath=os.path.join("../"))
# <a id='cparams_rfm_and_domainsize'></a>
#
# ## Step 1.c: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](#toc)\]
# $$\label{cparams_rfm_and_domainsize}$$
#
# Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.
#
# Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above
# +
# Step 1.c.i: Set free_parameters.h
with open(os.path.join(Ccodesdir,"free_parameters.h"),"w") as file:
file.write("""
// Set free-parameter values.
params.time = 0.0; // Initial simulation time time corresponds to exact solution at time=0.
params.wavespeed = 1.0;\n""")
# Append to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Step 1.c.ii: Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(os.path.join(Ccodesdir))
# Step 1.c.iii: Generate xx_to_Cart.h, which contains xx_to_Cart() for
# (the mapping from xx->Cartesian) for the chosen
# CoordSystem:
rfm.xx_to_Cart_h("xx_to_Cart","./set_Cparameters.h",os.path.join(Ccodesdir,"xx_to_Cart.h"))
# Step 1.c.iv: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# -
# <a id='cfl'></a>
#
# ## Step 1.d: Output needed C code for finding the minimum proper distance between grid points, needed for [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673)-limited timestep \[Back to [top](#toc)\]
# $$\label{cfl}$$
#
# In order for our explicit-timestepping numerical solution to the scalar wave equation to be stable, it must satisfy the [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673) condition:
# $$
# \Delta t \le \frac{\min(ds_i)}{c},
# $$
# where $c$ is the wavespeed, and
# $$ds_i = h_i \Delta x^i$$
# is the proper distance between neighboring gridpoints in the $i$th direction (in 3D, there are 3 directions), $h_i$ is the $i$th reference metric scale factor, and $\Delta x^i$ is the uniform grid spacing in the $i$th direction:
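# As a concrete illustration of this formula (a hand-written sketch, not the generated `find_timestep()` code), for ordinary spherical coordinates the scale factors are $h_r = 1$, $h_\theta = r$, $h_\phi = r\sin\theta$, so the angular proper distances shrink near the origin and the polar axis:

```python
import numpy as np

def cfl_timestep_spherical(r, th, ph, c=1.0, cfl_factor=0.5):
    # uniform grid spacings in each coordinate direction
    dr, dth, dph = r[1] - r[0], th[1] - th[0], ph[1] - ph[0]
    R, TH = np.meshgrid(r, th, indexing='ij')
    # proper distances ds_i = h_i * dx^i for the spherical reference metric
    ds_r = np.full_like(R, dr)
    ds_th = R * dth
    ds_ph = R * np.sin(TH) * dph
    ds_min = min(ds_r.min(), ds_th.min(), ds_ph.min())
    return cfl_factor * ds_min / c

r = np.linspace(0.1, 10.0, 32)
th = np.linspace(0.01, np.pi - 0.01, 16)
ph = np.linspace(0.0, 2 * np.pi, 16, endpoint=False)
dt = cfl_timestep_spherical(r, th, ph)
print(dt)  # dominated by ds_phi at the smallest r and sin(theta)
```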
# Output the find_timestep() function to a C file.
rfm.out_timestep_func_to_file(os.path.join(Ccodesdir,"find_timestep.h"))
# <a id='mainc'></a>
#
# # Step 2: `ScalarWaveCurvilinear_Playground.c`: The Main C Code \[Back to [top](#toc)\]
# $$\label{mainc}$$
#
# Just as in [the start-to-finish, solving the scalar wave equation in Cartesian coordinates module](Tutorial-Start_to_Finish-ScalarWave.ipynb), we will implement the scalar wave equation via the Method of Lines. As discussed above, the critical differences between this code and the Cartesian version are as follows:
# 1. The CFL-constrained timestep depends on the proper distance between neighboring gridpoints
# 1. The boundary conditions must account for the fact that ghost zone points lying in the domain exterior can map either to the interior of the domain, or lie on the outer boundary. In the former case, we simply copy the data from the interior. In the latter case, we apply the usual outer boundary conditions.
# 1. The numerical grids must be staggered to avoid direct evaluation of the equations on coordinate singularities.
# +
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER),
# and set the CFL_FACTOR (which can be overwritten at the command line)
with open(os.path.join(Ccodesdir,"ScalarWaveCurvilinear_Playground_REAL__NGHOSTS__CFL_FACTOR.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2))+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set the CFL Factor (can be overwritten at the command line)
REAL CFL_FACTOR = """+str(CFL_FACTOR)+";\n")
# +
# %%writefile $Ccodesdir/ScalarWaveCurvilinear_Playground.c
// Step P0: Define REAL and NGHOSTS; and declare CFL_FACTOR. This header is generated in NRPy+.
#include "ScalarWaveCurvilinear_Playground_REAL__NGHOSTS__CFL_FACTOR.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// All SIMD intrinsics used in SIMD-enabled C code loops are defined here:
#include "SIMD/SIMD_intrinsics.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xx_to_Cart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Set xx_to_Cart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xx_to_Cart.h"
// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P7: Find the CFL-constrained timestep
#include "find_timestep.h"
// Part P8: Declare the function for the exact solution at a single point. time==0 corresponds to the initial data.
#include "exact_solution_single_point.h"
// Part P9: Declare the function for the exact solution at all points. time==0 corresponds to the initial data.
#include "exact_solution_all_points.h"
// Part P10: Declare the function to evaluate the scalar wave RHSs
#include "rhs_eval.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up scalar wave initial data
// Step 2: Output relative error between numerical and exact solution.
// Step 3: Evolve scalar wave initial data forward in time using Method of Lines with chosen RK-like algorithm,
// applying quadratic extrapolation outer boundary conditions.
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if(argc != 4 || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < NGHOSTS) {
    printf("Error: Expected three command-line arguments: ./ScalarWaveCurvilinear_Playground Nx0 Nx1 Nx2,\n");
printf("where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
printf("Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
printf("Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
printf(" For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0h: Time coordinate parameters
const REAL t_final = 0.7*domain_size; /* Final time is set so that at t=t_final,
* data at the origin have not been corrupted
* by the approximate outer boundary condition */
// Step 0i: Set timestep based on smallest proper distance between gridpoints and CFL factor
REAL dt = find_timestep(¶ms, xx);
//printf("# Timestep set to = %e\n",(double)dt);
int N_final = (int)(t_final / dt + 0.5); // The number of points in time.
// Add 0.5 to account for C rounding down
// typecasts to integers.
int output_every_N = (int)((REAL)N_final/800.0);
if(output_every_N == 0) output_every_N = 1;
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
printf("Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
printf(" or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Step 1: Set up initial data to be exact solution at time=0:
params.time = 0.0; exact_solution_all_points(¶ms, xx, y_n_gfs);
for(int n=0;n<=N_final;n++)
{ // Main loop to progress forward in time.
// Step 1a: Set current time to correct value & compute exact solution
params.time = ((REAL)n)*dt;
// Step 2: Code validation: Compute log of L2 norm of difference
// between numerical and exact solutions:
// log_L2_Norm = log10( sqrt[Integral( [numerical - exact]^2 * dV)] ),
// where integral is within 30% of the grid outer boundary (domain_size)
if(n%output_every_N == 0) {
REAL integral = 0.0;
REAL numpts = 0.0;
#pragma omp parallel for reduction(+:integral,numpts)
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xCart[3]; xx_to_Cart(¶ms,xx,i0,i1,i2, xCart);
if(sqrt(xCart[0]*xCart[0] + xCart[1]*xCart[1] + xCart[2]*xCart[2]) < domain_size*0.3) {
REAL uu_exact,vv_exact; exact_solution_single_point(xx[0][i0],xx[1][i1],xx[2][i2],¶ms,
&uu_exact,&vv_exact);
double num = (double)y_n_gfs[IDX4S(UUGF,i0,i1,i2)];
double exact = (double)uu_exact;
integral += (num - exact)*(num - exact);
numpts += 1.0;
}
}
// Compute and output the log of the L2 norm.
REAL log_L2_Norm = log10(sqrt(integral/numpts));
printf("%e %e\n",(double)params.time,log_L2_Norm);
}
// Step 3: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
#include "MoLtimestepping/RK_MoL.h"
} // End main loop to progress forward in time.
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
# -
# <a id='compileexec'></a>
#
# # Step 3: Compile generated C codes & solve the scalar wave equation \[Back to [top](#toc)\]
# $$\label{compileexec}$$
#
# To aid in the cross-platform-compatible (with Windows, MacOS, & Linux) compilation and execution, we make use of `cmdline_helper` [(**Tutorial**)](Tutorial-cmdline_helper.ipynb).
# +
import cmdline_helper as cmd
cmd.C_compile(os.path.join(Ccodesdir,"ScalarWaveCurvilinear_Playground.c"),
os.path.join(outdir,"ScalarWaveCurvilinear_Playground"),compile_mode="optimized")
# # !clang -Ofast -fopenmp -mavx2 -mfma ScalarWave/ScalarWaveCurvilinear_Playground.c -o ScalarWaveCurvilinear_Playground -lm
# # !icc -align -qopenmp -xHost -O2 -qopt-report=5 -qopt-report-phase ipo -qopt-report-phase vec -vec-threshold1 -qopt-prefetch=4 ScalarWave/ScalarWaveCurvilinear_Playground.c -o ScalarWaveCurvilinear_Playground
# # !gcc-7 -Ofast -fopenmp -march=native ScalarWave/ScalarWaveCurvilinear_Playground.c -o ScalarWaveCurvilinear_Playground -lm
# Change to output directory
os.chdir(outdir)
# Clean up existing output files
cmd.delete_existing_files("out-*resolution.txt")
# Run executable
if par.parval_from_str("reference_metric::CoordSystem") == "Cartesian":
cmd.Execute("ScalarWaveCurvilinear_Playground", "16 16 16", "out-lowresolution.txt")
cmd.Execute("ScalarWaveCurvilinear_Playground", "24 24 24", "out-medresolution.txt")
else:
cmd.Execute("ScalarWaveCurvilinear_Playground", "16 8 16", "out-lowresolution.txt")
# 4.28s with icc and FD order = 10.
cmd.Execute("ScalarWaveCurvilinear_Playground", "24 12 24", "out-medresolution.txt")
########################################
# BENCHMARK 48x24x48 RUN, FD order = 4. desktop: 17.33s
# laptop: 51.82s on icc. 45.02s on GCC 9, 45.03s on GCC 7, 51.67s on clang
# cmd.Execute("ScalarWaveCurvilinear_Playground", "48 24 48", "out-hghresolution.txt")
# # %timeit cmd.Execute("ScalarWaveCurvilinear_Playground", "48 24 48", "out-hghresolution.txt", verbose=False)
# FD functions disabled:
# 16 s ± 702 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# FD functions enabled:
# 16.1 s ± 384 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# Return to root directory
os.chdir(os.path.join("../../"))
# -
# <a id='convergence'></a>
#
# # Step 4: Code validation: Plot the numerical error, and confirm that it converges to zero at expected rate with increasing numerical resolution (sampling) \[Back to [top](#toc)\]
# $$\label{convergence}$$
# The numerical solution $u_{\rm num}(x0,x1,x2,t)$ should converge to the exact solution $u_{\rm exact}(x0,x1,x2,t)$ at fourth order, which means that
# $$
# u_{\rm num}(x0,x1,x2,t) = u_{\rm exact}(x0,x1,x2,t) + \mathcal{O}\left((\Delta x0)^4\right)+ \mathcal{O}\left((\Delta x1)^4\right)+ \mathcal{O}\left((\Delta x2)^4\right)+ \mathcal{O}\left((\Delta t)^4\right).
# $$
#
# Thus the relative error $E_{\rm rel}$ should satisfy:
# $$
# E_{\rm rel} = \left|\frac{u_{\rm num}(x0,x1,x2,t) - u_{\rm exact}(x0,x1,x2,t)}{u_{\rm exact}(x0,x1,x2,t)}\right| = \mathcal{O}\left((\Delta x0)^4\right)+ \mathcal{O}\left((\Delta x1)^4\right)+ \mathcal{O}\left((\Delta x2)^4\right)+ \mathcal{O}\left((\Delta t)^4\right).
# $$
#
# We confirm this convergence behavior by first solving the scalar wave equation at two resolutions: $16\times 8\times 16$ (or $16^3$ if `reference_metric::CoordSystem` is set to `Cartesian`), and $24\times 12\times 24$ (or $24^3$ if `reference_metric::CoordSystem` is set to `Cartesian`) and evaluating the maximum logarithmic relative error $\log_{10} E_{\rm rel,max}$ between numerical and exact solutions within a region $R < 0.3\,{\rm RMAX}$ (matching the `domain_size*0.3` cut in the C code) at all iterations.
#
# Since we increase the resolution uniformly over all four coordinates $(x0,x1,x2,t)$, $E_{\rm rel}$ should drop uniformly as $(\Delta x0)^4$:
# $$
# E_{\rm rel} \propto (\Delta x0)^4.
# $$
#
# So at the two resolutions, we should find that
# $$
# \frac{E_{\rm rel}(16\times 8\times 16)}{E_{\rm rel}(24\times 12\times 24)} = \frac{E_{\rm rel}(16^3)}{E_{\rm rel}(24^3)} \approx \left(\frac{(\Delta x0)_{16}}{(\Delta x0)_{24}}\right)^{4} = \left(\frac{24}{16}\right)^4 \approx 5.
# $$
#
# Since we're measuring logarithmic relative error, this should be
# $$
# \log_{10}\left(\frac{E_{\rm rel}(16\times 8\times 16)}{E_{\rm rel}(24\times 12\times 24)}\right) = \log_{10}\left(\frac{E_{\rm rel}(16^3)}{E_{\rm rel}(24^3)}\right) \approx \log_{10}(5).
# $$
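# As a quick sanity check of the arithmetic above, the expected error ratio and the
# corresponding vertical offset applied to the log-error curve can be computed
# directly (a minimal sketch; 16 and 24 are the two resolutions used in this notebook):

```python
import math

# E_rel ∝ (Δx0)^4 and Δx0 ∝ 1/N0, so going from N0=16 to N0=24:
expected_ratio = (24.0 / 16.0) ** 4                # error ratio, (3/2)^4 = 5.0625
expected_log_offset = 4 * math.log10(16.0 / 24.0)  # shift applied to the N0=16 curve

print(expected_ratio, expected_log_offset)
```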
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import mpmath as mp
import csv
def file_reader(filename):
with open(filename) as file:
reader = csv.reader(file, delimiter=" ")
data = list(zip(*reader))
# data is a tuple of strings. Tuples are immutable, and we need to perform math on
# the data, so here we convert tuple to lists of floats:
data0 = []
data1 = []
for i in range(len(data[0])):
data0.append(float(data[0][i]))
data1.append(float(data[1][i]))
return data0,data1
first_col16,second_col16 = file_reader(os.path.join(outdir,'out-lowresolution.txt'))
first_col24,second_col24 = file_reader(os.path.join(outdir,'out-medresolution.txt'))
second_col16_rescaled4o = []
second_col16_rescaled5o = []
for i in range(len(second_col16)):
# data16 = data24*(16/24)**4
# -> log10(data24) = log10(data24) + 4*log10(16/24)
second_col16_rescaled4o.append(second_col16[i] + 4*mp.log10(16./24.))
second_col16_rescaled5o.append(second_col16[i] + 5*mp.log10(16./24.))
# https://matplotlib.org/gallery/text_labels_and_annotations/legend.html#sphx-glr-gallery-text-labels-and-annotations-legend-py
fig, ax = plt.subplots()
plt.title("Demonstrating 4th-order Convergence: "+par.parval_from_str("reference_metric::CoordSystem")+" Coordinates")
plt.xlabel("time")
plt.ylabel("log10(Max relative error)")
ax.plot(first_col24, second_col24, 'k-', label='logErel(N0=24)')
ax.plot(first_col16, second_col16_rescaled4o, 'k--', label='logErel(N0=16) + log((16/24)^4)')
ax.set_ylim([-8.05,-1.7]) # Manually set the y-axis range case, since the log10
# relative error at t=0 could be -inf or about -16,
# resulting in very different-looking plots
# despite the data being the same to roundoff.
if par.parval_from_str("reference_metric::CoordSystem") == "Cartesian":
ax.set_ylim([-2.68,-1.62])
if par.parval_from_str("reference_metric::CoordSystem") == "Cylindrical":
ax.plot(first_col16, second_col16_rescaled5o, 'k.', label='(Assuming 5th-order convergence)')
legend = ax.legend(loc='lower right', shadow=True, fontsize='large')
legend.get_frame().set_facecolor('C1')
plt.show()
# -
# <a id='latex_pdf_output'></a>
#
# # Step 5: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
# $$\label{latex_pdf_output}$$
#
# The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
# [Tutorial-Start_to_Finish-ScalarWaveCurvilinear.pdf](Tutorial-Start_to_Finish-ScalarWaveCurvilinear.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish-ScalarWaveCurvilinear")
| Tutorial-Start_to_Finish-ScalarWaveCurvilinear.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <center>
# <img src="../../img/ods_stickers.jpg">
# ## Open Machine Learning Course. Session № 2
# # <center>Classification of group and single targets
# ### <center> Author: <NAME> (@airat)
# ## <center> Individual data analysis project </center>
# -*- coding: utf-8 -*-
import copy
import itertools

import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from pylab import *

import scipy.stats as sts
from scipy import signal
from scipy.signal import butter, lfilter, freqz, hilbert, chirp
from numpy.fft import irfft, rfft, rfftfreq

# %matplotlib inline

from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import ExtraTreesClassifier
# cross_validation / grid_search / learning_curve were merged into
# sklearn.model_selection in scikit-learn 0.18+
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import learning_curve
from sklearn.model_selection import validation_curve
from sklearn.metrics import accuracy_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import confusion_matrix, f1_score
#
# In this project we consider a two-class classification problem:
# * a single person;
# * a group of people.
#
# The input is raw experimental data acquired with an ADC [(ADC/DAC Zet 230)](https://zetlab.com/shop/izmeritelnoe-oborudovanie/moduli-atsp-tsap/atsp-tsap-zet-230).
#
# By raw data we mean time-synchronized recordings of a seismic signal sampled at Fs = 500 Hz.
#
# ### A brief note on the nature of the data
#
# #### Seismic waves
# Seismic waves are waves that carry the energy of elastic (mechanical) oscillations through rock. A seismic wave can be produced by an earthquake, an explosion, vibration, or <b>an impact (in our case, a pass of the object being classified)</b>.
# <p>Seismic waves are commonly classified as follows:</p>
# * Body waves travel through the Earth's interior; their paths are refracted by the varying density and stiffness of subsurface rock.
# * P-waves (primary waves) are longitudinal, or compressional, waves. They are usually about twice as fast as S-waves and can pass through any material.
# * P- and S-waves in the mantle and core.
# * Surface waves are somewhat similar to water waves, but travel along the Earth's surface. They are typically much slower than body waves. Because of their low frequency, long duration, and large amplitude, they are the most destructive type of seismic wave.
# Let us load and inspect the raw data (signals) and their metadata:
# +
file_obj1 = open('file_signal/sig0002_20_1.txt', 'r')
data_s = file_obj1.read().split("\n")
Fs = 500.0
# -
meta_data = pd.read_csv('metaTable_erase.csv', sep = ';')
meta_data.head()
# where:
# * nameFile - name of the file, which encodes the date and time of the recording;
# * Count - the number of people for which the signal was recorded;
# * Steps - the number of passes within a single recording.
# +
seism = {}
for line in data_s:
    name, raw_values = line.split('\t')[0], line.split('\t')[1]
    # Trailing ';' produces empty fragments; drop them before float conversion
    sig_s = [float(v) for v in raw_values.split(';') if len(v) > 0]
    seism[name.upper()] = sig_s
name_signal = list(seism.keys())
print('Number of recordings - {}'.format(len(name_signal)))
# -
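# The parsing performed above can be illustrated on a synthetic line (the
# `name<TAB>v1;v2;...` layout matches the raw files; the sample values here are invented):

```python
# One synthetic line in the same layout as the raw signal files: the recording
# name, a tab, then semicolon-separated samples (with a trailing ';').
line = "S140314_161345\t0.1;-0.2;0.35;"

name, raw_values = line.split("\t")
# The trailing ';' yields an empty fragment, so filter before converting to float
values = [float(v) for v in raw_values.split(";") if len(v) > 0]

print(name, values)
```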
# Let us look at one recording, e.g. S140314_161345:
# +
def plotSignal(nameEx):
    seismic_signal = seism[nameEx]
    time_x = np.linspace(0,len(seismic_signal)/Fs, len(seismic_signal))
    plt.figure(figsize = (20, 6))
    plt.title('Seismic signal recording')
    plt.plot(time_x,seismic_signal)
    plt.legend(['Seismic signal'])
    plt.xlabel('Time, s')
    plt.ylabel('Amplitude')
    plt.grid()
    plt.show()
plotSignal('S140314_161345')
# -
# This recording contains 6 passes. To solve the classification problem we <b>need to</b>:
#
# 1. extract the individual passes;
# 2. compute features for each pass;
# 3. attach a label vector to the resulting feature space.
#
# ##### 1. Extracting the passes
# We define the duration of a pass to be 8 seconds (<b>`step_time`</b>), i.e. 4000 samples. We visualize all recordings and check whether an 8 s pass duration can be maintained.
#
# Inspection showed that correct automated pass extraction (<b>`selection_steps`</b>) requires additional manual annotation of the raw data, namely the exact start times of the passes (<b>`dop_group_steps`</b>):
step_time = 8
len_step = step_time * Fs
print('Pass duration: {} samples'.format(len_step))
count_steps = {}
for name in list(name_signal):  # iterate over a copy: items may be removed below
    if(meta_data[meta_data['nameFile'] == name].shape[0] > 0 and\
       meta_data[meta_data['nameFile'] == name].Steps.values[0] > 0):
        count_steps[name] = meta_data[meta_data['nameFile'] == name].Steps.values[0]
    else:
        name_signal.remove(name)
# +
dop_group_steps = {}
dop_group_steps['S140507_180359'] = [8, 28, 48]
dop_group_steps['S140507_175631'] = [4, 16, 30]
dop_group_steps['S140507_175740'] = [3, 13, 23]
dop_group_steps['S140507_175116'] = [3, 15, 27]
dop_group_steps['S140507_180656'] = [6, 25, 44]
dop_group_steps['S140507_175227'] = [5, 19, 33]
dop_group_steps['S140507_181133'] = [6, 18, 28]
dop_group_steps['S140507_175006'] = [4, 17, 43]
dop_group_steps['S140507_175906'] = [6, 23, 41]
dop_group_steps['S140507_175445'] = [4, 16, 30]
dop_group_steps['S140507_180527'] = [6, 26, 43]
dop_group_steps['S140507_175536'] = [3, 13, 24]
dop_group_steps['S140507_180929'] = [4, 15, 26]
dop_group_steps['S140507_180021'] = [9, 26, 45]
dop_group_steps['S140507_180224'] = [8, 24, 44]
dop_group_steps['S140507_181031'] = [4, 17, 27]
dop_group_steps['S140507_170801'] = [8, 22, 48, 65]
dop_group_steps['S140507_172302'] = [9, 28, 50, 66]
dop_group_steps['S140507_171954'] = [3, 12, 25, 37]
dop_group_steps['S140507_170942'] = [8, 22, 42, 57]
dop_group_steps['S140507_174541'] = [8, 18, 35, 45]
dop_group_steps['S140507_181239'] = [4, 17, 30]
dop_group_steps['S140507_174732'] = []
dop_group_steps['S140507_171831'] = [3, 12, 25, 36]
dop_group_steps['S140507_171440'] = []
dop_group_steps['S140507_170319'] = [10, 25, 48, 68]
dop_group_steps['S140715_163612'] = []
dop_group_steps['S140507_178441'] = []
dop_group_steps['S140507_165507'] = [8, 28, 52, 72]
dop_group_steps['S140507_171708'] = [3, 14, 26, 37]
dop_group_steps['S140507_173900'] = [6, 21, 54, 75]
dop_group_steps['S140507_175999'] = []
dop_group_steps['S140507_173020'] = [2, 13, 24, 35]
dop_group_steps['S140507_171330'] = [2, 14, 27, 39]
dop_group_steps['S140507_171605'] = [1, 12, 25, 35]
dop_group_steps['S140507_170631'] = [6, 25.5, 48.5, 65.5]
dop_group_steps['S140507_171100'] = [7, 24, 46, 54.5]
dop_group_steps['S140507_172142'] = [8, 20, 44, 62]
dop_group_steps['S140507_165330'] = []
dop_group_steps['S140507_165315'] = [9, 30, 54, 72]
dop_group_steps['S140507_174319'] = [4, 15 ,33, 45]
dop_group_steps['S140507_165642'] = [10, 29, 52, 70.5]
dop_group_steps['S140507_174435'] = [5, 15, 33, 44]
dop_group_steps['S140507_172534'] = [4, 15, 26, 38]
dop_group_steps['S140507_172530'] = []
dop_group_steps['S140507_172832'] = [2, 13, 23, 35]
dop_group_steps['S140507_172731'] = [2, 12, 24, 35]
dop_group_steps['S140507_174039'] = [7, 24, 43, 57]
dop_group_steps['S140507_170459'] = [10, 26, 53, 68]
dop_group_steps['S140507_172926'] = [2, 12, 25, 35]
dop_group_steps['S140507_170006'] = [8, 31, 56, 74]
dop_group_steps['S140507_174213'] = [5, 15, 35, 46]
correct_group_count_step = dop_group_steps.keys()
# -
def selection_steps(j,
                    signal,
                    len_steps,
                    count_steps,
                    len_step):
    ind_low_steps = []
    for i in range(count_steps):
        # Locate the maximum inside the current window; searching the whole
        # signal with list.index() could return an earlier occurrence
        window = signal[i*len_steps:(i+1)*len_steps]
        max_index = i*len_steps + window.index(np.max(window))
        low_index = int(max_index-len_step/2)
        high_index = int(max_index+len_step/2)
        if(low_index < 0):
            high_index = high_index - low_index
            low_index = 0
        sig_s = signal[low_index:high_index]
        # Re-center the window if its maximum is not at the midpoint
        if(sig_s.index(np.max(sig_s)) != len_step/2):
            delta = int(len_step/2 - sig_s.index(np.max(sig_s)))
            low_index = low_index - delta
            high_index = high_index - delta
            if(low_index < 0):
                high_index = high_index - low_index
                low_index = 0
        ind_low_steps.append(int(low_index)/500)  # sample index -> seconds (Fs = 500 Hz)
    return ind_low_steps
# +
for name in name_signal:
if(name not in correct_group_count_step):
step = selection_steps(name, seism[name],
int(len(seism[name])/count_steps[name]),
count_steps[name], len_step)
dop_group_steps[name] = step
print ('Number of recordings with pass start times determined - {}.'.format(len(dop_group_steps)))
# -
# Let us collect the data into a single class and build a dictionary of `my_signal` instances describing the seismic recordings.
#
# Preprocess the collected data:
# - center the signal;
# - apply filters (a low-pass and a band-stop filter);
# - normalize the signal.
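# The preprocessing chain listed above can be sketched in isolation with SciPy.
# The cutoffs below (35 Hz low-pass, 45-55 Hz band-stop) are the ones applied to the
# recordings later in this notebook; the input signal here is synthetic:

```python
import numpy as np
from scipy import signal

fs = 500.0                       # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic input: DC offset + 10 Hz component + 50 Hz mains interference
x = 1.0 + np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)

x = x - np.mean(x)               # 1) center the signal
nyq = 0.5 * fs
b, a = signal.butter(5, 35.0 / nyq, btype='low')                     # 2a) 35 Hz low-pass
x = signal.lfilter(b, a, x)
b, a = signal.butter(5, [45.0 / nyq, 55.0 / nyq], btype='bandstop')  # 2b) suppress 50 Hz
x = signal.lfilter(b, a, x)
x = x / np.max(np.abs(x))        # 3) normalize to [-1, 1]
```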
class my_signal(object):
"""docstring"""
def __init__(self, name, ind, ind_step, signal, targetCountN, fs, typeS):
"""Constructor"""
self.name = name
self.ind = ind
self.ind_step = ind_step
self.signal = signal
self.flagN = -1
self.step_time = -100500
self.pause_time = -100500
self.step_count = -100500
self.period = -100500
self.energy = -100500
self.countPolin = -100500
self.centr = 1
        if(targetCountN != -1):
            if(targetCountN == 1):
                self.targetCountN = u'Single'
                self.flagN = 0
            else:
                self.targetCountN = u'Group'
                self.flagN = 1
        else:
            self.targetCountN = ''
            self.flagN = -1
self.fs = fs
self.typeS = typeS
    def info(self):
        print('--------------------------------')
        print('Signal name: {}'.format(self.name))
        print('Index of the signal in the sample: {}'.format(self.ind))
        print('Pass number within the recording: {}'.format(self.ind_step))
        print('Object corresponding to the signal: {}'.format(self.targetCountN))
        print('Sampling rate: {}'.format(self.fs))
    # Plot the signal in the time domain
def plot_signal(self,
size = (15, 6),
delTime = 2,
nameFile = '',
dpi2 = 250,
color = 'r'):
time_x = np.linspace(0,len(self.signal)/self.fs, len(self.signal))
        plt.title(u'Name: ' + self.name + '\n ' + str(self.targetCountN) + ' ' + '\n')
        plt.plot(time_x,self.signal, color)
        plt.legend([self.typeS])
        plt.xlabel(u'Time, s')
        plt.ylabel(u'Amplitude')
plt.grid()
plt.savefig(nameFile, dpi = 250)
    # Plot the signal spectrum
    def plot_spectrum(self, size = (15, 6), delFreq = 50):
        fig = plt.figure(figsize = size)
        plt.plot(self.frq,abs(self.spectr),'r')
        plt.title(u'Signal spectrum')
        plt.xlabel(u'Frequency (Hz)')
        plt.ylabel(u'|Y(freq)|')
        ax = fig.gca()
        ax.set_xticks(np.arange(0, int(self.fs/2), delFreq))
        plt.grid()
    # Compute the signal spectrum
    def creat_spectrum(self):
        n = len(self.signal)  # signal length
        k = arange(n)
        T = n/self.fs
        frq = k/T
        frq = frq[range(int(n/2))]  # frequency range
        Y = np.fft.fft(self.signal)/n  # fast Fourier transform, normalized
        Y = np.abs(Y[range(int(n/2))])
        self.spectr = Y
        self.frq = frq
    # Filter the signal (low-pass, high-pass, band-pass or band-stop)
def signal_filter(self, cutoff, order=5, btypeFilter = 'low'):
self2 = copy.copy(self)
nyq = 0.5 * self.fs
if((btypeFilter == 'low') or (btypeFilter == 'highpass')):
normal_cutoff = cutoff / nyq
b, a = signal.butter(order, normal_cutoff, btype = btypeFilter, analog=False)
elif((btypeFilter == 'bandpass') or (btypeFilter == 'bandstop')):
normal_cutoff = [cutoff[0]/nyq, cutoff[1]/nyq]
b, a = signal.butter(order, normal_cutoff, btype=btypeFilter, analog=False)
self2.signal = signal.lfilter(b, a, self.signal)
return self2
def operation_erase_mean(self):
me = np.mean(self.signal)
self.signal = self.signal - me
    # Normalize the signal
def operation_norm(self):
self.signal = self.signal / np.max(np.abs(self.signal))
def operation_high_low(self):
p75 = np.percentile(self.signal, 75)
p25 = np.percentile(self.signal, 25)
qr = p75 - p25
self.high = p75 + 1.5 * qr
self.low = p25 - 1.5 * qr
def operation_set_high_low(self, high, low):
self.high = high
self.low = low
# +
# %%time
big_seism = {}
for name in dop_group_steps:
if(len(dop_group_steps[name]) > 0):
temp_meta = meta_data[meta_data['nameFile'] == name]
        sig_s_big = my_signal(name=name,
                              ind = temp_meta.index.values[0],
                              ind_step = 0,
                              signal = seism[name],
                              targetCountN = temp_meta.Count.values[0],
                              fs = Fs,
                              typeS = 'Seismic signal')
        sig_s_big = sig_s_big.signal_filter(btypeFilter='low', cutoff= 35)
        sig_s_big = sig_s_big.signal_filter(btypeFilter='bandstop', cutoff= [45, 55])
        # sig_s_big.operation_norm()
        sig_s_big.operation_high_low()
        big_seism[name]=sig_s_big
print ('Number of "long" signals - {} recordings.\n'.format(len(big_seism)))
# -
# As an example, visualize one instance of each class:
plt.figure(figsize = (15, 6))
subplot(1,2,1)
big_seism['S140314_161345'].plot_signal()
subplot(1,2,2)
big_seism['S140507_165315'].plot_signal()
# Create a new dictionary of `my_signal` instances for the "short" signals - the individual passes.
# +
# %%time
small_seism = []
for (j,name) in enumerate(big_seism.keys()):
sig = big_seism[name]
temp_meta = meta_data[meta_data['nameFile'] == sig.name]
for (i, ind) in enumerate(dop_group_steps[sig.name]):
if(len(dop_group_steps[sig.name]) > 0):
            sig_s_small = my_signal(name=sig.name,
                                    ind = sig.ind,
                                    ind_step = i,
                                    signal = sig.signal[int(ind*sig.fs):int((ind+8)*sig.fs)],
                                    targetCountN = temp_meta.Count.values[0],
                                    fs = sig.fs,
                                    typeS = 'Seismic signal')
sig_s_small.operation_erase_mean()
sig_s_small.operation_set_high_low(high=sig.high, low=sig.low)
#sig_s_small.operation_high_low()
#sig_s_small.feature_time(low_time = 50)
small_seism.append(sig_s_small)
# -
print ('Number of "short" signals - {} passes.\n'.format(len(small_seism)))
plt.figure(figsize = (15, 6))
small_seism[800].info()
small_seism[800].plot_signal()
# ##### 2. Compute features for each pass:
#
# 1. mean duration of the swing phase - the period during which the foot is carried forward;
# 2. mean duration of the support phase - the foot resting on the ground;
# 3. mean duration of the full gait cycle (double-step period) - for each leg, the support phase plus the swing phase;
# 4. signal energy;
# 5. number of steps.
#
# Computing features 1, 2, 3 and 5 requires extracting the swing or support phases from the pass.
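# Features 1-3 and 5 rely on separating the support phase from the swing phase by
# thresholding the signal with Tukey's fences (1.5 × IQR), the same rule implemented
# in `operation_high_low`. A minimal sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "pass": quiet background noise plus a few large footfall-like spikes
sig = rng.normal(0, 0.1, 1000)
sig[[100, 400, 700]] = [2.0, -2.5, 3.0]

p75, p25 = np.percentile(sig, 75), np.percentile(sig, 25)
iqr = p75 - p25
high, low = p75 + 1.5 * iqr, p25 - 1.5 * iqr

# Samples outside the fences are attributed to the support phase (footfalls)
active = (sig >= high) | (sig <= low)
```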
# +
indEx = 800
seismic_signal = small_seism[indEx].signal
N = len(seismic_signal)
step_pause = [0] * len(seismic_signal)
st = [0] * N
p75 = np.percentile(seismic_signal, 75)
p25 = np.percentile(seismic_signal, 25)
qr = p75 - p25
high = p75 + 1.5 * qr
low = p25 - 1.5 * qr
for j in range(len(seismic_signal)-1):
i = j + 1
if(seismic_signal[i] >= high or seismic_signal[i] <= low):
st[i] = 1
step_pause[i] = seismic_signal[i]
# -
fig = plt.figure(figsize = (15, 6))
stepsS = pd.DataFrame()
for i in range(len(st)-1):
    if(st[i] == 0 and st[i+1] == 1 and len(stepsS) > 0):
        stepsS.loc[len(stepsS)-1,'finish'] = i  # .loc replaces the removed DataFrame.set_value
    elif(st[i] == 1 and st[i+1] == 0):
        stepsS.loc[len(stepsS),'start'] = i
stepsS = stepsS[:-1]
stepsS['time_pause'] = stepsS['finish'] - stepsS['start']
stepsS['time_pause'].plot()
stepsS = stepsS[(stepsS['time_pause']>50)]
stepsS['time_pause'].plot()
plt.legend(['Swing phases before filtering', 'Swing phases after filtering'])
plt.xlabel(u'Swing phase number')
plt.ylabel(u'Time × 500 Hz')
stepsS['time_pause'].median()
plt.grid()
walk = []
fig = plt.figure(figsize = (15, 6))
#plt.plot(seismic_signal_by_filter_30)
time_x = np.linspace(0,len(seismic_signal)/Fs, len(seismic_signal))
plt.plot(time_x, seismic_signal)
for i in range(stepsS.shape[0]-1):
    plt.axvline(stepsS.iloc[i]['finish']/Fs, color = 'red')
    plt.axvline(stepsS.iloc[i+1]['start']/Fs, color = 'green')
    stepsS.loc[stepsS.index[i],'time_walk'] = stepsS.iloc[i+1]['start'] - stepsS.iloc[i]['finish']
plt.legend(['Signal under consideration', 'Support phase start', 'Support phase end'])
plt.xlabel(u'Time, s')
plt.ylabel(u'Amplitude')
plt.grid()
# This gives us a procedure for extracting the swing and support phases.
#
# Now add a feature-extraction method - <b>`feature_time`</b> - to the class defined earlier.
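# The class below also computes the signal envelope via the Hilbert transform
# (`scipy.signal.hilbert`) as part of its energy-related features. A minimal
# illustration on a synthetic amplitude-modulated tone (not one of the recordings):

```python
import numpy as np
from scipy.signal import hilbert

fs = 500.0
t = np.arange(0, 1.0, 1.0 / fs)
# 40 Hz carrier modulated by a slow 2 Hz envelope
true_envelope = 1.0 + 0.5 * np.cos(2 * np.pi * 2 * t)
x = true_envelope * np.sin(2 * np.pi * 40 * t)

analytic = hilbert(x)            # analytic signal x + i*H(x)
envelope = np.abs(analytic)      # instantaneous amplitude
energy = np.sum(envelope ** 2)   # simple energy estimate over the window
```

Away from the window edges the recovered envelope tracks the true modulation closely, which is why the envelope is a convenient basis for an energy feature.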
class my_signal(object):
"""docstring"""
def __init__(self, name, ind, ind_step, signal, targetCountN, fs, typeS):
"""Constructor"""
self.name = name
self.ind = ind
self.ind_step = ind_step
self.signal = signal
self.flagN = -1
self.step_time = -100500
self.pause_time = -100500
self.step_count = -100500
self.period = -100500
self.energy = -100500
self.countPolin = -100500
self.centr = 1
        if(targetCountN != -1):
            if(targetCountN == 1):
                self.targetCountN = u'Single'
                self.flagN = 0
            else:
                self.targetCountN = u'Group'
                self.flagN = 1
        else:
            self.targetCountN = ''
            self.flagN = -1
self.fs = fs
self.typeS = typeS
    def info(self):
        print('--------------------------------')
        print('Signal name: {}'.format(self.name))
        print('Index of the signal in the sample: {}'.format(self.ind))
        print('Pass number within the recording: {}'.format(self.ind_step))
        print('Object corresponding to the signal: {}'.format(self.targetCountN))
        print('Sampling rate: {}'.format(self.fs))
def plot_signal(self,
size = (15, 6),
delTime = 2,
nameFile = '',
dpi2 = 250,
color = 'r'):
time_x = np.linspace(0,len(self.signal)/self.fs, len(self.signal))
        plt.title(u'Name: ' + self.name + '\n ' + str(self.targetCountN) + ' ' + '\n')
        plt.plot(time_x,self.signal, color)
        plt.legend([self.typeS])
        plt.xlabel(u'Time, s')
        plt.ylabel(u'Amplitude')
plt.savefig(nameFile, dpi = 250)
    def plot_spectrum(self, size = (15, 6), delFreq = 50):
        fig = plt.figure(figsize = size)
        plt.plot(self.frq,abs(self.spectr),'r')
        plt.title(u'Signal spectrum')
        plt.xlabel(u'Frequency (Hz)')
        plt.ylabel(u'|Y(freq)|')
        ax = fig.gca()
        ax.set_xticks(np.arange(0, int(self.fs/2), delFreq))
        plt.grid()
    def creat_spectrum(self):
        n = len(self.signal)  # signal length
        k = arange(n)
        T = n/self.fs
        frq = k/T
        frq = frq[range(int(n/2))]  # frequency range
        Y = np.fft.fft(self.signal)/n  # fast Fourier transform, normalized
        Y = np.abs(Y[range(int(n/2))])
        self.spectr = Y
        self.frq = frq
def signal_filter(self, cutoff, order=5, btypeFilter = 'low'):
self2 = copy.copy(self)
nyq = 0.5 * self.fs
if((btypeFilter == 'low') or (btypeFilter == 'highpass')):
normal_cutoff = cutoff / nyq
b, a = signal.butter(order, normal_cutoff, btype = btypeFilter, analog=False)
elif((btypeFilter == 'bandpass') or (btypeFilter == 'bandstop')):
normal_cutoff = [cutoff[0]/nyq, cutoff[1]/nyq]
b, a = signal.butter(order, normal_cutoff, btype=btypeFilter, analog=False)
self2.signal = signal.lfilter(b, a, self.signal)
return self2
def operation_erase_mean(self):
me = np.mean(self.signal)
self.signal = self.signal - me
def operation_norm(self):
self.signal = self.signal / np.max(np.abs(self.signal))
def print_features(self):
        print ('Mean step duration: {}'.format(self.step_time))
        print ('Mean pause duration: {}'.format(self.pause_time))
        print ('Step count: {}'.format(self.step_count))
        print ('Signal energy: {}'.format(self.energy))
def operation_high_low(self):
p75 = np.percentile(self.signal, 75)
p25 = np.percentile(self.signal, 25)
qr = p75 - p25
self.high = p75 + 1.5 * qr
self.low = p25 - 1.5 * qr
def operation_set_high_low(self, high, low):
self.high = high
self.low = low
def feature_time(self, low_time):
if(self.flagN != -1):
N = len(self.signal)
signal2 = self.signal
st = [0] * N
            for i in range(1, N):
                if(signal2[i] >= self.high or signal2[i] <= self.low):
                    st[i] = 1
                else:
                    signal2[i] = 0
start = []
finish = []
for i in range(len(st)-1):
if(st[i] == 0 and st[i+1] == 1 and len(start) > 0):
finish.append(i)
elif(st[i] == 1 and st[i+1] == 0):
start.append(i)
pause_time_2 = []
finish_2 = []
start_2 = []
step_time_2 = []
pause_time = list(map(lambda x: x[0] - x[1], zip(finish, start)))
for i in range(len(pause_time)):
if(pause_time[i] > low_time):
pause_time_2.append(pause_time[i])
finish_2.append(finish[i])
start_2.append(start[i])
for i in range(len(pause_time_2)-1):
step_time_2.append(start_2[i+1] - finish_2[i])
pause_time = []
finish = []
start = []
step_time = []
period = []
for i in range(len(pause_time_2)-1):
if(pause_time_2[i] < 5000 and step_time_2[i] < 400 and step_time_2[i] > 2 ):
pause_time.append(pause_time_2[i])
finish.append(finish_2[i])
start.append(start_2[i])
step_time.append(step_time_2[i])
period.append(step_time_2[i] + pause_time_2[i])
self.step_time = np.mean(step_time)
self.pause_time = np.mean(pause_time)
self.step_count = len(step_time)
self.period = np.mean(period)
analytic_signal = hilbert(signal2)
amplitude_envelope = np.abs(analytic_signal)
signal2 = np.abs(signal2) / np.max(signal2)
self.energy = np.sum(signal2)
# Apply the `feature_time` method to the whole signal collection
# +
# %%time
small_seism_feature = []
for sig in small_seism:  # loop variable renamed to avoid shadowing the scipy `signal` module
    sig_s_small_feature = my_signal(name=sig.name,
                                    ind = sig.ind,
                                    ind_step = sig.ind_step,
                                    signal = sig.signal,
                                    targetCountN = sig.targetCountN,
                                    fs = sig.fs,
                                    typeS = sig.typeS)
    sig_s_small_feature.flagN = sig.flagN
    sig_s_small_feature.operation_set_high_low(high=sig.high, low=sig.low)
sig_s_small_feature.feature_time(low_time = 50)
small_seism_feature.append(sig_s_small_feature)
print ('Number of "small" signals: {} passes with extracted features.\n'.format(len(small_seism_feature)))
# -
# ##### 3. Add a label vector to the constructed feature space.
#
# Extract the required features from the signal collection and build a DataFrame
X = pd.DataFrame()
X['step_time'] = list(map(lambda x: x.step_time, small_seism_feature))
X['pause_time'] = list(map(lambda x: x.pause_time, small_seism_feature))
X['step_count'] = list(map(lambda x: x.step_count, small_seism_feature))
X['energy'] = list(map(lambda x: x.energy, small_seism_feature))
X['target'] = list(map(lambda x: x.flagN, small_seism_feature))
X['period'] = list(map(lambda x: x.period, small_seism_feature))
X = X[X['target'] > -1]
X = X.dropna()
print ('Number of group targets in the sample: {}'.format(X[X['target'] == 1].shape[0]))
print ('Number of single targets in the sample: {}'.format(X[X['target'] == 0].shape[0]))
X.head()
# Visualize the features
plt.figure(figsize=(20, 5))
plt.subplot(1, 5, 1)
df = pd.melt(X, value_vars=['pause_time'], id_vars='target')
sns.violinplot(x='variable', y ='value', hue='target', data=df, scale ='count', split=True,palette="Set1")
plt.grid()
plt.subplot(1, 5, 2)
df = pd.melt(X, value_vars=['step_time'], id_vars='target')
sns.violinplot(x='variable', y ='value', hue='target', data=df, scale ='count', split=True,palette="Set1")
plt.grid()
plt.subplot(1, 5, 3)
df = pd.melt(X, value_vars=['step_count'], id_vars='target')
sns.violinplot(x='variable', y ='value', hue='target', data=df, scale ='count', split=True,palette="Set1")
plt.grid()
plt.subplot(1, 5, 4)
df = pd.melt(X, value_vars=['energy'], id_vars='target')
sns.violinplot(x='variable', y ='value', hue='target', data=df, scale ='count', split=True,palette="Set1")
plt.grid()
plt.subplot(1, 5, 5)
df = pd.melt(X, value_vars=['period'], id_vars='target')
sns.violinplot(x='variable', y ='value', hue='target', data=df, scale ='count', split=True,palette="Set1")
plt.grid()
# Pairwise correlation between the features
plt.figure(figsize=(6, 5))
sns.heatmap(X.corr(), cmap='PuOr');
# Note that:
# * step_time and pause_time correlate strongly and positively with period;
# * pause_time correlates negatively with energy.
#
# These correlation coefficients follow from how the feature space was built in the time domain.
#
# Moreover, in his <b>studies of the physiology of movement and activity</b>, <NAME> estimated that during walking at an average pace the stance phase lasts about 60 % of the double-step cycle and the swing phase about 40 %; let us check our data against this.
print('Mean double-step duration - {}'.format(np.round(X.period.mean(), 3)))
print('Mean swing-phase duration - {} ({} of the double-step cycle)'.format(np.round(X.step_time.mean(),3),
                                                                            np.round(X.step_time.mean() / X.period.mean(), 3)))
print('Mean stance-phase duration - {} ({} of the double-step cycle)'.format(np.round(X.pause_time.mean(),3),
                                                                             np.round(X.pause_time.mean() / X.period.mean(), 3)))
# These estimates broadly agree with the theoretical figures
y = X['target']
X = X.drop('target', axis=1)
# +
forest = RandomForestClassifier(n_estimators=40)
forest.fit(X,y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
ind = np.argsort(importances)[::-1]
# Print the feature ranking
print("Сортировка признаков по информативности:")
for f in range(X.shape[1]):
print("%d. Признак - %s (%f)" % (f + 1, X.columns[ind[f]], importances[ind[f]]))
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importance ranking")
plt.bar(range(X.shape[1]), importances[ind],
        yerr=std[ind], align="center")
plt.xticks(range(X.shape[1]), ind)
plt.grid()
plt.xlim([-1, X.shape[1]])
plt.show()
# -
# ### Cross-validation
# Split the sample into training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.3,
random_state=42,
stratify = y)
forest = RandomForestClassifier(n_estimators=500)
forest.fit(X_train, y_train)
test_pred = forest.predict(X_test)
accuracy_score(y_test, test_pred)
# Share of class 0 in the test set (a naive baseline for accuracy)
y_test.value_counts()[0] / y_test.shape[0]
f1_score(y_test, test_pred)
# Tune the model hyperparameters
rf_tree_params = {'n_estimators': (50, 100, 150),
                  'min_samples_leaf' : list(range(1,5)),
                  'max_depth': list(range(6,14))}
n_folds = 5
grid = GridSearchCV(RandomForestClassifier(), rf_tree_params, cv=n_folds,
n_jobs=-1)
grid.fit(X_train, y_train)
clf_best_score = grid.best_score_
clf_best_params = grid.best_params_
clf_best = grid.best_estimator_
mean_validation_scores = []
print("Лучший результат", clf_best_score)
print("лучшие параметры", clf_best_params)
# ### Checking model convergence
# +
def plot_with_std(x, data, **kwargs):
mu, std = data.mean(1), data.std(1)
lines = plt.plot(x, mu, '-', **kwargs)
plt.fill_between(x, mu - std, mu + std, edgecolor='none',
facecolor=lines[0].get_color(), alpha=0.2)
def plot_learning_curve(clf, X, y, scoring, cv=5):
train_sizes = np.linspace(0.05, 1, 20)
    n_train, val_train, val_test = learning_curve(clf,
                                                  X, y, train_sizes=train_sizes, cv=cv,
                                                  scoring=scoring)
plot_with_std(n_train, val_train, label='training scores', c='green')
plot_with_std(n_train, val_test, label='validation scores', c='red')
plt.xlabel('Training Set Size'); plt.ylabel(scoring)
plt.legend()
plt.grid()
# -
plot_learning_curve(RandomForestClassifier(n_estimators=clf_best_params['n_estimators'],
max_depth=clf_best_params['max_depth'],
min_samples_leaf=clf_best_params['min_samples_leaf']),
X_train, y_train, scoring='f1', cv=10)
plot_learning_curve(RandomForestClassifier(n_estimators=clf_best_params['n_estimators'],
max_depth=clf_best_params['max_depth'],
min_samples_leaf=clf_best_params['min_samples_leaf']),
X_train, y_train, scoring='accuracy', cv=10)
# Consider how the number of trees affects the model
def plot_validation_curve(clf, X, y, cv_param_name,
cv_param_values, scoring):
    val_train, val_test = validation_curve(clf, X, y, param_name=cv_param_name,
                                           param_range=cv_param_values, cv=5,
                                           scoring=scoring)
plot_with_std(cv_param_values, val_train,
label='training scores', c='green')
plot_with_std(cv_param_values, val_test,
label='validation scores', c='red')
plt.xlabel(cv_param_name); plt.ylabel(scoring)
plt.legend()
plt.grid()
estimators = np.arange(25, 350, 25)
plot_validation_curve(RandomForestClassifier(min_samples_leaf=clf_best_params['min_samples_leaf'],
max_depth=clf_best_params['max_depth']), X_train, y_train,
cv_param_name='n_estimators',
cv_param_values= estimators,
scoring='f1')
# Also look at how tree depth affects model quality
depth = np.arange(3, 25)
plot_validation_curve(RandomForestClassifier(n_estimators=clf_best_params['n_estimators'],
min_samples_leaf=clf_best_params['min_samples_leaf']), X_train, y_train,
cv_param_name='max_depth',
cv_param_values= depth,
scoring='f1')
# Note that:
# * the number of trees has almost no effect on model quality;
# * there is no point in using a depth greater than 15.
release_forest = RandomForestClassifier(n_estimators=clf_best_params['n_estimators'],
max_depth=clf_best_params['max_depth'],
min_samples_leaf=clf_best_params['min_samples_leaf'])
release_forest.fit(X_train, y_train)
release_pred = release_forest.predict(X_test)
print('accuracy = {}'.format(accuracy_score(y_test, release_pred)))
print('f1 = {}'.format(f1_score(y_test, release_pred)))
print('roc_auc = {}'.format(roc_auc_score(y_test, release_pred)))
# ### Conclusions
# We have walked through the full cycle of processing seismic signals and engineering features from them.
#
# The sample contained more than 1000 signals, recorded from more than 10 unique subjects (the people whose signals were registered).
#
# The features were built in the time domain; in further work we plan to look at frequency-domain features, such as individual harmonics or spectral envelopes.
#
# Naturally, a more accurate classification would require more signals and more subjects.
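# As a first step toward the frequency-domain features mentioned above, a dominant-harmonic feature could be sketched as below. This is illustrative only: the function name and the synthetic test signal are assumptions, not part of the original pipeline.

```python
import numpy as np

def dominant_frequency(sig, fs):
    """Return the frequency (Hz) of the strongest harmonic in `sig`."""
    n = len(sig)
    # One-sided amplitude spectrum of the mean-removed signal
    spectrum = np.abs(np.fft.rfft(sig - np.mean(sig)))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# Synthetic check: a 2 Hz sine sampled at 100 Hz
fs = 100.0
t = np.arange(0, 10, 1.0 / fs)
sig = np.sin(2 * np.pi * 2.0 * t)
print(dominant_frequency(sig, fs))  # close to 2.0
```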
| jupyter/projects_individual/project_seismic_waves_airat.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.11 64-bit (''kaggle'': conda)'
# name: python3
# ---
# +
##
import os
import json
from xml.etree import ElementTree as et
##
class load:
    @staticmethod
    def json(path):
        # Read a JSON document from disk
        with open(path, 'r') as f:
            paper = json.load(f)
        return paper
    @staticmethod
    def xml(path):
        # Parse an XML document into an ElementTree
        return et.parse(path)
# -
folder = 'hw1_data'
book = [os.path.join(folder, i) for i in os.listdir(folder)]
document = load.json(path=book[0])
document
load.xml(path=book[3])
| HW/1/skip/hw-beta.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <table align="left">
# <td>
# <a href="https://colab.research.google.com/github/Nyandwi/machine_learning_complete/blob/main/2_data_manipulation_with_pandas/2_data_manipulation_with_pandas.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# </td>
# </table>
# *This notebook was created by [<NAME>](https://twitter.com/jeande_d) for the love of the machine learning community. For any feedback, errors, or suggestions, he can be reached on email (johnjw7084 at gmail dot com), [Twitter](https://twitter.com/jeande_d), or [LinkedIn](https://linkedin.com/in/nyandwi).*
# + [markdown] id="ErLffIn4mD93"
# <a name='0'></a>
# # Data Manipulation with Pandas
#
# In this lab, you will learn how to manipulate data with Pandas. Here is an overview:
#
# * [1. Basics of Pandas for data manipulation:](#1)
#
# * [A. Series and DataFrames](#1-1)
#   * [B. Data Indexing, Selection, and Iteration](#1-2)
# * [C. Dealing with Missing data](#1-3)
# * [D. Basic operations and Functions](#1-4)
# * [E. Aggregation Methods](#1-5)
# * [F. Groupby](#1-6)
# * [G. Merging, Joining and Concatenate](#1-7)
# * [H. Beyond Dataframes: Working with CSV, and Excel](#1-8)
#
# * [2. Real World Exploratory Data Analysis (EDA)](#2)
#
# + [markdown] id="6cK48ySE4SB0"
# <a name='1'></a>
# ## 1. Basics of Pandas for data manipulation
#
# <a name='1-1'></a>
# ### A. Series and DataFrames
#
# Both Series and DataFrames are Pandas data structures.
#
# A Series is like a one-dimensional NumPy array with axis labels.
#
# A DataFrame is a multidimensional NumPy array with labels on rows and columns.
#
# Working with NumPy, we saw that it supports numeric data. Pandas, on the other hand, supports a whole range of data types, from numeric to strings and more.
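# For instance, a single DataFrame can hold strings, integers, and floats side by side, with each column keeping its own dtype (a small illustrative sketch; the column names are made up):

```python
import pandas as pd

mixed = pd.DataFrame({'name': ['Alice', 'Bob'],   # object (string) column
                      'age': [30, 25],            # int64 column
                      'score': [88.5, 92.0]})     # float64 column
print(mixed.dtypes)
```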
# + [markdown] id="6FckciW165U4"
# Since we are using python notebook, we do not need to install Pandas. We only just have to import it.
#
# ```
# import pandas as pd
# ```
#
#
# + id="hl7wXbFo4ZLP"
# importing numpy and pandas
import numpy as np
import pandas as pd
# + [markdown] id="r1MZe-Th7jf-"
# #### Creating Series
#
# Series can be created from a Python list, dictionary, and NumPy array.
# + colab={"base_uri": "https://localhost:8080/"} id="Ous_6XUS7lLa" outputId="83ada779-3f99-4536-8958-abb525d3409b"
# Creating the series from a Python list
num_list = [1,2,3,4,5]
pd.Series(num_list)
# + colab={"base_uri": "https://localhost:8080/"} id="E6aPp0JJ9MI9" outputId="0e2c3111-226d-4c5f-b2c2-fec67493c18a"
week_days = ['Mon','Tues','Wed','Thur','Fri']
pd.Series(week_days, index=["a", "b", "c", "d", "e"])
# + [markdown] id="0gtsxrby9fHM"
# Note the data types `int64` and `object`.
# + colab={"base_uri": "https://localhost:8080/"} id="MHWhzhTt9mev" outputId="1093b1e6-cdc8-41e2-d8d0-dbbe81d041bd"
# Creating the Series from dictionary
countries_code = { 1:"United States",
91:"India",
49:"Germany",
86:"China",
250:"Rwanda"}
pd.Series(countries_code)
# + colab={"base_uri": "https://localhost:8080/"} id="hCBCCRHZ-i_1" outputId="420e17e5-3dcb-48d8-8293-5bda93306c4d"
d = {1:'a', 2:'b', 3:'c', 4:'d'}
pd.Series(d)
# + colab={"base_uri": "https://localhost:8080/"} id="lr0P6iSO_Ngl" outputId="69df04f1-e231-4d16-9917-cf04461a11c3"
# Creating the Series from NumPy array
# We provide the list of indexes;
# if we don't provide them, the default integer indexes are used, starting from 0, 1, 2, ...
arr = np.array ([1, 2, 3, 4, 5])
pd.Series(arr)
# + colab={"base_uri": "https://localhost:8080/"} id="vBV3WXje__5m" outputId="08444786-b5d7-454e-bb4f-3e279d8a898a"
pd.Series(arr, index=['a', 'b', 'c', 'd', 'e'])
# + [markdown] id="YYpqI7l4AvOG"
# #### Creating DataFrames
#
# DataFrames are the most used Pandas data structure. It can be created from a dictionary, 2D array, and Series.
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="cThhEpprAyLr" outputId="9f09b720-0a54-451d-9005-e70c91479031"
# Creating DataFrame from a dictionary
countries = {'Name': ['USA', 'India', 'German', 'Rwanda'],
'Codes':[1, 91, 49, 250] }
pd.DataFrame(countries)
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="fwdwsY9uGdG0" outputId="945f8126-a85b-46f5-9cd8-f649c7301fcf"
# Creating a dataframe from a 2D array
# You pass the list of columns
array_2d = np.array ([[1,2,3], [4,5,6], [7,8,9]])
pd.DataFrame(array_2d, columns = ['column 1', 'column 2', 'column 3'])
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="Y7ezd5g4HrHq" outputId="34eeaa06-92f3-40d9-c0f7-998d43f39931"
# Creating a dataframe from Pandas series
# Pass the columns in a list
countries_code = { "United States": 1,
"India": 91,
"Germany": 49,
"China": 86,
"Rwanda":250}
pd_series = pd.Series(countries_code)
pd.Series(countries_code)
df = pd.DataFrame(pd_series, columns = ['Codes'])
df
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="F1JPrkTizQHM" outputId="bc3d53bc-19d8-402c-fd8d-78ade1623a90"
# Adding a column
# The population numbers here are made up
df ['Population'] = [100, 450, 575, 5885, 533]
df
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="pympKqSv8ZNp" outputId="bf04e8f8-7d0e-474e-d287-95a326128461"
# Removing a column
df.drop('Population', axis =1)
# + colab={"base_uri": "https://localhost:8080/"} id="Hz2u0p008lDR" outputId="927211e6-3536-43ec-9932-54b5298a45d6"
df.columns
# + colab={"base_uri": "https://localhost:8080/"} id="x0mYfsLt8oyC" outputId="033adeef-b0a6-407d-e58b-b293bc8c22d8"
df.keys
# + colab={"base_uri": "https://localhost:8080/"} id="DcGWsgwP8tuo" outputId="d9ab7016-b67e-4b59-f959-60d542ca7901"
df.index
# + [markdown] id="8S1rS2IqI5OQ"
# <a name='1-2'></a>
# ### B. Data Indexing and Selection
#
# Indexing and selection work on both Series and DataFrames.
#
# Because a DataFrame is made of Series, let's focus on how to select data in a DataFrame.
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="xO-xCw1uI9Yt" outputId="008a4242-556f-42af-c946-55ff41f0cbe3"
# Creating DataFrame from a dictionary
countries = {'Name': ['USA', 'India', 'German', 'Rwanda'],
'Codes':[1, 91, 49, 250] }
df = pd.DataFrame(countries, index=['a', 'b', 'c', 'd'])
df
# + colab={"base_uri": "https://localhost:8080/"} id="BhcNuJo1sBff" outputId="a89548bf-e083-496e-d7a2-93bc7378bce3"
df['Name']
# + colab={"base_uri": "https://localhost:8080/"} id="oBhhzCWyy_OJ" outputId="22207d29-0349-4a06-cb29-902bc389eb06"
df.Name
# + colab={"base_uri": "https://localhost:8080/"} id="vRUbK_YLsQ0h" outputId="517f0652-6c83-4293-9b84-236263e6f856"
df ['Codes']
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="BGdQngaxsTbM" outputId="ce70d45c-1b3d-400b-f278-f986920ec16f"
## When you want several columns, pass them as a list
df [['Name', 'Codes']]
# + colab={"base_uri": "https://localhost:8080/", "height": 111} id="hm5NIMGksb71" outputId="9333881c-eb66-4543-908a-a5b676d82d8c"
# This will return the first two rows
df [0:2]
# + [markdown] id="7hsBOkVLwev1"
# You can also use `loc` to select data by the label indexes and `iloc` to select by the default integer index (that is, by the position of the row)
# + colab={"base_uri": "https://localhost:8080/"} id="r0kBMm4Dwmyw" outputId="5cca5a84-43be-43ee-948f-a05f706dbdc1"
df.loc['a']
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="-S2cSr5Ax97c" outputId="efb0c4e8-7fa0-4fb7-9a12-ea2851ecd890"
df.loc['b':'d']
# + colab={"base_uri": "https://localhost:8080/", "height": 111} id="RictqM1hyUYm" outputId="0d962413-d6c5-4cdd-800a-3eb99f98b597"
df [:'b']
# + colab={"base_uri": "https://localhost:8080/"} id="m2Tauaenw6WQ" outputId="4cf58b8a-15e3-47a4-d849-cc4cde79c139"
df.iloc[2]
# + colab={"base_uri": "https://localhost:8080/", "height": 111} id="e8YeUoXZx4KA" outputId="e7b5c924-6755-44b7-8405-4cb8198113bf"
df.iloc[1:3]
# + colab={"base_uri": "https://localhost:8080/", "height": 111} id="wGaDmJRGyYhy" outputId="ba0e9cbb-95d5-4fb3-b799-90ebd3c296f6"
df.iloc[2:]
# + [markdown] id="RCc2oXwszDDB"
# ### Conditional Selection
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="iNxAW-NTzHAS" outputId="d66e4c9a-dfbc-486a-d7ae-d27487ed5722"
df
# + colab={"base_uri": "https://localhost:8080/", "height": 80} id="4F3mr2Fszagx" outputId="878b4c87-ae97-47eb-df25-bb98ee575056"
#Let's select a country with code 49
df [df['Codes'] == 49 ]
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="xLSnNlqc0XMp" outputId="d006ddff-24e3-409e-aa5c-57522194fceb"
df [df['Codes'] < 250 ]
# + colab={"base_uri": "https://localhost:8080/", "height": 80} id="_3g-C-UKzyq6" outputId="a8eecb7b-b35c-4de9-cabb-86e548183318"
df [df['Name'] == 'USA' ]
# + colab={"base_uri": "https://localhost:8080/", "height": 80} id="EqqEQfhoz9IG" outputId="ac0c9c58-53fa-4228-d5f7-2742199a7471"
# You can use & (and) and | (or) to combine more than one condition
# df [(condition 1) & (condition 2)]
df [(df['Codes'] == 91 ) & (df['Name'] == 'India') ]
# + [markdown] id="ipe0NTjf2Db8"
# You can also use `isin()` and `where()` to select data in a series or dataframe.
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="CJIqyvOC2P5A" outputId="8cf04e2e-40a4-4518-d8d7-2b0e8e5149cf"
# isin() returns True where a provided value appears in the dataframe, and False otherwise
sample_codes_names=[1,3,250, 'USA', 'India', 'England']
df.isin(sample_codes_names)
# + [markdown] id="HsqhvWzo3B2N"
# As you can see, it returned `True` wherever a country code or name was found, and `False` otherwise. You can use a dictionary to restrict the search by column: each key must be a column name and the values are passed in a list.
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="fnx_IAnS3Wq8" outputId="e4f51707-8352-4aec-9e12-f2937a4804aa"
sample_codes_names = {'Codes':[1,3,250], 'Name':['USA', 'India', 'England']}
df.isin(sample_codes_names)
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="PGv1D9K54knp" outputId="48868abb-3dcc-4344-db4d-5e9c63f1a1da"
df2 = pd.DataFrame(np.array ([[1,2,3], [4,5,6], [7,8,9]]),
columns = ['column 1', 'column 2', 'column 3'])
df2
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="BGU1CUQ142CW" outputId="9caa3f57-cfd5-423e-eba1-a9fdb13866cd"
df2.isin([0,3,4,5,7])
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="NsFfB57Y5ePv" outputId="ced02843-254f-40f5-af21-1a005677ccb5"
df2 [df2 > 4]
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="HTTed_RS5E9S" outputId="fe5b3755-85de-41de-87a9-3a4f190a2bc9"
df2.where(df2 > 4)
# + [markdown] id="4cGg8isa5waQ"
# Where the condition is false, `where` allows you to replace values. In this case, all values of 4 or less become 0.
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="jcIdwzbI5LBd" outputId="4e043655-b16a-40ac-bb54-ad1365cf3815"
df2.where(df2 > 4, 0)
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="sJdjZySr5Uaz" outputId="476cd0a7-4228-4812-c65e-022f099c6cb4"
# The above does the same as
df2 [df2 > 4 ] = 0
df2
# + [markdown] id="sT3tBawQHIxh"
# #### Iteration
#
# ```
# df.items()       # Iterate over (column name, Series) pairs
# df.iteritems()   # Deprecated alias of df.items()
# df.iterrows()    # Iterate over DataFrame rows as (index, Series) pairs
# df.itertuples()  # Iterate over DataFrame rows as namedtuples
# ```
#
#
# +
# Iterate over (column name, Series) pairs.
for col_name, content in df2.items():
print(col_name)
print(content)
# +
# Iterate over (column name, Series) pairs.
# iteritems() is a deprecated alias of items() (removed in pandas 2.0)
for col_name, content in df2.iteritems():
print(col_name)
print(content)
# +
# Iterate over DataFrame rows as (index, Series) pairs
for row in df2.iterrows():
print(row)
# +
# Iterate over DataFrame rows as namedtuples
for row in df2.itertuples():
print(row)
# + [markdown] id="5L0C1_d5840L"
# <a name='1-3'></a>
# ### C. Dealing with Missing data
#
# Real world datasets are messy, often with missing values. Pandas represents missing values as NaN by default. NaN stands for Not a Number.
#
# Missing values can either be ignored, dropped, or filled.
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="xWN2SE4K87Cn" outputId="8b639662-3a57-4d62-95d0-a1447c9c2bf0"
# Creating a dataframe
df3 = pd.DataFrame(np.array ([[1,2,3], [4,np.nan,6], [7,np.nan,np.nan]]),
                   columns = ['column 1', 'column 2', 'column 3'])
df3
# + [markdown] id="gofEv_rhTxzz"
# #### Checking Missing values
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="EQP7y0YaN5Jb" outputId="ce985690-efa5-4ad2-9258-91a0f3fd4141"
# Recognizing the missing values
df3.isnull()
# + colab={"base_uri": "https://localhost:8080/"} id="BQSUMOXeOe5Z" outputId="9233ea74-868a-4f26-b275-96b3f8f46dbd"
# Calculating number of the missing values in each feature
df3.isnull().sum()
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="je6Rqhq0Opgg" outputId="967f225a-6b2f-494a-8e16-eb0ab558e622"
# Recognizing non-missing values
df3.notna()
# + colab={"base_uri": "https://localhost:8080/"} id="9oJFnRwNQkMb" outputId="06373d27-24f4-4be3-e665-c3a055300b1d"
df3.notna().sum()
# + [markdown] id="OrrsYADLT2J7"
# #### Removing the missing values
# + colab={"base_uri": "https://localhost:8080/", "height": 80} id="BtYAGZ3GQm7-" outputId="c1ef0bc6-09d0-4063-db4b-f6092fc30cc8"
## Dropping missing values
df3.dropna()
# + [markdown] id="M03nac6CRfWQ"
# All rows are deleted because dropna() removes every row that has at least one missing value.
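# `dropna` also accepts `how` and `thresh` parameters, so you can be less aggressive than the default. A small sketch on a tiny frame built here just for illustration:

```python
import numpy as np
import pandas as pd

tmp = pd.DataFrame({'a': [1.0, np.nan, np.nan],
                    'b': [2.0, 5.0, np.nan]})
# how='all' drops only rows where every value is NaN
print(tmp.dropna(how='all'))   # keeps rows 0 and 1
# thresh=2 keeps rows with at least 2 non-NaN values
print(tmp.dropna(thresh=2))    # keeps row 0 only
```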
# + colab={"base_uri": "https://localhost:8080/"} id="6qdjI9S9Rfvk" outputId="b469ed24-0d8e-4ad6-f766-229e4151e06e"
# you can drop NaNs in specific column(s)
df3['column 3'].dropna()
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="EWfHlBf7R0b1" outputId="ddbbf4c1-0eca-4a4c-b38a-654c3d79da66"
# You can drop data by axis
# Axis = 1...drop all columns with Nans
# df3.dropna(axis='columns')
df3.dropna(axis=1)
# + colab={"base_uri": "https://localhost:8080/", "height": 80} id="wTFG5MeBTG9y" outputId="dd8f4d08-5d82-498f-c3a3-fc35f6a8c8ba"
# axis = 0...drop all rows with Nans
# df3.dropna(axis='rows') is same
df3.dropna(axis=0)
# + [markdown] id="daRUQLvNT7xT"
# #### Filling the missing values
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="keazo2zoTKIm" outputId="0d3ebd6f-497e-4ad1-d00f-723e8a0f1386"
# Filling Missing values
df3.fillna(10)
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="WI6t4b68V65Z" outputId="a2d6b6f3-9d6d-4e7b-a8e5-084b7d634ca7"
df3.fillna('fillme')
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="ZBFjHR0JWEzK" outputId="98d074dd-2060-4094-f5bb-f9d2b8b651d3"
# You can forward fill (ffill) or backward fill(bfill)
# Or fill a current value with previous or next value
df3.fillna(method='ffill')
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="azOA4S_PWB-8" outputId="b9a3a1fb-0cb2-4160-85de-5ef2df307c37"
# The trailing NaNs stay: bfill fills from the next value, and nothing follows the last row
df3.fillna(method='bfill')
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="uVkg9e9AWYvK" outputId="14ff8c58-6ac7-4a6e-e5f9-bccd7ebc1592"
# With axis='columns', the NaN in the second row (column 2) is backfilled with 6 from column 3
df3.fillna(method='bfill', axis='columns')
# + [markdown] id="k3MilbA7XSPo"
# <a name='1-4'></a>
# ### D. More Operations and Functions
#
# This section shows some more of the most useful Pandas functions.
# + id="NWubkTA6XUpl"
df4 = pd.DataFrame({'Product Name':['Shirt','Boot','Bag'],
'Order Number':[45,56,64],
'Total Quantity':[10,5,9]},
columns = ['Product Name', 'Order Number', 'Total Quantity'])
# + [markdown] id="uHXUH7KdH-Mp"
# #### Retrieving basic info about the Dataframe
# + colab={"base_uri": "https://localhost:8080/"} id="TziPr2P2IFNG" outputId="8d7526ca-9b60-48d7-d292-9c8158921e53"
# Return a summary about the dataframe
df4.info()
# + colab={"base_uri": "https://localhost:8080/"} id="39diMDz1IQyv" outputId="08be59cf-6bd2-4386-d0a5-454c423d072d"
# Return dataframe columns
df4.columns
# + colab={"base_uri": "https://localhost:8080/"} id="-BESb6FxIk32" outputId="46ecc116-190d-49a0-be2c-54d966e64960"
# Return dataframe data
df4.keys
# + colab={"base_uri": "https://localhost:8080/", "height": 80} id="k5v_SKHlIn2C" outputId="6cfdcec6-76ab-42ff-de77-2d514a70ba63"
# Return the head of the dataframe, useful when you have a long frame
# Choose how many rows you want in head()
df4.head(1)
# + colab={"base_uri": "https://localhost:8080/", "height": 80} id="S5ZiB2wqI3rs" outputId="a95b716c-5a9b-4fb4-c0b3-8b0812cc0dce"
# Return the tail of the dataframe
df4.tail(1)
# + colab={"base_uri": "https://localhost:8080/"} id="IuVfuOuXK6KW" outputId="f31732e5-ad72-4c34-c5a9-2808b47e3b40"
# Return NumPy array of the dataframe
df4.values
# + colab={"base_uri": "https://localhost:8080/"} id="GMSw7p-3LAvn" outputId="b30e41d5-efe3-47d0-8aae-a7c23771850f"
# Return the size or number of elements in a dataframe
df4.size
# + colab={"base_uri": "https://localhost:8080/"} id="-twlx7EFLI68" outputId="9d611a45-c218-4f73-c7ba-4cf36b0de15e"
# Return the shape
df4.shape
# + colab={"base_uri": "https://localhost:8080/"} id="AMIZ2GJCLU3h" outputId="59126c5d-5252-4508-b35d-1e0af8e8ca24"
# Return the length of the dataframe/the number of rows in a dataframe
df4.shape[0]
# +
# Return the number of columns in a dataframe
df4.shape[1]
# + [markdown] id="NPsgE9ogLchY"
# #### Unique Values
# + colab={"base_uri": "https://localhost:8080/"} id="gpVCVrAELeGz" outputId="a16a2c3c-4197-4c63-f6be-85a770891ac7"
# Return unique values in a given column
df4['Product Name'].unique()
# + colab={"base_uri": "https://localhost:8080/"} id="vc6k80jULlJ2" outputId="304b4b99-23b0-44f6-d48f-b8c15c33ee66"
# Return a number of unique values
df4['Product Name'].nunique()
# + colab={"base_uri": "https://localhost:8080/"} id="4E5tnSieL3q1" outputId="3245bb8c-473b-4eee-8b1c-1c520362b09b"
# Counting the occurence of each value in a column
df4['Product Name'].value_counts()
# + [markdown] id="iKHhUk07Qsfr"
# #### Applying a Function to Dataframe
# + id="JW0yon8ZQuRk"
# Double the quantity of each product
def double_quantity(x):
    return x * 2
# + colab={"base_uri": "https://localhost:8080/"} id="AYAD97GwRrD4" outputId="70ccdfdd-a574-47a6-9022-1a437c5ebc68"
df4['Total Quantity'].apply(double_quantity)
# + colab={"base_uri": "https://localhost:8080/", "height": 111} id="arYX52diU25C" outputId="a167184b-2eed-4d4c-b6bd-00ecfc505250"
# You can also apply an anonymous function to a dataframe
# Squaring each value in dataframe
df5 = pd.DataFrame([[1,2], [4,5]], columns=['col1', 'col2'])
df5.applymap(lambda x: x**2)
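# While `applymap` works element-wise, `apply` with `axis=1` operates on whole rows, which is handy for combining several columns. A small sketch (the frame below mirrors `df5` and is rebuilt so the snippet stands alone):

```python
import pandas as pd

df_rows = pd.DataFrame([[1, 2], [4, 5]], columns=['col1', 'col2'])
# Row-wise: sum the two columns of each row
row_sums = df_rows.apply(lambda row: row['col1'] + row['col2'], axis=1)
print(row_sums.tolist())  # [3, 9]
```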
# + [markdown] id="KQgkBB6jW0mb"
# #### Sorting values in dataframe
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="oVQUpEWpW3eh" outputId="08573bf2-357c-4ecb-99e9-8e55423ed85e"
# Sort the df4 by the order number
df4.sort_values(['Order Number'])
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="SXHVgJOIXrYM" outputId="16c2c64a-8a3a-41b0-f21d-f94ba83e5d5d"
df4.sort_values(['Order Number'], ascending=False)
# + [markdown] id="Nf96M45vX6WI"
# <a name='1-5'></a>
# ### E. Aggregation Methods
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="YoqCRTKNYPsX" outputId="59bc6d83-8bac-4144-9dfa-02c5230fc09f"
df4
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="sGMYqGLDOW7y" outputId="39f39243-3153-4630-be08-6eb0614129e0"
# summary statistics
df4.describe()
# + colab={"base_uri": "https://localhost:8080/", "height": 111} id="1fhz845VOdb_" outputId="d96f93de-619a-426c-ffc6-4865bbc928c3"
df4.describe().transpose()
# + colab={"base_uri": "https://localhost:8080/"} id="e9RKm4TIHoui" outputId="23e93644-03f3-457f-e0bc-cea6b1f117bd"
# Mode of the column
# The mode is the most frequently occurring value
df4['Total Quantity'].mode()
# + colab={"base_uri": "https://localhost:8080/"} id="IzRRvXcOIbzC" outputId="938e4d3a-caa0-413e-d17a-a9551ff91afd"
# The maximum value
df4['Total Quantity'].max()
# + colab={"base_uri": "https://localhost:8080/"} id="T6x7YvCxJCFg" outputId="59f4672f-180a-43fb-8185-417550064ae7"
# The minimum value
df4['Total Quantity'].min()
# + colab={"base_uri": "https://localhost:8080/"} id="Jq8PKQIcJIpO" outputId="f6804d4e-22ba-462a-dbcc-b415bf510b6c"
# The mean
df4['Total Quantity'].mean()
# + colab={"base_uri": "https://localhost:8080/"} id="kxl8cn4jJYDg" outputId="943e05b4-da1a-4322-e594-9e3e16fbc1e5"
# The median value in a dataframe
df4['Total Quantity'].median()
# + colab={"base_uri": "https://localhost:8080/"} id="7sAHCt3rJg0P" outputId="542668fc-cdee-45a8-d062-27744fc04fb0"
# Standard deviation
df4['Total Quantity'].std()
# + colab={"base_uri": "https://localhost:8080/"} id="VryxdVidJy3q" outputId="7cecf363-c5fe-4994-961e-6696e7cf62aa"
# Variance
df4['Total Quantity'].var()
# + colab={"base_uri": "https://localhost:8080/"} id="1BuyPI-zJ69A" outputId="26dc9cf7-aa05-44ff-94ea-38153a7e1ef2"
# Sum of all values in a column
df4['Total Quantity'].sum()
# + colab={"base_uri": "https://localhost:8080/"} id="EhHTxpRrKD7I" outputId="5d80593c-5d5f-499b-8122-a36c281439ff"
# Product of all values in dataframe
df4['Total Quantity'].prod()
# + [markdown] id="viZf9yCpK4xO"
# <a name='1-6'></a>
# ### F. Groupby
#
# `Group by` involves splitting data into groups, applying a function to each group, and combining the results.
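A minimal, self-contained sketch of split-apply-combine on hypothetical toy data (not the `df4` used below):

```python
import pandas as pd

# Hypothetical toy frame to illustrate split-apply-combine
toy = pd.DataFrame({'Product': ['Shirt', 'Boot', 'Shirt', 'Boot'],
                    'Qty': [10, 5, 2, 8]})

# Split rows into groups by Product, apply sum to each group, combine results
totals = toy.groupby('Product')['Qty'].sum()
print(totals['Shirt'])  # 12
print(totals['Boot'])   # 13
```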
# + id="AmKHc2IjMZdE"
df4 = pd.DataFrame({'Product Name':['Shirt','Boot','Bag', 'Ankle', 'Pullover', 'Boot', 'Ankle', 'Tshirt', 'Shirt'],
'Order Number':[45,56,64, 34, 67, 56, 34, 89, 45],
'Total Quantity':[10,5,9, 11, 11, 8, 14, 23, 10]},
columns = ['Product Name', 'Order Number', 'Total Quantity'])
# + colab={"base_uri": "https://localhost:8080/", "height": 328} id="lIhEZ-cZMzUh" outputId="0ec73ad8-626b-490f-ff37-cdbf99c2db83"
df4
# + colab={"base_uri": "https://localhost:8080/", "height": 266} id="jNLj7ehAM02_" outputId="06525190-572e-47f2-ac9c-4eb31a33c885"
# Let's group the df by product name
df4.groupby('Product Name').mean()
# + colab={"base_uri": "https://localhost:8080/", "height": 266} id="gHaQZBIXNR-s" outputId="aaae2325-9114-4e52-fb15-ac34ba4ec847"
df4.groupby('Product Name').sum()
# + colab={"base_uri": "https://localhost:8080/", "height": 266} id="tgJA87lTN3dJ" outputId="13bc0ceb-09be-45df-b066-c95ec81a760a"
df4.groupby('Product Name').min()
# + colab={"base_uri": "https://localhost:8080/", "height": 266} id="1hPLubKZN83R" outputId="b0550960-111c-4e6c-acf1-a84f747f0035"
df4.groupby('Product Name').max()
# + colab={"base_uri": "https://localhost:8080/", "height": 266} id="lUvqB_Y-P-K8" outputId="08dd63ea-7002-4cfa-daad-3c6e1725fe07"
df4.groupby(['Product Name', 'Order Number']).max()
# + colab={"base_uri": "https://localhost:8080/", "height": 266} id="kk8w4IMCQKvu" outputId="52c5e63c-566e-4dae-b0c0-d5c8515fbfae"
df4.groupby(['Product Name', 'Order Number']).sum()
# + [markdown] id="zvqtUO0lO2HS"
# You can also use `aggregation()` after groupby.
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="i6JlX-yrO2iP" outputId="78690759-45b2-468e-f148-bdebb08014fd"
df4.groupby('Product Name').aggregate(['min', 'max', 'sum'])
# + [markdown] id="iaDkcuplQfD7"
# <a name='1-7'></a>
# ### G. Combining Datasets: Concatenating, Joining and Merging
#
# #### Concatenation
# + id="7Sas3WNFQgf6"
# Creating dataframes
df1 = pd.DataFrame({'Col1':['A','B','C'],
'Col2':[1,2,3]},
index=['a','b','c'])
df2 = pd.DataFrame({'Col1':['D','E','F'],
'Col2':[4,5,6]},
index=['d','e','f'])
df3 = pd.DataFrame({'Col1':['G','I','J'],
'Col2':[7,8,9]},
index=['g', 'i','j'])
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="DWCL_KbiV00-" outputId="c16c316a-3a6c-48c3-98dc-6538bc717ca2"
df1
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="mCmFFGDcXBlx" outputId="a63840b8-26a5-4c04-9088-87824453b90e"
df2
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="HftwMeb1XEI6" outputId="d8ac4302-d7df-49c6-c251-609a1525cc76"
df3
# + colab={"base_uri": "https://localhost:8080/", "height": 328} id="GJ4UHZakXFiM" outputId="d3af8f06-e924-48db-98db-a96cbdc6e7ca"
# Concatenating: Adding one dataset to another
pd.concat([df1, df2, df3])
# + [markdown] id="A8NR3BpsM_Et"
# The default axis is `0`. This is how the combined dataframes look if we change `axis` to `1`.
# + colab={"base_uri": "https://localhost:8080/", "height": 328} id="n2QCVHToNYIT" outputId="96e1fefb-2e32-4838-e8e4-48858d8bd93c"
pd.concat([df1, df2, df3], axis=1)
# + colab={"base_uri": "https://localhost:8080/", "height": 328} id="VL4FtD33Xdln" outputId="1bacc555-8d28-45a0-ddcc-56f2221a4b66"
# We can also use append() (deprecated in newer pandas in favor of pd.concat)
df1.append([df2, df3])
# + [markdown] id="G5RJbmpqN5C1"
# #### Merging
#
# If you have worked with SQL, what `pd.merge()` does may be familiar. It links data from different sources (different features) and gives you control over the structure of the combined dataset.
# + [markdown] id="DvlwrPIsaLho"
# *Pandas Merge method(`how`): SQL Join Name : Description*
#
#
#
# ```
# * left : LEFT OUTER JOIN : Use keys or columns from left frame only
#
# * right : RIGHT OUTER JOIN : Use keys or columns from right frame only
#
# * outer : FULL OUTER JOIN : Use union of keys or columns from both frames
#
# * inner : INNER JOIN : Use intersection of keys or columns from both frames
# ```
#
#
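The four `how` modes can be seen side by side on two hypothetical frames; the row counts follow directly from the key overlap:

```python
import pandas as pd

# Two hypothetical frames sharing a 'key' column
left = pd.DataFrame({'key': ['a', 'b', 'c'], 'L': [1, 2, 3]})
right = pd.DataFrame({'key': ['b', 'c', 'd'], 'R': [20, 30, 40]})

inner = pd.merge(left, right, how='inner', on='key')    # keys in both: b, c
outer = pd.merge(left, right, how='outer', on='key')    # union: a, b, c, d
left_j = pd.merge(left, right, how='left', on='key')    # keys from left: a, b, c
right_j = pd.merge(left, right, how='right', on='key')  # keys from right: b, c, d
print(len(inner), len(outer), len(left_j), len(right_j))  # 2 4 3 3
```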
# + id="xJvZbm8ZN1SB"
df1 = pd.DataFrame({'Name': ['Joe', 'Joshua', 'Jeanne', 'David'],
'Role': ['Manager', 'Developer', 'Engineer', 'Scientist']})
df2 = pd.DataFrame({'Name': ['David', 'Joshua', 'Joe', 'Jeanne'],
'Year Hired': [2018, 2017, 2020, 2018]})
df3 = pd.DataFrame({'Name': ['David', 'Joshua', 'Joe', 'Jeanne'],
'No of Leaves': [15, 3, 10, 12]})
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="fRK8jNJ2SqxM" outputId="a3782332-f882-4496-892a-6fb175af4d52"
df1
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="LlzW52vJStN7" outputId="cbc480a2-0ede-4b5e-e55f-bac0a5b1f827"
df2
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="JroYckVMSv1i" outputId="d4cdb9a9-9054-4249-b809-ccd8ff8d5f07"
pd.merge(df1, df2)
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="Qso0a2AMS-qg" outputId="56bcee2b-4548-45fa-f04e-5f7a588dd8f4"
## Let's merge with Name as the key
pd.merge(df1, df2, how='inner', on="Name")
# + id="CvSLfcCZWrN8"
df1 = pd.DataFrame({'col1': ['K0', 'K0', 'K1', 'K2'],
'col2': ['K0', 'K1', 'K0', 'K1'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
df2 = pd.DataFrame({'col1': ['K0', 'K1', 'K1', 'K2'],
'col2': ['K0', 'K0', 'K0', 'K0'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="fNuBZxJ3Y2tR" outputId="04fead01-62f2-4bf6-9610-03aae17f130b"
df1
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="BnH1xC4WY31H" outputId="899584c7-05ed-4504-9b08-6e6858c5f822"
df2
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="bQ_chqv-VpxK" outputId="79b127fd-90ea-43b7-ccc9-1d601ad4e439"
pd.merge(df1, df2, how='inner', on=['col1', 'col2'])
# + colab={"base_uri": "https://localhost:8080/", "height": 235} id="AA0B0vnOZBe-" outputId="804bc5a4-135b-418c-e214-adcca9c76df5"
pd.merge(df1, df2, how='outer', on=['col1', 'col2'])
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="jqLLLuMwZYSS" outputId="ad49df22-ff36-47dc-d9f4-224da1fbcd61"
pd.merge(df1, df2, how='left')
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="92MAb163Z0IM" outputId="bf72f4bb-b61f-4149-d42a-ec542b29ca0e"
pd.merge(df1, df2, how='right')
# + [markdown] id="8eBHN57CcYhI"
# #### Joining
#
# Joining is a simple way to combine columns of two dataframes with different indexes.
#
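A minimal sketch of index alignment with hypothetical frames: `join()` is a left join on the index by default, so unmatched rows get NaN:

```python
import pandas as pd

# join() aligns on the index; merge() aligns on columns by default
a = pd.DataFrame({'A': [1, 2]}, index=['x', 'y'])
b = pd.DataFrame({'B': [10, 30]}, index=['x', 'z'])

joined = a.join(b)           # left join on index: keeps 'x' and 'y'
print(joined.loc['x', 'B'])  # 10.0; row 'y' gets NaN in column B
```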
# + id="cc9WCJHzc5Lt"
df1 = pd.DataFrame({'Col1': ['A', 'B', 'C'],
'Col2': [11, 12, 13]},
index=['a', 'b', 'c'])
df2 = pd.DataFrame({'Col3': ['D', 'E', 'F'],
'Col4': [14, 14, 16]},
index=['a', 'c', 'd'])
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="-2vPNNXcd532" outputId="314fc545-bcba-418d-ecf9-e236df1de66a"
df1
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="fPRLEeLod7gW" outputId="ab506d97-75c6-4c15-a740-5148eec89acc"
df2
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="9vjxh0O-d9e2" outputId="02cb3f80-887d-447a-b2cb-b21e0402f85e"
df1.join(df2)
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="RwuSaXD5eBPN" outputId="0ced8f45-3247-4b21-bac5-e1909b3a9b60"
df2.join(df1)
# + [markdown] id="5ApI4u3DeSk1"
# You can see that with `df.join()`, the alignment of data is on indexes.
# + colab={"base_uri": "https://localhost:8080/", "height": 111} id="_i-_dSy5efHn" outputId="f1329d33-4263-493b-f416-1dca2e60206a"
df1.join(df2, how='inner')
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="88PjCW2ofM-y" outputId="e98270bc-8965-41fc-accb-5d32991d39c2"
df1.join(df2, how='outer')
# + [markdown] id="rLBMe6CINpzu"
# Learn more about Merging, Joining, and Concatenating the Pandas Dataframes [here](https://pandas.pydata.org/docs/user_guide/merging.html).
# + [markdown] id="I86Tf0VYfTJT"
# <a name='1-8'></a>
# ### H. Beyond Dataframes: Working with CSV and Excel
#
# In this last section of Pandas' fundamentals, we will see how to read real-world data in different formats: CSV and Excel.
# + [markdown] id="BQD9appAk64O"
# #### CSV and Excel
#
# Let's use the California housing dataset.
# + id="K7rfiyx6ivZ_"
# Let's download the data
# !curl -O https://raw.githubusercontent.com/nyandwi/public_datasets/master/housing.csv
# -
data = pd.read_csv('housing.csv')
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="bbASBwqilYCZ" outputId="6d0a180e-2887-4346-a55e-8ebf02d0a793"
data.head()
# + colab={"base_uri": "https://localhost:8080/"} id="VetH2qu5mnFM" outputId="0321f018-e42a-4976-82db-5889461d1f3f"
type(data)
# + id="Ca5qySmRmsz0"
## Exporting dataframe back to csv
data.to_csv('housing_dataset', index=False)
# + [markdown] id="LmqPzh9-nWgQ"
# If you look into the folder sidebar, you can see `housing_dataset`.
# + id="vseWxDRYm1iA"
## Exporting CSV to Excel
data.to_excel('housing_excel.xlsx', index=False)
# + id="0DinnjCfnt3M"
## Reading the Excel file back
excel_data = pd.read_excel('housing_excel.xlsx')
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="mIKA-KwYiFgZ" outputId="cdbf9eed-a457-400c-a7d8-5620bc275001"
excel_data.head()
# + [markdown] id="HK8qIyAEjRsM"
# <a name='2'></a>
# ## Real World: Exploratory Data Analysis (EDA)
#
# All of the above covered the basics. Let us apply some of these techniques to a real-world dataset, `Red wine quality`.
# + colab={"base_uri": "https://localhost:8080/"} id="YEWQ8U-FjTVo" outputId="cb81a5a4-43d4-4821-e29a-84e243e40aff"
# !curl -O https://raw.githubusercontent.com/nyandwi/public_datasets/master/winequality-red.csv
# + id="EBXDmKTlmy0X"
wine_data = pd.read_csv('winequality-red.csv')
# + colab={"base_uri": "https://localhost:8080/", "height": 221} id="Gr93h3Xgm8vF" outputId="2510e3ee-7ff4-46fc-e19a-5bf67b50edd8"
# Displaying the head of the dataset
wine_data.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 221} id="BrBYEf9Dqg9K" outputId="b240cb52-ba5b-4286-a42a-8ea7feafe135"
# Displaying the tail of the dataset
wine_data.tail()
# + colab={"base_uri": "https://localhost:8080/", "height": 421} id="qauq0ujDnocm" outputId="23d06fc6-738b-4ad8-8582-3fd6e916cf18"
# Displaying summary statistics
wine_data.describe().transpose()
# + colab={"base_uri": "https://localhost:8080/"} id="qPCZ3X8WqN4j" outputId="85edec22-5d4e-4f79-c0aa-af7ac9f0700e"
# Displaying quick information about the dataset
wine_data.info()
# + colab={"base_uri": "https://localhost:8080/"} id="LWL8QJ_XqTCJ" outputId="17558d3d-ac51-443d-d7f4-08c0a5d7f016"
# Checking missing values
wine_data.isnull().sum()
# + colab={"base_uri": "https://localhost:8080/"} id="8SnzW2ibsmYp" outputId="41953dab-5aa4-4470-e412-7442db0bf2c8"
# Wine quality ranges from 0 to 10. The higher the quality value, the better the wine
wine_data['quality'].value_counts()
# + colab={"base_uri": "https://localhost:8080/", "height": 484} id="ij8yGjqnt2HD" outputId="9a237e6a-69da-4937-d612-3b8e01cf3309"
wine_data.groupby(['fixed acidity', 'volatile acidity', 'citric acid']).sum()
# + colab={"base_uri": "https://localhost:8080/", "height": 484} id="8hl4EQpNt680" outputId="d637c1ad-f4d8-4f38-e5ba-f63a24ccdd47"
wine_data.groupby(['free sulfur dioxide', 'total sulfur dioxide']).sum()
# -
# This is the end of the lab that was about using Pandas to manipulate data. A lot of things will make sense when we start to prepare data for machine learning models in the next notebooks.
# <a name='0'></a>
#
# ### [BACK TO TOP](#0)
| 2_data_manipulation_with_pandas/2_data_manipulation_with_pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import datetime
import sys
import pathlib
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential, Model
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution1D, TimeDistributed, Conv2D, MaxPooling1D, Input, concatenate
from keras.layers.recurrent import LSTM, GRU
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau, CSVLogger
from keras.layers.normalization import BatchNormalization
from keras.layers.advanced_activations import *
from keras.optimizers import Adam, Nadam
from keras import backend as K
sys.path.append(str(pathlib.Path.cwd().parents[0]))
from datasets.utils import calc_pivots
# %matplotlib
# + [markdown] heading_collapsed=true
# ## Import Tick Data and Create 5min RTH Bars
# + hidden=true
tick_data = pd.read_feather('../data/processed/ES_tick.feather')
tick_data = tick_data[tick_data['date'] > '2017-07-29']
#Create Index from date column
tick_data.index = tick_data['date']
tick_data.drop(labels=['date'],axis=1,inplace=True)
tick_data.tail()
# + hidden=true
#Resample to get 5min bars
five_min_data = pd.DataFrame(
tick_data['last'].resample('5Min', loffset=datetime.timedelta(minutes=5)).ohlc())
import pandas_market_calendars as mcal
#We hack the NYSE Calendar extending the close until 4:15
class CMERTHCalendar(mcal.exchange_calendar_nyse.NYSEExchangeCalendar):
@property
def close_time(self):
return datetime.time(16, 15)
#Create RTH Calendar
nyse = CMERTHCalendar()
schedule = nyse.schedule(start_date=five_min_data.index.min(),
end_date=five_min_data.index.max())
#Filter out those bars that occur during RTH
five_min_data['dates'] = pd.to_datetime(five_min_data.index.date)
five_min_data['valid_date'] = five_min_data['dates'].isin(schedule.index)
five_min_data['valid_time'] = False
during_rth = five_min_data['valid_date'] & \
(five_min_data.index > schedule.loc[five_min_data['dates'],'market_open']) & \
(five_min_data.index <= schedule.loc[five_min_data['dates'],'market_close'])
five_min_data.loc[during_rth, 'valid_time'] = True
five_min_data = five_min_data[five_min_data['valid_time'] == True]
five_min_data.drop(['dates','valid_date','valid_time'], axis=1, inplace=True)
#Add ema
five_min_data['ema'] = five_min_data['close'].ewm(span=20, min_periods=20).mean()
#Reset index
five_min_data.reset_index(inplace=True)
five_min_data[81:].head()
# + hidden=true
#Add column for number of seconds elapsed in trading day
five_min_data['sec'] = (five_min_data['date'].values
- five_min_data['date'].values.astype('datetime64[D]')) / np.timedelta64(1,'s')
#Calculate sin & cos time
#24hr time is a cyclical continuous feature
seconds_in_day = 24*60*60
five_min_data['sin_time'] = np.sin(2*np.pi*five_min_data['sec']/seconds_in_day)
five_min_data['cos_time'] = np.cos(2*np.pi*five_min_data['sec']/seconds_in_day)
five_min_data.drop('sec', axis=1, inplace=True)
five_min_data.head()
# -
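The sin/cos encoding above maps seconds-since-midnight onto the unit circle, so times just before and just after midnight land close together. A standalone sketch of the same idea:

```python
import numpy as np

seconds_in_day = 24 * 60 * 60

def encode_time(sec):
    """Map seconds-since-midnight onto the unit circle."""
    angle = 2 * np.pi * sec / seconds_in_day
    return np.sin(angle), np.cos(angle)

# Five minutes before and after midnight are near each other on the circle,
# even though their raw second counts (86100 vs 300) are far apart
before = np.array(encode_time(seconds_in_day - 300))
after = np.array(encode_time(300))
print(np.linalg.norm(before - after))  # small distance on the circle
```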
# ## Create Test / Train Datasets
five_min_data = pd.read_feather('../data/processed/ES_TFCnn.feather')
five_min_data.head()
five_min_data = pd.read_hdf('../data/processed/store.h5', key='cnn_data')
#five_min_data = calc_pivots(five_min_data)
five_min_data.head()
five_min_data.dtypes
# +
WINDOW = 36
EMB_SIZE = 7
i_start = 19
for i_start in range(19, five_min_data.shape[0]-WINDOW):
i_end = i_start + WINDOW + 1
wd = five_min_data[i_start:i_end].copy()
last_bar_close = wd.iloc[-1]['close']
last_bar_high = wd.iloc[-1]['high']
last_bar_open = wd.iloc[-1]['open']
wd.loc[:, ['open','high','low','close','ema','ph','pl']] = wd.loc[:, ['open','high','low',
'close','ema','ph','pl']] / last_bar_close
x_i = wd.loc[:, ['open','high','low','close','ema','sin_time','cos_time']].values.tolist()
x2_i = []
for i, r in wd.iterrows():
if not pd.isnull(r['ph']):
x2_i.append([r['sin_time'], r['cos_time'], r['ph'], 1])
if not pd.isnull(r['pl']):
x2_i.append([r['sin_time'], r['cos_time'], r['pl'], 0])
if wd.iloc[-1]['btc'] > 0:
y_i = [1, 0]
else:
y_i = [0, 1]
wd.head()
for i_start in range(19, five_min_data.shape[0]-WINDOW):
    print(i_start)
# +
fd = five_min_data
fd['ema_5'] = fd['close'].ewm(span=5, min_periods=5).mean()
fd['ema_3'] = fd['close'].ewm(span=3, min_periods=3).mean()
fd['change'] = fd['close'] - fd['close'].shift(1)
fd['cdl_hl'] = np.where(fd['low'] >= fd['low'].shift(), 1, 0) #higher low
fd['cdl_lh'] = np.where(fd['high'] <= fd['high'].shift(), 1, 0) #lower high
fd.head()
# -
#fd['cdl_sign'] = np.sign(fd['close'] - fd['open'])
fd['cdl_body'] = (fd['close'] - fd['open']) / fd['close'] * 100
fd['cdl_rng'] = fd['high'] - fd['low']
fd['cdl_ut'] = np.where(fd['cdl_body'] > 0, fd['high'] - fd['close'], fd['high'] - fd['open'])
fd['cdl_lt'] = np.where(fd['cdl_body'] > 0, fd['open'] - fd['low'], fd['close'] - fd['low'])
fd['cdl_ut'] = np.where(fd['cdl_rng'] == 0, 0, fd['cdl_ut'] / fd['cdl_rng'])
fd['cdl_lt'] = np.where(fd['cdl_rng'] == 0, 0, fd['cdl_lt'] / fd['cdl_rng'])
fd['cdl_rng'] = (fd['cdl_rng'] / fd['low']) * 100
fd.head()
higher_than_next_bar = fd['high'] > fd.shift(-1)['high']
higher_than_prev_bar = fd['high'] > fd.shift(1)['high']
lower_than_next_bar = fd['low'] < fd.shift(-1)['low']
lower_than_prev_bar = fd['low'] < fd.shift(1)['low']
fd['pivot_high'] = higher_than_next_bar & higher_than_prev_bar
fd['pivot_low'] = lower_than_next_bar & lower_than_prev_bar
fd.head()
# +
data = fd[81:]
openp = data['open'].tolist()
highp = data['high'].tolist()
lowp = data['low'].tolist()
closep = data['close'].tolist()
emap = data['ema'].tolist()
sin_time = data['sin_time'].tolist()
cos_time = data['cos_time'].tolist()
btc = data['btc'].tolist()
stc = data['stc'].tolist()
change = data['change'].tolist()
cdl_sign = data['cdl_sign'].tolist()
cdl_body = data['cdl_body'].tolist()
cdl_ut = data['cdl_ut'].tolist()
cdl_lt = data['cdl_lt'].tolist()
cdl_rng = data['cdl_rng'].tolist()
cdl_hl = data['cdl_hl'].tolist()
cdl_lh = data['cdl_lh'].tolist()
pivot_high = data['pivot_high'].astype('int').tolist()
pivot_low = data['pivot_low'].astype('int').tolist()
# -
closep = (data['close'].shift(-1) - data['close']).tolist()[:-1]
btc = data['btc'].shift(-1).tolist()[:-1]
sin_time = data['sin_time'].shift(-1).tolist()[:-1]
cos_time = data['cos_time'].shift(-1).tolist()[:-1]
p = int(data.shape[0] * 0.9)
p = 10000
mean = data.mean(axis=0)
std = data.std(axis=0)
mean_c = np.mean(closep[0:p])
std_c = np.std(closep[0:p])
mean_c, std_c
# +
WINDOW = 36 #Number of bars in a trading day
EMB_SIZE = 6
STEP = 1
FORECAST = 1
X, Y = [], []
for i in range(0, len(data)-WINDOW+1, STEP):
try:
o = openp[i:i+WINDOW]
h = highp[i:i+WINDOW]
l = lowp[i:i+WINDOW]
c = closep[i:i+WINDOW]
e = emap[i:i+WINDOW]
ct = cos_time[i:i+WINDOW]
st = sin_time[i:i+WINDOW]
cng = change[i:i+WINDOW]
_cdl_sign = cdl_sign[i:i+WINDOW]
_cdl_body = cdl_body[i:i+WINDOW]
_cdl_ut = cdl_ut[i:i+WINDOW]
_cdl_lt = cdl_lt[i:i+WINDOW]
_cdl_rng = cdl_rng[i:i+WINDOW]
_cdl_hl = cdl_hl[i:i+WINDOW]
_cdl_lh = cdl_lh[i:i+WINDOW]
_pivot_high = np.array(pivot_high[i:i+WINDOW]) * np.array(h)
_pivot_low = np.array(pivot_low[i:i+WINDOW]) * np.array(l)
o = (np.array(o) - np.mean(o)) / np.std(o)
h = (np.array(h) - np.mean(h)) / np.std(h)
l = (np.array(l) - np.mean(l)) / np.std(l)
c = (np.array(c) - np.mean(c)) / np.std(c)
e = (np.array(e) - np.mean(e)) / np.std(e)
ph = (np.array(_pivot_high) - np.mean(h)) / np.std(h)
pl = (np.array(_pivot_low) - np.mean(l)) / np.std(l)
_cng = (np.array(cng) - np.mean(cng)) / np.std(cng)
#c = (np.array(c) - mean_c) / std_c
#o = np.divide(o, c[-1])
#h = np.divide(h, c[-1])
#l = np.divide(l, c[-1])
#e = np.divide(e, c[-1])
#c = np.divide(c, c[-1])
x_i = closep[i:i+WINDOW]
y_i = closep[(i+WINDOW-1)+FORECAST]
last_close = x_i[-1]
next_close = y_i
if btc[i+WINDOW-1] > 0:
y_i = [1, 0]
else:
y_i = [0, 1]
args = (c, )
args += (o, h, l)
#x_i = np.column_stack((o, h, l, c, e, ct, st, cng, _cdl_hl))
#x_i = np.column_stack((cng))
x_i = np.column_stack((c, e, ct, st, _cdl_lh, _cdl_hl))
except Exception as e:
#e.throw()
break
#only add if 1pt body and close on high
if (closep[i+WINDOW-1] == highp[i+WINDOW-1]) and (closep[i+WINDOW-1]-openp[i+WINDOW-1]>=1):
X.append(x_i)
Y.append(y_i)
# -
X
# +
# Let's split into train and test sets
# Train Set will be from 8/1/17 through 12/31/17, Test Set 1/1/18 - 1/25/18
p = 8547 #Manual split for now
p = int(len(X) * 0.9)
X, Y = np.array(X), np.array(Y)
X_train = X[0:p]
Y_train = Y[0:p]
X_test = X[p:]
Y_test = Y[p:]
#We may want to shuffle the training data -- will look into this later
def shuffle_in_unison(a, b):
    # courtesy http://stackoverflow.com/users/190280/josh-bleecher-snyder
assert len(a) == len(b)
shuffled_a = np.empty(a.shape, dtype=a.dtype)
shuffled_b = np.empty(b.shape, dtype=b.dtype)
permutation = np.random.permutation(len(a))
for old_index, new_index in enumerate(permutation):
shuffled_a[new_index] = a[old_index]
shuffled_b[new_index] = b[old_index]
return shuffled_a, shuffled_b
#X_train, Y_train = shuffle_in_unison(X_train, Y_train)
# Add a leading axis of size 1 so each sample is a length-1 sequence of (WINDOW, EMB_SIZE) frames for the TimeDistributed layers
X_train = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1], EMB_SIZE))
X_test = np.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1], EMB_SIZE))
X_train[-1]
# -
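As an aside, the loop in `shuffle_in_unison` can be replaced (assuming NumPy arrays) by indexing both arrays with one shared permutation; a toy sketch:

```python
import numpy as np

# Toy paired arrays: row i of a is [2*i, 2*i+1], label b[i] is i
a = np.arange(6).reshape(3, 2)
b = np.array([0, 1, 2])

idx = np.random.permutation(len(a))  # one shared permutation
a_sh, b_sh = a[idx], b[idx]          # rows of a and b stay paired

# Each shuffled label still matches its row
print(all(a_sh[i, 0] == 2 * b_sh[i] for i in range(3)))  # True
```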
# ## Train CNN Model
# +
model = Sequential()
model.add(
TimeDistributed(
Conv2D(32, (7, 7), padding='same', strides=2),
input_shape=(None, 540, 960, 2)))
model.summary()
# +
model = Sequential()
model.add(TimeDistributed(Convolution1D(filters=32,
kernel_size=6,
padding='same',
activation='relu'),
input_shape = (None, WINDOW, EMB_SIZE)))
#model.add(TimeDistributed(BatchNormalization()))
#model.add(TimeDistributed(LeakyReLU()))
model.add(TimeDistributed(Dropout(0.75)))
model.add(TimeDistributed(Convolution1D(filters=64,
kernel_size=12,
padding='same',
activation='relu')))
#model.add(TimeDistributed(BatchNormalization()))
#model.add(TimeDistributed(LeakyReLU()))
model.add(TimeDistributed(Dropout(0.75)))
model.add(TimeDistributed(Flatten()))
model.add(TimeDistributed((Dense(32, activation='relu'))))
#model.add(TimeDistributed(BatchNormalization()))
#model.add(TimeDistributed(LeakyReLU()))
model.add(LSTM(64, recurrent_dropout=0.75, stateful=False))
#model.add(Dropout(0))
model.add(Dense(2, activation='softmax'))
#model.add(Activation('softmax'))
model.summary()
# -
from datasets.es_btc import ESBTCDataset
ds = ESBTCDataset(use_cdl=True, use_ema=False, use_ema5=True)
ds.load_or_generate_data()
# +
model = Sequential()
model.add(Convolution1D(input_shape = ds.input_shape,
filters=32,
kernel_size=6,
padding='same',
activation='relu'))
#model.add(BatchNormalization())
#model.add(LeakyReLU())
#model.add(MaxPooling1D(strides=1))
model.add(Dropout(0.75))
model.add(Convolution1D(filters=64,
kernel_size=12,
padding='same',
activation='relu'))
#model.add(BatchNormalization())
#model.add(LeakyReLU())
#model.add(MaxPooling1D(strides=1))
model.add(Dropout(0.75))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
#model.add(BatchNormalization())
#model.add(LeakyReLU())
#model.add(Dropout(0.75))
model.add(Dense(2, activation='softmax'))
#model.add(Activation('softmax'))
model.summary()
# -
model = Sequential()
model.add(LSTM(64, return_sequences=False,
input_shape=ds.input_shape,
dropout=0,
recurrent_dropout=0.75)) # returns a sequence of vectors of dimension 32
#model.add(LSTM(32, return_sequences=True)) # returns a sequence of vectors of dimension 32
#model.add(LSTM(32, recurrent_dropout=0.75)) # return a single vector of dimension 32
model.add(Dense(2, activation='softmax'))
model.summary()
# +
cnn_input = Input(shape=ds.input_shape, dtype='float32', name='cnn_input')
x = Convolution1D(filters=32,
kernel_size=6,
padding='same',
activation='relu')(cnn_input)
x = Dropout(0.75)(x)
x = Convolution1D(filters=64,
kernel_size=12,
padding='same',
activation='relu')(x)
x = Dropout(0.75)(x)
x = Flatten()(x)
x = Dense(32, activation='relu')(x)
x1 = LSTM(32, recurrent_dropout=0.75)(cnn_input)
merged = concatenate([x, x1], axis=-1)
x2 = Dense(32, activation='relu')(merged)
output = Dense(2, activation='softmax', name='output')(x2)
model = Model(inputs=[cnn_input], outputs=[output])
model.summary()
# -
ds.x_train.shape
# +
def precision_threshold(threshold=0.5):
def precision(y_true, y_pred):
"""Precision metric.
Computes the precision over the whole batch using threshold_value.
"""
y_true = y_true[:, 0]
y_pred = y_pred[:, 0]
threshold_value = threshold
# Adaptation of the "round()" used before to get the predictions. Clipping to make sure that the predicted raw values are between 0 and 1.
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
        # Compute the number of true positives. Rounding as a precaution to make sure we have an integer.
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
# count the predicted positives
predicted_positives = K.sum(y_pred)
# Get the precision ratio
precision_ratio = true_positives / (predicted_positives + K.epsilon())
return precision_ratio
return precision
def precision(y_true, y_pred):
'''Calculates the precision, a metric for multi-label classification of
how many selected items are relevant.
'''
y_true = y_true[:, 0]
y_pred = y_pred[:, 0]
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
from keras.callbacks import Callback
from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score
class Metrics(Callback):
def on_train_begin(self, logs={}):
self.val_precisions = []
self.val_profits = []
def on_epoch_end(self, epoch, logs={}):
val_predict = (np.asarray(self.model.predict(self.validation_data[0]))).round()[:,0]
val_targ = self.validation_data[1][:,0]
true_positives = np.sum(np.round(val_predict * val_targ))
false_positives = np.sum(np.round(val_predict * (1-val_targ)))
predicted_positives = np.sum(np.round(val_predict))
_val_precision = true_positives / (predicted_positives + 1e-7)
_val_profit = 96 * true_positives - 104 * false_positives
self.val_precisions.append(_val_precision)
self.val_profits.append(_val_profit)
print (f'- val_precision: {_val_precision} - val_profit: {_val_profit}')
return
metrics = Metrics()
opt = Adam(lr=1e-4)
reduce_lr = ReduceLROnPlateau(monitor='val_acc', factor=0.9, patience=30, min_lr=0.000001, verbose=1)
checkpointer = ModelCheckpoint(filepath="model.hdf5", verbose=1, save_best_only=True)
model.compile(optimizer=opt,
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(ds.x_train, ds.y_train,
epochs = 250,
batch_size = 16384,
verbose=1,
validation_data=(ds.x_test, ds.y_test),
callbacks=[metrics, checkpointer],
shuffle=True)
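The precision used in the custom metrics above (true positives over predicted positives, with an epsilon for stability) can be sanity-checked outside the Keras graph with plain NumPy on hypothetical predictions:

```python
import numpy as np

y_true = np.array([1, 1, 0, 0, 1])   # first column of one-hot labels
y_pred = np.array([1, 0, 1, 0, 1])   # thresholded predictions

true_positives = np.sum(y_true * y_pred)      # 2
predicted_positives = np.sum(y_pred)          # 3
precision = true_positives / (predicted_positives + 1e-7)
print(round(precision, 3))  # 0.667
```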
# +
plt.figure()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()
plt.figure()
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()
plt.figure()
plt.plot(history.history['precision'])
plt.plot(history.history['val_precision'])
plt.title('precision')
plt.ylabel('precision')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()
# +
from sklearn.metrics import confusion_matrix
#model.load_weights("model.hdf5")
pred = model.predict(np.array(ds.x_test), batch_size=64)
C = confusion_matrix([np.argmax(y) for y in ds.y_test], [np.argmax(y) for y in pred])
print (C / C.astype(float).sum(axis=1)[:, None])
# -
C
#K.clip(pred[0] * Y_test[0], 0, 1)
8 / (K.epsilon() + 14)
#sum([np.argmax(y) for y in Y_test])
pd.DataFrame(pred)[0].plot()
data.iloc[43650,:]
pred.shape
# +
pred = model.predict(np.array(ds.x_test), batch_size=64)
df = pd.DataFrame(np.concatenate((pred, ds.y_test), axis=1))
#df[(df[0]>.6)&(df[2]==1)].shape
#df[(df[0]>.6)].shape
#df.shape
df.sort_values(0,axis=0,ascending=False).head(20)
pred[:,0], 0.5
# -
pred0 = pred[:,0]
y0 = ds.y_test[:, 0]
true_positives = np.sum(np.round(pred0 * y0))
false_positives = np.sum(np.round(pred0 * (1-y0)))
predicted_positives = np.sum(np.round(pred0))
np.sum(false_positives)
C / C.astype(float).sum(axis=1)[:, None]
probs = Y_train.sum(axis=0) / Y_train.shape[0]
probs
pred
s = np.random.binomial(1, probs[1], pred.shape[0])
s
C1 = confusion_matrix([np.argmax(y) for y in Y_test], s)
print (C1 / C1.astype(np.float).sum(axis=1)[:, None])
([np.argmax(y) for y in Y_test] == s).sum() / pred.shape[0]
| notebooks/1.4 - TF CNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="iFhR1ucuyU2d" colab_type="code" colab={}
#Import pandas libraries
import pandas as pd
#Import numpy libraries
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
#Import category encoders
import category_encoders as ce
#Import sklearn libraries
from sklearn.impute import SimpleImputer
from sklearn import metrics
# import classifier libraries
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import RobustScaler
# Import Suport Vector Classifier module from svm library. We'll use SVC to model our data
from sklearn.svm import SVC,LinearSVC
from sklearn.model_selection import train_test_split
# Import scikit-learn metrics module for accuracy calculation
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
#Import warnings filter libraries
import warnings
warnings.filterwarnings('ignore')
# + id="9DnOgqpS2q65" colab_type="code" outputId="9f9759d5-153a-40ab-9079-b7f27675143a" colab={"base_uri": "https://localhost:8080/", "height": 34}
#Load the dataset
df = pd.read_csv('hypothyroid.csv')
# Check shape of the dataset
df.shape
# + id="MihrIoHR3MxU" colab_type="code" outputId="57bcbffc-c0cc-4243-b8f0-fafd0eb3667c" colab={"base_uri": "https://localhost:8080/", "height": 224}
#Check the first five rows of our dataset
df.head()
# + id="GGa5zki8GN-H" colab_type="code" outputId="2ff34ca3-2790-4762-ac7b-5e8e48a45531" colab={"base_uri": "https://localhost:8080/", "height": 476}
#Check null values
df.isnull().sum()
# + [markdown] id="O_Hz2GdKH_kw" colab_type="text"
# Our dataset appears to have no null values. However, if we check individual columns there is a `?` placeholder which should be converted to a NaN value.
# + [markdown] id="FoDtI23VIl6d" colab_type="text"
# ### **Replace ? with NAN**
# + id="yqTOfpzM4fQL" colab_type="code" outputId="af8c5955-89ed-4c22-b6f7-8fd102ad51dc" colab={"base_uri": "https://localhost:8080/", "height": 493}
# Count unique elements in each column including NaN
uniqueValues = df.nunique(dropna=False)
uniqueValues
# + [markdown] id="0mc_xw5tKIak" colab_type="text"
#
#
# * Column `sex` has 3 unique values; to check, we print the unique values below.
#
#
# + id="b9Im55pUI04u" colab_type="code" outputId="4187e83a-bed9-4e82-995e-01b5af800af4" colab={"base_uri": "https://localhost:8080/", "height": 34}
#For example the below column has a unique value ? which needs to be replaced
df['sex'].unique()
# + id="EfxqibpYIqY7" colab_type="code" outputId="26a39b0f-b879-44cd-f441-7e16332695fe" colab={"base_uri": "https://localhost:8080/", "height": 224}
#Replace all rows with ? to nan
df.replace('?',np.nan,inplace=True)
df.head()
# + id="TAARQUEDP3FD" colab_type="code" outputId="5ca98ba8-4047-4598-9db9-605c36e52191" colab={"base_uri": "https://localhost:8080/", "height": 476}
#Check for missing values
df.isnull().sum()
# + id="sBV_Fgk2IxHO" colab_type="code" outputId="619690ac-27d9-4615-e726-4a4d04f2b942" colab={"base_uri": "https://localhost:8080/", "height": 34}
#Confirm that the ? value has been replaced.
df['sex'].unique()
# + id="NJNUAOwJFST2" colab_type="code" outputId="94ab4e59-3fd8-4f72-8abf-a9612de16964" colab={"base_uri": "https://localhost:8080/", "height": 544}
#Checking data types of our dataset
df.info()
# + [markdown] id="mcf_51sfKoHT" colab_type="text"
# There are numeric columns currently stored as objects that need to be converted to a numeric type.
# + id="_psLaxo4LDvo" colab_type="code" outputId="3b6e8a7a-af55-4d23-a110-b109d6f35364" colab={"base_uri": "https://localhost:8080/", "height": 34}
num = ['age','TSH','T3','TT4','T4U','FTI']
num
# + id="wOgTcaUrLd6C" colab_type="code" colab={}
categorical= ['status','sex', 'on_thyroxine', 'query_on_thyroxine','on_antithyroid_medication', 'thyroid_surgery', 'query_hypothyroid',
'query_hyperthyroid', 'pregnant', 'sick', 'tumor', 'lithium', 'goitre','TSH_measured','T3_measured','TT4_measured',
'T4U_measured','FTI_measured','TBG_measured', 'TBG']
# + id="wWYLWGLXL8G8" colab_type="code" outputId="282f140d-2aac-4b88-bd31-5af6b2cf9552" colab={"base_uri": "https://localhost:8080/", "height": 476}
#convert object to numerical
df[num] = df[num].apply(pd.to_numeric)
df.dtypes
# + id="opT1LZUMFabx" colab_type="code" outputId="cedb92ee-9ea9-4bf6-b993-0854e0fcae54" colab={"base_uri": "https://localhost:8080/", "height": 34}
# To confirm they have been converted, split numerical variables from categorical variables
numerical_variables = [col for col in df.columns if df[col].dtypes != 'O']
numerical_variables
# + id="KR2ptYaRELoc" colab_type="code" outputId="b527768c-e1cb-4653-8ac2-0912232158be" colab={"base_uri": "https://localhost:8080/", "height": 357}
#Get all categorical variables
categorical_variables = [col for col in df.columns if df[col].dtypes == 'O']
categorical_variables
# + id="_xqi-czTPd8H" colab_type="code" outputId="a1db67fa-6ed6-44f6-aa04-a150c4292cff" colab={"base_uri": "https://localhost:8080/", "height": 476}
#Check for missing values
df.isnull().sum()
# + id="kgl7AkIPRyDf" colab_type="code" colab={}
#Fill missing values of numerical variables
# Use simple imputer to fill missing values with the mean
impute = SimpleImputer(strategy ='mean')
df[numerical_variables] = impute.fit_transform(df[numerical_variables])
# + id="Ox04ebMJEJBQ" colab_type="code" outputId="5ebe21a4-6cb1-4c92-b93b-3d4353d7ecd0" colab={"base_uri": "https://localhost:8080/", "height": 476}
# Fill missing values for categorical data
df['sex'].fillna(df['sex'].mode()[0], inplace=True)
df['TBG'].fillna(df['TBG'].mode()[0], inplace=True)
#Confirm there are no missing values
df.isnull().sum()
# + [markdown] id="MNfFT9kiVvbP" colab_type="text"
# ##**Data Preprocessing**
# + id="ILol9RUXWZEd" colab_type="code" outputId="7ee054af-f2a9-44d6-a3f0-b9c2fd7712d1" colab={"base_uri": "https://localhost:8080/", "height": 221}
#We define x and y
y = df['status']
y
# + id="vrdEp1j6cOJ8" colab_type="code" outputId="aa25dcf1-0624-4a1c-afc8-76c4dde099ea" colab={"base_uri": "https://localhost:8080/", "height": 85}
#Change our target values(y) to a binary
y =df['status']= np.where(df['status']=='hypothyroid',0,1)
print(y)
df['status'].value_counts()
# + [markdown] id="LA6KH0eYa6fj" colab_type="text"
#
#
# * 1 means it's negative
# * 0 means it's hypothyroid
#
#
# + id="u_UB8THmWqp5" colab_type="code" outputId="6851a0e6-4c30-4aa6-d7cd-6bb7b54e35fc" colab={"base_uri": "https://localhost:8080/", "height": 439}
X = df.drop(['status'], axis=1)
X
# + id="KX6bscyuWPVu" colab_type="code" outputId="1ea1ee79-a313-4de8-e533-ee0f0df52720" colab={"base_uri": "https://localhost:8080/", "height": 34}
from sklearn.model_selection import train_test_split
#Split our dataset: the train set is 80%, the test set is 20%
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
print(X_train.shape, X_test.shape)
# + id="yMgXQEMiXzUE" colab_type="code" outputId="e182e45c-316b-4a64-a298-b5e679a11bd0" colab={"base_uri": "https://localhost:8080/", "height": 122}
# !pip install sklearn
# + id="SZTRMoDWX7GB" colab_type="code" outputId="6ff7f681-4895-4779-c37c-80f2a002e3a9" colab={"base_uri": "https://localhost:8080/", "height": 292}
# !pip install category_encoders
# + id="7gN_G-7oWPIF" colab_type="code" colab={}
# encode categorical variables with one-hot encoding to numeric
encoder = ce.OneHotEncoder(cols=['sex','on_thyroxine','query_on_thyroxine','on_antithyroid_medication','thyroid_surgery','query_hypothyroid','query_hyperthyroid','pregnant','sick','tumor','lithium','goitre','TSH_measured','T3_measured','TT4_measured','T4U_measured','FTI_measured','TBG_measured','TBG'])
X_train = encoder.fit_transform(X_train)
X_test = encoder.transform(X_test)
# + id="ub3X_Be9ZAUO" colab_type="code" outputId="5640408a-2ebb-4934-d460-95cfe14b74a4" colab={"base_uri": "https://localhost:8080/", "height": 136}
print(X_train.head(4))
# + id="HyflLGDMfBWn" colab_type="code" outputId="7ad17cac-ca37-414f-f96e-cca7ac157198" colab={"base_uri": "https://localhost:8080/", "height": 34}
#Confirm there is no nan in train dataset.
np.any(np.isnan(X_train))
# + id="pTf6ioW6fs_N" colab_type="code" outputId="c49656b1-f0d3-47fa-9f1c-5be09cd7aca0" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Confirm there is no nan in test dataset
np.any(np.isnan(X_test))
# + [markdown] id="UQHhiK3IZmUJ" colab_type="text"
# ###**Feature scaling**
#
# Feature scaling is a method used to normalize the range of independent variables or features of data. In data processing, it is also known as data normalization.
#
# We need to normalize our independent variables. We use robust scaler to do this.
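# RobustScaler centers each feature on its median and divides by the interquartile range (IQR), which makes it much less sensitive to outliers than min-max or standard scaling. A minimal hand-rolled sketch of that default transformation (illustrative only; the notebook itself uses sklearn's `RobustScaler`):

```python
import numpy as np

def robust_scale(x):
    # (x - median) / IQR, which is what RobustScaler does by default
    median = np.median(x)
    q1, q3 = np.percentile(x, [25, 75])
    return (x - median) / (q3 - q1)

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # 100 is an outlier
scaled = robust_scale(x)
print(scaled)  # the median maps to 0; the bulk of the data lands in a small range
```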
# + id="_OGbx_N1ZkVZ" colab_type="code" colab={}
cols = X_train.columns
scaler = RobustScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
X_train = pd.DataFrame(X_train, columns=[cols])
X_test = pd.DataFrame(X_test, columns=[cols])
# + [markdown] id="rcJLjkRXbljM" colab_type="text"
# ##**Random Forest Classifier model with default parameters- 10 decision trees**
# + id="aVGDHL7nb4B6" colab_type="code" outputId="1c2ab032-e922-40be-c7dd-e8a5add78e71" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Initiate the RandomForestClassifier
rf = RandomForestClassifier(random_state=0)
# fit the model
rf.fit(X_train, y_train)
# Predict the Test set results
y_pred = rf.predict(X_test)
# Check accuracy score
from sklearn.metrics import accuracy_score
print('Model accuracy score with 10 decision-trees : {0:0.4f}'. format(accuracy_score(y_test, y_pred)))
# + id="d5eQLiPrEvVt" colab_type="code" outputId="470fcb29-45a1-43b3-aff3-137e6dedc6be" colab={"base_uri": "https://localhost:8080/", "height": 34}
#Check the error rate of the model.
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
# + id="95CSTRObFpRm" colab_type="code" outputId="48dac86d-c908-4989-8da7-b98a4f57b15f" colab={"base_uri": "https://localhost:8080/", "height": 51}
#Compute the confusion matrix to understand the actual versus predicted variables.
from sklearn.metrics import confusion_matrix
confusion = confusion_matrix(y_test,y_pred)
confusion
# + [markdown] id="QQXyBrcXHUjZ" colab_type="text"
#
#
#
# * **Findings**
#
# *   Model accuracy was 98% with an error rate of 0.14. The model correctly predicted 602 negative and 17 hypothyroid cases.
# *   To improve the model performance, we increase the number of decision trees to 100, increase the max depth and set the minimum sample split to 20.
#
#
# + [markdown] id="vIOufYtxgE60" colab_type="text"
# ##**Random forest classifier using 100 decision trees**
# + id="XIKfKLmce6Wo" colab_type="code" outputId="34b93702-0b14-4e9e-c638-2447009c56dc" colab={"base_uri": "https://localhost:8080/", "height": 153}
# Run the classifier with n_estimators = 100
rf1 = RandomForestClassifier(n_estimators=100, random_state=0,max_depth=5, min_samples_split = 20)
# fit the model to the training set
rf1.fit(X_train, y_train)
# + id="dw0ZiMO9Ds9n" colab_type="code" colab={}
# Predict on the test set results
y_pred1 = rf1.predict(X_test)
# + id="P8r_4d_QA91W" colab_type="code" outputId="373ad495-f84d-4e25-badb-ea7552d5ea5d" colab={"base_uri": "https://localhost:8080/", "height": 297}
# Create a comparison frame between the actual and predicted target variable
comparison_frame = pd.DataFrame({'Actual': y_test.flatten(), 'Predicted': y_pred1.flatten()})
comparison_frame.describe()
# + id="g5O45vwJAqyb" colab_type="code" outputId="f30af848-95c8-408f-cdd7-5ebc1ed0826d" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Check accuracy score
print('Model accuracy score with 100 decision-trees : {0:0.4f}'. format(accuracy_score(y_test, y_pred1)))
# + id="GmYTm9JuAtpQ" colab_type="code" outputId="3312428f-ad41-4259-e27d-0c22b30c0477" colab={"base_uri": "https://localhost:8080/", "height": 34}
#Check the error rate using root mean squared error
from sklearn import metrics
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred1)))
# + [markdown] id="dWOtGlBfdZak" colab_type="text"
# The error rate is very low (0.14), which shows the model is reasonably good.
# + id="CJZjrdyiHCGM" colab_type="code" outputId="65ae71c2-b3d3-4ec4-80cb-086cf1d9264e" colab={"base_uri": "https://localhost:8080/", "height": 51}
# Calculate a confusion matrix to identify what patients were predicted to be negative or have hypothyroid
confusion = confusion_matrix(y_test,y_pred1)
confusion
# + [markdown] id="CkwbVJi6BRWt" colab_type="text"
#
#
#
#
# * **Findings**
#
#
# * The model accuracy still remained roughly the same (97%) even after increasing the number of decision trees. The error rate is still very small: 0.14.
#
# * The model predicted 602 patients were negative, and they were indeed negative.
# * The model predicted 17 patients had hypothyroid, and they indeed did.
#
#
# * This means the accuracy of the model is not affected by the change in the number of decision trees.
#
# * However, we can still try to improve on the model using gradient boosting and see how it performs.
#
#
#
#
#
# + [markdown] id="fHNGcybXNJhd" colab_type="text"
# ##**Gradient Boosting Classifier**
# + id="wO_AcnLGBJrb" colab_type="code" outputId="dd0b2d0d-0e2b-4842-b776-d0d7c47bf41e" colab={"base_uri": "https://localhost:8080/", "height": 187}
#Initiate the gradient boosting classifier
gradient = GradientBoostingClassifier(learning_rate =0.1,n_estimators=100,max_depth=3,min_samples_split=2) # defining my classifier as gradient
#fit the train dataset in the classifier
gradient.fit(X_train,y_train)
# + id="BSR5V4g-Ngiz" colab_type="code" outputId="6cc16dbc-0625-42c7-b90a-33cbeb9237c2" colab={"base_uri": "https://localhost:8080/", "height": 510}
#Making a prediction
y_pred_g = gradient.predict(X_test)
y_pred_g
# + id="LlLwWvn9NmpR" colab_type="code" outputId="dac91dd5-c3f5-49de-cdf1-ccb4f2ed83cf" colab={"base_uri": "https://localhost:8080/", "height": 34}
#Check the accuracy score of the gradient model
print("gradient_Accuracy score is :",metrics.accuracy_score(y_test, y_pred_g))
# + id="p0QCuPYLOn-o" colab_type="code" outputId="8b9ef89a-d4e9-434d-8ca4-7d5fb671cfed" colab={"base_uri": "https://localhost:8080/", "height": 51}
# Calculate a confusion matrix to identify what patients were predicted to be negative or have hypothyroid
confusion = confusion_matrix(y_test,y_pred_g)
confusion
# + [markdown] id="I0FOGzQrN8RI" colab_type="text"
#
#
# * **Findings**
# * The accuracy score of the model increased to 98%.
#
# * The model predicted 600 patients were negative, and they were indeed negative.
# * The model predicted 23 patients had hypothyroid, and they indeed did.
# * In this case, we would use the gradient boosting classifier rather than random forest, as its performance increased.
#
#
#
#
# + [markdown] id="wUBHc0hwPdFu" colab_type="text"
# ##**SVM(Support Vector Machine)**
# + id="PvcPVYfpN5jR" colab_type="code" colab={}
#For this, we create one SVM before parameter tuning and one after parameter tuning, since we are solving a classification problem.
# SVM before parameter tuning (linear kernel)
#svm = SVC(kernel = 'linear',C=1.0,gamma='auto',random_state=2)
#SVM after parameter tuning: a non-linear (sigmoid) kernel maps the data into a higher-dimensional space
clf = SVC(kernel = 'sigmoid',C=1.0,gamma='auto',random_state=0)
# + id="p8ffngdUQPgv" colab_type="code" outputId="5bfc8d3a-aa8b-4b92-cae0-52c59152ab4f" colab={"base_uri": "https://localhost:8080/", "height": 85}
# fitting the train into the model
#svm.fit(X_train,y_train)
#svm_1.fit(X_train,y_train)
clf.fit(X_train,y_train)
# + id="Ygw-MTpKQTh_" colab_type="code" colab={}
# Now that we have trained our model, let's test how well it can predict if a patient is negative or positive for hypothyroid
#Making predictions
#y_pred_svc = svm.predict(X_test)
#Making predictions with parameter tuning
y_pred1 = clf.predict(X_test)
# + id="U_eTOucKQqW2" colab_type="code" outputId="ec6b0355-ce4c-4088-ea09-fd9ad6d14a9a" colab={"base_uri": "https://localhost:8080/", "height": 51}
#Accuracy of the model before parameter tuning (commented out: the linear-kernel model above was not fitted)
#print("Accuracy with linear kernel:", metrics.accuracy_score(y_test, y_pred_svc))
#Accuracy score using the sigmoid kernel
print(accuracy_score(y_test, y_pred1))
# + [markdown] id="2K0oZu_XREs3" colab_type="text"
#
#
#
# * **Findings**
#
#
# * We did not plot the hyperplane because we are dealing with many features, which makes it very hard to visualize.
#
# * The accuracy score with the linear kernel was 97.6%; after tuning the parameters with the sigmoid kernel, the accuracy dropped to 96%.
#
#
#
#
# + [markdown] id="coDQGwMjVc0V" colab_type="text"
# ##**Further questions**
#
#
# * Given that our dataset had many patients with negative hypothyroid results, did this imbalance push the model accuracy that high, even with the more advanced models?
#
#
#
#
#
# + [markdown] id="_PbldDYBWSmG" colab_type="text"
# ##**Challenges**
#
#
# * Despite using different classification models, the differences in accuracy between the models were minimal.
#
#
# * It was a challenge picking the best model for prediction since they all had almost the same accuracy score and very low error rate.
#
#
#
#
#
# + [markdown] id="MS6DQw6hXB51" colab_type="text"
# ##**Future prospects**
#
#
# * The dataset should be evenly distributed in terms of status to avoid model overfitting.
#
#
| _Moringa_Data_Science_Core_W8_Independent_Project_2020_07_Leah_Mbugua (1).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:tf_2_00]
# language: python
# name: conda-env-tf_2_00-py
# ---
# Linear algebra with numpy
import numpy as np
# Multiply two matrices
A = np.array([[1, 2, 3, 4],[5, 6, 7, 8]])
B = np.array([[1, 2, 3],[4, 5, 6], [7, 8, 9], [10, 11, 12]])
print(" Shape of A:", A.shape) # 2 rows, 4 columns
print(A)
print(" Shape of B:", B.shape) # 4 rows, 3 columns
print(B)
# The number of columns of A equals the number of rows of B (4), so we can multiply them
C = np.dot(A, B)
print(" Shape of C:", C.shape) # 2 rows, 3 columns
print(C)
# Transpose matrix
D = C.T
print(" Shape of D:", D.shape) # 3 rows, 2 columns
print(D)
# The transpose of a product equals the product of the transposes in reverse order
print(np.dot(A,B).T)
print("\n")
print(np.dot(B.T, A.T))
# Linear algebra with tensorflow
import tensorflow as tf
print(tf.__version__)
a = tf.constant([1, 2, 3, 4, 5, 6, 7, 8], shape=[2, 4])
b = tf.constant([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], shape=[4, 3])
print(a)
print("\n")
print(b)
c = tf.tensordot(a, b, axes=1)
print(c)
d = tf.transpose(c)
print(d)
| jupyter-notebook/Algebra Linear.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
rootf = FOLDER_NAME ## Here insert your rbm-mhc folder path. e.g. '/home/user/rbm-mhc' ##
## imports ##
import sys, os, pickle
sys.path.append(rootf + '/PGM/source/')
sys.path.append(rootf + '/PGM/utilities/')
sys.path.append(rootf + '/Align_utils/')
from common_imports import set_num_threads
set_num_threads(1)
import numpy as np
import pandas as pd
import rbm,utilities
import Proteins_utils, RBM_utils, utilities,sequence_logo,plots_utils
import importlib
import butils
import sequence_logo
import matplotlib.pyplot as plt
# -
# Here, we read the table where peptides are assigned an HLA-I type by RBM-MHC, and we
# visualize the motifs corresponding to peptides predicted by RBM-MHC to be of the same
# HLA binding specificity, as in Fig. 2A.
# +
names = 'sample_file' ## here insert the sample file name ##
nameo = 'string_output' ## here insert the string used to name output files ##
out_fold = rootf + '/output_folder'
hlas = ['HLA-A*01:01', 'HLA-A*03:01', 'HLA-B*07:02', 'HLA-B*08:01', 'HLA-C*07:01', 'HLA-C*07:02']
nc = len(hlas)
filename_motif = out_fold + '/' + names + '_classification_' + nameo + '.txt'
table1 = pd.read_csv(filename_motif , sep='\t')
table2 = table1
## This line allows to exclude data with labels from IEDB ##
# table2 = table1[table1['Training']==0]
CC = 21 # number of symbols: amino acids + gap
SA = 9 # length of peptides
pseudocount = 0.000001
fig, axess = plt.subplots(6,1)
fig.set_figheight(7)
fig.set_figwidth(4)
fig.subplots_adjust(wspace=0.5, hspace = 1.1, bottom = 0.2)
st=12
sl=st
seqs_all=[]
for h in range(nc):
seqs_allh = list(table1[table1['RBM-MHC label'] == h]['Data Core'].values)
Nrbm=len(seqs_allh)
seqs_all.append(seqs_allh)
sel_n = butils.convert_number(seqs_allh)
pwm_rbm = utilities.average(sel_n, c = CC);
for i in range(SA):
for a in range(CC):
if pwm_rbm[i,a] < pseudocount:
pwm_rbm[i,a] = pseudocount
ax1= axess[h]
titlep = hlas[h] + ' motif (# seqs. = ' + str(Nrbm) + ')'
sequence_logo.Sequence_logo(pwm_rbm, ax = ax1, figsize=(12,3.5), ylabel= 'bits', title = titlep, show=True, ticks_every=1, ticks_labels_size=sl, title_size=st)
plt.tight_layout()
namefig = out_fold + '/motifs_reconstructed.png'
fig.savefig(namefig,dpi=300)
plt.close()
for cl in range(nc):
name_fi = out_fold + '/peptides_'+ nameo + '_' + str(hlas[cl])
with open(name_fi + '.txt', 'w') as out_f:
for r in range(len(seqs_all[cl])):
out_f.write(seqs_all[cl][r] + '\n')
# -
# Here, we upload the trained RBM model, we visualize its weights and
# examples of inputs to hidden units, similarly to Fig. 2A and Supplementary Figs. 2-3.
# +
name_r = out_fold + '/model_' + nameo + '.data'
RBM = RBM_utils.loadRBM(name_r)
hu = RBM.n_h ## hidden units of the model ##
## figure with weights ##
fig, axess = plt.subplots(10,1)
fig.set_figheight(11)
fig.set_figwidth(4)
fig.subplots_adjust(wspace=0.5, hspace = 1., bottom = 0.2)
st=12
sl=st
for h in range(hu):
ax1= axess[h]
titlep = ''
sequence_logo.Sequence_logo(RBM.weights[h], ax = ax1, figsize=(12,3.5), ylabel = r'W_' + str(h + 1), title=' ', ticks_every = 1,ticks_labels_size=sl,title_size=st)
plt.tight_layout()
namefig = out_fold + '/weights.png'
fig.savefig(namefig, bbox_inches = 'tight', pad_inches = 0, dpi = 300)
plt.close()
colors = ['r','orange', 'deepskyblue', 'blue', 'springgreen', 'forestgreen']
s2 = 18
sl = 12
list_ix=[0,1] ## example of hidden units, but any could be selected ##
list_iy=[2,3] ## example of hidden units, but any could be selected ##
s1 = 1.0
for ix in list_ix:
for iy in list_iy:
fig, ax = plt.subplots()
fig.set_figwidth(5)
fig.set_figheight(5)
for h in range(nc):
sel_n = butils.convert_number(seqs_all[h])
In = RBM.vlayer.compute_output(sel_n, RBM.weights)
ax.scatter(In[:,ix],In[:,iy], c = colors[h], s = s1, label = hlas[h])
ax.set_xlabel('Input to h_' + str(ix+1), fontsize=s2)
ax.set_ylabel('Input to h_' + str(iy+1), fontsize=s2)
ax.tick_params(axis='both', which='major', labelsize = s2)
plt.tight_layout()
plt.legend(fontsize=10,markerscale=2,frameon=False, loc ='upper right')
pathfig = out_fold + '/Input_' + str(ix+1) + '_vs_Input_' + str(iy+1) + '.png'
fig.savefig(pathfig, bbox_inches = 'tight', pad_inches = 0, dpi = 300)
plt.close()
# -
# Here, we read the table of scored peptides and we rank them according to the
# RBM-MHC presentation score
# +
namep = 'peptides-to-score' ## here insert the name of the file with peptides to score ##
fn = out_fold + '/scoring/' + namep + '_scored_' + nameo + '.txt'
scores_out = pd.read_csv(fn, sep = '\t')
llt = scores_out['Total RBM-MHC score'].values
pred = scores_out['Pred. HLA class'].values
seqs = list(scores_out['Sequences'].values)
length = scores_out['Length'].values
Scores_list = zip(seqs, llt, pred, length)
Srank = sorted(sorted(Scores_list, key = lambda x : x[3]), key = lambda x : x[1], reverse = True)
data_ranked = {'Peptide':list(np.array(Srank)[:,0]),
'RBM-MHC score': list(np.array(Srank)[:,1]),
'Predicted HLA-I': list(np.array(Srank)[:,2]),
'Peptide Length': list(np.array(Srank)[:,3])
}
pd.DataFrame (data_ranked, columns = ['Peptide','RBM-MHC score','Predicted HLA-I', 'Peptide Length'])
# -
| output_folder/.ipynb_checkpoints/Example_notebook-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Picking a Cartola Lineup with Markov Chains and Linear Programming
# Author: [<NAME>](https://jcalvesoliveira.github.io/)
# ### Motivation
#
# My team got relegated, so we will probably not have Cartola FC for Série B in 2020.
# ### Summary
#
# ___The team is going through a bad phase!___
#
# Football teams go through different phases, and the impression is always that previous results influence the matches that follow. Most of the models we use assume the data are i.i.d. (independent and identically distributed); if the previous match really does influence the next result, that assumption does not hold, and a model built for sequential data may work better.
# ### Technique
#
# [Markov Chains](http://www.columbia.edu/~jwp2128/Teaching/W4721/Spring2017/slides/lecture_4-11-17.pdf)
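# As a toy illustration of the technique (made-up numbers, not Cartola data): the stationary distribution of a Markov chain is the eigenvector of the transposed transition matrix associated with eigenvalue 1 — the same computation this notebook performs on the player matrix later on:

```python
import numpy as np

# Toy 2-state transition matrix; each row sums to 1
M = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# The stationary distribution pi satisfies pi @ M = pi,
# i.e. pi is the eigenvector of M.T with eigenvalue 1
evals, evecs = np.linalg.eig(M.T)
pi = evecs[:, np.isclose(evals, 1)][:, 0].real
pi = pi / pi.sum()
print(pi)  # state 0 is visited more often in the long run
```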
# ### Credits:
# * All datasets used in this study are part of the work led by [Henrique Gomide](https://twitter.com/hpgomide) and are available in this [repository](https://github.com/henriquepgomide/caRtola).
# * Several of the concepts applied in this study are based on the work of Professor [<NAME>](http://www.columbia.edu/~jwp2128/) of Columbia University on ranking college basketball teams with Markov chains. The full approach is available [here](http://www.columbia.edu/~jwp2128/Teaching/W4721/Spring2017/slides/lecture_4-11-17.pdf).
# * The linear optimization follows the paper by <NAME> presented at the International Conference on Sports Engineering ICSE-2017, available at this [link](https://arxiv.org/ftp/arxiv/papers/1909/1909.12938.pdf).
# # Building the list of all players
# To build our transition matrix, we need the set of players to be unique. For that, we use the player-averages file, just to obtain every player who played during the year and their position.
import pandas as pd
medias = pd.read_csv(r'..\..\data\2019\2019-medias-jogadores.csv')
medias.head()
medias.shape
medias.columns
# The number of unique players matches the length of the DataFrame.
qtd_atletas = len(medias['player_id'].unique())
print(qtd_atletas)
# For this study, we analyze each position used in Cartola separately, so we create a list with all positions present in the averages file.
posicoes = medias['player_position'].unique()
# To make it easier to locate each player in the matrices we will build, we create an index based on the ranking of "player_id". Since we will have one matrix per position, we create a ranking per position.
medias['Rank'] = None
for posicao in posicoes:
rank = medias[medias['player_position'] == posicao].player_id.rank(method='min')
rank = rank - 1
medias.iloc[rank.index,-1] = rank
colunas_unicos = ['Rank','player_id','player_position']
atletas = medias[colunas_unicos].drop_duplicates()
atletas.head()
atletas.shape
# # Matches
# The match results will feed the transition matrix, so we use the file with every 2019 match and select train and test sets further on.
partidas = pd.read_csv(r'..\..\data\2019\2019_partidas.csv')
# One of the hypotheses tested in this study is the impact of the team's goal count on a player's performance. To make this data easier to use, we normalize the goal columns of the home and away teams.
partidas['home_score_norm'] = partidas['home_score'] / max(partidas['home_score'])
partidas['away_score_norm'] = partidas['away_score'] / max(partidas['away_score'])
partidas.head()
partidas.shape
# ### Round data
# Now we import the performance data of every player in every round of 2019, keeping a round-id column so we can iterate over it.
df_partidas = pd.DataFrame()
for rodada in range(1,39):
df_rodada = pd.read_csv(r'..\..\data\2019\rodada-{}.csv'.format(rodada))
df_rodada['round'] = rodada
df_partidas =df_partidas.append(df_rodada,sort=False)
df_partidas.shape
# For this study we do not analyze the performance of coaches.
df_partidas = df_partidas[df_partidas['atletas.posicao_id'] != 'tec']
# To place each player at a specific position in the matrix, we bring the ranking information we created into the matches DataFrame.
df_partidas = df_partidas.set_index('atletas.atleta_id').join(atletas.set_index('player_id'))
df_partidas.head()
df_partidas['Rank']
# ### Removing unregistered players
# Since our matrix is built from the athletes table we created, if a player shows up in a round but is not in that table, we discard that player.
df_partidas.drop(df_partidas[df_partidas['Rank'].isnull()].index, inplace=True)
df_partidas['Rank'] = df_partidas['Rank'].astype(int)
# # State matrix M
# For each position (forward, defender, etc.), we initialize a zero matrix of size *d x d*, where *d = number of unique players*.
#
# ### Example with forwards
# For the rest of the study we will analyze the results for forwards.
import numpy as np
posicao = 'ata'
qtd_atletas = len(atletas[atletas.player_position == posicao])
M = np.zeros((qtd_atletas,qtd_atletas))
M
M.shape
# # Updating the matrix
# 1. Select a match.
# 2. Select the players who played for the home team.
# 3. Select the players who played for the away team.
# 4. Evaluate each home-team player against each away-team player.
# 5. Update the transition matrix according to the rule*:
#
# For a given match, ***j1*** is the index of the evaluated home-team player and ***j2*** is the index of the evaluated away-team player.
# $$\hat{M}_{j1j1} \ \leftarrow \ \hat{M}_{j1j1} \ +\ gols\_time_{j1} \ +\ \frac{pontos_{j1}}{pontos_{j1} +pontos_{j2}}$$
#
# $$\hat{M}_{j2j2} \ \leftarrow \ \hat{M}_{j2j2} \ +\ gols\_time_{j2} \ +\ \frac{pontos_{j2}}{pontos_{j1} +pontos_{j2}}$$
#
# $$\hat{M}_{j1j2} \ \leftarrow \ \hat{M}_{j1j2} \ +\ gols\_time_{j2} \ +\ \frac{pontos_{j2}}{pontos_{j1} +pontos_{j2}}$$
#
# $$\hat{M}_{j2j1} \ \leftarrow \ \hat{M}_{j2j1} \ +\ gols\_time_{j1} \ +\ \frac{pontos_{j1}}{pontos_{j1} +pontos_{j2}}$$
#
# _*Rules for forwards_
#
df_partidas_posicao = df_partidas[df_partidas['atletas.posicao_id'] == posicao].copy()
for partida in range(len(partidas)-1): # We leave the last match out for testing
df_rodada = df_partidas_posicao[df_partidas_posicao['round'] == partidas['round'][partida]]
jogadores_casa = df_rodada[df_rodada['atletas.clube_id'] == partidas['home_team'][partida]]
jogadores_visitantes = df_rodada[df_rodada['atletas.clube_id'] == partidas['away_team'][partida]]
for j_casa in range(len(jogadores_casa)):
for j_visitante in range(len(jogadores_visitantes)):
score_casa = 0
score_visitante = 0
pontos_j_casa = jogadores_casa['atletas.pontos_num'].iloc[j_casa]
pontos_j_visitante = jogadores_visitantes['atletas.pontos_num'].iloc[j_visitante]
soma = pontos_j_casa + pontos_j_visitante
if soma != 0:
score_casa = pontos_j_casa / soma
score_visitante = pontos_j_visitante / soma
j1 = jogadores_casa['Rank'].iloc[j_casa]
j2 = jogadores_visitantes['Rank'].iloc[j_visitante]
M[j1,j1] = M[j1,j1] + partidas['home_score_norm'][partida] + score_casa
M[j1,j2] = M[j1,j2] + partidas['away_score_norm'][partida] + score_visitante
M[j2,j1] = M[j2,j1] + partidas['home_score_norm'][partida] + score_casa
M[j2,j2] = M[j2,j2] + partidas['away_score_norm'][partida] + score_visitante
M
# After processing every match of every player, we normalize ***M*** so that all columns sum to 1.
M = M / np.sum(M,axis=1)
# # Stationary distribution
# Now that our transition matrix is ready, we can compute the stationary distribution.
# +
evals, evecs = np.linalg.eig(M.T)
evec1 = evecs[:,np.isclose(evals, 1)]
evec1 = evec1[:,0]
stationary = evec1 / evec1.sum()
stationary = stationary.real
# -
# In the end we get an array of size ***d***; remember that position ***i*** here corresponds to position ***i*** in the id ranking we created at the beginning of the study.
#
# Note that the probabilities are very low, so it would be very hard to select a player from these probabilities alone.
stationary
# We can check, for example, who had a probability greater than 1.5%.
#
# The fact that players like Gabriel and <NAME> appear among the top 5 may indicate that the rule combining player points with the team's goal count gives these players a higher weight, which is a good thing. :)
medias[medias.player_position == posicao][list(stationary > 0.015)]
# ## Computing the distribution for every position
# For defensive positions, we replace the score tied to the team's goals with a binary clean-sheet variable: if the team conceded no goals in the match, the score is 1; if the defense was breached, it is 0.
#
# For midfielders we combine the defense and attack rules.
# +
stationaries = {}
for posicao in posicoes:
qtd_atletas = len(atletas[atletas.player_position == posicao])
M = np.zeros((qtd_atletas,qtd_atletas))
df_partidas_posicao = df_partidas[df_partidas['atletas.posicao_id'] == posicao].copy()
    for partida in range(len(partidas)-1): # We leave the last match out for testing
df_rodada = df_partidas_posicao[df_partidas_posicao['round'] == partidas['round'][partida]]
jogadores_casa = df_rodada[df_rodada['atletas.clube_id'] == partidas['home_team'][partida]]
jogadores_visitantes = df_rodada[df_rodada['atletas.clube_id'] == partidas['away_team'][partida]]
for j_casa in range(len(jogadores_casa)):
for j_visitante in range(len(jogadores_visitantes)):
score_casa = 0
score_visitante = 0
pontos_j_casa = jogadores_casa['atletas.pontos_num'].iloc[j_casa]
pontos_j_visitante = jogadores_visitantes['atletas.pontos_num'].iloc[j_visitante]
soma = pontos_j_casa + pontos_j_visitante
if soma != 0:
score_casa = pontos_j_casa / soma
score_visitante = pontos_j_visitante / soma
def_n_vazada_casa = 0 if partidas['away_score_norm'][partida] > 0 else 1
def_n_vazada_visitante = 0 if partidas['home_score_norm'][partida] > 0 else 1
if posicao == 'ata':
pontos_casa = partidas['home_score_norm'][partida] + score_casa
pontos_visitante = partidas['away_score_norm'][partida] + score_visitante
elif posicao == 'mei':
pontos_casa = partidas['home_score_norm'][partida] + def_n_vazada_casa + score_casa
pontos_visitante = partidas['away_score_norm'][partida] + def_n_vazada_visitante + score_visitante
else:
pontos_casa = def_n_vazada_casa + score_casa
pontos_visitante = def_n_vazada_visitante + score_visitante
j1 = jogadores_casa['Rank'].iloc[j_casa]
j2 = jogadores_visitantes['Rank'].iloc[j_visitante]
M[j1,j1] = M[j1,j1] + pontos_casa
M[j1,j2] = M[j1,j2] + pontos_visitante
M[j2,j1] = M[j2,j1] + pontos_casa
M[j2,j2] = M[j2,j2] + pontos_visitante
    M = M / np.sum(M,axis=1,keepdims=True)  # row-normalize so each row sums to 1
evals, evecs = np.linalg.eig(M.T)
evec1 = evecs[:,np.isclose(evals, 1)]
evec1 = evec1[:,0]
stationary = evec1 / evec1.sum()
stationary = stationary.real
stationaries[posicao] = stationary
# -
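In isolation, the eigenvector step above works like this. This is a minimal sketch on a hypothetical 3-state transition matrix (made-up values, standing in for the player-vs-player matrix `M`):

```python
import numpy as np

# Toy row-stochastic transition matrix: each row sums to 1.
M = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

evals, evecs = np.linalg.eig(M.T)               # left eigenvectors of M
evec1 = evecs[:, np.isclose(evals, 1)][:, 0]    # eigenvector for eigenvalue 1
stationary = (evec1 / evec1.sum()).real         # normalize into a probability vector

print(stationary)   # non-negative, sums to 1, and is unchanged by one more step of M
```

The resulting vector is the long-run share of "visits" each state receives, which is exactly how the player probabilities are ranked here.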
# # Picking the lineup for the round
# When computing the distribution we left the last round of 2019 out, so we can now use it to test our model.
rodada = 38
# First, let's create a DataFrame with only this round's data and attach the probabilities we found for each player.
df_rodada = df_partidas[df_partidas['round'] == rodada].copy()
df_rodada['Rank'] = df_rodada['Rank'].astype(int)
df_rodada['probs'] = 0
for jogador in range(len(df_rodada)):
posicao = df_rodada.iloc[jogador]['player_position']
rank = df_rodada.iloc[jogador]['Rank']
if rank:
df_rodada.iloc[jogador,-1] = stationaries[posicao][rank]
# We will also use the status field and only work with players whose status is ***Provável*** (probable starter).
df_rodada = df_rodada[df_rodada['atletas.status_id'] == 'Provável'].copy()
df_rodada.head()
# # Optimizing the lineup
# Now that we have each player's probability, we need to generate the best possible lineup under the constraints on the amount of "cartoletas" (the game's currency) and the number of players per position.
#
# For that we use ***Linear Programming*** to maximize the sum of the probabilities, subject to the chosen formation and the cartoleta budget.
# In the examples below I use a ***4-3-3*** formation and a total of ***140 cartoletas***.
#
# +
formacao = {
'ata': 3,
'mei': 3,
'lat': 2,
'zag': 2,
'gol':1
}
cartoletas = 140
# -
# ## Linear Programming
# We can state this problem with the following mathematical notation.
# * _zi_, probability of player _i_
# * _ci_, cost of player _i_
# * _yi_, binary value indicating whether player _i_ is picked
# * _n_, total number of players
# * _ai_, binary value indicating whether player _i_ is a forward
# * _mi_, binary value indicating whether player _i_ is a midfielder
# * _li_, binary value indicating whether player _i_ is a full-back
# * _di_, binary value indicating whether player _i_ is a center-back
# * _gi_, binary value indicating whether player _i_ is a goalkeeper
#
#
# $$ Max. \sum^n_{i=1}{z}_{i} * {y}_{i}$$
# Subject to:
# $$ \sum^n_{i=1}{c}_{i} * {y}_{i} \leq 140 $$
# $$ \sum^n_{i=1}{a}_{i} * {y}_{i} = 3 $$
# $$ \sum^n_{i=1}{m}_{i} * {y}_{i} = 3 $$
# $$ \sum^n_{i=1}{l}_{i} * {y}_{i} = 2 $$
# $$ \sum^n_{i=1}{d}_{i} * {y}_{i} = 2 $$
# $$ \sum^n_{i=1}{g}_{i} * {y}_{i} = 1 $$
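The formulation above can be sketched in PuLP on a toy instance before scaling up. The three players, their probabilities, prices, and the budget of 12 below are all made up for illustration:

```python
from pulp import LpMaximize, LpProblem, LpVariable, lpSum

# Toy instance: three hypothetical players and a budget of 12 "cartoletas".
z = {'a': 0.5, 'b': 0.4, 'c': 0.3}   # probability of each player
c = {'a': 10,  'b': 8,   'c': 3}     # cost of each player

prob = LpProblem("toy_lineup", LpMaximize)
y = LpVariable.dicts("y", z, cat='Binary')
prob += lpSum(z[i] * y[i] for i in y)            # objective: total probability
prob += lpSum(c[i] * y[i] for i in y) <= 12      # budget constraint
prob.solve()

picked = sorted(i for i in y if y[i].varValue == 1)
print(picked)   # ['b', 'c']: together worth 0.7, beating 'a' alone (0.5) within budget
```

The full problem below has the same shape, just with one binary variable per athlete and one equality constraint per position.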
# The variables in the formulation correspond to position, cost, and probability, so let's build dictionaries with each of these pieces of information keyed by player name to make the problem easier to assemble.
# +
df_rodada.set_index('atletas.slug',inplace=True)
z = df_rodada['probs'].to_dict()
c = df_rodada['atletas.preco_num'].to_dict()
dummies_posicao = pd.get_dummies(df_rodada['atletas.posicao_id'])
dummies_posicao = dummies_posicao.to_dict()
# -
# First we initialize the optimization problem and define the objective function.
# +
from pulp import LpMaximize, LpProblem, lpSum, LpVariable
prob = LpProblem("Melhor_Escalacao", LpMaximize)
y = LpVariable.dicts("Atl",df_rodada.index,0,1,cat='Binary')
prob += lpSum([z[i] * y[i] for i in y])
# -
# Now we add all the constraints and solve.
prob += lpSum([c[i] * y[i] for i in y]) <= cartoletas, "Limite de Cartoletas"
prob += lpSum([dummies_posicao['ata'][i] * y[i] for i in y]) == formacao['ata'], "Quantidade Atacantes"
prob += lpSum([dummies_posicao['lat'][i] * y[i] for i in y]) == formacao['lat'], "Quantidade Laterais"
prob += lpSum([dummies_posicao['mei'][i] * y[i] for i in y]) == formacao['mei'], "Quantidade Meio"
prob += lpSum([dummies_posicao['zag'][i] * y[i] for i in y]) == formacao['zag'], "Quantidade Zagueiros"
prob += lpSum([dummies_posicao['gol'][i] * y[i] for i in y]) == formacao['gol'], "Quantidade Goleiro"
prob.solve()
# The picked players, i.e., the ones that maximize the total probability within the constraints, end up with the value ***1*** in their athlete variable.
escalados = []
for v in prob.variables():
if v.varValue == 1:
atleta = v.name.replace('Atl_','').replace('_','-')
escalados.append(atleta)
print(atleta, "=", v.varValue)
colunas = ['atletas.posicao_id','atletas.clube.id.full.name','atletas.pontos_num','atletas.preco_num']
df_rodada.loc[escalados][colunas]
# We can check how many points this lineup would have scored in total in the last round.
df_rodada.loc[escalados]['atletas.pontos_num'].sum()
# And the total cost as well.
df_rodada.loc[escalados]['atletas.preco_num'].sum()
# # Adding Hunches
# Now the cherry on top...
#
# So far, our model compares players against their opponents, position by position, simulating, in a way, which player we should pick for each slot given the sequence of matches he has played.
#
# Looking at the stationary distribution, we can see that the probabilities are very similar to one another, which makes it hard to pick players with much confidence. That makes sense: football is full of uncertainty. Still, this statistic can help us build the lineup automatically.
#
# We can then use the probability produced by the Markov chain to select a team based on a few hunches we have about the matches. Let's build a simple system in which we distribute 10 points across the factors we consider important. For example, I consider playing at home an important factor, and I also like to bet on teams that, besides playing at home, face opponents sitting at the bottom of the table. So I gave the following scores for the last round:
#
#
# * 5 points - playing at home
# * 3 points - Internacional
# * 2 points - Fortaleza
# +
jogar_em_casa = 5
times = {
'Internacional':3,
'Fortaleza':2
}
# -
# Now we boost the probabilities of the players that match these rules, multiplying their current value by the share of points we assigned, for example:
#
# * Players playing at home = Probability * 150%
# * Internacional players = Probability * 130%
# * Fortaleza players = Probability * 120%
times_casa = partidas[partidas['round'] == rodada]['home_team']
df_rodada.loc[df_rodada['atletas.clube_id'].isin(times_casa),'probs'] = df_rodada.loc[
df_rodada['atletas.clube_id'].isin(times_casa),'probs'] * (jogar_em_casa / 10 + 1)
for time in times:
df_rodada.loc[df_rodada['atletas.clube.id.full.name'] == time,'probs'] = df_rodada.loc[
df_rodada['atletas.clube.id.full.name'] == time,'probs'] * (times[time] / 10 + 1)
z = df_rodada['probs'].to_dict()
# ## Linear Programming
# We can now solve the optimization again.
# +
from pulp import LpMaximize, LpProblem, lpSum, LpVariable
prob = LpProblem("Melhor_Escalacao", LpMaximize)
y = LpVariable.dicts("Atl",df_rodada.index,0,1,cat='Binary')
prob += lpSum([z[i] * y[i] for i in y])
# -
prob += lpSum([c[i] * y[i] for i in y]) <= cartoletas, "Limite de Cartoletas"
prob += lpSum([dummies_posicao['ata'][i] * y[i] for i in y]) == formacao['ata'], "Quantidade Atacantes"
prob += lpSum([dummies_posicao['lat'][i] * y[i] for i in y]) == formacao['lat'], "Quantidade Laterais"
prob += lpSum([dummies_posicao['mei'][i] * y[i] for i in y]) == formacao['mei'], "Quantidade Meio"
prob += lpSum([dummies_posicao['zag'][i] * y[i] for i in y]) == formacao['zag'], "Quantidade Zagueiros"
prob += lpSum([dummies_posicao['gol'][i] * y[i] for i in y]) == formacao['gol'], "Quantidade Goleiro"
prob.solve()
# Finally we generate a new lineup, one that takes into account the weights we set above.
escalados = []
for v in prob.variables():
if v.varValue == 1:
atleta = v.name.replace('Atl_','').replace('_','-')
escalados.append(atleta)
print(atleta, "=", v.varValue)
# Evaluating the lineup below, this time we reach ***81*** points, ***60%*** more than the previous result.
#
# Another interesting point is that we spent fewer cartoletas than in the previous lineup. One opportunity is to use this model to build cheaper lineups when the goal is player-value appreciation; just set the desired limit on the total of cartoletas.
colunas = ['atletas.posicao_id','atletas.clube.id.full.name','atletas.pontos_num','atletas.preco_num']
df_rodada.loc[escalados][colunas]
df_rodada.loc[escalados]['atletas.pontos_num'].sum()
df_rodada.loc[escalados]['atletas.preco_num'].sum()
# ### TODO
#
# * Reassess the effect of players who accumulate more points simply because they play more matches.
| src/python/markov-chain-lpp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + language="html"
# <style>
# .h1_cell, .just_text {
# box-sizing: border-box;
# padding-top:5px;
# padding-bottom:5px;
# font-family: "Times New Roman", Georgia, Serif;
# font-size: 125%;
# line-height: 22px; /* 5px +12px + 5px */
# text-indent: 25px;
# background-color: #fbfbea;
# padding: 10px;
# border-style: groove;
# }
#
# hr {
# display: block;
# margin-top: 0.5em;
# margin-bottom: 0.5em;
# margin-left: auto;
# margin-right: auto;
# border-style: inset;
# border-width: 2px;
# }
# </style>
# -
# <h1>
# <center>
# Module 1: Assignment
# </center>
# </h1>
# <div class=h1_cell>
#
# <p>In this assignment, I'll ask you to do a bit more data wrangling (AKA feature engineering) and to help implement K-NN.
# <p>
# </div>
# <hr>
# <h1>
# Set up
# </h1>
# <div class=h1_cell>
# <p>
# Let's get organized. We need to read the data in. And we need to import any libraries we need.
# <p>
# The first step is to get the data. You can see I am reading from a file. We wrote this file as part of the content notebook this week.
# <p>
# FYI: When the Python kernel is grinding away on a cell, you will see a `*` where the number normally appears. The execution of the cell below for me takes a couple seconds. If it remains a `*` for a minute, you have issues. Try restarting the kernel (see the Kernel tab above).
# </div>
# +
import pandas as pd
import os
week = '1a'
home_path = os.path.expanduser('~')
file_path = '/Dropbox/cis399_ds2_w18/table_history/'
file_name = 'titanic_table_w'+week+'.csv'
titanic_table = pd.read_csv(home_path + file_path + file_name)
# -
titanic_table.head(1)
# <hr>
# <h2>
# Importing libraries
# </h2>
# <div class=h1_cell>
# <p>
# We went over this in content notebook but as reminder: we want to bring in libraries that have useful functions we write during the quarter. For now, `week0` and `week1` just have silly functions so you can test your git set-up.
# </div>
import os
home_path = os.path.expanduser('~')
home_path
library_folder = '/Dropbox/cis399_ds2_w18/code_libraries/datascience_2'
os.chdir(home_path + library_folder)
# !git pull
# +
#load the library from content this week
import sys
sys.path.append(home_path + library_folder)
from week1 import *
# %who function
# -
python_version() # just checking
#I am setting the option to see all the columns of our table as we build it, i.e., it has no max.
pd.set_option('display.max_columns', None)
titanic_table.head()
# <hr>
# <h1>
# Challenge 1: bag of chars
# </h1>
# <div class=h1_cell>
# <p>
# We have the Length column to use as a feature in K-NN. But it looks a little lonely. Let's add some more features. We will see a technique called bag-of-words in a future module. The general idea is to collect together all the words you see in a set of documents and give each a unique integer id. Then build a table with the columns labeled with the word ids and the rows the documents. To fill cell table[d][w], you count how many times word w appears in document d. If you have a lot of documents, you are going to have a lot of unique words, e.g., in the thousands. That's a lot of columns! With pandas, no problem.
# <p>
# In this challenge, I want you to try bag-of-characters (I just made that name up - I might trademark it). I want 26 new columns, one for each letter. For the text in each row['Name'], I want a count of how many times each letter appears in the name. You could do this the tedious way and have 26 separate pieces of code. What I want to see is a loop that does it for you.
# <p>
# Please wrangle the Name string into lowercase. I think this is an iffy decision. It may be that uppercase letters carry information. Normalizing to lowercase washes that information out. It would really be no problem to have 52 new columns for upper and lower case. Or even add columns for punctuation. Adding columns is cheap and easy. But we will use lowercase for now.
# <p>
# To remind you of what we did in content notebook:
# <pre>
# <code>
# titanic_table['Length'] = titanic_table.apply(lambda row: len(row['Name']), axis=1)
# </code>
# </pre>
# <p>
# What you need to do is do the same for 26 new columns. You will have to do something other than taking the length. And please use a loop.
# </div>
# +
titanic_table.head()
# -
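One possible shape for that loop, sketched on a tiny stand-in table (the real code would target `titanic_table` and its `Name` column; the two names here are just examples):

```python
import pandas as pd
import string

# Stand-in table with two example names.
demo = pd.DataFrame({'Name': ['Braund, Mr. Owen Harris', 'Heikkinen, Miss. Laina']})

# One new column per lowercase letter: count its occurrences in the lowercased name.
for ch in string.ascii_lowercase:
    demo[ch] = demo.apply(lambda row: row['Name'].lower().count(ch), axis=1)

demo[['Name', 'a', 'r', 's']]
```

The same loop on the Titanic table produces the 26 letter columns the challenge asks for, alongside the existing Length column.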
# <div class=h1_cell>
# <p>
# If your values don't match mine, make sure you are searching on the lowercase version of the name.
# </div>
# <h2>Challenge 2</h2>
# <p>
# <div class=h1_cell>
# <p>
# Let's try K-NN using our 27 new columns. The big picture is we will use those 27 columns as features in a classifier. We are trying to classify whether a person survived or not. I realize that we know already if they survived. But that is a good thing. We can pretend like we do not know, make a prediction, and then check if we were correct. We can grade ourselves in this way. I'll guide you through it.
# <p>
# But first, what the heck is K-NN? Well, first it is a machine learning algorithm. You can use it to predict the value of a target column. It's pretty sweet because it does not require any prior training. I tend to view it as crowd-sourcing. You treat the entire table as your crowd. Each row is a potential expert you want to solicit info from. Let's take an example. Let's say I want to predict the value of Survived for row 0. My crowd is rows 1 through 890. I want to find the experts in that crowd and have them convene to come up with a prediction for row 0 Survived. How many experts should I consult? K - the value of K is how many consultants. So 5-NN says use the 5 top experts from the crowd. How do I find the K best? There are a number of ways to define best. We are going to use Euclidean Distance for now but will change in future. We treat each row as a vector of values. Staying with row 0, I'll first compute the Euclidean Distance between row 0 and row 1. I'll record that some place for later. Then I'll try row 0 and row 2, again recording for later. The last step is to compare row 0 and row 890 and record. I now have 890 recorded distances from row 0. I find the 5 rows with the smallest distance. Those are my experts!
# <p>
# How do the K experts decide among themselves? Simple voting. They each look in their own Survived column and vote accordingly. Majority wins. Let's go through it. I'll give you steps 1 and 2.
# </div>
# <h2>Step 1: need a list of features</h2>
# <p>
# <div class=h1_cell>
# With K-NN, we can choose which features we will use to compute distance. In our case, I want all the features other than Survived and Name. I'll put the features into a list knn_features.
# </div>
knn_features = list('abcdefghijklmnopqrstuvwxyz') + ['Length']
knn_features
# <h2>Step 2: Convert row to vector</h2>
# <p>
# <div class=h1_cell>
# What I would like is a list of values from a row that is the vector we will use to compute Euclidean distance. But I only want that vector to contain values from my 27 features. Let's practice on row 0. We will pretend we want to predict the value of Survived for row 0.
# <p>
# BTW: there is some new syntax here: the loc method. With loc, you can first list the row index (0 in our case) and then the list of columns you want from that row. Nice.
# </div>
vector0 = titanic_table.loc[0, knn_features] # will only capture our features in row 0.
vector0
#
# <h2>We now have a pandas series</h2>
# <p>
# <div class=h1_cell>
#
# vector0 is an instance of a pandas Series, which are kind of interesting. You can treat a series as a dictionary. You can treat a series as a list. Let's check it out.
# </div>
# +
print(type(vector0)) # pandas.series
print(vector0['Length']) # treat it like a dictionary
print(vector0[-1]) # treat it like a list
# -
#
# <h2>Step 3. Build a Euclidean Distance function</h2>
# <p>
# <div class=h1_cell>
# <p>
# Here you go. You can view vector0 as p. We can build another vector q to test out your function. In essence, you will hold p steady and compare it against 890 values for q.
# <p>
# <img src='https://www.dropbox.com/s/9wao0kf3u32i3e9/euclidian.png?raw=1'>
# </div>
def euclidean_distance(p, q):
    # square root of the sum of squared coordinate differences
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
# <div class=just_text>
# Let's test it by building another vector and trying it out.
# </div>
# +
vector1 = titanic_table.loc[1, knn_features] # will only capture our features in row 1.
euclidean_distance(vector0, vector1)
# -
# <h2>Step 4. Find the k best rows</h2>
# <p>
# <div class=h1_cell>
# We want to predict the Survived value for row 0. Our approach will be to find the k closest rows to row 0. By closest, I mean smallest euclidean distance. I am going to ask you to write a function top_k that will return a list of tuples that are the k closest. Each tuple has the form (row_index, distance). I'll do a couple of examples by hand below.
# </div>
# +
all_distances = []
all_distances.append((1, 28.792360097775937)) # I already found the distance between rows 0 and 1
all_distances
# -
vector2 = titanic_table.loc[2, knn_features]
all_distances.append((2, euclidean_distance(vector0, vector2)))
all_distances
# <div class=h1_cell>
# Wow. row 2 was a lot closer!
# </div>
# +
vector3 = titanic_table.loc[3, knn_features]
all_distances.append((3, euclidean_distance(vector0, vector3)))
all_distances
# -
# <div class=just_text>
# What your top_k function has to do is fill out the all_distances list with 890 tuples.
# <p>
# What then? I'll tell you my approach. I sorted all_distances then sliced off k. That is what I returned.
# <p>
# Here is the signature line of top_k.
def top_k(table, features, target_row_id, k):
    target = table.loc[target_row_id, features]
    all_distances = []
    for i in table.index:
        if i == target_row_id:
            continue  # never compare a row against itself
        all_distances.append((i, euclidean_distance(target, table.loc[i, features])))
    all_distances.sort(key=lambda pair: pair[1])  # smallest distance first
    return all_distances[:k]
# <div class=just_text>
# Let's test it out on a couple rows.
# <p>
# We can start with row 0. Out of the rows 1 to 890, we want the 5 closest to row 0
# </div>
top_for_0 = top_k(titanic_table, knn_features, 0, 5)
top_for_0
# <h2>We have our top 5 rows</h2>
# <p>
# <div class=h1_cell>
#
# Looks like rows 477, 482, 804, 806 and 75 are the 5 closest. Let's try it for row 1; pretend we want to predict Survived for row 1. So I want to check row 1 against row 0 and rows 2 - 890.
# </div>
top_for_1 = top_k(titanic_table, knn_features, 1, 5)
top_for_1
# <h2>Step 5. Do voting</h2>
# <p>
# <div class=h1_cell>
# Once I have my list of top k, I can do voting. What I want is the value each consultant (row) has for Survived, the column we are trying to classify. Majority wins. If k is even, ties are possible. I'll assume that ties go to 0 (somewhat arbitrarily pessimistic). I'll start you out.
# </div>
def knn_voting(table, target_column, topk_rows):
    votes = [table.loc[i, target_column] for i, d in topk_rows]
    ones = sum(votes)
    return 1 if ones > len(votes) - ones else 0  # ties go to 0
knn_voting(titanic_table, 'Survived', top_for_0)
#
# <div class=h1_cell>
# <p>
# Let's verify that answer.
# </div>
for i,d in top_for_0:
actual = titanic_table.loc[i,'Survived']
print(actual)
#
# <div class=h1_cell>
# <p>
# Looks ok. We have four 0 votes and only one 1 vote.
# </div>
# <h2>Challenge 3</h2>
# <p>
# <div class=h1_cell>
# <p>
# We are on the home stretch. What is left is to check out how well K-NN is working on classification. What we can do is produce predictions for all the rows. Then grab the actual values for all the rows. Zip them up and create a confusion matrix.
# <p>
# About computing the predictions for all rows. If you think a bit about this, the process could be slow. Looks to me like an `O(n**2)` algorithm. I'll time my version so you can compare to yours. My time was around 15 minutes on a relatively slow machine. But you can see the problem with `O(n**2)` algorithms!
# <p>
# Before jumping directly into using the whole table, try things out on the first 50 rows. Should be speedier.
# </div>
# +
sliced_table = titanic_table[:50] #you can use slice operation to get specific rows
import time
start = time.time()
predictions = []
for i,row in sliced_table.iterrows():
if i%10 == 0: print('did 10')
top5 = top_k(sliced_table, knn_features, i, 5)
predictions.append(knn_voting(sliced_table, 'Survived', top5))
end = time.time()
print(end - start) # in seconds
# -
print(predictions)
# <div class=h1_cell>
# If you are matching my predictions, you are ready to step up to the big enchilada.
# </div>
# +
import time
start = time.time()
predictions = []
for i,row in titanic_table.iterrows():
if i%20 == 0: print('did 20')
top5 = top_k(titanic_table, knn_features, i, 5)
predictions.append(knn_voting(titanic_table, 'Survived', top5))
end = time.time()
print(end - start) # in seconds
# -
# <div class=h1_cell>
# <p>
# Whew. That took a little while.
# <p>
# The next piece is a lot easier - pull out the values from the Survived column. Then zip things up.
# <p>
# But before moving on, it is interesting to speculate on how to speed K-NN up. One idea might be to cache all distances. Something like a dictionary where key is a pair of row indices and value is distance, e.g., {(0,1): 28.792360097775937, ...}. We know that dictionary lookup is fast in Python. Another idea is to eliminate the non-experts. After we run through all rows, find the rows who are never in the top K. Out of all 891 rows, they never were close enough to vote. Put them on a skip-list and don't consult them in future. You can probably think of other ways.
# </div>
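The caching idea above can be sketched like this (a hypothetical helper, not part of the assignment; the toy `vectors` dictionary stands in for the feature rows):

```python
# Cache each pairwise distance under a sorted index pair, so the symmetric
# distance d(i, j) == d(j, i) is computed at most once.
distance_cache = {}

def cached_distance(i, j, vectors):
    key = (i, j) if i < j else (j, i)   # distance is symmetric
    if key not in distance_cache:
        p, q = vectors[key[0]], vectors[key[1]]
        distance_cache[key] = sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return distance_cache[key]

vectors = {0: [0, 0], 1: [3, 4], 2: [6, 8]}
print(cached_distance(0, 1, vectors))   # 5.0, computed and stored
print(cached_distance(1, 0, vectors))   # 5.0 again, but served from the cache
```

This roughly halves the work, since each unordered pair is visited from both sides during the full K-NN pass.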
# +
actuals = titanic_table['Survived'] #easy peasy to pull a column out of a table
zipped = list(zip(predictions,actuals)) #materialize the zip so we can slice it now and loop over it later
zipped[:5]
# -
# <div class=h1_cell>
# <p>
# If you think about it, there are 4 cases in zipped: (0,0), (0,1), (1,0), (1,1). Let's package them up in a dictionary.
# <p>
# FYI: I always find it kind of cool that Python allows you great freedom in what you use as keys in a dictionary.
# </div>
confusion_dictionary = {(1,1):0, (1,0):0, (0,1):0, (0,0):0}
for pair in zipped:
confusion_dictionary[pair] += 1
confusion_dictionary
# <div class=h1_cell>
# <p>
# You can align our dictionary with the confusion matrix below. (1,0) is false positive, etc. Once you have the 4 values, you can compute a lot of interesting measures.
# <p>
# <img src='https://www.dropbox.com/s/zubecbzi8zsdzgg/confusion_matrix.png?raw=1'>
# <p>
# </div>
# <h2>Compute measures</h2>
# <p>
# <div class=h1_cell>
# <p>
# Let's try our hand at computing some of the measures in the confusion matrix.
# <p>
# First, it will make typing simpler if we pull out the 4 core values from the confusion_dictionary.
# <p>
# Remember that division in 2.7 defaults to int division unless numerator and/or denominator is a float. So `1/2 == 0` whereas `1.0/2 == .5`.
# </div>
confusion_dictionary
tp = confusion_dictionary[(1,1)]
fp = confusion_dictionary[(1,0)]
fn = confusion_dictionary[(0,1)]
tn = confusion_dictionary[(0,0)]
# <h3>Recall (AKA Sensitivity)</h3>
#
# +
recall = float(tp) / (tp + fn)
recall
# -
# <h3>Fall-out</h3>
# +
fall_out = float(fp) / (fp + tn)
fall_out
# -
# <h3>Precision</h3>
# +
precision = float(tp) / (tp + fp)
precision
# -
# <h3>F1</h3>
# +
f1 = 2 * precision * recall / (precision + recall)
f1
# -
# <h3>Accuracy</h3>
# +
accuracy = float(tp + tn) / (tp + fp + fn + tn)
accuracy
# -
# <h1>Kind of shocking</h1>
# <p>
# <div class=h1_cell>
# <p>
# If we just predicted a 0 for every row, we would get to .63 accuracy. So we are doing better than the mode.
# Not sure what to say. Maybe there is something in Numerology.
# </div>
# <h1>Let's Summarize</h1>
# <p>
# <div class=h1_cell>
# <p>
# We have learned that we can use text in classification/prediction problems. We used names, but we could have also used tweets, articles or entire books. It's all text. We will see this in future modules.
# <p>
# We learned that we can break out text into smaller chunks and use those chunks as columns. We used characters but later we will use words.
# <p>
# We learned that K-NN has some good features -- it requires no training and is fairly straightforward -- and bad features -- it is slow.
# <p>
# Our next big goal is to classify fake news in tweets (1 if fake, 0 if real). We will look at how to break out pieces of a tweet.
# </div>
# <h1>Saving your work</h1>
# <p>
# <div class=h1_cell>
# <p>
# Let's save the wrangled table.
# <p>
# </div>
# +
import os
week = '1b'
home_path = os.path.expanduser('~')
file_path = '/Dropbox/cis399_ds2_w18/table_history/'
file_name = 'titanic_table_w'+week+'.csv'
titanic_table.to_csv(home_path + file_path + file_name, index=False)
# -
#
# <h1>Read it back in</h1>
# <p>
# <div class=h1_cell>
# <p>
#
# Read it back in just to make sure of round-trip
# </div>
read_table = pd.read_csv(home_path + file_path + file_name)
read_table.head()
# <hr>
# <h1>Next up</h1>
# <div class=h1_cell>
# <p>
# We will start to work with fake-news.
# <p>
# But before dropping the Numerology idea all together, my colleague pointed me to this web site: http://www.paulsadowski.com/Numbers.asp. If I had more time, I bet I could turn K-NN loose on that site :)
# </div>
| .ipynb_checkpoints/module1_handout-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Sorting algorithms one by one
def binary_sort(l,key,lower,upper):
    # binary search: return the index after which `key` should be inserted
    if upper - lower <= 1:
        if key <= l[lower]:
            return lower - 1
        else:
            return lower
    else:
        mid = (lower + upper) // 2
        if l[mid] > key:
            return binary_sort(l,key,lower,mid)
        elif l[mid] < key:
            return binary_sort(l,key,mid,upper)
        else:
            return mid
# +
l = [37, 23, 0, 17, 12, 72, 31, 46, 100, 88, 54]
for i in range(1, len(l)):
    key = l[i]
    pos = binary_sort(l, key, 0, i) + 1
    for k in range(i, pos, -1):
        l[k] = l[k - 1]   # shift the sorted prefix right to make room
    l[pos] = key
print(l)
# -
# Binary insertion sort: binary search finds the insertion point, then elements are shifted
def insertion_sort(arr):
for i in range(1, len(arr)):
temp = arr[i]
pos = binary_search(arr, temp, 0, i) + 1
for k in range(i, pos, -1):
arr[k] = arr[k - 1]
arr[pos] = temp
def binary_search(arr, key, start, end):
    # binary search for the position of key in arr[start:end]
if end - start <= 1:
if key < arr[start]:
return start - 1
else:
return start
mid = (start + end)//2
if arr[mid] < key:
return binary_search(arr, key, mid, end)
elif arr[mid] > key:
return binary_search(arr, key, start, mid)
else:
return mid
# main
arr = [1,5,3,4,8,6,3,4]
n = len(arr)
insertion_sort(arr)
print("Sorted array is:")
for i in range(n):
print(arr[i],end=" ")
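For comparison, Python's standard library implements the same binary-search-based insertion through the `bisect` module:

```python
import bisect

# Standard-library equivalent of binary insertion sort: bisect finds the
# insertion point with binary search, and insort keeps the list sorted.
def binary_insertion_sort(values):
    result = []
    for v in values:
        bisect.insort(result, v)
    return result

print(binary_insertion_sort([1, 5, 3, 4, 8, 6, 3, 4]))  # [1, 3, 3, 4, 4, 5, 6, 8]
```

The search is O(log n) per element, but like the hand-written version the shifting still makes the whole sort O(n**2) in the worst case.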
# +
# Check that no value in arr occurs more than 2*k times
from collections import defaultdict
def check_count(arr,k):
mp = defaultdict(lambda:0)
for i in range(len(arr)):
mp[arr[i]] += 1
# Check for each value in hash
for key, values in mp.items():
if values > 2 * k:
return False
return True
# -
arr = [1,5,3,4,8,6,3,4]
k = 2  # allow each value to occur at most 2*k times
if check_count(arr,k) == True:
print("Yes")
else:
print("No")
| Sorting_Algorithms.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="LQV029hDzdMb"
import numpy as np
import matplotlib.pyplot as plt
# + colab={"base_uri": "https://localhost:8080/", "height": 340} id="uyTph2zP0Qmm" outputId="f7986d2d-db9a-4e25-84a3-a2ad5e6e1249"
datanum = 1000
dim = 2
mu = np.zeros(dim)
sig = np.eye(dim)
sig[0, 1] = -0.5
sig[1, 0] = -0.5
print(mu, '\n', sig)
data = np.random.multivariate_normal(mu, sig, datanum)
# print(data)
plt.scatter(data[:, 0], data[:, 1])
# + colab={"base_uri": "https://localhost:8080/", "height": 284} id="aH7Wej3C0v_G" outputId="53f6841a-459f-402d-ff3b-c5e174f25865"
# plt.scatter(data[:, 0], data[:, 1])
# + colab={"base_uri": "https://localhost:8080/", "height": 322} id="Fkbzm9-52WPd" outputId="a438a53d-7ef6-4616-8bcd-4eda5f118427"
data2 = np.random.randn(datanum, dim)
data2.shape
# plt.scatter(data2[:, 0], data2[:, 1])
L = np.linalg.cholesky(sig)
z_data = np.matmul(data2, L.T) + mu
# Visual check: the Cholesky-based sampler reproduces the same pattern, so the function behaves as expected.
plt.scatter(z_data[:, 0], z_data[:, 1])
'''
If you trust np.random.randn, you can also compare the result against
np.random.multivariate_normal. In general many such functions exist, but
not all of them are necessarily reliable, so it is important to test them.
Some simply raise an error, but in most cases they appear to work fine on
the surface while going wrong internally.
'''
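One concrete way to test the sampler, beyond the visual check, is to compare the empirical covariance of the samples against the target covariance. This sketch uses a fixed seed and a large sample so the comparison is stable:

```python
import numpy as np

# Draw standard-normal samples, transform them with the Cholesky factor of
# the target covariance, and verify that the empirical covariance matches.
rng = np.random.default_rng(0)
mu = np.zeros(2)
sig = np.array([[1.0, -0.5], [-0.5, 1.0]])
L = np.linalg.cholesky(sig)
samples = rng.standard_normal((100000, 2)) @ L.T + mu
print(np.cov(samples.T))   # approximately sig
```

The same check applied to `np.random.multivariate_normal` output lets you compare the two samplers quantitatively rather than just by eye.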
| honorary_sessions/01_gaussian_distribution/gaussian_distribution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Isomap Embedding
#
# * Non-linear dimensionality reduction through Isometric Mapping
# * Manifold learning is a big topic in its own right; the basic idea is to discover low-dimensional structure inside a high-dimensional space
# * A PCA-like dimensionality-reduction method:
#     * PCA: reduces dimensionality from the coordinates of the points
#     * Isomap: works from the matrix of pairwise distances between points
from sklearn.manifold import Isomap
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
iris = load_iris()
iris.data.shape
embedding = Isomap(n_components=2)
X_transformed = embedding.fit_transform(iris.data)
X_transformed.shape
plt.scatter(X_transformed[:, 0], X_transformed[:, 1], c=iris.target)
plt.colorbar()
plt.show()
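For the comparison the notes mention, PCA (imported above but not yet used) reduces the same data with a linear projection instead of geodesic distances. A quick sketch:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

iris = load_iris()
X_pca = PCA(n_components=2).fit_transform(iris.data)
X_pca.shape   # (150, 2), same shape as the Isomap embedding
```

`X_pca` can be plotted with the same `plt.scatter(..., c=iris.target)` call to compare how each method separates the three classes.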
| sklearn/dimensionality_reduction_Isomap.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7 (py37)
# language: python
# name: py37
# ---
# # Solution to the Initial Value Problem:
#
# We solve
# \begin{equation}
# \epsilon\frac{\partial \theta}{\partial t} + \cos{(y)}\frac{\partial\theta}{\partial x} = \frac{1}{Pe}\frac{\partial^2\theta}{\partial x^2} + \epsilon\frac{\partial^2\theta}{\partial y^2} ,
# \end{equation}
#
# in the domain $0<y<2\pi$, $-2 \pi < x < 2 \pi$, $t>0$. Thus the smallest wavenumber (other than zero) is $k=0.5$.
#
# The initial condition is the Gaussian
#
# \begin{equation}
# \theta(x, y, 0) = \theta_{0}\exp{(-x^2/\sigma^2)} .
# \end{equation}
#
#
# ### We only calculate the average solution below
#
#
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import gridspec
import xarray as xr
from mathieu_functions import mathieu_functions as mfs
from mathieu_functions import A_coefficients
# +
# =================================
# Important parameters to define
# =================================
L = np.pi # Half of channel width (y-direction)
N = 50 # length of k-array
alpha = 2 # length of channel periodic in x. I have used alpha=10 before, but for the gaussian initial condition a value of 2 is better.
Nx = 500 # length of x-array
sigma=0.5 # changes width of gaussian
eps = 0.05 # ta / td << 1 for weakly diffusive processes.
Pe = 1 / eps
x = np.linspace(-alpha * L, alpha * L, Nx)
y = np.linspace(0, L, Nx//5)
X, Y = np.meshgrid(x, y)
K = np.arange(0, N / alpha, 1 / alpha) # wavenumber array.
K_test = np.linspace(0, N/alpha, 1000)
Q = (1j) * 2 * K / eps # Canonical Mathieu parameter
qf = Q[-1].imag # Largest value of Mathieu's parameter.
M = 75 # matrix size
t = np.linspace(0, .25, 100)
colors = ['#0000cc', '#990000', 'darkgreen',
'indigo', '#009999', 'orange',
'k']
# -
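The Gaussian initial condition from the equations above can be sketched on the same x-grid (theta_0 = 1 is an assumed amplitude; sigma = 0.5 as in the parameters):

```python
import numpy as np

# theta(x, y, 0) = theta0 * exp(-x**2 / sigma**2) evaluated on the channel's x-grid
theta0 = 1.0
sigma = 0.5
x = np.linspace(-2 * np.pi, 2 * np.pi, 500)
theta_init = theta0 * np.exp(-x**2 / sigma**2)
```

The profile peaks near theta0 at x = 0 and is effectively zero at the domain edges, so the periodic extension in x introduces no discontinuity.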
K
print('Last element of K, q, size(q)')
K[-1], Q[-1], len(Q)
# ## Calculate Mathieu Eigenfunctions and Eigenvals
#
# We don't need to calculate the eigenfunctions, as we are only interested in calculating the averaged (exact) solution. We only
# need the eigenvalues $a_{2n}$ and the Fourier coefficients $A_{2r}^{(2n)}$.
A_vals = A_coefficients(Q, M, 'even', 'one') ## NOTE: slow since Q and M are large.
fig, ax = plt.subplots(figsize=(20, 8))
gs = gridspec.GridSpec(3,2)
gs.update(hspace=0.05)
ax1 = plt.subplot(gs[4])
plt.plot(Q.imag, A_vals['a24'].real,'k', lw=4, label='n=12')
plt.plot(Q.imag, A_vals['a26'].real, '#808080', lw=4, label='n=13')
plt.xlim(Q[0].imag, Q[-1].imag)
# plt.ylim(0, 10)
plt.yticks(size=15)
plt.xlabel('q', fontsize=25)
plt.xticks(size=15)
plt.legend(fontsize=20, frameon=False)
ax2 = plt.subplot(gs[5])
plt.plot(Q.imag, A_vals['a24'].imag, 'k', lw=4, ls='--', label='n=12')
plt.plot(Q.imag, A_vals['a26'].imag, '#808080', lw=4, ls='--', label='n=13')
plt.xlim(Q[0].imag, Q[-1].imag)
plt.yticks(size=15)
plt.xlabel('q', fontsize=25)
plt.xticks(size=15)
ax3 = plt.subplot(gs[2])
plt.plot(Q.imag, A_vals['a28'].real, 'k', lw=4, label='n=14')
plt.plot(Q.imag, A_vals['a30'].real, '#808080', lw=4, label='n=15')
plt.xlim(Q[0].imag, Q[-1].imag)
plt.yticks(size=15)
plt.legend(fontsize=20, frameon=False)
plt.setp(ax3.get_xticklabels(), visible=False)
ax4 = plt.subplot(gs[3])
plt.plot(Q.imag, A_vals['a28'].imag, 'k', lw=4, ls='--', label='n=14')
plt.plot(Q.imag, A_vals['a30'].imag, '#808080',lw=4, ls='--', label='n=15')
plt.xlim(Q[0].imag, Q[-1].imag)
plt.yticks(size=15)
plt.setp(ax4.get_xticklabels(), visible=False)
# plt.legend(fontsize=20, frameon=False)
ax5 = plt.subplot(gs[0])
plt.plot(Q.imag, A_vals['a32'].real, 'k', lw=4, label='n=16')
plt.plot(Q.imag, A_vals['a34'].real, '#808080', lw=4, label='n=17')
plt.xlim(Q[0].imag, Q[-1].imag)
plt.yticks(size=15)
plt.legend(fontsize=20, frameon=False)
plt.setp(ax5.get_xticklabels(), visible=False)
plt.title(r'$Re(a_{2n})$', fontsize=25)
ax6 = plt.subplot(gs[1])
plt.plot(Q.imag, A_vals['a32'].imag, 'k', lw=4, ls='--', label='n=16')
plt.plot(Q.imag, A_vals['a34'].imag, '#808080', lw=4, ls='--', label='n=17')
plt.xlim(Q[0].imag, Q[-1].imag)
plt.yticks(size=15)
plt.setp(ax6.get_xticklabels(), visible=False)
plt.title(r'$Im(a_{2n})$', fontsize=25)
plt.show()
fig, ax = plt.subplots(figsize=(20, 8))
gs = gridspec.GridSpec(3,2)
gs.update(hspace=0.05)
ax1 = plt.subplot(gs[4])
plt.plot(Q.imag, A_vals['a12'].real,'k', lw=4, label='n=6')
plt.plot(Q.imag, A_vals['a14'].real, '#808080', lw=4, label='n=7')
plt.xlim(Q[0].imag, Q[-1].imag)
# plt.ylim(0, 10)
plt.yticks(size=15)
plt.xlabel('q', fontsize=25)
plt.xticks(size=15)
plt.legend(fontsize=20, frameon=False)
ax2 = plt.subplot(gs[5])
plt.plot(Q.imag, A_vals['a12'].imag, 'k', lw=4, ls='--', label='n=6')
plt.plot(Q.imag, A_vals['a14'].imag, '#808080', lw=4, ls='--', label='n=7')
plt.xlim(Q[0].imag, Q[-1].imag)
plt.yticks(size=15)
plt.xlabel('q', fontsize=25)
plt.xticks(size=15)
ax3 = plt.subplot(gs[2])
plt.plot(Q.imag, A_vals['a16'].real, 'k', lw=4, label='n=8')
plt.plot(Q.imag, A_vals['a18'].real, '#808080', lw=4, label='n=9')
plt.xlim(Q[0].imag, Q[-1].imag)
plt.yticks(size=15)
plt.legend(fontsize=20, frameon=False)
plt.setp(ax3.get_xticklabels(), visible=False)
ax4 = plt.subplot(gs[3])
plt.plot(Q.imag, A_vals['a16'].imag, 'k', lw=4, ls='--', label='n=8')
plt.plot(Q.imag, A_vals['a18'].imag, '#808080',lw=4, ls='--', label='n=9')
plt.xlim(Q[0].imag, Q[-1].imag)
plt.yticks(size=15)
plt.setp(ax4.get_xticklabels(), visible=False)
# plt.legend(fontsize=20, frameon=False)
ax5 = plt.subplot(gs[0])
plt.plot(Q.imag, A_vals['a20'].real, 'k', lw=4, label='n=10')
plt.plot(Q.imag, A_vals['a22'].real, '#808080', lw=4, label='n=11')
plt.xlim(Q[0].imag, Q[-1].imag)
plt.yticks(size=15)
plt.legend(fontsize=20, frameon=False)
plt.setp(ax5.get_xticklabels(), visible=False)
plt.title(r'$Re(a_{2n})$', fontsize=25)
ax6 = plt.subplot(gs[1])
plt.plot(Q.imag, A_vals['a20'].imag, 'k', lw=4, ls='--', label='n=10')
plt.plot(Q.imag, A_vals['a22'].imag, '#808080', lw=4, ls='--', label='n=11')
plt.xlim(Q[0].imag, Q[-1].imag)
plt.yticks(size=15)
plt.setp(ax6.get_xticklabels(), visible=False)
plt.title(r'$Im(a_{2n})$', fontsize=25)
plt.show()
fig, ax = plt.subplots(figsize=(20, 8))
gs = gridspec.GridSpec(3,2)
gs.update(hspace=0.05)
ax1 = plt.subplot(gs[4])
plt.plot(Q.imag, A_vals['a0'].real, color=colors[0], lw=4, label='n=0')
plt.plot(Q.imag, A_vals['a2'].real, color=colors[1], lw=4, label='n=1')
plt.xlim(0, qf)
plt.ylim(0, 10)
plt.yticks(size=15)
plt.xlabel('q', fontsize=25)
plt.xticks(size=15)
plt.legend(fontsize=20, frameon=False)
ax2 = plt.subplot(gs[5])
plt.plot(Q.imag, A_vals['a0'].imag, color=colors[0], lw=4, ls='--', label='n=0')
plt.plot(Q.imag, A_vals['a2'].imag, color=colors[1], lw=4, ls='--', label='n=1')
plt.xlim(0, qf)
plt.yticks(size=15)
plt.xlabel('q', fontsize=25)
plt.xticks(size=15)
ax3 = plt.subplot(gs[2])
plt.plot(Q.imag, A_vals['a4'].real, color=colors[2], lw=4, label='n=2')
plt.plot(Q.imag, A_vals['a6'].real, color=colors[3], lw=4, label='n=3')
plt.xlim(0, qf)
plt.yticks(size=15)
plt.legend(fontsize=20, frameon=False)
plt.setp(ax3.get_xticklabels(), visible=False)
ax4 = plt.subplot(gs[3])
plt.plot(Q.imag, A_vals['a4'].imag, color=colors[2], lw=4, ls='--', label='n=2')
plt.plot(Q.imag, A_vals['a6'].imag, color=colors[3], lw=4, ls='--', label='n=3')
plt.xlim(0, qf)
plt.yticks(size=15)
plt.setp(ax4.get_xticklabels(), visible=False)
# plt.legend(fontsize=20, frameon=False)
ax5 = plt.subplot(gs[0])
plt.plot(Q.imag, A_vals['a8'].real, color=colors[4], lw=4, label='n=4')
plt.plot(Q.imag, A_vals['a10'].real, color=colors[5], lw=4, label='n=5')
plt.xlim(0, qf)
plt.yticks(size=15)
plt.legend(fontsize=20, frameon=False)
plt.setp(ax5.get_xticklabels(), visible=False)
plt.title(r'$Re(a_{2n})$', fontsize=25)
ax6 = plt.subplot(gs[1])
plt.plot(Q.imag, A_vals['a8'].imag, color=colors[4], lw=4, ls='--', label='n=4')
plt.plot(Q.imag, A_vals['a10'].imag, color=colors[5], lw=4, ls='--', label='n=5')
plt.xlim(0, qf)
plt.yticks(size=15)
plt.setp(ax6.get_xticklabels(), visible=False)
plt.title(r'$Im(a_{2n})$', fontsize=25)
plt.show()
# ## Create Ordered Lists
# +
OM = []
AA = []
for k in range(M // 2):
OM.append(0.25*np.copy(A_vals['a' + str(2 * k)]))
AA.append(np.copy(A_vals['A' + str(2 * k)][:, 0]))
COS = [np.exp(K[i] * x*(1j)) for i in range(N)]
# -
# ## Gaussian Initial condition
G = np.exp(-(x/sigma)**2)
fac = np.sqrt(np.pi)*sigma/(2*L)
arg = ((2 * L) / (np.pi*sigma))**2
cn = []
for n in range(N):
cn.append(fac * np.exp(-K[n]**2/(arg**2)))
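# As a sanity check (a standalone sketch under the standard Fourier-series convention, which may differ from the nondimensionalization used in this notebook), the coefficients of a narrow Gaussian on a periodic interval can be computed by quadrature and compared with the infinite-domain analytic result $\frac{\sqrt{\pi}\sigma}{2\alpha L} e^{-k^2\sigma^2/4}$:

```python
import math

# Parameters matching the cells above (assumed): L = pi, alpha = 2, sigma = 0.5.
L_, alpha_, sigma_ = math.pi, 2, 0.5
n_pts = 4000
a, b = -alpha_ * L_, alpha_ * L_
dx = (b - a) / n_pts

def coeff_numeric(k):
    # Trapezoidal rule for (1/(2*alpha*L)) * integral of exp(-x^2/sigma^2) cos(kx) dx.
    total = 0.0
    for i in range(n_pts + 1):
        x = a + i * dx
        w = 0.5 if i in (0, n_pts) else 1.0
        total += w * math.exp(-(x / sigma_) ** 2) * math.cos(k * x)
    return total * dx / (2 * alpha_ * L_)

def coeff_analytic(k):
    # Infinite-domain Fourier transform of the Gaussian; accurate here because
    # the Gaussian is narrow compared with the periodic interval.
    return math.sqrt(math.pi) * sigma_ / (2 * alpha_ * L_) * math.exp(-(k * sigma_) ** 2 / 4)

for k in (0.0, 0.5, 1.0, 2.0):
    assert abs(coeff_numeric(k) - coeff_analytic(k)) < 1e-6
```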
# ## Defines a function that constructs the solution
def evolve_ds(AA, OM, K, cn, sigma, X, Y, t):
"""Constructs the solution to the IVP. So far, only works in case Phi=1"""
## Initialize the array
coords = {"time": t, "x": X[0, :]}
Temp = xr.DataArray(np.nan, coords=coords, dims=["time", 'x'])
ds = xr.Dataset({'Theta': Temp})
N = len(K)
for i in range(len(t)):
# print(i)
coeff=[]
for k in range(N):
CE2n = [2 * (AA[r][k]**2) * np.exp(-(OM[r][k] + K[k]**2)*t[i]) for r in range(len(AA))]
CE2n = sum(CE2n) # r-sum
coeff.append(cn[k] * CE2n * COS[k])
T0 = (sigma**2)*np.sum(coeff, axis=0).real # k-sum
ds['Theta'].data[i, :] = T0
return ds
# ## Construct solution
ds = evolve_ds(AA, OM, K, cn, sigma, X, Y, t)
# ## Initial condition
#
fig, ax = plt.subplots(figsize=(14, 8))
ds.Theta.isel(time=0).plot(lw=4)
plt.xticks(size=15)
plt.yticks(size=15)
plt.xlabel('x', fontsize=25)
plt.ylabel(r'$\theta(x, y, 0)$', fontsize=25, rotation=0, labelpad=45)
plt.show()
# ## Hovmoeller plot of (cross-channel) mean solution to the Advection Diffusion Equation
T, Xt = np.meshgrid(x, t)
cmap='nipy_spectral'
fig, ax = plt.subplots(figsize=(14, 8))
cf=plt.contourf(T, Xt, ds.Theta, levels=np.linspace(0, 1, 1000), cmap=cmap)
plt.xticks(size=15)
plt.yticks([0, .1, .2], size=15)
plt.ylim(0, 0.2)
plt.xlabel('x', fontsize=25)
plt.xlim(-alpha * L, alpha * L)
plt.ylabel(r'$\frac{t}{t_{d}}$', fontsize=35, rotation=0, labelpad = 35)
cbaxes = fig.add_axes([0.675, 0.935, 0.225, 0.03])
clb1 = plt.colorbar(cf,cax=cbaxes,ticks=[0, 0.5, 1],orientation='horizontal')
clb1.ax.tick_params(labelsize=15)
plt.show()
| Advection_Diffusion/.ipynb_checkpoints/test-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# # Matplotlib Text Settings
# - Official documentation: https://matplotlib.org/tutorials/text/text_intro.html
#
# ## Text commands
# ### Add text to an Axes
# `matplotlib.axes.Axes.text`
# ### x-axis label
# `matplotlib.axes.Axes.set_xlabel`
# ### y-axis label
# `matplotlib.axes.Axes.set_ylabel`
# ### Axes title
# `matplotlib.axes.Axes.set_title`
# ### Arbitrary text on a Figure
# `matplotlib.figure.Figure.text`
# ### Figure title
# `matplotlib.figure.Figure.suptitle`
# ### Axes annotation
# `matplotlib.axes.Axes.annotate`
# +
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure()
fig.suptitle('bold figure suptitle', fontsize=14, fontweight='bold')
ax = fig.add_subplot(111)
fig.subplots_adjust(top=0.85)
ax.set_title('axes title')
ax.set_xlabel('xlabel')
ax.set_ylabel('ylabel')
ax.text(3, 8, 'boxed italics text in data coords', style='italic',
bbox={'facecolor': 'red', 'alpha': 0.5, 'pad': 10})
ax.text(2, 6, r'an equation: $E=mc^2$', fontsize=15)
ax.text(3, 2, 'unicode: Institut für Festkörperphysik')
ax.text(0.95, 0.01, 'colored text in axes coords',
verticalalignment='bottom', horizontalalignment='right',
transform=ax.transAxes,
color='green', fontsize=15)
ax.plot([2], [1], 'o')
ax.annotate('annotate', xy=(2, 1), xytext=(3, 4),
arrowprops=dict(facecolor='black', shrink=0.05))
ax.axis([0, 10, 0, 10])
plt.show()
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Mathematical expressions
# - Official documentation: https://matplotlib.org/tutorials/text/mathtext.html#sphx-glr-tutorials-text-mathtext-py
# - Place LaTeX text on the canvas via the text commands
# + pycharm={"name": "#%% \n", "is_executing": false}
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title(r'$\sum_{i=0}^\infty x_i$')
plt.show()
| doc/matplotlib/Text.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Qiskit 7 - Shor's algorithm
#
# #### Table of content
#
# 1. Introduction
# 2. Choose N
# 3. Get a
# 4. Phase estimation
# 5. QFT - Quantum Fourier Transform
# 6. Unitary Operator
# 7. Simulation
# 8. IBMQ
# +
import numpy as np
from math import gcd
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, Aer, execute, IBMQ
from qiskit.tools.visualization import circuit_drawer, plot_histogram
# -
# ## 1 - Introduction
#
# #### Problem
#
# The integer factoring problem can be stated as follows: given an integer $N$, find integers $N_1$ and $N_2$ such that $N_1 N_2 = N$, where $1 < N_1, N_2 < N$.
#
# Shor's algorithm approaches this problem by reducing it to finding the period of a certain function.
#
# #### Algorithm
#
# ***step 1*** Pick an integer $N$ and use a classical algorithm to determine if it is prime or a power of prime. If so, exit.
#
# ***step 2*** Randomly choose an integer $a$ such that $1 < a < N$. Use Euclid's algorithm to compute $GCD(a, N)$. If it is not 1, then $GCD(a, N)$ is already a nontrivial factor and we are done.
#
# ***step 3*** Use the quantum circuit represented by the unitary operator $U_{f_{a,N}}$ to find a period $r$.
#
# ***step 4*** If $r$ is odd, or if $a^{\frac{r}{2}} \equiv -1$ Mod $N$, then return to step 2 and choose another $a$.
#
# ***step 5*** Use Euclid's algorithm to calculate $GCD((a^{\frac{r}{2}} + 1), N)$ and $GCD((a^{\frac{r}{2}} - 1), N)$. Return at least one of the nontrivial solutions.
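# The classical skeleton of steps 2-5 can be sketched in a few lines, with the period found by brute force in place of the quantum circuit of step 3. Here we factor the illustrative example $N = 15$ with $a = 7$:

```python
from math import gcd

def find_period(a, N):
    # Brute-force the smallest r > 0 with a^r ≡ 1 (mod N); this is
    # the job the quantum circuit performs in step 3.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7
assert gcd(a, N) == 1                              # step 2: a is coprime to N
r = find_period(a, N)                              # step 3: here r = 4
assert r % 2 == 0 and pow(a, r // 2, N) != N - 1   # step 4: r even, a^(r/2) != -1 mod N
factors = sorted([gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)])
print(factors)  # [3, 5] -- nontrivial factors of 15
```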
# ## 2 - Choose N
#
# So we will work with a small positive integer that is manageable on a handful of qubits. Allow me to put together the following list of candidates (strictly speaking, 4 and 8 are powers of a prime and would be rejected in step 1; they are kept here only as small toy cases):
#
# * List of possible integers = [4, 6, 8, 10, 12, 14, 15, 18]
N = 4
N_bit = format(N, 'b')
print('N =', N)
print('N bit =', N_bit)
# ## 3 - Get a
#
# If $GCD(a,N) = 1$, then $a$ is coprime to $N$, which is what we need in the following steps. Hence we draw a random $a$ and check its common factors with $N$; if the GCD is 1, we keep it.
for i in range(N):
a = np.random.randint(2, N)
GCD = gcd(a, N)
    if GCD == 1:  # '==' compares values; 'is' compares identity and is unreliable for ints
print('gcd is 1 for a:', a)
break
# ## 4 - Phase estimation
#
#
# #### The modular function
# First up is to find the powers of $a$ $Mod$ $N$, that is:
#
# $$a^0 Mod N, a^1 Mod N, a^2 Mod N, \ldots$$
#
# In other words we are to find the values $x$ of the function:
#
# $$f_{a,N}(x) = a^x Mod N$$
#
# However we are rather interested in the period $r$ of this function such that:
#
# $$f_{a,N}(r) = a^r Mod N$$
#
# It is known from a number theory theorem that for any co-prime $a \leq N$, the function $f_{a,N}(r)$ will output a 1 for some $r \leq N$. After it hits 1, the sequence of numbers will simply repeat itself. *(isn't this a mechanism we could use for reformulated optimization problems?).*
#
#
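# The repetition is easy to see classically by tabulating $f_{a,N}(x)$ for the small coprime pair used later in this notebook, $a = 3$ and $N = 4$ (a sketch, not part of the quantum circuit):

```python
# Tabulate f_{a,N}(x) = a^x mod N and read off the period r.
N, a = 4, 3
seq = [pow(a, x, N) for x in range(8)]
print(seq)  # [1, 3, 1, 3, 1, 3, 1, 3] -- the sequence repeats with period r = 2
r = seq[1:].index(1) + 1  # first x > 0 with a^x mod N == 1
print(r)  # 2
```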
# #### State dynamics
# Next step is to implement the function $f_{a,N}$ on a quantum circuit, in order to do that we are to proceed according to the following procedure:
#
# **1:** Define two QuantumRegisters. The first one is $|x\rangle_m$ and the second one is $|y\rangle_n$, where $m$ is the binary length needed to hold $N$, and $n$ is the binary length needed to hold the periodic base of $N$.
#
# $$|\varphi_0 \rangle = |x_m, y_n \rangle$$
#
# **2:** Place the $|x \rangle_m$ qubits in an equally weighted superposition:
#
# $$|\varphi_1 \rangle = \frac{\sum_{x\epsilon[0, 1]^m}|x, y_n\rangle}{\sqrt{2^m}}$$
#
# **3:** Evaluate the function $f_{a,N}(x)$ for all the superpositioned possibilities:
#
# $$|\varphi_2 \rangle = \frac{\sum_{x\epsilon[0, 1]^m}|x, f_{a,N}(x)\rangle}{\sqrt{2^m}} = \frac{\sum_{x\epsilon[0, 1]^m}|x, a^x Mod N\rangle}{\sqrt{2^m}}$$
#
# **4:** By measuring the bottom qubits $|y\rangle_n$ we obtain an estimate $a^{\overline{x}}Mod N$ for some $\overline{x}$. By the periodicity of $f_{a,N}$ we also have that:
#
# $$a^{\overline{x}} \equiv a^{\overline{x}+r}ModN$$
# and,
# $$a^{\overline{x}} \equiv a^{\overline{x}+2r}ModN$$
# such that for any $s$ $\epsilon$ $\mathbb Z$ we have:
# $$a^{\overline{x}} \equiv a^{\overline{x}+sr}ModN$$
#
# Furthermore we have that out of the $2^m$ superpositions in $|x\rangle$ in state $|\varphi_2\rangle$, there are $\frac{2^m}{r}$ of them that have the solution $\overline{x}$. This finally gives us:
#
# $$|\varphi_3 \rangle = \frac{\sum_{x:\, a^x \equiv a^{\overline{x}} Mod N}|x,a^{\overline{x}} Mod N\rangle}{\sqrt{\frac{2^m}{r}}} = \frac{\sum^{\frac{2^m}{r}-1}_{j=0}|t_0 + jr, a^{\overline{x}} Mod N\rangle}{\sqrt{\frac{2^m}{r}}}$$
#
# where $t_0$ is the smallest $x$ for which $a^{t_0} \equiv a^{\overline{x}} Mod N$.
#
# To boil this down, we want to estimate $\overline{x}$, since that gives us a value to plug into the function $f_{a,N}$, which in turn gives us the period base we are looking for. Hence this stage starts with preparing the states and ends with taking measurements of $|y\rangle$ such that a satisfying value can be detected. *(Do we use that value and re-instantiate the x qubits? Or is the superpositioned x state 'automatically manipulated' by our measurements of y such that it simply takes on our preferred value?)*
#
#
# #### Quantum circuit
# So how do we construct the black box unitary? Allow us to establish a relationship between the period of the function $f_{a,N}$ and the phase of the eigenvalue;
# this way, solving for the phase helps us find the period.
#
# Phase estimation can be described as follows: if we know $U$ and $|\psi\rangle$, we can estimate $\phi$:
#
# $$U|\psi\rangle = e^{2\pi i\phi}|\psi\rangle$$
#
# Proceeding with the following specifications:
# $$N = 4 = 100_{bin}$$
# $$a = 3$$
# $$n = 3$$
# $$r = 2 = 10_{bin}$$
# $$freq = (1, 3)$$
# $$m = 6$$
# ### Define registers
x = QuantumRegister(6, 'x')
y = QuantumRegister(3, 'y')
c1 = ClassicalRegister(3, 'c1')
c2 = ClassicalRegister(6, 'c2')
# ### QFT - Quantum Fourier Transform
q1 = QuantumCircuit(x, y, c1, c2)
q1.barrier()
q1.h(x[0])
q1.crz(np.pi, x[0], x[1])  # Qiskit rotation angles are in radians, not degrees
q1.h(x[1])
q1.crz(np.pi / 2, x[0], x[2])
q1.crz(np.pi, x[1], x[2])
q1.h(x[2])
q1.barrier()
q1.crz(np.pi / 4, x[0], x[3])
q1.crz(np.pi / 2, x[1], x[3])
q1.crz(np.pi, x[2], x[3])
q1.h(x[3])
q1.barrier()
q1.measure([x[0], x[1], x[2], x[3]], [c2[0], c2[1], c2[2], c2[3]])
circuit_drawer(q1, output='mpl', plot_barriers=False)
sim = Aer.get_backend('qasm_simulator')
count = execute(q1, sim).result().get_counts()
plot_histogram(count)
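# As a cross-check that needs no quantum backend: the $n$-qubit QFT equals (up to qubit ordering) the unitary DFT matrix $F_{jk} = \omega^{jk}/\sqrt{2^n}$ with $\omega = e^{2\pi i / 2^n}$, which we can build and verify directly:

```python
import cmath

def qft_matrix(n_qubits):
    # Unitary DFT matrix F[j][k] = w^(j*k) / sqrt(dim), with w = exp(2*pi*i/dim).
    dim = 2 ** n_qubits
    w = cmath.exp(2j * cmath.pi / dim)
    norm = dim ** -0.5
    return [[norm * w ** (j * k) for k in range(dim)] for j in range(dim)]

def is_unitary(M, tol=1e-9):
    # Check that (M^dagger M) equals the identity, entry by entry.
    dim = len(M)
    return all(
        abs(sum(M[i][j].conjugate() * M[i][k] for i in range(dim)) - (j == k)) < tol
        for j in range(dim) for k in range(dim)
    )

assert is_unitary(qft_matrix(3))  # the 8x8 QFT on 3 qubits is unitary
```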
# #### To be continued...
| Qiskit_7_Shores.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Spam Message Classification using Deep Learning
#
# Natural language processing (NLP) for classifying SMS-messages as spam or not spam, based on the SMS contents.
# ## Install the Required Libraries
# !pip3 install -q --upgrade numpy==1.19.2 pandas==1.1.2 matplotlib==3.3.2 scikit-learn==0.23.2 keras==2.4.0 pydot==1.4.1
# ## Import Libraries
# +
import numpy as np
import pandas as pd
import tensorflow as tf
from collections import Counter
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Embedding, LSTM, Dropout, Dense
from keras.models import Sequential
from keras.utils import to_categorical, plot_model
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
# %matplotlib inline
# -
# ## Load the Dataset
sms_messages = pd.read_csv('../data/spam.csv', encoding='latin1')
sms_messages = sms_messages.iloc[:, [0, 1]]
sms_messages.columns = ["label", "message"]
sms_messages.head()
sms_messages.shape
# ## Exploratory Data Analysis
# ### Label Count
label_count = sms_messages["label"].value_counts(sort=True)
label_count.plot(kind = "bar", color = ["green", "red"])
plt.title("Label Count: Not Spam vs. Spam")
plt.show()
# ### Message Uniqueness
sms_messages.groupby("label").describe()
# ### Feature Engineering: Message Length
sms_messages["length"] = sms_messages["message"].apply(len)
sms_messages.head()
# ### Compare Message Length by Label
# +
plt.figure(figsize=(12, 8))
sms_messages[sms_messages.label=="ham"].length.plot(
bins=35, kind="hist", color="green", label="Ham messages", alpha=0.6)
sms_messages[sms_messages.label=="spam"].length.plot(
kind="hist", color="red", label="Spam messages", alpha=0.6)
plt.legend()
plt.xlabel("Message Length")
# -
# ### Most Common Words
# +
common_ham = Counter(" ".join(sms_messages[sms_messages["label"] == "ham"]["message"]).split()).most_common(20)
df_common_ham = pd.DataFrame.from_dict(common_ham)
df_common_ham = df_common_ham.rename(columns={0: "Word", 1: "Count"})
common_spam = Counter(" ".join(sms_messages[sms_messages["label"] == "spam"]["message"]).split()).most_common(20)
df_common_spam = pd.DataFrame.from_dict(common_spam)
df_common_spam = df_common_spam.rename(columns={0: "Word", 1: "Count"})
# +
df_common_ham.plot.bar(legend = False, color = "green")
y_pos = np.arange(len(df_common_ham["Word"]))
plt.xticks(y_pos, df_common_ham["Word"])
plt.title("Most frequent word in the ham messages")
plt.xlabel("Words")
plt.ylabel("Count")
plt.show()
df_common_spam.plot.bar(legend = False, color = "red")
y_pos = np.arange(len(df_common_spam["Word"]))
plt.xticks(y_pos, df_common_spam["Word"])
plt.title("Most frequent word in the spam messages")
plt.xlabel("Words")
plt.ylabel("Count")
plt.show()
# -
# ## Build the Model
# ### Preprocessing
vocabular_size = 400
oov_token = "<OOV>"
max_length = 250
embedding_dimension = 16
number_of_epochs = 50
column_encoding = ({"ham": 0, "spam": 1})
sms_messages = sms_messages.replace(column_encoding)
sms_messages.head()
X = sms_messages["message"]
Y = sms_messages["label"]
tokenizer = Tokenizer(num_words = vocabular_size, oov_token = oov_token)
tokenizer.fit_on_texts(X)
X = tokenizer.texts_to_sequences(X)
X = np.array(X)
y = np.array(Y)
X = pad_sequences(X, maxlen = max_length)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 7)
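# To make the tokenize-and-pad step concrete, here is a minimal pure-Python sketch of the same idea (a simplification, not the Keras implementation): words are indexed by frequency rank, index 1 is reserved for out-of-vocabulary tokens, and sequences are left-padded with zeros:

```python
from collections import Counter

def fit_tokenizer(texts, num_words):
    # Index words by frequency rank; 0 is padding, 1 is out-of-vocabulary.
    counts = Counter(w for t in texts for w in t.lower().split())
    ranked = [w for w, _ in counts.most_common(num_words - 2)]
    return {w: i + 2 for i, w in enumerate(ranked)}

def texts_to_padded(texts, vocab, maxlen):
    rows = []
    for t in texts:
        seq = [vocab.get(w, 1) for w in t.lower().split()]
        rows.append([0] * (maxlen - len(seq)) + seq[-maxlen:])  # left-pad / left-truncate
    return rows

vocab = fit_tokenizer(["free prize win win", "see you at lunch"], num_words=6)
rows = texts_to_padded(["win a free prize", "lunch?"], vocab, maxlen=5)
print(rows)
```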
# ### Design the Model Architecture
# +
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocabular_size, embedding_dimension, input_length = max_length),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(24, activation = "relu"),
tf.keras.layers.Dense(1, activation = "sigmoid")
])
model.compile(loss = "binary_crossentropy", optimizer = "adam", metrics = ["accuracy"])
model.summary()
# -
# Requires graphviz installed to work
plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
history = model.fit(X_train, y_train, epochs = number_of_epochs, validation_data = (X_test, y_test), verbose = 2)
# +
result = model.evaluate(X_test, y_test)
loss = result[0]
accuracy = result[1]
print(f"[+] Accuracy: {accuracy * 100:.2f}%")
# -
# ### Save the Model
tf.keras.models.save_model(
model,
"../model",
)
# ### Test Prediction
# +
import pandas as pd
import tensorflow as tf
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
loaded_model = tf.keras.models.load_model(
"../model",
custom_objects = None,
compile = True,
)
# +
sms_messages = pd.read_csv('../data/spam.csv', encoding='latin1')
sms_messages = sms_messages.iloc[:, [1]]
sms_messages.columns = ["message"]
X = sms_messages["message"]
def get_predictions(txts):
tokenizer = Tokenizer(num_words = 400, oov_token = "<OOV>")
tokenizer.fit_on_texts(X)
txts = tokenizer.texts_to_sequences(txts)
txts = pad_sequences(txts, maxlen=250)
preds = loaded_model.predict(txts)
print(preds)
if(preds[0] > 0.5):
print("SPAM MESSAGE")
else:
print('NOT SPAM')
# -
# Spam message
txts=["Free entry in 2 a weekly competition to win FA Cup final tkts 21st May 2005"]
get_predictions(txts)
# Not Spam
txts = ["Hi man, I was wondering if we can meet tomorrow."]
get_predictions(txts)
| analytics/notebooks/Deep Learning - Spam Message Classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# from selenium import webdriver
# # Using Chrome to access web
# driver = webdriver.Chrome()
# #get url
# driver.get('https://www.idx.co.id/perusahaan-tercatat/laporan-keuangan-dan-tahunan/')
# #selecting option from period
# driver.find_element_by_xpath ("//select[@id ='periodList']/option[text()='Triwulan 1']").click()
# +
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from bs4 import BeautifulSoup
import requests
import re
import pandas as pd
import os
import time
url = "https://www.idx.co.id/perusahaan-tercatat/laporan-keuangan-dan-tahunan/"
#create a new chrome session
driver = webdriver.Chrome()
driver.implicitly_wait(30)
driver.get(url)
# selecting option from period
driver.find_element_by_xpath ("//select[@id ='periodList']/option[text()='Triwulan 2']").click()
#click cari
driver.find_element_by_xpath ("//button[@id = 'searchButton']").click()
pageCounter = 0
time.sleep(3)
#BeautifulSoup grabs all financial report links
while pageCounter < 2:
soup_level1 = BeautifulSoup(driver.page_source, 'lxml')
for div in soup_level1.findAll('div', attrs ={'class':'financial-report-download ng-scope'}):
links = div.findAll('a', attrs = {'class':'ng-binding'}, href=re.compile("FinancialStatement"))
for a in links:
# driver.find_element_by_xpath("//div[@ng-repeat = 'attachments in res.Attachments']").click()
files = [url + a['href']]
for file in files:
file_name = file.split('/')[-1]
print ("Downloading file:%s"%file_name)
# create response object
r = requests.get(file, stream = True)
# download started
with open(file_name, 'wb') as f:
for chunk in r.iter_content(chunk_size=10000):
if chunk:
f.write(chunk)
print ("%s downloaded!\n"%file_name)
driver.find_element_by_xpath("//a[@href=''][text()='›']").click()
time.sleep(3)
pageCounter=pageCounter+1
# -
# urllib is part of the Python standard library, so no installation is needed
# +
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from bs4 import BeautifulSoup
import requests
import re
import pandas as pd
import os
import time
url = "https://www.idx.co.id/perusahaan-tercatat/laporan-keuangan-dan-tahunan/"
#create a new chrome session
driver = webdriver.Chrome()
driver.implicitly_wait(30)
driver.get(url)
# selecting option from period
driver.find_element_by_xpath ("//select[@id ='periodList']/option[text()='Triwulan 2']").click()
#click cari
driver.find_element_by_xpath ("//button[@id = 'searchButton']").click()
pageCounter = 0
time.sleep(3)
# driver.find_element_by_xpath("//div[@ng-repeat = 'attachments in res.Attachments']").click()
driver.find_element_by_xpath("//a[@href=''][text()='›']").click()
# -
requests.get('https://www.idx.co.id/Portals/0/StaticData/ListedCompanies/Corporate_Actions/New_Info_JSX/Jenis_Informasi/01_Laporan_Keuangan/02_Soft_Copy_Laporan_Keuangan//Laporan%20Keuangan%20Tahun%202019/TW2/AALI/AALI_LK%20TW%20II%202019.pdf')
import requests
res = requests.get('https://www.idx.co.id/Portals/0/StaticData/ListedCompanies/Corporate_Actions/New_Info_JSX/Jenis_Informasi/01_Laporan_Keuangan/02_Soft_Copy_Laporan_Keuangan//Laporan%20Keuangan%20Tahun%202019/TW2/AALI/AALI_LK%20TW%20II%202019.pdf')
res.raise_for_status()
playFile = open('laporancoba.pdf', 'wb')
for chunk in res.iter_content(100000):
playFile.write(chunk)
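# Note that `file.split('/')[-1]` keeps percent-escapes, so downloads get names like `AALI_LK%20TW%20II%202019.pdf`. A small standard-library sketch (with a shortened, illustrative URL) that decodes them into readable filenames:

```python
from urllib.parse import unquote, urlsplit

def filename_from_url(url):
    # Last path segment of the URL with percent-escapes (e.g. %20) decoded.
    return unquote(urlsplit(url).path.split('/')[-1])

url = ('https://www.idx.co.id/Portals/0/StaticData/ListedCompanies/'
       'TW2/AALI/AALI_LK%20TW%20II%202019.pdf')
print(filename_from_url(url))  # AALI_LK TW II 2019.pdf
```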
# +
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from bs4 import BeautifulSoup
import requests
import re
import pandas as pd
import os
import time
url = "https://www.idx.co.id/perusahaan-tercatat/laporan-keuangan-dan-tahunan/"
#create a new chrome session
driver = webdriver.Chrome()
driver.implicitly_wait(30)
driver.get(url)
# selecting option from period
driver.find_element_by_xpath ("//select[@id ='periodList']/option[text()='Triwulan 2']").click()
time.sleep(3)
#click cari
driver.find_element_by_xpath ("//button[@id = 'searchButton']").click()
pageCounter = 0
time.sleep(3)
#BeautifulSoup grabs all financial report links
while pageCounter < 2:
soup_level1 = BeautifulSoup(driver.page_source, 'lxml')
for div in soup_level1.findAll('div', attrs ={'class':'financial-report-download ng-scope'}):
links = div.findAll('a', attrs = {'class':'ng-binding'}, href=re.compile("FinancialStatement"))
for a in links:
# driver.find_element_by_xpath("//div[@ng-repeat = 'attachments in res.Attachments']").click()
files = [url + a['href']]
for file in files:
file_name = file.split('/')[-1]
print ("Downloading file:%s"%file_name)
# create response object
r = requests.get(file, stream = True)
# download started
with open(file_name, 'wb') as f:
f.write(r.content)
time.sleep(20)
print ("%s downloaded!\n"%file_name)
driver.find_element_by_xpath("//a[@href=''][text()='›']").click()
time.sleep(3)
pageCounter=pageCounter+1
# -
| Project/IDX_scrapping.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:auction]
# language: python
# name: conda-env-auction-py
# ---
# # Learning Position Auctions
#
# In this tutorial, we will extend the ideas from the [previous tutorial](learning-auctions-interdependence.ipynb). We will consider position auctions, like those found in paid search marketplaces.
#
# ## Motivating example
#
# Consider a three-bidder, two-slot position auction where the values for the three bidders are correlated. There is a signal $c\sim U[0,1]$, which we interpret as a _conversion rate_. The value of the item for bidder 1 is a random variable $v_1 = x_1 c$ where $x_1 \sim U[0,1]$, similarly for bidder 2 and bidder 3 with independent $x_i \sim U[0,1]$.
#
# The first slot has a click-through-rate (quality) of 1. The second slot has a click-through-rate of 0.5. A bidder may purchase one slot only, so we can consider this a special case of a multi-item, unit-demand scenario.
#
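# A quick sketch of sampling from this correlated value model (standard library only; the shared draw of $c$ is what correlates the bidders' values):

```python
import random

def sample_values(n_bidders=3, n_samples=10000, seed=0):
    # v_i = x_i * c with a shared conversion rate c ~ U[0,1] and x_i ~ U[0,1].
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        c = rng.random()
        samples.append([rng.random() * c for _ in range(n_bidders)])
    return samples

vals = sample_values()
mean_v1 = sum(v[0] for v in vals) / len(vals)
print(mean_v1)  # close to E[v_i] = E[x_i] E[c] = 1/4
```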
# ## Architectures and supporting functions
#
# As in the previous tutorial, we will make use of RegretNet.
#
# ### Preliminaries
#
# We will make heavy use of numpy, pandas, and pytorch.
# +
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
device = torch.device("cuda:1") if torch.cuda.is_available() else torch.device('cpu')
# -
# We will also make use of matplotlib and seaborn for visualization:
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# ### Common components
#
# We add common `dmch` components:
import dmch
from dmch import Mechanism
from dmch import SequentialMechanism
from dmch import create_spa_mechanism
# Now we define the auction scenario:
# +
# Number of bidders (a two-bidder version of the motivating example, for ease of visualization)
bidders = 2
# Pr(click|position)
slot_weights = [1, 0.5]
# Number of slots
slots = len(slot_weights)
# -
# ## GSP
#
# For comparison, we define the GSP mechanism by using sequential second-price auctions (SPA):
def create_gsp_mechanism():
return SequentialMechanism(
[create_spa_mechanism(bidders) for _ in range(len(slot_weights))],
bidders=bidders,
weights=slot_weights)
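# As a plain-Python reference point (a textbook GSP sketch, independent of the dmch implementation above): each slot goes to the next-highest remaining bidder, who pays the bid of the bidder ranked just below them per click:

```python
def gsp_outcome(bids, weights):
    # Textbook GSP: slot s goes to the (s+1)-th highest bidder, who pays the
    # next-highest bid per click; returns (winner, per-click price) per slot.
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    outcome = []
    for s in range(len(weights)):
        next_bid = bids[order[s + 1]] if s + 1 < len(order) else 0.0
        outcome.append((order[s], next_bid))
    return outcome

# Two bidders competing for slot qualities [1, 0.5], as above.
print(gsp_outcome([0.8, 0.3], [1, 0.5]))  # [(0, 0.3), (1, 0.0)]
```

Expected revenue then weights each per-click price by its slot quality, e.g. `1 * 0.3 + 0.5 * 0.0` for the bid profile above.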
# ## RegretNet
#
# The allocation network is defined as follows:
def create_position_regret_net(bidders, hidden_layers=2, hidden_units=10, use_residuals=False, act_layer=nn.Sigmoid):
mbuilder = dmch.build_mechanism(bidders)
# create a layer to correctly size the hidden layers
mbuilder.allocation_builder.add_linear_layer(hidden_units)
mbuilder.payment_builder.add_linear_layer(hidden_units)
for _ in range(hidden_layers):
if use_residuals:
mbuilder.allocation_builder.add_residual_layer(act_layer=act_layer)
mbuilder.payment_builder.add_residual_layer(act_layer=act_layer)
else:
mbuilder.allocation_builder.add_linear_layer(hidden_units)
mbuilder.payment_builder.add_linear_layer(hidden_units)
mbuilder.allocation_builder.add_activation(act_layer())
mbuilder.payment_builder.add_activation(act_layer())
return mbuilder.build_sequential(slots,weights=slot_weights)
# ## Auction for the motivating example
#
#
# The networks will train on data that is sampled from the value distribution, which is loaded into a `DataLoader`.
# +
import torch.utils.data as data_utils
sample_size = 2**11
batch_size = 2**8
epochs = 1000
inputs = torch.rand(sample_size, bidders)
eval_inputs = torch.rand(sample_size, bidders)
inputs_loader=data_utils.DataLoader(
data_utils.TensorDataset(inputs),
batch_size=batch_size)
eval_inputs_loader=data_utils.DataLoader(
data_utils.TensorDataset(eval_inputs),
batch_size=batch_size)
# -
# Before training the networks, let's establish a GSP baseline.
gsp = create_gsp_mechanism()
gsp_report = pd.DataFrame(
dmch.evaluate(
gsp,
inputs_loader,
bidders,
epochs=epochs,
device=device,
misreport_lr=1e-1,
misreport_epochs=100))
# We now create a simple RegretNet instance.
regret_net = create_position_regret_net(bidders).to(device)
# We loop over the data for a number of epochs and record traces of the networks learning.
regret_net_report = pd.DataFrame(
dmch.train(
regret_net,
inputs_loader,
bidders,
epochs=epochs,
device=device,
rho=1e2,
mechanism_lr=1e-2,
misreport_lr=1e-1,
misreport_epochs=10))
# Next, let's review the DSIC violations of the network. In the figure below, we can see that the network has a large initial violation, then quickly finds a region with low and decreasing violation.
fig, ax = plt.subplots(figsize=(8,6));
ax.plot(regret_net_report.groupby('epoch').mean()[['total_ir_violation']], label='RegretNet');
ax.legend();
fig, ax = plt.subplots(figsize=(8,6));
#ax.axhline(y=gsp_report.mean()[['total_dsic_violation']].values);
ax.plot(regret_net_report.groupby('epoch').mean()[['total_dsic_violation']], label='RegretNet');
ax.legend();
# Let's construct an evaluation data set.
regret_net_eval_report = pd.DataFrame(
dmch.evaluate(
regret_net,
eval_inputs_loader,
bidders,
device=device,
misreport_lr=1e-1,
misreport_epochs=100))
# The validated stats are as follows:
regret_net_eval_report.describe()
# We can compare this to the Myerson mechanism.
def myerson():
sorted_inputs,_ = torch.cat((inputs, torch.zeros(inputs.shape)),dim=1).sort(dim=1, descending=True)
virtual_inputs = 2*sorted_inputs-1
virtual_payments = (virtual_inputs[:,0:2]>=0).float() * (F.relu(virtual_inputs[:,1:3]))
payments = (virtual_inputs[:,0:2]>=0).float()*((virtual_payments+1)/2)
payments = payments[:,0] + 0.5 * payments[:,1]
return payments.mean().cpu().numpy()
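# The `2*sorted_inputs - 1` line above is the Myerson virtual value for uniform values. A quick standalone check (assuming values drawn from Uniform[0,1], so $F(v) = v$ and $f(v) = 1$):

```python
import numpy as np

# Myerson virtual value: phi(v) = v - (1 - F(v)) / f(v).
# For v ~ Uniform[0,1] this reduces to 2v - 1.
v = np.linspace(0.0, 1.0, 5)
phi = v - (1.0 - v) / 1.0   # evenly spaced from -1 to 1
print(phi)
```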
# Let's review the revenue of the network: RegretNet exceeds GSP revenue with less violation and approaches Myerson revenue.
fig, ax = plt.subplots(figsize=(8,6));
ax.axhline(y=myerson(), label='Myerson', color='k');
ax.axhline(y=gsp_report.mean()[['revenue']].values, color='g', label='GSP');
ax.plot(regret_net_report.groupby('epoch').mean()[['revenue']], label='RegretNet');
ax.axhline(y=regret_net_eval_report[['revenue']].mean()[0], color='b', label='Validated RegretNet')
ax.legend();
# +
def plot_mechanism(mechanism):
X, Y = np.meshgrid(
np.arange(0.0, 1.0, 0.01),
np.arange(0.0, 1.0, 0.01))
inputs = torch.cat(
(torch.from_numpy(np.reshape(X, (100*100,1))),
torch.from_numpy(np.reshape(Y, (100*100,1)))),
dim=1).float().to(device)
allocation, payment = mechanism(inputs)
allocation_levels = np.arange(0, 1.5, 0.01)
bid_levels = np.arange(0, 1.0, 0.01)
fig, axes = plt.subplots(nrows=3, ncols=bidders+1, figsize=(20,15));
def plot_contour(tensor,axis_index,bidder_title,main_title,levels):
for bidder in range(bidders):
CS = axes[axis_index,bidder].tricontourf(
inputs[:,0].cpu().numpy(),
inputs[:,1].cpu().numpy(),
tensor[:,bidder].detach().cpu().numpy(),
levels=levels,
cmap="RdBu_r",
extend='both');
fig.colorbar(CS, ax=axes[axis_index,bidder]);
axes[axis_index,bidder].set_title(bidder_title+str(bidder));
CS = axes[axis_index,bidders].tricontourf(
inputs[:,0].cpu().numpy(),
inputs[:,1].cpu().numpy(),
tensor.sum(dim=1).detach().cpu().numpy(),
levels=levels,
cmap="RdBu_r",
extend='both');
fig.colorbar(CS, ax=axes[axis_index,bidders]);
axes[axis_index,bidders].set_title(main_title);
plot_contour(allocation,0,'Allocation to bidder ','Allocation to all bidders',allocation_levels)
plot_contour(payment,1,'Payment from bidder ','Payment from all bidders',allocation_levels)
plot_contour(payment/allocation,2,'Price for bidder ','Price for all bidders',bid_levels)
plot_mechanism(gsp)
plot_mechanism(regret_net)
| learning-position-auctions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="XHce_ManvqTK" colab_type="text"
# Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks.
# - Author: <NAME>
# - GitHub Repository: https://github.com/rasbt/deeplearning-models
# + [markdown] colab_type="text" id="vY4SK0xKAJgm"
# # Model Zoo -- Multilayer bidirectional RNN with LSTM
# + [markdown] colab_type="text" id="sc6xejhY-NzZ"
# Demo of a bidirectional RNN for sentiment classification (here: a binary classification problem with two labels, positive and negative) using LSTM (Long Short-Term Memory) cells.
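# As a quick standalone shape check of the architecture used below: a bidirectional LSTM concatenates forward and backward hidden states, which doubles the output feature size and is why the final linear layer below uses `hidden_dim*2`.

```python
import torch
import torch.nn as nn

# A 2-layer bidirectional LSTM: output features are 2 * hidden_size.
rnn = nn.LSTM(input_size=8, hidden_size=16, num_layers=2, bidirectional=True)
x = torch.randn(5, 3, 8)   # (seq_len, batch, input_size)
out, (h, c) = rnn(x)
print(out.shape)           # torch.Size([5, 3, 32])
print(h.shape)             # torch.Size([4, 3, 16]): (layers * directions, batch, hidden)
```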
# + id="p6eOuUtTvwEz" colab_type="code" colab={}
# !pip install -q IPython
# !pip install -q ipykernel
# !pip install -q watermark
# !pip install -q matplotlib
# !pip install -q sklearn
# !pip install -q pandas
# !pip install -q pydot
# !pip install -q hiddenlayer
# !pip install -q graphviz
# + colab_type="code" id="moNmVfuvnImW" colab={"base_uri": "https://localhost:8080/", "height": 121} outputId="ce68ea59-4833-4e08-d242-2d90453ed3b6"
# %load_ext watermark
# %watermark -a '<NAME>' -v -p torch
import torch
import torch.nn.functional as F
from torchtext import data
from torchtext import datasets
import time
import random
torch.backends.cudnn.deterministic = True
# + [markdown] colab_type="text" id="GSRL42Qgy8I8"
# ## General Settings
# + colab_type="code" id="OvW1RgfepCBq" colab={}
RANDOM_SEED = 123
torch.manual_seed(RANDOM_SEED)
VOCABULARY_SIZE = 20000
LEARNING_RATE = 1e-4
BATCH_SIZE = 128
NUM_EPOCHS = 15
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
BIDIRECTIONAL = True
EMBEDDING_DIM = 128
NUM_LAYERS = 2
HIDDEN_DIM = 128
OUTPUT_DIM = 1
# + [markdown] colab_type="text" id="mQMmKUEisW4W"
# ## Dataset
# + [markdown] colab_type="text" id="4GnH64XvsV8n"
# Load the IMDB Movie Review dataset:
# + colab_type="code" id="WZ_4jiHVnMxN" colab={"base_uri": "https://localhost:8080/", "height": 104} outputId="dd7e128f-fb9a-4d49-893d-f8be861d2a7d"
TEXT = data.Field(tokenize='spacy',
include_lengths=True) # necessary for packed_padded_sequence
LABEL = data.LabelField(dtype=torch.float)
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
train_data, valid_data = train_data.split(random_state=random.seed(RANDOM_SEED),
split_ratio=0.8)
print(f'Num Train: {len(train_data)}')
print(f'Num Valid: {len(valid_data)}')
print(f'Num Test: {len(test_data)}')
# + [markdown] colab_type="text" id="L-TBwKWPslPa"
# Build the vocabulary based on the top "VOCABULARY_SIZE" words:
# + colab_type="code" id="e8uNrjdtn4A8" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="3eeec361-cb3f-4816-b57a-a80693565b3f"
TEXT.build_vocab(train_data, max_size=VOCABULARY_SIZE)
LABEL.build_vocab(train_data)
print(f'Vocabulary size: {len(TEXT.vocab)}')
print(f'Number of classes: {len(LABEL.vocab)}')
# + [markdown] colab_type="text" id="JpEMNInXtZsb"
# The TEXT.vocab dictionary will contain the word counts and indices. The reason the number of words is VOCABULARY_SIZE + 2 is that it contains two special tokens for padding and unknown words: `<unk>` and `<pad>`.
# + [markdown] colab_type="text" id="eIQ_zfKLwjKm"
# Make dataset iterators:
# + colab_type="code" id="i7JiHR1stHNF" colab={}
train_loader, valid_loader, test_loader = data.BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size=BATCH_SIZE,
sort_within_batch=True, # necessary for packed_padded_sequence
device=DEVICE)
# + [markdown] colab_type="text" id="R0pT_dMRvicQ"
# Testing the iterators (note that the number of rows depends on the longest document in the respective batch):
# + colab_type="code" id="y8SP_FccutT0" colab={"base_uri": "https://localhost:8080/", "height": 207} outputId="69b62367-2ea1-490c-d5dc-df109a4dcb19"
print('Train')
for batch in train_loader:
print(f'Text matrix size: {batch.text[0].size()}')
print(f'Target vector size: {batch.label.size()}')
break
print('\nValid:')
for batch in valid_loader:
print(f'Text matrix size: {batch.text[0].size()}')
print(f'Target vector size: {batch.label.size()}')
break
print('\nTest:')
for batch in test_loader:
print(f'Text matrix size: {batch.text[0].size()}')
print(f'Target vector size: {batch.label.size()}')
break
# + [markdown] colab_type="text" id="G_grdW3pxCzz"
# ## Model
# + colab_type="code" id="nQIUm5EjxFNa" colab={}
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim):
super().__init__()
self.embedding = nn.Embedding(input_dim, embedding_dim)
self.rnn = nn.LSTM(embedding_dim,
hidden_dim,
num_layers=NUM_LAYERS,
bidirectional=BIDIRECTIONAL)
self.fc = nn.Linear(hidden_dim*2, output_dim)
def forward(self, text, text_length):
#[sentence len, batch size] => [sentence len, batch size, embedding size]
embedded = self.embedding(text)
packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, text_length)
packed_output, (hidden, cell) = self.rnn(packed)
# combine both directions
combined = torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim=1)
return self.fc(combined.squeeze(0)).view(-1)
# + colab_type="code" id="Ik3NF3faxFmZ" colab={}
INPUT_DIM = len(TEXT.vocab)
torch.manual_seed(RANDOM_SEED)
model = RNN(INPUT_DIM, EMBEDDING_DIM, HIDDEN_DIM, OUTPUT_DIM)
model = model.to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
# + id="pFQwMhbVv10i" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="eb45dca2-89fb-4b1e-9a10-186650a1e5af"
import hiddenlayer as hl
batch = next(iter(train_loader))
hl.build_graph(model, batch.text)
# + [markdown] colab_type="text" id="Lv9Ny9di6VcI"
# ## Training
# + colab_type="code" id="T5t1Afn4xO11" colab={}
def compute_binary_accuracy(model, data_loader, device):
model.eval()
correct_pred, num_examples = 0, 0
with torch.no_grad():
for batch_idx, batch_data in enumerate(data_loader):
text, text_lengths = batch_data.text
logits = model(text, text_lengths)
predicted_labels = (torch.sigmoid(logits) > 0.5).long()
num_examples += batch_data.label.size(0)
correct_pred += (predicted_labels == batch_data.label.long()).sum()
return correct_pred.float()/num_examples * 100
# + colab_type="code" id="EABZM8Vo0ilB" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="db469ce7-4df6-43ff-b1ca-02ebc1f837bf"
start_time = time.time()
for epoch in range(NUM_EPOCHS):
model.train()
for batch_idx, batch_data in enumerate(train_loader):
text, text_lengths = batch_data.text
### FORWARD AND BACK PROP
logits = model(text, text_lengths)
cost = F.binary_cross_entropy_with_logits(logits, batch_data.label)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 50:
print (f'Epoch: {epoch+1:03d}/{NUM_EPOCHS:03d} | '
f'Batch {batch_idx:03d}/{len(train_loader):03d} | '
f'Cost: {cost:.4f}')
with torch.set_grad_enabled(False):
print(f'training accuracy: '
f'{compute_binary_accuracy(model, train_loader, DEVICE):.2f}%'
f'\nvalid accuracy: '
f'{compute_binary_accuracy(model, valid_loader, DEVICE):.2f}%')
print(f'Time elapsed: {(time.time() - start_time)/60:.2f} min')
print(f'Total Training Time: {(time.time() - start_time)/60:.2f} min')
print(f'Test accuracy: {compute_binary_accuracy(model, test_loader, DEVICE):.2f}%')
# + colab_type="code" id="jt55pscgFdKZ" colab={}
import spacy
nlp = spacy.load('en')
def predict_sentiment(model, sentence):
# based on:
# https://github.com/bentrevett/pytorch-sentiment-analysis/blob/
# master/2%20-%20Upgraded%20Sentiment%20Analysis.ipynb
model.eval()
tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
indexed = [TEXT.vocab.stoi[t] for t in tokenized]
length = [len(indexed)]
tensor = torch.LongTensor(indexed).to(DEVICE)
tensor = tensor.unsqueeze(1)
length_tensor = torch.LongTensor(length)
prediction = torch.sigmoid(model(tensor, length_tensor))
return prediction.item()
# + colab_type="code" id="O4__q0coFJyw" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="f0173eef-fe5e-4646-9558-e487d87ae154"
print('Probability positive:')
predict_sentiment(model, "I really love this movie. This movie is so great!")
# + colab_type="code" id="7lRusB3dF80X" colab={}
| pytorch_ipynb/rnn/rnn_lstm_bi_imdb.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:word2vecenv]
# language: python
# name: conda-env-word2vecenv-py
# ---
# # Implementation of word2vec on Stanford Sentiment Treebank (SST) dataset
# “You shall know a word by the company it keeps” (<NAME>)
# ## Introduction
# This notebook is a step-by-step guide to implementing the word2vec skip-gram model on the Stanford Sentiment Treebank (SST) dataset, and is the solution to the coding sections of [Assignment #2](http://web.stanford.edu/class/cs224n/assignments/a2.pdf) of Stanford's ["CS224n: Natural Language Processing with Deep Learning"](http://web.stanford.edu/class/cs224n/) course. Contents of this notebook are taken from the course materials. <br>
# I recommend reading the original papers [1,2] and all the course materials on word2vec (especially this [one](http://web.stanford.edu/class/cs224n/readings/cs224n-2019-notes01-wordvecs1.pdf)) before proceeding to the implementation. But if you are looking for a shortcut, [this link](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/) covers all the major points in both papers.
# ## Conda environment
# First you need to create a conda virtual environment with all the necessary packages to run the code. Run the following command from within the repo directory to create a new env named "word2vecenv":
# + active=""
# conda env create -f env.yml
# -
# Activate the "word2vecenv" that you just created:
# + active=""
# source activate word2vecenv (or conda activate depending on your OS and anaconda version)
# -
# Installing the IPython kernel in your env:
# + active=""
# conda install ipykernel
# ipython kernel install --user
# -
# Now switch your notebook's kernel to "word2vec" env.
# ## Understanding negative sampling
# The original word2vec paper [1] proposed "Naive softmax loss" as objective function ($J$):
# $- \sum^{2m}_{j=0,j \neq m} u^T_{c-m+j}v_c + 2m \log \sum_{k=1}^{|V|} \exp(u_k^T v_c) $
# in which $v_c$ is the word vector of the center word, $u_j$ is the word vector of outside word $j$, $|V|$ is the vocabulary size and $m$ is the window size. Note that every time we update or evaluate $J$ we need to do a summation over the entire vocabulary (a sum of $|V|$ terms), which is on the order of millions and computationally expensive! That's why the authors of the original paper came up with the idea of "Negative sampling loss" [2] to approximate the softmax normalization term (the sum in the above equation). The idea is that rather than looping over the entire vocabulary to do the summation, we generate negative samples and use them to estimate the objective function. We will use the latter in this notebook.
# Consider a pair $(w, c)$ of word and context. Did this pair come from the training data? Let’s denote by $P(D = 1|w, c)$ the probability that $(w, c)$ came from the corpus data. Correspondingly, $P(D = 0|w, c)$ will be the probability that $(w, c)$ did not come from the corpus data. First, let’s model $P(D = 1|w, c)$ with the sigmoid function:
# $P(D = 1|w, c,\theta) = \sigma (u_w^T v_c) = \frac{1}{1+\exp(-u_w^T v_c)} $
# and naturally, if the pair did not come from the corpus, we will have:
# $P(D = 0|w, c,\theta) = 1 - P(D = 1|w, c,\theta) = 1 - \sigma (u_w^T v_c) = 1 - \frac{1}{1+\exp(-u_w^T v_c)} $
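# These two probabilities are complementary through the sigmoid identity $\sigma(-x) = 1 - \sigma(x)$, which the gradients below rely on. A quick numeric check:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# sigma(-x) == 1 - sigma(x) for all x
x = np.linspace(-5.0, 5.0, 11)
print(np.allclose(sigmoid(-x), 1.0 - sigmoid(x)))  # True
```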
# For every training step, instead of looping over the entire vocabulary, we can just sample several negative examples! We "sample" from a noise distribution ($P_n(w)$) whose probabilities match the ordering of the frequency of the vocabulary. For a given center word (vector), $v_c$, and outside (context) word, $u_o$, and $K$ negative samples, $\tilde{u}_k^T$, our objective function for Skip-gram model will be:
# $J_{neg-sample} (v_c,u_o,U) = -\log \sigma(u^T_{o}v_c) - \sum_{k=1}^{K} \log \sigma (-\tilde{u}^T_{k}v_c) $
# in which $U$ is the matrix of outside words. We will need the partial derivatives of $J_{neg-sample} (v_c,u_o,U)$ with respect to $v_c$, $u_o$ and $u_k$ for backpropagation (try to work out these derivatives from $J_{neg-sample} (v_c,u_o,U)$):
# $\partial J_{neg-sample} (v_c,u_o,U) / \partial v_c = -(1 - \sigma(u^T_o v_c))u_o + \sum_{k=1}^{K} (1-\sigma(-u_k^Tv_c)) u_k$
# $\partial J_{neg-sample} (v_c,u_o,U) / \partial u_o = - (1- \sigma (u_o^T v_c))v_c$
# $\partial J_{neg-sample} (v_c,u_o,U) / \partial u_k = (1- \sigma (-u_k^T v_c))v_c$
# We will use these derivatives to implement the *negSamplingLossAndGradient* function.
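# Before implementing it, the $\partial J / \partial v_c$ formula can be sanity-checked numerically with finite differences (a standalone sketch on random vectors; the names here are illustrative only):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
d, K = 5, 3
v_c = rng.normal(size=d)           # center word vector
u_o = rng.normal(size=d)           # outside word vector
U_neg = rng.normal(size=(K, d))    # K negative-sample vectors

def loss(v):
    # J = -log sigma(u_o . v) - sum_k log sigma(-u_k . v)
    return -np.log(sigmoid(u_o @ v)) - np.log(sigmoid(-U_neg @ v)).sum()

# analytic gradient wrt v_c from the formula above
grad = (-(1 - sigmoid(u_o @ v_c)) * u_o
        + ((1 - sigmoid(-U_neg @ v_c))[:, None] * U_neg).sum(axis=0))

# central-difference numeric gradient
eps = 1e-6
num = np.array([(loss(v_c + eps * e) - loss(v_c - eps * e)) / (2 * eps)
                for e in np.eye(d)])
print(np.allclose(grad, num, atol=1e-5))  # True
```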
# # Implementation
# ## Libraries
# +
import random
import numpy as np
from utils.treebank import StanfordSentiment
from utils.gradcheck import gradcheck_naive
from utils.utils import normalizeRows, softmax
import pickle
import matplotlib
import matplotlib.pyplot as plt
import time
import glob
import os.path as op
# Check Python Version
import sys
assert sys.version_info[0] == 3
assert sys.version_info[1] >= 5
# -
# Run the following command line code to fetch the Stanford Sentiment Treebank (SST) dataset:
# + active=""
# sh get_datasets.sh
# -
# ## Take the data for a spin!
# Let's take a look at the dataset first and see what's inside!
dataset = StanfordSentiment()
dataset.numSentences()
# There are 11855 sentences in the dataset.
len(dataset.tokens())
# and 19539 'tokens'. "dataset.tokens()" is a mapping from tokens (words) to indices.
dataset.tokens()['python']
# That is the index of 'python' in our dictionary!
# ## 1. Negative sampling implementation
# ### Sigmoid function
# Good ol' sigmoid function which we will use to calculate the loss:
def sigmoid(x):
"""
Arguments:
x -- A scalar or numpy array.
Return:
s -- sigmoid(x)
"""
sig_x=1/(1+np.exp(-x))
return sig_x
# ### Negative sampler:
# We are going to define *getNegativeSamples* to draw random negative samples from the dataset:
def getNegativeSamples(outsideWordIdx, dataset, K):
""" Samples K indexes which are not the outsideWordIdx """
negSampleWordIndices = [None] * K
for k in range(K):
newidx = dataset.sampleTokenIdx()
while newidx == outsideWordIdx:
newidx = dataset.sampleTokenIdx()
negSampleWordIndices[k] = newidx
return negSampleWordIndices
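# `dataset.sampleTokenIdx()` encapsulates the noise distribution $P_n(w)$. The word2vec paper [2] samples from unigram counts raised to the 3/4 power, which boosts rare words relative to their raw frequency; a standalone illustration with made-up counts:

```python
import numpy as np

# Hypothetical unigram counts for three words.
counts = np.array([1000.0, 100.0, 10.0])
raw = counts / counts.sum()
smoothed = counts**0.75 / (counts**0.75).sum()
print(raw)       # rarest word: ~0.9% of samples under raw frequency
print(smoothed)  # rarest word: ~2.6% under the 3/4-power distribution
```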
# ### Negative sampling loss and gradient:
# We are going to use the $\partial J_{neg-sample} (v_c,u_o,U) / \partial v_c$, $\partial J_{neg-sample} (v_c,u_o,U) / \partial u_o$ and $\partial J_{neg-sample} (v_c,u_o,U) / \partial u_k$ that we derived above to calculate the loss and gradient:
def negSamplingLossAndGradient(
centerWordVec,
outsideWordIdx,
outsideVectors,
dataset,
K=10
):
""" Negative sampling loss function for word2vec models
"""
negSampleWordIndices = getNegativeSamples(outsideWordIdx, dataset, K)
indices = [outsideWordIdx] + negSampleWordIndices
u_ws=outsideVectors[indices,:]
u_ws[1:,:]=-u_ws[1:,:]
sigmoid_uws=sigmoid(u_ws@centerWordVec.reshape(-1,1)).squeeze()
loss= -np.log(sigmoid_uws).sum()
gradCenterVec=(sigmoid_uws[0]-1)*u_ws[0,:]
for row in range(1,u_ws.shape[0]):
gradCenterVec=gradCenterVec-(1-sigmoid_uws[row])*u_ws[row,:]
gradOutsideVecs=np.zeros(outsideVectors.shape)
gradOutsideVecs[indices[0],:]=((sigmoid_uws[0]-1)*centerWordVec).reshape(-1,)
for i,idx in enumerate(indices[1:]):
gradOutsideVecs[idx,:]=gradOutsideVecs[idx,:]+((1-sigmoid_uws[i+1])*centerWordVec).reshape(-1,)
return loss, gradCenterVec, gradOutsideVecs
# ### Skipgram
# Given a minibatch including a center word and a list of outside words form the dataset, we will implement the *skipgram* function to calculate the loss and gradients:
def skipgram(currentCenterWord, windowSize, outsideWords, word2Ind,
centerWordVectors, outsideVectors, dataset,
word2vecLossAndGradient=negSamplingLossAndGradient):
""" Skip-gram model
Arguments:
currentCenterWord -- a string of the current center word
windowSize -- integer, context window size
outsideWords -- list of no more than 2*windowSize strings, the outside words
word2Ind -- a dictionary that maps words to their indices in
the word vector list
centerWordVectors -- center word vectors (as rows) for all words in vocab
(V in pdf handout)
outsideVectors -- outside word vectors (as rows) for all words in vocab
(U in pdf handout)
word2vecLossAndGradient -- the loss and gradient function for
a prediction vector given the outsideWordIdx
word vectors, could be one of the two
loss functions you implemented above.
Return:
loss -- the loss function value for the skip-gram model
(J in the pdf handout)
gradCenterVecs -- the gradient with respect to the center word vectors
(dJ / dV in the pdf handout)
gradOutsideVectors -- the gradient with respect to the outside word vectors
(dJ / dU in the pdf handout)
"""
loss = 0.0
gradCenterVecs = np.zeros(centerWordVectors.shape)
gradOutsideVectors = np.zeros(outsideVectors.shape)
idx_vc=word2Ind[currentCenterWord]
idx_uws=[word2Ind[outsideWord] for outsideWord in outsideWords]
vc=centerWordVectors[idx_vc,:].reshape(-1,1)
for idx_uw in idx_uws:
loss_uw, gradCenterVec_uw, gradOutsideVecs_uw = negSamplingLossAndGradient(vc,idx_uw,outsideVectors,dataset)
loss=loss+loss_uw
gradCenterVecs[idx_vc,:]= gradCenterVecs[idx_vc,:] + gradCenterVec_uw.reshape(1,-1)
gradOutsideVectors= gradOutsideVectors + gradOutsideVecs_uw
return loss, gradCenterVecs, gradOutsideVectors
# We also define a helper function to sequentially draw samples and perform stochastic gradient descent:
def word2vec_sgd_wrapper(batchsize,word2vecModel, word2Ind, wordVectors, dataset,
windowSize,
word2vecLossAndGradient=negSamplingLossAndGradient):
loss = 0.0
grad = np.zeros(wordVectors.shape)
N = wordVectors.shape[0]
centerWordVectors = wordVectors[:int(N/2),:]
outsideVectors = wordVectors[int(N/2):,:]
for i in range(batchsize):
windowSize1 = random.randint(1, windowSize)
centerWord, context = dataset.getRandomContext(windowSize1)
c, gin, gout = word2vecModel(
centerWord, windowSize1, context, word2Ind, centerWordVectors,
outsideVectors, dataset, word2vecLossAndGradient
)
loss += c / batchsize
grad[:int(N/2), :] += gin / batchsize
grad[int(N/2):, :] += gout / batchsize
return loss, grad
# ### Stochastic Gradient Descent:
# Takes a function (f) and an input vector (x0) and performs gradient descent. We also define two other functions: *save_params* to save the matrix of word vectors every $n$ iterations while training, and *load_saved_params* to load saved word vectors.
def save_params(iter, params):
params_file = "saved_params_%d.npy" % iter
np.save(params_file, params)
with open("saved_state_%d.pickle" % iter, "wb") as f:
pickle.dump(random.getstate(), f)
def load_saved_params():
"""
A helper function that loads previously saved parameters and resets
iteration start.
"""
st = 0
for f in glob.glob("saved_params_*.npy"):
iter = int(op.splitext(op.basename(f))[0].split("_")[2])
if (iter > st):
st = iter
if st > 0:
params_file = "saved_params_%d.npy" % st
state_file = "saved_state_%d.pickle" % st
params = np.load(params_file)
with open(state_file, "rb") as f:
state = pickle.load(f)
return st, params, state
else:
return st, None, None
def sgd(f, x0, step, iterations, PRINT_EVERY=10,SAVE_PARAMS_EVERY = 5000,ANNEAL_EVERY = 20000,useSaved=False):
""" Stochastic Gradient Descent
Implement the stochastic gradient descent method in this function.
Arguments:
f -- the function to optimize, it should take a single
argument and yield two outputs, a loss and the gradient
with respect to the arguments
x0 -- the initial point to start SGD from
step -- the step size for SGD
iterations -- total iterations to run SGD for
postprocessing -- postprocessing function for the parameters
if necessary. In the case of word2vec we will need to
normalize the word vectors to have unit length.
PRINT_EVERY -- specifies how many iterations to output loss
Return:
x -- the parameter value after SGD finishes
"""
if useSaved:
start_iter, oldx, state = load_saved_params()
if start_iter > 0:
x0 = oldx
step *= 0.5 ** (start_iter / ANNEAL_EVERY)
if state:
random.setstate(state)
else:
start_iter = 0
x=x0
exploss=0
for iter in range(start_iter + 1, iterations + 1):
loss = None
grad=0
loss,grad=f(x)
x=x-step*grad
if iter % PRINT_EVERY == 0:
if not exploss:
exploss = loss
else:
exploss = .95 * exploss + .05 * loss
print("iter %d: %f" % (iter, exploss))
if iter % SAVE_PARAMS_EVERY == 0:
save_params(iter, x)
if iter % ANNEAL_EVERY == 0:
step *= 0.5
return x
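# The update rule and step-halving schedule above can be sanity-checked on a toy quadratic (a standalone sketch that mirrors `sgd` without the parameter-saving bookkeeping tied to this notebook):

```python
import numpy as np

# Minimal SGD with step halving, applied to f(x) = ||x||^2 (gradient 2x).
def toy_sgd(grad_f, x0, step, iterations, anneal_every=50):
    x = x0.copy()
    for it in range(1, iterations + 1):
        x = x - step * grad_f(x)
        if it % anneal_every == 0:
            step *= 0.5
    return x

x = toy_sgd(lambda x: 2.0 * x, np.array([3.0, -2.0]), step=0.1, iterations=200)
print(np.linalg.norm(x))  # essentially zero
```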
# ## Showtime: Training!
# +
random.seed(314)
dataset = StanfordSentiment()
tokens = dataset.tokens()
nWords = len(tokens)
# A 10 dimensional vector, Google's word2vec has 300 features.
dimVectors = 10
# Context size: how far from the center word do we look for outside words?
C = 5
max_windowSize=C
# -
wordVectors = np.concatenate(
((np.random.rand(nWords, dimVectors) - 0.5) /
dimVectors, np.zeros((nWords, dimVectors))),
axis=0)
# +
random.seed(31415)
np.random.seed(9265)
startTime=time.time()
batch_size=50
wordVectors = sgd(
lambda vec: word2vec_sgd_wrapper(batch_size,skipgram, tokens, vec, dataset, C,
negSamplingLossAndGradient),
wordVectors, 0.3, 42000, PRINT_EVERY=1000,SAVE_PARAMS_EVERY = 5000,ANNEAL_EVERY = 20000,useSaved=True)
endTime=time.time()
print("Training time: %d minutes" %((endTime - startTime)/60))
# -
# ## Results
# I am going to use PCA to project word vectors onto 2D space and plot them:
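# The projection below is PCA done by hand: center the vectors, form the covariance matrix, take its top-2 singular vectors, and project. The same recipe on random data, as a standalone sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
Xc = X - X.mean(axis=0)                 # center the data
cov = Xc.T @ Xc / X.shape[0]            # covariance matrix
U, S, Vt = np.linalg.svd(cov)           # singular vectors, sorted by S
coords = Xc @ U[:, :2]                  # 2-D projection
print(coords.shape)  # (50, 2)
```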
# +
wordVectors = np.concatenate(
(wordVectors[:nWords,:], wordVectors[nWords:,:]),
axis=0)
visualizeWords = [
"great", "cool", "brilliant", "wonderful", "well", "amazing",
"worth", "sweet", "enjoyable", "boring", "bad", "dumb",
"annoying", "female", "male", "queen", "king", "man", "woman", "rain", "snow",
"hail", "coffee", "tea"]
visualizeIdx = [tokens[word] for word in visualizeWords]
visualizeVecs = wordVectors[visualizeIdx, :]
temp = (visualizeVecs - np.mean(visualizeVecs, axis=0))
covariance = 1.0 / len(visualizeIdx) * temp.T.dot(temp)
U,S,V = np.linalg.svd(covariance)
coord = temp.dot(U[:,0:2])
# +
# %matplotlib inline
plt.figure()
for i in range(len(visualizeWords)):
plt.text(coord[i,0], coord[i,1], visualizeWords[i],
bbox=dict(facecolor='green', alpha=0.1))
plt.xlim((np.min(coord[:,0]), np.max(coord[:,0])))
plt.ylim((np.min(coord[:,1]), np.max(coord[:,1])))
plt.show()
# -
# # References
# - 1- https://arxiv.org/pdf/1301.3781.pdf
# - 2- http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf
| .ipynb_checkpoints/Implementation of word2vec-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as pl
from rocsDB import rocsDB
import polars as po
# +
dfs = []
db = rocsDB()
data = db.submit_query(f"""
SELECT
date_symptom_onset,
estimated_R_7_days
from
coronacases.nowcasting
where
date_symptom_onset >= '2021-10-10'
and
date_symptom_onset <= '2021-11-07'
order by
date_symptom_onset
""")
columns = list(zip(*data))
df = po.DataFrame({'days': columns[0], 'value': columns[1]})
db.close()
# +
pl.figure()
t = df['days']
y = df['value'].to_numpy()
mean = np.mean(y)
print('mean_R', mean)
pl.plot(t, y)
pl.legend()
# -
np.interp
| cookbook/delta_analysis_best_worse/get_R.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="q8R61hWpsVwn"
# **Installing the latest pickle to deserialize our data from the database**
# + colab={"base_uri": "https://localhost:8080/"} id="6wO5v_K7u9ky" executionInfo={"status": "ok", "timestamp": 1607286784957, "user_tz": 180, "elapsed": 7624, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTkPeS0SHXD3dKxUc_qmh_4VlVzg8f0YHgxI3ZUw=s64", "userId": "05259232700742589409"}} outputId="97586706-eeb1-471e-b3d3-2ad4f61cc99a"
# !pip install pickle5
# + [markdown] id="xQPh5dV2smyu"
# **Adding needed imports**
# + id="4U3qrcCWt8Pz"
from google.colab import drive
drive.mount('/content/drive')
import os
import pickle5 as pickle
from HandReading import HandReading
from Imu import Imu
from sklearn.model_selection import train_test_split
from sklearn.model_selection import StratifiedKFold
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, SimpleRNN, Dropout, LSTM
from sklearn.metrics import accuracy_score as acc
from tensorflow.keras.utils import plot_model
from sklearn.metrics import classification_report, confusion_matrix
from tensorflow.keras.callbacks import EarlyStopping
# + [markdown] id="NxkiVf5nstVD"
# **Creating dictionary mappers**
#
# + id="JA3Wykk635oS"
dict_word_id = {}
dict_id_word = {}
list_word = os.listdir('/content/drive/MyDrive/database')
for j in range(len(list_word)) :
dict_word_id[list_word[j]] = j
dict_id_word[j] = list_word[j]
print(dict_word_id)
print(dict_id_word)
# + [markdown] id="gsy9R6SatbOP"
# **Preparing input and labels data**
# + id="tWsGipmkutvS"
input_data, output = [], []
list_word = os.listdir('/content/drive/MyDrive/database')
for word in list_word:
list_pickle_file = os.listdir(f'/content/drive/MyDrive/database/{word}')
k = 0
for pickle_file in list_pickle_file:
k+=1
with open(f'/content/drive/MyDrive/database/{word}/{pickle_file}', 'rb') as input:
readings = pickle.load(input)
for reading in readings:
finger = []
for i in range(5):
finger.append(np.float16(reading.imus[i].accel[0]/255.0))
finger.append(np.float16(reading.imus[i].accel[1]/255.0))
finger.append(np.float16(reading.imus[i].accel[2]/255.0))
finger.append(np.float16(reading.imus[i].gyro[0]/255.0))
finger.append(np.float16(reading.imus[i].gyro[1]/255.0))
finger.append(np.float16(reading.imus[i].gyro[2]/255.0))
input_data.append(finger)
out = np.zeros(11)
out[dict_word_id[word]] = 1.0
output.append(out)
input_data = np.array(input_data)
input_data = input_data.reshape((input_data.shape[0], input_data.shape[1], 1))
output = np.array(output)
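# The label built in the loop above is a plain one-hot vector over the 11 word classes; isolated for clarity (`one_hot` is just an illustrative helper, not used elsewhere in this notebook):

```python
import numpy as np

# One-hot label: all zeros except a 1 at the class index.
def one_hot(idx, n_classes=11):
    v = np.zeros(n_classes)
    v[idx] = 1.0
    return v

label = one_hot(3)
print(label.argmax(), label.sum())  # 3 1.0
```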
# + [markdown] id="JJLjr6Wks6P1"
# **Creating neural network model**
# + id="G2VmEeNCsJ-4"
model = Sequential()
model.add(LSTM(units=50, input_shape=input_data[0].shape, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(50, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(50, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=50))
model.add(Dropout(0.2))
model.add(Dense(11, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
# + [markdown] id="PeUJrzZbt5lx"
# **Separate data and train**
# + id="m2kVwlsJPE2J"
from sklearn.model_selection import StratifiedKFold
kfoldK = 10
kfold = StratifiedKFold(n_splits=10, shuffle=True)
callback = EarlyStopping(monitor='val_accuracy', min_delta = 0.001, patience=3)
iterations = 0
sumOfAccuracy = 0
maxepoch = 100
for train, test in kfold.split(input_data, output.argmax(1)):
iterations += 1
trainX, testX = input_data[train], input_data[test]
trainY, testY = output[train], output[test]
trainX, validX, trainY, validY = train_test_split(
trainX, trainY, test_size=0.3, stratify=trainY, random_state=42)
model.fit(trainX, trainY, epochs=maxepoch, batch_size=32, verbose=2, callbacks=[callback], validation_data=(validX, validY))
predicted = model.predict(testX)
pred = []
for j in predicted:
arg = np.argmax(j)
pred.append(arg)
pred = np.array(pred)
y_one = []
for y in testY :
arg = np.argmax(y)
y_one.append(arg)
y_one = np.array(y_one)
sumOfAccuracy += acc(y_one,pred)
avgAccuray = sumOfAccuracy/iterations
print(f"average accuracy = {avgAccuray}")
# + [markdown] id="C71psesJwyrp"
# **Analyse accuracies**
#
# + id="1nJueJzDPE8W"
prd = model.predict(testX)
pred = []
for j in range(len(prd)):
lis = prd[j]
arg = np.argmax(lis)
pred.append(arg)
pred = np.array(pred)
y_one = []
for y in testY :
arg = np.argmax(y)
y_one.append(arg)
y_one = np.array(y_one)
print(classification_report(y_one, pred))
print(dict_id_word)
print()
print(acc(y_one,pred))
print(f"average accuracy = {sumOfAccuracy/iterations}")
# + [markdown] id="6edt2gJkvGtN"
# **Real life usage example**
# + id="mC1PvHsuY_jJ"
with open(f'/content/drive/MyDrive/database/prazer-em-te-conhecer/3.pkl', 'rb') as input:
readings = pickle.load(input)
input_test_data = []
j = 0
for reading in readings:
finger = []
for i in range(5):
finger.append(np.float16(reading.imus[i].accel[0]/255.0))
finger.append(np.float16(reading.imus[i].accel[1]/255.0))
finger.append(np.float16(reading.imus[i].accel[2]/255.0))
finger.append(np.float16(reading.imus[i].gyro[0]/255.0))
finger.append(np.float16(reading.imus[i].gyro[1]/255.0))
finger.append(np.float16(reading.imus[i].gyro[2]/255.0))
input_test_data.append(finger)
input_test_data = np.array(input_test_data)
input_test_data = input_test_data.reshape((input_test_data.shape[0], input_test_data.shape[1], 1))
for inputData in input_test_data:
result1 = model.predict(inputData.reshape(1,30,1))
print(dict_id_word[np.argmax(result1)])
# + [markdown] id="WnmI6_hrvAR_"
# **Plot model structure**
# + id="YNEPCZBjVRBb"
plot_model(model, to_file='topology.png',show_shapes=True)
# + id="5ESiumKA5Xe8"
model.save(f"deep-lstm-{maxepoch}maxepoch-k{kfoldK}fold-{avgAccuray}avgacc.h5", save_format='.h5')
model.save(f"deep-lstm-{maxepoch}maxepoch-k{kfoldK}fold-{avgAccuray}avgacc")
# + id="dbxmz017Dqdh"
cm = confusion_matrix(y_one, pred)
print(cm)
plt.rcParams["figure.figsize"] = (20,8)
plt.imshow(cm, cmap=plt.cm.Reds)
plt.xlabel("Predicted signs")
plt.ylabel("True signs")
tick_marks = np.arange(len(list(dict_word_id.keys())[:-1]))
plt.tick_params(axis='both', which='major', labelsize=12)
plt.xticks(tick_marks, list(dict_word_id.keys())[:-1], rotation=60)
plt.yticks(tick_marks, list(dict_word_id.keys())[:-1])
plt.title('Confusion Matrix')
plt.colorbar()
plt.show()
| kfold-lstm-deep.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DAT210x - Programming with Python for DS
# ## Module5- Lab9
# +
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
matplotlib.style.use('ggplot') # Look Pretty
# -
# ### A Convenience Function
# This convenience method will take care of plotting your test observations, comparing them to the regression line, and displaying the R2 coefficient
def drawLine(model, X_test, y_test, title, R2):
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(X_test, y_test, c='g', marker='o')
ax.plot(X_test, model.predict(X_test), color='orange', linewidth=1, alpha=0.7)
title += " R2: " + str(R2)
ax.set_title(title)
print(title)
print("Intercept(s): ", model.intercept_)
plt.show()
def drawPlane(model, X_test, y_test, title, R2):
# This convenience method will take care of plotting your
# test observations, comparing them to the regression plane,
# and displaying the R2 coefficient
fig = plt.figure()
ax = Axes3D(fig)
ax.set_zlabel('prediction')
# You might have passed in a DataFrame, a Series (slice),
# an NDArray, or a Python List... so let's keep it simple:
X_test = np.array(X_test)
col1 = X_test[:,0]
col2 = X_test[:,1]
# Set up a Grid. We could have predicted on the actual
# col1, col2 values directly; but that would have generated
# a mesh with WAY too fine a grid, which would have detracted
# from the visualization
x_min, x_max = col1.min(), col1.max()
y_min, y_max = col2.min(), col2.max()
x = np.arange(x_min, x_max, (x_max-x_min) / 10)
y = np.arange(y_min, y_max, (y_max-y_min) / 10)
x, y = np.meshgrid(x, y)
# Predict based on possible input values that span the domain
# of the x and y inputs:
z = model.predict( np.c_[x.ravel(), y.ravel()] )
z = z.reshape(x.shape)
ax.scatter(col1, col2, y_test, c='g', marker='o')
ax.plot_wireframe(x, y, z, color='orange', alpha=0.7)
title += " R2: " + str(R2)
ax.set_title(title)
print(title)
print("Intercept(s): ", model.intercept_)
plt.show()
# ### The Assignment
# Let's get started!
# First, as is your habit, inspect your dataset in a text editor, or spread sheet application. The first thing you should notice is that the first column is both unique (the name of each) college, as well as unlabeled. This is a HINT that it must be the index column. If you do not indicate to Pandas that you already have an index column, it'll create one for you, which would be undesirable since you already have one.
#
# Review the `.read_csv()` documentation and discern how to load up a dataframe while indicating which existing column is to be taken as an index. Then, load up the College dataset into a variable called `X`:
# .. your code here ..
X = pd.read_csv('Datasets/College.csv', index_col=0)
X.head()
X.info()
#
# This line isn't necessary for your purposes; but we'd just like to show you an additional way to encode features directly. The `.map()` method is like `.apply()`, but instead of taking in a lambda / function, you simply provide a mapping of keys:values. If you decide to embark on the "Data Scientist Challenge", this line of code will save you the trouble of converting it through other means:
X.Private = X.Private.map({'Yes':1, 'No':0})
# Create your linear regression model here and store it in a variable called `model`. Don't actually train or do anything else with it yet:
# +
# .. your code here ..
from sklearn.linear_model import LinearRegression
model = LinearRegression()
# -
# The first relationship we're interested in is the number of accepted students, as a function of the amount charged for room and board.
# Using indexing, create two slices (series). One will just store the room and board column, the other will store the accepted students column. Then use train_test_split to cut your data up into `X_train`, `X_test`, `y_train`, `y_test`, with a `test_size` of 30% and a random_state of 7.
# +
# .. your code here ..
from sklearn.model_selection import train_test_split
data = X[['Room.Board']]
labels = X['Accept']
X_train, X_test, y_train, y_test = train_test_split(data, labels, test_size=0.3, random_state=7)
# -
# Fit and score your model appropriately. Store the score in the `score` variable.
# .. your code here ..
model.fit(X_train, y_train)
score = model.score(X_test, y_test)
from sklearn.metrics import r2_score
y_predict = model.predict(X_test)
r2_score(y_test, y_predict)
# We'll take it from here, buddy:
drawLine(model, X_test, y_test, "Accept(Room&Board)", score)
# Duplicate the process above; this time, model the number of accepted students, as a function of the number of enrolled students per college.
# +
# .. your code here ..
data = X[['Enroll']]
labels = X['Accept']
X_train, X_test, y_train, y_test = train_test_split(data, labels, test_size=0.3, random_state=7)
model.fit(X_train, y_train)
score = model.score(X_test, y_test)
# -
drawLine(model, X_test, y_test, "Accept(Enroll)", score)
y_predict = model.predict(X_test)
r2_score(y_test, y_predict)
# Duplicate the process above; this time, model the number of accepted students, as as function of the number of failed undergraduate students per college.
# +
# .. your code here ..
data = X[['F.Undergrad']]
labels = X['Accept']
X_train, X_test, y_train, y_test = train_test_split(data, labels, test_size=0.3, random_state=7)
model.fit(X_train, y_train)
score = model.score(X_test, y_test)
# -
y_predict = model.predict(X_test)
r2_score(y_test, y_predict)
drawLine(model, X_test, y_test, "Accept(F.Undergrad)", score)
# Duplicate the process above (almost). This time is going to be a bit more complicated. Instead of modeling one feature as a function of another, you will attempt to do multivariate linear regression to model one feature as a function of TWO other features.
#
# Model the number of accepted students as a function of the amount charged for room and board _and_ the number of enrolled students. To do this, instead of creating a regular slice for a single-feature input, simply create a slice that contains both columns you wish to use as inputs. Your training labels will remain a single slice.
# +
# .. your code here ..
data = X[['Room.Board', 'Enroll']]
labels = X['Accept']
X_train, X_test, y_train, y_test = train_test_split(data, labels, test_size=0.3, random_state=7)
model.fit(X_train, y_train)
score = model.score(X_test, y_test)
# -
drawPlane(model, X_test, y_test, "Accept(Room&Board,Enroll)", score)
y_predict = model.predict(X_test)
r2_score(y_test, y_predict)
# That concludes this assignment!
# ### Notes On Fitting, Scoring, and Predicting:
# Here's a hint to help you complete the assignment without pulling your hair out! When you use `.fit()`, `.score()`, and `.predict()` on your model, SciKit-Learn expects your training data to be in spreadsheet (2D Array-Like) form. This means you can't simply pass in a 1D Array (slice) and get away with it.
#
# To properly prep your data, you have to pass in a 2D Numpy Array, or a dataframe. But what happens if you really only want to pass in a single feature?
#
# If you slice your dataframe using `df[['ColumnName']]` syntax, the result that comes back is actually a _dataframe_. Go ahead and do a `type()` on it to check it out. Since it's already a dataframe, you're good -- no further changes needed.
#
# But if you slice your dataframe using the `df.ColumnName` syntax, OR if you call `df['ColumnName']`, the result that comes back is actually a series (1D Array)! This will cause SKLearn to bug out. So if you are slicing using either of those two techniques, before sending your training or testing data to `.fit` / `.score`, do `any_column = my_column.reshape(-1,1)`.
#
# This will convert your 1D array of `[n_samples]`, to a 2D array shaped like `[n_samples, 1]`. A single feature, with many samples.
#
# If you did something like `my_column = [my_column]`, that would produce an array in the shape of `[1, n_samples]`, which is incorrect because SKLearn expects your data to be arranged as `[n_samples, n_features]`. Keep in mind, all of the above only relates to your `X` or input data, and does not apply to your `y` or labels.
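As a quick sanity check of the shape rules above (toy data, not the College set; on recent pandas versions use `.values.reshape(-1, 1)`, since `Series.reshape` has been removed):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'RoomBoard': [4000, 5000, 6000], 'Accept': [100, 200, 300]})

# Double-bracket slicing returns a DataFrame: already 2D, safe for sklearn.
X_2d = df[['RoomBoard']]
print(type(X_2d).__name__, X_2d.shape)   # DataFrame (3, 1)

# Single-bracket slicing returns a Series: 1D, so reshape before fitting.
X_ok = df['RoomBoard'].values.reshape(-1, 1)
print(X_ok.shape)                        # (3, 1)

# Wrapping the column in a list gives the wrong orientation: [1, n_samples].
X_bad = np.array([df['RoomBoard'].values])
print(X_bad.shape)                       # (1, 3)
```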
# ### Data Scientist Challenge
# You've experimented with a number of feature scaling techniques already, such as `MaxAbsScaler`, `MinMaxScaler`, `Normalizer`, `StandardScaler` and more from http://scikit-learn.org/stable/modules/classes.html#module-sklearn.preprocessing.
#
# What happens if you apply scaling to your data before doing linear regression? Would it alter the quality of your results? Do the scalers that work on a per-feature basis, such as `MinMaxScaler` behave differently that those that work on a multi-feature basis, such as normalize? And moreover, once your features have been scaled, you won't be able to use the resulting regression directly... unless you're able to `.inverse_transform()` the scaling. Do all of the SciKit-Learn scalers support that?
#
# This is your time to shine and to show how much of an explorer you are: Dive deeper into uncharted lands, browse SciKit-Learn's documentation, scour Google, ask questions on Quora, Stack-Overflow, and the course message board, and see if you can discover something that will be of benefit to you in the future!
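As a hedged starting point for the challenge: for ordinary least-squares regression, an affine per-feature scaling should not change R² at all (the fitted coefficients simply adjust), and `StandardScaler` does support `inverse_transform`. A sketch on synthetic data (not the College set):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(7)
X = rng.uniform(2000, 8000, size=(100, 1))          # stand-in for a single feature
y = 0.05 * X[:, 0] + rng.normal(0, 10, size=100)    # synthetic target

# R^2 on the raw feature...
raw_score = LinearRegression().fit(X, y).score(X, y)

# ...and on the standardized feature.
scaler = StandardScaler().fit(X)
X_std = scaler.transform(X)
std_score = LinearRegression().fit(X_std, y).score(X_std, y)
print(raw_score, std_score)              # identical up to floating point

# The scaling is invertible, so scaled inputs map back to original units.
X_back = scaler.inverse_transform(X_std)
print(np.allclose(X_back, X))
```

`Normalizer`, by contrast, works per sample rather than per feature and provides no `inverse_transform`, which is part of what the challenge asks you to explore.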
| Module5/Module5 - Lab9.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.5.3
# language: julia
# name: julia-1.5
# ---
using PyPlot, StatsBase, Printf, DelimitedFiles;
using Revise;
using MDToolbox;
PyPlot.plt.style.use("seaborn-colorblind");
ENV["COLUMNS"] = 110; #display width for MDToolbox
# ### Potential energy function and its gradients
V(x; k=1.0) = sum(- (1.0 / 2.0) .* k .* x.^2 .+ (1.0 ./ 4.0) .* k .* x.^4)
#V(x; k=1.0) = sum((1.0 .- x.^2).^2 .- 0.25 .* x)
V([0.0])
grad(x; k= 1.0) = - k .* x .+ k .* x.^3
#grad(x; k=1.0) = -2.0.*(1.0 .- x.^2).*2.0.*x .- 0.25
# ### Replica MD with exchange
function exchange_temperature!(m2i, i2m, icount, x_replica, pot_fun::Function, temperature_replica)
nreplica = length(x_replica)
m_array = 1:nreplica
t_array = temperature_replica[m_array]
b_array = 1.0 ./ t_array
i_array = m2i[m_array]
v_array = map(pot_fun, x_replica[i_array])
if mod(icount, 2) == 0
m_lower = 1:2:(nreplica-1)
m_higher = 2:2:nreplica
else
m_lower = 2:2:(nreplica-1)
m_higher = 3:2:nreplica
end
iaccepted = 0
for ipair = 1:length(m_higher)
m1 = m_lower[ipair]
m2 = m_higher[ipair]
delta = (b_array[m2] - b_array[m1]) * (v_array[m1] - v_array[m2])
if exp(-delta) > rand()
m2i[m_array[m1]], m2i[m_array[m2]] = m2i[m_array[m2]], m2i[m_array[m1]]
i2m[i_array[m1]], i2m[i_array[m2]] = i2m[i_array[m2]], i2m[i_array[m1]]
iaccepted += 1
end
end
return iaccepted / length(m_higher)
end
# +
nreplica = 4
temperature_replica = [0.01, 0.10, 0.30, 0.40];
#temperature_replica = [0.04, 1.25];
nstep = 1000;
nexchange = 100;
m2i = collect(1:nreplica)
i2m = collect(1:nreplica)
x_replica = []
for i = 1:nreplica
x = [-1.0]
push!(x_replica, x)
end
io_replica = []
for i = 1:nreplica
filename = "replica$(i).dat"
io = open(filename, "w")
push!(io_replica, io)
end
icount = 0
acceptance_ratio = 0.0
for iexchange = 1:nexchange
for i = 1:nreplica
x_replica[i] = propagate_md(grad, x_replica[i], temperature_replica[i2m[i]], nstep=nstep, io=io_replica[i]);
end
# do exchange
acceptance_ratio += exchange_temperature!(m2i, i2m, icount, x_replica, y -> V(y, k=1.0), temperature_replica)
icount += 1
end
for i = 1:nreplica
close(io_replica[i])
end
acceptance_ratio = acceptance_ratio / nexchange
# -
# ### Trajectory analysis
# +
traj_replica = []
temp_replica = []
for i = 1:nreplica
filename = "replica$(i).dat"
data = readdlm(filename);
push!(temp_replica, data[:, 1])
push!(traj_replica, data[:, 2])
end
# +
fig, ax = subplots(figsize=(8, 6))
for i = 1:nreplica
ax.plot(traj_replica[i], linewidth=0.5)
end
xlabel("step",fontsize=20)
ylabel("x(step)",fontsize=20)
ax.legend(["replica 1", "replica 2", "replica 3", "replica 4"])
tight_layout()
# -
# sort trajectories according to temperature
function sort_traj(traj_replica, temp_replica)
traj_sorted = deepcopy(traj_replica)
temp_sorted = deepcopy(temp_replica)
nframe = size(traj_replica[1], 1)
for iframe = 1:nframe
temp_snapshot = map(x -> x[iframe], temp_replica)
p = sortperm(temp_snapshot)
for m = 1:nreplica
traj_sorted[m][iframe, :] .= traj_replica[p[m]][iframe, :]
temp_sorted[m][iframe, :] .= temp_replica[p[m]][iframe, :]
end
end
return traj_sorted, temp_sorted
end
traj_sorted, temp_sorted = sort_traj(traj_replica, temp_replica)
temp_sorted[1]
# +
fig, ax = subplots(figsize=(8, 6))
for i = 1:nreplica
ax.plot(traj_sorted[i], linewidth=0.5)
end
xlabel("step",fontsize=20)
ylabel("x(step)",fontsize=20)
ax.legend(["temperature 1", "temperature 2", "temperature 3", "temperature 4"])
tight_layout()
# -
#x_grid = range(-1.3, 1.3, length=100);
x_grid = collect(-1.3:0.1:1.3)
pmf_theory = V.(x_grid, k=1) ./ temperature_replica[1]
pmf_theory .= pmf_theory .- minimum(pmf_theory);
pmf_observed, _ = compute_pmf(traj_sorted[1], grid_x = collect(x_grid), bandwidth=0.05);
# +
fig, ax = subplots(figsize=(8, 6))
ax.plot(x_grid, pmf_theory, linewidth=3)
xlabel("x",fontsize=20)
ylabel("PMF (KBT)",fontsize=20)
ax.plot(x_grid, pmf_observed, linewidth=3)
ax.legend(["theory", "observed"])
ax.xaxis.set_tick_params(which="major",labelsize=15)
ax.yaxis.set_tick_params(which="major",labelsize=15)
ax.grid(linestyle="--", linewidth=0.5)
tight_layout()
savefig("md_replica_exchange.png", dpi=350)
# -
x_grid[4], x_grid[24]
abs(pmf_observed[4] - pmf_observed[24])
| infinite_swapping/md_replica_exchange.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import importlib
import RNN
from RNN import *
importlib.reload(RNN)
path = 'data/us/covid/nyt_us_counties_daily.csv'
counties_df = load_county_data(path)
counties_df.head()
daterange = generate_date_range(counties_df, dateshift=35)
split_point = generate_split_point(daterange, frac = 0.6)
train_inputs, train_targets, train_conditions, test_inputs, \
test_targets, test_conditions, inputs_total, conditions_total, fips_many \
= generate_county_sets(counties_df, daterange, split_point=split_point)
print(train_inputs.shape)
print(train_targets.shape)
print(train_conditions.shape)
print(test_inputs.shape)
print(test_targets.shape)
print(test_conditions.shape)
# +
# For some reason the model can't be constructed in this notebook,
# so it is defined in RNN.py and imported instead.
# model = MySimpleModel()
model, history = train_rnn(model, train_inputs, train_targets,
train_conditions, test_inputs,
test_targets, test_conditions, ep = 10)
# -
plot_loss(history)
plot_hist(model, train_inputs, train_conditions)
nyc = fips_many.index(36061)
la = fips_many.index(6037)
snoho = fips_many.index(53061)
counties_df
plot_predicted_vs_true(model, inputs_total, conditions_total, counties_df, fips_many, split_point, 25, ind = snoho)
plot_predicted_vs_true(model, inputs_total, conditions_total, counties_df, fips_many, split_point, 24, ind = 12)
| QQ/RNN_US_minimal.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# # Spectrum
# + deletable=true editable=true
import sys
sys.path.insert(0, '..')
import matplotlib.pyplot as plt
# %matplotlib inline
# + deletable=true editable=true
import numpy as np
import scipy.sparse as sp
from lib.segmentation import segmentation_adjacency
from lib.graph import coarsen_adj
def plot_laplacians(image, segmentation_algorithm):
segmentation = segmentation_algorithm(image)
adj, points, mass = segmentation_adjacency(segmentation)
adjs, _, _ = coarsen_adj(adj, points, mass, levels=4)
for i in range(5):
print('Level {}:'.format(i), adjs[i].shape[0], 'nodes,', adjs[i].nnz // 2, 'edges')
laps = [sp.csgraph.laplacian(adj) for adj in adjs]
for i, lap in enumerate(laps):
lamb, U = np.linalg.eig(lap.toarray())
perm = lamb.argsort()
lamb = lamb[perm]
U = U[:, perm]
step = 2**i
x = range(step // 2, laps[0].shape[0], step)
lb = 'L_{} spectrum in [{}, {:.5f}]'.format(i, 0, lamb[-1])
plt.plot(x, np.real(lamb), '.', label=lb)
plt.legend(loc='best')
plt.xlim(0, laps[0].shape[0])
plt.ylim(ymin=0)
# + [markdown] deletable=true editable=true
# ## Load datasets
# + deletable=true editable=true
from lib.datasets import MNIST, Cifar10, PascalVOC
mnist = MNIST('../data/mnist').test.next_batch(1, shuffle=False)[0][0]
cifar_10 = Cifar10('../data/cifar_10').test.next_batch(2, shuffle=False)[0][1]
pascal_voc = PascalVOC('../test_data').test.next_batch(3, shuffle=False)[0][2]
# + deletable=true editable=true
from lib.segmentation import slic_fixed, quickshift_fixed
# + [markdown] deletable=true editable=true
# ## MNIST SLIC
# + deletable=true editable=true
slic = slic_fixed(num_segments=100, compactness=5, max_iterations=10, sigma=0)
plt.rcParams['figure.figsize'] = (10, 4)
plot_laplacians(mnist, slic)
# + [markdown] deletable=true editable=true
# ## MNIST Quickshift
# + deletable=true editable=true
quickshift = quickshift_fixed(ratio=1, kernel_size=2, max_dist=2, sigma=0)
plt.rcParams['figure.figsize'] = (10, 4)
plot_laplacians(mnist, quickshift)
# + [markdown] deletable=true editable=true
# ## Cifar10 SLIC
# + deletable=true editable=true
slic = slic_fixed(num_segments=150, compactness=5, max_iterations=10, sigma=0)
plt.rcParams['figure.figsize'] = (10, 4)
plot_laplacians(cifar_10, slic)
# + [markdown] deletable=true editable=true
# ## Cifar10 Quickshift
# + deletable=true editable=true
quickshift = quickshift_fixed(ratio=1, kernel_size=1, max_dist=5, sigma=0)
plt.rcParams['figure.figsize'] = (10, 4)
plot_laplacians(cifar_10, quickshift)
# + [markdown] deletable=true editable=true
# ## PascalVOC SLIC
# + deletable=true editable=true
slic = slic_fixed(num_segments=800, compactness=30, max_iterations=10, sigma=0)
plt.rcParams['figure.figsize'] = (10, 4)
plot_laplacians(pascal_voc, slic)
# + [markdown] deletable=true editable=true
# ## PascalVOC Quickshift
# + deletable=true editable=true
quickshift = quickshift_fixed(ratio=1, kernel_size=3, max_dist=15, sigma=0)
plt.rcParams['figure.figsize'] = (10, 4)
plot_laplacians(pascal_voc, quickshift)
| notebooks/spectrum.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # KNN (k-Nearest-Neighbor Classification) Algorithm
# Suppose a dataset contains N classes of data. Given an unlabeled input, we compare it with the data in the training set, take the K training samples most similar to it (that is, closest in distance), and assign the new point the label that appears most often among those K samples.
#
#
# The idea behind the KNN algorithm is simple and intuitive:
#
# 1. Compute the distance between the test sample and every training sample;
#
# 2. Sort the distances in increasing order;
#
# 3. Take the K points with the smallest distances;
#
# 4. Count how often each class appears among those K points;
#
# 5. Return the most frequent class among the K points as the predicted class of the test sample.
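The five steps can be sketched in a few lines of NumPy (using Euclidean distance; `knn_predict` here is an illustrative helper, separate from the class built later in this notebook):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    # Step 1: distance from the test point to every training point
    distances = np.sqrt(np.sum((X_train - x_new) ** 2, axis=1))
    # Steps 2-3: sort by distance and keep the k nearest neighbors
    nearest = np.argsort(distances)[:k]
    # Steps 4-5: return the most frequent label among those neighbors
    return Counter(y_train[nearest]).most_common(1)[0][0]

X_train = np.array([[0, 0], [0, 1], [5, 5], [6, 5]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([5, 6]), k=3))  # -> 1
```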
# ### Advantages of KNN
# 1. Simple and easy to implement;
#
# 2. Because it classifies by the nearest neighbors, it stays accurate even for classes with few samples, which makes it well suited to classifying rare points;
#
# 3. Applicable to multi-class problems.
# ## Implementing the algorithm
# We will implement the algorithm step by step, following the KNN idea, on a worked example.
# ### KNN case study: improving match suggestions on a dating site
#
#
# <b>Project overview</b>
#
# Helen has been using a dating site to look for dates. After a while, she noticed that the people she has dated fall into three types:
#
# - 1: people she dislikes
# - 2: people of average charm
# - 3: people of great charm
#
#
# She would like to:
#
# - filter out the people she dislikes
# - date people of average charm on weekdays
# - date people of great charm on weekends
#
#
# She has now collected some data that the dating site does not record, which should help classify potential matches.
#
# <b>Development workflow</b>
#
# Helen stored the data on her dates in the text file datingTestSet2.txt, 1000 rows in total. Each candidate is described by the following 3 features:
#
# - `Col1`: frequent-flyer miles earned per year
# - `Col2`: percentage of time spent playing video games
# - `Col3`: liters of ice cream consumed per week
#
# The text file looks like this:
# ```python
# 40920 8.326976 0.953952 3
# 14488 7.153469 1.673904 2
# 26052 1.441871 0.805124 1
# 75136 13.147394 0.428964 1
# 38344 1.669788 0.134296 1
# ```
#
# #### Read the data
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
data = pd.read_csv('datingTestSet2.txt',sep = '\t',header = None)
X = np.array(data.iloc[:,:-1])
y = np.array(data.iloc[:,-1])
# #### Split the data
#
# We can call sklearn directly to split the dataset into training and test sets.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Classify the test data by computing distances to the training set.
#
# We start with the simplest possible idea: as the distance, take the sum of the absolute differences between the feature vectors of the sample to predict and each training sample, and give the test sample the label of the closest training sample.
class KNN:
    def __init__(self):
        pass
    def train(self,X_train,y_train):
        # store the training set
        self.X_train = np.array(X_train)
        self.y_train = np.array(y_train)
    def predict(self,X_test):
        X_test = np.array(X_test)
        (m,d) = np.shape(X_test) # number of test samples and features
        y_pred = np.zeros((m)) # initialize the predicted labels to 0
        for i in range(m):
            distance = np.sum(np.abs(self.X_train - X_test[i,:]),axis = 1) # sum of absolute differences
            min_index = np.argmin(distance) # index of the closest training point
            y_pred[i] = self.y_train[min_index] # label the new point with the class of its closest neighbor
        return y_pred
# We can call this a "nearest-neighbor algorithm": it simply labels each point with the class of the single closest training sample. Let's now extend it to a k-nearest-neighbors algorithm.
# Directions for extension:
# * choosing a different distance formula
# * choosing a different value of K
# #### Choosing a different distance formula:
# The previous algorithm used the Manhattan distance, the sum of absolute feature differences (the L1 distance). We can also use the L2 distance:
# Manhattan distance: $$d_1(I_1,I_2) = \sum_p|I_1^p - I_2^p|$$
# Euclidean distance: $$d_2(I_1,I_2) = \sqrt{\sum_p(I_1^p - I_2^p)^2}$$
# By way of analogy: if you look up two points on a map, the Euclidean distance is the straight-line distance between them, while the Manhattan distance measures the driving distance from A to B: you cannot cut through buildings and walls, so you add up the horizontal and vertical legs of the route.
#
# In KNN the Euclidean distance is used more often, since we usually measure distances between feature vectors in a multidimensional space where there are no "walls" to drive around.
#
# If you are interested, you can study other distance formulas on your own and add them to the algorithm below.
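A quick numeric check of the two distance formulas (illustrative values):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 6.0, 3.0])

manhattan = np.sum(np.abs(a - b))          # |3| + |4| + |0| = 7
euclidean = np.sqrt(np.sum((a - b) ** 2))  # sqrt(9 + 16 + 0) = 5
print(manhattan, euclidean)
```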
# #### Choosing a different value of K
# Instead of labeling with the single closest training sample after sorting, we take the K closest training samples, find the class that most of those neighbors belong to, and assign that class as the prediction.
# For sorting and counting we can call the argsort and Counter functions directly.
#
# With this in mind, we rewrite the KNN algorithm:
from collections import Counter
class KNN:
    def __init__(self,k=1,metric ='euclidean'): # defaults: Euclidean distance, nearest neighbor (k=1)
        self.metric = metric
        self.k = k
    def train(self,X_train,y_train):
        self.X_train = np.array(X_train)
        self.y_train = np.array(y_train)
    def predict(self,X_test):
        X_test = np.array(X_test)
        (m,d) = np.shape(X_test) # number of test samples and features
        y_pred = np.zeros((m)) # initialize the predicted labels to 0
        for i in range(m):
            if self.metric == 'manhattan':
                distances = np.sum(np.abs(self.X_train - X_test[i,:]),axis = 1) # Manhattan distance
            if self.metric == 'euclidean':
                distances = np.sqrt(np.sum(np.square(self.X_train - X_test[i,:]),axis = 1)) # Euclidean distance
            sort = np.argsort(distances) # sort by distance
            top_K = [self.y_train[j] for j in sort[:self.k]] # labels of the k nearest neighbors
            k_counts = Counter(top_K) # count how often each class appears among the k neighbors
            label = k_counts.most_common(1)[0][0] # label the new point with the majority class
            y_pred[i] = label
        return y_pred
# *You may ask: what if two classes are tied in the vote? There are several ways to handle this, such as choosing at random or comparing the total distance of each class; here we do no extra handling and simply keep whatever classification the Counter function returns by default.*
# #### Choosing K
# So how do we actually choose K? We can pick the K value that performs best on the test set.
#
# In this task we call sklearn's KFold function directly to run k-fold cross-validation on the dataset, taking the mean score across folds as the error score for that value of K. (These two k's mean different things; keep that in mind.)
# How should we score the test results? An intuitive metric is the fraction of correctly classified samples. Define the accuracy function as:
def score(ypred,ytest):
    return sum(ypred == ytest)/len(ytest)
# Adding a scoring function to our hand-written classifier gives us a relatively complete classifier, which we can compare against sklearn's results.
from collections import Counter
class KNN:
    def __init__(self,k,metric ='euclidean'):
        self.metric = metric
        self.k = k
    def train(self,X,y):
        self.X_train = np.array(X)
        self.y_train = np.array(y)
    def predict(self,x_test):
        x = np.array(x_test)
        (m,d) = np.shape(x)
        ypred = np.zeros((m))
        for i in range(m):
            if self.metric == 'manhattan':
                distances = np.sum(np.abs(self.X_train - x[i,:]),axis = 1)
            if self.metric == 'euclidean':
                distances = np.sqrt(np.sum(np.square(self.X_train - x[i,:]),axis = 1))
            nearest = np.argsort(distances)
            top_K = [self.y_train[j] for j in nearest[:self.k]]
            votes = Counter(top_K)
            label = votes.most_common(1)[0][0]
            ypred[i] = label
        return ypred
    def score(self,ypred,ytest):
        return sum(ypred == ytest)/len(ytest)
# #### Comparison with sklearn's KNeighborsClassifier
# +
# standardize the data
from sklearn.preprocessing import StandardScaler
ss = StandardScaler()
ss.fit(X)
X_std = ss.transform(X)
# +
from sklearn.model_selection import cross_val_score
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
k_range = range(1, 31)
k_error = []
# loop over k = 1 to 30 and record the error for each value
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
    # the cv parameter controls how the data are partitioned; here we use 5-fold cross-validation
scores = cross_val_score(knn, X_std, y, cv=5, scoring='accuracy')
k_error.append(1 - scores.mean())
# plot the error (y-axis) against the value of k (x-axis)
plt.plot(k_range, k_error)
plt.xlabel('Value of K for KNN')
plt.ylabel('Error')
plt.show()
# -
# Now test the data with our own k-nearest-neighbors implementation and plot the error for each K value in the same way.
# +
from sklearn.model_selection import KFold
kf = KFold(n_splits=5,shuffle=False) # split the dataset into 5 disjoint folds for validation
k_errors = [] # initialize the list of errors
for k in k_range:
knn = KNN(k=k)
scores = []
for train , test in kf.split(X_std,y):
knn.train(X_std[train],y[train])
ypred = knn.predict(X_std[test])
score = knn.score(ypred,y[test])
scores.append(1-score)
k_errors.append(np.mean(scores))
plt.plot(k_range, k_errors)
plt.xlabel('Value of K for KNN')
plt.ylabel('Error')
plt.show()
# -
# We observe that the algorithm performs well at $k=21$, so we take K = 21 and predict a new data point.
knn = KNN(k=21)
knn.train(X_std,y)
# labels corresponding to the three classes
resultList = ['someone you dislike', 'someone of average charm', 'someone of great charm']
# read the input features
ffMiles = float(input("Frequent-flyer miles earned per year? "))
percentTats = float(input("Percentage of time spent playing video games? "))
iceCream = float(input("Liters of ice cream consumed per week? "))
inArr = np.array([[ffMiles, percentTats, iceCream]])
# transform the input with the scaler fitted earlier
x_new = ss.transform(inArr)
# predict the class
ypred = knn.predict(x_new)
print("This person is:", resultList[int(ypred[0]) - 1])
| final/Task6_knn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Supervised sentiment: hand-built feature functions
# -
__author__ = "<NAME>"
__version__ = "CS224u, Stanford, Spring 2020"
# ## Contents
#
# 1. [Overview](#Overview)
# 1. [Set-up](#Set-up)
# 1. [Feature functions](#Feature-functions)
# 1. [Building datasets for experiments](#Building-datasets-for-experiments)
# 1. [Basic optimization](#Basic-optimization)
# 1. [Wrapper for SGDClassifier](#Wrapper-for-SGDClassifier)
# 1. [Wrapper for LogisticRegression](#Wrapper-for-LogisticRegression)
# 1. [Other scikit-learn models](#Other-scikit-learn-models)
# 1. [Experiments](#Experiments)
# 1. [Experiment with default values](#Experiment-with-default-values)
# 1. [A dev set run](#A-dev-set-run)
# 1. [Assessing BasicSGDClassifier](#Assessing-BasicSGDClassifier)
# 1. [Comparison with the baselines from Socher et al. 2013](#Comparison-with-the-baselines-from-Socher-et-al.-2013)
# 1. [A shallow neural network classifier](#A-shallow-neural-network-classifier)
# 1. [A softmax classifier in PyTorch](#A-softmax-classifier-in-PyTorch)
# 1. [Hyperparameter search](#Hyperparameter-search)
# 1. [utils.fit_classifier_with_crossvalidation](#utils.fit_classifier_with_crossvalidation)
# 1. [Example using LogisticRegression](#Example-using-LogisticRegression)
# 1. [Example using BasicSGDClassifier](#Example-using-BasicSGDClassifier)
# 1. [Statistical comparison of classifier models](#Statistical-comparison-of-classifier-models)
# 1. [Comparison with the Wilcoxon signed-rank test](#Comparison-with-the-Wilcoxon-signed-rank-test)
# 1. [Comparison with McNemar's test](#Comparison-with-McNemar's-test)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Overview
#
# * The focus of this notebook is __building feature representations__ for use with (mostly linear) classifiers (though you're encouraged to try out some non-linear ones as well!).
#
# * The core characteristics of the feature functions we'll build here:
# * They represent examples in __very large, very sparse feature spaces__.
# * The individual feature functions can be __highly refined__, drawing on expert human knowledge of the domain.
# * Taken together, these representations don't comprehensively represent the input examples. They just identify aspects of the inputs that the classifier model can make good use of (we hope).
#
# * These classifiers tend to be __highly competitive__. We'll look at more powerful deep learning models in the next notebook, and it will immediately become apparent that it is very difficult to get them to measure up to well-built classifiers based in sparse feature representations.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Set-up
#
# See [the previous notebook](sst_01_overview.ipynb#Set-up) for set-up instructions.
# -
from collections import Counter
import os
from sklearn.linear_model import LogisticRegression
import scipy.stats
from np_sgd_classifier import BasicSGDClassifier
import torch.nn as nn
from torch_shallow_neural_classifier import TorchShallowNeuralClassifier
import sst
import utils
# +
# Set all the random seeds for reproducibility. Only the
# system and torch seeds are relevant for this notebook.
utils.fix_random_seeds()
# -
SST_HOME = os.path.join('data', 'trees')
# + [markdown] slideshow={"slide_type": "slide"}
# ## Feature functions
#
# * Feature representation is arguably __the most important step in any machine learning task__. As you experiment with the SST, you'll come to appreciate this fact, since your choice of feature function will have a far greater impact on the effectiveness of your models than any other choice you make.
#
# * We will define our feature functions as `dict`s mapping feature names (which can be any object that can be a `dict` key) to their values (which must be `bool`, `int`, or `float`).
#
# * To prepare for optimization, we will use `sklearn`'s [DictVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.DictVectorizer.html) class to turn these into matrices of features.
#
# * The `dict`-based approach gives us a lot of flexibility and frees us from having to worry about the underlying feature matrix.
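A minimal illustration of the dict-to-matrix step (toy features, not SST data):

```python
from sklearn.feature_extraction import DictVectorizer

feats = [{'good': 2, 'movie': 1}, {'bad': 1, 'movie': 1}]

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(feats)

print(vec.feature_names_)   # one column per distinct feature name, sorted
print(X)                    # [[0. 2. 1.], [1. 0. 1.]]
```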
# + [markdown] slideshow={"slide_type": "slide"}
# A typical baseline or default feature representation in NLP or NLU is built from unigrams. Here, those are the leaf nodes of the tree:
# -
def unigrams_phi(tree):
"""The basis for a unigrams feature function.
Parameters
----------
tree : nltk.tree
The tree to represent.
Returns
-------
defaultdict
A map from strings to their counts in `tree`. (Counter maps a
list to a dict of counts of the elements in that list.)
"""
return Counter(tree.leaves())
# + [markdown] slideshow={"slide_type": "slide"}
# In the docstring for `sst.sentiment_treebank_reader`, I pointed out that the labels on the subtrees can be used in a way that feels like cheating. Here's the most dramatic instance of this: `root_daughter_scores_phi` uses just the labels on the daughters of the root to predict the root (label). This will result in performance well north of 90% F1, but that's hardly worth reporting. (Interestingly, using the labels on the leaf nodes is much less powerful.) Anyway, don't use this function!
# -
def root_daughter_scores_phi(tree):
"""The best way we've found to cheat without literally using the
labels as part of the feature representations.
Don't use this for any real experiments!
"""
return Counter([child.label() for child in tree])
# It's generally good design to __write lots of atomic feature functions__ and then bring them together into a single function when running experiments. This will lead to reusable parts that you can assess independently and in sub-groups as part of development.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Building datasets for experiments
#
# The second major phase for our analysis is a kind of set-up phase. Ingredients:
#
# * A reader like `train_reader`
# * A feature function like `unigrams_phi`
# * A class function like `binary_class_func`
#
# The convenience function `sst.build_dataset` uses these to build a dataset for training and assessing a model. See its documentation for details on how it works. Much of this is about taking advantage of `sklearn`'s many functions for model building.
# -
train_dataset = sst.build_dataset(
SST_HOME,
reader=sst.train_reader,
phi=unigrams_phi,
class_func=sst.binary_class_func,
vectorizer=None)
print("Train dataset with unigram features has {:,} examples and {:,} features".format(
*train_dataset['X'].shape))
# Notice that `sst.build_dataset` has an optional argument `vectorizer`:
#
# * If it is `None`, then a new vectorizer is used and returned as `dataset['vectorizer']`. This is the usual scenario when training.
#
# * For evaluation, one wants to represent examples exactly as they were represented during training. To ensure that this happens, pass the training `vectorizer` to this function:
dev_dataset = sst.build_dataset(
SST_HOME,
reader=sst.dev_reader,
phi=unigrams_phi,
class_func=sst.binary_class_func,
vectorizer=train_dataset['vectorizer'])
print("Dev dataset with unigram features has {:,} examples "
"and {:,} features".format(*dev_dataset['X'].shape))
# + [markdown] slideshow={"slide_type": "slide"}
# ## Basic optimization
#
# We're now in a position to begin training supervised models!
#
# For the most part, in this course, we will not study the theoretical aspects of machine learning optimization, concentrating instead on how to optimize systems effectively in practice. That is, this isn't a theory course, but rather an experimental, project-oriented one.
#
# Nonetheless, we do want to avoid treating our optimizers as black boxes that work their magic and give us some assessment figures for whatever we feed into them. That seems irresponsible from a scientific and engineering perspective, and it also sends the false signal that the optimization process is inherently mysterious. So we do want to take a minute to demystify it with some simple code.
#
# The module `sgd_classifier` contains a complete optimization framework, as `BasicSGDClassifier`. Well, it's complete in the sense that it achieves our full task of supervised learning. It's incomplete in the sense that it is very basic. You probably wouldn't want to use it in experiments. Rather, we're going to encourage you to rely on `sklearn` for your experiments (see below). Still, this is a good basic picture of what's happening under the hood.
#
# So what is `BasicSGDClassifier` doing? The heart of it is the `fit` function (reflecting the usual `sklearn` naming system). This method implements a hinge-loss stochastic sub-gradient descent optimization. Intuitively, it works as follows:
#
# 1. Start by assuming that all the feature weights are `0`.
# 1. Move through the dataset instance-by-instance in random order.
# 1. For each instance, classify it using the current weights.
# 1. If the classification is incorrect, move the weights in the direction of the correct classification
#
# This process repeats for a user-specified number of iterations (default `10` below), and the weight movement is tempered by a learning-rate parameter `eta` (default `0.1`). The output is a set of weights that can be used to make predictions about new (properly featurized) examples.
#
# In more technical terms, the objective function is
#
# $$
# \min_{\mathbf{w} \in \mathbb{R}^{d}}
# \sum_{(x,y)\in\mathcal{D}}
# \max_{y'\in\mathbf{Y}}
# \left[\mathbf{Score}_{\textbf{w}, \phi}(x,y') + \mathbf{cost}(y,y')\right] - \mathbf{Score}_{\textbf{w}, \phi}(x,y)
# $$
#
# where $\mathbf{w}$ is the set of weights to be learned, $\mathcal{D}$ is the training set of example–label pairs, $\mathbf{Y}$ is the set of labels, $\mathbf{cost}(y,y') = 0$ if $y=y'$, else $1$, and $\mathbf{Score}_{\textbf{w}, \phi}(x,y')$ is the inner product of the weights
# $\mathbf{w}$ and the example as featurized according to $\phi$.
#
# The `fit` method is then calculating the sub-gradient of this objective. In succinct pseudo-code:
#
# * Initialize $\mathbf{w} = \mathbf{0}$
# * Repeat $T$ times:
# * for each $(x,y) \in \mathcal{D}$ (in random order):
# * $\tilde{y} = \text{argmax}_{y'\in \mathcal{Y}} \mathbf{Score}_{\textbf{w}, \phi}(x,y') + \mathbf{cost}(y,y')$
# * $\mathbf{w} = \mathbf{w} + \eta(\phi(x,y) - \phi(x,\tilde{y}))$
#
# This is very intuitive – push the weights in the direction of the positive cases. It doesn't require any probability theory. And such loss functions have proven highly effective in many settings. For a more powerful version of this classifier, see [sklearn.linear_model.SGDClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html#sklearn.linear_model.SGDClassifier). With `loss='hinge'`, it should behave much like `BasicSGDClassifier` (but faster!).
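#
# The inner-loop update can be sketched directly (a simplified illustration, not the course's `BasicSGDClassifier`; here the joint feature $\phi(x,y)$ is decomposed into one weight vector per class):

```python
import numpy as np

def sgd_step(w, x, y, labels, eta=0.1):
    """One cost-augmented hinge-loss update. `w` maps each label to its
    weight vector, `x` is the feature vector, `y` is the true label."""
    # Cost-augmented prediction: score plus the 0/1 cost.
    scores = {yp: w[yp] @ x + (0.0 if yp == y else 1.0) for yp in labels}
    y_tilde = max(scores, key=scores.get)
    if y_tilde != y:
        w[y] = w[y] + eta * x              # push toward the true label
        w[y_tilde] = w[y_tilde] - eta * x  # push away from the prediction
    return w

labels = ["pos", "neg"]
w = {label: np.zeros(3) for label in labels}
x = np.array([1.0, 0.0, 2.0])
w = sgd_step(w, x, "pos", labels)
print(w["pos"])  # moved toward x
```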
# + [markdown] slideshow={"slide_type": "slide"}
# ### Wrapper for SGDClassifier
#
# For the sake of our experimental framework, a simple wrapper for `SGDClassifier`:
# -
def fit_basic_sgd_classifier(X, y):
"""Wrapper for `BasicSGDClassifier`.
Parameters
----------
X : 2d np.array
The matrix of features, one example per row.
y : list
The list of labels for rows in `X`.
Returns
-------
BasicSGDClassifier
A trained `BasicSGDClassifier` instance.
"""
mod = BasicSGDClassifier()
mod.fit(X, y)
return mod
# + [markdown] slideshow={"slide_type": "slide"}
# ### Wrapper for LogisticRegression
#
# As I said above, we likely don't want to rely on `BasicSGDClassifier` (though it does a good job with SST!). Instead, we want to rely on `sklearn`. Here's a simple wrapper for [sklearn.linear_model.LogisticRegression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) using our
# `build_dataset` paradigm.
# -
def fit_softmax_classifier(X, y):
"""Wrapper for `sklearn.linear.model.LogisticRegression`. This is
also called a Maximum Entropy (MaxEnt) Classifier, which is more
fitting for the multiclass case.
Parameters
----------
X : 2d np.array
The matrix of features, one example per row.
y : list
The list of labels for rows in `X`.
Returns
-------
    sklearn.linear_model.LogisticRegression
A trained `LogisticRegression` instance.
"""
mod = LogisticRegression(
fit_intercept=True,
solver='liblinear',
multi_class='auto')
mod.fit(X, y)
return mod
# + [markdown] slideshow={"slide_type": "slide"}
# ### Other scikit-learn models
#
# * The [sklearn.linear_model](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.linear_model) package has a number of other classifier models that could be effective for SST.
#
# * The [sklearn.ensemble](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.ensemble) package contains powerful classifiers as well. The theme that runs through all of them is that one can get better results by averaging the predictions of a bunch of more basic classifiers. A [RandomForestClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier) will bring some of the power of deep learning models without the optimization challenges (though see [this blog post on some limitations of the current sklearn implementation](https://roamanalytics.com/2016/10/28/are-categorical-variables-getting-lost-in-your-random-forests/)).
#
# * The [sklearn.svm](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.svm) contains variations on Support Vector Machines (SVMs).
# + [markdown] slideshow={"slide_type": "slide"}
# ## Experiments
#
# We now have all the pieces needed to run experiments. And __we're going to want to run a lot of experiments__, trying out different feature functions, taking different perspectives on the data and labels, and using different models.
#
# To make that process efficient and regimented, `sst` contains a function `experiment`. All it does is pull together these pieces and use them for training and assessment. It's complicated, but the flexibility will turn out to be an asset.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Experiment with default values
# -
_ = sst.experiment(
SST_HOME,
unigrams_phi,
fit_softmax_classifier,
train_reader=sst.train_reader,
assess_reader=None,
train_size=0.7,
class_func=sst.ternary_class_func,
score_func=utils.safe_macro_f1,
verbose=True)
# A few notes on this function call:
#
# * Since `assess_reader=None`, the function reports performance on a random train–test split. Give `sst.dev_reader` as the argument to assess against the `dev` set.
#
# * `unigrams_phi` is the function we defined above. By changing/expanding this function, you can start to improve on the above baseline, perhaps periodically seeing how you do on the dev set.
#
# * `fit_softmax_classifier` is the wrapper we defined above. To assess new models, simply define more functions like this one. Such functions just need to consume an `(X, y)` constituting a dataset and return a model.
# + [markdown] slideshow={"slide_type": "slide"}
# ### A dev set run
# -
_ = sst.experiment(
SST_HOME,
unigrams_phi,
fit_softmax_classifier,
class_func=sst.ternary_class_func,
assess_reader=sst.dev_reader)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Assessing BasicSGDClassifier
# -
_ = sst.experiment(
SST_HOME,
unigrams_phi,
fit_basic_sgd_classifier,
class_func=sst.ternary_class_func,
assess_reader=sst.dev_reader)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Comparison with the baselines from Socher et al. 2013
#
# Where does our default set-up sit with regard to published baselines for the binary problem? (Compare [Socher et al., Table 1](http://www.aclweb.org/anthology/D/D13/D13-1170.pdf).)
# -
_ = sst.experiment(
SST_HOME,
unigrams_phi,
fit_softmax_classifier,
class_func=sst.binary_class_func,
assess_reader=sst.dev_reader)
# ### A shallow neural network classifier
#
# While we're at it, we might as well see whether adding a hidden layer to our softmax classifier yields any benefits. Whereas `LogisticRegression` is, at its core, computing
#
# $$\begin{align*}
# y &= \textbf{softmax}(xW_{xy} + b_{y})
# \end{align*}$$
#
# the shallow neural network inserts a hidden layer with a non-linear activation applied to it:
#
# $$\begin{align*}
# h &= \tanh(xW_{xh} + b_{h}) \\
# y &= \textbf{softmax}(hW_{hy} + b_{y})
# \end{align*}$$
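# These two equations amount to the following forward pass (a NumPy sketch with arbitrary small dimensions, independent of `TorchShallowNeuralClassifier`):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

rng = np.random.RandomState(0)
x = rng.randn(4)                           # input features
W_xh, b_h = rng.randn(4, 3), rng.randn(3)  # input -> hidden
W_hy, b_y = rng.randn(3, 2), rng.randn(2)  # hidden -> output

h = np.tanh(x @ W_xh + b_h)  # hidden layer with non-linearity
y = softmax(h @ W_hy + b_y)  # class probabilities
print(y, y.sum())            # probabilities sum to 1
```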
def fit_nn_classifier(X, y):
mod = TorchShallowNeuralClassifier(
hidden_dim=50, max_iter=100)
mod.fit(X, y)
return mod
_ = sst.experiment(
SST_HOME,
unigrams_phi,
fit_nn_classifier,
class_func=sst.binary_class_func)
# It looks like, with enough iterations (and perhaps some fiddling with the activation function and hidden dimensionality), this classifier would meet or exceed the baseline set up by `LogisticRegression`.
# + [markdown] slideshow={"slide_type": "slide"}
# ### A softmax classifier in PyTorch
#
# Our PyTorch modules should support easy modification. For example, to turn `TorchShallowNeuralClassifier` into a `TorchSoftmaxClassifier`, one need only write a new `define_graph` method:
# -
class TorchSoftmaxClassifier(TorchShallowNeuralClassifier):
def define_graph(self):
return nn.Linear(self.input_dim, self.n_classes_)
def fit_torch_softmax(X, y):
mod = TorchSoftmaxClassifier(max_iter=100)
mod.fit(X, y)
return mod
_ = sst.experiment(
SST_HOME,
unigrams_phi,
fit_torch_softmax,
class_func=sst.binary_class_func)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Hyperparameter search
#
# The training process learns __parameters__ — the weights. There are typically lots of other parameters that need to be set. For instance, our `BasicSGDClassifier` has a learning rate parameter and a training iteration parameter. These are called __hyperparameters__. The more powerful `sklearn` classifiers often have many more such hyperparameters. These are outside of the explicitly stated objective, hence the "hyper" part.
#
# So far, we have just set the hyperparameters by hand. However, their optimal values can vary widely between datasets, and choices here can dramatically impact performance, so we would like to set them as part of the overall experimental framework.
# + [markdown] slideshow={"slide_type": "slide"}
# ### utils.fit_classifier_with_crossvalidation
#
# Luckily, `sklearn` provides a lot of functionality for setting hyperparameters via cross-validation. The function `utils.fit_classifier_with_crossvalidation` implements a basic framework for taking advantage of these options.
#
# This method has the same basic shape as `fit_softmax_classifier` above: it takes a dataset as input and returns a trained model. However, to find its favored model, it explores a space of hyperparameters supplied by the user, seeking the optimal combination of settings.
#
# __Note__: this kind of search seems not to have a large impact for SST as we're using it. However, it can matter a lot for other data sets, and it's also an important step to take when trying to publish, since __reviewers are likely to want to check that your comparisons aren't based in part on opportunistic or ill-considered choices for the hyperparameters__.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Example using LogisticRegression
#
# Here's a fairly full-featured use of the above for the `LogisticRegression` model family:
# -
def fit_softmax_with_crossvalidation(X, y):
"""A MaxEnt model of dataset with hyperparameter
cross-validation. Some notes:
* 'fit_intercept': whether to include the class bias feature.
* 'C': weight for the regularization term (smaller is more regularized).
    * 'penalty': type of regularization -- roughly, 'l1' encourages small
    sparse models, and 'l2' encourages the weights to conform to a
    Gaussian prior distribution.
Other arguments can be cross-validated; see
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
Parameters
----------
X : 2d np.array
The matrix of features, one example per row.
y : list
The list of labels for rows in `X`.
Returns
-------
sklearn.linear_model.LogisticRegression
A trained model instance, the best model found.
"""
basemod = LogisticRegression(
fit_intercept=True,
solver='liblinear',
multi_class='auto')
cv = 5
param_grid = {'fit_intercept': [True, False],
'C': [0.4, 0.6, 0.8, 1.0, 2.0, 3.0],
'penalty': ['l1','l2']}
best_mod = utils.fit_classifier_with_crossvalidation(
X, y, basemod, cv, param_grid)
return best_mod
# + slideshow={"slide_type": "-"}
softmax_experiment = sst.experiment(
SST_HOME,
unigrams_phi,
fit_softmax_with_crossvalidation,
class_func=sst.ternary_class_func)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Example using BasicSGDClassifier
# -
# The models written for this course are also compatible with this framework. They ["duck type"](https://en.wikipedia.org/wiki/Duck_typing) the `sklearn` models by having methods `fit`, `predict`, `get_params`, and `set_params`, and an attribute `params`.
def fit_basic_sgd_classifier_with_crossvalidation(X, y):
basemod = BasicSGDClassifier()
cv = 5
param_grid = {'eta': [0.01, 0.1, 1.0], 'max_iter': [10]}
best_mod = utils.fit_classifier_with_crossvalidation(
X, y, basemod, cv, param_grid)
return best_mod
sgd_experiment = sst.experiment(
SST_HOME,
unigrams_phi,
fit_basic_sgd_classifier_with_crossvalidation,
class_func=sst.ternary_class_func)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Statistical comparison of classifier models
#
# Suppose two classifiers differ according to an effectiveness measure like F1 or accuracy. Are they meaningfully different?
#
# * For very large datasets, the answer might be clear: if performance is very stable across different train/assess splits and the difference in terms of correct predictions has practical import, then you can clearly say yes.
#
# * With smaller datasets, or models whose performance is closer together, it can be harder to determine whether the two models are different. We can address this question in a basic way with repeated runs and basic null-hypothesis testing on the resulting score vectors.
#
# In general, one wants to compare __two feature functions against the same model__, or one wants to compare __two models with the same feature function used for both__. If both are changed at the same time, then it will be hard to figure out what is causing any differences you see.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Comparison with the Wilcoxon signed-rank test
#
# The function `sst.compare_models` is designed for such testing. The default set-up uses the non-parametric [Wilcoxon signed-rank test](https://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test) to make the comparisons, which is relatively conservative and recommended by [Demšar 2006](http://www.jmlr.org/papers/v7/demsar06a.html) for cases where one can afford to do multiple assessments. For discussion, see [the evaluation methods notebook](evaluation_methods.ipynb#Wilcoxon-signed-rank-test).
#
# Here's an example showing the default parameters values and comparing `LogisticRegression` and `BasicSGDClassifier`:
# -
_ = sst.compare_models(
SST_HOME,
unigrams_phi,
fit_softmax_classifier,
stats_test=scipy.stats.wilcoxon,
trials=10,
phi2=None, # Defaults to same as first required argument.
train_func2=fit_basic_sgd_classifier, # Defaults to same as second required argument.
reader=sst.train_reader,
train_size=0.7,
class_func=sst.ternary_class_func,
score_func=utils.safe_macro_f1)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Comparison with McNemar's test
#
# [McNemar's test](https://en.wikipedia.org/wiki/McNemar%27s_test) operates directly on the vectors of predictions for the two models being compared. As such, it doesn't require repeated runs, which is good where optimization is expensive. For discussion, see [the evaluation methods notebook](evaluation_methods.ipynb#McNemar's-test).
# -
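# The statistic itself is straightforward to compute from the two prediction vectors; here is an illustrative self-contained sketch (not the `utils.mcnemar` implementation), based on counts of examples that exactly one model gets right:

```python
import numpy as np
from scipy.stats import chi2

def mcnemar_sketch(y_true, pred_a, pred_b):
    """McNemar's test with continuity correction (assumes the two
    models disagree on at least one example, i.e. b + c > 0)."""
    y_true, pred_a, pred_b = map(np.asarray, (y_true, pred_a, pred_b))
    correct_a = pred_a == y_true
    correct_b = pred_b == y_true
    b = np.sum(correct_a & ~correct_b)   # A right, B wrong
    c = np.sum(~correct_a & correct_b)   # B right, A wrong
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    p = 1.0 - chi2.cdf(stat, df=1)
    return stat, p

y_true = [1, 1, 0, 0, 1, 0, 1, 0]
pred_a = [1, 1, 0, 0, 1, 0, 0, 1]
pred_b = [1, 0, 1, 0, 1, 0, 1, 0]
print(mcnemar_sketch(y_true, pred_a, pred_b))
```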
m = utils.mcnemar(
softmax_experiment['assess_dataset']['y'],
sgd_experiment['predictions'],
softmax_experiment['predictions'])
# +
p = "p < 0.0001" if m[1] < 0.0001 else m[1]
print("McNemar's test: {0:0.02f} ({1:})".format(m[0], p))
| sst_02_hand_built_features.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
import torchvision as tv
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np
# + language="bash"
#
# rm -rf .data
# mkdir .data
# unzip -q att-database-of-faces.zip -d .data/faces-training
# cp -a .data/faces-training .data/faces-test
#
# rm .data/faces-training/*/{1,2,3,4,5}.pgm
# rm .data/faces-test/*/{6,7,8,9,10}.pgm
# +
torch.manual_seed(1)
t = tv.transforms.Compose([
tv.transforms.Grayscale(),
tv.transforms.ToTensor()
])
# Load AT&T database of faces.
dataset = tv.datasets.ImageFolder(root=".data/faces-training", transform=t)
# +
target_person = 30
all_images_of_target = [img for img, label in dataset if label == target_person]
_, ax = plt.subplots(1, len(all_images_of_target), figsize=(20, 5))
for p, img in zip(ax, all_images_of_target):
p.imshow(img.squeeze(), cmap="gray")
p.axis("off")
plt.show()
# +
nc = 40
nf = 112 * 92
model = torch.nn.Linear(nf, nc)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = torch.nn.CrossEntropyLoss()
# +
loader = torch.utils.data.DataLoader(dataset, batch_size=20, shuffle=True)
n_epochs = 20
cost = []
for i in range(n_epochs):
l = 0
n = 0
for img, labels in loader:
img = img.view(-1, nf) # from [nbatches, 1, 112, 92] to [nbatches, 10304]
output = model(img)
opt.zero_grad()
loss = criterion(output, labels)
loss.backward()
##########################################################
# Enable the following lines to get more privacy.
##########################################################
#rnd = torch.distributions.normal.Normal(0.0, 1.0)
#for p in model.parameters():
# p.grad += rnd.sample(torch.Size(p.grad.shape)) * 0.3
##########################################################
opt.step()
l += loss.item()
n += 1
print(i, l/n)
cost.append(l/n)
plt.plot(cost)
plt.show()
# +
dataset = tv.datasets.ImageFolder(root=".data/faces-test", transform=t)
test_loader = torch.utils.data.DataLoader(dataset, batch_size=200)
with torch.no_grad():
    img, labels = next(iter(test_loader))
r = model(img.view(-1, nf))
p = r.argmax(dim=1)
print("images:", len(labels))
print("accuracy:", (labels == p).sum().item() / len(labels))
# +
import torch.nn.functional as F
x = torch.zeros(nf, requires_grad=True)
o = torch.optim.SGD([x], lr=0.1)
for i in range(1000):
scores = F.softmax(model(x.view(1, nf)), dim=1).squeeze()
e = torch.tensor([1.0]) - scores[target_person] # error for the target label
o.zero_grad()
e.backward()
o.step()
x
# -
r = F.softmax(model(x), dim=0)
print("score of target person:", r[target_person].item())
print("scores:")
r
# +
img = x.view(112, 92).detach()
plt.imshow(img, cmap="gray")
plt.show()
| inversion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (dataSc)
# language: python
# name: datasc
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"></ul></div>
# +
data = """\
category\tdate1\tdate2
blue\t1/1/2018
blue\t\t1/2/2018
blue\t\t1/5/2018
blue\t2/1/2018
green\t1/3/2018
green\t\t1/8/2018
red\t12/1/2018
red\t\t11/1/2018
red\t\t12/5/2018
"""
# For each (category, date1) pair, find the minimum date2 that falls
# within 7 days of date1.
# e.g. for blue 1/1/2018 in date1, the required date2 is 1/2/2018.
# e.g. for blue 2/1/2018, there is no such date2, so the result is NaN.
required = """\
category date1 isDateWithin7Days? min_date
0 blue 01-01-2018 True 01-02-2018
1 blue 02-01-2018 False NaN
2 green 01-03-2018 True 01-08-2018
3 red 12-01-2018 True 12-05-2018
"""
# -
with open('dates.csv','w') as fo:
fo.write(data)
# + language="bash"
# cat dates.csv
# +
import pandas as pd
import numpy as np
df = pd.read_csv('dates.csv',sep=r'\t',engine='python')
df['date1'] = pd.to_datetime(df['date1'], format = '%m/%d/%Y',errors='ignore')
df['date2'] = pd.to_datetime(df['date2'], format = '%m/%d/%Y',errors='ignore')
df
# -
# get date1 only
df1 = df.dropna(subset = ['date1']).drop(columns = ['date2'])
df1
# get date2 only
df2 = df.dropna(subset = ['date2']).drop(columns = ['date1'])
df2
# merge them
df3 = df1.merge(df2, on = 'category')
df3
df3['isDateWithin7Days?'] = df3['date2'].between(df3['date1'] - pd.Timedelta(days=7),
df3['date1'] + pd.Timedelta(days=7))
df3
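# The windowed check above relies on `Series.between` with `Timedelta` bounds (inclusive on both ends by default); a minimal stand-alone illustration:

```python
import pandas as pd

d1 = pd.to_datetime(pd.Series(["1/1/2018", "2/1/2018"]))
d2 = pd.to_datetime(pd.Series(["1/2/2018", "2/20/2018"]))

# True where d2 lies within +/- 7 days of the paired d1.
within = d2.between(d1 - pd.Timedelta(days=7), d1 + pd.Timedelta(days=7))
print(within.tolist())  # [True, False]
```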
# +
df4 = df3.copy()
min_dates = df4[df4['isDateWithin7Days?']].groupby(
['category', 'date1'])['date2'].min().reset_index().rename(
columns = {'date2': 'min_date'})
min_dates
# -
df3 = df3.groupby(
['category', 'date1'])['isDateWithin7Days?'].sum().reset_index()
df3
df3['isDateWithin7Days?'] = np.where(df3['isDateWithin7Days?'] > 0, True, False)
df3
df3.merge(min_dates, how = 'left', on = ['category', 'date1'])
| Interesting_Python/some_modules/date2_within_7_days_date1/date2_within_7days.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # SparkMagic Sample to Access a Spark Cluster with Livy and Load Data from Cassandra
#
# Following configuration is set in the file `sparkmagic/config.json`:
#
# ````
# {
# "kernel_python_credentials" : {
# "username" : "",
# "password" : "",
# "url": "http://[livy-hostname]:8998",
# "auth": "None"
# },
# "session_configs": {
# "driverMemory": "2048M",
# "executorMemory": "2048M",
# "conf": {
# "spark.jars":"file:///opt/additionaljars/spark-cassandra-connector_2.11-2.4.3.jar",
# "spark.cores.max":3,
# "spark.cassandra.input.consistency.level": "QUORUM",
# "spark.cassandra.connection.host": "[cassandra-hostname]",
# "spark.cassandra.auth.username": "username",
# "spark.cassandra.auth.password": "password"
# }
# }
# }
# ````
# At the very least, you have to configure the correct hostname of Apache Livy and of any node of the Cassandra cluster, including the username and password (if authentication is required)!
# On a Spark standalone cluster, the property spark.cores.max should be set to limit the resources used by this session!
#
# Another important point is the property "spark.jars", where the file path of the Cassandra connector jar is set. This absolute file path must be available on the Livy server. Additionally, you have to set the following property in the Livy configuration to allow loading jars from a given folder:
#
# `livy.file.local-dir-whitelist = /opt/additionaljars/`
# %load_ext sparkmagic.magics
# %manage_spark
# + language="spark"
# spark.version
# + language="spark"
#
# table_name = "test"
# keyspace_name = "testkeyspace"
# df = spark.read.format("org.apache.spark.sql.cassandra").options(keyspace=keyspace_name,table=table_name).load().where("surname = 'Kohatz'")
# df.show()
| jupyter/jupyter-sparkmagic/notebooks/SampleNotebookWithCassandraConnector.ipynb |
# ---
# jupyter:
# jupytext:
# notebook_metadata_filter: all
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# language_info:
# codemirror_mode:
# name: ipython
# version: 3
# file_extension: .py
# mimetype: text/x-python
# name: python
# nbconvert_exporter: python
# pygments_lexer: ipython3
# version: 3.6.5
# plotly:
# description: How to configure Plotly's Python API to work with corporate proxies
# display_as: chart_studio
# has_thumbnail: true
# language: python
# layout: base
# name: Requests Behind Corporate Proxies
# order: 10
# permalink: python/proxy-configuration/
# thumbnail: thumbnail/net.jpg
# title: requests.exceptions.ConnectionError - Getting Around Corporate Proxies
# v4upgrade: true
# ---
# If you are behind a corporate firewall, you may see the error message:
# ```
# requests.exceptions.ConnectionError: ('Connection aborted.', TimeoutError(10060, ...)
# ```
# Plotly uses the requests module to communicate with the Plotly server. You can configure proxies by setting the environment variables HTTP_PROXY and HTTPS_PROXY.
# ```
# $ export HTTP_PROXY="http://10.10.1.10:3128"
# $ export HTTPS_PROXY="http://10.10.1.10:1080"
# ```
# To use HTTP Basic Auth with your proxy, use the http://user:password@host/ syntax:
#
# ```
# $ export HTTP_PROXY="http://user:pass@10.10.1.10:3128/"
# ```
#
# Note that proxy URLs must include the scheme.
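#
# The same variables can also be set from inside Python before any request is made; `requests` reads them from the environment at call time (the addresses below are placeholders):

```python
import os

# Placeholder proxy addresses; substitute your corporate proxy here.
os.environ["HTTP_PROXY"] = "http://10.10.1.10:3128"
os.environ["HTTPS_PROXY"] = "http://10.10.1.10:1080"

print(os.environ["HTTPS_PROXY"])
```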
#
# You may also see this error if your proxy variable is set but you are no longer behind the corporate proxy. Check if a proxy variable is set with:
#
# ```
# $ echo $HTTP_PROXY
# $ echo $HTTPS_PROXY
# ```
# **Still not working?**
#
# [Log an issue](https://github.com/plotly/plotly.py)
#
# Contact [<EMAIL>]()
#
# Get in touch with your IT department, and ask them about corporate proxies.
#
# See the [Requests documentation on configuring proxies](http://docs.python-requests.org/en/latest/user/advanced/#proxies).
#
# Plotly for IPython Notebooks is also available for [offline use](https://plot.ly/python/offline/).
#
# [Chart Studio Enterprise](https://plot.ly/product/enterprise) is available for behind-the-firewall corporate installations.
| _posts/python/chart-studio/proxy-configuration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Demonstration notebook for SfM alignment dataset
#
# This notebook demonstrates how to load and display the dataset.
import numpy as np
from matplotlib import pyplot as plt
import cv2
# #### Loading the global SfM model
# The 3d points in the global SfM model are stored as 64-bit floating point 3-vectors.
#
# x1,y1,z1,x2,y2,z2,...
#
points = np.fromfile('points.double3',dtype='float64')
points = points.reshape([-1,3])
points.shape
# The point cloud has a lot of outlier points in it. The office scene is contained in a $[-3, 3]^3$ box.
plt.scatter(points[:,0],points[:,1],s=0.1)
plt.xlim([-3,3])
plt.ylim([-3,3])
plt.title('SfM Global Point Cloud')
plt.show()
# Each point in the global SfM model has an associated SIFT descriptor. These are stored as 32-bit float 128-vectors.
descriptors = np.fromfile('descriptors.float128',dtype='float32')
descriptors = descriptors.reshape([-1,128])
descriptors.shape
# #### Loading a SLAM sequence
# +
def skew3(x):
A = np.zeros((3,3))
A[0,1] = -x[2]
A[0,2] = x[1]
A[1,0] = x[2]
A[1,2] = -x[0]
A[2,0] = -x[1]
A[2,1] = x[0]
return A
def so3exp(r):
theta = np.sqrt(np.sum(r**2))
if theta < 1e-10:
return np.eye(3)
K = skew3(r/theta)
R = np.eye(3)+np.sin(theta)*K+(1-np.cos(theta))*np.dot(K,K)
return R
class Keyframe:
def __init__(self,directory,index):
basepath = '%s/frame%02d'%(directory,index)
self.descriptors = np.fromfile(basepath + '.descriptors.float128',dtype='float32').reshape([-1,128])
self.features = np.fromfile(basepath + '.features.float2',dtype='float32').reshape([-1,2])
self.gt_c = np.fromfile(basepath + '.groundtruth_position.double3',dtype='float64')
self.gt_r = np.fromfile(basepath + '.groundtruth_rotation.double3',dtype='float64')
self.gt_R = so3exp(self.gt_r)
self.gt_t = -np.dot(self.gt_R, self.gt_c)
self.slam_t = np.fromfile(basepath + '.slampose_translation.double3',dtype='float64')
self.slam_r = np.fromfile(basepath + '.slampose_rotation.double3',dtype='float64')
self.slam_R = so3exp(self.slam_r)
self.slam_c = -np.dot(self.slam_R.T, self.slam_t)
def get_matches(self,descriptors):
bf = cv2.BFMatcher()
matches = bf.knnMatch(self.descriptors,descriptors,k=2)
good = []
for m,n in matches:
if m.distance < 0.75*n.distance:
good.append(m)
return good
def load_frames(path):
""" Load all keyframes in a directory."""
frames = []
index = 0
while True:
try:
frames.append(Keyframe(path,index))
index += 1
        except FileNotFoundError:
            # Stop once the next frame's files are missing.
            break
return frames
# -
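# As a quick sanity check on the Rodrigues formula used in `so3exp` above (restated compactly here so the cell stands alone), a rotation of $\pi/2$ about the $z$-axis should send $e_x$ to $e_y$:

```python
import numpy as np

def so3exp_check(r):
    """Rodrigues' formula: axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(r)
    if theta < 1e-10:
        return np.eye(3)
    kx, ky, kz = r / theta
    K = np.array([[0, -kz, ky], [kz, 0, -kx], [-ky, kx, 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

R = so3exp_check(np.array([0.0, 0.0, np.pi / 2]))  # 90 deg about z
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))  # e_x maps to e_y
```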
# We will load the office3 dataset as an example.
frames = load_frames('office3')
# Here we plot the ground truth keyframe positions on top of the SfM point cloud.
gt_c = np.stack([frame.gt_c for frame in frames])
plt.scatter(points[:,0],points[:,1],s=0.1)
plt.xlim([-3,3])
plt.ylim([-3,3])
plt.plot(gt_c[:,0],gt_c[:,1],'r-')
plt.title('Ground truth camera positions')
plt.show()
# Here is a plot of the camera trajectory as estimated by the SLAM system. Note that the coordinate system is not aligned with the ground truth.
slam_c = np.stack([frame.slam_c for frame in frames])
plt.plot(slam_c[:,0],slam_c[:,1],'r-')
plt.axis('equal')
plt.title('SLAM trajectory estimate')
plt.show()
# #### Test absolute orientation registration of SLAM poses and GT poses
#
# Now we will register the ground truth and SLAM trajectories using a similarity transformation and compute the resulting mean-squared error.
# code from https://github.com/raulmur/evaluate_ate_scale/blob/master/evaluate_ate_scale.py
def align(model,data):
"""Align two trajectories using the method of Horn (closed-form).
Input:
model -- first trajectory (3xn)
data -- second trajectory (3xn)
Output:
rot -- rotation matrix (3x3)
trans -- translation vector (3x1)
trans_error -- translational error per point (1xn)
"""
np.set_printoptions(precision=3,suppress=True)
model_zerocentered = model - model.mean(1)
data_zerocentered = data - data.mean(1)
W = np.zeros( (3,3) )
for column in range(model.shape[1]):
W += np.outer(model_zerocentered[:,column],data_zerocentered[:,column])
    U,d,Vh = np.linalg.svd(W.transpose())
S = np.matrix(np.identity( 3 ))
if(np.linalg.det(U) * np.linalg.det(Vh)<0):
S[2,2] = -1
rot = U*S*Vh
rotmodel = rot*model_zerocentered
dots = 0.0
norms = 0.0
for column in range(data_zerocentered.shape[1]):
dots += np.dot(data_zerocentered[:,column].transpose(),rotmodel[:,column])
normi = np.linalg.norm(model_zerocentered[:,column])
norms += normi*normi
s = float(dots/norms)
trans = data.mean(1) - s*rot * model.mean(1)
model_aligned = s*rot * model + trans
alignment_error = model_aligned - data
trans_error = np.sqrt(np.sum(np.multiply(alignment_error,alignment_error),0)).A[0]
return rot,trans,trans_error, s
first_xyz = np.matrix(gt_c).T
second_xyz = np.matrix(slam_c).T
rot,trans,trans_error,scale = align(second_xyz,first_xyz)
second_xyz_aligned = scale * rot * second_xyz + trans
slam_c_aligned = np.array(second_xyz_aligned.T)
plt.scatter(points[:,0],points[:,1],s=0.1)
plt.xlim([-3,3])
plt.ylim([-3,3])
plt.plot(gt_c[:,0],gt_c[:,1],'r-')
plt.plot(slam_c_aligned[:,0],slam_c_aligned[:,1],'y-')
plt.title('SLAM and ground truth trajectories after alignment')
plt.legend(['GT','SLAM'])
plt.show()
mse = np.mean((gt_c-slam_c_aligned)**2)
print('Mean squared error after alignment: %f'%mse)
| Demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
datadf = pd.read_csv('data/visual-search-study-data-live-analyze.csv')
datadf['source'] = 'data'
dataMetricsdf = pd.read_csv('data/metrics.csv')
dataMetricsdf['source'] = 'data'
# get rid of outliers
datadf = datadf.loc[datadf['timetaken'] < 10000]
datadf = datadf.loc[datadf['timetaken'] > 250]
datadf = datadf.loc[datadf['uuid'] != '']
basemaps = [
'dark',
'streets',
'imagery'
]
# # add random all data to all other basemaps - same map so would have been same numbers
# for baseMapIndex in range(len(basemaps)):
# datadfRandomAll = datadf.loc[datadf['distractor'] == 'random']
# datadfRandomAll = datadfRandomAll.loc[datadf['distractorlevel'] == 'all']
# datadfRandomAll = datadfRandomAll.loc[datadf['basemap'] == 'none']
# datadfRandomAll['source'] = 'copied'
# datadfRandomAll['basemap'] = basemaps[baseMapIndex]
# datadfRandomAll['distractorcat'] = datadfRandomAll['distractorcat'].replace('none' , basemaps[baseMapIndex], regex=True)
# datadf = datadf.append(datadfRandomAll, ignore_index=True)
# # add clustered mixed data to all other basemaps - same map so would have been same numbers
# for baseMapIndex in range(len(basemaps)):
# datadfRandomAll = datadf.loc[datadf['distractor'] == 'clustered']
# datadfRandomAll = datadfRandomAll.loc[datadf['color'] == 'mixed']
# datadfRandomAll = datadfRandomAll.loc[datadf['basemap'] == 'none']
# datadfRandomAll['source'] = 'copied'
# datadfRandomAll['basemap'] = basemaps[baseMapIndex]
# datadfRandomAll['distractorcat'] = datadfRandomAll['distractorcat'].replace('none' , basemaps[baseMapIndex], regex=True)
# datadf = datadf.append(datadfRandomAll, ignore_index=True)
# create categories
datadf['grpcat'] = datadf['target'] + "-" + datadf['distractorcat']
datadf['category'] = datadf['distractorcat']
datadf['posthoccat'] = datadf['distractor'] + '-' + datadf['distractorlevel'] + '-' + datadf['color'] + '-' + datadf['target'].replace(' ' ,'_', regex=True)
datadf['posthoccat2'] = datadf['distractor'] + '-' + datadf['distractorlevel'] + '-' + datadf['color']
datadf['posthoccat3'] = datadf['distractor'] + '-' + datadf['color'] + '-' + datadf['target'].replace(' ' ,'_', regex=True)
# log metrics
dataMetricsdf['noplog'] = np.log(dataMetricsdf['number_of_patches'])
dataMetricsdf['contagionlog'] = np.log(dataMetricsdf['contagion'])
dataMetricsdf['shapeindexlog'] = np.log(dataMetricsdf['shape_index_mn'])
dataMetricsdf['fracdimlog'] = np.log(dataMetricsdf['fractal_dimension_mn'])
dataMetricsdf['ennlog'] = np.log(dataMetricsdf['euclidean_nearest_neighbor_mn'])
dataMetricsdf['patchdensitylog'] = np.log(dataMetricsdf['patch_density'])
dataMetricsdf['areamnlog'] = np.log(dataMetricsdf['area_mn'])
# merge metrics and data
datadf = pd.merge(datadf, dataMetricsdf, on="distractorcat", how="left")
# export merged data
datadf.to_csv('data/data-and-metrics.csv')
# -
datadf['searchmapurl']
# +
import matplotlib.pyplot as plt
import seaborn as sns
# basemap = 'none'
highlim = 10
metric = 'number_of_patches'
datadf['nop_rank'] = pd.qcut(datadf[metric], highlim, labels = False)
metric = 'euclidean_nearest_neighbor_mn'
datadf['enn_rank'] = pd.qcut(datadf[metric], highlim, labels = False)
metric = 'contagion'
datadf['cont_rank'] = pd.qcut(datadf[metric], highlim, labels = False)
datadf
datadfMetricRankPivot = pd.pivot_table(datadf, values=['timetaken'], index=['enn_rank'], columns=['target'], aggfunc=np.mean)
datadfMetricRankPivot = datadfMetricRankPivot.rename_axis(None, axis=0)
datadfMetricRankPivot.columns = datadfMetricRankPivot.columns.get_level_values(1)
datadfMetricRankPivot['diff'] = datadfMetricRankPivot['less gestalt'] - datadfMetricRankPivot['gestalt']
datadfMetricRankPivot = datadfMetricRankPivot.sort_values(["diff"], ascending = (False))
datadfMetricRankPivot['rank'] = datadfMetricRankPivot.index
datadfMetricRankPivot
# # sns.lineplot(data=datadfMetricRankPivot, x="rank", y="diff")
# val = datadfMetricRankPivot.index.values[0]
# datadfLim = datadf.loc[datadf['cont_rank'] == val]
# grouped_single = datadfLim.groupby(['file', 'target']).agg({'timetaken': ['mean']})
# grouped_single = grouped_single.reset_index()
# grouped_single.columns = ['file', 'target', 'timetaken_mean']
# grouped_single
# Pivot = pd.pivot_table(grouped_single, values=['timetaken_mean'], index=['file'], columns=['target'], aggfunc=np.mean)
# Pivot = Pivot.rename_axis(None, axis=0)
# Pivot.columns = Pivot.columns.get_level_values(1)
# Pivot['file'] = Pivot.index
# Pivot['file'] = Pivot['file'].replace('/tiledata/maps/map-for-s3-tifs-georef/target-none/', '', regex=True)
# Pivot['file'] = Pivot['file'].replace('/tiledata/maps/map-for-s3-tifs-georef/target-none/', '', regex=True)
# Pivot['file'] = Pivot['file'].replace('/tiledata/maps/missed/', '', regex=True)
# Pivot['file'] = Pivot['file'].replace('/tiledata/maps/map-for-s3-tifs-georef/blue-target-none/', '', regex=True)
# Pivot['file'] = Pivot['file'].replace('.tif', '.png', regex=True)
# Pivot['diff'] = Pivot['less gestalt'] - Pivot['gestalt']
# Pivot = Pivot.sort_values(["diff"], ascending = (False))
# Pivot = Pivot.reset_index()
# Pivot.columns = ['rem', 'gestalt', 'less gestalt', 'no target', 'file', 'diff']
# Pivot = Pivot.drop(columns=['rem','gestalt', 'less gestalt', 'no target'])
# Pivot['rank'] = val
# Pivot
# Pivot.to_csv('data/top_maps-enn_rank-baddata.csv')
# -
# +
highlim = 10
metric = 'number_of_patches'
datadf['nop_rank'] = pd.qcut(datadf[metric], highlim, labels = False)
metric = 'euclidean_nearest_neighbor_mn'
datadf['enn_rank'] = pd.qcut(datadf[metric], highlim, labels = False)
metric = 'contagion'
datadf['cont_rank'] = pd.qcut(datadf[metric], highlim, labels = False)
datadfsort = datadf.loc[datadf['target'] != 'no target']
datadfsort = datadfsort.reset_index()
datadfsort = datadfsort.sort_values(["timetaken","contagion"], ascending = (False))
datadfsort['file'] = datadfsort['file'].replace('/tiledata/maps/map-for-s3-tifs-georef/target-none/', '', regex=True)
datadfsort['file'] = datadfsort['file'].replace('/tiledata/maps/map-for-s3-tifs-georef/target-none/', '', regex=True)
datadfsort['file'] = datadfsort['file'].replace('/tiledata/maps/missed/', '', regex=True)
datadfsort['file'] = datadfsort['file'].replace('/tiledata/maps/map-for-s3-tifs-georef/blue-target-none/', '', regex=True)
datadfsort['file'] = datadfsort['file'].replace('.tif', '.png', regex=True)
grouped_single = datadfsort.groupby(['file', 'target']).agg({'timetaken': ['mean'],
'nop_rank': ['mean'],
'enn_rank': ['mean'],
'cont_rank': ['mean']})
grouped_single = grouped_single.reset_index()
grouped_single.columns = ['file', 'target', 'timetaken_mean', 'nop_mean', 'enn_mean', 'cont_mean']
Pivot = (grouped_single.pivot_table(index=['file','nop_mean', 'enn_mean', 'cont_mean'],
columns='target', values='timetaken_mean')
.reset_index()
.rename_axis(None, axis=1))
Pivot['diff'] = Pivot['less gestalt'] - Pivot['gestalt']
Pivot['diffpercent'] = Pivot['diff']/Pivot['gestalt']
Pivot = Pivot.sort_values(["diffpercent"], ascending = (False))
Pivot
Pivot.to_csv('data/biggest_differences.csv')
Pivot = Pivot.sort_values(["less gestalt"], ascending = (False))
Pivot
Pivot.to_csv('data/rt_highest_lessgestalt.csv')
Pivot = Pivot.sort_values(["gestalt"], ascending = (False))
Pivot
Pivot.to_csv('data/rt_highest_gestalt.csv')
# -
| notebooks/import and clean.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
from random import randrange
class Pet():
boredom_decrement = 4
hunger_decrement = 6
boredom_threshold = 5
hunger_threshold = 10
sounds = ['Mrrp']
def __init__(self, name = "Kitty", pet_type="dog"):
self.name = name
self.hunger = randrange(self.hunger_threshold)
self.boredom = randrange(self.boredom_threshold)
self.sounds = self.sounds[:] # copy the class attribute, so that when we make changes to it, we won't affect the other Pets in the class
self.pet_type = pet_type
def clock_tick(self):
self.boredom += 1
self.hunger += 1
def mood(self):
if self.hunger <= self.hunger_threshold and self.boredom <= self.boredom_threshold:
if self.pet_type == "dog": # if the pet is a dog, it will express its mood in different ways from a cat or any other type of animal
return "happy"
elif self.pet_type == "cat":
return "happy, probably"
else:
return "HAPPY"
elif self.hunger > self.hunger_threshold:
if self.pet_type == "dog": # same for hunger -- dogs and cats will express their hunger a little bit differently in this version of the class definition
return "hungry, arf"
elif self.pet_type == "cat":
return "hungry, meeeeow"
else:
return "hungry"
else:
return "bored"
def __str__(self):
state = " I'm " + self.name + ". "
state += " I feel " + self.mood() + ". "
return state
def hi(self):
print(self.sounds[randrange(len(self.sounds))])
self.reduce_boredom()
def teach(self, word):
self.sounds.append(word)
self.reduce_boredom()
def feed(self):
self.reduce_hunger()
def reduce_hunger(self):
self.hunger = max(0, self.hunger - self.hunger_decrement)
def reduce_boredom(self):
self.boredom = max(0, self.boredom - self.boredom_decrement)
# -
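# The copy in `self.sounds = self.sounds[:]` above is essential: `sounds` starts out as a *class* attribute, so without the copy `teach` would append to the one list shared by every instance. A minimal sketch of the difference (hypothetical toy classes, not the Pet class itself):

```python
class Shared:
    sounds = ['Mrrp']              # class attribute: one list shared by all instances
    def teach(self, word):
        self.sounds.append(word)   # mutates the shared list

class Copied:
    sounds = ['Mrrp']
    def __init__(self):
        self.sounds = self.sounds[:]   # per-instance copy, like Pet does
    def teach(self, word):
        self.sounds.append(word)

a, b = Shared(), Shared()
a.teach('Woof')
print(b.sounds)   # ['Mrrp', 'Woof'] -- b "learned" the word too

c, d = Copied(), Copied()
c.teach('Woof')
print(d.sounds)   # ['Mrrp'] -- d is unaffected
```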
# ### Inheriting Variables and Methods
# +
CURRENT_YEAR = 2022
class Person:
    def __init__(self, name, year_born):
        self.name = name
        self.year_born = year_born
    def getAge(self):
        return CURRENT_YEAR - self.year_born
    def __str__(self):
        return f'{self.name}, {self.getAge()}'
# -
alice = Person("<NAME>", 1990)
print(alice)
# We want to create a class 'Student' with the same attributes as 'Person', plus one extra attribute
class Student:
    def __init__(self, name, year_born):
        self.name = name
        self.year_born = year_born
        self.knowledge = 0
    def study(self):
        self.knowledge += 1
    def getAge(self):
        return CURRENT_YEAR - self.year_born
    def __str__(self):
        return f'{self.name}, {self.getAge()}'
alice = Student("<NAME>", 1990)
alice.study()
print(alice.knowledge)
# #### With inheritance:
class Student(Person):
def __init__(self, name, year_born):
Person.__init__(self, name, year_born) #inheritance
self.knowledge = 0
def study(self):
self.knowledge += 1
alice = Student("<NAME>", 1990)
alice.study()
print(alice.knowledge)
print(alice.getAge())
# ### Invoking the Parent Class's Method
# +
from random import randrange
class Pet():
boredom_decrement = 4
hunger_decrement = 6
boredom_threshold = 5
hunger_threshold = 10
sounds = ['Mrrp']
def __init__(self, name = "Kitty", pet_type="dog"):
self.name = name
self.hunger = randrange(self.hunger_threshold)
self.boredom = randrange(self.boredom_threshold)
self.sounds = self.sounds[:] # copy the class attribute, so that when we make changes to it, we won't affect the other Pets in the class
self.pet_type = pet_type
def clock_tick(self):
self.boredom += 1
self.hunger += 1
def mood(self):
if self.hunger <= self.hunger_threshold and self.boredom <= self.boredom_threshold:
if self.pet_type == "dog": # if the pet is a dog, it will express its mood in different ways from a cat or any other type of animal
return "happy"
elif self.pet_type == "cat":
return "happy, probably"
else:
return "HAPPY"
elif self.hunger > self.hunger_threshold:
if self.pet_type == "dog": # same for hunger -- dogs and cats will express their hunger a little bit differently in this version of the class definition
return "hungry, arf"
elif self.pet_type == "cat":
return "hungry, meeeeow"
else:
return "hungry"
else:
return "bored"
def __str__(self):
state = " I'm " + self.name + ". "
state += " I feel " + self.mood() + ". "
return state
def hi(self):
print(self.sounds[randrange(len(self.sounds))])
self.reduce_boredom()
def teach(self, word):
self.sounds.append(word)
self.reduce_boredom()
def feed(self):
self.reduce_hunger()
def reduce_hunger(self):
self.hunger = max(0, self.hunger - self.hunger_decrement)
def reduce_boredom(self):
self.boredom = max(0, self.boredom - self.boredom_decrement)
# +
# this version of feed only prints 'Arf! Thanks!'; it does not reduce hunger
class Dog(Pet):
def feed(self):
print('Arf! Thanks!')
# this version both prints and reduces hunger, by delegating to Pet.feed
class Dog(Pet):
def feed(self):
Pet.feed(self)
print('Arf! Thanks!')
d1 = Dog('Astro')
print(d1.hunger)
d1.feed()
print(d1.hunger)
# -
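# The `Pet.feed(self)` call above names the parent class explicitly; the equivalent modern idiom is `super().feed()`, which keeps working even if the base class is renamed. A self-contained sketch using a minimal stand-in for `Pet` (fixed starting hunger instead of a random one, so the output is deterministic):

```python
class Pet:
    hunger_decrement = 6
    def __init__(self, name="Kitty"):
        self.name = name
        self.hunger = 10       # fixed value for this sketch
    def feed(self):
        self.hunger = max(0, self.hunger - self.hunger_decrement)

class Dog(Pet):
    def feed(self):
        super().feed()          # same effect as Pet.feed(self)
        print('Arf! Thanks!')

d1 = Dog('Astro')
d1.feed()
print(d1.hunger)   # 4
```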
| 02_Inheritance.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
index = ['a','b','c','d','e1','e2']
values = ['Austria','Belgium','Canada','Denmark','England','Estonia']
s = pd.Series(values,index)
s
searchList = ['a','b']
s.loc[searchList]
searchListNotFound = ['a','b','f']
s.loc[searchListNotFound]
s.reindex(searchListNotFound)
s.index
s.index.intersection(searchListNotFound)
s.loc[s.index.intersection(searchListNotFound)]
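# The complementary question -- which of the requested labels are *missing* from the index -- can be answered with `Index.difference`; a quick self-contained sketch:

```python
import pandas as pd

s = pd.Series(['Austria', 'Belgium', 'Canada'], index=['a', 'b', 'c'])
wanted = ['a', 'b', 'f']

# labels we asked for that the Series does not have
missing = pd.Index(wanted).difference(s.index)
print(list(missing))   # ['f']
```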
import pandas as pd
index = ['a','b','c','d','e','e']
values = ['Austria','Belgium','Canada','Denmark','England','Estonia']
s = pd.Series(values,index)
s
s.reindex(searchListNotFound)
s.loc[s.index.intersection(searchListNotFound)]
searchListNotFound += ['e']  # add another 'e' label to the search list
s.loc[s.index.intersection(searchListNotFound)]
| Data_Series/Reindex_intersection_Data_Series.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import warnings
warnings.filterwarnings("ignore")
import os
import jieba
import torch
import pickle
import torch.nn as nn
import torch.optim as optim
import pandas as pd
from ark_nlp.model.tm.bert import Bert
from ark_nlp.model.tm.bert import BertConfig
from ark_nlp.dataset import TMDataset
from ark_nlp.model.tm.bert import Task
from ark_nlp.model.tm.bert import get_default_model_optimizer
from ark_nlp.model.tm.bert import Tokenizer
# -
# ### 1. Data Loading and Preprocessing
# #### 1. Loading the data
class CondTMDataset(TMDataset):
def __init__(
self,
data_path,
categories=None,
is_retain_dataset=False
):
super(CondTMDataset, self).__init__(data_path, categories, is_retain_dataset)
self.conditions = sorted(list(set([data['condition'] for data in self.dataset])))
self.condition2id = dict(zip(self.conditions, range(len(self.conditions))))
def _convert_to_transfomer_ids(self, bert_tokenizer):
features = []
for (index_, row_) in enumerate(self.dataset):
input_ids = bert_tokenizer.sequence_to_ids(row_['text_a'], row_['text_b'])
input_ids, input_mask, segment_ids = input_ids
label_ids = self.cat2id[row_['label']]
input_a_length = self._get_input_length(row_['text_a'], bert_tokenizer)
input_b_length = self._get_input_length(row_['text_b'], bert_tokenizer)
features.append({
'input_ids': input_ids,
'attention_mask': input_mask,
'token_type_ids': segment_ids,
'condition_ids': self.condition2id[row_['condition']],
'label_ids': label_ids
})
return features
# +
train_data_df = pd.read_json('../data/source_datasets/CHIP-STS/CHIP-STS_train.json')
train_data_df = (train_data_df
.rename(columns={'text1': 'text_a', 'text2': 'text_b', 'category': 'condition'})
.loc[:,['text_a', 'text_b', 'condition', 'label']])
dev_data_df = pd.read_json('../data/source_datasets/CHIP-STS/CHIP-STS_dev.json')
dev_data_df = dev_data_df[dev_data_df['label'] != "NA"]
dev_data_df = (dev_data_df
.rename(columns={'text1': 'text_a', 'text2': 'text_b', 'category': 'condition'})
.loc[:,['text_a', 'text_b', 'condition', 'label']])
# -
tm_train_dataset = CondTMDataset(train_data_df)
tm_dev_dataset = CondTMDataset(dev_data_df)
# #### 2. Building the vocabulary and tokenizer
tokenizer = Tokenizer(vocab='nghuyong/ernie-1.0', max_seq_len=50)
# #### 3. Converting text to IDs
tm_train_dataset.convert_to_ids(tokenizer)
tm_dev_dataset.convert_to_ids(tokenizer)
# <br>
#
# ### 2. Model Construction
# #### 1. Setting model parameters
config = BertConfig.from_pretrained('nghuyong/ernie-1.0',
num_labels=len(tm_train_dataset.cat2id))
config.num_conditions = len(tm_train_dataset.condition2id)
# #### 2. Creating the model
from ark_nlp.nn.layer.layer_norm_block import CondLayerNormLayer
class CondBert(Bert):
def __init__(
self,
config,
encoder_trained=True,
pooling='cls_with_pooler'
):
super(CondBert, self).__init__(config, encoder_trained, pooling)
self.condition_embed = nn.Embedding(config.num_conditions, config.hidden_size)
nn.init.uniform_(self.condition_embed.weight.data)
self.cond_layer_normal = CondLayerNormLayer(config.hidden_size)
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
condition_ids=None,
**kwargs
):
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
return_dict=True,
output_hidden_states=True
)
condition_feature = self.condition_embed(condition_ids)
encoder_feature = self.cond_layer_normal(outputs.hidden_states[-1], condition_feature)
encoder_feature = self.mask_pooling(encoder_feature, attention_mask)
encoder_feature = self.dropout(encoder_feature)
out = self.classifier(encoder_feature)
return out
dl_module = CondBert.from_pretrained('nghuyong/ernie-1.0',
config=config)
# <br>
#
# ### 3. Task Construction
# #### 1. Task parameters and required components
# set the number of training epochs
num_epoches = 5
batch_size = 32
optimizer = get_default_model_optimizer(dl_module)
# #### 2. Creating the task
model = Task(dl_module, optimizer, 'ce', cuda_device=0)
# #### 3. Training
model.fit(tm_train_dataset,
tm_dev_dataset,
lr=2e-5,
epochs=num_epoches,
batch_size=batch_size
)
# <br>
#
# ### 4. Generating the Submission Data
# +
import numpy as np  # needed for the condition_ids array in the predictor below
import torch
from torch.utils.data import DataLoader
from ark_nlp.factory.predictor.text_match import TMPredictor
class CondTMPredictor(TMPredictor):
def __init__(
self,
module,
tokernizer,
cat2id,
condition2id
):
super(CondTMPredictor, self).__init__(module, tokernizer, cat2id)
self.condition2id = condition2id
def _convert_to_transfomer_ids(
self,
text_a,
text_b,
condition
):
input_ids = self.tokenizer.sequence_to_ids(text_a, text_b)
input_ids, input_mask, segment_ids = input_ids
features = {
'input_ids': input_ids,
'attention_mask': input_mask,
'token_type_ids': segment_ids,
'condition_ids': np.array([self.condition2id[condition]])
}
return features
def _get_input_ids(
self,
text_a,
text_b,
condition
):
if self.tokenizer.tokenizer_type == 'transfomer':
return self._convert_to_transfomer_ids(text_a, text_b, condition)
else:
raise ValueError("The tokenizer type does not exist")
def predict_one_sample(
self,
text,
condition,
topk=None,
return_label_name=True,
return_proba=False
):
if topk is None:
topk = len(self.cat2id) if len(self.cat2id) > 2 else 1
text_a, text_b = text
features = self._get_input_ids(text_a, text_b, condition)
self.module.eval()
with torch.no_grad():
inputs = self._get_module_one_sample_inputs(features)
logit = self.module(**inputs)
logit = torch.nn.functional.softmax(logit, dim=1)
probs, indices = logit.topk(topk, dim=1, sorted=True)
preds = []
probas = []
for pred_, proba_ in zip(indices.cpu().numpy()[0], probs.cpu().numpy()[0].tolist()):
if return_label_name:
pred_ = self.id2cat[pred_]
preds.append(pred_)
if return_proba:
probas.append(proba_)
if return_proba:
return list(zip(preds, probas))
return preds
def predict_batch(
self,
test_data,
batch_size=16,
shuffle=False,
return_label_name=True,
return_proba=False
):
self.inputs_cols = test_data.dataset_cols
preds = []
probas = []
self.module.eval()
generator = DataLoader(test_data, batch_size=batch_size, shuffle=shuffle)
with torch.no_grad():
for step, inputs in enumerate(generator):
inputs = self._get_module_batch_inputs(inputs)
logits = self.module(**inputs)
preds.extend(torch.max(logits, 1)[1].cpu().numpy())
if return_proba:
logits = torch.nn.functional.softmax(logits, dim=1)
probas.extend(logits.max(dim=1).values.cpu().detach().numpy())
if return_label_name:
preds = [self.id2cat[pred_] for pred_ in preds]
if return_proba:
return list(zip(preds, probas))
return preds
# -
tm_predictor_instance = CondTMPredictor(model.module, tokenizer, tm_train_dataset.cat2id, tm_train_dataset.condition2id)
# +
import pandas as pd
test_df = pd.read_json('../data/source_datasets/CHIP-STS/CHIP-STS_test.json')
submit = []
for _id, _text_a, _text_b, _condition in zip(test_df['id'], test_df['text1'], test_df['text2'], test_df['category']):
    if _condition == 'daibetes':  # correct a typo that appears in the raw test data
_condition = 'diabetes'
predict_ = tm_predictor_instance.predict_one_sample([_text_a, _text_b], _condition)[0]
submit.append({
'id': str(_id),
'text1': _text_a,
'text2': _text_b,
'label': predict_,
'category': _condition
})
# +
import json
output_path = '../data/output_datasets/CHIP-STS_test.json'
with open(output_path, 'w', encoding='utf-8') as f:
f.write(json.dumps(submit, ensure_ascii=False))
| example/CHIP-STS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Differences between interactive and production work
# Note: while standard Python / Numpy expressions for selecting and setting are intuitive and come in handy for interactive work, for production code, we (the pandas development team) recommend the optimized pandas data access methods, .at, .iat, .loc, .iloc and .ix.
#
# from:http://pandas.pydata.org/pandas-docs/stable/10min.html
import pandas as pd
import numpy as np
sample_numpy_data = np.array(np.arange(24)).reshape((6,4))
dates_index = pd.date_range('20160101', periods=6)
sample_df = pd.DataFrame(sample_numpy_data, index=dates_index, columns=list('ABCD'))
sample_df
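# The scalar accessors `.at` (label-based) and `.iat` (integer-based) recommended in the note above are the fast path when you need exactly one value; a quick sketch on the same DataFrame:

```python
import numpy as np
import pandas as pd

dates_index = pd.date_range('20160101', periods=6)
sample_df = pd.DataFrame(np.arange(24).reshape(6, 4),
                         index=dates_index, columns=list('ABCD'))

# label-based scalar access (row label, column label)
print(sample_df.at[pd.Timestamp('2016-01-02'), 'C'])   # 6
# position-based scalar access (row number, column number)
print(sample_df.iat[1, 2])                             # 6
```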
# ##### selection using column name
sample_df['C']
# ##### selection using slice
# - remember: up to, but not including second index
sample_df[1:4]
# ##### selection using date time index
# - note: last index is included
sample_df['2016-01-01': '2016-01-04']
# ### Selection by label
# documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html
#
# label-location based indexer for selection by label
sample_df.loc[dates_index[1:3]]
# ##### Selecting using multi-axis by label
sample_df.loc[:, ['A', 'B']]
# ##### Label slicing, both endpoints are included
sample_df.loc['2016-01-01':'2016-01-03',['A','B']]
# ##### Reduce number of dimensions for returned object
# - notice order of 'D' and 'B'
sample_df.loc['2016-01-03',['D','B']]
# ##### using result
sample_df.loc['2016-01-03',['D','B']][0]*4
# ##### select a scalar
sample_df.loc[dates_index[2], 'C']
# ### Selection by Position
# documentation: http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.iloc.html
#
# integer-location based indexing for selection by position
sample_numpy_data[3]
sample_df.iloc[3]
# ##### integer slices
sample_df.iloc[1:3, 2:4]
# ##### lists of integers
sample_df.iloc[[0, 1, 3], [0, 2]]
# ##### slicing rows explicitly
# implicitly selecting all columns
sample_df.iloc[1:2, :]
# ##### slicing columns explicitly
# implicitly selecting all rows
sample_df.iloc[:, 1:2]
# ### Boolean Indexing
# ##### test based upon one column's data
sample_df.C > 14
# ##### test based upon entire data set
sample_df[sample_df > 14]
# ##### isin() method
# documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html
#
# Returns a boolean Series showing whether each element in the Series is exactly contained in the passed sequence of values.
sample_df_2 = sample_df.copy()
sample_df_2['Fruits'] = ['apple', 'orange','banana','strawberry','blueberry','pineapple']
sample_df_2
# select rows where the 'Fruits' column contains either 'banana' or 'pineapple'; notice 'smoothy', which is not in the column
sample_df_2[sample_df_2['Fruits'].isin(['banana','pineapple', 'smoothy'])]
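# Negating the boolean mask with `~` selects the complement, i.e. the rows whose 'Fruits' value is *not* in the list; a small self-contained sketch:

```python
import pandas as pd

df = pd.DataFrame({'Fruits': ['apple', 'banana', 'pineapple']})
mask = df['Fruits'].isin(['banana', 'pineapple'])

# keep only the rows NOT matching the list
print(df[~mask])   # only the 'apple' row remains
```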
| basics/second/Selection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
## imports ##
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pickle as pkl
import scipy.stats
####
## global ##
dataPath='/Users/ziegler/repos/mayfly/output/timeSeries1252021.pkl'
templatePitchAngles=np.linspace(85,90,51)
templatePos=np.linspace(0,5e-2,21)
radius=0.00
nPeaks=5
keysAmp=[]
keysInd=[]
keysR=[]
keysI=[]
for i in range(nPeaks):
keysAmp.append('pAmp'+str(i))
keysInd.append('pInd'+str(i))
keysR.append('pR'+str(i))
keysI.append('pI'+str(i))
colors=['r','b','g','c','m','k']
frequencyConversion=200e6/8192
####
## definitions ##
####
with open(dataPath,'rb') as infile:
data=pkl.load(infile)
data=pd.DataFrame(data)
#rads=np.arange(0.00,0.05,0.01)
rads=[0.0]
#print(rads)
fig,axs=plt.subplots()
fig.set_facecolor('white')
for i,rad in enumerate(rads):
dataR=data[data["r"]==rad].sort_values('pa')
nPeaks=len(dataR['ind'].iloc[0])
nEntries=dataR['pa'].size
meanPow=np.zeros(nEntries)
for n in range(nEntries):
meanPow[n]=np.mean(dataR['amp'].iloc[n])
for n in range(nPeaks):
tempInds=[]
tempAmps=[]
for j in range(nEntries):
tempInds.append((dataR['ind'].iloc[j])[n])
tempAmps.append((dataR['amp'].iloc[j])[n])
plot=axs.scatter(dataR['pa'],tempInds,c=meanPow,cmap='inferno')
fig.colorbar(plot,label='Mean Peak Amplitude')
plt.title("Frequency Peaks vs Pitch Angle, R=0.0 cm")
plt.xlabel(r'$\theta$/deg')
plt.ylabel('Frequency Index')
#plt.savefig('/Users/ziegler/repos/romulus/output/frequencyPeaksVsPitchAngleR0.04.png')
#print(meanPow)
#for j in range(nEntries):
# axs[i].scatter(dataR['pa'].iloc[j]*np.ones(len(dataR['ind'].iloc[j])),
# dataR['ind'].iloc[j],c=meanPow[j]*np.ones(len(dataR['ind'].iloc[j])),cmap='inferno')
#axs[i].tick_params(axis='y',labelsize=20)
#axs[i].tick_params(axis='x',labelsize=20)
#axs[i].set_title("Signal Peaks vs Pitch Angle\n R = " +str(rad*100)+" cm",fontsize=20)
#plt.savefig("/Users/ziegler/plots/signalPeaksVsPitchAngle2152021/peaksVsPitchAngleAllRads.png")
# +
dataR=data[data["r"]==radius].sort_values('pa')
nEntries=dataR['pa'].size
dataPairs=[]
for n in range(nEntries):
dataPairs.append(list(zip(dataR['pa'].iloc[n]*np.ones(dataR['ind'].iloc[n].size),dataR['ind'].iloc[n])))
dataPairs=np.array(dataPairs)
# +
usedElements=np.zeros(dataPairs.shape[0]*dataPairs.shape[1])
pairIndexList=np.arange(0,usedElements.size,1)
#print(pairIndexList)
numberUsed=len(np.where(usedElements==0)[0])
while numberUsed>0:
# select random data point that hasn't been used
for i in range(len(pairIndexList)):
selectedPointIndex=np.random.choice(pairIndexList)
if usedElements[selectedPointIndex]==0:
usedElements[selectedPointIndex]=1;
#print(usedElements)
break
selectedPointIndex=np.unravel_index(selectedPointIndex,(dataPairs.shape[0],dataPairs.shape[1]))
numberUsed=len(np.where(usedElements==0)[0])
# start building a line
line=[]
line.append(selectedPointIndex)
# step 1: find nearest neighbor point in the adjacent rows
selectedRow=selectedPointIndex[0]
selectedFrequency=selectedPointIndex[1]
minDistRowMinus=np.min(np.sum(((dataPairs[selectedRow-1,:,:]-dataPairs[selectedRow,selectedFrequency,:])**2),axis=1))
minIndRowMinus=np.argmin(np.sum(((dataPairs[selectedRow-1,:,:]-dataPairs[selectedRow,selectedFrequency,:])**2),axis=1))
    try:
        minDistRowPlus=np.min(np.sum(((dataPairs[selectedRow+1,:,:]-dataPairs[selectedRow,selectedFrequency,:])**2),axis=1))
        minIndRowPlus=np.argmin(np.sum(((dataPairs[selectedRow+1,:,:]-dataPairs[selectedRow,selectedFrequency,:])**2),axis=1))
    except IndexError:
        # selectedRow is the last row, so there is no "plus" neighbor;
        # make this branch lose the distance comparison that follows
        minDistRowPlus=np.inf
        minIndRowPlus=-1
# add that point to the line and the list of used points
if minDistRowMinus<minDistRowPlus:
line.append([selectedRow-1,minIndRowMinus])
usedElements=usedElements.reshape(dataPairs.shape[0],dataPairs.shape[1])
usedElements[selectedRow-1,minIndRowMinus]=1
usedElements=usedElements.reshape(dataPairs.shape[0]*dataPairs.shape[1])
numberUsed=len(np.where(usedElements==0)[0])
elif minDistRowMinus>=minDistRowPlus:
line.append([selectedRow+1,minIndRowPlus])
usedElements=usedElements.reshape(dataPairs.shape[0],dataPairs.shape[1])
usedElements[selectedRow+1,minIndRowPlus]=1
usedElements=usedElements.reshape(dataPairs.shape[0]*dataPairs.shape[1])
numberUsed=len(np.where(usedElements==0)[0])
# fit a line to the two points
point1=dataPairs[line[0][0],line[0][1],:]
point2=dataPairs[line[1][0],line[1][1],:]
points=np.array([point1,point2])
fit=scipy.stats.linregress(points[:,0],points[:,1])
# -
pairList=np.arange(0,dataPairs.shape[0]*dataPairs.shape[1],1)
#print(pairList)
pairIndex=np.random.choice(pairList)
print(pairIndex)
pairList=np.delete(pairList,pairIndex)
#print(pairList)
print(np.unravel_index(pairIndex,(dataPairs.shape[0],dataPairs.shape[1])))
| analysis/plotFrequencyPeaksVsPitchAngle2172021.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (cshl-sca-2017)
# language: python
# name: cshl-sca-2017
# ---
# <small><i>The PCA section of this notebook was put together by [<NAME>](http://www.vanderplas.com). Source and license info is on [GitHub](https://github.com/jakevdp/sklearn_tutorial/).</i></small>
# # Dimensionality Reduction: Principal Component Analysis in-depth
#
# Here we'll explore **Principal Component Analysis**, which is an extremely useful linear dimensionality reduction technique.
#
# We'll start with our standard set of initial imports:
# +
from __future__ import print_function, division
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# use seaborn plotting style defaults
import seaborn as sns; sns.set()
# -
# ## Introducing Principal Component Analysis
#
# Principal Component Analysis is a very powerful unsupervised method for *dimensionality reduction* in data. It's easiest to visualize by looking at a two-dimensional dataset:
np.random.seed(1)
X = np.dot(np.random.random(size=(2, 2)), np.random.normal(size=(2, 200))).T
plt.plot(X[:, 0], X[:, 1], 'o')
plt.axis('equal');
# We can see that there is a definite trend in the data. What PCA seeks to do is to find the **Principal Axes** in the data, and explain how important those axes are in describing the data distribution:
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_)
print(pca.components_)
# To see what these numbers mean, let's view them as vectors plotted on top of the data:
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.5)
for length, vector in zip(pca.explained_variance_, pca.components_):
v = vector * 3 * np.sqrt(length)
plt.plot([0, v[0]], [0, v[1]], '-k', lw=3)
plt.axis('equal');
# Notice that one vector is longer than the other. In a sense, this tells us that that direction in the data is somehow more "important" than the other direction.
# The explained variance quantifies this measure of "importance" in direction.
#
# Another way to think of it is that the second principal component could be **completely ignored** without much loss of information! Let's see what our data look like if we only keep 95% of the variance:
clf = PCA(0.95) # keep 95% of variance
X_trans = clf.fit_transform(X)
print(X.shape)
print(X_trans.shape)
# By specifying that we want to throw away 5% of the variance, the data is now compressed to half its original size! Let's see what the data look like after this compression:
X_new = clf.inverse_transform(X_trans)
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.2)
plt.plot(X_new[:, 0], X_new[:, 1], 'ob', alpha=0.8)
plt.axis('equal');
# The light points are the original data, while the dark points are the projected version. We see that after truncating 5% of the variance of this dataset and then reprojecting it, the "most important" features of the data are maintained, and we've compressed the data by 50%!
#
# This is the sense in which "dimensionality reduction" works: if you can approximate a data set in a lower dimension, you can often have an easier time visualizing it or fitting complicated models to the data.
# ### Application of PCA to Digits
#
# The dimensionality reduction might seem a bit abstract in two dimensions, but the projection and dimensionality reduction can be extremely useful when visualizing high-dimensional data. Let's take a quick look at the application of PCA to the digits data we looked at before:
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data
y = digits.target
pca = PCA(2) # project from 64 to 2 dimensions
Xproj = pca.fit_transform(X)
print(X.shape)
print(Xproj.shape)
plt.scatter(Xproj[:, 0], Xproj[:, 1], c=y, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('tab10', 10))
plt.colorbar();
# We could also do the same plot, using Altair and Pandas:
# digits_smushed = pd.DataFrame(Xproj)
# digits_smushed['target'] = digits.target
# digits_smushed.head()
# This gives us an idea of the relationship between the digits. Essentially, we have found the optimal stretch and rotation in 64-dimensional space that allows us to see the layout of the digits, **without reference** to the labels.
# ### What do the Components Mean?
#
# PCA is a very useful dimensionality reduction algorithm, because it has a very intuitive interpretation via *eigenvectors*.
# The input data is represented as a vector: in the case of the digits, our data is
#
# $$
# x = [x_1, x_2, x_3 \cdots]
# $$
#
# but what this really means is
#
# $$
# image(x) = x_1 \cdot{\rm (pixel~1)} + x_2 \cdot{\rm (pixel~2)} + x_3 \cdot{\rm (pixel~3)} \cdots
# $$
#
# If we reduce the dimensionality in the pixel space to (say) 6, we recover only a partial image:
# +
from decompositionplots import plot_image_components
sns.set_style('white')
plot_image_components(digits.data[0])
# -
# But the pixel-wise representation is not the only choice. We can also use other *basis functions*, and write something like
#
# $$
# image(x) = {\rm mean} + x_1 \cdot{\rm (basis~1)} + x_2 \cdot{\rm (basis~2)} + x_3 \cdot{\rm (basis~3)} \cdots
# $$
#
# What PCA does is to choose optimal **basis functions** so that only a few are needed to get a reasonable approximation.
# The low-dimensional representation of our data is the coefficients of this series, and the approximate reconstruction is the result of the sum:
from decompositionplots import plot_pca_interactive
plot_pca_interactive(digits.data)
# Here we see that with only six PCA components, we recover a reasonable approximation of the input!
#
# Thus we see that PCA can be viewed from two angles. It can be viewed as **dimensionality reduction**, or it can be viewed as a form of **lossy data compression** where the loss favors noise. In this way, PCA can be used as a **filtering** process as well.
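# As a minimal sketch of that filtering idea (on a hypothetical low-rank dataset, not one from this notebook): generate data that lives near a 2-D subspace of a 10-D space, add noise, keep only the leading components, and reproject.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
# Low-rank "signal": 200 samples that live on a 2-D subspace of a 10-D space.
signal = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10))
noisy = signal + 0.5 * rng.normal(size=signal.shape)

# Keep only the leading components, then reproject back to 10-D.
pca = PCA(n_components=2).fit(noisy)
denoised = pca.inverse_transform(pca.transform(noisy))

# The discarded components carried mostly noise, so the reconstruction
# is closer to the clean signal than the noisy input was.
print(np.linalg.norm(noisy - signal) > np.linalg.norm(denoised - signal))
```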
# ### Choosing the Number of Components
#
# But how much information have we thrown away? We can figure this out by looking at the **explained variance** as a function of the components:
sns.set()
pca = PCA().fit(X)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
# Here we see that our two-dimensional projection loses a lot of information (as measured by the explained variance) and that we'd need about 20 components to retain 90% of the variance. Looking at this plot for a high-dimensional dataset can help you understand the level of redundancy present in multiple observations.
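# We can read that threshold off programmatically; a small sketch (the component count printed is whatever the digits data happens to give):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X = load_digits().data
cum = np.cumsum(PCA().fit(X).explained_variance_ratio_)
# Smallest number of components whose cumulative variance reaches 90%.
n90 = int(np.argmax(cum >= 0.90)) + 1
print(n90, cum[n90 - 1])

# Equivalently, pass the variance fraction straight to PCA:
pca90 = PCA(n_components=0.90).fit(X)
print(pca90.n_components_)
```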
# ## Other Dimensionality Reduction Routines
#
# Note that scikit-learn contains many other unsupervised dimensionality reduction routines; some you might wish to try are:
#
# - [sklearn.decomposition.PCA](http://scikit-learn.org/0.13/modules/generated/sklearn.decomposition.PCA.html):
# Principal Component Analysis
# - [sklearn.decomposition.RandomizedPCA](http://scikit-learn.org/0.13/modules/generated/sklearn.decomposition.RandomizedPCA.html):
# extremely fast approximate PCA implementation based on a randomized algorithm (in recent scikit-learn versions, use `PCA(svd_solver='randomized')` instead)
# - [sklearn.decomposition.SparsePCA](http://scikit-learn.org/0.13/modules/generated/sklearn.decomposition.SparsePCA.html):
# PCA variant including L1 penalty for sparsity
# - [sklearn.decomposition.FastICA](http://scikit-learn.org/0.13/modules/generated/sklearn.decomposition.FastICA.html):
# Independent Component Analysis
# - [sklearn.decomposition.NMF](http://scikit-learn.org/0.13/modules/generated/sklearn.decomposition.NMF.html):
# non-negative matrix factorization
# - [sklearn.manifold.LocallyLinearEmbedding](http://scikit-learn.org/0.13/modules/generated/sklearn.manifold.LocallyLinearEmbedding.html):
# nonlinear manifold learning technique based on local neighborhood geometry
# - [sklearn.manifold.IsoMap](http://scikit-learn.org/0.13/modules/generated/sklearn.manifold.Isomap.html):
# nonlinear manifold learning technique based on a sparse graph algorithm
#
# Each of these has its own strengths & weaknesses, and areas of application. You can read about them on the [scikit-learn website](https://scikit-learn.org).
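# As a quick sketch of trying one of these alternatives on the digits data (Isomap here; the other routines above follow the same `fit_transform` pattern, and the neighbor count is our arbitrary choice):

```python
from sklearn.datasets import load_digits
from sklearn.manifold import Isomap

digits = load_digits()
# Nonlinear reduction: geodesic distances on a k-nearest-neighbor graph.
iso = Isomap(n_components=2, n_neighbors=10)
X_iso = iso.fit_transform(digits.data)
print(X_iso.shape)  # (1797, 2)
```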
# # Independent component analysis
#
# Here we'll learn about independent component analysis (ICA), a matrix decomposition method that's an alternative to PCA.
# ## Independent Component Analysis (ICA)
#
# ICA was originally created for the "cocktail party problem" for audio processing. It's an incredible feat that our brains are able to filter out all these different sources of audio, automatically!
#
# 
# (I really like how smug that guy looks - it's really over the top)
# [Source](http://www.telegraph.co.uk/news/science/science-news/9913518/Cocktail-party-problem-explained-how-the-brain-filters-out-unwanted-voices.html)
#
# ### Cocktail party problem
#
# Given multiple sources of sound (people talking, the band playing, glasses clinking), how do you distinguish independent sources of sound? Imagine at a cocktail party you have multiple microphones stationed throughout, and you get to hear all of these different sounds.
#
# 
#
# [Source](https://onionesquereality.wordpress.com/tag/cocktail-party-problem/)
#
#
# ### What if you applied PCA to the cocktail party problem?
#
# Example adapted from the excellent [scikit-learn documentation](http://scikit-learn.org/stable/auto_examples/decomposition/plot_ica_blind_source_separation.html).
# +
import fig_code
fig_code.cocktail_party()
# -
# ### Discussion
#
# 1. What do you get when you apply PCA to the cocktail party problem?
# 2. How would you describe the difference between maximizing variance via orthogonal features (PCA) and finding independent signals (ICA)?
#
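# A minimal standalone sketch of the same idea with synthetic signals (two hypothetical sources, not the audio used in `fig_code`): mix a square wave and a sawtooth, then ask FastICA to un-mix them.

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
# Two independent, non-Gaussian sources ...
S = np.c_[np.sign(np.sin(3 * t)),  # square wave
          np.mod(t, 1.0)]          # sawtooth
# ... recorded by two "microphones", each hearing a different mixture.
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
X = S @ A.T

# ICA recovers the sources up to sign, scale, and ordering;
# PCA on the same X would only return orthogonal variance directions.
S_ica = FastICA(n_components=2, random_state=0).fit_transform(X)
print(S_ica.shape)  # (2000, 2)
```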
# ## Non-negative matrix factorization
#
# NMF is like ICA in that it tries to learn the parts of the data that make up the whole, by looking at the reconstructability of the matrix. This was originally published by <NAME>, ["Learning the parts of objects by non-negative matrix factorization"](http://www.columbia.edu/~jwp2128/Teaching/E4903/papers/nmf_nature.pdf), and applied to image data below.
#
# 
#
# * VQ here is vector quantization, yet another dimensionality reduction method; it is similar in spirit to K-means in that each sample is represented by a single learned prototype rather than by a combination of parts
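# A small sketch of NMF on the digits images (a stand-in for the faces in the figure above; the dataset and component count are our choices, not the paper's):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import NMF

X = load_digits().data  # pixel intensities, non-negative as NMF requires
nmf = NMF(n_components=6, init='nndsvda', random_state=0, max_iter=500)
W = nmf.fit_transform(X)  # how much of each part is in each image
H = nmf.components_       # the non-negative "parts" (reshape each row to 8x8)
print(W.shape, H.shape)   # (1797, 6) (6, 64)
```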
# ## Back to biology!
#
# Enough images and signal processing ... where is the RNA!??!? Let's apply these algorithms to some biological datasets.
#
# We'll use the 300-cell dataset (6 clusters, 50 cells each) data from the Macosko2015 paper.
#
# Rather than plotting each cell in each component, we'll look at the mean (or median) contribution of each component to the cell types.
from decompositionplots import explore_smushers
explore_smushers()
# ### Discussion
#
# Discuss the questions below while you play with the sliders.
#
# 1. Is the first component of each algorithm always the largest-magnitude component?
# 2. Which algorithm(s) tend to place an individual cell type in each component?
# 3. Which algorithm(s) seem to be driven by the "loudest" or largest changes in gene expression across all cells, rather than the unique contribution of each cell type?
# 4. How does the lowrank data affect the decomposition?
# 5. How does using the mean or median affect your interpretation?
#
# 1. How does the number of components influence the decomposition by PCA? (indicate all that apply)
# - You get to see more distinct signals in the data
# - It changes the components
# - It doesn't affect the first few components
# - You get to see more of the "special cases" in the variation of the data
# 2. How does the number of components influence the decomposition by ICA? (indicate all that apply)
# - You get to see more distinct signals in the data
# - It changes the components
# - It doesn't affect the first few components
# - You get to see more of the "special cases" in the variation of the data
# 2. How does the number of components influence the decomposition by NMF? (indicate all that apply)
# - You get to see more distinct signals in the data
# - It changes the components
# - It doesn't affect the first few components
# - You get to see more of the "special cases" in the variation of the data
# 3. What does the first component of PCA represent? (Check all that apply)
# - The features that change the most across the data
# - One distinct subset of features that appears independently of all other features
# - The axis of the "loudest" features in the dataset
# - A particular set of genes that appear together and not with other features
# 3. What does the first component of ICA represent? (Check all that apply)
# - The features that change the most across the data
# - One distinct subset of features that appears independently of all other features
# - The axis of the "loudest" features in the dataset
# - A particular set of genes that appear together and not with other features
# 3. What does the first component of NMF represent? (Check all that apply)
# - The features that change the most across the data
# - One distinct subset of features that appears independently of all other features
# - The axis of the "loudest" features in the dataset
# - A particular set of genes that appear together and not with other features
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Tce3stUlHN0L"
# ##### Copyright 2020 The TensorFlow Authors.
# + cellView="form" id="tuOe1ymfHZPu"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="drGgRRpWf2Qm"
# # Working with sparse tensors
# + [markdown] id="MfBg1C5NB3X0"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/guide/sparse_tensor"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/sparse_tensor.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/sparse_tensor.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/sparse_tensor.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] id="UIiXFIS4fj1m"
# When working with tensors that contain a lot of zero values, it is important to store them in a space- and time-efficient manner. Sparse tensors enable efficient storage and processing of tensors that contain a lot of zero values. Sparse tensors are used extensively in encoding schemes like [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) as part of data pre-processing in NLP applications and for pre-processing images with a lot of dark pixels in computer vision applications.
# + [markdown] id="A8XXQW3ENU5m"
# ## Sparse tensors in TensorFlow
#
# TensorFlow represents sparse tensors through the `tf.SparseTensor` object. Currently, sparse tensors in TensorFlow are encoded using the coordinate list (COO) format. This encoding format is optimized for hyper-sparse matrices such as embeddings.
#
# The COO encoding for sparse tensors is comprised of:
#
# * `values`: A 1D tensor with shape `[N]` containing all nonzero values.
# * `indices`: A 2D tensor with shape `[N, rank]`, containing the indices of the nonzero values.
# * `dense_shape`: A 1D tensor with shape `[rank]`, specifying the shape of the tensor.
#
# A ***nonzero*** value in the context of a `tf.SparseTensor` is a value that's not explicitly encoded. It is possible to explicitly include zero values in the `values` of a COO sparse matrix, but these "explicit zeros" are generally not included when referring to nonzero values in a sparse tensor.
#
# Note: `tf.SparseTensor` does not require that indices/values be in any particular order, but several ops assume that they're in row-major order. Use `tf.sparse.reorder` to create a copy of the sparse tensor that is sorted in the canonical row-major order.
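# To make the COO layout concrete, here is a framework-free sketch that densifies the same three components with NumPy (hypothetical values; this mirrors what `tf.sparse.to_dense` does):

```python
import numpy as np

# The three COO components of a sparse 3x10 tensor.
values = np.array([10, 20])
indices = np.array([[0, 3], [2, 4]])
dense_shape = (3, 10)

# Densify: start from zeros and scatter each value at its index.
dense = np.zeros(dense_shape, dtype=values.dtype)
dense[tuple(indices.T)] = values
print(dense[0, 3], dense[2, 4], dense.sum())  # 10 20 30
```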
# + [markdown] id="6Aq7ruwlyz79"
# ## Creating a `tf.SparseTensor`
#
# Construct sparse tensors by directly specifying their `values`, `indices`, and `dense_shape`.
# + id="SI2Mv3tihcmY"
import tensorflow as tf
# + id="vqQKGva4zSCs"
st1 = tf.SparseTensor(indices=[[0, 3], [2, 4]],
values=[10, 20],
dense_shape=[3, 10])
# + [markdown] id="l9eJeh31fWyr"
# <img src="images/sparse_tensor.png">
# + [markdown] id="-M3fMTFL0hXa"
# When you use the `print()` function to print a sparse tensor, it shows the contents of the three component tensors:
# + id="3oHWtmsBMLAI"
print(st1)
# + [markdown] id="qqePKJG6MNWk"
# It is easier to understand the contents of a sparse tensor if the nonzero `values` are aligned with their corresponding `indices`. Define a helper function to pretty-print sparse tensors such that each nonzero value is shown on its own line.
# + id="R_xFYuOo1ZE_"
def pprint_sparse_tensor(st):
s = "<SparseTensor shape=%s \n values={" % (st.dense_shape.numpy().tolist(),)
for (index, value) in zip(st.indices, st.values):
s += f"\n %s: %s" % (index.numpy().tolist(), value.numpy().tolist())
return s + "}>"
# + id="be4Dyiqt0fEH"
print(pprint_sparse_tensor(st1))
# + [markdown] id="3FBt8qk_zmz5"
# You can also construct sparse tensors from dense tensors by using `tf.sparse.from_dense`, and convert them back to dense tensors by using `tf.sparse.to_dense`.
# + id="cYwuCuNMf0Fu"
st2 = tf.sparse.from_dense([[1, 0, 0, 8], [0, 0, 0, 0], [0, 0, 3, 0]])
print(pprint_sparse_tensor(st2))
# + id="eFVPrwNPzyZw"
st3 = tf.sparse.to_dense(st2)
print(st3)
# + [markdown] id="GeuvyL_Z0Mwh"
# ## Manipulating sparse tensors
#
# Use the utilities in the `tf.sparse` package to manipulate sparse tensors. Ops like `tf.math.add` that you can use for arithmetic manipulation of dense tensors do not work with sparse tensors.
# + [markdown] id="LMYW4U4Qavvd"
# Add sparse tensors of the same shape by using `tf.sparse.add`.
# + id="vJwuSQIjayiN"
st_a = tf.SparseTensor(indices=[[0, 2], [3, 4]],
values=[31, 2],
dense_shape=[4, 10])
st_b = tf.SparseTensor(indices=[[0, 2], [7, 0]],
values=[56, 38],
dense_shape=[4, 10])
st_sum = tf.sparse.add(st_a, st_b)
print(pprint_sparse_tensor(st_sum))
# + [markdown] id="ls8_aQvnqZMj"
# Use `tf.sparse.sparse_dense_matmul` to multiply sparse tensors with dense matrices.
# + id="S0tWRLiE04uL"
st_c = tf.SparseTensor(indices=([0, 1], [1, 0], [1, 1]),
values=[13, 15, 17],
dense_shape=(2,2))
mb = tf.constant([[4], [6]])
product = tf.sparse.sparse_dense_matmul(st_c, mb)
print(product)
# + [markdown] id="9hxClYvfceZA"
# Put sparse tensors together by using `tf.sparse.concat` and take them apart by using `tf.sparse.slice`.
#
# + id="cp4NEW_5yLEY"
sparse_pattern_A = tf.SparseTensor(indices = [[2,4], [3,3], [3,4], [4,3], [4,4], [5,4]],
values = [1,1,1,1,1,1],
dense_shape = [8,5])
sparse_pattern_B = tf.SparseTensor(indices = [[0,2], [1,1], [1,3], [2,0], [2,4], [2,5], [3,5],
[4,5], [5,0], [5,4], [5,5], [6,1], [6,3], [7,2]],
values = [1,1,1,1,1,1,1,1,1,1,1,1,1,1],
dense_shape = [8,6])
sparse_pattern_C = tf.SparseTensor(indices = [[3,0], [4,0]],
values = [1,1],
dense_shape = [8,6])
sparse_patterns_list = [sparse_pattern_A, sparse_pattern_B, sparse_pattern_C]
sparse_pattern = tf.sparse.concat(axis=1, sp_inputs=sparse_patterns_list)
print(tf.sparse.to_dense(sparse_pattern))
# + id="XmE87XVPWPmc"
sparse_slice_A = tf.sparse.slice(sparse_pattern_A, start = [0,0], size = [8,5])
sparse_slice_B = tf.sparse.slice(sparse_pattern_B, start = [0,5], size = [8,6])
sparse_slice_C = tf.sparse.slice(sparse_pattern_C, start = [0,10], size = [8,6])
print(tf.sparse.to_dense(sparse_slice_A))
print(tf.sparse.to_dense(sparse_slice_B))
print(tf.sparse.to_dense(sparse_slice_C))
# + [markdown] id="37SOx7wB1eSX"
# If you're using TensorFlow 2.4 or above, use `tf.sparse.map_values` for elementwise operations on nonzero values in sparse tensors.
# + id="daZaPkkA1d09"
st2_plus_5 = tf.sparse.map_values(tf.add, st2, 5)
print(tf.sparse.to_dense(st2_plus_5))
# + [markdown] id="3zkRcxeo2Elw"
# Note that only the nonzero values were modified – the zero values stay zero.
#
# Equivalently, you can follow the design pattern below for earlier versions of TensorFlow:
# + id="bFSNOOqC0ySb"
st2_plus_5 = tf.SparseTensor(
st2.indices,
st2.values + 5,
st2.dense_shape)
print(tf.sparse.to_dense(st2_plus_5))
# + [markdown] id="GFhO2ZZ53ga1"
# ## Using `tf.SparseTensor` with other TensorFlow APIs
#
# Sparse tensors work transparently with these TensorFlow APIs:
#
# * `tf.keras`
# * `tf.data`
# * `tf.Train.Example` protobuf
# * `tf.function`
# * `tf.while_loop`
# * `tf.cond`
# * `tf.identity`
# * `tf.cast`
# * `tf.print`
# * `tf.saved_model`
# * `tf.io.serialize_sparse`
# * `tf.io.serialize_many_sparse`
# * `tf.io.deserialize_many_sparse`
# * `tf.math.abs`
# * `tf.math.negative`
# * `tf.math.sign`
# * `tf.math.square`
# * `tf.math.sqrt`
# * `tf.math.erf`
# * `tf.math.tanh`
# * `tf.math.bessel_i0e`
# * `tf.math.bessel_i1e`
#
# Examples are shown below for a few of the above APIs.
# + [markdown] id="6uNUl7EgSYGC"
# ### `tf.keras`
#
# The `tf.keras` API natively supports sparse tensors without any expensive casting or conversion ops. The Keras API lets you pass sparse tensors as inputs to a Keras model. Set `sparse=True` when calling `tf.keras.Input` or `tf.keras.layers.InputLayer`. You can pass sparse tensors between Keras layers, and also have Keras models return them as outputs. If you use sparse tensors in `tf.keras.layers.Dense` layers in your model, they will output dense tensors.
#
# The example below shows you how to pass a sparse tensor as an input to a Keras model.
# + id="E8za5DK8vfo7"
x = tf.keras.Input(shape=(4,), sparse=True)
y = tf.keras.layers.Dense(4)(x)
model = tf.keras.Model(x, y)
sparse_data = tf.SparseTensor(
indices = [(0,0),(0,1),(0,2),
(4,3),(5,0),(5,1)],
values = [1,1,1,1,1,1],
dense_shape = (6,4)
)
model(sparse_data)
model.predict(sparse_data)
# + [markdown] id="ZtVYmr7dt0-x"
# ### `tf.data`
#
# The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. Its core data structure is `tf.data.Dataset`, which represents a sequence of elements in which each element consists of one or more components.
#
# #### Building datasets with sparse tensors
#
# Build datasets from sparse tensors using the same methods that are used to build them from `tf.Tensor`s or NumPy arrays, such as `tf.data.Dataset.from_tensor_slices`. This op preserves the sparsity (or sparse nature) of the data.
# + id="3y9tiwuZ5oTD"
dataset = tf.data.Dataset.from_tensor_slices(sparse_data)
for element in dataset:
print(pprint_sparse_tensor(element))
# + [markdown] id="hFaY5Org59qk"
# #### Batching and unbatching datasets with sparse tensors
#
# You can batch (combine consecutive elements into a single element) and unbatch datasets with sparse tensors using the `Dataset.batch` and `Dataset.unbatch` methods respectively.
# + id="WkKE0VY66Ii2"
batched_dataset = dataset.batch(2)
for element in batched_dataset:
print (pprint_sparse_tensor(element))
# + id="ikZzPxl56bx1"
unbatched_dataset = batched_dataset.unbatch()
for element in unbatched_dataset:
print (pprint_sparse_tensor(element))
# + [markdown] id="6ywfpD_EIMd3"
# You can also use `tf.data.experimental.dense_to_sparse_batch` to batch dataset elements of varying shapes into sparse tensors.
# + [markdown] id="oB8QKh7p6ltl"
# #### Transforming Datasets with sparse tensors
#
# Transform and create sparse tensors in Datasets using `Dataset.map`.
# + id="E5lhicwef7Ah"
transform_dataset = dataset.map(lambda x: x*2)
for i in transform_dataset:
print(pprint_sparse_tensor(i))
# + [markdown] id="DBfQvIVutp65"
# ### tf.train.Example
#
# `tf.train.Example` is a standard protobuf encoding for TensorFlow data. When using sparse tensors with `tf.train.Example`, you can:
#
# * Read variable-length data into a `tf.SparseTensor` using `tf.io.VarLenFeature`. However, you should consider using `tf.io.RaggedFeature` instead.
#
# * Read arbitrary sparse data into a `tf.SparseTensor` using `tf.io.SparseFeature`, which uses three separate feature keys to store the `indices`, `values`, and `dense_shape`.
# + [markdown] id="Pir2Xt3nSe-4"
# ### `tf.function`
#
# The `tf.function` decorator precomputes TensorFlow graphs for Python functions, which can substantially improve the performance of your TensorFlow code. Sparse tensors work transparently with both `tf.function` and [concrete functions](https://www.tensorflow.org/guide/function#obtaining_concrete_functions).
# + id="6jXDueTOSeYO"
@tf.function
def f(x,y):
return tf.sparse.sparse_dense_matmul(x,y)
a = tf.SparseTensor(indices=[[0, 3], [2, 4]],
values=[15, 25],
dense_shape=[3, 10])
b = tf.sparse.to_dense(tf.sparse.transpose(a))
c = f(a,b)
print(c)
# + [markdown] id="YPe5uC_X7XjZ"
# ## Distinguishing missing values from zero values
#
# Most ops on `tf.SparseTensor`s treat missing values and explicit zero values identically. This is by design — a `tf.SparseTensor` is supposed to act just like a dense tensor.
#
# However, there are a few cases where it can be useful to distinguish zero values from missing values. In particular, this allows for one way to encode missing/unknown data in your training data. For example, consider a use case where you have a tensor of scores (that can have any floating point value from -Inf to +Inf), with some missing scores. You can encode this tensor using a sparse tensor where the explicit zeros are known zero scores but the implicit zero values actually represent missing data and not zero.
#
# Note: This is generally not the intended usage of `tf.SparseTensor`s, and you might also want to consider other techniques for encoding this, such as using a separate mask tensor that identifies the locations of known/unknown values. However, exercise caution while using this approach, since most sparse operations will treat explicit and implicit zero values identically.
# + [markdown] id="tZ17F9e3ZJDS"
# Note that some ops like `tf.sparse.reduce_max` do not treat missing values as if they were zero. For example, when you run the code block below, the expected output is `0`. However, because of this exception, the output is `-3`.
# + id="kcNBVVtBZav_"
print(tf.sparse.reduce_max(tf.sparse.from_dense([-5, 0, -3])))
# + [markdown] id="zhzWLW-bMfI5"
# In contrast, when you apply `tf.math.reduce_max` to a dense tensor, the output is 0 as expected.
# + id="7Xy-g3VDNK9d"
print(tf.math.reduce_max([-5, 0, -3]))
# + [markdown] id="uK3U8l0kNL37"
# ## Further reading and resources
#
# * Refer to the [tensor guide](https://www.tensorflow.org/guide/tensor) to learn about tensors.
# * Read the [ragged tensor guide](https://www.tensorflow.org/guide/ragged_tensor) to learn how to work with ragged tensors, a type of tensor that lets you work with non-uniform data.
# * Check out this object detection model in the [TensorFlow Model Garden](https://github.com/tensorflow/models) that uses sparse tensors in a [`tf.Example` data decoder](https://github.com/tensorflow/models/blob/9139a7b90112562aec1d7e328593681bd410e1e7/research/object_detection/data_decoders/tf_example_decoder.py).
#
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <img src='./img/nsidc_logo.png'/>
#
# # Data Discovery and Access via **earthdata** library
#
#
# **Credits**
# * Notebook by: <NAME> and <NAME>
# * Source material: [earthdata demo notebook](https://github.com/nsidc/earthdata) by <NAME>
#
# ## Objective
#
# * Use programmatic data access to discover and access NASA DAAC data using the **earthdata** library.
#
# ---
#
# ## Motivation and Background
# TL;DR **earthdata** uses NASA APIs to search, preview and access NASA datasets on-prem and in the cloud (with 4 lines of Python!).
# ___
#
# There are many ways to access NASA datasets. We can use the Earthdata Search portal. We can use DAAC-specific portals or tools.
# We can use Open Altimetry. These web portals are powerful, but they are not designed for programmatic access and reproducible workflows.
# This is extremely important in the age of the cloud and reproducible open science.
#
# The good news is that NASA also exposes APIs that allows us to search, transform and access data in a programmatic way.
# There are already some very useful client libraries for these APIs:
#
# * icepyx
# * python-cmr
# * eo-metadata-tools
# * harmony-py
# * Hyrax (OpenDAP)
# * cmr-stac
# * others
#
# Each of these libraries has amazing features and some similarities.
#
# In this context, **earthdata** aims to be a simple library that handles the important parts of the metadata, so we can access or download data without having to worry about whether a given dataset is on-premises (on a DAAC server) or in the cloud. **earthdata** is a work in progress and improving often. You are encouraged to contribute to this [open-source library](https://github.com/nsidc/earthdata).
#
# Some important strengths of earthdata library:
# * Discovery and access to on prem and cloud-hosted data
# * Access to data across all of NASA DAACs.
# * Easy handling of S3 credentials for direct access to cloud-hosted data.
#
# ## Key Steps for Programmatic Data Access
#
# There are a few key steps for accessing data from the NASA DAAC APIs:
# 1. Authenticate with NASA Earthdata Login (and for cloud-hosted data with AWS access keys and token).
# 2. Query CMR to find data using filters, such as spatial extent and temporal range.
# 3. Order and download your data by interacting with DAAC APIs.
#
#
# We'll go through each of these steps during this tutorial, at the end summarizing how `earthdata` streamlines this process into a minimal number of lines of code.
# ___
# ### **Step 0. Import classes**
# +
# Import classes from earthdata
from earthdata import Auth, DataCollections, DataGranules, Store
# + [markdown] tags=[]
# ### **Step 1. Earthdata login**
#
# To access data using the **earthdata** library it is necessary to log into [Earthdata Login](https://urs.earthdata.nasa.gov/). To do this, enter your NASA Earthdata credentials in the next step after executing the following code cell.
#
# **Note**: If you don't have NASA Earthdata credentials you have to register first at the link above. You don't need to be a NASA employee to register with NASA Earthdata! Note that if you did not enter your Earthdata Login username and email into the form in the pre-Hackweek email, you will not be on the ICESat-2 cloud data early access list and you will not have access to ICESat-2 data in the cloud. You will still have access to all publicly available data sets.
#
# +
#Entering our Earthdata Login credentials.
auth = Auth().login(strategy='netrc')
if auth.authenticated is False:
auth = Auth().login(strategy='interactive')
# + [markdown] tags=[]
# ### **Step 2. Query the Common Metadata Repository (CMR)**
#
# #### Query CMR for Data Collections
#
# The DataCollection class can query CMR for any collection (collection = data set) using all of CMR's Query parameters and has built-in accessors for the common ones.
# This makes it ideal for one liners and easier notation.
#
# This means we can narrow our search in CMR by filtering on keyword, temporal range, area of interest, and data provider, e.g.:
# - temporal("2020-03-01", "2020-03-30")
# - keyword('land ice')
# - bounding_box(-134.7,58.9,-133.9,59.2)
# - provider("NSIDC_ECS")
#
# -
# We're going to go through a couple of examples of querying CMR and accessing data - the first for accessing on prem data and the second for accessing cloud-hosted data.
#
# The first thing we'll do is set up our query object.
# + tags=[]
Query = DataCollections().keyword('land ice').bounding_box(-134.7,58.9,-133.9,59.2).provider("NSIDC_ECS")
# Query = DataCollections().keyword('land ice').bounding_box(-134.7,58.9,-133.9,59.2).daac("NSIDC")
# Query = DataCollections().keyword('land ice').bounding_box(-134.7,58.9,-133.9,59.2).data_center("NSIDC")
print(f'Collections found: {Query.hits()}')
# -
#
# Then we'll create a collections object from our query.
# + tags=["hide-output"]
collections = Query.get(10)
# Inspect 1st result.
print(collections[0:1])
# -
# To reduce the number of metadata fields displayed, we can select which fields to print when creating our collections object.
# + tags=["hide-output"]
collections = Query.fields(['ShortName','Abstract']).get(5)
# Inspect 5 results printing just the ShortName and Abstract
print(collections[0:5])
# -
# The results from DataCollections are enhanced python dict objects. We can select which metadata fields from CMR to display.
# The concept ID is an important parameter in CMR. It's a unique identifier for a data collection (collection = data set). We'll use the concept ID when querying for data granules (granules = files) below.
collections[0]["meta.concept-id"]
collections[0]["umm.ShortName"]
# + tags=["hide-output"]
collections[0]["umm.RelatedUrls"]
# -
# #### Query CMR for Data Granules
#
# The DataGranules class provides similar functionality to the collections class. As mentioned above, concept IDs are unique identifiers for data sets (collections). To query for granules from the exact data set and version in which you are interested, query granules using the concept ID.
# You can also search for data granules using a short name, but that could (and more likely will) return different versions of the same data granules. Even when specifying both short name and version number, a query won't distinguish between on-prem and cloud-hosted granules.
#
# In this example we're querying for data granules from ICESat-2 [ATL06](https://nsidc.org/data/ATL06) version `005` dataset.
# +
# Generally speaking we won't need the auth instance for *queries* to collections and granules, unless the data set is under restricted access (like NSIDC_CPRD).
Query = DataGranules().concept_id('C2144439155-NSIDC_ECS').bounding_box(-134.7,58.9,-133.9,59.2).temporal("2020-03-01", "2020-03-30")
print(f'Granules found: {Query.hits()}')
# + tags=["hide-output"]
granules = Query.get()
print(granules[0:4])
# -
# #### Pretty printing data granules
#
# Since we are in a notebook, we can take advantage of the built-in function `display` to see a more user-friendly version of the granules.
# This will render the browse image for the granule if available, and eventually it will have a representation similar to the one in the Earthdata search portal.
# printing 2 granules using display
[display(granule) for granule in granules[0:2]]
# ### **Step 3. Accessing the data**
#
# #### On-prem access 📡
#
# DAAC hosted data
# %%time
# Accessing the data on-prem means downloading the files if we are in a local environment, or "uploading" them if we are in the cloud.
access = Store(auth)
files = access.get(granules[1:2], local_path = "/tmp/demo-atl06")
# In a terminal, "ls /tmp" to see where the files are going.
# #### Cloud access ☁️
#
# Same API, just a different place
#
# The cloud is not something magical, but having infrastructure on-demand is quite handy to have for many scientific workflows, especially if the data already lives in "the cloud".
# As for NASA, data migration started in 2020 and will continue into the foreseeable future. Not all, but most of NASA's data will be available in the Amazon Web Services Simple Storage Service, or AWS S3.
#
# To work with this data, the first thing we need to do is get the proper credentials for accessing data in their S3 buckets. These credentials are issued on a per-DAAC basis and last only one hour. In the near future the Auth class will keep track of this and regenerate the credentials as needed.
#
# With `earthdata` a researcher can get the files regardless of whether they are on-prem or cloud-based, with the same API call. An important consideration, though, is that if we want direct access to data in the cloud, we must run the code in the cloud: some S3 buckets are configured to allow direct access (s3:// links) only if the requester is in the same region, `us-west-2`.
#
# +
Query = DataCollections(auth).keyword('land ice').bounding_box(-134.7,58.9,-133.9,59.2).provider("NSIDC_CPRD")
print(f'Collections found: {Query.hits()}')
# -
# Oh no! What!? Zero hits? :(
#
# The 'hits' method above reports the number of query hits, but only for publicly available data sets.
# Because cloud-hosted ICESat-2 data are not yet publicly available, CMR returns 0 hits when we filter DataCollections by provider `NSIDC_CPRD`.
# For now we need an alternative way of seeing how many cloud-hosted data sets are available at NSIDC; this is only temporary until cloud-hosted ICESat-2 data become publicly available. We can create a collections object (we're going to want one of
# these soon anyhow) and print the `len()` of the collections object to see the true number of hits.
#
# Create a collections object
# + tags=["skip-execution"]
# We can create a collections object from our query.
collections = Query.fields(['ShortName','Abstract']).get()
print(len(collections))
# + tags=["hide-output", "skip-execution"]
# Inspect the first 5 results.
print(collections[0:5])
# + tags=["skip-execution"]
Query = DataGranules(auth).concept_id("C2153572614-NSIDC_CPRD").bounding_box(-134.7,58.9,-133.9,59.2).temporal("2020-03-01", "2020-03-30")
print(f"Granule hits: {Query.hits()}")
cloud_granules = Query.get(4)
print(len(cloud_granules))
# is this a cloud hosted data granule?
cloud_granules[0].cloud_hosted
# + tags=["skip-execution"]
# %%time
# If we get an error here, it is most likely because we are running this code outside the us-west-2 region.
try:
files = access.get(cloud_granules[0:2], "/tmp/demo-NSIDC_CPRD/")
except Exception as e:
print("If we are here, maybe we are not in us-west-2 or the collection is not accessible:", e)
# -
# ## Recap
#
# ```python
# from earthdata import Auth, DataGranules, DataCollections, Store
# auth = Auth().login()
# access = Store(auth)
#
# Query = DataGranules(auth).concept_id("C2144439155-NSIDC_ECS").bounding_box(-134.7,58.9,-133.9,59.2).temporal("2020-03-01", "2020-03-30")
# granules = Query.get()
# files = access.get(granules)
# ```
#
# **Wait, we said 4 lines of Python**
#
# ```python
# from earthdata import Auth, DataGranules, Store
# auth = Auth().login()
# granules = DataGranules(auth).concept_id("C2144439155-NSIDC_ECS").bounding_box(-134.7,58.9,-133.9,59.2).temporal("2020-03-01", "2020-03-30").get_all()
# files = Store(auth).get(granules, '/tmp')
# ```
# The Demo notebook in the [earthdata library GitHub repo](https://github.com/nsidc/earthdata) showcases much more of earthdata's capabilities, including many handy methods for querying CMR for collections and granules. Please take a look on your own when you are ready to start using the earthdata library. You are invited to contribute!
# Data provider ID cheat sheet.
#
# <img src='./img/data_provider_cheat_sheet.png'/>
# ### Related links
#
# **CMR** API documentation: https://cmr.earthdata.nasa.gov/search/site/docs/search/api.html
#
# **EDL** API documentation: https://urs.earthdata.nasa.gov/
#
# NASA OpenScapes: https://nasa-openscapes.github.io/earthdata-cloud-cookbook/
#
# NSIDC: https://nsidc.org
| book/tutorials/data_access/data_access_2_earthdata.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
import os
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import psutil
from IPython.core.display import display, HTML
pd.options.display.max_columns = None
display(HTML("<style>.container { width:100% !important; }</style>"))
# # Getting started
#
# Below you can see a basic data loader from the sample data file. Copy this notebook and start analyzing! Not sure where to start with Jupyter and Pandas? <NAME>'s [First Python Notebook](http://www.firstpythonnotebook.org/) is a great introduction.
df = pd.read_csv('data/hacknight_ticket_sample_data_2015.csv', low_memory=False, parse_dates=['issue_date', 'ticket_queue_date'])
df.info()
df.head()
df['vehicle_make'].unique()
df['vehicle_make'].value_counts()
df.columns
sns.distplot(df['fine_level2_amount'])
sns.distplot(df['total_payments'])
officer = df['officer'].value_counts()
officer
notice_lvl = df['notice_level'].unique()
notice_lvl
viol_code = df['violation_code'].unique()
viol_code
viols = df['violation_description'].unique()
viols
len(viols)
(2970733-2926441)/(2970733+2926441)
| start-here.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Let's assume we have one univariate random variable **X**.
#
# ### PDF (probability density function)
#
# - If **X** is a continuous random variable, then we can define its PDF: $f_X(x)$
#
# - The probability of **X** falling within a particular range of values can be given by the integral of __X__’s PDF over that range, that is:
# $$
# P(a \leq X \leq b) = \int_{a}^{b}f_X(x)\,dx
# $$
#
# ### PMF (probability mass function)
#
# - If **X** is a discrete random variable, then we can define its PMF: $p_X(x)$
#
# - PMF is a function that gives the probability that a [discrete random variable](https://en.wikipedia.org/wiki/Random_variable#Discrete_random_variable) is exactly equal to some value. The probability mass function is often the primary means of defining a [discrete probability distribution](https://en.wikipedia.org/wiki/Probability_distribution#Discrete_probability_distribution), and such functions exist for either [scalar](https://en.wikipedia.org/wiki/Variable_(computer_science)) or [multivariate random variables](https://en.wikipedia.org/wiki/Multivariate_random_variable) whose domain is discrete.
#
# ### CDF (cumulative distribution function)
#
# - Whether **X** is of continuous or discrete type, we can define its CDF: $F_X(x)$
#
# - Definition of CDF:
#
# 1. For the continuous type: $F_X(x) = P(X \leq x) = \int_{-\infty}^{x}f_X(t)\,dt$
# 2. For the discrete type: $F_X(x) = P(X \leq x) = \sum_{t \leq x} p_X(t)$
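# These definitions can be illustrated with a small pure-Python sketch. The fair die (discrete, a PMF) and the standard normal (continuous, a PDF) below are illustrative choices, not part of the text above:

```python
import math

# PMF of a fair six-sided die: p_X(x) = 1/6 for x in {1, ..., 6}
pmf = {x: 1.0 / 6.0 for x in range(1, 7)}

# Discrete CDF: F_X(x) = P(X <= x) is the running sum of the PMF
def die_cdf(x):
    return sum(p for k, p in pmf.items() if k <= x)

# Continuous case: P(a <= X <= b) = F_X(b) - F_X(a).
# The standard normal CDF can be expressed through the error function.
def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(die_cdf(3))                   # P(die <= 3) = 1/2
print(norm_cdf(1) - norm_cdf(-1))   # P(-1 <= Z <= 1), roughly 0.68
```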
# ### Credit to [What is the relationship between the probability density function (PDF), the probability distribution function, and the cumulative distribution function (CDF) for a continuous random variable?](https://www.zhihu.com/question/36853661/answer/69775009)
# ### References
#
# - [Probability density function](https://en.wikipedia.org/wiki/Probability_density_function)
#
# - [Probability mass function](https://en.wikipedia.org/wiki/Probability_mass_function)
#
# - [Cumulative distribution function](https://en.wikipedia.org/wiki/Cumulative_distribution_function)
| ml_basics/rdm005_PDF_PMF_CDF/PDF_PMF_CDF.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Anaconda 3
# language: python
# name: anaconda
# ---
# # Link two datasets
# ## Introduction
#
# This example shows how two datasets with data about persons can be linked. We will try to link the data based on attributes like first name, surname, sex, date of birth, place and address. The data used in this example is part of [Febrl](https://sourceforge.net/projects/febrl/) and is fictitious.
#
# First, start with importing the ``recordlinkage`` module. The submodule ``recordlinkage.datasets`` contains several datasets that can be used for testing. For this example, we use the Febrl datasets 4A and 4B. These datasets can be loaded with the function ``load_febrl4``.
# + nbsphinx="hidden"
# %precision 5
from __future__ import print_function
import pandas as pd
pd.set_option('precision',5)
pd.options.display.max_rows = 10
# -
import recordlinkage
from recordlinkage.datasets import load_febrl4
# The datasets are loaded with the following code. The returned datasets are of type ``pandas.DataFrame``. This makes it easy to manipulate the data if desired. For details about data manipulation with ``pandas``, see their comprehensive documentation http://pandas.pydata.org/.
# +
dfA, dfB = load_febrl4()
dfA
# -
# ## Make record pairs
# It is very intuitive to compare each record in DataFrame ``dfA`` with all records of DataFrame ``dfB``. In fact, we want to make record pairs. Each record pair should contain one record of ``dfA`` and one record of ``dfB``. This process of making record pairs is also called 'indexing'. With the ``recordlinkage`` module, indexing is easy. First, load the ``index.Index`` class and call the `.full` method. This object generates a full index on a ``.index(...)`` call. In case of deduplication of a single dataframe, one dataframe is sufficient as argument.
indexer = recordlinkage.Index()
indexer.full()
pairs = indexer.index(dfA, dfB)
# With the method ``index``, all possible (and unique) record pairs are made. The method returns a ``pandas.MultiIndex``. The number of pairs is equal to the number of records in ``dfA`` times the number of records in ``dfB``.
print (len(dfA), len(dfB), len(pairs))
# Many of these record pairs do not belong to the same person. In case of one-to-one matching, the number of matches should be no more than the number of records in the smallest dataframe. In case of full indexing, ``min(len(dfA), len(dfB))`` is much smaller than ``len(pairs)``. The ``recordlinkage`` module has some more advanced indexing methods to reduce the number of record pairs. Obvious non-matches are left out of the index. Note that if a matching record pair is not included in the index, it cannot be matched anymore.
# One of the most well known indexing methods is named *blocking*. This method includes only record pairs that are identical on one or more stored attributes of the person (or entity in general). The blocking method can be used in the ``recordlinkage`` module.
# +
indexer = recordlinkage.Index()
indexer.block('given_name')
candidate_links = indexer.index(dfA, dfB)
print (len(candidate_links))
# -
# The argument 'given_name' is the blocking variable. This variable has to be the name of a column in ``dfA`` and ``dfB``. It is possible to parse a list of columns names to block on multiple variables. Blocking on multiple variables will reduce the number of record pairs even further.
# Another implemented indexing method is *Sorted Neighbourhood Indexing* (``recordlinkage.index.SortedNeighbourhood``). This method is very useful when there are many misspellings in the strings used for indexing. In fact, sorted neighbourhood indexing is a generalisation of blocking. See the documentation for details about sorted neighbourhood indexing.
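# The principle behind sorted neighbourhood indexing can be sketched in plain Python. The sketch below is a simplified illustration of the idea, not the ``recordlinkage`` implementation: the keys of both datasets are merged and sorted, and records whose sorted positions are close become candidate pairs, so nearby misspellings still survive indexing.

```python
def sorted_neighbourhood_pairs(keys_a, keys_b, window=3):
    # Tag each key with its source dataframe and row index, then sort all
    # keys together. Records that sort within `window` positions of each
    # other become candidate pairs, even when the keys differ slightly.
    tagged = sorted(
        [(k, 'A', i) for i, k in enumerate(keys_a)] +
        [(k, 'B', j) for j, k in enumerate(keys_b)]
    )
    pairs = set()
    for pos, (_, src, idx) in enumerate(tagged):
        for _, src2, idx2 in tagged[pos + 1: pos + window]:
            if src != src2:                      # only cross-dataset pairs
                pairs.add((idx, idx2) if src == 'A' else (idx2, idx))
    return pairs

# The misspelling 'jonathon' sorts right next to 'jonathan', so the pair
# survives, which exact blocking on the name would have missed:
print(sorted_neighbourhood_pairs(['jonathan', 'zoe'], ['jonathon', 'amy']))
```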
# ## Compare records
# Each record pair is a candidate match. To classify the candidate record pairs into matches and non-matches, compare the records on all attributes both records have in common. The ``recordlinkage`` module has a class named ``Compare``. This class is used to compare the records. The following code shows how to compare attributes.
# +
# This cell can take some time to compute.
compare_cl = recordlinkage.Compare()
compare_cl.exact('given_name', 'given_name', label='given_name')
compare_cl.string('surname', 'surname', method='jarowinkler', threshold=0.85, label='surname')
compare_cl.exact('date_of_birth', 'date_of_birth', label='date_of_birth')
compare_cl.exact('suburb', 'suburb', label='suburb')
compare_cl.exact('state', 'state', label='state')
compare_cl.string('address_1', 'address_1', threshold=0.85, label='address_1')
features = compare_cl.compute(candidate_links, dfA, dfB)
# -
# The comparing of record pairs starts when the ``compute`` method is called. All attribute comparisons are stored in a DataFrame with horizontally the features and vertically the record pairs.
features
features.describe()
# The last step is to decide which records belong to the same person. In this example, we keep it simple:
# Sum the comparison results.
features.sum(axis=1).value_counts().sort_index(ascending=False)
features[features.sum(axis=1) > 3]
# ## Full code
# +
import recordlinkage
from recordlinkage.datasets import load_febrl4
dfA, dfB = load_febrl4()
# Indexation step
indexer = recordlinkage.Index()
indexer.block('given_name')
candidate_links = indexer.index(dfA, dfB)
# Comparison step
compare_cl = recordlinkage.Compare()
compare_cl.exact('given_name', 'given_name', label='given_name')
compare_cl.string('surname', 'surname', method='jarowinkler', threshold=0.85, label='surname')
compare_cl.exact('date_of_birth', 'date_of_birth', label='date_of_birth')
compare_cl.exact('suburb', 'suburb', label='suburb')
compare_cl.exact('state', 'state', label='state')
compare_cl.string('address_1', 'address_1', threshold=0.85, label='address_1')
features = compare_cl.compute(candidate_links, dfA, dfB)
# Classification step
matches = features[features.sum(axis=1) > 3]
print(len(matches))
| docs/notebooks/link_two_dataframes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# -------------------
# # Introduction to `PyMC2`
#
# #### <NAME>
# -----------------------
# Installation:
#
# `>> conda install pymc`
# %matplotlib inline
import numpy as np
import scipy as sp
import pymc as pm
import seaborn as sb
import matplotlib.pyplot as plt
# ## Probabilistic model
#
# Suppose you have a sample $\{y_t\}_{t=0}^{T}$ and want to characterize it by the following probabilistic model; for $t\geq 0$
#
# $$ y_{t+1} = \rho y_t + \sigma_x \varepsilon_{t+1}, \quad \varepsilon_{t+1}\stackrel{iid}{\sim}\cal{N}(0,1) $$
#
# with the initial value $y_0 \sim {\cal N}\left(0, \frac{\sigma_x^2}{1-\rho^2}\right)$ and suppose the following (independent) prior beliefs for the parameters $\theta \equiv (\rho, \sigma_x)$
# - $\rho \sim \text{U}(-1, 1)$
# - $\sigma_x \sim \text{IG}(a, b)$
#
#
# **Aim:** given the statistical model and the prior $\pi(\theta)$ we want to ''compute'' the posterior distribution $p\left( \theta \hspace{1mm} | \hspace{1mm} y^T \right)$ associated with the sample $y^T$.
#
# **How:** if no conjugate form available, sample from $p\left( \theta \hspace{1mm} | \hspace{1mm} y^T \right)$ and learn about the posterior's properties from that sample
#
# > **Remark:** We go from the prior $\pi$ to the posterior $p$ by using Bayes rule:
# \begin{equation}
# p\left( \theta \hspace{1mm} | \hspace{1mm} y^T \right) = \frac{f( y^T \hspace{1mm}| \hspace{1mm}\theta) \pi(\theta) }{f( y^T)}
# \end{equation}
# The first-order autoregression implies that the likelihood function of $y^T$ can be factored as follows:
#
# >$$ f(y^T \hspace{1mm}|\hspace{1mm} \theta) = f(y_T| y_{T-1}; \theta)\cdot f(y_{T-1}| y_{T-2}; \theta) \cdots f(y_1 | y_0;\theta )\cdot f(y_0 |\theta) $$
# where for all $t\geq 1$
# $$ f(y_t | y_{t-1}; \theta) = {\mathcal N}(\rho y_{t-1}, \sigma_x^2) = {\mathcal N}(\mu_t, \sigma_x^2)$$
#
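# The factorization above translates directly into a log-likelihood function. The sketch below is written in plain Python for illustration; the MCMC machinery later will not call it explicitly:

```python
import math

def ar1_loglik(y, rho, sigma):
    """Log-likelihood of an AR(1) sample, following the factorization above:
    y_0 ~ N(0, sigma^2 / (1 - rho^2)) and y_t | y_{t-1} ~ N(rho * y_{t-1}, sigma^2)."""
    def norm_logpdf(x, mu, var):
        return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)
    # contribution of the initial condition drawn from the stationary distribution
    ll = norm_logpdf(y[0], 0.0, sigma ** 2 / (1 - rho ** 2))
    # contributions of the conditional densities f(y_t | y_{t-1}; theta)
    for t in range(1, len(y)):
        ll += norm_logpdf(y[t], rho * y[t - 1], sigma ** 2)
    return ll

# The log-likelihood at a plausible rho should beat a clearly wrong one:
print(ar1_loglik([0.1, 0.2, -0.1], rho=0.5, sigma=1.0))
print(ar1_loglik([0.1, 0.2, -0.1], rho=0.99, sigma=1.0))
```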
# Generate a sample with $T=20$ for known parameter values:
# $$\rho = 0.5\quad \sigma_x = 1.0$$
# +
def sample_path(rho, sigma, T, y0=None):
'''
Simulates the sample path for y of length T+1 starting from a specified initial value OR if y0
is None, it initializes the path with a draw from the stationary distribution of y.
Arguments
-----------------
rho (Float) : AR coefficient
sigma (Float) : standard deviation of the error
T (Int) : length of the sample path without x0
y0 (Float) : initial value of y
Return:
-----------------
y_path (Numpy Array) : simulated path
'''
if y0 is None:
stdev_erg = sigma / np.sqrt(1 - rho**2)
y0 = np.random.normal(0, stdev_erg)
y_path = np.empty(T+1)
y_path[0] = y0
eps_path = np.random.normal(0, 1, T)
for t in range(T):
y_path[t + 1] = rho * y_path[t] + sigma * eps_path[t]
return y_path
#-------------------------------------------------------
# Pick true values:
rho_true, sigma_x_true, T = 0.5, 1.0, 20
#np.random.seed(1453534)
sample = sample_path(rho_true, sigma_x_true, T)
# -
# ## Probabilistic models in `pymc`
#
# *Model instance* $\approx$ collection of random variables linked together according to some rules
#
# ### Linkages (hierarchical structure):
# * **parent**: variables that influence another variable
# - e.g. $\rho$ and $\sigma_x$ are parents of $y_0$, $a$ and $b$ are parents of $\sigma_x$
#
# * **child**: variables that are affected by other variables (subjects of parent variables)
# - e.g. $y_t$ is a child of $y_{t-1}$, $\rho$ and $\sigma_x$
#
# >*Why are they useful?*
#
# > child variable's current value automatically changes whenever its parents' values change
#
# ### Random variables:
# - have a `value` attribute producing the current internal value (given the values of the parents)
# - computed on-demand and cached for efficiency.
# - other important attributes: `parents` (gives dictionary), `children` (gives a set)
#
# Two main classes of random variables in `pymc`:
#
# #### 1) Stochastic variable:
# - variable whose value is not completely determined by its parents
# - *Examples:*
# * parameters with a given distribution
# * observable variables (data) = particular realizations of a random variable (see below)
# - treated by the back end as random number generators (see built-in `random()` method)
# - `logp` attribute: evaluate the logprob (mass or density) at the current value; for vector-valued variables it returns the sum of the (joint) logprob
# - *Initialization:*
# * define the distribution (built-in or your own) with `name` + params of the distribution (can be `pymc` variable)
# * optional flags:
# - `value`: for a default initial value; if not specified, initialized by a draw from the given distribution
# - `size`: for multivariate array of independent stochastic variables. (Alternatively: use array as a distribution parameter)
# Initialize `stochastic variables`
# Priors:
rho = pm.Uniform('rho', lower = -1, upper = 1) # note the capitalized distribution name (rule for pymc distributions)
sigma_x = pm.InverseGamma('sigma_x', alpha = 3, beta = 1)
# +
# random() method
print('Initialization:')
print("Current value of rho = {: f}".format(rho.value.reshape(1,)[0]))
print("Current logprob of rho = {: f}".format(rho.logp))
rho.random()
print('\nAfter redrawing:')
print("Current value of rho = {: f}".format(rho.value.reshape(1,)[0]))
print("Current logprob of rho = {: f}".format(rho.logp))
# -
# ------------------
# #### 2) Deterministic variable:
# - variable that is entirely determined by its parents
# - ''exact functions'' of stochastic variables; nevertheless, we can treat them as variables and not as plain Python functions.
# - *Examples:*
# * model implied restrictions on how the parameters and the observable variables are related
# - $\text{var}(y_0)$ is a function of $\rho$ and $\sigma_x$
# - $\mu_{t}$ is an exact function of $\rho$ and $y_{t-1}$
# * sample statistics, i.e. deterministic functions of the sample
# - *Initialization:*
# * decorator form:
# - Python function of stochastic variables AND default values + the decorator `pm.deterministic`
# * elementary operations (no need to write a function or decorate): $+$, $-$, $*$, $/$
# * `pymc.Lambda`
# Initialize `deterministic variables`:
#
# (a) Standard deviation of $y_0$ is a deterministic function of $\rho$ and $\sigma$
# +
@pm.deterministic(trace = False)
def y0_stdev(rho = rho, sigma = sigma_x):
return sigma / np.sqrt(1 - rho**2)
# Alternatively:
#y0_stdev = pm.Lambda('y0_stdev', lambda r = rho, s = sigma_x: s / np.sqrt(1 - r**2) )
# -
# (b) Conditional mean of $y_t$, $\mu_y$, is a deterministic function of $\rho$ and $y_{t-1}$
# +
# For elementary operators simply write
mu_y = rho * sample[:-1]
print(type(mu_y))
# You could also write the following to generate a list of Deterministic functions
#MU_y = [rho * sample[j] for j in range(T)]
#print(type(MU_y))
#print(type(MU_y[1]))
#MU_y = pm.Container(MU_y)
#print(type(MU_y))
# -
# Let's see the parents of `y0_stdev`...
y0_stdev.parents
# Notice that this is a dictionary, so for example...
y0_stdev.parents['rho'].value
rho.random()
y0_stdev.parents['rho'].value # if the parent is a pymc variable, the current value will be always 'updated'
# ... and as we alter the parent's value, the child's value changes accordingly
# +
print("Current value of y0_stdev = {: f}".format(y0_stdev.value))
rho.random()
print('\nAfter redrawing rho:')
print("Current value of y0_stdev = {: f}".format(y0_stdev.value))
# -
# and similarly for `mu_y`
print("Current value of mu_y:")
print(mu_y.value[:4])
rho.random()
print('\nAfter redrawing rho:')
print("Current value of mu_y:")
print(mu_y.value[:4])
# ### How to tell `pymc` what you 'know' about the data?
#
# We define the data as a stochastic variable with fixed values and set the `observed` flag equal to `True`
#
# For the sample $y^T$, depending on the question at hand, we might want to define
# - either $T + 1$ scalar random variables
# - or a scalar $y_0$ and a $T$-vector valued $Y$
#
# In the current setup, as we fix the value of $y$ (observed), it doesn't really matter (approach A is easier). However, if we have an array-valued stochastic variable with a mutable value, the restriction that we cannot update the values of stochastic variables in-place becomes onerous in the sampling step (where the step method should propose an array-valued variable). Straight from the pymc documentation:
# >''In this case, it may be preferable to partition the variable into several scalar-valued variables stored in an array or list.''
# #### (A) $y_0$ as a scalar and $Y$ as a vector valued random variable
y0 = pm.Normal('y0', mu = 0.0, tau = 1 / y0_stdev**2, observed = True, value = sample[0])  # tau is the precision, i.e. 1/variance
Y = pm.Normal('Y', mu = mu_y, tau = 1 / sigma_x**2, observed=True, value = sample[1:])
Y.value
# Notice that the value of this variable is fixed (even if the parent's value changes)
Y.parents['tau'].value
sigma_x.random()
print(Y.parents['tau'].value)
Y.value
# #### (B) $T+1$ scalar random variables
#
# Define an array with `dtype=object`, fill it with scalar variables (use loops) and define it as a `pymc.Container` (this latter step is not necessary, but based on my experience Container types work much more smoothly in the blocking step when we are sampling).
# +
Y_alt = np.empty(T + 1, dtype = object)
Y_alt[0] = y0 # definition of y0 is the same as above
for i in range(1, T + 1):
Y_alt[i] = pm.Normal('y_{:d}'.format(i), mu = mu_y[i-1], tau = 1 / sigma_x**2)  # tau is the precision, i.e. 1/variance
print(type(Y_alt))
Y_alt
# -
# Currently, this is just a numpy array of `pymc.Stochastic` variables. We can make it a `pymc` object by using the `pymc.Container` type.
Y_alt = pm.Container(Y_alt)
type(Y_alt)
# and the pymc methods are applied element-wise.
# ### Create a `pymc.Model` instance
# Remember that it is just a collection of random variables (`Stochastic` and `Deterministic`), hence
ar1_model = pm.Model([rho, sigma_x, y0, Y, y0_stdev, mu_y])
ar1_model.stochastics # notice that this is an unordered set (!)
ar1_model.deterministics
# This object has very limited awareness of the structure of the probabilistic model that it describes and does not itself possess methods for updating the values in the sampling step.
# ----------------
# # Fitting the model to the data (MCMC algorithm)
#
# ### MCMC algorithms
#
# The joint prior distribution sits on an $N$-dimensional space, where $N$ is the number of parameters we are about to make inference on (see the figure below). Looking at the data through the probabilistic model deforms the prior surface into the posterior surface, which we need to explore. In principle, we could naively search this space by picking random points in $\mathbb{R}^N$ and calculating the corresponding posterior value (Monte Carlo methods), but a more efficient way (especially in higher dimensions) is to do Markov Chain Monte Carlo (MCMC), which is basically an intelligent way of discovering the posterior surface.
#
# MCMC is an iterative procedure: at every iteration, it proposes a nearby point in the space, then asks 'how likely is it that this point is close to the maximizer of the posterior surface?'; it accepts the proposed point if the likelihood exceeds a particular level and rejects it otherwise (going back to the old position). The key feature of MCMC is that it produces proposals by simulating a Markov chain for which the posterior is the unique, invariant limiting distribution. In other words, after a possible 'transition period' (i.e. post convergence), it starts producing draws from the posterior.
#
#
# ### MCMC algorithm in `pymc`
#
# By default it uses the *Metropolis-within-Gibbs* algorithm (in my opinion), which is based on two simple principles:
# 1. **Blocking and conditioning:**
# - Divide the $N$ variables of $\theta$ into $K\leq N$ blocks and update every block by sampling from the conditional density, i.e. from the distribuition of the block parameters conditioned on all parameters in the other $K-1$ blocks being at their current values.
# * At scan $t$, cycle through the $K$ blocks
# $$\theta^{(t)} = [\theta^{(t)}_1, \theta^{(t)}_2, \theta^{(t)}_3, \dots, \theta^{(t)}_K] $$
# * Sample from the conditionals
# \begin{align}
# \theta_1^{(t+1)} &\sim f(\theta_1\hspace{1mm} | \hspace{1mm} \theta^{(t)}_2, \theta^{(t)}_3, \dots, \theta^{(t)}_K; \text{data}) \\
# \theta_2^{(t+1)} &\sim f(\theta_2\hspace{1mm} | \hspace{1mm} \theta^{(t+1)}_1, \theta^{(t)}_3, \dots, \theta^{(t)}_K; \text{data}) \\
# \theta_3^{(t+1)} &\sim f(\theta_3\hspace{1mm} | \hspace{1mm} \theta^{(t+1)}_1, \theta^{(t+1)}_2, \dots, \theta^{(t)}_K; \text{data}) \\
# \dots & \\
# \theta_K^{(t+1)} &\sim f(\theta_K\hspace{1mm} | \hspace{1mm} \theta^{(t+1)}_1, \theta^{(t+1)}_2, \dots, \theta^{(t+1)}_{K-1}; \text{data})
# \end{align}
#
# 2. **Sampling (choose/construct `pymc.StepMethod`):** if for a given block the conditional density $f$ can be expressed in (semi-)analytic form, use it; if not, use Metropolis-Hastings
#
# * Semi-closed-form example: forward-backward sampler (<NAME> Kohn, 1994)
# * Metropolis(-Hastings) algorithm:
# 1. Start at $\theta$
# 2. Propose a new point in the parameterspace according to some proposal density $J(\theta' | \theta)$ (e.g. random walk)
# 3. Accept the proposed point with probability
# $$\alpha = \min\left( 1, \frac{p(\theta'\hspace{1mm} |\hspace{1mm} \text{data})\hspace{1mm} J(\theta \hspace{1mm}|\hspace{1mm} \theta')}{ p(\theta\hspace{1mm} |\hspace{1mm} \text{data})\hspace{1mm} J(\theta' \hspace{1mm}| \hspace{1mm}\theta)} \right) $$
# - If accept: Move to the proposed point $\theta'$ and return to Step 1.
# - If reject: Don't move, keep the point $\theta$ and return to Step 1.
# 4. After a large number of iterations (once the Markov Chain convereged), return all accepted $\theta$ as a sample from the posterior
#
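# As a concrete toy illustration of these two principles (this is a standalone sketch, not part of `pymc`): the sampler below targets a bivariate normal with correlation 0.8 (chosen only for illustration), splits it into two scalar blocks, and updates each block with a random-walk Metropolis step conditional on the other block's current value.

```python
import math
import random

random.seed(42)
RHO = 0.8  # correlation of the illustrative bivariate normal target

def log_target(x, y):
    # log density of N(0, [[1, RHO], [RHO, 1]]), up to an additive constant
    return -(x ** 2 - 2 * RHO * x * y + y ** 2) / (2 * (1 - RHO ** 2))

def metropolis_within_gibbs(n_iter, step=1.0):
    x, y = 0.0, 0.0
    draws = []
    for _ in range(n_iter):
        for block in (0, 1):                       # cycle through the blocks
            px = x + step * random.gauss(0, 1) if block == 0 else x
            py = y + step * random.gauss(0, 1) if block == 1 else y
            # Metropolis accept/reject with the conditional density ratio
            # (the other block is held at its current value)
            if math.log(random.random()) < log_target(px, py) - log_target(x, y):
                x, y = px, py
        draws.append((x, y))
    return draws

draws = metropolis_within_gibbs(20000)
xs = [d[0] for d in draws[2000:]]                  # discard a burn-in period
print(sum(xs) / len(xs))                           # should be near 0
```

# With a symmetric random-walk proposal the Hastings correction $J(\theta|\theta')/J(\theta'|\theta)$ cancels, which is why only the ratio of target densities appears in the acceptance step.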
# Again, a `pymc.Model` instance is not much more than a collection; for example, the model variables (blocks) are not yet matched with step methods determining how to update values in the sampling step. In order to do that, we first need to construct an MCMC instance, which is then ready to be sampled from.
#
# MCMC's primary job is to create and coordinate a collection of **step methods**, each of which is responsible for updating one or more variables (blocks) at each step of the MCMC algorithm. By default, step methods are automatically assigned to variables by PyMC (after we call the sample method).
#
# #### Main built-in `pymc.StepMethod`s
# * Metropolis
# * AdaptiveMetropolis
# * Slicer
# * Gibbs
#
# you can assign step methods manually by calling the method `use_step_method(method, *args, **kwargs)`:
M = pm.MCMC(ar1_model)
# Notice that the step_methods are not assigned yet
M.step_method_dict
# You can specify them now, or if you call the `sample` method, pymc will assign the step_methods automatically according to some rule
# draw 50,000 iterations, discard the first 1,000 as burn-in and keep only every 5th draw
M.sample(iter = 50000, burn = 1000, thin = 5)
# ... and you can check what kind of step methods have been assigned (the default in most cases is the Metropolis step method for non-observed stochastic variables, while observed stochastics are kept fixed at their data values)
M.step_method_dict
# The sample can be reached via the trace method (use the name you gave at initialization, not the Python variable name, so it is useful to make the two coincide)
M.trace('rho')[:20]
M.trace('sigma_x')[:].shape
# Then this is just a numpy array, so you can do all sorts of things with it. For example, plot:
# +
sigma_sample = M.trace('sigma_x')[:]
rho_sample = M.trace('rho')[:]
fig, ax = plt.subplots(1, 2, figsize = (15, 5))
ax[0].plot(sigma_sample)
ax[1].hist(sigma_sample)
# -
# Actually, you don't have to waste your time on constructing different subplots. `pymc`'s built-in plotting functionality creates pretty informative plots for you (based on `matplotlib`). On the figure below:
# - Upper left subplot: trace,
# - Lower left subplot: autocorrelation (try to resample the model with `thin=1`),
# - Right subplot: histogram with the mean
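The autocorrelation shown in the lower-left subplot can be reproduced by hand; the quantity plotted at each lag is (a plain NumPy sketch):

```python
import numpy as np

def autocorr(x, lag):
    """Sample autocorrelation at a given lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# an AR(1)-style chain with rho = 0.9 is strongly autocorrelated at lag 1,
# which is exactly what thinning is meant to reduce
rng = np.random.default_rng(0)
chain = np.zeros(10_000)
for t in range(1, chain.size):
    chain[t] = 0.9 * chain[t - 1] + rng.normal()
lag1 = autocorr(chain, 1)   # close to 0.9
```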
from pymc.Matplot import plot as fancy_plot
fancy_plot(M.trace('rho'))
# For a non-graphical summary of the posterior use the `stats()` method
# +
M.stats('rho')
# Try also:
#M.summary()
# +
N = len(rho_sample)
rho_pr = [rho.random() for i in range(N)]
sigma_pr = [sigma_x.random() for i in range(N)]
Prior = np.vstack([rho_pr, sigma_pr]).T
Posterior = np.vstack([rho_sample, sigma_sample]).T
# +
fig, bx = plt.subplots(1, 2, figsize = (17, 10), sharey = True)
sb.kdeplot(Prior, shade = True, cmap = 'PuBu', ax = bx[0])
bx[0].patch.set_facecolor('white')
bx[0].collections[0].set_alpha(0)
bx[0].axhline(y = sigma_x_true, color = 'DarkRed', lw =2)
bx[0].axvline(x = rho_true, color = 'DarkRed', lw =2)
bx[0].set_xlabel(r'$\rho$', fontsize = 18)
bx[0].set_ylabel(r'$\sigma_x$', fontsize = 18)
bx[0].set_title('Prior', fontsize = 20)
sb.kdeplot(Posterior, shade = True, cmap = 'PuBu', ax = bx[1])
bx[1].patch.set_facecolor('white')
bx[1].collections[0].set_alpha(0)
bx[1].axhline(y = sigma_x_true, color = 'DarkRed', lw =2)
bx[1].axvline(x = rho_true, color = 'DarkRed', lw =2)
bx[1].set_xlabel(r'$\rho$', fontsize = 18)
bx[1].set_ylabel(r'$\sigma_x$', fontsize = 18)
bx[1].set_title('Posterior', fontsize = 20)
plt.xlim(-1, 1)
plt.ylim(0, 1.5)
plt.tight_layout()
plt.savefig('beamer/prior_post.pdf')
# +
rho_grid = np.linspace(-1, 1, 100)
sigmay_grid = np.linspace(0, 1.5, 100)
U = sp.stats.uniform(-1, 2)
IG = sp.stats.invgamma(3)
fig2, cx = plt.subplots(2, 2, figsize = (17, 12), sharey = True)
cx[0, 0].plot(rho_grid, U.pdf(rho_grid), 'r-', lw = 3, alpha = 0.6, label = r'$\rho$ prior')
cx[0, 0].set_title(r"Marginal prior for $\rho$", fontsize = 18)
cx[0, 0].axvline(x = rho_true, color = 'DarkRed', lw = 2, linestyle = '--', label = r'True $\rho$')
cx[0, 0].legend(loc='best', fontsize = 16)
cx[0, 0].set_xlim(-1, 1)
sb.distplot(rho_sample, ax = cx[0,1], kde_kws={"color": "r", "lw": 3, "label": r"$\rho$ posterior"})
cx[0, 1].set_title(r"Marginal posterior for $\rho$", fontsize = 18)
cx[0, 1].axvline(x = rho_true, color = 'DarkRed', lw = 2, linestyle = '--', label = r'True $\rho$')
cx[0, 1].legend(loc='best', fontsize = 16)
cx[0, 1].set_xlim(-1, 1)
cx[1, 0].plot(sigmay_grid, IG.pdf(sigmay_grid), 'r-', lw=3, alpha=0.6, label=r'$\sigma_y$ prior')
cx[1, 0].set_title(r"Marginal prior for $\sigma_y$", fontsize = 18)
cx[1, 0].axvline(x = sigma_x_true, color = 'DarkRed', lw = 2, linestyle = '--', label = r'True $\sigma_y$')
cx[1, 0].legend(loc = 'best', fontsize = 16)
cx[1, 0].set_xlim(0, 3)
sb.distplot(sigma_sample, ax = cx[1,1], kde_kws={"color": "r", "lw": 3, "label": r"$\sigma_y$ posterior"})
cx[1, 1].set_title(r"Marginal posterior for $\sigma_y$", fontsize = 18)
cx[1, 1].axvline(x = sigma_x_true, color = 'DarkRed', lw = 2, linestyle = '--', label = r'True $\sigma_y$')
cx[1, 1].legend(loc = 'best', fontsize = 16)
cx[1, 1].set_xlim(0, 3)
plt.tight_layout()
plt.savefig('beamer/marginal_prior_post.pdf')
# -
# ## Sources and further reading:
#
# `pymc` official documentation: https://pymc-devs.github.io/pymc/index.html
#
# Rich set of fun examples (very easy read) -- **Probabilistic Programming & Bayesian Methods for Hackers**
# http://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/
#
# Nice example about `potential`: http://healthyalgorithms.com/2008/11/05/mcmc-in-python-pymc-to-sample-uniformly-from-a-convex-body/
#
# Non-trivial example comparing the Gibbs and Metropolis algorithms:
# https://github.com/aflaxman/pymc-examples/blob/master/gibbs_for_uniform_ball.ipynb
#
# Another example: https://users.obs.carnegiescience.edu/cburns/ipynbs/PyMC.html
| lecture7/Intro_to_pymc.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/linreg_bayes_svi_hmc_pyro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="DXAlhAzkwGMP"
# # Bayesian linear regression in Pyro
# We compare stochastic variational inference with HMC for Bayesian linear regression. We use the example from sec 8.1 of [Statistical Rethinking ed 2](https://xcelab.net/rm/statistical-rethinking/).
# The code is modified from https://pyro.ai/examples/bayesian_regression.html and
# https://pyro.ai/examples/bayesian_regression_ii.html.
#
# For a NumPyro version (that uses the Laplace approximation instead of SVI/HMC), see https://fehiepsi.github.io/rethinking-numpyro/08-conditional-manatees.html.
#
# + colab={"base_uri": "https://localhost:8080/"} id="CaBMZERRwBcR" outputId="4b425a0c-e958-4ac5-f0ea-f8637e52c59d"
# #!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
# !pip3 install pyro-ppl
# + id="icxereXfwToC"
import os
from functools import partial
import torch
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import pyro
import pyro.distributions as dist
from pyro.nn import PyroSample
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.infer import Predictive
from pyro.infer import SVI, Trace_ELBO
from pyro.infer import MCMC, NUTS
pyro.set_rng_seed(1)
from torch import nn
from pyro.nn import PyroModule
# + [markdown] id="zToQkjg-whSm"
# # Data
#
# The dataset has 3 variables: $A$ (whether a country is in Africa or not), $R$ (its terrain ruggedness), and $G$ (the log GDP per capita in 2000). We want to predict $G$ from $A$, $R$, and $A \times R$. The response variable is very skewed, so we log transform it.
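Why the log transform helps can be checked on synthetic skewed data (a toy illustration, not the actual GDP figures):

```python
import numpy as np

def skewness(a):
    a = np.asarray(a, dtype=float)
    return np.mean((a - a.mean()) ** 3) / a.std() ** 3

rng = np.random.default_rng(0)
gdp_like = rng.lognormal(mean=8.0, sigma=1.0, size=10_000)  # heavily right-skewed
raw_skew = skewness(gdp_like)           # strongly positive
log_skew = skewness(np.log(gdp_like))   # close to zero
```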
# + id="VosIzvMYxZQ6"
DATA_URL = "https://d2hg8soec8ck9v.cloudfront.net/datasets/rugged_data.csv"
data = pd.read_csv(DATA_URL, encoding="ISO-8859-1")
df = data[["cont_africa", "rugged", "rgdppc_2000"]]
df = df[np.isfinite(df.rgdppc_2000)]
df["rgdppc_2000"] = np.log(df["rgdppc_2000"])
# + colab={"base_uri": "https://localhost:8080/", "height": 488} id="bQ13a_i7xxyS" outputId="3818eeff-8918-4471-cfc1-8ea23bb8571c"
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 6), sharey=True)
african_nations = df[df["cont_africa"] == 1]
non_african_nations = df[df["cont_africa"] == 0]
sns.scatterplot(non_african_nations["rugged"],
non_african_nations["rgdppc_2000"],
ax=ax[0])
ax[0].set(xlabel="Terrain Ruggedness Index",
ylabel="log GDP (2000)",
title="Non African Nations")
sns.scatterplot(african_nations["rugged"],
african_nations["rgdppc_2000"],
ax=ax[1])
ax[1].set(xlabel="Terrain Ruggedness Index",
ylabel="log GDP (2000)",
title="African Nations");
# + id="oRVVnVGpx4an"
# Dataset: Add a feature to capture the interaction between "cont_africa" and "rugged"
# coefficients are: beta_a, beta_r, beta_ar
df["cont_africa_x_rugged"] = df["cont_africa"] * df["rugged"]
data = torch.tensor(df[["cont_africa", "rugged", "cont_africa_x_rugged", "rgdppc_2000"]].values,
dtype=torch.float)
x_data, y_data = data[:, :-1], data[:, -1]
# + [markdown] id="iZnGZr9ryPEP"
# # Ordinary least squares
#
# We define the linear model as a simple neural network with no hidden layers. We fit it by using maximum likelihood, optimized by (full batch) gradient descent, as is standard for DNNs.
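Since the model is linear and the loss is MSE, the gradient-descent fit can be sanity-checked against the closed-form least-squares solution on synthetic data (the coefficients below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true, b_true = np.array([1.5, -2.0, 0.5]), 3.0
y = X @ w_true + b_true + 0.1 * rng.normal(size=200)

A = np.hstack([X, np.ones((200, 1))])          # append a bias column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # [w0, w1, w2, bias]
```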
# + id="IjZLw7CCyzWH"
pyro.set_rng_seed(1)
#linear_reg_model = PyroModule[nn.Linear](3, 1)
linear_reg_model = nn.Linear(3, 1)
# + colab={"base_uri": "https://localhost:8080/"} id="dzYF0y6yzUZI" outputId="21619b64-a888-4cf6-f41a-6b4bbb62da8c"
print(type(linear_reg_model))
# + colab={"base_uri": "https://localhost:8080/"} id="YgmomZP-yYoA" outputId="a546855d-64e6-4212-e695-715b40e0d18b"
# Define loss and optimize
loss_fn = torch.nn.MSELoss(reduction='sum')
optim = torch.optim.Adam(linear_reg_model.parameters(), lr=0.05)
num_iterations = 1500
def train():
# run the model forward on the data
y_pred = linear_reg_model(x_data).squeeze(-1)
# calculate the mse loss
loss = loss_fn(y_pred, y_data)
# initialize gradients to zero
optim.zero_grad()
# backpropagate
loss.backward()
# take a gradient step
optim.step()
return loss
for j in range(num_iterations):
loss = train()
if (j + 1) % 200 == 0:
print("[iteration %04d] loss: %.4f" % (j + 1, loss.item()))
# + colab={"base_uri": "https://localhost:8080/"} id="Nkyu2Wo0yxw6" outputId="eb65c7e5-b464-47f0-d3e8-55bb99c69c64"
# Inspect learned parameters
print("Learned parameters:")
for name, param in linear_reg_model.named_parameters():
print(name, param.data.numpy())
# + colab={"base_uri": "https://localhost:8080/"} id="FjmDtKOy0Iki" outputId="5ef199c7-41eb-49ad-8a01-22de196de2ce"
mle_weights = linear_reg_model.weight.data.numpy().squeeze()
print(mle_weights)
mle_bias = linear_reg_model.bias.data.numpy().squeeze()
print(mle_bias)
mle_params = [mle_weights, mle_bias]
print(mle_params)
# + colab={"base_uri": "https://localhost:8080/", "height": 431} id="qaua5kICzCCM" outputId="6de29c48-7820-485f-f2ac-fce5f50ab2c1"
fit = df.copy()
fit["mean"] = linear_reg_model(x_data).detach().cpu().numpy()
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 6), sharey=True)
african_nations = fit[fit["cont_africa"] == 1]
non_african_nations = fit[fit["cont_africa"] == 0]
fig.suptitle("Regression Fit", fontsize=16)
ax[0].plot(non_african_nations["rugged"], non_african_nations["rgdppc_2000"], "o")
ax[0].plot(non_african_nations["rugged"], non_african_nations["mean"], linewidth=2)
ax[0].set(xlabel="Terrain Ruggedness Index",
ylabel="log GDP (2000)",
title="Non African Nations")
ax[1].plot(african_nations["rugged"], african_nations["rgdppc_2000"], "o")
ax[1].plot(african_nations["rugged"], african_nations["mean"], linewidth=2)
ax[1].set(xlabel="Terrain Ruggedness Index",
ylabel="log GDP (2000)",
title="African Nations");
# + [markdown] id="_BC19x7uzJCd"
# # Bayesian model
#
# To make a Bayesian version of the linear neural network, we need to use a `PyroModule` instead of a `torch.nn.Module`. This lets us replace the torch tensors containing the parameters with random variables, defined by `PyroSample` statements. We also specify the likelihood function by using a plate over the multiple observations.
# + id="TKIH76N_zJ3y"
class BayesianRegression(PyroModule):
def __init__(self, in_features, out_features):
super().__init__()
self.linear = PyroModule[nn.Linear](in_features, out_features)
self.linear.weight = PyroSample(dist.Normal(0., 1.).expand([out_features, in_features]).to_event(2))
self.linear.bias = PyroSample(dist.Normal(0., 10.).expand([out_features]).to_event(1))
def forward(self, x, y=None):
sigma = pyro.sample("sigma", dist.Uniform(0., 10.))
mean = self.linear(x).squeeze(-1)
mu = pyro.deterministic("mu", mean) # save this variable so we can access it later
with pyro.plate("data", x.shape[0]):
obs = pyro.sample("obs", dist.Normal(mean, sigma), obs=y)
return mean
# + [markdown] id="yJa76XkEBOvj"
# # Utilities
# + [markdown] id="MGadNJZJN9Zj"
# ## Summarize posterior
# + id="XJIn7L3QBQxn"
def summary_np_scalars(samples):
site_stats = {}
for site_name, values in samples.items():
marginal_site = pd.DataFrame(values)
describe = marginal_site.describe(percentiles=[.05, 0.25, 0.5, 0.75, 0.95]).transpose()
site_stats[site_name] = describe[["mean", "std", "5%", "25%", "50%", "75%", "95%"]]
return site_stats
def summary_torch(samples):
site_stats = {}
for k, v in samples.items():
site_stats[k] = {
"mean": torch.mean(v, 0),
"std": torch.std(v, 0),
"5%": v.kthvalue(int(len(v) * 0.05), dim=0)[0],
"95%": v.kthvalue(int(len(v) * 0.95), dim=0)[0],
}
return site_stats
# + id="tlQ3ebQuLNX3"
def plot_param_post_helper(samples, label, axs):
ax = axs[0]
sns.distplot(samples['linear.bias'], ax=ax, label=label)
ax.set_title('bias')
for i in range(0,3):
ax = axs[i+1]
sns.distplot(samples['linear.weight'][:,0,i], ax=ax, label=label)
ax.set_title(f'weight {i}')
def plot_param_post(samples_list, label_list):
fig, axs = plt.subplots(nrows=2, ncols=2, figsize=(12, 10))
axs = axs.reshape(-1)
fig.suptitle("Marginal Posterior density - Regression Coefficients", fontsize=16)
n_methods = len(samples_list)
for i in range(n_methods):
plot_param_post_helper(samples_list[i], label_list[i], axs)
ax = axs[-1]
handles, labels = ax.get_legend_handles_labels()
fig.legend(handles, labels, loc='upper right');
def plot_param_post2d_helper(samples, label, axs, shade=True):
ba = samples["linear.weight"][:,0,0] # africa indicator
br = samples["linear.weight"][:,0,1] # ruggedness
bar = samples["linear.weight"][:,0,2] # africa*ruggedness
sns.kdeplot(ba, br, ax=axs[0], shade=shade, label=label)
axs[0].set(xlabel="bA", ylabel="bR", xlim=(-2.5, -1.2), ylim=(-0.5, 0.1))
sns.kdeplot(br, bar, ax=axs[1], shade=shade, label=label)
axs[1].set(xlabel="bR", ylabel="bAR", xlim=(-0.45, 0.05), ylim=(-0.15, 0.8))
def plot_param_post2d(samples_list, label_list):
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(12, 6))
axs = axs.reshape(-1)
fig.suptitle("Cross-section of the Posterior Distribution", fontsize=16)
n_methods = len(samples_list)
shades = [False, True] # first method is contour, second is shaded
for i in range(n_methods):
plot_param_post2d_helper(samples_list[i], label_list[i], axs, shades[i])
ax = axs[-1]
handles, labels = ax.get_legend_handles_labels()
fig.legend(handles, labels, loc='upper right');
#fig.legend()
# + [markdown] id="IzGDTNImOBWC"
# ## Plot posterior predictions
# + id="FFxwlrxINvpO"
def plot_pred_helper(predictions, africa, ax):
nations = predictions[predictions["cont_africa"] == africa]
nations = nations.sort_values(by=["rugged"])
ax.plot(nations["rugged"], nations["mu_mean"], color="k")
ax.plot(nations["rugged"], nations["true_gdp"], "o")
# uncertainty about mean
ax.fill_between(nations["rugged"], nations["mu_perc_5"], nations["mu_perc_95"],
alpha=0.2, color="k")
# uncertainty about observations
ax.fill_between(nations["rugged"], nations["y_perc_5"], nations["y_perc_95"],
alpha=0.15, color="k")
ax.set(xlabel="Terrain Ruggedness Index", ylabel="log GDP (2000)")
return ax
def make_post_pred_df(samples_pred):
pred_summary = summary_torch(samples_pred)
mu = pred_summary["_RETURN"]
#mu = pred_summary["mu"]
y = pred_summary["obs"]
predictions = pd.DataFrame({
"cont_africa": x_data[:, 0],
"rugged": x_data[:, 1],
"mu_mean": mu["mean"],
"mu_perc_5": mu["5%"],
"mu_perc_95": mu["95%"],
"y_mean": y["mean"],
"y_perc_5": y["5%"],
"y_perc_95": y["95%"],
"true_gdp": y_data,
})
return predictions
def plot_pred(samples_pred):
predictions = make_post_pred_df(samples_pred)
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(12, 6), sharey=True)
plot_pred_helper(predictions, 0, axs[0])
axs[0].set_title('Non-African nations')
plot_pred_helper(predictions, 1, axs[1])
axs[1].set_title('African nations')
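The two `fill_between` bands above differ for a reason: the mu band captures only uncertainty about the regression mean, while the obs band also includes observation noise, so it is always the wider of the two. Schematically (toy numbers):

```python
import numpy as np

rng = np.random.default_rng(0)
mu_draws = rng.normal(2.0, 0.1, size=5_000)   # posterior draws of the mean
obs_draws = rng.normal(mu_draws, 0.5)         # plus observation noise

mu_lo, mu_hi = np.percentile(mu_draws, [5, 95])
obs_lo, obs_hi = np.percentile(obs_draws, [5, 95])
```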
# + [markdown] id="D5_w1RzE7dDP"
# # HMC inference
# + colab={"base_uri": "https://localhost:8080/"} id="aYVpPqAt4d39" outputId="1ed95fc7-8b1f-4d07-f4e2-9bec723bafa7"
pyro.set_rng_seed(1)
model = BayesianRegression(3, 1)
nuts_kernel = NUTS(model)
mcmc = MCMC(nuts_kernel, num_samples=1000, warmup_steps=200)
mcmc.run(x_data, y_data)
# + colab={"base_uri": "https://localhost:8080/"} id="llAcCKws7z7j" outputId="9bab22ad-02be-409d-a22d-0ac56699e635"
print(mcmc.get_samples().keys())
print(mcmc.get_samples()['linear.weight'].shape)
print(mcmc.get_samples()['linear.bias'].shape)
# + colab={"base_uri": "https://localhost:8080/"} id="xD6rjNeMBvdB" outputId="23d29440-f837-48fe-8264-3b92cbf5c47f"
hmc_samples_torch = mcmc.get_samples()
summary_torch(hmc_samples_torch)
# + [markdown] id="XJ6m0hJ3DWgZ"
# ## Parameter posterior
# + colab={"base_uri": "https://localhost:8080/", "height": 822} id="ExWwOGYFB1-z" outputId="a6e1d3ff-76ed-4cee-a60f-8c658d74e79c"
hmc_samples_params = {k: v.detach().cpu().numpy() for k, v in hmc_samples_torch.items()}
plot_param_post([hmc_samples_params], ['HMC'])
# + colab={"base_uri": "https://localhost:8080/"} id="vA26j9gfZTWQ" outputId="9b6af8b3-3586-426d-852c-c10483aadb68"
hmc_samples_params['linear.weight'].shape
# + colab={"base_uri": "https://localhost:8080/", "height": 520} id="dzFb2S8CqMRU" outputId="bd99ba82-a45b-41cd-a82e-0a99a7f917fb"
plot_param_post2d([hmc_samples_params], ['HMC'])
# + [markdown] id="bzxlvElIDZRk"
# ## Predictive posterior
# + colab={"base_uri": "https://localhost:8080/"} id="o1B66BHiHiw3" outputId="18cddf97-7e2a-4f1f-e827-8538dfacb7aa"
predictive = Predictive(model, mcmc.get_samples())
hmc_samples_pred = predictive(x_data)
print(hmc_samples_pred.keys())
print(hmc_samples_pred['obs'].shape)
print(hmc_samples_pred['mu'].shape)
# + colab={"base_uri": "https://localhost:8080/"} id="d79Q8d6Z-Cq0" outputId="ce849ea7-e987-43da-8a8e-75972cdc5e51"
#predictive = Predictive(model, mcmc.get_samples(), return_sites=("obs", "_RETURN"))
predictive = Predictive(model, mcmc.get_samples(), return_sites=("obs", "mu", "_RETURN"))
hmc_samples_pred = predictive(x_data)
print(hmc_samples_pred.keys())
print(hmc_samples_pred['obs'].shape)
print(hmc_samples_pred['mu'].shape)
print(hmc_samples_pred['_RETURN'].shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 403} id="OET6RbjdNz45" outputId="6e0e6ada-9ee3-4dae-ba62-18153d6013e4"
plot_pred(hmc_samples_pred)
plt.savefig('linreg_africa_post_pred_hmc.pdf', dpi=300)
# + [markdown] id="izz11PXGzuor"
# # Diagonal Gaussian variational posterior
# + [markdown] id="bej6wzYp2gfJ"
# ## Fit
# + id="kSFDgjeZzxH7"
pyro.set_rng_seed(1)
model = BayesianRegression(3, 1)
guide = AutoDiagonalNormal(model)
adam = pyro.optim.Adam({"lr": 0.03})
svi = SVI(model, guide, adam, loss=Trace_ELBO())
# + colab={"base_uri": "https://localhost:8080/"} id="W8koXK66z7g3" outputId="1225c07d-9c66-4bb7-fb6b-b095dc2c28c0"
pyro.clear_param_store()
num_iterations = 1000
for j in range(num_iterations):
# calculate the loss and take a gradient step
loss = svi.step(x_data, y_data)
if j % 100 == 0:
print("[iteration %04d] loss: %.4f" % (j + 1, loss / len(data)))
# + [markdown] id="7uzP3mpH2iGz"
# ## Parameter posterior
# + colab={"base_uri": "https://localhost:8080/"} id="aPdPvnPwX8Bi" outputId="1fc41814-0a75-4ea6-cdac-63ef844b7c59"
post = guide.get_posterior()
nsamples = 800
samples = post.sample(sample_shape=(nsamples,))
print(samples.shape) # [800,5]
print(torch.mean(samples,dim=0)) # transform(sigma), weights 0:2, bias
# + colab={"base_uri": "https://localhost:8080/"} id="9dU8FyQYY4Lu" outputId="5b200907-a323-4f93-c0d4-67ddd7041c2d"
weights = np.reshape(samples[:,1:4].detach().cpu().numpy(), (-1, 1, 3))
bias = samples[:,4].detach().cpu().numpy()
diag_samples_params = {'linear.weight': weights, 'linear.bias': bias}
print(diag_samples_params['linear.weight'].shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 952} id="_1DG1ifuc0Yr" outputId="adc98dfa-293e-4372-cd29-a1cfd9f3a14f"
plot_param_post([hmc_samples_params, diag_samples_params, ], ['HMC', 'Diag'])
plt.savefig('linreg_africa_post_marginals_hmc_diag.pdf', dpi=300)
# + colab={"base_uri": "https://localhost:8080/", "height": 585} id="xbEb4skycsv_" outputId="dd91ceee-8b4c-4c1e-8778-a1d7fca93b3a"
plot_param_post2d([hmc_samples_params, diag_samples_params], ['HMC', 'Diag'])
plt.savefig('linreg_africa_post_2d_hmc_diag.pdf', dpi=300)
# + [markdown] id="L798ffc32vBq"
# ## Posterior predictive
#
# We extract the posterior predictive distribution for obs, and the return value of the model (which is the mean prediction).
# + colab={"base_uri": "https://localhost:8080/"} id="OL4XiD4f22BZ" outputId="2a3b53a6-4210-41e1-b1cd-b7868fd30d27"
predictive = Predictive(model, guide=guide, num_samples=800, return_sites=("obs", "_RETURN"))
diag_samples_pred = predictive(x_data)
print(diag_samples_pred.keys())
print(diag_samples_pred['_RETURN'].shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 403} id="x2uQ62X14hFZ" outputId="b744aaf9-64d4-4df1-8b55-d2a3e8dab69a"
plot_pred(diag_samples_pred)
# + [markdown] id="4V3ovZkUYdh8"
# ## Scratch
#
# Experiments with the log(sigma) term.
#
#
#
#
# + colab={"base_uri": "https://localhost:8080/"} id="371Vts3bBHOY" outputId="4c00e730-77fa-4f64-fe50-b8c87a3eecc0"
print(pyro.get_param_store().keys())
# + colab={"base_uri": "https://localhost:8080/"} id="YFlWyrA10EEE" outputId="9c5af015-26ee-4c0c-9fc6-9aa582807753"
guide.requires_grad_(False)
for name, value in pyro.get_param_store().items():
print(name, pyro.param(name))
# + colab={"base_uri": "https://localhost:8080/"} id="Pk9k0_cr03pm" outputId="434c6836-d724-4081-e588-7d4dfa7f6e3a"
# derive posterior quantiles for model parameters from the variational parameters
# note that we transform to the original parameter domain (eg sigma is in [0,10])
quant = guide.quantiles([0.5])
print(quant)
# + colab={"base_uri": "https://localhost:8080/"} id="QRF7VK6rstT6" outputId="74a6f04a-7034-46db-b30f-c4fc22fc9b7e"
post = guide.get_posterior()
print(type(post))
print(post)
print(post.support)
print(post.mean) # transform(sigma), weights 0:2, bias
# + colab={"base_uri": "https://localhost:8080/"} id="93KkloPTBXrJ" outputId="3a9bcfda-2ba5-4a8c-eaa9-915fe6108011"
# gaussian approx for sigma contains mean and variance of t=logit(sigma/10)
# so to get posterior for sigma, we need to apply sigmoid(t)
s = torch.tensor(-2.2916)
print(torch.sigmoid(s)*10)
# + colab={"base_uri": "https://localhost:8080/"} id="g8sFb-rltPGu" outputId="e674747a-2d7b-49ed-9502-33b4166dd55d"
nsamples = 800
samples = post.sample(sample_shape=(nsamples,))
print(samples.shape)
print(torch.mean(samples,dim=0)) # E[transform(sigma)]=-2.2926,
# + colab={"base_uri": "https://localhost:8080/"} id="ANeiZwlpR4BV" outputId="90eed7b3-b38d-46b0-f105-3d96739453a6"
s = torch.tensor(-2.2926)
print(torch.sigmoid(s)*10)
# + colab={"base_uri": "https://localhost:8080/"} id="R9ulvgigwej3" outputId="16978b2e-b85e-4040-deb7-880e8fa105fe"
transform = guide.get_transform()
trans_samples = transform(samples)
print(torch.mean(trans_samples,dim=0))
trans_samples = transform.inv(samples)
print(torch.mean(trans_samples,dim=0))
# + colab={"base_uri": "https://localhost:8080/"} id="3c0p9agDW_YI" outputId="4d43c489-ab38-4847-b87d-67fbeb4c6bfc"
sample = guide.sample_latent()
print(sample)
sample = list(guide._unpack_latent(sample))
print(sample)
# + colab={"base_uri": "https://localhost:8080/"} id="XW2MQeD4MKac" outputId="9654f6eb-7a5e-4e36-8180-526ab4b766f9"
predictive = Predictive(model, guide=guide, num_samples=800)
samples = predictive(x_data)
print(samples.keys())
print(samples['linear.bias'].shape)
print(samples['linear.weight'].shape)
# + [markdown] id="YTGWsgzrdfSD"
# # Full Gaussian variational posterior
# + [markdown] id="XU3PsyB7d0Cy"
# ## Fit
# + id="L7_C_wdtdggs"
pyro.set_rng_seed(1)
model = BayesianRegression(3, 1)
from pyro.infer.autoguide import AutoMultivariateNormal, init_to_mean
guide = AutoMultivariateNormal(model, init_loc_fn=init_to_mean)
adam = pyro.optim.Adam({"lr": 0.03})
svi = SVI(model, guide, adam, loss=Trace_ELBO())
# + colab={"base_uri": "https://localhost:8080/"} id="MeeADsNady7D" outputId="466c1db7-0143-4c6a-e140-f741705c1a4b"
pyro.clear_param_store()
num_iterations = 1000
for j in range(num_iterations):
# calculate the loss and take a gradient step
loss = svi.step(x_data, y_data)
if j % 100 == 0:
print("[iteration %04d] loss: %.4f" % (j + 1, loss / len(data)))
# + [markdown] id="OA4lVTued1eM"
# ## Parameter posterior
# + colab={"base_uri": "https://localhost:8080/"} id="EmCc5TMEd2o_" outputId="4fbf6512-5813-48b8-f280-ad27a78edc79"
post = guide.get_posterior()
nsamples = 800
samples = post.sample(sample_shape=(nsamples,))
print(samples.shape) # [800,5]
print(torch.mean(samples,dim=0)) # transform(sigma), weights 0:2, bias
# + colab={"base_uri": "https://localhost:8080/"} id="2xs8IG76d7T1" outputId="63323a3c-4f6a-4500-b658-dcd238fbe7b4"
weights = np.reshape(samples[:,1:4].detach().cpu().numpy(), (-1, 1, 3))
bias = samples[:,4].detach().cpu().numpy()
full_samples_params = {'linear.weight': weights, 'linear.bias': bias}
print(full_samples_params['linear.weight'].shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 952} id="RbOLfzFyd9kb" outputId="7d96d758-62ee-40d0-d26a-682ef645099f"
plot_param_post([hmc_samples_params, full_samples_params, ], ['HMC', 'Full'])
plt.savefig('linreg_africa_post_marginals_hmc_full.pdf', dpi=300)
# + colab={"base_uri": "https://localhost:8080/", "height": 585} id="POE4Zk6PeM9A" outputId="75f77af2-9b66-4824-9809-01a2fcf7cfd4"
plot_param_post2d([hmc_samples_params, full_samples_params], ['HMC', 'Full'])
plt.savefig('linreg_africa_post_2d_hmc_full.pdf', dpi=300)
# + [markdown] id="JPQ-unGcecSL"
# ## Predictive posterior
# + colab={"base_uri": "https://localhost:8080/", "height": 403} id="SxHj3gwpeSg1" outputId="bd8f9e73-81f1-4bc8-c539-8ad80bc146ee"
predictive = Predictive(model, guide=guide, num_samples=800, return_sites=("obs", "_RETURN"))
full_samples_pred = predictive(x_data)
plot_pred(full_samples_pred)
# + id="BVk4jyzNejjY"
| notebooks/linreg_bayes_svi_hmc_pyro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# invoking 1.py for randomly generating 1000 numbers between 0 and 100
# %run 1.py
# resulting in a 1.csv file
# invoking 2.py for generating 1000 numbers based on the function y=3x+6 from 1.csv,
# %run 2.py
# resulting in a 2.csv file
# invoking 3.py for visualizing the result which combines 1.csv with 2.csv
# %run 3.py
# saving the figure
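The pipeline above can be sketched in one self-contained cell (a simplification: the real scripts read and write CSV files between stages):

```python
import random

# the 0-100 range and the y = 3x + 6 rule come from the comments above
random.seed(0)
xs = [random.uniform(0, 100) for _ in range(1000)]   # stage 1: 1.py
ys = [3 * x + 6 for x in xs]                         # stage 2: 2.py
pairs = list(zip(xs, ys))                            # stage 3 would plot these
```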
| .ipynb_checkpoints/Three Scripts-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Importing required Libraries
import csv
from bs4 import BeautifulSoup
from datetime import datetime
import requests
# Setting the URL template
template= 'https://in.indeed.com/jobs?q={}&l={}'
def get_url(position,location):
    '''Generate a URL for a given position and location'''
template= 'https://in.indeed.com/jobs?q={}&l={}'
position = position.replace(' ', '+')
location = location.replace(' ', '%2C+')
url=template.format(position,location)
return url
url=get_url('python developer','Bengaluru Karnataka')
print(url)
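The manual `replace` calls emulate standard URL encoding; the stdlib `urllib.parse` does the same thing more robustly (a sketch, using a comma in the location to show the `%2C` escaping):

```python
from urllib.parse import quote_plus

template = 'https://in.indeed.com/jobs?q={}&l={}'
url = template.format(quote_plus('python developer'),
                      quote_plus('Bengaluru, Karnataka'))
# 'https://in.indeed.com/jobs?q=python+developer&l=Bengaluru%2C+Karnataka'
```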
# #### Extract Raw HTML
# Note that I have used another link, url1, because the first one was causing some trouble: it was giving some missing links
url1='https://in.indeed.com/jobs?q=python+developer&l=Bengaluru%2C+Karnataka'
response=requests.get(url1)
# if <Response [200]> is the output of this cell, it's working fine
response
soup=BeautifulSoup(response.text,'html.parser')
cards = soup.find_all('div', 'jobsearch-SerpJobCard')
len(cards)
# #### Prototyping model with single record
card=cards[0]
job_title = card.h2.a.get('title')
company = card.find('span', 'company').text.strip()
job_location = card.find('div', 'recJobLoc').get('data-rc-loc')
post_date = card.find('span', 'date').text
today = datetime.today().strftime('%Y-%m-%d')
summary = card.find('div', 'summary').text.strip().replace('\n', ' ')
# this does not exist for all jobs, so handle the exception
salary_tag = card.find('span', 'salaryText')
if salary_tag:
salary = salary_tag.text.strip()
else:
salary = ''
job_url = 'https://www.indeed.com' + card.h2.a.get('href')
record = (job_title, company, job_location, post_date, today, summary, salary, job_url)
record
# #### Generalize the model with a function
#
def get_record(card):
"""Extract job data from a single record"""
job_title = card.h2.a.get('title')
company = card.find('span', 'company').text.strip()
job_location = card.find('div', 'recJobLoc').get('data-rc-loc')
post_date = card.find('span', 'date').text
today = datetime.today().strftime('%Y-%m-%d')
summary = card.find('div', 'summary').text.strip().replace('\n', ' ')
job_url = 'https://www.indeed.com' + card.h2.a.get('href')
    # this does not exist for all jobs, so handle the exception
salary_tag = card.find('span', 'salaryText')
if salary_tag:
salary = salary_tag.text.strip()
else:
salary = ''
record = (job_title, company, job_location, post_date, today, summary, salary, job_url)
return record
# +
records = []
for card in cards:
record = get_record(card)
records.append(record)
# -
# #### Get the next page
while True:
try:
url = 'https://www.indeed.com' + soup.find('a', {'aria-label': 'Next'}).get('href')
except AttributeError:
break
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
cards = soup.find_all('div', 'jobsearch-SerpJobCard')
for card in cards:
record = get_record(card)
records.append(record)
# #### Put it all together
# +
import csv
from datetime import datetime
import requests
from bs4 import BeautifulSoup
def get_url(position, location):
    """Generate url from position and location"""
    template = 'https://in.indeed.com/jobs?q={}&l={}'
    position = position.replace(' ', '+')
    location = location.replace(' ', '%2C+')
    url = template.format(position, location)
    return url
def get_record(card):
"""Extract job data from a single record"""
job_title = card.h2.a.get('title')
company = card.find('span', 'company').text.strip()
job_location = card.find('div', 'recJobLoc').get('data-rc-loc')
post_date = card.find('span', 'date').text
today = datetime.today().strftime('%Y-%m-%d')
summary = card.find('div', 'summary').text.strip().replace('\n', ' ')
job_url = 'https://www.indeed.com' + card.h2.a.get('href')
    # this does not exist for all jobs, so handle the exception
salary_tag = card.find('span', 'salaryText')
if salary_tag:
salary = salary_tag.text.strip()
else:
salary = ''
record = (job_title, company, job_location, post_date, today, summary, salary, job_url)
return record
def main(position, location):
"""Run the main program routine"""
records = []
url = get_url(position, location)
# extract the job data
while True:
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
cards = soup.find_all('div', 'jobsearch-SerpJobCard')
for card in cards:
record = get_record(card)
records.append(record)
try:
url = 'https://www.indeed.com' + soup.find('a', {'aria-label': 'Next'}).get('href')
except AttributeError:
break
# save the job data
with open('results.csv', 'w', newline='', encoding='utf-8') as f:
writer = csv.writer(f)
writer.writerow(['JobTitle', 'Company', 'Location', 'PostDate', 'ExtractDate', 'Summary', 'Salary', 'JobUrl'])
writer.writerows(records)
# -
# run the main program
main('python developer', 'bengaluru karnataka')
| Indeed Web Scraping.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import json # will be needed for saving preprocessing details
import numpy as np # for data manipulation
import pandas as pd # for data manipulation
from sklearn.model_selection import train_test_split # will be used for data split
import requests
# load dataset
df = pd.read_csv('https://raw.githubusercontent.com/santosh006/MLdata/master/adult/data.csv', skipinitialspace=True)
x_cols = [c for c in df.columns if c != 'income']
# set input matrix and target column
X = df[x_cols]
y = df['income']
# show first rows of data
df.head()
# data split train / test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=1234)
for i in range(100):
input_data = dict(X_test.iloc[i])
target = y_test.iloc[i]
r = requests.post("http://127.0.0.1:8000/api/v1/income_classifier/predict?status=ab_testing", input_data)
response = r.json()
# provide feedback
requests.put("http://127.0.0.1:8000/api/v1/mlrequests/{}".format(response["request_id"]), {"feedback": target})
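Once feedback has been recorded, deciding the A/B test comes down to comparing per-algorithm accuracy; a minimal sketch with made-up records (the tuple layout is an assumption, not the service's actual schema):

```python
from collections import defaultdict

# hypothetical collected records: (algorithm, predicted_label, true_label);
# in the real setup these would come back from the ML service's database
records = [
    ('A', '<=50K', '<=50K'), ('A', '>50K', '<=50K'),
    ('B', '<=50K', '<=50K'), ('B', '>50K', '>50K'),
]

hits, totals = defaultdict(int), defaultdict(int)
for algo, pred, truth in records:
    totals[algo] += 1
    hits[algo] += int(pred == truth)

accuracy = {a: hits[a] / totals[a] for a in sorted(totals)}
# {'A': 0.5, 'B': 1.0}
```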
| research/ab_test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="Ez89NXMurP9v" colab_type="code" colab={}
# !pip install --upgrade tables
# !pip install eli5
# !pip install xgboost
# + id="kfJ3HXv4rl2E" colab_type="code" colab={}
import pandas as pd
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
import xgboost as xgb
from sklearn.metrics import mean_absolute_error as mae
from sklearn.model_selection import cross_val_score, KFold
import eli5
from eli5.sklearn import PermutationImportance
# + id="G5EXd1lZr3it" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="588d6e77-6597-438f-aa78-2986c0f217fb" executionInfo={"status": "ok", "timestamp": 1583428380768, "user_tz": -60, "elapsed": 978, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00941150960329998400"}}
# cd "/content/drive/My Drive/Colab Notebooks/matrix_two/dw_matrix_cars"
# + [markdown] id="nkbytCvjxlUQ" colab_type="text"
# ## Wczytywanie danych
# + id="j-WehAA6r7LV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="77205d27-d3d7-4dfb-e6a1-f16b158a4913" executionInfo={"status": "ok", "timestamp": 1583428402685, "user_tz": -60, "elapsed": 6248, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00941150960329998400"}}
df = pd.read_hdf('data/car.h5')
df.shape
# + [markdown] id="Ww0kaxsZsJ3W" colab_type="text"
# ## Features Engineering
# + id="X02vew2P3KOr" colab_type="code" colab={}
SUFFIX_CAT = '__cat'
for feat in df.columns:
if isinstance(df[feat][0], list): continue
factorized_values = df[feat].factorize()[0]
if SUFFIX_CAT in feat:
df[feat] = factorized_values
else:
df[feat + SUFFIX_CAT ] = factorized_values
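# A minimal illustration of what the loop above relies on (a sketch, not part
# of the original notebook): `factorize()` maps each distinct value of a
# column to an integer code, with `None`/`NaN` becoming -1.

```python
import pandas as pd

# Toy column standing in for a categorical feature like fuel type
s = pd.Series(['diesel', 'benzyna', 'diesel', None])
codes, uniques = s.factorize()
print(list(codes))    # -> [0, 1, 0, -1]
print(list(uniques))  # -> ['diesel', 'benzyna']
```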
# + id="yKmmRs7u4Yfz" colab_type="code" outputId="c1acaea4-ce09-4708-8d8a-b82666a0fa3e" executionInfo={"status": "ok", "timestamp": 1583428617104, "user_tz": -60, "elapsed": 1985, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00941150960329998400"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
cat_feats = [x for x in df.columns if SUFFIX_CAT in x]
cat_feats = [x for x in cat_feats if 'price' not in x]
len(cat_feats)
# + id="cQ-CGtqR5k8t" colab_type="code" colab={}
def run_model(model, feats):
X = df[feats].values
y = df['price_value'].values
scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error' )
return np.mean(scores), np.std(scores)
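# Note the sign convention in `run_model`: with scoring='neg_mean_absolute_error',
# cross_val_score returns the *negated* MAE, so all scores are <= 0 and a score
# closer to zero is better. A small self-contained sketch (toy data, not the
# car dataset) showing how to flip it back to a plain MAE:

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.model_selection import cross_val_score

# Toy data just to demonstrate the sign convention
X = np.arange(20).reshape(-1, 1)
y = np.arange(20, dtype=float)
scores = cross_val_score(DummyRegressor(strategy='mean'), X, y,
                         cv=3, scoring='neg_mean_absolute_error')
mae = -np.mean(scores)  # negate to report an ordinary (positive) MAE
print(scores, mae)
```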
# + [markdown] id="_pOtJ-nwup8P" colab_type="text"
# ## Decision Tree
# + id="nfOqNk9zuf9Y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="597f9391-5807-43fa-f112-d52224972bd5" executionInfo={"status": "ok", "timestamp": 1583429178855, "user_tz": -60, "elapsed": 4167, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00941150960329998400"}}
run_model(DecisionTreeRegressor(max_depth=5), cat_feats)
# + [markdown] id="kdBfbB1put-Y" colab_type="text"
# ## Random Forest
# + id="rh1wGS3iuwB4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e9556327-db0f-4bb5-d63f-69cd5e0da59a" executionInfo={"status": "ok", "timestamp": 1583429290289, "user_tz": -60, "elapsed": 82911, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00941150960329998400"}}
model = RandomForestRegressor(max_depth=5, n_estimators=50, random_state=0)
run_model(model, cat_feats)
# + [markdown] id="mZggDYpPvGBU" colab_type="text"
# ## XGBoost
# + id="s2UKZRG3vISC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 84} outputId="e9f25517-c9d0-4c67-ec56-e81cc34c9381" executionInfo={"status": "ok", "timestamp": 1583429438767, "user_tz": -60, "elapsed": 59140, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00941150960329998400"}}
xgb_params = {
'max_depth' : 5,
'n_estimators': 50,
    'learning_rate': 0.1,
'seed' : 0
}
run_model(xgb.XGBRegressor(**xgb_params), cat_feats)
# + id="kXEInu5Iv-8y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 403} outputId="16999f19-2b95-4aaf-e652-5795cac855a1" executionInfo={"status": "ok", "timestamp": 1583433238591, "user_tz": -60, "elapsed": 368511, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00941150960329998400"}}
# X and y must be built here; they were local to run_model above
X = df[cat_feats].values
y = df['price_value'].values
m = xgb.XGBRegressor(**xgb_params)
m.fit(X, y)
imp = PermutationImportance(m, random_state=0).fit(X, y)
eli5.show_weights(imp, feature_names = cat_feats)
# + id="LTg0mRqhCD87" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="4b94fe64-c16b-4fe3-9d86-2c7105ab90c1" executionInfo={"status": "ok", "timestamp": 1583436430744, "user_tz": -60, "elapsed": 972, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00941150960329998400"}}
feats = ['param_napęd__cat',
'param_rok-produkcji__cat',
'param_stan__cat',
'param_skrzynia-biegów__cat',
'param_faktura-vat__cat',
'param_moc__cat',
'param_marka-pojazdu__cat',
'feature_kamera-cofania__cat',
'param_typ__cat',
'param_pojemność-skokowa__cat',
'seller_name__cat',
'feature_wspomaganie-kierownicy__cat',
'param_model-pojazdu__cat',
'param_wersja__cat',
'param_kod-silnika__cat',
'feature_system-start-stop__cat',
'feature_asystent-pasa-ruchu__cat',
'feature_czujniki-parkowania-przednie__cat',
'feature_łopatki-zmiany-biegów__cat',
'feature_regulowane-zawieszenie__cat']
len(feats)
# + id="50mZxwKaKoJa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 84} outputId="71c104c3-37fe-4de0-e811-00e7b74f4ab2" executionInfo={"status": "ok", "timestamp": 1583436467807, "user_tz": -60, "elapsed": 13843, "user": {"displayName": "Asasas S", "photoUrl": "", "userId": "00941150960329998400"}}
run_model(xgb.XGBRegressor(**xgb_params), feats)
# + id="iDXctMz19L-D" colab_type="code" colab={}
# We re-parse 'param_rok-produkcji': factorization needlessly turned the production year into category codes
df['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else int(x))
feats = ['param_napęd__cat','param_rok-produkcji','param_stan__cat','param_skrzynia-biegów__cat','param_faktura-vat__cat','param_moc__cat','param_marka-pojazdu__cat','feature_kamera-cofania__cat','param_typ__cat','param_pojemność-skokowa__cat','seller_name__cat','feature_wspomaganie-kierownicy__cat','param_model-pojazdu__cat','param_wersja__cat','param_kod-silnika__cat','feature_system-start-stop__cat','feature_asystent-pasa-ruchu__cat','feature_czujniki-parkowania-przednie__cat','feature_łopatki-zmiany-biegów__cat','feature_regulowane-zawieszenie__cat']
run_model(xgb.XGBRegressor(**xgb_params), feats)
# + id="yeibeVQfMGwx" colab_type="code" colab={}
# We re-parse 'param_moc': factorization needlessly turned the engine power into category codes
df['param_moc'] = df['param_moc'].map(lambda x: -1 if str(x) == 'None' else int(x.split(' ')[0]) )
# + id="wprHW1obNGfw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 84} outputId="a1a79d52-b15e-4a21-dcbc-c587773b5540" executionInfo={"status": "ok", "timestamp": 1583437133482, "user_tz": -60, "elapsed": 13230, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00941150960329998400"}}
feats = ['param_napęd__cat','param_rok-produkcji','param_stan__cat','param_skrzynia-biegów__cat','param_faktura-vat__cat','param_moc','param_marka-pojazdu__cat','feature_kamera-cofania__cat','param_typ__cat','param_pojemność-skokowa__cat','seller_name__cat','feature_wspomaganie-kierownicy__cat','param_model-pojazdu__cat','param_wersja__cat','param_kod-silnika__cat','feature_system-start-stop__cat','feature_asystent-pasa-ruchu__cat','feature_czujniki-parkowania-przednie__cat','feature_łopatki-zmiany-biegów__cat','feature_regulowane-zawieszenie__cat']
run_model(xgb.XGBRegressor(**xgb_params), feats)
# + id="SRxtLQWuNniL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="d3d5724c-a113-400e-d924-4e970e5ff1fe" executionInfo={"status": "ok", "timestamp": 1583437226883, "user_tz": -60, "elapsed": 990, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00941150960329998400"}}
# We re-parse 'param_pojemność-skokowa': factorization needlessly turned the engine displacement into category codes
df['param_pojemność-skokowa'].unique()
# + id="dN3dzcrUNsAB" colab_type="code" colab={}
df['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x: -1 if str(x) == 'None' else int(x.split('cm')[0].replace(' ','')) )
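# The three `map(...)` calls above repeat one pattern: handle the string
# 'None', strip a unit, drop spaces, cast to int. A hedged refactoring
# sketch; `parse_numeric` is a hypothetical helper, not part of the
# original notebook:

```python
def parse_numeric(value, unit=None):
    # Missing values arrive as the string 'None' after the earlier cleaning
    if str(value) == 'None':
        return -1
    text = str(value)
    if unit is not None:
        text = text.split(unit)[0]  # drop the unit suffix, e.g. 'KM' or 'cm3'
    return int(text.replace(' ', ''))  # '1 998' -> 1998

print(parse_numeric('140 KM', unit=' '))      # -> 140
print(parse_numeric('1 998 cm3', unit='cm'))  # -> 1998
```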
# + id="MbsKvU43OOYl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 84} outputId="21b1d122-0e7c-4681-c391-ea1dfd8d0a9e" executionInfo={"status": "ok", "timestamp": 1583437415323, "user_tz": -60, "elapsed": 13148, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00941150960329998400"}}
feats = ['param_napęd__cat','param_rok-produkcji','param_stan__cat','param_skrzynia-biegów__cat','param_faktura-vat__cat','param_moc','param_marka-pojazdu__cat','feature_kamera-cofania__cat','param_typ__cat','param_pojemność-skokowa','seller_name__cat','feature_wspomaganie-kierownicy__cat','param_model-pojazdu__cat','param_wersja__cat','param_kod-silnika__cat','feature_system-start-stop__cat','feature_asystent-pasa-ruchu__cat','feature_czujniki-parkowania-przednie__cat','feature_łopatki-zmiany-biegów__cat','feature_regulowane-zawieszenie__cat']
run_model(xgb.XGBRegressor(**xgb_params), feats)
| day4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ## 1. Overview of the Dataframe
import pandas
from datetime import datetime
from pytz import utc
data = pandas.read_csv("reviews.csv", parse_dates= ["Timestamp"])
data.head()
data.shape
data.columns
data.hist("Rating")
# ## 2. Selecting data from the dataframe
# ### Selecting a column
data["Rating"]
# ### Selecting multiple columns
data[["Course Name", "Rating"]]
# ### Selecting a Row
#
data.iloc[3]
# ### Selecting multiple rows
data.iloc[1:3]
# ### Selecting a Cross Section
#
data[["Course Name", "Rating"]].iloc[1:3]
# ### Selecting a particular cell
data["Timestamp"].iloc[2]
# ## 3. Filtering Data Based On Conditions
# ### One Condition
data[data["Rating"] > 4]
len(data[data["Rating"] > 4])
data[data["Rating"] > 4].count()
ratingFiltered = data[data["Rating"] > 4]
ratingFiltered["Rating"]
ratingFiltered["Rating"].mean()
# ### Multiple conditions
data[( data["Rating"] > 4 ) & (data["Course Name"] == "Python for Beginners with Examples")]
dualCondition = data[( data["Rating"] > 4 ) & (data["Course Name"] == "Python for Beginners with Examples")]
dualCondition["Rating"].mean()
# ## 4. Time Based Filtering
data[ (data["Timestamp"] >= datetime(2020,7,1, tzinfo =utc)) & (data["Timestamp"] <= datetime(2020,12,31, tzinfo = utc)) ]
# +
# You need to parse the dataframe's Timestamp column as dates and build the
# boundary datetimes with the same timezone as those parsed Timestamps
# -
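# A small self-contained sketch of the note above (`between` is a hypothetical
# helper, and the demo frame is made up): both the parsed Timestamp column and
# the boundary datetimes must carry the same timezone, otherwise pandas raises
# a comparison error between tz-aware and tz-naive values.

```python
from datetime import datetime, timezone
import pandas as pd

def between(df, start, end, col='Timestamp'):
    # Inclusive range filter on a tz-aware datetime column
    return df[(df[col] >= start) & (df[col] <= end)]

demo = pd.DataFrame({'Timestamp': pd.to_datetime(
    ['2020-06-15', '2020-08-01', '2021-01-05'], utc=True)})
subset = between(demo,
                 datetime(2020, 7, 1, tzinfo=timezone.utc),
                 datetime(2020, 12, 31, tzinfo=timezone.utc))
print(len(subset))  # -> 1
```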
# ## 5. From data to information
# ### Average of Rating of All Courses
data["Rating"].mean()
# ### Average Rating for a particular course
data[(data["Course Name"] == "Python for Beginners with Examples")]["Rating"].mean()
# ### Average Rating for a particular period
data[ (data["Timestamp"] >= datetime(2020,7,1, tzinfo =utc)) & (data["Timestamp"] <= datetime(2020,12,31, tzinfo = utc)) ]["Rating"].mean()
# ### Average Rating for a particular course and period
df1 = data[ (data["Timestamp"] >= datetime(2020,7,1, tzinfo =utc)) & (data["Timestamp"] <= datetime(2020,12,31, tzinfo = utc))]
df1[df1["Course Name"] == "Python for Beginners with Examples"]["Rating"].mean()
# ### Average of Uncommented Ratings
#
data[data["Comment"].isnull()]["Rating"].mean()
# ### Average of Commented Ratings
#
data[data["Comment"].notnull()]["Rating"].mean()
# ### Number of Uncommented Ratings
#
data[data["Comment"].isnull()]["Rating"].count()
# ### Number of Commented Ratings
#
data[data["Comment"].notnull()]["Rating"].count()
# ### Number of Comments Containing a Certain Word
#
data[(data["Comment"].str.contains("accent", na = False))]["Comment"].count()
data[(data["Comment"].str.contains("accent", na = False))]["Rating"].mean()
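# One caveat about the cells above: `str.contains` is case-sensitive by
# default, so a comment starting with "Accent" would be missed. Passing
# `case=False` broadens the match, while `na=False` keeps missing comments
# out, as before. A sketch on a made-up series:

```python
import pandas as pd

comments = pd.Series(['Accent is hard to follow', 'great course', None])
mask = comments.str.contains('accent', case=False, na=False)
print(int(mask.sum()))  # -> 1
```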
| 04. Data Analysis/review_analysis/reviews.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CENTRAL LIMIT THEOREM
#
# #### https://github.com/SelcukDE
# ## Sample Mean for a Uniform Distribution
import random
import math
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
random.seed(54312)
# +
sample_size = 100
sim_num = 10000
# +
mean_list = []
for i in range(sim_num):
sample_list = []
for i in range(sample_size):
sample_list.append(random.randint(0, 100))
sample_mean = sum(sample_list)/sample_size
mean_list.append(sample_mean)
# -
plt.hist(mean_list, bins=100, density=True, color='r')
plt.grid()
mu = 50
sigma = math.sqrt(((100 ** 2) / 12)) / (math.sqrt(sample_size))
x = np.linspace(mu-4*sigma, mu + 4*sigma, 100)
plt.plot(x, stats.norm.pdf(x, mu, sigma))
plt.show()
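# A quick numeric sanity check of the sigma used above (a sketch, standard
# library only): for a uniform draw on [0, 100] the population variance is
# (100**2)/12, so the standard deviation of the mean of n = 100 draws should
# be close to sqrt((100**2)/12)/sqrt(100), about 2.89.

```python
import math
import random
import statistics

random.seed(0)
n, sims = 100, 2000
means = [statistics.mean(random.randint(0, 100) for _ in range(n))
         for _ in range(sims)]
theory = math.sqrt((100 ** 2) / 12) / math.sqrt(n)
print(round(statistics.stdev(means), 2), round(theory, 2))
```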
# ## Sample Mean for an Exponential Distribution
# +
sample_size = 40
sim_num = 10000
# +
mean_list = []
for i in range(sim_num):
sample_list = []
for i in range(sample_size):
sample_list.append(np.random.exponential(1))
sample_mean = sum(sample_list)/sample_size
mean_list.append(sample_mean)
# -
plt.hist(mean_list, bins=100, density=True, color='r')
plt.grid()
mu = 1
sigma = 1/(math.sqrt(sample_size))
x = np.linspace(mu-4*sigma, mu + 4*sigma, 100)
plt.plot(x, stats.norm.pdf(x, mu, sigma))
plt.show()
# # CONFIDENCE INTERVAL
# +
random.seed(39809)  # seed before sampling so the draws are reproducible
sample_list = []
for i in range(30):
    sample_list.append(random.randint(0, 10))
# +
sample_mean = np.mean(sample_list)
sample_mean
# +
n = len(sample_list)
cl = 0.95
std = 1
# -
(1-cl)/2 + cl
critic_value = stats.norm.ppf(((1-cl)/2) + cl)
critic_value
lower_limit = sample_mean - (critic_value * (std/math.sqrt(n)))
lower_limit
upper_limit = sample_mean + (critic_value * (std/math.sqrt(n)))
upper_limit
print(f'Your {cl} z confidence interval is ({lower_limit:.2f}, {upper_limit:.2f})')
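# The steps above can be wrapped in a reusable helper (a sketch; `z_interval`
# is a hypothetical function, and `NormalDist().inv_cdf` is the standard
# library's equivalent of `stats.norm.ppf`):

```python
import math
from statistics import NormalDist

def z_interval(sample_mean, std, n, cl=0.95):
    # Critical value for a two-sided interval at confidence level cl
    z = NormalDist().inv_cdf((1 - cl) / 2 + cl)
    margin = z * std / math.sqrt(n)
    return sample_mean - margin, sample_mean + margin

lo, hi = z_interval(5.0, std=1, n=30, cl=0.95)
print(round(lo, 2), round(hi, 2))  # -> 4.64 5.36
```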
# ## Exercise from Slides
sample_list = [2, 3, 5, 6, 9]
# +
sample_mean = np.mean(sample_list)
sample_mean
# -
std = 2.5
n = len(sample_list)
cl = 0.90
critic_value = stats.norm.ppf(((1-cl)/2) + cl)
critic_value
lower_limit = sample_mean - (critic_value * (std/math.sqrt(n)))
lower_limit
upper_limit = sample_mean + (critic_value * (std/math.sqrt(n)))
upper_limit
print(f'Your {cl} z confidence interval is ({lower_limit:.2f}, {upper_limit:.2f})')
stats.norm.interval(cl, loc=sample_mean, scale=std/math.sqrt(n))
# ## Exercise
# +
import pandas as pd
df = pd.read_csv("samples.csv")
# -
sample_mean = df['Demand'].mean()
sample_mean
# +
std = 75
n = len(df['Demand'])
cl = 0.95
# -
critic_value = stats.norm.ppf(((1-cl)/2) + cl)
critic_value
lower_limit = sample_mean - (critic_value * (std/math.sqrt(n)))
lower_limit
upper_limit = sample_mean + (critic_value * (std/math.sqrt(n)))
upper_limit
print(f'Your {cl} z confidence interval is ({lower_limit:.2f}, {upper_limit:.2f})')
# Using Scipy
stats.norm.interval(cl, loc=sample_mean, scale=std/math.sqrt(n))
# +
sample_mean = 38
std = 6.5
n = 25
cl = 0.95
# -
stats.norm.interval(cl, loc=sample_mean, scale=std/math.sqrt(n))
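# A hedged companion to the cell above: when the population std is unknown and
# estimated from the sample, the t distribution (df = n - 1) gives a slightly
# wider interval than the z interval, noticeably so for small n. Illustrative
# sketch reusing the same numbers:

```python
import math
import scipy.stats as stats

sample_mean, sample_std, n, cl = 38, 6.5, 25, 0.95
t_lo, t_hi = stats.t.interval(cl, df=n - 1, loc=sample_mean,
                              scale=sample_std / math.sqrt(n))
z_lo, z_hi = stats.norm.interval(cl, loc=sample_mean,
                                 scale=sample_std / math.sqrt(n))
print((t_hi - t_lo) > (z_hi - z_lo))  # the t interval is wider
```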
| Statistics_S9_Central Limit Theorem & Conf.Int..ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from matplotlib import pyplot as plt
years = [1950, 1960, 1970, 1980, 1990, 2000, 2010]
gdp = [300.2, 543.3, 1075.9, 2862.5, 5979.6, 10289.7, 14958.3]
# create a line chart years on x-axis, gdp on y-axis
plt.plot(years, gdp, color ='green', marker='o',linestyle ='solid')
# add a title
plt.title("Nominal GDP")
# add a label to the y-axis
plt.ylabel("Billions of $")
plt.show()
| _A simple line Chart.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19"
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# -
# <font style='font-size: 20px;'>Here I'll predict the food truck profits according to a city's population.</font><br>
# <font style='font-size: 20px;'>Hypothesis - Profit will directly increase with increase in population.</font>
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('../input/programming-assignment-linear-regression/ex1data1.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
# # Linear Regression
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
y_lin_pred = regressor.predict(X_test)
print(regressor.predict([[5.0]]))
from sklearn.metrics import r2_score
r2_score(y_test, y_lin_pred)
print(regressor.coef_)
print(regressor.intercept_)
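# The fitted line is just y_hat = coef * x + intercept. A dependency-free
# sketch using illustrative stand-in values (roughly matching the notes
# below, not the exact fitted coefficients, which depend on the split):

```python
coef, intercept = 1.17, -3.78  # illustrative values, not the fitted ones

def predict_profit(population):
    # Same arithmetic LinearRegression.predict performs for one feature
    return coef * population + intercept

print(round(predict_profit(5.0), 2))  # -> 2.07
```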
plt.scatter(X_train, y_train, color = 'red')
plt.plot(X_train, regressor.predict(X_train), color = 'blue')
plt.title('Consumption of eateries outside')
plt.xlabel('City Population')
plt.ylabel('Food Truck Profit')
plt.show()
# <font style = 'font-size: 20px;'>-> Hypothesis is assumed correct.</font><br>
# <font style = 'font-size: 20px;'>-> As the population increases, consumption of street-side food increases.</font><br>
# <font style = 'font-size: 20px;'>-> For every 2.5 units increase in population, the profit increases by 1.170.</font><br>
# <font style = 'font-size: 20px;'>-> If the population is 0, then the food trucks have to bear a loss of 3.777 units because SP = 0 units.</font><br>
# <font style = 'font-size: 20px;'>-> But for a considerable range of population (up to 12.5), the profit lingered around 0-5 units. Up to 12.5 the profit did not increase as predicted by the regression line, hence the accuracy of ~53%. So we can say that population alone does not directly determine the profit, and more variables are needed to effectively predict food truck sales.</font>
# # Polynomial Regression
from sklearn.preprocessing import PolynomialFeatures
poly_reg = PolynomialFeatures(degree = 4)
X_poly = poly_reg.fit_transform(X_train)
poly_lin_reg = LinearRegression()
poly_lin_reg.fit(X_poly, y_train)
y_poly_pred = poly_lin_reg.predict(poly_reg.fit_transform(X_test))
from sklearn.metrics import r2_score
r2_score(y_test, y_poly_pred)
# # Decision Tree Regression
from sklearn.tree import DecisionTreeRegressor
decision_reg = DecisionTreeRegressor(random_state = 0)
decision_reg.fit(X_train, y_train)
y_decision_pred = decision_reg.predict(X_test)
from sklearn.metrics import r2_score
r2_score(y_test, y_decision_pred)
# # Random Forest Regressor
from sklearn.ensemble import RandomForestRegressor
random_reg = RandomForestRegressor(n_estimators = 10, random_state = 0)
random_reg.fit(X_train, y_train)
y_random_pred = random_reg.predict(X_test)
from sklearn.metrics import r2_score
r2_score(y_test, y_random_pred)
| Simple Linear Regression/Population Effect on Food Truck Sales/population-effect-on-food-truck-sales.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="8fDZ6RaXmGBK"
# Lambda School Data Science
#
# *Unit 2, Sprint 3, Module 3*
#
# ---
# + [markdown] colab_type="text" id="-hTictxWYih7"
# # Applied Modeling, Module 3
#
# - Visualize and interpret **partial dependence plots**
# + [markdown] colab_type="text" id="LoxNYFBXYih9"
# ### Links
# - [Kaggle / <NAME>: Machine Learning Explainability — Partial Dependence Plots](https://www.kaggle.com/dansbecker/partial-plots)
# - [<NAME>: Interpretable Machine Learning — Partial Dependence Plots](https://christophm.github.io/interpretable-ml-book/pdp.html) + [animated explanation](https://twitter.com/ChristophMolnar/status/1066398522608635904)
# + [markdown] colab_type="text" id="mDthquUBYiiB"
#
#
# ### Three types of model explanations this unit:
#
# #### 1. Global model explanation: all features in relation to each other _(Yesterday)_
# - Feature Importances: _Default, fastest, good for first estimates_
# - Drop-Column Importances: _The best in theory, but much too slow in practice_
# - Permutation Importances: _A good compromise!_
#
# #### 2. Global model explanation: individual feature(s) in relation to target _(Today)_
# - Partial Dependence plots
#
# #### 3. Individual prediction explanation _(Tomorrow)_
# - Shapley Values
#
# _Note that the coefficients from a linear model give you all three types of explanations!_
# + [markdown] colab_type="text" id="MMt1ghi8_1X4"
# ### Setup
#
# Run the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.
#
# Libraries:
#
# - category_encoders
# - matplotlib
# - numpy
# - pandas
# - [**pdpbox**](https://github.com/SauceCat/PDPbox) (`conda install -c conda-forge pdpbox`)
# - plotly
# - seaborn
# - scikit-learn
# - xgboost
# + colab={} colab_type="code" id="dPcY3kPZ_2em"
# %%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# !pip install category_encoders==2.*
# !pip install pdpbox
# If you're working locally:
else:
DATA_PATH = '../data/'
# + colab={} colab_type="code" id="WTdq6HMGolY8"
# Ignore this warning: https://github.com/dmlc/xgboost/issues/4300
# xgboost/core.py:587: FutureWarning: Series.base is deprecated and will be removed in a future version
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='xgboost')
# + [markdown] colab_type="text" id="acFiA1u7un9B"
# ## Lending Club: Predict interest rate
# + colab={} colab_type="code" id="ItMkFUNABo9Y"
import pandas as pd
# Stratified sample, 10% of expired Lending Club loans, grades A-D
# Source: https://www.lendingclub.com/info/download-data.action
history = pd.read_csv(DATA_PATH+'lending-club/lending-club-subset.csv')
history['issue_d'] = pd.to_datetime(history['issue_d'], infer_datetime_format=True)
# Just use 36 month loans
history = history[history.term==' 36 months']
# Index & sort by issue date
history = history.set_index('issue_d').sort_index()
# Clean data, engineer feature, & select subset of features
history = history.rename(columns=
{'annual_inc': 'Annual Income',
'fico_range_high': 'Credit Score',
'funded_amnt': 'Loan Amount',
'title': 'Loan Purpose'})
history['Interest Rate'] = history['int_rate'].str.strip('%').astype(float)
history['Monthly Debts'] = history['Annual Income'] / 12 * history['dti'] / 100
columns = ['Annual Income',
'Credit Score',
'Loan Amount',
'Loan Purpose',
'Monthly Debts',
'Interest Rate']
history = history[columns]
history = history.dropna()
# Test on the last 10,000 loans,
# Validate on the 10,000 before that,
# Train on the rest
test = history[-10000:]
val = history[-20000:-10000]
train = history[:-20000]
# + colab={} colab_type="code" id="c-JM8xXECmV0"
# Assign to X, y
target = 'Interest Rate'
features = history.columns.drop('Interest Rate')
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
y_test = test[target]
# + colab={"base_uri": "https://localhost:8080/", "height": 283} colab_type="code" id="rv9XP3-iMPDW" outputId="098c719c-4493-4ab5-9f20-e22b2e41e7dd"
# The target has some right skew, but it's not too bad
# %matplotlib inline
import seaborn as sns
sns.distplot(y_train);
# + [markdown] colab_type="text" id="5kaaMlMwqn4O"
# ### Fit Linear Regression model
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="7sym2GJ3Ndh2" outputId="34319f8a-9b2b-4694-fdd4-cd25300cc55d"
import category_encoders as ce
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
lr = make_pipeline(
ce.TargetEncoder(),
StandardScaler(),
LinearRegression()
)
lr.fit(X_train, y_train)
print('Linear Regression R^2', lr.score(X_val, y_val))
# + [markdown] colab_type="text" id="ex2xIb6Gq0LD"
# ### Fit Gradient Boosting model
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="IErTkZa3CWT4" outputId="496c143b-9aa5-406e-b3ec-f100deac1158"
from sklearn.metrics import r2_score
from xgboost import XGBRegressor
gb = make_pipeline(
ce.OrdinalEncoder(),
XGBRegressor(n_estimators=200, objective='reg:squarederror', n_jobs=-1)
)
gb.fit(X_train, y_train)
y_pred = gb.predict(X_val)
print('Gradient Boosting R^2', r2_score(y_val, y_pred))
# + [markdown] colab_type="text" id="F_FV6mxql0Qt"
# ### Explaining Linear Regression
# + colab={"base_uri": "https://localhost:8080/", "height": 106} colab_type="code" id="KYolPjVZkkFA" outputId="5632d3a8-e43c-4566-9e26-aded516fb7e1"
example = X_val.iloc[[0]]
example
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="cIHdK65GnNDJ" outputId="ef4e0382-ed76-4887-bb23-bc45915fdd64"
pred = lr.predict(example)[0]
print(f'Predicted Interest Rate: {pred:.2f}%')
# + colab={"base_uri": "https://localhost:8080/", "height": 857} colab_type="code" id="ldbtKx0SlsVW" outputId="9d525eb9-ebe6-4a82-d83c-614f772d76dd"
import numpy as np
def vary_income(model, example):
print('Vary income, hold other features constant', '\n')
example = example.copy()
preds = []
for income in range(20000, 200000, 20000):
example['Annual Income'] = income
pred = model.predict(example)[0]
print(f'Predicted Interest Rate: {pred:.3f}%')
print(example.to_string(), '\n')
preds.append(pred)
print('Difference between predictions')
print(np.diff(preds))
    # return x, y of the "derivative" of the target (the change between consecutive predictions)
return list(range(len(preds)-1)),np.diff(preds)
# +
import matplotlib.pyplot as plt
x,y = vary_income(lr, example)
plt.plot(x,y)
# -
# If you remove the scaler and then look at the coefficients, the
# Annual Income coefficient * 20000 equals the difference between predictions
lr.named_steps['linearregression'].coef_
# + colab={"base_uri": "https://localhost:8080/", "height": 857} colab_type="code" id="gTUldxpImI2h" outputId="e0545f78-7318-4874-9151-98466e2afd57"
example2 = X_val.iloc[[2]]
vary_income(lr, example2)
# -
# The thing is that we don't actually expect a linear relationship in real life: an income difference from 20k to 40k matters much more than one from 160k to 180k. The problem with linear models is that a variable's coefficient does not depend on the other variables or on the value of the variable itself. Linear regression is very simple; it assumes the same pattern for a given variable throughout the whole space you are trying to model. Sometimes that is a good enough prediction, when the variables are truly independent, or when the departure from linearity is small enough that a linear regression's guess suffices.
#
# The derivative is a horizontal line (the slope never changes)
# + [markdown] colab_type="text" id="ubzQ-47YtdSD"
# ### Explaining Gradient Boosting???
# + colab={"base_uri": "https://localhost:8080/", "height": 857} colab_type="code" id="V77CAqUytaD5" outputId="b2547d7e-8f2c-402d-bb47-50800e319753"
x,y = vary_income(gb, example)
plt.plot(x,y)
# + colab={"base_uri": "https://localhost:8080/", "height": 857} colab_type="code" id="L8Rb54SwtmQ8" outputId="ed36784f-0485-440f-837a-e4e6d0060779"
x,y = vary_income(gb, example2)
plt.plot(x,y)
# -
# + [markdown] colab_type="text" id="pIPNg2Wsm2ex"
# ## Partial Dependence Plots
# + [markdown] colab_type="text" id="5O6s9jisYijI"
# From [PDPbox documentation](https://pdpbox.readthedocs.io/en/latest/):
#
#
# >**The common headache**: When using black box machine learning algorithms like random forest and boosting, it is hard to understand the relations between predictors and model outcome. For example, in terms of random forest, all we get is the feature importance. Although we can know which feature is significantly influencing the outcome based on the importance calculation, it really sucks that we don’t know in which direction it is influencing. And in most of the real cases, the effect is non-monotonic. We need some powerful tools to help understanding the complex relations between predictors and model prediction.
# + [markdown] colab_type="text" id="zN2C8QTMYijI"
# [Animation by <NAME>](https://twitter.com/ChristophMolnar/status/1066398522608635904), author of [_Interpretable Machine Learning_](https://christophm.github.io/interpretable-ml-book/pdp.html#examples)
#
# > Partial dependence plots show how a feature affects predictions of a Machine Learning model on average.
# > 1. Define grid along feature
# > 2. Model predictions at grid points
# > 3. Line per data instance -> ICE (Individual Conditional Expectation) curve
# > 4. Average curves to get a PDP (Partial Dependence Plot)
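# The four steps above can be sketched without any library, using a toy
# stand-in for `model.predict` (everything below is illustrative, not the
# Lending Club model):

```python
def toy_model(x0, x1):            # stand-in for model.predict on two features
    return 0.5 * x0 + x1 * x1

data = [(0, 1), (0, 2), (0, 3)]   # three "rows"; x0 gets overridden below
grid = [0, 1, 2]                  # 1. define a grid along the feature
# 2 + 3. predictions at each grid point, one ICE curve per data instance
ice = [[toy_model(g, x1) for g in grid] for _, x1 in data]
# 4. average the ICE curves pointwise to get the PDP
pdp = [sum(curve[i] for curve in ice) / len(ice) for i in range(len(grid))]
print(pdp)
```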
# + colab={"base_uri": "https://localhost:8080/", "height": 295} colab_type="code" id="DAP9c1-9vQll" outputId="ea0651a5-22c5-4c8d-b233-f2ece6d355dd"
# %matplotlib inline
import matplotlib.pyplot as plt
examples = pd.concat([example, example2])
for income in range(20000, 200000, 20000):
examples['Annual Income'] = income
preds = gb.predict(examples)
for pred in preds:
plt.scatter(income, pred, color='grey')
plt.scatter(income, np.mean(preds), color='red')
plt.title('Partial Dependence')
plt.xlabel('Income')
plt.ylabel('Interest Rate')
# + [markdown] colab_type="text" id="QOUzbLKpYijB"
# ## Partial Dependence Plots with 1 feature
#
# #### PDPbox
# - [Gallery](https://github.com/SauceCat/PDPbox#gallery)
# - [API Reference: pdp_isolate](https://pdpbox.readthedocs.io/en/latest/pdp_isolate.html)
# - [API Reference: pdp_plot](https://pdpbox.readthedocs.io/en/latest/pdp_plot.html)
# + colab={} colab_type="code" id="2YPeL9n9ZBG3"
# Later, when you save matplotlib images to include in blog posts or web apps,
# increase the dots per inch (double it), so the text isn't so fuzzy
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 72
# + colab={"base_uri": "https://localhost:8080/", "height": 582} colab_type="code" id="cegKbw4B43lG" outputId="25364ab9-cd10-4352-b6f1-f7d474fc933a"
from pdpbox.pdp import pdp_isolate,pdp_plot
feature = 'Annual Income'
isolated = pdp_isolate(
model = gb,
dataset = X_val,
model_features = X_val.columns,
feature = feature,
)
pdp_plot(isolated,feature)
# -
# + [markdown] colab_type="text" id="T7bIBNtfU3cY"
# #### You can customize it
#
# PDPbox
# - [API Reference: PDPIsolate](https://pdpbox.readthedocs.io/en/latest/PDPIsolate.html)
# + colab={"base_uri": "https://localhost:8080/", "height": 404} colab_type="code" id="-aDvICqIZcwS" outputId="f656a89b-6f21-4d95-91ef-d3e9706e14c5"
grid = isolated.feature_grids
grid
# -
pdp = isolated.pdp
plt.figure(figsize=(10,6))
plt.plot(grid,pdp)
plt.xlim(1,150000)
# ICE = Individual Conditional Expectation curves
isolated.ice_lines
# + [markdown] colab_type="text" id="LOu_hUU6YijJ"
# ## Partial Dependence Plots with 2 features
#
# See interactions!
#
# PDPbox
# - [Gallery](https://github.com/SauceCat/PDPbox#gallery)
# - [API Reference: pdp_interact](https://pdpbox.readthedocs.io/en/latest/pdp_interact.html)
# - [API Reference: pdp_interact_plot](https://pdpbox.readthedocs.io/en/latest/pdp_interact_plot.html)
#
# Be aware of a bug in PDPBox version <= 0.20:
# - With the `pdp_interact_plot` function, `plot_type='contour'` gets an error, but `plot_type='grid'` works
# - This will be fixed in the next release of PDPbox: https://github.com/SauceCat/PDPbox/issues/40
# + colab={"base_uri": "https://localhost:8080/", "height": 585} colab_type="code" id="edL2X3QtYijJ" outputId="f2365abe-d1db-42b6-fa06-aac4c1f64c59"
from pdpbox.pdp import pdp_interact, pdp_interact_plot
features = ['Annual Income','Credit Score']
interaction = pdp_interact(
model=gb,
dataset= X_val,
model_features=X_val.columns,
features = features
)
pdp_interact_plot(interaction,plot_type='grid',feature_names=features)
# +
# Number of predictions required to make a PDP with 2 features,
# given the size of your dataset and the number of grid points:
len(X_val) * 10**2
# Every grid combination gets tested ON EVERY SINGLE DATA POINT
# in the validation set, so the cost grows exponentially with the
# number of features.
# -
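# As a rough sketch of this cost model (the helper name `pdp_num_predictions` is ours, not part of PDPbox): the prediction count is the dataset size times the product of grid points per feature, which is why adding features multiplies the cost.

```python
def pdp_num_predictions(n_rows, grid_points_per_feature):
    # every grid combination is evaluated on every row of the dataset
    total = n_rows
    for g in grid_points_per_feature:
        total *= g
    return total

print(pdp_num_predictions(1000, [10, 10]))      # 1000 rows, 2 features x 10 grid points
print(pdp_num_predictions(1000, [10, 10, 10]))  # a 3rd feature multiplies the cost by 10
```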
#note that this is in tidy data format
interaction.pdp
# +
import seaborn as sns  # needed here: seaborn is not imported until the Titanic section below

pdp = interaction.pdp.pivot_table(
    values='preds',
    columns=features[0],  # annual income
    index=features[1]     # credit score
)[::-1]  # slice notation to reverse index order so y axis is descending
plt.figure(figsize=(10,8))
sns.heatmap(pdp, annot=True, fmt='.2f')
# + [markdown] colab_type="text" id="h0zF-MIh47JK"
# ### 3D with Plotly!
# + colab={"base_uri": "https://localhost:8080/", "height": 542} colab_type="code" id="TVI9Y93Z0t4B" outputId="e8a3589c-ac03-4a38-fe73-e837a3240b8d"
import plotly.graph_objs as go
surface = go.Surface(x = pdp.columns,
y = pdp.index,
z = pdp.values)
fig = go.Figure(surface)
fig.show()
# -
# We can get rid of the higher income values that are skewing the plot by
# dropping the edge column(s) from the pdp dataframe (assign the result back,
# or the plot below will be unchanged).
pdp.columns
pdp = pdp.drop(columns=[751329.0, 1000.0])
# +
surface = go.Surface(x = pdp.columns,
y = pdp.index,
z = pdp.values)
fig = go.Figure(surface)
fig.show()
# + [markdown] colab_type="text" id="F7GKnVW01bBK"
# # Partial Dependence Plots with categorical features
#
# 1. I recommend you use Ordinal Encoder or Target Encoder, outside of a pipeline, to encode your data first. (If there is a natural ordering, then take the time to encode it that way, instead of random integers.) Then use the encoded data with pdpbox.
# 2. There's some extra work to get readable category names on your plot, instead of integer category codes.
# + colab={"base_uri": "https://localhost:8080/", "height": 134} colab_type="code" id="dKd-dI7Y1LSL" outputId="c197ba55-b705-404b-d678-dc8df2a096ff"
# Fit a model on Titanic data
import category_encoders as ce
import seaborn as sns
from sklearn.ensemble import RandomForestClassifier
df = sns.load_dataset('titanic')
df.age = df.age.fillna(df.age.median())
df = df.drop(columns='deck')
df = df.dropna()
target = 'survived'
features = df.columns.drop(['survived', 'alive'])
X = df[features]
y = df[target]
# Use Ordinal Encoder, outside of a pipeline
encoder = ce.OrdinalEncoder()
X_encoded = encoder.fit_transform(X)
model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
model.fit(X_encoded, y)
# -
# + colab={"base_uri": "https://localhost:8080/", "height": 582} colab_type="code" id="izClfuUV1lSt" outputId="ec9499a0-5624-4f57-a30d-2e595166e578"
# Use Pdpbox
# %matplotlib inline
import matplotlib.pyplot as plt
from pdpbox import pdp
feature = 'sex'
pdp_dist = pdp.pdp_isolate(model=model, dataset=X_encoded, model_features=features, feature=feature)
pdp.pdp_plot(pdp_dist, feature);
# + colab={"base_uri": "https://localhost:8080/", "height": 454} colab_type="code" id="wuQPHCti1opV" outputId="68819661-d66e-4a15-a12b-5ceb21793098"
# Look at the encoder's mappings
encoder.mapping
# + colab={"base_uri": "https://localhost:8080/", "height": 582} colab_type="code" id="x5UVHSuQ1toP" outputId="f0060aa8-760b-4bc3-e6b5-00fdcd841ad6"
pdp.pdp_plot(pdp_dist, feature)
# Manually change the xticks labels
plt.xticks([1, 2], ['male', 'female']);
# + colab={} colab_type="code" id="BbaVTJIa104W"
# Let's automate it
feature = 'sex'
for item in encoder.mapping:
if item['col'] == feature:
feature_mapping = item['mapping']
feature_mapping = feature_mapping[feature_mapping.index.dropna()]
category_names = feature_mapping.index.tolist()
category_codes = feature_mapping.values.tolist()
# + colab={"base_uri": "https://localhost:8080/", "height": 582} colab_type="code" id="gSSoKtbi14OP" outputId="b9d1dcf0-d7ed-4311-b3f3-ceabc8befa8e"
pdp.pdp_plot(pdp_dist, feature)
# Automatically change the xticks labels
plt.xticks(category_codes, category_names);
# + colab={"base_uri": "https://localhost:8080/", "height": 585} colab_type="code" id="I0R4gnZY199a" outputId="b6b88be0-6d19-447b-80bd-c49d0216355a"
features = ['sex', 'age']
interaction = pdp_interact(
model=model,
dataset=X_encoded,
model_features=X_encoded.columns,
features=features
)
pdp_interact_plot(interaction, plot_type='grid', feature_names=features);
# + colab={"base_uri": "https://localhost:8080/", "height": 513} colab_type="code" id="vAhHHHO52OJt" outputId="802abdce-446f-4423-f99c-5b0c8f91a9ce"
pdp = interaction.pdp.pivot_table(
values='preds',
columns=features[0], # First feature on x axis
index=features[1] # Next feature on y axis
)[::-1] # Reverse the index order so y axis is ascending
pdp = pdp.rename(columns=dict(zip(category_codes, category_names)))
plt.figure(figsize=(10,8))
sns.heatmap(pdp, annot=True, fmt='.2f', cmap='viridis')
plt.title('Partial Dependence of Titanic survival, on sex & age');
| module3/lesson_applied_modeling_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Example - Reproject Match (For Raster Calculations/Stacking)
#
# `rio.reproject_match` will reproject to match the resolution, projection, and region of another raster.
#
# This is useful for raster calculations and stacking rasters.
# +
import rioxarray # for the extension to load
import xarray
import matplotlib.pyplot as plt
# %matplotlib inline
# -
def print_raster(raster):
print(
f"shape: {raster.rio.shape}\n"
f"resolution: {raster.rio.resolution()}\n"
f"bounds: {raster.rio.bounds()}\n"
f"sum: {raster.sum().item()}\n"
f"CRS: {raster.rio.crs}\n"
)
# ## Load in xarray datasets
xds = xarray.open_dataarray("../../test/test_data/input/MODIS_ARRAY.nc")
xds_match = xarray.open_dataarray("../../test/test_data/input/MODIS_ARRAY_MATCH.nc")
fig, axes = plt.subplots(ncols=2, figsize=(12,4))
xds.plot(ax=axes[0])
xds_match.plot(ax=axes[1])
plt.draw()
print("Original Raster:\n----------------\n")
print_raster(xds)
print("Raster to Match:\n----------------\n")
print_raster(xds_match)
# ## Reproject Match
#
# API Reference:
#
# - DataArray: [rio.reproject_match()](../rioxarray.rst#rioxarray.raster_array.RasterArray.reproject_match)
# - Dataset: [rio.reproject_match()](../rioxarray.rst#rioxarray.raster_dataset.RasterDataset.reproject_match)
xds_repr_match = xds.rio.reproject_match(xds_match)
print("Reprojected Raster:\n-------------------\n")
print_raster(xds_repr_match)
print("Raster to Match:\n----------------\n")
print_raster(xds_match)
# ## Raster Calculations
#
# Now that the rasters have the same projection, resolution, and extents,
# you can do raster calculations.
#
# It is recommended to use ``assign_coords`` to make the coordinates the exact same
# due to tiny differences in the coordinate values due to floating precision ([issue 298](https://github.com/corteva/rioxarray/issues/298)).
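# A minimal numpy illustration (synthetic arrays, not the rasters above) of why this matters: xarray aligns on exact coordinate equality, so coordinates that agree only to floating-point precision are treated as different, and `assign_coords` is the way to snap them together.

```python
import numpy as np

# hypothetical coordinate arrays: identical values up to tiny reprojection round-off
x_a = np.linspace(0.0, 1.0, 5)
x_b = x_a + 1e-13

print(np.array_equal(x_a, x_b))  # False: exact comparison, which is what alignment uses
print(np.allclose(x_a, x_b))     # True: the values agree to float precision
```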
xds_repr_match = xds_repr_match.assign_coords({
"x": xds_match.x,
"y": xds_match.y,
})
xds_sum = xds_repr_match + xds_match
print("Sum Raster:\n-----------\n")
print_raster(xds_sum)
# +
fig, axes = plt.subplots(ncols=3, figsize=(16,4))
xds_repr_match.plot(ax=axes[0])
xds_match.plot(ax=axes[1])
xds_sum.plot(ax=axes[2])
plt.draw()
| docs/examples/reproject_match.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # `ipytest` Summary
#
# `ipytest` aims to make testing code in IPython notebooks easy. At its core, it offers a way to run pytest tests inside the notebook environment. It is also designed to make the transfer of the tests into proper Python modules easy by supporting standard `pytest` features.
#
# To get started install `ipytest` via:
#
# ```bash
# pip install -U ipytest
# ```
#
# To use `ipytest`, import it and configure the notebook. In most cases, running `ipytest.autoconfig()` will result in reasonable defaults:
#
# - Tests can be executed with the `%run_pytest` and `%run_pytest[clean]` magics
# - The `pytest` assert rewriting system, which produces nice assert messages, will be integrated into the notebook
# - If no notebook name is given, a workaround using temporary files will be used
#
# For more control, pass the relevant arguments to `ipytest.autoconfig()`. For details, see the documentation in the readme.
import ipytest
ipytest.autoconfig()
# ## Execute tests
#
# To execute test, just decorate the cells containing the tests with the `%%run_pytest[clean]` magic:
# +
# %%run_pytest[clean]
# define the tests
def test_my_func():
assert my_func(0) == 0
assert my_func(1) == 0
assert my_func(2) == 2
assert my_func(3) == 2
def my_func(x):
return x // 2 * 2
# -
# ## Using pytest fixtures
#
# Common pytest features, such as fixtures and parametrize, are supported out of the box:
# +
# %%run_pytest[clean]
import pytest
@pytest.mark.parametrize('input,expected', [
(0, 0),
(1, 0),
(2, 2),
(3, 2),
])
def test_parametrized(input, expected):
assert my_func(input) == expected
@pytest.fixture
def my_fixture():
return 42
def test_fixture(my_fixture):
assert my_fixture == 42
# -
# ## The difference between `%%run_pytest` and `%%run_pytest[clean]`
#
# The notebook interface has a lot of hidden state, since functions stay visible even if the corresponding code is deleted. For example, renaming a function will keep the old function around. When using `ipytest`, any function defined in the notebook and matching the name scheme `test*` will be discovered. To make test discovery easier to understand, the `%%run_pytest[clean]` magic will delete any object whose name matches the pattern `[Tt]est*` before running the cell. If this behavior is not wanted, use `%%run_pytest`.
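# A sketch of that cleanup in plain Python (our illustration of the name-matching rule, not ipytest's actual implementation):

```python
import re

# stand-in for the notebook's global namespace
namespace = {"TestFoo": type, "test_bar": len, "helper": 1}
pattern = re.compile(r"^[Tt]est")
for name in [n for n in namespace if pattern.match(n)]:
    del namespace[name]  # objects matching [Tt]est* are removed before the cell runs
print(sorted(namespace))  # ['helper']
```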
| Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
#------------------------------------------------------------------------------------
import numpy as np
import pycuda.gpuarray as gpuarray
from pycuda.tools import make_default_context
import matplotlib as matplotlib
import pylab as plt
from mpl_toolkits.mplot3d import Axes3D
#-------------------------------------------------------------------------------------
from pywignercuda_path import SetPyWignerCUDA_Path
SetPyWignerCUDA_Path()
from GPU_Dirac3D import *
# +
#import pyqtgraph as pg
# -
# %matplotlib inline
class DefinePotential_Dirac3D( GPU_Dirac3D ):
def __init__ (self):
#....................Defining the geometry.....................................
gridDIM_X = 256
gridDIM_Y = 256
gridDIM_Z = 256
X_amplitude =15
Y_amplitude =15
Z_amplitude =15
dt = 0.01
timeSteps = 1200
skipFrames = 100
#...................Defining the kinematic-dynamical constants.................
self.mass = 1.
self.c = 1.
self.hBar = 1.
#...................Defining the potential and initial state parameters........
hBar = 1
self.px = 0.
self.py = 5.
self.pz = 0.
self.energy = self.c*np.sqrt( (self.mass*self.c)**2 + self.px**2 + self.py**2 + self.pz**2 )
#...................Defining the potential.....................................
V0 = 7.
w = 0.3
self.Potential_0_String = '{0}*0.5*( 1. + tanh((y-2.)/{1}) )'.format(V0,w)
self.Potential_1_String = '0.'
self.Potential_2_String = '0.'
self.Potential_3_String = '0.'
#..................Defining the output directory/file .........................
self.fileName = '/home/rcabrera/DATA/Dirac3D/Klein_Y.hdf5'
self.Compute_Ehrenfest_P = True
self.antiParticleStepFiltering = False
#..............................................................................
amplitude = (X_amplitude,Y_amplitude,Z_amplitude)
gridDIM = (gridDIM_X,gridDIM_Y,gridDIM_Z)
self.Compute_Ehrenfest_P = True
GPU_Dirac3D.__init__( self,gridDIM, amplitude ,dt,timeSteps, skipFrames = skipFrames,frameSaveMode='Spinor')
def Set_Initial_Condition_SpinUp (self):
def gaussian(x,y,z):
y0 = -3
return np.exp( - ( x**2 + (y-y0)**2 + z**2)/2. )
self.Psi_init = self.Spinor_Particle_SpinUp( (self.px,self.py,self.pz) , gaussian )
norm = self.Norm( self.Psi_init )
#self.FilterElectrons(1)
self.Psi_init /= norm
"""def Set_Initial_Condition_SpinDown (self):
def gaussian(x,y,z):
return np.exp( - ( x**2 + y**2 + z**2)/2. )
self.Psi_init = self.Spinor_Particle_SpinDown( (self.px,self.py,self.pz) , gaussian )
norm = self.Norm_X_CPU( self.Psi_init )
self.Psi_init /= norm """
instance = DefinePotential_Dirac3D()
# +
instance = DefinePotential_Dirac3D()
instance.Set_Initial_Condition_SpinUp()
#instance_Dirac3D.FilterElectrons(1)
print ' '
print ' dx = ', instance.dX
Psi_end = instance.Run ()
# -
instance.Psi_end.shape
# +
axis_font = {'size':'24'}
fig, ax = plt.subplots(figsize=(20, 7))
ax.plot( instance.timeRange , instance.X_average , 'g',label= '$x^1 $')
ax.plot( instance.timeRange , instance.Y_average , 'r',label= '$x^2 $')
ax.plot( instance.timeRange , instance.Z_average , 'b--',label= '$x^3 $')
ax.set_xlabel(r'$t$',**axis_font)
ax.legend(bbox_to_anchor=(1.05, 0.5), loc=1, prop={'size':22})
# -
if instance.Compute_Ehrenfest_P == True:
axis_font = {'size':'24'}
fig, ax = plt.subplots(figsize=(20, 7))
ax.plot( instance.timeRange , instance.Px_average , 'g',label= '$p^1 $')
ax.plot( instance.timeRange , instance.Py_average , 'r',label= '$p^2 $')
ax.plot( instance.timeRange , instance.Pz_average , 'b--',label= '$p^3 $')
ax.set_ylim(-6,6)
ax.set_xlabel(r'$t$',**axis_font)
ax.legend(bbox_to_anchor=(1.05, 0.5), loc=1, prop={'size':22})
# +
axis_font = {'size':'24'}
fig, ax = plt.subplots(figsize=(20, 7))
ax.plot( instance.timeRange , np.gradient( instance.X_average , instance.dt) , 'g',
label= '$\\frac{dx^1}{dt} $')
ax.plot( instance.timeRange , instance.Alpha1_average ,'r--' ,label='$c \\alpha^1$')
ax.set_xlabel(r'$t$',**axis_font)
ax.legend(bbox_to_anchor=(1.05, 0.5), loc=1, prop={'size':22})
# +
axis_font = {'size':'24'}
fig, ax = plt.subplots(figsize=(20, 7))
ax.plot( instance.timeRange , np.gradient( instance.Y_average , instance.dt) , 'g',
label= '$\\frac{dy^1}{dt} $')
ax.plot( instance.timeRange , instance.Alpha2_average ,'r--' ,label='$c \\alpha^2$')
ax.set_xlabel(r'$t$',**axis_font)
ax.legend(bbox_to_anchor=(1.05, 0.5), loc=1, prop={'size':22})
# +
axis_font = {'size':'24'}
fig, ax = plt.subplots(figsize=(20, 7))
ax.plot( instance.timeRange , np.gradient( instance.Z_average , instance.dt) , 'g',
label= '$\\frac{dx^3}{dt} $')
ax.plot( instance.timeRange , instance.Alpha3_average ,'r--' ,label='$c \\alpha^3$')
ax.set_ylim(-0.1,0.1)
ax.set_xlabel(r'$t$',**axis_font)
ax.legend(bbox_to_anchor=(1.05, 0.5), loc=1, prop={'size':22})
# -
if instance.Compute_Ehrenfest_P == True:
axis_font = {'size':'24'}
fig, ax = plt.subplots(figsize=(20, 7))
ax.plot( instance.timeRange , np.gradient( instance.Px_average , instance.dt) , 'g',
label= '$\\frac{dp^1}{dt} $')
ax.set_xlabel(r'$t$',**axis_font)
ax.legend(bbox_to_anchor=(1.05, 0.5), loc=1, prop={'size':22})
if instance.Compute_Ehrenfest_P == True:
axis_font = {'size':'24'}
fig, ax = plt.subplots(figsize=(20, 7))
ax.plot( instance.timeRange , np.gradient( instance.Py_average , instance.dt) , 'g',
label= '$\\frac{dp^2}{dt} $')
ax.set_xlabel(r'$t$',**axis_font)
ax.legend(bbox_to_anchor=(1.05, 0.5), loc=1, prop={'size':22})
if instance.Compute_Ehrenfest_P == True:
axis_font = {'size':'24'}
fig, ax = plt.subplots(figsize=(20, 7))
ax.plot( instance.timeRange , np.gradient( instance.Pz_average , instance.dt) , 'g',
label= '$\\frac{dp^3}{dt} $')
ax.set_xlabel(r'$t$',**axis_font)
ax.legend(bbox_to_anchor=(1.05, 0.5), loc=1, prop={'size':22})
if instance.Compute_Ehrenfest_P == True:
axis_font = {'size':'24'}
fig, ax = plt.subplots(figsize=(20, 7))
ax.plot( instance.timeRange , instance.K_Energy_average + instance.Potential_0_average , 'g',
label= '$ Energy$')
ax.set_ylim(0,7)
ax.set_xlabel(r'$t$',**axis_font)
ax.grid('on')
ax.legend(bbox_to_anchor=(1.05, 0.5), loc=1, prop={'size':22})
# +
fig, ax = plt.subplots(figsize=(20, 10))
#ax = fig.gca(projection='3d')
ax.scatter(
instance.X_average,
instance.Y_average )
ax.set_xlabel(r'$x$',**axis_font)
ax.set_ylabel(r'$y$',**axis_font)
ax.set_xlim(-1. ,1.)
ax.set_ylim(-5 ,5)
#ax.set_zlim(-5,0)
# +
fig, ax = plt.subplots(figsize=(20, 10))
ax = fig.gca(projection='3d')
ax.scatter(
instance.X_average,
instance.Y_average,
instance.Z_average
)
ax.set_xlabel(r'$x$',**axis_font)
ax.set_ylabel(r'$y$',**axis_font)
ax.set_zlabel(r'$z$',**axis_font)
ax.set_ylim(-2 ,6)
ax.set_zlim(-0.1,0.1)
ax.set_xlim(-1,1)
# -
| instances/Dirac3D/Klein_Y.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="kR-4eNdK6lYS"
# Deep Learning
# =============
#
# Assignment 3
# ------------
#
# Previously in `2_fullyconnected.ipynb`, you trained a logistic regression and a neural network model.
#
# The goal of this assignment is to explore regularization techniques.
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="JLpLa8Jt7Vu4"
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
# + [markdown] colab_type="text" id="1HrCK6e17WzV"
# First reload the data we generated in `1_notmnist.ipynb`.
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 11777, "status": "ok", "timestamp": 1449849322348, "user": {"color": "", "displayName": "", "isAnonymous": false, "isMe": true, "permissionId": "", "photoUrl": "", "sessionId": "0", "userId": ""}, "user_tz": 480} id="y3-cj1bpmuxc" outputId="e03576f1-ebbe-4838-c388-f1777bcc9873"
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
# + [markdown] colab_type="text" id="L7aHrm6nGDMB"
# Reformat into a shape that's more adapted to the models we're going to train:
# - data as a flat matrix,
# - labels as float 1-hot encodings.
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 11728, "status": "ok", "timestamp": 1449849322356, "user": {"color": "", "displayName": "", "isAnonymous": false, "isMe": true, "permissionId": "", "photoUrl": "", "sessionId": "0", "userId": ""}, "user_tz": 480} id="IRSyYiIIGIzS" outputId="3f8996ee-3574-4f44-c953-5c8a04636582"
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="RajPLaL_ZW6w"
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
# + [markdown] colab_type="text" id="sgLbUAQ1CW-1"
# ---
# Problem 1
# ---------
#
# Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor `t` using `nn.l2_loss(t)`. The right amount of regularization should improve your validation / test accuracy.
#
# ---
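# `tf.nn.l2_loss(t)` computes the sum of squared entries divided by 2; a small numpy equivalent (the helper name is ours) makes the penalty concrete before wiring it into the loss:

```python
import numpy as np

def l2_loss(t):
    # same convention as tf.nn.l2_loss: sum of squared entries, halved
    return np.sum(np.square(t)) / 2.0

w = np.array([[1.0, -2.0], [3.0, 0.0]])
print(l2_loss(w))  # (1 + 4 + 9) / 2 = 7.0
```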
# +
beta = 0.02  # L2 regularization coefficient
batch_size = 128
graph = tf.Graph()
with graph.as_default():
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits_v2(labels=tf_train_labels, logits=logits)
+ beta*tf.nn.l2_loss(weights))
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
# +
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
# +
# Now the solution with neural network model
beta = 0.02  # L2 regularization coefficient
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
hidden_nodes = 512
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, hidden_nodes]))
biases = tf.Variable(tf.zeros([hidden_nodes]))
weights2 = tf.Variable(
tf.truncated_normal([hidden_nodes, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
logits = tf.matmul(tf.nn.relu(tf.matmul(tf_train_dataset, weights) + biases),
weights2) + biases2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits_v2(labels=tf_train_labels, logits=logits)
+ beta*tf.nn.l2_loss(weights))
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
train_prediction = tf.nn.softmax(logits)
logits_test = tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weights) + biases),
weights2) + biases2
logits_valid = tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weights) + biases),
weights2) + biases2
valid_prediction = tf.nn.softmax(logits_valid)
test_prediction = tf.nn.softmax(logits_test)
# +
batch_size = 128
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
# + [markdown] colab_type="text" id="na8xX2yHZzNF"
# ---
# Problem 2
# ---------
# Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?
#
# ---
# +
batch_size = 128
num_steps = 3001
beta = 0
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
hidden_nodes = 512
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, hidden_nodes]))
biases = tf.Variable(tf.zeros([hidden_nodes]))
weights2 = tf.Variable(
tf.truncated_normal([hidden_nodes, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
logits = tf.matmul(tf.nn.relu(tf.matmul(tf_train_dataset, weights) + biases),
weights2) + biases2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits_v2(labels=tf_train_labels, logits=logits)
+ beta*tf.nn.l2_loss(weights))
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
train_prediction = tf.nn.softmax(logits)
logits_test = tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weights) + biases),
weights2) + biases2
logits_valid = tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weights) + biases),
weights2) + biases2
valid_prediction = tf.nn.softmax(logits_valid)
test_prediction = tf.nn.softmax(logits_test)
train_subset = 500
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
train_new = train_dataset[:train_subset, :]
train_labels_new = train_labels[:train_subset]
offset = (step * batch_size) % (train_labels_new.shape[0] - batch_size)
batch_data = train_new[offset:(offset + batch_size), :]
batch_labels = train_labels_new[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
# + [markdown] colab_type="text" id="ww3SCBUdlkRc"
# ---
# Problem 3
# ---------
# Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides `nn.dropout()` for that, but you have to make sure it's only inserted during training.
#
# What happens to our extreme overfitting case?
#
# ---
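# TF1's `tf.nn.dropout` uses "inverted" dropout: kept units are scaled up by `1/keep_prob` so the expected activation is unchanged, and no scaling is needed at evaluation time. A numpy sketch (our own helper, seeded for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)

def inverted_dropout(x, keep_prob):
    # zero out units with probability (1 - keep_prob); scale survivors by 1/keep_prob
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

x = np.ones(100_000)
y = inverted_dropout(x, 0.35)
print(round(float(y.mean()), 2))  # close to 1.0: the expected activation is preserved
```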
# +
beta = 0
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
hidden_nodes = 1024
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, hidden_nodes]))
biases = tf.Variable(tf.zeros([hidden_nodes]))
weights2 = tf.Variable(
tf.truncated_normal([hidden_nodes, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
# tf.nn.dropout already scales kept units by 1/keep_prob, so no extra factor is needed
logits = tf.matmul(tf.nn.dropout(tf.nn.relu(tf.matmul(tf_train_dataset, weights) + biases), 0.35),
weights2) + biases2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits_v2(labels=tf_train_labels, logits=logits)
+ beta*tf.nn.l2_loss(weights))
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
logits_pred = tf.matmul(tf.nn.relu(tf.matmul(tf_train_dataset, weights) + biases),
weights2) + biases2
train_prediction = tf.nn.softmax(logits)
logits_test = tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weights) + biases),
weights2) + biases2
logits_valid = tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weights) + biases),
weights2) + biases2
valid_prediction = tf.nn.softmax(logits_valid)
test_prediction = tf.nn.softmax(logits_test)
train_subset = 1000
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
train_new = train_dataset[:train_subset, :]
train_labels_new = train_labels[:train_subset]
offset = (step * batch_size) % (train_labels_new.shape[0] - batch_size)
batch_data = train_new[offset:(offset + batch_size), :]
batch_labels = train_labels_new[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
# + [markdown] colab_type="text" id="-b1hTz3VWZjw"
# ---
# Problem 4
# ---------
#
# Try to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is [97.1%](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html?showComment=1391023266211#c8758720086795711595).
#
# One avenue you can explore is to add multiple layers.
#
# Another one is to use learning rate decay:
#
# global_step = tf.Variable(0) # count the number of steps taken.
# learning_rate = tf.train.exponential_decay(0.5, global_step, ...)
# optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
#
# ---
#
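# The schedule above is just `0.5 * decay_rate ** (global_step / decay_steps)`; a plain-Python sketch (the function name is ours, mirroring `tf.train.exponential_decay`):

```python
def exponential_decay(base_lr, global_step, decay_steps, decay_rate, staircase=False):
    # staircase=True decays in discrete jumps every decay_steps steps
    exponent = global_step // decay_steps if staircase else global_step / decay_steps
    return base_lr * decay_rate ** exponent

# decays a 0.5 learning rate by a factor of 0.9 every 500 steps
print(exponential_decay(0.5, 1000, 500, 0.9))
```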
# +
beta = 0.02
for i in range(30):
beta = (np.random.rand()+0.0001)/20
hidden_nodes = int(100+np.random.rand()*1000)
dec_rate = np.random.rand()
dec_size = int(300 + np.random.rand()*1000)
keep_rate = 0.4 + np.random.rand()/2
print(beta, hidden_nodes, dec_size, dec_rate, keep_rate)
# Good values!
# beta, hidden_nodes, dec_size, dec_rate, keep_rate = 0.027728714028579116, 235 ,436 ,0.9664904412289589 ,0.6749914285955139
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, hidden_nodes]))
biases = tf.Variable(tf.zeros([hidden_nodes]))
weights2 = tf.Variable(
tf.truncated_normal([hidden_nodes, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
layer1 = tf.matmul(tf_train_dataset, weights) + biases
layer2 = tf.nn.dropout(tf.nn.relu(layer1), keep_rate)  # tf.nn.dropout already scales kept units by 1/keep_rate
logits = tf.matmul(layer2, weights2) + biases2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits_v2(labels=tf_train_labels, logits=logits)
+ beta*tf.nn.l2_loss(weights))
global_step = tf.Variable(0) # count the number of steps taken.
learning_rate = tf.train.exponential_decay(0.5, global_step, dec_size, dec_rate)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
train_prediction = tf.nn.softmax(logits)
logits_test = tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weights) + biases),
weights2) + biases2
logits_valid = tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weights) + biases),
weights2) + biases2
valid_prediction = tf.nn.softmax(logits_valid)
test_prediction = tf.nn.softmax(logits_test)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
"""if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))"""
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
# -
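# The dropout used in the model above is "inverted" dropout: the units that
# survive are scaled up by 1/keep_prob at training time so the expected
# activation is unchanged and no rescaling is needed at test time. A minimal
# NumPy sketch of the idea (not TensorFlow's code path; the array sizes are
# illustrative):

```python
import numpy as np

def inverted_dropout(x, keep_prob, rng):
    # Keep each unit with probability `keep_prob`, zero out the rest, and
    # scale the survivors by 1/keep_prob so E[output] == x.
    mask = rng.random_sample(x.shape) < keep_prob
    return x * mask / keep_prob

rng = np.random.RandomState(0)
x = np.ones((100000,))
y = inverted_dropout(x, 0.6, rng)
print(y.mean())  # close to 1.0, despite 40% of units being dropped
```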
| tensorflow/examples/udacity/3_regularization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
os.chdir('..')
# +
import pandas as pd
import numpy as np
import scipy.optimize as sco
import warnings
warnings.filterwarnings('ignore')
import sklearn.cluster as cl
from sklearn import mixture
data_folder = '../data/'
path = data_folder + 'GEV_SM/swissmetro.dat'
from classes.MNLogit import *
from helpers.data import *
from helpers.algos import *
from helpers.models import *
import copy
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rcParams['text.usetex'] = True
import time
import pickle
import hyperopt
from hyperopt.pyll.base import scope
from hyperopt import fmin, tpe, hp, STATUS_OK, Trials
import seaborn as sns
import numdifftools as nd
sns.set(font_scale=1.5)
# For the Python notebook
# %matplotlib inline
# %reload_ext autoreload
# %autoreload 2
seed = 1234
# -
model = load_model(path, 'norm')
x = np.zeros(len(model.params))
# %%time
resbfgs = sco.minimize(model.negloglikelihood, x, method='BFGS', tol=1e-8, jac=model.neg_grad)
resbfgs.x
resbfgs.fun
# # SBFGS
# +
nbr = 20
draws = 10
res = {}
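# The runs below rely on the standard BFGS inverse-Hessian update
# H' = (I - rho s y^T) H (I - rho y s^T) + rho s s^T with s = x' - x,
# y = g' - g, rho = 1/(y^T s). A minimal self-contained sketch on a small
# quadratic, with an exact line search (a hedged illustration, not the
# implementation in `helpers.algos`):

```python
import numpy as np

def bfgs_quadratic(A, b, x0, iters=50):
    # Minimize f(x) = 0.5 x^T A x - b^T x starting from H = I.
    n = len(x0)
    H = np.eye(n)
    x = x0.astype(float)
    g = A @ x - b                      # gradient of the quadratic
    for _ in range(iters):
        p = -H @ g                     # search direction
        alpha = -(g @ p) / (p @ A @ p) # exact line search for a quadratic
        s = alpha * p
        x_new = x + s
        g_new = A @ x_new - b
        y = g_new - g
        rho = 1.0 / (y @ s)
        I = np.eye(n)
        # BFGS inverse-Hessian update
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
        x, g = x_new, g_new
        if np.linalg.norm(g) < 1e-10:
            break
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = bfgs_quadratic(A, b, np.zeros(2))
print(x_star)  # should match np.linalg.solve(A, b)
```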
# +
dct = {}
start = time.time()
epochs, xs, lls = bfgs(model, x, nbr, 'eye', False)
stop = time.time()
dct['epochs'] = epochs
dct['lls'] = lls
dct['times'] = stop-start
res['BFGS-eye'] = dct
# +
dct = {}
start = time.time()
epochs, xs, lls = bfgs(model, x, nbr, 'hessian', False)
stop = time.time()
dct['epochs'] = epochs
dct['lls'] = lls
dct['times'] = stop-start
res['BFGS-hess'] = dct
# +
dct = {}
lls = []
times = []
for d in range(draws):
start = time.time()
ep, x_val, ll = res_bfgs(model, x, nbr, 100)
stop = time.time()
times.append(stop-start)
lls.append(ll)
dct['epochs'] = np.array(ep)
dct['lls'] = np.array(lls)
dct['times'] = np.array(times)
res['RES_BFGS-100'] = dct
# +
dct = {}
lls = []
times = []
for d in range(draws):
start = time.time()
ep, x_val, ll = res_bfgs(model, x, nbr, 1000)
stop = time.time()
times.append(stop-start)
lls.append(ll)
dct['epochs'] = np.array(ep)
dct['lls'] = np.array(lls)
dct['times'] = np.array(times)
res['RES_BFGS-1000'] = dct
# -
with open('../data/SBFGS2.p', 'wb') as outfile:
pickle.dump(res, outfile)
# +
with open('../data/SBFGS2.p', 'rb') as infile:
res = pickle.load(infile)
colors = {'RES_BFGS-100': (232/255,164/255,29/255),
'RES_BFGS-1000': (0/255,152/255,205/255)}
labels = {
'BFGS-eye': 'BFGS (Identity start)',
'BFGS-hess': 'BFGS (Hessian start)',
'RES_BFGS-100': 'RES-BFGS (batch size: 100)',
'RES_BFGS-1000': 'RES-BFGS (batch size: 1000)'
}
plt.figure(figsize=(5,3), frameon=False)
sns.set_context("paper")
sns.set(font_scale = 1.3)
sns.set_style("white", {
"font.family": "sans-serif",
"font.serif": ['Helvetica'],
"font.scale": 2
})
sns.set_style("ticks", {"xtick.major.size": 4,
"ytick.major.size": 4})
ax = plt.subplot(111)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.plot([res['BFGS-eye']['epochs'][0], res['BFGS-eye']['epochs'][-1]],
[-resbfgs.fun, -resbfgs.fun], 'r--',
label='Optimal log-likelihood')
ax.plot(res['BFGS-eye']['epochs'], res['BFGS-eye']['lls'], 'k', label=labels['BFGS-eye'])
ax.plot(res['BFGS-hess']['epochs'], res['BFGS-hess']['lls'], '--k', label=labels['BFGS-hess'])
for key in ['RES_BFGS-100', 'RES_BFGS-1000']:
epochs = res[key]['epochs']
plus = []
minus = []
avg = []
vals = res[key]['lls']
for i in range(vals.shape[1]):
avg.append(np.mean(vals[:,i]))
minus.append(np.percentile(vals[:,i], 5))
plus.append(np.percentile(vals[:,i], 95))
ax.plot(epochs, avg, linestyle='-', color=colors[key] , label=labels[key])
ax.fill_between(epochs, plus, minus, color=colors[key] , alpha=0.5)
plt.xlabel('Epoch')
plt.ylabel('Normalized log-likelihood ($\\bar{\\mathcal{L}}$)')
leg = ax.legend(frameon=True)
max_ep = 10
ax.set_xlim([-0.05, max_ep])
plt.savefig('../figures/SBFGS{}.pdf'.format(max_ep), bbox_inches='tight')
# -
for algo in res.keys():
avg = {}
idx = next(x[0] for x in enumerate(res[algo]['epochs']) if x[1] >= 10)
lls = res[algo]['lls']
try:
tmp = lls[:,idx]
avg = np.mean(tmp)
except (IndexError, TypeError):  # deterministic runs store a single trajectory
avg = lls[idx]
print(" LL for {}: {:.6f}".format(algo, avg))
| code/notebooks/RES-BFGS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Homework 9 - Dimensional Reduction tutorial
# ## Introduction
# ### What is Dimensionality?
#
# "Dimensionality in statistics refers to how many attributes a dataset has. For example, healthcare data is notorious for having vast amounts of variables (e.g. blood pressure, weight, cholesterol level). In an ideal world, this data could be represented in a spreadsheet, with one column representing each dimension. In practice, this is difficult to do, in part because many variables are inter-related."
#
#
# ### High Dimensional Data
#
# "High Dimensional means that the number of dimensions are staggeringly high — so high that calculations become extremely difficult. With high dimensional data, the number of features can exceed the number of observations. For example, microarrays, which measure gene expression, can contain tens of hundreds of samples. Each sample can contain tens of thousands of genes."
#
# ### What is Reduction of Dimensionality?
#
# Reduction of dimensionality means simplifying the data, either numerically or visually, while maintaining its integrity. To reduce dimensionality, you could combine related data into groups using a tool like multidimensional scaling to identify similarities, or you could use clustering to group similar items together.
#
#
# Data source - "https://www.statisticshowto.datasciencecentral.com/dimensionality/"
#
# In this tutorial we will see how, by combining a technique called Principal Component Analysis (PCA) with cluster analysis, we can represent data defined in a high-dimensional space in two dimensions while, at the same time, grouping that data into similar clusters and finding hidden relationships in it.
# ## Initial Data setup
#
# In this tutorial, we will use a dataset from kaggle, which has data of the world population from 1960 to 2015.
# https://www.kaggle.com/mrpantherson/metal-by-nation#world_population_1960_2015.csv
import pandas as pd
import numpy as np
mydata = pd.read_csv("./data/world_population_1960_2015.csv", index_col=0,
thousands=',')
mydata.index.names = ['country']
mydata.columns.names = ['year']
mydata.head()
# Check if the dataframe has any missing or Nan values
#np.isnan(mydata)
mydata.isnull().values.any()
# The dataframe has missing/NaN values; fill them with fillna() using whichever method you like. We will use the column mean().
mmdata = mydata.fillna(mydata.mean()) #my modified data = mmdata
mmdata
# ## Dimensionality Reduction with PCA
# In this section we want to represent each country in a two-dimensional space.
# By using PCA we will reduce the 56 yearly columns (1960 to 2015) to just the two that best capture that information.
# To do so, we will first see how to perform PCA and plot the first two PCs.
# Python's sklearn machine learning library comes with a PCA implementation.
# This implementation uses the scipy.linalg implementation of the singular value decomposition.
# When using this implementation of PCA we need to specify in advance the number
# of principal components we want to use.
# +
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(mmdata)
# -
# We will use the transform method to get a lower dimension representation of our data
# frame, as a numPy array.
existing_2d = pca.transform(mmdata)
existing_df_2d = pd.DataFrame(existing_2d)
existing_df_2d.index = mmdata.index
existing_df_2d.columns = ['PC1','PC2']
existing_df_2d.head()
pca.explained_variance_ratio_ # calculate the explained variance ratio
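# The explained-variance ratio can be reproduced by hand from the singular
# values of the centered data: it is sigma_i**2 / sum(sigma_j**2). A hedged
# pure-NumPy sketch on synthetic data (not sklearn's exact code path; the
# column scales are made up to concentrate variance in one direction, much
# like the population data above):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(200, 5) * np.array([10.0, 1.0, 0.5, 0.2, 0.1])

# PCA by hand: center, take the SVD, and form the variance ratio from the
# squared singular values.
Xc = X - X.mean(axis=0)
sv = np.linalg.svd(Xc, compute_uv=False)
ratio = sv**2 / np.sum(sv**2)
print(ratio)  # the first component carries nearly all the variance
```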
# ## Display in graph and plots
# +
#plot the lower dimensionality version of our dataset.
# %matplotlib inline
ax = existing_df_2d.plot(kind='scatter', x='PC2', y='PC1', figsize=(16,8))
for i, country in enumerate(mmdata.index):
ax.annotate(country,
(existing_df_2d.iloc[i].PC2, existing_df_2d.iloc[i].PC1)
)
# -
# Let's now create a bubble chart, by setting the point size to a value proportional to the mean value for all the years in that particular country.
# +
from sklearn.preprocessing import normalize
existing_df_2d['country_mean'] = pd.Series(mmdata.mean(axis=1), index=existing_df_2d.index)
country_mean_max = existing_df_2d['country_mean'].max()
country_mean_min = existing_df_2d['country_mean'].min()
country_mean_scaled = (existing_df_2d.country_mean - country_mean_min) / (country_mean_max - country_mean_min)
existing_df_2d['country_mean_scaled'] = pd.Series(
country_mean_scaled,
index=existing_df_2d.index)
existing_df_2d.head()
# -
# plot using this variable size
existing_df_2d.plot(
kind='scatter',
x='PC2',
y='PC1',
s=existing_df_2d['country_mean_scaled']*100,
figsize=(16,8))
# let's associate the size with the change between 1995 and 2005. Note that in the scaled version, those values close to zero will make reference to those with negative values in the original non-scaled version, since we are scaling to a [0,1] range.
existing_df_2d['country_change'] = pd.Series(
mmdata['2005']-mmdata['1995'],
index=existing_df_2d.index)
country_change_max = existing_df_2d['country_change'].max()
country_change_min = existing_df_2d['country_change'].min()
country_change_scaled = (existing_df_2d.country_change - country_change_min) / (country_change_max - country_change_min)
existing_df_2d['country_change_scaled'] = pd.Series(
country_change_scaled,
index=existing_df_2d.index)
existing_df_2d[['country_change','country_change_scaled']].head()
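# The bubble sizes above depend on min-max scaling: values are mapped linearly
# so the minimum becomes 0 and the maximum becomes 1. A minimal sketch with
# illustrative numbers:

```python
import numpy as np

def min_max_scale(x):
    # (x - min) / (max - min) maps the smallest value to 0 and the largest to 1.
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

scaled = min_max_scale([5.0, 10.0, 25.0])
print(scaled)  # [0.   0.25 1.  ]
```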
# +
#plot the above
existing_df_2d.plot(
kind='scatter',
x='PC2', y='PC1',
s=existing_df_2d['country_change_scaled']*100,
figsize=(16,8))
# -
# ## PCA Results
#
# At the very bottom of our charts we saw an important concentration of countries; ascending that axis, countries become more sparse. We saw that the first PC already explains almost 99% of the variance, while the second accounts for roughly the remaining 1%.
# ## Exploring Data Structure with k-means Clustering
# we will use sklearn, in this case its k-means clustering implementation, in order to perform our clustering.
# +
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=4) #taking number of clusters as 4
clusters = kmeans.fit(mmdata)
# -
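# Taking n_clusters=4 above is a judgment call. k-means minimizes the
# within-cluster sum of squared distances (sklearn exposes it as
# `kmeans.inertia_`), and plotting that quantity against k (the "elbow"
# method) is one common way to choose k. A minimal pure-NumPy sketch of the
# quantity itself, with made-up 1-D points:

```python
import numpy as np

def inertia(X, centers, labels):
    # Within-cluster sum of squared distances -- the objective k-means minimizes.
    return sum(np.sum((X[labels == k] - c) ** 2) for k, c in enumerate(centers))

# Two well-separated 1-D clusters with their true centers.
X = np.array([[0.0], [1.0], [10.0], [11.0]])
centers = np.array([[0.5], [10.5]])
labels = np.array([0, 0, 1, 1])
print(inertia(X, centers, labels))  # 1.0
```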
# Now we need to store the cluster assignments together with each country in our
# data frame. The cluster labels are returned in clusters.labels_.
existing_df_2d['cluster'] = pd.Series(clusters.labels_, index=existing_df_2d.index)
# +
#plot the cluster
import numpy as np
existing_df_2d.plot(
kind='scatter',
x='PC2',y='PC1',
c=existing_df_2d.cluster.astype(float),
figsize=(16,8))
# -
# # Extra Credit
# ## Cluster Analysis/interpretation
#
# Below, we will analyse the items (countries) in each cluster. To do this we will create a new dataframe that maps each country to the value of its cluster.
cluster_map = pd.DataFrame()
cluster_map['data_index'] = existing_df_2d.index.values
cluster_map['cluster'] = clusters.labels_
# ### Cluster 0
cluster_map[cluster_map.cluster == 0]
# ### Cluster 1
cluster_map[cluster_map.cluster == 1]
# ### Cluster 2
cluster_map[cluster_map.cluster == 2]
# ### Cluster 3
cluster_map[cluster_map.cluster == 3]
cluster_map.groupby("cluster")['cluster'].count() #count of countries in each cluster
#plot the count of countries in each cluster
cluster_map.groupby("cluster")['cluster'].count().plot(kind='bar')
# We see that cluster 0 contains more countries than the other three clusters. From the k-means plot, we can assume this is the cluster that is dense at the bottom.
# # Reference
# - [1] https://www.codementor.io/jadianes/data-science-python-pandas-r-dimensionality-reduction-du1081aka
# - [2] https://stackoverflow.com/questions/36195457/python-sklearn-kmeans-how-to-get-the-values-in-the-cluster
| Homeworks/Homework9/VALLABHANENI_AKHILA_homework9.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Cartopy Examples
# %matplotlib inline
# +
from __future__ import (absolute_import, division, print_function, unicode_literals)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
import cartopy.crs as crs
import cartopy.feature as cfeature
from netCDF4 import Dataset
from wrf import to_np, getvar, smooth2d, get_cartopy, latlon_coords, CoordPair, GeoBounds, cartopy_xlim, cartopy_ylim
# Open the NetCDF file
ncfile = Dataset("/Users/ladwig/Documents/wrf_files/wrfout_d01_2016-10-07_00_00_00")
# Get sea level pressure and cloud top temperature
slp = getvar(ncfile, "slp")
ctt = getvar(ncfile, "ctt")
# Smooth the SLP
smooth_slp = smooth2d(slp, 3)
# Extract the latitude and longitude coordinate arrays as regular numpy array instead of xarray.DataArray
lats, lons = latlon_coords(slp)
# Get the cartopy projection class
cart_proj = get_cartopy(slp)
# Create the figure
fig = plt.figure(figsize=(8,8))
ax = plt.axes(projection=cart_proj)
# Download and create the states, land, and oceans using cartopy features
states = cfeature.NaturalEarthFeature(category='cultural', scale='50m', facecolor='none',
name='admin_1_states_provinces_shp')
land = cfeature.NaturalEarthFeature(category='physical', name='land', scale='50m',
facecolor=cfeature.COLORS['land'])
ocean = cfeature.NaturalEarthFeature(category='physical', name='ocean', scale='50m',
facecolor=cfeature.COLORS['water'])
# Make the pressure contours.
#contour_levels = [960, 965, 970, 975, 980, 990]
#c1 = plt.contour(lons, lats, to_np(smooth_slp), levels=contour_levels, colors="white",
# transform=crs.PlateCarree(), zorder=3, linewidths=1.0)
# Add pressure contour labels
#plt.clabel(c1, contour_levels, inline=True, fmt='%.0f', fontsize=7)
# Create the filled cloud top temperature contours
#contour_levels = [-80, -70, -60, -50, -40, -30, -20, -10, 0]
contour_levels = np.arange(-80.0, 10.0, 10.0)
plt.contourf(to_np(lons), to_np(lats), to_np(ctt), contour_levels, cmap=get_cmap("Greys"),
transform=crs.PlateCarree(), zorder=2)
plt.plot([-80,-77.8], [26.76,26.76], color="yellow", marker="o", transform=crs.PlateCarree(), zorder=3)
# Create the color bar for cloud top temperature
#cb2 = plt.colorbar(ax=ax, fraction=0.046, pad=0.04)
cb2 = plt.colorbar(ax=ax, shrink=.90)
# Draw the oceans, land, and states
ax.add_feature(ocean)
ax.add_feature(land)
ax.add_feature(states, linewidth=.5, edgecolor="black")
# Crop the domain to the region around the hurricane
hur_bounds = GeoBounds(CoordPair(lat=np.amin(to_np(lats)), lon=-85.0),
CoordPair(lat=30.0, lon=-72.0))
ax.set_xlim(cartopy_xlim(ctt, geobounds=hur_bounds))
ax.set_ylim(cartopy_ylim(ctt, geobounds=hur_bounds))
ax.gridlines(color="white", linestyle="dotted")
# Add the title and show the image
plt.title("Hurricane Matthew Cloud Top Temperature (degC) ")
#plt.savefig("/Users/ladwig/Documents/workspace/wrf_python/doc/source/_static/images/matthew.png",
# transparent=True, bbox_inches="tight")
plt.show()
# +
# %matplotlib inline
from __future__ import (absolute_import, division, print_function, unicode_literals)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
import cartopy.crs as crs
from cartopy.feature import NaturalEarthFeature
from netCDF4 import Dataset
from wrf import to_np, getvar, smooth2d, ll_to_xy, CoordPair, vertcross, getproj, get_proj_params, to_xy_coords, latlon_coords
# Open the NetCDF file
ncfile = Dataset("/Users/ladwig/Documents/wrf_files/wrfout_d01_2016-10-07_00_00_00")
# Extract model height, dbz, and wind speed
z = getvar(ncfile, "z", timeidx=0)
dbz = getvar(ncfile, "dbz", timeidx=0)
wspd = getvar(ncfile, "uvmet_wspd_wdir", units="kt")[0,:]
Z = 10**(dbz/10.)
start_point = CoordPair(lat=26.75, lon=-80.0)
end_point = CoordPair(lat=26.75, lon=-77.8)
# Compute the vertical cross-section interpolation. Also, include the lat/lon points along the cross-section.
z_cross = vertcross(Z, z, wrfin=ncfile, start_point=start_point, end_point=end_point, latlon=True, meta=True)
wspd_cross = vertcross(wspd, z, wrfin=ncfile, start_point=start_point, end_point=end_point, latlon=True, meta=True)
dbz_cross = 10.0 * np.log10(z_cross)
# Create the figure
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(4,4))
#ax = plt.axes([0.1,0.1,0.8,0.8])
# Define the contour levels [5, 10, ..., 75]
levels = [5 + 5*n for n in range(15)]
# Make the contour plot
a = axes[0].contourf(to_np(wspd_cross))
# Add the color bar
fig.colorbar(a, ax=axes[0])
b = axes[1].contourf(to_np(dbz_cross), levels=levels)
fig.colorbar(b, ax=axes[1])
# Set the x-ticks to use latitude and longitude labels.
coord_pairs = to_np(dbz_cross.coords["xy_loc"])
x_ticks = np.arange(coord_pairs.shape[0])
x_labels = [pair.latlon_str() for pair in to_np(coord_pairs)]
axes[0].set_xticks(x_ticks[::20])
axes[0].set_xticklabels([], rotation=45)
axes[1].set_xticks(x_ticks[::20])
axes[1].set_xticklabels(x_labels[::20], rotation=45, fontsize=6)
# Set the y-ticks to be height.
vert_vals = to_np(dbz_cross.coords["vertical"])
v_ticks = np.arange(vert_vals.shape[0])
axes[0].set_yticks(v_ticks[::20])
axes[0].set_yticklabels(vert_vals[::20], fontsize=6)
axes[1].set_yticks(v_ticks[::20])
axes[1].set_yticklabels(vert_vals[::20], fontsize=6)
# Set the x-axis and y-axis labels
axes[1].set_xlabel("Latitude, Longitude", fontsize=7)
axes[0].set_ylabel("Height (m)", fontsize=7)
axes[1].set_ylabel("Height (m)", fontsize=7)
# Add a title
axes[0].set_title("Cross-Section of Wind Speed (kt)", {"fontsize" : 10})
axes[1].set_title("Cross-Section of Reflectivity (dBZ)", {"fontsize" : 10})
plt.savefig("/Users/ladwig/Documents/workspace/wrf_python/doc/source/_static/images/matthew_cross.png",
transparent=True, bbox_inches="tight")
plt.show()
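# The cells above interpolate reflectivity on its linear value Z = 10**(dBZ/10)
# and then convert back with 10*log10(Z), because dBZ is a logarithmic scale
# and interpolating it directly would bias the result. A quick NumPy round-trip
# check of that transform (illustrative values):

```python
import numpy as np

def dbz_to_linear(dbz):
    # Z = 10**(dBZ/10): work on the linear value before interpolating.
    return 10.0 ** (np.asarray(dbz) / 10.0)

def linear_to_dbz(z):
    return 10.0 * np.log10(z)

dbz = np.array([5.0, 20.0, 45.0])
roundtrip = linear_to_dbz(dbz_to_linear(dbz))
print(roundtrip)  # recovers the original dBZ values
```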
# +
# %matplotlib inline
from __future__ import (absolute_import, division, print_function, unicode_literals)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
import cartopy.crs as crs
import cartopy.feature as cfeature
from netCDF4 import Dataset
from wrf import (getvar, to_np, vertcross, smooth2d, CoordPair, GeoBounds, get_cartopy,
latlon_coords, cartopy_xlim, cartopy_ylim)
# Open the NetCDF file
ncfile = Dataset("/Users/ladwig/Documents/wrf_files/wrfout_d01_2016-10-07_00_00_00")
# Get the WRF variables
slp = getvar(ncfile, "slp")
smooth_slp = smooth2d(slp, 3)
ctt = getvar(ncfile, "ctt")
z = getvar(ncfile, "z", timeidx=0)
dbz = getvar(ncfile, "dbz", timeidx=0)
Z = 10**(dbz/10.)
wspd = getvar(ncfile, "wspd_wdir", units="kt")[0,:]
# Set the start point and end point for the cross section
start_point = CoordPair(lat=26.76, lon=-80.0)
end_point = CoordPair(lat=26.76, lon=-77.8)
# Compute the vertical cross-section interpolation. Also, include the lat/lon points along the cross-section
# in the metadata by setting latlon to True.
z_cross = vertcross(Z, z, wrfin=ncfile, start_point=start_point, end_point=end_point, latlon=True, meta=True)
wspd_cross = vertcross(wspd, z, wrfin=ncfile, start_point=start_point, end_point=end_point, latlon=True, meta=True)
dbz_cross = 10.0 * np.log10(z_cross)
# Get the lat/lon points
lats, lons = latlon_coords(slp)
# Get the cartopy projection object
cart_proj = get_cartopy(slp)
# Create a figure that will have 3 subplots
fig = plt.figure(figsize=(10,7))
ax_ctt = fig.add_subplot(1,2,1,projection=cart_proj)
ax_wspd = fig.add_subplot(2,2,2)
ax_dbz = fig.add_subplot(2,2,4)
# Download and create the states, land, and oceans using cartopy features
states = cfeature.NaturalEarthFeature(category='cultural', scale='50m', facecolor='none',
name='admin_1_states_provinces_shp')
land = cfeature.NaturalEarthFeature(category='physical', name='land', scale='50m',
facecolor=cfeature.COLORS['land'])
ocean = cfeature.NaturalEarthFeature(category='physical', name='ocean', scale='50m',
facecolor=cfeature.COLORS['water'])
# Make the pressure contours
contour_levels = [960, 965, 970, 975, 980, 990]
c1 = ax_ctt.contour(lons, lats, to_np(smooth_slp), levels=contour_levels, colors="white",
transform=crs.PlateCarree(), zorder=3, linewidths=1.0)
# Create the filled cloud top temperature contours
contour_levels = [-80.0, -70.0, -60, -50, -40, -30, -20, -10, 0, 10]
ctt_contours = ax_ctt.contourf(to_np(lons), to_np(lats), to_np(ctt), contour_levels, cmap=get_cmap("Greys"),
transform=crs.PlateCarree(), zorder=2)
ax_ctt.plot([start_point.lon, end_point.lon], [start_point.lat, end_point.lat], color="yellow",
marker="o", transform=crs.PlateCarree(), zorder=3)
# Create the color bar for cloud top temperature
cb_ctt = fig.colorbar(ctt_contours, ax=ax_ctt, shrink=.60)
cb_ctt.ax.tick_params(labelsize=5)
# Draw the oceans, land, and states
ax_ctt.add_feature(ocean)
ax_ctt.add_feature(land)
ax_ctt.add_feature(states, linewidth=.5, edgecolor="black")
# Crop the domain to the region around the hurricane
hur_bounds = GeoBounds(CoordPair(lat=np.amin(to_np(lats)), lon=-85.0),
CoordPair(lat=30.0, lon=-72.0))
ax_ctt.set_xlim(cartopy_xlim(ctt, geobounds=hur_bounds))
ax_ctt.set_ylim(cartopy_ylim(ctt, geobounds=hur_bounds))
ax_ctt.gridlines(color="white", linestyle="dotted")
# Make the contour plot for wind speed
wspd_contours = ax_wspd.contourf(to_np(wspd_cross), cmap=get_cmap("jet"))
# Add the color bar
cb_wspd = fig.colorbar(wspd_contours, ax=ax_wspd)
cb_wspd.ax.tick_params(labelsize=5)
# Make the contour plot for dbz
levels = [5 + 5*n for n in range(15)]
dbz_contours = ax_dbz.contourf(to_np(dbz_cross), levels=levels, cmap=get_cmap("jet"))
cb_dbz = fig.colorbar(dbz_contours, ax=ax_dbz)
cb_dbz.ax.tick_params(labelsize=5)
# Set the x-ticks to use latitude and longitude labels
coord_pairs = to_np(dbz_cross.coords["xy_loc"])
x_ticks = np.arange(coord_pairs.shape[0])
x_labels = [pair.latlon_str() for pair in to_np(coord_pairs)]
ax_wspd.set_xticks(x_ticks[::20])
ax_wspd.set_xticklabels([], rotation=45)
ax_dbz.set_xticks(x_ticks[::20])
ax_dbz.set_xticklabels(x_labels[::20], rotation=45, fontsize=4)
# Set the y-ticks to be height
vert_vals = to_np(dbz_cross.coords["vertical"])
v_ticks = np.arange(vert_vals.shape[0])
ax_wspd.set_yticks(v_ticks[::20])
ax_wspd.set_yticklabels(vert_vals[::20], fontsize=4)
ax_dbz.set_yticks(v_ticks[::20])
ax_dbz.set_yticklabels(vert_vals[::20], fontsize=4)
# Set the x-axis and y-axis labels
ax_dbz.set_xlabel("Latitude, Longitude", fontsize=5)
ax_wspd.set_ylabel("Height (m)", fontsize=5)
ax_dbz.set_ylabel("Height (m)", fontsize=5)
# Add a title
ax_ctt.set_title("Cloud Top Temperature (degC)", {"fontsize" : 7})
ax_wspd.set_title("Cross-Section of Wind Speed (kt)", {"fontsize" : 7})
ax_dbz.set_title("Cross-Section of Reflectivity (dBZ)", {"fontsize" : 7})
plt.savefig("/Users/ladwig/Documents/workspace/wrf_python/doc/source/_static/images/matthew.png",
transparent=True, bbox_inches="tight")
plt.show()
# +
# %matplotlib inline
# SLP
from __future__ import (absolute_import, division, print_function, unicode_literals)
from netCDF4 import Dataset
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
import cartopy.crs as crs
from cartopy.feature import NaturalEarthFeature
from wrf import to_np, getvar, smooth2d, get_cartopy, cartopy_xlim, cartopy_ylim, latlon_coords
# Open the NetCDF file
ncfile = Dataset("/Users/ladwig/Documents/wrf_files/wrfout_d01_2016-10-07_00_00_00")
# Get the sea level pressure
slp = getvar(ncfile, "slp")
# Smooth the sea level pressure since it tends to be noisy near the mountains
smooth_slp = smooth2d(slp, 3)
# Get the latitude and longitude points
lats, lons = latlon_coords(slp)
# Get the cartopy mapping object
cart_proj = get_cartopy(slp)
# Create a figure
fig = plt.figure(figsize=(12,9))
# Set the GeoAxes to the projection used by WRF
ax = plt.axes(projection=cart_proj)
# Download and add the states and coastlines
states = NaturalEarthFeature(category='cultural', scale='50m', facecolor='none',
name='admin_1_states_provinces_shp')
ax.add_feature(states, linewidth=.5)
ax.coastlines('50m', linewidth=0.8)
# Make the contour outlines and filled contours for the smoothed sea level pressure.
plt.contour(to_np(lons), to_np(lats), to_np(smooth_slp), 10, colors="black", transform=crs.PlateCarree())
plt.contourf(to_np(lons), to_np(lats), to_np(smooth_slp), 10, transform=crs.PlateCarree(), cmap=get_cmap("jet"))
# Add a color bar
plt.colorbar(ax=ax, shrink=.62)
# Set the map limits. Not really necessary, but used for demonstration.
ax.set_xlim(cartopy_xlim(smooth_slp))
ax.set_ylim(cartopy_ylim(smooth_slp))
# Add the gridlines
ax.gridlines(color="black", linestyle="dotted")
plt.title("Sea Level Pressure (hPa)")
plt.savefig("/Users/ladwig/Documents/workspace/wrf_python/doc/source/_static/images/cartopy_slp.png",
transparent=True, bbox_inches="tight")
# +
# %matplotlib inline
from __future__ import (absolute_import, division, print_function, unicode_literals)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
import cartopy.crs as crs
from cartopy.feature import NaturalEarthFeature
from netCDF4 import Dataset
from wrf import to_np, getvar, CoordPair, vertcross
# Open the NetCDF file
filename = "/Users/ladwig/Documents/wrf_files/wrfout_d01_2016-10-07_00_00_00"
ncfile = Dataset(filename)
# Extract the model height and wind speed
z = getvar(ncfile, "z")
wspd = getvar(ncfile, "uvmet_wspd_wdir", units="kt")[0,:]
# Create the start point and end point for the cross section
start_point = CoordPair(lat=26.76, lon=-80.0)
end_point = CoordPair(lat=26.76, lon=-77.8)
# Compute the vertical cross-section interpolation. Also, include the lat/lon points along the cross-section.
wspd_cross = vertcross(wspd, z, wrfin=ncfile, start_point=start_point, end_point=end_point, latlon=True, meta=True)
# Create the figure
fig = plt.figure(figsize=(12,6))
ax = plt.axes()
# Make the contour plot
wspd_contours = ax.contourf(to_np(wspd_cross), cmap=get_cmap("jet"))
# Add the color bar
plt.colorbar(wspd_contours, ax=ax)
# Set the x-ticks to use latitude and longitude labels.
coord_pairs = to_np(wspd_cross.coords["xy_loc"])
x_ticks = np.arange(coord_pairs.shape[0])
x_labels = [pair.latlon_str(fmt="{:.2f}, {:.2f}") for pair in to_np(coord_pairs)]
ax.set_xticks(x_ticks[::20])
ax.set_xticklabels(x_labels[::20], rotation=45, fontsize=8)
# Set the y-ticks to be height.
vert_vals = to_np(wspd_cross.coords["vertical"])
v_ticks = np.arange(vert_vals.shape[0])
ax.set_yticks(v_ticks[::20])
ax.set_yticklabels(vert_vals[::20], fontsize=8)
# Set the x-axis and y-axis labels
ax.set_xlabel("Latitude, Longitude", fontsize=12)
ax.set_ylabel("Height (m)", fontsize=12)
plt.title("Vertical Cross Section of Wind Speed (kt)")
plt.savefig("/Users/ladwig/Documents/workspace/wrf_python/doc/source/_static/images/cartopy_cross.png",
transparent=True, bbox_inches="tight")
plt.show()
# +
from __future__ import (absolute_import, division, print_function, unicode_literals)
from netCDF4 import Dataset
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
import cartopy.crs as crs
from cartopy.feature import NaturalEarthFeature
from wrf import getvar, interplevel, to_np, latlon_coords, get_cartopy, cartopy_xlim, cartopy_ylim
# Open the NetCDF file
ncfile = Dataset("/Users/ladwig/Documents/wrf_files/wrfout_d01_2016-10-07_00_00_00")
# Extract the pressure, geopotential height, and wind variables
p = getvar(ncfile, "pressure")
z = getvar(ncfile, "z", units="dm")
ua = getvar(ncfile, "ua", units="kt")
va = getvar(ncfile, "va", units="kt")
wspd = getvar(ncfile, "wspd_wdir", units="kts")[0,:]
# Interpolate geopotential height, u, and v winds to 500 hPa
ht_500 = interplevel(z, p, 500)
u_500 = interplevel(ua, p, 500)
v_500 = interplevel(va, p, 500)
wspd_500 = interplevel(wspd, p, 500)
# Get the lat/lon coordinates
lats, lons = latlon_coords(ht_500)
# Get the map projection information
cart_proj = get_cartopy(ht_500)
# Create the figure
fig = plt.figure(figsize=(12,9))
ax = plt.axes(projection=cart_proj)
# Download and add the states and coastlines
states = NaturalEarthFeature(category='cultural', scale='50m', facecolor='none',
name='admin_1_states_provinces_shp')
ax.add_feature(states, linewidth=0.5)
ax.coastlines('50m', linewidth=0.8)
# Add the 500 hPa geopotential height contours
levels = np.arange(520., 580., 6.)
contours = plt.contour(to_np(lons), to_np(lats), to_np(ht_500), levels=levels, colors="black",
transform=crs.PlateCarree())
plt.clabel(contours, inline=1, fontsize=10, fmt="%i")
# Add the wind speed contours
levels = [25, 30, 35, 40, 50, 60, 70, 80, 90, 100, 110, 120]
wspd_contours = plt.contourf(to_np(lons), to_np(lats), to_np(wspd_500), levels=levels,
cmap=get_cmap("rainbow"),
transform=crs.PlateCarree())
plt.colorbar(wspd_contours, ax=ax, orientation="horizontal", pad=.05)
# Add the 500 hPa wind barbs, only plotting every 125th data point.
plt.barbs(to_np(lons[::125,::125]), to_np(lats[::125,::125]), to_np(u_500[::125, ::125]),
to_np(v_500[::125, ::125]), transform=crs.PlateCarree(), length=6)
# Set the map bounds
ax.set_xlim(cartopy_xlim(ht_500))
ax.set_ylim(cartopy_ylim(ht_500))
ax.gridlines()
plt.title("500 MB Height (dm), Wind Speed (kt), Barbs (kt)")
plt.savefig("/Users/ladwig/Documents/workspace/wrf_python/doc/source/_static/images/cartopy_500.png",
transparent=True, bbox_inches="tight")
plt.show()
# +
from __future__ import print_function
from netCDF4 import Dataset
from wrf import getvar, get_cartopy, latlon_coords, geo_bounds
ncfile = Dataset("/Users/ladwig/Documents/wrf_files/wrfout_d01_2016-10-07_00_00_00")
slp = getvar(ncfile, "slp")
# Get the cartopy mapping object
cart_proj = get_cartopy(slp)
print (cart_proj)
# Get the latitude and longitude coordinates. This is needed for plotting.
lats, lons = latlon_coords(slp)
# Get the geobounds for the full domain
bounds = geo_bounds(slp)
print (bounds)
# Get the geographic boundaries for a subset of the domain
slp_subset = slp[150:250, 150:250]
slp_subset_bounds = geo_bounds(slp_subset)
print (slp_subset_bounds)
# +
from __future__ import print_function
from netCDF4 import Dataset
from wrf import get_cartopy, geo_bounds
ncfile = Dataset("/Users/ladwig/Documents/wrf_files/wrfout_d01_2016-10-07_00_00_00")
cart_proj = get_cartopy(wrfin=ncfile)
print (cart_proj)
bounds = geo_bounds(wrfin=ncfile)
print (bounds)
# -
# # Basemap Examples
# +
from __future__ import (absolute_import, division, print_function, unicode_literals)
from netCDF4 import Dataset
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
from mpl_toolkits.basemap import Basemap
from wrf import to_np, getvar, smooth2d, get_basemap, latlon_coords
# Open the NetCDF file
ncfile = Dataset("/Users/ladwig/Documents/wrf_files/wrfout_d01_2016-10-07_00_00_00")
# Get the sea level pressure
slp = getvar(ncfile, "slp")
# Smooth the sea level pressure since it tends to be noisy near the mountains
smooth_slp = smooth2d(slp, 3)
# Get the latitude and longitude points
lats, lons = latlon_coords(slp)
# Get the basemap object
bm = get_basemap(slp)
# Create a figure
fig = plt.figure(figsize=(12,9))
# Add geographic outlines
bm.drawcoastlines(linewidth=0.25)
bm.drawstates(linewidth=0.25)
bm.drawcountries(linewidth=0.25)
# Convert the lats and lons to x and y. Make sure you convert the lats and lons to
# numpy arrays via to_np, or basemap crashes with an undefined RuntimeError.
x, y = bm(to_np(lons), to_np(lats))
# Draw the contours and filled contours
bm.contour(x, y, to_np(smooth_slp), 10, colors="black")
bm.contourf(x, y, to_np(smooth_slp), 10, cmap=get_cmap("jet"))
# Add a color bar
plt.colorbar(shrink=.62)
plt.title("Sea Level Pressure (hPa)")
plt.savefig("/Users/ladwig/Documents/workspace/wrf_python/doc/source/_static/images/basemap_slp.png",
transparent=True, bbox_inches="tight")
plt.show()
# +
# %matplotlib inline
from __future__ import (absolute_import, division, print_function, unicode_literals)
from netCDF4 import Dataset
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
from wrf import getvar, interplevel, to_np, get_basemap, latlon_coords
# Open the NetCDF file
ncfile = Dataset("/Users/ladwig/Documents/wrf_files/wrfout_d01_2016-10-07_00_00_00")
# Extract the pressure, geopotential height, and wind variables
p = getvar(ncfile, "pressure")
z = getvar(ncfile, "z", units="dm")
ua = getvar(ncfile, "ua", units="kt")
va = getvar(ncfile, "va", units="kt")
wspd = getvar(ncfile, "wspd_wdir", units="kts")[0,:]
# Interpolate geopotential height, u, and v winds to 500 hPa
ht_500 = interplevel(z, p, 500)
u_500 = interplevel(ua, p, 500)
v_500 = interplevel(va, p, 500)
wspd_500 = interplevel(wspd, p, 500)
# Get the lat/lon coordinates
lats, lons = latlon_coords(ht_500)
# Get the basemap object
bm = get_basemap(ht_500)
# Create the figure
fig = plt.figure(figsize=(12,9))
ax = plt.axes()
# Convert the lat/lon coordinates to x/y coordinates in the projection space
x, y = bm(to_np(lons), to_np(lats))
# Add the 500 hPa geopotential height contours
levels = np.arange(520., 580., 6.)
contours = bm.contour(x, y, to_np(ht_500), levels=levels, colors="black")
plt.clabel(contours, inline=1, fontsize=10, fmt="%i")
# Add the wind speed contours
levels = [25, 30, 35, 40, 50, 60, 70, 80, 90, 100, 110, 120]
wspd_contours = bm.contourf(x, y, to_np(wspd_500), levels=levels,
cmap=get_cmap("rainbow"))
plt.colorbar(wspd_contours, ax=ax, orientation="horizontal", pad=.05)
# Add the geographic boundaries
bm.drawcoastlines(linewidth=0.25)
bm.drawstates(linewidth=0.25)
bm.drawcountries(linewidth=0.25)
# Add the 500 hPa wind barbs, only plotting every 125th data point.
bm.barbs(x[::125,::125], y[::125,::125], to_np(u_500[::125, ::125]),
to_np(v_500[::125, ::125]), length=6)
plt.title("500 MB Height (dm), Wind Speed (kt), Barbs (kt)")
plt.savefig("/Users/ladwig/Documents/workspace/wrf_python/doc/source/_static/images/basemap_500.png",
transparent=True, bbox_inches="tight")
plt.show()
# +
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
from netCDF4 import Dataset
from wrf import getvar, to_np, vertcross, smooth2d, CoordPair, get_basemap, latlon_coords
# Open the NetCDF file
ncfile = Dataset("/Users/ladwig/Documents/wrf_files/wrfout_d01_2016-10-07_00_00_00")
# Get the WRF variables
slp = getvar(ncfile, "slp")
smooth_slp = smooth2d(slp, 3)
ctt = getvar(ncfile, "ctt")
z = getvar(ncfile, "z", timeidx=0)
dbz = getvar(ncfile, "dbz", timeidx=0)
Z = 10**(dbz/10.)
wspd = getvar(ncfile, "wspd_wdir", units="kt")[0,:]
# Set the start point and end point for the cross section
start_point = CoordPair(lat=26.76, lon=-80.0)
end_point = CoordPair(lat=26.76, lon=-77.8)
# Compute the vertical cross-section interpolation. Also, include the lat/lon points along the cross-section in
# the metadata by setting latlon to True.
z_cross = vertcross(Z, z, wrfin=ncfile, start_point=start_point, end_point=end_point, latlon=True, meta=True)
wspd_cross = vertcross(wspd, z, wrfin=ncfile, start_point=start_point, end_point=end_point, latlon=True, meta=True)
dbz_cross = 10.0 * np.log10(z_cross)
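Interpolating radar reflectivity is done in linear units rather than dBZ because dBZ is a logarithmic scale, and interpolating log values would bias the result low. A minimal sketch of the round trip performed by the two conversion lines above:

```python
import numpy as np

# Round-trip check of the dBZ <-> linear conversion used above:
# convert dBZ to linear Z, interpolate in Z, then convert back.
dbz = np.array([5.0, 20.0, 35.0])
Z = 10.0 ** (dbz / 10.0)       # dBZ -> linear reflectivity factor
dbz_back = 10.0 * np.log10(Z)  # linear -> dBZ
print(np.allclose(dbz, dbz_back))  # True
```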
# Get the latitude and longitude points
lats, lons = latlon_coords(slp)
# Create the figure that will have 3 subplots
fig = plt.figure(figsize=(10,7))
ax_ctt = fig.add_subplot(1,2,1)
ax_wspd = fig.add_subplot(2,2,2)
ax_dbz = fig.add_subplot(2,2,4)
# Get the basemap object
bm = get_basemap(slp)
# Convert the lat/lon points in to x/y points in the projection space
x, y = bm(to_np(lons), to_np(lats))
# Make the pressure contours
contour_levels = [960, 965, 970, 975, 980, 990]
c1 = bm.contour(x, y, to_np(smooth_slp), levels=contour_levels, colors="white",
zorder=3, linewidths=1.0, ax=ax_ctt)
# Create the filled cloud top temperature contours
contour_levels = [-80.0, -70.0, -60, -50, -40, -30, -20, -10, 0, 10]
ctt_contours = bm.contourf(x, y, to_np(ctt), contour_levels, cmap=get_cmap("Greys"),
zorder=2, ax=ax_ctt)
point_x, point_y = bm([start_point.lon, end_point.lon], [start_point.lat, end_point.lat])
bm.plot([point_x[0], point_x[1]], [point_y[0], point_y[1]], color="yellow",
marker="o", zorder=3, ax=ax_ctt)
# Create the color bar for cloud top temperature
cb_ctt = fig.colorbar(ctt_contours, ax=ax_ctt, shrink=.60)
cb_ctt.ax.tick_params(labelsize=5)
# Draw the oceans, land, and states
bm.drawcoastlines(linewidth=0.25, ax=ax_ctt)
bm.drawstates(linewidth=0.25, ax=ax_ctt)
bm.drawcountries(linewidth=0.25, ax=ax_ctt)
bm.fillcontinents(color=np.array([ 0.9375 , 0.9375 , 0.859375]),
ax=ax_ctt, lake_color=np.array([ 0.59375 , 0.71484375, 0.8828125 ]))
bm.drawmapboundary(fill_color=np.array([ 0.59375 , 0.71484375, 0.8828125 ]), ax=ax_ctt)
# Draw Parallels
parallels = np.arange(np.amin(lats), 30., 2.5)
bm.drawparallels(parallels, ax=ax_ctt, color="white")
merids = np.arange(-85.0, -72.0, 2.5)
bm.drawmeridians(merids, ax=ax_ctt, color="white")
# Crop the image to the hurricane region
x_start, y_start = bm(-85.0, np.amin(lats))
x_end, y_end = bm(-72.0, 30.0)
ax_ctt.set_xlim([x_start, x_end])
ax_ctt.set_ylim([y_start, y_end])
# Make the contour plot for wspd
wspd_contours = ax_wspd.contourf(to_np(wspd_cross), cmap=get_cmap("jet"))
# Add the color bar
cb_wspd = fig.colorbar(wspd_contours, ax=ax_wspd)
cb_wspd.ax.tick_params(labelsize=5)
# Make the contour plot for dbz
levels = [5 + 5*n for n in range(15)]
dbz_contours = ax_dbz.contourf(to_np(dbz_cross), levels=levels, cmap=get_cmap("jet"))
cb_dbz = fig.colorbar(dbz_contours, ax=ax_dbz)
cb_dbz.ax.tick_params(labelsize=5)
# Set the x-ticks to use latitude and longitude labels.
coord_pairs = to_np(dbz_cross.coords["xy_loc"])
x_ticks = np.arange(coord_pairs.shape[0])
x_labels = [pair.latlon_str() for pair in to_np(coord_pairs)]
ax_wspd.set_xticks(x_ticks[::20])
ax_wspd.set_xticklabels([], rotation=45)
ax_dbz.set_xticks(x_ticks[::20])
ax_dbz.set_xticklabels(x_labels[::20], rotation=45, fontsize=4)
# Set the y-ticks to be height.
vert_vals = to_np(dbz_cross.coords["vertical"])
v_ticks = np.arange(vert_vals.shape[0])
ax_wspd.set_yticks(v_ticks[::20])
ax_wspd.set_yticklabels(vert_vals[::20], fontsize=4)
ax_dbz.set_yticks(v_ticks[::20])
ax_dbz.set_yticklabels(vert_vals[::20], fontsize=4)
# Set the x-axis and y-axis labels
ax_dbz.set_xlabel("Latitude, Longitude", fontsize=5)
ax_wspd.set_ylabel("Height (m)", fontsize=5)
ax_dbz.set_ylabel("Height (m)", fontsize=5)
# Add titles
ax_ctt.set_title("Cloud Top Temperature (degC)", {"fontsize" : 7})
ax_wspd.set_title("Cross-Section of Wind Speed (kt)", {"fontsize" : 7})
ax_dbz.set_title("Cross-Section of Reflectivity (dBZ)", {"fontsize" : 7})
plt.savefig("/Users/ladwig/Documents/workspace/wrf_python/doc/source/_static/images/basemap_front.png",
transparent=False, bbox_inches="tight")
plt.show()
# +
from __future__ import print_function
from netCDF4 import Dataset
from wrf import getvar, get_basemap, latlon_coords, geo_bounds
ncfile = Dataset("/Users/ladwig/Documents/wrf_files/wrfout_d01_2016-10-07_00_00_00")
slp = getvar(ncfile, "slp")
# Get the basemap mapping object
basemap_proj = get_basemap(slp)
print (basemap_proj)
# Get the latitude and longitude coordinates. This is needed for plotting.
lats, lons = latlon_coords(slp)
# Get the geobounds for the full domain
bounds = geo_bounds(slp)
print(bounds)
# Get the geographic boundaries for a subset of the domain
slp_subset = slp[150:250, 150:250]
slp_subset_bounds = geo_bounds(slp_subset)
print (slp_subset_bounds)
# +
from __future__ import print_function
from netCDF4 import Dataset
from wrf import get_basemap, geo_bounds
ncfile = Dataset("/Users/ladwig/Documents/wrf_files/wrfout_d01_2016-10-07_00_00_00")
basemap_proj = get_basemap(wrfin=ncfile)
print (basemap_proj)
bounds = geo_bounds(wrfin=ncfile)
print (bounds)
# +
# Cloud top temperature
from __future__ import (absolute_import, division, print_function, unicode_literals)
import Ngl
import Nio
import numpy as np
from wrf import to_np, getvar, smooth2d, get_pyngl, latlon_coords
ncfile = Nio.open_file(b"/Users/ladwig/Documents/wrf_files/wrfout_d01_2016-10-07_00_00_00.nc")
# Get the cloud top temperature
ctt = getvar(ncfile, "ctt")
wks = Ngl.open_wks(b"png", b"test")
# Get the map projection
resources = get_pyngl(ctt)
# This needs to be False if you want to crop
resources.tfDoNDCOverlay = False
resources.mpLeftCornerLatF = 30.0
resources.mpRightCornerLatF = np.amin(to_np(ctt.coords["XLAT"]))
resources.mpLeftCornerLonF = -85.
resources.mpRightCornerLonF = -75.
lats, lons = latlon_coords(ctt, as_np=True)
resources.sfYArray = lats
resources.sfXArray = lons
resources.cnLevelSelectionMode = b"ExplicitLevels" # Define your own
resources.cnLevels = np.arange(-80.,30.,10.)
resources.cnFillOn = True
resources.cnFillMode = b"RasterFill"
#resources.cnFillPalette = Ngl.read_colormap_file(b"MPL_Greys")
Ngl.contour_map(wks, to_np(ctt), resources)
Ngl.end()
# +
from __future__ import print_function
from netCDF4 import Dataset
from wrf import getvar, get_pyngl, latlon_coords, geo_bounds
ncfile = Dataset("/Users/ladwig/Documents/wrf_files/wrfout_d01_2016-10-07_00_00_00")
slp = getvar(ncfile, "slp")
# Get the pyngl mapping object
pyngl_resources = get_pyngl(slp)
print (pyngl_resources)
# Get the latitude and longitude coordinates. This is needed for plotting.
lats, lons = latlon_coords(slp)
# Get the geobounds for the full domain
bounds = geo_bounds(slp)
print(bounds)
# Get the geographic boundaries for a subset of the domain
slp_subset = slp[150:250, 150:250]
slp_subset_bounds = geo_bounds(slp_subset)
print (slp_subset_bounds)
# +
from __future__ import print_function
from netCDF4 import Dataset
from wrf import get_pyngl, geo_bounds
ncfile = Dataset("/Users/ladwig/Documents/wrf_files/wrfout_d01_2016-10-07_00_00_00")
pyngl_resources = get_pyngl(wrfin=ncfile)
print (pyngl_resources)
bounds = geo_bounds(wrfin=ncfile)
print (bounds)
# -
# # OpenMP Routines
# +
from __future__ import print_function
from wrf import omp_enabled
print(omp_enabled())
# +
from __future__ import print_function
from wrf import omp_get_num_procs
print(omp_get_num_procs())
# +
from __future__ import print_function
from wrf import omp_set_num_threads, omp_get_max_threads
omp_set_num_threads(4)
print(omp_get_max_threads())
# +
from __future__ import print_function
from wrf import omp_set_schedule, omp_get_schedule, OMP_SCHED_GUIDED
omp_set_schedule(OMP_SCHED_GUIDED, 0)
sched, modifier = omp_get_schedule()
print(sched, modifier)
# -
# # Loop and Fill Technique
# +
from __future__ import print_function, division
import numpy as np
from netCDF4 import Dataset
from wrf import getvar, ALL_TIMES
filename_list = ["/Users/ladwig/Documents/wrf_files/wrf_vortex_multi/wrfout_d02_2005-08-28_00:00:00",
"/Users/ladwig/Documents/wrf_files/wrf_vortex_multi/wrfout_d02_2005-08-28_12:00:00",
"/Users/ladwig/Documents/wrf_files/wrf_vortex_multi/wrfout_d02_2005-08-29_00:00:00"]
# Result shape (hardcoded for this example, modify as necessary)
result_shape = (9, 29, 96, 96)
# Only need 4-byte floats
z_final = np.empty(result_shape, np.float32)
# Number of times stored in each file (modify as necessary)
times_per_file = 4
for timeidx in range(result_shape[0]):
# Compute the file index and the time index inside the file
fileidx = timeidx // times_per_file
file_timeidx = timeidx % times_per_file
f = Dataset(filename_list[fileidx])
z = getvar(f, "z", file_timeidx)
z_final[timeidx,:] = z[:]
f.close()
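The file/time bookkeeping in the loop above is plain integer division and remainder, which Python exposes as `divmod`; a standalone sketch with the same hypothetical 4-times-per-file layout:

```python
# Map a global time index onto (file index, time index within file),
# assuming 4 times per file as in the loop above.
times_per_file = 4
mapping = [divmod(t, times_per_file) for t in range(9)]
print(mapping[5])  # (1, 1): the 6th time overall is the 2nd time in file 1
```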
# -
# # Using the cache argument
# +
from __future__ import print_function
import time
from netCDF4 import Dataset
from wrf import getvar, ALL_TIMES, extract_vars
wrf_filenames = ["/Users/ladwig/Documents/wrf_files/wrf_vortex_multi/wrfout_d02_2005-08-28_00:00:00",
"/Users/ladwig/Documents/wrf_files/wrf_vortex_multi/wrfout_d02_2005-08-28_12:00:00",
"/Users/ladwig/Documents/wrf_files/wrf_vortex_multi/wrfout_d02_2005-08-29_00:00:00"]
wrfin = [Dataset(x) for x in wrf_filenames]
start = time.time()
my_cache = extract_vars(wrfin, ALL_TIMES, ("P", "PSFC", "PB", "PH", "PHB", "T", "QVAPOR",
"HGT", "U", "V", "W"))
end = time.time()
print ("Time taken to build cache: ", (end-start), "s")
vars = ("avo", "eth", "cape_2d", "cape_3d", "ctt", "dbz", "mdbz",
"geopt", "helicity", "lat", "lon", "omg", "p", "pressure",
"pvo", "pw", "rh2", "rh", "slp", "ter", "td2", "td", "tc",
"theta", "tk", "tv", "twb", "updraft_helicity", "ua", "va",
"wa", "uvmet10", "uvmet", "z", "cfrac", "zstag", "geopt_stag")
start = time.time()
for var in vars:
v = getvar(wrfin, var, ALL_TIMES)
end = time.time()
no_cache_time = (end-start)
print ("Time taken without variable cache: ", no_cache_time, "s")
start = time.time()
for var in vars:
v = getvar(wrfin, var, ALL_TIMES, cache=my_cache)
end = time.time()
cache_time = (end-start)
print ("Time taken with variable cache: ", cache_time, "s")
improvement = ((no_cache_time-cache_time)/no_cache_time) * 100
print ("The cache decreased computation time by: ", improvement, "%")
# -
# # Using the cache argument with OpenMP
# +
from __future__ import print_function
import time
from netCDF4 import Dataset
from wrf import getvar, ALL_TIMES, extract_vars, omp_set_num_threads, omp_get_num_procs
wrf_filenames = ["/Users/ladwig/Documents/wrf_files/wrf_vortex_multi/wrfout_d02_2005-08-28_00:00:00",
"/Users/ladwig/Documents/wrf_files/wrf_vortex_multi/wrfout_d02_2005-08-28_12:00:00",
"/Users/ladwig/Documents/wrf_files/wrf_vortex_multi/wrfout_d02_2005-08-29_00:00:00"]
wrfin = [Dataset(x) for x in wrf_filenames]
start = time.time()
my_cache = extract_vars(wrfin, ALL_TIMES, ("P", "PSFC", "PB", "PH", "PHB", "T", "QVAPOR",
"HGT", "U", "V", "W"))
end = time.time()
print ("Time taken to build cache: ", (end-start), "s")
omp_set_num_threads(omp_get_num_procs())
vars = ("avo", "eth", "cape_2d", "cape_3d", "ctt", "dbz", "mdbz",
"geopt", "helicity", "lat", "lon", "omg", "p", "pressure",
"pvo", "pw", "rh2", "rh", "slp", "ter", "td2", "td", "tc",
"theta", "tk", "tv", "twb", "updraft_helicity", "ua", "va",
"wa", "uvmet10", "uvmet", "z", "cfrac", "zstag", "geopt_stag")
start = time.time()
for var in vars:
v = getvar(wrfin, var, ALL_TIMES)
end = time.time()
no_cache_time = (end-start)
print ("Time taken without variable cache: ", no_cache_time, "s")
start = time.time()
for var in vars:
v = getvar(wrfin, var, ALL_TIMES, cache=my_cache)
end = time.time()
cache_time = (end-start)
print ("Time taken with variable cache: ", cache_time, "s")
improvement = ((no_cache_time-cache_time)/no_cache_time) * 100
print ("The cache decreased computation time by: ", improvement, "%")
omp_set_num_threads(1)
| test/ipynb/Doc_Examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext sql
# %sql postgresql://huiren:1234@127.0.0.1/postgres
# %sql SELECT * FROM songplays LIMIT 5;
# %sql SELECT * FROM users LIMIT 5;
# %sql SELECT * FROM songs LIMIT 5;
# %sql SELECT * FROM artists LIMIT 5;
# %sql SELECT * FROM time LIMIT 5;
| 1. Data-Modeling-with-Postgres-and-Apache-Cassandra/Data Modeling with Postgres/test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns # improves plot aesthetics
# +
def _invert(x, limits):
"""inverts a value x on a scale from
limits[0] to limits[1]"""
return limits[1] - (x - limits[0])
def _scale_data(data, ranges):
"""scales data[1:] to ranges[0],
inverts if the scale is reversed"""
for d, (y1, y2) in zip(data[1:], ranges[1:]):
assert (y1 <= d <= y2) or (y2 <= d <= y1)
x1, x2 = ranges[0]
d = data[0]
if x1 > x2:
d = _invert(d, (x1, x2))
x1, x2 = x2, x1
sdata = [d]
for d, (y1, y2) in zip(data[1:], ranges[1:]):
if y1 > y2:
d = _invert(d, (y1, y2))
y1, y2 = y2, y1
sdata.append((d-y1) / (y2-y1)
* (x2 - x1) + x1)
return sdata
class ComplexRadar():
def __init__(self, fig, variables, ranges,
n_ordinate_levels=6):
angles = np.arange(0, 360, 360./len(variables))
axes = [fig.add_axes([0.1,0.1,0.9,0.9], polar=True,
label = "axes{}".format(i))
for i in range(len(variables))]
l, text = axes[0].set_thetagrids(angles, labels=variables)
for txt, angle in zip(text, angles):
if 0 <= angle < 90 or 270 <= angle < 360:
txt.set_horizontalalignment('left')
elif 90 <= angle < 270:
txt.set_horizontalalignment('right')
for ax in axes[1:]:
ax.patch.set_visible(False)
ax.grid(False)
ax.xaxis.set_visible(False)
for i, ax in enumerate(axes):
grid = np.linspace(*ranges[i],
num=n_ordinate_levels)
gridlabel = ["{}".format(round(x,2)) for x in grid]
if np.count_nonzero(np.equal(np.mod(grid, 1), 0)) == len(grid):
gridlabel = ["{:d}".format(x) for x in grid.astype(np.int16)]
if ranges[i][0] > ranges[i][1]:
grid = grid[::-1] # hack to invert grid
gridlabel = gridlabel[::-1]
elif ranges[i][0] < ranges[i][1]:
gridlabel[0] = "" # clean up origin
if 0 <= angles[i] < 90:
ax.set_rgrids(grid, labels=gridlabel, angle=angles[i],
ha='right', va='top')
elif 90 <= angles[i] < 180:
ax.set_rgrids(grid, labels=gridlabel, angle=angles[i],
ha='left', va='top')
elif 180 <= angles[i] < 270:
ax.set_rgrids(grid, labels=gridlabel, angle=angles[i],
ha='left', va='bottom')
elif 270 <= angles[i] < 360:
ax.set_rgrids(grid, labels=gridlabel, angle=angles[i],
ha='right', va='bottom')
#ax.set_rgrids(grid, labels=gridlabel, angle=angles[i])
#ax.spines["polar"].set_visible(False)
ax.set_ylim(*ranges[i])
# variables for plotting
self.angle = np.deg2rad(np.r_[angles, angles[0]])
self.ranges = ranges
self.ax = axes[0]
def plot(self, data, *args, **kw):
sdata = _scale_data(data, self.ranges)
self.ax.plot(self.angle, np.r_[sdata, sdata[0]], *args, **kw)
def fill(self, data, *args, **kw):
sdata = _scale_data(data, self.ranges)
self.ax.fill(self.angle, np.r_[sdata, sdata[0]], *args, **kw)
# -
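The `_invert` helper above mirrors a value across its axis limits so that axes declared with reversed ranges (such as the GBW and slew-rate axes in the example data below) plot in the intended direction; a quick standalone check of that arithmetic:

```python
# Standalone check of the scale inversion used by _invert:
# a value x on a scale (lo, hi) maps to hi - (x - lo), its mirror image.
def invert(x, limits):
    return limits[1] - (x - limits[0])

print(invert(2.0, (0.0, 10.0)))   # 8.0
print(invert(10.0, (0.0, 10.0)))  # 0.0
```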
# example data
variables = ("Supply Current [µA]", "Offset Voltage [mV]", "GBW [MHz]",
"Bias Current [nA]", "Offset Current [nA]", "Slew Rate [V/µs]")
lmv324 = (410, 1.7, 1, 15, 5, 1)
lmv554 = (148, 1, 3, 20, 1, 1)
opa4191 = (560, 0.01, 2.2, 0.005, 0.002, 6)
ltc6255 = (260, 0.1, 6.5, 5, 2, 1.8)
ranges = [(0, 560), (0, 1.7), (6.5, 0), (0, 20), (0, 5), (6, 0)]
# plotting
fig1 = plt.figure(figsize=(6, 6))
radar = ComplexRadar(fig1, variables, ranges)
radar.plot(lmv324, label='LMV324 (original)')
radar.fill(lmv324, alpha=0.2)
radar.plot(lmv554, label='LMV554')
radar.fill(lmv554, alpha=0.2)
radar.plot(opa4191, label='OPA4191')
radar.fill(opa4191, alpha=0.2)
radar.plot(ltc6255, label='LTC6255')
radar.fill(ltc6255, alpha=0.2)
fig1.legend(bbox_to_anchor=(0.3,1))
plt.title('Op Amp comparison', fontsize=16)
plt.show()
x = np.linspace(*(0, 560), num=6)
np.equal(np.mod(x, 1), 0)
| radar_plot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="RgwRgmgGh-5n" colab_type="code" outputId="85ac01fe-b54b-471a-c1d8-1dc6231b61f4" colab={"base_uri": "https://localhost:8080/", "height": 34}
from sklearn.datasets import load_files
from keras.utils import np_utils
import numpy as np
from glob import glob
# + id="7UgRqko4iBll" colab_type="code" outputId="d3de24b1-744c-4f68-f2eb-4c29aae36d9b" colab={"base_uri": "https://localhost:8080/", "height": 124}
from google.colab import drive
drive.mount('/content/drive')
# + id="cCFnrAYYjJmU" colab_type="code" colab={}
def load_dataset(path):
data = load_files(path)
files = np.array(data['filenames'])
targets = np_utils.to_categorical(np.array(data['target']), 4)
return files, targets
train_files, train_targets = load_dataset('/content/drive/My Drive/bedsore_data/train')
valid_files, valid_targets = load_dataset('/content/drive/My Drive/bedsore_data/test')
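`np_utils.to_categorical` one-hot encodes the integer class labels into an `(n_samples, 4)` matrix; the equivalent operation in plain NumPy, shown here as a sketch:

```python
import numpy as np

# One-hot encode integer class labels (equivalent to to_categorical):
# row i has a 1 in column labels[i] and 0 elsewhere.
labels = np.array([0, 2, 3, 1])
one_hot = np.eye(4)[labels]
print(one_hot[1])  # [0. 0. 1. 0.]
```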
# + id="tum4v46uoP_u" colab_type="code" colab={}
# Strip the directory prefix and trailing slash to leave just the class name
wound_classes = [item[43:-1] for item in sorted(glob("/content/drive/My Drive/bedsore_data/train/*/"))]
# + id="-5pnPd0_pFU8" colab_type="code" outputId="f20f699e-3c23-41cf-8125-ff95b9d354ad" colab={"base_uri": "https://localhost:8080/", "height": 121}
print('There are %d total categories.' % len(wound_classes))
print(wound_classes)
print('There are %s total images.\n' % len(np.hstack([train_files, valid_files])))
print('There are %d training images.' % len(train_files))
print('There are %d validation images.' % len(valid_files))
# + id="2CdJMvNOpPye" colab_type="code" outputId="ba103a71-316f-4631-9147-235c6488844d" colab={"base_uri": "https://localhost:8080/", "height": 1000}
print(train_files)
# + id="sOZtqzoepcCQ" colab_type="code" outputId="d1f0068c-b3d3-433d-cc04-dd7778ed0e44" colab={"base_uri": "https://localhost:8080/", "height": 350}
from keras.applications.resnet50 import ResNet50
ResNet50_model = ResNet50(weights='imagenet')
# + id="879FCYmopo2u" colab_type="code" colab={}
from keras.preprocessing import image
from tqdm import tqdm
def path_to_tensor(img_path, width=224, height=224):
img = image.load_img(img_path, target_size=(width, height))
x = image.img_to_array(img)
return np.expand_dims(x, axis=0)
def paths_to_tensor(img_paths, width=224, height=224):
list_of_tensors = [path_to_tensor(img_path, width, height) for img_path in tqdm(img_paths)]
return np.vstack(list_of_tensors)
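`path_to_tensor` adds a leading batch axis because Keras models predict on batches rather than single images; the shape change in isolation:

```python
import numpy as np

# A single (224, 224, 3) image becomes a (1, 224, 224, 3) batch of one.
img = np.zeros((224, 224, 3), dtype=np.float32)
batch = np.expand_dims(img, axis=0)
print(batch.shape)  # (1, 224, 224, 3)
```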
# + id="8nnaxYrBpwFD" colab_type="code" colab={}
from keras.applications.resnet50 import preprocess_input, decode_predictions
def ResNet50_predict_labels(img_path):
img = preprocess_input(path_to_tensor(img_path))
return np.argmax(ResNet50_model.predict(img))
# + id="oc2neqNbp6HB" colab_type="code" colab={}
import keras
import timeit
def show_history_graph(history):
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
class EpochTimer(keras.callbacks.Callback):
train_start = 0
train_end = 0
epoch_start = 0
epoch_end = 0
def get_time(self):
return timeit.default_timer()
def on_train_begin(self, logs={}):
self.train_start = self.get_time()
def on_train_end(self, logs={}):
self.train_end = self.get_time()
print('Training took {} seconds'.format(self.train_end - self.train_start))
def on_epoch_begin(self, epoch, logs={}):
self.epoch_start = self.get_time()
def on_epoch_end(self, epoch, logs={}):
self.epoch_end = self.get_time()
print('Epoch {} took {} seconds'.format(epoch, self.epoch_end - self.epoch_start))
# + id="6j0Mit7lqGXB" colab_type="code" colab={}
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
# + id="_vR3WCNdqKJt" colab_type="code" outputId="f0cf8d4b-90de-4aec-ac57-69121f141b5c" colab={"base_uri": "https://localhost:8080/", "height": 52}
train_tensors = paths_to_tensor(train_files).astype('float32')/255
valid_tensors = paths_to_tensor(valid_files).astype('float32')/255
# + id="pJHJ30adqOUS" colab_type="code" colab={}
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from keras.layers import Dropout, Flatten, Dense
from keras.models import Sequential
from keras.models import Model
from keras.callbacks import ModelCheckpoint
import matplotlib.pyplot as plt
# %matplotlib inline
# + id="MK0DFk-qqZZg" colab_type="code" colab={}
from keras.applications.inception_v3 import preprocess_input
from tqdm import tqdm
def paths_to_inception_tensor(img_paths, width=299, height=299):
list_of_tensors = [preprocess_input(path_to_tensor(img_path, width, height)) for img_path in tqdm(img_paths)]
return np.vstack(list_of_tensors)
# + id="URWkMbv1qgjN" colab_type="code" outputId="53a63580-0f3b-47c5-97e7-3594c30962fd" colab={"base_uri": "https://localhost:8080/", "height": 52}
from keras.applications.inception_v3 import InceptionV3, preprocess_input
from keras.preprocessing.image import ImageDataGenerator
img_width, img_height = 299, 299
batch_size = 8
num_classes = 4
train_dir = '/content/drive/My Drive/bedsore_data/train'
valid_dir = '/content/drive/My Drive/bedsore_data/test'
train_datagen = ImageDataGenerator(
preprocessing_function=preprocess_input,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True
)
validation_datagen = ImageDataGenerator(
preprocessing_function=preprocess_input,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True
)
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
)
validation_generator = validation_datagen.flow_from_directory(
valid_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
)
# + id="CzqOGaqBqszh" colab_type="code" outputId="9ebbfe7e-61a7-4601-8399-fd9e213847e5" colab={"base_uri": "https://localhost:8080/", "height": 1000}
import math
base_model = InceptionV3(weights='imagenet', include_top=False)
for layer in base_model.layers:
layer.trainable = False
output = base_model.output
output = GlobalAveragePooling2D()(output)
top_layers = Dense(4, activation='softmax')(output)
finetune_model = Model(inputs=base_model.input, outputs=top_layers)
finetune_model.summary()
finetune_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# + id="UnMsqEBRq5Kf" colab_type="code" outputId="c0474a9c-df70-4668-80f4-50dfa0d3b110" colab={"base_uri": "https://localhost:8080/", "height": 1000}
steps = math.ceil(train_tensors.shape[0]/batch_size)
validation_steps = math.ceil(valid_tensors.shape[0]/batch_size)
print('Training Samples: {} Validation Samples: {} Batch Size: {} Steps: {}'.format(
train_tensors.shape[0], valid_tensors.shape[0], batch_size, steps))
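`steps_per_epoch` is the ceiling of the sample count over the batch size, so the final partial batch is still counted; for example, with hypothetical counts:

```python
import math

# 100 samples in batches of 8: 12 full batches plus one final batch of 4.
num_samples, batch_size = 100, 8
steps = math.ceil(num_samples / batch_size)
print(steps)  # 13
```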
epochs = 20
epochtimer = EpochTimer()
hist = finetune_model.fit_generator(
train_generator,
validation_data=validation_generator,
epochs=epochs,
steps_per_epoch=steps,
validation_steps=validation_steps,
callbacks=[epochtimer])
show_history_graph(hist)
top_model_file = 'saved_models/weights.best.{}.hdf5'.format('inceptionv3_top')
finetune_model.save(top_model_file)
# + id="kR_X6CJ6vPwz" colab_type="code" colab={}
# To evaluate on a held-out test set, uncomment the following
'''
test_files, test_targets = load_dataset('<directory of test images>')
test_tensors = paths_to_tensor(test_files).astype('float32')/255
inception_test_tensors = paths_to_inception_tensor(test_files)
predictions = [np.argmax(finetune_model.predict(np.expand_dims(feature, axis=0))) for feature in inception_test_tensors]
test_accuracy = 100*np.sum(np.array(predictions)==np.argmax(test_targets, axis=1))/len(predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
'''
# + id="bRCQfkkwfcu-" colab_type="code" colab={}
'''
# To fine-tune the entire model rather than just the top layer, uncomment this block. Training all layers requires substantially more compute, and more data to avoid overfitting.
from keras.optimizers import SGD
NB_IV3_LAYERS_TO_FREEZE = 172
for layer in finetune_model.layers[:NB_IV3_LAYERS_TO_FREEZE]:
layer.trainable = False
for layer in finetune_model.layers[NB_IV3_LAYERS_TO_FREEZE:]:
layer.trainable = True
finetune_model.summary()
finetune_model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
steps = math.ceil(train_tensors.shape[0]/batch_size)
validation_steps = math.ceil(valid_tensors.shape[0]/batch_size)
print('Training Samples: {} Validation Samples: {} Batch Size: {} Steps: {}'.format(
train_tensors.shape[0], valid_tensors.shape[0], batch_size, steps))
epochs = 20
epochtimer = EpochTimer()
hist = finetune_model.fit_generator(
train_generator,
validation_data=validation_generator,
epochs=epochs,
steps_per_epoch=steps,
validation_steps=validation_steps,
callbacks=[epochtimer])
show_history_graph(hist)
finetune_model_file = 'saved_models/weights.best.{}.hdf5'.format('inceptionv3_finetune')
finetune_model.save(finetune_model_file)
predictions = [np.argmax(finetune_model.predict(np.expand_dims(feature, axis=0))) for feature in inception_test_tensors]
test_accuracy = 100*np.sum(np.array(predictions)==np.argmax(test_targets, axis=1))/len(predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
finetune_model_file = 'saved_models/weights.best.{}.hdf5'.format('inceptionv3_finetune')
finetune_model.load_weights(finetune_model_file)
from keras.applications.inception_v3 import InceptionV3
from keras.applications.inception_v3 import preprocess_input as preprocess_inception_input
def extract_InceptionV3(tensor):
return InceptionV3(weights='imagenet', include_top=False).predict(preprocess_inception_input(tensor))
def burn_classifier(img_path, selected_model, bottleneck=True, img_width=223, img_height=223):
    # Load the image as a 4D tensor, apply the same Inception preprocessing
    # used at training time, then return the top-scoring class name.
    # Note: `_names` (the class-name list) is defined elsewhere in the notebook.
    tensor = path_to_tensor(img_path, img_width, img_height)
    tensor = preprocess_inception_input(tensor)
    predictions = selected_model.predict(tensor)
    y_hat = np.argmax(predictions)
    return _names[y_hat]
| BedsoreDetection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.2 64-bit (''playground'': conda)'
# name: python38264bitplaygroundconda181cc4e8a1f74f20aba28f8bf4ca7131
# ---
import pandas as pd
import numpy as np
data = pd.read_csv('demo.csv')
data
| ML Notebooks/00--Practice Work/on vscode/pandas/jupyterinvscode.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Chinmay2911/Basic-Image-Classification-Tensorflow-Keras/blob/main/Basic_Image_Classification_Tensorflow_%26_Keras.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="zHttLO6mcIvg"
# IMAGE CLASSIFICATION
#
# This guide trains a neural network model to classify images of clothing, like sneakers and shirts. It uses tf.keras, a high-level API to build and train models in TensorFlow.
# + id="773vcrj6abVx"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
# + colab={"base_uri": "https://localhost:8080/"} id="E7TVHIq7alZ3" outputId="f5ebb130-aaac-41ba-a363-1641cd2566e3"
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# + [markdown] id="Ryie6eFxa6YE"
# Each image is mapped to a single label.
#
# Label Class: 0 - T-shirt/top, 1 - Trouser, 2 - Pullover, 3 - Dress, 4 - Coat, 5 - Sandal, 6 - Shirt, 7 - Sneaker, 8 - Bag, 9 - Ankle boot
# + id="uaWDuz8ya2S8"
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
# + [markdown] id="CfpQ387XbA3A"
# Explore the data
#
# Explore the format of the dataset before training the model.
# + colab={"base_uri": "https://localhost:8080/"} id="3JrZc9doa-Pj" outputId="5130aaf1-13e3-4f5e-f9eb-fce3384ef94f"
train_images.shape
# + colab={"base_uri": "https://localhost:8080/"} id="ucOmjuaUbDyD" outputId="ad6a819f-7d3f-4383-d894-07aae60d7121"
len(train_labels)
# + colab={"base_uri": "https://localhost:8080/"} id="_zek0tu1bFLr" outputId="36a66b07-20b3-4547-8737-33187e004a32"
train_labels
# + colab={"base_uri": "https://localhost:8080/"} id="7_9ZFViLbGbD" outputId="3f2f1e4c-2340-46e4-a9b0-1879f183a37a"
len(test_labels)
# + [markdown] id="GeakYkeQbJwZ"
# Preprocess the data
#
# The data must be preprocessed before training the network.
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="7A8QDXzWbHyk" outputId="9a282006-04c0-43e5-e743-2a0b410d51de"
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
# + [markdown] id="uimH-9OpbOkp"
# Scale these values to a range of 0 to 1 before feeding them to the neural network model. To do so, we divide the values by 255. It's important that the training set and the testing set be preprocessed in the same way.
# + id="MlCBbPMkbMZD"
train_images = train_images / 255.0
test_images = test_images / 255.0
# + [markdown] id="Rl052tX0bUJh"
# To verify that the data is in the correct format and that you're ready to build and train the network, let's display the first 25 images from the training set and display the class name below each image.
# + colab={"base_uri": "https://localhost:8080/", "height": 589} id="sfkboVFGbSAr" outputId="95185773-2e74-4e43-a755-0218173f6282"
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
# + [markdown] id="17kNUaXKbZTv"
# Build the model
#
# Building the neural network requires configuring the layers of the model, then compiling the model. The basic building block of a neural network is the layer. Layers extract representations from the data fed into them.
# + id="OyLbWQT7bWpL"
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
# + [markdown] id="OWow4h85bdq1"
# Compile the model
#
# Before the model is ready for training, it needs a few more settings. These are added during the model's compile step.
# + id="JC23mQMdbbyk"
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
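Since the model's last Dense layer has no activation, its outputs are raw logits; `from_logits=True` tells the loss to apply softmax internally before taking the negative log-probability of the true class. A minimal pure-Python sketch of that computation (illustrative only, not Keras's actual implementation):

```python
import math

def sparse_ce_from_logits(logits, label):
    # Softmax over the raw scores, then negative log-probability
    # of the true class -- what from_logits=True asks the loss to do.
    exps = [math.exp(z) for z in logits]
    prob = exps[label] / sum(exps)
    return -math.log(prob)

# Equal logits over 3 classes -> uniform probabilities -> loss = ln(3)
print(round(sparse_ce_from_logits([0.0, 0.0, 0.0], 1), 4))  # 1.0986
```

With equal logits the predicted distribution is uniform, so the loss is ln(num_classes); a large logit on the true class drives the loss toward zero.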
# + [markdown] id="Drx0cZ_NbhVF"
# TRAIN THE MODEL
#
# Feed the training data to the model. In this example, the training data is in the train_images and train_labels arrays. Verify that the predictions match the labels from the test_labels array.
# + colab={"base_uri": "https://localhost:8080/"} id="lQZmd2pfbf2E" outputId="b387cca0-6dc0-42a4-c546-8ab75cabbb69"
model.fit(train_images, train_labels, epochs=10)
# + [markdown] id="byuSZvs9bsbg"
# This model reaches an accuracy of about 0.91 (or 91%) on the training data.
# + [markdown] id="WTzqvqogbvqR"
# Evaluate accuracy
#
# Next, compare how the model performs on the test dataset:
# + colab={"base_uri": "https://localhost:8080/"} id="7qRQJgvObjZ0" outputId="279b91ab-b29c-4db8-d826-5055f6b4cf89"
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
# + [markdown] id="IXaG9DuPb0WX"
# Make predictions
#
# With the model trained, you can use it to make predictions about some images.
# + id="UwOAQkelbyGF"
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
# + id="2f5cotYNb4zc"
predictions = probability_model.predict(test_images)
# + colab={"base_uri": "https://localhost:8080/"} id="gkrF3khJb6ZM" outputId="c5b91094-b9bc-4399-802b-5d15911294c9"
predictions[0]
# + [markdown] id="OBOz_2Q0b_SC"
# A prediction is an array of 10 numbers. They represent the model's "confidence" that the image corresponds to each of the 10 different articles of clothing.
#
# A test accuracy of 0.88 was achieved, which could be improved further.
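Turning a prediction vector into a single label is just an argmax over the ten confidences (what `np.argmax(predictions[0])` would do); a pure-Python sketch with made-up confidence values, not real model output:

```python
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

def predicted_label(confidences, names):
    # Index of the largest confidence -- equivalent to np.argmax.
    best = max(range(len(confidences)), key=lambda i: confidences[i])
    return names[best]

# Hypothetical confidence values; index 9 is the largest.
scores = [0.01, 0.0, 0.0, 0.02, 0.0, 0.05, 0.01, 0.1, 0.01, 0.8]
print(predicted_label(scores, class_names))  # Ankle boot
```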
# + id="cPsbhX2Qb8_0"
| Basic_Image_Classification_Tensorflow_&_Keras.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/EOHFA-GOAT/heart-disease-analysis/blob/master/Analyzing_Heart_Disease.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="gM37V9_qUXlm" colab_type="text"
# # Analyzing Heart Disease
# Hello! I'll be exploring the [heart disease dataset](https://archive.ics.uci.edu/ml/datasets/heart+Disease) provided by the University of California, Irvine. The database that this set came from contains 76 attributes, but the set itself only contains 14.
# #Acknowledgements
# Creators:
#
# Hungarian Institute of Cardiology. Budapest: <NAME>, M.D.
#
# University Hospital, Zurich, Switzerland: <NAME>, M.D.
#
# University Hospital, Basel, Switzerland: <NAME>, M.D.
#
# V.A. Medical Center, Long Beach and Cleveland Clinic Foundation: <NAME>, M.D., Ph.D.
#
# Donor:
# <NAME> (aha '@' ics.uci.edu) (714) 856-8779
# #The Attributes
# 1. Age
# 2. Sex
# 1 = male
# 0 = female
# 3. Chest pain (CP)
# Value 0: asymptomatic
# Value 1: atypical angina
# Value 2: non-anginal pain
# Value 3: typical angina
# 4. trestbps
# Resting blood pressure (in mm Hg on admission to the hospital)
# 5. chol
# Serum cholestorol in mg/dl
# 6. fbs (Fasting blood sugar)
# (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false)
# 7. restecg - Resting electrocardiographic results
# 8. thalach - Maximum heart rate achieved
# 9. exang - Exercise induced angina (1= Yes, 0 = No)
# 10. oldpeak - ST depression induced by exercise relative to rest
# 11. slope - The slope of the peak exercise ST segment
#
# i: Upsloping
#
# ii: Flat
#
# iii: Downsloping
# 12. ca (coloured arteries) - Number of major vessels (0-3) colored by fluoroscopy
# 13. thal - 3 = normal; 6 = fixed defect; 7 = reversable defect
# 14. target - 0 = Heart disease present, 1 = Heart disease absent
#
# #Objective
# 1. Find any correlations between attributes
# 2. Find correlations between each attribute and the diagnosis of heart disease
#
# #Let's Begin!
# + id="PJtF0dEYUMEg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="afea061e-f28c-49a6-9dd1-fe5cf2464252"
#the usual...
import numpy as np
import pandas as pd
import scipy.stats # Needed to compute statistics for categorical data (yep I'm using my AP Stats skills!)
import matplotlib.pyplot as plt
import seaborn as sns
sns.set() # Making sns as default for plots
data = pd.read_csv('./drive/My Drive/heart.csv') #for some reason "from google.colab import files" isn't working for me...
data.head()
# + id="MuFczzOagG-J" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c61f3462-cf9d-449a-c703-c4486171cead"
data.shape
# + id="cYvyxITggZtA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 272} outputId="83743f45-8d6b-4a2a-c5c0-8cf45f446801"
data.isnull().sum()
# + [markdown] id="Dxg8KyLAgcC_" colab_type="text"
# Yay! No NaN or null values!
# #Time for Pairplot
# + id="iiW_u4E-gboG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="1acf6ac4-4430-45f2-ac03-9d94b08f4252"
g = sns.pairplot(data)
g.fig.suptitle('Pair plot', fontsize = 20)
g.fig.subplots_adjust(top= 0.9);
# + [markdown] id="ZEjEq4cOvu1p" colab_type="text"
# #Correlation Matrix
# + id="CtUvNrMlvuUU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 598} outputId="59cbdcb5-569c-44e2-b9dd-ce6cfd9722a9"
plt.figure(figsize=(15,10))
corrMatrix = data.corr()
sns.heatmap(corrMatrix, annot=True)
plt.show()
# + [markdown] id="tRfzcF_FhXoO" colab_type="text"
# #Correlation between age and heart disease
# + id="xYkrXsyGh4XR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 319} outputId="09d59711-65be-4dbe-efb2-d96c4b9224d7"
# Look into distribution by plotting a histogram
plt.figure(figsize=(10,4))
g = sns.countplot(data = data, x = 'age', hue = 'target')
g.legend(title = 'Heart disease patient?', loc='center left', bbox_to_anchor=(1.25, 0.5), ncol=1)
# + [markdown] id="OBh3oB0diNew" colab_type="text"
# Seems like heart disease patients are clustered in their late 50s and 60s
# + id="qbtALPxEiY-T" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 111} outputId="5926483f-cced-4224-d981-655768f12d24"
# Heart disease patients
age_corr = ['age', 'target']
age_corr1 = data[age_corr]
age_corr_y = data[age_corr1['target'] == 0].groupby(['age']).size().reset_index(name = 'count')
age_corr_y.corr()
# + id="bl5Mx1rzikf6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 111} outputId="63074553-07f2-48db-cd2d-ce7147571d66"
# Healthy patients
age_corr_n = age_corr1[age_corr1['target'] == 1].groupby(['age']).size().reset_index(name = 'count')
age_corr_n.corr()
# + [markdown] id="lL1b0KvGinXM" colab_type="text"
# High correlation between heart disease patients and age. It seems like age is a precursor of heart disease.
# + [markdown] id="sLuRTDEQixQ8" colab_type="text"
# #Correlation between heart disease patients and sex
# + id="8XjsJfv8i4l0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 319} outputId="070094c2-3180-41e7-a471-22ff305fae08"
# Look into distribution by plotting a histogram
plt.figure(figsize=(10,4))
g = sns.countplot(data = data, x = 'sex', hue = 'target')
g.legend(title = 'Heart disease patient?', loc='center left', bbox_to_anchor=(1.25, 0.5), ncol=1)
# + [markdown] id="awYd9NktjDdc" colab_type="text"
# **Where 1 is male, and 0 is female
# + id="cH_arNBmjIfc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 111} outputId="d1e015b8-68e0-4719-84e7-357750fd1f15"
sex_corr = ['sex', 'target']
sex_corr1 = data[sex_corr]
sex_corr_y = data[sex_corr1['target'] == 0].groupby(['sex']).size().reset_index(name = 'count')
sex_corr_y.corr()
# + id="VBKqBBqmjZCr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 111} outputId="145ab171-f7b0-47b4-c6e6-37e25479afc0"
sex_corr_n = sex_corr1[sex_corr1['target'] == 1].groupby(['sex']).size().reset_index(name = 'count')
sex_corr_n.corr()
# + [markdown] id="G7s4tOjpj9K0" colab_type="text"
# #Chi-square test
# Sex is a categorical variable. Target, which tells us whether the patient has heart disease or not, is also categorical. To compute the correlation between two categorical variables, we need to use the Chi-Square test. We will be using a 95% confidence level (a 0.05 significance threshold for the p-value).
#
# The null hypothesis is that they are independent.
# The alternative hypothesis is that they are correlated in some way.
# + id="QYjRkjqfkZ63" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="81978f5a-3e7c-444e-8c93-0c541aedfe3f"
cont = pd.crosstab(data["sex"],data["target"])
scipy.stats.chi2_contingency(cont)
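As a sanity check on what `scipy.stats.chi2_contingency` reports, the Pearson chi-square statistic can be computed by hand from observed and expected counts. A minimal pure-Python sketch (the 2x2 counts below are made up, not the actual sex/target cross-tabulation; scipy additionally applies a continuity correction for 2x2 tables by default and converts the statistic to a p-value):

```python
def chi_square_stat(table):
    # Pearson chi-square statistic for a 2D contingency table:
    # sum over cells of (observed - expected)^2 / expected, where
    # expected = row_total * col_total / grand_total under independence.
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Perfectly proportional table -> statistic of 0 (exact independence).
print(chi_square_stat([[10, 20], [30, 60]]))  # 0.0
```

A statistic of 0 means the observed table exactly matches what independence predicts; larger statistics yield smaller p-values.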
# + [markdown] id="HdAahaGYoWu2" colab_type="text"
# I performed the test and obtained a p-value < 0.05, so I can reject the hypothesis of independence. But is there truly a correlation between sex and heart disease? Well, I can't really accept this result, mainly for one reason: the data for healthy females is too low. I only have 24 healthy female individuals. If I were to push that number up to, say, 94, I would get a much higher p-value. Hence, I feel there is little point in performing a correlation analysis when the group sizes are this unbalanced.
# + [markdown] id="hO3ZPozrpWN3" colab_type="text"
# #Correlation between chest pain and heart disease
# + id="0wCl73ngpdSF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="2e9bd1f9-27e5-4f0b-86e9-fd74e0467f15"
# Chi-square test
cont1 = pd.crosstab(data["cp"],data["target"])
scipy.stats.chi2_contingency(cont1)
# + [markdown] id="HRPFh_kLppAq" colab_type="text"
# Seems like chest pain is correlated to heart disease.
# + [markdown] id="VVcZOUKzqxRQ" colab_type="text"
# #Correlation between resting blood pressure and heart disease
# + id="EUuytc1Nqzfo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 111} outputId="d5a07608-4762-4b8e-f1e8-0575724ceb73"
restbp_corr = ['trestbps', 'target']
restbp_corr1 = data[restbp_corr]
restbp_corr_y = restbp_corr1[restbp_corr1['target'] == 0].groupby(['trestbps']).size().reset_index(name = 'count')
restbp_corr_y.corr()
# + id="wUfYwdsZq7mS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 111} outputId="a3a9bb9c-bc08-494e-d71a-8fde7a659d10"
restbp_corr_n = restbp_corr1[restbp_corr1['target'] == 1].groupby(['trestbps']).size().reset_index(name = 'count')
restbp_corr_n.corr()
# + [markdown] id="GbMd-YpUq_-M" colab_type="text"
# This shows that heart disease is correlated to resting blood pressure. If we look back into the Pairplot, we will see that heart disease patients have slightly higher resting blood pressure as compared to healthy patients.
# + [markdown] id="DwnZ7uC5rBb2" colab_type="text"
# #Correlation between serum cholesterol and heart disease
# Here, I am rounding the cholesterol values to the nearest ten. If I don't do that, I'll get tons of groups with count = 1, which will affect the correlation test.
# + id="YS5JSzSUrjUL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 111} outputId="90cca29c-6678-4b80-d0cb-81cc1c8c991f"
# Showing number of heart disease patients based on serum cholesterol
chol_corr = ['chol', 'target']
chol_corr1 = data[chol_corr]
chol_corr2 = chol_corr1.copy()
chol_corr2.chol = chol_corr2.chol.round(decimals=-1)
chol_corr_y = chol_corr2[chol_corr2['target'] == 0].groupby(['chol']).size().reset_index(name = 'count')
chol_corr_y.corr()
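The `.round(decimals=-1)` call above buckets each cholesterol value to the nearest ten; plain Python `round` with a negative `ndigits` behaves the same way (the example values are made up):

```python
# A negative ndigits rounds to the left of the decimal point, so
# round(x, -1) snaps x to the nearest multiple of 10 -- the same
# element-wise behavior as .round(decimals=-1) in pandas.
values = [233, 250, 204, 286]
print([round(v, -1) for v in values])  # [230, 250, 200, 290]
```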
# + id="Kq5JJLtlrm4C" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 111} outputId="b8059aff-8c5f-444c-f6ac-98faeff898ff"
# Showing number of healthy patients based on serum cholesterol
chol_corr_n = chol_corr1[chol_corr1['target'] == 1].groupby(['chol']).size().reset_index(name = 'count')
chol_corr_n.corr()
# + [markdown] id="y-t2iIa1rrVW" colab_type="text"
# No strong correlation between serum cholesterol and heart disease.
# + [markdown] id="ID2w4Ka6rx0b" colab_type="text"
# #Correlation between ECG results and heart disease
# Value 0: showing probable or definite left ventricular hypertrophy by Estes' criteria
#
# Value 1: normal
#
# Value 2: having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV)
# + id="CD56NdXWtE2e" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 142} outputId="02ad6eac-fa5b-42a1-d27c-f6bb13f708ad"
# Showing number of heart disease patients based on resting ECG results
restecg_corr = ['restecg', 'target']
restecg_corr1 = data[restecg_corr]
restecg_corr_y = restecg_corr1[restecg_corr1['target'] == 0].groupby(['restecg']).size().reset_index(name = 'count')
restecg_corr_y
# + id="nqlf1rlutXGC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 142} outputId="60033662-0e96-4b4c-f02f-6eae9d2b264e"
# Showing number of healthy patients based on resting ECG results
restecg_corr_n = restecg_corr1[restecg_corr1['target'] == 1].groupby(['restecg']).size().reset_index(name = 'count')
restecg_corr_n
# + id="WvKfXtO2tY7G" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="065dd08b-a803-403d-ef54-34c0aa5355a6"
# Chi-square test
cont4 = pd.crosstab(data["restecg"],data["target"])
scipy.stats.chi2_contingency(cont4)
# + [markdown] id="aZoKdeFZtfxK" colab_type="text"
# I obtained a p-value of 0.00666. This shows that there is a correlation between the various types of ECG results and heart disease. I also see a huge difference in normal ECG counts between healthy and heart disease patients.
# + [markdown] id="AL1o1FnZt1T7" colab_type="text"
# #Correlation between maximum heart rate and heart disease
# + id="7oNyNj8vt637" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 111} outputId="cdb79a72-540a-48c7-ce90-5e59f2eba929"
# Showing number of heart disease patients based on maximum heart rate
heartrate_corr = ['thalach', 'target']
heartrate_corr1 = data[heartrate_corr]
heartrate_corr_y = heartrate_corr1[heartrate_corr1['target'] == 0].groupby(['thalach']).size().reset_index(name = 'count')
heartrate_corr_y.corr()
# + id="ejtdsEO1t93a" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 111} outputId="5ca03f68-d74d-4c41-d345-68b19196585d"
heartrate_corr_n = heartrate_corr1[heartrate_corr1['target'] == 1].groupby(['thalach']).size().reset_index(name = 'count')
heartrate_corr_n.corr()
# + [markdown] id="VFyFYgQAuEWE" colab_type="text"
# No strong correlation between maximum heart rate and heart disease. If I look into the distribution, I do see close similarity in maximum heart rate in both heart disease patients and healthy patients.
# + [markdown] id="s7jLepuIuQKZ" colab_type="text"
# #Conclusion
# From the results, I can confidently say that resting ECG results, resting blood pressure and types of chest pains are correlated to heart disease. Also, although I do see a correlation when performing Chi-Square test on the gender attribute, the huge difference in healthy female data posed a huge concern for its accuracy.
#
# Thanks for viewing!
#
# <NAME>
#
# High School Senior
#
# 30 July 2020
#
#
| Analyzing_Heart_Disease.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + hideCode=true hideOutput=true
import os, sys
try:
from synapse.lib.jupyter import *
except ImportError as e:
# Insert the root path of the repository to sys.path.
# This assumes the notebook is located two directories away
# From the root synapse directory. It may need to be varied
synroot = os.path.abspath('../../')
sys.path.insert(0, synroot)
from synapse.lib.jupyter import *
# + active=""
# .. _http-api:
#
# Synapse HTTP/REST API
# =====================
#
# Many components within the Synapse ecosystem provide HTTP/REST APIs to
# provide a portable interface. Some of these APIs are RESTful, while others
# (streaming data APIs) are technically not.
#
# HTTP/REST API Conventions
# -------------------------
#
# All Synapse RESTful APIs use HTTP GET/POST methods to retrieve and modify data.
# All POST requests expect a JSON body. Each RESTful API call will return a
# result wrapper dictionary with one of two conventions.
#
# For a successful API call:
#
# ::
#
# {"status": "ok", "result": "some api result here"}
#
# or for an unsuccessful API call:
#
# ::
#
# {"status": "err", "code": "ErrCodeString", "mesg": "A human friendly message."}
#
# Streaming HTTP API endpoints, such as the interface provided to retrieve nodes
# from a Synapse Cortex, provide JSON results via HTTP chunked encoding where each
# chunk is a single result.
#
# Client example code in these docs uses the python "aiohttp" module and assumes
# familiarity with python asynchronous code conventions. However, they should be
# enough to understand the APIs basic operation.
#
# Authentication
# --------------
#
# While not in "insecure" mode, most Synapse HTTP APIs require an authenticated user.
# HTTP API endpoints requiring authentication may be accessed using either HTTP Basic
# authentication via the HTTP "Authorization" header or as part of an authenticated
# session. For more information on configuring users/roles see TODO.
#
# To create and use an authenticated session, the HTTP client library must support
# cookies.
#
# /api/v1/login
# ~~~~~~~~~~~~~
#
# The login API endpoint may be used to create an authenticated session. This
# session may then be used to call other HTTP API endpoints as the authenticated user.
# + hideOutput=true
import aiohttp
async def logInExample(ssl=False):
async with aiohttp.ClientSession() as sess:
info = {'user': 'visi', 'passwd': '<PASSWORD>'}
async with sess.post(f'https://localhost:56443/api/v1/login', json=info, ssl=ssl) as resp:
item = await resp.json()
if item.get('status') != 'ok':
code = item.get('code')
mesg = item.get('mesg')
raise Exception(f'Login error ({code}): {mesg}')
# we are now clear to make additional HTTP API calls using sess
# + hideCode=true hideOutput=true
prox = await getTempCoreProx()
ret = await prox.addUser('visi', passwd='<PASSWORD>')
sockname = await prox._core.addHttpsPort(56443)
# Ensure that the example doesn't throw an exception
ret = await logInExample()
assert ret is None
_ = await prox.fini()
# + active=""
#
# /api/v1/auth/users
# ~~~~~~~~~~~~~~~~~~
#
# *Method*
# GET
#
# *Returns*
# A list of dictionaries, each of which represents a user on the system.
#
# /api/v1/auth/roles
# ~~~~~~~~~~~~~~~~~~
#
# *Method*
# GET
#
# *Returns*
# A list of dictionaries, each of which represents a role on the system.
#
# /api/v1/auth/adduser
# ~~~~~~~~~~~~~~~~~~~~
#
# *Method*
# POST
#
# This API endpoint allows the caller to add a user to the system.
#
# *Input*
# This API expects the following JSON body::
#
# { "name": "myuser" }
#
# Any additional "user dictionary" fields (other than "iden") may be specified.
#
# *Returns*
# The newly created user dictionary.
#
# /api/v1/auth/addrole
# ~~~~~~~~~~~~~~~~~~~~
#
# *Method*
# POST
#
# This API endpoint allows the caller to add a role to the system.
#
# *Input*
# This API expects the following JSON body::
#
# { "name": "myrole" }
#
# Any additional "role dictionary" fields (other than "iden") may be specified.
#
# *Returns*
# The newly created role dictionary.
#
# /api/v1/auth/delrole
# ~~~~~~~~~~~~~~~~~~~~
#
# *Method*
# POST
#
# This API endpoint allows the caller to delete a role from the system.
#
# *Input*
# This API expects the following JSON body::
#
# { "name": "myrole" }
#
# *Returns*
# null
#
# /api/v1/auth/user/<id>
# ~~~~~~~~~~~~~~~~~~~~~~
#
# *Method*
# POST
#
# This API allows the caller to modify specified elements of a user dictionary.
#
# *Input*
# This API expects a JSON dictionary containing any updated values for the user.
#
# *Returns*
# The updated user dictionary.
#
# *Method*
# GET
#
# This API allows the caller to retrieve a user dictionary.
#
# *Returns*
# A user dictionary.
#
# /api/v1/auth/password/<id>
# ~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# *Method*
# POST
#
# This API allows the caller to change a user's password. The authenticated user must either be an admin or
# the user whose password is being changed.
#
# *Input*
# This API expects a JSON dictionary containing a key ``passwd`` with the new password string.
#
# *Returns*
# The updated user dictionary.
#
#
# /api/v1/auth/role/<id>
# ~~~~~~~~~~~~~~~~~~~~~~
#
# *Method*
# POST
#
# This API allows the caller to modify specified elements of a role dictionary.
#
# *Input*
# This API expects a dictionary containing any updated values for the role.
#
# *Returns*
# The updated role dictionary.
#
# *Method*
# GET
#
# This API allows the caller to retrieve a role dictionary.
#
# *Returns*
# A role dictionary.
#
# /api/v1/auth/grant
# ~~~~~~~~~~~~~~~~~~
#
# *Method*
# POST
#
# This API allows the caller to grant a role to a given user.
#
# *Input*
# This API expects the following JSON body::
#
# {
# "user": "<id>",
# "role": "<id>"
# }
#
# *Returns*
# The updated user dictionary.
#
# /api/v1/auth/revoke
# ~~~~~~~~~~~~~~~~~~~
#
# *Method*
# POST
#
# This API allows the caller to revoke a role which was previously granted to a user.
#
# *Input*
# This API expects the following JSON body::
#
# {
# "user": "<id>",
# "role": "<id>"
# }
#
# *Returns*
# The updated user dictionary.
#
# Cortex
# ------
#
# A Synapse Cortex implements an HTTP API for interacting with the hypergraph and data model. Some
# of the provided APIs are pure REST APIs for simple data model operations and single/simple node
# modification. However, many of the HTTP APIs provided by the Cortex are streaming APIs which use
# HTTP chunked encoding to deliver a stream of results as they become available.
#
# /api/v1/storm
# ~~~~~~~~~~~~~
#
# The Storm API endpoint allows the caller to execute a Storm query on the Cortex and stream
# back the messages generated during the Storm runtime execution. In addition to returning nodes,
# these messages include events for node edits, tool console output, etc.
#
# *Method*
# GET
#
# *Input*
# The API expects the following JSON body::
#
# {
# "query": "a storm query here",
#
# # optionally...
#
# "opts": {
# "repr": <bool>, # Add optional "reprs" field to returned nodes.
# "limit": <int>, # Limit the total number of nodes to be returned.
# "vars": {
# <name>: <value>, # Variables to map into the Storm query runtime.
# },
# "ndefs": [] # A list of [form, valu] tuples to use as initial input.
# "idens": [] # A list of node iden hashes to use as initial input.
# }
# }
#
# *Returns*
# The API returns a series of messages generated by the Storm runtime. Each message is
# returned as an HTTP chunk, allowing readers to consume the resulting nodes as a stream.
#
# Each message has the following basic structure::
#
# [ "type", { ..type specific info... } ]
#
# /api/v1/storm/nodes
# ~~~~~~~~~~~~~~~~~~~
#
# The Storm nodes API endpoint allows the caller to execute a Storm query on the Cortex and stream
# back the resulting nodes. This streaming API has back-pressure, and will handle streaming millions
# of results as the reader consumes them.
#
# *Method*
# GET
#
# *Input*
# See /api/v1/storm for expected JSON body input.
#
# *Returns*
# The API returns the resulting nodes from the input Storm query. Each node is returned
# as an HTTP chunk, allowing readers to consume the resulting nodes as a stream.
#
# Each serialized node will have the following structure::
#
# [
# [<form>, <valu>], # The [ typename, typevalue ] definition of the node.
# {
# "iden": <hash>, # A stable identifier for the node.
# "tags": {}, # The tags on the node.
# "props": {}, # The node's secondary properties.
#
# # optionally (if query opts included {"repr": True}
# "reprs": {} # Presentation values for props which need it.
# }
# ]
#
#
# /api/v1/model
# ~~~~~~~~~~~~~
#
# *Method*
# GET
#
# This API allows the caller to retrieve the current Cortex data model.
#
# *Input*
# The API takes no input.
#
# *Returns*
# The API returns the model in a dictionary, including the types, forms and tagprops. Secondary
# property information is also included for each form::
#
# {
# "types": {
# ... # dictionary of type definitions
# },
# "forms": {
# ... # dictionary of form definitions, including secondary properties
# },
# "tagprops": {
# ... # dictionary of tag property definitions
# }
# }
#
#
# /api/v1/model/norm
# ~~~~~~~~~~~~~~~~~~
#
# *Method*
# GET, POST
#
# This API allows the caller to normalize a value based on the Cortex data model. This may be called via GET or
# POST requests.
#
# *Input*
# The API expects the following JSON body::
#
# {
# "prop": "prop:name:here",
# "value": <value>,
# }
#
# *Returns*
# The API returns the normalized value as well as any parsed subfields or type specific info::
#
# {
# "norm": <value>,
# "info": {
# "subs": {},
# ...
# }
# }
#
# /api/v1/storm/vars/get
# ~~~~~~~~~~~~~~~~~~~~~~
#
# *Method*
# GET
#
# This API allows the caller to retrieve a storm global variable.
#
# *Input*
# The API expects the following JSON body::
#
# {
# "name": "varnamehere",
# "default": null,
# }
#
# *Returns*
# The API returns the global variable value or the specified default using the REST API convention described earlier.
#
# /api/v1/storm/vars/set
# ~~~~~~~~~~~~~~~~~~~~~~
#
# *Method*
# POST
#
# This API allows the caller to set a storm global variable.
#
# *Input*
# The API expects the following JSON body::
#
# {
# "name": "varnamehere",
# "value": <value>,
# }
#
# *Returns*
# The API returns `true` using the REST API convention described earlier.
#
# /api/v1/storm/vars/pop
# ~~~~~~~~~~~~~~~~~~~~~~
#
# *Method*
# POST
#
# This API allows the caller to pop/delete a storm global variable.
#
# *Input*
# The API expects the following JSON body::
#
# {
# "name": "varnamehere",
# "default": <value>,
# }
#
# *Returns*
# The API returns the current value of the variable, or the default, using the REST API convention described earlier.
#
# Axon
# ----
#
# A Synapse Axon implements an HTTP API for uploading and downloading files.
# The HTTP APIs use HTTP chunked encoding for handling large files.
#
# /api/v1/axon/files/put
# ~~~~~~~~~~~~~~~~~~~~~~
#
# This API allows the caller to upload and save a file to the Axon. This may be called via a PUT or POST request.
#
# *Method*
# PUT, POST
#
# *Input*
# The API expects a stream of byte chunks.
#
# *Returns*
# On successful upload, or if the file already existed, the API returns information about the file::
#
# {
# "md5": "<the md5sum value of the uploaded bytes>",
# "sha1": "<the sha1 value of the uploaded bytes>",
# "sha256": "<the sha256 value of the uploaded bytes>",
# "sha512": "<the sha512 value of the uploaded bytes>",
# "size": <the size of the uploaded bytes>
# }
#
#
# /api/v1/axon/files/has/sha256/<SHA-256>
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# This API allows the caller to check if a file exists in the Axon as identified by the SHA-256.
#
# *Method*
# GET
#
# *Returns*
# True if the file exists; False if the file does not exist.
#
#
# /api/v1/axon/files/by/sha256/<SHA-256>
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# This API allows the caller to retrieve a file from the Axon as identified by the SHA-256. If the file does not exist a 404 will be returned.
#
# *Method*
# GET
#
# *Returns*
# If the file exists a stream of byte chunks will be returned to the caller.
#
# -
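# The upload flow can be sketched client-side (a hedged helper: the chunk size is
# illustrative, and the HTTP wiring and authentication are omitted). The caller
# streams the bytes in chunks and can compute the SHA-256 locally to address the
# file later via `/api/v1/axon/files/by/sha256/<SHA-256>`:

```python
import hashlib

def iter_chunks(data: bytes, size: int = 4 * 1024 * 1024):
    """Yield byte chunks suitable for a chunked-encoding PUT/POST of a large file."""
    for off in range(0, len(data), size):
        yield data[off:off + size]

def sha256_hex(data: bytes) -> str:
    """SHA-256 hex digest, as used to address files in the Axon."""
    return hashlib.sha256(data).hexdigest()

payload = b"example file bytes"
digest = sha256_hex(payload)
# a client would PUT the chunks to /api/v1/axon/files/put, then later
# GET /api/v1/axon/files/by/sha256/ + digest to stream the bytes back
assert b"".join(iter_chunks(payload, size=4)) == payload
```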
| docs/synapse/httpapi.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from papers import *
from ads import *
# imports
my_p = PaperExt(ads_id="2015ApJ...799..103Z")
print(my_p.bib_text)
test_lp = ads_get_ref_cit("https://ui.adsabs.harvard.edu/abs/2015ApJ...799..103Z/citations")
test_lp.summary()
with open("tmp_list.pkl", "wb") as f:
pck.dump(test_lp, f)
with open("tmp_list.pkl", "rb") as f:
test_lp = pck.load(f)
print(test_lp.list_paper[0].bib_text)
lp_ads = ListPapersExt(test_lp)
lp_ads.all_std_keywords()
lp_ads._ids_list()
lp_new = lp_ads.filter_std_key_words("Astrophysics - High Energy Astrophysical Phenomena")
lp_new.summary()
lp_add = lp_new + lp_ads
lp_add.summary()
with open("./list_pkls/7_Oct_19_paper_list.pkl", "rb") as f:
lp_arx1 = pck.load(f)
for pp in lp_arx1:
if (pp.arxiv_id==""):
print(pp)
lp_arx = ListPapersExt(lp_arx1)
lp_add = lp_ads + lp_arx
lp_add.summary()
for pp in lp_arx:
if (pp.arxiv_id==""):
print(pp)
lp_add.check_duplicate()
print(type(lp_arx.list_paper[0]))
lp_empty = ListPapers()
a = [1]
dir(a)
# setattr fails on a plain list: list instances have no __dict__, so arbitrary
# attributes cannot be attached to them.
# setattr(a, "llll", "haha")  # raises AttributeError
# requires a Selenium webdriver instance named `driver`; note that
# find_element_by_xpath is removed in Selenium 4 in favor of
# driver.find_element(By.XPATH, ...)
driver.find_element_by_xpath("//meta[@name='citation_title']")
my_p = PaperExt(ads_id="2015ApJ...799..103Z")
lp_ads = ListPapersExt()
t0 = time.time()
lp_deep = lp_ads.dig_deep(my_p, nap=pl_nap(time.time()-t0, 5.), deep=2)
with open("deep_tmp.pkl", "wb") as f:
pck.dump(lp_deep, f)
type(my_p)
| ads_search.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.10 64-bit (''python38'': conda)'
# name: python3
# ---
# # Time Series Forecasting with FLAML Library
# ## 1. Introduction
#
# FLAML is a Python library (https://github.com/microsoft/FLAML) designed to automatically produce accurate machine learning models with low computational cost. It is fast and economical. The simple and lightweight design makes it easy to use and extend, such as adding new learners. FLAML can
#
# - serve as an economical AutoML engine,
# - be used as a fast hyperparameter tuning tool, or
# - be embedded in self-tuning software that requires low latency & resource in repetitive tuning tasks.
#
# In this notebook, we demonstrate how to use the FLAML library for time series forecasting tasks: univariate time series forecasting (only time), multivariate time series forecasting (with exogenous variables) and forecasting discrete values.
#
# FLAML requires Python>=3.7. To run this notebook example, please install flaml with the notebook and forecast option:
#
# %pip install flaml[notebook,ts_forecast]
# avoid flaml versions 1.0.2 to 1.0.5 for this notebook due to a bug in arima's and sarimax's init config
# ## 2. Forecast Problem
#
# ### Load data and preprocess
#
# Import co2 data from statsmodel. The dataset is from “Atmospheric CO2 from Continuous Air Samples at Mauna Loa Observatory, Hawaii, U.S.A.,” which collected CO2 samples from March 1958 to December 2001. The task is to predict monthly CO2 samples given only timestamps.
import statsmodels.api as sm
data = sm.datasets.co2.load_pandas().data
# data is given in weeks, but the task is to predict monthly, so use monthly averages instead
data = data['co2'].resample('MS').mean()
data = data.bfill().ffill() # makes sure there are no missing values
data = data.to_frame().reset_index()
# split the data into a train dataframe and X_test and y_test dataframes, where the number of samples for test is equal to
# the number of periods the user wants to predict
num_samples = data.shape[0]
time_horizon = 12
split_idx = num_samples - time_horizon
train_df = data[:split_idx] # train_df is a dataframe with two columns: timestamp and label
X_test = data[split_idx:]['index'].to_frame() # X_test is a dataframe with dates for prediction
y_test = data[split_idx:]['co2'] # y_test is a series of the values corresponding to the dates for prediction
# ### Run FLAML
#
# In the FLAML automl run configuration, users can specify the task type, time budget, error metric, learner list, whether to subsample, resampling strategy type, and so on. All these arguments have default values which will be used if users do not provide them.
''' import AutoML class from flaml package '''
from flaml import AutoML
automl = AutoML()
settings = {
"time_budget": 240, # total running time in seconds
"metric": 'mape', # primary metric for validation: 'mape' is generally used for forecast tasks
"task": 'ts_forecast', # task type
"log_file_name": 'CO2_forecast.log', # flaml log file
"eval_method": "holdout", # validation method can be chosen from ['auto', 'holdout', 'cv']
"seed": 7654321, # random seed
}
'''The main flaml automl API'''
automl.fit(dataframe=train_df, # training data
label='co2', # label column
period=time_horizon, # keyword argument 'period' must be included for the forecast task
**settings)
# ### Best model and metric
''' retrieve best config and best learner'''
print('Best ML learner:', automl.best_estimator)
print('Best hyperparameter config:', automl.best_config)
print(f'Best mape on validation data: {automl.best_loss}')
print(f'Training duration of best run: {automl.best_config_train_time}s')
automl.model.estimator
''' pickle and save the automl object '''
import pickle
with open('automl.pkl', 'wb') as f:
pickle.dump(automl, f, pickle.HIGHEST_PROTOCOL)
''' compute predictions of testing dataset '''
flaml_y_pred = automl.predict(X_test)
print(f"Predicted labels\n{flaml_y_pred}")
print(f"True labels\n{y_test}")
''' compute different metric values on testing dataset'''
from flaml.ml import sklearn_metric_loss_score
print('mape', '=', sklearn_metric_loss_score('mape', y_true=y_test, y_predict=flaml_y_pred))
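# As a side note, 'mape' is the mean absolute percentage error. A minimal
# pure-Python sketch (not FLAML's implementation) shows why the y_true/y_predict
# order matters: the metric normalizes by the true values, so it is asymmetric
# in its arguments.

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error: mean(|y_true - y_pred| / |y_true|)."""
    return sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / len(y_true)

# forecasts off by 10% and 5% of the true values give a MAPE of 0.075
print(mape([100.0, 200.0], [110.0, 190.0]))   # ~0.075
# swapping the arguments changes the result, since the denominators change
print(mape([110.0, 190.0], [100.0, 200.0]))   # ~0.072
```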
# ### Log history
# +
from flaml.data import get_output_from_log
time_history, best_valid_loss_history, valid_loss_history, config_history, train_loss_history = \
get_output_from_log(filename=settings['log_file_name'], time_budget=240)
for config in config_history:
print(config)
# +
import matplotlib.pyplot as plt
import numpy as np
plt.title('Learning Curve')
plt.xlabel('Wall Clock Time (s)')
plt.ylabel('Validation score (1 - MAPE)')
plt.scatter(time_history, 1 - np.array(valid_loss_history))
plt.step(time_history, 1 - np.array(best_valid_loss_history), where='post')
plt.show()
# -
# ### Visualize
import matplotlib.pyplot as plt
plt.plot(X_test, y_test, label='Actual level')
plt.plot(X_test, flaml_y_pred, label='FLAML forecast')
plt.xlabel('Date')
plt.ylabel('CO2 Levels')
plt.legend()
# ## 3. Forecast Problems with Exogenous Variables
# ### Load Data and Preprocess
#
# Load a dataset on NYC energy consumption. The task is to predict the average hourly demand of energy used in a day given information on time, temperature, and precipitation. Temperature and precipitation values are both continuous. To demonstrate FLAML's ability to handle categorical values as well, create a column with categorical values, where 1 denotes that the daily temperature is above the monthly average and 0 that it is below.
''' multivariate time series forecasting dataset'''
import pandas as pd
# pd.set_option("display.max_rows", None, "display.max_columns", None)
multi_df = pd.read_csv(
"https://raw.githubusercontent.com/srivatsan88/YouTubeLI/master/dataset/nyc_energy_consumption.csv"
)
# preprocessing data
multi_df["timeStamp"] = pd.to_datetime(multi_df["timeStamp"])
multi_df = multi_df.set_index("timeStamp")
multi_df = multi_df.resample("D").mean()
multi_df["temp"] = multi_df["temp"].ffill()  # forward-fill missing values (fillna(method=...) is deprecated)
multi_df["precip"] = multi_df["precip"].ffill()
multi_df = multi_df[:-2] # last two rows are NaN for 'demand' column so remove them
multi_df = multi_df.reset_index()
# +
''' Use feature engineering to create a categorical value'''
# Using temperature values create categorical values
# where 1 denotes daily temperature is above monthly average and 0 is below.
def get_monthly_avg(data):
data["month"] = data["timeStamp"].dt.month
data = data[["month", "temp"]].groupby("month")
data = data.agg({"temp": "mean"})
return data
monthly_avg = get_monthly_avg(multi_df).to_dict().get("temp")
def above_monthly_avg(date, temp):
month = date.month
if temp > monthly_avg.get(month):
return 1
else:
return 0
multi_df["temp_above_monthly_avg"] = multi_df.apply(
lambda x: above_monthly_avg(x["timeStamp"], x["temp"]), axis=1
)
del multi_df["temp"], multi_df["month"] # drop the raw temperature and helper month columns to reduce redundancy
# +
# split data into train and test
num_samples = multi_df.shape[0]
multi_time_horizon = 180
split_idx = num_samples - multi_time_horizon
multi_train_df = multi_df[:split_idx]
multi_test_df = multi_df[split_idx:]
multi_X_test = multi_test_df[
["timeStamp", "precip", "temp_above_monthly_avg"]
] # test dataframe must contain values for the regressors / multivariate variables
multi_y_test = multi_test_df["demand"]
# -
# ### Run FLAML
automl = AutoML()
settings = {
"time_budget": 10, # total running time in seconds
"metric": "mape", # primary metric
"task": "ts_forecast", # task type
"log_file_name": "energy_forecast_categorical.log", # flaml log file
"eval_method": "holdout",
"log_type": "all",
"label": "demand",
}
'''The main flaml automl API'''
try:
import prophet
automl.fit(dataframe=multi_train_df, **settings, period=multi_time_horizon)
except ImportError:
print("not using prophet due to ImportError")
automl.fit(
dataframe=multi_train_df,
**settings,
estimator_list=["arima", "sarimax"],
period=multi_time_horizon,
)
# ### Prediction and Metrics
''' compute predictions of testing dataset '''
multi_y_pred = automl.predict(multi_X_test)
print("Predicted labels", multi_y_pred)
print("True labels", multi_y_test)
''' compute different metric values on testing dataset'''
from flaml.ml import sklearn_metric_loss_score
print('mape', '=', sklearn_metric_loss_score('mape', y_true=multi_y_test, y_predict=multi_y_pred))
# ### Visualize
import matplotlib.pyplot as plt
plt.figure()
plt.plot(multi_X_test["timeStamp"], multi_y_test, label="Actual Demand")
plt.plot(multi_X_test["timeStamp"], multi_y_pred, label="FLAML Forecast")
plt.xlabel("Date")
plt.ylabel("Energy Demand")
plt.legend()
plt.show()
# ## 4. Forecasting Discrete Values
# ### Load Dataset and Preprocess
#
# Import [sales data](https://hcrystalball.readthedocs.io/en/v0.1.7/api/hcrystalball.utils.get_sales_data.html) from hcrystalball. The task is to predict whether daily sales will be above mean sales for thirty days into the future.
from hcrystalball.utils import get_sales_data
time_horizon = 30
df = get_sales_data(n_dates=180, n_assortments=1, n_states=1, n_stores=1)
df = df[["Sales", "Open", "Promo", "Promo2"]]
# feature engineering - create a discrete value column
# 1 denotes above mean and 0 denotes below mean
import numpy as np
df["above_mean_sales"] = np.where(df["Sales"] > df["Sales"].mean(), 1, 0)
df.reset_index(inplace=True)
# train-test split
discrete_train_df = df[:-time_horizon]
discrete_test_df = df[-time_horizon:]
discrete_X_train, discrete_X_test = (
discrete_train_df[["Date", "Open", "Promo", "Promo2"]],
discrete_test_df[["Date", "Open", "Promo", "Promo2"]],
)
discrete_y_train, discrete_y_test = discrete_train_df["above_mean_sales"], discrete_test_df["above_mean_sales"]
# ### Run FLAML
from flaml import AutoML
automl = AutoML()
settings = {
"time_budget": 15, # total running time in seconds
"metric": "accuracy", # primary metric
"task": "ts_forecast_classification", # task type
"log_file_name": "sales_classification_forecast.log", # flaml log file
"eval_method": "holdout",
}
"""The main flaml automl API"""
automl.fit(X_train=discrete_X_train,
y_train=discrete_y_train,
**settings,
period=time_horizon)
# ### Best Model and Metric
""" retrieve best config and best learner"""
print("Best ML learner:", automl.best_estimator)
print("Best hyperparameter config:", automl.best_config)
print(f"Best loss on validation data: {automl.best_loss}")
print(f"Training duration of best run: {automl.best_config_train_time}s")
print(automl.model.estimator)
""" compute predictions of testing dataset """
discrete_y_pred = automl.predict(discrete_X_test)
print("Predicted label", discrete_y_pred)
print("True label", discrete_y_test)
from flaml.ml import sklearn_metric_loss_score
print("accuracy", "=", 1 - sklearn_metric_loss_score("accuracy", y_predict=discrete_y_pred, y_true=discrete_y_test))
# ## 5. Comparison with Alternatives (CO2 Dataset)
# FLAML's MAPE
from flaml.ml import sklearn_metric_loss_score
print('flaml mape', '=', sklearn_metric_loss_score('mape', y_predict=flaml_y_pred, y_true=y_test))
# ### Default Prophet
from prophet import Prophet
prophet_model = Prophet()
X_train_prophet = train_df.copy()
X_train_prophet = X_train_prophet.rename(columns={'index': 'ds', 'co2': 'y'})
prophet_model.fit(X_train_prophet)
X_test_prophet = X_test.copy()
X_test_prophet = X_test_prophet.rename(columns={'index': 'ds'})
prophet_y_pred = prophet_model.predict(X_test_prophet)['yhat']
print('Predicted labels', prophet_y_pred)
print('True labels', y_test)
# Default Prophet MAPE
from flaml.ml import sklearn_metric_loss_score
print('default prophet mape', '=', sklearn_metric_loss_score('mape', y_predict=prophet_y_pred, y_true=y_test))
# ### Auto ARIMA Models
# +
from pmdarima.arima import auto_arima
import pandas as pd
import time
X_train_arima = train_df.copy()
X_train_arima.index = pd.to_datetime(X_train_arima['index'])
X_train_arima = X_train_arima.drop('index', axis=1)
X_train_arima = X_train_arima.rename(columns={'co2': 'y'})
# -
# use same search space as FLAML
start_time = time.time()
arima_model = auto_arima(X_train_arima,
start_p=2, d=None, start_q=1, max_p=10, max_d=10, max_q=10,
suppress_warnings=True, stepwise=False, seasonal=False,
error_action='ignore', trace=True, n_fits=650)
autoarima_y_pred = arima_model.predict(n_periods=12)
arima_time = time.time() - start_time
start_time = time.time()
sarima_model = auto_arima(X_train_arima,
start_p=2, d=None, start_q=1, max_p=10, max_d=10, max_q=10,
start_P=2, D=None, start_Q=1, max_P=10, max_D=10, max_Q=10, m=12,
suppress_warnings=True, stepwise=False, seasonal=True,
error_action='ignore', trace=True, n_fits=50)
sarima_time = time.time() - start_time
autosarima_y_pred = sarima_model.predict(n_periods=12)
# Auto ARIMA Models MAPE
from flaml.ml import sklearn_metric_loss_score
print('auto arima mape', '=', sklearn_metric_loss_score('mape', y_predict=autoarima_y_pred, y_true=y_test))
print('auto sarima mape', '=', sklearn_metric_loss_score('mape', y_predict=autosarima_y_pred, y_true=y_test))
# ### Compare All
from flaml.ml import sklearn_metric_loss_score
print('flaml mape', '=', sklearn_metric_loss_score('mape', y_predict=flaml_y_pred, y_true=y_test))
print('default prophet mape', '=', sklearn_metric_loss_score('mape', y_predict=prophet_y_pred, y_true=y_test))
print('auto arima mape', '=', sklearn_metric_loss_score('mape', y_predict=autoarima_y_pred, y_true=y_test))
print('auto sarima mape', '=', sklearn_metric_loss_score('mape', y_predict=autosarima_y_pred, y_true=y_test))
# +
import matplotlib.pyplot as plt
plt.plot(X_test, y_test, label='Actual level')
plt.plot(X_test, flaml_y_pred, label='FLAML forecast')
plt.plot(X_test, prophet_y_pred, label='Prophet forecast')
plt.plot(X_test, autoarima_y_pred, label='AutoArima forecast')
plt.plot(X_test, autosarima_y_pred, label='AutoSarima forecast')
plt.xlabel('Date')
plt.ylabel('CO2 Levels')
plt.legend()
plt.show()
| notebook/automl_time_series_forecast.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] button=false deletable=true id="OJkcQuONXhj4" new_sheet=false run_control={"read_only": false}
# # Series 3
#
# In this series we will look at a few encryption functions, starting with the simplest ones and moving on to the ciphers used today.
# + [markdown] button=false deletable=true id="xc0a_EOTck4x" new_sheet=false run_control={"read_only": false}
# # Exercise 2
#
# A very famous application in computer-nerd circles is the [ECB Penguin](https://blog.filippo.io/the-ecb-penguin/), named after the ECB (Electronic CodeBook) mode of operation, which is very insecure under certain conditions.
#
# ## 1. Knowledge
#
# The first part shows how to load an image and convert it into a `bytes` object. This conversion matters because images are normally stored in compressed form, together with the extra information needed for decompression.
#
# Once the image is in decompressed form with 3 bytes per pixel (red, green, blue), we must make sure the total size is a multiple of 16. ECB encrypts whole blocks at a time: DES uses 8-byte blocks and AES uses 16-byte blocks, so a size that is a multiple of 16 satisfies both ciphers.
#
# Now we can apply DES/ECB encryption. Here DES names the cipher applied to each block, while ECB names the mode of operation.
#
# You can run this first cell several times and watch the image come out encrypted differently each time. That is expected, since the result depends on the key, which is initialized randomly.
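# Before the exercise, a small hedged helper (illustrative only, not part of the
# exercise code) shows the length check: trailing bytes are dropped so the total
# length becomes a multiple of 16, which fits both the 8-byte DES and the 16-byte
# AES block sizes.

```python
def trim_to_block_multiple(data: bytes, block_size: int = 16) -> bytes:
    """Drop trailing bytes so len(data) is a multiple of the cipher block size."""
    return data[: len(data) - (len(data) % block_size)]

raw = bytes(100)                       # 100 bytes: not a multiple of 16
trimmed = trim_to_block_multiple(raw)
print(len(trimmed))                    # 96
# the 256 x 312 crop used in the exercise gives 256 * 312 * 3 = 239616 bytes,
# which is already a multiple of 16
assert (256 * 312 * 3) % 16 == 0
```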
# + button=false colab={"base_uri": "https://localhost:8080/", "height": 368} deletable=true executionInfo={"elapsed": 2479, "status": "error", "timestamp": 1620330981935, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiT9d_hiiZE9FHyx2dphb24xe4f0737HSVlpv54=s64", "userId": "10501199511424753986"}, "user_tz": -120} id="4qOznnPDckVh" jupyter={"outputs_hidden": false} new_sheet=false outputId="5d83ac62-f84e-405a-8990-2414a9d0e570" run_control={"read_only": false}
# Exercise 1 - Part 1
from matplotlib.pyplot import figure, imshow
from Crypto.Cipher import AES, DES, ChaCha20
from Crypto.Random import get_random_bytes
import PIL.Image
import secrets
def image_DES(img: bytes) -> PIL.Image:
key = get_random_bytes(8)
cipher = DES.new(key, DES.MODE_ECB)
encrypted = cipher.encrypt(img)
return PIL.Image.frombytes("RGB", (256, 312), encrypted)
image_tux = PIL.Image.open("./tux.png").crop((0, 0, 256, 312))
imshow(image_tux)
figure()
imshow(image_DES(image_tux.tobytes()))
# -
# ## 2. Comprehension
#
# Since DES is too old and should no longer be used, let's try AES. Copy the lines from part 1 so that a DES/ECB penguin appears, followed by an AES/ECB penguin.
#
# Now compare the two.
# - What is the biggest difference? And how can you explain it?
#
# Then add a third penguin, this time encrypted with ChaCha20. This time the penguin is no longer recognizable at all. Sad, but more secure!
# + button=false deletable=true id="9HYfk1lTc7Hp" new_sheet=false run_control={"read_only": false}
# Exercise 1 - Part 2
def image_AES(img: bytes) -> PIL.Image:
key = get_random_bytes(16)
cipher = AES.new(key, AES.MODE_ECB)
encrypted = cipher.encrypt(img)
return PIL.Image.frombytes("RGB", (256, 312), encrypted)
def image_ChaCha20(img: bytes) -> PIL.Image:
key = get_random_bytes(32)
cipher = ChaCha20.new(key=key)
encrypted = cipher.encrypt(img)
return PIL.Image.frombytes("RGB", (256, 312), encrypted)
imshow(image_AES(image_tux.tobytes()))
figure()
imshow(image_ChaCha20(image_tux.tobytes()))
# -
# ## 3. Application
#
# The problem behind the penguin showing through is not DES or AES, but the ECB mode of operation. On the [PyCryptoDome](https://www.pycryptodome.org/en/latest/src/cipher/aes.html) site you can see the different AES modes. Try out the different modes by changing the image_AES function and look at the results.
# Comparing the image obtained with DES encryption to the one obtained with AES encryption, the repeating patterns are wider with AES. This is due to the block size, which is larger for AES: 128 bits instead of 64 bits.
#
# As discussed in class, the repetition comes from the background being uniform, i.e. the same color repeated over and over. In ECB mode, AES and DES are deterministic in the sense that the same input block always produces the same output block. This is not very secure, which is why other modes of operation exist.
# + button=false deletable=true id="-Fk-3jvpdEcq" new_sheet=false run_control={"read_only": false} tags=[]
# Exercise 1 - Part 3
def image_AES_CBC(img: bytes) -> PIL.Image:
key = get_random_bytes(16)
cipher = AES.new(key, AES.MODE_CBC)
encrypted = cipher.encrypt(img)
return PIL.Image.frombytes("RGB", (256, 312), encrypted)
def image_AES_CTR(img: bytes) -> PIL.Image:
key = get_random_bytes(16)
cipher = AES.new(key, AES.MODE_CTR)
encrypted = cipher.encrypt(img)
return PIL.Image.frombytes("RGB", (256, 312), encrypted)
imshow(image_AES(image_tux.tobytes()))
figure()
imshow(image_AES_CBC(image_tux.tobytes()))
figure()
imshow(image_AES_CTR(image_tux.tobytes()))
figure()
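# The determinism discussed above can be demonstrated with a deliberately toy
# "block cipher" (a keyed XOR pad derived with SHA-256; NOT real encryption,
# just an illustration): in ECB mode, identical plaintext blocks always produce
# identical ciphertext blocks, which is exactly what makes the penguin visible.

```python
import hashlib

def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    """Toy 16-byte 'block cipher': XOR with a key-derived pad (not real crypto)."""
    pad = hashlib.sha256(key).digest()[:16]
    return bytes(b ^ p for b, p in zip(block, pad))

def toy_ecb(data: bytes, key: bytes) -> bytes:
    """ECB mode: every block is encrypted independently and deterministically."""
    return b"".join(
        toy_block_encrypt(data[i:i + 16], key) for i in range(0, len(data), 16)
    )

key = b"classroom-demo-key"
plaintext = b"\x00" * 48               # a 'uniform background': three identical blocks
ciphertext = toy_ecb(plaintext, key)
blocks = [ciphertext[i:i + 16] for i in range(0, len(ciphertext), 16)]
print(blocks[0] == blocks[1] == blocks[2])  # True: the pattern leaks through
```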
| Jour-2-correction/Serie-3/jour_2_serie_3_exo_3.ipynb |