```
%pylab inline
```
# Generating Gaussian random processes
## 1. Generation via a multivariate normal vector
If you need to generate a realization of a Gaussian random process $X = (X_t)_{t \geqslant 0}$ of fixed, known (and not too large) length $n$, you can use the fact that the finite-dimensional distributions of a Gaussian process are normal, and that its moments (the mean $m(t)$ and the covariance function $R(t_1, t_2)$) are known. It then suffices to take $n$ time points $t_1, \ldots, t_n$ and generate an $n$-dimensional Gaussian random vector with the desired means $m(t_1), \ldots, m(t_n)$ and covariance matrix $\Sigma_{ij} = R(t_i, t_j)$.
Note that if $Z$ is a standard normal random variable, i.e. $Z \sim N(0, 1)$, then its linear transformation $X = \mu + \sigma Z$ is also normally distributed, with $X \sim N(\mu, \sigma^2)$.
An analogous relation holds for multivariate normal vectors. Namely, let $\mathbf{Z} = (Z_1, \ldots, Z_n) \sim N(0, \mathbf{I})$, where $\mathbf{I} = \mathrm{diag}(1, \ldots, 1)$ is the identity matrix. Then the random vector $\mathbf{X} = \mathbf{\mu} + \mathbf{C} \mathbf{Z}$ has the normal distribution $N(\mathbf{\mu}, \mathbf{\Sigma})$, where $\mathbf{\mu} = (\mu_1, \ldots, \mu_n)$ is the desired mean vector, and the matrices $\mathbf{C}$ and $\mathbf{\Sigma}$ are related by $\mathbf{\Sigma} = \mathbf{C} \mathbf{C}^T$. The latter relation is the Cholesky decomposition of the positive semi-definite $n \times n$ matrix $\mathbf{\Sigma}$, and the $n \times n$ matrix $\mathbf{C}$ turns out to be lower triangular.
Therefore, to generate a multivariate normal vector, we can proceed as follows: generate $\mathbf{Z} = (Z_1, \ldots, Z_n) \sim N(0, \mathbf{I})$, compute $\mathbf{C}$, and set $\mathbf{X} = \mathbf{\mu} + \mathbf{C} \mathbf{Z}$.
### 1.1. The Wiener process
Let us apply this approach to simple Brownian motion $B = (B_t)_{t \geqslant 0}$. For this process $\mathrm{E} B_t = 0$ and $R(s, t) = \mathrm{E} [B_s B_t] = \min(s, t)$.
We compute the mean vector and the covariance matrix for $t_i = \Delta i$, $i = 0, \ldots, 1000$, $\Delta = 10^{-2}$.
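In NumPy this procedure takes only a few lines. The sketch below uses a hypothetical helper name and a toy mean vector and covariance matrix chosen for illustration, not taken from the text:

```python
import numpy as np

def sample_gaussian_vector(mu, Sigma, rng=None):
    """Draw one sample of N(mu, Sigma) via the Cholesky factor C, with Sigma = C C^T."""
    rng = np.random.default_rng(0) if rng is None else rng
    C = np.linalg.cholesky(Sigma)     # lower-triangular factor of Sigma
    Z = rng.standard_normal(len(mu))  # Z ~ N(0, I)
    return mu + C @ Z                 # X ~ N(mu, Sigma)

# toy example: a 2-dimensional normal vector with correlated components
mu = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
X = sample_gaussian_vector(mu, Sigma)
```

`np.random.multivariate_normal` performs essentially the same steps internally; doing the factorization explicitly lets us reuse $\mathbf{C}$ across many samples.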
```
def compute_bm_mean(timestamps):
    return np.zeros_like(timestamps)

def compute_bm_cov(timestamps):
    n = len(timestamps)
    R = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            R[i, j] = min(timestamps[i], timestamps[j])
    return R

timestamps = np.linspace(1e-2, 10, 1000)
cov_bm = compute_bm_cov(timestamps)
mean_bm = compute_bm_mean(timestamps)
figure(figsize=(6, 3))
subplot(1, 2, 1)
imshow(cov_bm, interpolation='nearest')
subplot(1, 2, 2)
plot(timestamps, cov_bm[500, :])
c_bm = np.linalg.cholesky(cov_bm)
Z = np.random.normal(size=1000)
X = mean_bm + np.dot(c_bm, Z)
figure(figsize=(9, 3))
subplot(1, 2, 1)
plot(timestamps, Z)
title('Brownian motion increments')
subplot(1, 2, 2)
plot(timestamps, X)
title('Brownian motion sample path')
```
### 1.2. Fractional Brownian motion
Standard fractional Brownian motion $B^H = (B^H_t)_{t \geqslant 0}$
on $[0,T]$ with Hurst parameter $H \in (0,1)$ is
a Gaussian process with continuous sample paths such that
$$
B^H_0 = 0,
\qquad
\mathrm{E} B^H_t = 0,
\qquad
\mathrm{E} [B^H_t B^H_s] = \frac{1}{2}
\big(|t|^{2H} + |s|^{2H} - |t - s|^{2H}\big).
$$
For $H = \frac{1}{2}$, fractional Brownian motion is ordinary Brownian motion; for $H \neq \frac{1}{2}$, the increments of $B^H$ are correlated: positively for $H > \frac{1}{2}$ (persistent paths) and negatively for $H < \frac{1}{2}$ (antipersistent paths).
```
def compute_fbm_matrix(timestamps, hurst):
    ts_begin = timestamps[0]
    dt = timestamps[1] - timestamps[0]
    time = np.array([ts - ts_begin for ts in timestamps]) + dt
    n = len(time)
    K = np.zeros(shape=(n, n))
    for index, t in enumerate(time):
        K[index, :] = np.power(time, 2. * hurst) + \
                      np.power(t, 2. * hurst) - \
                      np.power(np.abs(time - t), 2. * hurst)
    K *= 0.5
    return K
```
First, let us check that an fBm sample path with $H = 1/2$ looks like a path of ordinary Brownian motion, and that its covariance matrix equals that of ordinary Brownian motion.
```
timestamps = np.linspace(0.01, 10, 1000)
cov_fbm = compute_fbm_matrix(timestamps, 0.5)
figure(figsize=(6, 3))
subplot(1, 2, 1)
imshow(cov_fbm, interpolation='nearest')
subplot(1, 2, 2)
plot(timestamps, cov_fbm[500, :])
c_bm = np.linalg.cholesky(cov_fbm)
Z = np.random.normal(size=1000)
X = mean_bm + np.dot(c_bm, Z)
figure(figsize=(9, 3))
subplot(1, 2, 1)
plot(timestamps, Z)
title('Brownian motion increments')
subplot(1, 2, 2)
plot(timestamps, X)
title('Fractional Brownian motion sample path')
```
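As a quick numerical cross-check: at $H = 1/2$ the fBm covariance $\frac{1}{2}(t + s - |t - s|)$ reduces to $\min(s, t)$, the Brownian-motion covariance. The self-contained vectorized sketch below verifies this directly, independently of the helper functions above:

```python
import numpy as np

ts = np.linspace(0.01, 10, 200)                    # a small grid is enough for a check
cov_bm_check = np.minimum.outer(ts, ts)            # Brownian motion: min(t_i, t_j)
tt = ts[:, None]
cov_fbm_check = 0.5 * (tt + ts - np.abs(tt - ts))  # fBm covariance with 2H = 1
print(np.max(np.abs(cov_bm_check - cov_fbm_check)))
```

The printed maximum difference should be zero up to floating-point rounding.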
Now let us see what happens to the fBm process when $H < 1/2$.
```
timestamps = np.linspace(0.01, 10, 1000)
cov_fbm = compute_fbm_matrix(timestamps, 0.1)
figure(figsize=(6, 3))
subplot(1, 2, 1)
imshow(cov_fbm, interpolation='nearest')
subplot(1, 2, 2)
plot(timestamps, cov_fbm[500, :])
c_bm = np.linalg.cholesky(cov_fbm)
Z = np.random.normal(size=1000)
X = mean_bm + np.dot(c_bm, Z)
figure(figsize=(9, 3))
subplot(1, 2, 1)
plot(timestamps, Z)
title('Brownian motion increments')
subplot(1, 2, 2)
plot(timestamps, X)
title('Fractional Brownian motion sample path')
```
Let us also consider the case $H > 1/2$.
```
timestamps = np.linspace(0.01, 10, 1000)
cov_fbm = compute_fbm_matrix(timestamps, 0.9)
figure(figsize=(6, 3))
subplot(1, 2, 1)
imshow(cov_fbm, interpolation='nearest')
subplot(1, 2, 2)
plot(timestamps, cov_fbm[500, :])
c_bm = np.linalg.cholesky(cov_fbm)
Z = np.random.normal(size=1000)
X = mean_bm + np.dot(c_bm, Z)
figure(figsize=(9, 3))
subplot(1, 2, 1)
plot(timestamps, Z)
title('Brownian motion increments')
subplot(1, 2, 2)
plot(timestamps, X)
title('Fractional Brownian motion sample path')
```
Let us plot fBm sample paths once more, for several increasing values of $H$, driven by the same noise vector $Z$.
```
Z = np.random.normal(size=1000)
for H in [0.1, 0.3, 0.5, 0.7, 0.9]:
    cov_fbm = compute_fbm_matrix(timestamps, H)
    c_bm = np.linalg.cholesky(cov_fbm)
    X = mean_bm + np.dot(c_bm, Z)
    plot(timestamps, X)
```
### 1.3. The Ornstein–Uhlenbeck process
The Ornstein–Uhlenbeck process is the Gaussian process $X = (X_t)_{t \geqslant 0}$ with continuous sample paths given by
$$
X_t = e^{-t} B_{e^{2t}},
$$
where $B = (B_t)_{t \geqslant 0}$ is ordinary Brownian motion.
It is easy to show that $\mathrm{E} X_t = 0$ and $\mathrm{E} [X_s X_t] = e^{-|s - t|}$.
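Indeed, since $\mathrm{E}[B_u B_v] = \min(u, v)$, the covariance follows in one line:
$$
\mathrm{E}[X_s X_t] = e^{-s-t}\,\mathrm{E}\big[B_{e^{2s}} B_{e^{2t}}\big] = e^{-s-t} \min\big(e^{2s}, e^{2t}\big) = e^{-s-t} e^{2\min(s,t)} = e^{-|s-t|}.
$$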
```
def compute_ou_cov(timestamps):
    n = len(timestamps)
    R = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            R[i, j] = np.exp(-np.abs(timestamps[i] - timestamps[j]))
    return R
timestamps = np.linspace(0.01, 10, 1000)
cov_ou = compute_ou_cov(timestamps)
figure(figsize=(6, 3))
subplot(1, 2, 1)
imshow(cov_ou, interpolation='nearest')
subplot(1, 2, 2)
plot(timestamps, cov_ou[500, :])
c_ou = np.linalg.cholesky(cov_ou)
Z = np.random.normal(size=1000)
X = mean_bm + np.dot(c_ou, Z)
figure(figsize=(9, 3))
subplot(1, 2, 1)
plot(timestamps, Z)
title('Brownian motion increments')
subplot(1, 2, 2)
plot(timestamps, X)
title('OU process sample path')
```
```
import numpy as np
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import py21cmfast as p21c
from py21cmfast import global_params
from py21cmfast import plotting
random_seed = 1605
EoR_colour = matplotlib.colors.LinearSegmentedColormap.from_list('mycmap',\
[(0, 'white'),(0.33, 'yellow'),(0.5, 'orange'),(0.68, 'red'),\
(0.83333, 'black'),(0.9, 'blue'),(1, 'cyan')])
plt.register_cmap(cmap=EoR_colour)
```
This result was obtained using 21cmFAST at commit 2bb4807c7ef1a41649188a3efc462084f2e3b2e0
# Fiducial and lightcones
Let's fix the initial condition for this tutorial.
```
output_dir = '/Users/julian/Dropbox/Research/21/Mturn_21cmFASTv3/outputcode/'
HII_DIM = 64
BOX_LEN = 200 #cell size of ~3 Mpc or below for relative velocities
# USE_FFTW_WISDOM makes the FFTs faster, and we use relative velocities; set 'USE_INTERPOLATION_TABLES': True or the code is too slow
user_params = {"HII_DIM":HII_DIM, "BOX_LEN": BOX_LEN, "USE_FFTW_WISDOM": True, 'USE_INTERPOLATION_TABLES': True,
"FAST_FCOLL_TABLES": True,
"USE_RELATIVE_VELOCITIES": True, "POWER_SPECTRUM": 5}
#set FAST_FCOLL_TABLES to TRUE if using minihaloes, it speeds up the generation of tables by ~x30 (see Appendix X of XXX JBM)
#USE_RELATIVE_VELOCITIES is important for minihaloes. If True, POWER_SPECTRUM has to be set to 5 (CLASS) to get the transfers.
initial_conditions = p21c.initial_conditions(user_params=user_params,
random_seed=random_seed,
direc=output_dir
#, regenerate=True
)
plotting.coeval_sliceplot(initial_conditions, "lowres_vcb");
plotting.coeval_sliceplot(initial_conditions, "lowres_density");
```
Let's run a 'fiducial' model and see its lightcones.
Note that the reference model has
F_STAR7_MINI ~ F_STAR10
and
F_ESC7_MINI ~ 1%, which is low but a conservative fiducial.
We also take L_X_MINI = L_X for simplicity (and out of ignorance).
```
# the lightcones we want to plot later together with their color maps and min/max
lightcone_quantities = ('brightness_temp','Ts_box','xH_box',"dNrec_box",'z_re_box','Gamma12_box','J_21_LW_box',"density")
cmaps = [EoR_colour,'Reds','magma','magma','magma','cubehelix','cubehelix','viridis']
vmins = [-150, 1e1, 0, 0, 5, 0, 0, -1]
vmaxs = [ 30, 1e3, 1, 2, 9, 1,10, 1]
astro_params_vcb = {'ALPHA_ESC': -0.3, 'F_ESC10': -1.2,
'ALPHA_STAR': 0.5, 'F_STAR10': -1.5, 't_STAR' :0.5,
'F_STAR7_MINI': -1.75, 'ALPHA_STAR_MINI': 0.0, 'F_ESC7_MINI' : -2.25,
'L_X': 40.5, 'L_X_MINI': 40.5, 'NU_X_THRESH': 200.0,
'A_VCB': 1.0, 'A_LW': 2.0}
astro_params_novcb = astro_params_vcb.copy()
astro_params_novcb.update({'A_VCB': 0.0})
#setting 'A_VCB': 0 sets to zero the effect of relative velocities (fiducial value is 1.0)
#the parameter 'A_LW' (with fid value of 2.0) does the same for LW feedback.
flag_options_fid = {"INHOMO_RECO":True, 'USE_MASS_DEPENDENT_ZETA':True, 'USE_TS_FLUCT':True,
'USE_MINI_HALOS':True, 'FIX_VCB_AVG':False}
flag_options_fid_vavg = flag_options_fid.copy()
flag_options_fid_vavg.update({'FIX_VCB_AVG': True})
#the flag FIX_VCB_AVG side-steps the relative-velocity ICs, and instead fixes all velocities to some average value.
#It gets the background right but it's missing VAOs and 21cm power at large scales
ZMIN=5.5
lightcone_fid_vcb = p21c.run_lightcone(
redshift = ZMIN,
init_box = initial_conditions,
flag_options = flag_options_fid,
astro_params = astro_params_vcb,
lightcone_quantities=lightcone_quantities,
global_quantities=lightcone_quantities,
random_seed = random_seed,
direc = output_dir,
write=True#, regenerate=True
)
fig, axs = plt.subplots(len(lightcone_quantities),1,
figsize=(20,10))#(getattr(lightcone_fid_vcb, lightcone_quantities[0]).shape[2]*0.01,
#getattr(lightcone_fid_vcb, lightcone_quantities[0]).shape[1]*0.01*len(lightcone_quantities)))
for ii, lightcone_quantity in enumerate(lightcone_quantities):
    axs[ii].imshow(getattr(lightcone_fid_vcb, lightcone_quantity)[1],
                   vmin=vmins[ii], vmax=vmaxs[ii], cmap=cmaps[ii])
    axs[ii].text(1, 0.05, lightcone_quantity, horizontalalignment='right', verticalalignment='bottom',
                 transform=axs[ii].transAxes, color='red', backgroundcolor='white', fontsize=15)
    axs[ii].xaxis.set_tick_params(labelsize=10)
    axs[ii].yaxis.set_tick_params(labelsize=0)
plt.tight_layout()
fig.subplots_adjust(hspace=0.01)
#also run one without velocities and with fixed vcb=vavg (for comparison)
lightcone_fid_novcb = p21c.run_lightcone(
redshift = ZMIN,
init_box = initial_conditions,
flag_options = flag_options_fid,
astro_params = astro_params_novcb,
lightcone_quantities=lightcone_quantities,
global_quantities=lightcone_quantities,
random_seed = random_seed,
direc = output_dir,
write=True#, regenerate=True
)
lightcone_fid_vcbavg = p21c.run_lightcone(
redshift = ZMIN,
init_box = initial_conditions,
flag_options = flag_options_fid_vavg,
astro_params = astro_params_vcb,
lightcone_quantities=lightcone_quantities,
global_quantities=lightcone_quantities,
random_seed = random_seed,
direc = output_dir,
write=True#, regenerate=True
)
#plus run one with only atomic-cooling galaxies but same otherwise
flag_options_NOMINI = flag_options_fid.copy()
flag_options_NOMINI.update({'USE_MINI_HALOS': False})
lightcone_fid_NOMINI = p21c.run_lightcone(
redshift = ZMIN,
init_box = initial_conditions,
flag_options = flag_options_NOMINI,
astro_params = astro_params_vcb,
lightcone_quantities=lightcone_quantities,
global_quantities=lightcone_quantities,
random_seed = random_seed,
direc = output_dir,
write=True#, regenerate=True
)
#compare vcb and novcb
fig, axs = plt.subplots(2 ,1,
figsize=(20,6))
axs[0].imshow(getattr(lightcone_fid_vcb, 'brightness_temp')[1],
vmin=vmins[0], vmax=vmaxs[0],cmap=cmaps[0])
axs[1].imshow(getattr(lightcone_fid_novcb, 'brightness_temp')[1],
vmin=vmins[0], vmax=vmaxs[0],cmap=cmaps[0])
axs[0].text(1, 0.05, 'vcb' ,horizontalalignment='right',verticalalignment='bottom',
transform=axs[0].transAxes,color = 'red',backgroundcolor='white',fontsize = 15)
axs[1].text(1, 0.05, 'novcb' ,horizontalalignment='right',verticalalignment='bottom',
transform=axs[1].transAxes,color = 'red',backgroundcolor='white',fontsize = 15)
# axs[0].xaxis.set_tick_params(labelsize=10)
# axs[1].yaxis.set_tick_params(labelsize=0)
plt.tight_layout()
fig.subplots_adjust(hspace = 0.01)
#plot tau
tau_vcb=tau_novcb=tau_NOMINI=np.array([])
for il, lightcone in enumerate([lightcone_fid_vcb, lightcone_fid_novcb, lightcone_fid_NOMINI]):
    z_e = np.array([])
    tau_e = np.array([])
    for i in range(len(lightcone.node_redshifts) - 1):
        tauz = p21c.compute_tau(redshifts=lightcone.node_redshifts[-1:-2-i:-1],
                                global_xHI=lightcone.global_xHI[-1:-2-i:-1])
        tau_e = np.append(tau_e, tauz)
        z_e = np.append(z_e, lightcone.node_redshifts[-2-i])
    #add lower zs where we keep xH fixed at its final value
    zlow = np.linspace(lightcone.node_redshifts[-1] - 0.1, 0.1, 14)
    for zl in zlow:
        tauz = p21c.compute_tau(redshifts=np.array([zl]), global_xHI=np.array([lightcone.global_xHI[-1]]))
        tau_e = np.append([tauz], tau_e)
        z_e = np.append([zl], z_e)
    if il == 0:
        tau_vcb = tau_e
    elif il == 1:
        tau_novcb = tau_e
    else:
        tau_NOMINI = tau_e
linestyles = ['-', '-.',':']
colors = ['black','gray','#377eb8']
lws = [3,1,2]
fig, axs = plt.subplots(1, 1, sharex=True, figsize=(8,4))
kk=0
axs.plot(z_e, tau_vcb, label = 'vcb',
color=colors[kk],linestyle=linestyles[kk], lw=lws[kk])
kk=1
axs.plot(z_e, tau_novcb, label = 'no vcb',
color=colors[kk],linestyle=linestyles[kk], lw=lws[kk])
kk=2
axs.plot(z_e, tau_NOMINI, label = 'no MINI',
color=colors[kk],linestyle=linestyles[kk],lw=lws[kk])
axs.set_ylim(0., 0.1)
axs.set_xlabel('redshift',fontsize=15)
axs.xaxis.set_tick_params(labelsize=15)
axs.set_xlim(0.,20.)
axs.set_ylabel('$\\tau$',fontsize=15)
axs.yaxis.set_tick_params(labelsize=15)
plt.tight_layout()
fig.subplots_adjust(hspace = 0.0,wspace=0.0)
tauPmin=0.0561-0.0071
tauPmax=0.0561+0.0071
axs.axhspan(tauPmin, tauPmax, alpha=0.34, color='black')
axs.grid()
#Planck2020: tau=0.0561±0.0071
#check that the tau z=15-30 is below 0.006 as Planck requires
print(z_e[-1],z_e[55])
tau_vcb[-1]-tau_vcb[55]
linestyles = ['-', '-.',':']
colors = ['black','gray','#377eb8']
lws = [3,1,2]
labels = ['vcb', 'no vcb', 'no MINI']
fig, axs = plt.subplots(1, 1, sharex=True, figsize=(8,4))
for kk, lightcone in enumerate([lightcone_fid_vcb, lightcone_fid_novcb, lightcone_fid_NOMINI]):
    axs.plot(lightcone.node_redshifts, lightcone.global_xHI, label=labels[kk],
             color=colors[kk], linestyle=linestyles[kk], lw=lws[kk])
axs.set_ylim(0., 1.)
axs.set_xlabel('redshift',fontsize=15)
axs.xaxis.set_tick_params(labelsize=15)
axs.set_xlim(5.,20.)
axs.set_ylabel('$x_{HI}$',fontsize=15)
axs.yaxis.set_tick_params(labelsize=15)
plt.tight_layout()
fig.subplots_adjust(hspace = 0.0,wspace=0.0)
axs.grid()
#choose a redshift at which to plot coeval slices and look for VAOs. Usually best when T21 ~ T21min/2
zz = zlist21[40]  # zlist21: array of output redshifts, assumed defined earlier in the session
print(zz)
#We plot a coeval box, but we compare the vcb case against the vcb=vavg, since the no velocity (vcb=0) case has a background evolution that is too different.
coeval_fid_vcb = p21c.run_coeval(
redshift = zz,
init_box = initial_conditions,
flag_options = flag_options_fid,
astro_params = astro_params_vcb,
random_seed = random_seed,
direc = output_dir,
write=True#, regenerate=True
)
coeval_fid_vcbavg = p21c.run_coeval(
redshift = zz,
init_box = initial_conditions,
flag_options = flag_options_fid_vavg,
astro_params = astro_params_vcb,
random_seed = random_seed,
direc = output_dir,
write=True#, regenerate=True
)
T21slice_vcb=coeval_fid_vcb.brightness_temp
T21avg_vcb = np.mean(T21slice_vcb)
dT21slice_vcb = T21slice_vcb - T21avg_vcb
T21slice_novcb=coeval_fid_vcbavg.brightness_temp
T21avg_novcb = np.mean(T21slice_novcb)
dT21slice_novcb = T21slice_novcb - T21avg_novcb
sigma21=np.sqrt(np.var(dT21slice_vcb))
T21maxplot = 3.0*sigma21
T21minplot = -2.0 * sigma21
origin = 'lower'
extend = 'both'
origin = None
extend = 'neither'
xx = np.linspace(0, BOX_LEN, HII_DIM, endpoint=False)
yy = xx
indexv=0
fig, ax = plt.subplots(2, 2, constrained_layout=True, figsize=(10,8),
sharex='col', sharey='row',
gridspec_kw={'hspace': 0, 'wspace': 0})
cs0=ax[0,0].contourf(xx, yy, dT21slice_novcb[indexv], extend=extend, origin=origin,
vmin=T21minplot, vmax=T21maxplot,cmap='bwr')
fig.colorbar(cs0, ax=ax[0,0], shrink=0.9, location='left')
cs1=ax[0,1].contourf(xx, yy, dT21slice_vcb[indexv], extend=extend, origin=origin,
vmin=T21minplot, vmax=T21maxplot,cmap='bwr')
fig.colorbar(cs1, ax=ax[0,1], shrink=0.9)
deltaslice=initial_conditions.lowres_density
deltaavg = np.mean(deltaslice)
ddeltaslice = deltaslice - deltaavg
vcbslice=initial_conditions.lowres_vcb
vcbavg = np.mean(vcbslice)
dvcbslice = vcbslice
print(vcbavg)
csd=ax[1,0].contourf(xx, yy, ddeltaslice[indexv])
fig.colorbar(csd, ax=ax[1,0], shrink=0.9, location='left')
csv=ax[1,1].contourf(xx, yy, dvcbslice[indexv])
fig.colorbar(csv, ax=ax[1,1], shrink=0.9, extend=extend)
plt.tight_layout()
plt.show()
global_quantities = ('brightness_temp','Ts_box','xH_box',"dNrec_box",'z_re_box','Gamma12_box','J_21_LW_box',"density")
#choose some to plot...
plot_quantities = ('brightness_temp','Ts_box','xH_box',"dNrec_box",'Gamma12_box','J_21_LW_box')
ymins = [-120, 1e1, 0, 0, 0, 0]
ymaxs = [ 30, 1e3, 1, 1, 1,5]
linestyles = ['-', '-',':','-.','-.',':']
colors = ['gray','black','#e41a1c','#377eb8','#e41a1c','#377eb8']
lws = [2,2,2,2]
textss = ['vcb','MCGs']
factorss = [[0, 1],] * len(textss)
labelss = [['NO', 'reference'],] * len(textss)
fig, axss = plt.subplots(len(plot_quantities), len(labelss),
sharex=True, figsize=(4*len(labelss),2*len(plot_quantities)))
for pp, texts in enumerate(textss):
    labels = labelss[pp]
    factors = factorss[pp]
    axs = axss[:, pp]
    for kk, label in enumerate(labels):
        factor = factors[kk]
        if kk == 0:
            if pp == 0:
                lightcone = lightcone_fid_NOMINI
            else:
                lightcone = lightcone_fid_novcb
        else:
            lightcone = lightcone_fid_vcb
        freqs = 1420.4 / (np.array(lightcone.node_redshifts) + 1.)
        for jj, global_quantity in enumerate(plot_quantities):
            axs[jj].plot(freqs, getattr(lightcone, 'global_%s'%global_quantity.replace('_box','')),
                         color=colors[kk], linestyle=linestyles[kk], label=labels[kk], lw=lws[kk])
    axs[0].text(0.01, 0.99, texts, horizontalalignment='right', verticalalignment='bottom',
                transform=axs[0].transAxes, fontsize=15)
    for jj, global_quantity in enumerate(plot_quantities):
        axs[jj].set_ylim(ymins[jj], ymaxs[jj])
    axs[-1].set_xlabel('Frequency/MHz', fontsize=15)
    axs[-1].xaxis.set_tick_params(labelsize=15)
    axs[0].set_xlim(1420.4 / (35 + 1.), 1420.4 / (5.5 + 1.))
    zlabels = np.array([6, 7, 8, 10, 13, 18, 25, 35])
    ax2 = axs[0].twiny()
    ax2.set_xlim(axs[0].get_xlim())
    ax2.set_xticks(1420.4 / (zlabels + 1.))
    ax2.set_xticklabels(zlabels.astype(str))
    ax2.set_xlabel("redshift", fontsize=15)
    ax2.xaxis.set_tick_params(labelsize=15)
    ax2.grid(False)
    if pp == 0:
        axs[0].legend(loc='lower right', ncol=2, fontsize=13, fancybox=True, frameon=True)
        for jj, global_quantity in enumerate(plot_quantities):
            axs[jj].set_ylabel('global_%s'%global_quantity.replace('_box',''), fontsize=15)
            axs[jj].yaxis.set_tick_params(labelsize=15)
    else:
        for jj, global_quantity in enumerate(plot_quantities):
            axs[jj].set_ylabel('global_%s'%global_quantity.replace('_box',''), fontsize=0)
            axs[jj].yaxis.set_tick_params(labelsize=0)
plt.tight_layout()
fig.subplots_adjust(hspace=0.0, wspace=0.0)
```
# Varying parameters
Let's vary the parameters that describe minihaloes and see their impact on the global signal.
We keep the other parameters fixed and vary one of the following by a factor of 1/3 and 3:
F_STAR7_MINI
F_ESC7_MINI
L_X_MINI
A_LW
We also have a NOmini model in which minihaloes are not included.
```
#defining those color, linstyle, blabla
linestyles = ['-', '-',':','-.','-.',':']
colors = ['gray','black','#e41a1c','#377eb8','#e41a1c','#377eb8']
lws = [1,3,2,2,2,2]
textss = ['varying '+r'$f_{*,7}^{\rm mol}$',\
'varying '+r'$f_{\rm esc}^{\rm mol}$',\
'varying '+r'$L_{\rm x}^{\rm mol}$',\
'varying '+r'$A_{\rm LW}$']
factorss = [[0, 1, 0.33, 3.0],] * len(textss)
labelss = [['No Velocity', 'Fiducial', '/3', 'x3'],] * len(textss)
global_quantities = ('brightness_temp','Ts_box','xH_box',"dNrec_box",'z_re_box','Gamma12_box','J_21_LW_box',"density")
#choose some to plot...
plot_quantities = ('brightness_temp','Ts_box','xH_box',"dNrec_box",'Gamma12_box','J_21_LW_box')
ymins = [-120, 1e1, 0, 0, 0, 0]
ymaxs = [ 30, 1e3, 1, 1, 1,10]
fig, axss = plt.subplots(len(plot_quantities), len(labelss),
sharex=True, figsize=(4*len(labelss),2*len(global_quantities)))
for pp, texts in enumerate(textss):
    labels = labelss[pp]
    factors = factorss[pp]
    axs = axss[:, pp]
    for kk, label in enumerate(labels):
        flag_options = flag_options_fid.copy()
        astro_params = astro_params_vcb.copy()
        factor = factors[kk]
        if label == 'No Velocity':
            lightcone = lightcone_fid_novcb
        elif label == 'Fiducial':
            lightcone = lightcone_fid_vcb
        else:
            if pp == 0:
                astro_params.update({'F_STAR7_MINI': astro_params_vcb['F_STAR7_MINI'] + np.log10(factor)})
            elif pp == 1:
                astro_params.update({'F_ESC7_MINI': astro_params_vcb['F_ESC7_MINI'] + np.log10(factor)})
            elif pp == 2:
                astro_params.update({'L_X_MINI': astro_params_vcb['L_X_MINI'] + np.log10(factor)})
            elif pp == 3:
                astro_params.update({'A_LW': astro_params_vcb['A_LW'] * factor})
            else:
                print('Make a choice!')
            lightcone = p21c.run_lightcone(
                redshift = ZMIN,
                init_box = initial_conditions,
                flag_options = flag_options_fid,
                astro_params = astro_params,
                global_quantities=global_quantities,
                random_seed = random_seed,
                direc = output_dir
            )
        freqs = 1420.4 / (np.array(lightcone.node_redshifts) + 1.)
        for jj, global_quantity in enumerate(plot_quantities):
            axs[jj].plot(freqs, getattr(lightcone, 'global_%s'%global_quantity.replace('_box','')),
                         color=colors[kk], linestyle=linestyles[kk], label=labels[kk], lw=lws[kk])
    axs[0].text(0.01, 0.99, texts, horizontalalignment='left', verticalalignment='top',
                transform=axs[0].transAxes, fontsize=15)
    for jj, global_quantity in enumerate(plot_quantities):
        axs[jj].set_ylim(ymins[jj], ymaxs[jj])
    axs[-1].set_xlabel('Frequency/MHz', fontsize=15)
    axs[-1].xaxis.set_tick_params(labelsize=15)
    axs[0].set_xlim(1420.4 / (35 + 1.), 1420.4 / (5.5 + 1.))
    zlabels = np.array([6, 7, 8, 10, 13, 18, 25, 35])
    ax2 = axs[0].twiny()
    ax2.set_xlim(axs[0].get_xlim())
    ax2.set_xticks(1420.4 / (zlabels + 1.))
    ax2.set_xticklabels(zlabels.astype(str))
    ax2.set_xlabel("redshift", fontsize=15)
    ax2.xaxis.set_tick_params(labelsize=15)
    ax2.grid(False)
    if pp == 0:
        axs[0].legend(loc='lower right', ncol=2, fontsize=13, fancybox=True, frameon=True)
        for jj, global_quantity in enumerate(plot_quantities):
            axs[jj].set_ylabel('global_%s'%global_quantity.replace('_box',''), fontsize=15)
            axs[jj].yaxis.set_tick_params(labelsize=15)
    else:
        for jj, global_quantity in enumerate(plot_quantities):
            axs[jj].set_ylabel('global_%s'%global_quantity.replace('_box',''), fontsize=0)
            axs[jj].yaxis.set_tick_params(labelsize=0)
plt.tight_layout()
fig.subplots_adjust(hspace=0.0, wspace=0.0)
# define functions to calculate PS, following py21cmmc
from powerbox.tools import get_power

def compute_power(
    box,
    length,
    n_psbins,
    log_bins=True,
    ignore_kperp_zero=True,
    ignore_kpar_zero=False,
    ignore_k_zero=False,
):
    # Determine the weighting function required from ignoring k's.
    k_weights = np.ones(box.shape, dtype=int)
    n0 = k_weights.shape[0]
    n1 = k_weights.shape[-1]
    if ignore_kperp_zero:
        k_weights[n0 // 2, n0 // 2, :] = 0
    if ignore_kpar_zero:
        k_weights[:, :, n1 // 2] = 0
    if ignore_k_zero:
        k_weights[n0 // 2, n0 // 2, n1 // 2] = 0
    res = get_power(
        box,
        boxlength=length,
        bins=n_psbins,
        bin_ave=False,
        get_variance=False,
        log_bins=log_bins,
        k_weights=k_weights,
    )
    res = list(res)
    k = res[1]
    if log_bins:
        k = np.exp((np.log(k[1:]) + np.log(k[:-1])) / 2)
    else:
        k = (k[1:] + k[:-1]) / 2
    res[1] = k
    return res
def powerspectra(brightness_temp, n_psbins=50, nchunks=10, min_k=0.1, max_k=1.0, logk=True):
    data = []
    chunk_indices = list(range(0, brightness_temp.n_slices, round(brightness_temp.n_slices / nchunks)))
    if len(chunk_indices) > nchunks:
        chunk_indices = chunk_indices[:-1]
    chunk_indices.append(brightness_temp.n_slices)
    for i in range(nchunks):
        start = chunk_indices[i]
        end = chunk_indices[i + 1]
        chunklen = (end - start) * brightness_temp.cell_size
        power, k = compute_power(
            brightness_temp.brightness_temp[:, :, start:end],
            (BOX_LEN, BOX_LEN, chunklen),
            n_psbins,
            log_bins=logk,
        )
        data.append({"k": k, "delta": power * k ** 3 / (2 * np.pi ** 2)})
    return data
# do 5 chunks but only plot 1 - 4, the 0th has no power for minihalo models where xH=0
nchunks = 4
k_fundamental = 2*np.pi/BOX_LEN
k_max = k_fundamental * HII_DIM
Nk=np.floor(HII_DIM/1).astype(int)
fig, axss = plt.subplots(nchunks, len(labelss), sharex=True,sharey=True,figsize=(4*len(labelss),3*(nchunks)),subplot_kw={"xscale":'log', "yscale":'log'})
for pp, texts in enumerate(textss):
    labels = labelss[pp]
    factors = factorss[pp]
    axs = axss[:, pp]
    for kk, label in enumerate(labels):
        flag_options = flag_options_fid.copy()
        astro_params = astro_params_vcb.copy()
        factor = factors[kk]
        if label == 'No Velocity':
            lightcone = lightcone_fid_novcb
        elif label == 'Fiducial':
            lightcone = lightcone_fid_vcb
        else:
            if pp == 0:
                astro_params.update({'F_STAR7_MINI': astro_params_vcb['F_STAR7_MINI'] + np.log10(factor)})
            elif pp == 1:
                astro_params.update({'F_ESC7_MINI': astro_params_vcb['F_ESC7_MINI'] + np.log10(factor)})
            elif pp == 2:
                astro_params.update({'L_X_MINI': astro_params_vcb['L_X_MINI'] + np.log10(factor)})
            elif pp == 3:
                astro_params.update({'A_LW': astro_params_vcb['A_LW'] * factor})
            else:
                print('Make a choice!')
            lightcone = p21c.run_lightcone(
                redshift = ZMIN,
                init_box = initial_conditions,
                flag_options = flag_options_fid,
                astro_params = astro_params,
                global_quantities=global_quantities,
                random_seed = random_seed,
                direc = output_dir
            )
        PS = powerspectra(lightcone, min_k = k_fundamental, max_k = k_max)
        for ii in range(nchunks):
            axs[ii].plot(PS[ii+1]['k'], PS[ii+1]['delta'], color=colors[kk], linestyle=linestyles[kk], label=labels[kk], lw=lws[kk])
            if pp == len(textss)-1 and kk == 0:
                axs[ii].text(0.99, 0.01, 'Chunk-%02d'%(ii+1), horizontalalignment='right', verticalalignment='bottom',
                             transform=axs[ii].transAxes, fontsize=15)
    axs[0].text(0.01, 0.99, texts, horizontalalignment='left', verticalalignment='top',
                transform=axs[0].transAxes, fontsize=15)
    axs[-1].set_xlabel("$k$ [Mpc$^{-1}$]", fontsize=15)
    axs[-1].xaxis.set_tick_params(labelsize=15)
    if pp == 0:
        for ii in range(nchunks):
            axs[ii].set_ylim(2e-1, 2e2)
            axs[ii].set_ylabel("$k^3 P$", fontsize=15)
            axs[ii].yaxis.set_tick_params(labelsize=15)
    else:
        for ii in range(nchunks-1):
            axs[ii].set_ylim(2e-1, 2e2)
            axs[ii].set_ylabel("$k^3 P$", fontsize=0)
            axs[ii].yaxis.set_tick_params(labelsize=0)
axss[0,0].legend(loc='lower left', ncol=2, fontsize=13, fancybox=True, frameon=True)
plt.tight_layout()
fig.subplots_adjust(hspace=0.0, wspace=0.0)
```
Note that I ran these simulations in parallel before this tutorial; with this setup, each took ~6 h to finish. Here, "running" them simply reads the cached outputs.
## global properties -- optical depth
```
#defining those color, linstyle, blabla
linestyles = ['-', '-',':','-.','-.',':']
colors = ['gray','black','#e41a1c','#377eb8','#e41a1c','#377eb8']
lws = [1,3,2,2,2,2]
textss_tau = ['varying '+r'$f_{*,7}^{\rm mol}$',\
'varying '+r'$f_{\rm esc}^{\rm mol}$',\
'varying '+r'$A_{\rm LW}$']
factorss_tau = [[0, 1, 0.33, 3.0],] * len(textss_tau)
labelss_tau = [['No Velocity', 'Fiducial', '/3', 'x3'],] * len(textss_tau)
plot_quantities = ['tau_e']
ymins = [0]
ymaxs = [0.2]
fig, axss_tau = plt.subplots(len(plot_quantities), len(labelss_tau),
sharex=True, figsize=(4*len(labelss_tau),3*len(plot_quantities)))
for pp, texts in enumerate(textss_tau):
    labels = labelss_tau[pp]
    factors = factorss_tau[pp]
    axs = axss_tau[pp]
    for kk, label in enumerate(labels):
        flag_options = flag_options_fid.copy()
        astro_params = astro_params_vcb.copy()
        factor = factors[kk]
        if label == 'No Velocity':
            lightcone = lightcone_fid_novcb
        elif label == 'Fiducial':
            lightcone = lightcone_fid_vcb
        else:
            if pp == 0:
                astro_params.update({'F_STAR7_MINI': astro_params_vcb['F_STAR7_MINI'] + np.log10(factor)})
            elif pp == 1:
                astro_params.update({'F_ESC7_MINI': astro_params_vcb['F_ESC7_MINI'] + np.log10(factor)})
            elif pp == 2:
                astro_params.update({'A_LW': astro_params_vcb['A_LW'] * factor})
            else:
                print('Make a choice!')
            lightcone = p21c.run_lightcone(
                redshift = ZMIN,
                init_box = initial_conditions,
                flag_options = flag_options_fid,
                astro_params = astro_params,
                global_quantities=global_quantities,
                random_seed = random_seed,
                direc = output_dir
            )
        z_e = np.array([])
        tau_e = np.array([])
        for i in range(len(lightcone.node_redshifts)-1):
            tauz = p21c.compute_tau(redshifts=lightcone.node_redshifts[-1:-2-i:-1],
                                    global_xHI=lightcone.global_xHI[-1:-2-i:-1])
            tau_e = np.append(tau_e, tauz)
            z_e = np.append(z_e, lightcone.node_redshifts[-2-i])
            #print(i, lightcone.node_redshifts[i], tauz)
        #add lower zs where we keep xH fixed at its final value
        zlow = np.linspace(lightcone.node_redshifts[-1]-0.1, 0.1, 14)
        for zl in zlow:
            tauz = p21c.compute_tau(redshifts=np.array([zl]), global_xHI=np.array([lightcone.global_xHI[-1]]))
            tau_e = np.append([tauz], tau_e)
            z_e = np.append([zl], z_e)
        # freqs = 1420.4 / (np.array(lightcone.node_redshifts) + 1.)
        axs.plot(z_e, tau_e,
                 color=colors[kk], linestyle=linestyles[kk], label=labels[kk], lw=lws[kk])
        #print(z_e, tau_e)
    axs.text(0.01, 0.99, texts, horizontalalignment='left', verticalalignment='top',
             transform=axs.transAxes, fontsize=15)
    axs.set_ylim(ymins[0], ymaxs[0])
    axs.set_xlabel('redshift', fontsize=15)
    axs.xaxis.set_tick_params(labelsize=15)
    axs.set_xlim(0., 20.)
    if pp == 0:
        axs.set_ylabel('$\\tau$', fontsize=15)
        axs.yaxis.set_tick_params(labelsize=15)
    else:
        axs.yaxis.set_tick_params(labelsize=0)
plt.tight_layout()
fig.subplots_adjust(hspace=0.0, wspace=0.0)
```
## 21-cm power spectra
```
# do 5 chunks but only plot 1 - 4, the 0th has no power for minihalo models where xH=0
nchunks = 4
fig, axss = plt.subplots(nchunks, len(labelss), sharex=True,sharey=True,figsize=(4*len(labelss),3*(nchunks)),subplot_kw={"xscale":'log', "yscale":'log'})
for pp, texts in enumerate(textss):
    labels = labelss[pp]
    factors = factorss[pp]
    axs = axss[:, pp]
    for kk, label in enumerate(labels):
        factor = factors[kk]
        if kk == 0:
            lightcone = lightcone_fid_vcbavg
        else:
            lightcone = lightcone_fid_vcb
        PS = powerspectra(lightcone, min_k = k_fundamental, max_k = k_max)
        for ii in range(nchunks):
            axs[ii].plot(PS[ii+1]['k'], PS[ii+1]['delta'], color=colors[kk], linestyle=linestyles[kk], label=labels[kk], lw=lws[kk])
            if pp == len(textss)-1 and kk == 0:
                axs[ii].text(0.99, 0.01, 'Chunk-%02d'%(ii+1), horizontalalignment='right', verticalalignment='bottom',
                             transform=axs[ii].transAxes, fontsize=15)
    axs[0].text(0.01, 0.99, texts, horizontalalignment='left', verticalalignment='top',
                transform=axs[0].transAxes, fontsize=15)
    axs[-1].set_xlabel("$k$ [Mpc$^{-1}$]", fontsize=15)
    axs[-1].xaxis.set_tick_params(labelsize=15)
    if pp == 0:
        for ii in range(nchunks):
            axs[ii].set_ylim(2e-1, 2e2)
            axs[ii].set_ylabel("$k^3 P$", fontsize=15)
            axs[ii].yaxis.set_tick_params(labelsize=15)
    else:
        for ii in range(nchunks-1):
            axs[ii].set_ylim(2e-1, 2e2)
            axs[ii].set_ylabel("$k^3 P$", fontsize=0)
            axs[ii].yaxis.set_tick_params(labelsize=0)
axss[0,0].legend(loc='lower left', ncol=2, fontsize=13, fancybox=True, frameon=True)
plt.tight_layout()
fig.subplots_adjust(hspace=0.0, wspace=0.0)
nchunks=5
textss = ['vcb','vcb']
factorss = [[0, 1],] * len(textss)
labelss = [['Regular', 'Avg'],] * len(textss)
k_fundamental = 2*np.pi/BOX_LEN
k_max = k_fundamental * HII_DIM
Nk=np.floor(HII_DIM/1).astype(int)
PSv= powerspectra(lightcone_fid_vcb, min_k = k_fundamental, max_k = k_max)
PSvavg= powerspectra(lightcone_fid_vcbavg, min_k = k_fundamental, max_k = k_max)
klist= PSv[0]['k']
P21diff = [(PSv[i]['delta']-PSvavg[i]['delta'])/PSvavg[i]['delta'] for i in range(nchunks)]
import matplotlib.pyplot as plt
fig, axss = plt.subplots(nchunks, 1, sharex=True,sharey=True,figsize=(2*len(labelss),3*(nchunks)),subplot_kw={"xscale":'linear', "yscale":'linear'})
for ii in range(nchunks):
axss[ii].plot(klist, P21diff[ii])
plt.xscale('log')
axss[0].legend(loc='lower left', ncol=2,fontsize=13,fancybox=True,frameon=True)
plt.tight_layout()
fig.subplots_adjust(hspace = 0.0,wspace=0.0)
```
# Maximising the utility of an Open Address
Anthony Beck (GeoLytics), John Daniels (UU), Paul Williams (UU), Dave Pearson (UU), Matt Beare (Beare Essentials)

Go down for licence and other metadata about this presentation
\newpage
# The view of addressing from United Utilities
Unless stated otherwise, all content is under a CC-BY licence. All images are re-used under licence - follow the image URL for the licence conditions.


\newpage
## Using Ipython for presentations
A short video showing how to use Ipython for presentations
```
from IPython.display import YouTubeVideo
YouTubeVideo('F4rFuIb1Ie4')
## PDF output using pandoc
import os
### Export this notebook as markdown
commandLineSyntax = 'ipython nbconvert --to markdown 201609_UtilityAddresses_Presentation.ipynb'
print (commandLineSyntax)
os.system(commandLineSyntax)
### Export this notebook and the document header as PDF using Pandoc
commandLineSyntax = 'pandoc -f markdown -t latex -N -V geometry:margin=1in DocumentHeader.md 201609_UtilityAddresses_Presentation.md --filter pandoc-citeproc --latex-engine=xelatex --toc -o interim.pdf '
os.system(commandLineSyntax)
### Remove cruft from the pdf
commandLineSyntax = 'pdftk interim.pdf cat 1-5 22-end output 201609_UtilityAddresses_Presentation.pdf'
os.system(commandLineSyntax)
### Remove the interim pdf
commandLineSyntax = 'rm interim.pdf'
os.system(commandLineSyntax)
```
## The environment
In order to replicate my environment you need to know what I have installed!
### Set up watermark
This describes the versions of software used during the creation.
Please note that critical libraries can also be watermarked as follows:
```python
%watermark -v -m -p numpy,scipy
```
```
%install_ext https://raw.githubusercontent.com/rasbt/python_reference/master/ipython_magic/watermark.py
%load_ext watermark
%watermark -a "Anthony Beck" -d -v -m -g
#List of installed conda packages
!conda list
#List of installed pip packages
!pip list
```
## Running dynamic presentations
You need to install the [RISE Ipython Library](https://github.com/damianavila/RISE) from [Damián Avila](https://github.com/damianavila) for dynamic presentations
## About me

* Honorary Research Fellow, University of Nottingham: [orcid](http://orcid.org/0000-0002-2991-811X)
* Director, Geolytics Limited - A spatial data analytics consultancy
## About this presentation
* [Available on GitHub](https://github.com/AntArch/Presentations_Github/tree/master/20151008_OpenGeo_Reuse_under_licence) - https://github.com/AntArch/Presentations_Github/
* [Fully referenced PDF](https://github.com/AntArch/Presentations_Github/blob/master/201609_UtilityAddresses_Presentation/201609_UtilityAddresses_Presentation.pdf)
\newpage
To convert and run this as a static presentation run the following command:
```
# Notes don't show in a python3 environment
!jupyter nbconvert 201609_UtilityAddresses_Presentation.ipynb --to slides --post serve
```
To close this instances press *control 'c'* in the *ipython notebook* terminal console
Static presentations allow the presenter to see *speakers notes* (use the 's' key)
If running dynamically run the scripts below
## Pre load some useful libraries
```
#Future proof python 2
from __future__ import print_function #For python3 print syntax
from __future__ import division
# def
import IPython.core.display
# A function to collect user input - ipynb_input(varname='username', prompt='What is your username')
def ipynb_input(varname, prompt=''):
"""Prompt user for input and assign string val to given variable name."""
js_code = ("""
var value = prompt("{prompt}","");
var py_code = "{varname} = '" + value + "'";
IPython.notebook.kernel.execute(py_code);
""").format(prompt=prompt, varname=varname)
return IPython.core.display.Javascript(js_code)
# inline
%pylab inline
```
\newpage
# Addresses

are part of the fabric of everyday life
\newpage
# Addresses

Have economic and commercial impact
\newpage
# Addresses

Support governance and democracy
* Without an address, it is harder for individuals to register as legal residents.
* They are *not citizens* and are excluded from:
* public services
* formal institutions.
* This impacts on democracy.
\newpage
# Addresses

Support Legal and Social integration
* Formal versus Informal
* Barring individuals and businesses from systems:
* financial
* legal
* government
* ....
\newpage
# Addresses
bridge gaps - provide the link between ***people*** and ***place***

\newpage
# Utility Addresses
## In the beginning ...... was the ledger

\newpage
## Bespoke digital addresses
* Digitisation and data entry to create a bespoke Address Database -
* Fit for UU's operational purpose
* Making utilities a key *owner* of address data
* Subject to IP against PAF matching

\newpage
## Policy mandates
Open Water - A shared view of addresses requiring a new addressing paradigm - Address Base Premium?

\newpage
# Utility addressing:
* Postal delivery (Billing)
* Services and Billing to properties within the extent of the UU operational area
* Billing to customers outside the extent of UU operational area
* Asset/Facilities Management (infrastructure)
* Premises
* But utilities manage different assets to Local Authorities
* is an address the best way to manage a geo-located asset?
* Bill calculation
* Cross-referencing Valuation Office and other details.

\newpage
. . . .
**It's not just postal addressing**
. . . .
**Address credibility is critical**
. . . .
Utilities see the full life-cycle of an address - especially the birth and death
\newpage
## asset management
* UU manage assets and facilities
> According to ABP a Waste Water facility is neither a postal address nor a non-postal address.
Really? Is it having an existential crisis?

\newpage
## A connected spatial network

\newpage
## Serving customers who operate **somewhere**

* UU serve customers located in
* Buildings
* Factories
* Houses
* Fields
\newpage
## Serving customers who operate **anywhere**

\newpage
# Utility addressing issues
* Addresses are a pain
* Assets as locations
* Services as locations
* People at locations

\newpage
# Issues: addresses = postal address.
* Is *Postal* a constraining legacy?
* Is *address* a useful term?

\newpage
# Issues: Do formal *addresses* actually help utilities?
* External addresses (ABP for example) are another product(s) to manage
* which may not fit the real business need
* which may not have full customer or geographic coverage

\newpage
# What is an address?
## Address abstraction
* Addresses did not spring fully formed into existence.
* They are used globally
* but developed nationally
* and for different reasons

\newpage
## Royal Mail - postal delivery

In a postal system:
* a *Delivery Point* (DP) is a single mailbox or other place at which mail is delivered.
* a single DP may be associated with multiple addresses
* An *Access Point* provides logistical detail.
The postal challenge is to solve the last 100 meters. In such a scenario the *post person* is critical.
DPs were collected by the Royal Mail for their operational activities and sold under licence as the *Postal Address File* (PAF). PAF is built around the 8-character *Unique Delivery Point Reference Number* (UDPRN). The problem with PAF is that the spatial context is not incorporated into the product. Delivery points are decoupled from their spatial context - a delivery point with a spatial context should provide the clear location of the point of delivery (a door in a house, a post-room at an office etc.).
\newpage
## LLPG - asset management
![](https://dl.dropboxusercontent.com/u/393477/ImageBank/ForAddresses/LBH-LLPG-System-flow-Diagram.png)
An LLPG (Local Land and Property Gazetteer) is a collection of address and location data created by a local authority.
It is an Asset/Facilities Management tool to support public service delivery:
* Local Authority
* Police
* Fire
* Ambulance
It incorporates:
* Non postal addresses (i.e. something that the Royal Mail wouldn't deliver post to)
* a 12-digit Unique Property Reference Number for every building and plot of land
* National Street Gazetteer
Prior to the initialization of the LLPGs, local authorities held different address data across different departments; the purpose of the Local Land and Property Gazetteers was to rationalize the data, so that a property or a particular plot of land is referred to as the same thing, even where it has different names.
\newpage
## Addresses as assets?
![](http://joncruddas.org.uk/sites/joncruddas.org.uk/files/styles/large/public/field/image/post-box.jpg?itok=ECnzLyhZ)
* So what makes the following 'non-postal' *facilities* addresses:
* Chimney
* Post box - which is clearly having a letter delivered ;-)
* Electricity sub-station
* Context is critical
* So why is a waste-water facility not an address in ABP (when an Electricity sub-station is)?
* Because it is not *of interest* to a council and the Royal Mail have never been asked to deliver mail to it.
\newpage
## Korea: The Jibeon system - taxation

* Until recently, the Republic of Korea (Korea) used land parcel numbers (jibeon) to identify unique locations.
* These parcel numbers were assigned chronologically according to date of construction and without reference to the street where they were located.
* This meant that adjacent buildings did not necessarily follow a sequential numbering system.
* This system was initially used to identify land for census purposes and to levy taxes.
* In addition, until the launch of the new addressing system, the jibeon was also used to identify locations (i.e. a physical address).
\newpage
## World Bank - social improvement

The World Bank has taken a *street addressing* view-point (@_addressing_2012, p.57). This requires up-to-date mapping and bureaucracy (to deliver a street gazetteer and to provide the street infrastructure and furniture). However, (@_addressing_2012, p.44) demonstrates that this is a cumbersome process with a number of issues, not least:
* Urban bias
* Cost of infrastructure development
* Lack of community involvement
\newpage
## Denmark: An addressing commons with impact


\newpage
## Denmark: An addressing commons with impact
* Geocoded address infrastructure
* Defined the semantics of purpose
* what is an address
* Open data
* an address commons
* The re-use statistics are staggering:
* 70% of deliveries are to the private sector,
* 20% are to central government
* 10% are to municipalities.
* Benefits:
* Efficiencies
* No duplication
* Improved confidence
* Known quality
A credible service providing a multitude of efficiencies (@_addressing_2012, pp.50 - 54)
\newpage
# UK Addressing
## Geoplace - Formal
![](https://www.geoplace.co.uk/documents/10181/67776/NAG+infographic/835d83a5-e2d8-4a26-bc95-c857b315490a?t=1434370410424)
* GeoPlace is a limited liability partnership owned equally by the [Local Government Association](http://www.local.gov.uk/) and [Ordnance Survey](http://www.ordnancesurvey.co.uk/).
* It has built a synchronised database containing spatial address data from
* 348 local authorities in England and Wales (the *Local Land and Property Gazetteers* (LLPG) which cumulatively create the *National Land and Property Gazetteer* (NLPG)),
* Royal Mail,
* Valuation Office Agency and
* Ordnance Survey datasets.
* The NAG Hub database is owned by GeoPlace and is the authoritative single source of government-owned national spatial address information, containing over 225 million data records relating to about 34 million address features. GeoPlace is a production organisation with no product sales or supply operations.
* The NAG is made available to public and private sector customers through Ordnance Survey’s [AddressBase](http://www.ordnancesurvey.co.uk/business-and-government/products/addressbase.html) products.
\newpage
## The AddressBase Family

* The National Address Gazetteer Hub database is owned by GeoPlace and is claimed to be *the authoritative single source of government-owned national spatial address information*, containing over 225 million data records relating to about 34 million address features.
* Each address has its own *Unique Property Reference Number* (UPRN). The AddressBase suite have been designed to integrate into the [Ordnance Survey MasterMap suite of products](http://www.ordnancesurvey.co.uk/business-and-government/products/mastermap-products.html).
AddressBase is available at three levels of granularity (lite, plus and premium).
* AB+ merges two address datasets together (PAF and Local Authority) to provide the best available view of addresses currently defined by Local Authorities, giving many advantages over AddressBase.
* AB+ lets you link additional information to a single address, place it on a map, and carry out spatial analysis that enables improved business practices.
* Geoplace argue that further value comes from additional information in the product which includes:
* A more detailed classification – allowing a better understanding of the type (e.g. Domestic, Commercial or Mixed) and function of a property (e.g. Bank or Restaurant)
* Local Authority addresses not contained within PAF – giving a more complete picture of the current addresses and properties (assuming they are in scope (see below))
* Cross reference to OS MasterMap TOIDs – allowing simple matching to OS MasterMap Address Layer 2, Integrated Transport Network or Topography Layer
* Spatial coordinates
* Unique Property Reference Number (UPRN) – which provides the ability to cross reference data with other organisations, and maintain data integrity.
* Premium includes the address lifecycle
AddressBase supports the UK Location Strategy concept of a 'core reference geography', including the key principles of the European Union INSPIRE directive, that data should only be collected once and kept where it can be maintained most effectively (see [AddressBase products user guide](http://www.ordnancesurvey.co.uk/docs/user-guides/addressbase-products-user-guide.pdf)). *It's probably worthwhile mentioning that this is not an open address layer - however, a [2014 feasibility study sponsored by the department of Business, Innovation and Skills](https://www.gov.uk/government/publications/an-open-national-address-gazetteer) included a recommendation that AddressBase lite is made openly available*.
\newpage
## Address lifecycle
![](https://dl.dropboxusercontent.com/u/393477/ImageBank/ForAddresses/ABP_Lifecycle.png)
* This ability to maintain an overview of the lifecycle of address and property status means the AddressBase Premium has introduced new potential use cases.
* This has seen companies incorporating AddressBase Premium into their business systems to replace PAF or bespoke addressing frameworks - in theory the ability to authoritatively access the address lifecycle provides greater certainty for a number of business operations.
* At *United Utilities* (UU) AddressBase Premium is replacing a multitude of bespoke and PAF based addressing products.
\newpage
## [Open National Address Gazetteer](https://www.gov.uk/government/publications/an-open-national-address-gazetteer) - *informal?*
The *Department for Business, Innovation & Skills* (BIS) on the need for an [Open National Address Gazetteer](https://www.gov.uk/government/publications/an-open-national-address-gazetteer) commissioned a review of *open addressing* which was published January 2014.
. . . . .
It recommended:
* the UK Government commission an 'open' addressing product based on a variation of the 'freemium' model
* data owners elect to release a basic ('Lite') product as Open Data that leaves higher value products to be licensed
. . . . .
AddressBase Lite was proposed with an annual release cycle. Critically this contains the UPRN, which could be key for product interoperability.
* This would allow the creation of a shared interoperable address spine along the lines of the Denmark model
\newpage
## Open NAG - [*'Responses received'*](https://www.gov.uk/government/publications/an-open-national-address-gazetteer) April 2014
With the exception of the PAF advisory board and Royal Mail there was support for the BIS review across the respondents, with some notable calls for the *Totally Open* option (particularly from those organisations who are not part of the Public Sector Mapping Agreement) and that the UPRN should be released under an open data licence (as a core reference data set that encourages product interoperability).
. . . . .
A number of quotes have been selected below:
\newpage
## Addresses as an Open Core Reference
>....Address data and specific locations attached to them **are part of Core Reference data sets recognised by government as a key component of our National Information Infrastructure** (as long argued by APPSI). The report published by BIS gives us **a chance to democratise access to addressing data** and meet many of the Government’s avowed intentions. We urge acceptance of Option 6 *(freemium)* or 7 *(an independent open data product)*.
**David Rhind *Chair of the Advisory Panel on Public Sector Information* **
>....**Freely available data are much more likely to be adopted** by users and embedded in operational systems. **A national register, free at the point of delivery will undoubtedly help in joining up services, increasing efficiency and reducing duplication**.
**Office of National Statistics**
\newpage
## Monopoly rent exploitation
>... we expressed concern that, for almost all other potential customers (non-public sector), **the prices are prohibitive**, and appear designed to protect OS’s existing policy of setting high prices for a small captive market, **extracting monopoly rent**.
**Keith Dugmore *Director, DUG* **
\newpage
## The benefit of current credible addresses
>**The problem of out-of-date addresses is a very significant commercial cost** for the whole of the UK and is also arguably underplayed in the report.
**Individual Respondent 3**
\newpage
## Licences
>Whatever licence the data is available under, **it must permit the data to be combined with other open data and then re-published**. ... The [Open Government Licence](http://www.nationalarchives.gov.uk/doc/open-government-licence/version/2/) fulfils this criteria, but it should be noted that the [OS OpenData Licence](http://www.ordnancesurvey.co.uk/docs/licences/os-opendata-licence.pdf) (enforced by OS on it's OS OpenData products, and via the PSMA) does not. The use of the latter would represent a significant restriction on down-stream data use, and so should be avoided.
**Individual Respondent 6**
\newpage
# Taking Stock
## Addresses are heterogeneous

In terms of:
* What they mean
* What they are used for
* Who uses them
* How they are accessed
\newpage
## Assets can have addresses
So - anything can have an address (the *Internet of Things*)
![](http://joncruddas.org.uk/sites/joncruddas.org.uk/files/styles/large/public/field/image/post-box.jpg?itok=ECnzLyhZ)
\newpage
## National data silos

They have been created to solve national issues.
No unifying semantics
\newpage
##

\newpage
## Addresses are bureaucratic and costly

Severely protracted when formal/informal issues are encountered.
\newpage
## Addresses can be opaque

**transparent and reproducible?**
\newpage
## Addresses are of global significance

\newpage
## Addresses are ripe for disruption

\newpage
# Address Disruption
## Formal versus informal

\newpage
## Technology
Streets are so last century.....

* Ubiquitous GPS/GNSS
* Structured crowdsourced geo-enabled content (wikipedia, OSM)
\newpage
## Interoperability

* Will the semantic web provide address interoperability?
* between addressing systems
* to incorporate additional data
* VOA
* ETC
\newpage
## Globalisation

* Addressing is a **core reference geography**
* Global brands will demand (or invoke) consistent global addressing
* How will licences impact on this?
\newpage
# A new global address paradigm?
* [Amazon drone delivery in the UK requires](https://www.theguardian.com/technology/2016/jul/25/amazon-to-test-drone-delivery-uk-government)
* A new view over addressing complements streets and buildings but is geo-coded at source
* and supports accurate delivery throughout the delivery chain using a global referencing system.
Is there a universal approach which allows all avenues to be satisfied?

\newpage
## How might this look?
.
.
Requirements for a Global Address Framework
.
.
\newpage
## WGS84 algorithmic address minting

**A global addressing framework needs to be transparent and reproducible.**
**A global addressing framework should be based on a spatial reference system.**
**A global addressing framework needs to be lightweight and cheap so it can be implemented in a timely manner.**
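One reading of these three requirements is that an address code should be *mintable* deterministically from a WGS84 position alone, with no registry in the loop. A toy geohash-style sketch (illustrative only; not the algorithm behind What3Words or any real addressing product):

```python
# Toy sketch: deterministically "mint" a short code from a WGS84 position
# by bisecting longitude/latitude and interleaving the bits, geohash-style.
# Illustrative only; not the scheme of any real addressing product.

ALPHABET = "0123456789bcdefghjkmnpqrstuvwxyz"  # 32 symbols, 5 bits each

def mint_code(lat, lon, length=8):
    """Derive a `length`-character code from latitude/longitude alone."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits = []
    use_lon = True                      # longitude and latitude alternate
    while len(bits) < 5 * length:
        if use_lon:
            mid = (lon_lo + lon_hi) / 2
            bits.append(1 if lon >= mid else 0)
            if lon >= mid:
                lon_lo = mid
            else:
                lon_hi = mid
        else:
            mid = (lat_lo + lat_hi) / 2
            bits.append(1 if lat >= mid else 0)
            if lat >= mid:
                lat_lo = mid
            else:
                lat_hi = mid
        use_lon = not use_lon
    return "".join(ALPHABET[int("".join(map(str, bits[i:i + 5])), 2)]
                   for i in range(0, len(bits), 5))

code = mint_code(53.4808, -2.2426)      # central Manchester, illustrative
```

Because the code is pure arithmetic on latitude and longitude, anyone can mint or verify it offline, which is what makes the scheme transparent, reproducible and cheap.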
\newpage
## Small footprint

**Ubiquitous access across platforms.**
**No dependency on internet access.**
\newpage
## Short/memorable

\newpage
## Self checking

**Improving validity and credibility of downstream business processes.**
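Self-checking can be as cheap as one extra character derived from the rest of the identifier. A toy sketch (the position-weighted scheme below is illustrative; it catches many, though not all, single-character errors):

```python
# Toy sketch of a self-checking code: one appended check character makes
# many single-character typos detectable before downstream systems act on
# a corrupted identifier. The weighting scheme is illustrative only.

ALPHABET = "0123456789bcdefghjkmnpqrstuvwxyz"

def with_check(code):
    """Append a check character from position-weighted symbol values."""
    total = sum((i + 1) * ALPHABET.index(c) for i, c in enumerate(code))
    return code + ALPHABET[total % len(ALPHABET)]

def is_valid(code):
    """A code validates iff recomputing the check character reproduces it."""
    return len(code) > 1 and with_check(code[:-1]) == code
```

A corrupted code then fails `is_valid` before any downstream billing or dispatch process acts on it.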
\newpage
## Unlimited spatial recording

* What are the spatial requirements for the range of addressing options?
* [Manila has a population density of 42,857 people per square km](http://en.wikipedia.org/wiki/List_of_cities_proper_by_population_density).
* [Map Kibera](http://mapkibera.org/) and OSM has revolutionised service delivery in Kibera (Kenya).
* Address Kibera could do the same thing for citizenship.
**A global addressing framework should meet the needs of the rural, urban, formal and informal communities equally.**
\newpage
## Open and interoperable

\newpage
## Open and interoperable
> the lack of a consistent and transparent legal and policy framework for sharing spatial data continues to be an additional roadblock.
@pomfret_spatial_2010
**A global addressing framework should be open or available with as few barriers as possible.**
\newpage
## Indoor use and 3D

Incorporating wifi-triangulation - *individual room* addressing and navigation.
Seamless integration with BIM and CityGML.
*Addressing isn't only about buildings - think about the Internet of Things*
\newpage
## Inherent geo-statistical aggregation (spatially scalable)

GIS free multi-scale analysis and reporting during disaster scenarios.
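If the codes are hierarchical, so that every prefix names a coarser cell, multi-scale aggregation reduces to prefix grouping with no GIS overlay step. A sketch with illustrative geohash-style codes:

```python
# Sketch: when every prefix of a code names a coarser spatial cell,
# multi-scale aggregation is just prefix grouping. Codes are illustrative.
from collections import Counter

incident_codes = ["gcw2j4", "gcw2j9", "gcw2k1", "gcqr5x", "gcqr5y"]

def aggregate(codes, precision):
    """Count records per cell at the given prefix length."""
    return Counter(code[:precision] for code in codes)

coarse = aggregate(incident_codes, 4)
```

Truncating to shorter prefixes gives progressively coarser reporting cells from the same records.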
\newpage
# Utility address concepts
* A means of communicating location to third parties in a way **they** understand.
* Delivery
* Contract engineer
* Incident reporting
* Hence, addresses are all about sharing
* We need to *buy into* disambiguating stakeholder semantics
* Democratise the infrastructure
* Democratise re-use
* Everything is mediated by a human in the information exchange
* Everyone has their own semantics
* Formal and vernacular geographies
\newpage
## Addresses mediate space
In business systems, addresses are a bridge between technology stacks and social systems.

\newpage
## Addresses mediate space
In business systems, addresses are a bridge between technology stacks and social systems.

* Challenges
* find an unambiguous way to encode these different address types across the enterprise (and/or as part of an open initiative)
* find ways to dynamically transform these addresses so that each end-user community gets the most appropriate address, be they:
* formal addresses
* vernacular (informal) addresses
* Postal address
* Asset location
\newpage
* Most people in the UK think of an address as a *postal address*
* This is a mindset we should be trying to break
* A delivery address is only one facet to an address
* What do addresses enable
* Services
* Postal services
* Utility services
* etc
* Routing
* Vehicle navigation
* People navigation
* Asset/Infrastructure management
* Information integration
* Lifecycle
* Geodemographics
* Hence, addressing information covers a range of requirements:
* Semantic
* GIS
* Database
* Challenges
* find an unambiguous way to encode these different address types across the enterprise (and/or as part of an open initiative)
* find ways to dynamically transform these addresses so that each end-user community gets the most appropriate address, be they:
* formal addresses
* vernacular (informal) addresses
* Postal address
* Asset location
\newpage
In terms of assets two things spring to mind -
1. we no longer need streets and buildings to provide an address.
* GNSS already does this globally.
* The challenge is to translate GNSS into something appropriate for other services
1. The Access point/Delivery point metaphor used by Royal Mail may be important for traction
* solving the last 100m problem (or location of local drone delivery depot)

\newpage
# Current utility addressing?
## A shared view over addressing?

\newpage
## A shared view over addressing?
Not really....
* ABP isn’t a silver bullet
* Subset of required ‘formal - delivery’ addresses
* Mis-match in terms of assets
* Why does a sewage works not have an address when a post-box does?
* Not plug and play
* Lag in the system - the lifecycle feedback does not have credibility for time critical applications.
* The co-ordinating spine is not freely available (under a permissive licence)
* Inset areas - an agglomeration of 'addresses'
* VOA link is a kludge

\newpage
## Addresses should mediate systems
* Bridge the gap between a building-focused 2D view of the world and the 3D world we inhabit.
* Harmonise the edge-case relationships between UPRNs and VOAs

\newpage
## Issues about ABP
* Users over a barrel
* Needed to buy ABP as AL2 was being removed
* Data model change
* a hostage to someone else's data model
* Lifecycle benefit not being realised (at least not for utilities)
* Although utilities have a significant value add
* Update frequency
* Different view of property hierarchy
* 2D and 3D metaphors against VOA
* a better 2.5D view of the world would be appreciated.
* Licences do not encourage re-use and innovation
\newpage
## This begs the question
> Why should utilities replace a functional bespoke address system with an address framework (ABP) that does not meet all their business requirements?
This creates a paradox when products like AddressBase are stipulated in Government policy documents (such as OpenWater)
How can this gap be bridged? Can *open addresses* help?
**Addresses need to be fit-for-purpose for the end user**
\newpage
# Future Addressing
## What do Utilities need from an Open Address infrastructure
> Ant Beck will talk about how addresses are employed within United Utilities: from bespoke addressing, to the current implementation of Geoplace’s Address Base.
>**The current approach to addressing hinders effective market activities so consideration is given to how Open approaches can disrupt the addressing landscape and improve utility services.**
* Should this simply emulate Address Base Premium?
* No
* Like Denmark should it exploit technological developments to be:
* More robust
* Improve use case
* More flexible
# Future Addressing
## What do Utilities need from an Open Address infrastructure
* Should it embrace technological development to make operational activities more efficient
* Use disruptive technologies to facilitate geo-coded addressing of assets in a flexible and credible manner
* How can such an infrastructure interoperate with other formal and informal sources to provide benefits
* What licence would a service be under?
* OS licence? - **No - it is restrictive**
* The point is to encourage:
* adoption
* engagement
* re-use
> We would like to see an *open address infrastructure* in the UK **provide a platform for 21st Century addressing**
> It should **not simply aim to emulate ABP** - there are other use cases
\newpage
## What can Utilities bring to Open Addresses
* A credible publisher of addressing updates under open licences providing:
* additional content
* improved lifecycle information
* expanded use cases
* improving confidence and credibility
* Critical lifecycle data updates
* potentially faster than local government (lag is critical to some users).
\newpage
## What can Open Addresses bring to Utilities
* Fill the gap of formal and informal addresses
* But share a common reference Spine
* UPRN?
* But what about the 3d world
* Add value
* Link to different geoaddressing paradigm
* W3W
* GeoHash
* etc.
* Linked data?
* Property life-cycle?
* Spatially consistent
* Crowd enhanced
* Service innovation
* enhanced business intelligence from shared knowledge
* geo-demographics protecting the disenfranchised
* who are our sensitive customers - what are their needs?
\newpage
# Final thoughts
Utilities have the potential to be:
* Key consumers of open addressing data
* Key providers of open addressing content
**United Utilities would like to help frame this debate and be part of any solution.**
\newpage
# References
This notebook was prepared by Marco Guajardo. Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# Challenge Notebook
## Problem: Implement a binary search tree with insert, delete, different traversals & max/min node values
* [Constraints](#Constraints)
* [Test Cases](#Test-Cases)
* [Algorithm](#Algorithm)
* [Code](#Code)
* [Unit Test](#Unit-Test)
* [Solution Notebook](#Solution-Notebook)
## Constraints
* Is this a binary tree?
* Yes
* Is the root set to None initially?
* Yes
* Do we care if the tree is balanced?
* No
* What do we return for the traversals?
* Return a list of the data in the desired order
* What type of data can the tree hold?
* Assume the tree only takes ints. In a realistic example, we'd use a hash table to convert other types to ints.
## Test Cases
### Insert
* Always start with the root
* If value is less than the root, go to the left child
* if value is more than the root, go to the right child
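The insert rule above can be sketched on plain nodes as follows (a minimal illustration only; the challenge asks you to implement `insert` inside `BinaryTree` yourself):

```python
# A minimal sketch of the insert rule: start at the root, go left for
# smaller values and right otherwise. Illustrative; not the solution
# notebook's implementation.

class _Node:
    def __init__(self, data):
        self.data, self.left, self.right = data, None, None

def bst_insert(root, value):
    """Return the (possibly new) root after inserting value."""
    if root is None:
        return _Node(value)
    if value < root.data:
        root.left = bst_insert(root.left, value)
    else:
        root.right = bst_insert(root.right, value)
    return root
```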
### Delete
* Deleting a node from a binary tree is tricky. Make sure you arrange the tree correctly when deleting a node.
* Here are some basic [instructions](http://www.algolist.net/Data_structures/Binary_search_tree/Removal)
* If the value to delete isn't on the tree return False
### Traversals
* In order traversal - left, center, right
* Pre order traversal - center, left, right
* Post order traversal - left, right, center
* Return list for all traversals
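For reference, the three orders can be sketched on a hand-built tree (a minimal illustration, not the solution notebook's code; the challenge asks you to implement these inside `BinaryTree`):

```python
# Minimal illustration of the three traversal orders on a hand-built tree.

class _N:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def in_order(n):
    return in_order(n.left) + [n.data] + in_order(n.right) if n else []

def pre_order(n):
    return [n.data] + pre_order(n.left) + pre_order(n.right) if n else []

def post_order(n):
    return post_order(n.left) + post_order(n.right) + [n.data] if n else []

root = _N(50, _N(30, _N(10), _N(40)), _N(70))
```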
### Max & Min
* Find the max node in the binary search tree
* Find the min node in the binary search tree
### treeIsEmpty
* check if the tree is empty
## Algorithm
Refer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/graphs_trees/binary_tree_implementation/binary_tree_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
## Code
```
class Node (object):
def __init__ (self, data=None):
#TODO:implement me
pass
def __str__ (self):
#TODO:implement me
pass
class BinaryTree (object):
def __init__ (self):
#TODO:implement me
pass
def insert (self, newData):
#TODO:implement me
pass
def delete (self, key):
#TODO:implement me
pass
def maxNode (self):
#TODO:implement me
pass
def minNode (self):
#TODO:implement me
pass
def printPostOrder (self):
#TODO:implement me
pass
def printPreOrder (self):
#TODO:implement me
pass
def printInOrder (self):
#TODO:implement me
pass
def treeIsEmpty (self):
#TODO: implement me
pass
```
## Unit Test
```
import unittest
class TestBinaryTree(unittest.TestCase):
def test_insert_traversals (self):
myTree = BinaryTree()
myTree2 = BinaryTree()
for num in [50, 30, 70, 10, 40, 60, 80, 7, 25, 38]:
myTree.insert(num)
[myTree2.insert(num) for num in range (1, 100, 10)]
print("Test: insert checking with in order traversal")
expectVal = [7, 10, 25, 30, 38, 40, 50, 60, 70, 80]
self.assertEqual(myTree.printInOrder(), expectVal)
expectVal = [1, 11, 21, 31, 41, 51, 61, 71, 81, 91]
self.assertEqual(myTree2.printInOrder(), expectVal)
print("Test: insert checking with post order traversal")
expectVal = [7, 25, 10, 38, 40, 30, 60, 80, 70, 50]
self.assertEqual(myTree.printPostOrder(), expectVal)
expectVal = [91, 81, 71, 61, 51, 41, 31, 21, 11, 1]
self.assertEqual(myTree2.printPostOrder(), expectVal)
print("Test: insert checking with pre order traversal")
expectVal = [50, 30, 10, 7, 25, 40, 38, 70, 60, 80]
self.assertEqual(myTree.printPreOrder(), expectVal)
expectVal = [1, 11, 21, 31, 41, 51, 61, 71, 81, 91]
self.assertEqual(myTree2.printPreOrder(), expectVal)
print("Success: test_insert_traversals")
def test_max_min_nodes (self):
myTree = BinaryTree()
myTree.insert(5)
myTree.insert(1)
myTree.insert(21)
print("Test: max node")
self.assertEqual(myTree.maxNode(), 21)
myTree.insert(32)
self.assertEqual(myTree.maxNode(), 32)
print("Test: min node")
self.assertEqual(myTree.minNode(), 1)
print("Test: min node inserting negative number")
myTree.insert(-10)
self.assertEqual(myTree.minNode(), -10)
print("Success: test_max_min_nodes")
def test_delete (self):
myTree = BinaryTree()
myTree.insert(5)
print("Test: delete")
myTree.delete(5)
self.assertEqual(myTree.treeIsEmpty(), True)
print("Test: more complex deletions")
[myTree.insert(x) for x in range(1, 5)]
myTree.delete(2)
self.assertEqual(myTree.root.rightChild.data, 3)
print("Test: delete invalid value")
self.assertEqual(myTree.delete(100), False)
print("Success: test_delete")
def main():
testing = TestBinaryTree()
testing.test_insert_traversals()
testing.test_max_min_nodes()
testing.test_delete()
if __name__=='__main__':
main()
```
**The unit test above is expected to fail until you solve the challenge.**
## Solution Notebook
Review the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/graphs_trees/binary_tree_implementation/binary_tree_solution.ipynb) for a discussion on algorithms and code solutions.
```
import numpy as np
import env
import catalog as Cat
import sham_hack as SHAM
import observables as Obvs
import AbundanceMatching as AM
import corner as DFM
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['text.usetex'] = True
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['axes.linewidth'] = 1.5
mpl.rcParams['axes.xmargin'] = 1
mpl.rcParams['xtick.labelsize'] = 'x-large'
mpl.rcParams['xtick.major.size'] = 5
mpl.rcParams['xtick.major.width'] = 1.5
mpl.rcParams['ytick.labelsize'] = 'x-large'
mpl.rcParams['ytick.major.size'] = 5
mpl.rcParams['ytick.major.width'] = 1.5
mpl.rcParams['legend.frameon'] = False
%matplotlib inline
subs = Cat.SubhaloHistory(sigma_smhm=0.2, smf_source='li-march', nsnap_ancestor=15)
subcat = subs.Read()
mmaxlim = (subcat['snapshot15_m.max'] > 0) & (subcat['snapshot15_m.star'] > 0)
msham_z1 = subcat['snapshot15_m.star'][mmaxlim]
mhalo_z1 = subcat['snapshot15_halo.m'][mmaxlim]
mmax_z1 = subcat['snapshot15_m.max'][mmaxlim]
print(mmax_z1.min(), mmax_z1.max())
print(msham_z1.min(), msham_z1.max())
smhmr = Obvs.Smhmr()
mmid, _, sig_logm, cnts = smhmr.Calculate(mmax_z1, msham_z1)
fig = plt.figure()
sub = fig.add_subplot(111)
DFM.hist2d(mmax_z1, msham_z1, levels=[0.68, 0.95],
range=[[10., 15.],[6., 13.]], plot_datapoints=True, fill_contours=False, plot_density=True, ax=sub)
fig = plt.figure()
sub = fig.add_subplot(111)
sub.plot(mmid, sig_logm)
sub.set_xlim([10., mmax_z1.max()])
sub.set_ylim([0., 0.5])
fig = plt.figure()
sub = fig.add_subplot(111)
z = 1.
MF = SHAM.SMFClass('li-march', z, 0.0, 0.7)
sub.plot(np.linspace(6., 13., 20), ([MF.numden(mm) for mm in np.linspace(6., 13., 20)]), ls='--')
sub.plot(np.linspace(6., 13., 20), ([MF.dndm(mm) for mm in np.linspace(6., 13., 20)]), ls=':')
sub.set_xlim([5., 13.])
sub.set_yscale('log')
sub.set_ylim([10**-5., 10**-1.5])
m_arr = np.linspace(6.0, 13.0, 50)
af = AM.AbundanceFunction(m_arr, np.array([MF.dndm(mm) for mm in m_arr]), ext_range=(4., 13.), faint_end_first=True)
scatter = 0.2
remainder = af.deconvolute(scatter, 20)
x, nd = af.get_number_density_table()
plt.plot(x, remainder/nd)
plt.xlim([6., 13.])
plt.ylim([-0.5, 1.])
nd_halos = AM.calc_number_densities(mmax_z1, 250./0.7)
catalog = af.match(nd_halos)
catalog_sc = af.match(nd_halos, 0.2)
mmid, _, sig_logm, cnts = smhmr.Calculate(mmax_z1, catalog)
fig = plt.figure()
sub = fig.add_subplot(111)
sub.plot(mmid, sig_logm)
sub.set_xlim([10., mmax_z1.max()])
sub.set_ylim([0., 0.5])
fig = plt.figure()
sub = fig.add_subplot(111)
DFM.hist2d(mmax_z1, msham_z1, levels=[0.68, 0.95], color='C0',
range=[[10., 15.],[6., 13.]], plot_datapoints=True, fill_contours=False, plot_density=True, ax=sub)
DFM.hist2d(mmax_z1, catalog_sc, levels=[0.68, 0.95], color='C1',
range=[[10., 15.],[6., 13.]], plot_datapoints=True, fill_contours=False, plot_density=True, ax=sub)
fig = plt.figure()
sub = fig.add_subplot(111)
DFM.hist2d(mhalo_z1, msham_z1, levels=[0.68, 0.95], color='C0',
range=[[10., 15.],[6., 13.]], plot_datapoints=True, fill_contours=False, plot_density=True, ax=sub)
DFM.hist2d(mhalo_z1, catalog_sc, levels=[0.68, 0.95], color='C1',
range=[[10., 15.],[6., 13.]], plot_datapoints=True, fill_contours=False, plot_density=True, ax=sub)
isfin = np.isfinite(catalog_sc)
mmid, _, sig_logm, cnts = smhmr.Calculate(mmax_z1[isfin], catalog_sc[isfin])
fig = plt.figure()
sub = fig.add_subplot(111)
sub.plot(mmid, sig_logm)
sub.set_xlim([10., mmax_z1.max()])
sub.set_ylim([0., 0.5])
```
```
# 1
# Load The dataset
import numpy
data = numpy.loadtxt("./data/pima-indians-diabetes.csv", delimiter=",")
X = data[:,0:8]
y = data[:,8]
# 2
# Create the function that returns the keras model
from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l2
def build_model(lambda_parameter):
model = Sequential()
model.add(Dense(8, input_dim=8, activation='relu', kernel_regularizer=l2(lambda_parameter)))
model.add(Dense(8, activation='relu', kernel_regularizer=l2(lambda_parameter)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
return model
# 3
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
# define a seed for random number generator so the result will be reproducible
import numpy
seed = 1
numpy.random.seed(seed)
# create the Keras wrapper with scikit learn
model = KerasClassifier(build_fn=build_model, verbose=0)
# define all the possible values for each hyperparameter
lambda_parameter = [0.01, 0.5, 1]
epochs = [350, 400]
batch_size = [10]
# create the dictionary containing all possible values of hyperparameters
param_grid = dict(lambda_parameter=lambda_parameter, epochs=epochs, batch_size=batch_size)
# search the grid, perform 5-fold cross validation for each possible combination, store the results
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=5)
results = grid_search.fit(X, y)
# 3
# print the results for best cross validation score
print("Best cross validation score =", results.best_score_)
print("Parameters for best cross validation score =", results.best_params_)
# print the results for all evaluated hyperparameter combinations
accuracy_means = results.cv_results_['mean_test_score']
accuracy_stds = results.cv_results_['std_test_score']
parameters = results.cv_results_['params']
for p in range(len(parameters)):
print("Accuracy %f (std %f) for params %r" % (accuracy_means[p], accuracy_stds[p], parameters[p]))
# 4
# define a seed for random number generator so the result will be reproducible
numpy.random.seed(seed)
# create the Keras wrapper with scikit learn
model = KerasClassifier(build_fn=build_model, verbose=0)
# define all the possible values for each hyperparameter
lambda_parameter = [0.001, 0.01, 0.05, 0.1]
epochs = [400]
batch_size = [10]
# create the dictionary containing all possible values of hyperparameters
param_grid = dict(lambda_parameter=lambda_parameter, epochs=epochs, batch_size=batch_size)
# search the grid, perform 5-fold cross validation for each possible combination, store the results
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=5)
results = grid_search.fit(X, y)
# 4
# print the results for best cross validation score
print("Best cross validation score =", results.best_score_)
print("Parameters for Best cross validation score =", results.best_params_)
# print the results for the entire grid
accuracy_means = results.cv_results_['mean_test_score']
accuracy_stds = results.cv_results_['std_test_score']
parameters = results.cv_results_['params']
for p in range(len(parameters)):
print("Accuracy %f (std %f) for params %r" % (accuracy_means[p], accuracy_stds[p], parameters[p]))
# 5
# Create the function that returns the keras model
from keras.layers import Dropout
def build_model(rate):
model = Sequential()
model.add(Dense(8, activation='relu'))
model.add(Dropout(rate))
model.add(Dense(8, activation='relu'))
model.add(Dropout(rate))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
return model
# 5
# define a seed for random number generator so the result will be reproducible
numpy.random.seed(seed)
# create the Keras wrapper with scikit learn
model = KerasClassifier(build_fn=build_model, verbose=0)
# define all the possible values for each hyperparameter
rate = [0, 0.2, 0.4]
epochs = [350, 400]
batch_size = [10]
# create the dictionary containing all possible values of hyperparameters
param_grid = dict(rate=rate, epochs=epochs, batch_size=batch_size)
# search the grid, perform 5-fold cross validation for each possible combination, store the results
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=5)
results = grid_search.fit(X, y)
# 5
# print the results for best cross validation score
print("Best cross validation score=", results.best_score_)
print("Parameters for Best cross validation score=", results.best_params_)
# print the results for the entire grid
accuracy_means = results.cv_results_['mean_test_score']
accuracy_stds = results.cv_results_['std_test_score']
parameters = results.cv_results_['params']
for p in range(len(parameters)):
print("Accuracy %f (std %f) for params %r" % (accuracy_means[p], accuracy_stds[p], parameters[p]))
# 6
# define a seed for random number generator so the result will be reproducible
numpy.random.seed(seed)
# create the Keras wrapper with scikit learn
model = KerasClassifier(build_fn=build_model, verbose=0)
# define all the possible values for each hyperparameter
rate = [0.0, 0.05, 0.1]
epochs = [400]
batch_size = [10]
# create the dictionary containing all possible values of hyperparameters
param_grid = dict(rate=rate, epochs=epochs, batch_size=batch_size)
# search the grid, perform 5-fold cross validation for each possible combination, store the results
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=5)
results = grid_search.fit(X, y)
# 6
# print the results for best cross validation score
print("Best cross validation score=", results.best_score_)
print("Parameters for Best cross validation score=", results.best_params_)
# print the results for the entire grid
accuracy_means = results.cv_results_['mean_test_score']
accuracy_stds = results.cv_results_['std_test_score']
parameters = results.cv_results_['params']
for p in range(len(parameters)):
print("Accuracy %f (std %f) for params %r" % (accuracy_means[p], accuracy_stds[p], parameters[p]))
```
# Which is the fastest axis of an array?
I'd like to know: which axes of a NumPy array are fastest to access?
```
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
```
## A tiny example
```
a = np.arange(9).reshape(3, 3)
a
' '.join(str(i) for i in a.ravel(order='C'))
' '.join(str(i) for i in a.ravel(order='F'))
```
## A seismic volume
```
volume = np.load('data/F3_volume_3x3_16bit.npy')
volume.shape
```
Let's look at how the indices vary:
```
idx = np.indices(volume.shape)
idx.shape
```
We can't easily look at the indices for 190 × 190 × 190 samples (6 859 000 samples). So let's look at a small subset: 5 × 5 × 5 = 125 samples. We can make a plot of how the indices vary in each direction. For C-ordering, the indices on axis 0 vary slowly: they start at 0 and stay at 0 for 25 samples; then they increment by one. So if we ask for all the data for which axis 0 has index 2 (say), the computer just has to retrieve a contiguous chunk of memory and it gets all the samples.
On the other hand, if we ask for all the samples for which axis 2 has index 2, we have to retrieve non-contiguous samples from memory, effectively opening a lot of memory 'drawers' and taking one pair of socks out of each one.
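The same idea can be verified directly on a small array. This is an illustrative check, not part of the original timings: slicing the first axis of a C-ordered array yields one contiguous chunk of memory, while slicing the last axis gathers strided elements.

```python
import numpy as np

a = np.zeros((5, 5, 5))  # C order by default
# One contiguous block: all of axis 0's index-2 "plane" sits together in memory.
print(a[2, :, :].flags['C_CONTIGUOUS'])  # True
# Strided gather: elements are 5 floats apart, so the view is not contiguous.
print(a[:, :, 2].flags['C_CONTIGUOUS'])  # False
```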
```
from matplotlib.font_manager import FontProperties
annot = ['data[2, :, :]', 'data[:, 2, :]', 'data[:, :, 2]']
mono = FontProperties()
mono.set_family('monospace')
fig, axs = plt.subplots(ncols=3, figsize=(15,3), facecolor='w')
for i, ax in enumerate(axs):
data = idx[i, :5, :5, :5].ravel(order='C')
ax.plot(data, c=f'C{i}')
ax.scatter(np.where(data==2), data[data==2], color='r', s=10, zorder=10)
ax.text(65, 4.3, f'axis {i}', color=f'C{i}', size=15, ha='center')
ax.text(65, -0.7, annot[i], color='red', size=12, ha='center', fontproperties=mono)
ax.set_ylim(-1, 5)
_ = plt.suptitle("C order", size=18)
plt.savefig('/home/matt/Pictures/3d-array-corder.png')
fig, axs = plt.subplots(ncols=3, figsize=(15,3), facecolor='w')
for i, ax in enumerate(axs):
data = idx[i, :5, :5, :5].ravel(order='F')
ax.plot(data, c=f'C{i}')
ax.scatter(np.where(data==2), data[data==2], color='r', s=10, zorder=10)
ax.text(65, 4.3, f'axis {i}', color=f'C{i}', size=15, ha='center')
ax.text(65, -0.7, annot[i], color='red', size=12, ha='center', fontproperties=mono)
ax.set_ylim(-1, 5)
_ = plt.suptitle("Fortran order", size=18)
plt.savefig('/home/matt/Pictures/3d-array-forder.png')
```
At the risk of making it more confusing, it might help to look at the plots together. Shown here is the C ordering:
```
plt.figure(figsize=(15,3))
plt.plot(idx[0, :5, :5, :5].ravel(), zorder=10)
plt.plot(idx[1, :5, :5, :5].ravel(), zorder=9)
plt.plot(idx[2, :5, :5, :5].ravel(), zorder=8)
```
This organization is reflected in `ndarray.strides`, which tells us how many bytes must be traversed to get to the next index along each axis. Each 2-byte step through memory gets me to the next index in axis 2, but I must stride 72200 bytes to get to the next index of axis 0:
```
volume.strides
```
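The stride values follow directly from the shape and item size. Here's a hedged reconstruction using a synthetic int16 array of the same shape (190, 190, 190), rather than the seismic volume itself:

```python
import numpy as np

a = np.zeros((190, 190, 190), dtype=np.int16)
itemsize = a.itemsize                # 2 bytes per int16
expected = (190 * 190 * itemsize,    # step over a full 190 x 190 plane
            190 * itemsize,          # step over one row of 190 samples
            itemsize)                # step to the next element
print(a.strides)                     # (72200, 380, 2)
```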
## Aside: figure for blog post
```
fig, axs = plt.subplots(ncols=2, figsize=(10,3), facecolor='w')
for i, ax in enumerate(axs):
data = idx[i, :3, :3, 0].ravel(order='C')
ax.plot(data, 'o-', c='gray')
ax.text(0, 1.8, f'axis {i}', color='gray', size=15, ha='left')
plt.savefig('/home/matt/Pictures/2d-array-corder.png')
```
## Accessing the seismic data
Let's make all the dimensions the same, to avoid having to slice later. I'll make a copy, otherwise we'll have a view of the original array.
Alternatively, change the shape here to see the effect of small dimensions, e.g. try `volume = volume[:10, :290, :290]` with C ordering.
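The view-vs-copy distinction can be checked with `np.shares_memory`. A small illustration (on a throwaway array, not the seismic volume): basic slicing returns a view that shares memory with the original, and `.copy()` breaks that link.

```python
import numpy as np

a = np.arange(27).reshape(3, 3, 3)
view = a[:2, :2, :2]          # basic slicing -> a view
copy = a[:2, :2, :2].copy()   # explicit copy -> independent memory
print(np.shares_memory(a, view))  # True
print(np.shares_memory(a, copy))  # False
```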
```
volume = volume[:190, :190, :190].copy()
def get_slice_3d(volume, x, axis, n=None):
"""
Naive function... but only works on 3 dimensions.
NB Using ellipses slows down last axis.
"""
# Force cube shape
if n is None and not np.sum(np.diff(volume.shape)):
n = np.min(volume.shape)
if axis == 0:
data = volume[x, :n, :n]
if axis == 1:
data = volume[:n, x, :n]
if axis == 2:
data = volume[:n, :n, x]
return data + 1
%timeit get_slice_3d(volume, 150, axis=0)
%timeit get_slice_3d(volume, 150, axis=1)
%timeit get_slice_3d(volume, 150, axis=2)
```
Let's check that changing the memory layout to Fortran ordering makes the last dimension fastest:
```
volumef = np.asfortranarray(volume)
%timeit get_slice_3d(volumef, 150, axis=0)
%timeit get_slice_3d(volumef, 150, axis=1)
%timeit get_slice_3d(volumef, 150, axis=2)
```
Axes 0 and 1 are > 10 times faster than axis 2.
What about if we do something like take a Fourier transform over the first axis?
```
from scipy.signal import welch
%timeit s = [welch(tr, fs=500) for tr in volume[:, 10]]
%timeit s = [welch(tr, fs=500) for tr in volumef[:, 10]]
```
No practical difference. Hm.
I'm guessing this is because the DFT takes way longer than the data access.
```
del(volume)
del(volumef)
```
## Fake data in _n_ dimensions
Let's make a function to generate random data in any number of dimensions.
Be careful: these volumes get big really quickly!
```
def makend(n, s, equal=True, rev=False, fortran=False):
"""
Make an n-dimensional hypercube of randoms.
"""
if equal:
incr = np.zeros(n, dtype=int)
elif rev:
incr = list(reversed(np.arange(n)))
else:
incr = np.arange(n)
shape = incr + np.ones(n, dtype=int)*s
a = np.random.random(shape)
m = f"Shape: {tuple(shape)} "
m += f"Memory: {a.nbytes/1e6:.0f}MB "
m += f"Order: {'F' if fortran else 'C'}"
    print(m)
if fortran:
return np.asfortranarray(a)
else:
return a
```
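As a quick sanity check on sizes (an aside, not part of the benchmarks): an n-dimensional hypercube of side s holds s**n float64 values, i.e. s**n * 8 bytes.

```python
def hypercube_mb(n, s, itemsize=8):
    """Approximate memory footprint in MB of an (s, s, ..., s) float64 array."""
    return s**n * itemsize / 1e6

print(hypercube_mb(2, 6000))  # 288.0 MB -- the 2D case used below
print(hypercube_mb(3, 600))   # 1728.0 MB
print(hypercube_mb(5, 40))    # 819.2 MB
```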
I tried implementing this as a context manager, so you wouldn't have to delete the volume each time after using it. I tried the `@contextmanager` decorator, and I tried making a class with `__enter__()` and `__exit__()` methods. Each time, I tried putting the `del` command as part of the exit routine. They both worked fine... except they did not delete the volume from memory.
## 2D data
```
def get_slice_2d(volume, x, axis, n=None):
"""
Naive function... but only works on 2 dimensions.
"""
if n is None and not np.sum(np.diff(volume.shape)):
n = np.min(volume.shape)
if axis == 0:
data = volume[x, :n]
if axis == 1:
data = volume[:n, x]
return data + 1
dim = 2
v = makend(dim, 6000, fortran=False)
for n in range(dim):
%timeit get_slice_2d(v, 3001, n)
del v
dim = 2
v = makend(dim, 6000, fortran=True)
for n in range(dim):
%timeit get_slice_2d(v, 3001, n)
del v
```
In these runs, the fast axis has been between 3.3 and 12 times faster than the slow one.
## 1D convolution on an array
```
def convolve(data, kernel=np.arange(10), axis=0):
func = lambda tr: np.convolve(tr, kernel, mode='same')
return np.apply_along_axis(func, axis=axis, arr=data)
dim = 2
v = makend(dim, 6000, fortran=False)
%timeit convolve(v, axis=0)
%timeit convolve(v, axis=1)
del v
dim = 2
v = makend(dim, 6000, fortran=True)
%timeit convolve(v, axis=0)
%timeit convolve(v, axis=1)
del v
```
Speed roughly doubles on the fast axis, i.e. the second axis with the default C order.
## `np.mean()` across axes
Let's try taking averages across different axes. In C order it should be faster to get the `mean` on `axis=1` because that involves getting the rows:
```
a = [[ 2, 4],
[10, 20]]
np.mean(a, axis=0), np.mean(a, axis=1)
```
Let's see how this looks on our data:
```
dim = 2
v = makend(dim, 6000, fortran=False)
%timeit np.mean(v, axis=0)
%timeit np.mean(v, axis=1)
del v
dim = 2
v = makend(dim, 6000, fortran=True)
%timeit np.mean(v, axis=0)
%timeit np.mean(v, axis=1)
del v
```
We'd expect the difference to be even more dramatic with `median` because it has to sort every row or column:
```
v = makend(dim, 6000, fortran=False)
%timeit np.median(v, axis=0)
%timeit np.median(v, axis=1)
del v
v = makend(dim, 6000, fortran=False)
%timeit v.mean(axis=0)
%timeit v.mean(axis=1)
del v
```
## 3D arrays
In a nutshell:
* C order: first axis is fastest, last axis is slowest; factor of two between the others.
* Fortran order: last axis is fastest, first axis is slowest; factor of two between the others.
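The same comparison can be written without `%timeit`, using `time.perf_counter` directly. This is a sketch only: absolute numbers depend on the machine, so it just reports both orderings side by side (the `time_slice` helper is made up here, not from the notebook above).

```python
import time
import numpy as np

def time_slice(a, axis, x=100, reps=50):
    """Mean seconds to take one hyperplane at index x along `axis` (plus 1)."""
    idx = [slice(None)] * a.ndim
    idx[axis] = x
    t0 = time.perf_counter()
    for _ in range(reps):
        _ = a[tuple(idx)] + 1   # the +1 forces the data to actually be read
    return (time.perf_counter() - t0) / reps

c_vol = np.random.random((200, 200, 200))   # C order (NumPy default)
f_vol = np.asfortranarray(c_vol)            # same data, Fortran order
for axis in range(3):
    print(axis, time_slice(c_vol, axis), time_slice(f_vol, axis))
```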
```
dim = 3
v = makend(dim, 600)
for n in range(dim):
%timeit get_slice_3d(v, 201, n)
del v
```
Non-equal axis lengths don't change the picture.
```
dim = 3
v = makend(dim, 600, equal=False, rev=True)
for n in range(dim):
%timeit get_slice_3d(v, 201, n)
del v
```
Fortran order results in a fast last axis, as expected. But the middle axis is pretty fast too.
```
dim = 3
v = makend(dim, 600, fortran=True)
for n in range(dim):
%timeit get_slice_3d(v, 201, n)
del v
```
For C ordering, the last dimension is more than 20x slower than the other two.
## 4 dimensions
Axes 0 and 1 are fast (for C ordering), axis 2 is half speed, axis 3 is ca. 15 times slower than fast axis.
```
def get_slice_4d(volume, x, axis, n=None):
"""
Naive function... but only works on 4 dimensions.
"""
if n is None and not np.sum(np.diff(volume.shape)):
n = np.min(volume.shape)
if axis == 0:
data = volume[x, :n, :n, :n]
if axis == 1:
data = volume[:n, x, :n, :n]
if axis == 2:
data = volume[:n, :n, x, :n]
if axis == 3:
data = volume[:n, :n, :n, x]
return data + 1
dim = 4
v = makend(dim, 100, equal=True)
for n in range(dim):
%timeit get_slice_4d(v, 51, n)
del v
dim = 4
v = makend(dim, 100, equal=True, fortran=True)
for n in range(dim):
%timeit get_slice_4d(v, 51, n)
del v
```
## 5 dimensions
We are taking 4-dimensional hyperplanes from a 5-dimensional hypercube.
Axes 0 and 1 are fast, axis 2 is half speed, axis 3 is quarter speed, and the last axis is about 5x slower than that.
```
def get_slice_5d(volume, x, axis, n=None):
"""
Naive function... but only works on 5 dimensions.
"""
if n is None and not np.sum(np.diff(volume.shape)):
n = np.min(volume.shape)
if axis == 0:
data = volume[x, :n, :n, :n, :n]
if axis == 1:
data = volume[:n, x, :n, :n, :n]
if axis == 2:
data = volume[:n, :n, x, :n, :n]
if axis == 3:
data = volume[:n, :n, :n, x, :n]
if axis == 4:
data = volume[:n, :n, :n, :n, x]
return data + 1
dim = 5
v = makend(dim, 40)
for n in range(dim):
%timeit get_slice_5d(v, 21, n)
del v
dim = 5
v = makend(dim, 40, fortran=True)
for n in range(dim):
%timeit get_slice_5d(v, 21, n)
del v
```
What about when we're doing something like getting the mean on an array?
```
dim = 5
v = makend(dim, 40, fortran=True)
for n in range(dim):
%timeit np.mean(v, axis=n)
del v
```
## 6 dimensions and beyond
In general, first _n_/2 dimensions are fast, then gets slower until last dimension is several (5-ish) times slower than the first.
```
def get_slice_6d(volume, x, axis, n=None):
"""
Naive function... but only works on 6 dimensions.
"""
if n is None and not np.sum(np.diff(volume.shape)):
n = np.min(volume.shape)
if axis == 0:
data = volume[x, :n, :n, :n, :n, :n]
if axis == 1:
data = volume[:n, x, :n, :n, :n, :n]
if axis == 2:
data = volume[:n, :n, x, :n, :n, :n]
if axis == 3:
data = volume[:n, :n, :n, x, :n, :n]
if axis == 4:
data = volume[:n, :n, :n, :n, x, :n]
if axis == 5:
data = volume[:n, :n, :n, :n, :n, x]
return data + 1
dim = 6
v = makend(dim, 23)
for n in range(dim):
%timeit get_slice_6d(v, 12, n)
del v
```
## Summary
In this notebook we load a network trained to solve Sudoku puzzles and use this network to solve a single Sudoku.
----
## Imports
```
import functools
import io
import os
import sys
import tempfile
import time
from collections import deque
from pathlib import Path
import ipywidgets as widgets
import numpy as np
import pandas as pd
import tqdm
from IPython.display import HTML, display
from ipywidgets import fixed, interact, interact_manual, interactive
import matplotlib as mpl
import matplotlib.pyplot as plt
import pyarrow
import torch
import torch.nn as nn
from matplotlib import cm
from torch_geometric.data import DataLoader
import proteinsolver
import proteinsolver.datasets
from proteinsolver.utils import gen_sudoku_graph_featured
%matplotlib agg
try:
inline_rc
except NameError:
inline_rc = mpl.rcParams.copy()
mpl.rcParams.update({"font.size": 12})
```
## Parameters
```
UNIQUE_ID = "c8de7e56"
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
device
```
## Load model
```
%run sudoku_train/{UNIQUE_ID}/model.py
state_files = sorted(
Path("sudoku_train").joinpath(UNIQUE_ID).glob("*.state"),
key=lambda s: (int(s.stem.split("-")[3].strip("amv")), int(s.stem.split("-")[2].strip("d"))),
)
state_file = state_files[-1]
net = Net(
x_input_size=13, adj_input_size=3, hidden_size=162, output_size=9, batch_size=8
).to(device)
net.load_state_dict(torch.load(state_file, map_location=device))
net = net.eval()
net = net.to(device)
```
## Define widgets
### Sudoku grid
```
sudoku_widget_lookup = [[None for _ in range(9)] for _ in range(9)]
row_widgets = []
for row in range(3):
col_widgets = []
for col in range(3):
subrow_widgets = []
for subrow in range(3):
subcol_widgets = []
for subcol in range(3):
i = row * 3 + subrow
j = col * 3 + subcol
subcol_widget = (
widgets.BoundedIntText(
value=0,
min=0,
max=9,
step=1,
description='',
disabled=False,
allow_none=True,
layout={"width": "42px"}
)
)
subcol_widgets.append(subcol_widget)
sudoku_widget_lookup[i][j] = subcol_widget
subrow_widget = widgets.HBox(subcol_widgets)
subrow_widgets.append(subrow_widget)
col_widget = widgets.VBox(subrow_widgets, layout={"padding": "5px"})
col_widgets.append(col_widget)
row_widget = widgets.HBox(col_widgets)
row_widgets.append(row_widget)
sudoku_widget = widgets.VBox(row_widgets)
```
### Puzzle selector
```
puzzle_0 = torch.zeros(9, 9, dtype=torch.int64)
puzzle_1 = torch.tensor(
[
[0, 8, 0, 0, 3, 2, 0, 0, 1],
[7, 0, 3, 0, 8, 0, 0, 0, 2],
[5, 0, 0, 0, 0, 7, 0, 3, 0],
[0, 5, 0, 0, 0, 1, 9, 7, 0],
[6, 0, 0, 7, 0, 9, 0, 0, 8],
[0, 4, 7, 2, 0, 0, 0, 5, 0],
[0, 2, 0, 6, 0, 0, 0, 0, 9],
[8, 0, 0, 0, 9, 0, 3, 0, 5],
[3, 0, 0, 8, 2, 0, 0, 1, 0],
]
)
buf = io.StringIO()
buf.write("""\
6 3 4 8 9 1
6 4 8
6 7 9
6 8 9 1 2 4 5
9 2 7 1 3
1 4 5 9
2 9 6 3
9 5 6
""")
buf.seek(0)
df = pd.read_csv(buf, sep="\t", names=list(range(9))).fillna(0).astype(int)
puzzle_2 = torch.from_numpy(df.values)
buf = io.StringIO()
buf.write("""\
,,,,7,,,2,
,,,,4,,7,,
,,9,,3,6,1,4,
1,,3,4,5,,8,9,
4,,7,6,,,2,,
,,8,,1,,,,
3,,2,5,6,,,,8
8,,,,,3,,6,4
9,6,,,,4,,1,
""")
buf.seek(0)
df = pd.read_csv(buf, names=list(range(9))).fillna(0).astype(int)
puzzle_3 = torch.from_numpy(df.values)
def empty_out_puzzle(b, puzzle_matrix):
for i in range(9):
for j in range(9):
sudoku_widget_lookup[i][j].value = puzzle_matrix[i][j]
empty_puzzle_button = widgets.Button(
description="Empty",
disabled=False,
button_style="", # 'success', 'info', 'warning', 'danger' or ''
tooltip="Click me to set Sudoku grid to empty.",
# icon='puzzle-piece'
)
empty_puzzle_button.on_click(functools.partial(empty_out_puzzle, puzzle_matrix=puzzle_0))
test_puzzle_1_button = widgets.Button(
description="Puzzle 1",
disabled=False,
button_style="", # 'success', 'info', 'warning', 'danger' or ''
tooltip="Click me to set Sudoku grid to puzzle 1",
icon="puzzle-piece",
)
test_puzzle_1_button.on_click(functools.partial(empty_out_puzzle, puzzle_matrix=puzzle_1))
test_puzzle_2_button = widgets.Button(
description="Puzzle 2",
disabled=False,
button_style="", # 'success', 'info', 'warning', 'danger' or ''
tooltip="Click me to set Sudoku grid to puzzle 2",
icon="puzzle-piece",
)
test_puzzle_2_button.on_click(functools.partial(empty_out_puzzle, puzzle_matrix=puzzle_2))
test_puzzle_3_button = widgets.Button(
description="Puzzle 3",
disabled=False,
button_style="", # 'success', 'info', 'warning', 'danger' or ''
tooltip="Click me to set Sudoku grid to puzzle 3",
icon="puzzle-piece",
# layout={"margin": "10px"}
)
test_puzzle_3_button.on_click(functools.partial(empty_out_puzzle, puzzle_matrix=puzzle_3))
puzzle_selector_widget = widgets.HBox(
[empty_puzzle_button, test_puzzle_1_button, test_puzzle_2_button, test_puzzle_3_button]
)
empty_out_puzzle(None, puzzle_1)
```
### Puzzle solver
```
def encode_puzzle(puzzle):
puzzle = puzzle - 1
puzzle = torch.where(puzzle >= 0, puzzle, torch.tensor(9))
return puzzle
def decode_puzzle(puzzle):
puzzle = (puzzle + 1) % 10
return puzzle
puzzle = torch.tensor([1, 1, 1])
assert torch.equal(decode_puzzle(encode_puzzle(puzzle)), puzzle)
def solve_sudoku(net, puzzle):
sudoku_graph = torch.from_numpy(gen_sudoku_graph_featured()).to_sparse(2)
edge_index = sudoku_graph.indices()
edge_attr = sudoku_graph.values()
output = net(
encode_puzzle(puzzle).view(-1).to(device), edge_index.clone().to(device), edge_attr.clone().to(device)
).to("cpu")
output = torch.softmax(output, dim=1)
_, predicted = output.max(dim=1)
return decode_puzzle(predicted).reshape(9, 9)
def show_sudoku(puzzle, solved=None, pred=None, title="", color="black", ax=None):
# Simple plotting statement that ingests a 9x9 array (n), and plots a sudoku-style grid around it.
if ax is None:
fg, ax = plt.subplots(figsize=(4.8, 4.8))
for y in range(10):
ax.plot([-0.05, 9.05], [y, y], color="black", linewidth=1)
for y in range(0, 10, 3):
ax.plot([-0.05, 9.05], [y, y], color="black", linewidth=3)
for x in range(10):
ax.plot([x, x], [-0.05, 9.05], color="black", linewidth=1)
for x in range(0, 10, 3):
ax.plot([x, x], [-0.05, 9.05], color="black", linewidth=3)
ax.axis("image")
ax.axis("off") # drop the axes, they're not important here
# if title is not None:
ax.set_title(title, fontsize=20)
for x in range(9):
for y in range(9):
puzzle_element = puzzle[8 - y][x] # need to reverse the y-direction for plotting
if puzzle_element > 0: # ignore the zeros
T = f"{puzzle_element}"
ax.text(x + 0.25, y + 0.22, T, fontsize=20, color=color)
elif solved is not None and pred is not None:
solved_element = solved[8 - y][x]
pred_element = pred[8 - y][x]
if solved_element == pred_element:
T = f"{solved_element}"
ax.text(x + 0.25, y + 0.22, T, fontsize=20, color="C0")
else:
ax.text(x + 0.1, y + 0.3, f"{pred_element}", fontsize=13, color="C3")
ax.text(x + 0.55, y + 0.3, f"{solved_element}", fontsize=13, color="C2")
return ax
def plot_no_conflicts(title="", ax=None):
if ax is None:
fg, ax = plt.subplots(figsize=(4.8, 4.8))
ax.axis("image")
ax.axis("off") # drop the axes, they're not important here
ax.text(
0.5,
0.5,
"No conflicts!",
fontsize=20,
fontdict={"horizontalalignment": "center", "color": "C2"},
transform=ax.transAxes,
)
ax.set_title(title, fontsize=20)
return ax
plot_no_conflicts(title="Conflict")
show_sudoku(puzzle_1, title="Input")
show_sudoku(puzzle_1, puzzle_1, puzzle_1, title="Solution")
def find_conflict(puzzle):
for row_idx in range(9):
for value in range(1, 10):
mask = puzzle[row_idx, :] == value
if mask.sum() > 1:
ref = puzzle[row_idx, mask]
puzzle = torch.zeros_like(puzzle)
puzzle[row_idx, mask] = ref
return puzzle
for col_idx in range(9):
for value in range(1, 10):
mask = puzzle[:, col_idx] == value
if mask.sum() > 1:
ref = puzzle[mask, col_idx]
puzzle = torch.zeros_like(puzzle)
puzzle[mask, col_idx] = ref
return puzzle
for row_start_idx in range(0, 9, 3):
for col_start_idx in range(0, 9, 3):
for value in range(1, 10):
mask = puzzle[row_start_idx : row_start_idx + 3, col_start_idx : col_start_idx + 3] == value
if mask.sum() > 1:
ref = puzzle[row_start_idx : row_start_idx + 3, col_start_idx : col_start_idx + 3][mask]
puzzle = torch.zeros_like(puzzle)
puzzle[row_start_idx : row_start_idx + 3, col_start_idx : col_start_idx + 3][mask] = ref
return puzzle
return None
puzzle = puzzle_1.clone()
puzzle[0, 0] = 7
find_conflict(puzzle)
puzzle = puzzle_1.clone()
puzzle[0, 2] = 8
find_conflict(puzzle)
puzzle = puzzle_1.clone()
puzzle[6, 2] = 8
find_conflict(puzzle)
def plot_solution(puzzle, solution):
fg, axs = plt.subplots(1, 2, figsize=(9.8, 5))
show_sudoku(puzzle, solution, solution, title="Solution", ax=axs[0])
puzzle_conflict = find_conflict(solution)
if puzzle_conflict is not None:
show_sudoku(puzzle_conflict, title="Conflict", color="C3", ax=axs[1])
else:
plot_no_conflicts(title="Conflicts", ax=axs[1])
return fg
_ = plot_solution(puzzle_0, puzzle_0)
solution_output_widget = widgets.Output(layout={'border': '1px solid black', "width": "600px"})
def solve_sudoku_from_widget(b):
puzzle = torch.zeros(9, 9, dtype=torch.int64)
for i in range(9):
for j in range(9):
puzzle[i][j] = sudoku_widget_lookup[i][j].value
solution = solve_sudoku(net, puzzle)
with solution_output_widget:
solution_output_widget.clear_output()
fg = plot_solution(puzzle, solution)
display(fg)
solve_button_widget = widgets.Button(
description='Solve!',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Click to solve the Sudoku puzzle',
icon='check'
)
solve_button_widget.on_click(solve_sudoku_from_widget)
solution = torch.tensor(
[
[4, 8, 9, 5, 3, 2, 7, 6, 1],
[7, 1, 3, 4, 8, 6, 5, 9, 2],
[5, 6, 2, 9, 1, 7, 8, 3, 4],
[2, 5, 8, 3, 4, 1, 9, 7, 6],
[6, 3, 1, 7, 5, 9, 2, 4, 8],
[9, 4, 7, 2, 6, 8, 1, 5, 3],
[1, 2, 5, 6, 7, 3, 4, 8, 9],
[8, 7, 6, 1, 9, 4, 3, 2, 5],
[3, 9, 4, 8, 2, 5, 6, 1, 7],
]
)
assert proteinsolver.utils.sudoku.sudoku_is_solved(solution.to("cpu"))
```
## Dashboard
## Solve a custom Sudoku puzzle
```
display(puzzle_selector_widget)
display(sudoku_widget)
display(solve_button_widget)
display(solution_output_widget)
display(HTML("""\
<hr>
<p>Running into issues? Please send an email to <a href="mailto:help@proteinsolver.org">help@proteinsolver.org</a>.
<br>
<em>This website works best using the latest versions of Firefox or Chrome web browsers.</em>
</p>
"""))
```
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID";
os.environ["CUDA_VISIBLE_DEVICES"]="0"
import ktrain
from ktrain import graph as gr
```
# Node Classification in Graphs
In this notebook, we will use *ktrain* to perform node classification on the Cora citation graph. Each node represents a paper pertaining to one of several paper topics. Links represent citations between papers. The attributes or features assigned to each node are in the form of a multi-hot-encoded vector of words appearing in the paper. The dataset is available [here](https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz).
The dataset is already in the form expected by *ktrain*, so let's begin.
### STEP 1: Load and Preprocess Data
We will hold out 10% of the nodes as a test set. Since we set `holdout_for_inductive=False`, the held-out nodes will remain in the graph, but only their features (not labels) will be visible to our model. This is referred to as transductive inference. Of the remaining nodes, 10% will be used for training and the rest will be used for validation (also transductive inference). As with the holdout nodes, the features (but not labels) of validation nodes will be available to the model during training. The return value `df_holdout` contains the features for the held-out nodes, and `G_complete` is the original graph including the holdout nodes.
```
(train_data, val_data, preproc, df_holdout, G_complete) = gr.graph_nodes_from_csv(
'data/cora/cora.content', # node attributes/labels
'data/cora/cora.cites', # edge list
sample_size=20,
holdout_pct=0.1, holdout_for_inductive=False,
train_pct=0.1, sep='\t')
```
The `preproc` object includes a reference to the training graph and a dataframe showing the features and target for each node in the graph (both training and validation nodes).
```
preproc.df.target.value_counts()
```
### STEP 2: Build a Model and Wrap in Learner Object
```
gr.print_node_classifiers()
learner = ktrain.get_learner(model=gr.graph_node_classifier('graphsage', train_data, ),
train_data=train_data,
val_data=val_data,
batch_size=64)
```
### STEP 3: Estimate LR
Given the small number of batches per epoch, a larger number of epochs is required to estimate the learning rate. We will cap it at 100 here.
```
learner.lr_find(max_epochs=100)
learner.lr_plot()
```
### STEP 4: Train the Model
We will train the model using `autofit`, which uses a triangular learning rate policy. The training will automatically stop when the validation loss no longer improves. We save the weights of the model during training in case we would like to reload the weights from any epoch.
```
learner.autofit(0.01, checkpoint_folder='/tmp/saved_weights')
```
## Evaluate
#### Validate
```
learner.validate(class_names=preproc.get_classes())
```
#### Create a Predictor Object
```
p = ktrain.get_predictor(learner.model, preproc)
```
#### Transductive Inference: Making Predictions for Validation and Test Nodes in Original Training Graph
In transductive inference, we make predictions for unlabeled nodes whose features are visible during training. Making predictions on validation nodes in the training graph is transductive inference.
Let's see how well our prediction is for the first validation example.
```
p.predict_transductive(val_data.ids[0:1], return_proba=True)
val_data[0][1][0]
```
Let's make predictions for all **test** nodes in the holdout set, measure test accuracy, and visually compare some of them with ground truth.
```
y_pred = p.predict_transductive(df_holdout.index, return_proba=False)
y_true = df_holdout.target.values
import pandas as pd
pd.DataFrame(zip(y_true, y_pred), columns=['Ground Truth', 'Predicted']).head()
import numpy as np
(y_true == np.array(y_pred)).mean()
```
Our final test accuracy for transductive inference on the holdout nodes is **82.32%**.
# Gaussian Process Example 1 #
The GP model is widely considered the reference when doing ensemble modeling. This notebook serves to test the behaviour of GPs within the context of scikit-learn. In theory, a single GP should be "an ensemble" in its own right, so comparisons should be made to single GP instances. Basic information on how to use GPs in sklearn can be found here: https://scikit-learn.org/stable/auto_examples/gaussian_process/plot_gpr_noisy_targets.html
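For reference, a single GP instance with the sklearn API linked above can be fit as follows (a minimal sketch on synthetic 1D data, separate from the ensemble experiment below):

```python
# Minimal single-GP baseline (assumes scikit-learn is installed).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0, 8, size=(30, 1))              # 1D inputs
y = np.sinc(X).ravel() + 0.1 * (X ** 2).ravel()  # same target shape as used below

gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=1.0), normalize_y=True)
gp.fit(X, y)
y_pred, y_std = gp.predict(X, return_std=True)   # posterior mean and pointwise std
print(float(np.abs(y_pred - y).mean()))          # near-zero error at training points
```

A single GP already provides a predictive standard deviation, which is the natural point of comparison for the ensemble-based confidence intervals computed later in this notebook.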
```
#Some imports
import matplotlib.pyplot as plt
import numpy as np
import random
import math
import scipy.stats as st
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import LeaveOneOut, KFold
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic, ExpSineSquared, ConstantKernel as C
#Generating the synthetic data
Nmodels=100
Ndims=5
Ndata=50
Nrem=int(Ndata*0.2)
r=np.random.uniform(low=-4,high=4,size=(Ndims,Ndata)).T # N-D Euclidean position-vector
x=np.array([np.sqrt(np.sum(r*r,axis=1))]).T # get the Euclidean "radial distance" as positive scalar
y=np.sinc(x).ravel() #create the associated targets, needs to be a 1D array
#y=np.sin(x).ravel() #create the associated targets, needs to be a 1D array
y=y+(x*x*0.1).ravel()
x2=x*x
print("mean x²=",np.mean(x2))
print("R=",r.shape,"\nX=",x.shape)
#create Nmodels (identical) rbf-models to train on different datasets
models=list()
datasets_r=list()
datasets_y=list()
for i in range(Nmodels):
Kernel=None
#Kernel=1.0 * RBF(length_scale=1, length_scale_bounds=(1e-1, 10.0))
#Kernel= 1.0 * Matern(length_scale=1.0, length_scale_bounds=(1e-1, 10.0),nu=1.5)
#Kernel=1.0 * RationalQuadratic(length_scale=1.0, alpha=0.1)
#Kernel= 1.0 * ExpSineSquared(length_scale=1.0, periodicity=3.0,length_scale_bounds=(0.1, 10.0),periodicity_bounds=(1.0, 10.0))
clfrbf=GaussianProcessRegressor(
kernel=Kernel,
# alpha=3.0e0,
)
#index=list(range(i,i+Nrem))
index=random.sample(range(Ndata),Nrem)
seti_r=np.array(np.delete(r,index,axis=0)) #remove selected vector rows
#print(seti_r.shape,seti_r)
#print(seti_r.shape," ",seti_r[0:5,:].ravel())
seti_y=np.delete(y,index)
clfrbf.fit(seti_r,seti_y) # train our model instance, aka solve the set of linear equations
datasets_r.append(seti_r)
datasets_y.append(seti_y)
models.append(clfrbf)
print(i,",",end=" ")
def mean_confidence_interval(data, confidence=0.95):
a = 1.0 * np.array(data)
n = len(a)
m, se = np.mean(a), st.sem(a)
h = se * st.t.ppf((1 + confidence) / 2., n-1)
cf=(1.0-confidence)*0.5
qm = np.quantile(a,cf,interpolation='linear')
qp = np.quantile(a,1.0-cf,interpolation='linear')
return m, m-h, m+h, qm, qp
#generate a dense mesh
xmin=0
xmax=8.5
Npts=10000 # number of points, randomly selected in the Ndims dimensional space. (To prevent things from accidentally exploding)
if Ndims==1:
#generate Ndim grid
rPred=np.linspace((xmin,),(xmax,),Npts)
elif Ndims==2:
#2D grid with uniform random distribution in a circle
rPred=np.zeros((2,Npts)).T
for i in range(1,Npts):
L=2*xmax
while L>xmax: #repeat until in circle
xy=np.random.uniform(low=0,high=xmax,size=2)
L=np.sqrt(np.sum(xy*xy))
rPred[i,0:2]=xy[0:2]
elif Ndims==3:
#3D spherical grid with uniform random distribution in a sphere
rPred=np.zeros((3,Npts)).T
for i in range(1,Npts):
L=2*xmax
while L>xmax: #repeat until in sphere
xyz=np.random.uniform(low=0,high=xmax,size=3)
L=np.sqrt(np.sum(xyz**2))
rPred[i,0:3]=xyz[0:3]
else:
rPred=np.random.uniform(low=0,high=xmax,size=(Ndims,Npts)).T # N-D Euclidean position-vector
xPred=np.array([np.sqrt(np.sum(rPred*rPred,axis=1))]).T # get the Euclidean "radial distance" as positive scalar
#The randomness of the x's gives some issues for plotting purposes, so sort everything wrt the radial value x
indexSort=np.argsort(xPred,axis=0).ravel()
xPred=np.sort(xPred,axis=0)
rPred=rPred[indexSort[::1]]
yExact=np.sinc(xPred).ravel()
#yExact=np.sin(xPred).ravel()
yExact=yExact+(xPred*xPred*0.1).ravel()
yAvg=np.zeros(Npts)
CIlow=np.zeros(Npts)
CIhigh=np.zeros(Npts)
Qlow=np.zeros(Npts)
Qhigh=np.zeros(Npts)
# and predict
all_yPred=list()
yPred2D=np.zeros((Nmodels,Npts))
sigmaPred2D=np.zeros((Nmodels,Npts))
cnt=-1
ERRORS=np.zeros((Nmodels,2))
for clfrbf in models:
cnt+=1
yPred, sigma=clfrbf.predict(rPred,return_std=True)
#print(cnt," : sigma=",sigma)
all_yPred.append(yPred)
yPred2D[cnt]=yPred
sigmaPred2D[cnt]=sigma
# The mean squared error (MSE) and the coefficient of determination R²: 1 is perfect prediction
ERRORS[cnt,0]=mean_squared_error(yExact, yPred)
ERRORS[cnt,1]=r2_score(yExact, yPred)
#print('MSE: %.3f R²: %.3f' % (mean_squared_error(yExact, yPred), r2_score(yExact, yPred)))
print("Average scores -- MSE= ",np.mean(ERRORS[:,0])," R²= ",np.mean(ERRORS[:,1]))
for i in range(Npts):
yAvg[i], CIlow[i], CIhigh[i], Qlow[i], Qhigh[i]= mean_confidence_interval(yPred2D[:,i],confidence=0.9)
CIlow[i]=yAvg[i]-np.mean(sigmaPred2D[:,i])
CIhigh[i]=yAvg[i]+np.mean(sigmaPred2D[:,i])
#print(yExact[i],"=?=",yAvg[i], CIlow[i], CIhigh[i],"--> ",yPred2D[1:5,i])
# Plot outputs
plt.figure(figsize=(12,8))
for yPred in all_yPred:
plt.plot(xPred, yPred, color='red' ,linewidth=1, zorder=-1, alpha=0.25)
#plt.scatter(xPred, yPred, color='red' ,s=1 , zorder=-1, alpha=0.25)
plt.fill_between(xPred.ravel(), CIlow, CIhigh, color='blue', zorder=0, alpha=.15)
plt.fill_between(xPred.ravel(), Qlow, Qhigh, color='green', zorder=0, alpha=.2)
plt.plot(xPred, yAvg, color='blue',linewidth=3, zorder=0)
plt.plot(xPred, yExact, color='black',linewidth=2, zorder=0)
plt.scatter(x, y, color='black', zorder=1)
plt.axis([xmin,xmax,-.5,6])
step=(xmax-xmin)/11.0
Xlst=list()
for a in np.arange(math.floor(xmin),math.ceil(xmax)+1,1.0):
Xlst.append(a)
plt.xticks(Xlst,rotation=45,fontsize=18)
#plt.xticks([-8,-7,-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,6,7,8])
plt.yticks([0,0.5,1,1.5,2,2.5,3,3.5,4,4.5,5,5.5,6],fontsize=18)
plt.xlabel("feature x",fontsize=22,fontweight="bold")
plt.ylabel("target y",fontsize=22,fontweight="bold")
plt.show()
```
<a href="https://colab.research.google.com/github/deepchatterjeevns/Pytorch-Udacity-Challenge/blob/master/intro-to-pytorch/Part%205%20-%20Inference%20and%20Validation%20(Exercises%20solved).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import sys
try:
import torch
except:
import os
os.environ['TCMALLOC_LARGE_ALLOC_REPORT_THRESHOLD']='2000000000'
# http://pytorch.org/
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
!{sys.executable} -m pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision >/dev/null
import torch
from IPython.core.display import Image
# get helper functions
! wget https://raw.githubusercontent.com/udacity/deep-learning-v2-pytorch/master/intro-to-pytorch/fc_model.py >/dev/null 2>&1
! wget https://raw.githubusercontent.com/udacity/deep-learning-v2-pytorch/master/intro-to-pytorch/helper.py >/dev/null 2>&1
```
# Inference and Validation
Now that you have a trained network, you can use it for making predictions. This is typically called **inference**, a term borrowed from statistics. However, neural networks have a tendency to perform *too well* on the training data and aren't able to generalize to data that hasn't been seen before. This is called **overfitting** and it impairs inference performance. To test for overfitting while training, we measure the performance on data not in the training set called the **validation** set. We avoid overfitting through regularization such as dropout while monitoring the validation performance during training. In this notebook, I'll show you how to do this in PyTorch.
As usual, let's start by loading the dataset through torchvision. You'll learn more about torchvision and loading data in a later part. This time we'll be taking advantage of the test set which you can get by setting `train=False` here:
```python
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
```
The test set contains images just like the training set. Typically you'll see 10-20% of the original dataset held out for testing and validation with the rest being used for training.
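The hold-out idea itself can be sketched with a plain shuffle-and-slice (a generic illustration; the actual Fashion-MNIST split below is handled by the `train=False` flag):

```python
import random

def train_test_split_indices(n, test_frac=0.2, seed=0):
    """Shuffle indices 0..n-1 and slice off a test fraction."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_test = int(n * test_frac)
    return idx[n_test:], idx[:n_test]  # train indices, test indices

train_idx, test_idx = train_test_split_indices(100, test_frac=0.2)
print(len(train_idx), len(test_idx))  # 80 20
```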
```
import torch
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```
Here I'll create a model like normal, using the same one from my solution for part 4.
```
from torch import nn, optim
import torch.nn.functional as F
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.log_softmax(self.fc4(x), dim=1)
return x
```
The goal of validation is to measure the model's performance on data that isn't part of the training set. Performance here is up to the developer to define though. Typically this is just accuracy, the percentage of classes the network predicted correctly. Other options are [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context)) and top-5 error rate. We'll focus on accuracy here. First I'll do a forward pass with one batch from the test set.
```
model = Classifier()
images, labels = next(iter(testloader))
# Get the class probabilities
ps = torch.exp(model(images))
# Make sure the shape is appropriate, we should get 10 class probabilities for 64 examples
print(ps.shape)
```
With the probabilities, we can get the most likely class using the `ps.topk` method. This returns the $k$ highest values. Since we just want the most likely class, we can use `ps.topk(1)`. This returns a tuple of the top-$k$ values and the top-$k$ indices. If the highest value is the fifth element, we'll get back 4 as the index.
```
top_p, top_class = ps.topk(1, dim=1)
# Look at the most likely classes for the first 10 examples
print(top_class[:10,:])
```
Now we can check if the predicted classes match the labels. This is simple to do by equating `top_class` and `labels`, but we have to be careful of the shapes. Here `top_class` is a 2D tensor with shape `(64, 1)` while `labels` is 1D with shape `(64)`. To get the equality to work out the way we want, `top_class` and `labels` must have the same shape.
If we do
```python
equals = top_class == labels
```
`equals` will have shape `(64, 64)`, try it yourself. What it's doing is comparing the single element in each row of `top_class` with every element in `labels`, which returns 64 True/False boolean values for each row.
```
equals = top_class == labels.view(*top_class.shape)
```
Now we need to calculate the percentage of correct predictions. `equals` has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to `torch.mean`. If only it was that simple. If you try `torch.mean(equals)`, you'll get an error
```
RuntimeError: mean is not implemented for type torch.ByteTensor
```
This happens because `equals` has type `torch.ByteTensor`, but `torch.mean` isn't implemented for tensors with that type. So we'll need to convert `equals` to a float tensor. Note that when we take `torch.mean`, it returns a scalar tensor; to get the actual value as a float, we'll need to call `accuracy.item()`.
```
accuracy = torch.mean(equals.type(torch.FloatTensor))
print(f'Accuracy: {accuracy.item()*100}%')
```
The network is untrained so it's making random guesses and we should see an accuracy around 10%. Now let's train our network and include our validation pass so we can measure how well the network is performing on the test set. Since we're not updating our parameters in the validation pass, we can speed up our code by turning off gradients using `torch.no_grad()`:
```python
# turn off gradients
with torch.no_grad():
# validation pass here
for images, labels in testloader:
...
```
>**Exercise:** Implement the validation loop below and print out the total accuracy after the loop. You can largely copy and paste the code from above, but I suggest typing it in because writing it out yourself is essential for building the skill. In general you'll always learn more by typing it rather than copy-pasting. You should be able to get an accuracy above 80%.
```
model = Classifier().cuda()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
epochs = 50
steps = 0
train_losses, test_losses, accuracies = [], [], []
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
images, labels = images.cuda(), labels.cuda()
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
## TODO: Implement the validation pass and print out the validation accuracy
test_loss = 0
accuracy = 0
# turn off gradients
with torch.no_grad():
# validation pass here
for images, labels in testloader:
images, labels = images.cuda(), labels.cuda()
log_ps = model(images)
test_loss += criterion(log_ps, labels)
ps = torch.exp(log_ps)
_, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.cuda.FloatTensor))
steps += 1
train_losses.append(running_loss/len(trainloader))
test_losses.append(test_loss/len(testloader))
accuracies.append(accuracy/len(testloader))
print("Epoch [{}/{}] ".format(e+1, epochs),
"Training Loss:.. {:.3f}".format(train_losses[e]),
"Test Loss:.. {:.3f}".format(test_losses[e]),
"Test Accuracy:.. {:.3f}".format(accuracies[e]))
%matplotlib inline
%config InlineBackend.figure.format = 'retina'
import matplotlib.pyplot as plt
# Plotting train loss, test loss and accuracy on validation
plt.plot(train_losses, label="training loss")
plt.plot(test_losses, label="validation loss")
plt.plot(accuracies, label="validation accuracy")
plt.legend(frameon=False)
```
## Overfitting
If we look at the training and validation losses as we train the network, we can see a phenomenon known as overfitting.
<img src='https://github.com/deepchatterjeevns/deep-learning-v2-pytorch/blob/master/intro-to-pytorch/assets/overfitting.png?raw=1' width=450px>
The network learns the training set better and better, resulting in lower training losses. However, it starts having problems generalizing to data outside the training set leading to the validation loss increasing. The ultimate goal of any deep learning model is to make predictions on new data, so we should strive to get the lowest validation loss possible. One option is to use the version of the model with the lowest validation loss, here the one around 8-10 training epochs. This strategy is called *early-stopping*. In practice, you'd save the model frequently as you're training then later choose the model with the lowest validation loss.
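The save-then-select pattern described above can be sketched framework-agnostically (the loss values below are invented for illustration; in a real run you would also checkpoint the model weights at each improvement):

```python
def best_epoch(val_losses):
    """Return (epoch_index, loss) of the lowest validation loss seen."""
    best_idx = min(range(len(val_losses)), key=lambda i: val_losses[i])
    return best_idx, val_losses[best_idx]

# A typical U-shaped validation curve: improves for a while, then overfits.
val_losses = [0.9, 0.6, 0.45, 0.40, 0.43, 0.5]
epoch, loss = best_epoch(val_losses)
print(epoch, loss)  # 3 0.4
```

With per-epoch checkpoints saved, early stopping reduces to loading the weights from the epoch this function selects.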
The most common method to reduce overfitting (outside of early-stopping) is *dropout*, where we randomly drop input units. This forces the network to share information between weights, increasing its ability to generalize to new data. Adding dropout in PyTorch is straightforward using the [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout) module.
```python
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
# Dropout module with 0.2 drop probability
self.dropout = nn.Dropout(p=0.2)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
# Now with dropout
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
# output so no dropout here
x = F.log_softmax(self.fc4(x), dim=1)
return x
```
During training we want to use dropout to prevent overfitting, but during inference we want to use the entire network. So, we need to turn off dropout during validation, testing, and whenever we're using the network to make predictions. To do this, you use `model.eval()`. This sets the model to evaluation mode where the dropout probability is 0. You can turn dropout back on by setting the model to train mode with `model.train()`. In general, the pattern for the validation loop will look like this, where you turn off gradients, set the model to evaluation mode, calculate the validation loss and metric, then set the model back to train mode.
```python
# turn off gradients
with torch.no_grad():
# set model to evaluation mode
model.eval()
# validation pass here
for images, labels in testloader:
...
# set model back to train mode
model.train()
```
> **Exercise:** Add dropout to your model and train it on Fashion-MNIST again. See if you can get a lower validation loss or higher accuracy.
```
## TODO: Define your model with dropout added
class Classifier_withDropout(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
# Dropout module with 0.2 drop probability
self.dropout = nn.Dropout(p=0.2)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
# Now with dropout
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
# output so no dropout here
x = F.log_softmax(self.fc4(x), dim=1)
return x
## TODO: Train your model with dropout, and monitor the training progress with the validation loss and accuracy
model = Classifier_withDropout().cuda()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
epochs = 30
steps = 0
train_losses, test_losses, accuracies = [], [], []
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
images, labels = images.cuda(), labels.cuda()
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
## TODO: Implement the validation pass and print out the validation accuracy
test_loss = 0
accuracy = 0
with torch.no_grad():
model.eval()
# validation pass here
for images, labels in testloader:
images, labels = images.cuda(), labels.cuda()
log_ps = model(images)
test_loss += criterion(log_ps, labels)
ps = torch.exp(log_ps)
_, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.cuda.FloatTensor))
steps += 1
train_losses.append(running_loss/len(trainloader))
test_losses.append(test_loss/len(testloader))
accuracies.append(accuracy/len(testloader))
model.train()
print("Epoch [{}/{}] ".format(e+1, epochs),
"Training Loss:.. {:.3f}".format(train_losses[e]),
"Test Loss:.. {:.3f}".format(test_losses[e]),
"Test Accuracy:.. {:.3f}".format(accuracies[e]))
plt.plot(train_losses, label="training loss")
plt.plot(test_losses, label="validation loss")
plt.plot(accuracies, label="validation accuracy")
plt.legend(frameon=False)
```
## Inference
Now that the model is trained, we can use it for inference. We've done this before, but now we need to remember to set the model in inference mode with `model.eval()`. You'll also want to turn off autograd with the `torch.no_grad()` context.
```
# Import helper module (should be in the repo)
import helper
# Test out your network!
model.eval()
dataiter = iter(testloader)
images, labels = next(dataiter)
images, labels = images.cuda(), labels.cuda()
img = images[0]
# Convert 2D image to 1D vector
img = img.view(1, 784)
# Calculate the class probabilities (softmax) for img
with torch.no_grad():
output = model.forward(img)
ps = torch.exp(output)
# Plot the image and probabilities
helper.view_classify(img.view(1, 28, 28).cpu(), ps.cpu(), version='Fashion')
```
## Next Up!
In the next part, I'll show you how to save your trained models. In general, you won't want to train a model every time you need it. Instead, you'll train once, save it, then load the model when you want to train more or use it for inference.
# Demonstration notebook for the Pulse of the City project.
In this notebook, you will find examples of how to run the scripts and obtain results from the pedestrian traffic prediction, as well as the spatial interpolation and visualisation systems.
**Index:**
1. [Part 1: Predicting pedestrian traffic](#Part-1:-Predicting-pedestrian-traffic)
2. [Part 2: Spatial interpolation and visualisation](#Part-2:-Spatial-interpolation-and-visualisation)
---
# Part 1: Predicting pedestrian traffic
An example of getting predictions from the prediction module and making use of them.
Import and initialise the predictor object.
```
# Import the predictor class into your workspace
import Predictor
# Initialise the predictor object
predictor = Predictor.Predictor()
```
Generate predictions for a certain timeframe
- Make sure the date-time string is in `yyyy-mm-ddTHH:MM:SS` format.
- First and second parameters define the start and the end of the requested time frame (start included, end excluded from the interval).
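If you build these strings programmatically, the standard library's `datetime` produces the expected format (a small sketch, assuming the `yyyy-mm-ddTHH:MM:SS` layout stated above):

```python
from datetime import datetime

start = datetime(2020, 1, 11, 12, 0, 0)
stamp = start.strftime("%Y-%m-%dT%H:%M:%S")  # ISO-like format the predictor expects
print(stamp)  # 2020-01-11T12:00:00
```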
```
# Generate the predictions for a specific time frame
predictions = predictor.predict_timeframe("2020-01-11T12:00:00", "2020-01-11T14:00:00")
print(predictions)
```
##### NOTE: these predictions are stored as a pandas dataframe,
So, you can use all of the included functionality with it, such as saving it to a `.csv` file with `predictions.to_csv("<filename>.csv")`
---
As you can see, not passing the optional `request_ids` parameter results in predictions for all modelled locations.
If you only want to generate predictions for a certain location, you can do the following:
```
# Print the list of modelled locations with their IDs
Predictor.dump_names()
# Find the ID(s) of location(s) you wish to predict for and include a list of them for the `request_ids` parameter.
ids = [0, 11, 17]
predictions_specific = predictor.predict_timeframe("2020-01-11T12:00:00", "2020-01-11T14:00:00", ids)
# NOTE: even if you are predicting for one location, you will need to pass it as an array (e.g. [5]).
print(predictions_specific)
```
---
As you may have noticed, the prediction dataframe contains 3 columns for each location: `low`, `mid` and `high`. These columns represent the prediction **together with the limits of the confidence interval**:
- `low` is the lower bound of the confidence interval,
- `mid` is the prediction itself,
- `high` is the upper bound of the confidence interval.
If you do not need the confidence interval for your purpose, you can also pass an optional parameter `ci` as `False`:
```
predictions_no_ci = predictor.predict_timeframe("2020-01-11T12:00:00", "2020-01-11T14:00:00", [8, 11], ci=False)
print(predictions_no_ci)
```
---
To access the predicted values directly, convert the dataframe into an array, by doing the following:
```
pred_values = predictions_no_ci.values[:, 1:] # the '[:, 1:]' picks all rows with all but the 1st (date-time) column.
```
Then you can use standard python indexing to access any element(s) in the array:
```
print(pred_values[0, 1]) # First row, second column
print(pred_values[0]) # First row, all columns
print(pred_values[:, 1]) # All rows, second column
```
**If you are using the confidence interval**, to parse the predictions, do the following:
```
# (setting up a new prediction dataframe for demonstration)
predictions = predictor.predict_timeframe("2020-01-11T09:00:00", "2020-01-11T21:00:00", [0, 11])
print(predictions)
# Get the IDs of columns that contain the different components of the predictions:
dataframe_ids = Predictor.get_prediction_ids(predictions)
# This matrix contains indices for each type of prediction (low, mid, high).
# First row is lower bound IDs:
low_ids = dataframe_ids[0]
# Second row is the actual prediction IDs:
pred_ids = dataframe_ids[1]
# Third row is the upper bound IDs:
high_ids = dataframe_ids[2]
# Now, to get the actual values, just convert the dataframe into an array:
dataframe_values = predictions.values
# Here, we do not remove the date-time column, as that will be taken care of by the IDs.
# And now you can access whichever values you wish:
# To get only the actual predictions:
prediction_values = dataframe_values[:, pred_ids] # NOTE: 1st index is the time index, ':' takes all rows.
print("Predictions: \n" + str(prediction_values))
# Works the same way with lower or upper bounds:
high_values = dataframe_values[:, high_ids]
low_values = dataframe_values[:, low_ids]
print("Highs: \n" + str(high_values))
print("Lows: \n" + str(low_values))
```
## Now you can use these values for anything you like!
### Example: plotting the curve
```
# import the plotting library
from matplotlib import pyplot as plt
# Create a new figure
plt.figure(figsize = (15, 7))
plt.title("Pedestrian count prediction")
# Plot the prediction curves:
# 1st column in the prediction values stores predictions for location 0 - Plein 1944 primark:West
plt.plot(prediction_values[:, 0], label = "Plein 1944 primark:West")
# 2nd column is for location 11 - Kop Molenstraat:Molenstraat
plt.plot(prediction_values[:, 1], label = "Kop Molenstraat:Molenstraat")
# Add the confidence interval as a coloured area between lows and highs:
# For Plein 1944 primark:West:
plt.fill_between(
    range(12),               # X axis indices for the area - in this case, we fill the whole range of 12 predictions
    list(low_values[:, 0]),  # Lower bound of the area to fill
    list(high_values[:, 0]), # Upper bound of the area to fill
    alpha = 0.1              # Making the area transparent.
)
# For Kop Molenstraat:Molenstraat:
plt.fill_between(
    range(12),
    list(low_values[:, 1]),
    list(high_values[:, 1]),
    alpha = 0.1
)
# You can use the date-time column values as x tick labels
plt.xticks(range(12), predictions.values[:, 0], rotation=20)
# Set the name of axes
plt.xlabel("Time")
plt.ylabel("Predicted pedestrian count")
# Add a legend
plt.legend()
# Show the figure
plt.show()
```
---
# Part 2: Spatial interpolation and visualisation
An example of using the spatial interpolation module of the project.
Import and initialise the interpolator object
```
# Import the interpolator class into your workspace
import Interpolator
# Initialise the interpolator object.
# The parameter of the initialisation determines the resolution scale of the interpolation
# 1.0 - 100% - 1600x950
# 0.5 - 50% - 800x475, etc.
interpolator = Interpolator.Interpolator(1.0)
```
Now, you can either:
1. Interpolate a prediction for a certain date and hour:
```
# The resulting image is saved in Images/Interpolation.png by default
# if you would like to change the name of the resulting image, pass an optional parameter 'filename' with the chosen name:
interpolator.interpolate_predict("2020-01-15T13:00:00", filename="Demo1")
```
Or
2. Interpolate your own array of data from the 42 locations (e.g. actual observations, pulled from the Numina API).
```
# For demonstration purposes, array of 42 random values will be interpolated:
# Generate 42 random values with numpy's random library
import numpy as np
values = np.random.uniform(0, 2500, 42)
# Interpolate these values and visualise them on the map:
interpolator.interpolate_data(values, filename="Demo2")
```
## Thank you for using my project and good luck!
**Made by:** *Domantas Giržadas, 2020*
## Estimating the coefficient of a regression model via scikit-learn
```
'''
loading the dataset
'''
from data import load_data
import numpy as np
from sklearn.preprocessing import StandardScaler
df = load_data()
X = df[['RM']].values
y = df['MEDV'].values
sc_x = StandardScaler()
sc_y = StandardScaler()
X_std = sc_x.fit_transform(X)
y_std = sc_y.fit_transform(y[:, np.newaxis]).flatten()
'''
train the Linear Regressor
'''
from sklearn.linear_model import LinearRegression
slr = LinearRegression()
slr.fit(X, y)
print('Slope: %.3f' % slr.coef_[0])
print('Intercept: %.3f' % slr.intercept_)
import os
import matplotlib.pyplot as plt
def lin_regplot(X, y, model, name=''):
    plt.scatter(X, y, c='blue')
    plt.plot(X, model.predict(X), color='red')
    return plt
# plt.xlabel('Average number of rooms [RM] (standardized)')
# plt.ylabel('Price in $1000\'s [MEDV] (standardized)')
# plt.show()
# if not os.path.exists(os.path.join(os.getcwd(), 'figures')):
# os.mkdir('figures')
# plt.savefig('./figures/%s.png' % (name), dpi=100)
# plt.gcf().clear()
'''
plot a graph to compare with the results of our LinearRegression class
'''
p = lin_regplot(X, y, slr, 'plotting-sklearn-linear-reg')
plt.xlabel('Average number of rooms [RM]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.show()
plt.gcf().clear()
```
## Fitting a robust regression model using RANSAC
The __RANdom SAmple Consensus (RANSAC)__ algorithm fits a regression model to a subset of the data, the so-called _inliers_, thus eliminating the impact of _outliers_ on the prediction model.
```
from sklearn.linear_model import RANSACRegressor
ransac = RANSACRegressor(LinearRegression(),
                         max_trials=100,
                         min_samples=50,
                         loss='absolute_error',  # 'residual_metric' was removed in scikit-learn 0.20; use 'loss' instead
                         residual_threshold=5.0,
                         random_state=0)
ransac.fit(X, y)
'''
plot the inliers and outliers obtained from RANSAC
'''
inlier_mask = ransac.inlier_mask_
outlier_mask = np.logical_not(inlier_mask)
line_X = np.arange(3, 10, 1)
line_y_ransac = ransac.predict(line_X[:, np.newaxis])
plt.scatter(X[inlier_mask], y[inlier_mask],
c='blue', marker='o', label='Inliers')
plt.scatter(X[outlier_mask], y[outlier_mask],
c='lightgreen', marker='s', label='Outliers')
plt.plot(line_X, line_y_ransac, color='red')
plt.xlabel('Average number of rooms [RM]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.legend(loc='upper left')
plt.show()
# plt.savefig('./figures/ransac-plot.png', dpi=120)
plt.gcf().clear()
print('Slope: %.3f' % ransac.estimator_.coef_[0])
print('Intercept: %.3f' % ransac.estimator_.intercept_)
```
## Evaluating the performance of linear regression models
We will now use all variables in the dataset and train a multiple regression model
```
'''
load data and train regressor
'''
from sklearn.model_selection import train_test_split
X = df.iloc[:, :-1].values
y = df['MEDV'].values
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=0)
slr = LinearRegression()
slr.fit(X_train, y_train)
y_train_pred = slr.predict(X_train)
y_test_pred = slr.predict(X_test)
'''
Time to plot
------------
Since our model uses multiple explanatory variables, we can't visualize the linear
regression line in a two-dimensional plot, but we can plot the residuals versus the
predicted values to diagnose our regression model.
'''
plt.scatter(y_train_pred, y_train_pred - y_train,
c='blue', marker='o', label='Training Data')
plt.scatter(y_test_pred, y_test_pred - y_test,
c='lightgreen', marker='s', label='Test Data')
plt.xlabel('Predicted values')
plt.ylabel('Residuals')
plt.legend(loc='upper left')
plt.hlines(y=0, xmin=-10, xmax=50, lw=2, color='red')
plt.xlim([-10, 50])
plt.show()
# plt.savefig('./figures/preds-vs-residuals.png', dpi=120)
plt.gcf().clear()
```
## Turning a linear regression into a curve - polynomial regression
We will now discuss how to use the PolynomialFeatures transformer class from scikit-learn to add a quadratic term ( d = 2 ) to a simple regression problem with one explanatory variable, and compare the polynomial to the linear fit.
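Concretely, a degree-2 `PolynomialFeatures` transform of a single column produces the columns `[1, x, x^2]`. A minimal NumPy sketch of the same mapping (toy values, not from the dataset):

```python
import numpy as np

# toy inputs (illustrative values only)
x = np.array([2.0, 3.0])
# degree-2 polynomial features: a bias column, x itself, and x squared
X_quad = np.column_stack([np.ones_like(x), x, x ** 2])
print(X_quad)
# [[1. 2. 4.]
#  [1. 3. 9.]]
```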
```
'''
testing polynomial regression
on random dummy data.
'''
# 1. Add a second degree polynomial term
from sklearn.preprocessing import PolynomialFeatures
X = np.array([258.0, 270.0, 294.0,
320.0, 342.0, 368.0,
396.0, 446.0, 480.0,
586.0])[:, np.newaxis]
y = np.array([236.4, 234.4, 252.8,
298.6, 314.2, 342.2,
360.8, 368.0, 391.2,
390.8])
lr = LinearRegression()
pr = LinearRegression()
quadratic = PolynomialFeatures(degree=2)
X_quad = quadratic.fit_transform(X)
# 2. Fit a simple linear regression model for comparison
lr.fit(X, y)
X_fit = np.arange(250, 600, 10)[:, np.newaxis]
y_lin_fit = lr.predict(X_fit)
# 3. Fit a multiple regression model on the transformed features for
# polynomial regression:
pr.fit(X_quad, y)
y_quad_fit = pr.predict(quadratic.fit_transform(X_fit))
# Plot the results
plt.scatter(X, y, label='training points')
plt.plot(X_fit, y_lin_fit,
label='linear fit', linestyle='--')
plt.plot(X_fit, y_quad_fit,
label='quadratic fit')
plt.legend(loc='upper left')
# plt.savefig('./figures/linear-vs-quad.png', dpi=120)
plt.show()
plt.gcf().clear()
'''
Finding MSE and R^2 score.
'''
from sklearn.metrics import mean_squared_error,\
r2_score
y_lin_pred = lr.predict(X)
y_quad_pred = pr.predict(X_quad)
print('Training MSE linear: %.3f, quadratic: %.3f' % (
mean_squared_error(y, y_lin_pred),
mean_squared_error(y, y_quad_pred)))
print('Training R^2 linear: %.3f, quadratic: %.3f' % (
r2_score(y, y_lin_pred),
r2_score(y, y_quad_pred)))
```
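The two metrics printed above reduce to simple formulas: MSE is the mean squared residual, and R^2 is one minus the ratio of the residual to the total sum of squares. A hand computation on toy numbers (values assumed for illustration):

```python
import numpy as np

# toy true values and predictions (illustrative only)
y_true = np.array([3.0, 5.0, 7.0])
y_pred = np.array([2.5, 5.0, 7.5])
mse = np.mean((y_true - y_pred) ** 2)           # mean squared residual
ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
r2 = 1 - ss_res / ss_tot
print(mse, r2)  # 0.1666... 0.9375
```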
## Modeling nonlinear relationships in the Housing Dataset
We will model the relationship between house prices and LSTAT (percent lower status of the population) using second degree (quadratic) and third degree (cubic) polynomials and compare it to a linear fit.
```
X = df[['LSTAT']].values
y = df['MEDV'].values
regr = LinearRegression()
# create polynomial features
quadratic = PolynomialFeatures(degree=2)
cubic = PolynomialFeatures(degree=3)
X_quad = quadratic.fit_transform(X)
X_cubic = cubic.fit_transform(X)
# linear fit
X_fit = np.arange(X.min(), X.max(), 1)[:, np.newaxis]
regr = regr.fit(X, y)
y_lin_fit = regr.predict(X_fit)
linear_r2 = r2_score(y, regr.predict(X))
# quadratic fit
regr = regr.fit(X_quad, y)
y_quad_fit = regr.predict(quadratic.fit_transform(X_fit))
quad_r2 = r2_score(y, regr.predict(X_quad))
# cubic fit
regr = regr.fit(X_cubic, y)
y_cubic_fit = regr.predict(cubic.fit_transform(X_fit))
cubic_r2 = r2_score(y, regr.predict(X_cubic))
# Plotting results
plt.scatter(X, y,
label='training points',
color='lightgray')
plt.plot(X_fit, y_lin_fit,
label='linear (d=1), $R^2=%.2f$'%linear_r2,
color='blue',
lw=2,
linestyle=':')
plt.plot(X_fit, y_quad_fit,
label='quadratic (d=2), $R^2=%.2f$'%quad_r2,
color='red',
lw=2,
linestyle='-')
plt.plot(X_fit, y_cubic_fit,
label='cubic (d=3), $R^2=%.2f$'%cubic_r2,
color='green',
lw=2,
linestyle='--')
plt.xlabel('% lower status of the population [LSTAT]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.legend(loc='upper right')
# plt.savefig('./figures/polynomial-reg-plot.png', dpi=120)
plt.show()
plt.gcf().clear()
```
__Note:__ Polynomial features are not always the best choice for modelling nonlinear relationships.<br>
_For example_, just by looking at __MEDV-LSTAT__ scatterplot, we could propose that a log transformation of the __LSTAT__ feature and the square root of __MEDV__ may project the data onto a linear feature space suitable for linear regression fit.
```
"""Let's test the above hypothesis"""
# transform features
X_log = np.log(X)
y_sqrt = np.sqrt(y)
# fit features
X_fit = np.arange(X_log.min() - 1,
X_log.max() + 1,
1)[:, np.newaxis]
regr = regr.fit(X_log, y_sqrt)
y_lin_fit = regr.predict(X_fit)
linear_r2 = r2_score(y_sqrt, regr.predict(X_log))
# plot results
plt.scatter(X_log, y_sqrt,
label='training points',
color='lightgray')
plt.plot(X_fit, y_lin_fit,
label='linear (d=1), $R^2=%.2f$' % linear_r2,
color='blue',
lw=2)
plt.xlabel('log(% lower status of the population [LSTAT])')
plt.ylabel('$\sqrt{Price \; in \; \$1000\'s [MEDV]}$')
plt.legend(loc='lower left')
plt.show()
# plt.savefig('./figures/log-sqrt-tranform-plot.png', dpi=120)
plt.clf()
plt.close('all')
```
## Decision tree regression
To use a decision tree for regression, we will replace entropy as the impurity measure of a node `t` by the MSE
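In that setting, the impurity of a node `t` is simply the variance of the target values that fall into it. A tiny sketch (toy targets, names assumed):

```python
import numpy as np

# toy target values landing in one node (illustrative)
y_node = np.array([10.0, 12.0, 14.0])
# MSE impurity: mean squared deviation from the node's mean prediction
node_mse = np.mean((y_node - y_node.mean()) ** 2)
print(node_mse)  # 2.666...
```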
```
from sklearn.tree import DecisionTreeRegressor
X = df[['LSTAT']].values
y = df['MEDV'].values
tree = DecisionTreeRegressor(max_depth=3)
tree.fit(X, y)
sort_idx = X.flatten().argsort()
lin_regplot(X[sort_idx], y[sort_idx], tree)
plt.xlabel('% lower status of the population [LSTAT]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.show()
# plt.savefig('./figures/decision-tree-regression.png', dpi=120)
plt.gcf().clear()
```
## Random forest regression
The random forest algorithm is an ensemble technique that combines multiple decision trees. A random forest usually has a better generalization performance than an individual decision tree due to randomness that helps to
decrease the model variance.
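The variance-reduction effect of averaging can be sketched without any trees at all: averaging n noisy, independent estimates shrinks the variance by roughly a factor of n. (This is a simplified analogy, not the forest itself, since real trees are correlated.)

```python
import numpy as np

rng = np.random.default_rng(0)
# 10,000 draws of a single noisy estimator vs. the average of 100 such estimators
single = rng.normal(5.0, 1.0, size=10_000)
ensemble = rng.normal(5.0, 1.0, size=(10_000, 100)).mean(axis=1)
print(single.var(), ensemble.var())  # ensemble variance is ~100x smaller
```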
```
"""
let's use all the features in the Housing Dataset to fit a random forest regression model on 60 percent of the samples and evaluate its performance
on the remaining 40 percent.
"""
X = df.iloc[:, :-1].values
y = df['MEDV'].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y,
test_size=0.4,
random_state=1)
from sklearn.ensemble import RandomForestRegressor
forest = RandomForestRegressor(n_estimators=1000,
                               criterion='squared_error',  # 'mse' was renamed to 'squared_error' in newer scikit-learn
                               random_state=1,
                               n_jobs=-1)
forest.fit(X_train, y_train)
y_train_pred = forest.predict(X_train)
y_test_pred = forest.predict(X_test)
"""evaluating performance via MSE AND R^2 score"""
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
plt.scatter(y_train_pred,
y_train_pred - y_train,
c='black',
marker='o',
s=35,
alpha=0.5,
label='Training data')
plt.scatter(y_test_pred,
y_test_pred - y_test,
c='lightgreen',
marker='s',
s=35,
alpha=0.7,
label='Test data')
plt.xlabel('Predicted values')
plt.ylabel('Residuals')
plt.legend(loc='upper left')
plt.hlines(y=0, xmin=-10, xmax=50, lw=2, color='red')
plt.xlim([-10, 50])
plt.show()
# plt.savefig('./figures/random-forest-plot.png', dpi=120)
plt.gcf().clear()
```
# San Diego Burrito Analytics: Linear models
Scott Cole
21 May 2016
This notebook attempts to predict the overall rating of a burrito as a linear combination of its dimensions. Interpretation of these models is complicated by the significant correlations between dimensions (such as meat quality and non-meat filling quality).
### Imports
```
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
import statsmodels.api as sm
import seaborn as sns
sns.set_style("white")
```
### Load data
```
import util
df = util.load_burritos()
N = df.shape[0]
```
# Linear model 1: Predict overall rating from the individual dimensions
```
# Define predictors of the model
m_lm = ['Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Wrap']
# Remove incomplete data
dffull = df[np.hstack((m_lm,'overall'))].dropna()
X = sm.add_constant(dffull[m_lm])
y = dffull['overall']
my_glm = sm.GLM(y,X)
res = my_glm.fit()
print(res.summary())
print(1 - np.var(res.resid_pearson) / np.var(y))
# Visualize coefficients
sns.set_style("whitegrid")
from tools.plt import bar
newidx = np.argsort(-res.params.values)
temp = np.arange(len(newidx))
newidx = np.delete(newidx,temp[newidx==0])
plt.figure(figsize=(15,7))
plt.bar(np.arange(len(newidx)), res.params[newidx].values, color='.5',yerr=res.bse[newidx].values)
plt.xticks(np.arange(len(newidx)),res.bse[newidx].keys())
ax = plt.gca()
ax.set_ylim((-.5,.5))
ax.set_yticks(np.arange(-.5,.6,.1))
ax.set_xticks([])
figname = 'overall_metric_linearmodelcoef'
plt.savefig('/gh/fig/burrito/'+figname + '.png')
```
# Linear model 2: predict overall rating from ingredients
This linear model is no better than generating random features, showing that a good choice of ingredients alone is not sufficient to make a high-quality burrito.
```
# Get all ingredient keys
startingredients = 29
ingredientkeys = df.keys()[startingredients:]
# Get all ingredient keys with at least 10 burritos
Nlim = 10
ingredientkeys = ingredientkeys[df.count()[startingredients:].values>=Nlim]
# Make a dataframe for all ingredients
dfing = df[ingredientkeys]
# Convert data to binary
for k in dfing.keys():
    dfing[k] = dfing[k].map({'x': 1, 'X': 1, 1: 1})
    dfing[k] = dfing[k].fillna(0)
# Run a general linear model to predict overall burrito rating from ingredients
X = sm.add_constant(dfing)
y = df.overall
lm = sm.GLM(y,X)
res = lm.fit()
print(res.summary())
origR2 = 1 - np.var(res.resid_pearson) / np.var(y)
# Test if the variance explained in this linear model is significantly better than chance
np.random.seed(0)
Nsurr = 1000
randr2 = np.zeros(Nsurr)
for n in range(Nsurr):
    Xrand = np.random.rand(X.shape[0], X.shape[1])
    Xrand[:, 0] = np.ones(X.shape[0])
    lm = sm.GLM(y, Xrand)
    res = lm.fit()
    randr2[n] = 1 - np.var(res.resid_pearson) / np.var(y)
print('p =', np.mean(randr2 > origR2))
```
# Linear model 3. Predicting Yelp ratings
Can also do this for Google ratings
Note, interestingly, that the Tortilla rating is most positively correlated with Yelp and Google ratings. This is significant in a linear model when accounting for the overall rating.
```
# Average each metric over each Location
# Avoid case issues; in the future should avoid article issues
df.Location = df.Location.str.lower()
m_Location = ['Location','N','Yelp','Google','Hunger','Cost','Volume','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
tacoshops = df.Location.unique()
TS = len(tacoshops)
dfmean = pd.DataFrame(np.nan, index=range(TS), columns=m_Location)
for ts in range(TS):
    dfmean.loc[ts] = df.loc[df.Location == tacoshops[ts]].mean()
    dfmean['N'][ts] = sum(df.Location == tacoshops[ts])
dfmean.Location = tacoshops
# Note high correlations between features
m_Yelp = ['Google','Yelp','Hunger','Cost','Volume','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
M = len(m_Yelp)
dfmeancorr = dfmean[m_Yelp].corr()
from matplotlib import cm
clim1 = (-1,1)
plt.figure(figsize=(10,10))
cax = plt.pcolor(range(M+1), range(M+1), dfmeancorr, cmap=cm.bwr)
cbar = plt.colorbar(cax, ticks=(-1,-.5,0,.5,1))
cbar.ax.set_ylabel('Pearson correlation (r)', size=30)
plt.clim(clim1)
cbar.ax.set_yticklabels((-1,-.5,0,.5,1),size=20)
ax = plt.gca()
ax.set_yticks(np.arange(M)+.5)
ax.set_yticklabels(m_Yelp,size=25)
ax.set_xticks(np.arange(M)+.5)
ax.set_xticklabels(m_Yelp,size=25)
plt.xticks(rotation='vertical')
plt.xlim((0,M))
plt.ylim((0,M))
plt.tight_layout()
# GLM for Yelp: all dimensions
m_Yelp = ['Hunger','Cost','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
dffull = dfmean[np.hstack((m_Yelp,'Yelp'))].dropna()
X = sm.add_constant(dffull[m_Yelp])
y = dffull['Yelp']
my_glm = sm.GLM(y,X)
res = my_glm.fit()
print(res.summary())
print(res.pvalues)
print(1 - np.var(res.resid_pearson) / np.var(y))
# GLM for Yelp: some dimensions
m_Yelp = ['Tortilla','overall']
dffull = dfmean[np.hstack((m_Yelp,'Yelp'))].dropna()
X = sm.add_constant(dffull[m_Yelp])
y = dffull['Yelp']
my_glm = sm.GLM(y,X)
res = my_glm.fit()
print(res.summary())
plt.figure(figsize=(4,4))
ax = plt.gca()
dfmean.plot(kind='scatter',x='Tortilla',y='Yelp',ax=ax,**{'s':40,'color':'k','alpha':.3})
plt.xlabel('Average Tortilla rating',size=20)
plt.ylabel('Yelp rating',size=20)
plt.xticks(np.arange(0,6),size=15)
plt.yticks(np.arange(0,6),size=15)
plt.ylim((2,5))
plt.tight_layout()
print(sp.stats.spearmanr(dffull.Yelp, dffull.Tortilla))
figname = 'corr-Yelp-tortilla'
plt.savefig('C:/gh/fig/burrito/'+figname + '.png')
```
## Scaling to Minimum and Maximum values - MinMaxScaling
Minimum and maximum scaling squeezes the values between 0 and 1. It subtracts the minimum value from all the observations, and then divides the result by the value range:
X_scaled = (X - X.min) / (X.max - X.min)
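A minimal numeric sketch of that formula in plain NumPy (toy column, values assumed):

```python
import numpy as np

X = np.array([[1.0], [3.0], [5.0]])
# subtract the column minimum, then divide by the column range (max - min)
X_scaled = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
print(X_scaled.ravel())  # [0.  0.5 1. ]
```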
```
import pandas as pd
# dataset for the demo
from sklearn.datasets import load_boston  # note: load_boston was removed in scikit-learn 1.2; use an earlier version or load the data from a CSV
from sklearn.model_selection import train_test_split
# the scaler - for min-max scaling
from sklearn.preprocessing import MinMaxScaler
# load the Boston House price data
# this is how we load the boston dataset from sklearn
boston_dataset = load_boston()
# create a dataframe with the independent variables
data = pd.DataFrame(boston_dataset.data,
columns=boston_dataset.feature_names)
# add target
data['MEDV'] = boston_dataset.target
data.head()
# let's separate the data into training and testing set
X_train, X_test, y_train, y_test = train_test_split(data.drop('MEDV', axis=1),
data['MEDV'],
test_size=0.3,
random_state=0)
X_train.shape, X_test.shape
# set up the scaler
scaler = MinMaxScaler()
# fit the scaler to the train set, it will learn the parameters
scaler.fit(X_train)
# transform train and test sets
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# the scaler stores the maximum values of the features, learned from train set
scaler.data_max_
# the scaler stores the minimum values of the features, learned from train set
scaler.min_
# the scaler also stores the value range (max - min)
scaler.data_range_
# let's transform the returned NumPy arrays to dataframes
X_train_scaled = pd.DataFrame(X_train_scaled, columns=X_train.columns)
X_test_scaled = pd.DataFrame(X_test_scaled, columns=X_test.columns)
import matplotlib.pyplot as plt
import seaborn as sns
# let's compare the variable distributions before and after scaling
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5))
# before scaling
ax1.set_title('Before Scaling')
sns.kdeplot(X_train['RM'], ax=ax1)
sns.kdeplot(X_train['LSTAT'], ax=ax1)
sns.kdeplot(X_train['CRIM'], ax=ax1)
# after scaling
ax2.set_title('After Min-Max Scaling')
sns.kdeplot(X_train_scaled['RM'], ax=ax2)
sns.kdeplot(X_train_scaled['LSTAT'], ax=ax2)
sns.kdeplot(X_train_scaled['CRIM'], ax=ax2)
plt.show()
# let's compare the variable distributions before and after scaling
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5))
# before scaling
ax1.set_title('Before Scaling')
sns.kdeplot(X_train['AGE'], ax=ax1)
sns.kdeplot(X_train['DIS'], ax=ax1)
sns.kdeplot(X_train['NOX'], ax=ax1)
# after scaling
ax2.set_title('After Min-Max Scaling')
sns.kdeplot(X_train_scaled['AGE'], ax=ax2)
sns.kdeplot(X_train_scaled['DIS'], ax=ax2)
sns.kdeplot(X_train_scaled['NOX'], ax=ax2)
plt.show()
```
# Land Use/Land Cover
```
import networkx as nx
import osmnx as ox
import pygeohydro as gh
from pynhd import NLDI
```
Land cover, imperviousness, and canopy data can be retrieved from the [NLCD](https://www.mrlc.gov/data) database. First, we use [PyNHD](https://github.com/cheginit/pynhd) to get the contributing watershed geometry of three NWIS stations with the ID of `USGS-01031450`, `USGS-01031500`, and `USGS-01031510`:
```
geometry = NLDI().get_basins(["01031450", "01031500", "01031510"])
```
We can now use the ``nlcd_bygeom`` and ``nlcd_bycoords`` functions to get the NLCD data.
Let's start with ``nlcd_bygeom``. This function has two positional arguments: the target geometries (or points of interest) and the target resolution in meters. Note that if a single geometry is passed and it is not in the ``EPSG:4326`` CRS, the ``geo_crs`` argument should be given as well. The NLCD database is multi-resolution; based on the target resolution, the source data are resampled on the server side.
You should be mindful of the resolution, since higher resolutions require more memory. If a request needs more memory than is available on your system, the code is likely to crash; you can either coarsen the resolution (use a larger value in meters) or divide your region of interest into smaller regions.
Moreover, the [MRLC](https://www.mrlc.gov/geoserver/web/) GeoServer has a limit of about 8 million pixels per request, but PyGeoHydro takes care of the domain decomposition under the hood, dividing the request into smaller requests and then merging the results. So the only bottleneck for requests is the amount of available memory on your system.
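A quick back-of-the-envelope check of that pixel limit (plain arithmetic; the 100 km box is an assumed example, not from this notebook):

```python
# pixels needed to cover a 100 km x 100 km box at a given resolution (meters)
extent_m = 100_000
for res_m in (30, 100):
    n_pixels = (extent_m // res_m) ** 2
    print(res_m, n_pixels)  # 30 m exceeds the ~8M-pixel limit; 100 m does not
```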
Both ``nlcd_bygeom`` and ``nlcd_bycoords`` can request four layers from the MRLC web service: imperviousness, land use/land cover, impervious descriptors, and tree canopy. Since NLCD is released every couple of years, you can specify the target year via the ``years`` argument. Layers that are not included in this argument are ignored. By default, `years` is `{'impervious': [2019], 'cover': [2019], 'canopy': [2019], 'descriptor': [2019]}`.
Furthermore, we can specify the region of interest as well via the `region` argument. Valid values are `L48` (for CONUS), `HI` (for Hawaii), `AK` (for Alaska), and `PR` (for Puerto Rico). By default, region is set to `L48`.
Let's get the cover and impervious descriptor data at a 100 m resolution for all three stations
```
desc = gh.nlcd_bygeom(geometry, 100, years={"descriptor": 2019})
```
This function returns a `dict` where its keys are the indices of the input `GeoDataFrame`.
```
cmap, norm, levels = gh.plot.descriptor_legends()
ax = desc["01031500"].descriptor_2019.plot(
size=5, cmap=cmap, levels=levels, cbar_kwargs={"ticks": levels[:-1]}
)
ax.axes.set_title("Urban Descriptor 2019")
ax.figure.savefig("_static/descriptor.png", bbox_inches="tight", facecolor="w")
```
Now let's get the land cover data:
```
lulc = gh.nlcd_bygeom(geometry, 100, years={"cover": [2016, 2019]})
```
Additionally, `PyGeoHydro` provides a function for getting the official legends of the cover data. Let's plot the data using these legends.
```
cmap, norm, levels = gh.plot.cover_legends()
ax = lulc["01031500"].cover_2019.plot(
size=5, cmap=cmap, levels=levels, cbar_kwargs={"ticks": levels[:-1]}
)
ax.axes.set_title("Land Use/Land Cover 2019")
ax.figure.savefig("_static/lulc.png", bbox_inches="tight", facecolor="w")
```
Moreover, we can get the statistics of the cover data for each class or category as well:
```
stats = gh.cover_statistics(lulc["01031500"].cover_2019)
stats
```
Now, let's see `nlcd_bycoords` in action. The coordinates must be a list of (longitude, latitude) coordinates. Let's use [osmnx](https://github.com/gboeing/osmnx) package to get a street network:
```
G = ox.graph_from_place("Piedmont, California, USA", network_type="drive")
```
Now, we can get land cover and tree canopy for each node based on their coordinates and then plot the results.
```
x, y = nx.get_node_attributes(G, "x").values(), nx.get_node_attributes(G, "y").values()
lulc = gh.nlcd_bycoords(list(zip(x, y)), years={"cover": [2019], "canopy": [2016]})
nx.set_node_attributes(G, dict(zip(G.nodes(), lulc.cover_2019)), "cover_2019")
nx.set_node_attributes(G, dict(zip(G.nodes(), lulc.canopy_2016)), "canopy_2016")
lulc.head()
nc = ox.plot.get_node_colors_by_attr(G, "canopy_2016", cmap="viridis_r")
fig, ax = ox.plot_graph(
G,
node_color=nc,
node_size=20,
save=True,
bgcolor="w",
)
```
# Profiling OpenACC Code
This lab is intended for C/C++ programmers. If you prefer to use Fortran, click [this link.](../Fortran/README.ipynb)
You will receive a warning five minutes before the lab instance shuts down. At this point, make sure to save your work! If you are about to run out of time, please see the [Post-Lab](#Post-Lab-Summary) section for saving this lab to view offline later.
Don't forget to check out additional [OpenACC Resources](https://www.openacc.org/resources) and join our [OpenACC Slack Channel](https://www.openacc.org/community#slack) to share your experience and get more help from the community.
---
Let's execute the cell below to display information about the GPUs running on the server by running the `pgaccelinfo` command, which ships with the PGI compiler that we will be using. To do this, execute the cell block below by giving it focus (clicking on it with your mouse), and hitting Ctrl-Enter, or pressing the play button in the toolbar above. If all goes well, you should see some output returned below the grey cell.
```
!pgaccelinfo
```
---
## Introduction
Our goal for this lab is to learn what exactly code profiling is, and how we can use it to help us write powerful parallel programs.

This is the OpenACC 3-Step development cycle.
**Analyze** your code to determine most likely places needing parallelization or optimization.
**Parallelize** your code by starting with the most time consuming parts and check for correctness.
**Optimize** your code to improve observed speed-up from parallelization.
We are currently tackling the **analyze** step. We will use PGI's code profiler (PGProf) to get an understanding of a relatively simple sample code before moving onto the next two steps.
---
## Run the Code
Our first step to analyzing this code is to run it. We need to record the results of our program before making any changes so that we can compare them to the results from the parallel code later on. It is also important to record the time that the program takes to run, as this will be our primary indicator of whether or not our parallelization is improving performance.
### Compiling the Code with PGI
We are using the PGI compiler to compile our code. You will not need to memorize the compiler commands to complete this lab; however, they will be helpful to know if you want to parallelize your own personal code with OpenACC.
**pgcc** : this is the command to compile C code
**pgc++** : this is the command to compile C++ code
**pgfortran** : this is the command to compile Fortran code
**-fast** : this compiler flag will allow the compiler to perform additional optimizations to our code
```
!pgcc -fast -o laplace -Mprof=ccff jacobi.c laplace2d.c && echo "Compilation Successful!" && ./laplace
```
### Understanding Code Results
The output from our program will make more sense as we analyze the code. The most important thing to keep in mind is that we need these output values to stay consistent. If these outputs change during any point while we parallelize our code, we know we've made a mistake. For simplicity, focus on the last output, which occurred at iteration 900. It is also helpful to record the time the program took to run. Our goal while parallelizing the code is ultimately to make it faster, so we need to know our "base runtime" in order to know if the code is running faster.
---
## Analyze the Code
Now that we know how long the code took to run and what the code's output looks like, we should be able to view the code with a decent idea of what is happening. The code is contained within two files, which you may open and view.
[jacobi.c](../../../../edit/module2/English/C/jacobi.c)
[laplace2d.c](../../../../edit/module2/English/C/laplace2d.c)
You may read through these two files on your own, but we will also highlight the most important parts below in the "Code Breakdown".
### Code Description
The code simulates heat distribution across a 2-dimensional metal plate. In the beginning, the plate will be unheated, meaning that the entire plate will be room temperature. A constant heat will be applied to the edge of the plate and the code will simulate that heat distributing across the plate over time.
This is a visual representation of the plate before the simulation starts:

We can see that the plate is uniformly room temperature, except for the top edge. Within the [laplace2d.c](../C/laplace2d.c) file, we see a function called `initialize`. This function is what "heats" the top edge of the plate.
```cpp
void initialize(double *restrict A, double *restrict Anew, int m, int n)
{
    memset(A, 0, n * m * sizeof(double));
    memset(Anew, 0, n * m * sizeof(double));

    for(int i = 0; i < m; i++){
        A[i] = 1.0;
        Anew[i] = 1.0;
    }
}
```
After the top edge is heated, the code will simulate the heat distributing across the length of the plate. We will keep the top edge at a constant heat as the simulation progresses.
This is the plate after several iterations of our simulation:

That's the theory: simple heat distribution. However, we are more interested in how the code works.
### Code Breakdown
The 2-dimensional plate is represented by a 2-dimensional array containing double-precision floating point values. These doubles represent temperature; 0.0 is room temperature, and 1.0 is our max temperature. The 2-dimensional plate has two states, one represents the current temperature, and one represents the expected temperature values at the next step in our simulation. These two states are represented by arrays **`A`** and **`Anew`** respectively. The following is a visual representation of these arrays, with the top edge "heated".

Simulating this state in two arrays is very important for our **`calcNext`** function. Our calcNext is essentially our "simulate" function. calcNext will look at the inner elements of A (meaning everything except for the edges of the plate) and update each element's temperature based on the temperature of its neighbors. If we attempted to calculate in-place (using only **`A`**), then each element would calculate its new temperature based on the updated temperature of previous elements. This data dependency not only prevents parallelizing the code, but would also result in incorrect results when run in serial. By calculating into the temporary array **`Anew`** we ensure that an entire step of our simulation has completed before updating the **`A`** array.

Below is the `calcNext` function:
```cpp
01 double calcNext(double *restrict A, double *restrict Anew, int m, int n)
02 {
03 double error = 0.0;
04 for( int j = 1; j < n-1; j++)
05 {
06 for( int i = 1; i < m-1; i++ )
07 {
08 Anew[OFFSET(j, i, m)] = 0.25 * ( A[OFFSET(j, i+1, m)] + A[OFFSET(j, i-1, m)]
09 + A[OFFSET(j-1, i, m)] + A[OFFSET(j+1, i, m)]);
10 error = fmax( error, fabs(Anew[OFFSET(j, i, m)] - A[OFFSET(j, i , m)]));
11 }
12 }
13 return error;
14 }
```
We see on lines 08 and 09 where we are calculating the value of `Anew` at `i,j` by averaging the current values of its neighbors. Line 10 is where we calculate the current rate of change for the simulation by looking at how much the `i,j` element changed during this step and finding the maximum value for this `error`. This allows us to short-circuit our simulation if it reaches a steady state before we've completed our maximum number of iterations.
Lastly, our `swap` function will copy the contents of `Anew` to `A`.
```cpp
01 void swap(double *restrict A, double *restrict Anew, int m, int n)
02 {
03     for( int j = 1; j < n-1; j++)
04     {
05         for( int i = 1; i < m-1; i++ )
06         {
07             A[OFFSET(j, i, m)] = Anew[OFFSET(j, i, m)];
08         }
09     }
10 }
```
---
## Profile the Code
By now you should have a good idea of what the code is doing. If not, go spend a little more time in the previous sections to ensure you understand the code before moving forward. Now it's time to profile the code to get a better understanding of where the application is spending its runtime. To profile our code we will be using PGPROF. PGPROF provides both a command-line and a visual profiler, and it comes with the PGI compiler.
We will start by profiling the laplace executable that we created earlier using the command line option first. Run the pgprof command:
```
!pgprof ./laplace
```
We can see the time that each individual portion of our code took to run. This information is important because it allows us to make educated decisions about which parts of our code to optimize first. To get the most bang for our buck, we want to focus on the most time-consuming parts of the code. Next, we will compile, run, and profile a parallel version of the code, and analyze the differences.
### Optional - Where is the c_mcopy8 coming from?
When we compiled our code earlier, we omitted any sort of compiler feedback. It turns out that even with a sequential code, the compiler is performing a lot of optimizations. If you compile the code again with the `-Minfo=opt` flag, which instructs the compiler to print additional information about how it optimized the code, then it will become more obvious where this strange routine came from. Afterwards, you should see that `c_mcopy8` is actually an optimization that is being applied to the `swap` function. Notice in the output below that at line 63 of `laplace2d.c`, which happens inside the `swap` routine, the compiler determined that our loops are performing a memory copy, which it believes can be performed more efficiently by calling the `c_mcopy8` function instead.
```cpp
laplace2d.c:
swap:
63, Memory copy idiom, loop replaced by call to __c_mcopy8
```
```
!pgcc -fast -Minfo=opt -o laplace jacobi.c laplace2d.c
```
---
## Run Our Parallel Code on Multicore CPU
In a future lab you will parallelize the code to run on a multicore CPU. This is the simplest starting point, since it doesn't require us to think about copying our data between different memories. So that you can experience profiling with PGPROF on a multicore CPU, a parallel version of the code has been provided. You will be able to parallelize the code yourself in the next lab.
```
!pgcc -fast -ta=multicore -Minfo=accel -o laplace_parallel ./solutions/parallel/jacobi.c ./solutions/parallel/laplace2d.c && ./laplace_parallel
```
### Compiling Multicore Code using PGI
Again, you do not need to memorize the compiler commands to complete this lab. Though, if you want to use OpenACC with your own personal code, you will want to learn them.
**-ta** : This flag will tell the compiler to compile our code for a specific parallel hardware. TA stands for *"Target Accelerator"*, an accelerator being any device that accelerates performance (in our case, this means parallel hardware.) Omitting the -ta flag will cause the code to compile sequentially.
**-ta=multicore** will tell the compiler to parallelize the code specifically for a multicore CPU.
**-Minfo** : This flag will tell the compiler to give us some feedback when compiling the code.
**-Minfo=accel** : will only give us feedback about the parallelization of our code.
**-Minfo=opt** : will give us feedback about sequential optimizations.
**-Minfo=all** : will give all feedback; this includes feedback about parallelization, sequential optimizations, and even parts of the code that couldn't be optimized for one reason or another.
If you would like to see the c_mcopy8 from earlier, try switching the Minfo flag with **-Minfo=accel,opt**.
---
## Profiling Multicore Code
We will use the PGPROF visual profiler this time to get a more graphical view of the profile. [Click here](/vnc/vnc.html) to open a new browser tab with a virtual desktop running the PGPROF profiler. Normally you would open this program on your local machine by running the `pgprof` command or choosing it from your installed applications.
After accessing the URL, the first screen will ask you to connect to VNC 
You will be asked to enter the password. Please enter "openacc" as the password 
We have already run Visual Profiler for you and the first screen you see is to choose a workspace. You can keep the default workspace and press enter 
We will start by profiling the laplace executable that we created earlier. To do this, Select File > New Session. After doing this you should see a pop-up like the one in the picture below.

Then, where it says "File: Enter Executable File [required]", select "Browse". Then select File Systems > home > openacc > labs > module2 > English > C.
Select our "laplace" executable file.

Then select "Next", followed by "Finished".

Follow the steps from earlier to profile the code with PGPROF, however, select the **`laplace_parallel`** executable this time instead of **`laplace`**. If you have closed the noVNC client, you can reopen it by <a href="/vnc" target="_blank">clicking this link</a>.
This is the view that we are greeted with when profiling a multicore application.

The first difference we see is the blue "timeline." This timeline represents when our program is executing something on the parallel hardware. This means that every call to `calcNext` and `swap` should be represented by a blue bar.
This is the main idea behind parallel programming: the **TOTAL** computation time remains almost equivalent to our sequential program's, but the application runtime decreases because we can now execute portions of our code in parallel, spreading the work across multiple threads.
---
## Conclusion
Now we have a good understanding of how our program is running, and which parts of the program are time consuming. In the next lab, we will parallelize our program using OpenACC.
We are working on a very simple code that is used specifically for teaching purposes, meaning that, in terms of complexity, it can be fairly underwhelming. Profiling becomes exponentially more useful if you choose to work on a "real-world" code: one with possibly hundreds of functions, or millions of lines of code. Profiling may seem trivial when we only have 4 functions and our entire code is contained in only two files. However, profiling will be one of your greatest assets when parallelizing real-world code.
---
## Bonus Task
For right now, we are focusing on multicore CPUs. Eventually, we will transition to GPUs. If you are familiar with GPUs, and would like to play with a GPU profile, then feel free to try this bonus task. If you do not want to complete this task now, you will have an opportunity in later labs (where we will also explain more about what is happening.)
Run this script to compile/run our code on a GPU.
```
!pgcc -fast -ta=tesla -Minfo=accel -o laplace_gpu ./solutions/gpu/jacobi.c ./solutions/gpu/laplace2d.c && ./laplace_gpu
```
Now, within PGPROF, select File > New Session. Follow the same steps as earlier, except select the **`laplace_gpu`** executable. If you closed the noVNC window, you can reopen it by <a href="/vnc" target="_blank">clicking this link</a>.
Happy profiling!
---
## Post-Lab Summary
If you would like to download this lab for later viewing, it is recommended that you go to your browser's File menu (not the Jupyter notebook File menu) and save the complete web page. This will ensure the images are copied down as well.
You can also execute the following cell block to create a zip-file of the files you've been working on, and download it with the link below.
```
%%bash
rm -f openacc_files.zip
zip -r openacc_files.zip *
```
**After** executing the above zip command, you should be able to download the zip file [here](files/openacc_files.zip)
# Licensing
This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0).
<a href="https://colab.research.google.com/github/ayulockin/Explore-NFNet/blob/main/Train_Basline_Cifar10.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
* This is the baseline notebook to set up training of a ResNet20 model on the CIFAR-10 dataset.
* Horizontal Flip and Rotation are used as the augmentation policy. The Albumentations package is used to apply them.
# 🧰 Setups, Installations and Imports
```
%%capture
!pip install wandb --upgrade
!pip install albumentations
!git clone https://github.com/ayulockin/Explore-NFNet
import tensorflow as tf
print(tf.__version__)
import tensorflow_datasets as tfds
import sys
sys.path.append("Explore-NFNet")
import os
import cv2
import numpy as np
from functools import partial
import matplotlib.pyplot as plt
# Imports from the cloned repository
from models.resnet import resnet_v1
from models.mini_vgg import get_mini_vgg
# Augmentation related imports
import albumentations as A
# Seed everything for reproducibility
def seed_everything():
    # Set the random seeds
    os.environ['TF_CUDNN_DETERMINISTIC'] = '1'
    np.random.seed(hash("improves reproducibility") % 2**32 - 1)
    tf.random.set_seed(hash("by removing stochasticity") % 2**32 - 1)
seed_everything()
# Prevent TensorFlow from allocating all the GPU memory at once.
# Ref: https://www.tensorflow.org/guide/gpu
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Currently, memory growth needs to be the same across GPUs
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Memory growth must be set before GPUs have been initialized
        print(e)
import wandb
from wandb.keras import WandbCallback
wandb.login()
DATASET_NAME = 'cifar10'
IMG_HEIGHT = 32
IMG_WIDTH = 32
NUM_CLASSES = 10
SHUFFLE_BUFFER = 1024
BATCH_SIZE = 1024
EPOCHS = 100
AUTOTUNE = tf.data.experimental.AUTOTUNE
print(f'Global batch size is: {BATCH_SIZE}')
```
# ⛄ Download and Prepare Dataset
```
(train_ds, val_ds, test_ds), info = tfds.load(name=DATASET_NAME,
                                              split=["train[:85%]", "train[85%:]", "test"],
                                              with_info=True,
                                              as_supervised=True)
@tf.function
def preprocess(image, label):
    # preprocess image
    image = tf.cast(image, tf.float32)
    image = image/255.0
    return image, label
# Define the augmentation policies. Note that they are applied sequentially with some probability p.
transforms = A.Compose([
    A.HorizontalFlip(p=0.7),
    A.Rotate(limit=30, p=0.7)
])
# Apply augmentation policies.
def aug_fn(image):
    data = {"image": image}
    aug_data = transforms(**data)
    aug_img = aug_data["image"]
    return aug_img

@tf.function
def apply_augmentation(image, label):
    aug_img = tf.numpy_function(func=aug_fn, inp=[image], Tout=tf.float32)
    aug_img.set_shape((IMG_HEIGHT, IMG_WIDTH, 3))
    return aug_img, label
train_ds = (
    train_ds
    .shuffle(SHUFFLE_BUFFER)
    .map(preprocess, num_parallel_calls=AUTOTUNE)
    .map(apply_augmentation, num_parallel_calls=AUTOTUNE)
    .batch(BATCH_SIZE)
    .prefetch(AUTOTUNE)
)
val_ds = (
    val_ds
    .map(preprocess, num_parallel_calls=AUTOTUNE)
    .batch(BATCH_SIZE)
    .prefetch(AUTOTUNE)
)
test_ds = (
    test_ds
    .map(preprocess, num_parallel_calls=AUTOTUNE)
    .batch(BATCH_SIZE)
    .prefetch(AUTOTUNE)
)
def show_batch(image_batch, label_batch):
    plt.figure(figsize=(10,10))
    for n in range(25):
        ax = plt.subplot(5,5,n+1)
        plt.imshow(image_batch[n])
        # plt.title(f'{np.argmax(label_batch[n].numpy())}')
        plt.title(f'{label_batch[n].numpy()}')
        plt.axis('off')
image_batch, label_batch = next(iter(train_ds))
show_batch(image_batch, label_batch)
print(image_batch.shape, label_batch.shape)
```
# 🐤 Model
```
def GetModel(use_bn):
    return resnet_v1((IMG_HEIGHT, IMG_WIDTH, 3), 20, num_classes=NUM_CLASSES, use_bn=use_bn)  ## Returns a ResNet20 model.
tf.keras.backend.clear_session()
test_model = GetModel(use_bn=True)
test_model.summary()
print(f"Total learnable parameters: {test_model.count_params()/1e6} M")
```
# 📲 Callbacks
```
earlystopper = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=10, verbose=0, mode='auto',
    restore_best_weights=True
)
reducelronplateau = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.5,
    patience=3, verbose=1
)
```
# 🚋 Train with W&B
```
tf.keras.backend.clear_session()
# Initialize model
model = GetModel(use_bn=True)
model.compile('adam', 'sparse_categorical_crossentropy', metrics=['acc'])

# Initialize W&B run
run = wandb.init(entity='ayush-thakur', project='nfnet', job_type='train-baseline')

# Train model
model.fit(train_ds,
          epochs=EPOCHS,
          validation_data=val_ds,
          callbacks=[WandbCallback(),
                     reducelronplateau,
                     earlystopper])
# Evaluate model on test set
loss, acc = model.evaluate(test_ds)
wandb.log({'Test Accuracy': round(acc, 3)})
# Close W&B run
run.finish()
```
```
# %load defaults.ipy
import numpy as np
import matplotlib
matplotlib.rcParams['savefig.dpi'] = 600
%matplotlib inline
import matplotlib.pyplot as plt
import sys
sys.path.append('../python')
from plot_info import showAndSave, savePlot, get_environment
import plot_info
plot_info.set_notebook_name("WassersteinDistancesPerturbationsAll.ipynb")
import netCDF4
from IPython.core.display import display, HTML
import matplotlib2tikz
import os
import h5py
import ot
import sys
import scipy
import scipy.stats
# we had some issues on the euler cluster loading the correct libraries
for p in sys.path:
    if 'matplotlib' in p.lower():
        sys.path.remove(p)
    if 'netcdf' in p.lower():
        sys.path.remove(p)
from mpl_toolkits.mplot3d import Axes3D
def load(f, sample):
    if '.nc' in f:
        with netCDF4.Dataset(f) as d:
            return d.variables['sample_%d_rho' % sample][:,:,0]
    else:
        f = os.path.join(f, 'kh_%d/kh_1.h5' % sample)
        with h5py.File(f) as d:
            return d['rho'][:,:,0]
print("STATISTICAL_KH_PERTS={}".format(plot_info.get_environment("STATISTICAL_KH_PERTS", [])))
print("STATISTICAL_KH_PERTS_NORMAL_UNIFORM={}".format(plot_info.get_environment("STATISTICAL_KH_PERTS_NORMAL_UNIFORM", [])))
```
# Histogram plotting
```
def plot_histograms2(N, M, perturbation, name, minValue, maxValue, x, y, xp, yp, valuesx, valuesy):
    plt.hist2d(valuesx, valuesy, bins=20, normed=True, range=[[minValue, maxValue], [minValue, maxValue]])
    plt.colorbar()
    plt.xlabel('Value of $\\rho(%.2f,%.2f)$' % (x, y))
    plt.ylabel('Value of $\\rho(%.2f,%.2f)$' % (xp, yp))
    plt.title('Histogram at resolution %d, $\\epsilon = %.4f$, for %s,\nbetween $(%.2f, %.2f)$ and $(%.2f, %.2f)$' % (N, perturbation, name, x, y, xp, yp))
    showAndSave('hist2pt_perturbation_%s_%.5f_%d_%.1f_%.1f_%.1f_%.1f' % (name, perturbation, N, x, y, xp, yp))

    H, xedges, yedges = np.histogram2d(valuesx, valuesy, bins=20, normed=True, range=[[minValue, maxValue], [minValue, maxValue]])
    fig = plt.figure(figsize=(10, 8))
    ax = fig.gca(projection='3d')
    Xvalues, Yvalues = np.meshgrid(xedges[:-1], yedges[:-1])
    surf = ax.plot_surface(Xvalues, Yvalues, H)
    plt.xlabel('Value of $\\rho(%.2f,%.2f)$' % (x, y))
    plt.ylabel('Value of $\\rho(%.2f,%.2f)$' % (xp, yp))
    plt.title('Histogram at resolution %d, $\\epsilon = %.4f$, for %s,\nbetween $(%.2f, %.2f)$ and $(%.2f, %.2f)$' % (N, perturbation, name, x, y, xp, yp))
    ax.dist = 12
    ax.set_xticks(np.array(ax.get_xticks())[::4])
    ax.set_yticks(np.array(ax.get_yticks())[::4])
    ax.set_zticks(np.array(ax.get_zticks())[::2])
    ax.yaxis.labelpad = 10
    ax.xaxis.labelpad = 10
    ax.zaxis.labelpad = 10
    showAndSave('hist2pt_perturbation_surface_%s_%.5f_%d_%.1f_%.1f_%.1f_%.1f' % (name, perturbation, N, x, y, xp, yp))
def plotHistograms(name, resolution, perturbations, basename, samples):
    points = [0.2, 0.4, 0.7, 0.8]
    min_values = {}
    max_values = {}
    for x in points:
        min_values[x] = {}
        max_values[x] = {}
        for y in points:
            min_values[x][y] = {}
            max_values[x][y] = {}
            for xp in points:
                min_values[x][y][xp] = {}
                max_values[x][y][xp] = {}
                for yp in points:
                    min_values[x][y][xp][yp] = 100000000
                    max_values[x][y][xp][yp] = -100000000

    for p in perturbations:
        # first we load the data
        data = np.zeros((resolution, resolution, samples))
        filename = basename.format(perturbation=p)
        for k in range(samples):
            data[:,:,k] = load(filename, k)
        for x in points:
            for y in points:
                for xp in points:
                    for yp in points:
                        i = int(x*resolution)
                        j = int(y*resolution)
                        ip = int(xp*resolution)
                        jp = int(yp*resolution)
                        min_values[x][y][xp][yp] = min(np.amin([data[i,j,:], data[ip,jp,:]]), min_values[x][y][xp][yp])
                        max_values[x][y][xp][yp] = max(np.amax([data[i,j,:], data[ip,jp,:]]), max_values[x][y][xp][yp])

    # First we find minimum and maximum over all resolutions and all samples;
    # we need this to equalize the histograms properly.
    for p in perturbations:
        filename = basename.format(perturbation=p)
        data = np.zeros((resolution, resolution, samples))
        for k in range(samples):
            data[:,:,k] = load(filename, k)
        for x in points:
            for y in points:
                for xp in points:
                    for yp in points:
                        # Limit plotting
                        if x == 0.7 and y == 0.7:
                            if xp == 0.7:
                                if yp != 0.8:
                                    continue
                            elif xp == 0.4:
                                if yp != 0.2:
                                    continue
                            else:
                                continue
                        else:
                            continue
                        valuesx = []
                        valuesy = []
                        i = int(x*resolution)
                        j = int(y*resolution)
                        ip = int(xp*resolution)
                        jp = int(yp*resolution)
                        for k in range(samples):
                            valuesx.append(data[i,j,k])
                            valuesy.append(data[ip,jp,k])
                        plot_histograms2(resolution, samples, p, name, min_values[x][y][xp][yp], max_values[x][y][xp][yp], x, y, xp, yp, valuesx, valuesy)
```
# Computing Wasserstein distances
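Before the implementation below, a quick intuition: for two empirical distributions with the same number of equally weighted samples, the one-dimensional $W_1$ distance reduces to the mean absolute difference of the sorted samples. A minimal pure-Python sketch (equivalent, in this equal-weight case, to `scipy.stats.wasserstein_distance` as used in `wasserstein1pt_fast`):

```python
def wasserstein_1d(samples1, samples2):
    """W_1 between two empirical distributions with equally many,
    equally weighted samples: the optimal 1-D coupling matches the
    sorted samples, so the distance is the mean absolute difference."""
    assert len(samples1) == len(samples2)
    xs, ys = sorted(samples1), sorted(samples2)
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

print(wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # prints 1.0
```

The two-point version below needs a genuine optimal-transport solver (`ot.emd`) because the samples live in two dimensions, where sorting no longer gives the optimal coupling.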
```
def wasserstein_point2_fast(data1, data2, i, j, ip, jp, a, b, xs, xt):
    """
    Computes the Wasserstein distance for a single point in the spatial domain
    """
    xs[:,0] = data1[i,j,:]
    xs[:,1] = data1[ip,jp,:]
    xt[:,0] = data2[i,j,:]
    xt[:,1] = data2[ip,jp,:]
    M = ot.dist(xs, xt, metric='euclidean')
    G0 = ot.emd(a, b, M)
    return np.sum(G0*M)
def wasserstein2pt_fast(data1, data2):
    """
    Approximate the L^1(W_1) distance (||W_1(nu1, nu2)||_{L^1})
    """
    M = data1.shape[2]
    a = np.ones(M)/M
    b = np.ones(M)/M
    xs = np.zeros((M,2))
    xt = np.zeros((M,2))
    N = data1.shape[0]
    distance = 0
    points = 0.1*np.array(range(0,10))
    for (n,x) in enumerate(points):
        for y in points:
            for xp in points:
                for yp in points:
                    i = int(x*N)
                    j = int(y*N)
                    ip = int(xp*N)
                    jp = int(yp*N)
                    distance += wasserstein_point2_fast(data1, data2, i, j, ip, jp, a, b, xs, xt)
    return distance / len(points)**4
def wasserstein1pt_fast(data1, data2):
    """
    Approximate the L^1(W_1) distance (||W_1(nu1, nu2)||_{L^1})
    """
    N = data1.shape[0]
    distance = 0
    for i in range(N):
        for j in range(N):
            distance += scipy.stats.wasserstein_distance(data1[i,j,:], data2[i,j,:])
    return distance / N**2
def plotWassersteinConvergence(name, basename, r, perturbations):
    wasserstein2pterrors = []
    for (n, p) in enumerate(perturbations[:-1]):
        filename = basename.format(perturbation=p)
        filename_coarse = basename.format(perturbation=perturbations[-1])
        data1 = np.zeros((r,r,r))
        data2 = np.zeros((r,r,r))
        for k in range(r):
            d1 = load(filename, k)
            d2 = load(filename_coarse, k)
            data1[:,:,k] = d1
            data2[:,:,k] = d2
        wasserstein2pterrors.append(wasserstein2pt_fast(data1, data2))
    print("wasserstein2pterrors=%s" % wasserstein2pterrors)
    plt.loglog(perturbations[1:], wasserstein2pterrors, '-o', basex=2, basey=2)
    plt.xlabel("Perturbation $\\epsilon$")
    plt.ylabel('$||W_1(\\nu^{2, \\Delta x, \\epsilon}, \\nu^{2,\\Delta x, \\epsilon_0})||_{L^1(D\\times D)}$')
    plt.title("Wasserstein convergence for %s\nfor second correlation measure,\nwith respect to perturbation size\nagainst a reference solution with $\\epsilon_0=%.4f$" % (name, perturbations[-1]))
    showAndSave('%s_wasserstein_perturbation_convergence_2pt_all' % name)

    # one point
    wasserstein1pterrors = []
    for (n, p) in enumerate(perturbations[:-1]):
        filename = basename.format(perturbation=p)
        filename_coarse = basename.format(perturbation=perturbations[-1])
        data1 = np.zeros((r,r,r))
        data2 = np.zeros((r,r,r))
        for k in range(r):
            d1 = load(filename, k)
            d2 = load(filename_coarse, k)
            data1[:,:,k] = d1
            data2[:,:,k] = d2
        wasserstein1pterrors.append(wasserstein1pt_fast(data1, data2))
    print("wasserstein1pterrors=%s" % wasserstein1pterrors)
    plt.loglog(perturbations[1:], wasserstein1pterrors, '-o', basex=2, basey=2)
    plt.xlabel("Perturbation $\\epsilon$")
    plt.ylabel('$||W_1(\\nu^{1, \\Delta x, \\epsilon}, \\nu^{1,\\Delta x, \\epsilon_0})||_{L^1(D)}$')
    plt.title("Wasserstein convergence for %s\nfor first correlation measure,\nwith respect to perturbation size\nagainst a reference solution with $\\epsilon_0=%.4f$" % (name, perturbations[-1]))
    showAndSave('%s_wasserstein_perturbation_convergence_1pt_all' % name)
def plotWassersteinConvergenceDifferentTypes(name, filenames, r, perturbations_inverse):
    wasserstein2pterrors = []
    types = [k for k in filenames.keys()]
    if len(types) != 2:
        raise Exception("Only support two perturbation types")
    for filename_a, filename_b in zip(filenames[types[0]], filenames[types[1]]):
        data1 = np.zeros((r,r,r))
        data2 = np.zeros((r,r,r))
        for k in range(r):
            d1 = load(filename_a, k)
            d2 = load(filename_b, k)
            data1[:,:,k] = d1
            data2[:,:,k] = d2
        wasserstein2pterrors.append(wasserstein2pt_fast(data1, data2))
    print("wasserstein2pterrors=%s" % wasserstein2pterrors)
    plt.loglog(1.0/np.array(perturbations_inverse, dtype=np.float64), wasserstein2pterrors, '-o', basex=2, basey=2)
    plt.xlabel("Perturbation $\\epsilon$")
    plt.ylabel('$||W_1(\\nu^{2, \\Delta x, \\epsilon}_{\\mathrm{%s}}, \\nu^{2,\\Delta x, \\epsilon}_{\\mathrm{%s}})||_{L^1(D\\times D)}$' % (types[0], types[1]))
    plt.title("Wasserstein convergence for %s\nfor second correlation measure" % (name))
    showAndSave('%s_type_comparison_wasserstein_perturbation_convergence_2pt_all' % name)

    # one point
    wasserstein1pterrors = []
    for filename_a, filename_b in zip(filenames[types[0]], filenames[types[1]]):
        data1 = np.zeros((r,r,r))
        data2 = np.zeros((r,r,r))
        for k in range(r):
            d1 = load(filename_a, k)
            d2 = load(filename_b, k)
            data1[:,:,k] = d1
            data2[:,:,k] = d2
        wasserstein1pterrors.append(wasserstein1pt_fast(data1, data2))
    print("wasserstein1pterrors=%s" % wasserstein1pterrors)
    plt.loglog(1.0/np.array(perturbations_inverse, dtype=np.float64), wasserstein1pterrors, '-o', basex=2, basey=2)
    plt.xlabel("Perturbation $\\epsilon$")
    plt.ylabel('$||W_1(\\nu^{1, \\Delta x, \\epsilon}_{\\mathrm{%s}}, \\nu^{1,\\Delta x, \\epsilon}_{\\mathrm{%s}})||_{L^1(D)}$' % (types[0], types[1]))
    plt.title("Wasserstein convergence for %s\nfor first correlation measure,\nwith respect to perturbation size" % (name))
    showAndSave('%s_type_comparison_wasserstein_perturbation_convergence_1pt_all' % name)
```
# Kelvin-Helmholtz
## Convergence as we refine the perturbation
In the cell below, we look at the convergence
$$\mathrm{Error}(\epsilon)=\|W_1(\mu^{\epsilon}, \mu^{\epsilon_{\mathrm{ref}}})\|_{L^1}$$
where $\mu^{\epsilon_{\mathrm{ref}}}$ is a reference solution with small perturbation size ($\epsilon_{\mathrm{ref}}=0.0025$). We keep the number of samples and resolution fixed ($1024$ samples at $1024 \times 1024$ resolution).
```
resolution = 1024
perturbations = [0.09, 0.075, 0.06, 0.05, 0.025, 0.01, 0.0075, 0.005,0.0025]
basepath_perts = get_environment("STATISTICAL_KH_PERTS",
                                 ["kh_perts/q{}/kh_1.nc".format(p) for p in perturbations])
plot_info.console_log("Using basepath_perts={}".format(basepath_perts))
basename = os.path.join(basepath_perts, 'kh_perts/q{perturbation}/kh_1.nc')
name = 'Kelvin-Helmholtz'
samples = 1024
plotWassersteinConvergence(name, basename, resolution, perturbations)
try:
    plotHistograms(name, resolution, perturbations, basename, samples)
except Exception as e:
    plot_info.console_log("Failed making histograms, \t{}".format(getattr(e, 'message', repr(e))))
```
# Convergence for different perturbation types
In this experiment, we have done two perturbations. One with a normal distribution, and one with a uniform distribution. We measure the following for each perturbation size
$$\mathrm{Error}(\epsilon)=\|W_1(\mu^{\epsilon}_{\mathrm{normal}}, \mu^{\epsilon}_{\mathrm{uniform}})\|_{L^1}$$
We plot the error as a function of $\epsilon$. If the statistical solution is invariant to the different perturbation types, we should get something that converges to zero.
```
resolution = 1024
pert_inverses = [8, 16, 32, 64, 128, 256, 512]
types = ['normal', 'uniform']
normal_uniform_base = 'dist_{t}/pertinv_{inv}/kh_1.nc'
# all_filenames is just used for verification
all_filenames = []
for t in types:
    for p in pert_inverses:
        all_filenames.append(normal_uniform_base.format(t=t, inv=p))
basepath_perts_normal_uniform = get_environment("STATISTICAL_KH_PERTS_NORMAL_UNIFORM",
                                                all_filenames)
plot_info.console_log("Using basepath_perts_normal_uniform={}".format(basepath_perts_normal_uniform))
filenames_per_type = {}
for t in types:
    filenames_per_type[t] = []
    for p in pert_inverses:
        filenames_per_type[t].append(os.path.join(basepath_perts_normal_uniform,
                                                  normal_uniform_base.format(t=t, inv=p)))
name = 'Kelvin-Helmholtz Perturbation comparison'
samples = 1024
plotWassersteinConvergenceDifferentTypes(name, filenames_per_type, resolution, pert_inverses)
```
# Reshaping data: Portland housing developments
In this notebook, we're going to work with some data on Portland (Oregon) housing developments since 2014. Right now, the data are scattered across a jillion spreadsheets. Our goal is to parse them all into one clean CSV. (Thanks to [Kelly Kenoyer of the Portland Mercury](https://twitter.com/Kelly_Kenoyer) for donating this data.)
The spreadsheets, a mixture of `xls` and `xlsx` files, live in `../data/portland/`. A few things to note:
- Some of the spreadsheets have extra columns
- Some of the spreadsheets have other worksheets in addition to the data worksheet (pivot tables, mostly) -- but these are not always in the same position
- Some of the spreadsheets have columns of mostly blank data that the city once used to manually aggregate data by category -- we don't want these columns
- Some of the spreadsheets have blank rows
Our strategy:
- Get a list of Excel files in that directory using the [`glob`](https://docs.python.org/3/library/glob.html) module
- Create an empty pandas data frame
- Loop over the list of spreadsheet files and ...
- Read in the file to a data frame
- Find the correct worksheet
- Drop empty columns and rows
- Append to the main data frame
First, we'll import `glob` and pandas.
```
import glob
import pandas as pd
```
Next, we'll use `glob` to get a list of the files we're going to loop over. We'll use the asterisk `*`, which means "match everything."
```
xl_files = glob.glob('../data/portland/*')
print(xl_files)
```
Now we'll create an empty data frame. This will be the container we stuff the data into as we loop over the files.
```
housing = pd.DataFrame()
```
Let's take a look at what we're dealing with. We're going to loop over the spreadsheet, and for each one, we're going to look at:
- The names of the worksheets in that spreadsheet
- The columns in each worksheet
This will help us decide, later, which worksheets we need to target.
We're going to take advantage of the fact, [according to the `read_excel()` documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html), that you can pass `None` as the `sheet_name` argument and pandas will read in _all_ of the sheets as a big dictionary -- the keys are the names of the worksheets, the values are the associated data frames.
Later, our logic will go like this:
- Read in every worksheet as a data frame
- Target the worksheet whose name matches the pattern for the data we need
👉 For a refresher on _for loops_ and dictionaries, [check out this notebook](../reference/Python%20data%20types%20and%20basic%20syntax.ipynb#for-loops).
```
# loop over the excel file paths
for f in xl_files:

    # load the file into a data frame
    # specifying `None` as the sheet name
    df = pd.read_excel(f, sheet_name=None)

    # print the name of the file
    print(f)

    # print the worksheet names
    print(df.keys())

    # print a divider to make scanning easier
    print('='*60)

    # and an empty line
    print('')
```
OK. So it looks like our target sheets are called a few different things: `nrs`, `04_2016 New Res Units'`, `'2018 04 New Residential Units'`, etc.
Can we come up with a list of patterns to match all of them? I think we can.
```
# the items in this list are lowercased,
# because we're gonna match on .lower()'d versions of the sheet names
target_sheet_name_fragments = ['new res', 'nrs', 'lus stats']
```
So now, we need to write some logic that says: Pick the worksheet that has one of our `target_sheet_name_fragments` in the name. A nested pair of _for loops_ will do the trick for us.
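The matching idea can be sketched on its own with a small helper (hypothetical function name, operating on a plain list of sheet names rather than the worksheet dictionary):

```python
target_sheet_name_fragments = ['new res', 'nrs', 'lus stats']

def find_target_sheet(sheet_names, fragments=target_sheet_name_fragments):
    """Return the first worksheet name containing any fragment
    (case-insensitive), or None if nothing matches."""
    for name in sheet_names:
        if any(fragment in name.lower() for fragment in fragments):
            return name
    return None

print(find_target_sheet(['Pivot Table', '04_2016 New Res Units']))
# prints: 04_2016 New Res Units
```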
```
# loop over the excel file paths
for f in xl_files:

    # load the file into a data frame
    # specifying `None` as the sheet name
    df = pd.read_excel(f, sheet_name=None)

    # start off with no match
    match = None

    # loop over the worksheet names
    for ws_name in df.keys():
        # loop over the word fragments
        for fragment in target_sheet_name_fragments:
            # if this fragment exists in the lowercased worksheet name
            if fragment in ws_name.lower():
                # we've got a winner
                match = ws_name

    # if, when we get to the end of this, `match` is still None
    if not match:
        # print something to let us know about it
        print(f'NO MATCH FOUND FOR {f}')
        # and the names of the sheets
        print(df.keys())
        # and break out of the loop
        break

    # otherwise, grab a handle to the worksheet we want
    working_df = df[match]

    # print a status message to let us know what's up
    print(f'parsing "{match}" worksheet from "{f}"')
```
Scanning through that list, I feel comfortable that we're grabbing the correct data. Let's take a look at the columns in each worksheet we'll be parsing.
```
# loop over the excel file paths
for f in xl_files:

    # load the file into a data frame
    # specifying `None` as the sheet name
    df = pd.read_excel(f, sheet_name=None)

    # start off with no match
    match = None

    # loop over the worksheet names
    for ws_name in df.keys():
        # loop over the word fragments
        for fragment in target_sheet_name_fragments:
            # if this fragment exists in the lowercased worksheet name
            if fragment in ws_name.lower():
                # we've got a winner
                match = ws_name

    # if, when we get to the end of this, `match` is still None
    if not match:
        # print something to let us know about it
        print(f'NO MATCH FOUND FOR {f}')
        # and the names of the sheets
        print(df.keys())
        # and break out of the loop
        break

    # otherwise, grab a handle to the worksheet we want
    working_df = df[match]

    # print a status message to let us know what's up
    print(f'parsing "{match}" worksheet from "{f}"')

    # print a sorted list of column names
    print(sorted(working_df.columns))

    # print a divider to make scanning our results easier
    print('='*60)

    # print an empty line
    print()
```
I notice that some columns are, e.g. `Unnamed: 4`. That means there's no column header. Let's take a look at one of those:
```
test = pd.read_excel('../data/portland/08_2014 New Res Units.xls', sheet_name='08_2014 New Res Units')
test.head(20)
```
Looks like they're using those columns to total up the valuations for groups of housing types. I'm noticing, too, that there are some blank rows -- probably used as dividers between groups -- so we'll want to drop those as well.
We'll keep that in mind as we roll through these sheets.
Here's the pandas documentation on the methods we'll be using here:
- [`append()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.append.html)
- [`drop()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html)
- [`dropna()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html)
```
# loop over the excel file paths
for f in xl_files:

    # load the file into a data frame
    # specifying `None` as the sheet name
    df = pd.read_excel(f, sheet_name=None)

    # start off with no match
    match = None

    # loop over the worksheet names
    for ws_name in df.keys():
        # loop over the word fragments
        for fragment in target_sheet_name_fragments:
            # if this fragment exists in the lowercased worksheet name
            if fragment in ws_name.lower():
                # we've got a winner
                match = ws_name

    # if, when we get to the end of this, `match` is still None
    if not match:
        # print something to let us know about it
        print(f'NO MATCH FOUND FOR {f}')
        # and the names of the sheets
        print(df.keys())
        # and break out of the loop
        break

    # otherwise, grab a handle to the worksheet we want
    working_df = df[match]

    # print a status message to let us know what's up
    print(f'parsing "{match}" worksheet from "{f}"')

    # get a list of columns we want to drop
    columns_to_drop = [x for x in working_df.columns if 'Unnamed' in x]

    # drop those bad boys
    working_df = working_df.drop(columns_to_drop, axis=1)

    # drop empty rows in place, but only if _all_ of the values are nulls
    working_df.dropna(inplace=True, how='all')

    # append to our `housing` data frame
    housing = housing.append(working_df,
                             ignore_index=True,
                             sort=True)
housing.head()
len(housing)
housing.dtypes
```
One last thing I'd do, before writing out to file, is parse the date columns as dates:
```
# convert "indate" column to datetime
housing.indate = pd.to_datetime(housing.indate)
# convert "indate" column to datetime
housing.issuedate = pd.to_datetime(housing.issuedate)
housing.head()
```
Now we can use the `to_csv()` method to write out to a new file:
```
housing.to_csv('portland-developments.csv', index=False)
```
# Composing Time Constructions
In this notebook we build and test a Hebrew phrase parser.
```
import sys
import collections
import pickle
import random
import re
import copy
import numpy as np
import networkx as nx
from datetime import datetime
import matplotlib.pyplot as plt
from Levenshtein import distance as lev_dist
from pprint import pprint
# local packages
from tf_tools.load import load_tf
# load semantic vectors
from paths import semvector
with open(semvector, 'rb') as infile:
semdist = pickle.load(infile)
# load and configure Text-Fabric
TF, api, A = load_tf()
F, E, T, L = api.F, api.E, api.T, api.L
A.displaySetup(condenseType='phrase', withNodes=True, extraFeatures='st')
# add grammar to path
sys.path.append('../cxs')
```
# Machinery
We could use some machinery to do the hard work of looking in and around a node. In the older approach we used TF search templates. But these are not very efficient at scale, and they are always bound by the limits of the query language. I take another approach here: a set of classes that specify locations and directions within a specified context.
```
from positions import Positions, PositionsTF, Walker, Dummy
```
## `Positions(TF)`
The `Positions` class enables concise access to adjacent nodes within a given context. This allows us to write algorithms with query-like efficiency and all of the power of Python.
This class is instantiated on a word node and can provide contextual look-up data for a given word. For example, given a phrase containing the following word nodes:
> (189681, 189682, **189683**, 189684, 189685, 189686) <br>
representing the following phrase (space separated for clarity):
> ב שׁנת **שׁלשׁים** ו שׁמנה שׁנה
Given that the bolded node, `189683` is our `source` word, we instantiate the class, feeding in the node, the "phrase_atom" string (which is the context we want to search within), and an instance of Text-Fabric (`tf`):
```
# source node context TF instance
# | | |
P = PositionsTF(189683, 'phrase_atom', A).get
```
If we want to obtain the word adjacent one space forward, we simply ask `P` for `1`, which gives us the next word in the phrase.
```
P(1)
```
If we try to ask for 4 words forward, we go beyond the bounds of the phrase. But `P` handles this by returning nothing:
```
P(4)
```
To look back one word, we simply give a negative value:
```
P(-1)
```
Finally, `P` can be used to quickly call features on these words. For instance, in order to get the lexeme of the word two words in front of `189683`:
```
P(2,'lex')
```
And if we want to get a number of features, we can just add other features to the arguments. The result is a feature set:
```
P(2, 'lex', 'nu')
```
`P` can also handle features on the source node itself by giving a positionality of `0`:
```
P(0, 'lex')
```
### `Positions` also exists in a non-TF version
When the non-tf version of `Positions` is provided any iterable, it can perform the same functions.
```
test_ps = ['The', 'good', 'dog', 'jumped.']
P = Positions('good', test_ps).get
P(1)
```
`Positions` can apply a function to the result via the optional `do` argument. In the example below, the word two positions ahead is found and an upper-case function is called on the string.
```
P(2, do=lambda w: w.upper())
```
The non-tf version of `Positions` makes it possible to do positionality searches with any ordered list of Python objects that represent linguistic units.
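As a rough sketch of the idea (my own minimal reimplementation, not the code in `positions.py`), the non-TF behavior demonstrated above fits in a few lines:

```python
class MiniPositions:
    """Relative lookups in an ordered context, with safe bounds."""
    def __init__(self, source, context, default=None):
        self.context = list(context)
        self.i = self.context.index(source)   # position of the source element
        self.default = default

    def get(self, offset, do=None):
        j = self.i + offset
        if j < 0 or j >= len(self.context):   # beyond the context -> default
            return self.default
        item = self.context[j]
        return do(item) if do else item

P = MiniPositions('good', ['The', 'good', 'dog', 'jumped.'])
print(P.get(1))                 # -> dog
print(P.get(4))                 # -> None (past the end of the phrase)
print(P.get(2, do=str.upper))   # -> JUMPED.
```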
## `Walker`
`Walker` performs a similar function to `Positions`, except it is agnostic about exact positions, walking either `ahead` or `back` from the source to a target node in the context. A function must be supplied that returns `True` on the target node.
We instantiate the `Walker` using the same source and context as above.
```
source = 189683
# get words inside source's phrase_atom
positions = L.d(
L.u(189683,'phrase_atom')[0], 'word'
)
Wk = Walker(source, positions)
```
`Walker` is demonstrated below with the same word. A simple `lambda` function is used to test for the lexeme. In the example below, we find the first word ahead of `189683` that is a cardinal number:
```
Wk.ahead(lambda w: F.ls.v(w) == 'card')
```
A failed search demonstrates that `None` is returned when there is no valid match.
```
Wk.ahead(lambda w: F.ls.v(w) == 'BOOGABOOGA')
```
Another example wherein we walk backwards to the preposition:
```
Wk.back(lambda w: F.sp.v(w) == 'prep')
```
We can also specify that the walk should be interrupted under certain conditions with a `stop` function. In this case we walk forward to the next cardinal number, but the walk is interrupted when the `stop` function detects a conjunction.
```
Wk.ahead(lambda w: F.ls.v(w) == 'card',
stop=lambda w: F.sp.v(w) == 'conj')
```
We can also specify the opposite with a `go` function argument, which defines the nodes that are allowed to intervene between `source` and `target`. Below we specify that *only* a conjunction may intervene.
```
Wk.ahead(lambda w: F.ls.v(w) == 'card',
go=lambda w: F.sp.v(w) == 'conj')
```
The `go` and `stop` functions can be as permissive or strict as desired.
Finally, we can tell `Walker` that the output of the validation function should be returned instead of the node itself with the optional argument `output=True`:
```
val_funct = lambda w: F.ls.v(w) if F.ls.v(w)=='card' else None
Wk.ahead(val_funct, output=True)
```
This ability is useful for certain tests.
Like `Positions`, `Walker` can be used in non-TF contexts:
```
test_ps = ['The', 'bad', 'cat', 'swatted.']
Wk_notf = Walker('bad', test_ps)
Wk_notf.ahead(lambda w: w.startswith('sw'))
```
### Returning All Results along Path
`Walker` can also return all results along the path by toggling `every=True`
```
Wk_notf.ahead(lambda w: type(w)==str, every=True)
```
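Pulling the examples above together, here is a toy version (an assumption about the mechanism, not the project's actual `Walker`) of the walk logic with `stop`, `go`, and `every`:

```python
class MiniWalker:
    """Walk ahead/back through an ordered context to a matching target."""
    def __init__(self, source, context):
        self.context = list(context)
        self.i = self.context.index(source)

    def _walk(self, items, match, stop=None, go=None, every=False):
        found = []
        for item in items:
            if match(item):
                found.append(item)
                if not every:
                    break
            elif stop is not None and stop(item):
                break  # an interrupting node ends the walk
            elif go is not None and not go(item):
                break  # only `go`-approved nodes may intervene
        if every:
            return found
        return found[0] if found else None

    def ahead(self, match, **kwargs):
        return self._walk(self.context[self.i + 1:], match, **kwargs)

    def back(self, match, **kwargs):
        behind = self.context[:self.i]
        return self._walk(behind[::-1], match, **kwargs)

wk = MiniWalker('bad', ['The', 'bad', 'cat', 'swatted.'])
print(wk.ahead(lambda w: w.startswith('sw')))   # -> swatted.
print(wk.ahead(lambda w: w.startswith('sw'),
               stop=lambda w: w == 'cat'))      # -> None (walk interrupted)
```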
## `Dummy`
When writing conditions and logic, we want an object that passively receives `NoneType`s or zero `int`s without throwing errors. Such an object should also return `None` to reflect its `False` value. `Dummy` provides exactly this. It accepts all of the arguments, kwargs, and function calls of a `Positions` or `Walker` object, but it returns absolutely nothing. Ouch.
```
D = Dummy(None, 'phrase_atom', A)
```
The function call below returns `None`:
```
D.get(1)
```
As does this:
```
D.get(1, 'lex')
```
And even this:
```
D.ahead(1)
```
`D` is essentially a soulless void that consumes whatever you throw at it and gives nothing in return.
For safe calls on a `Positions` or `Walker` object, assign nodes to it via a function that returns a `Dummy` for null nodes:
```
def getPos(node, context, tf):
"""A function to get Positions safely."""
if node:
return PositionsTF(node, context, tf)
else:
return Dummy() # <- give dummy on empty node
```
So:
```
P = getPos(None, 'phrase_atom', A)
P.get(1)
```
Or:
```
P = getPos(1, 'phrase_atom', A)
P.get(1)
```
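For completeness, such a black hole can be sketched with `__getattr__` (an assumption about the mechanism, not the actual `Dummy` source):

```python
class MiniDummy:
    """Absorbs any constructor args, attribute access, and call; yields None."""
    def __init__(self, *args, **kwargs):
        pass

    def __getattr__(self, name):
        # every attribute looks like a method that returns None
        return lambda *args, **kwargs: None

d = MiniDummy(None, 'phrase_atom')
print(d.get(1))           # -> None
print(d.ahead(1, 'lex'))  # -> None
```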
# Need for Semantic Data
The accurate processing of word connections depends on fuller semantic data than BHSA provides. Future semantic data could be stored in a similar way to word sets (`wsets`).
For example, in the two phrases
> (Exod 25:39) ככר זהב טהור <br>
> (2 Sam 24:24) בכסף שקלים חמשׁים
we see that זהב and כסף, despite standing in two different positions alongside two different words, both express a kind of "composed of" semantic concept: "round gold" (i.e. a round composed of gold) and "silver shekels" (shekels composed of silver). To process these kinds of links, we need a list of nouns that often function as "material." But this is only the beginning. Many other words will have specific semantic values that motivate their syntactic behavior. Such a scope lies outside the bounds of this author's current project on Hebrew time phrases.
## A Compromise: Time Phrases
Since constructing these semantic classes is vastly time-consuming, I want to start with a smaller set of cases. For now I will focus on parsing connections within time phrases, which I am analyzing in my ongoing PhD project.
```
def disjoint(ph):
    """Isolate phrases with gaps."""
    words = L.d(ph, 'word')
    # a gap exists wherever two successive word nodes are not consecutive
    return any(w2 - w1 > 1 for w1, w2 in zip(words, words[1:]))
alltimes = [
ph for ph in F.otype.s('timephrase')
]
timephrases = [ph for ph in alltimes if not disjoint(ph)]
print(f'{len(timephrases)} phrases ready')
```
## Search & Display Functions
The functions below allow for fast searching and displaying of queries using a `Construction` object, described in the next section.
```
from cx_analysis.search import SearchCX
cx_show = SearchCX(A)
pretty, prettyconds, showcx, search = (
cx_show.pretty, cx_show.prettyconds,
cx_show.showcx, cx_show.search
)
```
## Construction Classes
* `Construction` - an object that represents a linguistic construction; the class records roles and the words that occupy them, and has methods for accessing and retrieving data on embedded roles and other constructions
* `CXBuilder` - matches conditions to build `Construction` objects; populates them with requisite data
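As a rough sketch of the shape such an object might have (the attribute names here are invented for illustration, not the actual `cx_analysis.cx` API):

```python
class MiniConstruction:
    """A construction: a name plus a mapping from roles to their fillers."""
    def __init__(self, name='', roles=None):
        self.name = name
        self.roles = roles or {}

    def getrole(self, role, default=None):
        return self.roles.get(role, default)

    def __bool__(self):
        # an empty construction is falsy, like the real null Construction()
        return bool(self.name or self.roles)

cx = MiniConstruction('coord', {'part1': 'day', 'conj': 'and', 'part2': 'night'})
print(cx.getrole('part2'))        # -> night
print(bool(MiniConstruction()))   # -> False
```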
```
from cx_analysis.cx import Construction
from cx_analysis.build import CXbuilder, CXbuilderTF
```
# Word Constructions
The `Words` builder class recognizes word semantic classes and types based on provided criteria.
```
from word_grammar import Words
```
<hr>
# Subphrase Constructions
The `Subphrases` class prepares subphrase constructions.
```
from phrase_grammar import Subphrases
```
### Load Constructions
```
words = Words(A) # word CX builder
# analyze all matches; return as dict
start = datetime.now()
print(f'Beginning word construction analysis...')
wordcxs = words.cxdict(
s for tp in timephrases
for s in L.d(tp,'word')
)
print(f'\t{datetime.now() - start} COMPLETE \t[ {len(wordcxs)} ] words loaded')
# time phrase CX builder
spc = Subphrases(wordcxs, semdist, A)
```
# TO-FIX
* missed appo 361457 cx: 1450112 (בחים מספר ימי חיי הבלו)
### Small Tests
```
# pretty(1448320)
# test_small = spc.appo_name(202679)
# showcx(test_small, conds=True)
```
### Stretch Tests
```
# On deck: adjectival preposition
# check performance: 1448556
test = spc.analyzestretch(L.d(1450075, 'word'), debug=False)
# for res in test:
# showcx(res, conds=True)
```
### Pattern Searches
```
# words = [w for ph in timephrases for w in L.d(ph, 'word')]
# results = search(words, spc.appo_name, pattern='entity_name', show=100, shuffle=False)
```
### Analyze Results
```
# for res in results:
# head, appo = list(res.getsuccroles('head'))[-1], list(res.getsuccroles('appo'))[-1]
# hlex, alex = F.lex.v(int(head)), F.lex.v(int(appo))
# showcx(res)
# print()
# print(f'lexs: {hlex} x {alex}')
# print(f'dist: {semdist[hlex][alex]}')
# print()
```
### Stretch Tests on Results
```
# elements = sorted(set(L.u(res.element, 'timephrase')[0] for res in results))
# for el in elements:
# stretch = L.d(el, 'word')
# test = spc.analyzestretch(stretch)
# for res in test:
# showcx(res)
```
### Testing on Random Phrases
```
# shuff = [k for k in timephrases
# if len(L.d(k,'word')) > 4]
# random.shuffle(shuff)
# for phrase in shuff[:25]:
# print('analyzing', phrase)
# elements = L.d(phrase,'word')
# try:
# cxs = tpc.analyzestretch(elements)
# if cxs:
# for cx in cxs:
# showcx(cx, refslots=elements)
# else:
# showcx(Construction(), refslots=elements)
# except:
# sys.stderr.write(f'\nFAIL...running with debug...\n')
# pretty(phrase)
# tpc.analyzestretch(elements, debug=True)
# raise Exception('...debug complete...')
```
### Testing on All Timephrases
```
phrase2cxs = collections.defaultdict(list)
nocxs = []
# time it
start = datetime.now()
print(f'{datetime.now()-start} beginning subphrase analysis...')
for i, phrase in enumerate(timephrases):
# analyze all known relas
elements = L.d(phrase,'word')
# analyze with debug exceptions
try:
cxs = spc.analyzestretch(elements)
except:
sys.stderr.write(f'\nFAIL...running with debug...\n')
pretty(phrase)
spc.analyzestretch(elements, debug=True)
raise Exception('...debug complete...')
# save those phrases that have no matching constructions
if not cxs:
nocxs.append(phrase)
else:
phrase2cxs[phrase] = cxs
# report status
if i % 500 == 0 and i:
print(f'\t{datetime.now()-start}\tdone with iter {i}/{len(timephrases)}')
print(f'{datetime.now()-start}\tCOMPLETE')
print('-'*20)
print(f'{len(phrase2cxs)} phrases matched with Constructions...')
print(f'{len(nocxs)} phrases not yet matched with Constructions...')
```
## Closing Gaps
### Identify Gaps
Find timephrases that contain un-covered words besides waw conjunctions.
```
# gapped = []
# tested = []
# for ph, cxs in phrase2cxs.items():
# tested.append(ph)
# ph_slots = set(
# s for s in L.d(ph,'word')
# )
# cx_slots = set(
# s for cx in cxs
# for s in cx.slots
# )
# if ph_slots.difference(cx_slots):
# gapped.append(cxs)
# print(f'{len(gapped)} gapped phrases logged...')
# for gp in gapped[:25]:
# for cx in gp:
# showcx(cx)
```
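The commented-out check above boils down to a set difference between a phrase's word slots and the slots covered by its constructions; a tiny standalone illustration (toy slot numbers, not real BHSA nodes):

```python
phrase_slots = {1, 2, 3, 4, 5}   # all word slots in a timephrase
cx_slots = {1, 2, 4, 5}          # slots covered by its constructions
gaps = phrase_slots - cx_slots   # uncovered words
print(sorted(gaps))  # -> [3]
```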
<hr>
# Phrase Constructions
Developing a CXbuilder to connect all constructions in a complete phrase.
## Ambiguity with Coordinate CXs
Considerable ambiguity is present in several coordinate constructions:
**`A B and C`**<br>
Given that A, B, and C are nominal words, is their relationship `A // B // C` or `A+B // C`? In other words: **what is the relationship of two adjacent nominal words in a list?** Is B a descriptor of A, or is it an independent element?
**`A of B and C`**<br>
Is it `(A of B) // (C)` or `(A of (B // C))`?
Or even:
**`A of B C and D`**<br>
This pattern combines elements from both ambiguous cases.
### Method
To address these ambiguities we will apply a battery of disambiguation attempts. At the core of these attempts is a [Semantic Vector Space](https://en.wikipedia.org/wiki/Vector_space_model), which is able to quantify the semantic distance between two words based on their contextual uses throughout the Hebrew Bible.
The working hypothesis of this method is
> Words in coordination with each other will be more semantically similar (i.e. have the least distance in the vector space) than other candidates in the phrase.
Semantic similarity in a vector space is not the only method used, however. Another aspect of semantic closeness is phrase structure. For instance, the identity of phrase types is taken into consideration above semantic similarity.
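The core of the hypothesis can be illustrated with a toy ranking function (the distances below are invented; in the notebook, `semdist` is a full pairwise lexeme matrix loaded from disk):

```python
import math

# invented pairwise distances between English stand-ins for lexemes
semdist_toy = {'day': {'night': 0.2, 'month': 0.4, 'stone': 0.9}}

def closest_candidate(head, candidates, semdist):
    # the candidate with the least vector-space distance to `head`
    # is the preferred coordination partner
    return min(candidates,
               key=lambda c: semdist.get(head, {}).get(c, math.inf))

print(closest_candidate('day', ['stone', 'month', 'night'], semdist_toy))  # -> night
```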
```
import cx_analysis.graph_nav as nav
class Phrases(CXbuilder):
"""Build complete phrase constructions."""
def __init__(self, phrase2cxs, semdist, tf):
CXbuilder.__init__(self)
# set up tf methods
self.tf = tf
self.F, self.T, self.L = tf.api.F, tf.api.T, tf.api.L
# map cx to phrase node for context retrieval
self.cx2phrase = {
cx:ph
for ph in phrase2cxs
for cx in phrase2cxs[ph]
}
self.phrase2cxs = phrase2cxs
self.semdists = semdist
self.cxs = (
self.appo,
self.coord,
)
self.dripbucket = (
self.cxph,
)
self.kind = 'phrase'
def cxph(self, cx):
"""Dripbucket function that returns cx as is."""
return cx
def get_context(self, cx):
"""Get context for a given cx."""
phrase = self.cx2phrase.get(cx, None)
if phrase:
return self.phrase2cxs[phrase]
else:
return tuple()
def getP(self, cx):
"""Index positions on phrase context"""
positions = self.get_context(cx)
if positions:
return Positions(
cx, positions, default=Construction()
).get
else:
return Dummy
def getWk(self, cx):
"""Index walks on phrase context"""
positions = self.get_context(cx)
if positions:
return Walker(cx, positions)
else:
return Dummy()
def getindex(
self, indexable, index,
default=Construction()
):
"""Safe index on iterables w/out IndexErrors."""
try:
return indexable[index]
except:
return default
def getname(self, cx):
"""Get a cx name"""
return cx.name
def getkind(self, cx):
"""Get a cx kind."""
return cx.kind
def getsuccrole(self, cx, role, index=-1):
"""Get a cx role from a list of successive roles.
e.g.
[big_head, medium_head, small_head][-1] == small_head
"""
cands = list(cx.getsuccroles(role))
try:
return cands[index]
except IndexError:
return Construction()
def string_plus(self, cx, plus=1):
"""Stringifies a CX + N-slots for Levenshtein tests."""
# get all slots in the context for plussing
allslots = sorted(set(
s for scx in self.get_context(cx)
for s in scx.slots
))
# get plus slots
P = (Positions(self.getindex(cx.slots, -1), allslots).get
if cx.slots and allslots else Dummy)
plusses = []
        for i in range(1, plus+1):  # take each of the next N=plus slots
plusses.append(P(i,-1)) # -1 for null slots (== empty string in T.text)
plusses = [p for p in plusses if type(p) == int]
# format the text string for Levenshtein testing
ptxt = T.text(
cx.slots + tuple(plusses),
fmt='text-orig-plain'
) if cx.slots else ''
return ptxt
def rank_candidates(self, cx, cx_patterns=[]):
"""Ranks preceding phrases on likelihood of a relationship
TODO: Give a thorough explanation
"""
# standard features and positional navigation
F, T = self.F, self.T
P = self.getP(cx)
semdist = self.semdists
Wk = self.getWk(cx)
# first we need to collect candidates
# there are two possibilities:
# 1. non-embedded candidates (i.e. have no other relations)
# 2. embedded candidates (i.e. already part of another subphrase)
# we give first preference to top level candidates as this seems
# to produce more accurate results
# 1. get all top-level cxs behind this one that match in name
cx_behinds = Wk.back(
lambda c: c.name == cx.name,
every=True,
stop=lambda c: (
c.name == 'conj' and (c != P(-1))
)
)
# 2. if top level phrases produce no results,
# look for embedded candidates
if not cx_behinds:
topcontext = self.get_context(cx)
# gather all valid embedded candidates
subcontext = []
for topcx in topcontext:
for subcx in topcx.subgraph():
if type(subcx) == int: # skip TF slots
continue
if (
subcx in topcontext or subcx.name != 'conj'
and subcx not in cx
):
subcontext.append(subcx)
# walk the embedded candidates
# and collect those that are valid
Wk2 = Walker(cx, subcontext)
cx_behinds = Wk2.back(
lambda c: c.name != 'conj',
default=[P(-2)],
every=True,
stop=lambda c: (
c.name == 'conj' and (c != P(-1))
)
)
# Now we apply a series of additional filters on the candidates:
# map each candidate to its last slot to make sure
# every one is the last item in its phrase
# (check is made in next series of lines)
cx2last = {
cxb:self.getindex(sorted(cxb.slots), -1, 0)
for cxb in cx_behinds
}
# find coordinate candidate subphrases that stand
# at the end of the phrase
cx_subphrases = []
for cx_back in cx_behinds:
for cxsp in cx_back.subgraph():
if type(cxsp) == int:
continue
elif (
cx2last[cx_back] in cxsp.slots # check last slot
and cxsp.getrole('head')
):
cx_subphrases.append(cxsp)
# get subphrase heads for semantic tests
cx2heads = [
(cxsp, self.getsuccrole(cxsp,'head'))
for cxsp in cx_behinds
]
# get head of this cx
head1 = self.getsuccrole(cx,'head')
head1lex = F.lex.v(head1)
# sort on a set of priorities
# the default sort behavior is used (least to greatest)
# thus when a bigger value should be more important,
# a negative is added to the number
stringp = self.string_plus
# arrange candidates by priority
cxpriority = []
for cxsp, headsp in cx2heads:
name_eq = 0 if cxsp.name == cx.name else 1
semantic_dist = semdist.get(
head1lex,{}
).get(F.lex.v(headsp), np.inf)
size = -len(cxsp.slots)
levenshtein = lev_dist(stringp(cx), stringp(cxsp))
slot_dist = -next(iter(cxsp.slots), 0)
heads = (head1, headsp) # for reporting purposes only
cxpriority.append((
name_eq,
semantic_dist,
size,
levenshtein,
slot_dist,
heads,
cxsp
))
# make the sorting
candidates = sorted(cxpriority, key=lambda k: k[:-1])
# select the first priority candidate
cand = next(iter(candidates), (0,0,Construction()))
# add data for conds report / debugging
stats = collections.defaultdict(str)
for namescore,sdist,leng,ldist,lslot,heads,cxp in candidates:
# name equality
stats['namescore'] += f'\n\t{cxp} namescore: {namescore}'
# semantic distance
stats['semdists'] += (
f'\n\t{round(sdist, 2)}, {F.lex.v(heads[0])} ~ {F.lex.v(heads[1])}, {cxp}'
)
# size of cx
stats['size'] += f'\n\t{cxp} length: {abs(leng)}'
            # Levenshtein distance
stats['ldist'] += f'\n\t{cxp} dist: {ldist}'
# dist of last slot
stats['lslot'] += f'\n\t{cxp} last slot: {abs(lslot)}'
return (candidates, cand, stats)
def coord(self, cx):
"""A coordinate construction.
In order to match a coordinate cx, we need to determine
which item in the previous phrase this cx belongs with.
This is done using a semantic vector space, which can
quantify the approximate semantic distance between the
heads of this cx and a candidate cx.
Criteria utilized in validating a coordinate cx between
an origin cx and a candidate cx are the following:
TODO: fill in
"""
P = self.getP(cx)
cands, cand, stats = self.rank_candidates(cx)
return self.test(
{
'element': cx,
'name': 'coord',
'kind': self.kind,
'roles': {'part2':cx, 'conj': P(-1), 'part1': cand[-1]},
'conds': {
'P(-1).name == conj':
P(-1).name == 'conj',
'bool(cand)':
bool(cand[-1]),
f'name matches {stats["namescore"]}\n':
bool(cands),
f'is shortest sem. distance of {stats["semdists"]}\n':
bool(cands),
f'is longest length of: {stats["size"]}\n':
bool(cands),
f'is shortest Levenshtein distance: {stats["ldist"]}\n':
bool(cands),
f'is closest last slot of: {stats["lslot"]}\n':
bool(cands)
}
},
)
def L_anchor(self, cx):
"""Find L anchor CXs"""
P = self.getP(cx)
prep = nav.get_role(cx, 'prep', default=Construction())
prep = next(iter(prep.slots), 0)
prep_lex = self.F.lex.v(prep)
return self.test(
{
'element': cx,
'name': 'L_anchor',
'kind': self.kind,
'roles': {'anchor': cx, 'head': P(-1)},
'conds': {
'prep_lex == L':
prep_lex == 'L',
'bool(P-1)':
bool(P(-1)),
},
},
)
def appo(self, cx):
"""Find appositional cxs"""
P = self.getP(cx)
cands, cand, stats = self.rank_candidates(cx)
return self.test(
{
'element': cx,
'name': 'appo',
'pattern': 'NP',
'kind': self.kind,
'roles': {'appo':cx, 'head': cand[-1]},
'conds': {
'name(cx) not in not_NPset':
cx.name not in {'prep_ph','conj'},
'P(-1).name != conj':
P(-1).name != 'conj',
'bool(cand)':
bool(cand[-1]),
f'name matches {stats["namescore"]}\n':
bool(cands),
f'is shortest sem. distance of {stats["semdists"]}\n':
bool(cands),
f'is longest length of: {stats["size"]}\n':
bool(cands),
f'is shortest Levenshtein distance: {stats["ldist"]}\n':
bool(cands),
f'is closest last slot of: {stats["lslot"]}\n':
bool(cands),
}
},
{
'element': cx,
'name': 'appo',
'pattern': 'PP',
'kind': self.kind,
'roles': {'appo':cx, 'head': cand[-1]},
'conds': {
'name(cx) == prep':
cx.name == 'prep_ph',
'P(-1).name != conj':
P(-1).name != 'conj',
'bool(cand)':
bool(cand[-1]),
f'name matches {stats["namescore"]}\n':
bool(cands),
f'is shortest sem. distance of {stats["semdists"]}\n':
bool(cands),
f'is longest length of: {stats["size"]}\n':
bool(cands),
f'is shortest Levenshtein distance: {stats["ldist"]}\n':
bool(cands),
f'is closest last slot of: {stats["lslot"]}\n':
bool(cands)
}
}
)
def adjacent(self, cx):
"""Find adjacent CXs"""
P = self.getP(cx)
return self.test(
{
'element': cx,
'name': 'appo',
'kind': self.kind,
'roles': {'head':cx, 'appo':P(1)},
'conds': {
'cx.name != conj':
cx.name != 'conj',
'P(1).name != prep':
P(1).name != 'prep',
'bool(P(1))':
bool(P(1)),
f'name({P(1).name}) not in (conj, prep_ph)':
P(1).name not in {'conj','prep_ph'},
}
}
)
cxp = Phrases(phrase2cxs, semdist, A)
```
## Tests
```
# A.show(A.search('''
# timephrase
# word pdp=subs ls#card|prpe lex#KL/|JWM/ st=a
# <: word lex=JWM/
# ''')[:10])
# the following phrases contain cases that still
# need to be fixed for the coordinate cx; some should
# actually be done in the previous cx builder at subphrase level
to_fix = [
1450039, # coord, add adjacent advb cx with JWM
1450647, # coord, consider prioritizing Levenshtein over size
]
```
### Test Small
```
test = cxp.coord(phrase2cxs[1449813][-1])
showcx(test, conds=True)
F.freq_lex.v(363638)
L.u(363638,'lex')
test = cxp.coord(phrase2cxs[1450668][-1])
showcx(test, conds=True)
L.u(862564,'timephrase')
test = cxp.coord(phrase2cxs[1450558][-1])
showcx(test, conds=True)
```
### Stretch Test
```
testph = phrase2cxs[1446841]
# test = cxp.analyzestretch(
# testph,
# duplicate=True,
# debug=True)
# for res in test:
# showcx(res, conds=True)
```
<hr>
# TOFIX:
* fix apposition - 1447545 (צען מצרים)
# TOTEST:
1450333 - from apposition to proper name
<hr>
Print total number of phrases left to parse:
```
print(
len([cx_tuple for cx_tuple in phrase2cxs.values() if len(cx_tuple) > 1])
)
def filt_gaps(cx):
"""Isolate cxs with gaps"""
timephrase = L.u(next(iter(cx.slots)),'phrase')[0]
if set(L.d(timephrase,'word')) - cx.slots:
return True
else:
return False
def filt(cx):
"""Find specific lexeme"""
timephrase = L.u(next(iter(cx.slots)),'phrase')[0]
phrasewords = L.d(timephrase, 'word')
if (
{'JWM/', 'LJLH/'}.issubset(set(F.lex.v(w) for w in phrasewords))
and len(phrasewords) == 3
):
return True
else:
return False
# elements = [
# cx for ph in list(phrase2cxs.values())
# for cx in ph
# ]
# results = search(
# elements,
# cxp.L_anchor,
# pattern='',
# shuffle=False,
# #select=lambda c: filt(c),
# extraFeatures='lex st',
# show=100
# )
```
## Stretch Tests
Testing across a whole phrase.
```
# test = cxp.analyzestretch(phrase2cxs[1449168], debug=True)
# for res in test:
# showcx(res, conds=False)
```
# Sample testing for DEM & slopes
This is basically a sandbox. By playing with a smaller area, e.g. a single tile at TMS zoom 10, we can get an accurate comparison of approaches.
* The `cut_extent` command will extract from an existing DEM.
* The `slope` command converts the result to MBTiles.
See [gdal_slope_util.py](../src/gdal_slope_util.py)
```
%load_ext autoreload
%autoreload 2
import os
import sys
sys.path.append(os.path.dirname(os.path.abspath(os.curdir)))
from src.bbox import bbmontblancz10
from src.gdal_slope_util import cut_extent, make_ovr, slope_mbt, make_overviews, mbt_merge
ZSTD_OPT='-co COMPRESS=ZSTD -co PREDICTOR=2 -co ZSTD_LEVEL=1 '
TILE_OPT='-co TILED=YES -co blockXsize=1024 -co blockYsize=1024 '
PARAL_OPT='-co NUM_THREADS=ALL_CPUS -multi -wo NUM_THREADS=ALL_CPUS ' # <- || compression, warp and compute
EXTRA_OPT='-co BIGTIFF=YES -overwrite '
alpw_dem_path = os.path.realpath('alps/slopes-Lausanne-Jouques-Sanremo-Zermatt.tif')
#CMAPDIR = '~/code/eddy-geek/TIL/geo/data'
```
# Mont-Blanc
The area covered is 1.5 z10 tiles around Mont-Blanc, from Morillon to Lavachey.
Covers IGN 20m / Aoste 1m / Swisstopo 2m
```
!mkdir -p montblancz10
%cd montblancz10
# w, s, e, n = 6.855466, 45.828796, 7.207031, 45.951147
#other common extents I used:
# see also TIL/geo/src/useful_extents.sh main ⬆ ✱ ◼
# mbz10 = '6.855466 45.828796 7.207031 45.951147'
# clapier='7.38 44.1 7.44 44.15'
# malinvern='7.163085 44.182203 7.207031 44.213709'
# paradis_z11='6.855466 45.460132 7.382813 45.583291'
```
## Test compression parameters
As per the results below:
* Float compression halves size
* ZSTD-L1 provides results similar to DEFLATE-L9 with much better write performance
* Float16 halves size with a very small precision cost (<0.06)
* **'Rounding' to Byte divides size by >5** (because it makes for much better compressibility)
This is why I chose to compress to int, and so, why I need palettes with cutoff points at 0.5° -- so that the rounding does not twist the output.
Note: it's important to do this on a representative sample, which is the case here with both low-precision (=good compressibility) and high-precision DEMs.
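To see why the half-degree cutoffs matter, here is a small standalone check (my own illustration, not part of `gdal_slope_util`): with palette boundaries on half-degrees, classifying the byte-rounded slope agrees with classifying the exact float slope, while whole-degree boundaries can flip the class.

```python
import numpy as np

slopes = np.array([29.4, 29.6, 30.4, 30.6])   # exact slope angles in degrees
rounded = np.round(slopes).astype(np.uint8)   # roughly what storing as Byte keeps

# boundary on a half degree: rounding never changes the class
print(np.array_equal(slopes >= 30.5, rounded >= 30.5))  # -> True
# boundary on a whole degree: 29.6 and 30.4 land in the wrong class
print(np.array_equal(slopes >= 30.0, rounded >= 30.0))  # -> False
```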
```
from time import time
from contextlib import redirect_stderr, redirect_stdout
DEFLATE_OPT = ' -co COMPRESS=DEFLATE -co PREDICTOR=2 -co ZLEVEL=9 '
ZSTDL_OPT=' -co COMPRESS=ZSTD -co PREDICTOR=2 -co ZSTD_LEVEL=%d '
buff = []
with open(os.devnull, 'w') as fnull:
with redirect_stderr(fnull), redirect_stdout(fnull):
for name, opt in {
'f32-': '',
'f16-': '-co NBITS=16 ',
'i16-': '-ot UInt16 ',
'i8-': '-ot Byte ',
'f16-deflate': '-co NBITS=16 ' + DEFLATE_OPT,
'i8-deflate': '-ot Byte ' + DEFLATE_OPT,
'f32-zstd1': ZSTDL_OPT % 1,
'f16-zstd1': '-co NBITS=16 ' + ZSTDL_OPT % 1,
'i16-zstd1': '-ot UInt16 ' + ZSTDL_OPT % 1,
'i8-zstd1': '-ot Byte ' + ZSTDL_OPT % 1,
'f32-zstd3': ZSTDL_OPT % 3,
'f16-zstd3': '-co NBITS=16 ' + ZSTDL_OPT % 3,
'i8-zstd3': '-ot Byte ' + ZSTDL_OPT % 3,
'f32-zstd9': ZSTDL_OPT % 9,
'f16-zstd9': '-co NBITS=16 ' + ZSTDL_OPT % 9,
'i8-zstd9': '-ot Byte ' + ZSTDL_OPT % 9
}.items():
dest = f'slopes-z16-cmp-{name}.tif'
startt = time()
cut_extent(src=alpw_dem_path, precision='', extent=bbmontblancz10,
default_opt=opt + TILE_OPT + PARAL_OPT + EXTRA_OPT, dest=dest)
buff += [name.rjust(12), ': ',
round(os.path.getsize(dest) / 1024**2), ' MB ; ' ,
round(time()-startt, 1), ' seconds\n']
os.remove(dest)
sys.stdout.writelines(map(str, buff))
# test takes 2min
```
## Parallelization test
Here is an archive of the results on my laptop:
```
: 5.8 seconds
-co NUM_THREADS=ALL_CPUS : 4.8 seconds
-multi : 3.9 seconds
-wo NUM_THREADS=ALL_CPUS : 3.8 seconds
-multi -wo NUM_THREADS=ALL_CPUS : 3.6 seconds
-co NUM_THREADS=ALL_CPUS -multi -wo NUM_THREADS=ALL_CPUS : 2.9 seconds
```
```
: 7.5 seconds
-co NUM_THREADS=ALL_CPUS : 7.5 seconds
-multi : 6.7 seconds
-wo NUM_THREADS=ALL_CPUS : 2.9 seconds
-multi -wo NUM_THREADS=ALL_CPUS : 2.6 seconds
-co NUM_THREADS=ALL_CPUS -multi -wo NUM_THREADS=ALL_CPUS : 2.5 seconds
```
```
cut_extent(src=alpw_dem_path, dest='slopes-z16.tif', extent=bbmontblancz10)
pbuff = []
base = ZSTD_OPT + TILE_OPT + EXTRA_OPT
pcompress = '-co NUM_THREADS=ALL_CPUS '
pwarp = '-multi '
pcompute = '-wo NUM_THREADS=ALL_CPUS '
dest = './tmp.tif'
for fun in (
lambda xopts: make_ovr(src='./slopes-z16.tif', dest=dest, z=15, default_opt=base + xopts),
lambda xopts: cut_extent(src=alpw_dem_path, dest=dest, extent=bbmontblancz10, default_opt=base + xopts),
):
with open(os.devnull, 'w') as fnull:
with redirect_stderr(fnull), redirect_stdout(fnull):
for xopts in (
'',
pcompress,
pwarp,
pcompute,
pwarp+pcompute,
pcompress+pwarp+pcompute
):
if os.path.isfile(dest): os.remove(dest)
startt = time()
fun(xopts)
pbuff += [xopts.rjust(60), ': ',
round(os.path.getsize(dest) / 1024**2), ' MB ; ' ,
round(time()-startt, 1), ' seconds\n']
sys.stdout.writelines(map(str, pbuff))
```
## Test overview palette
```
make_ovr(src='slopes-z16.tif', z=15)
make_ovr(src='slopes-z16.tif', z=12)
slope_mbt('eslo4near', z=14)
make_ovr(src='slopes-z16.tif', z=12)
slope_mbt('eslo4near', z=12)
# !pip install pymbtiles
# from pymbtiles.ops import extend
# !cp -f eslo13-z16.mbtiles eslo.mbtiles
# extend('eslo4-z14.mbtiles', 'eslo.mbtiles')
# extend('eslo4near-z12.mbtiles', 'eslo.mbtiles')
# with MBtiles('eslo.mbtiles', 'w+') as m:
# m.meta['...'] = ...
# !rm eslo.mbtiles
# mbt_merge('eslo13near-z16.mbtiles', 'eslo4near-z14.mbtiles', 'eslo4near-z12.mbtiles', dest='eslo.mbtiles')
make_overviews('./slopes-z16.tif') #, reuse=True)
os.system('gpxsee eslo.mbtiles &')
# blend each RGB component 2:1 with white (255) to lighten the palette colours
for row in ((245, 191, 0), (220, 0, 245), (77, 77, 77)):
    for i in row:
        print('%d ' % ((i*2+255)/3), end='')
    print()
```
# 3D Frangi vesselness measure example
```
import numpy as np
import pyqtgraph as pg
from scipy import ndimage as ndi
import matplotlib.pyplot as plt
%matplotlib inline
%gui qt
import sys
sys.path.append('..')
if sys.version_info >= (3,0):
print("Sorry, requires Python 2.x, not Python 3.x")
import core.frangi as fra
from core.utils import Normalize
VIZ_pyqtgraph = True # set True if you want to use pyqtgraph
VIZ_matplotlib = True # set True if you want to use matplotlib
#load data
fibers = np.fromfile('./data/fibers.raw', dtype='uint16')
fibers = fibers.reshape((100,100,30))
lungs = np.fromfile('./data/lungs.raw', dtype='int32')
lungs = lungs.reshape((50,50,50))
liver = np.fromfile('./data/liver.raw', dtype='int32')
liver = liver.reshape((100,100,50))
# view volumes with pyqtgraph
if VIZ_pyqtgraph:
pg.image(fibers.T)
pg.image(lungs.T)
pg.image(liver.T)
# view a slice with matplotlib
if VIZ_matplotlib:
SLICE_INDEX = 20
figure = plt.figure()
figure.set_size_inches(15,10)
figure.add_subplot(1,3,1)
plt.imshow(fibers.T[SLICE_INDEX], cmap='gray')
plt.title('fibers', fontsize=15)
figure.add_subplot(1,3,2)
plt.imshow(lungs.T[SLICE_INDEX], cmap='gray')
plt.title('lungs', fontsize=15)
figure.add_subplot(1,3,3)
plt.imshow(liver.T[SLICE_INDEX], cmap='gray')
plt.title('liver', fontsize=15)
plt.show()
```
# 1. Process fibers
```
print('fibers')
nfibers = Normalize(fibers,0.,255.)
scales=[1.5,2.0]
fib1=fra.ScaledFrangi3D(nfibers,scales,'bright')
if VIZ_pyqtgraph:
pg.image(fibers.T)
pg.image(fib1.T)
if VIZ_matplotlib:
SLICE_INDEX = 19
figure = plt.figure()
figure.set_size_inches(15,10)
figure.add_subplot(1,2,1)
plt.imshow(fibers.T[SLICE_INDEX], cmap='gray')
plt.title('raw fibers', fontsize=15)
figure.add_subplot(1,2,2)
plt.imshow(fib1.T[SLICE_INDEX], cmap='gray')
plt.title('vesselness', fontsize=15)
plt.show()
```
# 2. Process lungs
```
print('lungs')
scales=[1.5,2.0]
lun1=fra.ScaledFrangi3D(lungs,scales,'bright')
if VIZ_pyqtgraph:
pg.image(lun1.T)
pg.image(lungs.T)
if VIZ_matplotlib:
SLICE_INDEX = 15
figure = plt.figure()
figure.set_size_inches(15,10)
figure.add_subplot(1,2,1)
plt.imshow(lungs.T[SLICE_INDEX], cmap='gray')
plt.title('raw lungs', fontsize=15)
figure.add_subplot(1,2,2)
plt.imshow(lun1.T[SLICE_INDEX], cmap='gray')
plt.title('vesselness', fontsize=15)
plt.show()
```
# 3. Process liver
```
nliver = liver.copy()
nliver[nliver<100]=0
nliver[nliver>140]=140
print('liver')
scales=[2,2.5,3]
liv1=fra.ScaledFrangi3D(nliver,scales,'bright')
cf15=liv1.copy()
thr = 0.5
cf15[cf15>=thr]=1
cf15[cf15<=thr]=0
label_objects, nb_labels = ndi.label(cf15)
sizes = np.bincount(label_objects.ravel())
mask_sizes = sizes > 40
mask_sizes[0] = 0
cleaned = mask_sizes[label_objects]
if VIZ_pyqtgraph:
pg.image(liv1.T)
pg.image(cleaned.T)
pg.image(liver.T)
if VIZ_matplotlib:
SLICE_INDEX = 14
figure = plt.figure()
figure.set_size_inches(15,10)
figure.add_subplot(1,3,1)
plt.imshow(liver.T[SLICE_INDEX], cmap='gray')
plt.title('raw', fontsize=15)
figure.add_subplot(1,3,2)
plt.imshow(liv1.T[SLICE_INDEX], cmap='gray')
plt.title('vesselness', fontsize=15)
figure.add_subplot(1,3,3)
plt.imshow(cleaned.T[SLICE_INDEX], cmap='gray')
plt.title('cleaned', fontsize=15)
plt.show()
```
# Output part of infinite matter dataframe as LaTeX table
This notebook generates Tables II and III in the Appendix of _Quantifying uncertainties and correlations in the nuclear-matter equation of state_ by [BUQEYE](https://buqeye.github.io/) members Christian Drischler, Jordan Melendez, Dick Furnstahl, and Daniel Phillips (see [[arXiv:2004.07805]](https://arxiv.org/abs/2004.07805)).
Data is read in from nuclear matter calculations (both SNM and PNM) and outputs total energies at each EFT order as a function of both density and Fermi momentum in the form of LaTeX tables. It uses pandas to manipulate the data and dump it to LaTeX. The details are easily modified.
```
%load_ext autoreload
%autoreload 2
import pandas as pd
import numpy as np
```
## Data import and processing
The calculations for infinite matter are stored in a standardized csv file in the data directory. Both symmetric nuclear matter (SNM) and pure neutron matter (PNM) are included. The fields in the file are
| field | units | description |
| :---: | :---: | :---- |
| kf | $$\text{fm}^{-1}$$ | Fermi momentum. |
| n | $$\text{fm}^{-3}$$ | Density. |
| Kin | MeV | Kinetic energy. |
| MBPT_HF | MeV | Hartree-Fock energy (leading order in MBPT). |
| MBPT_2 | MeV | 2nd-order contribution in MBPT (not total). |
| MBPT_3 | MeV | 3rd-order contribution in MBPT (not total). |
| MBPT_4 | MeV | 4th-order contribution in MBPT (not total). |
| total | MeV | Total energy (sum of all contributions). |
| Lambda | MeV | Regulator parameter. |
| OrderEFT | | Order of the EFT: LO, NLO, N2LO, N3LO. |
| Body | | Two-body only (NN) or two-plus-three-body (NN+3N). |
| x | | Proton fraction: 0.5 is SNM; 0.0 is PNM. |
| fit | | Index for the fit. |
The following is commented code from another notebook that identifies the indexes for fits from arXiv:1710.08220.
```
# EFT orders LO, NLO, N2LO, N3LO
#orders = np.array([0, 2, 3, 4]) # powers of Q
# body = 'NN-only'
# body = 'NN+3N'
# Lambda = 450
# Specify by index what fits will be used for the 3NF [N2LO, N3LO]
# The indices follow the fits in Fig. 3 of arXiv:1710.08220
# fits = {450: [1, 7], 500: [4, 10]}
# fits_B = {450: [2, 8], 500: [5, 11]}
# fits_C = {450: [3, 9], 500: [6, 12]}
# Replace the following with the path to the desired data file
data_file = '../data/all_matter_data.csv'
# Read infinite matter data from specified csv file into a dataframe df
df = pd.read_csv(data_file)
# Make copies of the dataframe for experiments
df_kin = df.copy()
df_all = df.copy()
# Convert differences to total energy prediction at each MBPT order
#mbpt_orders = ['Kin', 'MBPT_HF', 'MBPT_2', 'MBPT_3', 'MBPT_4']
#df[mbpt_orders] = df[mbpt_orders].apply(np.cumsum, axis=1)
```
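The commented-out cumulative-sum conversion works like this; a quick sketch on a toy frame (the contribution values below are made up for illustration, not taken from the real data file):

```python
import numpy as np
import pandas as pd

# Toy per-order MBPT contributions in MeV (hypothetical values)
toy = pd.DataFrame({
    'Kin':     [20.0, 25.0],
    'MBPT_HF': [-30.0, -35.0],
    'MBPT_2':  [-5.0, -6.0],
    'MBPT_3':  [1.0, 1.5],
})

mbpt_orders = ['Kin', 'MBPT_HF', 'MBPT_2', 'MBPT_3']
# Row-wise cumulative sum turns each order's contribution into a running total energy
toy[mbpt_orders] = toy[mbpt_orders].apply(np.cumsum, axis=1)
print(toy)
```

After the conversion, the `MBPT_3` column holds the total through third order (e.g. 20 - 30 - 5 + 1 = -14 for the first row).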
### Replacements in column names or values
These are minor fixes to the files that we fix by hand.
```
df = df.replace({'OrderEFT' : 'NLO'}, 'N1LO') # makes columns align correctly
# We notice some truncation problems we fix by hand.
df = df.replace({'kf' : 0.904594}, 0.904590) # fix a truncation difference
df = df.replace({'kf' : 0.961274}, 0.961270) # fix a truncation difference
# For our basic tables we only need the 'total' column, so delete the other energies.
pop_list = ['Kin', 'MBPT_HF', 'MBPT_2', 'MBPT_3', 'MBPT_4']
for col in pop_list:
df.pop(col)
# check it
df
def dump_to_file(df, output_file, kf_column='snm'):
"""
Output adjusted dataframe to a file in LaTeX format for tables.
Modify the format here or generalize for different looks.
"""
with open(output_file, 'w') as of:
of.write('% description\n')
of.write(df.to_latex(
index=False,
formatters={'LO':'${:,.2f}$'.format,
'N1LO':'${:,.2f}$'.format,
'N2LO':'${:,.2f}$'.format,
'N3LO':'${:,.2f}$'.format,
kf_column:'${:,.2f}$'.format},  # use the passed-in column name so PNM tables are formatted too
columns=['n', kf_column, 'LO', 'N1LO', 'N2LO', 'N3LO'],
escape=False
))
Lambdas = (450, 500)
for Lambda in Lambdas:
s_Lambda = (df['Lambda']==Lambda)
# s_x = (df['x']==0.5) | (df['x']==0.0)
s_x_SNM = df['x']==0.5 # select SNM (proton fraction 1/2)
s_x_PNM = df['x']==0.0 # select PNM (proton fraction 0)
s_Body = df['Body']=='NN+3N' # 'NN+3N' or 'NN-only'
s_n = True # df['n']==0.5 # could select a subset of densities
# For the 'fit', the LO and NLO values are NaN, so use pd.isna
if Lambda == 450:
s_fit = df['fit'].isin([1.0, 7.0]) | pd.isna(df['fit'])
elif Lambda == 500:
s_fit = df['fit'].isin([4.0, 10.0]) | pd.isna(df['fit'])
# Make a table just for SNM and a specified Lambda
df_SNM = df.loc[s_Lambda & s_x_SNM & s_Body & s_n & s_fit ]
df_SNM.pop('x') # we don't want 'x' anymore
df_SNM = df_SNM.rename(columns={"kf": "kf_snm"})
# Make a table just for PNM and a specified Lambda
df_PNM = df.loc[s_Lambda & s_x_PNM & s_Body & s_n & s_fit ]
df_PNM.pop('x') # we don't want 'x' anymore
df_PNM = df_PNM.rename(columns={"kf": "kf_pnm"})
# Check the tables
df_SNM
# Pivoting here means to take the row entries for OrderEFT and make them columns
df_SNM_pivoted = df_SNM.pivot_table(values='total', columns='OrderEFT',
index=('n','kf_snm')).reset_index()
df_PNM_pivoted = df_PNM.pivot_table(values='total', columns='OrderEFT',
index=('n','kf_pnm')).reset_index()
SNM_output_file = f'SNM_table_Lambda{Lambda}.tex'
dump_to_file(df_SNM_pivoted, SNM_output_file, kf_column='kf_snm')
PNM_output_file = f'PNM_table_Lambda{Lambda}.tex'
dump_to_file(df_PNM_pivoted, PNM_output_file, kf_column='kf_pnm')
```
## Basic relationship plots
Last time, we played around with plotting the distributions of variables and comparing distributions to one another. Oftentimes, however, two variables are intimately related, such that knowing the value of one variable allows you to predict, to some extent, the value of the other. Say, for example, two variables measured in the same sample or the same human individual. In cases where the values of two variables are interrelated, simply plotting the histograms of each variable would not show the relationship. To show it, we instead plot the values of one variable against the values of the other. This plot (a.k.a. a scatter plot) is the best way to begin to appreciate the nature and strength of the relationship. So let's play around with plotting and evaluating some relationships.
##### As always, we'll start by importing the libraries we'll use.
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
```
##### And read the first of our data files
```
mdff = pd.read_csv("datasets/008TutorialDataFile1.csv")
```
##### Now let's take a look to make sure we read something that looks okay
Make sure you have read the instructions. As in the previous tutorials, we will assume that you will have imported the dataset under `./datasets`.
```
display(mdff)
```
#### We can now plot one variable (`X`) against the other (`Y`) so as to visually appreciate their relationship.
We can do this simply by using the built-in plotting capability of the data frame created in `Cell [3]` when we loaded the dataset.
```
mdff.plot(x='X', y='Y', kind='scatter', color='r')
```
OK, it looks like the two columns of the dataset we loaded, the variables `x` and `y`, are related; the measurements in each row of the dataset appear to share some sort of relationship.
The plot shows that relationship. It looks like a straight line in which `y` is related to `x` by `y = a + bx + err`, where `a` is the y-intercept, `b` is the slope, and *`err`* represents 'error' or 'noise', that is, variability in `y` that is unrelated to `x`.
So, in a nutshell, it looks like if we know the value of `x`, we can predict the value of `y` except for some amount of random variability represented by `err`. And all we need to know to do the predicting is two numbers: a slope and a y-intercept.
Now lets do a prettier plot using `seaborn`.
```
sns.set_theme() # set theme to seaborn's default
```
Just like `displot()` is the seaborn quick and easy way for plotting distributions of variables, `relplot()` is the Q&E way for plotting relationships among variables.
```
sns.relplot(data=mdff, x='X', y='Y')
```
As we noted above, the data seem to fall around a roughly straight line. We can easily fit and plot that line using the `seaborn.lmplot()` function, where the `lm` in the call stands for "linear model" (of which a straight line is the simplest and default case).
```
sns.lmplot(data=mdff, x='X', y='Y')
```
So what happened here? In this plot, a straight line is drawn along with the data. The `slope` and `y-intercept` of the line have been adjusted by the `lm` modelling method of `seaborn` so that the line does the best job of simultaneously coming as close as possible to all the data points. More specifically, the line shown is the one that minimizes the *sum of the squared differences between the line and the y data values*.
This *sum of squared differences* between the line and the data points is also called *error*, because it represents the remaining difference between individual points and the linear model. It is what is not accounted for by the model (the line) and as such is considered the error of the linear model. This error is generally referred to as the *sum squared error* or just the *sse*.
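The sum squared error described here is easy to compute by hand. A minimal sketch on synthetic data (the tutorial's CSV is not bundled here, so the numbers below are made up), showing that the least-squares line beats any other candidate line:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3.0 * x + 5.0 + rng.normal(0, 1.0, size=x.size)  # synthetic linear data plus noise

coeffs = np.polyfit(x, y, deg=1)       # least-squares straight-line fit
fitted = np.polyval(coeffs, x)
sse = np.sum((y - fitted) ** 2)        # the quantity the fit minimizes

# Any other candidate line yields a larger SSE than the least-squares line
worse = np.sum((y - (2.5 * x + 6.0)) ** 2)
print(sse < worse)  # prints True
```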
The plot also shows the error bound around the fit, drawn as semi-transparent shading around the line. The shading shows the uncertainty of the model. More specifically, if we assume that the two samples (`x` and `y`) are representative of some underlying populations (or, more generally, of the underlying phenomena), then were we to obtain other samples from those same populations, 95% of such experiments would produce data whose best-fit straight line falls somewhere within these bounds.
There are two main things we can take away from this plot. First, there is a relationship between the variables, and the data nail down that relationship very well (as indicated by the small error bounds).
Second, there is noise – random variability – that limits our ability to predict *any one particular* y value from a given x value.
We can estimate the `slope` (or `b` in the equation above) and `y` intercept (or `a` in the equation above) of the straight line relationship with `numpy.polyfit()` as follows:
```
myfit = np.polyfit(x=mdff['X'], y=mdff['Y'], deg=1)
print(myfit)
```
The `deg=1` argument specifies that we want to fit the data with a *first order polynomial*, i.e., a straight line. The returned coefficients tell us that the `slope` is around 4.5 and the `y intercept` is around 240. But, of course, these numbers only allow us to predict `y` *on average*. There is also the `error`, which puts a fundamental limit on our ability to predict any particular `y` value.
We can explore the error a little more by looking at the *residuals*, which is just the difference (literally) between the line and the `y` values.
Seaborn provides an easy way to peek at the residuals using `seaborn.residplot()`.
```
sns.residplot(data=mdff, x='X', y='Y')
```
We can think of this as a picture of the noise alone; we have literally subtracted out the linear relationship with x! That is, from each value of `y` we have subtracted the value of the line at the corresponding value of `x`. If the model is good, meaning the line captures the data well and is unbiased, then the subtraction should leave about half of the points above `0` and half below `0`, because the line is supposed to pass right through the middle of the data. As we can see from the plot, the distribution of the residuals is symmetric around `0`, and it appears that about half of the data points are above `0` and half below.
Next, we can look at the range, or spread, of the residuals. If the model is a good one, the residuals should be small, indicating that the line is generally close to each data point. If the line were far from the data points, it would not be very representative of the dataset, and the residuals would spread across a wide range of values. A small or large spread of the residuals is thus indicative of the quality of the model's fit.
In our case, the plot shows that the range of the noise looks to be around 200 total in the `y` direction. If the variability is Gaussian, this would correspond to a standard deviation of around 200/6 or 33 (do you see why we divided by 6?). So we can say that we can predict y *on average* with a precision corresponding to a sigma of about +/- 33.
Let's explore this a little further by looking at the actual distribution of the residuals. In other words, lets look at the distribution of the noise component of `y`. We can do this by first getting the actual values of our best fit line at each value of `x` using `np.polyval()` – we just give it our fit from `np.polyfit()` and our x values.
```
fitvals = np.polyval(myfit, mdff['X'])
```
Now that we have those, we can compute the residual values by subtracting the best fit line from the y values.
```
myres = mdff['Y'] - fitvals
```
And then plotting those!
```
sns.displot(myres, kind='kde')
```
Sure enough, looks like a Gaussian distribution with a standard deviation of around 30 or so. Or, more precisely:
```
np.std(myres)
```
Okay, so, what have we done? We have looked at the data and then created a simple model in which `y` is a linear function of `x` plus random variability. If we let *N*(mu, sigma) stand for a normal distribution with a mean of mu and a standard deviation of sigma, then we can actually write down our model as
*y = 4.5 * x + 242 + N(0, 36)*
Even though this is a simple line fit, the basic procedure we just followed is a general procedure that scientists and data scientists always use (or should use) when presented with a new dataset to explore and model. The procedure is the same no matter how complicated a situation or dataset we are dealing with:
1. look at the data
2. make a guess at the relationship (perhaps using prior knowledge in addition to the data, start with something simple, say a linear relationship)
3. fit a model to the data
4. evaluate how well the model fits the data (look at the error and residuals)
Sometimes these steps are informal and internal (Oh, yeah, that's linear), and sometimes we go way down into the weeds in the fitting and evaluation, but these are the basic steps!
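One way to internalize steps 1 through 4 is to run them on data we simulate ourselves, using roughly the coefficients estimated above (slope 4.5, intercept ~242, sigma ~33): if the procedure works, the fit should recover approximately the numbers we put in.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 100, 200)
y = 4.5 * x + 242 + rng.normal(0, 33, size=x.size)  # simulate y = a + b*x + noise

b, a = np.polyfit(x, y, deg=1)          # polyfit returns the highest-degree coefficient first
resid = y - np.polyval((b, a), x)
print(round(b, 2), round(a, 1), round(np.std(resid), 1))  # roughly 4.5, 242, 33
```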
##### Let's look at a second set of data!
```
mdff = pd.read_csv("datasets/008TutorialDataFile2.csv")
```
Take a peek:
```
display(mdff)
```
Plot the relationship!
```
sns.relplot(data=mdff, x='X', y='Y')
```
So these data look both the same as and different from the last data set. They look the same in that there appears to be a linear relationship between x and y. Let's look at that.
```
sns.lmplot(data=mdff, x='X', y='Y')
```
So it looks like we have a nice linear relationship as before, but the slope is perhaps not pinned down as well (note the bow tie shape of the error bounds). Cool.
But these data also look different if we look at the x values. In the last data set, the x values were evenly spaced, as though they came from a laboratory experiment in which x was intentionally adjusted in a precise way. In this data set, it looks as though x, like y, was randomly sampled.
So let's plot these data in some ways that we can look at both x and y by making a scatterplot and then adding a "rug" plot along the axes.
```
sns.scatterplot(data=mdff, x='X', y='Y')
sns.rugplot(data=mdff, x='X', y='Y')
```
The rugs are basically interior tic marks showing the positions of each data point close to the corresponding axis. Here, we can see that both variables cluster near the centers and get more sparse towards the edges of the plot.
Because both x and y seem to be random variables, the scatter plot above shows the *joint distribution* of x and y. We can take a more detailed look by plotting both the joint distribution, and the individual or *marginal* distributions of the two variables in the *margins* of the plot.
```
sns.jointplot(x="X", y="Y", data=mdff)
```
Oh! That's pretty! And we can see at a glance that each variable is distributed roughly as Gaussians as well as seeing the y vs. x relationship. We can also make a version with the best fit line by specifying the `kind` argument to `reg` for regression.
```
sns.jointplot(x="X", y="Y", data=mdff, kind="reg")
```
As a bonus, this also seems to add KDE plots to the marginal distributions!
So, looking at these data, we'd conclude that there is a positive linear relationship between x and y, though both variables are noisy. If we wished, we could of course do a deeper dive into a linear model and how well it fits!
##### Let's look at our third and final dataset.
```
mdff = pd.read_csv("datasets/008TutorialDataFile3.csv")
display(mdff)
sns.relplot(data=mdff, x='X', y='Y')
```
So, again, we have a strong relationship. Let's do our plot with a simple model:
```
sns.lmplot(data=mdff, x='X', y='Y')
```
The simple line fit looks okay, and the error bounds on the line are small. Great, right?
If we look a little more closely at the actual data, though, without getting seduced by the line, we see that the relationship here is perhaps a little more complicated. In addition to y going up with x, it looks like there's a curve in the data such that the higher x is, the faster y increases.
Let's look at the residuals from our line fit.
```
sns.residplot(data=mdff, x='X', y='Y')
```
Here we can clearly see that there is a *pattern to the residuals*. They do not look the way we expected: they are neither centered around zero nor randomly distributed around zero. There is a shape, a banana shape we would say. This is diagnostic. In general, true error, true random variability, tends to be normally distributed (thank you, Central Limit Theorem). Thus, if our model were really capturing the data, the residuals should be flat and normally distributed around zero (think about it). So this "smile" pattern appears because our model is systematically overestimating the data in the middle and underestimating it at the two extremes.
It looks like the data are bending but our model isn't.
We'll talk about different kinds of models as we go on, but a very simple way to capture a bend in data is to expand our *first degree polynomial* model – a straight line – to a *second order polynomial* in which
*y = a + bx + cx^2 + N(0,1)*
The squared term turns our carrot model into a banana model. Let's look!
```
sns.lmplot(data=mdff, x='X', y='Y', order=2)
```
Note the `order=2` argument in `lmplot()`
You might be thinking, "Wait, didn't `lm` stand for *linear model*? But now we're squaring x – doesn't that make it non-linear?" In fact, all polynomials are linear models, because the model is linear in its *coefficients* (a, b, c) even though it is curved in x – not all "lines" are straight!
Let's look at the residuals again!
```
g = sns.residplot(data=mdff, x='X', y='Y', order=2)
```
Okay, that looks good now! But let's go ahead and take a slightly deeper dive like we did above, but without the color commentary.
```
myfit = np.polyfit(x=mdff['X'], y=mdff['Y'], deg=2)
print(myfit)
fitvals = np.polyval(myfit, mdff['X'])
myres = mdff['Y'] - fitvals
sns.displot(myres)
np.std(myres)
```
So now we can say that the residuals look like truly random noise, and that the data are captured by
*y = 7.3 + 0.02x + 0.7x^2 + N(0,98)*
So, here, we played all the same games that we talked about above. The difference is that we tried a candidate model, a straight line, decided it wasn't quite right, and then settled on a curvy model instead.
Now, as you might have realized already, the models we played with in this tutorial were purely *descriptive*. We don't know what `x` and `y` are, and therefore can't say anything about *why* `y` should be related to `x`. Ultimately, we want models that represent the processes generating the data, not just ones that reasonably describe the data.
Ultimately, though, the basic process of fitting and evaluating is the same, so this tutorial gives us a good start!
# Table of Contents
<p><div class="lev1"><a href="#Correlation-and-Causation"><span class="toc-item-num">1 </span>Correlation and Causation</a></div><div class="lev1"><a href="#Mortality"><span class="toc-item-num">2 </span>Mortality</a></div><div class="lev1"><a href="#Deciding"><span class="toc-item-num">3 </span>Deciding</a></div>
# Correlation and Causation
- A while back I told you about Simpson's paradox, and it was surprising how easy it was to draw a false conclusion from data.
- Today I will give you a deeper insight into one of the most common mistakes made in interpreting statistical data: confusing correlation with causation. I'll show you an example where data are correlated, and why it's tempting to confuse correlation with causation. Both of those are words that start with a C, and very frequently I read newspaper articles that deeply confuse the relationship of correlation and causation – so let's dive in.
<img src="images/Screen Shot 2016-05-08 at 9.00.41 AM.png"/>
*Screenshot taken from [Udacity](https://classroom.udacity.com/courses/st101/lessons/48698742/concepts/487273800923)*
<!--TEASER_END-->
# Mortality
- Suppose you are sick, and you wake up with a strong pain in the middle of the night. You are so sick that you fear you might die, but you're not sick enough not to apply the lessons of my Statistics 101 class to make a rational decision about whether to go to the hospital. And in doing so, you consult the data.
- You find that in your town, over the last year, 40 people were hospitalized, of whom 4 passed away. The vast majority of the population of your town, meanwhile, never went to the hospital, and of those, 20 passed away at home. So compute for me the percentage of the people who died in the hospital and the percentage of the people who died at home.
<img src="images/Screen Shot 2016-05-08 at 9.03.44 AM.png"/>
*Screenshot taken from [Udacity](https://classroom.udacity.com/courses/st101/lessons/48698742/concepts/486928270923)*
<!--TEASER_END-->
# Deciding
- Now, I offer this as a fictitious example – these are relatively large numbers. But what's important to notice is that the chances of dying in a hospital are 40 times as large as the chances of dying at home.
- That means whether you die or not is correlated with whether or not you are in a hospital. The chances of dying in a hospital are indeed 40 times larger than at home.
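The transcript never states how many people stayed home, so, as a worked example, assume 8,000 did (a hypothetical figure chosen to be consistent with the 40x ratio just quoted):

```python
hospitalized, died_in_hospital = 40, 4
stayed_home, died_at_home = 8000, 20   # 8,000 is assumed, not stated in the transcript

p_hospital = died_in_hospital / hospitalized   # 0.10   -> 10% died in hospital
p_home = died_at_home / stayed_home            # 0.0025 -> 0.25% died at home
print(p_hospital / p_home)                     # ratio of roughly 40
```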
- So let me ask the critical question: shall you now stay at home? Given that you are a really smart statistics student, can you resist the temptation to go to the hospital, because indeed it might increase your chances of passing away?
Lambda School Data Science
*Unit 2, Sprint 1, Module 4*
---
# Logistic Regression
- do train/validate/test split
- begin with baselines for classification
- express and explain the intuition and interpretation of Logistic Regression
- use sklearn.linear_model.LogisticRegression to fit and interpret Logistic Regression models
Logistic regression is the baseline for classification models, as well as a handy way to predict probabilities (since those too live in the unit interval). While relatively simple, it is also the foundation for more sophisticated classification techniques such as neural networks (many of which can effectively be thought of as networks of logistic models).
### Setup
Run the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.
Libraries:
- category_encoders
- numpy
- pandas
- scikit-learn
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
```
# Do train/validate/test split
## Overview
### Predict Titanic survival 🚢
Kaggle is a platform for machine learning competitions. [Kaggle has used the Titanic dataset](https://www.kaggle.com/c/titanic/data) for their most popular "getting started" competition.
Kaggle splits the data into train and test sets for participants. Let's load both:
```
import pandas as pd
train = pd.read_csv(DATA_PATH+'titanic/train.csv')
test = pd.read_csv(DATA_PATH+'titanic/test.csv')
```
Notice that the train set has one more column than the test set:
```
train.shape, test.shape
```
Which column is in train but not test? The target!
```
set(train.columns) - set(test.columns)
```
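Before fitting any model, it helps to establish the majority-class baseline mentioned in the objectives above. A sketch using a small stand-in Series (with the real data, the target would be `train['Survived']` from the CSV loaded above):

```python
import pandas as pd

# Hypothetical stand-in for train['Survived']
target = pd.Series([0, 1, 0, 0, 1, 0, 0, 1, 0, 0])

majority = target.mode()[0]                  # most frequent class
baseline_acc = (target == majority).mean()   # accuracy of always guessing it
print(majority, baseline_acc)                # 0 0.7
```

Any classifier worth keeping should beat this number on the validation set.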
### Why doesn't Kaggle give you the target for the test set?
#### Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)
> One great thing about Kaggle competitions is that they force you to think about validation sets more rigorously (in order to do well). For those who are new to Kaggle, it is a platform that hosts machine learning competitions. Kaggle typically breaks the data into two sets you can download:
>
> 1. a **training set**, which includes the _independent variables,_ as well as the _dependent variable_ (what you are trying to predict).
>
> 2. a **test set**, which just has the _independent variables._ You will make predictions for the test set, which you can submit to Kaggle and get back a score of how well you did.
>
> This is the basic idea needed to get started with machine learning, but to do well, there is a bit more complexity to understand. **You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle.**
>
> The most important reason for this is that Kaggle has split the test data into two sets: for the public and private leaderboards. The score you see on the public leaderboard is just for a subset of your predictions (and you don’t know which subset!). How your predictions fare on the private leaderboard won’t be revealed until the end of the competition. The reason this is important is that you could end up overfitting to the public leaderboard and you wouldn’t realize it until the very end when you did poorly on the private leaderboard. Using a good validation set can prevent this. You can check if your validation set is any good by seeing if your model has similar scores on it to compared with on the Kaggle test set. ...
>
> Understanding these distinctions is not just useful for Kaggle. In any predictive machine learning project, you want your model to be able to perform well on new data.
### 2-way train/test split is not enough
#### Hastie, Tibshirani, and Friedman, [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/), Chapter 7: Model Assessment and Selection
> If we are in a data-rich situation, the best approach is to randomly divide the dataset into three parts: a training set, a validation set, and a test set. The training set is used to fit the models; the validation set is used to estimate prediction error for model selection; the test set is used for assessment of the generalization error of the final chosen model. Ideally, the test set should be kept in a "vault," and be brought out only at the end of the data analysis. Suppose instead that we use the test-set repeatedly, choosing the model with the smallest test-set error. Then the test set error of the final chosen model will underestimate the true test error, sometimes substantially.
#### Andreas Mueller and Sarah Guido, [Introduction to Machine Learning with Python](https://books.google.com/books?id=1-4lDQAAQBAJ&pg=PA270)
> The distinction between the training set, validation set, and test set is fundamentally important to applying machine learning methods in practice. Any choices made based on the test set accuracy "leak" information from the test set into the model. Therefore, it is important to keep a separate test set, which is only used for the final evaluation. It is good practice to do all exploratory analysis and model selection using the combination of a training and a validation set, and reserve the test set for a final evaluation - this is even true for exploratory visualization. Strictly speaking, evaluating more than one model on the test set and choosing the better of the two will result in an overly optimistic estimate of how accurate the model is.
#### Hadley Wickham, [R for Data Science](https://r4ds.had.co.nz/model-intro.html#hypothesis-generation-vs.hypothesis-confirmation)
> There is a pair of ideas that you must understand in order to do inference correctly:
>
> 1. Each observation can either be used for exploration or confirmation, not both.
>
> 2. You can use an observation as many times as you like for exploration, but you can only use it once for confirmation. As soon as you use an observation twice, you’ve switched from confirmation to exploration.
>
> This is necessary because to confirm a hypothesis you must use data independent of the data that you used to generate the hypothesis. Otherwise you will be over optimistic. There is absolutely nothing wrong with exploration, but you should never sell an exploratory analysis as a confirmatory analysis because it is fundamentally misleading.
>
> If you are serious about doing a confirmatory analysis, one approach is to split your data into three pieces before you begin the analysis.
#### Sebastian Raschka, [Model Evaluation](https://sebastianraschka.com/blog/2018/model-evaluation-selection-part4.html)
> Since “a picture is worth a thousand words,” I want to conclude with a figure (shown below) that summarizes my personal recommendations ...
<img src="https://sebastianraschka.com/images/blog/2018/model-evaluation-selection-part4/model-eval-conclusions.jpg" width="600">
Usually, we want to do **"Model selection (hyperparameter optimization) _and_ performance estimation."** (The green box in the diagram.)
Therefore, we usually do **"3-way holdout method (train/validation/test split)"** or **"cross-validation with independent test set."**
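The "cross-validation with independent test set" workflow can be sketched with scikit-learn. In this minimal sketch the built-in iris data stands in for your own dataset:

```python
# Cross-validation on the train+validation portion; the test set is held out
# and touched only once at the end. Iris is a stand-in dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_iris(return_X_y=True)
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, random_state=0)

# Model selection happens here, via 5-fold CV on the train+validation data
cv_scores = cross_val_score(LogisticRegression(max_iter=1000), X_trainval, y_trainval, cv=5)
print('CV accuracy:', cv_scores.mean())

# Final, one-time performance estimate on the untouched test set
final_model = LogisticRegression(max_iter=1000).fit(X_trainval, y_trainval)
print('Test accuracy:', final_model.score(X_test, y_test))
```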
### What's the difference between Training, Validation, and Testing sets?
#### Brandon Rohrer, [Training, Validation, and Testing Data Sets](https://end-to-end-machine-learning.teachable.com/blog/146320/training-validation-testing-data-sets)
> The validation set is for adjusting a model's hyperparameters. The testing data set is the ultimate judge of model performance.
>
> Testing data is what you hold out until very last. You only run your model on it once. You don’t make any changes or adjustments to your model after that. ...
## Follow Along
> You will want to create your own training and validation sets (by splitting the Kaggle “training” data).
Do this, using the [sklearn.model_selection.train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function:
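A minimal sketch of the 3-way split using two calls to `train_test_split`. Here `df` is a toy stand-in for the Kaggle "training" DataFrame, and the 80/20 and 75/25 ratios are illustrative, not prescribed:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({'feature': range(100), 'target': [0, 1] * 50})

# First carve off a held-out test set ...
train_val, test = train_test_split(df, test_size=0.20, random_state=42)
# ... then split the remainder into train and validation
train, val = train_test_split(train_val, test_size=0.25, random_state=42)

print(len(train), len(val), len(test))  # 60 20 20
```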
## Challenge
For your assignment, you'll do a 3-way train/validate/test split.
Then next sprint, you'll begin to participate in a private Kaggle challenge, just for your cohort!
You will be provided with data split into 2 sets: training and test. You will create your own training and validation sets, by splitting the Kaggle "training" data, so you'll end up with 3 sets total.
# Begin with baselines for classification
## Overview
We'll begin with the **majority class baseline.**
[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488)
> A baseline for classification can be the most common class in the training dataset.
[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data
> For classification tasks, one good baseline is the _majority classifier,_ a naive classifier that always chooses the majority class of the training dataset (see Note: Base rate in Holdout Data and Fitting Graphs). This may seem like advice so obvious it can be passed over quickly, but it is worth spending an extra moment here. There are many cases where smart, analytical people have been tripped up in skipping over this basic comparison. For example, an analyst may see a classification accuracy of 94% from her classifier and conclude that it is doing fairly well—when in fact only 6% of the instances are positive. So, the simple majority prediction classifier also would have an accuracy of 94%.
## Follow Along
Determine majority class
What if we guessed the majority class for every prediction?
#### Use a classification metric: accuracy
[Classification metrics are different from regression metrics!](https://scikit-learn.org/stable/modules/model_evaluation.html)
- Don't use _regression_ metrics to evaluate _classification_ tasks.
- Don't use _classification_ metrics to evaluate _regression_ tasks.
[Accuracy](https://scikit-learn.org/stable/modules/model_evaluation.html#accuracy-score) is a common metric for classification. Accuracy is the ["proportion of correct classifications"](https://en.wikipedia.org/wiki/Confusion_matrix): the number of correct predictions divided by the total number of predictions.
What is the baseline accuracy if we guessed the majority class for every prediction?
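As a sketch (with a toy target vector standing in for something like `train['Survived']`), the baseline accuracy is just the frequency of the most common class:

```python
import pandas as pd

y_train = pd.Series([0, 0, 0, 1, 1])  # toy target: majority class is 0

majority_class = y_train.mode()[0]
y_pred = [majority_class] * len(y_train)  # guess the majority class every time

baseline_accuracy = (y_train == y_pred).mean()
print(majority_class, baseline_accuracy)  # 0 0.6
```

Equivalently, `y_train.value_counts(normalize=True)` shows the same number as the proportion of the top class.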
## Challenge
In your assignment, your Sprint Challenge, and your upcoming Kaggle challenge, you'll begin with the majority class baseline. How quickly can you beat this baseline?
# Express and explain the intuition and interpretation of Logistic Regression
## Overview
To help us get an intuition for *Logistic* Regression, let's start by trying *Linear* Regression instead, and see what happens...
## Follow Along
### Linear Regression?
```
train.describe()
# 1. Import estimator class
from sklearn.linear_model import LinearRegression
# 2. Instantiate this class
linear_reg = LinearRegression()
# 3. Arrange X feature matrices (already did y target vectors)
features = ['Pclass', 'Age', 'Fare']
X_train = train[features]
X_val = val[features]
# Impute missing values
from sklearn.impute import SimpleImputer
imputer = SimpleImputer()
X_train_imputed = imputer.fit_transform(X_train)
X_val_imputed = imputer.transform(X_val)
# 4. Fit the model
linear_reg.fit(X_train_imputed, y_train)
# 5. Apply the model to new data.
# The predictions look like this ...
linear_reg.predict(X_val_imputed)
# Get coefficients
pd.Series(linear_reg.coef_, features)
test_case = [[1, 5, 500]] # 1st class, 5-year old, Rich
linear_reg.predict(test_case)
```
### Logistic Regression!
```
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression(solver='lbfgs')
log_reg.fit(X_train_imputed, y_train)
print('Validation Accuracy', log_reg.score(X_val_imputed, y_val))
# The predictions look like this
log_reg.predict(X_val_imputed)
log_reg.predict(test_case)
log_reg.predict_proba(test_case)
# What's the math?
log_reg.coef_
log_reg.intercept_
# The logistic sigmoid "squishing" function, implemented to accept numpy arrays
import numpy as np
def sigmoid(x):
return 1 / (1 + np.e**(-x))
sigmoid(log_reg.intercept_ + np.dot(log_reg.coef_, np.transpose(test_case)))
```
So, clearly a more appropriate model in this situation! For more on the math, [see this Wikipedia example](https://en.wikipedia.org/wiki/Logistic_regression#Probability_of_passing_an_exam_versus_hours_of_study).
# Use sklearn.linear_model.LogisticRegression to fit and interpret Logistic Regression models
## Overview
Now that we have more intuition and interpretation of Logistic Regression, let's use it within a realistic, complete scikit-learn workflow, with more features and transformations.
## Follow Along
Select these features: `['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']`
(Why shouldn't we include the `Name` or `Ticket` features? What would happen here?)
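One way to see the problem: `Name` and `Ticket` are (nearly) unique per passenger, so one-hot encoding them creates roughly one column per row and the model simply memorizes the training set. A quick cardinality check (a tiny toy frame stands in for the Titanic data here):

```python
import pandas as pd

train = pd.DataFrame({
    'Name': ['Braund, Mr. Owen', 'Cumings, Mrs. John', 'Heikkinen, Miss Laina'],
    'Pclass': [3, 1, 3],
})

# Number of distinct values per column: a column with one unique value
# per row is an identifier, not a useful categorical feature
cardinality = train.nunique()
print(cardinality)
```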
Fit this sequence of transformers & estimator:
- [category_encoders.one_hot.OneHotEncoder](http://contrib.scikit-learn.org/category_encoders/onehot.html)
- [sklearn.impute.SimpleImputer](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html)
- [sklearn.preprocessing.StandardScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)
- [sklearn.linear_model.LogisticRegressionCV](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegressionCV.html)
Get validation accuracy.
Plot coefficients:
Generate [Kaggle](https://www.kaggle.com/c/titanic) submission:
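A hedged sketch of that transformer/estimator sequence, substituting scikit-learn's own `OneHotEncoder` for `category_encoders` (same idea, different library) and a tiny synthetic frame for the Titanic data:

```python
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

train = pd.DataFrame({
    'Pclass': [1, 2, 3, 1, 3, 2, 3, 1],
    'Sex': ['male', 'female'] * 4,
    'Fare': [70.0, 30.0, 7.5, None, 8.0, 26.0, 7.9, 80.0],
})
y_train = pd.Series([1, 1, 0, 1, 0, 1, 0, 1])

pipeline = make_pipeline(
    make_column_transformer(
        (OneHotEncoder(handle_unknown='ignore'), ['Sex']),
        remainder='passthrough',
        sparse_threshold=0,   # force a dense array so the imputer/scaler apply cleanly
    ),
    SimpleImputer(),          # fill the missing Fare with the column mean
    StandardScaler(),
    LogisticRegressionCV(cv=2),
)
pipeline.fit(train, y_train)
print('Training accuracy:', pipeline.score(train, y_train))

# A Kaggle submission is then just predictions on the test frame, e.g.
# (test_ids and X_test are illustrative names):
# pd.DataFrame({'PassengerId': test_ids,
#               'Survived': pipeline.predict(X_test)}).to_csv('submission.csv', index=False)
```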
## Challenge
You'll use Logistic Regression for your assignment, your Sprint Challenge, and optionally for your first model in our Kaggle challenge!
# Review
For your assignment, you'll use a [**dataset of 400+ burrito reviews**](https://srcole.github.io/100burritos/). How accurately can you predict whether a burrito is rated 'Great'?
> We have developed a 10-dimensional system for rating the burritos in San Diego. ... Generate models for what makes a burrito great and investigate correlations in its dimensions.
- Do train/validate/test split. Train on reviews from 2016 & earlier. Validate on 2017. Test on 2018 & later.
- Begin with baselines for classification.
- Use scikit-learn for logistic regression.
- Get your model's validation accuracy. (Multiple times if you try multiple iterations.)
- Get your model's test accuracy. (One time, at the end.)
- Commit your notebook to your fork of the GitHub repo.
- Watch Aaron's [video #1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video #2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.
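The time-based split in the first bullet can be sketched with pandas (`Date` and `Great` are illustrative column names):

```python
import pandas as pd

df = pd.DataFrame({
    'Date': pd.to_datetime(['2015-06-01', '2016-11-12', '2017-03-04',
                            '2017-09-20', '2018-01-15', '2019-05-02']),
    'Great': [True, False, True, False, True, False],
})

# Train on 2016 & earlier, validate on 2017, test on 2018 & later
train = df[df['Date'].dt.year <= 2016]
val = df[df['Date'].dt.year == 2017]
test = df[df['Date'].dt.year >= 2018]

print(len(train), len(val), len(test))  # 2 2 2
```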
# Sources
- Brandon Rohrer, [Training, Validation, and Testing Data Sets](https://end-to-end-machine-learning.teachable.com/blog/146320/training-validation-testing-data-sets)
- Hadley Wickham, [R for Data Science](https://r4ds.had.co.nz/model-intro.html#hypothesis-generation-vs.hypothesis-confirmation), Hypothesis generation vs. hypothesis confirmation
- Hastie, Tibshirani, and Friedman, [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/), Chapter 7: Model Assessment and Selection
- Mueller and Guido, [Introduction to Machine Learning with Python](https://books.google.com/books?id=1-4lDQAAQBAJ&pg=PA270), Chapter 5.2.2: The Danger of Overfitting the Parameters and the Validation Set
- Provost and Fawcett, [Data Science for Business](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data
- Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)
- Sebastian Raschka, [Model Evaluation](https://sebastianraschka.com/blog/2018/model-evaluation-selection-part4.html)
- Will Koehrsen, ["A baseline for classification can be the most common class in the training dataset."](https://twitter.com/koehrsen_will/status/1088863527778111488)
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from shingle import *
from text import *
import pandas as pd
from sklearn.metrics import f1_score, accuracy_score
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt
import gc
plt.style.use("ggplot")
```
# Utility Functions
```
def merge(texts):
    # join word tokens back into a single space-separated string
    return " ".join(texts).strip()

def trn_tst(dataframe, trn_p=0.8):
    # split each document's words into a train prefix and a test suffix
    trn_df = pd.DataFrame()
    tst_df = pd.DataFrame()
    trn_txt = []
    tst_txt = []
    for text in dataframe.text:
        lst = text.split(" ")
        len_ = len(lst)
        trn_txt.append(merge(lst[:int(trn_p*len_)]))
        tst_txt.append(merge(lst[int(trn_p*len_):]))
    trn_df['text'] = trn_txt
    tst_df['text'] = tst_txt
    trn_df['label'] = dataframe.label
    tst_df['label'] = dataframe.label
    return trn_df, tst_df
```
# Create Train and Test DataFrames
```
%%time
# create dataframe for train and test
data = create_dataframe(labels, max_words=50000)
data.shape
data.columns
data.label.replace([i for i in range(len(labels))], labels, inplace=True)
trn_df, tst_df = trn_tst(data, 0.75)
del data
gc.collect()
```
# Train Shingle Model
```
trn_df.shape
trn_df.head(1)
# initialize parameters
ngram_range, smode = (1, 2), "log"
# instantiate model
model = Shingle(df=trn_df, ngram_range=ngram_range, smode=smode)
# model.summary()
model.add_metric(accuracy_score)
%%time
# fit model
model.fit(bs=2000)
# model.to_pickle("./models/shingle.pkl")
# model.summary()
```
# Test Shingle Model
```
# maximum number of words in each sentence (row) in test dataframe
max_lenght = 10
# read test data
tst_df = create_split_dataframe(tst_df, max_lenght)
```
# Evaluate Shingle Model
to add metrics use **{model_name}.add_metric(metric)**
```
x_tst, y_tst = tst_df['text'], tst_df['label']
scores = model.evaluate(x_tst, y_tst)
len(scores)
scores
```
# Test Different Scoring Modes
```
def test_scoring_mode(modes):
    scores = []
    ngram_range, max_words, max_lenght, p, bs = (1, 4), 5000, 15, 0.75, 3000
    data = create_dataframe(labels, max_words=max_words)
    data.label.replace([i for i in range(len(labels))], labels, inplace=True)
    trn_df, tst_df = trn_tst(data, p)
    tst_df = create_split_dataframe(tst_df, max_lenght)
    x_tst, y_tst = tst_df['text'], tst_df['label']
    for mode in tqdm(modes):
        model = Shingle(df=trn_df, ngram_range=ngram_range, smode=mode)
        model.fit(bs=bs)
        scores.append(model.evaluate(x_tst, y_tst)[0])
    return scores
scores = test_scoring_mode(['sum', 'average', 'product', 'vote', 'log'])
scores = pd.DataFrame({"method":['sum', 'average', 'product', 'vote', 'log'], "score":scores})
scores.plot(x="method", y="score", kind='bar')
plt.savefig("./models/test_methods.png")
```
# Test Different Data Sizes (how much data used)
```
def test_data(max_data=[]):
    scores = []
    ngram_range, smode, max_lenght, p = (1, 4), "vote", 25, 0.75
    for max_words in tqdm(max_data):
        data = create_dataframe(labels, max_words=max_words)
        data.label.replace([i for i in range(len(labels))], labels, inplace=True)
        trn_df, tst_df = trn_tst(data, p)
        model = Shingle(df=trn_df, ngram_range=ngram_range, smode=smode)
        model.fit(bs=2000)
        tst_df = create_split_dataframe(tst_df, max_lenght)
        x_tst, y_tst = tst_df['text'], tst_df['label']
        scores.append(model.evaluate(x_tst, y_tst)[0])
    return scores
max_data = [1000, 3000, 5000, 9000, 17000, 33000, 65000]
scores = test_data(max_data)
x = list(map(str, [int(i*0.75) for i in max_data]))
plt.plot(x, scores, 'ro--')
plt.xlabel("Words Per Language")
plt.ylabel("F1 Score")
plt.savefig("./models/test_data.png")
```
# Test Different NGram Ranges
```
def test_ngram_range(ngram_ranges=[]):
    scores = []
    max_words, max_lenght, p, bs, mode = 5000, 15, 0.6, 3000, 'vote'
    data = create_dataframe(labels, max_words=max_words)
    data.label.replace([i for i in range(len(labels))], labels, inplace=True)
    trn_df, tst_df = trn_tst(data, p)
    tst_df = create_split_dataframe(tst_df, max_lenght)
    x_tst, y_tst = tst_df['text'], tst_df['label']
    for ngram_range in tqdm(ngram_ranges):
        model = Shingle(df=trn_df, ngram_range=ngram_range, smode=mode)
        model.fit(bs=bs)
        scores.append(model.evaluate(x_tst, y_tst)[0])
    return scores
ngram_ranges = [(1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (1, 6), (1, 7)]
x = list(map(str, ngram_ranges))
scores = test_ngram_range(ngram_ranges)
plt.plot(x, scores, 'ro--')
plt.xlabel("N-Gram Range")
plt.ylabel("F1 Score")
plt.savefig("./models/test_ngram.png")
scores
```
# Test Length of Sentences
```
def test_lenght(max_lenghts=[]):
    scores = []
    max_words, ngram_range, p, bs, mode = 5000, (1, 3), 0.6, 3000, 'average'
    data = create_dataframe(labels, max_words=max_words)
    data.label.replace([i for i in range(len(labels))], labels, inplace=True)
    trn_df, tst_df = trn_tst(data, p)
    model = Shingle(df=trn_df, ngram_range=ngram_range, smode=mode)
    model.fit(bs=bs)
    for max_lenght in tqdm(max_lenghts):
        tst_tmp = create_split_dataframe(tst_df, max_lenght)
        x_tst, y_tst = tst_tmp['text'], tst_tmp['label']
        scores.append(model.evaluate(x_tst, y_tst)[0])
    return scores
max_lenghts = [3, 5, 7, 11, 15, 17, 19, 27]
x = list(map(str, max_lenghts))
scores = test_lenght(max_lenghts)
plt.plot(x, scores, 'ro--')
scores
S["average"] = scores  # `S` is assumed to be a dict accumulated over runs with different scoring modes
S["length"] = max_lenghts
s = pd.DataFrame(S)
s
s.plot(x="length")
plt.ylabel("F1 Score")
plt.savefig("./models/rank_lengths.png")
```
Following https://medium.com/technovators/machine-learning-based-multi-label-text-classification-9a0e17f88bb4
```
import sys
sys.path.append('/usr/local/lib/python3.9/site-packages')
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn import metrics
import pandas as pd
import numpy as np
import pickle
import json
import matplotlib.pyplot as plt
import umap
from mlxtend.plotting import plot_decision_regions
import scipy
train_dict = pickle.load(open('data/train.pkl', 'rb'))
valid_dict = pickle.load(open('data/valid.pkl', 'rb'))
test_dict = pickle.load(open('data/test.pkl', 'rb'))
# combine valid and test - approx 80/20 split
test_dict = {**valid_dict, **test_dict}
len(train_dict)
len(test_dict)
# Save certain keys only
def load_dict(d: dict):
    # keep only the needed keys, dropping entries with no lemmas
    d_new = {i: {'input': d[i]['input'],
                 'label': d[i]['label'].split(';'),
                 'label_vec': d[i]['label_vec'],
                 'lemmas': d[i]['lemmas']}
             for i in d
             if len(d[i]['lemmas']) != 0}
    return d_new
train_dict = load_dict(train_dict)
# valid_dict = load_dict(valid_dict)
test_dict = load_dict(test_dict)
train_data = pd.DataFrame.from_dict(train_dict, orient='index')
# valid_data = pd.DataFrame.from_dict(valid_dict, orient='index')
test_data = pd.DataFrame.from_dict(test_dict, orient='index')
vectorizer = TfidfVectorizer()
vectorised_train_documents = vectorizer.fit_transform(train_data["input"])
vectorised_test_documents = vectorizer.transform(test_data["input"])
mlb = MultiLabelBinarizer()
train_labels = mlb.fit_transform(train_data['label'])
test_labels = mlb.transform(test_data['label'])
svmClassifier = OneVsRestClassifier(LinearSVC(), n_jobs=-1)
# svmClassifier = CalibratedClassifierCV(svmClassifier)
svmClassifier.fit(vectorised_train_documents, train_labels)
svmPreds = svmClassifier.predict(vectorised_test_documents)
# svmProba = svmClassifier._predict_proba_lr(vectorised_test_documents)
svmDF = svmClassifier.decision_function(vectorised_test_documents)
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix, hamming_loss
accuracy_score(test_labels, svmPreds)
f1_score(test_labels, svmPreds, average='micro')
precision_score(test_labels, svmPreds, average='micro')
recall_score(test_labels, svmPreds, average='micro')
with open('map_labels.json', 'r') as f:
    map_labels = json.load(f)
# map_labels
# map_labels_rev = {map_labels[i]:i for i in map_labels}
print(metrics.classification_report(test_labels, svmPreds, target_names=map_labels.keys()))
# Which categories did the model perform best and worst on?
# use a distinct name so we don't shadow the imported confusion_matrix function
conf_mats = metrics.multilabel_confusion_matrix(test_labels, svmPreds)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 8))
disp1 = metrics.ConfusionMatrixDisplay(conf_mats[0]).plot(ax=ax1)
disp1.ax_.set_title('Mechanism')
disp2 = metrics.ConfusionMatrixDisplay(conf_mats[1]).plot(ax=ax2)
disp2.ax_.set_title('General Info')
metrics.roc_auc_score(test_labels, svmDF, average=None)
# The coverage_error function computes the average number of labels that have to be included in the final prediction such that all true labels are predicted. This is useful if you want to know how many top-scored-labels you have to predict in average without missing any true one.
# The best value of this metrics is thus the average number of true labels.
# coverage_error expects continuous scores rather than binary predictions,
# so pass the decision-function values
metrics.coverage_error(test_labels, svmDF)
map_labels
```
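The `coverage_error` description in the comments above can be checked on a tiny hand-made example. For the first sample the single true label has the second-highest score (coverage 2); for the second it has the lowest of three scores (coverage 3), so the average is 2.5:

```python
import numpy as np
from sklearn.metrics import coverage_error

y_true = np.array([[1, 0, 0], [0, 0, 1]])
y_score = np.array([[0.75, 0.5, 1.0], [1.0, 0.2, 0.1]])
print(coverage_error(y_true, y_score))  # 2.5
```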
# Forecasting Air Passengers using Extra Trees
```
import numpy as np
import pandas as pd
import os
import warnings
from copy import copy
import pickle
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import ExtraTreesRegressor
# personal wrapper created for time-series plots
import ts_utils
plt.rcParams['figure.figsize'] = 15, 6
plt.style.use('ggplot')
warnings.filterwarnings("ignore")
```
## Data preparation
```
DATADIR = '../data/air-passenger/'
data_path = os.path.join(DATADIR, 'AirPassengers.csv')
MODELDIR = '../checkpoints/air-passenger/et/model/'
df = pd.read_csv(data_path, usecols=[1], header=0, engine='python')
data = df.values
data = data.astype('float32')
plt.plot(data)
plt.title('#Air Passengers from 1949 to 1960')
plt.ylabel('#thousands of passengers')
plt.xlabel('Years')
plt.tight_layout()
plt.show()
scaled_data = np.log(data)
print(scaled_data[:5])
plt.plot(scaled_data)
plt.title('#Air Passengers from 1949 to 1960')
plt.ylabel('Scaled #passengers')
plt.xlabel('Years')
plt.tight_layout()
plt.show()
```
## Train test split
```
train, test = train_test_split(scaled_data, train_size=0.8 ,shuffle=False)
len(train), len(test)
X_train, y_train = ts_utils.prepare_data(train, time_step=1)
X_test, y_test = ts_utils.prepare_data(test, time_step=1)
print(X_train.shape)
print(X_test.shape)
```
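`ts_utils` is the author's personal wrapper, so its `prepare_data` is not shown here. A common implementation of this kind of lag-feature framing (an assumption for illustration, not the author's actual code) looks like:

```python
import numpy as np

def prepare_data(series, time_step=1):
    # X[i] holds `time_step` consecutive values, y[i] the value that follows
    series = np.asarray(series).ravel()
    X, y = [], []
    for i in range(len(series) - time_step):
        X.append(series[i:i + time_step])
        y.append(series[i + time_step])
    return np.array(X), np.array(y)

X, y = prepare_data([1.0, 2.0, 3.0, 4.0, 5.0], time_step=2)
print(X.shape, y.shape)  # (3, 2) (3,)
```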
## Model fitting
```
params = {
'max_depth': 5,
'n_estimators': 200,
'max_features' : 1
}
model = ExtraTreesRegressor(**params)
model.fit(X_train, y_train)
# create the checkpoint directory if it doesn't exist yet
os.makedirs(MODELDIR, exist_ok=True)
pickle.dump(model, open(os.path.join(MODELDIR, 'et.model'), 'wb'))
```
## Prediction and evaluation
```
# prediction
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
# the results are in the form of scaled value, so inverse the transformation
y_train_pred_inv = np.exp(y_train_pred)
y_test_pred_inv = np.exp(y_test_pred)
# will be used for calculating MAE and RMSE
y_train_inv = np.exp(y_train)
y_test_inv = np.exp(y_test)
# MAE and RMSE calculation
train_rmse = np.sqrt(mean_squared_error(y_train_inv, y_train_pred_inv))
train_mae = mean_absolute_error(y_train_inv, y_train_pred_inv)
train_nrmse = train_rmse/np.std(y_train_inv)
test_rmse = np.sqrt(mean_squared_error(y_test_inv, y_test_pred_inv))
test_mae = mean_absolute_error(y_test_inv, y_test_pred_inv)
test_nrmse = test_rmse/np.std(y_test_inv)
print(f'Training NRMSE: {train_nrmse}')
print(f'Training MAE: {train_mae}')
print(f'Test NRMSE: {test_nrmse}')
print(f'Test MAE: {test_mae}')
plt.plot(y_test_inv, label='test')
plt.plot(y_test_pred_inv, label='test predicted')
plt.xlabel('time')
plt.ylabel('#passengers')
plt.title('Actual vs predicted on test data using ExtraTrees', fontsize=14)
plt.legend()
plt.tight_layout()
plt.show()
plt.plot(y_train_inv, label='actual')
plt.plot(y_train_pred_inv, label='predicted')
plt.ylabel('#passengers')
plt.xlabel('time')
plt.title('Actual vs Predicted on Training data', fontsize=14)
plt.legend()
plt.tight_layout()
plt.show()
plt.plot(y_test_inv, label='actual')
plt.plot(y_test_pred_inv, label='predicted')
plt.ylabel('#passengers')
plt.xlabel('time')
plt.title('Actual vs Predicted on test data', fontsize=14)
plt.legend()
plt.tight_layout()
plt.show()
actual_data = np.vstack((y_train_inv, y_test_inv))
test_plot = np.empty_like(actual_data)
test_plot[:, :] = np.nan
test_plot[len(y_train_pred_inv):len(actual_data), :] = y_test_pred_inv.reshape(-1, 1)
plt.plot(actual_data, label='actual')
plt.plot(y_train_pred_inv, color='blue', linestyle='-.', label='training prediction')
plt.plot(test_plot, color='green', linestyle='-', label='test prediction')
plt.xlabel('Years')
plt.ylabel('#thousand air passengers')
plt.title('Actual vs Predicted using Extra Trees', fontsize=15)
plt.legend()
plt.tight_layout()
plt.show()
```
https://twitter.com/yujitach/status/1424030835771023363
```
VERSION
]st
Threads.nthreads()
"""
Original
* https://gist.github.com/yujitach/c30d7a174bbc3d3d3e40a3c0f9f9d47f
* Tabs replaced with four spaces
"""
module Original
using LinearAlgebra,LinearMaps
import Arpack
const L=20
diag_ = zeros(Float64,2^L)
function prepareDiag(diag)
for state = 1 : 2^L
for i = 1 : L
j = i==L ? 1 : i+1
diag[state] -= (((state >> (i-1))&1) == ((state >> (j-1))&1)) ? 1 : -1
end
end
end
function Hfunc!(C,B,diag)
for state = 1 : 2^L
C[state] = diag[state] * B[state]
end
for state = 1 : 2^L
for i = 1 : L
newstate = (state&(~(2^L))) ⊻ (1<<(i-1))
if newstate==0
newstate = 2^L
end
C[newstate] -= B[state]
end
end
end
println("preparing...")
prepareDiag(diag_)
println("computing the lowest eigenvalue...")
H=LinearMap((C,B)->Hfunc!(C,B,diag_),2^L,ismutating=true,issymmetric=true,isposdef=false)
@time e,v = Arpack.eigs(H,nev=1,which=:SR)
@time e,v = Arpack.eigs(H,nev=1,which=:SR)
println("obtained:")
println(e[1])
println("theoretical:")
println(-2sum([ abs(sin((n-1/2) * pi/L)) for n in 1 : L]))
end;
"""
Rev0
* This revision is almost equivalent to the original.
* Stop using constants.
* Always pass global variables to functions as arguments.
* Swap the order of the for loop.
* Add @inbounds macro.
* Revise sum([f(x) for x in X]) to sum(f(x) for x in X).
"""
module Rev0
using LinearAlgebra, LinearMaps
import Arpack
function prepareDiag(L)
diag = zeros(2^L)
for state = 1:2^L
for i = 1:L
j = i==L ? 1 : i+1
@inbounds diag[state] -= (((state >> (i-1))&1) == ((state >> (j-1))&1)) ? 1 : -1
end
end
diag
end
function Hfunc!(C, B, diag, L)
for state = 1:2^L
@inbounds C[state] = diag[state] * B[state]
end
for i = 1:L
for state = 1:2^L
newstate = (state&(~(2^L))) ⊻ (1<<(i-1))
if newstate == 0
newstate = 2^L
end
@inbounds C[newstate] -= B[state]
end
end
end
prepareHfunc!(diag, L) = (C, B) -> Hfunc!(C, B, diag, L)
L = 20
println("preparing...")
diag_ = prepareDiag(L)
println("computing the lowest eigenvalue...")
H = LinearMap(prepareHfunc!(diag_, L), 2^L, ismutating=true, issymmetric=true, isposdef=false)
@time e, v = Arpack.eigs(H, nev=1, which=:SR)
@time e, v = Arpack.eigs(H, nev=1, which=:SR)
println("obtained:")
println(e[1])
println("theoretical:")
println(-2sum(abs(sin((n-1/2) * pi/L)) for n in 1:L))
end;
"""
Rev1
* Use Threads.@threads macro.
"""
module Rev1
using LinearAlgebra, LinearMaps
import Arpack
function prepareDiag(L)
diag = zeros(2^L)
for state = 1:2^L
for i = 1:L
j = i==L ? 1 : i+1
@inbounds diag[state] -= (((state >> (i-1))&1) == ((state >> (j-1))&1)) ? 1 : -1
end
end
diag
end
function Hfunc!(C, B, diag, L)
Threads.@threads for state = 1:2^L
@inbounds C[state] = diag[state] * B[state]
end
for i = 1:L
Threads.@threads for state = 1:2^L
newstate = (state&(~(2^L))) ⊻ (1<<(i-1))
if newstate == 0
newstate = 2^L
end
@inbounds C[newstate] -= B[state]
end
end
end
prepareHfunc!(diag, L) = (C, B) -> Hfunc!(C, B, diag, L)
L = 20
println("preparing...")
diag_ = prepareDiag(L)
println("computing the lowest eigenvalue...")
H = LinearMap(prepareHfunc!(diag_, L), 2^L, ismutating=true, issymmetric=true, isposdef=false)
@time e, v = Arpack.eigs(H, nev=1, which=:SR)
@time e, v = Arpack.eigs(H, nev=1, which=:SR)
println("obtained:")
println(e[1])
println("theoretical:")
println(-2sum(abs(sin((n-1/2) * pi/L)) for n in 1:L))
end;
"""
Rev2
* Use LoopVectorization.@tturbo macro.
"""
module Rev2
using LinearAlgebra, LinearMaps
import Arpack
using LoopVectorization
function prepareDiag(L)
diag = zeros(2^L)
for state = 1:2^L
for i = 1:L
j = i==L ? 1 : i+1
@inbounds diag[state] -= (((state >> (i-1))&1) == ((state >> (j-1))&1)) ? 1 : -1
end
end
diag
end
function Hfunc!(C, B, diag, L)
N = length(diag)
@tturbo for state = 1:N
C[state] = diag[state] * B[state]
end
for i = 1:L
@tturbo for state = 1:N
newstate = (state&(~(2^L))) ⊻ (1<<(i-1))
c = newstate == 0
newstate = !c*newstate + c*N # remove if statement
C[newstate] -= B[state]
end
end
end
prepareHfunc!(diag, L) = (C, B) -> Hfunc!(C, B, diag, L)
L = 20
println("preparing...")
diag_ = prepareDiag(L)
println("computing the lowest eigenvalue...")
H = LinearMap(prepareHfunc!(diag_, L), 2^L, ismutating=true, issymmetric=true, isposdef=false)
@time e, v = Arpack.eigs(H, nev=1, which=:SR)
@time e, v = Arpack.eigs(H, nev=1, which=:SR)
println("obtained:")
println(e[1])
println("theoretical:")
println(-2sum(abs(sin((n-1/2) * pi/L)) for n in 1:L))
end;
using BenchmarkTools
using LinearAlgebra
using Arpack: Arpack
H = Original.H
H0 = Rev0.H
H1 = Rev1.H
H2 = Rev2.H
B, C = similar(Original.diag_), similar(Original.diag_)
println("Hamiltonian bench")
print(" Original: ")
@btime mul!($C, $H, $B)
print(" Rev0 (almost original): ")
@btime mul!($C, $H0, $B)
print(" Rev1 (Threads.@threads): ")
@btime mul!($C, $H1, $B)
print(" Rev2 (LoopVectorization):")
@btime mul!($C, $H2, $B);
println("Arpack.eigs bench")
print(" Original: ")
@btime e, v = Arpack.eigs($H, nev=1, which=:SR)
print(" Rev0 (almost original): ")
@btime e, v = Arpack.eigs($H0, nev=1, which=:SR)
print(" Rev1 (Threads.@threads): ")
@btime e, v = Arpack.eigs($H1, nev=1, which=:SR)
print(" Rev2 (LoopVectorization):")
@btime e, v = Arpack.eigs($H2, nev=1, which=:SR);
@show a = 2
@show foo = x -> a*x
@show foo(3)
@show a = 10
@show foo(3);
@show a = 2
@show makebar(a) = x -> a*x
@show bar = makebar(a)
@show bar(3)
@show a = 10
@show bar(3);
```
```
import os
import zipfile
import random
import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from shutil import copyfile
# If the URL doesn't work, visit https://www.microsoft.com/en-us/download/confirmation.aspx?id=54765
# And right click on the 'Download Manually' link to get a new URL to the dataset
# Note: This is a very large dataset and will take time to download
!wget --no-check-certificate \
"https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip" \
-O "/tmp/cats-and-dogs.zip"
local_zip = '/tmp/cats-and-dogs.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
print(len(os.listdir('/tmp/PetImages/Cat/')))
print(len(os.listdir('/tmp/PetImages/Dog/')))
# Expected Output:
# 12501
# 12501
try:
    os.mkdir('/tmp/cats-v-dogs')
    os.mkdir('/tmp/cats-v-dogs/training')
    os.mkdir('/tmp/cats-v-dogs/testing')
    os.mkdir('/tmp/cats-v-dogs/training/cats')
    os.mkdir('/tmp/cats-v-dogs/training/dogs')
    os.mkdir('/tmp/cats-v-dogs/testing/cats')
    os.mkdir('/tmp/cats-v-dogs/testing/dogs')
except OSError:
    pass
def split_data(SOURCE, TRAINING, TESTING, SPLIT_SIZE):
    files = []
    for filename in os.listdir(SOURCE):
        file = SOURCE + filename
        if os.path.getsize(file) > 0:
            files.append(filename)
        else:
            print(filename + " is zero length, so ignoring.")
    training_length = int(len(files) * SPLIT_SIZE)
    testing_length = int(len(files) - training_length)
    shuffled_set = random.sample(files, len(files))
    training_set = shuffled_set[0:training_length]
    # take the remaining files rather than the first `testing_length` again,
    # so the training and testing sets do not overlap
    testing_set = shuffled_set[training_length:]
    for filename in training_set:
        this_file = SOURCE + filename
        destination = TRAINING + filename
        copyfile(this_file, destination)
    for filename in testing_set:
        this_file = SOURCE + filename
        destination = TESTING + filename
        copyfile(this_file, destination)
CAT_SOURCE_DIR = "/tmp/PetImages/Cat/"
TRAINING_CATS_DIR = "/tmp/cats-v-dogs/training/cats/"
TESTING_CATS_DIR = "/tmp/cats-v-dogs/testing/cats/"
DOG_SOURCE_DIR = "/tmp/PetImages/Dog/"
TRAINING_DOGS_DIR = "/tmp/cats-v-dogs/training/dogs/"
TESTING_DOGS_DIR = "/tmp/cats-v-dogs/testing/dogs/"
split_size = .9
split_data(CAT_SOURCE_DIR, TRAINING_CATS_DIR, TESTING_CATS_DIR, split_size)
split_data(DOG_SOURCE_DIR, TRAINING_DOGS_DIR, TESTING_DOGS_DIR, split_size)
# Expected output
# 666.jpg is zero length, so ignoring
# 11702.jpg is zero length, so ignoring
print(len(os.listdir('/tmp/cats-v-dogs/training/cats/')))
print(len(os.listdir('/tmp/cats-v-dogs/training/dogs/')))
print(len(os.listdir('/tmp/cats-v-dogs/testing/cats/')))
print(len(os.listdir('/tmp/cats-v-dogs/testing/dogs/')))
# Expected output:
# 11250
# 11250
# 1250
# 1250
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer=RMSprop(lr=0.001), loss='binary_crossentropy', metrics=['acc'])
TRAINING_DIR = "/tmp/cats-v-dogs/training/"
# Experiment with your own parameters here to really try to drive it to 99.9% accuracy or better
train_datagen = ImageDataGenerator(rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
train_generator = train_datagen.flow_from_directory(TRAINING_DIR,
batch_size=100,
class_mode='binary',
target_size=(150, 150))
VALIDATION_DIR = "/tmp/cats-v-dogs/testing/"
# Experiment with your own parameters here to really try to drive it to 99.9% accuracy or better
validation_datagen = ImageDataGenerator(rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,
batch_size=100,
class_mode='binary',
target_size=(150, 150))
# Expected Output:
# Found 22498 images belonging to 2 classes.
# Found 2500 images belonging to 2 classes.
# Note that this may take some time.
history = model.fit_generator(train_generator,
epochs=15,
steps_per_epoch = 100,
verbose=1,
validation_data=validation_generator)
%matplotlib inline
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
acc=history.history['acc']
val_acc=history.history['val_acc']
loss=history.history['loss']
val_loss=history.history['val_loss']
epochs=range(len(acc)) # Get number of epochs
#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.plot(epochs, acc, 'r', label="Training Accuracy")
plt.plot(epochs, val_acc, 'b', label="Validation Accuracy")
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(epochs, loss, 'r', label="Training Loss")
plt.plot(epochs, val_loss, 'b', label="Validation Loss")
plt.title('Training and validation loss')
plt.legend()
plt.figure()
# Desired output. Charts with training and validation metrics. No crash :)
# Here's a codeblock just for fun. You should be able to upload an image here
# and have it classified without crashing
import numpy as np
from google.colab import files
from keras.preprocessing import image
uploaded = files.upload()
for fn in uploaded.keys():
# predicting images
path = '/content/' + fn
img = image.load_img(path, target_size=(150, 150))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
images = np.vstack([x])
classes = model.predict(images, batch_size=10)
print(classes[0])
if classes[0]>0.5:
print(fn + " is a dog")
else:
print(fn + " is a cat")
```
# 18 - K Nearest Neighbors (KNN) - Theory
- Here we will understand the K Nearest Neighbors algorithm and how to use it for classification problems.
## Reading Assignment
Chapter 4 : Introduction to Statistical Learning (ISLR) By Gareth James, et al.
## What is KNN ?
- K Nearest Neighbors is a classification algorithm that operates on a very simple principle.
- It is best shown through example!
- Imagine we had some imaginary data on Dogs and Horses, with heights and weights.

- Given an animal whose height and weight we know, we want to predict whether that data point is a horse or a dog.
- Based on where the green point lies, and on the labels of its neighboring points, we can tell in most cases whether it is a dog or a horse.
- It is easy to predict for points in the topmost and bottommost areas, but the tricky part is to logically classify the green point in the middle.
- This is where K nearest neighbors algorithm comes into play.
## Training Algorithm :
- Store all the data.
## Prediction Algorithm :
1. Calculate the distance from x to all points in your data.
2. Sort the points in your data by increasing distance from X.
3. Predict the majority label of the “k” closest points.
Here x is the particular new data point, and k is the number of closest points considered.
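The three prediction steps above can be sketched in NumPy; the toy height/weight numbers below are made up purely for illustration:

```python
import numpy as np

def knn_predict(X_train, y_train, x_new, k=3):
    """Predict the label of x_new by majority vote of its k nearest neighbors."""
    # 1. Calculate the distance from x_new to all training points (Euclidean here)
    dists = np.linalg.norm(X_train - x_new, axis=1)
    # 2. Sort by increasing distance and keep the k closest points
    nearest = np.argsort(dists)[:k]
    # 3. Predict the majority label among those k points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Hypothetical (height m, weight kg) data: 0 = dog, 1 = horse
X = np.array([[0.5, 20], [0.6, 25], [1.6, 500], [1.7, 550]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.55, 22]), k=3))  # 0 (dog)
```

Note that the distance computation touches every training point, which is exactly why prediction cost grows with dataset size (see the cons below).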
### Choosing a K
- Choosing a K will affect what class a new point is assigned to :

- In image above we first plot our training data, yellow points in class A, purple points in class B.
- Star indicates a new point. We want to predict whether star belongs to class A or class B.
- If we have k = 3 then we look at the 3 nearest neighbors of the star; if we choose k = 6 then we look at the 6 nearest neighbours of the new point.
- As we can see for k = 3 new point belongs to class B and for k = 6 new point belongs to class A.
- For horse vs dog classification we can see plotted effects for various K values below :

- For a smaller k value we might pickup some noise.
- As k grows, the decision boundary becomes smoother and separates the classes more cleanly, though at the cost of a few false positive and false negative errors.
## Pros of KNN Algorithm
- Very simple
- Training is trivial
- Works with any number of classes
- Easy to add more data
- Few parameters
- K (How many points to look at which are neighbors of new given point.)
- **Distance Metric** (More on this in Chapter 4 of ISLR). In layman's terms, it is simply how you mathematically define the distance between the new data point and the old training points.
## Cons of KNN Algorithm
- High Prediction Cost (worse for large datasets)
- Not ideal for high-dimensional data.
- Categorical features do not work well.
# Next :
- A common interview task for a data scientist position is to be given anonymized data and attempt to classify it, without knowing the context of the data.
- We are also going to simulate a similar scenario by giving you some **"classified"** data, where what the columns represent is not known, but you have to use KNN to classify it.
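As a preview of that exercise, a typical scikit-learn workflow for such anonymized data might look like the sketch below. This is not the exercise's solution; the two-blob dataset here is synthetic, standing in for the unnamed columns:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "classified" data: two clusters, column meanings unknown
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 4), rng.randn(50, 4) + 3])
y = np.array([0] * 50 + [1] * 50)

# KNN is distance-based, so scale the anonymous features first
X_scaled = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_tr, y_tr)
print(knn.score(X_te, y_te))
```

Scaling matters here because unscaled columns with large ranges would dominate the distance metric.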
```
from pyspark.conf import SparkConf
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
from pyspark.sql.types import BooleanType, IntegerType
from datetime import *
from settings import obtener_timestamp, obtener_dia_semana
""" Configuramos Spark """
conf = SparkConf()
conf.setAppName("ProcesamientoDatos")
conf.setMaster("local[*]")
spark = SparkSession.builder.config(conf=conf).getOrCreate()
```
We read the processed data from the saved file
```
data = spark.read.format('parquet').load('./../datos/processed/full.parquet/')
data.printSchema()
tiempo_fin = obtener_timestamp("2013-01-01", "01:30")
tiempo_inicio = tiempo_fin - timedelta(minutes=30)
```
### Query 1: Frequent routes
In this first query we obtain the 10 most frequent routes during the previous 30 minutes. These routes only count if the trip has been completed, that is, if the passenger has gotten out of the taxi. The output of the query will be the following:
hora_subida, hora_bajada, celda_subida_1, celda_bajada_1, ..., celda_subida_10, celda_bajada_10, tiempo_ejecucion
```
mejor = data.filter(data.hora_subida <= tiempo_fin) \
.filter(data.hora_subida >= tiempo_inicio) \
.filter(data.hora_bajada <= tiempo_fin) \
.filter(data.hora_bajada >= tiempo_inicio) \
.groupBy("cuad_longitud_subida", "cuad_latitud_subida", "cuad_longitud_bajada", "cuad_latitud_bajada") \
.count().orderBy(desc("count"))
mejor = mejor.take(10)
mejor
```
### Query 1B: Frequent routes
In this query we obtain the 10 most frequent routes on a given day of the week during the 30 minutes before a given time. These routes only count if the trip has been completed, that is, if the passenger has gotten out of the taxi. The output of the query will be the following:
hora_subida, hora_bajada, celda_subida_1, celda_bajada_1, ..., celda_subida_10, celda_bajada_10, tiempo_ejecucion
```
dia_elegido = obtener_dia_semana("Lunes")
```
Due to various Spark limitations, we use global variables to apply the time constraints
```
hora_fin = datetime.strptime("00:30:00", "%H:%M:%S")
hora_inicio = (hora_fin - timedelta(minutes=30))
def comparar_hora(hora):
"""
Method that filters the record times so that they match
the desired search hours
:param hora: full timestamp
:return: True if the hours of the timestamp fall between the desired ones,
False otherwise
"""
if hora.time() <= hora_fin.time() and hora.time() >= hora_inicio.time():
return True
return False
def relevancia(fecha):
"""
Method that gives more relevance to the trips closest to the
desired search date.
If the difference from the given date is less than one month,
the records get more relevance
:param fecha: full timestamp
:return: 2 if the trip is near the desired date, 1 otherwise
"""
diferencia = fecha - tiempo_fin
if diferencia < timedelta(days=30) and diferencia > timedelta(days=-30):
return 2
else:
return 1
comprobar_hora = udf(comparar_hora, BooleanType())
calcular_relevancia = udf(relevancia, IntegerType())
filtered = data.filter(data.dia_semana == dia_elegido) \
.filter(comprobar_hora(data.hora_subida)) \
.filter(comprobar_hora(data.hora_bajada)) \
.withColumn('relevancia', calcular_relevancia(data.hora_subida))
frequent = filtered.groupBy("cuad_longitud_subida", "cuad_latitud_subida", \
"cuad_longitud_bajada", "cuad_latitud_bajada") \
.sum("relevancia") \
.select(col("cuad_longitud_subida"), col("cuad_latitud_subida"), \
col("cuad_longitud_bajada"), col("cuad_latitud_bajada"), \
col("sum(relevancia)").alias("frecuencia")) \
.orderBy("frecuencia", ascending=False)
filtered.show()
mes = "06"
hora = "15:00"
HORA_FIN = datetime.strptime(mes + " " + hora, "%m %H:%M")
print(HORA_FIN)
```
```
from traitlets.config.manager import BaseJSONConfigManager
# To make this work, replace path with your own:
# On the command line, type jupyter --paths to see where your nbconfig is stored
# Should be in the environment in which you install reveal.js
# path = "/Users/jacobperricone/anaconda/envs/py36/bin/jupyter"
# cm = BaseJSONConfigManager(config_dir=path)
# cm.update('livereveal', {
# 'theme': 'simple',
# 'transition': 'zoom',
# 'start_slideshow_at': 'selected',
# })
import numpy as np
import scipy
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set(rc={"figure.figsize": (20, 7), "lines.linewidth": 2.5}, font_scale=1.5)
sns.set_style("whitegrid")
np.random.seed(0)
%%HTML
<link rel="stylesheet" type="text/css" href="custom.css">
```
# CME 193
## Introduction to Scientific Python
## Spring 2018
<br>
## Lecture 7
-------------
## More ```Pandas```, ```SciPy```, and ```scikit-learn```
# Lecture 7 Contents
* Admin
* Python Environments
* More Pandas
* SciPy
* scikit-learn
# HW2/Project
- Again, you are strongly encouraged to do the project instead of the HW2
- HW2 is useful, but doing a project allows you to choose something you're more interested in
- Lots of freedom, your chance to work on something you like
### Proposals due ```4/30```
https://web.stanford.edu/~jacobp2/src/html/project.html
# Exercises
- Your solutions are to be turned in on ```5/15 ```
- Turn in a zipped folder with Jupyter notebooks for each section.
- What's most important is that your solutions are there and you have some form of comments letting us know how to navigate your code.
- Jupyter markdown is really great for this
https://web.stanford.edu/~jacobp2/src/html/exercises.html
---
# Python Environments
# Python environments
- Pip is a package manager, and Virtualenv is a widely used environment manager.
- Conda is both
- You can look at the differences here: https://conda.io/docs/_downloads/conda-pip-virtualenv-translator.html
If you are using virtualenv, I recommend also using virtualenvwrapper:
You can install virtualenv with brew and virtualenvwrapper with pip.
# Why do we want to use Python environments?
- virtualenv/conda creates a folder which contains all the necessary executables to use the packages that a Python project would need.
- You can download an open source project and easily install the requirements in a self-contained environment.
- You can manage environments of Python 2 and Python 3, ensure dependencies don’t clash.
## Conda Environments
Documentation:
https://conda.io/docs/using/envs.html
This tutorial spells the workflow out exactly:
https://uoa-eresearch.github.io/eresearch-cookbook/recipe/2014/11/20/conda/
Very simple, highly recommend!
# Git
When working on a task like your project, it is often the case that we want to version our code.
```git```: version control system.
Tutorial: https://medium.com/@abhishekj/an-intro-to-git-and-github-1a0e2c7e3a2f
---
# More pandas
# Concatenate DataFrames
- The ``` concat ``` function in pandas does all the heavy lifting of concatenation operations along an axis
- The ``` concat ``` function also performs optional set logic (union or intersection) on the indexes (if any) of the other axes
- Syntax:
``` python
pd.concat(objs, axis = 0, join = 'outer', join_axes = None, ignore_index = False, ... )
```
```
import pandas as pd
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],'B': ['B0', 'B1', 'B2', 'B3'],'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']}, index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],'B': ['B4', 'B5', 'B6', 'B7'],'C': ['C4', 'C5', 'C6', 'C7'],
'D': ['D4', 'D5', 'D6', 'D7']},index=[4, 5, 6, 7])
end_string = '\n' + '--'*25 + '\n'
print(df1, df2, sep =end_string )
df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],'B': ['B8', 'B9', 'B10', 'B11'],'C': ['C8', 'C9', 'C10', 'C11'],
'D': ['D8', 'D9', 'D10', 'D11']},index=[8, 9, 10, 11])
df4 = pd.DataFrame({'B': ['B2', 'B3', 'B6', 'B7'],'D': ['D2', 'D3', 'D6', 'D7'], 'F': ['F2', 'F3', 'F6', 'F7']},
index=[2, 3, 6, 7])
print(df3,df4, sep = end_string)
result = pd.concat([df1, df2, df3])
print(result)
results = pd.concat([df1, df2, df3], keys = ['a','b','c'])
print(results, end = end_string)
print(results.loc['a'])
```
## Set logic on the other axes
- When concatenating DataFrames or Panels or Series, you have a choice on how to handle the other axes, i.e. how you join the two objects:
- Default: join = `outer`. Takes the sorted union of them all
- Take the intersection: join = `inner`
- Use a specific index, i.e. use the ``` join_axes ``` argument
```
# df1 has indices 0-3, df4 has indices 2, 3, 6, 7
print(df1,df2, sep = end_string)
print(df3,df4,sep = end_string)
#take the outer join of the two indices and concatenate along the 1st axis
result2 = pd.concat([df1, df4], axis = 1)
print(result2)
result2['B']
df1
df4
# Take the inner join of the two indices
result2 = pd.concat([df1, df4], axis = 1, join = 'inner')
print(result2)
# Specify an exact index
result = pd.concat([df1, df4], axis=1, join_axes=[df1.index])
print(result)
```
## df.append
- A useful shortcut for ``` pd.concat ``` is the instance method ``` df.append(df2 or [list of other dfs]) ```
- This simply concatenates along axis 0
- The indices must be disjoint but the columns do not need to be
```
print(df1, df2, df4, sep= end_string)
# Simple append
print(df1.append(df2), end = end_string)
# example where columns are not disjoint (notice the repeated values)
print(df1.append(df4))
# Multiple dfs
df1.append([df2, df3])
```
# Ignoring indexes
- For DataFrames without a meaningful index (i.e. just``` 0,..., len(df) - 1 ```), you can append them and ignore the fact that there may be overlapping indices
- Done by setting ```ignore_index = True```
```
print(df1, df4, sep= end_string)
# can also use append (df1.append(df4, ignore_index = True))
result = pd.concat([df1, df4], ignore_index = True)
print(result)
```
## Function Application
- Row or Column-wise Function Application: Applies function along input axis of DataFrame
```python
df.apply(func, axis = 0) ```
- Elementwise: apply the function to every element in the df
```python
df.applymap(func) ```
- Note, ``` applymap ``` is equivalent to the ``` map ``` function on lists.
- Note, ``` Series ``` objects support ``` .map ``` instead of ``` applymap ```
```
## APPLY EXAMPLES
import numpy as np
df1 = pd.DataFrame(np.random.randn(6,4), index=list(range(0,12,2)), columns=list('abcd'))
# Apply to each column
print(df1.apply(np.mean), end = end_string)
# Apply to each row
print(df1.apply(np.mean, axis = 1), end = end_string)
# Use lambda functions to normalize columns
print(df1.apply(lambda x: (x - x.mean())/ x.std()).head())
df1[['a','b']].apply(lambda x: (x - x.mean())/ x.std()).head()
## APPLY EXAMPLES
# Can get trickier, say I wanted to find where the maximum dates occured for each column of the df:
tsdf = pd.DataFrame(np.random.randn(1000, 3), columns=['A', 'B', 'C'],
index=pd.date_range('1/1/2000', periods=1000))
# easy
print(tsdf.apply(lambda x: x.idxmax()))
## APPLYMAP EXAMPLES
print(tsdf.head(),end = end_string)
tmp = tsdf.applymap(lambda x: x - 1)
print(tmp.head())
```
## The split/apply combo
- pandas objects can be split on any of their axes. The abstract definition of grouping is to provide a mapping of labels to group names:
- Syntax:
- ``` groups = df.groupby(key) ```
- ``` groups = df.groupby(key, axis = 1) ```
- ``` groups = df.groupby([key1, key2], axis = 1) ```
### Some Theory
- The motivation behind the groupby concept is that we often want to apply the same function on subsets of the dataframe, based on some key we use to split the DataFrame into subsets
- This idea is referred to as the "split-apply-combine" operation:
- Split the data into groups based on some criteria
- Apply a function to each group independently
- Combine the results

```
import pandas as pd
```
## Simple example:
- Say we have a DataFrame of two columns, key and data, and we want to find the sum of column data for each unique value in key.
```
df = pd.DataFrame({'key':['A','B','C','A','B','C','A','B','C'],
'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})
print(df)
```
## Ugly way:
- We could do
``` python
df[df['key'] == 'A'].sum()
df[df['key'] == 'B'].sum()
# ...
```
For all the keys in the dictionary
- Or we can group by the column 'key' and apply the sum function to each group.
``` python
df.groupby('key').sum()
```
```
df.groupby('key').sum()
```
#### Let's switch over to another notebook and see some more ```pandas```
It's on the [website][lec7-climate] next to today's lecture under climate-change.
[lec7-climate]: https://stanford.edu/~jacobp2/src/html/lectures.html
---
# Scipy
## What is SciPy?
* SciPy is a library of algorithms and mathematical tools built to work with NumPy arrays.
- scipy.linalg: linear algebra
- scipy.stats: statistics
- scipy.optimize: optimization
- scipy.sparse: sparse matrices
- scipy.signal: signal processing
- etc.
## ```scipy.linalg```
* Slightly different from numpy.linalg. Always uses BLAS/LAPACK support, so could be faster.
* Support for special matrices, many more functions for advanced algorithms
* Matrix decompositions, many equation solvers, and related utilities
## ```scipy.optimize```
- General purpose minimization: CG, BFGS, least-squares
- Constrained minimization; non-negative least-squares
- Minimize using simulated annealing
- Scalar function minimization
- Root finding
- Check gradient function Line search
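A couple of these minimizers in action; the quadratic objectives below are made up, chosen so the true minimizers are obvious:

```python
from scipy import optimize

# Scalar function minimization: the minimum of (x - 2)^2 + 1 is at x = 2
res = optimize.minimize_scalar(lambda x: (x - 2) ** 2 + 1)
print(res.x)  # ~2.0

# General-purpose minimization with BFGS on a 2-D quadratic
res2 = optimize.minimize(lambda v: (v[0] - 1) ** 2 + (v[1] + 0.5) ** 2,
                         x0=[0.0, 0.0], method="BFGS")
print(res2.x)  # ~[1.0, -0.5]
```

Both calls return an `OptimizeResult` whose `.x` attribute holds the minimizer and whose `.success` flag reports convergence.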
## ```scipy.stats```
- Mean, median, mode, variance, kurtosis
- Pearson correlation coefficient
- Hypothesis tests (ttest, Wilcoxon signed-rank test, Kolmogorov-Smirnov)
- Gaussian kernel density estimation
See also SciKits (or scikit-learn).
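For example, a two-sample t-test and a Pearson correlation on synthetic data:

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
a = rng.normal(loc=0.0, size=500)
b = rng.normal(loc=0.5, size=500)

# Two-sample t-test: do a and b share the same mean?
t_stat, p_value = stats.ttest_ind(a, b)
print(p_value < 0.05)  # the 0.5 mean shift is easy to detect

# Pearson correlation coefficient between a and a noisy copy of itself
r, p = stats.pearsonr(a, a + 0.1 * rng.normal(size=500))
print(r)  # close to 1
```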
## ```scipy.sparse```
- Sparse matrix classes: CSC, CSR, etc.
- Functions to build sparse matrices
- Use ```sparse.linalg``` module for sparse linear algebra methods
- ```sparse.csgraph``` for sparse graph routines
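A small sketch of building a sparse matrix and solving a linear system with it (here a tridiagonal matrix, chosen because it is trivially sparse):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

# Build a sparse tridiagonal matrix in CSR format
n = 5
A = sparse.diags([1, -2, 1], offsets=[-1, 0, 1], shape=(n, n), format="csr")
print(A.nnz)  # 13 stored entries out of 25

# Solve A x = b with the sparse direct solver
b = np.ones(n)
x = spsolve(A, b)
print(np.allclose(A @ x, b))  # True
```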
## ```scipy.signal```
- Convolutions
- B-splines
- Filtering
- Continuous-time linear system
- Wavelets
- Peak finding
## ```scipy.io```
Methods for loading and saving data
- Matlab files
- Matrix Market files (sparse matrices)
- Wav file
and much more.
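For example, a round-trip through a MATLAB-style `.mat` file; the temporary path here is just for the demo:

```python
import os
import tempfile
import numpy as np
from scipy import io

# Save a NumPy array into a MATLAB .mat file, then load it back
arr = np.arange(6).reshape(2, 3)
path = os.path.join(tempfile.mkdtemp(), "demo.mat")
io.savemat(path, {"arr": arr})

loaded = io.loadmat(path)
print(np.array_equal(loaded["arr"], arr))  # True
```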
## A few quick ```SciPy``` examples
```
from scipy import optimize
def f(x):
return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
0.5 * (x[1] - x[0])**3 + x[1]]
x0 = [0, 0] # initial guess
sol = optimize.root(f, x0)
print(sol.x)
print(sol.success)
```
## Exercise
- Create a matrix (A) of random entries (your choice on distribution) with m > n (more rows than columns).
- Create a column vector $b ∈ R^m$.
- Find ```x``` that minimizes ```|Ax − b|^2```. What is the norm of the residual?
Hint: use scipy.linalg.lstsq
```
import numpy as np
from scipy import linalg
n = 100
m = 200
A = np.random.randn(m,n)
b = np.random.randn(m,1)
x = linalg.lstsq(A, b)
yhat = np.dot(A,x[0])
print(linalg.norm(yhat-b))
### scipy.linalg
end_string = '\n' + '-'*50 + '\n'
from scipy import linalg
arr = np.array([[1, 2],[3, 4]])
print(arr)
# find determinate
print("Det: {}".format(linalg.det(arr)), end = end_string)
print("Inv: \n {}".format(linalg.inv(arr)), end = end_string)
arr = np.arange(9).reshape((3, 3)) + np.diag([1, 0, 1])
print("X: \n {}".format(arr), end = end_string)
# u holds the left singular vectors (an orthonormal set of eigenvectors of arr arr.T)
# v holds the right singular vectors (an orthonormal set of eigenvectors of arr.T arr)
# s holds the singular values, the square roots of the eigenvalues of arr.T arr
u, s, v = linalg.svd(arr)
print("U: \n {}".format(u), "S: \n {}".format(s), "V {}".format(v), sep = end_string,end = end_string)
# Can compute eigenvalues, cholesky decompositions, eigenvectors etc
eig = linalg.eigvals(arr)
print("Eigs: \n {}".format(eig), end = end_string)
eigs, z = linalg.eigh(arr)
print("Eigs: \n {}".format(eigs), "Eigenvectors: \n {}".format(z), sep = end_string)
from scipy.interpolate import interp1d
#scipy.interpolate is useful for fitting a function from experimental data and thus evaluating points where no measure exists.
# The module is based on the FITPACK Fortran subroutines.
measured_time = np.linspace(0, 1, 10)
noise = (np.random.random(10)*2 - 1) * 1e-1
measures = np.sin(2 * np.pi * measured_time) + noise
linear_interp = interp1d(measured_time, measures)
cubic_interp = interp1d(measured_time, measures, kind = 'cubic')
time = np.linspace(0, 1, 50)
lin_approx = linear_interp(time)
cub_approx = cubic_interp(time)
plt.plot(measured_time, measures, 'o', time, lin_approx, '-',
time, cub_approx, '--')
plt.legend(['data', 'linear', 'cubic'], loc='best')
plt.show()
```
---
# `scikit-learn`: machine learning in Python
# `scikit-learn`: machine learning in Python
See: <http://scikit-learn.org/stable/>
* Simple and efficient tools for data mining and data analysis
* Accessible to everybody, and reusable in various contexts
* Built on NumPy, SciPy, and matplotlib
* Open source, commercially usable - BSD license
## `scikit-learn`: machine learning in Python
#### * Regression
#### * Clustering
#### * Dimensionality Reduction
#### * Model Selection
#### * Preprocessing
#### Supervised learning
* Regression and classification methods
* All types of models: logistic regression, ridge, SVM, lasso regression, decision trees... up to Neural networks (no GPU support)
* Stochastic Gradient Descent, Nearest-Neighbors,
* Also features semi-supervised learning, ensemble methods, feature selection methods, Naive Bayes, and Isotonic Regression
#### Unsupervised learning
* Gaussian Mixture Models, Manifold Learning
* Clustering, Bi-clustering
* PCA, LDA, Outlier detection, Covariance estimation
## `scikit-learn`
#### Definitely check out the examples here:
http://scikit-learn.org/stable/auto_examples/
## Loading an example dataset
First we will load some data to play with. The data we will use is a very simple
flower database known as the Iris dataset.
We have 150 observations of the iris flower specifying some measurements:
- sepal length, sepal width, petal length and petal width together with its subtype:
*Iris setosa*, *Iris versicolor*, *Iris virginica*.
To load the dataset into a Python object:
```
from sklearn import datasets
iris = datasets.load_iris()
```
This data is stored in the `.data` member, which is a `(n_samples, n_features)`
array.
```
print(iris.keys(), end = end_string)
print(iris.target.shape, end = end_string)
```
The class of each observation is stored in the `.target` attribute of the
dataset. This is an integer 1D array of length `n_samples`:
```
print(iris.target.shape)
np.unique(iris.target)
```
## k-Nearest neighbors classifier
The simplest possible classifier is the nearest neighbor: given a new
observation, take the label of the training samples closest to it in
*n*-dimensional space, where *n* is the number of *features* in each sample.
The k-nearest neighbors classifier internally uses an algorithm based on
ball trees to represent the samples it is trained on.
## k-Nearest neighbors classifier

## k-Nearest neighbors classifier
```python
# Create and fit a nearest-neighbor classifier
from sklearn import neighbors
knn = neighbors.KNeighborsClassifier()
knn.fit(iris.data, iris.target)
```
```python
knn.predict([[0.1, 0.2, 0.3, 0.4]])
```
```
from sklearn import datasets
iris = datasets.load_iris()
from sklearn import neighbors
knn = neighbors.KNeighborsClassifier()
knn.fit(iris.data, iris.target)
knn.predict([[0.1, 0.2, 0.3, 0.4]])
knn.get_params()
```
## Training set and testing set
When experimenting with learning algorithms, it is important not to test the
prediction of an estimator on the data used to fit the estimator.
Indeed, with the kNN estimator, we would always get perfect prediction on the training set.
```
### Manually
perm = np.random.permutation(iris.target.size)
iris.data = iris.data[perm]
iris.target = iris.target[perm]
knn.fit(iris.data[:100], iris.target[:100])
knn.score(iris.data[100:], iris.target[100:])
# Preferred
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
# split holding out 40 %
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.4, random_state=0)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
# We are drastically reducing the size of our training data, better to do k-fold cross validation
scores = cross_val_score(knn, iris.data, iris.target, cv=5)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
```
## K-means clustering
The simplest clustering algorithm is k-means. This divides a set into *k*
clusters, assigning each observation to a cluster so as to minimize the distance
of that observation (in *n*-dimensional space) to the cluster's mean; the means
are then recomputed. This operation is run iteratively until the clusters
converge, for a maximum of `max_iter` rounds.
(An alternative implementation of k-means is available in SciPy's `cluster`
package. The `scikit-learn` implementation differs from that by offering an
object API and several additional features, including smart initialization.)
```
from sklearn import cluster
k_means = cluster.KMeans(n_clusters=3)
labels= k_means.fit_predict(iris.data)
(iris.data)
print(labels[::10])
print(iris.target[::10])
```
## Attribution
This notebook is a Jupyter Notebook port of the `scikit-learn` lecture from the
open source [Scipy Lecture Notes][scipy-lec-notes] by Fabian Pedregosa and Gael
Varoquaux.
[scipy-lec-notes]: http://www.scipy-lectures.org/
## Another Example
```pandas + statsmodels + sklearn```
```
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import cross_val_score
import statsmodels.api as sm
from patsy import dmatrices
%matplotlib inline
pd.set_option('display.max_rows', 10)
print(sm.datasets.fair.SOURCE)
print(sm.datasets.fair.NOTE)
# load dataset
dta = sm.datasets.fair.load_pandas().data
# add "affair" column: 1 represents having affairs, 0 represents not
dta['affair'] = (dta.affairs > 0).astype(int)
dta
print("Affair proportion by childre: \n \n {}\n".format(dta.groupby('children')['affair'].mean()))
print("Affair proportion by age: \n \n {}".format(dta.groupby('age')['affair'].mean()))
dta.groupby('rate_marriage').mean()
dta.educ.hist()
plt.title('Histogram of Education')
plt.xlabel('Education Level')
_ = plt.ylabel('Frequency')
sns.set(rc={"figure.figsize": (20, 12), "lines.linewidth": 2.5}, font_scale=1.5)
sns.set_style("whitegrid")
np.random.seed(0)
sns.set_palette((sns.color_palette("rainbow", 8)))
# histogram of marriage rating
dta.rate_marriage.hist()
plt.title('Histogram of Marriage Rating')
plt.xlabel('Marriage Rating')
_ = plt.ylabel('Frequency')
# barplot of marriage rating grouped by affair (True or False)
pd.crosstab(dta.rate_marriage, dta.affair.astype(bool)).plot(kind='bar')
plt.title('Marriage Rating Distribution by Affair Status')
plt.xlabel('Marriage Rating')
_ = plt.ylabel('Frequency')
affair_yrs_married = pd.crosstab(dta.yrs_married, dta.affair.astype(bool))
affair_yrs_married.div(affair_yrs_married.sum(1).astype(float), axis=0).plot(kind='bar', stacked=True)
plt.title('Affair Percentage by Years Married')
plt.xlabel('Years Married')
plt.ylim([0,1.25])
_ = plt.ylabel('Percentage')
```
# Logistic Regression
```
# create dataframes with an intercept column and dummy variables for
# occupation and occupation_husb
y, X = dmatrices('affair ~ rate_marriage + age + yrs_married + children + \
religious + educ + C(occupation) + C(occupation_husb)',
dta, return_type="dataframe")
print(X.columns)
X = X.rename(columns = {'C(occupation)[T.2.0]':'occ_2',
'C(occupation)[T.3.0]':'occ_3',
'C(occupation)[T.4.0]':'occ_4',
'C(occupation)[T.5.0]':'occ_5',
'C(occupation)[T.6.0]':'occ_6',
'C(occupation_husb)[T.2.0]':'occ_husb_2',
'C(occupation_husb)[T.3.0]':'occ_husb_3',
'C(occupation_husb)[T.4.0]':'occ_husb_4',
'C(occupation_husb)[T.5.0]':'occ_husb_5',
'C(occupation_husb)[T.6.0]':'occ_husb_6'})
y = np.ravel(y)
X
# instantiate a logistic regression model, and fit with X and y
model = LogisticRegression()
model = model.fit(X, y)
# check the accuracy on the training set
model.score(X, y)
y.mean()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model2 = LogisticRegression()
model2.fit(X_train, y_train)
# predict class labels for the test set
predicted = model2.predict(X_test)
print("Predicted {} affairs in {} points".format(predicted.sum(), X_test.shape[0]))
# generate class probabilities
probs = model2.predict_proba(X_test)
print(probs)
# generate evaluation metrics
roc_auc = metrics.roc_auc_score(y_test, probs[:, 1])
acc = metrics.accuracy_score(y_test, predicted)
print("Accuracy score: {}".format(acc), end = end_string)
print("ROC-AUC score {}".format(roc_auc))
fpr, tpr, thresholds = metrics.roc_curve(y_test, probs[:, 1], pos_label=1)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
conf_matrix = metrics.confusion_matrix(y_test, predicted)
print(conf_matrix, end=end_string)
plot_confusion_matrix(conf_matrix, classes=['No Affair', 'Affair'],
title='Confusion matrix, without normalization')
# Classification report
report = metrics.classification_report(y_test, predicted)
print(report)
```
# More packages for optimization
- Convex optimization: `cvxpy`
- Neural Networks
- Tensorflow
- Keras
- PyTorch
| github_jupyter |
```
from robotsearch.robots import rct_robot
import numpy as np
import os
import pandas as pd
```
## Prepping CoronaWhy dataset
```
coy_df = pd.read_csv('/media/axhue/WD/Data/Coronawhy/Annotationv2.csv')
def clean_labels(labels):
labels.rename(columns=labels.iloc[0,5:-1].to_dict(),inplace=True)
labels.drop(0,inplace=True)
designs = labels.columns[5:-1]
labels.dropna(how='all',subset=designs,inplace=True)
labels.fillna(0,inplace=True)
labels.drop(labels.tail(1).index,inplace=True)
labels.drop(columns=['Assignee','General Notes'],inplace=True)
clean_labels(coy_df)
#from ..datasets.specter_embeddings import annotations_with_specter_embeddings
#relative importing is a pain
def annotations_with_specter_embeddings(annotations_dataset, specter_embeddings_filepath):
enriched_dataset = annotations_dataset.copy()
specter_embeddings = pd.read_csv(
specter_embeddings_filepath, header=None,
names=['cord_uid'] + [f"{i}" for i in range(768)])
enriched_dataset = enriched_dataset.merge(specter_embeddings, left_on="cord_uid", right_on="cord_uid")
return enriched_dataset
coy_df = annotations_with_specter_embeddings(coy_df,'/media/axhue/WD/Data/Coronawhy/cord_19_embeddings_4_24.csv')
```
## Prep Kaggle Dataset
```
kag_df = pd.read_csv('/media/axhue/WD/Data/Coronawhy/design.csv')
metadf = pd.read_csv('/media/axhue/WD/Data/Coronawhy/dataV8/metadata.csv')
id_map = dict(zip(metadf.sha,metadf.cord_uid))
cuid_df = kag_df.id.map(id_map,na_action='ignore')
no_corduid = kag_df[cuid_df.isna()].copy()
kag_df.id = cuid_df
kag_df.drop(index=no_corduid.index,inplace=True)
kag_df.rename(columns={'id':'cord_uid'},inplace=True)
```
## Eval RCT_Robot
```
rct_clf = rct_robot.RCTRobot()
rcts = kag_df[kag_df.label == 2]
len(coy_df[coy_df.RCT == '1'].loc[:,['cord_uid','RCT']])
merged_rcts = coy_df[coy_df.RCT == '1'].loc[:,['cord_uid','RCT']].merge(rcts,on='cord_uid',how='outer')
merged_rcts
texts = metadf[metadf.cord_uid.isin(merged_rcts.cord_uid.values)]
preds = []
for i,t in texts.iterrows():
pred = rct_clf.predict({"title": t['title'],
"abstract": t['abstract'],
"use_ptyp": False}, filter_type="balanced", filter_class="svm_cnn")
preds.append(pred[0])
pd_df = pd.DataFrame(preds)
pd_df.head()
pd_df.is_rct
from sklearn import metrics
metrics.f1_score([1 for i in range(len(merged_rcts))],
pd_df.is_rct.values)
```
# Continue to ensemble with Catboost
## Training Catboost
```
total_data = kag_df[kag_df.label != 2].copy()
total_data = annotations_with_specter_embeddings(total_data,
'/media/axhue/WD/Data/Coronawhy/cord_19_embeddings_4_24.csv')
from pathlib import Path
import numpy as np
from sklearn.metrics import accuracy_score, f1_score
from catboost import CatBoostClassifier, Pool
# https://github.com/catboost/tutorials/blob/2c16945f850503bfaa631176e87588bc5ce0ca1c/text_features/text_features_in_catboost.ipynb
def train_validate_catboost_model(train_data, test_data, train_features, target_feature, text_features, params, verbose=True):
X_train = train_data[train_features]
y_train = train_data[target_feature]
X_test = test_data[train_features]
y_test = test_data[target_feature]
train_pool = Pool(
X_train,
y_train,
feature_names=list(train_features),
text_features=text_features)
test_pool = Pool(
X_test,
y_test,
feature_names=list(train_features),
text_features=text_features)
catboost_default_params = {
'iterations': 1000,
'learning_rate': 0.03,
'eval_metric': 'Accuracy',
'task_type': 'GPU'
}
params = {**catboost_default_params, **params}
model = CatBoostClassifier(**params)
model.fit(train_pool, eval_set=test_pool, verbose=verbose)
prediction = model.predict(test_pool)
acc = accuracy_score(y_test, prediction)
f1 = f1_score(y_test, prediction, average=None)
f1_macro = f1_score(y_test, prediction, average="macro")
print(f"accuracy: {acc}")
print(f"f1: {f1}")
print(f"f1_macro: {f1_macro}")
return model, acc, f1, f1_macro, params
def single_label_multiclass_annotated_study_design(annotations_filepath, metadata_filepath):
"""
match David's annotated dataset (columns id, label) with the article metadata
file from kaggle cord-19 https://www.kaggle.com/davidmezzetti/cord19-study-design
:param annotations_filepath:
:param metadata_filepath:
:return: pd.DataFrame of annotations, with columns ["sha", "cord_uid", "title", "abstract", "label", "label_str"]
"""
label_number_to_study_name_mapping = {
1: "Meta analysis", #systematic review and meta-analysis
2: "Randomized control trial",
3: "Non-randomized trial",
4: "Prospective cohort", #prospective observational study
5: "Time-series analysis",
6: "Retrospective cohort", #retrospective observational study
7: "Cross-sectional", # cross sectional study
8: "Case control",
9: "Case study", #case series
10: "Simulation", #simulation
0: "Other"
}
annotations = pd.read_csv(annotations_filepath)
cord19_metadata = pd.read_csv(metadata_filepath)
annotations = annotations.rename(columns={"id": "sha"}).dropna()
annotations = annotations.merge(cord19_metadata, left_on="sha", right_on="sha")[
["sha", "cord_uid", "label", "title", "abstract"]]
annotations = annotations.dropna(subset=['abstract', 'title']).drop_duplicates(
subset=['abstract', 'title'])
annotations['label_string'] = annotations["label"].apply(label_number_to_study_name_mapping.get)
return annotations
def train_validate_catboost_model(train_data, test_data, train_features, target_feature, text_features, params, verbose=True):
X_train = train_data[train_features]
y_train = train_data[target_feature]
X_test = test_data[train_features]
y_test = test_data[target_feature]
train_pool = Pool(
X_train,
y_train,
feature_names=list(train_features),
text_features=text_features)
test_pool = Pool(
X_test,
y_test,
feature_names=list(train_features),
text_features=text_features)
catboost_default_params = {
'iterations': 1000,
'learning_rate': 0.03,
'eval_metric': 'Accuracy',
'task_type': 'GPU'
}
params = {**catboost_default_params, **params}
model = CatBoostClassifier(**params)
model.fit(train_pool, eval_set=test_pool, verbose=verbose)
prediction = model.predict(test_pool)
acc = accuracy_score(y_test, prediction)
f1 = f1_score(y_test, prediction, average=None)
f1_macro = f1_score(y_test, prediction, average="macro")
print(f"accuracy: {acc}")
print(f"f1: {f1}")
print(f"f1_macro: {f1_macro}")
return model, acc, f1, f1_macro, params
total_data = total_data.merge(metadf.loc[:,['cord_uid','title','abstract']])
total_data.label.value_counts()
filt_dat = total_data.dropna()
len(filt_dat)
mapp = {0:0,1:1}
for i in range(3,10):
mapp[i] = i-1
print(mapp)
filt_dat.head()
filt_dat.label = filt_dat.label.map(mapp,na_action='ignore').values
filt_dat.label.unique()
from sklearn.metrics import accuracy_score, f1_score
from catboost import CatBoostClassifier, Pool
results = []
params = {}
filt_dat = filt_dat.sample(frac=1.0)
test_set_len = int(len(filt_dat)/5)
for i in range(5):
test = filt_dat.iloc[i * test_set_len : (i+1)* test_set_len]
train = filt_dat[~filt_dat.index.isin(test.index)]
model, acc, f1, f1_macro, params = train_validate_catboost_model(
train, test,
['abstract', 'title'],
'label',
text_features=['abstract', 'title'], params={})
results.append({
"i": i,
"f1": f1,
"acc": acc,
'f1_macro': f1_macro,
})
model.predict([['this is a title','and the abstract']])
filt_dat.label.value_counts()
```
## combining models
```
from catboost import CatBoostClassifier, Pool
print(mapp)
class BetterTogether():
def __init__(self,rct_classifier,general_classifier):
self.rct_model = rct_classifier
self.gen_model = general_classifier
mapp = {0:0,1:1}
for i in range(2,9):
mapp[i] = i+1
self.gen_map = mapp
def process(self,title,abstract):
#step 1
rct_pred = self.rct_model.predict({"title": title,
"abstract": abstract,
"use_ptyp": False}, filter_type="balanced", filter_class="svm_cnn")
#step 2
if not rct_pred[0]['is_rct']:
gen_pred = self.gen_model.predict([[title,abstract]])[0]
output = np.argmax(gen_pred)
return self.gen_map[output],0.9
else:
return 2,rct_pred[0]['score']
gen_model =CatBoostClassifier().load_model('cat1.cbm')
duo = BetterTogether(rct_clf,gen_model)
testing = kag_df.merge(metadf[['cord_uid','title','abstract']],on='cord_uid').dropna()
res = {
'y_true': [],
'y_pred': [],
'y_prob': [],
}
for i,row in testing.iterrows():
y_pred,y_prob = duo.process(row['title'],row['abstract'])
res['y_true'].append(row['label'])
res['y_pred'].append(y_pred)
res['y_prob'].append(y_prob)
results = pd.DataFrame(res)
results.head()
from sklearn import metrics
results.y_true.value_counts()
report = metrics.classification_report(results.y_true.values,
results.y_pred.values
)
print(report)
```
| github_jupyter |
## Looking Through Tree-Ring Data in the Southwestern USA Using Pandas
**Pandas** provides a useful tool for the analysis of tabular data in Python, where previously we would have had to use lists of lists, or use R.
```
## Bringing in necessary pckages
%config InlineBackend.figure_format = 'svg'
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats.mstats as stat
```
### The Data
The dataset I included herein is an example of tree growth rates at many different sites in the southwestern United States (AZ, NM). Field crews used increment borers to core into live trees in these sites and extract core samples. These were then brought back to the lab and dated to determine tree age and growth rates. I obtained this during my master's work at Northern Arizona University.
In this dataset, each individual row is a tree.
The columns are as follows:
* site: The code for either the plot name or study site at which the tree was surveyed
* center.date: The innermost ring of the tree. The closest estimate for the establishment year of the tree
* dbh: The diameter of the tree (cm) at 1.37m above ground level
* dsh: The diameter of the tree (cm) at 0.4m above ground level
* Age: Estimated age of the tree when the core was collected. $Age = inv.yr-center.date$
* spp: Four letter species code for the tree. The first two letters of the genus and species
* inv.yr: The year in which the core was collected
* BA: The basal area of the tree. Basically the surface area of the top of a stump if the tree was cut at 1.37m. Given by the formula $BA = 0.00007854 * DBH^2$
* BA/Age: Just what it sounds like
* Annual BAI: An estimate of the square centimeters of basal area produced by the tree each year. A better measure of growth than annual growth on the core as it accounts for tree size in addition to ring thickness in the core.
Similar datasets are available through the International Tree Ring Data Bank (ITRDB), and can be found on the [ITRDB Webpage](https://data.noaa.gov/dataset/international-tree-ring-data-bank-itrdb)
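The two formulas above can be reproduced in a couple of helper functions (a sketch with function names of our choosing; we assume `dbh` is in centimeters and `annual.bai` in square centimeters per year, as the column list states):

```python
def basal_area(dbh_cm):
    # BA = 0.00007854 * DBH^2; with DBH in cm this yields square meters
    return 0.00007854 * dbh_cm ** 2

def annual_bai(dbh_cm, age_years):
    # average basal-area increment per year, converted to cm^2 (1 m^2 = 10,000 cm^2)
    return basal_area(dbh_cm) * 10000 / age_years
```

For a 20 cm tree, `basal_area(20)` gives roughly 0.0314 m².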
The following codeblock reads in the data and displays the first few rows of the pandas data frame. The path should be changed to the location of the .csv file.
```
## Change the path below if being run on a different computer
data = pd.read_csv(r"/Users/kyle/Google Drive/UC Boulder PhD Stuff/Classes/Fall 2016/Spatiotemporal Methods/Combined_BaiData.csv")
data.head()
print("There are cores for "+str(len(data))+" trees")
filtered_data = data.dropna()
print ("After removing rows with missing values, there are cores for "+str(len(filtered_data))+
" trees. \nSo, there were "+str(len(data)-len(filtered_data))+" rows that had NaN's")
```
#### A logical question may be:
*What species is growing the fastest across the sites?*
So, we can produce a simple boxplot to help visualize this.
```
filtered_data.boxplot(column = 'annual.bai', by = 'spp')
```
It appears that *Abies lasiocarpa* - subalpine fir - may be the fastest growing species overall. We can also look at the median values for the different species to verify this
```
filtered_data.groupby('spp', as_index=False)['annual.bai'].median()
## Adapted from http://stackoverflow.com/questions/35816865/create-vectors-for-kruskal-wallis-h-test-python
groupednumbers = {}
for grp in filtered_data['spp'].unique():
groupednumbers[grp] = filtered_data['annual.bai'][filtered_data['spp']==grp].values
args = groupednumbers.values()
h,p = stat.kruskalwallis(*args)
print("The Kruskal-Wallis H-statistic is: "+str(round(h,2))+" and... \nThe p-value is: "+str(p))
```
So the species have very different growth rates. We could take this a step further and perform pairwise comparisons between groups using Mann-Whitney tests with Bonferroni correction for multiple comparison, but a more robust analysis would likely use mixed-effects models or partial regression to account for the different growing conditions between sites, and perhaps to account for age and tree size as additional covariates.
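The pairwise follow-up mentioned above could be sketched like this (an illustration only, not part of the original analysis; it assumes a dict with the same shape as the `groupednumbers` dict built earlier):

```python
from itertools import combinations

from scipy.stats import mannwhitneyu

def pairwise_mannwhitney(groups, alpha=0.05):
    """Pairwise Mann-Whitney U tests with a Bonferroni-corrected alpha.
    groups: dict mapping group label -> array of values."""
    pairs = list(combinations(groups, 2))
    # Bonferroni: divide alpha by the number of comparisons
    corrected_alpha = alpha / len(pairs)
    results = []
    for a, b in pairs:
        u_stat, p = mannwhitneyu(groups[a], groups[b], alternative='two-sided')
        results.append((a, b, p, p < corrected_alpha))
    return results, corrected_alpha
```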
| github_jupyter |
# Guide
## Quick-start
Let's import our package and define two small lists that we would like to compare in similarity
```
from polyfuzz import PolyFuzz
from_list = ["apple", "apples", "appl", "recal", "house", "similarity"]
to_list = ["apple", "apples", "mouse"]
```
Then, we instantiate our PolyFuzz model and choose `TF-IDF` as our similarity measure. We match the two lists and
check the results.
**NOTE**: We can also use `EditDistance` and `Embeddings` as our matchers.
```
model = PolyFuzz("TF-IDF").match(from_list, to_list)
model.get_matches()
```
As expected, we can see high similarity between the `apple` words. Moreover, we could not find a single match for `similarity` which is why it is mapped to `None`.
#### Precision Recall Curve
Next, we would like to see how well our model is doing on our data. Although this method is unsupervised, we can use the similarity score as a proxy for the accuracy of our model (assuming we trust that similarity score).
A minimum similarity score might be used to identify when a match could be considered to be correct. For example, we can assume that if a similarity score passes 0.95 we are quite confident that the matches are correct. This minimum similarity score can be defined as **`Precision`** since it shows you how precise we believe the matches are at a minimum.
**`Recall`** can then be defined as the percentage of matches found at a certain minimum similarity score. A high recall means that for a certain minimum precision score, we find many matches.
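In other words, for every minimum similarity (precision) threshold we can count the fraction of matches that reach it. A minimal NumPy sketch of that idea (not PolyFuzz's actual implementation):

```python
import numpy as np

def recall_at_precision(similarities, thresholds=None):
    """For each minimum-similarity threshold, return the fraction of
    matches whose score reaches it (the 'recall' described above)."""
    sims = np.asarray(similarities, dtype=float)
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 11)
    return [(float(t), float(np.mean(sims >= t))) for t in thresholds]
```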
```
model.visualize_precision_recall(kde=True)
```
#### Group Matches
We can group the matches `To` as there might be significant overlap in strings in our `from_list`.
To do this, we calculate the similarity within strings in `from_list` and use single linkage to then group the strings with a high similarity.
```
model.group(link_min_similarity=0.75)
model.get_matches()
```
As can be seen above, we grouped `apple` and `apples` together to `apple` such that when a string is mapped to `apple` it will fall in the cluster of [`apples`, `apple`] and will be mapped to the first instance in the cluster which is `apples`.
For example, `appl` is mapped to `apple` and since `apple` falls into the cluster [`apples`, `apple`], `appl` will be mapped to `apples`.
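The remapping rule described above can be mimicked in a few lines (a toy sketch of the idea with a hypothetical helper name, not PolyFuzz's internals; the clusters are assumed to be given):

```python
def remap_to_cluster_head(matches, clusters):
    """Remap each match target to the first (representative) string of the
    cluster it belongs to, per the grouping rule above.
    matches: dict from_string -> to_string; clusters: lists of similar strings."""
    head = {}
    for cluster in clusters:
        for s in cluster:
            head[s] = cluster[0]  # first instance represents the cluster
    return {frm: head.get(to, to) for frm, to in matches.items()}

# e.g. with the cluster ["apples", "apple"], the match "appl" -> "apple"
# becomes "appl" -> "apples"
```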
## Multiple Models
You might be interested in running multiple models with different matchers and different parameters in order to compare the best results.
Fortunately, **`PolyFuzz`** allows you to do exactly this!
Below, you will find all models currently implemented in PolyFuzz and are compared against one another.
```
from polyfuzz.models import EditDistance, TFIDF, Embeddings, RapidFuzz
from polyfuzz import PolyFuzz
from jellyfish import jaro_winkler_similarity
from flair.embeddings import TransformerWordEmbeddings, WordEmbeddings
from_list = ["apple", "apples", "appl", "recal", "house", "similarity"]
to_list = ["apple", "apples", "mouse"]
# BERT
bert = TransformerWordEmbeddings('bert-base-multilingual-cased') # https://huggingface.co/transformers/pretrained_models.html
bert_matcher = Embeddings(bert, model_id="BERT", min_similarity=0)
# FastText
fasttext = WordEmbeddings('en-crawl')
fasttext_matcher = Embeddings(fasttext, min_similarity=0)
# TF-IDF
tfidf_matcher = TFIDF(n_gram_range=(3, 3), min_similarity=0, model_id="TF-IDF")
tfidf_large_matcher = TFIDF(n_gram_range=(3, 6), min_similarity=0)
# Edit Distance models with custom distance function
base_edit_matcher = EditDistance(n_jobs=1)
jellyfish_matcher = EditDistance(n_jobs=1, scorer=jaro_winkler_similarity)
# Edit distance with RapidFuzz --> slightly faster implementation than Edit Distance
rapidfuzz_matcher = RapidFuzz(n_jobs=1)
matchers = [bert_matcher, fasttext_matcher, tfidf_matcher, tfidf_large_matcher,
base_edit_matcher, jellyfish_matcher, rapidfuzz_matcher]
model = PolyFuzz(matchers).match(from_list, to_list)
model.visualize_precision_recall(kde=True)
```
#### Custom Grouper
We can even use one of the `polyfuzz.models` to be used as the grouper in case you would like to use something else than the standard `TF-IDF` matcher:
```
base_edit_grouper = EditDistance(n_jobs=1)
model.group(base_edit_grouper)
model.get_matches("Model 1")
model.get_clusters("Model 1")
```
## Custom Models
Although the options above are a great solution for comparing different models, what if you have developed your own? What if you want a different similarity/distance measure that is not defined in PolyFuzz? That is where custom models come in. If you follow the structure of PolyFuzz's `BaseMatcher` you can quickly implement any model you would like.
Below, we are implementing the `ratio` similarity measure from `RapidFuzz`.
```
import numpy as np
import pandas as pd
from rapidfuzz import fuzz
from polyfuzz.models import EditDistance, TFIDF, Embeddings, BaseMatcher
class MyModel(BaseMatcher):
def match(self, from_list, to_list):
# Calculate distances
matches = [[fuzz.ratio(from_string, to_string) / 100 for to_string in to_list] for from_string in from_list]
# Get best matches
mappings = [to_list[index] for index in np.argmax(matches, axis=1)]
scores = np.max(matches, axis=1)
# Prepare dataframe
matches = pd.DataFrame({'From': from_list,'To': mappings, 'Similarity': scores})
return matches
```
It is important that the `match` function takes in two lists of strings and returns a pandas dataframe with three columns:
* From
* To
* Similarity
Then, we can simply create an instance of `MyModel` and pass it through `PolyFuzz`:
```
from_list = ["apple", "apples", "appl", "recal", "house", "similarity"]
to_list = ["apple", "apples", "mouse"]
custom_matcher = MyModel()
model = PolyFuzz(custom_matcher).match(from_list, to_list)
model.visualize_precision_recall(kde=True)
```
| github_jupyter |
# TV Script Generation
In this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data.
## Get the Data
The data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text.
>* As a first step, we'll load in this data and look at some samples.
>* Then, you'll be tasked with defining and training an RNN to generate a new script!
```
from collections import Counter
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
```
## Explore the Data
Play around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
```
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
```
---
## Implement Pre-processing Functions
The first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:
- Lookup Table
- Tokenize Punctuation
### Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call `vocab_to_int`
- Dictionary to go from the id to word, we'll call `int_to_vocab`
Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
```
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
word_counts = Counter(text)
# sorting the words from most to least frequent in text occurrence
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
```
### Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.
Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( **.** )
- Comma ( **,** )
- Quotation Mark ( **"** )
- Semicolon ( **;** )
- Exclamation mark ( **!** )
- Question mark ( **?** )
- Left Parentheses ( **(** )
- Right Parentheses ( **)** )
- Dash ( **-** )
- Return ( **\n** )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
```
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
tokens = dict()
tokens['.'] = '<PERIOD>'
tokens[','] = '<COMMA>'
tokens['"'] = '<QUOTATION_MARK>'
tokens[';'] = '<SEMICOLON>'
tokens['!'] = '<EXCLAMATION_MARK>'
tokens['?'] = '<QUESTION_MARK>'
tokens['('] = '<LEFT_PAREN>'
tokens[')'] = '<RIGHT_PAREN>'
tokens['-'] = '<DASH>'
tokens['\n'] = '<NEW_LINE>'
return tokens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
```
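To illustrate how such a dictionary gets used, each symbol can be replaced by its token padded with spaces before splitting on whitespace (a sketch; the actual replacement happens inside `helper.preprocess_and_save_data`):

```python
def apply_token_lookup(text, tokens):
    # surround every symbol's token with spaces so it splits out as its own "word"
    for symbol, token in tokens.items():
        text = text.replace(symbol, ' {} '.format(token))
    return text.split()

# apply_token_lookup('bye!', {'!': '<EXCLAMATION_MARK>'})
# -> ['bye', '<EXCLAMATION_MARK>']
```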
## Pre-process all the data and save it
Running the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
```
# Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
```
## Build the Neural Network
In this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions.
### Check Access to GPU
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
```
## Input
Let's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.html#torch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.
You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.
```
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data,
batch_size=batch_size)
```
### Batching
Implement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.
>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.
For example, say we have these as input:
```
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
```
Your first `feature_tensor` should contain the values:
```
[1, 2, 3, 4]
```
And the corresponding `target_tensor` should just be the next "word"/tokenized word value:
```
5
```
This should continue with the second `feature_tensor`, `target_tensor` being:
```
[2, 3, 4, 5] # features
6 # target
```
```
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
y_len = len(words) - sequence_length
x, y = [], []
for idx in range(0, y_len):
idx_end = sequence_length + idx
x_batch = words[idx:idx_end]
x.append(x_batch)
# print("feature: ",x_batch)
batch_y = words[idx_end]
# print("target: ", batch_y)
y.append(batch_y)
# create Tensor datasets
data = TensorDataset(torch.from_numpy(np.asarray(x)), torch.from_numpy(np.asarray(y)))
# make sure the SHUFFLE your training data
data_loader = DataLoader(data, shuffle=False, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
```
### Test your dataloader
You'll have to modify this code to test a batching function, but it should look fairly similar.
Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.
Your code should return something like the following (likely in a different order, if you shuffled your data):
```
torch.Size([10, 5])
tensor([[ 28, 29, 30, 31, 32],
[ 21, 22, 23, 24, 25],
[ 17, 18, 19, 20, 21],
[ 34, 35, 36, 37, 38],
[ 11, 12, 13, 14, 15],
[ 23, 24, 25, 26, 27],
[ 6, 7, 8, 9, 10],
[ 38, 39, 40, 41, 42],
[ 25, 26, 27, 28, 29],
[ 7, 8, 9, 10, 11]])
torch.Size([10])
tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])
```
### Sizes
Your sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10).
### Values
You should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
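That offset relationship can be checked without torch at all (a sketch of just the indexing, using the same sliding-window scheme as `batch_data`):

```python
def make_pairs(words, sequence_length):
    """Sliding-window features with the next word as the target."""
    pairs = []
    for i in range(len(words) - sequence_length):
        # feature: words[i : i+seq_len]; target: the word right after the window
        pairs.append((words[i:i + sequence_length], words[i + sequence_length]))
    return pairs

# make_pairs(list(range(50)), 5)[28] -> ([28, 29, 30, 31, 32], 33)
```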
```
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)
print(sample_x.shape)
print(sample_x)
print(sample_y.shape)
print(sample_y)
```
---
## Build the Neural Network
Implement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.html#torch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class:
- `__init__` - The initialize function.
- `init_hidden` - The initialization function for an LSTM/GRU hidden state
- `forward` - Forward propagation function.
The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.
**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word.
### Hints
1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`
2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:
```
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
```
```
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5,lr=0.001):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# define embedding layer
self.embedding = nn.Embedding(vocab_size, embedding_dim)
## Define the LSTM
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# Define the final, fully-connected output layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# fully-connected layer
out = self.fc(lstm_out)
# reshape into (batch_size, seq_length, output_size)
out = out.view(batch_size, -1, self.output_size)
# get last batch
out = out[:, -1]
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Create two new tensors with sizes n_layers x batch_size x hidden_dim,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
```
### Define forward and backpropagation
Use the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:
```
loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)
```
And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.
**If a GPU is available, you should move your data to that GPU device, here.**
```
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
    """
    Forward and backward propagation on the neural network
    :param rnn: The PyTorch Module that holds the neural network
    :param optimizer: The PyTorch optimizer for the neural network
    :param criterion: The PyTorch loss function
    :param inp: A batch of input to the neural network
    :param target: The target output for the batch of input
    :param hidden: The hidden state from the previous batch
    :return: The loss and the latest hidden state Tensor
    """
    # move model to GPU, if available
    if train_on_gpu:
        rnn.cuda()
    # create new variables for the hidden state, otherwise
    # we'd backprop through the entire training history
    h = tuple([each.data for each in hidden])
    # zero accumulated gradients
    rnn.zero_grad()
    if train_on_gpu:
        inp, target = inp.cuda(), target.cuda()
    # get predicted outputs
    output, h = rnn(inp, h)
    # calculate loss
    loss = criterion(output, target)
    loss.backward()
    # 'clip_grad_norm' helps prevent the exploding gradient problem in RNNs / LSTMs
    nn.utils.clip_grad_norm_(rnn.parameters(), 5)
    optimizer.step()
    return loss.item(), h
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
```
## Neural Network Training
With the structure of the network complete and data ready to be fed in the neural network, it's time to train it.
### Train Loop
The training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the given number of epochs. Training progress is printed every `show_every_n_batches` batches. You'll set this parameter along with other parameters in the next section.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
    batch_losses = []
    rnn.train()
    print("Training for %d epoch(s)..." % n_epochs)
    for epoch_i in range(1, n_epochs + 1):
        # initialize hidden state
        hidden = rnn.init_hidden(batch_size)
        for batch_i, (inputs, labels) in enumerate(train_loader, 1):
            # make sure you iterate over completely full batches, only
            n_batches = len(train_loader.dataset)//batch_size
            if(batch_i > n_batches):
                break
            # forward, back prop
            loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
            # record loss
            batch_losses.append(loss)
            # printing loss stats
            if batch_i % show_every_n_batches == 0:
                print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
                    epoch_i, n_epochs, np.average(batch_losses)))
                batch_losses = []
    # returns a trained rnn
    return rnn
```
### Hyperparameters
Set and train the neural network with the following parameters:
- Set `sequence_length` to the length of a sequence.
- Set `batch_size` to the batch size.
- Set `num_epochs` to the number of epochs to train for.
- Set `learning_rate` to the learning rate for an Adam optimizer.
- Set `vocab_size` to the number of unique tokens in our vocabulary.
- Set `output_size` to the desired size of the output.
- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.
- Set `hidden_dim` to the hidden dimension of your RNN.
- Set `n_layers` to the number of layers/cells in your RNN.
- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.
If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
```
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 250
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 2000
print(len(vocab_to_int))
```
### Train
In the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, consider changing your hyperparameters. In general, you may get better results with larger `hidden_dim` and `n_layers` values, but larger models take longer to train.
> **You should aim for a loss less than 3.5.**
You should also experiment with different sequence lengths, which determine the size of the long-range dependencies that the model can learn.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
    rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
```
### Question: How did you decide on your model hyperparameters?
For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those?
**Answer:**
Going over the course material regarding embedding, I noticed that typical embedding dimensions are around 200 - 300 in size.
Upon reading from different sources:
- https://arxiv.org/pdf/1707.06799.pdf
- https://github.com/wojzaremba/lstm/blob/76870253cfca069477f06b7056af87f98490b6eb/main.lua#L44
- https://machinelearningmastery.com/tune-lstm-hyperparameters-keras-time-series-forecasting/
as well as going over the course examples (Skip-gram Word2Vec, Simple RNN, Sentiment Analysis with an RNN) and intuition from older courses, I chose the parameters.
I tried:
- sequence_length = 10, batch_size = 64, learning_rate = 0.01, embedding_dim = 200, hidden_dim = 200, n_layers = 2. Started with loss 9.25 and after 4 epochs the loss was still around 9.26.
- sequence_length = 10, batch_size = 64, learning_rate = 0.003, embedding_dim = 300, hidden_dim = 250, n_layers = 2 Started with Loss: 9.202159190654754 and at epoch 4 it was Loss: 9.206429640371343
- sequence_length = 20, batch_size = 20, learning_rate = 0.3, embedding_dim = 300, hidden_dim = 250, n_layers = 2 Started with Loss: 9.70091618013382, and at epoch 4 it was still around 9.6
- sequence_length = 20, batch_size = 124, learning_rate = 1, embedding_dim = 200, hidden_dim = 200, n_layers = 2 Started with Epoch: 1/10 Loss: 9.50547212076187 ..
At this point I realized I had some bugs in my code related to zero_grad, an extra dropout layer, and a sigmoid layer.
Fixed issues and retried:
- sequence_length = 10, batch_size = 128, learning_rate = 0.001, embedding_dim = 200, hidden_dim = 250, n_layers = 2 Started with:
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 4.944083527803421
...
Epoch: 4/10 Loss: 3.5780555000305174
...
Epoch: 7/10 Loss: 3.3266124720573425
...
- sequence_length = 10, batch_size = 124, learning_rate = 0.1, embedding_dim = 200, hidden_dim = 200, n_layers = 2 Started with
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.481069218158722
Epoch: 2/10 Loss: 5.025624033570289
Epoch: 3/10 Loss: 4.981013494968415
I stopped here because, even though the loss was decreasing, it seemed to converge much more slowly than the previous experiment with a lower learning rate and a slightly bigger hidden_dim.
The first experiment above reached:
`Epoch: 10/10 Loss: 3.1959674040079116`.
---
# Checkpoint
After running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
```
## Generate TV Script
With the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section.
### Generate Text
To generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses top-k sampling to introduce some randomness into choosing the most likely next word, given an output set of word scores!
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
    """
    Generate text using the neural network
    :param rnn: The PyTorch Module that holds the trained neural network
    :param prime_id: The word id to start the first prediction
    :param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
    :param pad_value: The value used to pad a sequence
    :param predict_len: The length of text to generate
    :return: The generated text
    """
    rnn.eval()
    # create a sequence (batch_size=1) with the prime_id
    current_seq = np.full((1, sequence_length), pad_value)
    current_seq[-1][-1] = prime_id
    predicted = [int_to_vocab[prime_id]]
    for _ in range(predict_len):
        if train_on_gpu:
            current_seq = torch.LongTensor(current_seq).cuda()
        else:
            current_seq = torch.LongTensor(current_seq)
        # initialize the hidden state
        hidden = rnn.init_hidden(current_seq.size(0))
        # get the output of the rnn
        output, _ = rnn(current_seq, hidden)
        # get the next word probabilities
        p = F.softmax(output, dim=1).data
        if train_on_gpu:
            p = p.cpu()  # move to cpu
        # use top_k sampling to get the index of the next word
        top_k = 5
        p, top_i = p.topk(top_k)
        top_i = top_i.numpy().squeeze()
        # select the likely next word index with some element of randomness
        p = p.numpy().squeeze()
        word_i = np.random.choice(top_i, p=p/p.sum())
        # retrieve that word from the dictionary
        word = int_to_vocab[word_i]
        predicted.append(word)
        # the generated word becomes part of the next "current sequence" and the cycle continues
        # (move the tensor back to cpu before np.roll)
        current_seq = np.roll(current_seq.cpu(), -1, 1)
        current_seq[-1][-1] = word_i
    gen_sentences = ' '.join(predicted)
    # replace punctuation tokens
    for key, token in token_dict.items():
        ending = ' ' if key in ['\n', '(', '"'] else ''
        gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
    gen_sentences = gen_sentences.replace('\n ', '\n')
    gen_sentences = gen_sentences.replace('( ', '(')
    # return all the sentences
    return gen_sentences
```
### Generate a New Script
It's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:
- "jerry"
- "elaine"
- "george"
- "kramer"
You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
```
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
```
#### Save your favorite scripts
Once you have a script that you like (or find interesting), save it to a text file!
```
# save script to a text file
with open("generated_script_2.txt", "w") as f:
    f.write(generated_script)
```
# The TV Script is Not Perfect
It's OK if the TV script doesn't make perfect sense; it should look like alternating lines of dialogue. Here is one example of a few generated lines.
### Example generated script
>jerry: what about me?
>
>jerry: i don't have to wait.
>
>kramer:(to the sales table)
>
>elaine:(to jerry) hey, look at this, i'm a good doctor.
>
>newman:(to elaine) you think i have no idea of this...
>
>elaine: oh, you better take the phone, and he was a little nervous.
>
>kramer:(to the phone) hey, hey, jerry, i don't want to be a little bit.(to kramer and jerry) you can't.
>
>jerry: oh, yeah. i don't even know, i know.
>
>jerry:(to the phone) oh, i know.
>
>kramer:(laughing) you know...(to jerry) you don't know.
You can see that there are multiple characters that say (somewhat) complete sentences, but it doesn't have to be perfect! It takes quite a while to get good results, and often, you'll have to use a smaller vocabulary (and discard uncommon words), or get more data. The Seinfeld dataset is about 3.4 MB, which is big enough for our purposes; for script generation you'll want more than 1 MB of text, generally.
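One common way to use a smaller vocabulary is to replace words that appear fewer than some minimum number of times with a single unknown token before building the word:id dictionaries. A minimal sketch (the `prune_vocab` helper and `<UNK>` token are hypothetical, not part of this project's `helper.py`):

```python
from collections import Counter

def prune_vocab(tokens, min_count=2, unk_token='<UNK>'):
    """Replace tokens that appear fewer than min_count times with unk_token."""
    counts = Counter(tokens)
    return [t if counts[t] >= min_count else unk_token for t in tokens]

tokens = ['jerry', 'hello', 'jerry', 'newman', 'jerry', 'hello']
pruned = prune_vocab(tokens, min_count=2)
# 'newman' appears only once, so it becomes '<UNK>'
```

Shrinking the vocabulary this way reduces the size of the embedding and output layers, which usually speeds up training at the cost of losing rare words.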
# Submitting This Project
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "helper.py" and "problem_unittests.py" files in your submission. Once you download these files, compress them into one zip file for submission.
```
# hide
# all_tutorial
! [ -e /content ] && pip install -Uqq mrl-pypi # upgrade mrl on colab
```
# Tutorial - RL Train Cycle Overview
>Overview of the RL training cycle
## RL Train Cycle Overview
The goal of this tutorial is to walk through the RL fit cycle to familiarize ourselves with the `Events` cycle and get a better understanding of how `Callback` and `Environment` classes work.
## Performance Notes
The workflow in this notebook is more CPU-constrained than GPU-constrained due to the need to evaluate samples on CPU. If you have a multi-core machine, it is recommended that you uncomment and run the `set_global_pool` cells in the notebook. This will trigger the use of multiprocessing, which will result in 2-4x speedups.
This notebook may run slowly on Colab due to CPU limitations.
If running on Colab, remember to change the runtime to GPU.
## High Level Overview
### The Environment
At the highest level, we have the `Environment` class. The `Environment` holds together several sub-modules and orchestrates them during the fit loop. The following are contained in the `Environment`:
- `agent` - This is the actual model we're training
- `template_cb` - this holds a `Template` class that we use to define our chemical space
- `samplers` - samplers generate new samples to train on
- `buffer` - the buffer collects and distributes samples from all the `samplers`
- `rewards` - rewards score samples
- `losses` - losses generate values we can backpropagate through
- `log` - the log holds a record of all samples in the training process
### Callbacks and the Event Cycle
Each one of the above items is a `Callback`. A `Callback` is a general class that can hook into the `Environment` fit cycle at a number of pre-defined `Events`. When the `Environment` calls a specific `Event`, the event name is passed to every callback in the `Environment`. If a given `Callback` has a defined method named after the event, that method is called. This creates a very flexible system for customizing training loops.
We'll be looking more at `Events` later. For now, we'll just list them in brief. These are the events called during the RL training cycle in the order they are executed:
- `setup` - called when the `Environment` is created, used to set up values
- `before_train` - called before training is started
- `build_buffer` - draws samples from `samplers` into the `buffer`
- `filter_buffer` - filters samples in the buffer
- `after_build_buffer` - called after buffer filtering. Used for cleanup, logging, etc
- `before_batch` - called before a batch starts, used to set up the `batch state`
- `sample_batch` - samples are drawn from `samplers` and the `buffer` into the `batch state`
- `before_filter_batch` - allows preprocessing of samples before filtering
- `filter_batch` - filters samples in `batch state`
- `after_sample` - used for calculating sampling metrics
- `before_compute_reward` - used to set up any values needed for reward computation
- `compute_reward` - used by `rewards` to compute rewards for all samples in the `batch state`
- `after_compute_reward` - used for logging reward metrics
- `reward_modification` - modify rewards in ways not tracked by the log
- `after_reward_modification` - log reward modification metrics
- `get_model_outputs` - generate necessary tensors from the model
- `after_get_model_outputs` - used for any processing required prior to loss calculation
- `compute_loss` - compute loss values
- `zero_grad` - zero grad
- `before_step` - used for computation before the optimizer step (e.g. gradient clipping)
- `step` - step optimizer
- `after_batch` - compute batch stats
- `after_train` - final event after all training batches
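To make the dispatch mechanism concrete, the event broadcast can be sketched in a few lines (the `MiniEnvironment` and `PrintingCallback` classes here are hypothetical simplifications; the real MRL `Environment` and `Callback` carry much more state):

```python
class Callback:
    """Base class: any method named after an event hooks into the fit loop."""
    pass

class PrintingCallback(Callback):
    def before_train(self):
        print('about to train')

class MiniEnvironment:
    def __init__(self, cbs):
        self.cbs = cbs

    def __call__(self, event_name):
        # broadcast the event name to every callback that implements it
        for cb in self.cbs:
            fn = getattr(cb, event_name, None)
            if fn is not None:
                fn()

mini_env = MiniEnvironment([PrintingCallback()])
mini_env('before_train')  # PrintingCallback responds
mini_env('after_train')   # no callback implements this event; nothing happens
```

This is why calls like `env('before_train')` appear throughout the walkthrough below: calling the environment with an event name runs that event on every registered callback.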
```
import sys
sys.path.append('..')
from mrl.imports import *
from mrl.core import *
from mrl.chem import *
from mrl.templates.all import *
from mrl.torch_imports import *
from mrl.torch_core import *
from mrl.layers import *
from mrl.dataloaders import *
from mrl.g_models.all import *
from mrl.vocab import *
from mrl.policy_gradient import *
from mrl.train.all import *
from mrl.model_zoo import *
from collections import Counter
# set_global_pool(min(10, os.cpu_count()))
```
## Getting Started
We start by creating all the components we need to train a model
### Agent
The `Agent` is the actual model we want to train. For this example, we will use the `LSTM_LM_Small_ZINC` model, which is a `LSTM_LM` model trained on a chunk of the ZINC database.
The agent will actually contain two versions of the model. The main model that we will train with every update iteration, and a baseline model which is updated as an exponentially weighted moving average of the main model. Both models are used in the RL training algorithm we will set up later
```
agent = LSTM_LM_Small_ZINC(drop_scale=0.5,opt_kwargs={'lr':5e-5})
```
### Template
The `Template` class is used to control the chemical space. We can set parameters on what molecular properties we want to allow. For this example, we set the following:
- Hard Filters - must have qualities
- `ValidityFilter` - must be a valid chemical structure
- `SingleCompoundFilter` - samples must be single compounds
- `RotBondFilter` - compounds can have at most 8 rotatable bonds
- `ChargeFilter` - compounds must have no net charge
- Soft Filters - nice to have qualities
- `QEDFilter` - Compounds get a score bonus of +1 if their QED value is greater than 0.5
  - `SAFilter` - compounds get a score bonus of +1 if their SA score is less than 5
We then pass the `Template` to the `TemplateCallback` which integrates the template into the fit loop. Note that we pass `prefilter=True` to the `TemplateCallback`, which ensures compounds that don't meet our hard filters are removed from training
```
template = Template([ValidityFilter(),
                     SingleCompoundFilter(),
                     RotBondFilter(None, 8),
                     ChargeFilter(0, 0)],
                    [QEDFilter(0.5, None, score=1.),
                     SAFilter(None, 5, score=1.)])

template_cb = TemplateCallback(template, prefilter=True)
```
### Reward
For the reward, we will load a scikit-learn linear regression model. This model was trained to predict affinity against erbB1 using molecular fingerprints as inputs
This score function is extremely simple and likely won't translate well to real affinity. It is used as a lightweight example
```
class FP_Regression_Score():
    def __init__(self, fname):
        self.model = torch.load(fname)
        self.fp_function = partial(failsafe_fp, fp_function=ECFP6)

    def __call__(self, samples):
        mols = to_mols(samples)
        fps = maybe_parallel(self.fp_function, mols)
        fps = [fp_to_array(i) for i in fps]
        x_vals = np.stack(fps)
        preds = self.model.predict(x_vals)
        return preds
# if in the repo
reward_function = FP_Regression_Score('../files/erbB1_regression.sklearn')
# if in Colab:
# download_files()
# reward_function = FP_Regression_Score('files/erbB1_regression.sklearn')
reward = Reward(reward_function, weight=1.)
aff_reward = RewardCallback(reward, 'aff')
```
We can think of the score function as a black box that takes in samples (SMILES strings) and returns a single numeric score for each sample. Any score function that follows this paradigm can be integrated into MRL
```
samples = ['Brc1cc2c(NCc3cccs3)ncnc2s1',
'Brc1cc2c(NCc3ccncc3)ncnc2s1']
reward_function(samples)
```
### Loss Function
For our loss, we will use the `PPO` reinforcement learning algorithm. See the [PPO](https://arxiv.org/pdf/1707.06347.pdf) paper for full details.
The gist of it is that the loss function takes a batch of samples and directs the model to increase the probability of above-average samples (relative to the batch mean) and decrease the probability of below-average samples.
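The clipped surrogate objective at the heart of PPO can be sketched in a few lines (a simplified illustration with hypothetical names, not MRL's exact implementation; it ignores the value, entropy, and KL terms configured in the cell below):

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, cliprange=0.3):
    """Clipped PPO surrogate loss (illustrative sketch).

    Above-average samples (positive advantage) have their probability
    pushed up, below-average samples pushed down, with the probability
    ratio clipped to limit the size of each policy update.
    """
    ratio = np.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - cliprange, 1 + cliprange) * advantages
    # take the pessimistic (smaller) objective, negate for minimization
    return -np.mean(np.minimum(unclipped, clipped))

# with identical old/new log-probs the ratio is 1 and nothing is clipped
loss_val = ppo_clip_loss(np.zeros(3), np.zeros(3), np.array([1.0, 2.0, 3.0]))
```

The `cliprange=0.3` value mirrors the `cliprange` argument passed to `PPO` below; clipping the ratio keeps any single update from moving the policy too far from the one that generated the samples.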
```
pg = PPO(0.99,
         0.5,
         lam=0.95,
         v_coef=0.5,
         cliprange=0.3,
         v_cliprange=0.3,
         ent_coef=0.01,
         kl_target=0.03,
         kl_horizon=3000,
         scale_rewards=True)

loss = PolicyLoss(pg, 'PPO',
                  value_head=ValueHead(256),
                  v_update_iter=2,
                  vopt_kwargs={'lr':1e-3})
```
### Samplers
`Samplers` fill the role of generating samples to train on. We will use four samplers for this run:
- `sampler1`: `ModelSampler` - this sampler will draw samples from the main model in the `Agent`. We set `buffer_size=1000`, which means we will generate 1000 samples every time we build the buffer. We set `p_batch=0.5`, which means during training, 50% of each batch will be sampled on the fly from the main model and the rest of the batch will come from the buffer
- `sampler2`: `ModelSampler` - this sampler is the same as `sampler1`, but we draw from the baseline model instead of the main model. We set `p_batch=0.`, so this sampler will only contribute to the buffer
- `sampler3`: `LogSampler` - this sampler looks through the log of previous samples. Based on our input arguments, it grabs the top `95` percentile of samples in the log, and randomly selects `100` samples from that subset
- `sampler4`: `DatasetSampler` - this sampler is seeded with erbB1 training data used to train the score function. This sampler will randomly select 4 samples from the dataset to add to the buffer
```
gen_bs = 1500
# if in the repo
df = pd.read_csv('../files/erbB1_affinity_data.csv')
# if in Colab
# download_files()
# df = pd.read_csv('files/erbB1_affinity_data.csv')
df = df[df.neg_log_ic50>9.2]
sampler1 = ModelSampler(agent.vocab, agent.model, 'live', 1000, 0.5, gen_bs)
sampler2 = ModelSampler(agent.vocab, agent.base_model, 'base', 1000, 0., gen_bs)
sampler3 = LogSampler('samples', 'rewards', 10, 95, 100)
sampler4 = DatasetSampler(df.smiles.values, 'erbB1_data', buffer_size=4)
samplers = [sampler1, sampler2, sampler3, sampler4]
```
### Other Callbacks
We'll add three more callbacks:
- `MaxCallback`: this will grab the max reward within a batch that came from the source `live`. `live` is the name we gave to `sampler1` above. This means the max callback will grab all outputs from `sampler1` corresponding to samples from the live model and add the largest to the batch metrics
- `PercentileCallback`: this does the same as `MaxCallback` but instead of printing the maximum score, it prints the 90th percentile score
- `NoveltyReward`: this is a reward modification that gives a bonus score of `0.05` to new samples (i.e. samples that haven't appeared before in training)
```
live_max = MaxCallback('rewards', 'live')
live_p90 = PercentileCallback('rewards', 'live', 90)
new_cb = NoveltyReward(weight=0.05)
cbs = [new_cb, live_p90, live_max]
```
## Training Walkthrough
Now we will step through the training cycle looking at how each callback event is used
### Setup
The first event occurs when we create our `Environment` using the callbacks we set up before. Instantiating the `Environment` registers all callbacks and runs the `setup` event. Many callbacks use the `setup` event to add terms to the batch log or the metrics log.
```
env = Environment(agent, template_cb, samplers=samplers, rewards=[aff_reward],
                  losses=[loss], cbs=cbs)
```
Inside the environment, we just created a `Buffer` and a `Log`.
The `Buffer` holds a list of samples, which is currently empty
```
env.buffer
env.buffer.buffer
```
The `Log` holds a number of containers for tracking training outputs
- `metrics`: dictionary of batch metrics. Each key maps to a list where each value in the list is the metric term for given batch
- `batch_log`: dictionary of batch items. Each key maps to a list. Each element in that list is a list containing the batch values for that key in a given batch
- `unique_samples`: dictionary of unique samples and the rewards for those samples. Useful for looking up if a sample has been seen before
- `df`: dataframe of unique samples and all associated values stored in the `batch_log`
We can see that these log terms have already been populated during the `setup` event
```
env.log.metrics
env.log.batch_log
env.log.df
```
The keys in the above dictionaries were added by the associated callbacks. For example, look at the `setup` method in `ModelSampler`, the type of sampler we used for `sampler1`:
```
def setup(self):
    if self.p_batch>0. and self.track:
        log = self.environment.log
        log.add_metric(f'{self.name}_diversity')
        log.add_metric(f'{self.name}_valid')
        log.add_metric(f'{self.name}_rewards')
        log.add_metric(f'{self.name}_new')
```
We gave `sampler1` the name `live`. As a result, the terms `live_diversity`, `live_valid`, `live_rewards` and `live_new` were added to the metrics.
We can also look at the `setup` method of our loss function `loss`:
```
def setup(self):
    if self.track:
        log = self.environment.log
        log.add_metric(self.name)
        log.add_log(self.name)
```
This is responsible for the `PPO` terms in the `batch_log` and the `metrics`. The PPO metrics term will store the average PPO loss value across a batch, while the PPO batch log term will store the PPO value for each item in a batch
### The Fit Cycle
At this point, we could start training using `Environment.fit`. We could call `env.fit(200, 90, 10, 2)` to train for 10 batches with a batch size of 200. For this tutorial, we will step through each part of the fit cycle and observe what is happening
### Before Train
The first stage of the fit cycle is the `before_train` stage. This sets the batch size and sequence length based on the inputs to `Environment.fit` (which we will set manually) and prints the top of the log
```
env.bs = 200 # batch size of 200
env.sl = 90 # max sample length of 90 steps
mb = master_bar(range(1))
env.log.pbar = mb
env.report = 1
env.log.report = 1 # report stats every batch
env('before_train')
```
### Build Buffer
The next stage of the cycle is the `build_buffer` stage. This consists of the following events:
- `build_buffer`: samplers add items to the buffer
- `filter_buffer`: the buffer is filtered
- `after_build_buffer`: use as needed
Going into this stage, our buffer is empty:
```
env.buffer.buffer
```
#### build_buffer
By calling the `build_buffer` event, our samplers will add items to the buffer
```
env('build_buffer')
```
Now we have 2004 items in the buffer.
```
len(env.buffer.buffer)
```
We can use the `buffer_sources` attribute to see where each item came from. We have 1000 items from `live_buffer` which corresponds to `sampler1`, sampling from the main model.
We have 1000 items from `base_buffer` which corresponds to `sampler2`, sampling from the baseline model.
We have 4 items from `erbB1_data_buffer`, our dataset sampler (`sampler4`).
Our log sampler, `sampler3` was set to start sampling after 10 training iterations, so we don't currently have any samples from that sampler
```
Counter(env.buffer.buffer_sources)
```
#### filter_buffer
It's likely some of these samples don't match the compound requirements defined in our `Template`, so we want to filter the buffer down to passing compounds. This is what the `filter_buffer` event does. In this example, the only callback doing any buffer filtering is the template callback, but `filter_buffer` can be used to implement any form of buffer filtering.
Any callback that passes a list of boolean values to `Buffer._filter_buffer` can filter the buffer.
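For illustration, a custom buffer filter could look roughly like this (the `MaxLengthFilter` class and `length_mask` helper are hypothetical; in the notebook the class would subclass MRL's `Callback`, whose exact base-class signature may differ):

```python
def length_mask(samples, max_len=100):
    """Boolean mask over samples: True for samples short enough to keep."""
    return [len(s) <= max_len for s in samples]

class MaxLengthFilter:  # in the notebook this would subclass mrl's Callback
    def __init__(self, max_len=100):
        self.max_len = max_len

    def filter_buffer(self):
        # called by the Environment during the filter_buffer event;
        # self.environment is attached when the callback is registered
        buffer = self.environment.buffer
        buffer._filter_buffer(length_mask(buffer.buffer, self.max_len))
```

Any boolean mask with one entry per buffer item works the same way; `True` keeps a sample and `False` drops it.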
After filtering, we have 1829 remaining samples
```
env('filter_buffer')
len(env.buffer.buffer)
Counter(env.buffer.buffer_sources)
```
#### after_build_buffer
Next is the `after_build_buffer` event. None of our current callbacks make use of this event, but it exists to allow for evaluation/postprocessing/whatever after buffer creation.
### Sample Batch
The next event stage is the `sample_batch` stage. This consists of the following events:
- `before_batch`: set up/refresh any required state prior to batch sampling
- `sample_batch`: draw one batch of samples
- `before_filter_batch`: evaluate unfiltered batch
- `filter_batch`: filter batch
- `after_sample`: compute sample based metrics
#### before_batch
This event is used to create a new `BatchState` for the environment. The batch state is a container designed to hold any values required by the batch
```
env.batch_state = BatchState()
env('before_batch')
```
Currently the batch state only has placeholder values for commonly generated terms
```
env.batch_state
```
#### sample_batch
Now we actually draw samples to form a batch. All of our `Sampler` objects have a `p_batch` value, which designated what percentage of the batch should come from that sampler. Batch sampling is designed such that individual sampler `p_batch` values are respected, and any remaining batch percentage comes from the buffer.
Only `sampler1` has `p_batch>0.`, with a value of `p_batch=0.5`. This means 50% of the batch will be sampled on the fly from `sampler1`, and the remaining 50% of the batch will come from the buffer.
Using a hybrid of live sampling and buffer sampling seems to work best. That said, it is possible to have every batch be 100% buffer samples (like offline RL), or have 100% be live samples (like online RL)
```
env('sample_batch')
```
Now we can see we've populated several terms in the batch state. `BatchState.samples` now has a list of samples. `BatchState.sources` has the source of each sample.
We also added `BatchState.live_raw` and `BatchState.base_raw`. These terms hold the outputs of `sampler1` and `sampler2`. When we filter `BatchState.samples`, we can refer to the `_raw` terms to see what samples were removed.
Note that `BatchState.base_raw` is an empty list since `sampler2.p_batch=0.`
```
env.batch_state.keys()
```
`BatchState.sources` holds the source of each sample. We have 100 samples from `live`, which corresponds to our on the fly samples from `sampler1`. The remaining 100 samples come from `live_buffer` and `base_buffer`. This means they came from either `sampler1` (live) or `sampler2` (base) by way of being sampled from the buffer
```
Counter(env.batch_state['sources'])
env.batch_state['samples'][:5]
env.batch_state['sources'][:5]
env.batch_state['live_raw'][:5]
env.batch_state['base_raw']
```
#### before_filter_batch
This event is not used by any of our current callbacks. It provides a hook to influence the batch state prior to filtering
#### filter_batch
Now the batch will be filtered by our `Template`, as well as any other callbacks with a `filter_batch` method
```
env('filter_batch')
```
We can see that 13 of our 200 samples were removed by filtering
```
len(env.batch_state['samples'])
```
We can compare the values in `BatchState.samples` and `BatchState.live_raw` to see what was filtered
```
raw_samples = env.batch_state['live_raw']
filtered_samples = [env.batch_state['samples'][i] for i in range(len(env.batch_state['samples']))
if env.batch_state.sources[i]=='live']
len(filtered_samples), len(raw_samples)
# compounds removed by filtering
[i for i in raw_samples if i not in filtered_samples]
```
#### after_sample
The `after_sample` event is used to calculate metrics related to sampling
```
env('after_sample')
```
We can see that several values have been added to `Environment.log.metrics`
- `new`: percent of samples that have not been seen before
- `diversity`: number of unique samples relative to the number of total samples
- `bs`: true batch size after filtering
- `valid`: percent of samples that passed filtering
- `live_diversity`: number of unique samples relative to the number of total samples from `sampler1`
- `live_valid`: percent of samples that passed filtering from `sampler1`
- `live_new`: percent of samples that have not been seen before from `sampler1`
```
env.log.metrics
```
### Compute Reward
After we sample a batch, we enter the `compute_reward` stage. This consists of the following events:
- `before_compute_reward` - used to set up any values needed for reward computation
- `compute_reward` - used by `rewards` to compute rewards for all samples in the `batch state`
- `after_compute_reward` - used for logging reward metrics
- `reward_modification` - modify rewards in ways not tracked by the log
- `after_reward_modification` - log reward modification metrics
#### before_compute_reward
This event can be used to set up any values needed for reward computation. Most rewards only need the raw samples as inputs, but rewards can use other inputs if needed. The only requirement for a reward is that it returns a tensor with one value per batch item.
By default, the `Agent` class will tensorize the samples present at this step. Our `PPO` loss will also add placeholder values for the terms needed by that function
```
env('before_compute_reward')
```
A number of new items have populated the batch state
```
env.batch_state.keys()
env.batch_state.x # x tensor
env.batch_state.y # y tensor
env.batch_state.mask # padding mask
```
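As noted above, the only contract a reward must satisfy is returning one numeric value per batch item. A toy reward illustrating that contract (the name and scoring rule are hypothetical, and plain Python floats stand in for tensor entries):

```python
# Minimal sketch of the reward contract: take the batch samples, return one
# numeric score per item. The scoring rule here is a made-up toy example
# that rewards longer SMILES strings, capped at 1.0.
def length_reward(samples):
    return [min(len(s) / 50.0, 1.0) for s in samples]

scores = length_reward(["CCO", "c1ccccc1"])
assert len(scores) == 2  # one value per batch item
```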
#### compute_reward
This step actually computes rewards. The `BatchState` has a tensor of 0s as a placeholder for reward values. Rewards will compute a numeric score for each item in the batch and add it to `BatchState.rewards`
```
env.batch_state.rewards
env('compute_reward')
env.batch_state.rewards
```
So where did these rewards come from?
One reward term comes from our `Template`. We specified soft rewards for compounds with `QED>=0.5` and `SA<=5`. Compounds could score a maximum of 2 from the template.
We also have the reward from the erbB1 regression model we set up earlier.
The specific rewards from each of these sources are logged in the `BatchState`
For the `Template`, we have `BatchState.template` and `BatchState.template_passes`
```
env.batch_state.keys()
```
Template scores:
```
env.batch_state.template
```
`BatchState.template_passes` shows which samples passed the hard filters. Since we decided to prefilter with our template earlier, all remaining samples are passing
```
env.batch_state.template_passes
```
And here we have the erbB2 regression scores
```
env.batch_state.aff
```
#### after_compute_reward
This event is used to calculate metrics on the rewards
```
env('after_compute_reward')
env.log.metrics
```
#### reward_modification
The reward modification event can be thought of as a second reward that isn't logged. The reason for including this is to allow for transient, "batch context" rewards that don't affect logged values.
When we set up our callbacks earlier, we had a term
`new_cb = NoveltyReward(weight=0.05)`
which adds a bonus score of 0.05 to new, never-before-seen samples. The point of this callback is to give the model a soft incentive to generate novel samples.
We want this score to impact our current batch. However, if we treated it the same as our actual rewards, the samples would be saved into `env.log` with their scores inflated by 0.05. Later, when our `LogSampler` samples from the log, the sampling would be influenced by a score that was only supposed to be given once.
Separating out rewards and reward modifications lets us avoid this
```
env('reward_modification')
env.batch_state.novel
```
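To make the distinction concrete, here is a small sketch (illustrative values only, not the library's real data structures) of how the logged reward and the transient novelty bonus stay separate:

```python
# The logged reward is saved unchanged, while the novelty bonus only
# affects the value used for the current batch's loss.
seen = {"CCO"}                                   # samples observed before
samples = ["CCO", "c1ccccc1"]
logged_rewards = [0.8, 0.9]                      # stored in the log as-is
bonus = [0.05 if s not in seen else 0.0 for s in samples]
effective_rewards = [r + b for r, b in zip(logged_rewards, bonus)]
print(effective_rewards)  # the novel sample gets the 0.05 bump
```

When a later sampler draws from the log, it sees `logged_rewards`, not `effective_rewards`, so the one-time bonus cannot bias future sampling.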
#### after_reward_modification
Similar to `after_compute_reward`, this event can be used to compute stats on reward modifications
```
env('after_reward_modification')
env.log.metrics
```
### Get Model Outputs
After computing rewards, we move to set up our loss calculation. The `get_model_outputs` stage centers on generating the values that we will backpropagate through. This stage consists of the following events:
- `get_model_outputs` - generate necessary tensors from the model
- `after_get_model_outputs` - used for any processing required prior to loss calculation
#### get_model_outputs
This is where we generate tensor values used for loss computation.
The specifics of what happens here depends on the type of model used. For autoregressive models, this step involves taking the `x` and `y` tensors we generated during the `before_compute_reward` event and doing a forward pass.
`x` is a tensor of size `(bs, sl)`. Running `x` through the model will give a set of log probabilities of size `(bs, sl, d_vocab)`. We then use `y` to gather the relevant log probs to get a gathered log prob tensor of size `(bs, sl)`.
We generate these values from both the main model and the baseline model
```
env('get_model_outputs')
env.batch_state.keys()
env.batch_state.model_logprobs.shape, env.batch_state.model_gathered_logprobs.shape
```
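The gather step described above can be illustrated with NumPy standing in for a deep-learning framework (the shapes here are illustrative):

```python
# From per-position log-probs of shape (bs, sl, d_vocab), pick the log-prob
# of each actual next token y to get a gathered tensor of shape (bs, sl).
import numpy as np

bs, sl, d_vocab = 2, 4, 5
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(d_vocab), size=(bs, sl))  # rows sum to 1
logprobs = np.log(probs)                                # (bs, sl, d_vocab)
y = rng.integers(0, d_vocab, size=(bs, sl))             # token ids
gathered = np.take_along_axis(logprobs, y[..., None], axis=-1).squeeze(-1)
print(gathered.shape)  # (2, 4)
```

In a framework this is the same operation as a `gather` along the vocabulary axis.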
#### after_get_model_outputs
This event is not used by any of our current callbacks, but can be used for any sort of post-processing needed before loss computation
### Compute Loss
Now we actually compute a loss value and do an optimizer update. See the `PPO` class for a description of the policy gradient algorithm used.
Loss computation consists of the following steps:
- `compute_loss` - compute loss values
- `zero_grad` - zero grad
- `before_step` - used for computation before the optimizer step (e.g. gradient clipping)
- `step` - step optimizer
#### compute_loss
When we first created our `BatchState`, there was a placeholder value for `loss`. This is the value that will ultimately be backpropagated through. This means we can run any sort of loss configuration, so long as the final values end up in `BatchState.loss`.
For example, the `PPO` policy gradient algorithm we are using involves a `ValueHead` that predicts values at every time step. This model is held in the `PolicyLoss` callback that holds the `PPO` class. During the `compute_loss` event, `PPO` computes an additional loss for the value head that is added to `BatchState.loss`. `PolicyLoss` also holds an optimizer for the `ValueHead` parameters.
```
env.batch_state.loss
env('compute_loss')
env.batch_state.loss
```
#### zero_grad
This is an event to zero gradients of all optimizers in play. We currently have one optimizer in `Agent` for our generative model and one in `PolicyLoss` for the `ValueHead` of our policy gradient algorithm.
```
env('zero_grad')
env.batch_state.loss.backward()
```
#### before_step
This is an event before the actual optimizer step. This is used for things like gradient clipping
```
env('before_step')
```
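Gradient clipping itself can be sketched by hand. Frameworks ship utilities for this; the function below is a simplified NumPy stand-in for clip-by-norm:

```python
# If the gradient's norm exceeds max_norm, rescale the gradient so its
# norm equals max_norm; otherwise leave it untouched.
import numpy as np

def clip_grad_norm(grad, max_norm):
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

g = clip_grad_norm(np.array([3.0, 4.0]), max_norm=1.0)  # original norm is 5
print(np.linalg.norm(g))  # approximately 1.0
```

This keeps a single bad batch from producing a destabilizing update.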
#### step
This is the actual optimizer step. This will step both the `Agent` and `PolicyLoss` optimizers
```
env('step')
```
### After Batch
The `after_batch` stage consists of a single `after_batch` event. This is used for any updates at the end of the batch.
In particular, the `Log` will update `Log.df` and the `Agent` will update the baseline model
```
env('after_batch')
env.log.df
```
### After Train
The `after_train` event can be used to calculate any final statistics or other values as desired
```
env('after_train')
```
### Conclusions
Hopefully walking through the training process step by step has made the process more understandable. We conclude by simply running `Environment.fit` so we don't have to go through things step by step anymore
```
env.fit(200, 90, 50, 4)
```
```
## This script is used to read genomic data (in tabular format) from S3 and store features in SageMaker FeatureStore
import boto3
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import io, os
from time import gmtime, strftime, sleep
import time
import sagemaker
from sagemaker.session import Session
from sagemaker import get_execution_role
from sagemaker.feature_store.feature_group import FeatureGroup
```
## Set up SageMaker FeatureStore
```
region = boto3.Session().region_name
boto_session = boto3.Session(region_name=region)
sagemaker_client = boto_session.client(service_name='sagemaker', region_name=region)
featurestore_runtime = boto_session.client(service_name='sagemaker-featurestore-runtime', region_name=region)
feature_store_session = Session(
boto_session=boto_session,
sagemaker_client=sagemaker_client,
sagemaker_featurestore_runtime_client=featurestore_runtime
)
role = get_execution_role()
s3_client = boto3.client('s3', region_name=region)
default_s3_bucket_name = feature_store_session.default_bucket()
prefix = 'sagemaker-featurestore-demo'
```
## Get data from S3
```
# Get data from S3
bucket_gen = 'nsclc-clinical-genomic-data'
#bucket_gen = <S3-BUCKET-NAME>
# Genomic data
data_key_gen = 'Genomic-data-119patients.csv'
#data_key_gen = <FILE-NAME.csv>
data_location_gen = 's3://{}/{}'.format(bucket_gen, data_key_gen)
data_gen = pd.read_csv(data_location_gen)
```
## Ingest data into FeatureStore
```
genomic_feature_group_name = 'genomic-feature-group-' + strftime('%d-%H-%M-%S', gmtime())
genomic_feature_group = FeatureGroup(name=genomic_feature_group_name, sagemaker_session=feature_store_session)
current_time_sec = int(round(time.time()))
def cast_object_to_string(data_frame):
for label in data_frame.columns:
if data_frame.dtypes[label] == 'object':
data_frame[label] = data_frame[label].astype("str").astype("string")
# Cast object dtype to string. SageMaker FeatureStore Python SDK will then map the string dtype to String feature type.
cast_object_to_string(data_gen)
# Record identifier and event time feature names
record_identifier_feature_name = "Case_ID"
event_time_feature_name = "EventTime"
# Append EventTime feature
data_gen[event_time_feature_name] = pd.Series([current_time_sec]*len(data_gen), dtype="float64")
# Load feature definitions to the feature group. SageMaker FeatureStore Python SDK will auto-detect the data schema based on input data.
genomic_feature_group.load_feature_definitions(data_frame=data_gen); # output is suppressed
def wait_for_feature_group_creation_complete(feature_group):
status = feature_group.describe().get("FeatureGroupStatus")
while status == "Creating":
print("Waiting for Feature Group Creation")
time.sleep(5)
status = feature_group.describe().get("FeatureGroupStatus")
if status != "Created":
raise RuntimeError(f"Failed to create feature group {feature_group.name}")
print(f"FeatureGroup {feature_group.name} successfully created.")
genomic_feature_group.create(
s3_uri=f"s3://{default_s3_bucket_name}/{prefix}",
record_identifier_name=record_identifier_feature_name,
event_time_feature_name=event_time_feature_name,
role_arn=role,
enable_online_store=True
)
wait_for_feature_group_creation_complete(feature_group=genomic_feature_group)
genomic_feature_group.ingest(
data_frame=data_gen, max_workers=3, wait=True
)
```
## Chemical kinetics
In chemistry one is often interested in how fast a chemical process proceeds. Chemical reactions (when viewed as single events on a molecular scale) are probabilistic. However, most reactive systems of interest involve very large numbers of molecules (a few grams of a simple substance contains on the order of $10^{23}$ molecules). The sheer number allows us to describe this inherently stochastic process deterministically.
### Law of mass action
In order to describe chemical reactions as a system of ODEs in terms of concentrations ($c_i$) and time ($t$), one can use the [law of mass action](https://en.wikipedia.org/wiki/Law_of_mass_action):
$$
\frac{dc_i}{dt} = \sum_j S_{ij} r_j
$$
where $r_j$ is given by:
$$
r_j = k_j\prod_l c_l^{R_{jl}}
$$
and $S$ is a matrix with the overall net stoichiometric coefficients (positive for net production, negative for net consumption), and $R$ is a matrix with the multiplicities of each reactant for each equation.
### Example: Nitrosylbromide
We will now look at the following (bi-directional) chemical reaction:
$$
\mathrm{2\,NO + Br_2 \leftrightarrow 2\,NOBr}
$$
which describes the equilibrium between nitrogen monoxide (NO) and bromine (Br$_2$) and nitrosyl bromide (NOBr). It can be represented as a set of two uni-directional reactions (**f**orward and **b**ackward):
$$
\mathrm{2\,NO + Br_2 \overset{k_f}{\rightarrow} 2\,NOBr} \\
\mathrm{2\,NOBr \overset{k_b}{\rightarrow} 2\,NO + Br_2}
$$
The law of mass action tells us that the rate of the first process (forward) is proportional to the concentration Br$_2$ and the square of the concentration of NO. The rate of the second reaction (the backward process) is in analogy proportional to the square of the concentration of NOBr. Using the proportionality constants $k_f$ and $k_b$ we can formulate our system of nonlinear ordinary differential equations as follows:
$$
\frac{dc_1}{dt} = 2(k_b c_3^2 - k_f c_2 c_1^2) \\
\frac{dc_2}{dt} = k_b c_3^2 - k_f c_2 c_1^2 \\
\frac{dc_3}{dt} = 2(k_f c_2 c_1^2 - k_b c_3^2)
$$
where we have denoted the concentration of NO, Br$_2$, NOBr with $c_1,\ c_2,\ c_3$ respectively.
This ODE system corresponds to the following two matrices:
$$
S = \begin{bmatrix}
-2 & 2 \\
-1 & 1 \\
2 & -2
\end{bmatrix}
$$
$$
R = \begin{bmatrix}
2 & 1 & 0 \\
0 & 0 & 2
\end{bmatrix}
$$
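As a sanity check, we can verify numerically (with assumed rate constants and concentrations) that these matrices reproduce the hand-written rate equations:

```python
# dc/dt = S @ r, where r_j = k_j * prod(c ** R[j]).
import numpy as np

S = np.array([[-2,  2],
              [-1,  1],
              [ 2, -2]])
R = np.array([[2, 1, 0],
              [0, 0, 2]])
k = np.array([0.3, 0.1])        # kf, kb (arbitrary illustrative values)
c = np.array([1.0, 2.0, 0.5])   # concentrations of NO, Br2, NOBr

r = k * np.prod(c ** R, axis=1)  # forward and backward reaction rates
dcdt = S @ r
# should be close to 2*(kb*c3^2 - kf*c2*c1^2) = -1.15, etc.
print(dcdt)
```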
### Solving the initial value problem numerically
We will now integrate this system of ordinary differential equations numerically as an initial value problem (IVP) using the ``odeint`` solver provided by ``scipy``:
```
import numpy as np
from scipy.integrate import odeint
```
By looking at the [documentation](https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.integrate.odeint.html) of odeint we see that we need to provide a function which computes a vector of derivatives ($\dot{\mathbf{y}} = [\frac{dy_1}{dt}, \frac{dy_2}{dt}, \frac{dy_3}{dt}]$). The expected signature of this function is:
f(y: array[float64], t: float64, *args: arbitrary constants) -> dydt: array[float64]
in our case we can write it as:
```
def rhs(y, t, kf, kb):
rf = kf * y[0]**2 * y[1]
rb = kb * y[2]**2
return [2*(rb - rf), rb - rf, 2*(rf - rb)]
%load_ext scipy2017codegen.exercise
```
Replace **???** by the proper arguments for ``odeint``; you can write ``odeint?`` to read its documentation.
```
%exercise exercise_odeint.py
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(tout, yout)
_ = plt.legend(['NO', 'Br$_2$', 'NOBr'])
```
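For reference, one possible fill-in for the exercise might look like the following. The initial concentrations, rate constants, and time grid are assumed here for illustration and may differ from the exercise file:

```python
import numpy as np
from scipy.integrate import odeint

def rhs(y, t, kf, kb):
    rf = kf * y[0]**2 * y[1]
    rb = kb * y[2]**2
    return [2*(rb - rf), rb - rf, 2*(rf - rb)]

y0 = [1.0, 1.0, 0.0]     # initial concentrations of NO, Br2, NOBr (assumed)
k_vals = (0.42, 0.17)    # kf, kb (assumed)
tout = np.linspace(0, 10, 100)
yout = odeint(rhs, y0, tout, args=k_vals)  # extra args are passed on to rhs
print(yout.shape)  # (100, 3)
```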
Writing the ``rhs`` function by hand for larger reaction systems quickly becomes tedious. Ideally we would like to construct it from a symbolic representation (having a symbolic representation of the problem opens up many possibilities as we will soon see). But at the same time, we need the ``rhs`` function to be fast. Which means that we want to produce a fast function from our symbolic representation. Generating a function from our symbolic representation is achieved through *code generation*.
In summary we will need to:
1. Construct a symbolic representation from some domain specific representation using SymPy.
2. Have SymPy generate a function with an appropriate signature (or multiple thereof), which we pass on to the solver.
We will achieve (1) by using SymPy symbols (and functions if needed). For (2) we will use a function in SymPy called ``lambdify``―it takes a symbolic expression and returns a function. In a later notebook, we will look at (1); for now we will just use ``rhs`` which we've already written:
```
import sympy as sym
sym.init_printing()
y, k = sym.symbols('y:3'), sym.symbols('kf kb')
ydot = rhs(y, None, *k)
ydot
```
## Exercise
Now assume that we had constructed ``ydot`` above by applying the more general law of mass action, instead of hard-coding the rate expressions in ``rhs``. Then we could have created a function corresponding to ``rhs`` using ``lambdify``:
```
%exercise exercise_lambdify.py
plt.plot(tout, odeint(f, y0, tout, k_vals))
_ = plt.legend(['NO', 'Br$_2$', 'NOBr'])
```
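In case the exercise file is unavailable, a sketch of the ``lambdify`` step might look like this. The argument order is an assumption chosen to match ``odeint``'s expected signature, and the symbolic expressions are rebuilt inline so the sketch is self-contained:

```python
import sympy as sym

y = sym.symbols('y:3')
t, kf, kb = sym.symbols('t kf kb')
# same expressions as rhs(y, t, kf, kb) above, written symbolically
ydot = [2*(kb*y[2]**2 - kf*y[1]*y[0]**2),
        kb*y[2]**2 - kf*y[1]*y[0]**2,
        2*(kf*y[1]*y[0]**2 - kb*y[2]**2)]
# lambdify with a nested argument tuple: f(y, t, kf, kb) where y unpacks
f = sym.lambdify((y, t, kf, kb), ydot)
print(f([1.0, 1.0, 0.0], 0, 0.42, 0.17))  # [-0.84, -0.42, 0.84]
```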
In this example the gains of using a symbolic representation are arguably limited. However, it is quite common that the numerical solver will need another function which calculates the [Jacobian](https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant) of $\dot{\mathbf{y}}$ (given as Dfun in the case of ``odeint``). Writing that by hand is both tedious and error prone. But SymPy solves both of those issues:
```
sym.Matrix(ydot).jacobian(y)
```
In the next notebook we will look at an example where providing this as a function is beneficial for performance.
# Combine datasets together
```
# Import libraries
import os #operating system
import glob # for reading multiple files
from glob import glob
import pandas as pd #pandas for dataframe management
import matplotlib.pyplot as plt #matplotlib for plotting
import matplotlib.dates as mdates # alias for date formatting
import numpy as np # for generating synthetic data
# datetime stuff
from datetime import date
import holidays
# Handle date time conversions between pandas and matplotlib
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
# Set some variables
dataPath = '../data' # set data path
year = 2019
# Define a function to read the data
def read_data(file):
# Read in excel
df = (pd.read_excel(
file,
skiprows=range(1,9),
usecols='B:C',
header=0,
))
# remove last rows
df.drop(
df.tail(2).index,
inplace=True
)
# fix index and naming
df.columns = ['date', 'demand']
df['date'] = pd.to_datetime(df['date']) # convert column to datetime
df.set_index('date', inplace=True)
return df
```
# Read all the data
```
# Create a list of files to combine
PATH = dataPath
EXT = "*.xls"
all_files = [file
for path, subdir, files in os.walk(PATH)
for file in glob(os.path.join(path, EXT))]
# Assemble files into a final dataframe
df = pd.DataFrame()
for file in all_files:
tmp = read_data(file)
dfs = [df, tmp]
df = pd.concat(dfs)
```
# Data processing
```
df.sort_index(inplace=True)
df['demand'] = df['demand'].apply(pd.to_numeric, errors='coerce')
ts_daily = df.resample('D').mean()
days = ts_daily.index.strftime("%Y-%m-%d")
ts_monthly = df.resample('M').mean()
months = ts_monthly.index.strftime("%Y-%m")
```
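To make the resampling step concrete, here is a small self-contained illustration on synthetic data:

```python
# Hourly values are averaged into one value per day, just as the demand
# series is downsampled to daily and monthly means above.
import numpy as np
import pandas as pd

idx = pd.date_range('2019-01-01', periods=48, freq='h')  # two days, hourly
demo = pd.DataFrame({'demand': np.arange(48.0)}, index=idx)
daily = demo.resample('D').mean()
print(daily['demand'].tolist())  # [11.5, 35.5]
```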
## Daily
```
def create_date_labels(df):
""" Function to create day labels that could be useful for plotting
"""
df['year'] = df.index.year
df['month'] = df.index.month
df['day'] = df.index.day
df['weekday'] = df.index.weekday
df['month_name'] = df.index.month_name()
df['day_name'] = df.index.day_name()
return df
ts_daily = create_date_labels(ts_daily)
```
### Filter by weekends and holidays
```
# Check the holidays resolving
brazil_holidays = holidays.Brazil()
for ptr in holidays.Brazil(years = 2019).items():
print(ptr)
# reset index and convert timestamp to date
ts_daily.reset_index(inplace=True)
ts_daily['date'] = ts_daily['date'].apply(lambda x: x.date())
ts_daily['holiday'] = ts_daily['date'].apply(lambda x: x in brazil_holidays)
ts_daily.set_index('date', inplace=True)
ts_daily.head()
# Check
ts_daily.loc[ts_daily['weekday'] < 5] # Weekday
ts_daily.loc[ts_daily['weekday'] >= 5] # Weekend
ts_daily.loc[ts_daily['holiday'] == True] # Holiday
```
# Plotting
```
import seaborn as sns # plotting library
sns.set(rc={'figure.figsize':(11, 4)})
# Basic quick plot
ts_daily['demand'].plot()
ts_daily.loc[ts_daily['month_name'] == 'January', 'demand'].plot()
# plot with more customization
cols_plot = ['demand']
timerange = '2019'
axes = ts_daily[cols_plot].plot(
marker='o',
alpha=0.5,
linestyle='-',
figsize=(14, 8),
subplots=True)
for ax in axes:
ax.set_ylabel('demand (GWh)')
ax.set_title(f'demand for {timerange}')
def plotBox(df, col='demand', grp='month_name'):
fig, ax = plt.subplots(1, 1, figsize=(14, 8), sharex=True)
sns.boxplot(data=df, x=grp, y=col, ax=ax)
ax.set_ylabel(f'{col}')
ax.set_title(f'boxplot for year {year}')
ax.set_xlabel('')
# all data
plotBox(ts_daily, 'demand')
# Weekend
plotBox(ts_daily.loc[ts_daily['weekday'] >= 5], 'demand')
# holiday
plotBox(ts_daily.loc[ts_daily['holiday'] == True], 'demand')
# weekday
plotBox(ts_daily.loc[ts_daily['weekday'] < 5], 'demand')
plotBox(ts_daily, col='demand', grp='day_name')
```
# Additional analyses for manuscript revisions
This notebook contains additional analyses performed for a revised version of the manuscript. In particular, two analyses are performed:
1. Determining whether there is a bias in the linear arrangement of motifs in strong enhancers and silencers.
2. Associating differentially expressed genes in Crx-/- vs. wildtype P21 retinas with the activity of nearby library members.
```
import os
import sys
import itertools
import numpy as np
from scipy import stats
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.axes_divider import make_axes_locatable
from pybedtools import BedTool
from IPython.display import display
sys.path.insert(0, "utils")
import fasta_seq_parse_manip, modeling, plot_utils, predicted_occupancy, sequence_annotation_processing
data_dir = "Data"
downloads_dir = os.path.join(data_dir, "Downloaded")
figures_dir = "Figures"
all_seqs = fasta_seq_parse_manip.read_fasta(os.path.join(data_dir, "library1And2.fasta"))
# Drop scrambled sequences
all_seqs = all_seqs[~(all_seqs.index.str.contains("scr"))]
plot_utils.set_manuscript_params()
```
Load in MPRA data and other metrics.
```
# Mapping activity class to a color
color_mapping = {
"Silencer": "#e31a1c",
"Inactive": "#33a02c",
"Weak enhancer": "#a6cee3",
"Strong enhancer": "#1f78b4",
np.nan: "grey"
}
color_mapping = pd.Series(color_mapping)
# Sort order for the four activity bins
class_sort_order = ["Silencer", "Inactive", "Weak enhancer", "Strong enhancer"]
# MPRA measurements
activity_df = pd.read_csv(os.path.join(data_dir, "wildtypeMutantPolylinkerActivityAnnotated.txt"), sep="\t", index_col=0)
activity_df["group_name_WT"] = sequence_annotation_processing.to_categorical(activity_df["group_name_WT"])
activity_df["group_name_MUT"] = sequence_annotation_processing.to_categorical(activity_df["group_name_MUT"])
# Only keep sequences that were measured in WT form
activity_df = activity_df[activity_df["expression_log2_WT"].notna()]
# TF occupancy metrics, also separate out the WT sequences
occupancy_df = pd.read_csv(os.path.join(data_dir, "predictedOccupancies.txt"), sep="\t", index_col=0)
wt_occupancy_df = occupancy_df[occupancy_df.index.str.contains("WT$")].copy()
wt_occupancy_df = sequence_annotation_processing.remove_mutations_from_seq_id(wt_occupancy_df)
wt_occupancy_df = wt_occupancy_df.loc[activity_df.index]
n_tfs = len(wt_occupancy_df.columns)
# PWMs
pwms = predicted_occupancy.read_pwm_files(os.path.join("Data", "Downloaded", "Pwm", "photoreceptorAndEnrichedMotifs.meme"))
pwms = pwms.rename(lambda x: x.split("_")[0])
motif_len = pwms.apply(len)
mu = 9
ewms = pwms.apply(predicted_occupancy.ewm_from_letter_prob).apply(predicted_occupancy.ewm_to_dict)
# WT sequences measured in the assay
wt_seqs = all_seqs[all_seqs.index.str.contains("WT")].copy()
wt_seqs = sequence_annotation_processing.remove_mutations_from_seq_id(wt_seqs)
wt_seqs = wt_seqs[activity_df.index]
```
## Analysis for linear arrangement bias
For each TF besides CRX, identify strong enhancers with exactly one position occupied by CRX and exactly one position occupied by the other TF. Count the number of times the occupied position is 5' (left) or 3' (right) of the CRX position. Because all sequences are centered on CRX motifs in a forward orientation, we do not need to consider orientation.
```
occupied_cutoff = 0.5
strong_mask = activity_df["group_name_WT"].str.contains("Strong")
strong_mask = strong_mask & strong_mask.notna()
# {tf name: [number of times TF is 5' of central CRX, number of times 3']}
tf_order_counts = {}
for tf in ewms.index.drop("CRX"):
left_counts = right_counts = 0
# Get strong enhancers with a motif for this TF
has_tf_mask = (wt_occupancy_df[tf] > occupied_cutoff) & strong_mask
has_tf_seqs = wt_seqs[has_tf_mask]
# Get the predicted occupancy landscape only for CRX and the other TF
for seq in has_tf_seqs:
occupancy_landscape = predicted_occupancy.total_landscape(seq, ewms[["CRX", tf]], mu) > occupied_cutoff
# Only consider the sequence if there is exactly one CRX and exactly one of the other TF
fwd_counts = occupancy_landscape[tf + "_F"].sum()
rev_counts = occupancy_landscape[tf + "_R"].sum()
if (occupancy_landscape["CRX_F"].sum() == 1) and (occupancy_landscape["CRX_R"].sum() == 0) and (fwd_counts + rev_counts == 1):
# By construction, the motif will only be in the F or R column
if fwd_counts == 1:
col = occupancy_landscape[tf + "_F"]
else:
col = occupancy_landscape[tf + "_R"]
tf_start_pos = col[col].index[0]
# CRX start position should always be the same, but just in case it's not do this
crx_occ = occupancy_landscape["CRX_F"]
crx_start_pos = crx_occ[crx_occ].index[0]
if tf_start_pos < crx_start_pos:
left_counts += 1
elif tf_start_pos > crx_start_pos:
right_counts += 1
# else they start at the same position, ignore it
tf_order_counts[tf] = [left_counts, right_counts]
tf_order_counts = pd.DataFrame.from_dict(tf_order_counts, orient="index", columns=["left", "right"])
tf_order_counts["binom_pval"] = tf_order_counts.apply(stats.binom_test, axis=1)
tf_order_counts["binom_qval"] = modeling.fdr(tf_order_counts["binom_pval"])
display(tf_order_counts)
```
Now do the same analyses as above for silencers with NRL. Then create a contingency table to determine whether the left/right positioning of NRL motifs is independent of whether a sequence is a strong enhancer or silencer.
```
silencer_mask = activity_df["group_name_WT"].str.contains("Silencer")
silencer_mask = silencer_mask & silencer_mask.notna()
has_nrl_mask = (wt_occupancy_df["NRL"] > occupied_cutoff) & silencer_mask
has_nrl_seqs = wt_seqs[has_nrl_mask]
silencer_left_counts = silencer_right_counts = 0
for seq in has_nrl_seqs:
occupancy_landscape = predicted_occupancy.total_landscape(seq, ewms[["CRX", "NRL"]], mu) > occupied_cutoff
# Only consider the sequence if there is exactly one CRX and exactly one of the other TF
fwd_counts = occupancy_landscape["NRL_F"].sum()
rev_counts = occupancy_landscape["NRL_R"].sum()
if (occupancy_landscape["CRX_F"].sum() == 1) & (occupancy_landscape["CRX_R"].sum() == 0) & (fwd_counts + rev_counts == 1):
# Same as above
if fwd_counts == 1:
col = occupancy_landscape["NRL_F"]
else:
col = occupancy_landscape["NRL_R"]
tf_start_pos = col[col].index[0]
crx_occ = occupancy_landscape["CRX_F"]
crx_start_pos = crx_occ[crx_occ].index[0]
if tf_start_pos < crx_start_pos:
silencer_left_counts += 1
elif tf_start_pos > crx_start_pos:
silencer_right_counts += 1
silencer_order_counts = pd.Series([silencer_left_counts, silencer_right_counts], index=["left", "right"])
# Join with strong enhancer counts
contingency_table = pd.DataFrame.from_dict({"Silencer": silencer_order_counts, "Strong enhancer": tf_order_counts.loc["NRL", ["left", "right"]]}, orient="index").astype(int)
display(contingency_table)
odds, pval = stats.fisher_exact(contingency_table)
print(f"The linear arrangement of CRX-NRL motifs is independent of whether a sequence is a strong enhancer or silencer, Fisher's exact test p={pval:.2f}, odds ratio={odds:.1f}")
```
## Analysis of gene expression changes in Crx-/- retina
First, read in the RNA-seq data, CPM normalize each replicate, compute mean expression across replicates, and then compute the fold changes between Crx-/- and WT.
```
rnaseq_df = pd.read_csv(os.path.join(downloads_dir, "rogerRnaseqRaw2014.txt"), sep="\t", usecols=["Wta", "Wtb", "Crxa", "Crxb"])
# Add a pseudocount too when CPM normalizing
rnaseq_df = (rnaseq_df + 1) / (rnaseq_df.sum() / 1e6)
# Compute averages and then log2 FC
ko_wt_fc_log2 = np.log2(rnaseq_df[["Crxa", "Crxb"]].mean(axis=1) / rnaseq_df[["Wta", "Wtb"]].mean(axis=1))
```
Next, read in a BED file of the library and intersect it with [Supplementary file 4](https://doi.org/10.7554/eLife.48216.022) from Murphy *et al.*, 2019 to associate sequences to genes.
```
bed_columns = ["chrom", "begin", "end", "label", "score", "strand"]
# Load in BED file
library_bed = pd.read_csv(os.path.join(data_dir, "library1And2.bed"), sep="\t", header=None, names=bed_columns)
# Pull out sequences that were measured
library_bed = library_bed.set_index("label").loc[activity_df.index].reset_index()[bed_columns]
library_bed = BedTool.from_dataframe(library_bed).sort()
# Read in ATAC data and intersect with the library
atac_df = pd.read_excel(os.path.join(downloads_dir, "murphyAtac2019.xlsx"), sheet_name="peakUnion_counts", skiprows=1)
atac_bed = BedTool.from_dataframe(atac_df[["Chr", "Start", "End", "PeakID"]]).sort()
atac_bed = atac_bed.intersect(library_bed, wo=True).to_dataframe()
atac_bed = atac_bed[["chrom", "start", "end", "name", "thickEnd"]].rename(columns={"thickEnd": "library_label"})
atac_df = atac_df.set_index("PeakID").loc[atac_bed["name"]].reset_index()
atac_df["library_label"] = atac_bed["library_label"]
```
Get the gene closest to every library member, and then get the fold change.
```
def get_nearest_gene_fc(gene):
if gene in ko_wt_fc_log2.index:
return ko_wt_fc_log2[gene]
else:
return np.nan
library_nearest_gene = atac_df.set_index("library_label")["Nearest Gene"]
library_gene_fc = library_nearest_gene.apply(get_nearest_gene_fc)
library_gene_fc = library_gene_fc[library_gene_fc.notna()]
activity_has_gene_df = activity_df.loc[library_gene_fc.index]
```
Now get genes that have an absolute fold change of 2 (log2 = 1) or more.
```
library_gene_de = library_gene_fc[library_gene_fc.abs() >= 1]
up_mask = (library_gene_de > 0).replace({False: "Down-regulated", True: "Up-regulated"})
de_direction_grouper = library_gene_de.groupby(up_mask)
```
Determine enrichment of silencers being near up-regulated genes and enhancers being near down-regulated genes.
```
# Silencers
silencer_count_contingency = de_direction_grouper.apply(lambda x: activity_has_gene_df.loc[x.index, "group_name_WT"].str.contains("Silencer").value_counts()).unstack()
oddsratio, pval = stats.fisher_exact(silencer_count_contingency)
print(f"Direction of differential expression is independent of having a silencer nearby, Fisher's exact test p={pval:.3f}, OR={oddsratio:.2f}")
silencer_count_contingency = silencer_count_contingency.div(silencer_count_contingency.sum(axis=1), axis=0)
display(silencer_count_contingency)
fig, ax = plt.subplots()
ax.bar([0, 1], silencer_count_contingency[True], tick_label=silencer_count_contingency.index)
ax.set_ylabel("Fraction of genes near a silencer")
# Enhancers
enhancer_count_contingency = de_direction_grouper.apply(lambda x: activity_has_gene_df.loc[x.index, "group_name_WT"].str.contains("enhancer").value_counts()).unstack()
oddsratio, pval = stats.fisher_exact(enhancer_count_contingency)
print(f"Direction of differential expression is independent of having an enhancer nearby, Fisher's exact test p={pval:.3f}, OR={oddsratio:.2f}")
enhancer_count_contingency = enhancer_count_contingency.div(enhancer_count_contingency.sum(axis=1), axis=0)
display(enhancer_count_contingency)
fig, ax = plt.subplots()
ax.bar([0, 1], enhancer_count_contingency[True], tick_label=enhancer_count_contingency.index)
ax.set_ylabel("Fraction of genes near an enhancer")
```
# LAB: Introduction to Pandas 1
## 1. Introduction
In this case we will use a very condensed version of the data from the [Demographic Census (a survey conducted by INDEC)](http://www.indec.gov.ar/bases-de-datos.asp). It is a continuous survey whose main goal is to produce information about how the labor market works.
We will use only a few variables (age, education, number of hours worked, task qualification, and labor income) and only some cases (the employed, i.e., those who worked at least one hour in the week before the survey).
### 1.1 Import the packages to be used
```
import pandas as pd
```
### 1.2 Import the data to be used
```
df = pd.read_csv('../../99 Datasets/demografico-INDEC.csv.zip', encoding = 'latin1', engine='python', delimiter=',')
```
## 2. Exploring the dataset
### 2.1 How many rows and columns does the dataset have?
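A possible answer, sketched on a tiny synthetic frame (an assumption standing in for the real `df` loaded from the CSV above):

```python
import pandas as pd

# Synthetic stand-in for the survey data (assumption: the real df is loaded above)
df = pd.DataFrame({
    'idade': [25, 40, 33],
    'renda_ult_mes': [1000, 2500, 1800],
})

# .shape returns a (rows, columns) tuple
n_rows, n_cols = df.shape
print(n_rows, n_cols)
```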
### 2.2 What information does the dataset contain? Print the column names
The column names are not very descriptive of the information they hold. Let's replace them with the following list:
['idade', 'escolaridade', 'hs_trabalhadas', 'qualificação_ocupacional', 'renda_ult_mes']
```
df.columns = ['idade', 'escolaridade', 'hs_trabalhadas', 'qualificação_ocupacional', 'renda_ult_mes']
```
### 2.3 How is the dataset indexed?
```
df
```
### 2.4 What is the type of the fourth column?
### 2.5 What is the most common education level?
### 2.6 How is the population distributed by qualification?
### 2.7 What is the total income of the population?
### 2.8 What is the mean income of the population?
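Possible answers to the descriptive questions above, sketched on a small synthetic frame (an assumption; the real `df` comes from the renamed CSV):

```python
import pandas as pd

# Tiny synthetic frame standing in for the renamed df (assumption)
df = pd.DataFrame({
    'idade': [25, 40, 33, 51],
    'escolaridade': ['secundária', 'superior', 'secundária', 'primária'],
    'hs_trabalhadas': [40, 35, 45, 20],
    'qualificação_ocupacional': ['Operativos', 'Profesionales', 'Operativos', 'No calificados'],
    'renda_ult_mes': [1000, 2500, 1800, 700],
})

print(df.dtypes.iloc[3])                    # type of the fourth column
print(df['escolaridade'].mode()[0])         # most common education level
print(df['qualificação_ocupacional'].value_counts(normalize=True))  # distribution by qualification
print(df['renda_ult_mes'].sum())            # total income
print(df['renda_ult_mes'].mean())           # mean income
```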
## 3. Indexing and sorting the data
### 3.1 Select the columns `escolaridade` and `renda_ult_mes` and assign them to a new object called `df2`
### 3.2 Select the first 20 rows of df2
### 3.3 Select a random sample of 500 rows from df
### 3.4 Select every column except escolaridade. Hint: use the columns property to filter along the column dimension.
### 3.5 Sort the dataset by age
### 3.6 What is the mean number of hours worked by low-qualified young people between 14 and 25 years old?
### 3.7 Build a new dataframe with the workers who earn more than the overall mean income and work fewer hours than the overall mean. How many workers are in this situation? What is their mean age?
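A hedged sketch of the selection and filtering steps above, on a small synthetic frame (an assumption; sample size 500 in the real lab is reduced to 2 here):

```python
import pandas as pd

# Synthetic stand-in for df (assumption)
df = pd.DataFrame({
    'idade': [18, 60, 23, 45],
    'escolaridade': ['a', 'b', 'a', 'c'],
    'hs_trabalhadas': [20, 30, 30, 50],
    'renda_ult_mes': [500, 3000, 800, 2000],
})

df2 = df[['escolaridade', 'renda_ult_mes']]       # 3.1 select two columns
head20 = df2.head(20)                             # 3.2 first 20 rows
sample = df.sample(n=2, random_state=0)           # 3.3 random sample
no_school = df[df.columns.drop('escolaridade')]   # 3.4 all columns except one
by_age = df.sort_values('idade')                  # 3.5 sort by age
# 3.7: above-average income and below-average hours
mask = (df['renda_ult_mes'] > df['renda_ult_mes'].mean()) & \
       (df['hs_trabalhadas'] < df['hs_trabalhadas'].mean())
print(len(df[mask]), df.loc[mask, 'idade'].mean())
```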
## 4. Data visualization
### 4.1 Plot a histogram of the last month's income variable
### 4.2 Plot a histogram of hours worked
### 4.3 Plot a histogram of last month's income for workers with 15 hours worked
### 4.4 Plot a histogram of last month's income for workers classified as Prof./Tecn.
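A minimal sketch of the four histograms, again on synthetic data (an assumption) and with a headless matplotlib backend so it runs anywhere:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
import pandas as pd

# Synthetic stand-in for df (assumption)
df = pd.DataFrame({
    'hs_trabalhadas': [15, 40, 15, 35],
    'qualificação_ocupacional': ['Prof./Tecn.', 'Operativos', 'Prof./Tecn.', 'Operativos'],
    'renda_ult_mes': [1200, 900, 2200, 1100],
})

df['renda_ult_mes'].hist()                                   # 4.1
df['hs_trabalhadas'].hist()                                  # 4.2
sub15 = df.loc[df['hs_trabalhadas'] == 15, 'renda_ult_mes']  # 4.3
sub15.hist()
prof = df.loc[df['qualificação_ocupacional'] == 'Prof./Tecn.', 'renda_ult_mes']  # 4.4
prof.hist()
print(len(sub15), len(prof))
```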
```
from google.colab import drive
drive.mount('/content/gdrive')
import os
os.chdir('/content/gdrive/My Drive/finch/tensorflow2/text_classification/imdb/main')
%tensorflow_version 2.x
!pip install tensorflow-addons
import tensorflow as tf
import numpy as np
import pprint
import logging
import time
from tensorflow_addons.optimizers.cyclical_learning_rate import ExponentialCyclicalLearningRate
print("TensorFlow Version", tf.__version__)
print('GPU Enabled:', tf.test.is_gpu_available())
def data_generator(f_paths, params):
    for f_path in f_paths:
        with open(f_path) as f:
            print('Reading', f_path)
            for line in f:
                line = line.rstrip()
                label, text = line.split('\t')
                text = text.split(' ')
                words = [get_idx(params['word2idx'], w) for w in text]
                if len(words) >= params['max_word_len']:
                    words = words[:params['max_word_len']]
                chars = []
                for w in text:
                    temp = []
                    for c in list(w):
                        temp.append(get_idx(params['char2idx'], c))
                    if len(temp) < params['max_char_len']:
                        temp += [0] * (params['max_char_len'] - len(temp))
                    else:
                        temp = temp[:params['max_char_len']]
                    chars.append(temp)
                if len(chars) >= params['max_word_len']:
                    chars = chars[:params['max_word_len']]
                y = int(label)
                yield words, chars, y
def dataset(is_training, params):
    _shapes = ([None], [None, params['max_char_len']], ())
    _types = (tf.int32, tf.int32, tf.int32)
    _pads = (0, 0, -1)
    if is_training:
        ds = tf.data.Dataset.from_generator(
            lambda: data_generator(params['train_paths'], params),
            output_shapes = _shapes,
            output_types = _types,)
        ds = ds.shuffle(params['num_samples'])
        ds = ds.padded_batch(params['batch_size'], _shapes, _pads)
        ds = ds.prefetch(tf.data.experimental.AUTOTUNE)
    else:
        ds = tf.data.Dataset.from_generator(
            lambda: data_generator(params['test_paths'], params),
            output_shapes = _shapes,
            output_types = _types,)
        ds = ds.padded_batch(params['batch_size'], _shapes, _pads)
        ds = ds.prefetch(tf.data.experimental.AUTOTUNE)
    return ds
class KernelAttentivePooling(tf.keras.Model):
    def __init__(self, params):
        super().__init__()
        self.dropout = tf.keras.layers.Dropout(params['dropout_rate'])
        self.kernel = tf.keras.layers.Dense(1,
                                            activation=tf.tanh,
                                            use_bias=False)

    def call(self, inputs, training=False):
        x, masks = inputs
        # alignment
        align = tf.squeeze(self.kernel(self.dropout(x, training=training)), -1)
        # masking
        paddings = tf.fill(tf.shape(align), float('-inf'))
        align = tf.where(tf.equal(masks, 0), paddings, align)
        # probability
        align = tf.nn.softmax(align)
        align = tf.expand_dims(align, -1)
        # weighted sum
        return tf.squeeze(tf.matmul(x, align, transpose_a=True), -1)
class Model(tf.keras.Model):
    def __init__(self, params):
        super().__init__()
        self.char_embedding = tf.keras.layers.Embedding(len(params['char2idx'])+1, params['char_embed_size'])
        self.word_embedding = tf.Variable(np.load('../vocab/word.npy'),
                                          dtype=tf.float32,
                                          name='pretrained_embedding',
                                          trainable=False,)
        self.char_cnn = tf.keras.layers.Conv1D(filters=params['cnn_filters'], kernel_size=params['cnn_kernel_size'], activation=tf.nn.elu, padding='same')
        self.embed_drop = tf.keras.layers.Dropout(params['dropout_rate'])
        self.embed_fc = tf.keras.layers.Dense(params['cnn_filters'], tf.nn.elu, name='embed_fc')
        self.word_cnn = tf.keras.layers.Conv1D(filters=params['cnn_filters'], kernel_size=params['cnn_kernel_size'], activation=tf.nn.elu, padding='same')
        self.word_drop = tf.keras.layers.Dropout(params['dropout_rate'])
        self.attentive_pooling = KernelAttentivePooling(params)
        self.out_linear = tf.keras.layers.Dense(2)

    def call(self, inputs, training=False):
        words, chars = inputs
        if words.dtype != tf.int32:
            words = tf.cast(words, tf.int32)
        masks = tf.sign(words)
        batch_sz = tf.shape(words)[0]
        word_len = tf.shape(words)[1]
        chars = self.char_embedding(chars)
        chars = tf.reshape(chars, (batch_sz*word_len, params['max_char_len'], params['char_embed_size']))
        chars = self.char_cnn(chars)
        chars = tf.reduce_max(chars, 1)
        chars = tf.reshape(chars, (batch_sz, word_len, params['cnn_filters']))
        words = tf.nn.embedding_lookup(self.word_embedding, words)
        x = tf.concat((words, chars), axis=-1)
        x = self.embed_drop(x, training=training)
        x = self.embed_fc(x)
        x = self.word_drop(x, training=training)
        x = self.word_cnn(x)
        x = self.attentive_pooling((x, masks), training=training)
        x = self.out_linear(x)
        return x
params = {
    'vocab_path': '../vocab/word.txt',
    'train_paths': [
        '../data/train_bt_part1.txt',
        '../data/train_bt_part2.txt',
        '../data/train_bt_part3.txt',
        '../data/train_bt_part4.txt',
        '../data/train_bt_part5.txt',
        '../data/train_bt_part6.txt',
    ],
    'test_paths': [
        '../data/test.txt',
    ],
    'num_samples': 25000*2,
    'num_labels': 2,
    'batch_size': 32,
    'max_word_len': 1000,
    'max_char_len': 10,
    'char_embed_size': 30,
    'cnn_filters': 300,
    'cnn_kernel_size': 5,
    'dropout_rate': .2,
    'kernel_size': 5,
    'num_patience': 10,
    'init_lr': 1e-4,
    'max_lr': 8e-4,
}
def get_vocab(f_path):
    word2idx = {}
    with open(f_path) as f:
        for i, line in enumerate(f):
            line = line.rstrip()
            word2idx[line] = i
    return word2idx

def get_idx(symbol2idx, symbol):
    return symbol2idx.get(symbol, len(symbol2idx))
params['char2idx'] = get_vocab('../vocab/char.txt')
params['word2idx'] = get_vocab('../vocab/word.txt')
model = Model(params)
model.build(input_shape=[[None, None], [None, None, params['max_char_len']]])
pprint.pprint([(v.name, v.shape) for v in model.trainable_variables])
step_size = 4 * params['num_samples'] // params['batch_size']
decay_lr = ExponentialCyclicalLearningRate(
    initial_learning_rate = params['init_lr'],
    maximal_learning_rate = params['max_lr'],
    step_size = step_size,)
optim = tf.optimizers.Adam(params['init_lr'])
global_step = 0
count = 0
best_acc = .0
t0 = time.time()
logger = logging.getLogger('tensorflow')
logger.setLevel(logging.INFO)
while True:
    # TRAINING
    for words, chars, labels in dataset(is_training=True, params=params):
        with tf.GradientTape() as tape:
            logits = model((words, chars), training=True)
            loss = tf.reduce_mean(tf.losses.categorical_crossentropy(
                y_true = tf.one_hot(labels, 2),
                y_pred = logits,
                from_logits = True,
                label_smoothing = .2,))
        optim.lr.assign(decay_lr(global_step))
        grads = tape.gradient(loss, model.trainable_variables)
        optim.apply_gradients(zip(grads, model.trainable_variables))
        if global_step % 50 == 0:
            logger.info("Step {} | Loss: {:.4f} | Spent: {:.1f} secs | LR: {:.6f}".format(
                global_step, loss.numpy().item(), time.time()-t0, optim.lr.numpy().item()))
            t0 = time.time()
        global_step += 1
    # EVALUATION
    m = tf.keras.metrics.Accuracy()
    for words, chars, labels in dataset(is_training=False, params=params):
        logits = model((words, chars), training=False)
        y_pred = tf.argmax(logits, axis=-1)
        m.update_state(y_true=labels, y_pred=y_pred)
    acc = m.result().numpy()
    logger.info("Evaluation: Testing Accuracy: {:.3f}".format(acc))
    if acc > best_acc:
        best_acc = acc
        count = 0
    else:
        count += 1
    logger.info("Best Accuracy: {:.3f}".format(best_acc))
    if count == params['num_patience']:
        print("Best result not improved for", params['num_patience'], "evaluations; stopping training")
        break
```
# 1. Data Preprocessing
```
import pandas as pd
df = pd.read_csv('../data/qualitydata_3/jmt0718withGeoLocation.csv')
print(df.shape)
print(df.columns.values)
# print(df.dtypes)
# print(df.describe(include='all'))
df.head(10)
# Fill NaN values in the COUNTRY, REGION and CITY fields with 'Unknown'
df.COUNTRY = df.COUNTRY.fillna('Unknown')
df.REGION = df.REGION.fillna('Unknown')
df.CITY = df.CITY.fillna('Unknown')
# Create a new ID field from the GID or UID value
df.insert(0, 'ID', df.GID + df.UID_)
# Create a new REGISTER field: 0 means registered user, 1 means unregistered user (uid -> 0, gid -> 1)
bool_to_num = {False: 0, True: 1}
register = (df['GID'] != 0).map(bool_to_num)
df.insert(6, 'REGISTER', register)
# Drop the original GID and UID fields
del df['GID']
del df['UID_']
# Create HOUR and TIMESTAMP fields
import time
def rowkey_to_hour(startjointime):
    return time.strptime(startjointime, '%Y-%m-%d %H:%M:%S').tm_hour
def rowkey_to_timestamp(startjointime):
    return int(time.mktime(time.strptime(startjointime, '%Y-%m-%d %H:%M:%S')))
HOUR = df.STARTJOINTIME.map(rowkey_to_hour)
df.insert(2, 'HOUR', HOUR)
TIMESTAMP = df.STARTJOINTIME.map(rowkey_to_timestamp)
df.insert(3, 'TIMESTAMP', TIMESTAMP)
# Analyze country, region and city, and process all three
# ============ Categories whose frequency is below a threshold are merged into "Other" ==============
def category_unify(category_field, threshold):
    field_count_dict = df[category_field].value_counts() < threshold
    index_field_dict = {}
    for index, field in df[category_field].iteritems():
        index_field_dict[index] = field_count_dict[field]
    return pd.Series(index_field_dict)
df.loc[category_unify('COUNTRY', 1000), 'COUNTRY'] = "Other"
df.loc[category_unify('REGION', 1000), 'REGION'] = "Other"
df.loc[category_unify('CITY', 1000), 'CITY'] = "Other"
# df_dis_series = df.COUNTRY.value_counts()
# country_flag_series = df_dis_series <= 30
# for country, flag in country_flag_series.iteritems():
#     if flag:
#         df.COUNTRY.replace(country, "other", inplace=True)
# # Originally 130 countries; 71 after merging
# df_dis_series = df.REGION.value_counts()
# region_flag_series = df_dis_series <= 10
# for region, flag in region_flag_series.iteritems():
#     if flag:
#         df.REGION.replace(region, "other", inplace=True)
# # Originally 674 regions; 171 after merging
# df_dis_series = df.CITY.value_counts()
# city_flag_series = df_dis_series <= 10
# for city, flag in city_flag_series.iteritems():
#     if flag:
#         df.CITY.replace(city, "other", inplace=True)
# # Originally 2849 cities; 312 after merging
# Analyze and process USEROS and USERBROWSER
# Create a PC/Mobile feature
from collections import OrderedDict
o_dict = OrderedDict()
o_dict['Phone'] = 'Mobile'
o_dict['Android'] = 'Mobile'
o_dict['iOS'] = 'Mobile'
o_dict['Other'] = 'Other'
o_dict['Windows'] = 'PC'
o_dict['Linux'] = 'PC'
o_dict['intel mac os x'] = 'PC'
o_dict['Chrome OS'] = 'PC'
def category_unify_by_char(category_field):
    index_field_dict = {}
    for index, field in df[category_field].iteritems():
        for useros, flag in o_dict.items():
            if useros in field:
                index_field_dict[index] = flag
                break
    return pd.Series(index_field_dict)
MACHINE_TYPE = category_unify_by_char('USEROS')
df['MACHINE_TYPE'] = MACHINE_TYPE
df.loc[category_unify('USEROS', 1000), 'USEROS'] = "Other"
df.loc[category_unify('USERBROWSER', 1000), 'USERBROWSER'] = "Other"
# Save the intermediate dataset to csv
df.to_csv('../data/qualitydata_3/jmt0718withGeoLocation_clean.csv', index=False)
df.head()
```
# 2. JMT Processing
```
import pandas as pd
df = pd.read_csv('../data/SAP_MEETINGJMT_20170810-20170825/SAP_MEETINGJMT_20170810.csv')
pd.set_option('display.max_rows', None)
print(df.USERJMT.value_counts()[0:100])
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
myfont = FontProperties(fname='/Library/Fonts/Songti.ttc')
plt.figure(figsize=(10, 4))
count_series = df.USERJMT.value_counts().sort_index()[0:100]
count_series.plot(label='JMT Distributed')
plt.legend()
plt.show()
print(count_series[0:100])
# Look at the first-derivative distribution
plt.figure(figsize=(10, 4))
count_series.pct_change().plot(label='First Derivative')
plt.legend()
plt.show()
```
Look at the rate of change from 4 to 5, and from 5 to 6.
```
# Look at the second-derivative distribution
plt.figure(figsize=(10, 4))
count_series.pct_change().pct_change()[0:40].plot(label='Second Derivative')
plt.legend()
plt.show()
print(count_series.pct_change().pct_change()[0:40])
```
1. This further confirms that jmt > 20 is a reasonable cutoff.
2. It is also reasonable to group 0-5 into one interval and 6-20 into another.
```
# Map the USERJMT field into [0, 2]
def jmt_map(x):
    if 0 <= x <= 5:
        return 2
    elif 6 <= x <= 20:
        return 1
    elif 20 < x:
        return 0
df['USERJMT_DIS'] = df.USERJMT.map(jmt_map)
df.USERJMT_DIS.value_counts()
df.to_excel('../data/qualitydata_3/jmt0718withGeoLocation_clean_map.xlsx', index=False)
```
# 3. Single-Field Analysis
```
# USERJMT_DIS distribution
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
myfont = FontProperties(fname='/Library/Fonts/Songti.ttc')
fig = plt.figure()
fig.set(alpha=0.4)  # alpha for the chart
df.USERJMT_DIS.value_counts().plot(kind='bar')  # bar chart
plt.title(u"After discretization: USERJMT_DIS distribution", fontproperties=myfont)
plt.ylabel(u"count", fontproperties=myfont)
plt.xlabel(u"score", fontproperties=myfont)
userjmt_dis_series = df.USERJMT_DIS.value_counts()
userjmt_dis_series_per = userjmt_dis_series / df.shape[0]
print(pd.concat([userjmt_dis_series.to_frame(name='count'), userjmt_dis_series_per.to_frame(name='percentage')], axis=1))
plt.show()
```
Conclusion: most scores are normal; only a small share are abnormal.
### 2.1 REGISTER vs. JMT analysis
```
# Distribution of REGISTER vs. JMT
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
myfont = FontProperties(fname='/Library/Fonts/Songti.ttc')
# fig = plt.figure()
# fig.set(alpha=0.2)  # alpha for the chart
Score_1 = df.REGISTER[df.USERJMT_DIS == 1].value_counts()
Score_2 = df.REGISTER[df.USERJMT_DIS == 2].value_counts()
Score_3 = df.REGISTER[df.USERJMT_DIS == 3].value_counts()
Score_4 = df.REGISTER[df.USERJMT_DIS == 4].value_counts()
Score_5 = df.REGISTER[df.USERJMT_DIS == 5].value_counts()
pd.DataFrame({u'1': Score_1, u'2': Score_2, u'3': Score_3, u'4': Score_4, u'5': Score_5}).plot(kind='bar', stacked=True)
plt.title(u"REGISTER vs. USERJMT_DIS distribution", fontproperties=myfont)
plt.xlabel(u"REGISTER", fontproperties=myfont)
plt.ylabel(u"count", fontproperties=myfont)
df_dis_series = df.REGISTER.value_counts()
Score_1_per = Score_1/df_dis_series * 100
Score_2_per = Score_2/df_dis_series * 100
Score_3_per = Score_3/df_dis_series * 100
Score_4_per = Score_4/df_dis_series * 100
Score_5_per = Score_5/df_dis_series * 100
print(df_dis_series)
print()
print('Score 1 distribution:\n', pd.concat([Score_1.to_frame(name='count'), Score_1_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 2 distribution:\n', pd.concat([Score_2.to_frame(name='count'), Score_2_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 3 distribution:\n', pd.concat([Score_3.to_frame(name='count'), Score_3_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 4 distribution:\n', pd.concat([Score_4.to_frame(name='count'), Score_4_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 5 distribution:\n', pd.concat([Score_5.to_frame(name='count'), Score_5_per.to_frame(name='percentage:%')], axis=1))
plt.show()
```
Conclusion: among low scores (1-2), the share of unregistered users is roughly three times that of registered users.
(4886+2518)/111239 = 6.66% <br>
(3177+2181)/28933 = 18.5%
### 2.2 USERTYPE vs. JMT analysis
```
# Distribution of USERTYPE vs. JMT
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
myfont = FontProperties(fname='/Library/Fonts/Songti.ttc')
Score_1 = df.USERTYPE[df.USERJMT_DIS == 1].value_counts()
Score_2 = df.USERTYPE[df.USERJMT_DIS == 2].value_counts()
Score_3 = df.USERTYPE[df.USERJMT_DIS == 3].value_counts()
Score_4 = df.USERTYPE[df.USERJMT_DIS == 4].value_counts()
Score_5 = df.USERTYPE[df.USERJMT_DIS == 5].value_counts()
pd.DataFrame({u'1': Score_1, u'2': Score_2, u'3': Score_3, u'4': Score_4, u'5': Score_5}).plot(kind='bar', stacked=True)
plt.title(u"USERTYPE vs. USERJMT_DIS distribution", fontproperties=myfont)
plt.xlabel(u"USERTYPE", fontproperties=myfont)
plt.ylabel(u"count", fontproperties=myfont)
df_dis_series = df.USERTYPE.value_counts()
Score_1_per = Score_1/df_dis_series * 100
Score_2_per = Score_2/df_dis_series * 100
Score_3_per = Score_3/df_dis_series * 100
Score_4_per = Score_4/df_dis_series * 100
Score_5_per = Score_5/df_dis_series * 100
print(df_dis_series)
print()
print('Score 1 distribution:\n', pd.concat([Score_1.to_frame(name='count'), Score_1_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 2 distribution:\n', pd.concat([Score_2.to_frame(name='count'), Score_2_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 3 distribution:\n', pd.concat([Score_3.to_frame(name='count'), Score_3_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 4 distribution:\n', pd.concat([Score_4.to_frame(name='count'), Score_4_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 5 distribution:\n', pd.concat([Score_5.to_frame(name='count'), Score_5_per.to_frame(name='percentage:%')], axis=1))
plt.show()
```
Conclusion: among low scores (1-2):<br>
The Return share is very small: (361+1169)/120507 = 1.27% <br>The Update share is highest: (1940+4887)/16088 = 42.4%<br>The New share is next: (386+599)/3577 = 27.5%
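The per-field blocks in this chapter all repeat one pattern: for each score, count the values of a field and express them as a percentage of the field's totals. A minimal reusable sketch of that pattern (function name and the tiny demo frame are assumptions, not part of the original notebook):

```python
import pandas as pd

def score_distribution(df, field, score_col='USERJMT_DIS', scores=(1, 2, 3, 4, 5)):
    """For each score, count the field's values and their percentage of the field totals."""
    totals = df[field].value_counts()
    out = {}
    for s in scores:
        counts = df.loc[df[score_col] == s, field].value_counts()
        out[s] = pd.concat(
            [counts.to_frame('count'),
             (counts / totals * 100).to_frame('percentage:%')],
            axis=1)
    return out

# Tiny synthetic example (assumption: the real df is loaded in the notebook)
demo = pd.DataFrame({'REGISTER': [0, 0, 1, 1], 'USERJMT_DIS': [1, 2, 1, 5]})
dist = score_distribution(demo, 'REGISTER')
print(dist[1])
```

Each section below could then be a one-line call per field instead of a copied block.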
### 2.3 PLATFORM vs. JMT analysis
```
# Distribution of PLATFORM vs. JMT
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
myfont = FontProperties(fname='/Library/Fonts/Songti.ttc')
Score_1 = df.PLATFORM[df.USERJMT_DIS == 1].value_counts()
Score_2 = df.PLATFORM[df.USERJMT_DIS == 2].value_counts()
Score_3 = df.PLATFORM[df.USERJMT_DIS == 3].value_counts()
Score_4 = df.PLATFORM[df.USERJMT_DIS == 4].value_counts()
Score_5 = df.PLATFORM[df.USERJMT_DIS == 5].value_counts()
pd.DataFrame({u'1': Score_1, u'2': Score_2, u'3': Score_3, u'4': Score_4, u'5': Score_5}).plot(kind='bar', stacked=True)
plt.title(u"PLATFORM vs. USERJMT_DIS distribution", fontproperties=myfont)
plt.xlabel(u"PLATFORM", fontproperties=myfont)
plt.ylabel(u"count", fontproperties=myfont)
df_dis_series = df.PLATFORM.value_counts()
Score_1_per = Score_1/df_dis_series * 100
Score_2_per = Score_2/df_dis_series * 100
Score_3_per = Score_3/df_dis_series * 100
Score_4_per = Score_4/df_dis_series * 100
Score_5_per = Score_5/df_dis_series * 100
print(df_dis_series)
print()
print('Score 1 distribution:\n', pd.concat([Score_1.to_frame(name='count'), Score_1_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 2 distribution:\n', pd.concat([Score_2.to_frame(name='count'), Score_2_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 3 distribution:\n', pd.concat([Score_3.to_frame(name='count'), Score_3_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 4 distribution:\n', pd.concat([Score_4.to_frame(name='count'), Score_4_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 5 distribution:\n', pd.concat([Score_5.to_frame(name='count'), Score_5_per.to_frame(name='percentage:%')], axis=1))
plt.show()
```
Conclusion: among low scores, the share for PLATFORM = 0 is far larger than for the other types, followed by PLATFORM = 1.
### 2.4 REFNUM6 vs. JMT analysis
```
# Distribution of REFNUM6 vs. JMT
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
myfont = FontProperties(fname='/Library/Fonts/Songti.ttc')
Score_1 = df.REFNUM6[df.USERJMT_DIS == 1].value_counts()
Score_2 = df.REFNUM6[df.USERJMT_DIS == 2].value_counts()
Score_3 = df.REFNUM6[df.USERJMT_DIS == 3].value_counts()
Score_4 = df.REFNUM6[df.USERJMT_DIS == 4].value_counts()
Score_5 = df.REFNUM6[df.USERJMT_DIS == 5].value_counts()
pd.DataFrame({u'1': Score_1, u'2': Score_2, u'3': Score_3, u'4': Score_4, u'5': Score_5}).plot(kind='bar', stacked=True)
plt.title(u"REFNUM6 vs. USERJMT_DIS distribution", fontproperties=myfont)
plt.xlabel(u"REFNUM6", fontproperties=myfont)
plt.ylabel(u"count", fontproperties=myfont)
df_dis_series = df.REFNUM6.value_counts()
Score_1_per = Score_1/df_dis_series * 100
Score_2_per = Score_2/df_dis_series * 100
Score_3_per = Score_3/df_dis_series * 100
Score_4_per = Score_4/df_dis_series * 100
Score_5_per = Score_5/df_dis_series * 100
print(df_dis_series)
print()
print('Score 1 distribution:\n', pd.concat([Score_1.to_frame(name='count'), Score_1_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 2 distribution:\n', pd.concat([Score_2.to_frame(name='count'), Score_2_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 3 distribution:\n', pd.concat([Score_3.to_frame(name='count'), Score_3_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 4 distribution:\n', pd.concat([Score_4.to_frame(name='count'), Score_4_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 5 distribution:\n', pd.concat([Score_5.to_frame(name='count'), Score_5_per.to_frame(name='percentage:%')], axis=1))
plt.show()
```
Conclusion:
The shares for REFNUM6 = 41 and REFNUM6 = 0 are the highest.
### 2.5 USEROS vs. JMT analysis
```
# Distribution of USEROS vs. JMT
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
myfont = FontProperties(fname='/Library/Fonts/Songti.ttc')
Score_1 = df.USEROS[df.USERJMT_DIS == 1].value_counts()
Score_2 = df.USEROS[df.USERJMT_DIS == 2].value_counts()
Score_3 = df.USEROS[df.USERJMT_DIS == 3].value_counts()
Score_4 = df.USEROS[df.USERJMT_DIS == 4].value_counts()
Score_5 = df.USEROS[df.USERJMT_DIS == 5].value_counts()
pd.DataFrame({u'1': Score_1, u'2': Score_2, u'3': Score_3, u'4': Score_4, u'5': Score_5}).plot(kind='bar', stacked=True)
plt.title(u"USEROS vs. USERJMT_DIS distribution", fontproperties=myfont)
plt.xlabel(u"USEROS", fontproperties=myfont)
plt.ylabel(u"count", fontproperties=myfont)
df_dis_series = df.USEROS.value_counts()
Score_1_per = Score_1/df_dis_series * 100
Score_2_per = Score_2/df_dis_series * 100
Score_3_per = Score_3/df_dis_series * 100
Score_4_per = Score_4/df_dis_series * 100
Score_5_per = Score_5/df_dis_series * 100
print(df_dis_series)
print()
print('Score 1 distribution:\n', pd.concat([Score_1.to_frame(name='count'), Score_1_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 2 distribution:\n', pd.concat([Score_2.to_frame(name='count'), Score_2_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 3 distribution:\n', pd.concat([Score_3.to_frame(name='count'), Score_3_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 4 distribution:\n', pd.concat([Score_4.to_frame(name='count'), Score_4_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 5 distribution:\n', pd.concat([Score_5.to_frame(name='count'), Score_5_per.to_frame(name='percentage:%')], axis=1))
plt.show()
```
Conclusion: among low scores (1-2), the mac family still performs better than the Windows family; compare intel mac os x 10.10/10.11/10.12 against Windows 7/8/10.
### 2.6 USERBROWSER vs. JMT analysis
```
# Distribution of USERBROWSER vs. JMT
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
myfont = FontProperties(fname='/Library/Fonts/Songti.ttc')
Score_1 = df.USERBROWSER[df.USERJMT_DIS == 1].value_counts()
Score_2 = df.USERBROWSER[df.USERJMT_DIS == 2].value_counts()
Score_3 = df.USERBROWSER[df.USERJMT_DIS == 3].value_counts()
Score_4 = df.USERBROWSER[df.USERJMT_DIS == 4].value_counts()
Score_5 = df.USERBROWSER[df.USERJMT_DIS == 5].value_counts()
pd.DataFrame({u'1': Score_1, u'2': Score_2, u'3': Score_3, u'4': Score_4, u'5': Score_5}).plot(kind='bar', stacked=True)
plt.title(u"USERBROWSER vs. USERJMT_DIS distribution", fontproperties=myfont)
plt.xlabel(u"USERBROWSER", fontproperties=myfont)
plt.ylabel(u"count", fontproperties=myfont)
df_dis_series = df.USERBROWSER.value_counts()
Score_1_per = Score_1/df_dis_series * 100
Score_2_per = Score_2/df_dis_series * 100
Score_3_per = Score_3/df_dis_series * 100
Score_4_per = Score_4/df_dis_series * 100
Score_5_per = Score_5/df_dis_series * 100
print(df_dis_series)
print()
pd.set_option('display.max_rows', None)
print('Score 1 distribution:\n', pd.concat([Score_1.to_frame(name='count'), Score_1_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 2 distribution:\n', pd.concat([Score_2.to_frame(name='count'), Score_2_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 3 distribution:\n', pd.concat([Score_3.to_frame(name='count'), Score_3_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 4 distribution:\n', pd.concat([Score_4.to_frame(name='count'), Score_4_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 5 distribution:\n', pd.concat([Score_5.to_frame(name='count'), Score_5_per.to_frame(name='percentage:%')], axis=1))
plt.show()
```
Conclusion: no clear pattern for this field.
### 2.7 DOWNLOADMETHOD vs. JMT analysis
```
# Distribution of DOWNLOADMETHOD vs. JMT
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
myfont = FontProperties(fname='/Library/Fonts/Songti.ttc')
Score_1 = df.DOWNLOADMETHOD[df.USERJMT_DIS == 1].value_counts()
Score_2 = df.DOWNLOADMETHOD[df.USERJMT_DIS == 2].value_counts()
Score_3 = df.DOWNLOADMETHOD[df.USERJMT_DIS == 3].value_counts()
Score_4 = df.DOWNLOADMETHOD[df.USERJMT_DIS == 4].value_counts()
Score_5 = df.DOWNLOADMETHOD[df.USERJMT_DIS == 5].value_counts()
pd.DataFrame({u'1': Score_1, u'2': Score_2, u'3': Score_3, u'4': Score_4, u'5': Score_5}).plot(kind='bar', stacked=True)
plt.title(u"DOWNLOADMETHOD vs. USERJMT_DIS distribution", fontproperties=myfont)
plt.xlabel(u"DOWNLOADMETHOD", fontproperties=myfont)
plt.ylabel(u"count", fontproperties=myfont)
df_dis_series = df.DOWNLOADMETHOD.value_counts()
Score_1_per = Score_1/df_dis_series * 100
Score_2_per = Score_2/df_dis_series * 100
Score_3_per = Score_3/df_dis_series * 100
Score_4_per = Score_4/df_dis_series * 100
Score_5_per = Score_5/df_dis_series * 100
print(df_dis_series)
print()
print('Score 1 distribution:\n', pd.concat([Score_1.to_frame(name='count'), Score_1_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 2 distribution:\n', pd.concat([Score_2.to_frame(name='count'), Score_2_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 3 distribution:\n', pd.concat([Score_3.to_frame(name='count'), Score_3_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 4 distribution:\n', pd.concat([Score_4.to_frame(name='count'), Score_4_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 5 distribution:\n', pd.concat([Score_5.to_frame(name='count'), Score_5_per.to_frame(name='percentage:%')], axis=1))
plt.show()
```
Conclusion: Java Applet Native Client gets many low scores, but its sample size is small.<br>
Extension and GPC Plugin are under-represented among low scores.
### 2.8 SERVICETYPE vs. JMT analysis
```
# Distribution of SERVICETYPE vs. JMT
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
myfont = FontProperties(fname='/Library/Fonts/Songti.ttc')
Score_1 = df.SERVICETYPE[df.USERJMT_DIS == 1].value_counts()
Score_2 = df.SERVICETYPE[df.USERJMT_DIS == 2].value_counts()
Score_3 = df.SERVICETYPE[df.USERJMT_DIS == 3].value_counts()
Score_4 = df.SERVICETYPE[df.USERJMT_DIS == 4].value_counts()
Score_5 = df.SERVICETYPE[df.USERJMT_DIS == 5].value_counts()
pd.DataFrame({u'1': Score_1, u'2': Score_2, u'3': Score_3, u'4': Score_4, u'5': Score_5}).plot(kind='bar', stacked=True)
plt.title(u"SERVICETYPE vs. USERJMT_DIS distribution", fontproperties=myfont)
plt.xlabel(u"SERVICETYPE", fontproperties=myfont)
plt.ylabel(u"count", fontproperties=myfont)
df_dis_series = df.SERVICETYPE.value_counts()
Score_1_per = Score_1/df_dis_series * 100
Score_2_per = Score_2/df_dis_series * 100
Score_3_per = Score_3/df_dis_series * 100
Score_4_per = Score_4/df_dis_series * 100
Score_5_per = Score_5/df_dis_series * 100
print(df_dis_series)
print()
print('Score 1 distribution:\n', pd.concat([Score_1.to_frame(name='count'), Score_1_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 2 distribution:\n', pd.concat([Score_2.to_frame(name='count'), Score_2_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 3 distribution:\n', pd.concat([Score_3.to_frame(name='count'), Score_3_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 4 distribution:\n', pd.concat([Score_4.to_frame(name='count'), Score_4_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 5 distribution:\n', pd.concat([Score_5.to_frame(name='count'), Score_5_per.to_frame(name='percentage:%')], axis=1))
plt.show()
```
Conclusion: MC has the highest rate of good scores, although this field's values are unevenly distributed and MC accounts for the majority.
### 2.9 SITEVERSION vs. JMT analysis
```
# Distribution of SITEVERSION vs. JMT
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
myfont = FontProperties(fname='/Library/Fonts/Songti.ttc')
Score_1 = df.SITEVERSION[df.USERJMT_DIS == 1].value_counts()
Score_2 = df.SITEVERSION[df.USERJMT_DIS == 2].value_counts()
Score_3 = df.SITEVERSION[df.USERJMT_DIS == 3].value_counts()
Score_4 = df.SITEVERSION[df.USERJMT_DIS == 4].value_counts()
Score_5 = df.SITEVERSION[df.USERJMT_DIS == 5].value_counts()
pd.DataFrame({u'1': Score_1, u'2': Score_2, u'3': Score_3, u'4': Score_4, u'5': Score_5}).plot(kind='bar', stacked=True)
plt.title(u"SITEVERSION vs. USERJMT_DIS distribution", fontproperties=myfont)
plt.xlabel(u"SITEVERSION", fontproperties=myfont)
plt.ylabel(u"count", fontproperties=myfont)
df_dis_series = df.SITEVERSION.value_counts()
Score_1_per = Score_1/df_dis_series * 100
Score_2_per = Score_2/df_dis_series * 100
Score_3_per = Score_3/df_dis_series * 100
Score_4_per = Score_4/df_dis_series * 100
Score_5_per = Score_5/df_dis_series * 100
print(df_dis_series)
print()
print('Score 1 distribution:\n', pd.concat([Score_1.to_frame(name='count'), Score_1_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 2 distribution:\n', pd.concat([Score_2.to_frame(name='count'), Score_2_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 3 distribution:\n', pd.concat([Score_3.to_frame(name='count'), Score_3_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 4 distribution:\n', pd.concat([Score_4.to_frame(name='count'), Score_4_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 5 distribution:\n', pd.concat([Score_5.to_frame(name='count'), Score_5_per.to_frame(name='percentage:%')], axis=1))
plt.show()
```
Conclusion: this field is not informative for the analysis.
### 2.10 HOUR vs. JMT analysis
```
# Distribution of HOUR vs. JMT
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
myfont = FontProperties(fname='/Library/Fonts/Songti.ttc')
Score_1 = df.HOUR[df.USERJMT_DIS == 1].value_counts()
Score_2 = df.HOUR[df.USERJMT_DIS == 2].value_counts()
Score_3 = df.HOUR[df.USERJMT_DIS == 3].value_counts()
Score_4 = df.HOUR[df.USERJMT_DIS == 4].value_counts()
Score_5 = df.HOUR[df.USERJMT_DIS == 5].value_counts()
pd.DataFrame({u'1': Score_1, u'2': Score_2, u'3': Score_3, u'4': Score_4, u'5': Score_5}).plot(kind='bar', stacked=True)
plt.title(u"HOUR vs. USERJMT_DIS distribution", fontproperties=myfont)
plt.xlabel(u"HOUR", fontproperties=myfont)
plt.ylabel(u"count", fontproperties=myfont)
df_dis_series = df.HOUR.value_counts()
Score_1_per = Score_1/df_dis_series * 100
Score_2_per = Score_2/df_dis_series * 100
Score_3_per = Score_3/df_dis_series * 100
Score_4_per = Score_4/df_dis_series * 100
Score_5_per = Score_5/df_dis_series * 100
print(df_dis_series)
print()
print('Score 1 distribution:\n', pd.concat([Score_1.to_frame(name='count'), Score_1_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 2 distribution:\n', pd.concat([Score_2.to_frame(name='count'), Score_2_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 3 distribution:\n', pd.concat([Score_3.to_frame(name='count'), Score_3_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 4 distribution:\n', pd.concat([Score_4.to_frame(name='count'), Score_4_per.to_frame(name='percentage:%')], axis=1))
print('\nScore 5 distribution:\n', pd.concat([Score_5.to_frame(name='count'), Score_5_per.to_frame(name='percentage:%')], axis=1))
plt.show()
# Hour vs. percentage distribution when the score is 1
plt.figure(figsize=(10, 4))
Score_1_per.plot(label='percentage %')
plt.legend()
plt.show()
# Hour vs. percentage distribution when the score is 2
plt.figure(figsize=(10, 4))
Score_2_per.plot(label='percentage %')
plt.legend()
plt.show()
# Hour vs. percentage distribution when the score is 5
plt.figure(figsize=(10, 4))
Score_5_per.plot(label='percentage %')
plt.legend()
plt.show()
```
Conclusion: meetings held between 0:00 and 10:00 show a markedly higher share of poor ratings.
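Note that dividing two `value_counts` Series, as done above, relies on pandas index alignment: hours that never receive a given score produce `NaN` percentages rather than 0. A minimal illustration with made-up counts:

```python
import pandas as pd

total = pd.Series({0: 10, 1: 20, 2: 30})   # all meetings per hour (synthetic)
score1 = pd.Series({0: 5, 2: 3})           # score-1 meetings per hour (synthetic)

# Division aligns on the index; hour 1 is absent from score1, so it yields NaN.
pct = score1 / total * 100
print(pct.to_dict())   # {0: 50.0, 1: nan, 2: 10.0}
```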
### 2.11 COUNTRY vs. JMT analysis
```
# Distribution of COUNTRY vs. JMT
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
myfont = FontProperties(fname='/Library/Fonts/Songti.ttc')
Score_1 = df.COUNTRY[df.USERJMT_DIS == 1].value_counts()
Score_2 = df.COUNTRY[df.USERJMT_DIS == 2].value_counts()
Score_3 = df.COUNTRY[df.USERJMT_DIS == 3].value_counts()
Score_4 = df.COUNTRY[df.USERJMT_DIS == 4].value_counts()
Score_5 = df.COUNTRY[df.USERJMT_DIS == 5].value_counts()
pd.DataFrame({u'1': Score_1, u'2': Score_2, u'3': Score_3, u'4': Score_4, u'5': Score_5}).plot(kind='bar', stacked=True)
plt.title(u"COUNTRY vs. USERJMT_DIS distribution", fontproperties=myfont)
plt.xlabel(u"COUNTRY", fontproperties=myfont)
plt.ylabel(u"count", fontproperties=myfont)
df_dis_series = df.COUNTRY.value_counts()
Score_1_per = Score_1 / df_dis_series * 100
Score_2_per = Score_2 / df_dis_series * 100
Score_3_per = Score_3 / df_dis_series * 100
Score_4_per = Score_4 / df_dis_series * 100
Score_5_per = Score_5 / df_dis_series * 100
print(df_dis_series)
print()
print('Distribution of score 1:\n', pd.concat([Score_1.to_frame(name='count'), Score_1_per.to_frame(name='percentage:%')], axis=1))
print('\nDistribution of score 2:\n', pd.concat([Score_2.to_frame(name='count'), Score_2_per.to_frame(name='percentage:%')], axis=1))
print('\nDistribution of score 3:\n', pd.concat([Score_3.to_frame(name='count'), Score_3_per.to_frame(name='percentage:%')], axis=1))
print('\nDistribution of score 4:\n', pd.concat([Score_4.to_frame(name='count'), Score_4_per.to_frame(name='percentage:%')], axis=1))
print('\nDistribution of score 5:\n', pd.concat([Score_5.to_frame(name='count'), Score_5_per.to_frame(name='percentage:%')], axis=1))
plt.show()
```
### 2.12 REGION vs. JMT analysis
```
# Distribution of REGION vs. JMT
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
myfont = FontProperties(fname='/Library/Fonts/Songti.ttc')
Score_1 = df.REGION[df.USERJMT_DIS == 1].value_counts()
Score_2 = df.REGION[df.USERJMT_DIS == 2].value_counts()
Score_3 = df.REGION[df.USERJMT_DIS == 3].value_counts()
Score_4 = df.REGION[df.USERJMT_DIS == 4].value_counts()
Score_5 = df.REGION[df.USERJMT_DIS == 5].value_counts()
pd.DataFrame({u'1': Score_1, u'2': Score_2, u'3': Score_3, u'4': Score_4, u'5': Score_5}).plot(kind='bar', stacked=True)
plt.title(u"REGION vs. USERJMT_DIS distribution", fontproperties=myfont)
plt.xlabel(u"REGION", fontproperties=myfont)
plt.ylabel(u"count", fontproperties=myfont)
df_dis_series = df.REGION.value_counts()
Score_1_per = Score_1 / df_dis_series * 100
Score_2_per = Score_2 / df_dis_series * 100
Score_3_per = Score_3 / df_dis_series * 100
Score_4_per = Score_4 / df_dis_series * 100
Score_5_per = Score_5 / df_dis_series * 100
print(df_dis_series)
print()
print('Distribution of score 1:\n', pd.concat([Score_1.to_frame(name='count'), Score_1_per.to_frame(name='percentage:%')], axis=1))
print('\nDistribution of score 2:\n', pd.concat([Score_2.to_frame(name='count'), Score_2_per.to_frame(name='percentage:%')], axis=1))
print('\nDistribution of score 3:\n', pd.concat([Score_3.to_frame(name='count'), Score_3_per.to_frame(name='percentage:%')], axis=1))
print('\nDistribution of score 4:\n', pd.concat([Score_4.to_frame(name='count'), Score_4_per.to_frame(name='percentage:%')], axis=1))
print('\nDistribution of score 5:\n', pd.concat([Score_5.to_frame(name='count'), Score_5_per.to_frame(name='percentage:%')], axis=1))
plt.show()
```
### 2.13 CITY vs. JMT analysis
```
# Distribution of CITY vs. JMT
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
myfont = FontProperties(fname='/Library/Fonts/Songti.ttc')
# ===== Merge categories whose frequency is below a threshold into "other" =====
df_dis_series = df.CITY.value_counts()
# print('=====', len(df_dis_series))
city_flag_series = df_dis_series <= 10
for city, flag in city_flag_series.items():
    if flag:
        df.CITY.replace(city, "other", inplace=True)
# Originally 2849 cities; 312 remain after merging
# ==============================================================================
Score_1 = df.CITY[df.USERJMT_DIS == 1].value_counts()
Score_2 = df.CITY[df.USERJMT_DIS == 2].value_counts()
Score_3 = df.CITY[df.USERJMT_DIS == 3].value_counts()
Score_4 = df.CITY[df.USERJMT_DIS == 4].value_counts()
Score_5 = df.CITY[df.USERJMT_DIS == 5].value_counts()
pd.DataFrame({u'1': Score_1, u'2': Score_2, u'3': Score_3, u'4': Score_4, u'5': Score_5}).plot(kind='bar', stacked=True)
plt.title(u"CITY vs. USERJMT_DIS distribution", fontproperties=myfont)
plt.xlabel(u"CITY", fontproperties=myfont)
plt.ylabel(u"count", fontproperties=myfont)
df_dis_series = df.CITY.value_counts()
Score_1_per = Score_1 / df_dis_series * 100
Score_2_per = Score_2 / df_dis_series * 100
Score_3_per = Score_3 / df_dis_series * 100
Score_4_per = Score_4 / df_dis_series * 100
Score_5_per = Score_5 / df_dis_series * 100
print(df_dis_series)
print()
print('Distribution of score 1:\n', pd.concat([Score_1.to_frame(name='count'), Score_1_per.to_frame(name='percentage:%')], axis=1))
print('\nDistribution of score 2:\n', pd.concat([Score_2.to_frame(name='count'), Score_2_per.to_frame(name='percentage:%')], axis=1))
print('\nDistribution of score 3:\n', pd.concat([Score_3.to_frame(name='count'), Score_3_per.to_frame(name='percentage:%')], axis=1))
print('\nDistribution of score 4:\n', pd.concat([Score_4.to_frame(name='count'), Score_4_per.to_frame(name='percentage:%')], axis=1))
print('\nDistribution of score 5:\n', pd.concat([Score_5.to_frame(name='count'), Score_5_per.to_frame(name='percentage:%')], axis=1))
plt.show()
```
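The rare-category merging used for `CITY` above can be sketched in isolation; the data and the threshold of 3 below are made up for illustration:

```python
import pandas as pd

# Synthetic city column: 'A' and 'B' are frequent, 'C' is rare.
s = pd.Series(['A'] * 5 + ['B'] * 3 + ['C'])
counts = s.value_counts()

# Map every category that appears fewer than 3 times to 'other'.
rare = counts[counts < 3].index
s = s.replace(list(rare), 'other')
print(s.value_counts().to_dict())   # {'A': 5, 'B': 3, 'other': 1}
```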
# Activity 5: Assembling a Deep Learning System
In this activity, we will train the first version of our LSTM model using Bitcoin daily closing prices. The prices are organized into weekly groups spanning 2016 and 2017, because we are interested in predicting a full week's worth of trading.
```
%autosave 5
# Import necessary libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn-white')
from keras.models import load_model
# Import training dataset
train = pd.read_csv('data/train_dataset.csv')
train.head()
```
## Reshape Data
```
def create_groups(data, group_size=7):
    """Create distinct groups from a continuous series.

    Parameters
    ----------
    data: np.array
        Series of continuous observations.
    group_size: int, default 7
        Determines how large the groups are. That is,
        how many observations each group contains.

    Returns
    -------
    A Numpy array object.
    """
    samples = []
    for i in range(0, len(data), group_size):
        sample = list(data[i:i + group_size])
        if len(sample) == group_size:
            samples.append(np.array(sample).reshape(1, group_size))
    return np.array(samples)
# Find the remainder when the number of observations is divided by group size
len(train) % 7
# Create groups of 7 from our data.
# We drop the first two observations so that the
# number of total observations is divisible by the `group_size`.
data = create_groups(train['close_point_relative_normalization'][2:].values)
print(data.shape)
# Reshape data into format expected by LSTM layer
X_train = data[:-1, :].reshape(1, 76, 7)
Y_validation = data[-1].reshape(1, 7)
print(X_train.shape)
print(Y_validation.shape)
```
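As a quick sanity check of the grouping logic, the same `create_groups` routine can be run on a synthetic series (the values below are arbitrary):

```python
import numpy as np

def create_groups(data, group_size=7):
    # Split a 1-D series into non-overlapping groups of `group_size`,
    # dropping any incomplete trailing group.
    samples = []
    for i in range(0, len(data), group_size):
        sample = list(data[i:i + group_size])
        if len(sample) == group_size:
            samples.append(np.array(sample).reshape(1, group_size))
    return np.array(samples)

series = np.arange(23)      # 23 observations: 3 full weeks + 2 leftover points
groups = create_groups(series)
print(groups.shape)         # (3, 1, 7) — the 2 leftover points are dropped
```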
## Load Our Model
```
# Load our previously trained model
model = load_model('bitcoin_lstm_v0.h5')
```
## Train model
```
%%time
# Train the model
history = model.fit(
    x=X_train, y=Y_validation,
    batch_size=32, epochs=100)
# Plot loss function
pd.Series(history.history['loss']).plot(figsize=(14, 4));
```
## Make Predictions
```
# Make predictions using X_train data
predictions = model.predict(x=X_train)[0]
predictions
def denormalize(series, last_value):
    """Denormalize the values for a given series.

    This uses the last value available (i.e. the last
    closing price of the week before our prediction)
    as a reference for scaling the predicted results.
    """
    result = last_value * (series + 1)
    return result
# Denormalize predictions
last_weeks_value = train[train['date'] == train['date'][:-7].max()]['close'].values[0]
denormalized_prediction = denormalize(predictions, last_weeks_value)
denormalized_prediction
# Plot denormalized predictions against actual predictions
plt.figure(figsize=(14, 4))
plt.plot(train['close'][-7:].values, label='Actual')
plt.plot(denormalized_prediction, color='#d35400', label='Predicted')
plt.grid()
plt.legend();
prediction_plot = np.zeros(len(train)-2)
prediction_plot[:] = np.nan
prediction_plot[-7:] = denormalized_prediction
plt.figure(figsize=(14, 4))
plt.plot(train['close'][-30:].values, label='Actual')
plt.plot(prediction_plot[-30:], color='#d35400', linestyle='--', label='Predicted')
plt.axvline(30 - 7, color='r', linestyle='--', linewidth=1)
plt.grid()
plt.legend(loc='lower right');
# TASK:
# Save model to disk
#
```
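The denormalization step can be verified on made-up numbers; `100.0` below stands in for the last observed closing price:

```python
import numpy as np

def denormalize(series, last_value):
    # Invert the point-relative normalization: each normalized value v
    # maps back to last_value * (v + 1).
    return last_value * (series + 1)

normalized = np.array([0.0, 0.01, -0.02])   # hypothetical model predictions
prices = denormalize(normalized, 100.0)     # back to absolute prices
print(prices)
```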
In this activity, we have assembled a complete deep learning system: from data to prediction. The model created in this activity needs a number of improvements before it can be considered useful; however, it serves as a great starting point from which we can continue to improve.
# Neural Sequence Distance Embeddings
[](https://colab.research.google.com/github/gcorso/NeuroSEED/blob/master/tutorial/NeuroSEED.ipynb)
The improvement of data-dependent heuristics and representation for biological sequences is a critical requirement to fully exploit the recent technological and scientific advancements for human microbiome analysis. This notebook presents Neural Sequence Distance Embeddings (NeuroSEED), a novel framework to embed biological sequences in geometric vector spaces that unifies recently proposed approaches. We demonstrate its capacity by presenting different ways it can be applied to the tasks of edit distance approximation, closest string retrieval, hierarchical clustering and multiple sequence alignment. In particular, the hyperbolic space is shown to be a key component to embed biological sequences and obtain competitive heuristics. Benchmarked with common bioinformatics and machine learning baselines, the proposed approaches display significant accuracy and/or runtime improvements on real-world datasets formed by sequences from samples of the human microbiome.

Figure 1: On the left, a diagram of the idea underlying NeuroSEED: embed sequences in vector spaces preserving the edit distance between them, and then extract information from the vector space. On the right, an example of the hierarchical clustering produced on the Poincaré disk for the P53 tumour protein from 20 different organisms.
## Introduction and Motivation
### Motivation
Dysfunctions of the human microbiome (Morgan & Huttenhower, 2012) have been linked to many serious diseases ranging from diabetes and antibiotic resistance to inflammatory bowel disease. Its usage as a biomarker for the diagnosis and as a target for interventions is a very active area of research. Thanks to the advances in sequencing technologies, modern analysis relies on sequence reads that can be generated relatively quickly. However, to fully exploit the potential of these advances for personalised medicine, the computational methods used in the analysis have to significantly improve in terms of speed and accuracy.

Figure 2: Traditional approach to the analysis of the 16S rRNA sequences from the microbiome.
### Problem
While the number of available biological sequences has been growing exponentially over the past decades, most of the problems related to string matching have not been addressed by the recent advances in machine learning. Classical algorithms are data-independent and, therefore, cannot exploit the low-dimensional manifold assumption that characterises real-world data. Exploiting the available data to produce data-dependent heuristics and representations would greatly accelerate large-scale analyses that are critical to microbiome analysis and other biological research.
Unlike most tasks in computer vision and NLP, string matching problems are typically formulated as combinatorial optimisation problems. These discrete formulations do not fit well with the current deep learning approaches causing these problems to be left mostly unexplored by the community. Current supervised learning methods also suffer from the lack of labels that characterises many downstream applications with biological sequences. On the other hand, common self-supervised learning approaches, very successful in NLP, are less effective in the biological context where relations tend to be per-sequence rather than per-token (McDermott et al. 2021).
### Neural Sequence Distance Embedding
In this notebook, we present Neural Sequence Distance Embeddings (NeuroSEED), a general framework to produce representations for biological sequences where the distance in the embedding space is correlated with the evolutionary distance between sequences. This control over the geometric interpretation of the representation space enables the use of geometrical data processing tools for the analysis of the spectrum of sequences.

Figure 3: The key idea of NeuroSEED is to learn an encoder function that preserves distances between the sequence and vector space.
Examining the task of embedding sequences to preserve the edit distance reveals the importance of data-dependent approaches and of using a geometry that matches well the underlying distribution in the data analysed. For biological datasets, that have an implicit hierarchical structure given by evolution, the hyperbolic space provides significant improvement.
We show the potential of the framework by analysing three fundamental tasks in bioinformatics: closest string retrieval, hierarchical clustering and multiple sequence alignment. For all tasks, relatively simple unsupervised approaches using NeuroSEED encoders significantly outperform data-independent heuristics in terms of accuracy and/or runtime. In the paper (preprint will be available soon) and the [complete repository](https://github.com/gcorso/NeuroSEED) we also present more complex geometrical approaches to hierarchical clustering and multiple sequence alignment.
## 2. Analysis
To improve readability and limit the size of the notebook we make use of some subroutines in the [official repository](https://github.com/gcorso/NeuroSEED) for the research project. The code in the notebook is our best effort to convey the promising application of hyperbolic geometry to this novel research direction.
Install and import the required packages.
```
!pip3 install geomstats
!apt install clustalw
!pip install biopython
!pip install python-Levenshtein
!pip install Cython
!pip install networkx
!pip install tqdm
!pip install gdown
!pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
!git clone https://github.com/gcorso/NeuroSEED.git
import os
os.chdir("NeuroSEED")
!cd hierarchical_clustering/relaxed/mst; python setup.py build_ext --inplace; cd ../unionfind; python setup.py build_ext --inplace; cd ..; cd ..; cd ..;
os.environ['GEOMSTATS_BACKEND'] = 'pytorch'
import torch
import os
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import time
from geomstats.geometry.poincare_ball import PoincareBall
from edit_distance.train import load_edit_distance_dataset
from util.data_handling.data_loader import get_dataloaders
from util.ml_and_math.loss_functions import AverageMeter
```
### Dataset description
As microbiome analysis is one of the most critical applications where the methods presented could be applied, we chose to use a dataset containing a portion of the 16S rRNA gene widely used in the biological literature to analyse microbiome diversity. Qiita (Clemente et al. 2015) contains more than 6M sequences of up to 152 bp that cover the V4 hyper-variable region collected from skin, saliva and faeces samples of uncontacted Amerindians. The full dataset can be found on the [European Nucleotide Archive](https://www.ebi.ac.uk/ena/browser/text-search?query=ERP008799), but, in this notebook, we will only use a subset of a few tens of thousands that have been preprocessed and labelled with pairwise distances. We also provide results on the RT988 dataset (Zheng et al. 2019), another dataset of 16S rRNA that contains slightly longer sequences (up to 465 bp).
```
!gdown --id 1yZTOYrnYdW9qRrwHSO5eRc8rYIPEVtY2 # for edit distance approximation
!gdown --id 1hQSHR-oeuS9bDVE6ABHS0SoI4xk3zPnB # for closest string retrieval
!gdown --id 1ukvUI6gUTbcBZEzTVDpskrX8e6EHqVQg # for hierarchical clustering
```
### Edit distance approximation
**Edit distance** The task of finding the distance or similarity between two strings and the related task of global alignment lie at the foundation of bioinformatics. Due to the resemblance with the biological mutation process, the edit distance and its variants are typically used to measure similarity between sequences. Given two strings $s_1$ and $s_2$, their edit distance $ED(s_1, s_2)$ is defined as the minimum number of insertions, deletions or substitutions needed to transform $s_1$ into $s_2$. We always deal with the classical edit distance where the same weight is given to every operation; however, all the approaches developed can be applied to any distance function of choice.
**Task and loss function** As represented in Figure 3, the task is to learn an encoding function $f$ such that given any pair of sequences from the domain of interest $s_1$ and $s_2$:
\begin{equation}ED(s_1, s_2) \approx n \; d(f(s_1), f(s_2)) \end{equation}
where $n$ is the maximum sequence length and $d$ is a distance function over the vector space. In practice this is enforced in the model by minimising the mean squared error between the actual and the predicted edit distance. To make the results more interpretable and comparable across different datasets, we report results using \% RMSE defined as:
\begin{equation}
\%\,\text{RMSE}(f, S) = \frac{100}{n} \, \sqrt{L(f, S)} = \frac{100}{n} \, \sqrt{\frac{1}{|S|^2} \sum_{s_1, s_2 \in S} \left( ED(s_1, s_2) - n \; d(f(s_1), f(s_2)) \right)^2}
\end{equation}
which can be interpreted as an approximate average error in the distance prediction as a percentage of the size of the sequences.
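Treating $L(f, S)$ as the mean squared error over the evaluated pairs, the metric can be computed directly; the edit and embedding distances below are invented for illustration:

```python
import numpy as np

n = 152                                  # max sequence length (Qiita reads)
true_ed = np.array([10.0, 20.0, 30.0])   # hypothetical true edit distances
pred_d = np.array([0.07, 0.13, 0.20])    # hypothetical embedding distances

# %RMSE: RMSE of the scaled distance predictions, as a percentage of n.
errors = true_ed - n * pred_d
pct_rmse = 100.0 / n * np.sqrt(np.mean(errors ** 2))
print(round(pct_rmse, 3))   # 0.301
```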
In this notebook, we only show the code to run a simple linear layer on the sequence which, in the hyperbolic space, already gives particularly good results. Later we will also report results for more complex models whose implementation can be found in the [NeuroSEED repository](https://github.com/gcorso/NeuroSEED).
```
class LinearEncoder(nn.Module):
    """ Linear model which simply flattens the sequence and applies a linear transformation. """

    def __init__(self, len_sequence, embedding_size, alphabet_size=4):
        super(LinearEncoder, self).__init__()
        self.encoder = nn.Linear(in_features=alphabet_size * len_sequence,
                                 out_features=embedding_size)

    def forward(self, sequence):
        # flatten sequence and apply layer
        B = sequence.shape[0]
        sequence = sequence.reshape(B, -1)
        emb = self.encoder(sequence)
        return emb


class PairEmbeddingDistance(nn.Module):
    """ Wrapper model for a general encoder: computes pairwise distances and applies projections """

    def __init__(self, embedding_model, embedding_size, scaling=False):
        super(PairEmbeddingDistance, self).__init__()
        self.hyperbolic_metric = PoincareBall(embedding_size).metric.dist
        self.embedding_model = embedding_model
        self.radius = nn.Parameter(torch.Tensor([1e-2]), requires_grad=True)
        self.scaling = nn.Parameter(torch.Tensor([1.]), requires_grad=True)

    def normalize_embeddings(self, embeddings):
        """ Project embeddings onto a hypersphere of a certain radius """
        min_scale = 1e-7
        max_scale = 1 - 1e-3
        return F.normalize(embeddings, p=2, dim=1) * self.radius.clamp_min(min_scale).clamp_max(max_scale)

    def encode(self, sequence):
        """ Use the embedding model and normalization to encode some sequences. """
        enc_sequence = self.embedding_model(sequence)
        enc_sequence = self.normalize_embeddings(enc_sequence)
        return enc_sequence

    def forward(self, sequence):
        # flatten couples
        (B, _, N, _) = sequence.shape
        sequence = sequence.reshape(2 * B, N, -1)

        # encode sequences
        enc_sequence = self.encode(sequence)

        # compute distances
        enc_sequence = enc_sequence.reshape(B, 2, -1)
        distance = self.hyperbolic_metric(enc_sequence[:, 0], enc_sequence[:, 1])
        distance = distance * self.scaling
        return distance
```
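The Poincaré distance that `geomstats` computes above has a simple closed form, $d(u, v) = \operatorname{arcosh}\left(1 + \frac{2\|u-v\|^2}{(1-\|u\|^2)(1-\|v\|^2)}\right)$. A NumPy sketch (not the library implementation) illustrates it:

```python
import numpy as np

def poincare_dist(u, v):
    # Closed-form geodesic distance on the Poincare ball.
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * sq / denom)

u = np.array([0.1, 0.0])
v = np.array([-0.1, 0.0])
d = poincare_dist(u, v)
# Near the origin the metric is ~2x the Euclidean one, so d is close to 0.4.
print(round(d, 4))
```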
General training and evaluation routines used to train the models:
```
def train(model, loader, optimizer, loss, device):
    avg_loss = AverageMeter()
    model.train()

    for sequences, labels in loader:
        # move examples to right device
        sequences, labels = sequences.to(device), labels.to(device)

        # forward propagation
        optimizer.zero_grad()
        output = model(sequences)

        # loss and backpropagation
        loss_train = loss(output, labels)
        loss_train.backward()
        optimizer.step()

        # keep track of average loss
        avg_loss.update(loss_train.data.item(), sequences.shape[0])
    return avg_loss.avg


def test(model, loader, loss, device):
    avg_loss = AverageMeter()
    model.eval()

    for sequences, labels in loader:
        # move examples to right device
        sequences, labels = sequences.to(device), labels.to(device)

        # forward propagation and loss computation
        output = model(sequences)
        loss_val = loss(output, labels).data.item()
        avg_loss.update(loss_val, sequences.shape[0])
    return avg_loss.avg
```
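`AverageMeter`, imported from the repository's utilities, is the usual running-average helper; a minimal stand-in with the same `update`/`avg` interface (details may differ from the repository's version):

```python
class AverageMeter:
    # Tracks a running (sample-weighted) average of a scalar metric.
    def __init__(self):
        self.sum = 0.0
        self.count = 0

    def update(self, value, n=1):
        # `value` is a batch mean; weight it by the batch size `n`.
        self.sum += value * n
        self.count += n

    @property
    def avg(self):
        return self.sum / max(self.count, 1)

meter = AverageMeter()
meter.update(2.0, n=10)   # batch of 10 with mean loss 2.0
meter.update(4.0, n=30)   # batch of 30 with mean loss 4.0
print(meter.avg)          # 3.5
```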
The linear model is trained on 7000 sequences (plus 700 for validation) and tested on 1500 held-out sequences:
```
EMBEDDING_SIZE = 128
device = 'cuda' if torch.cuda.is_available() else 'cpu'
torch.manual_seed(2021)
if device == 'cuda':
    torch.cuda.manual_seed(2021)
# load data
datasets = load_edit_distance_dataset('./edit_qiita_large.pkl')
loaders = get_dataloaders(datasets, batch_size=128, workers=1)
# model, optimizer and loss
encoder = LinearEncoder(152, EMBEDDING_SIZE)
model = PairEmbeddingDistance(embedding_model=encoder, embedding_size=EMBEDDING_SIZE)
model.to(device)
optimizer = optim.Adam(model.parameters(), lr=0.001)
loss = nn.MSELoss()
# training
for epoch in range(0, 21):
    t = time.time()
    loss_train = train(model, loaders['train'], optimizer, loss, device)
    loss_val = test(model, loaders['val'], loss, device)

    # print progress
    if epoch % 5 == 0:
        print('Epoch: {:02d}'.format(epoch),
              'loss_train: {:.6f}'.format(loss_train),
              'loss_val: {:.6f}'.format(loss_val),
              'time: {:.4f}s'.format(time.time() - t))
# testing
for dset in loaders.keys():
    avg_loss = test(model, loaders[dset], loss, device)
    print('Final results {}: loss = {:.6f}'.format(dset, avg_loss))
```
Therefore, after only 20 epochs our linear model reaches a $\% RMSE \approx 2.6$ which, as we will see, is significantly better than any data-independent baseline.
### Closest string retrieval
This task consists of finding the sequence that is closest to a given query among a large number of reference sequences and is very commonly used to classify sequences. Given a set of reference strings $R$ and a set of queries $Q$, the task is to identify the string $r_q \in R$ that minimises $ED(r_q, q)$ for each $q \in Q$. This task is performed in an unsupervised setting using models trained for edit distance approximation. Therefore, given a pretrained encoder $f$, its prediction is taken to be the string $r_q \in R$ that minimises $d(f(r_q), f(q))$ for each $q \in Q$. This allows for sublinear retrieval (via locality-sensitive hashing or other data structures) which is critical in real-world applications where databases can have billions of reference sequences. As performance measures, we report the top-1, top-5 and top-10 scores, where top-$k$ indicates the percentage of times the model ranks the closest string within its top-$k$ predictions.
```
from closest_string.test import closest_string_testing
closest_string_testing(encoder_model=model, data_path='./closest_qiita_large.pkl',
                       batch_size=128, device=device, distance='hyperbolic')
```
Evaluated on a dataset composed of 1000 reference and 1000 query sequences (disjoint from the edit distance training set) the simple model we trained is capable of detecting the closest sequence correctly 44\% of the time and in approximately 3/4 of the cases it places the real closest sequence in its top-10 choices.
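Once sequences are embedded, retrieval reduces to nearest-neighbour search in the vector space. A brute-force sketch with synthetic 2-D embeddings (a real pipeline would use the hyperbolic distance and a sublinear index structure):

```python
import numpy as np

refs = np.array([[0.0, 0.0],
                 [1.0, 0.0],
                 [0.0, 2.0]])        # reference embeddings (synthetic)
query = np.array([0.9, 0.1])         # query embedding (synthetic)

# Distance from the query to every reference; the argmin is the prediction.
dists = np.linalg.norm(refs - query, axis=1)
top1 = int(np.argmin(dists))
print(top1)   # 1
```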
### Hierarchical clustering
Hierarchical clustering (HC) consists of constructing a hierarchy over clusters of data by defining a tree whose internal nodes correspond to clusters and whose leaves correspond to datapoints. The goodness of the tree can be measured using Dasgupta's cost (Dasgupta 2016).
One simple approach to use NeuroSEED to speed up hierarchical clustering is similar to the one adopted in the previous section: estimate the pairwise distance matrix with a model pretrained for *edit distance approximation* and then use the matrix as the basis for classical agglomerative clustering algorithms (e.g. Single, Average and Complete Linkage). The computational cost to generate the matrix goes from $O(N^2M^2)$ to $O(N(M+N))$ and by using optimisations like locality-sensitive hashing the clustering itself can be accelerated.
The following code computes the pairwise distance matrix and then runs a series of agglomerative clustering heuristics (Single, Average, Complete and Ward Linkage) on it.
```
from hierarchical_clustering.unsupervised.unsupervised import hierarchical_clustering_testing
hierarchical_clustering_testing(encoder_model=model, data_path='./hc_qiita_large_extr.pkl',
                                batch_size=128, device=device, distance='hyperbolic')
```
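Given the (approximate) pairwise distance matrix, the agglomerative step itself is standard; for example with SciPy on a synthetic 4-sequence distance matrix:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# Synthetic symmetric distance matrix for 4 sequences:
# sequences 0/1 are close, 2/3 are close, the two pairs are far apart.
D = np.array([[0., 1., 4., 5.],
              [1., 0., 4., 5.],
              [4., 4., 0., 2.],
              [5., 5., 2., 0.]])

# Average Linkage on the condensed form of the matrix.
Z = linkage(squareform(D), method='average')
print(Z.shape)   # (3, 4): n-1 merges, each row (cluster a, cluster b, dist, size)
```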
An alternative approach to performing hierarchical clustering we propose uses the continuous relaxation of Dasgupta's cost (Chami et al. 2020) to embed sequences in the hyperbolic space. In comparison to Chami et al. (2020), we show that it is possible to significantly decrease the number of pairwise distances required by directly mapping the sequences.
This considerably speeds up the construction, especially when dealing with a large number of sequences, and does not require any pretrained model. Figure 1 shows an example of this approach applied to a small dataset of proteins; the code for it is in the NeuroSEED repository.
### Multiple Sequence Alignment
Multiple Sequence Alignment is another very common task in bioinformatics and there are several ways of using NeuroSEED to accelerate heuristics. The most commonly used programs such as the Clustal series and MUSCLE are based on a phylogenetic tree estimation phase from the pairwise distances which produces a guide tree, which is then used to guide a progressive alignment phase.
When the Clustal MSA algorithm is run on a 1200-sequence subset of RT988, the construction of the distance matrix and of the guide tree takes 99% of the total running time (the remaining steps take 24s out of 35 minutes). Therefore, one obvious improvement that NeuroSEED can bring is to speed up this phase using the hierarchical clustering techniques seen in the previous section.
The following code uses the model pretrained for edit distance to approximate the neighbour-joining tree construction, and then runs clustalw with that guide tree:
```
from multiple_alignment.guide_tree.guide_tree import approximate_guide_trees
# performs neighbour joining algorithm on the estimate of the pairwise distance matrix
approximate_guide_trees(encoder_model=model, dataset=datasets['test'],
                        batch_size=128, device=device, distance='hyperbolic')
# Command line clustalw using the tree generated with the previous command.
# The substitution matrix and gap penalties are set to simulate the classical edit distance used to train the model
!clustalw -infile="sequences.fasta" -dnamatrix=multiple_alignment/guide_tree/matrix.txt -transweight=0 -type='DNA' -gapopen=1 -gapext=1 -gapdist=10000 -usetree='njtree.dnd' | grep 'Alignment Score'
```
An alternative method we propose for the MSA uses an autoencoder to convert the Steiner string approximation problem in a continuous optimisation task. More details on this in our paper and repository.
## 3. Benchmark
In this section, we compare the NeuroSEED approach to classical baseline alignment-free approaches such as k-mer and contrast the performance of neural models with different architectures and on different geometric spaces.
### Edit distance approximation

Figure 4: \% RMSE test set results on the Qiita and RT988 datasets. The first five models are the k-mer baselines and, in parentheses, we indicate the dimension of the embedding space. The remaining are encoder models trained with the NeuroSEED framework and they all have an embedding space dimension equal to 128. - indicates that the model did not converge.
Figure 4 highlights the advantage provided by data-dependent methods when compared to the data-independent baseline approaches. Moreover, the results show that it is critical for the geometry of the embedding space to reflect the structure of the low dimensional manifold on which the data lies. In these biological datasets, there is an implicit hierarchical structure given by the evolution process which is well reflected by the *hyperbolic* plane. Thanks to this close correspondence, even relatively simple models like the linear regression and MLP perform very well with this distance function.

Figure 5: \% RMSE on Qiita dataset for a Transformer with different distance functions.
The benefit of using the hyperbolic space becomes evident when analysing the embedding dimension required (Figure 5). In these experiments, we run the Transformer model tuned on the Qiita dataset with an embedding size of 128 across a range of dimensions. The hyperbolic space provides significantly more efficient embeddings: the model reaches the 'elbow' at dimension 32 and matches, with only 4 to 16 dimensions, the performance that the other spaces achieve at dimension 128. Given that the space needed to store the embeddings and the time needed to compute distances between them scale linearly with the dimension, this provides a significant improvement in downstream tasks over other NeuroSEED approaches.
**Running time** A critical step behind most of the algorithms analysed in the rest of the paper is the computation of the pairwise distance matrix of a set of sequences. Taking as an example the RT988 dataset (6700 sequences of length up to 465 bases), optimised C code computes on a CPU approximately 2700 pairwise distances per second and takes 2.5 hours for the whole matrix. In comparison, using a trained NeuroSEED model, the same matrix can be approximated in 0.3-3s (similar value for the k-mer baseline) on the same CPU. The computational complexity for $N$ sequences of length $M$ is reduced from $O(N^2\; M^2)$ to $O(N(M + N))$ (assuming the model is linear w.r.t. the length and constant embedding size). The training process takes typically 0.5-3 hours on a GPU. However, in applications such as microbiome analysis, biologists typically analyse data coming from the same distribution (e.g. the 16S rRNA gene) for multiple individuals, therefore the initial cost would be significantly amortised.
### Closest string retrieval
Figure 6 shows that, in this task too, the data-dependent models outperform the baselines even when the latter operate on larger spaces. In terms of distance function, the *cosine* distance achieves performance on par with the *hyperbolic* one. This can be explained by the fact that, for a set of points on the same hypersphere, the points with the smallest *cosine* distance and those with the smallest *hyperbolic* distance coincide. The *cosine* distance is therefore capable of providing good orderings of sequence similarity, but inferior approximations of the distances themselves.

Figure 6: Accuracy of different models in the *closest string retrieval* task on the Qiita dataset.
### Hierarchical clustering
The results (Figure 7) show that the difference in performance between the most expressive models and the ground truth distances is not statistically significant. The *hyperbolic* space achieves the best performance and, although the relative difference between the methods is not large in terms of percentage Dasgupta's cost (while still statistically significant), it results in a large performance gap when these trees are used for tasks such as MSA. The total CPU time taken to construct the tree is reduced from more than 30 minutes to less than one on this dataset, and the difference grows significantly when scaling to datasets with more and longer sequences.

Figure 7: Average Linkage % increase in Dasgupta's cost of NeuroSEED models compared to the performance of clustering on the ground truth distances. Average Linkage was the best performing clustering heuristic across all models.
### Multiple Sequence Alignment
The results reported in Figure 8 show that the alignment scores obtained when using the NeuroSEED heuristics with models such as GAT are not statistically different from those obtained with the ground truth distances. Most of the models show a relatively large variance in performance across different runs. This has positive and negative consequences: the alignment obtained using a single run may not be very accurate, but, by training an ensemble of models and applying each of them, we are likely to obtain a significantly better alignment than the one from the ground truth matrix while still only taking a fraction of the time.

Figure 8: Percentage change in the alignment cost (- alignment score) returned by Clustal when using the heuristics to generate the tree as opposed to using NJ on real distances. The alignment was done on 1.2k unseen sequences from the RT988 dataset.
## 4. Limitations
As mentioned in the introduction, we believe that the NeuroSEED framework has the potential to be applied to numerous problems and, therefore, this project constitutes only an initial analysis of its geometrical properties and applications. Below we list some of the limitations of the current analysis and potential directions of research to cover them.
**Type of sequences** Both the datasets analysed consist of sequence reads of the same part of the genome. This is a very common set-up for sequence analysis (for example for microbiome analysis) and it is enabled by biotechnologies that can amplify and sequence certain parts of the genome selectively, but it is not ubiquitous. Shotgun metagenomics consists of sequencing random parts of the genome. This would, we believe, generate sequences lying on a low-dimensional manifold where the hierarchical relationship of evolution is combined with the relationship based on the specific position in the whole genome. Therefore, more complex geometries such as product spaces might be best suited.
**Type of labels** In this project, we work with edit distances between strings; these are very expensive when large-scale analysis is required, but it is feasible to produce several thousand exact pairwise distance values from which the model can learn. For different definitions of distance, however, this might not be the case. If it is only feasible to determine which sequences are closest, then encoders can be trained using a triplet loss, and most of the approaches presented would still apply. Future work could explore the robustness of this framework to inexact distance estimates used as labels, and whether NeuroSEED models, once trained, could provide more accurate predictions than their labels.
**Architectures** Throughout the project we used models that have been shown to work well for other types of sequences and tasks. However, the correct inductive biases that models should have to perform SEED are likely to be different to the ones used for other tasks and even dependent on the type of distance it tries to preserve. Moreover, the capacity of the hyperbolic space could be further exploited using models that directly operate in the hyperbolic space (Peng et al. 2021).
**Self-supervised embeddings** One potential application of NeuroSEED that was not explored in this project is the direct use of the embedding produced by NeuroSEED for downstream tasks. This would enable the use of a wide range of geometric data processing tools for the analysis of biological sequences.
## References
(Morgan & Huttenhower, 2012) Xochitl C Morgan and Curtis Huttenhower. Human microbiome analysis. PLoS Comput Biol, 2012.
(McDermott et al. 2021) Matthew McDermott, Brendan Yap, Peter Szolovits, and Marinka Zitnik. Rethinking relational encoding in language model: Pre-training for general sequences. arXiv preprint, 2021.
(Clemente et al. 2015) Jose C Clemente, Erica C Pehrsson, Martin J Blaser, Kuldip Sandhu, Zhan Gao, Bin Wang, Magda Magris, Glida Hidalgo, Monica Contreras, Oscar Noya-Alarcon, et al. The microbiome of uncontacted Amerindians. Science Advances, 2015.
(Zheng et al. 2019) Wei Zheng, Le Yang, Robert J Genco, Jean Wactawski-Wende, Michael Buck, and Yijun Sun. SENSE: Siamese neural network for sequence embedding and alignment-free comparison. Bioinformatics, 2019.
(Dasgupta 2016) Sanjoy Dasgupta. A cost function for similarity-based hierarchical clustering. In Proceedings of the forty-eighth annual ACM symposium on Theory of Computing, 2016.
(Chami et al. 2020) Ines Chami, Albert Gu, Vaggos Chatziafratis, and Christopher Re. From trees to continuous embeddings and back: Hyperbolic hierarchical clustering. Advances in Neural Information Processing Systems 33, 2020.
## Python Generator
Python generators are objects that help you iterate over data while using memory efficiently.
To get a feel for what a generator is, let's first consider the following problem.
**Problem: given an array of numbers of a certain length, output an array containing the square of each number.**
Using a list, we can solve it as follows.
```
num_count = 10
nums = [i for i in range(num_count)]
print(nums)
def square_list(nums):
result = []
for num in nums:
result.append(num * num)
return result
result = square_list(nums)
print(result)
```
We saw that the problem is easily solved with a list.
But what happens when there are 100 million numbers?
Then `result` would have to hold 100 million squared numbers before we could print them.
Python generators were created because this style of iteration is inefficient.
**In other words, a generator is a tool for using memory more efficiently when performing large-scale iteration.**
First, let's rewrite the square function using a generator.
```
def square_generator(nums):
for num in nums:
yield num * num
result = square_generator(nums)
print(result)
```
If you print the result, you can see that a generator object is returned instead of a list.
And looking at the square function, you can see that it uses the `yield` statement instead of `return`.
To understand this, we first need to understand the difference between `return` and `yield`.
First, here is a function that returns a random number using `return`.
It generates a random number and is written to return it inside a loop of 10 iterations.
But as soon as `return` is called, the loop stops and the number is returned, so the `hello` afterwards is never printed.
And the random number created at that point is deallocated from memory, so it can no longer be accessed.
```
import random
def randnum_return():
a = random.randint(0, 100)
for i in range(0, 10):
return a
print('hello')
return_result = randnum_return()
print(return_result)
return_result = randnum_return()
print(return_result)
return_result = randnum_return()
print(return_result)
```
In contrast, when `yield` is called, the function pauses at that point.
It remembers the variables declared inside the function at that moment.
When resumed via `next()`, execution continues from the line right after the `yield`.
**In other words, `yield` returns the desired value and pauses execution, leaving the function in a reusable state.**
```
def randnum_yield():
a = random.randint(0, 100)
for i in range(0, 10):
yield a
print('hello')
yield_result = randnum_yield()
print(yield_result)
print(next(yield_result))
print(next(yield_result))
print(next(yield_result))
```
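One detail worth making explicit: once a generator has yielded its last value, any further `next()` call raises `StopIteration`, and this is exactly the signal a `for` loop uses to know when to stop. A minimal illustration (the `count_to` function here is just an example, not part of the code above):

```python
def count_to(n):
    # Yields 1, 2, ..., n, then becomes exhausted.
    for i in range(1, n + 1):
        yield i

gen = count_to(2)
print(next(gen))  # 1
print(next(gen))  # 2
try:
    next(gen)  # the generator is exhausted now
except StopIteration:
    print('generator exhausted')
```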
Has the difference between `yield` and `return` become clear?
Now let's come back to generators.
You can also consume a generator's results in a `for` loop, without calling `next()` yourself.
```
def square_generator(nums):
for num in nums:
yield num * num
nums = [i for i in range(0, 10)]
result = square_generator(nums)
for num in result:
print(num)
```
By now you should have a sense of what a generator does and how to use it.
Python also provides generator expressions, which let you create a generator easily
without writing a function that contains `yield`.
```
result = (i*i for i in range(0, 10))
print(result)
for num in result:
print(num)
```
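To see the memory claim from earlier in concrete terms, we can compare the size of a list against an equivalent generator. The list grows with the number of elements, while the generator object stays tiny because it produces values on demand (exact byte counts vary by Python version):

```python
import sys

squares_list = [i * i for i in range(1_000_000)]  # all values stored at once
squares_gen = (i * i for i in range(1_000_000))   # values produced lazily

print(sys.getsizeof(squares_list))  # several megabytes
print(sys.getsizeof(squares_gen))   # a couple hundred bytes at most
```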
## Closing Thoughts
In practice, generators really shine when performance optimization matters,
so you may not feel the need for them in everyday development.
But situations where performance matters do come up, such as processing large datasets or solving algorithm problems.
When they do, don't forget to reach for generators.
Thank you!
# Dano's CORVO & TPOT notebook
In this notebook, I will try to use TPOT to assess which traditional ML algorithms would be useful for predicting cognitive performance from EEG data in Neurodoro.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sklearn as sk
from os import walk
from os import listdir
from os.path import isfile, join
from sklearn.model_selection import train_test_split
from tpot import TPOTRegressor
from math import sqrt
EPOCH_LENGTH = 440 # 2 seconds
# Data has been collected, let's import it
data = pd.read_csv("../muse-data/DanoDenoisedPSDJul18.csv", header=0, index_col=False)
data.describe
# Let's get our labels data set first because it's easier. We'll grab every 4th row from the Performance column
labels = data['Performance'].iloc[::4]
# Then we'll reindex the dataframe
labels = labels.reset_index().drop('index', axis=1)
# Convert to 1D array for TPOT
labels = np.array(labels).ravel()
# Separate data into 4 dataframes, 1 for each electrode
chan1 = data.loc[:,'Channel':'60 hz'].loc[data['Channel'] == 1,].reset_index(drop=True)
chan1.columns = np.arange(1000,1061)
chan2 = data.loc[:,'Channel':'60 hz'].loc[data['Channel'] == 2,].reset_index(drop=True)
chan2.columns = np.arange(2000,2061)
chan3 = data.loc[:,'Channel':'60 hz'].loc[data['Channel'] == 3,].reset_index(drop=True)
chan3.columns = np.arange(3000,3061)
chan4 = data.loc[:,'Channel':'60 hz'].loc[data['Channel'] == 4,].reset_index(drop=True)
chan4.columns = np.arange(4000,4061)
# Concat all channel-specific dataframes together so that row = 2s epoch
# columns = [electrode 1 FFT bins] + [electrode 2 FFT bins] + ...
training_data = pd.concat([chan1.iloc[:,1:], chan2.iloc[:,1:], chan3.iloc[:,1:], chan4.iloc[:,1:]], axis=1, join_axes=[chan1.index])
print(training_data.shape)
labels.shape
```
# Nice!
```
# Create a TPOTRegressor that will run for 10 generations
pipeline_optimizer = TPOTRegressor(generations=10, population_size=30, cv=5,
random_state=42, verbosity=3)
# Fit this baby! Takes a long time to run
pipeline_optimizer.fit(training_data, labels)
# See what kind of score we get
print(pipeline_optimizer.score(training_data, labels))
pipeline_optimizer.export('tpot_exported_pipeline4.py')
print(sqrt(pipeline_optimizer.score(training_data, labels)))
```
##### Split values and labels arrays into random train and test subsets (20% set aside for testing)
```
X_train, X_test, y_train, y_test = train_test_split(training_data, labels, test_size=0.2)
print(sqrt(98))
```
# Training and Serving with TensorFlow on Amazon SageMaker
*(This notebook was tested with the "Python 3 (Data Science)" kernel.)*
Amazon SageMaker is a fully-managed service that provides developers and data scientists with the ability to build, train, and deploy machine learning (ML) models quickly. Amazon SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models. The SageMaker Python SDK makes it easy to train and deploy models in Amazon SageMaker with several different machine learning and deep learning frameworks, including TensorFlow.
In this notebook, we use the SageMaker Python SDK to launch a training job and deploy the trained model. We use a Python script to train a classification model on the [MNIST dataset](http://yann.lecun.com/exdb/mnist), and show how to train with both TensorFlow 1.x and TensorFlow 2.x scripts.
## Set up the environment
Let's start by setting up the environment:
```
import sagemaker
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
region = sagemaker_session.boto_region_name
```
We also define the TensorFlow version here, and create a quick helper function that lets us toggle between TF 1.x and 2.x in this notebook.
```
tf_version = '2.1.0' # replace with '1.15.2' for TF 1.x
def use_tf2():
return tf_version.startswith('2')
```
## Training Data
The [MNIST dataset](http://yann.lecun.com/exdb/mnist) is a dataset consisting of handwritten digits. There is a training set of 60,000 examples, and a test set of 10,000 examples. The digits have been size-normalized and centered in a fixed-size image.
The dataset has already been uploaded to an Amazon S3 bucket, ``sagemaker-sample-data-<REGION>``, under the prefix ``tensorflow/mnist``. There are four ``.npy`` file under this prefix:
* ``train_data.npy``
* ``eval_data.npy``
* ``train_labels.npy``
* ``eval_labels.npy``
```
training_data_uri = 's3://sagemaker-sample-data-{}/tensorflow/mnist'.format(region)
```
## Construct a script for distributed training
The training code is very similar to a training script we might run outside of Amazon SageMaker. The SageMaker Python SDK handles transferring our script to a SageMaker training instance. On the training instance, SageMaker's native TensorFlow support sets up training-related environment variables and executes the training code.
We can use a Python script, a Python module, or a shell script for the training code. This notebook's training script is a Python script adapted from a TensorFlow example of training a convolutional neural network on the MNIST dataset.
We have modified the training script to handle the `model_dir` parameter passed in by SageMaker. This is an Amazon S3 path which can be used for data sharing during distributed training and checkpointing and/or model persistence. Our script also contains an argument-parsing function to handle processing training-related variables.
At the end of the training job, our script exports the trained model to the path stored in the environment variable `SM_MODEL_DIR`, which always points to `/opt/ml/model`. This is critical because SageMaker uploads all the model artifacts in this folder to S3 at end of training.
For more about writing a TensorFlow training script for SageMaker, see [the SageMaker documentation](https://sagemaker.readthedocs.io/en/stable/using_tf.html#prepare-a-script-mode-training-script).
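As a rough sketch of the argument-parsing pattern described above (the flag and variable names here are illustrative, not taken from the actual script):

```python
import argparse
import os

def parse_args():
    parser = argparse.ArgumentParser()
    # S3 path passed by SageMaker, usable for checkpointing / data sharing
    parser.add_argument('--model_dir', type=str)
    # Local directory whose contents SageMaker uploads to S3 after training
    parser.add_argument('--sm-model-dir', type=str,
                        default=os.environ.get('SM_MODEL_DIR', '/opt/ml/model'))
    # Local path where the "training" channel data was downloaded
    parser.add_argument('--train', type=str,
                        default=os.environ.get('SM_CHANNEL_TRAINING', ''))
    return parser.parse_known_args()

args, _ = parse_args()
```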
Here is the entire script:
```
training_script = 'mnist-2.py' if use_tf2() else 'mnist.py'
!pygmentize {training_script}
```
## Create a SageMaker training job
The SageMaker Python SDK's `sagemaker.tensorflow.TensorFlow` estimator class handles creating a SageMaker training job. Let's call out a couple important parameters here:
* `entry_point`: our training script
* `distributions`: configuration for the distributed training setup. It's required only if we want distributed training either across a cluster of instances or across multiple GPUs. Here, we use parameter servers as the distributed training schema. SageMaker training jobs run on homogeneous clusters. To make parameter server more performant in the SageMaker setup, we run a parameter server on every instance in the cluster, so there is no need to specify the number of parameter servers to launch. Script mode also supports distributed training with [Horovod](https://github.com/horovod/horovod). You can find the full documentation on how to configure `distributions` in the [SageMaker Python SDK API documentation](https://sagemaker.readthedocs.io/en/stable/sagemaker.tensorflow.html#sagemaker.tensorflow.estimator.TensorFlow).
```
from sagemaker.tensorflow import TensorFlow
estimator = TensorFlow(entry_point=training_script,
role=role,
instance_count=1,
instance_type='ml.p2.xlarge',
framework_version=tf_version,
py_version='py3')
```
To start a training job, we call `estimator.fit(training_data_uri)`.
An S3 location is used here as the input. `fit` creates a default channel named "training", and the data at the S3 location is downloaded to the "training" channel. In the training script, we can then access the training data from the location stored in `SM_CHANNEL_TRAINING`. `fit` accepts a couple other types of input as well. For details, see the [API documentation](https://sagemaker.readthedocs.io/en/stable/estimators.html#sagemaker.estimator.EstimatorBase.fit).
When training starts, `mnist.py` is executed, with the estimator's `hyperparameters` and `model_dir` passed as script arguments. Because we didn't define either in this example, no hyperparameters are passed, and `model_dir` defaults to `s3://<DEFAULT_BUCKET>/<TRAINING_JOB_NAME>`, so the script execution is as follows:
```bash
python mnist.py --model_dir s3://<DEFAULT_BUCKET>/<TRAINING_JOB_NAME>
```
When training is complete, the training job uploads the saved model to S3 so that we can use it with TensorFlow Serving.
```
estimator.fit(training_data_uri)
```
## Deploy the trained model to an endpoint
After we train our model, we can deploy it to a SageMaker Endpoint, which serves prediction requests in real-time. To do so, we simply call `deploy()` on our estimator, passing in the desired number of instances and instance type for the endpoint. This creates a SageMaker Model, which is then deployed to an endpoint.
The Docker image used for TensorFlow Serving runs an implementation of a web server that is compatible with SageMaker hosting protocol. For more about using TensorFlow Serving with SageMaker, see the [SageMaker documentation](https://sagemaker.readthedocs.io/en/stable/using_tf.html#deploy-tensorflow-serving-models).
```
predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge')
```
## Invoke the endpoint
We then use the returned predictor object to invoke our endpoint. For demonstration purposes, let's download the training data and use that as input for inference.
```
import numpy as np
!aws --region {region} s3 cp s3://sagemaker-sample-data-{region}/tensorflow/mnist/train_data.npy train_data.npy
!aws --region {region} s3 cp s3://sagemaker-sample-data-{region}/tensorflow/mnist/train_labels.npy train_labels.npy
train_data = np.load('train_data.npy')
train_labels = np.load('train_labels.npy')
```
The formats of the input and the output data correspond directly to the request and response formats of the `Predict` method in the [TensorFlow Serving REST API](https://www.tensorflow.org/serving/api_rest). SageMaker's TensorFlow Serving endpoints can also accept additional input formats that are not part of the TensorFlow REST API, including the simplified JSON format, line-delimited JSON objects ("jsons" or "jsonlines"), and CSV data.
In this example we use a `numpy` array as input, which is serialized into the simplified JSON format. In addition, TensorFlow Serving can also process multiple items at once, which we utilize in the following code. For complete documentation on how to make predictions against a SageMaker Endpoint using TensorFlow Serving, see the [SageMaker documentation](https://sagemaker.readthedocs.io/en/stable/using_tf.html#making-predictions-against-a-sagemaker-endpoint).
```
predictions = predictor.predict(train_data[:50])
for i in range(0, 50):
if use_tf2():
prediction = np.argmax(predictions['predictions'][i])
else:
prediction = predictions['predictions'][i]['classes']
label = train_labels[i]
print('prediction: {}, label: {}, matched: {}'.format(prediction, label, prediction == label))
```
## Delete the endpoint
Let's delete our endpoint to prevent incurring any extra costs.
```
predictor.delete_endpoint()
```
# Saving and Loading Models
<a href="https://colab.research.google.com/github/jwangjie/gpytorch/blob/master/examples/00_Basic_Usage/Saving_and_Loading_Models.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
In this bite-sized notebook, we'll go over how to save and load models. In general, the process is the same as for any PyTorch module.
```
# COMMENT this if not used in colab
!pip install gpytorch
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
```
## Saving a Simple Model
First, we define a GP Model that we'd like to save. The model used below is the same as the model from our
<a href="../01_Exact_GPs/Simple_GP_Regression.ipynb">Simple GP Regression</a> tutorial.
```
train_x = torch.linspace(0, 1, 100)
train_y = torch.sin(train_x * (2 * math.pi)) + torch.randn(train_x.size()) * 0.2
# We will use the simplest form of GP model, exact inference
class ExactGPModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
# initialize likelihood and model
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)
```
### Change Model State
To demonstrate model saving, we change the hyperparameters from the default values below. For more information on what is happening here, see our tutorial notebook on <a href="Hyperparameters.ipynb">Initializing Hyperparameters</a>.
```
model.covar_module.outputscale = 1.2
model.covar_module.base_kernel.lengthscale = 2.2
```
### Getting Model State
To get the full state of a GPyTorch model, simply call `state_dict` as you would on any PyTorch model. Note that the state dict contains **raw** parameter values. This is because these are the actual `torch.nn.Parameters` that are learned in GPyTorch. Again, see our notebook on hyperparameters for more information on this.
```
model.state_dict()
```
### Saving Model State
The state dictionary above represents all trainable parameters for the model. Therefore, we can save it to a file as follows:
```
torch.save(model.state_dict(), 'model_state.pth')
```
### Loading Model State
Next, we load this state in to a new model and demonstrate that the parameters were updated correctly.
```
state_dict = torch.load('model_state.pth')
model = ExactGPModel(train_x, train_y, likelihood) # Create a new GP model
model.load_state_dict(state_dict)
model.state_dict()
```
## A More Complex Example
Next we demonstrate this same principle on a more complex exact GP where we have a simple feed forward neural network feature extractor as part of the model.
```
class GPWithNNFeatureExtractor(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(GPWithNNFeatureExtractor, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
self.feature_extractor = torch.nn.Sequential(
torch.nn.Linear(1, 2),
torch.nn.BatchNorm1d(2),
torch.nn.ReLU(),
torch.nn.Linear(2, 2),
torch.nn.BatchNorm1d(2),
torch.nn.ReLU(),
)
def forward(self, x):
x = self.feature_extractor(x)
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
# initialize likelihood and model
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = GPWithNNFeatureExtractor(train_x, train_y, likelihood)
```
### Getting Model State
In the next cell, we once again print the model state via `model.state_dict()`. As you can see, the state is substantially more complex, as the model now includes our neural network parameters. Nevertheless, saving and loading remains straightforward.
```
model.state_dict()
torch.save(model.state_dict(), 'my_gp_with_nn_model.pth')
state_dict = torch.load('my_gp_with_nn_model.pth')
model = GPWithNNFeatureExtractor(train_x, train_y, likelihood)
model.load_state_dict(state_dict)
model.state_dict()
```
# Demonstrate the Sankey class by producing three basic diagrams
Code taken from the [Sankey API](http://matplotlib.org/api/sankey_api.html) in the Matplotlib documentation
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.sankey import Sankey
```
## Example 1 -- Mostly defaults
This demonstrates how to create a simple diagram by implicitly calling the
Sankey.add() method and by appending finish() to the call to the class.
```
Sankey(flows=[0.25, 0.15, 0.60, -0.20, -0.15, -0.05, -0.50, -0.10],
labels=['', '', '', 'First', 'Second', 'Third', 'Fourth', 'Fifth'],
orientations=[-1, 1, 0, 1, 1, 1, 0, -1]).finish()
plt.title("The default settings produce a diagram like this.");
# Notice:
# 1. Axes weren't provided when Sankey() was instantiated, so they were
# created automatically.
# 2. The scale argument wasn't necessary since the data was already
# normalized.
# 3. By default, the lengths of the paths are justified.
```
## Example 2
This demonstrates:
1. Setting one path longer than the others
2. Placing a label in the middle of the diagram
3. Using the scale argument to normalize the flows
4. Implicitly passing keyword arguments to PathPatch()
5. Changing the angle of the arrow heads
6. Changing the offset between the tips of the paths and their labels
7. Formatting the numbers in the path labels and the associated unit
8. Changing the appearance of the patch and the labels after the figure is created
```
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, xticks=[], yticks=[],
title="Flow Diagram of a Widget")
sankey = Sankey(ax=ax, scale=0.01, offset=0.2, head_angle=180,
format='%.0f', unit='%')
sankey.add(flows=[25, 0, 60, -10, -20, -5, -15, -10, -40],
labels=['', '', '', 'First', 'Second', 'Third', 'Fourth',
'Fifth', 'Hurray!'],
orientations=[-1, 1, 0, 1, 1, 1, -1, -1, 0],
pathlengths=[0.25, 0.25, 0.25, 0.25, 0.25, 0.8, 0.25, 0.25,
0.25],
patchlabel="Widget\nA",
alpha=0.2, lw=2.0) # Arguments to matplotlib.patches.PathPatch()
diagrams = sankey.finish()
diagrams[0].patch.set_facecolor('#37c959')
diagrams[0].texts[-1].set_color('r')
diagrams[0].text.set_fontweight('bold')
# Notice:
# 1. Since the sum of the flows is nonzero, the width of the trunk isn't
# uniform. If verbose.level is helpful (in matplotlibrc), a message is
# given in the terminal window.
# 2. The second flow doesn't appear because its value is zero. Again, if
# verbose.level is helpful, a message is given in the terminal window.
```
## Example 3
This demonstrates:
1. Connecting two systems
2. Turning off the labels of the quantities
3. Adding a legend
```
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, xticks=[], yticks=[], title="Two Systems")
sankey = Sankey(ax=ax, unit=None)
flows = [0.25, 0.15, 0.60, -0.10, -0.05, -0.25, -0.15, -0.10, -0.35]
sankey.add(flows=flows, label='one',
orientations=[-1, 1, 0, 1, 1, 1, -1, -1, 0])
sankey.add(flows=[-0.25, 0.15, 0.1], fc='#37c959', label='two',
orientations=[-1, -1, -1], prior=0, connect=(0, 0))
diagrams = sankey.finish()
diagrams[-1].patch.set_hatch('/')
ax.legend(loc='best');
# Notice that only one connection is specified, but the systems form a
# circuit since: (1) the lengths of the paths are justified and (2) the
# orientation and ordering of the flows is mirrored.
```
# Amazon SageMaker Workshop
## _**Introduction**_
This workshop has been adapted from an [AWS blog post](https://aws.amazon.com/blogs/ai/predicting-customer-churn-with-amazon-machine-learning/).
Losing customers is costly for any business. Identifying unhappy customers early on gives you a chance to offer them incentives to stay. In this workshop we'll use machine learning (ML) for automated identification of unhappy customers, also known as customer churn prediction.
---
In this workshop we will use Gradient Boosted Trees to Predict Mobile Customer Departure.
To put our model in production, we will use several SageMaker features:
* [Amazon SageMaker Studio](https://docs.aws.amazon.com/sagemaker/latest/dg/studio.html)
* [Amazon SageMaker Training Jobs](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-training.html)
* [Amazon SageMaker Experiments](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments.html)
* Manage multiple trials
* Experiment with hyperparameters and charting
* [Amazon SageMaker Debugger](https://docs.aws.amazon.com/sagemaker/latest/dg/train-debugger.html)
* Debug your model
* [Amazon SageMaker Clarify](https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-fairness-and-explainability.html)
* [Model hosting](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-deployment.html)
* Set up a persistent endpoint to get predictions from your model
* [SageMaker Model Monitor](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-model-monitor.html)
* Monitor the quality of your model
* Set alerts for when model quality deviates
* [Amazon SageMaker Pipelines](https://docs.aws.amazon.com/sagemaker/latest/dg/pipelines.html)
---
## The format of this workshop
Although we recommend that you follow and run the Labs in order, _this workshop was built so that you can skip labs or do only those that interest you the most_ (e.g. you can run just the last Lab, just labs 4 and 5, or labs 1 and 4, etc.). Running the labs in order helps us understand the natural flow of an ML project and may make more sense.
> This is only possible because we leverage the design of SageMaker where each component is independent from each other (e.g. training jobs, hosting, processing) and customers have the freedom to use those that fit better to their use-case.
This `0-Introduction` lab is the only Lab that is strictly required, since it sets up some basic things (creating S3 buckets, installing packages, etc.).
---
## The Data
Mobile operators have historical records that tell them which customers ended up churning and which continued using the service. We can use this historical information to train an ML model that can predict customer churn. After training the model, we can pass the profile information of an arbitrary customer (the same profile information that we used to train the model) to the model to have the model predict whether this customer will churn.
The dataset we use is publicly available and was mentioned in [Discovering Knowledge in Data](https://www.amazon.com/dp/0470908742/) by Daniel T. Larose. It is attributed by the author to the University of California Irvine Repository of Machine Learning Datasets. The `Data sets` folder that came with this notebook contains the churn dataset.
The dataset can be [downloaded here.](https://bcs.wiley.com/he-bcs/Books?action=resource&bcsId=11704&itemId=0470908742&resourceId=46577)
---
## Let's configure our environment
```
import sys
!{sys.executable} -m pip install sagemaker==2.42.0 -U
!{sys.executable} -m pip install sagemaker-experiments
!{sys.executable} -m pip install xgboost==1.3.3
import pandas as pd
import boto3
import sagemaker
sess = boto3.Session()
region = sess.region_name
sm = sess.client('sagemaker')
role = sagemaker.get_execution_role()
account_id = sess.client('sts', region_name=region).get_caller_identity()["Account"]
bucket = 'sagemaker-studio-{}-{}'.format(sess.region_name, account_id)
prefix = 'xgboost-churn'
try:
if sess.region_name == "us-east-1":
sess.client('s3').create_bucket(Bucket=bucket)
else:
sess.client('s3').create_bucket(Bucket=bucket,
CreateBucketConfiguration={'LocationConstraint': sess.region_name})
except Exception as e:
print("Looks like you already have a bucket of this name. That's good!")
framework_version = '1.2-2'
docker_image_name = sagemaker.image_uris.retrieve(framework='xgboost', region=region, version=framework_version)
# Workaround while versions are not updated in SM SDK
framework_version = '1.3-1'
docker_image_name = docker_image_name[:-5] + framework_version
print("Setting some useful environment variables (bucket, prefix, region, docker_image_name)...")
%store bucket
%store prefix
%store region
%store docker_image_name
%store framework_version
```
---
## Let's download the data and upload to S3
```
!wget https://higheredbcs.wiley.com/legacy/college/larose/0470908742/ds/data_sets.zip
!unzip -o data_sets.zip
!mv "Data sets"/churn.txt .
!rm -rf "Data sets" data_sets.zip
local_raw_path = "churn.txt"
raw_dir = f"{prefix}/data/raw"
s3uri_raw = sagemaker.s3.S3Uploader.upload(local_raw_path, f's3://{bucket}/{raw_dir}')
s3uri_raw
```
Store the raw data S3 URI for later:
```
%store s3uri_raw
print("\n\nWe are ready for starting the SageMaker Workshop!")
```
---
# [You can now go to the first lab 1-DataPrep](../1-DataPrep/data_preparation.ipynb)
# BOSS: Bag-of-SFA Symbols
* Website: https://www2.informatik.hu-berlin.de/~schaefpa/boss/
* Paper: https://www2.informatik.hu-berlin.de/~schaefpa/boss.pdf
**Note: an Internet connection is required to download the datasets used in this benchmark.**
```
import numpy as np
from pyts.transformation import BOSS
from pyts.classification import KNeighborsClassifier
from pyts.datasets import fetch_ucr_dataset
from sklearn.pipeline import Pipeline
from sklearn.ensemble import VotingClassifier
import pyts
print("pyts: {0}".format(pyts.__version__))
knn = KNeighborsClassifier(n_neighbors=1, metric='boss')
dataset_params = {
'Adiac': {'word_size': np.tile(np.arange(10, 16, 2), 3),
'window_size': np.repeat(np.arange(60, 110, 20), 3),
'norm_mean': np.full(9, True),
'drop_sum': np.full(9, True)},
'ECG200': {'word_size': 8,
'window_size': 40,
'norm_mean': False,
'drop_sum': False},
'GunPoint': {'word_size': 8,
'window_size': 40,
'norm_mean': True,
'drop_sum': True},
'MiddlePhalanxTW': {'word_size': 10,
'window_size': 30,
'norm_mean': False,
'drop_sum': False},
'Plane': {'word_size': 6,
'window_size': 10,
'norm_mean': True,
'drop_sum': True},
'SyntheticControl': {'word_size': np.full(20, 5),
'window_size': np.arange(18, 37),
'norm_mean': np.full(20, True),
'drop_sum': np.full(20, True)}
}
for dataset, params in dataset_params.items():
    print(dataset)
    print('-' * len(dataset))
    X_train, X_test, y_train, y_test = fetch_ucr_dataset(dataset, return_X_y=True)
    if isinstance(params['window_size'], np.ndarray):
        dicts = [{key: value[i] for key, value in params.items()}
                 for i in range(len(params['window_size']))]
        bosses = [BOSS(**param, sparse=False) for param in dicts]
        pipelines = [Pipeline([('boss', boss), ('knn', knn)])
                     for boss in bosses]
        clf = VotingClassifier([('pipeline_' + str(i), pipeline)
                                for i, pipeline in enumerate(pipelines)])
    else:
        boss = BOSS(**params, sparse=False)
        clf = Pipeline([('boss', boss), ('knn', knn)])
    accuracy = clf.fit(X_train, y_train).score(X_test, y_test)
    print('Accuracy on the test set: {0:.3f}'.format(accuracy))
    print()
```
---
## RetinaNet
This covers training and prediction with the Keras-RetinaNet model. The [keras-retinanet](https://github.com/fizyr/keras-retinanet) package is required.
- Download and install TensorFlow. Version 2.3.0 or later is required.
```
python -m pip install tensorflow
```
- Download and install the latest package from the Git repository.
```
git clone https://github.com/fizyr/keras-retinanet.git
cd keras-retinanet
python -m pip install .
```
### Train
The keras-retinanet package provides a command-line script for training. Use the `retinanet-train` command.
- Basic usage: `retinanet-train (options) (dataset type) (dataset path) (dataset options)`
- Five dataset types are supported: `coco`, `pascal`, `csv`, `oid`, `kitti`.
    - The coco dataset additionally requires the [pycocotools](https://pypi.org/project/pycocotools/) dependency.
    - For training on a custom dataset, the csv dataset type is recommended.
- For csv datasets: `retinanet-train csv (annotation file path) (class file path)`
    - The annotation CSV file describes each bounding box in the format `image/path.jpg,x1,y1,x2,y2,classname`.
    - If everything except the image path is omitted (`image/path.jpg,,,,,`), the row is treated as a negative sample and used for training.
    - The class CSV file maps each class name to its index in the format `classname,index`.
- Additional options for CSV datasets:
    - `--val-annotations (path)` specifies the annotation file path for the validation dataset.
- Common options:
    - `--no-evaluation` skips evaluation. Without this option, evaluation is performed whenever a validation dataset exists.
    - `--weights (path)` loads a saved model snapshot and initializes the weights from it.
    - `--no-weights` initializes the weights randomly instead. If no weight option is given, ImageNet pre-trained weights are used.
    - `--snapshot-path (path)` specifies the folder where model snapshots are saved. Defaults to `./snapshot/`.
    - `--tensorboard-dir (path)` specifies the folder for TensorBoard logs. If omitted, no TensorBoard logs are written.
    - `--backbone (name)` specifies the backbone CNN used by RetinaNet. Defaults to `resnet50`, the backbone used in the paper.
    - `--epochs (number)` sets the total number of training epochs. Defaults to `50`.
    - `--steps (number)` sets the number of batches per epoch. Defaults to `10000`. Setting it to the training dataset size divided by the batch size is recommended.
    - `--batch-size (number)` sets the batch size, i.e. the number of images trained at once. Defaults to `1`.
    - `--gpu (number)` selects which GPU to use for training.
- Example command:
    - `retinanet-train --steps 500 --gpu 0 csv dataset/train.csv dataset/cat.csv --val-annotations dataset/val.csv`
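The two CSV formats described above can be illustrated with a short Python sketch that writes a minimal annotation file and class file. The file names, image paths and class names here are hypothetical placeholders, not part of any real dataset:

```python
import csv

# Hypothetical annotation file: one bounding box per row in the format
# image/path.jpg,x1,y1,x2,y2,classname
with open("train.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["images/cat_01.jpg", 10, 20, 110, 200, "cat"])
    writer.writerow(["images/dog_01.jpg", 5, 5, 90, 120, "dog"])
    # Everything except the image path omitted -> negative sample
    writer.writerow(["images/empty.jpg", "", "", "", "", ""])

# Hypothetical class file: classname,index with zero-based indices
with open("cat.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["cat", 0])
    writer.writerow(["dog", 1])
```

These two files would then be passed to `retinanet-train csv train.csv cat.csv`.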
### Predict
```
import os
import time

import cv2
import numpy as np

from keras_retinanet import models
from keras_retinanet.utils.image import read_image_bgr, preprocess_image, resize_image
from keras_retinanet.utils.colors import label_color
from keras_retinanet.utils.visualization import draw_box, draw_caption
```
#### Dataset configuration
Below are the variables needed to configure the dataset and object detection. Adjust them to match the dataset and model you are using.
- `classes` : a dict describing every class in the data. Keys are the class indices and values are the class names.
- `model_path` : the path and name of the input weight file.
- `predict_source` : the directory containing the images to run object detection on.
- `predict_result` : the directory where detection results are written. It is not created automatically, so create it beforehand.
- `bbox_threshold` : the confidence threshold above which a detected bounding box is considered positive.
```
classes = {0: 'aeroplane', 1: 'bicycle', 2: 'bird', 3: 'boat', 4: 'bottle',
5: 'bus', 6: 'car', 7: 'cat', 8: 'chair', 9: 'cow',
10: 'diningtable', 11: 'dog', 12: 'horse', 13: 'motorbike', 14: 'person',
15: 'pottedplant', 16: 'sheep', 17: 'sofa', 18: 'train', 19: 'tvmonitor'}
model_path = './retinanet_predict.h5'
predict_source = './samples'
predict_result = './results'
bbox_threshold = 0.5
```
#### Running detection
Load the model, run object detection on the source images, and write out the results.
```
model = models.load_model(model_path)
model = models.convert_model(model)

for dir_path, _, filenames in os.walk(predict_source):
    for filename in filenames:
        if not filename.endswith(('.jpg', '.png')):
            continue

        # Load the image
        start = time.time()
        file_path = os.path.join(dir_path, filename)
        print(file_path)
        image = read_image_bgr(file_path)
        draw = image.copy()

        # Normalize and resize
        image = preprocess_image(image)
        image, scale = resize_image(image)

        # Detect
        boxes, scores, labels = model.predict_on_batch(np.expand_dims(image, axis=0))
        boxes /= scale
        for box, score, label in zip(boxes[0], scores[0], labels[0]):
            # Detections are sorted by score, so we can stop at the first
            # one below the threshold.
            if score < bbox_threshold:
                break
            b = box.astype(int)
            draw_box(draw, b, label_color(label))
            draw_caption(draw, b, f'{classes[label]} {score:.3f}')
            print(f'Label: {classes[label]}, Score: {score:.3f}, LTRB of the bounding box: {b[0]}, {b[1]}, {b[2]}, {b[3]}')
        # Write out the annotated copy, not the preprocessed input image
        cv2.imwrite(os.path.join(predict_result, filename), draw)
        print(f'Processing time: {time.time() - start}')
```
#### Model conversion
keras-retinanet can convert a training weight snapshot into a smaller, inference-only model. Use the `retinanet-convert-model` command.
- Basic usage: `retinanet-convert-model (training model path) (output inference model path)`
- When using a converted inference-only model, comment out or remove the `model = models.convert_model(model)` line in the detection script above.
---
```
# default_exp timeseries.data
```
# timeseries.data
> API details.
```
#export
from fastai.torch_basics import *
from fastai.data.all import *
from fastai.tabular.data import *
from fastai.tabular.core import *
from fastrenewables.tabular.core import *
from fastrenewables.timeseries.core import *
import glob
#hide
from nbdev.showdoc import *
#export
class RenewableTimeSeriesDataLoaders(DataLoaders):
    """Creates time series data and dataloaders."""
    @classmethod
    @delegates(Tabular.dataloaders, but=["dl_type", "dl_kwargs"])
    def from_df(cls, df, path='.', procs=None, pre_procs=None,
                cat_names=None, cont_names=None,
                y_names=None, y_block=RegressionBlock(),
                splits=None, **kwargs):
        "Create from `df` in `path` using `procs`"
        if cat_names is None:
            cat_names = []
        if cont_names is None:
            cont_names = list(set(df) - set(L(cat_names)) - set(L(y_names)))
        if pre_procs is None:
            pre_procs = [CreateTimeStampIndex("TimeUTC"), AddSeasonalFeatures]
        if procs is None:
            procs = [NormalizePerTask, Categorify]
        to = TabularRenewables(
            df,
            cont_names=cont_names,
            cat_names=cat_names,
            y_names=y_names,
            pre_process=pre_procs,
            procs=procs,
            y_block=y_block,
            splits=None,
        )
        splits = RandomSplitter(valid_pct=0.2) if splits is None else splits
        tt = Timeseries(to, splits=splits, **kwargs)
        return TimeSeriesDataLoaders(tt, **kwargs)

    @classmethod
    def from_files(cls, files, **kwargs):
        dfs = read_files(files)
        dfs = pd.concat(dfs, axis=0)
        # if "cat_names" in kwargs.keys():
        #     kwargs["cat_names"] = kwargs["cat_names"] if "TaskID" in kwargs["cat_names"] else kwargs["cat_names"] + ["TaskID"]
        # else:
        #     kwargs["cat_names"] = ["TaskID"]
        return cls.from_df(dfs, **kwargs)
files = glob.glob("../data/*.h5"); len(files)
cont_names = ['T_HAG_2_M', 'RELHUM_HAG_2_M', 'PS_SFC_0_M', 'ASWDIFDS_SFC_0_M',
'ASWDIRS_SFC_0_M', 'WindSpeed58m',
'SinWindDirection58m', 'CosWindDirection58m', 'WindSpeed60m',
'SinWindDirection60m', 'CosWindDirection60m', 'WindSpeed58mMinus_t_1',
'SinWindDirection58mMinus_t_1', 'CosWindDirection58mMinus_t_1',
'WindSpeed60mMinus_t_1', 'SinWindDirection60mMinus_t_1',
'CosWindDirection60mMinus_t_1', 'WindSpeed58mPlus_t_1',
'SinWindDirection58mPlus_t_1', 'CosWindDirection58mPlus_t_1',
'WindSpeed60mPlus_t_1', 'SinWindDirection60mPlus_t_1',
'CosWindDirection60mPlus_t_1']
cat_names = ['TaskID', 'Month', 'Day', 'Hour']
pd.options.mode.chained_assignment=None
dls = RenewableTimeSeriesDataLoaders.from_files(glob.glob("../data/*.h5"), y_names="PowerGeneration",
pre_procs=[FilterYear(year=2020),
AddSeasonalFeatures(as_cont=False),
FilterInconsistentSamplesPerDay],
cat_names=cat_names, cont_names=cont_names, bs=13)
cats, conts, ys = dls.one_batch()
cats.shape, conts.shape, ys.shape
```
---
# Real-world data analysis example: PPC Campaign Performance
In the following example, we will load and analyze a generated set of data. The dataset is in almost the same format as could be obtained from AdWords using its reporting API, but the data itself is completely generated and any similarity with an existing AdWords account is purely coincidental.
Let's dive right in!
We have to start by importing the `pandas` library. All the examples in [the official pandas documentation](http://pandas.pydata.org/pandas-docs/stable/) import the library under the `pd` alias. Furthermore, [the official NumPy documentation](https://numpy.org/doc/stable/) also uses an alias: `np`. We'll follow these conventions and import both libraries using these aliases:
```
import pandas as pd
import numpy as np
```
When working in Jupyter notebooks (especially when presenting), it might be a good idea to set the maximum number of rows displayed when printing `DataFrame`s and `Series`es. To do so, set `display.max_rows` and `display.max_seq_items` options:
```
pd.set_option('display.max_rows', 20)
pd.set_option('display.max_seq_items', 20)
```
## Creating your first table
We can now proceed by creating our first table. We can initialize the `DataFrame` using:
* A list of dictionaries, each dictionary will represent one row and dictionary keys will be mapped to columns. Please note that on Python versions before 3.7, the order of columns might not be preserved unless you use `OrderedDict`s (plain `dict`s preserve insertion order from Python 3.7 on). Or, you can set the optional `columns` argument and they will be ordered accordingly.
* A list of tuples/lists, each tuple/list will represent one row. You can pass the optional `columns` argument to name the columns.
* A generator yielding any of the above.
* A dictionary of columns, keys will be mapped to columns and each value should contain a list of values. As in the first method, you might need to use `OrderedDict` in order to preserve column order.
* ... and a couple of other methods which are well-described in [the documentation of DataFrame constructor](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html).
Let's try some of these methods:
```
restaurant = pd.DataFrame([
{'name': 'Arthur Dent', 'homeworld': 'Earth', 'bill': 8.45},
{'name': 'Ford Prefect', 'homeworld': 'Betelgeuse Five', 'bill': 85.9},
{'name': 'Tricia McMillan', 'homeworld': 'Earth', 'bill': 10.2},
])
restaurant
from collections import OrderedDict
restaurant = pd.DataFrame([
OrderedDict([('name', 'Arthur Dent'), ('homeworld', 'Earth'), ('bill', 8.45)]),
OrderedDict([('name', 'Ford Prefect'), ('homeworld', 'Betelgeuse Five'), ('bill', 85.9)]),
OrderedDict([('name', 'Tricia McMillan'), ('homeworld', 'Earth'), ('bill', 10.2)]),
])
restaurant
restaurant = pd.DataFrame([
{'name': 'Arthur Dent', 'homeworld': 'Earth', 'bill': 8.45},
{'name': 'Ford Prefect', 'homeworld': 'Betelgeuse Five', 'bill': 85.9},
{'name': 'Tricia McMillan', 'homeworld': 'Earth', 'bill': 10.2},
], columns=('name', 'homeworld', 'bill'))
restaurant
restaurant = pd.DataFrame([
('Arthur Dent', 'Earth', 8.45),
('Ford Prefect', 'Betelgeuse Five', 85.9),
('Tricia McMillan', 'Earth', 10.2),
], columns=('name', 'homeworld', 'bill'))
restaurant
restaurant = pd.DataFrame(OrderedDict([
('name', ['Arthur Dent', 'Ford Prefect', 'Tricia McMillan']),
('homeworld', ['Earth', 'Betelgeuse Five', 'Earth']),
('bill', [8.45, 85.9, 10.2]),
]))
restaurant
```
## Loading from an XLSX file
Usually, you'll need to load some data that are already stored in some other format. pandas contains support for loading from various formats: CSV, XLS, XLSX, JSON, HDF5 and a [few other formats](http://pandas.pydata.org/pandas-docs/stable/io.html). To read an XLSX file, you'll need the `xlrd` library installed. pandas can read a single sheet specified by the `sheet_name` parameter, or it can read everything and will return a dictionary of data frames (unless there is only one sheet - in that case, it will return just the dataframe).
```
ad_group_performance = pd.read_excel(
'../data/data_ad_group_performance.xlsx'
)
ad_group_performance
```
The table we have just loaded contains daily performance of Ad Groups for 18 weeks. For every Ad Group, there are 18 * 7 = 126 rows with performance metrics, each for a single day.
The table contains the following columns:
* `CampaignId`: Internal ID of Campaign in AdWords.
* `CampaignName`: Name of the Campaign.
* `AdGroupId`: Internal ID of Ad Group in AdWords.
* `AdGroupName`: Name of the Ad Group.
* `Date`: Parsed date. pandas recognizes dates stored in XLSX files.
* `Impressions`: How many times any ad from the Ad Group was served and displayed on that day.
* `Clicks`: How many people clicked on the Ad and therefore visited our website.
* `Cost`: How much these clicks cost. Remember, PPC is Pay-Per-Click.
* `Conversions`: Number of conversions, e.g. how many people, who clicked on an ad and visited our website, actually bought anything.
* `ConversionsValue`: Total revenue from all of these purchases.
## Data Selection
We can access individual columns using their name using the indexing operator (like accessing an item of `dict`):
```
ad_group_performance['CampaignName']
ad_group_performance['Impressions']
```
Each column is actually a named series. Note that each column has its own `dtype`.
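On a small hand-made table (toy values, not from the report) this is easy to see:

```python
import pandas as pd

toy = pd.DataFrame({
    'CampaignName': ['Sport', 'Sport'],
    'Impressions': [120, 85],
    'Cost': [1.5, 0.8],
})

# Each column is a pandas.Series with its own dtype.
print(type(toy['Impressions']))
print(toy.dtypes)  # object, int64 and float64 columns side by side
```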
We can also "select" columns by passing a list of their names to the indexing operator. We will get another table with a subset of columns (you can call it projection, if you are into relational algebra):
```
ad_group_performance[
['CampaignName', 'AdGroupName', 'Impressions']
]
```
We can also access the rows using their value in the index. Since we didn't tell pandas anything about the index of the table, it generated a default 0-based numeric index. This means that we can access the rows like elements in an array using the special `loc` property:
```
ad_group_performance.loc[5]
```
If we pass an array, we can also get multiple rows:
```
ad_group_performance.loc[
[5, 6, 7, 8, 15, 25]
]
```
We can also use slicing. Note that, unlike ordinary Python slicing, `loc` slices by label and *includes* the end label, so the following returns the six rows labeled 0 through 5:
```
ad_group_performance.loc[:5]
```
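The inclusive behaviour of `loc` is easy to miss; a toy frame makes the difference from position-based `iloc` (which slices like a Python list, end-exclusive) visible:

```python
import pandas as pd

df = pd.DataFrame({'x': range(10, 20)})  # default index labels 0..9

# .loc slices by index label and includes both ends: labels 0..5 -> 6 rows.
print(len(df.loc[:5]))   # 6

# .iloc slices by position like a Python list: positions 0..4 -> 5 rows.
print(len(df.iloc[:5]))  # 5
```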
Or we can also pass a column name to get only a specific cell:
```
ad_group_performance.loc[4, 'Date']
```
There are many other ways to access the columns and rows, see the documentation chapter on [Indexing and Selecting Data](https://pandas.pydata.org/pandas-docs/stable/indexing.html) to get more information.
## Filtering
Selecting rows by their index is not very useful. We might want to get specific rows matching our own condition. Luckily, pandas has it covered: we can pass a series of `bool`s to the indexing operator. The series must have the same size as there are rows in the `DataFrame`. We can get such a series by simply taking one of the columns in the table and comparing it to a value (or another series). `pandas.Series` supports all kinds of operators: standard math (`+` `-` `*` `/` `**` `%`), relational operators (`>` `>=` `<` `<=` `==` `!=`) and logical operators (`&` `|` `~`). Each of these operators is applied to every item of the series and a new series with the results is returned.
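On a toy Series, the element-wise behaviour looks like this (note the parentheses: `&` binds more tightly than the comparison operators, so each comparison must be wrapped):

```python
import pandas as pd

s = pd.Series([5, 50, 500])

# Element-wise comparisons produce a boolean Series...
mask = (s > 10) & (s < 100)
print(mask.tolist())       # [False, True, False]

# ...which can be used to filter the original Series.
print(s[mask].tolist())    # [50]
```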
So, let's assume we would like to get rows where the number of impressions is less than 10:
```
ad_group_performance[
ad_group_performance['Impressions'] < 10
]
```
We can also combine multiple series using the `&` and `|` operators. To find rows where number of impressions is greater than 100 and number of conversions is 0:
```
ad_group_performance[
(ad_group_performance['Impressions'] > 100) &
(ad_group_performance['Conversions'] == 0)
]
```
`pandas.Series` also supports the `~` unary operator for negation. To get rows with number of impressions greater than 100 and conversions not equal to 0 (pay attention to the tiny snake in front of the second parentheses):
```
ad_group_performance[
(ad_group_performance['Impressions'] > 100) &
~(ad_group_performance['Conversions'] == 0)
]
```
## Computations
We'll continue our tour by computing a few metrics that are common in the PPC world. They are described in the slides.
**→ Switch to the slides and continue on slide 31 if you are interested.**
We can add new columns to the table just by assigning them. We can assign either a new series, or a constant value - it will be repeated in every row:
```
ad_group_performance['TheAnswer'] = 42
ad_group_performance
```
We can compute CTR by taking the `Clicks` column and dividing it by `Impressions` column:
```
ad_group_performance['CTR'] = (
ad_group_performance['Clicks'] /
ad_group_performance['Impressions']
)
ad_group_performance
```
If we are not happy with any of the columns, or if we don't need it anymore (we know The Answer), we can delete it using the `drop` method of `pandas.DataFrame`:
```
ad_group_performance.drop(columns=['TheAnswer'])
ad_group_performance
```
The column is still there! That's because many functions and methods in `pandas` return a new instance of `DataFrame` and keep the original instance intact. Don't worry, it does its best not to copy values when it's not necessary. To modify the instance, re-assign the variable like this:
```python
ad_group_performance = ad_group_performance.drop(columns=['TheAnswer'])
```
Or, more conveniently, most of the methods support the `inplace` argument, which will tell pandas to modify the original instance:
```
ad_group_performance.drop(
columns=['TheAnswer'], inplace=True
)
ad_group_performance
```
Let's compute the CPC and Average Conversion Value:
```
ad_group_performance['CPC'] = (
    ad_group_performance['Cost'] /
    ad_group_performance['Clicks']
)
ad_group_performance['AvgConversionValue'] = (
ad_group_performance['ConversionsValue'] /
ad_group_performance['Conversions']
)
ad_group_performance
```
As you can see, there are a few `NaN` values in the `AvgConversionValue` column. That's because the `Conversions` column, which is used as the divisor, is zero in those rows. pandas does not raise `ZeroDivisionError` in this case and replaces the value with `NaN` instead. We can check for `NaN`s, as well as for `None`, using the `pandas.isnull` (alias of `pandas.isna`) function. To get all rows where `AvgConversionValue` is `NaN`:
```
ad_group_performance[
pd.isnull(ad_group_performance['AvgConversionValue'])
]
```
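The NaN-producing division, and a common follow-up — substituting a default for the NaNs with `Series.fillna` — can be sketched on toy data:

```python
import pandas as pd

value = pd.Series([10.0, 0.0, 6.0])
conversions = pd.Series([2.0, 0.0, 3.0])

# 0/0 yields NaN instead of raising ZeroDivisionError.
avg = value / conversions
print(avg.isna().tolist())        # [False, True, False]

# If NaN is not desired, fillna substitutes a default value.
print(avg.fillna(0).tolist())     # [5.0, 0.0, 2.0]
```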
We might also be interested in descriptive statistics of individual columns, such as:
* What is the total number of clicks we received?
* What is the median value of number of conversions?
* What is the minimum number of impressions we got?
pandas can answer all of these questions (and many more) easily - see the documentation on [computations and descriptive statistics](https://pandas.pydata.org/pandas-docs/stable/api.html#computations-descriptive-stats) for more details:
```
ad_group_performance['Clicks'].sum()
ad_group_performance['Conversions'].median()
ad_group_performance['Impressions'].min()
```
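These reductions are ordinary `Series` methods, so they can be checked on hand-made data; `describe()` bundles several of them into one call:

```python
import pandas as pd

clicks = pd.Series([3, 1, 4, 1, 5])

print(clicks.sum())      # 14
print(clicks.median())   # 3.0
print(clicks.min())      # 1

# describe() reports count, mean, std, min, quartiles and max at once.
print(clicks.describe())
```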
You can see that the minimum number of impressions is 0 - we might have stopped some ad groups, but it can also indicate a larger problem, for instance we might have run out of credit in our wallet. Let's investigate!
We start by searching for rows with zero impressions:
```
ad_groups_zero_impr = ad_group_performance[
ad_group_performance['Impressions'] == 0
]
ad_groups_zero_impr
```
As you can see, the dates 2018-03-17 and 2018-03-18 repeat quite often, but there are 1398 rows and it might be difficult to search them manually. Let's group the values by campaign and date and see how many ad groups without impressions there are in each campaign and for each day.
## Grouping
Grouping can be done using [the `DataFrame.groupby` method](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html):
```
ad_groups_zero_impr.groupby(
['CampaignName', 'Date']
)
```
Well, that is not very useful. pandas will defer the actual grouping operation until you perform any action with it. You can list the groups, for instance:
```
ad_groups_zero_impr.groupby(
['CampaignName', 'Date']
).groups
```
You can already see that there are 6 groups, each with 2 dates: 2018-03-17 and 2018-03-18. This confirms our suspicion that this is an error caused by insufficient credit in the wallet, but it is still quite hard to read. Let's get the number of values for each group:
```
counts_by_campaign_date = ad_groups_zero_impr.groupby(
['CampaignName', 'Date']
).count()
counts_by_campaign_date
```
We didn't specify any column, so pandas simply computed the counts for each of the columns. The aggregation counts only non-`NaN` and non-`None` values, therefore there are zeros in the `CTR`, `CPC` and `AvgConversionValue` columns.
You can also see that the first two columns, `CampaignName` and `Date`, are printed in bold and their names are not on the same line. This means that pandas created a hierarchical index from the values. We can access any group on the first level by passing that group to the `loc` property:
```
counts_by_campaign_date.loc['Sport']
```
We can also access a specific row by passing a tuple with values from `CampaignName` and `Date`, in that order:
```
counts_by_campaign_date.loc[('Sport', '2018-03-18')]
```
Generally, you can access any sub-group on any level, just pass a tuple containing a path to that group.
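A small self-contained sketch (toy values, not the real report data) of how these tuple paths behave on a hierarchical index:

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [('Sport', '2018-03-17'), ('Sport', '2018-03-18'), ('Books', '2018-03-17')],
    names=['CampaignName', 'Date'])
counts = pd.DataFrame({'AdGroupId': [4, 4, 2]}, index=idx)

# First level only -> a smaller frame indexed by the remaining level.
print(counts.loc['Sport'])

# Full path as a tuple -> a single cell.
print(counts.loc[('Sport', '2018-03-18'), 'AdGroupId'])  # 4
```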
You can also let pandas produce a table without the hierarchical index (like a GROUP BY clause in SQL) if you set the `as_index` argument to `False`. pandas will generate a numerical zero-based index:
```
ad_groups_zero_impr.groupby(
['CampaignName', 'Date'],
as_index=False
).count()
```
To get the metrics only for one of the columns, just pick any of the columns without `NaN`, for instance `CampaignId`, and compute statistics for that column (pandas will return a series with values only for that column):
```
ad_groups_zero_impr.groupby(
['CampaignName', 'Date']
)['CampaignId'].count()
```
We can take advantage of grouping and answer another question: which day of week performs the best?
We need to extract the day of week from the date and then group by that column. `pandas.Series` has a bunch of [methods for working with dates](https://pandas.pydata.org/pandas-docs/stable/api.html#datetimelike-properties), so it is quite straightforward:
```
ad_group_performance['DayOfWeek'] = (
ad_group_performance['Date'].dt.dayofweek
)
ad_group_performance
```
0 is Monday and 6 is Sunday, easy. We can now group by the column and compute the statistics. However, we will need to aggregate multiple columns at once. This is a good job for the `agg` method. It takes a dictionary, where keys are the columns which will be used for the aggregations, and values are either `str`s with the aggregation method to evaluate (such as `sum`, `count`, `min`, `mean`), or custom functions.
When using a custom function, it will receive a single argument: a `Series` with chunk of data to aggregate. We will try it later on, let's use the already defined methods:
```
daily_performance = ad_group_performance.groupby(
['DayOfWeek']
).agg({
'Impressions': 'sum',
'Clicks': 'sum',
'Cost': 'sum',
'Conversions': 'sum',
'ConversionsValue': 'sum',
})
daily_performance
```
So many numbers! Let's sort it by `ConversionsValue`, which is our revenue from the advertising (note the `inplace=True` argument):
```
daily_performance.sort_values(
by='ConversionsValue',
ascending=False,
inplace=True
)
daily_performance
```
You can see that Wednesday is by far our most profitable day of week. This depends on the type of business; people generally shop less during the weekends.
But that is for the whole account. What if we wanted to examine individual campaigns over the week? pandas supports pivoting to do exactly that.
## Pivoting
Pivoting is an operation that transposes rows to columns based on the values in one or more columns - the new columns will be named after the values. We can perform this operation either on regular columns or on levels of a hierarchical index, but each case requires a different method.
First of all, we will show how to pivot a level of a hierarchical index into columns. We use the `DataFrame.unstack` method and have to pass the level of the hierarchical index on which to operate - either a number, a name (such as `DayOfWeek`), or `-1` to operate on the last level (the default):
```
campaign_weekday_performance = ad_group_performance.groupby(
['CampaignName', 'DayOfWeek']
).agg({
'Impressions': 'sum',
'Clicks': 'sum',
'Cost': 'sum',
'Conversions': 'sum',
'ConversionsValue': 'sum',
})
campaign_weekday_performance
campaign_weekday_performance = (
campaign_weekday_performance.unstack(
level='DayOfWeek'
)
)
campaign_weekday_performance
```
If the value we would like to use for pivoting is not in an index, but in any of the columns of the table (for instance if we passed `as_index=False` to the `groupby` method), we have to use the `DataFrame.pivot` method. The method requires at least two parameters: `index`, which tells pandas what column shall be used to identify rows, and `columns`, which specifies the column name(s) whose values will be used to create new columns.
```
campaign_weekday_performance = ad_group_performance.groupby(
['CampaignName', 'DayOfWeek'],
as_index=False
).agg({
'Impressions': 'sum',
'Clicks': 'sum',
'Cost': 'sum',
'Conversions': 'sum',
'ConversionsValue': 'sum',
})
campaign_weekday_performance
campaign_weekday_performance = (
campaign_weekday_performance.pivot(
index='CampaignName',
columns='DayOfWeek'
)
)
campaign_weekday_performance
```
pandas actually created hierarchical columns. The column name is on the first level and `DayOfWeek` on the second one. Just like with hierarchical indexes, we can access a specific group or a specific column by passing a value or a tuple with the path to the indexing operator:
```
campaign_weekday_performance['Impressions']
campaign_weekday_performance[('Impressions', 0)]
```
## Joining Tables
Until now, we worked with a single table. In practice, we often have multiple tables that contain different views on the data and we need to join them together. In AdWords, one such example is the Quality Score metric. That metric is available only in the keyword-level reports, but we might want to aggregate its value and see it on the Ad Group or even Campaign level. This will enable us to quickly find Ad Groups or Campaigns where we need to focus on the keywords and their quality.
We will load another table that contains quality scores on the keyword-level:
```
keywords_qs = pd.read_excel(
'../data/data_keywords_quality_score.xlsx'
)
keywords_qs
```
The table contains a row for each keyword in each Ad Group. It contains two metrics: `Impressions` and `QualityScore`. We would like to compute the aggregated `QualityScore` on the Ad Group level. It would be a mistake to simply calculate the mean over all keywords in an Ad Group - keywords that are rarely searched and have a low quality score are usually not a big deal, but keywords with many impressions and a low quality score should be fixed. Therefore, we need to calculate a weighted average with the number of impressions as the weight.
To accomplish this task, we can aggregate the values using the [`numpy.average` function](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.average.html), which allows us to set the `weights` parameter:
```
def weighted_average(chunk):
return np.average(
chunk,
weights=keywords_qs.loc[chunk.index, 'Impressions']
)
ad_group_qs = keywords_qs.groupby('AdGroupId').agg({
'QualityScore': weighted_average
})
### Can also be written using a lambda function:
ad_group_qs = keywords_qs.groupby('AdGroupId').agg({
'QualityScore': \
lambda chunk: np.average(
chunk,
weights=keywords_qs.loc[chunk.index, 'Impressions']
)
})
###
ad_group_qs
```
You can see that there are a few differences, but it would be useful to see the data in context with other metrics. Let's join the tables!
Before we begin, we need to aggregate the `ad_group_performance` table on the `AdGroupId` level:
```
ad_group_performance_sum = ad_group_performance.groupby(
'AdGroupId',
as_index=False
).agg({
'CampaignId': 'first',
'CampaignName': 'first',
'AdGroupName': 'first',
'Impressions': 'sum',
'Clicks': 'sum',
'Cost': 'sum',
'Conversions': 'sum',
'ConversionsValue': 'sum',
})
ad_group_performance_sum
```
The join itself is accomplished using [the `DataFrame.merge` method](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html#pandas.DataFrame.merge). We give it two tables, `left` (that's the table instance on which the method is called) and `right`, set the join type (same as in SQL: `left`, `right`, `inner`, `outer` -- there are helpful diagrams in [the documentation on merging and joining](https://pandas.pydata.org/pandas-docs/stable/merging.html#brief-primer-on-merge-methods-relational-algebra) and in the slides - **→ see slides 33 - 36**), and the join columns: it can be a set of columns which are in both tables, or we can set different columns in both tables (we need to set the same number of columns, of course). We can even order pandas to use indexes:
```
ad_group_performance_qs = ad_group_performance_sum.merge(
right=ad_group_qs,
left_on='AdGroupId',
right_index=True,
how='left'
)
ad_group_performance_qs
```
## Output
Now that we have successfully joined tables, we might want to save the results and give them to somebody else for further processing. We could share this notebook, but we would need to distribute all the data, the recipient will need Python with pandas, Jupyter installed... It is just easier for everyone to save it to XLSX:
```
ad_group_performance_qs.to_excel(
'../output/out_ad_group_performance_qs.xlsx',
sheet_name='Ad Groups with QS'
)
```
We can even write multiple sheets to a single file, we just need to create an instance of `pandas.ExcelWriter` in advance, pass it to `to_excel` and then call `save` on the writer:
```
writer = pd.ExcelWriter(
'../output/out_all_relevant_tables.xlsx'
)
ad_group_performance_qs.to_excel(
writer, sheet_name='Ad Groups with QS'
)
campaign_weekday_performance.to_excel(
writer, sheet_name='Campaigns on Weekdays'
)
daily_performance.to_excel(
writer, sheet_name='Account Daily Perf'
)
writer.save()
```
See [documentation for `DataFrame.to_excel`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_excel.html#pandas.DataFrame.to_excel) to learn more about saving to XLSX files.
We might want to output the data to a database. pandas uses the great SQLAlchemy library under the hood, so it supports MySQL, PostgreSQL, Microsoft SQL Server and several other databases. We just need to [initialize the database connection engine](https://docs.sqlalchemy.org/en/latest/core/engines.html) and then call `DataFrame.to_sql`. pandas and SQLAlchemy will handle creating the table automatically. Let's output the latest table to MySQL (remember to set `charset=utf8` in the connection string):
```
from sqlalchemy import create_engine
connection = create_engine(
'mysql://user:***@localhost/pyvo_pandas?charset=utf8'
)
ad_group_performance_qs.to_sql(
name='ad_group_performance_qs',
con=connection,
if_exists='replace'
)
```
## Conclusion
In this tutorial, we have gone through construction of `pandas.DataFrame`s from data in Python, loading the AdWords Ad Group performance data from an XLSX file, accessing rows and columns, filtering, computing new columns, grouping, sorting and pivoting. In the end, we demonstrated joining two tables and saving to an XLSX file and an SQL database.
The pandas library provides many other possibilities and functions that were not mentioned in this tutorial. Additionally, we didn't cover visualization of the data, which is another important step during data analysis. If you are interested in learning more, there are several good places to start:
* [The official pandas documentation](https://pandas.pydata.org/pandas-docs/stable/index.html), which was heavily referred to during the tutorial.
* [List of pandas tutorials in the documentation](https://pandas.pydata.org/pandas-docs/stable/tutorials.html) - I can recommend the great [Pandas cookbook by Julia Evans](https://github.com/jvns/pandas-cookbook).
* [Data Analysis with Pandas and Python on Udemy](https://www.udemy.com/data-analysis-with-pandas/) (paid course).
* [Learning pandas - Second Edition by Michael Heydt](https://www.packtpub.com/big-data-and-business-intelligence/learning-pandas-second-edition).
It is also a good idea to visit [the list of PyData projects](https://pydata.org/downloads.html) and [list of projects in the pandas Ecosystem](https://pandas.pydata.org/pandas-docs/stable/ecosystem.html) to see how pandas fits into the data science stack.
| github_jupyter |
# High-level RNN TF Example
```
import numpy as np
import os
import sys
import tensorflow as tf
from common.params_lstm import *
from common.utils import *
# Force one-gpu
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
print("OS: ", sys.platform)
print("Python: ", sys.version)
print("Numpy: ", np.__version__)
print("Tensorflow: ", tf.__version__)
print("GPU: ", get_gpu_name())
print(get_cuda_version())
print("CuDNN Version ", get_cudnn_version())
def create_symbol(CUDNN=True,
                  maxf=MAXFEATURES, edim=EMBEDSIZE, nhid=NUMHIDDEN, batchs=BATCHSIZE):
    word_vectors = tf.contrib.layers.embed_sequence(X, vocab_size=maxf, embed_dim=edim)
    word_list = tf.unstack(word_vectors, axis=1)
    if not CUDNN:
        cell = tf.contrib.rnn.GRUCell(nhid)
        outputs, states = tf.contrib.rnn.static_rnn(cell, word_list, dtype=tf.float32)
    else:
        # Use the cuDNN-accelerated GRU kernel instead of the vanilla RNN cells
        cudnn_cell = tf.contrib.cudnn_rnn.CudnnGRU(num_layers=1,
                                                   num_units=nhid,
                                                   input_size=edim)
        params_size_t = cudnn_cell.params_size()
        params = tf.Variable(tf.random_uniform([params_size_t], -0.1, 0.1), validate_shape=False)
        input_h = tf.Variable(tf.zeros([1, batchs, nhid]))
        outputs, states = cudnn_cell(input_data=word_list,
                                     input_h=input_h,
                                     params=params)
    logits = tf.layers.dense(outputs[-1], 2, activation=None, name='output')
    return logits
def init_model(m, y, lr=LR, b1=BETA_1, b2=BETA_2, eps=EPS):
    # Single-class labels, don't need dense one-hot
    # Expects unscaled logits, not output of tf.nn.softmax
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=m, labels=y)
    loss = tf.reduce_mean(xentropy)
    optimizer = tf.train.AdamOptimizer(lr, b1, b2, eps)
    training_op = optimizer.minimize(loss)
    return training_op
%%time
# Data into format for library
x_train, x_test, y_train, y_test = imdb_for_library(seq_len=MAXLEN, max_features=MAXFEATURES)
print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)
print(x_train.dtype, x_test.dtype, y_train.dtype, y_test.dtype)
%%time
# Place-holders
X = tf.placeholder(tf.int32, shape=[None, MAXLEN])
y = tf.placeholder(tf.int32, shape=[None])
sym = create_symbol()
%%time
model = init_model(sym, y)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
%%time
# Main training loop: 22s
correct = tf.nn.in_top_k(sym, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
for j in range(EPOCHS):
    for data, label in yield_mb(x_train, y_train, BATCHSIZE, shuffle=True):
        sess.run(model, feed_dict={X: data, y: label})
    # Log accuracy on the last minibatch of the epoch
    acc_train = sess.run(accuracy, feed_dict={X: data, y: label})
    print(j, "Train accuracy:", acc_train)
%%time
# Main evaluation loop: 9.19s
n_samples = (y_test.shape[0]//BATCHSIZE)*BATCHSIZE
y_guess = np.zeros(n_samples, dtype=int)  # plain int; np.int was removed from NumPy
y_truth = y_test[:n_samples]
c = 0
pred = tf.argmax(sym, 1)  # build the prediction op once, outside the loop
for data, label in yield_mb(x_test, y_test, BATCHSIZE):
    output = sess.run(pred, feed_dict={X: data})
    y_guess[c*BATCHSIZE:(c+1)*BATCHSIZE] = output
    c += 1
print("Accuracy: ", 1.*sum(y_guess == y_truth)/len(y_guess))
```
| github_jupyter |
```
!pip install gluoncv
import boto3
from IPython.display import clear_output, Image, display, HTML
import numpy as np
import cv2
import base64
from bokeh.plotting import figure
from bokeh.io import output_notebook, show, push_notebook
import time
import json
output_notebook()
STREAM_NAME = "pi4-001"
kvs = boto3.client("kinesisvideo")
# Grab the endpoint from GetDataEndpoint
endpoint = kvs.get_data_endpoint(
    APIName="GET_HLS_STREAMING_SESSION_URL",
    StreamName=STREAM_NAME
)['DataEndpoint']
print(endpoint)
# import sagemaker
# sagemaker_endpoint_name = 'object-detection-2019-10-18-17-56-41-925'
# sagemaker_endpoint = sagemaker.predictor.RealTimePredictor(sagemaker_endpoint_name)
# sagemaker_endpoint.content_type = 'image/jpeg'
from object_detection import ObjectDetection
objectDetection = ObjectDetection()
class VideoPlayer(object):
    def __init__(self):
        self._init = False
        self._myImage = None

    def __call__(self, frame):
        if frame is None:
            return
        if self._init is False:
            self.init_display(frame)
            self._init = True
        else:
            self.update_display(frame)

    def init_display(self, frame):
        assert frame is not None
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)  # because Bokeh expects a RGBA image
        # frame = cv2.flip(frame, -1)  # because Bokeh flips vertically
        frame = cv2.flip(frame, 0)  # because Bokeh flips vertically
        width = frame.shape[1]
        height = frame.shape[0]
        p = figure(x_range=(0, width), y_range=(0, height), output_backend="webgl", width=width, height=height)
        self._myImage = p.image_rgba(image=[frame], x=0, y=0, dw=width, dh=height)
        show(p, notebook_handle=True)

    def update_display(self, frame):
        assert frame is not None
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
        # frame = cv2.flip(frame, -1)
        frame = cv2.flip(frame, 0)
        self._myImage.data_source.data['image'] = [frame]
        push_notebook()
def detection_result_process(frame, classes, class_IDs, scores, bounding_boxes, hand_cnt, no_hand_cnt, start_trans, in_trans, curr_item_cnt, max_item_cnt, pre_msg, pre_msg2):
    # thres = 0.45
    thres = 0.5
    # Fall back to the previous messages when nothing is detected in this frame
    msg, msg2 = pre_msg, pre_msg2
    if class_IDs is not None and len(class_IDs) == 1:
        if len(class_IDs[0]) >= 1:
            hand = False
            # item_cnt = [0, 0, 0]
            item_cnt = [0, 0]
            for i in range(len(class_IDs[0])):
                if scores[0][i] > thres:
                    class_ID = int(class_IDs[0][i])
                    score = float(scores[0][i])
                    bounding_box = bounding_boxes[0][i]
                    # print('class_ID:', class_ID, 'score:', score, 'bounding_box:', bounding_box)
                    xmin, ymin, xmax, ymax = [int(x) for x in bounding_box]
                    cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), color=(0, 255, 0), thickness=1)
                    cv2.putText(frame, str(classes[class_ID])+':'+str(round(score, 2)), (xmin+10, ymin+10), fontFace=cv2.FONT_HERSHEY_SIMPLEX, fontScale=0.5, thickness=1, color=(255, 255, 255))
                    # if class_ID == 3:
                    if class_ID == 2:
                        hand = True
                    else:
                        item_cnt[class_ID] += 1
            for i in range(len(item_cnt)):
                item_cnt[i] = min(item_cnt[i], max_item_cnt[i])
            if not hand:
                no_hand_cnt += 1
            else:
                hand_cnt += 1
            if hand_cnt >= 3:
                start_trans = True
                in_trans = True
                hand_cnt = 0
                no_hand_cnt = 0
            elif no_hand_cnt >= 3:
                start_trans = False
                hand_cnt = 0
                no_hand_cnt = 0
            if start_trans:
                msg = 'Start Transaction'
            else:
                msg = 'End Transaction'
            if in_trans:
                for i in range(len(item_cnt)):
                    change_item = item_cnt[i]-curr_item_cnt[i]
                    if change_item != 0:
                        char = ''
                        if change_item > 0:
                            char = '+'
                        msg += ' '+classes[i]+': '+char+str(change_item)
                in_trans = False
                curr_item_cnt = item_cnt
            else:
                msg = pre_msg
                if not hand:
                    for i in range(len(item_cnt)):
                        curr_item_cnt[i] = max(curr_item_cnt[i], item_cnt[i])
            msg2 = ''
            for i in range(len(curr_item_cnt)):
                msg2 += classes[i]+': '+str(curr_item_cnt[i])+' '
    cv2.putText(frame, msg, (40, 40), fontFace=cv2.FONT_HERSHEY_SIMPLEX, fontScale=0.8, thickness=2, color=(255, 255, 255))
    cv2.putText(frame, msg2, (40, 60), fontFace=cv2.FONT_HERSHEY_SIMPLEX, fontScale=0.5, thickness=1, color=(255, 255, 255))
    return frame, hand_cnt, no_hand_cnt, start_trans, in_trans, curr_item_cnt, msg, msg2
# Grab the HLS Stream URL from the endpoint
kvam = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
url = kvam.get_hls_streaming_session_url(
    StreamName=STREAM_NAME,
    PlaybackMode="LIVE"
)['HLSStreamingSessionURL']
vcap = cv2.VideoCapture(url)
#vcap.set(cv2.CAP_PROP_BUFFERSIZE, 3)
player = VideoPlayer()
hand_cnt = 0
no_hand_cnt = 0
start_trans = False
in_trans = False
# curr_item_cnt = [0, 0, 0]
# max_item_cnt = [4, 1, 1]
curr_item_cnt = [0, 0]
max_item_cnt = [3, 1]
pre_msg = ''
pre_msg2 = ''
run_mode = 2 # 1 for original image, 2 for object detection
while(True):
    # Capture frame-by-frame
    read_start = time.time()
    if run_mode == 1:
        for i in range(24):
            ret, frame = vcap.read()
    if run_mode == 2:
        ret, frame = vcap.read()
        ret, frame = vcap.read()
        ret, frame = vcap.read()
    read_end = time.time()
    #print('read time:', read_end-read_start)
    if frame is not None:
        start = time.time()
        frame = cv2.flip(frame, -1)
        # save image
        if run_mode == 1:
            filename = '../images/frame_'+str(time.time())+'.jpg'
            cv2.imwrite(filename, frame)
            print(filename)
        if run_mode == 2:
            # use SageMaker
            #with open(filename, 'rb') as image:
            #    f = image.read()
            #    b = bytearray(f)
            #result = sagemaker_endpoint.predict(b)
            #print(json.loads(result))
            # use GluonCV
            detect_start = time.time()
            #class_IDs, scores, bounding_boxes = objectDetection.detect_image(frame)
            class_IDs, scores, bounding_boxes = objectDetection.detect_image_yolo(frame)
            #print('class_IDs:', class_IDs)
            #print('scores:', scores)
            #print('bounding_boxes:', bounding_boxes)
            detect_end = time.time()
            #print('detect time:', detect_end-detect_start)
            frame, hand_cnt, no_hand_cnt, start_trans, in_trans, curr_item_cnt, msg, msg2 = detection_result_process(frame, objectDetection.classes, class_IDs, scores, bounding_boxes, hand_cnt, no_hand_cnt, start_trans, in_trans, curr_item_cnt, max_item_cnt, pre_msg, pre_msg2)
            #print(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(detect_end)), 'msg:', msg)
            #print(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(detect_end)), 'msg2:', msg2)
            pre_msg = msg
            pre_msg2 = msg2
        # Display the resulting frame
        #cv2.imwrite(filename, frame)
        player(frame)
        end = time.time()
        #print('all time:', end-start)
        #t = 0.2
        #sleep_time = t-(end-start)
        #if sleep_time >= 0:
        #    time.sleep(sleep_time)
        #    vcap.set(cv2.CAP_PROP_POS_FRAMES, sleep_time*25)
        #skip_frame = max(0, 25-int(1/(end-start)))
        #print('skip_frame:', skip_frame)
        #if skip_frame > 0:
        #    vcap.set(cv2.CAP_PROP_POS_FRAMES, skip_frame)
        if run_mode == 1:
            t = 1
            time.sleep(t)
            #vcap.set(cv2.CAP_PROP_POS_FRAMES, t*25)
        #break
        # Press q to close the video windows before it ends if you want
        #if cv2.waitKey(22) & 0xFF == ord('q'):
        #    break
    else:
        print("Frame is None")
        break
# When everything is done, release the capture
vcap.release()
print("Video stop")
```
| github_jupyter |
# Effect of the sample size in cross-validation
In the previous notebook, we presented the general cross-validation framework
and how to assess if a predictive model is underfitting, overfitting, or
generalizing. Besides these aspects, it is also important to understand how
the different errors are influenced by the number of samples available.
In this notebook, we will show this aspect by looking at the variability of
the different errors.
Let's first load the data and create the same model as in the previous
notebook.
```
from sklearn.datasets import fetch_california_housing
housing = fetch_california_housing(as_frame=True)
data, target = housing.data, housing.target
target *= 100 # rescale the target in k$
```
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p class="last">If you want a deeper overview regarding this dataset, you can refer to the
Appendix - Datasets description section at the end of this MOOC.</p>
</div>
```
from sklearn.tree import DecisionTreeRegressor
regressor = DecisionTreeRegressor()
```
## Learning curve
To understand the impact of the number of samples available for training on
the statistical performance of a predictive model, it is possible to
synthetically reduce the number of samples used to train the predictive model
and check the training and testing errors.
Therefore, we can vary the number of samples in the training set and repeat
the experiment. The training and testing scores can be plotted similarly to
the validation curve, but instead of varying a hyperparameter, we vary the
number of training samples. This curve is called the **learning curve**.
It gives information regarding the benefit of adding new training samples
to improve a model's statistical performance.
Let's compute the learning curve for a decision tree and vary the
proportion of the training set from 10% to 100%.
```
import numpy as np
train_sizes = np.linspace(0.1, 1.0, num=5, endpoint=True)
train_sizes
```
We will use a `ShuffleSplit` cross-validation to assess our predictive model.
```
from sklearn.model_selection import ShuffleSplit
cv = ShuffleSplit(n_splits=30, test_size=0.2)
```
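To make the `ShuffleSplit` behaviour concrete, here is a small sketch (on a toy array, not the housing data) showing that each of the requested splits randomly holds out 20% of the samples as the test set:

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit

X_toy = np.arange(10).reshape(-1, 1)  # 10 samples
cv_demo = ShuffleSplit(n_splits=3, test_size=0.2, random_state=0)
for train_idx, test_idx in cv_demo.split(X_toy):
    # each split: 8 training samples, 2 test samples
    print(len(train_idx), len(test_idx))
```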
Now, we are all set to carry out the experiment.
```
from sklearn.model_selection import learning_curve
results = learning_curve(
    regressor, data, target, train_sizes=train_sizes, cv=cv,
    scoring="neg_mean_absolute_error", n_jobs=2)
train_size, train_scores, test_scores = results[:3]
# Convert the scores into errors
train_errors, test_errors = -train_scores, -test_scores
```
Now, we can plot the curve.
```
import matplotlib.pyplot as plt
plt.errorbar(train_size, train_errors.mean(axis=1),
             yerr=train_errors.std(axis=1), label="Training error")
plt.errorbar(train_size, test_errors.mean(axis=1),
             yerr=test_errors.std(axis=1), label="Testing error")
plt.legend()
plt.xscale("log")
plt.xlabel("Number of samples in the training set")
plt.ylabel("Mean absolute error (k$)")
_ = plt.title("Learning curve for decision tree")
```
Looking at the training error alone, we see that we get an error of 0 k$. It
means that the trained model (i.e. decision tree) is clearly overfitting the
training data.
Looking at the testing error alone, we observe that the more samples are
added into the training set, the lower the testing error becomes. We are
also searching for the plateau of the testing error, beyond which there is
no benefit to adding more samples, or for assessing the potential gain of
adding more samples into the training set.
If we achieve a plateau and adding new samples in the training set does not
reduce the testing error, we might have reached the Bayes error rate using the
available model. Using a more complex model might be the only possibility to
reduce the testing error further.
## Summary
In the notebook, we learnt:
* the influence of the number of samples in a dataset, especially on the
variability of the errors reported when running the cross-validation;
* about the learning curve that is a visual representation of the capacity
of a model to improve by adding new samples.
| github_jupyter |
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="fig/cover-small.jpg">
*This notebook contains an excerpt from the [Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython).*
*The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).*
<!--NAVIGATION-->
< [Introduction](00-Introduction.ipynb) | [Contents](Index.ipynb) | [A Quick Tour of Python Language Syntax](02-Basic-Python-Syntax.ipynb) >
# How to Run Python Code
Python is a flexible language, and there are several ways to use it depending on your particular task.
One thing that distinguishes Python from other programming languages is that it is *interpreted* rather than *compiled*.
This means that it is executed line by line, which allows programming to be interactive in a way that is not directly possible with compiled languages like Fortran, C, or Java. This section will describe four primary ways you can run Python code: the *Python interpreter*, the *IPython interpreter*, via *Self-contained Scripts*, or in the *Jupyter notebook*.
### The Python Interpreter
The most basic way to execute Python code is line by line within the *Python interpreter*.
The Python interpreter can be started by installing the Python language (see the previous section) and typing ``python`` at the command prompt (look for the Terminal on Mac OS X and Unix/Linux systems, or the Command Prompt application in Windows):
```
$ python
Python 3.5.1 |Continuum Analytics, Inc.| (default, Dec 7 2015, 11:24:55)
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
With the interpreter running, you can begin to type and execute code snippets.
Here we'll use the interpreter as a simple calculator, performing calculations and assigning values to variables:
``` python
>>> 1 + 1
2
>>> x = 5
>>> x * 3
15
```
The interpreter makes it very convenient to try out small snippets of Python code and to experiment with short sequences of operations.
### The IPython interpreter
If you spend much time with the basic Python interpreter, you'll find that it lacks many of the features of a full-fledged interactive development environment.
An alternative interpreter called *IPython* (for Interactive Python) is bundled with the Anaconda distribution, and includes a host of convenient enhancements to the basic Python interpreter.
It can be started by typing ``ipython`` at the command prompt:
```
$ ipython
Python 3.5.1 |Continuum Analytics, Inc.| (default, Dec 7 2015, 11:24:55)
Type "copyright", "credits" or "license" for more information.
IPython 4.0.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]:
```
The main aesthetic difference between the Python interpreter and the enhanced IPython interpreter lies in the command prompt: Python uses ``>>>`` by default, while IPython uses numbered commands (e.g. ``In [1]:``).
Regardless, we can execute code line by line just as we did before:
``` ipython
In [1]: 1 + 1
Out[1]: 2
In [2]: x = 5
In [3]: x * 3
Out[3]: 15
```
Note that just as the input is numbered, the output of each command is numbered as well.
IPython makes available a wide array of useful features; for some suggestions on where to read more, see [Resources for Further Learning](16-Further-Resources.ipynb).
### Self-contained Python scripts
Running Python snippets line by line is useful in some cases, but for more complicated programs it is more convenient to save code to file, and execute it all at once.
By convention, Python scripts are saved in files with a *.py* extension.
For example, let's create a script called *test.py* which contains the following:
``` python
# file: test.py
print("Running test.py")
x = 5
print("Result is", 3 * x)
```
To run this file, we make sure it is in the current directory and type ``python`` *``filename``* at the command prompt:
```
$ python test.py
Running test.py
Result is 15
```
For more complicated programs, creating self-contained scripts like this one is a must.
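As scripts grow beyond a few lines, a common Python convention (not covered in this excerpt) is to guard the entry point, so the same file can also be imported as a module without immediately running. A hypothetical, slightly more structured variant of *test.py* might look like:

``` python
# file: test2.py (hypothetical extension of test.py)
def compute(x):
    return 3 * x

if __name__ == "__main__":
    # This branch runs only via `python test2.py`, not on `import test2`
    print("Result is", compute(5))
```

Executed directly, this prints `Result is 15`, just like the original script, but another script can now safely do `import test2` and reuse `compute`.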
### The Jupyter notebook
A useful hybrid of the interactive terminal and the self-contained script is the *Jupyter notebook*, a document format that allows executable code, formatted text, graphics, and even interactive features to be combined into a single document.
Though the notebook began as a Python-only format, it has since been made compatible with a large number of programming languages, and is now an essential part of the [*Jupyter Project*](https://jupyter.org/).
The notebook is useful both as a development environment, and as a means of sharing work via rich computational and data-driven narratives that mix together code, figures, data, and text.
For an introduction to Jupyter, you might consult the [*Jupyter notebook introduction*](https://realpython.com/jupyter-notebook-introduction/).
To get some more knowledge about how a Markdown cell works and how to format text in such a cell, you might want to take a look at the [*Introduction of Markdown*](https://daringfireball.net/projects/markdown/) and copy the text under [*this link*](https://daringfireball.net/projects/markdown/index.text) to a Markdown cell and execute its content.
If the Jupyter notebook is not installed on the computer you are working on, you can use [*Google colab*](https://colab.research.google.com/).
<!--NAVIGATION-->
< [Introduction](00-Introduction.ipynb) | [Contents](Index.ipynb) | [A Quick Tour of Python Language Syntax](02-Basic-Python-Syntax.ipynb) >
| github_jupyter |
# Prerequisites
Install the `OpenPAI` sdk from `github` and specify your cluster information in `~/.openpai/clusters.yaml`.
And for simplicity and security, we recommend users set up the necessary information in `.openpai/defaults.json` rather than spelling it out in the example notebook. (Refer to the [README](https://github.com/microsoft/pai/blob/sdk-release-v0.4.00/contrib/python-sdk/README.md) for more details.)
_Please make sure you have set a default value for ***cluster-alias***. This notebook will not set it explicitly for security and privacy reasons._
If not, use the command below to set it:
```bash
opai set cluster-alias=<your/cluster/alias>
```
```
%load_ext autoreload
%autoreload 2
from openpaisdk.command_line import Engine
from openpaisdk.core import ClusterList, in_job_container
from uuid import uuid4 as randstr
clusters = Engine().process(['cluster', 'list'])
default_values = Engine().process(['set'])
print(default_values)
cluster_alias = default_values["cluster-alias"]
assert cluster_alias in clusters, "please specify cluster-alias and workspace"
```
# Submit jobs
Now we submit jobs from
- an existing version 1 job config file
- an existing version 2 job config file
- a hello-world command line
```
%%writefile mnist_v1.json
{
    "jobName": "keras_tensorflow_backend_mnist",
    "image": "openpai/pai.example.keras.tensorflow:stable",
    "taskRoles": [
        {
            "name": "mnist",
            "taskNumber": 1,
            "cpuNumber": 4,
            "memoryMB": 8192,
            "gpuNumber": 1,
            "command": "python mnist_cnn.py"
        }
    ]
}
%%writefile mnist_v2.yaml
protocolVersion: 2
name: keras_tensorflow_mnist
type: job
version: 1.0
contributor: OpenPAI
description: |
  # Keras Tensorflow Backend MNIST Digit Recognition Examples
  Trains a simple convnet on the MNIST dataset.
  Gets to 99.25% test accuracy after 12 epochs
  (there is still a lot of margin for parameter tuning).
  16 seconds per epoch on a GRID K520 GPU.
  Reference https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py
prerequisites:
  - protocolVersion: 2
    name: keras_tensorflow_example
    type: dockerimage
    version: 1.0
    contributor: OpenPAI
    description: |
      This is an [example Keras with TensorFlow backend Docker image on OpenPAI](https://github.com/Microsoft/pai/tree/master/examples/keras).
    uri: openpai/pai.example.keras.tensorflow
taskRoles:
  train:
    instances: 1
    completion:
      minSucceededTaskCount: 1
    dockerImage: keras_tensorflow_example
    resourcePerInstance:
      cpu: 4
      memoryMB: 8192
      gpu: 1
    commands:
      - python mnist_cnn.py
tests = ["submit_v1", "submit_v2", "sub_oneliner"]
jobnames = {k: k + '_' + randstr().hex for k in tests}
options = ""
# options += " --preview"
if not in_job_container():
    jobs, cmds = [], []
    # submit v1
    jobs.append("submit_v1_" + randstr().hex)
    cmds.append(f'opai job submit {options} --update jobName={jobs[-1]} mnist_v1.json')
    # submit v2
    jobs.append("submit_v2_" + randstr().hex)
    cmds.append(f'opai job submit {options} --update name={jobs[-1]} mnist_v2.yaml')
    # sub
    jobs.append("sub_" + randstr().hex)
    resource = '-i openpai/pai.example.keras.tensorflow --cpu 4 --memoryMB 8192 --gpu 1'
    cmds.append(f'opai job sub {options} -j {jobs[-1]} {resource} python mnist_cnn.py')
    # notebook
    jobs.append("notebook_" + randstr().hex)
    cmds.append(f'opai job notebook {options} -j {jobs[-1]} {resource} --python python3 --pip-installs keras 2-submit-job-from-local-notebook.ipynb')
    for cmd in cmds:
        print(cmd, "\n")
        ! {cmd}
        print("\n")
    states = ClusterList().load().get_client(cluster_alias).wait(jobs)
    failed_jobs = [t for i, t in enumerate(jobs) if states[i] != "SUCCEEDED"]
    assert not failed_jobs, "some jobs failed: %s" % failed_jobs
```
| github_jupyter |
# Riemann Staircase
A notebook to calculate functions to visualise the prime-counting staircase using Riemann's explicit formula.
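It is worth stating up front the quantity the code below truncates: Riemann's explicit formula for the prime-counting function. With $\mu$ the Möbius function and $\rho$ ranging over the non-trivial zeta zeros, it reads

$$
\pi(x) = \sum_{n \geq 1} \frac{\mu(n)}{n} J\left(x^{1/n}\right),
\qquad
J(x) = \operatorname{Li}(x) - \sum_{\rho} \operatorname{Li}(x^{\rho}) - \log 2 + \int_x^{\infty} \frac{dt}{t\,(t^2-1)\log t}.
$$

The functions defined next evaluate exactly these terms: `Li` and `secondary` handle $\operatorname{Li}(x)$ and the sum over zeros, while `Ns` generates the pairs $(n, x^{1/n})$.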
```
from mpmath import *
from sympy import mobius
import numpy as np
import matplotlib.pyplot as plt
from tqdm.notebook import trange
mp.dps = 30; mp.pretty = True
def Li(x, rho=1):
    return ei(rho * log(x))

def secondary(x, zetazeros):
    res = 0
    for rho in zetazeros:
        a = Li(x, rho) + Li(x, rho.conjugate())
        res += a
    return res

def Ns(x):
    N = 1
    y = x
    while y >= 2:
        if mobius(N) != 0:
            yield N, y
        N += 1
        y = x ** (1/N)

import gzip
with gzip.open("data/zeros6.gz", "rt") as f:
    lines = f.readlines()
zetazeros = [mpc(0.5, float(s.strip())) for s in lines]
```
An approximation of $\pi(x)$ using the first $n$ Riemann zeros.
```
def pi_approx(x, n_zeros):
    total = 0
    for N, y in Ns(x):
        row = mobius(N) * (Li(y) - secondary(y, zetazeros[:n_zeros]) - log(2) + quad(lambda t: 1/(t*(t*t-1)*log(t)), [y, inf])) / N
        total += row
    return total
```
Plot the approximation for a few values of $n$.
```
def plot_funcs(funcs, rng, npoints=200):
    points = np.linspace(rng[0], rng[1], npoints)
    plt.figure()
    for func in funcs:
        plt.plot(points, [func(p) for p in points])
    plt.plot(points, [0]*npoints, linestyle=':')
plt.rcParams["figure.figsize"]=10,7
# approximations using the first 0, 30, 60 and 90 zeta zeros
plot_funcs([primepi, lambda p: pi_approx(p, 0)], [2, 50], 200)
plot_funcs([primepi, lambda p: pi_approx(p, 30)], [2, 50], 200)
plot_funcs([primepi, lambda p: pi_approx(p, 60)], [2, 50], 200)
plot_funcs([primepi, lambda p: pi_approx(p, 90)], [2, 50], 200)
```
## Precompute for multiple zeros
```
%%time
rng = [2, 50]
npoints = 500
n_doublings = 13
points = np.linspace(rng[0], rng[1], npoints)
values = np.zeros((n_doublings, npoints))
n_zeros = 0
for i in range(n_doublings):
    print(i, n_zeros)
    for j in trange(len(points)):
        p = points[j]
        values[i, j] = float(pi_approx(p, n_zeros))
    if n_zeros == 0:
        n_zeros = 1
    else:
        n_zeros *= 2
```
Export the data for use in JavaScript.
```
pi_values = np.zeros(npoints)
for i, p in enumerate(points):
    pi_values[i] = float(primepi(p))
data = np.empty((values.shape[0] + 2, values.shape[1]))
data[0] = points
data[1:values.shape[0]+1] = values
data[-1] = pi_values
np.savetxt("staircase.csv", data, delimiter=",")
```
Show an interactive plot using `ipywidgets`
```
def plot_points(n=0):
    plt.figure()
    plt.plot(points, pi_values)
    plt.plot(points, values[n])
    plt.ylim(0, 16)
    plt.show()
%matplotlib inline
from ipywidgets import interactive
import matplotlib.pyplot as plt
interactive_plot = interactive(plot_points, n=(0, len(values)-1, 1))
output = interactive_plot.children[-1]
plt.rcParams["figure.figsize"]=10,7
interactive_plot
```
| github_jupyter |
# Foundations of Computational Economics #12
by Fedor Iskhakov, ANU
<img src="_static/img/dag3logo.png" style="width:256px;">
## Enumeration of discrete compositions
<img src="_static/img/lab.png" style="width:64px;">
<img src="_static/img/youtube.png" style="width:65px;">
[https://youtu.be/eU2WRHBTFBw](https://youtu.be/eU2WRHBTFBw)
Description: Combinatorial enumeration. Python generators.
### Generators in Python
**yield** keyword is a special kind of **return** from a function
- “pauses” the function execution
- waits for the next iteration request from the caller
- saves its state for the next call
Use generators instead of lists when only one element of the list is needed at any time
- **saves memory**: alternative is a list of objects to be generated
- overhead in keeping the state of the generator function
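The memory point is easy to see with only the standard library: a list materializes all of its elements at once, while the equivalent generator object stores just its current state.

```python
import sys

squares_list = [i * i for i in range(10_000)]   # all 10,000 values in memory
squares_gen = (i * i for i in range(10_000))    # just the generator state

print(sys.getsizeof(squares_list) > sys.getsizeof(squares_gen))  # True
print(next(squares_gen), next(squares_gen))  # 0 1
```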
#### Iterator vs Iterable vs Generator
Iterator:
- object that can return its members one at a time
- has `__iter__` and `__next__` methods
- raises `StopIteration` when exhausted
- can be used in for loops
Iterable:
- object that can be converted to an iterator with iter() function
- can be used in for loops directly
- examples: lists, tuples, strings
Generator:
- a particular kind of iterator
- can be implemented as a function with yield returns
- or as *comprehension expression* similar to list comprehension
#### Note on range()
A `range` object is an iterable, therefore
- the `iter()` function converts it to an iterator
- it can be used directly in for loops
- but it is more than a generator: it can be used in many other tasks besides looping
```
# simple examples of generators
def test_generator():
    '''Testing generator
    '''
    yield 1
    yield 5
    yield 15
    yield 25

g = test_generator()
print('initial call returned %d' % (next(g)))
for i in g:
    print('loop call returned %d' % (i))
print('loop finished gracefully')
next(g)  # impossible to get any output once generator is done

def test_generator_steps():
    '''Testing generator
    '''
    for i in range(10):
        print('generator at step %d' % (i))
        yield i

g = test_generator_steps()
for i in g:
    print('main code at step %d' % (i))
```
### Discrete compositions
$$
(p_1, p_2, \dots, p_n) \text{ such that } p_i \in \mathbb{Z}, 0 \le p_i \le M, \sum_{i=1}^{n}p_i = M
$$
$ (p_1,p_2,\dots,p_n) $ is **composition** of number $ M $ into
$ n $ parts.
#### Number of compositions
- composition corresponds to cutting the interval of length $ M $ into $ n $ parts
- but cut points can be at the same place to allow for zeros
- instead let interval of length $ M+n $ be cut
- discrete $ \rightarrow $ have $ M+n-1 $ points for $ n-1 $ cuts
- no overlaps in the latter scheme
$$
\text{Number of compositions} = {M+n-1 \choose n-1} = \frac{(M+n-1)!}{(n-1)! \, M!}
$$
```
import scipy.special
def number_compositions(n, M):
    '''Returns the number of discrete compositions for given parameters
    '''
    return int(scipy.special.comb(M+n-1, n-1))  # M+n-1 choose n-1
print('n=%3d, M=%3d --> NC=%d'%(2,2,number_compositions(2,2)))
print('n=%3d, M=%3d --> NC=%d'%(2,10,number_compositions(2,10)))
print('n=%3d, M=%3d --> NC=%d'%(2,100,number_compositions(2,100)))
print('n=%3d, M=%3d --> NC=%d'%(5,10,number_compositions(5,10)))
print('n=%3d, M=%3d --> NC=%d'%(5,100,number_compositions(5,100)))
print('n=%3d, M=%3d --> NC=%d'%(10,100,number_compositions(10,100)))
print('n=%3d, M=%3d --> NC=%d'%(50,100,number_compositions(50,100)))
```
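As a sanity check on the formula, for small $n$ and $M$ we can brute-force all candidate tuples with the standard library and count those summing to $M$:

```python
from itertools import product
from math import comb

n, M = 3, 4
# enumerate all tuples with entries in 0..M and keep those summing to M
comps = [p for p in product(range(M + 1), repeat=n) if sum(p) == M]
print(len(comps), comb(M + n - 1, n - 1))  # 15 15
```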
#### Lexicographical order
Composition $ p = (p_1, p_2, \dots, p_n) $ is greater than
composition $ p' = (p'_1, p'_2, \dots, p'_n) $ in lexicographical sense
iff for some $ J \in \{1,\dots,n\} $ $ p_j=p'_j $ for all $ j<J $
and $ p_J>p'_J $.
$ j=n $ is the *lowest digit*, $ j=1 $ is the *highest digit*
Composition $ p $ is greater that $ p' $ iff the *highest different digit*
of $ p $ is greater than that of $ p' $.
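Python's built-in tuple comparison happens to implement exactly this order, which gives a quick way to check the definition:

```python
# The highest (leftmost) differing digit decides the comparison
assert (2, 0, 0) > (1, 9, 9)   # differ at j=1: 2 > 1
assert (1, 3, 0) > (1, 2, 9)   # equal at j=1, differ at j=2: 3 > 2
assert not ((0, 9, 9) > (1, 0, 0))
print("lexicographic order confirmed")
```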
#### Examples of lexicographical order
- numbers in any base system
- words in a dictionary
- compositions to be generated
```
def compositions(N, m):
    '''Iterable on compositions of N with m parts
    Returns the generator (to be used in for loops)
    '''
    # exercise: fill in the generator body; `yield from ()` keeps this a
    # (currently empty) generator so the loop below runs without errors
    yield from ()

n, M = 4, 8
for i, k in enumerate(compositions(M, n)):
    print('%3d' % i, end=": ")
    print(k)
def compositions(N, m):
    '''Iterable on compositions of N with m parts
    Returns the generator (to be used in for loops)
    '''
    cmp = [0,] * m
    cmp[m-1] = N  # initial composition is all to the last
    yield cmp
    while cmp[0] != N:
        i = m - 1
        while cmp[i] == 0:
            i -= 1  # find lowest non-zero digit
        cmp[i-1] = cmp[i-1] + 1  # increment next digit
        cmp[m-1] = cmp[i] - 1  # the rest to the lowest
        if i != m-1:
            cmp[i] = 0  # maintain cost sum
        yield cmp
```
### Further learning resources
- Generators vs. lists in Python
[https://www.youtube.com/watch?v=bD05uGo_sVI](https://www.youtube.com/watch?v=bD05uGo_sVI)
- Iterators and generators in Trey Hunner blog
[https://treyhunner.com/2018/06/how-to-make-an-iterator-in-python/](https://treyhunner.com/2018/06/how-to-make-an-iterator-in-python/)
- Some examples of combinatorial optimization
[https://neos-guide.org/content/combinatorial-optimization](https://neos-guide.org/content/combinatorial-optimization)
**A high-level plotting API for the PyData ecosystem built on HoloViews.**
<img src="./assets/diagram.png" width="70%"></img>
The PyData ecosystem has a number of core Python data containers that allow users to work with a wide array of datatypes, including:
* [Pandas](https://pandas.pydata.org): DataFrame, Series (columnar/tabular data)
* [Rapids cuDF](https://docs.rapids.ai/api/cudf/stable/): GPU DataFrame, Series (columnar/tabular data)
* [Dask](https://dask.pydata.org): DataFrame, Series (distributed/out of core arrays and columnar data)
* [XArray](https://xarray.pydata.org): Dataset, DataArray (labelled multidimensional arrays)
* [Streamz](https://streamz.readthedocs.io): DataFrame(s), Series(s) (streaming columnar data)
* [Intake](https://github.com/ContinuumIO/intake): DataSource (data catalogues)
* [GeoPandas](https://geopandas.org): GeoDataFrame (geometry data)
* [NetworkX](https://networkx.github.io/documentation/stable/): Graph (network graphs)
Several of these libraries have the concept of a high-level plotting API that lets a user generate common plot types very easily. The native plotting APIs are generally built on [Matplotlib](http://matplotlib.org), which provides a solid foundation, but it means that users miss out on the benefits of modern, interactive plotting libraries built for the web like [Bokeh](http://bokeh.pydata.org) and [HoloViews](http://holoviews.org).
hvPlot provides a high-level plotting API built on HoloViews that offers a general and consistent API for plotting data in all the formats mentioned above. hvPlot can integrate neatly with the individual libraries if an extension mechanism for the native plot APIs is offered, or it can be used as a standalone component. To get started, jump straight into the [installation instructions](#installation) and check out the current functionality in the [User Guide.](user_guide/index.html)
## Usage
hvPlot provides an alternative to the static plotting API provided by [Pandas](http://pandas.pydata.org) and other libraries, with an interactive [Bokeh](http://bokeh.pydata.org)-based plotting API that supports panning, zooming, hovering, and clickable/selectable legends:
```
import pandas as pd, numpy as np
idx = pd.date_range('1/1/2000', periods=1000)
df = pd.DataFrame(np.random.randn(1000, 4), index=idx, columns=list('ABCD')).cumsum()
import hvplot.pandas # noqa
df.hvplot()
```
With recent versions of Pandas (>=0.25.0) we can also swap the default plotting backend:
```
pd.options.plotting.backend = 'holoviews'
df.A.hist()
```
hvPlot works with multiple data sources and ships with some inbuilt sample data, which is loaded using the [Intake](http://github.com/ContinuumIO/intake) data catalog.
```
from hvplot.sample_data import us_crime
columns = ['Burglary rate', 'Larceny-theft rate', 'Robbery rate', 'Violent Crime rate']
us_crime.plot.violin(y=columns, group_label='Type of crime', value_label='Rate per 100k', invert=True, color='Type of crime')
```
Unlike the default plotting, hvPlot output can easily be composed using `*` to overlay plots (or `+` to lay them out side by side):
```
us_crime.plot.bivariate('Burglary rate', 'Property crime rate', legend=False, width=500, height=400) * \
us_crime.plot.scatter( 'Burglary rate', 'Property crime rate', color='black', size=15, legend=False) +\
us_crime.plot.table(['Burglary rate', 'Property crime rate'], width=350, height=350)
```
When used with [streamz](http://streamz.readthedocs.io) DataFrames, hvPlot can very easily plot streaming data to get a [live updating plot](user_guide/Streaming.html):
```
import hvplot.streamz # noqa
from streamz.dataframe import Random
streaming_df = Random(freq='5ms')
streaming_df.hvplot(backlog=100, height=400, width=500) +\
streaming_df.hvplot.hexbin(x='x', y='z', backlog=2000, height=400, width=500);
```
<img src="./assets/streamz_demo.gif" style="display: table; margin: 0 auto;" width="70%"></img>
For multidimensional data not supported well by Pandas, you can use an XArray Dataset like this gridded data of North American air temperatures over time, which also demonstrates support for [geographic projections](user_guide/Geographic_Plots.html):
```
import xarray as xr, cartopy.crs as crs
import hvplot.xarray # noqa
air_ds = xr.tutorial.open_dataset('air_temperature').load()
proj = crs.Orthographic(-90, 30)
air_ds.air.isel(time=slice(0, 9, 3)).hvplot.quadmesh(
    'lon', 'lat', projection=proj, project=True, global_extent=True,
    cmap='viridis', rasterize=True, dynamic=False, coastline=True,
    frame_width=500)
```
Lastly, hvPlot also provides drop-in replacements for the NetworkX plotting functions, making it trivial to generate interactive plots of [network graphs](user_guide/NetworkX.html):
```
import networkx as nx
import hvplot.networkx as hvnx
G = nx.karate_club_graph()
hvnx.draw_spring(G, labels='club', font_size='10pt', node_color='club', cmap='Category10', width=500, height=500)
```
hvPlots will show widgets like the "Time" slider here whenever your data is indexed by dimensions that are not mapped onto the plot axes, allowing you to explore complex datasets much more easily than with the default plotting support.
hvPlot is designed to work well in and outside the Jupyter notebook, and thanks to built-in [Datashader](http://datashader.org) support scales easily to millions or even billions of datapoints:
<img src="./assets/console_server.gif" style="display: table; margin: 0 auto;" width="80%"></img>
For information on using hvPlot take a look at the [User Guide](user_guide/index.html) or the [announcement blog](http://blog.pyviz.org/hvplot_announcement.html).
## Installation
| | |
| --- | --- |
| Latest release | [](https://github.com/holoviz/hvplot/releases) [](https://pypi.python.org/pypi/hvplot) [](https://anaconda.org/pyviz/hvplot) [](https://anaconda.org/conda-forge/hvplot) [](https://anaconda.org/anaconda/hvplot) |
hvPlot works with [Python 2.7 and Python 3](https://travis-ci.org/pyviz/hvplot) on Linux, Windows, or Mac. The recommended way to install hvPlot is using the [conda](http://conda.pydata.org/docs/) command provided by [Anaconda](http://docs.continuum.io/anaconda/install) or [Miniconda](http://conda.pydata.org/miniconda.html):
conda install -c pyviz hvplot
or using PyPI:
pip install hvplot
For versions of `jupyterlab>=3.0` the necessary extension is automatically bundled in the `pyviz_comms` package, which must be >=2.0. However, note that for versions of `jupyterlab<3.0` you must also manually install the JupyterLab extension with:
conda install jupyterlab
jupyter labextension install @pyviz/jupyterlab_pyviz
Face detection with OpenCV is nothing new or complicated; face recognition, however, is a different matter. Combining both with some PIL image processing, we can build a fun machine-vision app.
```
import pytest
import ipytest
ipytest.autoconfig()
```
### Detect faces, draw memes
```
import os
import cv2
import numpy as npframe  # numpy, aliased as "npframe" throughout this notebook
import PIL.Image
import PIL.ImageOps
import PIL.ImageFont
import PIL.ImageDraw
from IPython.display import display
eye_cascade = cv2.CascadeClassifier('/home/xilinx/jupyter_notebooks/pynq-memes/data/haarcascade_eye.xml')
face_cascade = cv2.CascadeClassifier('/home/xilinx/jupyter_notebooks/pynq-memes/data/haarcascade_frontalface_default.xml')
def detect_faces_on_frame(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray, 1.3, 5)
ipytest.clean_tests()
def test_if_face_is_detected():
    image = PIL.Image.open('/home/xilinx/jupyter_notebooks/pynq-memes/data/tests/test_face.jpg')
    frame = npframe.array(image)
    faces = detect_faces_on_frame(frame)
    assert len(faces) == 1
    x, y, w, h = faces[0]
    assert [x, y, w, h] == [213, 94, 119, 119]
ipytest.run('-qq')
def mark_faces_on_frame(frame, faces):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # grayscale copy for the eye cascade
    for (x,y,w,h) in faces:
        cv2.rectangle(frame,(x,y),(x+w,y+h),(255,0,0),2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = frame[y:y+h, x:x+w]
        eyes = eye_cascade.detectMultiScale(roi_gray)
        for (ex,ey,ew,eh) in eyes:
            cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
    return frame
def crop_face(image, face):
    x, y, width, height = expand_face_coordinates_if_possible(face, image)
    return image.crop((x, y, x + width, y + height))
def expand_face_coordinates_if_possible(face, image, expand_by=60):
    image_width, image_height = image.size
    x, y, width, height = face
    x = x - expand_by
    if x < 0:
        x = 0
    y = y - expand_by
    if y < 0:
        y = 0
    width = width + expand_by * 2
    if width + x > image_width:
        width = image_width - x
    height = height + expand_by * 2
    if height + y > image_height:
        height = image_height - y
    return x, y, width, height
ipytest.clean_tests()
def test_if_face_is_cropped():
    image = PIL.Image.open('/home/xilinx/jupyter_notebooks/pynq-memes/data/tests/test_face.jpg')
    face = [213, 94, 119, 119]
    result = crop_face(image, face)
    assert result.size == (239, 239)
    display(result)
ipytest.run('-qq')
ipytest.clean_tests()
def test_if_coordinates_are_expanded():
    image = PIL.Image.new('RGB', (400, 400))
    face = (100, 100, 50, 50)
    result = expand_face_coordinates_if_possible(face, image, expand_by=20)
    assert result == (80, 80, 90, 90)
def test_if_left_side_edge_is_handled():
    image = PIL.Image.new('RGB', (400, 400))
    face = (0, 0, 50, 50)
    result = expand_face_coordinates_if_possible(face, image, expand_by=20)
    assert result == (0, 0, 90, 90)
def test_if_right_side_edge_is_handled():
    image = PIL.Image.new('RGB', (400, 400))
    face = (350, 350, 50, 50)
    result = expand_face_coordinates_if_possible(face, image, expand_by=20)
    assert result == (330, 330, 70, 70)
ipytest.run('-qq')
def draw_meme(image, face):
    image = crop_face(image, face)
    image = PIL.ImageOps.expand(image, border=20, fill='deeppink')
    w, h = image.size
    overlay = PIL.Image.new('RGB', (w, h + 60), (255, 20, 147))
    overlay.paste(image, (0, 0))
    font_size = 30
    if w < 200:
        font_size = 20
    draw = PIL.ImageDraw.Draw(overlay)
    font = PIL.ImageFont.truetype("/home/xilinx/jupyter_notebooks/pynq-memes/data/COMIC.TTF", font_size)
    draw.text((20, h),"PYNQ Hero!",(255,255,255),font=font)
    return overlay
ipytest.clean_tests()
def test_if_meme_is_drawn():
    image = PIL.Image.open('/home/xilinx/jupyter_notebooks/pynq-memes/data/tests/test_face.jpg')
    face = (175, 17, 185, 230)
    result = draw_meme(image, face)
    assert result
    display(result)
ipytest.run('-qq')
def display_memes_on_one_frame(memes):
    padding = 5
    overlay = PIL.Image.new('RGB', (1920, 1080), (255, 20, 147))
    x = padding
    y = padding
    max_height_in_row = 0
    for meme in memes:
        overlay.paste(meme, (x, y))
        w, h = meme.size
        x += w
        x += padding
        if max_height_in_row < h:
            max_height_in_row = h
        if x + w >= 1900:
            x = padding
            y += max_height_in_row
            y += padding
            max_height_in_row = 0
    return overlay
ipytest.clean_tests()
def test_if_memes_are_drawn_on_single_frame():
    image = PIL.Image.open('/home/xilinx/jupyter_notebooks/pynq-memes/data/tests/meme.png')
    memes = [image] * 7
    result = display_memes_on_one_frame(memes)
    assert result.size == (1920, 1080)
    display(result)
ipytest.run('-qq')
def get_image_from_frame(frame):
    return PIL.Image.fromarray(frame)
ipytest.clean_tests()
def test_integration():
    image_path = "/home/xilinx/jupyter_notebooks/pynq-memes/data/a2.jpg"
    source_image = PIL.Image.open(image_path)
    current_frame = npframe.array(source_image)
    faces = detect_faces_on_frame(current_frame)
    created_memes = []
    for face in faces:
        meme = draw_meme(source_image, face)
        created_memes.append(meme)
        display(meme)
    display_memes_on_one_frame(created_memes)
    assert len(created_memes) == 3
ipytest.run('-qq')
```
# Face recognition - optional
This requires installing the "face_recognition" package, which on the PYNQ-Z2 can take well over an hour.
The face comparison also takes some time.
#### Reference image:

#### Other Ethan Hunt image:

#### Ethan Hunt in disguise image:


#### Other face image:

```
class FaceDetectionError(Exception):
    pass
def get_encoding(image_path):
    face = face_recognition.load_image_file(image_path)
    return face_recognition.face_encodings(face)[0]
def compare_faces(reference_encoding, analyzed_image_file):
    try:
        unknown_encoding = get_encoding(analyzed_image_file)
    except IndexError:
        raise FaceDetectionError()
    else:
        return face_recognition.compare_faces([reference_encoding], unknown_encoding)[0]
import face_recognition
ipytest.clean_tests()
def test_face_recognition():
    hunt_encoding = get_encoding("/home/xilinx/jupyter_notebooks/pynq-memes/data/tests/test_face.jpg")
    assert compare_faces(hunt_encoding, '/home/xilinx/jupyter_notebooks/pynq-memes/data/tests/other_ethan.jpg')
    assert not compare_faces(hunt_encoding, '/home/xilinx/jupyter_notebooks/pynq-memes/data/tests/other_face.jpg')
    assert not compare_faces(hunt_encoding, '/home/xilinx/jupyter_notebooks/pynq-memes/data/tests/russian_ethan.jpg')
    with pytest.raises(FaceDetectionError):
        assert not compare_faces(hunt_encoding, '/home/xilinx/jupyter_notebooks/pynq-memes/data/tests/russian_ethan2.jpg')
ipytest.run('-qq')
```
# Meme machine vision app
### Detect faces on HDMI-IN, draw memes, display them as one frame on HDMI-out.
Get a frame from HDMI input, detect faces, make memes, draw a 720p frame on the output.
```
from time import sleep
from pynq.overlays.base import BaseOverlay
from pynq.lib.video import *
base = BaseOverlay("base.bit")
hdmi_in = base.video.hdmi_in
hdmi_out = base.video.hdmi_out
hdmi_in.configure(PIXEL_RGB)
hdmi_out.configure(hdmi_in.mode, PIXEL_RGB)
hdmi_in.start()
hdmi_out.start()
# hdmi_in.tie(hdmi_out)
run = 0
memes_displaying = False
while run < 4:
    print(run)
    frame = hdmi_in.readframe()
    image = get_image_from_frame(frame)
    faces = detect_faces_on_frame(frame)
    created_memes = []
    for face in faces:
        meme = draw_meme(image, face)
        created_memes.append(meme)
    if created_memes:
        print('Memes detected', len(created_memes))
        output_image = display_memes_on_one_frame(created_memes)
        output_frame = npframe.array(output_image)
        outframe = hdmi_out.newframe()
        outframe[:] = output_frame
        hdmi_out.writeframe(outframe)
        memes_displaying = True
    else:
        if not memes_displaying:
            print('Displaying source')
            hdmi_out.writeframe(frame)
    sleep(1)
    run += 1
hdmi_out.stop()
hdmi_in.stop()
del hdmi_in, hdmi_out
```
### Test create data func
```
!dvc pull ../data/observations_ad_0.0.pickle.dvc
import sys
sys.path.append('../src')
import yaml
import math
import pickle
import numpy as np
from pickle_wrapper import unpickle, pickle_it
import matplotlib.pyplot as plt
from mcmc_norm_learning.algorithm_1_v4 import create_data
from mcmc_norm_learning.rules_4 import get_prob, get_log_prob
from mcmc_norm_learning.environment import position,plot_env
from mcmc_norm_learning.robot_task_new import task, robot, plot_task
with open("../params.yaml", 'r') as fd:
    params = yaml.safe_load(fd)
## Get the default env
env = unpickle('../data/env.pickle')
## Get the default task
true_norm_exp = params['true_norm']['exp']
num_observations = params['num_observations']
obs_data_set = params['obs_data_set']
colour_specific = params['colour_specific']
shape_specific = params['shape_specific']
target_area_parts = params['target_area'].replace(' ','').split(';')
target_area_part0 = position(*map(float, target_area_parts[0].split(',')))
target_area_part1 = position(*map(float, target_area_parts[1].split(',')))
target_area = (target_area_part0, target_area_part1)
print(target_area_part0.coordinates())
print(target_area_part1.coordinates())
the_task = task(colour_specific, shape_specific,target_area)
"\n".join([str(x) for x in true_norm_exp])
fig,axs=plt.subplots(1,2,figsize=(9,4),dpi=100);
plot_task(env,axs[0],"Initial Task State",the_task,True)
axs[1].text(0,0.5,"\n".join([str(x) for x in true_norm_exp]),wrap=True)
axs[1].axis("off")
observations = unpickle('../data/observations_ad_0.0.pickle')
```
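The target-area parsing above turns an `"x1,y1 ; x2,y2"` string into two corner positions. The string handling can be exercised in isolation; a minimal sketch using plain tuples in place of the project's `position` class:

```python
def parse_target_area(s):
    # "x1,y1 ; x2,y2" -> ((x1, y1), (x2, y2)), tolerant of stray spaces
    parts = s.replace(' ', '').split(';')
    return tuple(tuple(map(float, p.split(','))) for p in parts)

area = parse_target_area('0.1,0.2 ; 0.9,0.8')
assert area == ((0.1, 0.2), (0.9, 0.8))
```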
### Let's check in how many observations the norm kicks in
```
norm_impacted_obs={}
for obs in observations:
    if any([action in [(('pickup', 8), ('putdown', 8, '1')),
                       (('pickup', 40), ('putdown', 40, '1'))] for action in obs]):
        norm_impacted_obs[obs]=1
print (sum(norm_impacted_obs.values()))
dict(list(norm_impacted_obs.items())[0:2])
def all_compliant(rules,task,env,name,verbose=False):
    """ Return all possible compliant actions given the norms/rules """
    # NOTE: PossMove and lists_to_tuples are assumed to be defined elsewhere in the project.
    import sys
    from actions import pickup_action,putdown_action
    from verify_action_4 import verify_action
    from copy import deepcopy
    import os
    from numpy import nan,random
    from collections import defaultdict
    from rules_4 import obl_conds
    actionable_objects=[]
    (x1,y1)=task.target_area[0].coordinates()
    (x2,y2)=task.target_area[1].coordinates()
    for obj in env[0]:
        if (x1<=obj.position.x<=x2):
            if (y1<=obj.position.y<=y2):
                if obj.colour in task.colour_specific:
                    if obj.shape in task.shape_specific:
                        actionable_objects.append(obj)
    num_actionable_obj=len(actionable_objects)
    if verbose==True:
        print("Rules: ", rules)
        print ("Found {} actionable objects".format(str(num_actionable_obj)))
    if (num_actionable_obj)==0:
        return (nan)
    try:
        #os.mkdir("./"+name[:-len(name.split("/")[-1])])
        if verbose==True:
            print ("Creating directory to store permutations")
    except:
        if verbose==True:
            print ("Directory already available")
    order=random.choice(actionable_objects,num_actionable_obj,replace=False)
    #if verbose==True:
    #    print ("Order of Acting:",[x.obj_id for x in order])
    #    with open('./'+name+'.txt', 'w') as f: # Mode was x. Why?
    #        with redirect_stdout(f):
    action_pairs_by_obj=defaultdict(set)
    possible_zones_init=set(map(str, deepcopy(env[3]).keys()))
    for obj1 in order:
        oid = obj1.obj_id
        possible_zones=set(possible_zones_init)
        """ Perform pickup action """
        pro_flag,pro_zone,rule=verify_action(obj1,"prohibition","pickup",rules)
        if pro_flag==1: #If action is prohibited
            # print ("Picking-up {}-{} object from Zone={} prohibited by norm {}".format(obj1.colour,obj1.shape,pro_zone,rule))
            per_flag,per_zone,rule=verify_action(obj1,"permission","pickup",rules)
            per_zones = possible_zones if per_zone == 'any' else {per_zone}
            if per_flag==1 and pro_zone in per_zones: #If permission exists and overrides prohibition
                pass
                # print ("Picking-up {}-{} object from Zone={} permitted by norm: {}".format(obj1.colour,obj1.shape,pro_zone,rule))
                # pickup_action(obj1).perform()
            else:
                #print ("Action skipped")
                continue
        else:
            pass # Was: pickup_action(obj1).perform() #This is needed to set obj1.last_action to "pickup"
        """ Perform putdown action """
        if True: # Was: obj1.last_action=="pickup": #Proceed to put down
            obl_flag,obl_zone,obl_rule = verify_action(obj1,"obligation","putdown",rules)
            if obl_flag==1: #If obligation exists
                #print ("Putting-down {}-{} object in Zone-{} obligated by norm {}".format(obj1.colour,obj1.shape,obl_zone,obl_rule))
                per_flag,per_zone,rule=verify_action(obj1,"permission","putdown",rules)
                if per_flag==1:
                    #print ("But Putting-down {}-{} object in Zone-{} permitted by norm {}".format(obj1.colour,obj1.shape,per_zone,rule))
                    per_zones = possible_zones if per_zone == 'any' else {per_zone}
                    # putdown_action(obj1,int(random.choice(tuple({obl_zone}|per_zones))),task.target_area,env[3]).perform()
                else:
                    per_zones = set()
                    # putdown_action(obj1,obl_zone,task.target_area,env[3]).perform()
                hist_conds, _ = obl_conds(obl_rule)
                for z in possible_zones:
                    if z in {obl_zone} | per_zones:
                        action_pairs_by_obj[oid].add(PossMove(("pickup",obj1.obj_id), ("putdown",obj1.obj_id,z)))
                    else:
                        action_pairs_by_obj[oid].add(PossMove(("pickup",obj1.obj_id), ("putdown",obj1.obj_id,z), unless=lists_to_tuples(hist_conds)))
            else:
                pro_flag,pro_zone,rule=verify_action(obj1,"prohibition","putdown",rules)
                if pro_flag==1: #If putting down is prohibited in pro_zone
                    # print ("Putting down {}-{} object in Zone-{} prohibited by norm {}".format(obj1.colour,obj1.shape,pro_zone,rule))
                    per_flag,per_zone,rule=verify_action(obj1,"permission","putdown",rules)
                    per_zones = possible_zones if per_zone == 'any' else {per_zone}
                    if per_flag==1 and pro_zone in per_zones:
                        pass
                        # print ("Permission provided for putting down {}-{} object in Zone-{} by norm {}".format(obj1.colour,obj1.shape,per_zone,rule))
                        # putdown_action(obj1,random.choice(tuple(possible_zones)),task.target_area,env[3]).perform()
                    else:
                        # print("removing {} from possible_zones: {}".format(pro_zone, possible_zones))
                        possible_zones.remove(pro_zone)
                        # putdown_action(obj1,random.choice(tuple(possible_zones)),task.target_area,env[3]).perform()
                #else:
                    #print ("I am King")
                    # putdown_action(obj1,random.choice(tuple(possible_zones)),task.target_area,env[3]).perform()
                #For all complying paths
                for z in possible_zones:
                    action_pairs_by_obj[oid].add(PossMove(("pickup",obj1.obj_id), ("putdown",obj1.obj_id,z)))
    return action_pairs_by_obj
from copy import deepcopy
all_compliant(true_norm_exp,the_task,deepcopy(env),name="chk")
```
# datasets
```
import h5py
import cupy as cp
# function to load the dataset
def load_dataset():
    train_dataset = h5py.File('../datasets/train_signs.h5', "r")
    train_set_x_orig = cp.array(train_dataset["train_set_x"][:]) # your train set features
    train_set_y_orig = cp.array(train_dataset["train_set_y"][:]) # your train set labels
    test_dataset = h5py.File('../datasets/test_signs.h5', "r")
    test_set_x_orig = cp.array(test_dataset["test_set_x"][:]) # your test set features
    test_set_y_orig = cp.array(test_dataset["test_set_y"][:]) # your test set labels
    classes = cp.array(test_dataset["list_classes"][:]) # the list of classes
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
# load the data
train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes=load_dataset()
trainX=train_set_x_orig/255 # shape (1080,64,64,3): samples, height, width, channels
trainX=trainX.transpose(0,3,1,2)
testX=test_set_x_orig/255
testX=testX.transpose(0,3,1,2)
trainy=train_set_y_orig
testy=test_set_y_orig
# convert labels to one-hot encoding
n_class = cp.max(testy).tolist() + 1
trainy=cp.eye(n_class)[trainy].reshape(-1,6)
testy=cp.eye(n_class)[testy].reshape(-1,6)
print('trainX shape:',trainX.shape)
print('trainy shape:',trainy.shape)
print('testX shape:',testX.shape)
print('testy shape:',testy.shape)
```
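The preprocessing above combines an NHWC-to-NCHW transpose with one-hot encoding by indexing into an identity matrix. The same two tricks in plain NumPy, on toy shapes rather than the SIGNS data:

```python
import numpy as np

x = np.zeros((10, 64, 64, 3))        # NHWC: samples, height, width, channels
x_nchw = x.transpose(0, 3, 1, 2)     # NCHW: samples, channels, height, width
assert x_nchw.shape == (10, 3, 64, 64)

y = np.array([0, 2, 5, 1])
one_hot = np.eye(6)[y]               # row i of the identity is the one-hot vector for class i
assert one_hot.shape == (4, 6)
assert (one_hot.argmax(axis=1) == y).all()
```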
# shinnosuke functional model
```
from shinnosuke.layers.Convolution import Conv2D,MaxPooling2D,MeanPooling2D
from shinnosuke.layers.Activation import Activation
from shinnosuke.layers.Normalization import BatchNormalization
from shinnosuke.layers.Dropout import Dropout
from shinnosuke.layers.FC import Flatten,Dense
from shinnosuke.layers.Base import Input
from shinnosuke.models import Model
X_input=Input(shape=(None,3,64,64))
X=Conv2D(8,(5,5),padding='VALID',initializer='normal',activation='relu')(X_input)
X=BatchNormalization(axis=1)(X)
X=MaxPooling2D((4,4))(X)
X=Conv2D(16,(3,3),padding='SAME',initializer='normal',activation='relu')(X)
X=MaxPooling2D((4,4))(X)
X=Flatten()(X)
X=Dense(6,initializer='normal',activation='softmax')(X)
model=Model(inputs=X_input,outputs=X)
model.compile(optimizer='sgd',loss='sparse_categorical_cross_entropy')
model.fit(trainX,trainy,batch_size=256,epochs=100,validation_ratio=0.)
acc,loss=model.evaluate(testX,testy)
print('test acc: %f,test loss: %f'%(acc,loss))
```
# shinnosuke sequential model
```
from shinnosuke.models import Sequential
m=Sequential()
m.add(Conv2D(8,(5,5),input_shape=(None,3,64,64),padding='VALID',initializer='normal'))
m.add(Activation('relu'))
m.add(BatchNormalization(axis=1))
m.add(MaxPooling2D((4,4)))
m.add(Conv2D(16,(3,3),padding='VALID',initializer='normal'))
m.add(Activation('relu'))
m.add(MaxPooling2D((4,4)))
m.add(Flatten())
m.add(Dense(6,initializer='normal',activation='softmax'))
m.compile(optimizer='sgd',loss='sparse_categorical_cross_entropy')
m.fit(trainX,trainy,batch_size=256,epochs=100,validation_ratio=0.)
acc,loss=m.evaluate(testX,testy)
print('test acc: %f,test loss: %f'%(acc,loss))
```
# keras-gpu functional model
```
trainX=cp.asnumpy(trainX)
trainy=cp.asnumpy(trainy)
testX=cp.asnumpy(testX)
testy=cp.asnumpy(testy)
import keras
from keras.models import Sequential,Model
from keras.layers import Dense, Dropout, Flatten,Input,Conv2D, MaxPooling2D,BatchNormalization,Activation
X_input=Input(shape=(3,64,64))
X=Conv2D(8,(5,5),padding='VALID',kernel_initializer='normal',activation='relu',data_format='channels_first')(X_input)
X=BatchNormalization(axis=1)(X)
X=MaxPooling2D((4,4))(X)
X=Conv2D(16,(3,3),padding='SAME',kernel_initializer='normal',activation='relu',data_format='channels_first')(X)
X=MaxPooling2D((4,4))(X)
X=Flatten()(X)
X=Dense(6,kernel_initializer='normal',activation='softmax')(X)
model=Model(inputs=X_input,outputs=X)
model.compile(optimizer='sgd',loss='categorical_crossentropy',metrics=['accuracy'])
model.fit(trainX,trainy,batch_size=256,epochs=100)
loss,acc=model.evaluate(testX,testy)
print('test acc: %f,test loss: %f'%(acc,loss))
```
# keras-gpu sequential model
```
m=Sequential()
m.add(Conv2D(8,(5,5),input_shape=(3,64,64),padding='VALID',kernel_initializer='normal',data_format='channels_first'))
m.add(Activation('relu'))
m.add(BatchNormalization(axis=1))
m.add(MaxPooling2D((4,4)))
m.add(Conv2D(16,(3,3),padding='VALID',kernel_initializer='normal',data_format='channels_first'))
m.add(Activation('relu'))
m.add(MaxPooling2D((4,4)))
m.add(Flatten())
m.add(Dense(6,kernel_initializer='normal',activation='softmax'))
m.compile(optimizer='sgd',loss='categorical_crossentropy',metrics=['accuracy'])
m.fit(trainX,trainy,batch_size=256,epochs=100)
loss,acc=m.evaluate(testX,testy)
print('test acc: %f,test loss: %f'%(acc,loss))
```
### VQE(Variational quantum eigensolver)
Let's find the ground state variationally with a parameterized quantum circuit.
### Import the required libraries
```
from sympy import *
from sympy.physics.quantum import *
from sympy.physics.quantum.qubit import Qubit,QubitBra,measure_all,measure_partial
from sympy.physics.quantum.gate import X,Y,Z,H,CNOT,SWAP,CPHASE,CGateS
from sympy.physics.quantum.gate import IdentityGate as _I
from sympy.physics.quantum.gate import UGate as U
import matplotlib.pyplot as plt
%matplotlib inline
import scipy.optimize
import scipy.linalg
import numpy as np
import sys
```
### Define the gates used in the variational quantum circuit
```
def Rxi(n,t): return U(n,represent(cos(t)*_I(1)*_I(0)-I*sin(t)*X(1)*_I(0),nqubits=2))
def Rix(n,t): return U(n,represent(cos(t)*_I(1)*_I(0)-I*sin(t)*_I(1)*X(0),nqubits=2))
def Rzi(n,t): return U(n,represent(cos(t)*_I(1)*_I(0)-I*sin(t)*Z(1)*_I(0),nqubits=2))
def Riz(n,t): return U(n,represent(cos(t)*_I(1)*_I(0)-I*sin(t)*_I(1)*Z(0),nqubits=2))
print(Rxi((0,1),pi/4).get_target_matrix())
print(Rix((0,1),pi/4).get_target_matrix())
print(Rzi((0,1),pi/4).get_target_matrix())
print(Riz((0,1),pi/4).get_target_matrix())
```
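The gates above are rotation operators: since $X^2 = \mathbb{1}$, we have $\cos t\,\mathbb{1} - i \sin t\, X = e^{-itX}$. A quick numerical sanity check of this identity (a standalone NumPy sketch, checking the group property $R(t_1)R(t_2) = R(t_1+t_2)$ and unitarity that characterize the exponential):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def R(t):
    # = exp(-i t X), because X @ X = I
    return np.cos(t) * I2 - 1j * np.sin(t) * X

assert np.allclose(X @ X, I2)
assert np.allclose(R(0.3) @ R(0.4), R(0.7))       # group property of the exponential
assert np.allclose(R(0.3).conj().T @ R(0.3), I2)  # unitarity
```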
### Define the Hamiltonian
$ H = \frac{1}{2} \left( S_{ii} \mathbb{1} + S_{ix} \sigma_{x}^{1} + S_{iz} \sigma_{z}^{1} + S_{xi} \sigma_{x}^{0} + S_{zi} \sigma_{z}^{0}
+ S_{xx} \sigma_{x}^{0} \sigma_{x}^{1} + S_{xz} \sigma_{x}^{0} \sigma_{z}^{1} + S_{zx} \sigma_{z}^{0} \sigma_{x}^{1}
+ S_{zz} \sigma_{z}^{0} \sigma_{z}^{1} \right) $
```
Sii,Six,Sxi,Sxx,Szz,Siz,Szi,Sxz,Szx = symbols('Sii Six Sxi Sxx Szz Siz Szi Sxz Szx')
# [Question 2-1] Complete the Hamiltonian from the definition above.
Hamiltonian = ( Sii *_I(0) *_I(1)
+ Six *_I(0) * X(1)
+ Siz *_I(0) * Z(1)
+ Sxi * X(0) *_I(1)
+ Szi * Z(0) *_I(1)
+ Sxx # *** your answer here *** #
+ Sxz # *** your answer here *** #
+ Szx # *** your answer here *** #
+ Szz * Z(0) * Z(1) )/2
h = represent(Hamiltonian,nqubits=2)
Hamiltonian
```
Numerical values for the $ HeH^+ $ molecular Hamiltonian (parameters from Nature Communications, 2014)
```
H_valued = h.subs([
    (Sii,-3.8505),
    (Six,-0.2288),
    (Sxi,-0.2288),
    (Siz,-1.0466),
    (Szi,-1.0466),
    (Sxx, 0.2613),
    (Sxz, 0.2288),
    (Szx, 0.2288),
    (Szz, 0.2356)])
H_valued
```
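Before diagonalizing, it is cheap to confirm that the numeric matrix is Hermitian, so its eigenvalues are real. A standalone sketch rebuilding the same Hamiltonian with NumPy Kronecker products (assuming the conventional ordering where qubit 0 is the first tensor factor; with these symmetric coefficients the spectrum is the same either way):

```python
import numpy as np

I2 = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.array([[1, 0], [0, -1]])
k = np.kron
H = 0.5 * (-3.8505 * k(I2, I2) - 0.2288 * k(I2, X) - 1.0466 * k(I2, Z)
           - 0.2288 * k(X, I2) - 1.0466 * k(Z, I2) + 0.2613 * k(X, X)
           + 0.2288 * k(X, Z) + 0.2288 * k(Z, X) + 0.2356 * k(Z, Z))
assert np.allclose(H, H.conj().T)     # Hermitian -> real spectrum
print(np.linalg.eigvalsh(H).min())    # lowest eigenvalue (ground-state energy)
```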
### Compute the answer in advance
For a matrix of this small size, exact diagonalization is tractable and the eigenvalues can be computed directly.
```
# sympy is slow at numerics, so diagonalization takes a while.
# ↓ diagonalization as provided by sympy
# P, D = H_valued.diagonalize()
# ↓ sympy's way of getting eigenvalues and eigenvectors
E = H_valued.eigenvects()
M = np.argmin([re(E[i][0]) for i in range(len(E))])
pprint(E[M])
print(re(E[M][0]))
# ↓ for the numerics, also try numpy, which is good at eigenvalue problems.
l, p = np.linalg.eig( np.array( H_valued.tolist(), dtype=np.complex128 ))
v = np.transpose(p)
mini = np.argmin(l)
E_answer = l[mini]
#print(p)
#print(l)
print(np.array([v[mini]]))
print(E_answer)
def dice(n): return [np.random.rand() for i in range(n)]
def vqe_trial(phi):
    global count
    global f
    count += 1
    trial_func = Rxi((0,1), phi[0]).get_target_matrix() \
        * Riz((0,1), phi[1]).get_target_matrix() \
        * represent(CNOT(0,1),nqubits=2) \
        * Riz((0,1), phi[2]).get_target_matrix() \
        * Rix((0,1), phi[3]).get_target_matrix() \
        * Rzi((0,1), phi[4]).get_target_matrix() \
        * Rxi((0,1), phi[5]).get_target_matrix()
    # [Question 2-2] Set the conjugate transpose (dagger) of the trial-function matrix (trial_func).
    trial_func_dag = # *** your answer here *** #
    trial = trial_func_dag * H_valued * trial_func
    pr = -1*abs(((qapply(trial).tolist())[0])[0]) # the *(-1) factor is required
    # print(pr)
    f.write(str(count)+ ' ' +str(pr)+ ' ' +str(pr/E_answer)+ '\n')
    f.flush()
    return pr
count = 0
f = sys.stdout
# [Question 2-3] Try the vqe_trial defined above: call it a few times and print the results.
# *** your answer here *** #
count = 0
f = open('result_VQE.txt', 'w')
# [Question 2-4] Use vqe_trial inside scipy.optimize.minimize() to search for the lowest energy eigenvalue.
res = # *** your answer here *** #
f.close()
# Display the results.
# pprint(res)
print(res["fun"])
# [Question 2-5] Plot the result file as a graph.
dat = np.loadtxt("result_VQE.txt")
# *** your answer here *** #
```
# load packages and settings
```
import cv2
import sys
import dlib
import time
import socket
import struct
import numpy as np
import tensorflow as tf
from win32api import GetSystemMetrics
import win32gui
from threading import Thread, Lock
import multiprocessing as mp
from config import get_config
import pickle
import math
conf,_ = get_config()
if conf.mod == 'flx':
    import flx as model
else:
    sys.exit("Wrong Model selection: flx or deepwarp")
# system parameters
model_dir = './'+conf.weight_set+'/warping_model/'+conf.mod+'/'+ str(conf.ef_dim) + '/'
size_video = [640,480]
# fps = 0
P_IDP = 5
depth = -50
# for monitoring
# environment parameter
Rs = (GetSystemMetrics(0),GetSystemMetrics(1))
model_dir
print(Rs)
# video receiver
class video_receiver:
    def __init__(self,shared_v,lock):
        self.close = False
        self.video_recv = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
        print('Socket created')
        # global remote_head_Center
        self.video_recv.bind(('',conf.recver_port))
        self.video_recv.listen(10)
        print('Socket now listening')
        self.conn, self.addr=self.video_recv.accept()
        # face detection
        self.detector = dlib.get_frontal_face_detector()
        self.predictor = dlib.shape_predictor("./lm_feat/shape_predictor_68_face_landmarks.dat")
        self.face_detect_size = [320,240]
        self.x_ratio = size_video[0]/self.face_detect_size[0]
        self.y_ratio = size_video[1]/self.face_detect_size[1]
        self.start_recv(shared_v,lock)
    def face_detection(self,frame,shared_v,lock):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        face_detect_gray = cv2.resize(gray,(self.face_detect_size[0],self.face_detect_size[1]))
        detections = self.detector(face_detect_gray, 0)
        coor_remote_head_center=[0,0]
        for k,bx in enumerate(detections):
            coor_remote_head_center = [int((bx.left()+bx.right())*self.x_ratio/2),
                                       int((bx.top()+bx.bottom())*self.y_ratio/2)]
            break
        # share the remote participant's eye position with the main process
        lock.acquire()
        shared_v[0] = coor_remote_head_center[0]
        shared_v[1] = coor_remote_head_center[1]
        lock.release()
    def start_recv(self,shared_v,lock):
        data = b""
        payload_size = struct.calcsize("L")
        print("payload_size: {}".format(payload_size))
        while True:
            while len(data) < payload_size:
                data += self.conn.recv(4096)
            packed_msg_size = data[:payload_size]
            data = data[payload_size:]
            msg_size = struct.unpack("L", packed_msg_size)[0]
            while len(data) < msg_size:
                data += self.conn.recv(4096)
            frame_data = data[:msg_size]
            data = data[msg_size:]
            frame = pickle.loads(frame_data, fix_imports=True, encoding="bytes")
            if frame == 'stop':
                print('stop')
                cv2.destroyWindow("Remote")
                break
            frame = cv2.imdecode(frame, cv2.IMREAD_COLOR)
            # run face detection on a background thread
            self.video_recv_hd_thread = Thread(target=self.face_detection, args=(frame,shared_v,lock))
            self.video_recv_hd_thread.start()
            cv2.imshow('Remote',frame)
            cv2.waitKey(1)
```
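The receive loop above implements a simple length-prefixed framing protocol: each message is a `struct`-packed `"L"` size header followed by a pickled payload. The parsing logic can be sketched and tested without sockets (note that native `"L"` is platform-dependent; a fixed-width format such as `"!I"` would be safer across machines):

```python
import pickle
import struct

def frame(obj):
    payload = pickle.dumps(obj)
    return struct.pack("L", len(payload)) + payload    # size header + body

def deframe(data):
    header = struct.calcsize("L")
    size = struct.unpack("L", data[:header])[0]
    return pickle.loads(data[header:header + size]), data[header + size:]

stream = frame({'x': 1}) + frame('stop')               # two messages back to back
msg1, rest = deframe(stream)
msg2, rest = deframe(rest)
assert msg1 == {'x': 1} and msg2 == 'stop' and rest == b""
```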
# Flx-gaze
```
class gaze_redirection_system:
def __init__(self,shared_v,lock):
#Landmark identifier. Set the filename to whatever you named the downloaded file
self.detector = dlib.get_frontal_face_detector()
self.predictor = dlib.shape_predictor("./lm_feat/shape_predictor_68_face_landmarks.dat")
self.size_df = (320,240)
self.size_I = (48,64)
# initial value
self.Rw = [0,0]
self.Pe_z = -60
#### get configurations
self.f = conf.f
self.Ps = (conf.S_W,conf.S_H)
self.Pc = (conf.P_c_x,conf.P_c_y,conf.P_c_z)
self.Pe = [self.Pc[0],self.Pc[1],self.Pe_z] # H,V,D
## start video sender
self.client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.client_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.client_socket.connect((conf.tar_ip, conf.sender_port))
self.encode_param=[int(cv2.IMWRITE_JPEG_QUALITY),90]
# load model to gpu
print("Loading model of [L] eye to GPU")
with tf.Graph().as_default() as g:
# define placeholder for inputs to network
with tf.name_scope('inputs'):
self.LE_input_img = tf.placeholder(tf.float32, [None, conf.height, conf.width, conf.channel], name="input_img")
self.LE_input_fp = tf.placeholder(tf.float32, [None, conf.height, conf.width,conf.ef_dim], name="input_fp")
self.LE_input_ang = tf.placeholder(tf.float32, [None, conf.agl_dim], name="input_ang")
self.LE_phase_train = tf.placeholder(tf.bool, name='phase_train') # a bool for batch_normalization
self.LE_img_pred, _, _ = model.inference(self.LE_input_img, self.LE_input_fp, self.LE_input_ang, self.LE_phase_train, conf)
# split model here
self.L_sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True,log_device_placement=False), graph = g)
# load model
saver = tf.train.Saver(tf.global_variables())
ckpt = tf.train.get_checkpoint_state(model_dir+'L/')
if ckpt and ckpt.model_checkpoint_path:
# Restores from checkpoint
saver.restore(self.L_sess, ckpt.model_checkpoint_path)
else:
print('No checkpoint file found')
print("Loading model of [R] eye to GPU")
with tf.Graph().as_default() as g2:
# define placeholder for inputs to network
with tf.name_scope('inputs'):
self.RE_input_img = tf.placeholder(tf.float32, [None, conf.height, conf.width, conf.channel], name="input_img")
self.RE_input_fp = tf.placeholder(tf.float32, [None, conf.height, conf.width,conf.ef_dim], name="input_fp")
self.RE_input_ang = tf.placeholder(tf.float32, [None, conf.agl_dim], name="input_ang")
self.RE_phase_train = tf.placeholder(tf.bool, name='phase_train') # a bool for batch_normalization
self.RE_img_pred, _, _ = model.inference(self.RE_input_img, self.RE_input_fp, self.RE_input_ang, self.RE_phase_train, conf)
# split model here
self.R_sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True,log_device_placement=False), graph = g2)
# load model
saver = tf.train.Saver(tf.global_variables())
ckpt = tf.train.get_checkpoint_state(model_dir+'R/')
if ckpt and ckpt.model_checkpoint_path:
# Restores from checkpoint
saver.restore(self.R_sess, ckpt.model_checkpoint_path)
else:
print('No checkpoint file found')
self.run(shared_v,lock)
def monitor_para(self,frame,fig_alpha,fig_eye_pos,fig_R_w):
cv2.rectangle(frame,
(size_video[0]-150,0),(size_video[0],55),
(255,255,255),-1
)
cv2.putText(frame,
'Eye:['+str(int(fig_eye_pos[0])) +','+str(int(fig_eye_pos[1]))+','+str(int(fig_eye_pos[2]))+']',
(size_video[0]-140,15), cv2.FONT_HERSHEY_SIMPLEX, 0.4,(0,0,255),1,cv2.LINE_AA)
cv2.putText(frame,
'alpha:[V='+str(int(fig_alpha[0])) + ',H='+ str(int(fig_alpha[1]))+']',
(size_video[0]-140,30),cv2.FONT_HERSHEY_SIMPLEX,0.4,(0,0,255),1,cv2.LINE_AA)
cv2.putText(frame,
'R_w:['+str(int(fig_R_w[0])) + ','+ str(int(fig_R_w[1]))+']',
(size_video[0]-140,45),cv2.FONT_HERSHEY_SIMPLEX,0.4,(0,0,255),1,cv2.LINE_AA)
return frame
def get_inputs(self, frame, shape, pos = "L", size_I = [48,64]):
if(pos == "R"):
lc = 36
rc = 39
FP_seq = [36,37,38,39,40,41]
elif(pos == "L"):
lc = 42
rc = 45
FP_seq = [45,44,43,42,47,46]
else:
print("Error: Wrong Eye")
eye_cx = (shape.part(rc).x+shape.part(lc).x)*0.5
eye_cy = (shape.part(rc).y+shape.part(lc).y)*0.5
eye_center = [eye_cx, eye_cy]
eye_len = np.absolute(shape.part(rc).x - shape.part(lc).x)
bx_d5w = eye_len*3/4
bx_h = 1.5*bx_d5w
sft_up = bx_h*7/12
sft_low = bx_h*5/12
img_eye = frame[int(eye_cy-sft_up):int(eye_cy+sft_low),int(eye_cx-bx_d5w):int(eye_cx+bx_d5w)]
ori_size = [img_eye.shape[0],img_eye.shape[1]]
LT_coor = [int(eye_cy-sft_up), int(eye_cx-bx_d5w)] # (y,x)
img_eye = cv2.resize(img_eye, (size_I[1],size_I[0]))
# create anchor maps
ach_map = []
for i,d in enumerate(FP_seq):
resize_x = int((shape.part(d).x-LT_coor[1])*size_I[1]/ori_size[1])
resize_y = int((shape.part(d).y-LT_coor[0])*size_I[0]/ori_size[0])
# y
ach_map_y = np.expand_dims(np.expand_dims(np.arange(0, size_I[0]) - resize_y, axis=1), axis=2)
ach_map_y = np.tile(ach_map_y, [1,size_I[1],1])
# x
ach_map_x = np.expand_dims(np.expand_dims(np.arange(0, size_I[1]) - resize_x, axis=0), axis=2)
ach_map_x = np.tile(ach_map_x, [size_I[0],1,1])
if (i ==0):
ach_map = np.concatenate((ach_map_x, ach_map_y), axis=2)
else:
ach_map = np.concatenate((ach_map, ach_map_x, ach_map_y), axis=2)
return img_eye/255, ach_map, eye_center, ori_size, LT_coor
def shifting_angles_estimator(self, R_le, R_re,shared_v,lock):
# get P_w
try:
tar_win = win32gui.FindWindow(None, "Remote")
# left, top, right, bottom
Rw_lt = win32gui.GetWindowRect(tar_win)
size_window = (Rw_lt[2]-Rw_lt[0], Rw_lt[3]-Rw_lt[1])
except:
size_window = (659,528)
Rw_lt = [int(Rs[0])-int(size_window[0]/2),int(Rs[1])-int(size_window[1]/2)]
print("Missing the window")
# get pos head
pos_remote_head = [int(size_window[0]/2),int(size_window[1]/2)]
try:
if ((shared_v[0] !=0) & (shared_v[1] !=0)):
pos_remote_head[0] = shared_v[0]
pos_remote_head[1] = shared_v[1]
except:
pos_remote_head = (int(size_window[0]/2),int(size_window[1]/2))
R_w = (Rw_lt[0]+pos_remote_head[0], Rw_lt[1]+pos_remote_head[1])
Pw = (self.Ps[0]*(R_w[0]-Rs[0]/2)/Rs[0], self.Ps[1]*(R_w[1]-Rs[1]/2)/Rs[1], 0)
# get Pe
self.Pe[2] = -(self.f*conf.P_IDP)/np.sqrt((R_le[0]-R_re[0])**2 + (R_le[1]-R_re[1])**2)
# x-axis needs flip
self.Pe[0] = -np.abs(self.Pe[2])*(R_le[0]+R_re[0]-size_video[0])/(2*self.f) + self.Pc[0]
self.Pe[1] = np.abs(self.Pe[2])*(R_le[1]+R_re[1]-size_video[1])/(2*self.f) + self.Pc[1]
# calculate alpha
a_w2z_x = math.degrees(math.atan( (Pw[0]-self.Pe[0])/(Pw[2]-self.Pe[2])))
a_w2z_y = math.degrees(math.atan( (Pw[1]-self.Pe[1])/(Pw[2]-self.Pe[2])))
a_z2c_x = math.degrees(math.atan( (self.Pe[0]-self.Pc[0])/(self.Pc[2]-self.Pe[2])))
a_z2c_y = math.degrees(math.atan( (self.Pe[1]-self.Pc[1])/(self.Pc[2]-self.Pe[2])))
alpha = [int(a_w2z_y + a_z2c_y),int(a_w2z_x + a_z2c_x)] # (V,H)
return alpha, self.Pe, R_w
def flx_gaze(self, frame, gray, detections, shared_v, lock, pixel_cut=[3,4], size_I = [48,64]):
alpha_w2c = [0,0]
x_ratio = size_video[0]/self.size_df[0]
y_ratio = size_video[1]/self.size_df[1]
LE_M_A=[]
RE_M_A=[]
p_e=[0,0]
R_w=[0,0]
for k,bx in enumerate(detections):
# Get facial landmarks
time_start = time.time()
target_bx = dlib.rectangle(left=int(bx.left()*x_ratio),right =int(bx.right()*x_ratio),
top =int(bx.top()*y_ratio), bottom=int(bx.bottom()*y_ratio))
shape = self.predictor(gray, target_bx)
# get eye
LE_img, LE_M_A, LE_center, size_le_ori, R_le_LT = self.get_inputs(frame, shape, pos="L", size_I=size_I)
RE_img, RE_M_A, RE_center, size_re_ori, R_re_LT = self.get_inputs(frame, shape, pos="R", size_I=size_I)
# shifting angles estimator
alpha_w2c, p_e, R_w = self.shifting_angles_estimator(LE_center,RE_center,shared_v,lock)
time_get_eye = time.time() - time_start
# gaze manipulation
time_start = time.time()
# gaze redirection
# left Eye
LE_infer_img = self.L_sess.run(self.LE_img_pred, feed_dict= {
self.LE_input_img: np.expand_dims(LE_img, axis = 0),
self.LE_input_fp: np.expand_dims(LE_M_A, axis = 0),
self.LE_input_ang: np.expand_dims(alpha_w2c, axis = 0),
self.LE_phase_train: False
})
LE_infer = cv2.resize(LE_infer_img.reshape(size_I[0],size_I[1],3), (size_le_ori[1], size_le_ori[0]))
# right Eye
RE_infer_img = self.R_sess.run(self.RE_img_pred, feed_dict= {
self.RE_input_img: np.expand_dims(RE_img, axis = 0),
self.RE_input_fp: np.expand_dims(RE_M_A, axis = 0),
self.RE_input_ang: np.expand_dims(alpha_w2c, axis = 0),
self.RE_phase_train: False
})
RE_infer = cv2.resize(RE_infer_img.reshape(size_I[0],size_I[1],3), (size_re_ori[1], size_re_ori[0]))
# replace eyes
frame[(R_le_LT[0]+pixel_cut[0]):(R_le_LT[0]+size_le_ori[0]-pixel_cut[0]),
(R_le_LT[1]+pixel_cut[1]):(R_le_LT[1]+size_le_ori[1]-pixel_cut[1])] = LE_infer[pixel_cut[0]:(-1*pixel_cut[0]), pixel_cut[1]:-1*(pixel_cut[1])]*255
frame[(R_re_LT[0]+pixel_cut[0]):(R_re_LT[0]+size_re_ori[0]-pixel_cut[0]),
(R_re_LT[1]+pixel_cut[1]):(R_re_LT[1]+size_re_ori[1]-pixel_cut[1])] = RE_infer[pixel_cut[0]:(-1*pixel_cut[0]), pixel_cut[1]:-1*(pixel_cut[1])]*255
frame = self.monitor_para(frame, alpha_w2c, self.Pe, R_w)
result, imgencode = cv2.imencode('.jpg', frame, self.encode_param)
data = pickle.dumps(imgencode, 0)
self.client_socket.sendall(struct.pack("L", len(data)) + data)
return True
def redirect_gaze(self, frame,shared_v,lock):
# head detection
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
face_detect_gray = cv2.resize(gray,(self.size_df[0],self.size_df[1]))
detections = self.detector(face_detect_gray, 0)
rg_thread = Thread(target=self.flx_gaze, args=(frame, gray, detections,shared_v,lock))
rg_thread.start()
return True
def run(self,shared_v,lock):
# def main():
redir = False
size_window = [659,528]
vs = cv2.VideoCapture(0)
vs.set(3, size_video[0])
vs.set(4, size_video[1])
t = time.time()
cv2.namedWindow(conf.uid)
cv2.moveWindow(conf.uid, int(Rs[0]/2)-int(size_window[0]/2),int(Rs[1]/2)-int(size_window[1]/2));
while 1:
ret, recv_frame = vs.read()
if ret:
cv2.imshow(conf.uid,recv_frame)
if recv_frame is not None:
# redirected gaze
if redir:
frame = recv_frame.copy()
try:
tag = self.redirect_gaze(frame,shared_v,lock)
except:
pass
else:
result, imgencode = cv2.imencode('.jpg', recv_frame, self.encode_param)
data = pickle.dumps(imgencode, 0)
self.client_socket.sendall(struct.pack("L", len(data)) + data)
if (time.time() - t) > 1:
t = time.time()
k = cv2.waitKey(10)
if k == ord('q'):
data = pickle.dumps('stop')
self.client_socket.sendall(struct.pack("L", len(data))+data)
time.sleep(3)
cv2.destroyWindow(conf.uid)
self.client_socket.shutdown(socket.SHUT_RDWR)
self.client_socket.close()
vs.release()
self.L_sess.close()
self.R_sess.close()
break
elif k == ord('r'):
if redir:
redir = False
else:
redir = True
else:
pass
if __name__ == '__main__':
l = mp.Lock() # multi-process lock
v = mp.Array('i', [320,240]) # shared parameter
# start video receiver
# vs_thread = Thread(target=video_receiver, args=(conf.recver_port,))
vs_thread = mp.Process(target=video_receiver, args=(v,l))
vs_thread.start()
time.sleep(1)
gz_thread = mp.Process(target=gaze_redirection_system, args=(v,l))
gz_thread.start()
vs_thread.join()
gz_thread.join()
```
**author**: lukethompson@gmail.com<br>
**date**: 9 Oct 2017<br>
**language**: Python 3.5<br>
**license**: BSD3<br>
## physicochemical_pairplot.ipynb
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
from empcolors import get_empo_cat_color
%matplotlib inline
pd.options.display.max_columns = 50
path_map = '../../data/mapping-files/emp_qiime_mapping_qc_filtered.tsv'
path_scat1 = 'scatter1.pdf'
path_scat2 = 'scatter2.pdf'
df = pd.read_csv(path_map, sep='\t', index_col=0)
df1 = df[['temperature_deg_c', 'salinity_psu', 'oxygen_mg_per_l', 'ph', 'empo_3']]
df2 = df[['phosphate_umol_per_l', 'nitrate_umol_per_l', 'ammonium_umol_per_l', 'empo_3']]
# 'sulfate_umol_per_l' -- OTHER NUTRIENTS ARE ALL ~0 WHERE WE HAVE SULFATE DATA (NOT INTERESTING)
for var in ['temperature_deg_c', 'salinity_psu', 'oxygen_mg_per_l', 'ph', 'phosphate_umol_per_l',
'nitrate_umol_per_l', 'ammonium_umol_per_l', 'sulfate_umol_per_l']:
print(var, df[var].max())
dict_xlim = {
'temperature_deg_c': [-25, 105],
'salinity_psu': [-2.5, 42.5],
'oxygen_mg_per_l': [-1.5, 22.5],
'ph': [1, 13],
'phosphate_umol_per_l': [-50, 450],
'nitrate_umol_per_l': [-400, 3400],
'ammonium_umol_per_l': [-500, 4650]
}
dict_xticks = {
'temperature_deg_c': [-20, 0, 20, 40, 60, 80, 100],
'salinity_psu': [0, 10, 20, 30, 40],
'oxygen_mg_per_l': [0, 5, 10, 15, 20],
'ph': [2, 4, 6, 8, 10, 12],
'phosphate_umol_per_l': [0, 100, 200, 300, 400],
'nitrate_umol_per_l': [0, 1000, 2000, 3000],
'ammonium_umol_per_l':[0, 1000, 2000, 3000, 4000]
}
g = sns.PairGrid(df1, hue='empo_3', palette=get_empo_cat_color(returndict=True))
g = g.map(plt.scatter, alpha=1.0)
# g = g.map_offdiag(plt.scatter, alpha=1.0)
# g = g.map_diag(plt.hist)
# TEMP
g.axes[0][0].set_ylim(dict_xlim['temperature_deg_c'])
g.axes[3][0].set_xlim(dict_xlim['temperature_deg_c'])
g.axes[0][0].set_yticks(dict_xticks['temperature_deg_c'])
g.axes[3][0].set_xticks(dict_xticks['temperature_deg_c'])
# SAL
g.axes[1][0].set_ylim(dict_xlim['salinity_psu'])
g.axes[3][1].set_xlim(dict_xlim['salinity_psu'])
g.axes[1][0].set_yticks(dict_xticks['salinity_psu'])
g.axes[3][1].set_xticks(dict_xticks['salinity_psu'])
# OX
g.axes[2][0].set_ylim(dict_xlim['oxygen_mg_per_l'])
g.axes[3][2].set_xlim(dict_xlim['oxygen_mg_per_l'])
g.axes[2][0].set_yticks(dict_xticks['oxygen_mg_per_l'])
g.axes[3][2].set_xticks(dict_xticks['oxygen_mg_per_l'])
# PH
g.axes[3][0].set_ylim(dict_xlim['ph'])
g.axes[3][3].set_xlim(dict_xlim['ph'])
g.axes[3][0].set_yticks(dict_xticks['ph'])
g.axes[3][3].set_xticks(dict_xticks['ph'])
#g.savefig(path_scat1)
g = sns.PairGrid(df2, hue='empo_3', palette=get_empo_cat_color(returndict=True))
g = g.map(plt.scatter, alpha=1.0)
# g = g.map_offdiag(plt.scatter, alpha=1.0)
# g = g.map_diag(plt.hist)
# PHOS
# g.axes[0][0].set_ylim(dict_xlim['phosphate_umol_per_l'])
# g.axes[2][0].set_xlim(dict_xlim['phosphate_umol_per_l'])
# g.axes[0][0].set_yticks(dict_xticks['phosphate_umol_per_l'])
# g.axes[2][0].set_xticks(dict_xticks['phosphate_umol_per_l'])
# NITR
# g.axes[1][0].set_ylim(dict_xlim['nitrate_umol_per_l'])
# g.axes[2][1].set_xlim(dict_xlim['nitrate_umol_per_l'])
# g.axes[1][0].set_yticks(dict_xticks['nitrate_umol_per_l'])
# g.axes[2][1].set_xticks(dict_xticks['nitrate_umol_per_l'])
# AMM
# g.axes[2][0].set_ylim(dict_xlim['ammonium_umol_per_l'])
# g.axes[2][2].set_xlim(dict_xlim['ammonium_umol_per_l'])
# g.axes[2][0].set_yticks(dict_xticks['ammonium_umol_per_l'])
# g.axes[2][2].set_xticks(dict_xticks['ammonium_umol_per_l'])
# log-scale
for i in [0, 1, 2]:
for j in [0, 1, 2]:
g.axes[i][j].set_xscale('log')
g.axes[i][j].set_yscale('log')
g.axes[i][j].set_xlim([1e-4, 1e4])
g.axes[i][j].set_ylim([1e-4, 1e4])
g.axes[i][j].set_xticks([1e-4, 1e-2, 1e0, 1e2, 1e4])
g.axes[i][j].set_yticks([1e-4, 1e-2, 1e0, 1e2, 1e4])
#g.savefig(path_scat2)
df1melt = pd.melt(df1, id_vars='empo_3')
empo_list = list(set(df1melt.empo_3))
empo_list.sort()
empo_colors = [get_empo_cat_color(returndict=True)[x] for x in empo_list]
for var in ['temperature_deg_c', 'salinity_psu', 'oxygen_mg_per_l', 'ph']:
list_of = [0] * len(empo_list)
df1melt2 = df1melt[df1melt['variable'] == var].drop('variable', axis=1)
for empo in np.arange(len(empo_list)):
list_of[empo] = list(df1melt2.pivot(columns='empo_3')['value'][empo_list[empo]].dropna())
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(2.5,2.5))
plt.hist(list_of, color=empo_colors,
bins=20,
stacked=True)
plt.xlim(dict_xlim[var])
plt.xticks(dict_xticks[var])
plt.yticks([])
if var == 'temperature_deg_c':
plt.ylim([0, 2200])
elif var == 'salinity_psu':
plt.ylim([0, 167])
elif var == 'oxygen_mg_per_l':
plt.ylim([0, 236])
elif var == 'ph':
plt.ylim([0, 810])
fig.tight_layout()
#fig.savefig('~/emp/analyses-envparams/hist_%s.pdf' % var)
df2melt = pd.melt(df2, id_vars='empo_3')
empo_list = list(set(df2melt.empo_3))
empo_list.sort()
empo_colors = [get_empo_cat_color(returndict=True)[x] for x in empo_list]
for var in ['phosphate_umol_per_l', 'nitrate_umol_per_l', 'ammonium_umol_per_l']:
list_of = [0] * len(empo_list)
df2melt2 = df2melt[df2melt['variable'] == var].drop('variable', axis=1)
for empo in np.arange(len(empo_list)):
list_of[empo] = list(df2melt2.pivot(columns='empo_3')['value'][empo_list[empo]].dropna())
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(2.5,2.5))
plt.hist(list_of, color=empo_colors,
bins=np.logspace(np.log10(1e-4),np.log10(1e4), 20),
stacked=True)
plt.xscale('log')
plt.xlim([1e-4, 1e4])
plt.xticks([1e-4, 1e-2, 1e0, 1e2, 1e4])
plt.yticks([])
fig.tight_layout()
fig.savefig('hist_%s.pdf' % var)
```
# Improving generalization with regularizers and constraints
Neural networks usually have a very large number of parameters, which may lead to overfitting in many cases (especially when you do not have a large dataset). There's a large number of methods for regularization, and here we cover the most usual ones which are already implemented in Keras.
For more details and theoretical grounds for the regularization methods described here, a good reference is [Chapter 7 of the Deep Learning Book](http://www.deeplearningbook.org/contents/regularization.html).
## Regularizers (`keras.regularizers`)
- `l1(l=0.01)`: L1 weight regularization penalty, also known as LASSO
- `l2(l=0.01)`: L2 weight regularization penalty, also known as weight decay, or Ridge
- `l1l2(l1=0.01, l2=0.01)`: L1-L2 weight regularization penalty, also known as ElasticNet
- `activity_l1(l=0.01)`: L1 activity regularization
- `activity_l2(l=0.01)`: L2 activity regularization
- `activity_l1l2(l1=0.01, l2=0.01)`: L1+L2 activity regularization
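As a quick illustration of what these penalties compute, here is a small framework-free sketch (the weight values are made up; in Keras these terms are added to the loss for the layer's weight matrix):

```python
def l1_penalty(w, l=0.01):
    # L1 (LASSO) term: l * sum of absolute weights
    return l * sum(abs(x) for x in w)

def l2_penalty(w, l=0.01):
    # L2 (weight decay / Ridge) term: l * sum of squared weights
    return l * sum(x * x for x in w)

def l1l2_penalty(w, l1=0.01, l2=0.01):
    # ElasticNet term: L1 and L2 combined
    return l1_penalty(w, l1) + l2_penalty(w, l2)

w = [0.5, -1.0, 2.0]  # made-up weight vector
print(l1_penalty(w))  # ~0.035
print(l2_penalty(w))  # ~0.0525
```

Since the penalty is added to the loss, larger coefficients push the optimizer to shrink the weights more aggressively.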
```
# Example: Defining a Dense layer with l2 regularization on the weights and activations
from keras.regularizers import l2, activity_l2
model.add(Dense(256, W_regularizer=l2(0.01), activity_regularizer=activity_l2(0.05)))
```
## Constraints (`keras.constraints`)
- `maxnorm(m=2)`: maximum-norm constraint
- `nonneg()`: non-negativity constraint
- `unitnorm()`: unit-norm constraint, enforces the matrix to have unit norm along the last axis
```
# Example: enforce non-negativity on a convolutional layer weights
from keras.constraints import nonneg
model.add(Convolution1D(64, 3, border_mode='same', W_constraint=nonneg()))
```
## Dropout
Dropout is a different regularization technique, based on dropping out random internal features and/or inputs during training. In its usual formulation (which is the one implemented in Keras), dropout will set an input or feature to zero with probability P only during training (or, equivalently, setting a fraction P of the inputs/features to zero).
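The training-time behaviour described above can be sketched in a few lines of plain Python (a toy illustration of "inverted" dropout, not Keras internals):

```python
import random

def dropout(features, p=0.5, training=True):
    """Toy inverted dropout: zero each feature with probability p during
    training and scale survivors by 1/(1-p) so the expected activation is
    unchanged; do nothing at test time."""
    if not training:
        return list(features)
    keep = 1.0 - p
    return [x / keep if random.random() >= p else 0.0 for x in features]

random.seed(0)
print(dropout([1.0, 2.0, 3.0, 4.0], p=0.5))  # survivors are doubled, the rest zeroed
```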
This is how you use Dropout in Keras:
```
from keras.layers import Dropout
model.add(Dense(128, input_dim=64))
model.add(Dropout(0.5)) # Dropout 50% of the features from the dense layer
```
Note that whenever Dropout is the first layer of a network, you have to specify the `input_shape` as usual. The parameter passed to Dropout should be between zero and one, and 0.5 is the usual value chosen for internal features. For inputs, you usually want to drop out a smaller amount of input features (0.1 or 0.2 are good values to start with).
As an alternative to this sort of "binary" dropout, one can also apply a multiplicative one-centered Gaussian noise to the inputs/features. This is implemented in Keras as the `GaussianDropout` layer:
```
from keras.layers.noise import GaussianDropout
model.add(GaussianDropout(0.1)) # multiplicative Gaussian noise with sigma = 0.1
```
where the parameter is the $\sigma$ for the Gaussian distribution to be sampled.
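A minimal sketch of that multiplicative noise, again as a toy stand-in for the Keras layer:

```python
import random

def gaussian_dropout(features, sigma=0.1, training=True):
    """Toy multiplicative Gaussian noise: scale each feature by a draw
    from N(1, sigma^2) during training; identity at test time."""
    if not training:
        return list(features)
    return [x * random.gauss(1.0, sigma) for x in features]

random.seed(1)
print(gaussian_dropout([1.0, 1.0, 1.0]))  # three values scattered around 1.0
```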
## Adding noise to the inputs and/or internal features
Instead of multiplicative Gaussian noise, you can also use good old additive Gaussian noise, too. Usage is similar to the dropout layers described above:
```
from keras.layers.noise import GaussianNoise
model.add(GaussianNoise(0.2))
```
Again, the parameter is the $\sigma$ for the Gaussian distribution, but this time the noise is zero-centered as usual for additive Gaussian noise.
## Early stopping
Early stopping avoids overfitting the training data by monitoring the performance on a validation set and stopping when it stops improving. In Keras, it is implemented as a callback (`keras.callbacks.EarlyStopping`). In order to avoid noise in the performance metric used for the validation set, early stopping is implemented in Keras with a "patience" term: training stops when no improvement is seen for `patience` epochs.
```
from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(patience=5)
```
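The patience rule itself is easy to sketch framework-free (a toy illustration, not the Keras implementation):

```python
def early_stopping(val_losses, patience=5):
    """Return the epoch (0-based) at which training would stop, or None
    if the validation loss kept improving within the patience window."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss   # improvement: reset the patience counter
            wait = 0
        else:
            wait += 1     # no improvement this epoch
            if wait >= patience:
                return epoch
    return None

# improvement stalls after epoch 2, so training stops 3 epochs later
print(early_stopping([1.0, 0.8, 0.7, 0.9, 0.9, 0.9], patience=3))  # -> 5
```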
Note that the model parameters after training with early stopping will correspond to those from the last epoch, not those from the "best" epoch. So, most of the time, `EarlyStopping` is used in combination with the `ModelCheckpoint` callback with `save_best_only=True`, so you can load the best model after `EarlyStopping` interrupts your model training.
## astropy.wcs
Implements the FITS WCS standard and some commonly used distortion conventions.
This tutorial will show how to create a WCS object from a FITS file and how to use it to transform coordinates.
```
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
import os
from astropy.io import fits
from astropy import wcs
```
Open a file with `astropy.io.fits` and look at it.
```
sip_file_name = os.path.join('sip.fits')
sip_file = fits.open(sip_file_name)
sip_file.info()
```
To create a WCS object, pass the header with the WCS keywords to `astropy.wcs.WCS` (the primary header in this case).
```
w = wcs.WCS(sip_file[0].header)
```
To perform the transformation from detector to sky, including distortions, pass x and y and an 'origin'. The third argument, 'origin', indicates whether the coordinates are 1-based (like FITS), or 0-based (like python).
The inputs can be numbers, numpy arrays or array like objects.
```
ra, dec = w.all_pix2world([1, 1], [2, 2], 1)
print(ra, dec)
```
Perform the inverse transformation - from sky to detector coordinates.
If an analytical inverse is not available (often the case in the presence of distortion), an iterative inverse is performed.
```
print(w.all_world2pix(ra, dec, 1))
```
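The idea behind the iterative inverse can be sketched with a toy 1-D "distortion" (this is only an illustration of the fixed-point scheme, not astropy's actual solver):

```python
def forward(pix):
    # toy pixel -> world transform: linear part plus a small distortion
    return pix + 0.01 * pix ** 2

def iterative_inverse(world, iterations=20):
    """Invert forward() by fixed-point iteration: start from the linear
    guess and repeatedly subtract the distortion evaluated at the guess."""
    pix = world  # initial guess: ignore the distortion
    for _ in range(iterations):
        pix = world - 0.01 * pix ** 2
    return pix

w = forward(10.0)               # 11.0
print(round(iterative_inverse(w), 6))  # recovers 10.0
```

Because the distortion term is small, each iteration is a contraction and the guess converges quickly to the true pixel coordinate.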
In some cases it is useful to omit the distortion and perform the core WCS transforms only:
```
ra, dec = w.wcs_pix2world([1, 1], [2, 2], 1)
print(ra, dec)
w.wcs_world2pix(ra, dec, 1)
```
The WCS object can be changed and the new WCS can be written out as a new header.
By default only the primary WCS keywords are written out to the header. Pass a keyword `relax=True` to write out the SIP distortion.
```
# The original WCS
w.printwcs()
w.wcs.crpix = [200, 200]
# Calling *to_header()* without arguments writes
# out the standard WCS keywords.
w.to_header()
# Passing *relax=True* writes out the SIP coefficients
# and all other distortions.
#w.to_header(relax=True)
```
Exercise 1:
- Create a WCS object from the file.
dist_file_name = 'dist_lookup.fits.gz'
This file contains all distortions typical for HST imaging data - SIP, lookup_table and det2im (detector to image - correcting detector irregularities). The lookup table and det2im distortions are stored in separate extensions so you will need to pass as a second argument to `wcs.WCS` the file object (already opened with astropy.io.fits).
- Look at the file object with the `info()` method. The lookup_table and det2im distortions are saved in separate extensions.
- Modify one of the WCS keywords and save it to file. (As some of the distortion is saved in extensions, use the method `to_fits()` to save the entire WCS.)
Exercise 2:
The FITS WCS standard supports alternate WCSs in the same header.
These are defined by the same keywords with a character (A...Z) appended
to them. For example, *CRPIXA1*, etc.
Using the same file create a WCS object for the alternate WCS in this header, by passing also `key='O'` to wcs.WCS().
Compare the two WCSs using the `printwcs()` method.
```
# Solution is in Tutorial_Notebooks
f = fits.open("../../Tutorial_Notebooks/wcs/dist_lookup.fits.gz")
```
# Stack
This chapter will cover the basics of stack.
Let's import the necessary libraries for our code to run.
```
import java.util.*;
import java.io.*;
```
## Section 1. The Basics of Stack
A stack is a collection based on the principle of adding elements and retrieving them in the opposite order.
- Last-In, First-Out ("LIFO")
- The elements are stored in order of insertion, but we do not think of them as having indexes.
- The client can only add/remove/examine the last element added (the "top" of the stack).
A helpful way to picture how a stack operates is a pile of plates: when you organize your plates, you pile them one on top of another, and when you need a plate, you always take the one from the top of the pile.
Basic Stack Operations:
- push: Add an element to the top
- pop: Remove the top element
- peek: Return the top element
- isEmpty: Check if empty or not
- size: Check the size of the stack
<img src="images/stack.png" alt="index" width="300"/>
### Section 1.1 How do you use Stack?
To initialize a stack variable in Java, you need to start with:
```
Stack<base_type> nameStack = new Stack<base_type>();
```
It is worth noting that the base type has to be a reference data type. If you are thinking of using a primitive data type, you have to use the reference data type it corresponds to:
- int -> Integer
- double -> Double
- char -> Character
- boolean -> Boolean
The conversion between a primitive data type and its corresponding reference data type is seamless. For instance, the conversion between int and Integer works as follows:
```
int a = 20;
Integer i = Integer.valueOf(a); //converting
Integer j = a; //auto-boxing
int b = j; //auto-unboxing
```
A demonstration of using a stack:
```
Stack<Integer> stack = new Stack<Integer>();
stack.push(1); // stack: 1
stack.push(2); // stack: 1, 2
stack.push(3); // stack: 1, 2, 3
stack.push(4); // stack: 1, 2, 3, 4
stack.peek(); // return 4; stack: 1, 2, 3, 4
stack.pop(); // return 4; stack: 1, 2, 3 (Removed 4)
stack.isEmpty(); //return false; stack: 1, 2, 3
```
### Section 1.2 When do you use Stack?
When you want to get stored elements out in the reverse order from the one you put them in, a stack may be a good candidate. This can be seen in the following practices.
## Practices
Based on the above content and knowledge covered in lectures, you should be able to solve the following listed problems. Please note that **you are strongly recommended to solve the problems by yourself first before you look at the provided solutions**.
#### Valid brackets
Given a string containing just the characters '(' and ')', determine if the input string is valid. For a string to be valid, there must be the same number of opening brackets as closing brackets, and they must be in the correct order.
- "()" and "()()()" are valid
- "((())())" is also valid
- ")(" and "(()" are not valid
```
public boolean isValid(String s) {
Stack<Character> st = new Stack<Character>();
for (char c: s.toCharArray()) {
if (c == '(') {
st.push(c);
} else if (c == ')' && !st.isEmpty() && st.peek() == '(') {
st.pop();
} else {
return false;
}
}
if (st.isEmpty()) return true;
else return false;
}
// sanity check
isValid("((())()))");
```
#### Reverse Polish Notation
Given an array of tokens, which may be operators or numbers, evaluate the value of the arithmetic expression in reverse Polish notation.
Valid operations are +, -, /, *.
Each of the operands may be an integer or another expression.
Some examples:
- ["2", "1", "+", "3", "*"] -> ((2 + 1) * 3) -> 9
- ["4", "13", "5", "/", "+"] -> (4 + (13 / 5)) -> 6
```
public int evalRPN(String[] tokens) {
Stack<String> st = new Stack<String>();
int result;
for (String str:tokens){
if (isNumeric(str)) {
st.push(str);
} else if (str.equals("+")) {
int right = Integer.parseInt(st.pop());
int left = Integer.parseInt(st.pop());
result = left + right;
st.push("" + result);
} else if (str.equals("-")) {
int right = Integer.parseInt(st.pop());
int left = Integer.parseInt(st.pop());
result = left - right;
st.push("" + result);
} else if (str.equals("*")) {
int right = Integer.parseInt(st.pop());
int left = Integer.parseInt(st.pop());
result = left * right;
st.push("" + result);
} else {
int right = Integer.parseInt(st.pop());
int left = Integer.parseInt(st.pop());
result = left / right;
st.push("" + result);
}
}
return Integer.parseInt(st.pop());
}
public static boolean isNumeric(String strNum) {
try {
int d = Integer.parseInt(strNum);
} catch (NumberFormatException | NullPointerException nfe) {
return false;
}
return true;
}
// sanity check
String[] arr = {"2", "1", "+", "3", "*"};
evalRPN(arr);
```
```
import pandas as pd
import os
os.chdir('..')
from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd
import numpy as np
import requests
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
url = "https://en.wikivoyage.org/w/api.php?format=json&action=query&prop=extracts&exintro&explaintext&redirects=1&titles=Pisa"
f = requests.get(url)
print(f.text)
data = ['Pisa is a city in Tuscany, Italy, best known for its world-famous leaning tower with beautiful architecture and artistic views. '
"But the tower isn't the only thing to see – there are other architecture and artistic marvels in this beautiful city.",
'The half-hour walk from the Campo dei Miracoli to the train station runs through a pedestrian street with many interesting sights, shops, and restaurants.',
'The best way to visit Pisa is walking the streets, as the city centre is small and cosy, and enjoying the sight and the atmosphere.']
def plot_5_most_common_words(count_data, count_vectorizer):
import matplotlib.pyplot as plt
words = count_vectorizer.get_feature_names()
total_counts = np.zeros(len(words))
for t in count_data:
total_counts+=t.toarray()[0]
count_dict = (zip(words, total_counts))
count_dict = sorted(count_dict, key=lambda x:x[1], reverse=True)[0:5]
words = [w[0] for w in count_dict]
counts = [w[1] for w in count_dict]
print (count_dict)
x_pos = np.arange(len(words))
plt.figure(2, figsize=(15, 15/1.6180))
plt.subplot(title='5 most used words for Pisa')
sns.set_context("notebook", font_scale=1.25, rc={"lines.linewidth": 2.5})
sns.barplot(x_pos, counts, palette='husl')
plt.xticks(x_pos, words, rotation=90)
plt.xlabel('words')
plt.ylabel('counts')
plt.show()
count_vectorizer = CountVectorizer(stop_words='english')
count_data = count_vectorizer.fit_transform(data)
plot_5_most_common_words(count_data, count_vectorizer)
import warnings
warnings.simplefilter("ignore", DeprecationWarning)
# Load the LDA model from sk-learn
from sklearn.decomposition import LatentDirichletAllocation as LDA
# Helper function
def print_topics(model, count_vectorizer, n_top_words):
words = count_vectorizer.get_feature_names()
for topic_idx, topic in enumerate(model.components_):
print("\nTopic #%d:" % topic_idx)
print(" ".join([words[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
# Tweak the two parameters below (use int values below 15)
number_topics = 5
number_words = 5
# Create and fit the LDA model
lda = LDA(n_components=number_topics)
lda.fit(count_data)
# Print the topics found by the LDA model
print("Topics found via LDA:")
print_topics(lda, count_vectorizer, number_words)
moods = {"beach", "artistic", "mountain", "romantic", "lakes", "historic", "architecture"}
words = count_vectorizer.get_feature_names()
tags = []
for topic_idx, topic in enumerate(lda.components_):
for s in [words[i]
for i in topic.argsort()[:-10 - 1:-1]]:
for m in moods:
if(m==s):
tags.append(m)
print("Pisa is tagged with: ", ", ".join(tags))
```
```
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.feature.HashingTF
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.util.MLUtils
import com.amazonaws.services.sagemaker.sparksdk.IAMRole
import com.amazonaws.services.sagemaker.sparksdk.algorithms.XGBoostSageMakerEstimator
import com.amazonaws.services.sagemaker.sparksdk.SageMakerResourceCleanup
// Load 2 types of emails from text files: spam and ham (non-spam).
// Each line has text from one email.
// Convert to lower case, remove punctuation and numbers, trim whitespace
// This adds 0.6% accuracy!
val spam = sc.textFile("s3://sagemaker-eu-west-1-123456789012/spam").map(l => l.toLowerCase()).map(l => l.replaceAll("[^ a-z]", "")).map(l => l.trim())
val ham = sc.textFile("s3://sagemaker-eu-west-1-123456789012/ham").map(l => l.toLowerCase()).map(l => l.replaceAll("[^ a-z]", "")).map(l => l.trim())
spam.take(5)
// Create a HashingTF instance to map email text to vectors of features.
val tf = new HashingTF(numFeatures = 200)
// Each email is split into words, and each word is mapped to one feature.
val spamFeatures = spam.map(email => tf.transform(email.split(" ")))
val hamFeatures = ham.map(email => tf.transform(email.split(" ")))
// Display features for a spam sample
spamFeatures.take(1)
// Display features for a ham sample
hamFeatures.take(1)
// Create LabeledPoint datasets for positive (spam) and negative (ham) examples.
val positiveExamples = spamFeatures.map(features => LabeledPoint(1, features))
val negativeExamples = hamFeatures.map(features => LabeledPoint(0, features))
// Display label for a spam sample
positiveExamples.take(1)
// Display label for a ham sample
negativeExamples.take(1)
// The XGBoost built-in algo requires a libsvm-formatted DataFrame
val data = positiveExamples.union(negativeExamples)
val data_libsvm = MLUtils.convertVectorColumnsToML(data.toDF)
data_libsvm.take(2)
// Split the data set 80/20
val Array(trainingData, testData) = data_libsvm.randomSplit(Array(0.8, 0.2))
val roleArn = "YOUR_SAGEMAKER_ROLE"
val xgboost_estimator = new XGBoostSageMakerEstimator(
trainingInstanceType="ml.m5.large", trainingInstanceCount=1,
endpointInstanceType="ml.t2.medium", endpointInitialInstanceCount=1,
sagemakerRole=IAMRole(roleArn))
xgboost_estimator.setObjective("binary:logistic")
xgboost_estimator.setNumRound(25)
val xgboost_model = xgboost_estimator.fit(trainingData)
val transformedData = xgboost_model.transform(testData)
transformedData.head(5)
val roundedData = transformedData.withColumn("prediction_rounded", when($"prediction" > 0.5 , 1.0).otherwise(0.0))
val accuracy = 1.0 * roundedData.filter($"label"=== $"prediction_rounded").count / roundedData.count()
xgboost_model.getCreatedResources
val cleanup = new SageMakerResourceCleanup(xgboost_model.sagemakerClient)
cleanup.deleteResources(xgboost_model.getCreatedResources)
```
# Identify DOSTA Sensors with Missing Two-Point Calibrations
During a review of the dissolved oxygen data, it was discovered that there was an error in how the instrument calibration coefficients were being applied. The two-point calibration values, supplied by the vendor if a multipoint calibration was not warranted, were not being passed to the equation used to convert the raw measurements to dissolved oxygen. The resulting calculated dissolved oxygen values were incorrect. The error in the formulation of the equation was identified and corrected, with the correction going into effect on September 10, 2020.
The code below identifies the instruments impacted using the OOI M2M API to access calibration values for the sensors from the Asset Management database and determine which sensors had two-point calibration values. The default values are set to a slope of 1 and offset of 0. If the vendor applied a two-point calibration, those values will be different. The error in the code always assumed the default values for the two-point calibration.
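As a minimal sketch of the correction (the offset, slope, and raw value here are hypothetical, chosen only for illustration), the fixed conversion applies the vendor two-point values to each raw measurement, and the percent error of the old behavior can be estimated at a nominal value of 100:

```python
# Illustrative sketch only: the offset/slope and raw value below are hypothetical,
# not taken from any real sensor's calibration record.
def apply_two_point(raw_do, offset=0.0, slope=1.0):
    """Apply a two-point (offset, slope) calibration; the defaults leave the value unchanged."""
    return offset + slope * raw_do

offset, slope = 1.8, 0.97  # assumed vendor-supplied two-point values
raw_do = 250.0             # assumed raw dissolved oxygen measurement

corrected = apply_two_point(raw_do, offset, slope)  # ~244.3
uncorrected = apply_two_point(raw_do)               # 250.0, what the erroneous code produced

# Same percent-error estimate used in the function below, at a nominal value of 100
percent_error = abs(100 - (offset + slope * 100))   # ~1.2 percent
```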
```
import pandas as pd
from datetime import datetime, timedelta
from ooi_data_explorations.common import list_deployments, get_deployment_dates, \
get_calibrations_by_refdes, get_vocabulary
def missing_two_point(sites, node, sensor):
"""
Use the OOI M2M API to locate the DOSTA sensors that had nondefault two-point
calibration values; default is a slope of 1 and offset of 0. These are the
sensors that would have had miscalculated dissolved oxygen data as the
function was always assuming the default values.
:param sites: List of site names to query
:param node: Node name to query
:param sensor: Sensor name to query
:return: pandas dataframe listing affected sensors
"""
# create an empty pandas dataframe
data = pd.DataFrame(columns=['Array', 'Platform', 'Node', 'Instrument', 'RefDes', 'Asset_ID',
'Serial Number', 'deployment', 'gitHub_changeDate', 'file', 'URL',
'changeType', 'dateRangeStart', 'dateRangeEnd', 'annotation'])
# loop through the sites
for site in sites:
# for each site, loop through the deployments
deployments = list_deployments(site, node, sensor)
for deploy in deployments:
# get the deployment dates and convert to a datetime object
start, stop = get_deployment_dates(site, node, sensor, deploy)
vocab = get_vocabulary(site, node, sensor)
start = datetime.strptime(start, '%Y-%m-%dT%H:%M:%S.000Z')
stop = datetime.strptime(stop, '%Y-%m-%dT%H:%M:%S.000Z')
# advance the start time by 30 days to ensure we only get calibration
# data for the deployment of interest (exclude potentially overlapping)
adj_start = start + timedelta(days=30)
adj_stop = start + timedelta(days=31)
# use the site, node, sensor and advanced start and stop dates to
# access the sensor calibration data (using 1 day in the middle
# of the deployment limits the response to just this deployment)
cal = get_calibrations_by_refdes(site, node, sensor,
adj_start.strftime('%Y-%m-%dT%H:%M:%S.000Z'),
adj_stop.strftime('%Y-%m-%dT%H:%M:%S.000Z'))
# extract the two-point calibration values
two_point = cal[0]['sensor']['calibration'][1]['calData'][0]['value']
# check to see if a two-point calibration was available
if two_point != [0.0, 1.0]:
# if so, add the information to the dataframe
percent_error = abs(100 - (two_point[0] + two_point[1] * 100))
annotation = (('ALGORITHM CORRECTION: During a review of the dissolved oxygen data, it was ' +
'discovered that there was an error in how the instrument calibration ' +
'coefficients were being applied. The two-point calibration values, supplied by ' +
'the vendor if a multipoint calibration was not warranted, were not being passed ' +
'to the equation used to convert the raw measurements to dissolved oxygen. The ' +
'resulting calculated dissolved oxygen values were incorrect. The error in the ' +
'formulation of the equation was identified and corrected, with the correction ' +
'going into effect on 2020-09-10. Users who have requested data for this sensor ' +
'({0}) prior to 2020-09-10 for deployment {1} between {2} and {3} are encouraged ' +
'to re-download the data. The estimated error in the dissolved oxygen ' +
'calculation in this instance is {4:.2f} percent.').format(cal[0]['referenceDesignator'],
deploy, start, stop,
percent_error))
data = data.append({'Array': vocab[0]['tocL1'],
'Platform': vocab[0]['tocL2'] ,
'Node': vocab[0]['tocL3'],
'Instrument': vocab[0]['instrument'],
'RefDes': cal[0]['referenceDesignator'],
'Asset_ID': cal[0]['sensor']['uid'],
'Serial Number': cal[0]['sensor']['serialNumber'],
'deployment': deploy,
'gitHub_changeDate': '2020-09-10',
'file': 'ParameterFunctions.csv',
'URL': 'https://github.com/oceanobservatories/preload-database/commit/c07c9229d01040da16e2cf6270c7180d4ed57f20',
'changeType': 'algorithmCorrection',
'dateRangeStart': start,
'dateRangeEnd': stop,
'annotation': annotation}, ignore_index=True)
# return the results
return data
```
## Coastal Endurance Array
Two sets of DOSTA sensors were potentially impacted by the error in the Coastal Endurance (CE) Array: the sensors mounted on the near-surface instrument frame (NSIF) for the shelf and offshore coastal surface moorings, and the sensors on the coastal surface piercing profiler (CSPP).
```
# find the endurance instruments affected
sites = ['CE02SHSM', 'CE04OSSM', 'CE07SHSM', 'CE09OSSM']
node = 'RID27'
sensor = '04-DOSTAD000'
nsif = missing_two_point(sites, node, sensor)
sites = ['CE01ISSP', 'CE02SHSP', 'CE06ISSP', 'CE07SHSP']
node = 'SP001'
sensor = '01-DOSTAJ000'
cspp = missing_two_point(sites, node, sensor)
endurance = pd.concat([nsif, cspp], ignore_index=True)
endurance
endurance.to_csv('endurance.dosta.changelog.csv')
```
## Coastal Pioneer Array
Three sets of DOSTA sensors were potentially impacted by the error in the Coastal Pioneer (CP) Array: the sensors mounted on the near-surface instrument frame (NSIF) and the multi-function node (MFN) for the coastal surface moorings, and the sensors on the coastal surface piercing profiler (CSPP).
```
# find the pioneer instruments affected
sites = ['CP01CNSM', 'CP03ISSM', 'CP04OSSM']
sensor = '04-DOSTAD000'
nsif = missing_two_point(sites, 'RID27', sensor)
mfn = missing_two_point(sites, 'MFD37', sensor)
sites = ['CP01CNSP', 'CP03ISSP']
node = 'SP001'
sensor = '01-DOSTAJ000'
cspp = missing_two_point(sites, node, sensor)
pioneer = pd.concat([nsif, mfn, cspp], ignore_index=True)
pioneer
pioneer.to_csv('pioneer.dosta.changelog.csv')
```
## Global Arrays (Argentine Basin, Irminger Sea, Southern Ocean and Station Papa)
Four sets of DOSTA sensors were potentially impacted by the error in the Global Arrays: the sensors mounted on the subsurface plate of the buoy and the near-surface instrument frame (NSIF), the sensors connected to the CTDBP on the mooring riser, and the sensors on the flanking mooring subsurface sphere.
```
sites = ['GA01SUMO', 'GI01SUMO', 'GS01SUMO']
buoy = missing_two_point(sites, 'SBD11', '04-DOSTAD000')
nsif = missing_two_point(sites, 'RID16', '06-DOSTAD000')
imm = [missing_two_point(sites, 'RII11', '02-DOSTAD031'),
missing_two_point(sites, 'RII11', '02-DOSTAD032'),
missing_two_point(sites, 'RII11', '02-DOSTAD033')]
sites = ['GA03FLMA', 'GA03FLMB', 'GI03FLMA', 'GI03FLMB', 'GP03FLMA', 'GP03FLMB', 'GS03FLMA', 'GS03FLMB']
sphere = missing_two_point(sites, 'RIS01', '03-DOSTAD000')
garray = pd.concat([buoy, nsif, pd.concat(imm), sphere], ignore_index=True)
garray
garray.to_csv('globals.dosta.changelog.csv')
```
# Logistic Regression
Here we apply logistic regression to sats.csv. The dataset has 3 columns: exam 1, exam 2, and whether it was submitted.
#### Initialize
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
df=pd.read_csv("sats.csv")
X=df.iloc[:,:-1].values
y=df.iloc[:,-1].values
df.head()
df.describe()
```
### Plot
```
pos , neg = (y==1).reshape(100,1) , (y==0).reshape(100,1)
fig = plt.figure(1)
ax = fig.add_subplot(111, facecolor='#FFF5EE')
plt.scatter(X[pos[:,0],0],X[pos[:,0],1],c="#808000",marker="+", s=50)
plt.scatter(X[neg[:,0],0],X[neg[:,0],1],c="#8B0000",marker="o",s=20)
plt.xlabel("Exam1")
plt.ylabel("Exam2")
plt.legend(["Submitted","Not submitted"])
```
### Sigmoid
formula:
$ g(z) = \frac{1}{(1+e^{-z})}$
```
def sigmoid(z):
g=1/ (1 + np.exp(-z))
return g
# testing the sigmoid function
sigmoid(0)
```
### Compute the Cost Function and Gradient
$J(\Theta) = \frac{1}{m} \sum_{i=1}^{m} [ -y^{(i)}log(h_{\Theta}(x^{(i)})) - (1 - y^{(i)})log(1 - (h_{\Theta}(x^{(i)}))]$
$ \frac{\partial J(\Theta)}{\partial \Theta_j} = \frac{1}{m} \sum_{i=1}^{m} (h_{\Theta}(x^{(i)}) - y^{(i)})x_j^{(i)}$
```
def costFunction(theta, X, y):
m=len(y)
predictions = sigmoid(np.dot(X,theta))
error = (-y * np.log(predictions)) - ((1-y)*np.log(1-predictions))
# cost func
cost = 1/m * sum(error)
# gradient func
grad = 1/m * np.dot(X.transpose(),(predictions - y))
return cost[0] , grad
```
### Feature scaling
```
def featureNormalization(X):
"""
Take in numpy array of X values and return normalize X values,
the mean and standard deviation of each feature
"""
mean=np.mean(X,axis=0)
std=np.std(X,axis=0)
X_norm = (X - mean)/std
return X_norm , mean , std
m , n = X.shape[0], X.shape[1]
X, X_mean, X_std = featureNormalization(X)
X= np.append(np.ones((m,1)),X,axis=1)
y=y.reshape(m,1)
initial_theta = np.zeros((n+1,1))
cost, grad= costFunction(initial_theta,X,y)
print("Cost of initial theta is",cost)
print("Gradient at initial theta (zeros):",grad)
```
### Gradient Descent
```
def gradientDescent(X,y,theta,alpha,num_iters):
"""
Take in numpy array X, y and theta and update theta by taking num_iters gradient steps
with learning rate of alpha
return theta and the list of the cost of theta during each iteration
"""
m=len(y)
J_history =[]
for i in range(num_iters):
cost, grad = costFunction(theta,X,y)
theta = theta - (alpha * grad)
J_history.append(cost)
return theta , J_history
theta , J_history = gradientDescent(X,y,initial_theta,1,400)
print("Theta optimized by gradient descent:",theta)
print("The cost of the optimized theta:",J_history[-1])
```
### Plotting of Cost Function
```
plt.plot(J_history)
plt.xlabel("Iteration")
plt.ylabel("$J(\Theta)$")
plt.title("Cost function using Gradient Descent")
```
### Plotting the decision boundary
From Machine Learning Resources:
$h_\Theta(x) = g(z)$, where g is the sigmoid function and $z = \Theta^Tx$
Since $h_\Theta(x) \geq 0.5$ is interpreted as predicting class "1", $g(\Theta^Tx) \geq 0.5$ or $\Theta^Tx \geq 0$ predict class "1"
$\Theta_1 + \Theta_2x_2 + \Theta_3x_3 = 0$ is the decision boundary
Since, we plot $x_2$ against $x_3$, the boundary line will be the equation $ x_3 = \frac{-(\Theta_1+\Theta_2x_2)}{\Theta_3}$
```
plt.scatter(X[pos[:,0],1],X[pos[:,0],2],c="r",marker="+",label="Submitted")
plt.scatter(X[neg[:,0],1],X[neg[:,0],2],c="b",marker="x",label="Not submitted")
x_value= np.array([np.min(X[:,1]),np.max(X[:,1])])
y_value=-(theta[0] +theta[1]*x_value)/theta[2]
plt.plot(x_value,y_value, "g")
plt.xlabel("Exam 1 score")
plt.ylabel("Exam 2 score")
plt.legend(loc=0)
```
### Prediction
```
def classifierPredict(theta,X):
"""
take in numpy array of theta and X and predict the class
"""
predictions = X.dot(theta)
return predictions>0
x_test = np.array([45,85])
x_test = (x_test - X_mean)/X_std
x_test = np.append(np.ones(1),x_test)
prob = sigmoid(x_test.dot(theta))
print("For a student with scores 45 and 85, we predict a submission probability of",prob[0])
```
### Accuracy on training set
```
p=classifierPredict(theta,X)
print("Train Accuracy:", sum(p==y)[0],"%")
```
<i>Copyright (c) Microsoft Corporation. All rights reserved.<br>
Licensed under the MIT License.</i>
<br>
# Model Comparison for NCF Using the Neural Network Intelligence Toolkit
This notebook shows how to use the **[Neural Network Intelligence](https://nni.readthedocs.io/en/latest/) toolkit (NNI)** for tuning hyperparameters for the Neural Collaborative Filtering Model.
You can read about each tuner NNI offers [here](https://nni.readthedocs.io/en/latest/Tuner/BuiltinTuner.html).
NNI is a toolkit to help users design and tune machine learning models (e.g., hyperparameters), neural network architectures, or complex system’s parameters, in an efficient and automatic way. NNI has several appealing properties: ease of use, scalability, flexibility and efficiency. NNI can be executed in a distributed way on a local machine, a remote server, or a large scale training platform such as OpenPAI or Kubernetes.
In this notebook, we can see how NNI works with two different model types and the differences between their hyperparameter search spaces, YAML config files, and training scripts.
- [NCF Training Script](../../reco_utils/nni/ncf_training.py)
For this notebook we use a _local machine_ as the training platform (this can be any machine running the `reco_base` conda environment). In this case, NNI uses the available processors of the machine to parallelize the trials, subject to the value of `trialConcurrency` we specify in the configuration. Our runs and the results we report were obtained on a [Standard_D16_v3 virtual machine](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-general#dv3-series-1) with 16 vcpus and 64 GB memory.
### 1. Global Settings
```
import sys
sys.path.append("../../")
import json
import os
import surprise
import papermill as pm
import pandas as pd
import shutil
import subprocess
import yaml
import pkg_resources
from tempfile import TemporaryDirectory
import tensorflow as tf
tf.get_logger().setLevel('ERROR') # only show error messages
import reco_utils
from reco_utils.common.timer import Timer
from reco_utils.dataset import movielens
from reco_utils.dataset.python_splitters import python_chrono_split
from reco_utils.evaluation.python_evaluation import rmse, precision_at_k, ndcg_at_k
from reco_utils.tuning.nni.nni_utils import (
check_experiment_status,
check_stopped,
check_metrics_written,
get_trials,
stop_nni, start_nni
)
from reco_utils.recommender.ncf.dataset import Dataset as NCFDataset
from reco_utils.recommender.ncf.ncf_singlenode import NCF
from reco_utils.tuning.nni.ncf_utils import compute_test_results, combine_metrics_dicts
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
print("NNI version: {}".format(pkg_resources.get_distribution("nni").version))
tmp_dir = TemporaryDirectory()
%load_ext autoreload
%autoreload 2
```
### 2. Prepare Dataset
1. Download data and split into training, validation and test sets
2. Store the data sets to a local directory.
```
# Parameters used by papermill
# Select Movielens data size: 100k, 1m
MOVIELENS_DATA_SIZE = '100k'
SURPRISE_READER = 'ml-100k'
TMP_DIR = tmp_dir.name
NUM_EPOCHS = 10
MAX_TRIAL_NUM = 16
DEFAULT_SEED = 42
# time (in seconds) to wait for each tuning experiment to complete
WAITING_TIME = 20
MAX_RETRIES = MAX_TRIAL_NUM*4 # it is recommended to have MAX_RETRIES>=4*MAX_TRIAL_NUM
# Note: The NCF model can incorporate
df = movielens.load_pandas_df(
size=MOVIELENS_DATA_SIZE,
header=["userID", "itemID", "rating", "timestamp"]
)
df.head()
train, validation, test = python_chrono_split(df, [0.7, 0.15, 0.15])
train = train.drop(['timestamp'], axis=1)
validation = validation.drop(['timestamp'], axis=1)
test = test.drop(['timestamp'], axis=1)
LOG_DIR = os.path.join(TMP_DIR, "experiments")
os.makedirs(LOG_DIR, exist_ok=True)
DATA_DIR = os.path.join(TMP_DIR, "data")
os.makedirs(DATA_DIR, exist_ok=True)
TRAIN_FILE_NAME = "movielens_" + MOVIELENS_DATA_SIZE + "_train.pkl"
train.to_pickle(os.path.join(DATA_DIR, TRAIN_FILE_NAME))
VAL_FILE_NAME = "movielens_" + MOVIELENS_DATA_SIZE + "_val.pkl"
validation.to_pickle(os.path.join(DATA_DIR, VAL_FILE_NAME))
TEST_FILE_NAME = "movielens_" + MOVIELENS_DATA_SIZE + "_test.pkl"
test.to_pickle(os.path.join(DATA_DIR, TEST_FILE_NAME))
```
### 3. Prepare Hyperparameter Tuning
To run an experiment on NNI we require a general training script for our model of choice.
A general framework for a training script utilizes the following components
1. Argument Parse for the fixed parameters (dataset location, metrics to use)
2. Data preprocessing steps specific to the model
3. Fitting the model on the train set
4. Evaluating the model on the validation set on each metric (ranking and rating)
5. Save metrics and model
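The components above can be sketched as a simplified outline of such a script (a sketch only; the argument names mirror a subset of the `script_params` used in this notebook, and the commented-out `nni.get_next_parameter` / `nni.report_final_result` calls are the NNI trial APIs a real training script would invoke):

```python
import argparse

def get_parser():
    # 1. Fixed (non-tuned) parameters, a subset of those in script_params
    parser = argparse.ArgumentParser()
    parser.add_argument("--datastore", required=True)
    parser.add_argument("--train-datapath", required=True)
    parser.add_argument("--validation-datapath", required=True)
    parser.add_argument("--primary-metric", default="precision_at_k")
    return parser

def run_trial(args, hyper_params):
    # 2. load and preprocess the train/validation sets from args.datastore
    # 3. fit the model using the sampled hyper_params
    # 4. evaluate each rating/ranking metric on the validation set
    metrics = {"precision_at_k": 0.0}  # placeholder values in this sketch
    # 5. save metrics/model, then report the primary metric to the tuner:
    #    nni.report_final_result(metrics[args.primary_metric])
    return metrics

# In the real script, the entry point would be roughly:
#   hyper_params = nni.get_next_parameter()  # sampled from the search space
#   args = get_parser().parse_args()
#   run_trial(args, hyper_params)
```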
To utilize NNI we also require a hyperparameter search space. Only the hyperparameters we want to tune are required in the dictionary. NNI supports different methods of [hyperparameter sampling](https://nni.readthedocs.io/en/latest/Tutorial/SearchSpaceSpec.html).
The `script_params` below are the parameters of the training script that are fixed (unlike `hyper_params` which are tuned).
```
PRIMARY_METRIC = "precision_at_k"
RATING_METRICS = ["rmse"]
RANKING_METRICS = ["precision_at_k", "ndcg_at_k"]
USERCOL = "userID"
ITEMCOL = "itemID"
REMOVE_SEEN = True
K = 10
RANDOM_STATE = 42
VERBOSE = True
BIASED = True
script_params = " ".join([
"--datastore", DATA_DIR,
"--train-datapath", TRAIN_FILE_NAME,
"--validation-datapath", VAL_FILE_NAME,
"--surprise-reader", SURPRISE_READER,
"--rating-metrics", " ".join(RATING_METRICS),
"--ranking-metrics", " ".join(RANKING_METRICS),
"--usercol", USERCOL,
"--itemcol", ITEMCOL,
"--k", str(K),
"--random-state", str(RANDOM_STATE),
"--epochs", str(NUM_EPOCHS),
"--primary-metric", PRIMARY_METRIC
])
if BIASED:
script_params += " --biased"
if VERBOSE:
script_params += " --verbose"
if REMOVE_SEEN:
script_params += " --remove-seen"
```
We specify the search space for the NCF hyperparameters
```
ncf_hyper_params = {
'n_factors': {"_type": "choice", "_value": [2, 4, 8, 12]},
'learning_rate': {"_type": "uniform", "_value": [1e-3, 1e-2]},
}
with open(os.path.join(TMP_DIR, 'search_space_ncf.json'), 'w') as fp:
json.dump(ncf_hyper_params, fp)
```
This config file follows the guidelines provided in [NNI Experiment Config instructions](https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/ExperimentConfig.md).
The options to pay attention to are
- The "searchSpacePath" which contains the space of hyperparameters we wanted to tune defined above
- The "tuner" which specifies the hyperparameter tuning algorithm that will sample from our search space and optimize our model
```
config = {
"authorName": "default",
"experimentName": "tensorflow_ncf",
"trialConcurrency": 8,
"maxExecDuration": "1h",
"maxTrialNum": MAX_TRIAL_NUM,
"trainingServicePlatform": "local",
# The path to Search Space
"searchSpacePath": "search_space_ncf.json",
"useAnnotation": False,
"logDir": LOG_DIR,
"tuner": {
"builtinTunerName": "TPE",
"classArgs": {
#choice: maximize, minimize
"optimize_mode": "maximize"
}
},
# The path and the running command of trial
"trial": {
"command": f"{sys.executable} ncf_training.py {script_params}",
"codeDir": os.path.join(os.path.split(os.path.abspath(reco_utils.__file__))[0], "tuning", "nni"),
"gpuNum": 0
}
}
with open(os.path.join(TMP_DIR, "config_ncf.yml"), "w") as fp:
fp.write(yaml.dump(config, default_flow_style=False))
```
### 4. Execute NNI Trials
The conda environment comes with NNI installed, which includes the command line tool `nnictl` for controlling and getting information about NNI experiments. <br>
To start the NNI tuning trials from the command line, execute the following command: <br>
`nnictl create --config <path of config.yml>` <br>
The `start_nni` function will run the `nnictl create` command. To find the URL for an active experiment you can run `nnictl webui url` on your terminal.
In this notebook the 16 NCF models are trained concurrently in a single experiment with batches of 8. While NNI can run two separate experiments simultaneously by adding the `--port <port_num>` flag to `nnictl create`, the total training time will probably be the same as running the batches sequentially since these are CPU bound processes.
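For illustration, the `nnictl create` command that `start_nni` issues can be assembled as below (a sketch only; the real helper in `reco_utils` also waits on the experiment status and retries, and `build_nnictl_create` is a hypothetical name introduced here):

```python
import subprocess  # used only when actually launching the experiment

def build_nnictl_create(config_path, port=None):
    """Assemble the `nnictl create` command for launching an NNI experiment."""
    cmd = ["nnictl", "create", "--config", config_path]
    if port is not None:
        # a second, simultaneous experiment needs its own port
        cmd += ["--port", str(port)]
    return cmd

# Launching would then be, e.g.:
# subprocess.run(build_nnictl_create("config_ncf.yml"), check=True)
```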
```
stop_nni()
config_path_ncf = os.path.join(TMP_DIR, 'config_ncf.yml')
with Timer() as time_ncf:
start_nni(config_path_ncf, wait=WAITING_TIME, max_retries=MAX_RETRIES)
check_metrics_written(wait=WAITING_TIME, max_retries=MAX_RETRIES)
trials_ncf, best_metrics_ncf, best_params_ncf, best_trial_path_ncf = get_trials('maximize')
best_metrics_ncf
best_params_ncf
```
### 5. Baseline Model
Although we hope that the additional effort of utilizing an AutoML framework like NNI for hyperparameter tuning will lead to better results, we should also draw comparisons using our baseline model (our model trained with its default hyperparameters). This allows us to precisely understand what performance benefits NNI is or isn't providing.
```
data = NCFDataset(train, validation, seed=DEFAULT_SEED)
model = NCF(
n_users=data.n_users,
n_items=data.n_items,
model_type="NeuMF",
n_factors=4,
layer_sizes=[16,8,4],
n_epochs=NUM_EPOCHS,
learning_rate=1e-3,
verbose=True,
seed=DEFAULT_SEED
)
model.fit(data)
test_results = compute_test_results(model, train, validation, RATING_METRICS, RANKING_METRICS)
test_results
```
### 6. Show Results
The metrics for each model type are reported on the validation set. At this point we can compare the metrics for each model and select the one with the best score on the primary metric(s) of interest.
```
test_results['name'] = 'ncf_baseline'
best_metrics_ncf['name'] = 'ncf_tuned'
combine_metrics_dicts(test_results, best_metrics_ncf)
```
Based on the above metrics, we determine that NNI has identified a set of hyperparameters that does demonstrate an improvement on our metrics of interest. In this example, it turned out that an `n_factors` of 12 contributed to a better performance than an `n_factors` of 4. While the difference in `precision_at_k` and `ndcg_at_k` is small, NNI has helped us determine that a slightly larger embedding dimension for NCF may be useful for the movielens dataset.
```
# Stop the NNI experiment
stop_nni()
tmp_dir.cleanup()
```
### 7. Concluding Remarks
In this notebook we showed how to use the NNI framework on different models. By inspection of the training scripts, the differences between the two should help you identify what components would need to be modified to run another model with NNI.
In practice, an AutoML framework like NNI is just a tool to help you explore a large space of hyperparameters quickly with a pre-described level of randomization. It is recommended that in addition to using NNI one trains baseline models using typical hyperparameter choices (learning rate of 0.005, 0.001 or regularization rates of 0.05, 0.01, etc.) to draw more meaningful comparisons between model performances. This may help determine if a model is overfitting from the tuner or if there is a statistically significant improvement.
Another thing to note is the added computational cost required to train models using an AutoML framework. In this case, it takes about 6 minutes to train each of the models on a [Standard_NC6 VM](https://docs.microsoft.com/en-us/azure/virtual-machines/nc-series). With this in mind, while NNI can easily train hundreds of models over all hyperparameters for a model, in practice it may be beneficial to choose a subset of the hyperparameters that are deemed most important and to tune those. Too small of a hyperparameter search space may restrict our exploration, but too large may also lead to random noise in the data being exploited by a specific combination of hyperparameters.
For examples of scaling larger tuning workloads on clusters of machines, see [the notebooks](./README.md) that employ the [Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-tune-hyperparameters).
### 8. References
Recommenders Repo References
* [NCF deep-dive notebook](../02_model/ncf_deep_dive.ipynb)
* [SVD NNI notebook (uses more tuners available)](./nni_surprise_svd.ipynb)
External References
* [NCF Paper](https://arxiv.org/abs/1708.05031)
* [NNI Docs | Neural Network Intelligence toolkit](https://github.com/Microsoft/nni)
## Adding the required Libraries
```
import numpy as np
import pandas as pd
pd.set_option('display.max_columns',None)
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib.colors as colors
import seaborn as sns
import nltk
from nltk.tokenize import sent_tokenize
from nltk.corpus import words
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.stem import PorterStemmer
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from nltk.sentiment.util import *
from textblob import TextBlob
from sklearn.feature_extraction.text import TfidfVectorizer
from wordcloud import WordCloud
import re
from collections import Counter
import datetime as dt
#Reading the dataset
tweets = pd.read_csv(r'C:\Users\tejas\Desktop\final_kanye.csv')
tweets.head(10)
tweets.shape
```
Our dataset contains <b>27010 rows and 12 columns.</b>
```
tweets.describe()
tweets.info()
```
The <b>info</b> function gives us the description of each columns in our dataset - the datatype and the number of non-null values.
## Data Preparation and Preprocessing
After extracting the raw tweets, the data contains many <i>unnecessary letters and characters</i> that need to be removed before we perform the sentiment analysis and build the model.
After carefully analysing the dataset, we find that some of the tweets have been repeated or are a retweet of the original tweet. <u>These duplicate tweets might interfere with our model and sentiment analysis, hence we need to remove them.</u>
```
#Checking the duplicate tweets by converting the tweets into a set.
tweets_set=set(tweets.Text)
print(len(tweets_set))
print("Duplicate Tweet Count:", len(tweets.Text)-len(tweets_set))
```
Our dataset has been reduced from <b>27010 rows</b> to <b>24531 rows</b>. The count of the duplicate tweets is <b>2479</b>.
```
#Removing the duplicate tweets
kanye_original = tweets.drop_duplicates(subset = 'Text', keep = 'first')
kanye_original.shape
kanye = pd.DataFrame(kanye_original.Text)
```
After removing the duplicate tweets from our dataset, we will clean the data, i.e. remove the punctuations, numbers, URLs and emojis, using <b>Regex (regular expressions)</b>.
```
#Removing URLs
kanye.Text = [re.sub(r'http\S+',"", i) for i in kanye.Text]
kanye.Text = [re.sub(r'com',"",i) for i in kanye.Text]
#Removing the retweet text 'RT'.
kanye.Text = [re.sub('^RT[\s]','',i) for i in kanye.Text]
#Removing the hashtag symbol '#'.
kanye.Text = [re.sub('^#[\s]','',i) for i in kanye.Text]
#Removing all punctuations and numbers
kanye.Text = [re.sub('[^a-zA-Z]', ' ',i) for i in kanye.Text]
#Converting into lower case
kanye.Text = [low.lower() for low in kanye.Text]
#Removing Emojis
def preprocess(Text):
emojis = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)',Text)
Text = re.sub('[\W]+',' ', Text.lower()) +\
' '.join(emojis).replace('-','')
return Text
kanye.Text = kanye.Text.apply(preprocess)
kanye.Text.head()
```
After removing the unnecessary characters from our tweets, we now need to remove the redundant words, known as <b>stopwords</b>. <i><u>Stopwords are the words that occur frequently in the text and are not considered informative.</u></i> These words are removed before the model is built. Let's check the stopwords present in the English language.
```
#Checking the stopwords list
cachedStopWords=set(stopwords.words("english"))
print(cachedStopWords)
#Removing Stop Words
kanye.Text=kanye.Text.apply(lambda tweet: ' '.join([word for word in tweet.split() if word not in cachedStopWords]))
```
After removing the stopwords we will be performing <b>Stemming</b> and <b>Lemmatization</b>.
<b>Stemming</b> can be defined as the <u>process of reducing inflected (derived) words to their word stem, base or root form, generally a written word form.</u>
```
#Stemming
porter = PorterStemmer()
def stemWords(word):
return porter.stem(word)
kanye["Text"] = kanye["Text"].apply(lambda tweet: ' '.join([stemWords(word) for word in tweet.split()]))
```
<b>Lemmatization</b> is defined as <u>the process of grouping together the inflected forms of a word so they can be analysed as a single item, identified by the word's lemma, or dictionary form.</u>
```
#Lemmatization:
lema = WordNetLemmatizer()
def lemmatizeWords(word):
return lema.lemmatize(word)
kanye.Text = kanye.Text.apply(lambda tweet: ' '.join([lemmatizeWords(word) for word in tweet.split()]))
```
Now that our data is cleaned and preprocessed, we can perform the sentiment analysis and build the model.
## Sentiment Analysis
For performing the sentiment analysis, we will be using the <b>TextBlob</b> package and the <b>VADER Sentiment Analyser</b>.
```
pol = []
for i in kanye.Text:
blob = TextBlob(i) #Using the TextBlob
pol.append(blob.sentiment.polarity)
#Adding polarity to the dataframe
kanye['Polarity']=pol
kanye.head()
```
Let's count the number of tweets by polarity, grouping them as positive, negative and neutral tweets.
```
#Counting the number of tweets based on the polarity
positive=0
negative=0
neutral=0
sent=[]
for i in pol:
if i>=0.2:
positive+=1
sent.append('Positive')
elif i<=0:
negative+=1
sent.append('Negative')
else:
neutral+=1
sent.append('Neutral')
print("Positive Tweets:",positive)
print("Negative Tweets:",negative)
print("Neutral Tweets:",neutral)
kanye['Sentiment']=sent
kanye.head()
```
After dividing the tweets into the groups mentioned above, we find that the majority of the tweets are of negative sentiment (17123 tweets).
```
#Preparing words by splitting the tweets
words=[]
words=[word for tweet in kanye.Text for word in tweet.split()]
```
Now we will use the <b>VADER Sentiment Analyser</b> to get the polarity of the tweets.
```
import nltk
nltk.download('vader_lexicon')
#Using the VADER Sentiment Analyzer
sid = SentimentIntensityAnalyzer()
sentiment_scores = kanye.Text.apply(lambda x: sid.polarity_scores(x))
sentimental_score = pd.DataFrame(list(sentiment_scores))
sentimental_score.tail()
```
Let's divide the tweets into the following groups and assign each tweet an overall sentiment.
```
sentimental_score['Sentiment'] = sentimental_score['compound'].apply(lambda x: 'negative' if x <= 0 else ('positive' if x >=0.2 else 'neutral'))
sentimental_score.head()
#Checking the number of tweets per sentiment
sns.set_style(style='darkgrid')
sns.set_context('poster')
fig= plt.figure(figsize=(10,5))
sent_count = pd.DataFrame.from_dict(Counter(sentimental_score['Sentiment']), orient = 'index').reset_index()
sent_count.columns = ['sentiment', 'count']
sns.barplot(y="count", x='sentiment', data=sent_count)
```
### WordCloud
A WordCloud is an image composed of words in different sizes and colors. The size of a word reflects its frequency: <i><u>the more frequent a word is, the bigger and bolder it appears in the WordCloud</u></i>.
```
wordcloud=WordCloud(background_color='black',max_words=100,max_font_size=50,scale=5,collocations=False,
normalize_plurals=True).generate(' '.join(words))
plt.figure(figsize = (12, 12), facecolor="None")
plt.imshow(wordcloud,interpolation='bilinear')
plt.axis("off")
plt.title("WordCloud",fontsize=18)
plt.show()
```
Now we will count the frequency of the words used in the tweets. For our analysis, we will look at the <u>60 most frequent words</u>.
```
sns.set(style="darkgrid")
sns.set_context('notebook')
#Counting the word frequency of the tweets
counts = Counter(words).most_common(60)
counts_df = pd.DataFrame(counts)
counts_df.columns = ['word', 'frequency']
fig = plt.subplots(figsize = (12, 10))
plt.title("Word Frequency",fontsize=18)
sns.barplot(y="word", x='frequency', data=counts_df)
#Finding the frequency of the polarity
sns.set_context('talk')
fig,ax = plt.subplots(figsize = (12, 8))
ax.set(title='Tweet Sentiments distribution', xlabel='polarity', ylabel='frequency')
sns.distplot(kanye['Polarity'], bins=30)
```
This plot shows the distribution of tweet polarity, with the majority of tweets having either neutral or slightly positive polarity.
```
#Counting the number of different tweets
sns.set_style(style='darkgrid')
sns.set_context('poster')
fig= plt.figure(figsize=(10,5))
sns.countplot(kanye.Sentiment)
```
As we can see, the majority of the tweets are of negative sentiment.
```
#Counting the number of tweets per hour
tweets['date'] = pd.to_datetime(tweets['date'])
hour = list(tweets.date.dt.hour)
count_hours=Counter(hour)
sns.set_context("talk")
fig,ax=plt.subplots(figsize=(12,10))
ax.set(title='Count of Tweets per Hour', xlabel='Hour (24 hour time)', ylabel='count')
sns.barplot(x=list(count_hours.keys()),y=list(count_hours.values()),color='blue')
```
The plot above shows the number of tweets per hour.
## Model Prediction
### Required Libraries
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score, precision_score, recall_score,f1_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn import model_selection
```
Before we can build the model, we need to transform the text data into numerical features, since we cannot feed the model raw text directly.
We can convert a text document into a numerical representation through vectorization.
### TF-IDF Vectorizer
The <b>TF-IDF Vectorizer</b> transforms text into feature vectors that can be used <u>to evaluate how important a word is to a document in a collection or corpus.</u>
TF-IDF stands for <b>Term Frequency-Inverse Document Frequency</b>:
1. Term Frequency: the number of times a particular word appears in a single document.
2. Inverse Document Frequency: the log of the total number of documents divided by the number of documents in which that particular word appears.
Hence the TF-IDF score is calculated as:
TF-IDF = TF * IDF
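As a quick hand-rolled illustration of the formula above (the toy corpus is invented for this sketch; note that scikit-learn's `TfidfVectorizer`, used below, applies a smoothed variant of IDF, so its numbers differ):

```python
import math

# toy corpus of three "documents"
docs = [["kanye", "drops", "album"],
        ["kanye", "tweets"],
        ["new", "album", "review"]]

def tf_idf(word, doc, docs):
    tf = doc.count(word) / len(doc)              # term frequency within this document
    n_containing = sum(word in d for d in docs)  # number of documents containing the word
    idf = math.log(len(docs) / n_containing)     # inverse document frequency
    return tf * idf

print(tf_idf("album", docs[0], docs))   # appears in 2 of 3 docs: lower weight
print(tf_idf("tweets", docs[1], docs))  # appears in 1 of 3 docs: higher weight
```

Rarer words get larger IDF, so a word that appears in fewer documents scores higher.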
```
#Defining the vectorizer
vect = TfidfVectorizer(ngram_range=(1,1), max_features=100,smooth_idf=True,use_idf=True).fit(kanye.Text)
#Transform the vectorizer
X_txt = vect.transform(kanye.Text)
#Transforming to a data frame
X_df=pd.DataFrame(X_txt.toarray(), columns=vect.get_feature_names())
X_df.head(10)
```
The <b>LabelEncoder</b> converts categorical features into numerical ones. We will apply it to the Sentiment column.
```
X_df['Sentiment'] = sent
label = LabelEncoder()
X_df['Label'] = label.fit_transform(X_df['Sentiment'])
X_df.head()
```
After converting the Sentiment column into numerical values, we find that:
- Negative Sentiment - 0
- Neutral Sentiment - 1
- Positive Sentiment - 2

We have our target variable (Label) and the predictor variables. Now we will split our dataset into training and testing sets using the train_test_split method.
```
#Splitting the dataset into training and testing set
X = X_df.drop(['Sentiment','Label'],axis=1)
y = X_df.Label
X_train, X_test, y_train, y_test = train_test_split(X , y, test_size=0.2, random_state=0) #Testing dataset size-20% of the total
```
## Naive Bayes
```
nb = MultinomialNB().fit(X_train, y_train) #Training the model
#Predicting the test set
y_pred_nb= nb.predict(X_test)
#Checking the accuracy,precision,recall,f1 scores
accuracy_score_nb = accuracy_score(y_test, y_pred_nb)
precision_score_nb = precision_score(y_test, y_pred_nb , average = 'micro')
recall_score_nb = recall_score(y_test, y_pred_nb , average = 'micro')
f1_score_nb = f1_score(y_test, y_pred_nb , average = 'micro')
print("Accuracy Score: " , accuracy_score_nb)
print("Precision Score: " , precision_score_nb)
print("Recall Score: " , recall_score_nb)
print("F1 Score: " , f1_score_nb)
print("Classification Report:\n",classification_report(y_test, y_pred_nb))
#Constructing a confusion matrix
cm_nb = confusion_matrix(y_test, y_pred_nb)
df_cm_nb = pd.DataFrame(cm_nb)
categories = ['Negative','Neutral','Positive']
plt.figure(figsize=(10,8))
sns.heatmap(df_cm_nb, cmap='inferno' ,annot=True, annot_kws={"size": 18}, xticklabels = categories,
yticklabels = categories,fmt="d")
```
### Hyperparameter Optimization
```
#Getting the parameters of the particular model
nb.get_params().keys()
NB_opti = MultinomialNB() #Using the MultinomialNB
param_grid = {'alpha':[1,2,3,4], } #Selecting the parameters
model_NB = model_selection.GridSearchCV(estimator=NB_opti, #GridSearch
param_grid=param_grid,
cv=10)
model_NB.fit(X_train,y_train)
print(model_NB.best_score_) #Gives the best score of the model
print(model_NB.best_estimator_.get_params()) #Gives the best parameters of the model
```
## Decision Tree
```
DT = DecisionTreeClassifier().fit(X_train, y_train) #Training the model
#predicting the test set
y_pred_dt= DT.predict(X_test)
#Checking the accuracy,precision,recall and f1 scores
accuracy_score_dt = accuracy_score(y_test, y_pred_dt)
precision_score_dt = precision_score(y_test, y_pred_dt , average = 'micro')
recall_score_dt = recall_score(y_test, y_pred_dt , average = 'micro')
f1_score_dt = f1_score(y_test, y_pred_dt , average = 'micro')
print("Accuracy Score: " , accuracy_score_dt)
print("Precision Score: " , precision_score_dt)
print("Recall Score: " , recall_score_dt)
print("F1 Score: " , f1_score_dt)
print("Classification Report:\n",classification_report(y_test, y_pred_dt))
#Constructing the confusion matrix
cm_dt = confusion_matrix(y_test, y_pred_dt)
df_cm_dt = pd.DataFrame(cm_dt)
categories = ['Negative','Neutral','Positive']
plt.figure(figsize=(10,8))
sns.heatmap(df_cm_dt, cmap='inferno' ,annot=True, annot_kws={"size": 20}, xticklabels = categories,
yticklabels = categories,fmt="d")
```
### Hyperparameter Optimization
```
#Getting the parameters of the particular model
DT.get_params().keys()
DT_opti = DecisionTreeClassifier()
param_grid = {"max_depth" : [1,3,5,7], #Setting the parameters for the model
"criterion" : ["gini","entropy"],
"min_samples_split" : [2,3,4],
"max_leaf_nodes" : [7,8,9],
"min_samples_leaf": [2,3,4],
}
model_DT = model_selection.GridSearchCV(estimator=DT_opti, #GridSearch
param_grid=param_grid,
cv=10)
model_DT.fit(X_train,y_train)
print(model_DT.best_score_) #Gives the best score of the model
print(model_DT.best_estimator_.get_params()) #Gives the best parameters
```
## Random Forest
```
RF = RandomForestClassifier(n_jobs=1).fit(X_train, y_train) #Training the model
#Predicting the test set
y_pred_RF = RF.predict(X_test)
#Checking the accuracy,precision,recall and f1 scores
accuracy_score_RF = accuracy_score(y_test, y_pred_RF)
precision_score_RF = precision_score(y_test, y_pred_RF , average = 'micro')
recall_score_RF = recall_score(y_test, y_pred_RF, average = 'micro')
f1_score_RF = f1_score(y_test, y_pred_RF, average = 'micro')
print("Accuracy Score: " , accuracy_score_RF)
print("Precision Score: " , precision_score_RF)
print("Recall Score: " , recall_score_RF)
print("F1 Score:" , f1_score_RF)
print("Classification Report:\n",classification_report(y_test, y_pred_RF))
#Constructing the confusion matrix
cm_RF = confusion_matrix(y_test, y_pred_RF)
df_cm_RF = pd.DataFrame(cm_RF)
categories = ['Negative','Neutral','Positive']
plt.figure(figsize=(10,8))
sns.heatmap(df_cm_RF, cmap='inferno' ,annot=True, annot_kws={"size": 18}, xticklabels = categories,
yticklabels = categories,fmt="d")
```
### Hyperparameter Optimization
```
#Getting the parameters of the particular model
RF.get_params().keys()
RF_opti1 = RandomForestClassifier()
param_grid = {"n_estimators": np.arange(100,1500,100), #Selecting the parameters
"max_depth": np.arange(1,20),
"criterion": ["gini","entropy"],
}
model_RF1 = model_selection.RandomizedSearchCV(estimator=RF_opti1, #RandomizedSearch
param_distributions=param_grid,
n_iter=10,
scoring='accuracy',
verbose=10,
n_jobs=1,
cv=5)
model_RF1.fit(X_train,y_train)
print(model_RF1.best_score_) #Gives the best score of the model
print(model_RF1.best_estimator_.get_params()) #Gives the best parameters of the model
```
```
import matplotlib.pyplot as plt
import numpy as np
from struct import unpack
from sklearn import cluster
import datetime
import hdbscan
import seaborn as sns
from sklearn.preprocessing import PowerTransformer, normalize, MinMaxScaler, StandardScaler
from tsnecuda import TSNE
from struct import pack
from sklearn_extra.cluster import KMedoids
def transform(_in) :
    # split vg / h
    np_vgh = _in
    np_vg = np.delete(_in, 2, axis = 1) # drop column 3
    np_h = (_in[:,2]).reshape(-1,1) # extract column 3
    #log(h)
    np_logh = np.log(np_h[::]+1)
    # append logh
    np_vgh_logh = np.hstack((np_vgh,np_logh))
    # normalized v,g,h,logh
    np_normal_l1_vgh_logh = normalize(np_vgh_logh, axis=0, norm='l1')
    np_normal_l2_vgh_logh = normalize(np_vgh_logh, axis=0, norm='l2')
    np_normal_max_vgh_logh = normalize(np_vgh_logh, axis=0, norm='max')
    #tr standardization
    std_scaler = StandardScaler()
    fitted = std_scaler.fit(np_vgh_logh)
    np_std_vgh_logh = std_scaler.transform(np_vgh_logh)
    #print(np_std_vgh_logh)
    #tr min-max scale
    min_max_scaler = MinMaxScaler()
    min_max_scaler.fit(np_vgh_logh)
    np_min_max_vgh_logh=min_max_scaler.transform(np_vgh_logh)
    #print(np_min_max_vgh_logh)
    #yeo-johnson
    pt_vgh_logh = PowerTransformer(method='yeo-johnson')
    pt_vgh_logh.fit(np_vgh_logh)
    np_yeojohnson_vgh_logh = pt_vgh_logh.transform(np_vgh_logh)
    ret = []
    ret.append( _in )
    ret.append( np.delete(np_vgh_logh, 2, axis = 1) )
    ret.append( np.hstack((np_normal_l1_vgh_logh[:,[0,1]], np_h)) )
    ret.append( np.hstack((np_normal_l1_vgh_logh[:,[0,1]], np_logh)) )
    ret.append( np.delete(np_normal_l1_vgh_logh, 2, axis = 1) )
    ret.append( np.delete(np_normal_l1_vgh_logh, 3, axis = 1) )
    ret.append( np.hstack((np_normal_l2_vgh_logh[:,[0,1]], np_h)) )
    ret.append( np.hstack((np_normal_l2_vgh_logh[:,[0,1]], np_logh)) )
    ret.append( np.delete(np_normal_l2_vgh_logh, 2, axis = 1) )
    ret.append( np.delete(np_normal_l2_vgh_logh, 3, axis = 1) )
    ret.append( np.hstack((np_normal_max_vgh_logh[:,[0,1]], np_h)) )
    ret.append( np.hstack((np_normal_max_vgh_logh[:,[0,1]], np_logh)) )
    ret.append( np.delete(np_normal_max_vgh_logh, 2, axis = 1) )
    ret.append( np.delete(np_normal_max_vgh_logh, 3, axis = 1) )
    ret.append( np.hstack((np_std_vgh_logh[:,[0,1]], np_h)) )
    ret.append( np.hstack((np_std_vgh_logh[:,[0,1]], np_logh)) )
    ret.append( np.delete(np_std_vgh_logh, 2, axis = 1) )
    ret.append( np.delete(np_std_vgh_logh, 3, axis = 1) )
    ret.append( np.hstack((np_min_max_vgh_logh[:,[0,1]], np_h)) )
    ret.append( np.hstack((np_min_max_vgh_logh[:,[0,1]], np_logh)) )
    ret.append( np.delete(np_min_max_vgh_logh, 2, axis = 1) )
    ret.append( np.delete(np_min_max_vgh_logh, 3, axis = 1) )
    ret.append( np.hstack((np_yeojohnson_vgh_logh[:,[0,1]], np_h)) )
    ret.append( np.hstack((np_yeojohnson_vgh_logh[:,[0,1]], np_logh)) )
    ret.append( np.delete(np_yeojohnson_vgh_logh, 2, axis = 1) )
    ret.append( np.delete(np_yeojohnson_vgh_logh, 3, axis = 1) )
    return ret
def _TSNE(learning_rate, data) :
    model = TSNE(learning_rate=learning_rate)
    print("TSNE calc : ", end='')
    startTime = datetime.datetime.now()
    transformed = model.fit_transform(data)
    endTime = datetime.datetime.now()
    diffTime = endTime-startTime
    exeTime = diffTime.total_seconds() * 1000
    print(exeTime,'ms')
    return transformed
index = ["vgh", "vglogh",
"n_l1_vg_h", "n_l1_vg_logh", "n_l1_vgh", "n_l1_vglogh",
"n_l2_vg_h", "n_l2_vg_logh", "n_l2_vgh", "n_l2_vglogh",
"n_max_vg_h", "n_max_vg_logh", "n_max_vgh", "n_max_vglogh",
"std_vg_h", "std_vg_logh", "std_vgh", "std_vglogh",
"mm_vg_h", "mm_vg_logh", "mm_vgh", "mm_vglogh",
"pt_vg_h", "pt_vg_logh", "pt_vgh", "pt_vglogh"
]
w=256
h=256
size = w*h
all_val_grad_hist = []
sparse_val_grad_hist = []
#'VisMale_128x256x256','bonsai256X256X256B', 'Carp_256x256x512','XMasTree-LO_256x249x256'
for dataset in ['abdomen_256x256x128'] :
    with open('../volumeCache/%s.raw.2DHistogram.TextureCache'%(dataset), 'rb') as fp:
        Histogram2DYMax = unpack('<f', fp.read(4))[0] #Max of Grad_mag
        for i in range(h) :
            for j in range(w):
                readdata = unpack('<L', fp.read(4))[0]
                all_val_grad_hist.append([i, j, readdata])
                if readdata>=1 and i!=0:
                    sparse_val_grad_hist.append([i, j, readdata])
    np_all_val_grad_hist = np.array(all_val_grad_hist)
    np_sparse_val_grad_hist = np.array(sparse_val_grad_hist)
    ret_all_array = transform(np_all_val_grad_hist)
    ret_sparse_array = transform(np_sparse_val_grad_hist)
    tsne_all_array = []
    tsne_sparse_array = []
    ###########################################################################################################
    #for i in range(len(ret_all_array)):
    #    print("%d_%s_%s"% (i, dataset, index[i]), end='\t')
    #    tsne_all_array.append(_TSNE(100,ret_all_array[i]))
    #for i in range(len(tsne_all_array)):
    #    save_tsne_result( tsne_all_array[i], "tsneCache", "%d_%s_%s"% (i, dataset, index[i]) )
    #for i in range(len(tsne_all_array)):
    #    for _k in [10,15,20]:
    #        kmeans(_k, tsne_all_array[i], np_all_val_grad_hist, 256,256, "%d_%s_%s"% (i, dataset, index[i]) )
    #
    #    for _eps in [0.5, 1.0, 1.5, 3.0]:
    #        dbscan(_eps, tsne_all_array[i], np_all_val_grad_hist, 256,256, "%d_%s_%s"% (i, dataset, index[i]))
    #        Hdbscan(300, 20, 1.0, tsne_all_array[i], np_all_val_grad_hist, 256,256, "%d_%s_%s"% (i, dataset, index[i]) ) # *
    ###########################################################################################################
    for i in range(len(ret_sparse_array)):
        print("%s_%d_sparse_%s"% ( dataset, i,index[i]), end='\t')
        tsne_sparse_array.append(_TSNE(100,ret_sparse_array[i]))
    np_tsne_sparse_array = np.array(tsne_sparse_array)
    print(tsne_sparse_array)
```
# Table of Contents
## 1. Before Data Analysis (Warm-up)
1-1. Purpose of the analysis
1-2. Defining requirements
## 2. Basics of Statistics
2-1. Mean and standard deviation
2-1-1. Representative values
2-1-2. Population and sample
2-1-3. Random Sampling
2-2. Descriptive and inferential statistics
2-2-1. Descriptive statistics
2-2-2. Inferential statistics
2-3. EDA
2-3-1. Visualization
2-3-2. Central limit theorem
2-4. Point and interval estimation
2-4-1. Point estimation
2-4-2. Interval estimation
2-5. Outliers
2-6. Hypothesis testing
2-6-1. Null hypothesis
2-6-2. Alternative hypothesis
2-6-3. Test statistic
## 3. Various Distributions (Z, T, F)
obj: How are differences between groups verified statistically?
3-1. Z distribution
3-2. T distribution
3-3. F test
## 4. Correlation and Regression
How is the similarity of data measured?
How are predictions made?
# 1. Data Analysis Warm-up
## 1-1. What to do before starting data analysis
> - the purpose of the analysis
> - defining the requirements
## 1-2. Signal and Noise
> - garbage in, garbage out
> - the 'noise' must be removed.
## 1-3. Quiz
### 1-3-1. Designing a classification algorithm in your head
- How to tell cats and dogs apart
> - Assumption 1: 500 cat photos and 500 dog photos
> - Assumption 2: like a child, the learner knows when it was right and when it was wrong.
> - Assumption 3: it keeps trying to reduce the number of mistakes.
- Feedback
> - binary classification: male-female, dog-cat
>> - the old approach: hand-crafted features. But this has many problems (infinitely many cases, and a hard limit on further improvement)
> - solution: split the data into train (50%) / validation (25%) / test (25%) (Threshold)
>> - 1) Train Data (50%)
>>> - gives feedback.
>> - 2) Validation Data (25%)
>>> - gives feedback.
>> - 3) Test Data (25%)
>>> - gives no feedback.
> - For binary classification, the class proportions in the training data must be equal.
>> - If you train on 1,000 photos, they should be 500 cats and 500 dogs.
>> - If the amounts of training data differ dramatically, the algorithm's answers are pushed toward the majority class.
> - __with labeled data: supervised learning__
> - __without labeled data: unsupervised learning: Clustering / the most common case in practice__
- - -
- AI we see around us
> - Facebook friend recommendations
> - Walmart's beer and diapers
>> - conjecture: on Friday evening the wife asks the husband to pick up diapers + the husband buys beer
> - Netflix movie recommendations
> - YouTube video recommendations
- - -
### 1-3-2. Recommending videos
- Personalized video recommendation: how can we make the best possible recommendations?
> - Assumption 1) we have the video url, each user's view count for each video, the time of each view, and basic user info (id, gender, age)
> - Assumption 2) performance: unlimited speed and cost ... we only want 'highly accurate recommendations'
- Feedback
> - recommendation algorithms
>> - 1) __[CF](http://khanrc.tistory.com/entry/%EC%B6%94%EC%B2%9C-%EC%8B%9C%EC%8A%A4%ED%85%9CRecommendation-System)__
>>> - Item-Based
>>> - User-Based
>>>> - a user-based matrix
>>>> - pattern: the order in which videos were watched (define each user's pattern by viewing order and recommend from it)
>>>> - overcomes the limits of item attributes
>>>> - if Cheolsu watched 0->3->4 and Younghee watched 0->3, recommend video 4 to her next
>>>> - but the cost is enormous
>> - __2) Content-Based__
>>> - cannot escape the limits of item attributes.
>>> - cannot escape the limits of the content already consumed.
> - Cold Start
>> - when there is no information about a user's preferences
>> - there is no real solution, which is why Facebook collects so much personal data.
>> - Watcha and Netflix also survey new users' preferences on sign-up.
>> - then match them to users with a similar viewing order
# 2. Basics of Statistics
## 2-1. Contents
- mean and standard deviation
- population and sample
## 2-2. Random Sampling
- n = number of samples
- Population
- Sample
- What we want to know: the population mean (mu) and standard deviation (sigma) ... which cannot be measured directly
> - hence Sampling
>> - random-sample while accounting for everything we can (gender, region, age, etc.)
>> - sampling must take bias into account.
>> - biased data is contaminated.
- What we can measure
> - __the sample mean (x_bar)__
> - __the sample standard deviation (S)__
## 2-3. Descriptive and Inferential Statistics
- Descriptive statistics
> - statistics that follow directly from formulas
> - when the population is known
- Inferential statistics
> - inferring the population from a sample
## 2-4. EDA (Exploratory Data Analysis)
### 2-4-1. Visualization
> - grasp the shape of the distribution
### 2-4-2. Central Limit Theorem
- If the number of samples is large enough, the distribution of the sample mean converges to a normal distribution regardless of the underlying distribution, and the sample mean gets ever closer to the true mean.
> - (without the normality assumption) problems are hard to solve statistically
> - (if the data are not normal) non-parametric tests are needed... but in most cases we assume a normal distribution.
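The theorem can be seen in a few lines of NumPy (a sketch; the heavily skewed exponential population is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=100_000)  # skewed, clearly non-normal

# distribution of sample means for n = 30
sample_means = np.array([rng.choice(population, size=30).mean()
                         for _ in range(2000)])

print(population.mean())   # true mean, close to 2.0
print(sample_means.mean()) # close to the true mean
print(sample_means.std())  # close to population.std() / sqrt(30)
```

A histogram of `sample_means` looks roughly bell-shaped even though the population itself is exponential.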
### 2-4-3. Representative values
- Mean
- Median
- Mode
### 2-4-4. Formulas
<img src = "https://photos-3.dropbox.com/t/2/AADRotX6KXnI4OjbB1nBh5XNfcGWss-slgX9c5wHd13twg/12/848258672/png/32x32/3/1528135200/0/2/Screenshot%202018-06-04%2022.22.00.png/EPDS_JsJGGsgAigC/J4WddRT0R0--Xs4hlNFRH9i5eu6ctrePwtgA4FYIiIQ?dl=0&preserve_transparency=1&size=2048x1536&size_mode=3">
- sigma squared = variance
- the square root of the variance = the standard deviation
## 2-5. Point and Interval Estimation
- There are two approaches to estimation.
### 2-5-1. Point estimation
> - point to point (sample <-> population)
> - take the value obtained from the sample as the population value
### 2-5-2. Interval estimation
> - based on point estimation
> - estimate that the mean lies roughly within some interval
<img src = "https://photos-2.dropbox.com/t/2/AABOnHf7fhpmPyhKUg4cJsiF_2giCeQcLAz9nM_8H1LJtA/12/848258672/png/32x32/3/1528135200/0/2/Screenshot%202018-06-04%2022.24.04.png/EPDS_JsJGGwgAigC/RaOAF_hjSC0ESBbOJ6b-gNx5etREflZPFqeFfulEWqo?dl=0&preserve_transparency=1&size=2048x1536&size_mode=3">
## 2-6. Outliers
- Extreme values must always be handled in a preprocessing step.
- They matter in machine learning too.
- Criteria for outliers
> - empirical criteria
> - relative to the mean... discard values beyond the interval estimate.
>> - It differs for every dataset.
>> - There is no single answer; sometimes outliers are caught, sometimes they are not.
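One common empirical cutoff is the 1.5-times-IQR rule (an assumption here; the notes above do not fix a specific criterion):

```python
import numpy as np

def iqr_outliers(x, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return x[(x < lo) | (x > hi)]

data = np.array([10, 12, 11, 13, 12, 11, 95])  # 95 is an obvious outlier
print(iqr_outliers(data))  # → [95]
```

Whether to drop, cap, or keep the flagged values is, as the notes say, a judgment call per dataset.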
## 2-7. Null Hypothesis Testing
- statistically homogeneous -> null hypothesis (H0)
- not statistically homogeneous -> alternative hypothesis (H1)
- The null and alternative hypotheses must be set independently of your goal.
> - Even the claim you want to make can turn out to be the null hypothesis (rather than the alternative).
> - The null hypothesis is like the world's common-sense position.
- Is the null hypothesis the opposite of my claim?
> - null hypothesis = the claim opposite to mine = H0?
- Test statistic
> - the value actually observed
- Criterion
> - significance level (alpha, α): 0.05 (5%)
>> - a 95% confidence interval
- If you want to argue that you are right, you cannot test that claim directly...
> - you look at the probability under the opposite claim.
## 2-8. Why we go through all this trouble
### All of this struggle comes down to Random Sampling.
- Random sampling error inevitably arises.
- There is no way to tell whether a result is due to random chance or a real effect.
> - So a test must be run.
## 2-9. Quiz
- Statistically verify the effect of a newly developed headache medicine
> - assume the headache-relief effect can be measured via EEG intensity
> - given: a budget, 100 patients verified to have severe headaches, similar in age, gender, and living environment
- 1) Concretely, how can we verify the desired result?
> - First, define the effect. What counts as an effect?
>> - verify whether the measured EEG value decreases
> - data from 100 people (sampling)
>> - 1) __Between (groups)__
>>> - randomly split into 2 groups.
>>> - mean EEG of the treated group and the untreated group
>>> - null-hypothesis test of whether the means differ (check statistically)
>>> - null hypothesis: there is no difference between the two groups.
>> - 2) __Within (a group)__
>>> - 1st dose
>>> - 2nd dose
>>> - 3rd dose
>>> - test the mean difference between taking and not taking the medicine
>>> - the null hypothesis is the same, but the same group is exposed repeatedly.
- 2) What is the null hypothesis here?
> - the opposite of my claim (the null hypothesis ... statistically homogeneous)
> - there is no difference between the two groups.
# 3. Testing Differences Between Groups (Z, T, F)
- How are differences between groups verified statistically?
- - -
## 3-1. Standardization and the Z Distribution
### 3-1-1. Standardization
- We want to compare two groups, but raw values cannot be compared directly.
- The units differ, so the units must be unified.
- So we express everything in units of sigma
- the distance from the mean
- expressed in units of sigma
- this makes comparison of relative position possible.
<img src="https://photos-5.dropbox.com/t/2/AABDBIDvz_4QdXp41Dq10kxaE6WhbE47bNbNEE1ry4BzPQ/12/848258672/png/32x32/3/1528628400/0/2/Screenshot%202018-06-10%2015.11.07.png/EPDS_JsJGHkgAigC/kqJkenodIFDjpnCNuKiovzI5EWvNcDcieWK5-dDetUA?dl=0&preserve_transparency=1&size=2048x1536&size_mode=3">
- - -
### 3-1-2. The Z distribution
- to make relative comparisons (to speak in probabilities), we use z
- converting every value to z gives the z distribution
- the standard normal distribution
- values are distributed probabilistically around the mean according to the variance
- the mean (expected value) is always 0
- mean 0, variance 1, total area 1
- ex) The average height of Korean adults in their 20s is 173 cm with a standard deviation of 5. Compute by hand what percentile my height falls into.
- Z = (x - μ) / σ
- μ = 173, σ = 5
- (174 - 173) / 5 = 0.2
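The worked example above (μ = 173, σ = 5, x = 174) can be checked with the standard normal CDF from SciPy:

```python
from scipy.stats import norm

mu, sigma, x = 173, 5, 174
z = (x - mu) / sigma      # 0.2
pct_below = norm.cdf(z)   # fraction of the population shorter than x
print(z)
print(1 - pct_below)      # fraction taller, i.e. the "top N%"
```

A z of 0.2 corresponds to roughly the 58th percentile, so a 174 cm person is in about the top 42%.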
- - -
### 3-1-3. Confidence intervals
<img src = "https://photos-1.dropbox.com/t/2/AACSCDCk4Z0kAjHho8Py-6JbWzvWi6s3iD6DFrZDkYxJHA/12/848258672/png/32x32/1/_/1/2/Screenshot%202018-06-16%2019.19.02.png/EPDS_JsJGIEBIAIoAg/IfGZyB2jVAbNjo2J4iEY0Y5rxgyfw8uDvxWRRBCMw1Q?preserve_transparency=1&size=2048x1536&size_mode=3">
- - -
### 3-1-4. Quiz
- Experimental design 1
- Verifying the effect of a TOEIC workbook
- We created a new TOEIC workbook and want to check whether it is effective at raising TOEIC scores.
- There are broadly 2 ways to find out.
- 1) define the effect & n
- effect: 10%
- n: (people of similar age and score) who took the TOEIC within the last 6 months and plan to take it again within 6 months
- 2) Assuming N = 100 people can be recruited, define the two methods.
- between groups
- randomly split into 2 groups.
- a group that used the new workbook and one that did not
- compare mean mock-exam scores
- within a group
- repeated exposure within the same group
- compare mean mock-exam scores
- 3) Be sure to define the null hypothesis for each method as well
- null hypothesis: there is no difference.
- the new TOEIC workbook has no effect on raising scores.
- alternative hypothesis: there is a difference.
- the new TOEIC workbook is effective at raising scores.
- Experimental design 2
- 4) Identify the pros and cons of each method
- Method 1
- pro: easy comparison against a control group.
- con: sampling bias
- Method 2
- pro: you can observe the trend
- con: overfitting due to repeated learning
### 3-1-5. Quiz feedback
- Evaluation criteria
- 1) the null hypothesis and the effect
- 2) how to sample
- 3) how to define the pros and cons of each method
- The problem
- defining the null hypothesis & the effect size
- to discuss a trend we have to talk about mean values.
- null hypothesis (H0)
- H0 = H1
- there is no mean difference before and after treatment.
- alternative hypothesis
- H0 != H1
- there is a mean difference before and after treatment.
- effect size
-
- Sampling
- say we recruit n = 100 people.
- sample people with similar characteristics so the groups are easy to define.
- a similar score range
- Defining the method
- within a group (within) ... pharmaceutical companies, IT companies (A/B Test)
- pro: useful for observing a trend
- con: the learning effect cannot be ruled out
- the ceiling-effect problem
- results may fluctuate from one trial to the next
- between groups (between) ... laboratory settings
- pro:
- con: control is difficult
- Even with the same data, results differ by analysis method, so the method must be defined carefully.
- - -
### 3-1-6. The problem with repeated hypothesis testing: 'Type I error'
- Our assumption (when testing hypotheses)
- fix the interval at 95% and reject the null hypothesis only when the probability is below 5%
- because 5% events rarely happen in reality
- However, although 5% is indeed rare in real life, it is still a probability that does occur.<br>
<br>
- That is, we wrote the 5% off as coincidence and rejected the null hypothesis...
- but in reality it may not be coincidence at all: it may actually be so.<br>
<br>
- In other words, the error (or probability) of rejecting a null hypothesis that should not have been rejected is what statistics calls a Type I error.
- Whenever we run a statistical hypothesis test, the Type I error rate is fixed automatically by the interval we chose, and repeating the test accumulates the error.
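The accumulation has a closed form: with m independent tests each at α = 0.05, the chance of at least one false rejection is 1 − 0.95^m:

```python
alpha = 0.05
for m in (1, 5, 10, 20):
    familywise = 1 - (1 - alpha) ** m  # P(at least one Type I error in m tests)
    print(m, round(familywise, 3))
```

Already at 10 tests the familywise error is about 40%, which is why repeated pairwise testing is dangerous.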
- - -
### 3-1-7. Solution 1: the T-test
- Besides the Type I error we just saw, for various other reasons (chiefly that the population is unknown), Z analysis is rarely used in real life
- instead we use the t distribution, whose shape changes with the degrees of freedom, to infer population differences __from samples__
- __it tests whether two groups differ__
#### 3-1-7-1. Defining the difference: within-group vs. between-group
- t analysis is most often used for __between-group difference__ analysis
- ex 1) testing a newly developed drug ... within-group design
- A hospital developed a new drug and, to see whether it works, gave it to 10 patients in 3 doses, checking their condition after each dose.
- If the condition improved as the drug was taken, we can say the drug is effective.
- This experimental method is called a __'within-group design'__; because the same single group is measured repeatedly, it is also called a __'repeated-measures design'__.
- In this case a repeated-measures t-test must be used.<br>
<br>
- ex 2) testing a newly made TOEIC workbook ... between-group design
- Recruit 100 people with similar TOEIC scores and divide them into 2 groups of 50.
- Have the first group use the new workbook, and let the second group carry on as before.
- After a certain period, if the group that used the new workbook has a higher mean TOEIC score, we can say the workbook is effective at raising scores. <br>
<br>
- The second approach is closest to what we usually think of as an 'experiment'.
- That is, we set up an experimental group that receives a treatment and a control group that does not, and __compare the difference in 'mean' values before and after treatment__.
- This design is called a __'between-group design'__, and here an __'independent samples t-test'__ must be used.
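Both designs map onto SciPy directly: `ttest_rel` for the repeated-measures (within-group) case and `ttest_ind` for the independent (between-group) case. A sketch on invented score data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# between-group design: independent samples t-test
group_a = rng.normal(700, 50, size=50)  # used the new workbook (hypothetical scores)
group_b = rng.normal(650, 50, size=50)  # control group
t, p = stats.ttest_ind(group_a, group_b)
print(t, p)

# within-group design: the same subjects measured before and after
before = rng.normal(650, 50, size=30)
after = before + rng.normal(20, 10, size=30)  # improvement after treatment
t2, p2 = stats.ttest_rel(before, after)
print(t2, p2)
```

In both cases a p-value below α rejects the null hypothesis of "no mean difference".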
#### 3-1-7-2. Interpreting a T-test
- α (alpha), p-value ... null-hypothesis testing
- test for homogeneity of variances (the procedure differs depending on whether the variances are equal or not)
- the same result can come from different distributions if the variances differ.
- the case where the two groups have equal variances
- the case where the two groups have unequal variances
- even identical mean values can yield different t-test values.
- Between-group comparison must take variance (how spread out the distributions are) into account.
- The variances of the two groups should be roughly equal.
- significance level (α) < p-value
- there is no statistically significant difference between the groups.
- reject the alternative hypothesis
- α > p-value
- there is a statistically significant difference between the groups.
- reject the null hypothesis
<img src="https://photos-1.dropbox.com/t/2/AAAVbbpo-0wC1rJoUHHAClIyjvRJNZmhAnurqvR6Jn0kLQ/12/848258672/png/32x32/3/1528635600/0/2/Screenshot%202018-06-10%2017.26.59.png/EPDS_JsJGHogAigC/LX7GX8jSndNrfwar_aDxSBbChKiA0JVSv_ZXBJhflgo?dl=0&preserve_transparency=1&size=2048x1536&size_mode=3">
- Type I error: the null hypothesis is true but is rejected
- Type II error: the alternative hypothesis is true but is rejected

| Truth \ Decision | Accept H0 | Reject H0 |
|---|---|---|
| *H0 True* | correct decision | Type I error = α |
| *H0 False* | Type II error = β | correct decision |

- Type I error: rejecting the null hypothesis when it is true
- Type II error: rejecting the alternative hypothesis when it is true
- A Type I error is rejecting a true null hypothesis,
- and a Type II error is accepting a false null hypothesis.
- When n exceeds 30, the distribution is treated as equivalent to the standard normal.
- it is considered normalized.
- even a skewed distribution normalizes as the number of samples grows.
- start by understanding the properties of the data
-
- the algorithm comes last
#### 3-1-7-3. Limitations of the T-test
- Since the approach is the same as Z analysis, the Type I error problem remains.
- Q. Verify statistically whether two groups differ.
- To test whether two groups differ, assume the null hypothesis 'there is no difference between the groups' and examine the distribution under it.
- Compute the probability of obtaining the observed between-group difference.
- If that probability is too low (typically below 5%), reject the null hypothesis of no difference and accept the alternative hypothesis.
- - -
### 3-1-8. Solution 2: the F test (ANOVA)
- So far we have defined a between-group difference as a difference in 'mean' values
- There is one more quantity that characterizes a distribution besides the mean: the 'variance' (how spread out the distribution is)<br>
<br>
- the heavyweight tool for between-group comparison
- used when comparing three or more groups
- it compares via variances rather than means alone<br>
<br>
- As formulas:
- H0: A = B = C
- H1: at least one of A, B, C differs.<br>
<br>
- Limitation of the F test
- it tells you whether there is a difference among the groups,
- but if there is one... it does not tell you where the difference lies. <br>
<br>
- If the F test finds a significant difference, pin it down pairwise with t-tests.
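In SciPy this is `f_oneway`; a sketch with three invented groups, only one of which is shifted:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0, 1, 100)
b = rng.normal(0, 1, 100)
c = rng.normal(1, 1, 100)  # shifted group

f, p = stats.f_oneway(a, b, c)
print(f, p)  # small p: at least one group differs, but not which one

# follow up pairwise with t-tests to locate the difference
print(stats.ttest_ind(a, b).pvalue)
print(stats.ttest_ind(a, c).pvalue)
```

Note the multiple-testing caveat from 3-1-6 applies to the follow-up pairwise tests; in practice a correction (e.g. Bonferroni) is usually applied.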
- - -
### 3-1-9. Assumptions
- 1) independence<br>
- <br>
- 2) normality
- the distribution becomes normal automatically as the sample size grows.
- around n = 30
- 3) homogeneity of variances
- - -
# 4. Correlation and Regression
- How is the similarity of data measured?
- How are predictions made?
- - -
## Part 1. Finding Behavioral Patterns First: Correlation
- In statistics,
- knowing what happened = knowing the data's pattern or trend.
- Knowing the data's pattern or trend means being able to predict what comes next.
- For example, suppose there is a pattern of increase over time and we know the increase per unit time mathematically.
- We can then guess that... it will probably continue the same way in the next period.
- The same holds for a decreasing pattern.
- That is, if we can identify the pattern by which data changes, we can quantify how similar two series are.
- And since we then know the relationship, we can also predict how it will change <br>
<br>
- Analysis that learns the pattern of past data and predicts the future under the assumption that the pattern continues is called regression analysis.
- We started out measuring similarity from data, and prediction came along automatically.
- (1) define 'similar' = look at the pattern = correlation analysis (Pearson r)
- (2) if (1) reveals the pattern between the groups, we can also predict how things will change! = regression analysis
<br>
<br>
- Correlation analysis
- measures the association between two data series.
- correlation and regression are inseparable.
- Pearson r (correlation) = (how much they vary together) / (how much each varies on its own) = (covariance) / (variance)<br>
<br>
- covariance
- each series varies on its own (variance), and there is a pattern of varying together (covariance).
- from that we can tell how similar they are.
- the correlation value lies between -1 (minimum) and 1 (maximum)
- if r = 1, they are identical: y = x
## 4-1. Quiz
- correlation values for given data
- Compute the correlation of two groups X and Y.
- Here, X = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
<br>
- If r(xy) = 1, what could the values of Y be?
- If r(xy) = 0, what could the values of Y be?
- If r(xy) = -1, what could the values of Y be? <br>
<br>
- Understanding correlation and sketching graphs
- If r = -1, what shape does the relationship graph of the two groups take?
- Between r = -1 and r = 0, which indicates higher similarity between the two groups?
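Candidate answers to the quiz can be checked with `scipy.stats.pearsonr` (these Y choices are just illustrative):

```python
import numpy as np
from scipy import stats

X = np.arange(1, 11)

# any increasing linear function of X gives r = 1
print(stats.pearsonr(X, 2 * X + 3)[0])

# a decreasing linear function gives r = -1
print(stats.pearsonr(X, -X)[0])

# a pattern symmetric about the center of X gives r = 0
Y0 = np.array([1, 2, 3, 4, 5, 5, 4, 3, 2, 1])
print(stats.pearsonr(X, Y0)[0])
```

Note that |r| = 1 means a perfect linear relationship, so r = -1 is just as "similar" as r = 1; only the direction differs.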
## 4-2. Feedback
- R
- in r, + and - indicate direction only.
- 0.4 or above: a positive correlation
- 0.7 or above: a strong positive correlation
- R^2
## 4-3. Correlation and Causation
- [Chocolate and Nobel prizes](http://www.dt.co.kr/contents.html?article_no=2014111902102251607001)
- similarity speaks only to the pattern of the data.
- do not confuse correlation with causation<br>
<br>
- [Vaccine effectiveness](http://www.nocutnews.co.kr/news/4389170)
- causation must not be argued from a correlation value.
- to verify a vaccine's effect, causation must be examined.
- a vaccine's effect must not be interpreted through correlation.<br>
<br>
- Why run this analysis at all?
- the purpose of correlation analysis is to measure the similarity of two data series.
- correlation looks at the similarity of data.
- similarity does not establish causation.<br>
<br>
- It applies only to linear relationships.
- user recommendation
- building (computing) a similarity measure
- Pearson correlation
- rarely used in practice,
- because it can only be used for linear relationships<br>
<br>
## Part 2. Prediction: Regression Analysis
### 4-4. Optimization
- Find the model that best explains the pattern: the meaning of data-fitting and modeling
- regression analysis = data-fitting = modeling
- the LMS algorithm
- error: least mean squares (Least Mean Square, Mean Squared Error)
- RMSE (Root Mean Squared Error)
- taking the square root brings the error back to the scale of the original data
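MSE and RMSE take only a few lines (toy predictions invented for the sketch):

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.5, 5.0, 7.5, 10.0])

mse = np.mean((y_true - y_pred) ** 2)  # mean squared error
rmse = np.sqrt(mse)                    # back on the scale of y
print(mse, rmse)
```

Minimizing MSE over the slope and intercept of a line is exactly what least-squares fitting (e.g. `stats.linregress` below) does.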
### 4-5. Testing the regression line
- R = Pearson R
- sig. = p-value
### 4-6. Simple vs. Multiple Regression
#### Multicollinearity (at 1h 09m 30s)
### 4-7. Linear vs. Nonlinear Regression
### 4-8. Logistic Regression
- Characteristics
- the output is binary (i.e. 0 or 1).
- in other words, it is used to classify data
- given data such as male vs. female, it predicts 'male/female' for a new data point
- this analysis uses the 'chi-square distribution' and the concept of 'likelihood'
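A minimal scikit-learn sketch (one invented feature with a binary label that flips around x = 0):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# toy data: one feature, binary label noisily determined by the sign of x
rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(200, 1))
y = (X[:, 0] + rng.normal(0, 0.3, size=200) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
print(clf.predict([[2.0], [-2.0]]))  # clearly positive / negative points
print(clf.predict_proba([[0.0]]))    # near the boundary: probabilities near 0.5
```

The model outputs a probability via the sigmoid and thresholds it at 0.5 to give the 0/1 class.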
- - -
#### Overshooting
- a data problem
- it happens because the cost is too large.
- rescaling: min-max scaling, standardization
#### Machine Learning
- Machine learning
- goals
- 'classification'
- 'prediction'
- methods
- supervised learning
- hard to use in practice, because there is often no ground-truth answer.
- unsupervised learning
- downside: did it actually do what we wanted? We cannot tell, because there is no answer; we never gave the model one. The machine can only compute mathematically in vector space.
- What matters
- defining the requirements and goal
- examining the data
- the algorithm comes last
```
# Correlation example
import numpy as np
from scipy import stats
A = np.array([1,2,3,5,7,9])
B = np.array([3,6,3,2,1,9])
pearson_r = stats.pearsonr(A,B)
print('pearson r: ', pearson_r[0],' \n' 'p - value: ', pearson_r[1])
# Simple linear regression example
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
A = np.array([1,2,3,5,7,9])
B = np.array([3,6,3,2,1,9])
slope, intercept, r_value, p_value, std_err = stats.linregress(A,B)
result = stats.linregress(A,B)
plt.plot(A, B, 'o', label='original data')
plt.plot(A, intercept + slope*A, 'r', label='fitted line')
plt.legend()
plt.show()
print('slope: ', result[0],' ' 'intercept: ', result[1], ' ' 'pearson r: ',
      result[2], ' ' 'p-value: ', result[3], ' ' 'std err: ', result[4], ' =',result[0],'X','+',result[1])
```
# Load data
```
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
np.set_printoptions(precision=3, linewidth=120)
import sys
sys.path.append("..")
from scem import ebm, stein, kernel, util, gen
from scem.datasets import *
import matplotlib.pyplot as plt
from tqdm import notebook as tqdm
dname = "banana"
p = load_data(dname, D=2, noise_std = 0.0, seed=0, itanh=False, whiten=False )
x = p.sample(1000)
x_eval = p.sample(100)
import torch
import torch.nn as nn
import numpy as np
import torch.distributions as td
class EBM(nn.Module):
    '''
    EBM
    '''
    def __init__(self, Dx, Dz, Dh):
        super().__init__()
        self.layer_1 = nn.Sequential(nn.Linear(Dx, Dh), nn.ELU(), nn.Linear(Dh, Dz))
        self.W = nn.Parameter(torch.randn(Dz,Dz) / 10.)
        self.Wx = nn.Parameter(torch.randn(Dz,Dz) / 10.)
        self.b1 = nn.Parameter(torch.randn(Dz) / 10.)
        self.b2 = nn.Parameter(torch.randn(Dz) / 10.)
        self.c = nn.Parameter(torch.randn(1))
    def forward(self, X, Z):
        # h = self.layer_1(X)
        W = self.W
        Wx = self.Wx @ self.Wx.T
        E = -torch.einsum('ij,jk,ik->i', X, Wx, X) + \
            torch.einsum('ij,jk,ik->i', X, W, Z) + X @ self.b1 + Z @ self.b2 + self.c
        E = E - ((X**2).sum(-1)/20 + (Z**2).sum(-1))
        return E
# dimensionality of model
Dx = 2
Dz = 2
Dh = 100
lebm = ebm.LatentEBMAdapter(EBM(Dx, Dz, Dh), var_type_obs='continuous', var_type_latent='continuous')
def weight_reset(m):
    if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear):
        m.reset_parameters()
X = torch.as_tensor(x, dtype=torch.float32)
# define kernel
# KSD kernel on x
med2_x = util.pt_meddistance(X)**2
#kx = kernel.KIMQ(b=-0.5, c=1, s2=med2)
med2_px = torch.tensor([med2_x], requires_grad=False)
base_kernel = kernel.BKGauss(med2_px)
class LinFeat(nn.Module):
    def __init__(self, Dx):
        super(LinFeat, self).__init__()
        self.scales = nn.Parameter(torch.zeros([1]))
    def forward(self, X):
        return X * (torch.exp(self.scales))
lin_feat = LinFeat(Dx)
feat = kernel.FuncFeatureMap(lin_feat, (Dx,), (Dx,))
k_px = kernel.KSTFuncCompose(base_kernel, feat)
k_px = kernel.KGauss(med2_px)
# KCSD kernel on x
med2_qx = torch.tensor([med2_x], requires_grad=True)
k_qx = kernel.KGauss(med2_qx)
# q(z|x)
# cs = gen.CSFactorisedGaussian(Dx, Dz, Dh)
cs = gen.Implicit(Dx, Dz, Dh)
# KCSD kernel on z
Z = cs.sample(1, X)
Z = Z.squeeze(0)
med2_z = util.pt_meddistance(Z)**2
# med2_z = 1.0
med2_z = torch.tensor([med2_z], requires_grad=True)
# k_z = kernel.KIMQ(b=-0.5, c=1, s2=med2_z)
k_z = kernel.KGauss(med2_z)
# approximate score p(x)
approx_score = stein.ApproximateScore(
lebm.score_joint_obs, cs)
approx_score.n_sample = 100
# optimizer settings
learning_rate_q = 1e-3
weight_decay_q = 0
optimizer_q = torch.optim.Adam(cs.parameters(), lr=learning_rate_q,
weight_decay=weight_decay_q)
# optimizer settings for p(x)
learning_rate_p = 1e-3
weight_decay_p = 0
optimizer_p = torch.optim.Adam(lebm.parameters(), lr=learning_rate_p,
weight_decay=weight_decay_p)
optimizer_med2_qx = torch.optim.Adam([med2_qx], lr=learning_rate_q,
weight_decay=weight_decay_q)
# optimizer_med2_px = torch.optim.Adam(lin_feat.parameters(), lr=learning_rate_p,
# weight_decay=weight_decay_p)
optimizer_med2_px = torch.optim.Adam([med2_px], lr=learning_rate_p,
weight_decay=weight_decay_p)
optimizer_med2_z = torch.optim.Adam([med2_z], lr=learning_rate_q,
weight_decay=weight_decay_q)
iter_p = 2000
iter_q = 10
batch_size = 100
def inner_loop(niter, X, cs):
for i in range(niter):
Z = cs.sample(1, X)
Z = Z.squeeze(0)
loss = stein.kcsd_ustat(
X, Z, lebm.score_joint_latent, k_qx, k_z)
optimizer_q.zero_grad()
loss.backward(retain_graph=False)
optimizer_q.step()
# Z = cs.sample(1, X)
# Z = Z.squeeze(0)
# loss = -stein.kcsd_ustat(
# X, Z, lebm.score_joint_latent, k_qx, k_z)
# optimizer_med2_qx.zero_grad()
# optimizer_med2_z.zero_grad()
# loss.backward(retain_graph=False)
# optimizer_med2_qx.step()
# optimizer_med2_z.step()
losses = []
med2s = []
with tqdm.tqdm(range(50000)) as ts:
for t in ts:
# sample data
perm = torch.randperm(X.shape[0]).detach()
idx = perm[:batch_size]
X_ = X[idx].detach()
# train recognition model and KCSD kernels
inner_loop(iter_q, X_, cs)
loss = stein.ksd_ustat(X_, approx_score, k_px)
losses += [loss.item()]
med2s += [med2_px.item(), med2_qx.item(), med2_z.item()],
# med2s += [lin_feat.scales.exp().item()*med2_x, med2_qx.item(), med2_z.item()],
ts.set_postfix(loss=loss.item())
# train model p
optimizer_p.zero_grad()
loss.backward(retain_graph=False)
optimizer_p.step()
# # train KSD kernel
# perm = torch.randperm(X.shape[0]).detach()
# idx = perm[:batch_size]
# X_ = X[idx].detach()
# loss = -stein.ksd_ustat(X_, approx_score, k_px)
# optimizer_med2_px.zero_grad()
# loss.backward(retain_graph=False)
# optimizer_med2_px.step()
plt.plot(losses)
plt.plot(med2s)
# form a grid for numerical normalisation
from itertools import product
ngrid = 20
grid = torch.linspace(-10, 10, ngrid)
xz_eval = torch.tensor(list(product(*[grid]*4)))
x_eval = xz_eval[:,:2]
z_eval = xz_eval[:,2:]
# true log density
E_true = p.logpdf_multiple(torch.tensor(list(product(*[grid]*2))))
E_true -= E_true.max()
# EBM log density
E_eval = lebm(x_eval, z_eval).reshape(ngrid,ngrid,ngrid,ngrid).exp().detach()
E_eval /= E_eval.sum()
E_eval = E_eval.sum(-1).sum(-1)
E_eval.log_()
E_eval -= E_eval.max()
# E_eval = E_eval.sum(-1).sum(-1)
def normalise(E):
if isinstance(E, np.ndarray):
E = np.exp(E)
else:
E = E.exp()
E /= E.sum()
return E
fig, axes = plt.subplots(2,2,figsize=(6,6), sharex=True, sharey=True)
ax = axes[0,0]
ax.pcolor(grid, grid,E_true.reshape(ngrid,ngrid), shading='auto', vmin=-10, vmax=0)
ax.scatter(x[:,1], x[:,0], c="r", s=1, alpha=0.05)
ax = axes[1,0]
ax.pcolor(grid, grid,normalise(E_true).reshape(ngrid,ngrid), shading='auto')
ax = axes[0,1]
ax.pcolor(grid, grid,E_eval,shading='auto', vmin=-10, vmax=0, )
ax.scatter(x[:,1], x[:,0], c="r", s=1, alpha=0.05)
ax = axes[1,1]
ax.pcolor(grid, grid,normalise(E_eval),shading='auto' )
ax.scatter(x[:,1], x[:,0], c="r", s=1, alpha=0.0)
axes[0,0].set_ylabel("logp")
axes[1,0].set_ylabel("p")
axes[0,0].set_title("data")
axes[0,1].set_title("KSD")
axes[0,0].set_xlim(-10,10)
z = cs.sample(1000,X_).detach().numpy()
for i in range(10):
plt.errorbar(z[:,i,0].mean(0), z[:,i,1].mean(0), xerr=z[:,i,0].std(0), yerr=z[:,i,1].std(0),)
plt.errorbar(z[:,:,0].mean((0,1)), z[:,:,1].mean((0,1)), xerr=z[:,:,0].std((0,1)), yerr=z[:,:,1].std((0,1)), lw=5)
```
| github_jupyter |
<a href="https://colab.research.google.com/github/csaybar/EarthEngineMasterGIS/blob/master/module06/04_RUSLE.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<!--COURSE_INFORMATION-->
<img align="left" style="padding-right:10px;" src="https://user-images.githubusercontent.com/16768318/73986808-75b3ca00-4936-11ea-90f1-3a6c352766ce.png" width=10% >
<img align="right" style="padding-left:10px;" src="https://user-images.githubusercontent.com/16768318/73986811-764c6080-4936-11ea-9653-a3eacc47caed.png" width=10% >
**Welcome!** This *Colab notebook* is part of the course [**Introduccion a Google Earth Engine con Python**](https://github.com/csaybar/EarthEngineMasterGIS) developed by the [**MasterGIS**](https://www.mastergis.com/) team. Learn more about the course at this [**link**](https://www.mastergis.com/product/google-earth-engine/). The course content is available on [**GitHub**](https://github.com/csaybar/EarthEngineMasterGIS) under the [**MIT**](https://opensource.org/licenses/MIT) license.
### **Exercise 01: RUSLE at the Global Scale**
<img src="https://user-images.githubusercontent.com/16768318/73690808-1604b700-46c9-11ea-8bdd-43e0e490a0a3.gif" align="right" width = 60%/>
Write a function that computes the Universal Soil Loss Equation (RUSLE) for any part of the world. The function must take the following parameters: **rusle(roi, prefix, folder, scale)**
http://cybertesis.unmsm.edu.pe/handle/cybertesis/10078
```
#@title Google Earth Engine credentials
import os
credential = '{"refresh_token":"PUT_YOUR_TOKEN_HERE"}'
credential_file_path = os.path.expanduser("~/.config/earthengine/")
os.makedirs(credential_file_path,exist_ok=True)
with open(credential_file_path + 'credentials', 'w') as file:
file.write(credential)
import ee
ee.Initialize()
#@title mapdisplay: Create interactive maps using folium
import folium
def mapdisplay(center, dicc, Tiles="OpenStreetMap",zoom_start=10):
'''
:param center: Center of the map (Latitude and Longitude).
:param dicc: Earth Engine Geometries or Tiles dictionary
:param Tiles: Mapbox Bright,Mapbox Control Room,Stamen Terrain,Stamen Toner,stamenwatercolor,cartodbpositron.
:zoom_start: Initial zoom level for the map.
:return: A folium.Map object.
'''
center = center[::-1]
mapViz = folium.Map(location=center,tiles=Tiles, zoom_start=zoom_start)
for k,v in dicc.items():
if ee.image.Image in [type(x) for x in v.values()]:
folium.TileLayer(
tiles = v["tile_fetcher"].url_format,
attr = 'Google Earth Engine',
overlay =True,
name = k
).add_to(mapViz)
else:
folium.GeoJson(
data = v,
name = k
).add_to(mapViz)
mapViz.add_child(folium.LayerControl())
return mapViz
```
### **1) R Factor**
The **R factor** is the rainfall erosivity factor. It expresses the erosive potential of the rain that drives the soil-erosion process. As an analogy, a single heavy rainstorm on one day of the year can deliver enough energy to erode as much soil as several rains of moderate intensity spread over the year.
The erosivity factor (R) is defined as the annual sum of the average individual storm erosion index values (EI30), where E is the kinetic energy per unit area and I30 is the maximum 30-minute rainfall intensity. This can be expressed by the following equation:
<img src="https://user-images.githubusercontent.com/16768318/73694650-67fd0b00-46d0-11ea-87f6-4ed9501cf964.png" width = 60%>
Thus the storm energy (EI, or R) reflects the volume of rainfall and runoff, but a long, gentle rain can have the same E value as a shorter rain of higher intensity (Mannaerts, 1999). The energy is computed from the Brown and Foster formula:
<img src="https://user-images.githubusercontent.com/16768318/73694782-b3171e00-46d0-11ea-94fe-94f3f57941c5.png" width = 40%>
Given the equation above, computing the R factor is a complex process that requires hourly or daily data over several years. Different equations have therefore been developed that adapt the local erosivity with formulas requiring only monthly or annual precipitation data. Some of the formulas adapted for mean annual precipitation are shown below.
<img src="https://user-images.githubusercontent.com/16768318/73694993-228d0d80-46d1-11ea-8bc4-9962963850b7.png">
Although mean annual precipitation is widely used to estimate the **R factor** because of data scarcity, this example uses the formula developed by **Wischmeier & Smith (1978)**, since a historical series of monthly precipitation is available. The formula is:
<img src="https://user-images.githubusercontent.com/16768318/73695488-2b321380-46d2-11ea-8033-0063f27698d8.png" width = 50%>
```
# Monthly precipitation in mm at 1 km resolution:
# https://zenodo.org/record/3256275#.XjibuDJKiM8
clim_rainmap = ee.Image("OpenLandMap/CLM/CLM_PRECIPITATION_SM2RAIN_M/v01")
year = clim_rainmap.reduce(ee.Reducer.sum())
R_monthly = ee.Image(10).pow(ee.Image(1.5).multiply(clim_rainmap.pow(2).divide(year).log10()).subtract(0.8188)).multiply(1.735)
factorR = R_monthly.reduce(ee.Reducer.sum())
center_coordinate = [0,0]
palette_rain = ["#450155", "#3B528C", "#21918D", "#5DCA63","#FFE925"]
mapdisplay(center_coordinate, {'Factor_R':factorR.getMapId({'min':0,'max':6000,'palette':palette_rain})},zoom_start=3)
```
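As a quick plain-NumPy sanity check, the monthly erosivity formula applied above can be evaluated on a hand-made precipitation series. This is a sketch under the assumption that the intended relation is the Wischmeier & Smith style form R = Σᵢ 1.735 · 10^(1.5·log10(pᵢ²/P) − 0.8188), with pᵢ the monthly and P the annual precipitation in mm:

```python
import numpy as np

def r_factor(monthly_p_mm):
    """Annual rainfall erosivity from 12 monthly precipitation totals (mm)."""
    p = np.asarray(monthly_p_mm, dtype=float)
    P = p.sum()  # annual precipitation
    return float(np.sum(1.735 * 10.0 ** (1.5 * np.log10(p ** 2 / P) - 0.8188)))
```

Wetter climates should yield larger R values; for instance `r_factor([200] * 12)` exceeds `r_factor([100] * 12)`.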
### **2) K Factor**
Unlike the R factor, the K factor expresses how susceptible the soil is to water erosion, which is determined by the physical and chemical properties of the soil. A large number of empirical formulas exist for estimating K, suited to different parts of the world, involving soil characteristics such as the percentages of sand, silt, and clay; soil structure; and the content of organic carbon or organic matter; among others.
The K factor can range from 0 to 1, where 0 indicates soils least susceptible to erosion and 1 indicates soils highly susceptible to water erosion. Note that this scale was defined for US customary units; adapted to the SI system, it typically ranges from 0 to 0.07.
Some equations for estimating this factor are shown below:
<img src="https://user-images.githubusercontent.com/16768318/73704444-039b7500-46eb-11ea-9ccd-b7850bb17911.png" width = 50%>
<img src="https://user-images.githubusercontent.com/16768318/73704442-039b7500-46eb-11ea-870c-a557ca50b777.png" width = 50%>
<img src="https://user-images.githubusercontent.com/16768318/73704443-039b7500-46eb-11ea-9469-104f04983dfd.png" width = 50%>
For this example we use the formula developed by **Williams (1975)**.
```
# Load all the data needed to estimate the K factor
sand = ee.Image("OpenLandMap/SOL/SOL_SAND-WFRACTION_USDA-3A1A1A_M/v02").select('b0')
silt = ee.Image('users/aschwantes/SLTPPT_I').divide(100)
clay = ee.Image("OpenLandMap/SOL/SOL_CLAY-WFRACTION_USDA-3A1A1A_M/v02").select('b0')
morg = ee.Image("OpenLandMap/SOL/SOL_ORGANIC-CARBON_USDA-6A1C_M/v02").select('b0').multiply(0.58)
sn1 = sand.expression('1 - b0 / 100', {'b0': sand})
orgcar = ee.Image("OpenLandMap/SOL/SOL_ORGANIC-CARBON_USDA-6A1C_M/v02").select('b0')
# Combine all the images into a single one
soil = ee.Image([sand, silt, clay, morg, sn1, orgcar]).rename(['sand', 'silt', 'clay', 'morg', 'sn1', 'orgcar'] )
factorK = soil.expression(
'(0.2 + 0.3 * exp(-0.0256 * SAND * (1 - (SILT / 100)))) * (1 - (0.25 * CLAY / (CLAY + exp(3.72 - 2.95 * CLAY)))) * (1 - (0.7 * SN1 / (SN1 + exp(-5.51 + 22.9 * SN1))))',
{
'SAND': soil.select('sand'),
'SILT': soil.select('silt'),
'CLAY': soil.select('clay'),
'MORG': soil.select('morg'),
'SN1': soil.select('sn1'),
'CORG': soil.select('orgcar')
});
center_coordinate = [0,0]
palette_k = [
'FFFFFF', 'CE7E45', 'DF923D', 'F1B555', 'FCD163', '99B718', '74A901',
'66A000', '529400', '3E8601', '207401', '056201', '004C00', '023B01',
'012E01', '011D01', '011301'
]
viz_param_k = {'min': 0.0, 'max': 0.5, 'palette': palette_k};
mapdisplay(center_coordinate, {'Factor_K':factorK.getMapId(viz_param_k)},zoom_start=3)
```
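For reference, the full Williams (1975) / EPIC erodibility equation has four factors (coarse sand, silt-to-clay ratio, organic carbon, and a high-sand correction). The sketch below is a plain-NumPy version under the assumption that sand, silt, clay, and organic carbon are all given in percent; it is not byte-for-byte identical to the Earth Engine expression above, which works with the bands as stored in the assets:

```python
import numpy as np

def k_factor(sand, silt, clay, orgc):
    """Williams/EPIC soil erodibility (US customary units); inputs in percent."""
    sn1 = 1.0 - sand / 100.0
    f_csand = 0.2 + 0.3 * np.exp(-0.0256 * sand * (1.0 - silt / 100.0))
    f_clsi = (silt / (clay + silt)) ** 0.3
    f_orgc = 1.0 - 0.25 * orgc / (orgc + np.exp(3.72 - 2.95 * orgc))
    f_hisand = 1.0 - 0.7 * sn1 / (sn1 + np.exp(-5.51 + 22.9 * sn1))
    return f_csand * f_clsi * f_orgc * f_hisand
```

Values stay within the 0 to 1 scale used for US units; a loam (40% sand, 40% silt, 20% clay, 2% organic carbon) lands near the middle of the typical range.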
### **3) LS Factor**
The LS factor expresses the effect of local topography on the soil erosion rate, combining the effects of slope length (L) and slope steepness (S). The greater the slope length, the greater the accumulated runoff; likewise, the steeper the slope, the higher the runoff velocity, which directly drives erosion. Several GIS-based methodologies exist for computing these factors, as shown below:
<img src="https://user-images.githubusercontent.com/16768318/73706484-7ce99680-46f0-11ea-8e0e-5fbb4a00731d.png" width = 50%>
```
facc = ee.Image("WWF/HydroSHEDS/15ACC")
dem = ee.Image("WWF/HydroSHEDS/03CONDEM")
slope = ee.Terrain.slope(dem)
ls_factors = ee.Image([facc, slope]).rename(['facc','slope'])
factorLS = ls_factors.expression(
'(FACC*270/22.13)**0.4*(SLOPE/0.0896)**1.3',
{
'FACC': ls_factors.select('facc'),
'SLOPE': ls_factors.select('slope')
});
center_coordinate = [0,0]
palette_ls = [
'FFFFFF', 'CE7E45', 'DF923D', 'F1B555', 'FCD163', '99B718', '74A901',
'66A000', '529400', '3E8601', '207401', '056201', '004C00', '023B01',
'012E01', '011D01', '011301'
]
viz_param_ls = {'min': 0, 'max': 100, 'palette': palette_ls}
mapdisplay(center_coordinate, {'Factor_LS':factorLS.getMapId(viz_param_ls)},zoom_start=3)
```
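A plain-Python version of the Moore & Burch relation that the expression above is based on, assuming a 270 m cell size (matching the `FACC*270` term) and taking the sine of the slope angle as in the published formula (the Earth Engine expression uses the slope in degrees directly, a common simplification):

```python
import math

def ls_factor(flow_acc, slope_deg, cell_size=270.0):
    """Moore & Burch LS factor; slope in degrees, cell size in metres."""
    specific_area = flow_acc * cell_size / 22.13
    slope_term = math.sin(math.radians(slope_deg)) / 0.0896
    return specific_area ** 0.4 * slope_term ** 1.3
```

LS grows with both accumulated flow and slope, and is zero on perfectly flat terrain.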
### **4) C Factor**
The C factor is used to determine the relative effectiveness of soil and crop management systems in preventing or reducing soil loss. It indicates how vegetation cover and crops affect the mean annual soil loss and how the soil-loss potential is distributed over time (Rahaman, 2015).
The value of C depends on the vegetation type, the growth stage, and the cover percentage. Higher C values indicate no cover effect and full soil loss, while the lowest C values indicate a very strong cover effect that produces no erosion.
```
ndvi_median = ee.ImageCollection("MODIS/006/MOD13A2").median().multiply(0.0001).select('NDVI')
geo_ndvi = [
'FFFFFF', 'CE7E45', 'DF923D', 'F1B555', 'FCD163', '99B718', '74A901',
'66A000', '529400', '3E8601', '207401', '056201', '004C00', '023B01',
'012E01', '011D01', '011301'
]
l8_viz_params = {'palette':geo_ndvi,'min':0,'max': 0.8}
mapdisplay([0,0],{'composite_median':ndvi_median.getMapId(l8_viz_params)},zoom_start=3)
```
Another way to obtain the C factor is to relate it to NDVI, comparing the formula of Van der Knijff (1999) [C1], its adaptation for Asian countries by Lin (2002) [C2], which also suits conditions on the Peruvian coast, and finally the equation formulated by De Jong (1994) [C3], developed for soil-degradation studies in a Mediterranean environment.
<center>
<img src="https://user-images.githubusercontent.com/16768318/73713048-e6bf6b80-4703-11ea-80b1-1940e6b55707.png" width = 50%>
</center>
```
factorC = ee.Image(0.805).multiply(ndvi_median).multiply(-1).add(0.431)
```
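The De Jong relation used in the cell above is linear in NDVI, while the Van der Knijff alternative is exponential. A small sketch of both, taking the commonly used parameters α = 2 and β = 1 for Van der Knijff as an assumption:

```python
import numpy as np

def c_van_der_knijff(ndvi, alpha=2.0, beta=1.0):
    # C = exp(-alpha * NDVI / (beta - NDVI)); alpha and beta are assumed defaults
    return np.exp(-alpha * ndvi / (beta - ndvi))

def c_de_jong(ndvi):
    # linear relation C = 0.431 - 0.805 * NDVI (De Jong, 1994)
    return 0.431 - 0.805 * ndvi
```

Both agree that denser vegetation (higher NDVI) gives a smaller C, that is, stronger protection against erosion.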
### **5) Erosion Calculation**
**A = R\*K\*LS\*C\*P**, where the support-practice factor P is taken as 1 here.
<img src="https://user-images.githubusercontent.com/16768318/73690808-1604b700-46c9-11ea-8bdd-43e0e490a0a3.gif">
```
erosion = factorC.multiply(factorR).multiply(factorLS).multiply(factorK)
geo_erosion = ["#00BFBF", "#00FF00", "#FFFF00", "#FF7F00", "#BF7F3F", "#141414"]
l8_viz_params = {'palette':geo_erosion,'min':0,'max': 6000}
mapdisplay([0,0],{'composite_median':erosion.getMapId(l8_viz_params)},zoom_start=3)
```
### **Function to download RUSLE for any part of the world**
[Answer here](https://gist.github.com/csaybar/19a9db35f8c8044448d885b68e8c9eb8)
```
# Put your function here (create a snippet!)
# Define your study area here
geometry = ee.Geometry.Polygon([[[-81.9580078125,-5.659718554577273],
[-74.99267578125,-5.659718554577273],
[-74.99267578125,2.04302395742204],
[-81.9580078125,2.04302395742204],
[-81.9580078125,-5.659718554577273]]])
ec_erosion = rusle(geometry,'RUSLE_','RUSLE_MASTERGIS', scale = 100)
# Create a visualization of your study area
geo_erosion = ["#00BFBF", "#00FF00", "#FFFF00", "#FF7F00", "#BF7F3F", "#141414"]
l8_viz_params = {'palette':geo_erosion,'min':0,'max': 6000}
center = geometry.centroid().coordinates().getInfo()
mapdisplay(center,{'composite_median':ec_erosion.select('A').getMapId(l8_viz_params)},zoom_start=6)
```
### **Questions about this Jupyter notebook?**
We will be happy to help you! Create a GitHub account if you do not have one, then describe your problem in detail at: https://github.com/csaybar/EarthEngineMasterGIS/issues
**You need to click the green button!**
<center>
<img src="https://user-images.githubusercontent.com/16768318/79680748-d5511000-81d8-11ea-9f89-44bd010adf69.png" width = 70%>
</center>
## Cleaning up Data
Sometimes data comes to us in a form that requires some cleaning before we can begin with further analyses. In this exercise we will explore some tools and strategies for that.
We'll begin by reading in a modified version of the Ithaca climate dataset that we worked with previously. You should notice that there is a new column in the dataframe, indicating the prevailing Sky conditions for each day (sunny, cloudy, etc.).
Execute the code cell below.
```
import pandas as pd
df = pd.read_csv('IthacaDailyClimateJan2018expanded.csv')
df.info()
```
Let's look at the dataframe in its entirety. Execute the code cell below.
```
df
```
### Step 1.
Let's examine the column names in a bit more detail. Execute the code cell below.
```
df.columns
```
Notice that there are three different temperature columns, but the naming conventions differ for all three: "Max T", "Minimum Temp", and "Average Temperature". This can lead to confusion because you need to keep track in your mind how temperature is labeled in each column name (T, Temp, Temperature). Furthermore, the largest temperature is labeled with the shorthand "Max", while the smallest temperature is labeled with the full word "Minimum". While we are free to choose whatever names we want for our data (within any syntactic rules), it is useful — for both you the developer and for anyone else who might be using the code — to establish some uniformity and consistency in naming. Fortunately, we don't need to go back and modify the original csv file; we can just modify the column names in our code.
In the code cell below, use the ```rename``` method on a dataframe to rename the columns as follows:
* rename 'Max T' to be 'Maximum Temperature'
* rename 'Minimum Temp' to be 'Minimum Temperature'
Consult the [documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rename.html) for ```df.rename``` and find an example to "Rename columns using a mapping", which will demonstrate how to pass a dictionary to the method in order to change the column names. Hint: each element in the dictionary is a key/value pair in which the key is the original name and the value is the new name. For example: {original_name: new_name}.
Note that the ```rename``` method will, by default, return a new dataframe with the modified names. You can either assign that new dataframe to a variable (you can even just reassign it to the name ```df```) or you can use the ```inplace=True``` option to modify ```df``` directly.
For this exercise, you will use the ```inplace=True``` option in the `rename` method to modify ```df``` directly. After you've done the renaming, inspect the column names of the dataframe to verify that you've changed the names as intended (and if necessary, modify your renaming code until the column names are as desired.)
```
df.rename(columns={"Max T": "Maximum Temperature", "Minimum Temp": "Minimum Temperature"}, inplace=True)
```
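As a minimal illustration of the copy-versus-in-place behaviour discussed above (using a tiny throwaway frame, not the climate data):

```python
import pandas as pd

tiny = pd.DataFrame({"Max T": [33, 28]})
renamed = tiny.rename(columns={"Max T": "Maximum Temperature"})  # returns a copy
# at this point 'tiny' still has the old column name, 'renamed' has the new one
tiny.rename(columns={"Max T": "Maximum Temperature"}, inplace=True)  # now 'tiny' itself is changed
```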
## Self-Check
Run the cell below to test the correctness of your code in the cell above.
```
# Run this self-test cell to check your code; do not add code or delete code in this cell
from jn import testChangeNameMax, testChangeNameMin
try:
print(testChangeNameMax(df))
except Exception as e:
print("Error!\n" + str(e))
try:
print(testChangeNameMin(df))
except Exception as e:
print("Error!\n" + str(e))
```
Execute the cell below and inspect the column names of the dataframe.
```
df.columns
```
### Step 2.
Let's examine the new data in the 'Sky' column. The entries here are strings representing categorical data, such as "sunny", "partly sunny", "cloudy" and "partly cloudy". Because they are text-based, it is often useful to verify that there are no misspellings or spelling variants. A useful method on a Series, or on a column extracted from a DataFrame, is ```unique```, which returns an array of unique entries in that Series or column.
The code cell below contains an expression to extract the unique entries of the 'Sky' column of the dataframe.
```
df['Sky'].unique()
```
While it might not have been obvious when looking at the entire dataframe, extracting the group of unique entries in the 'Sky' column makes it obvious that there are some misspellings and other textual problems, such as an extra space at the beginning of ' partly cloudy'.
Pandas dataframes have a ```replace``` method that allows you to change values in a dataframe according to a specified rule, as described [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.replace.html). If you want to replace text entries (strings), you either need to provide the full string to replace, or use the ```regex=True``` option to specify a regular expression (regex) for replacing part of a string. In this exercise, let's just specify fully the strings we want to replace. Note that the ```replace``` method is for changing values in the body of the dataframe, whereas the ```rename``` method used above is for changing the names of the index or column labels. When called on an entire dataframe, the ```replace``` method will replace all instances of the specified text, regardless of what column it is in. (If you wanted to replace values only in a particular column, you would first extract that column before doing the replacement.)
In the code cell below, write and evaluate code to replace all the misspelled entries in the dataframe with their corrected versions. The easiest way to do this is to provide all the corrections in a dictionary which is passed as an argument to the method. Note that by default, the ```replace``` method will return a new dataframe, so you can either assign it to a variable, or modify the original dataframe in place by using the `inplace=True` option.
For this exercise, you will modify `df` in place by using the `inplace=True` option.
```
df.replace({"party sunny": "partly sunny", "couldy": "cloudy", " partly cloudy": "partly cloudy"}, inplace=True)
```
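If you only wanted to touch one column, as the note above mentions, you would select that column first. A small sketch on a throwaway frame:

```python
import pandas as pd

demo = pd.DataFrame({"Sky": ["sunny", "couldy"], "Status": ["couldy", "ok"]})
demo["Sky"] = demo["Sky"].replace({"couldy": "cloudy"})  # only 'Sky' is corrected
# demo["Status"] still contains the original string "couldy"
```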
## Self-Check
Run the cell below to test the correctness of your code in the cell above.
```
# Run this self-test cell to check your code; do not add code or delete code in this cell
from jn import testChangePartlySunny, testChangeCloudy, testChangePartlyCloudy
try:
print(testChangePartlySunny(df))
except Exception as e:
print("Error!\n" + str(e))
try:
print(testChangeCloudy(df))
except Exception as e:
print("Error!\n" + str(e))
try:
print(testChangePartlyCloudy(df))
except Exception as e:
print("Error!\n" + str(e))
```
After doing the text replacement, re-examine the unique entries of the 'Sky' column to verify that you have corrected the problems with the original data. (There should be 5 unique entries.) If you have not fixed all the problems, modify your replacement code above and continue until the data are corrected.
```
# YOUR CODE HERE
df['Sky'].unique()
```
```
%load_ext autoreload
%autoreload 2
import csv
import itertools
import os
from dataclasses import dataclass
from datetime import datetime
import numpy as np
import pandas as pd
from func_timeout import FunctionTimedOut, func_timeout
from sklearn.metrics import accuracy_score
from sklearn.svm import LinearSVC
from tqdm.notebook import tqdm
import warnings
warnings.filterwarnings("ignore")
files = !(find ../UCRArchive_2018/ -maxdepth 2 -type f -name "*TRAIN.tsv" -exec ls -al {} \; | sort -k 5 -n | sed 's/ \+/\t/g' | cut -f 9)
@dataclass
class FileNames:
name: str
train_file: str
test_file: str
train_dtw: str
train_fastdtw: str
test_dtw: str
test_fastdtw: str
sort_files = []
for file_name in tqdm(files):
name = file_name.split("/")[-1].replace("_TRAIN.tsv", "")
test_file = file_name.replace("TRAIN.tsv", "TEST.tsv")
train_dtw = file_name.replace(".tsv", "_train_dtw.csv")
train_fastdtw = file_name.replace(".tsv", "_train_fastdtw.csv")
test_dtw = test_file.replace(".tsv", "_train_dtw.csv")
test_fastdtw = test_file.replace(".tsv", "_train_fastdtw.csv")
if not all(
[
os.path.exists(x)
for x in (
train_dtw,
train_fastdtw,
test_dtw,
test_fastdtw,
)
]
):
continue
fl = FileNames(
name=name,
train_file=file_name,
test_file=test_file,
train_dtw=train_dtw,
train_fastdtw=train_fastdtw,
test_dtw=test_dtw,
test_fastdtw=test_fastdtw,
)
frame = pd.read_csv(file_name, delimiter="\t", header=None)
test_frame = pd.read_csv(test_file, delimiter="\t", header=None)
sort_files.append([frame.shape[0] + test_frame.shape[0], frame.shape[1], fl])
sort_files = sorted(sort_files, key=lambda x: x[0])
sort_files
from typing import Any, Callable, List, Optional, Union
import numpy as np
from fastdtw import fastdtw
from scipy.stats import spearmanr
from sklearn.base import RegressorMixin, TransformerMixin
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import RANSACRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.utils import check_array, check_random_state
from tqdm.notebook import tqdm
def fastdtw_distance(x: Any, y: Any) -> float:
return fastdtw(x, y)[0]
def euclidean_distance(x: Any, y: Any) -> float:
return np.linalg.norm(x - y)
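# ---------------------------------------------------------------------------
# Aside: the quantity fastdtw approximates is the exact dynamic time warping
# distance, computable with an O(len(x) * len(y)) dynamic program over
# cumulative alignment costs. A minimal sketch for 1-D sequences (assumption:
# absolute difference as the local cost, matching fastdtw's scalar default):
def dtw_distance(x, y):
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # extend the cheapest of the three admissible predecessor cells
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]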
# Implementation of FDTW using Linear Regression for
# new prototype selection
class FeatureDTWTransformer(TransformerMixin):
def __init__(
self,
n_start: int = 30,
n_add: int = 10,
n_max: int = 100,
by: str = "mean",
p_max: float = 0.7,
regressor: RegressorMixin = GradientBoostingRegressor,
copy_prototypes: bool = True,
distance_func: Callable[[Any, Any], float] = fastdtw_distance,
random_state: Optional[int] = None,
n_jobs: Optional[int] = None,
) -> None:
self.n_start = n_start
self.n_add = n_add
self.n_max = n_max
self.random_state = random_state
self.copy_prototypes = copy_prototypes
self.distance_func = distance_func
self.regressor = regressor
self.n_jobs = n_jobs
self.by = by
self.p_max = p_max
def fit_step(self, X: Any, y: Any = None) -> bool:
if self.index_.shape[0] >= self.n_max:
return False
X_ = self.distances_
p_all = []
regressor = self.regressor(random_state=42)
for i, prototype in enumerate(self.prototypes_):
regressor.fit(X_[self.index_][:, i].reshape(-1, 1), self.s_corr_[i])
predicted = regressor.predict(X_[:, i].reshape(-1, 1))
predicted[predicted > 1] = 1
predicted[predicted < -1] = -1
predicted[self.index_] = self.s_corr_[i]
p_all.append(predicted)
p_all = np.abs(p_all)
p_mean = p_all.mean(axis=0)
p_max = p_all.max(axis=0)
condition = (p_mean < 0.5) & (p_max < self.p_max)
if self.by == "mean":
sort_by = p_mean.argsort()
else:
sort_by = p_max.argsort()
new_r = sort_by[condition[sort_by]][: self.n_add]
if new_r.shape[0] == 0:
return False
self.add_prototype(new_r, X)
return True
def fit(self, X: Any, y: Any = None) -> "FeatureDTWTransformer":
raw_data = self.__check_array(X)
self.fin = False
self._shape = raw_data.shape
rnd = check_random_state(self.random_state)
self.index_ = rnd.choice(self._shape[0], self.n_start, replace=False)
self.prototypes_ = np.array(raw_data[self.index_], copy=self.copy_prototypes)
self.transform(X)
while self.fit_step(X, y):
pass
return self
def transform(self, X: Any, y: Any = None) -> np.ndarray:
raw_data = self.__check_array(X)
self.distances_ = raw_data[:, self.index_]
self.s_corr_ = spearmanr(self.distances_, axis=0)[0]
return self.distances_
def add_prototype(self, index: Union[int, List[int]], X: Any) -> np.ndarray:
if isinstance(index, int):
index = [index]
mask = ~np.isin(index, self.index_)
new_index = np.array(index)[mask]
raw_data = self.__check_array(X)
new_prototypes = raw_data[new_index]
self.distances_ = np.hstack(
(
self.distances_,
raw_data[:, new_index],
)
)
self.prototypes_ = np.vstack((self.prototypes_, new_prototypes))
self.index_ = np.append(self.index_, new_index)
self.s_corr_ = spearmanr(self.distances_, axis=0)[0]
return self.distances_
def remove_prototype(self, index: Union[int, List[int]]) -> np.ndarray:
if isinstance(index, int):
index = [index]
mask = ~np.isin(self.index_, index)
self.index_ = self.index_[mask]
self.prototypes_ = self.prototypes_[mask]
self.distances_ = self.distances_[:, mask]
return self.distances_
def __check_array(self, X: Any) -> np.ndarray:
return check_array(
X, accept_sparse=False, dtype="numeric", force_all_finite="allow-nan"
)
np.random.seed(42)
with open(f"../logs/classification-{datetime.now().isoformat()}.csv", "w") as out_file:
writer = csv.writer(out_file, delimiter=",")
writer.writerow(
[
"dataset",
"n_features",
"n_max",
"1NN_fastdtw",
"features_fastdtw",
"fdtw_linear_fastdtw",
"n_linear_used",
]
)
for n_samples, n_len, file_name in tqdm(sort_files):
name = file_name.name
train_frame = pd.read_csv(file_name.train_file, delimiter="\t", header=None)
test_frame = pd.read_csv(file_name.test_file, delimiter="\t", header=None)
y_train = train_frame[0].values
y_test = test_frame[0].values
train_fastdtw = pd.read_csv(file_name.train_fastdtw, delimiter=",", header=None)
test_fastdtw = pd.read_csv(file_name.test_fastdtw, delimiter=",", header=None)
n_max = np.min([np.rint(0.5 * train_fastdtw.shape[0]).astype(int), 100])
row = [name, n_samples, n_max]
row.append(
round(
accuracy_score(
y_pred=y_train[np.argmin(test_fastdtw.values, axis=1)],
y_true=y_test,
),
3,
)
)
# Features DTW
try:
X_train = train_fastdtw.values
X_test = test_fastdtw.values
svc = LinearSVC(random_state=42, max_iter=1000)
func_timeout(600, svc.fit, args=(X_train, y_train))
predicted = func_timeout(600, svc.predict, args=(X_test,))
row.append(round(accuracy_score(y_true=y_test, y_pred=predicted), 3))
except FunctionTimedOut:
continue
try:
arr = []
n_used = []
for i in range(3):
n_start = np.max([np.rint(0.2 * X_train.shape[0]).astype(int), 10])
n_add = n_start
fdtw = FeatureDTWTransformer(
n_start=n_start, n_add=n_add, n_max=n_max, by="mean", p_max=0.7
)
X_train = train_fastdtw.values
fdtw.fit(X_train)
X_train = X_train[:, fdtw.index_]
X_test = test_fastdtw.values[:, fdtw.index_]
svc = LinearSVC(random_state=42, max_iter=1000)
func_timeout(600, svc.fit, args=(X_train, y_train))
predicted = func_timeout(600, svc.predict, args=(X_test,))
arr.append(accuracy_score(y_true=y_test, y_pred=predicted))
n_used.append(fdtw.index_.shape[0])
row.append(round(np.mean(arr), 3))
row.append(round(np.mean(n_used), 3))
except FunctionTimedOut:
continue
writer.writerow(row)
out_file.flush()
```
# Work with Data
Data is the foundation on which machine learning models are built. Managing data centrally in the cloud, and making it accessible to teams of data scientists who are running experiments and training models on multiple workstations and compute targets is an important part of any professional data science solution.
In this notebook, you'll explore two Azure Machine Learning objects for working with data: *datastores*, and *datasets*.
## Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
```
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
```
## Work with datastores
In Azure ML, *datastores* are references to storage locations, such as Azure Storage blob containers. Every workspace has a default datastore - usually the Azure storage blob container that was created with the workspace. If you need to work with data that is stored in different locations, you can add custom datastores to your workspace and set any of them to be the default.
### View datastores
Run the following code to determine the datastores in your workspace:
```
# Get the default datastore
default_ds = ws.get_default_datastore()
# Enumerate all datastores, indicating which is the default
for ds_name in ws.datastores:
    print(ds_name, "- Default =", ds_name == default_ds.name)
```
You can also view and manage datastores in your workspace on the **Datastores** page for your workspace in [Azure Machine Learning studio](https://ml.azure.com).
### Upload data to a datastore
Now that you have determined the available datastores, you can upload files from your local file system to a datastore so that it will be accessible to experiments running in the workspace, regardless of where the experiment script is actually being run.
```
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
                        target_path='diabetes-data/', # Put it in a folder path in the datastore
                        overwrite=True, # Replace existing files of the same name
                        show_progress=True)
```
## Work with datasets
Azure Machine Learning provides an abstraction for data in the form of *datasets*. A dataset is a versioned reference to a specific set of data that you may want to use in an experiment. Datasets can be *tabular* or *file*-based.
### Create a tabular dataset
Let's create a dataset from the diabetes data you uploaded to the datastore, and view the first 20 records. In this case, the data is in a structured format in a CSV file, so we'll use a *tabular* dataset.
```
from azureml.core import Dataset
# Get the default datastore
default_ds = ws.get_default_datastore()
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Display the first 20 rows as a Pandas dataframe
tab_data_set.take(20).to_pandas_dataframe()
```
As you can see in the code above, it's easy to convert a tabular dataset to a Pandas dataframe, enabling you to work with the data using common Python techniques.
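For example, once the dataset is a dataframe, ordinary pandas operations apply. The small dataframe below is a hypothetical stand-in for the diabetes data, not real output from the dataset:

```python
import pandas as pd

# Hypothetical stand-in for tab_data_set.take(20).to_pandas_dataframe()
df = pd.DataFrame({
    'Age': [21, 45, 33, 60],
    'Diabetic': [0, 1, 0, 1],
})

# Common pandas techniques work as usual on the converted dataset
print(df['Diabetic'].mean())          # class balance: 0.5
print(df[df['Age'] > 30].shape[0])    # patients older than 30: 3
```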
### Create a file Dataset
The dataset you created is a *tabular* dataset that can be read as a dataframe containing all of the data in the structured files that are included in the dataset definition. This works well for tabular data, but in some machine learning scenarios you might need to work with data that is unstructured; or you may simply want to handle reading the data from files in your own code. To accomplish this, you can use a *file* dataset, which creates a list of file paths in a virtual mount point, which you can use to read the data in the files.
```
#Create a file dataset from the path on the datastore (this may take a short while)
file_data_set = Dataset.File.from_files(path=(default_ds, 'diabetes-data/*.csv'))
# Get the files in the dataset
for file_path in file_data_set.to_path():
    print(file_path)
```
### Register datasets
Now that you have created datasets that reference the diabetes data, you can register them to make them easily accessible to any experiment being run in the workspace.
We'll register the tabular dataset as **diabetes dataset**, and the file dataset as **diabetes files**.
```
# Register the tabular dataset
try:
    tab_data_set = tab_data_set.register(workspace=ws,
                                         name='diabetes dataset',
                                         description='diabetes data',
                                         tags={'format': 'CSV'},
                                         create_new_version=True)
except Exception as ex:
    print(ex)

# Register the file dataset
try:
    file_data_set = file_data_set.register(workspace=ws,
                                           name='diabetes file dataset',
                                           description='diabetes files',
                                           tags={'format': 'CSV'},
                                           create_new_version=True)
except Exception as ex:
    print(ex)

print('Datasets registered')
```
You can view and manage datasets on the **Datasets** page for your workspace in [Azure Machine Learning studio](https://ml.azure.com). You can also get a list of datasets from the workspace object:
```
print("Datasets:")
for dataset_name in list(ws.datasets.keys()):
    dataset = Dataset.get_by_name(ws, dataset_name)
    print("\t", dataset.name, 'version', dataset.version)
```
The ability to version datasets enables you to redefine datasets without breaking existing experiments or pipelines that rely on previous definitions. By default, the latest version of a named dataset is returned, but you can retrieve a specific version of a dataset by specifying the version number, like this:
```python
dataset_v1 = Dataset.get_by_name(ws, 'diabetes dataset', version = 1)
```
### Train a model from a tabular dataset
Now that you have datasets, you're ready to start training models from them. You can pass datasets to scripts as *inputs* in the script run configuration used to run the script.
Run the following two code cells to create:
1. A folder named **diabetes_training_from_tab_dataset**
2. A script that trains a classification model by using a tabular dataset that is passed to it as an argument.
```
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_from_tab_dataset'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import os
import argparse
from azureml.core import Run, Dataset
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Get the script arguments (regularization rate and training dataset ID)
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter (passed as an argument to the script)
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# Get the training dataset
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', float(auc))
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
```
> **Note**: In the script, the dataset is passed as a parameter (or argument). In the case of a tabular dataset, this argument will contain the ID of the registered dataset; so you could write code in the script to get the experiment's workspace from the run context, and then get the dataset using its ID; like this:
>
> ```
> run = Run.get_context()
> ws = run.experiment.workspace
> dataset = Dataset.get_by_id(ws, id=args.training_dataset_id)
> diabetes = dataset.to_pandas_dataframe()
> ```
>
> However, Azure Machine Learning runs automatically identify arguments that reference named datasets and add them to the run's **input_datasets** collection, so you can also retrieve the dataset from this collection by specifying its "friendly name" (which as you'll see shortly, is specified in the argument definition in the script run configuration for the experiment). This is the approach taken in the script above.
Now you can run a script as an experiment, defining an argument for the training dataset, which is read by the script.
> **Note**: The **Dataset** class depends on some components in the **azureml-dataprep** package, so you need to include this package in the environment where the training experiment will be run. The **azureml-dataprep** package is included in the **azureml-defaults** package.
```
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.widgets import RunDetails
# Create a Python environment for the experiment (from a .yml file)
env = Environment.from_conda_specification("experiment_env", "environment.yml")
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
                                script='diabetes_training.py',
                                arguments=['--regularization', 0.1, # Regularization rate parameter
                                           '--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
                                environment=env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
```
> **Note:** The **--input-data** argument passes the dataset as a *named input* that includes a *friendly name* for the dataset, which is used by the script to read it from the **input_datasets** collection in the experiment run. The string value in the **--input-data** argument is actually the registered dataset's ID. As an alternative approach, you could simply pass `diabetes_ds.id`, in which case the script can access the dataset ID from the script arguments and use it to get the dataset from the workspace, but not from the **input_datasets** collection.
The first time the experiment is run, it may take some time to set up the Python environment - subsequent runs will be quicker.
When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log and the metrics generated by the run.
### Register the trained model
As with any training experiment, you can retrieve the trained model and register it in your Azure Machine Learning workspace.
```
from azureml.core import Model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
                   tags={'Training context': 'Tabular dataset'},
                   properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})

for model in Model.list(ws):
    print(model.name, 'version:', model.version)
    for tag_name in model.tags:
        tag = model.tags[tag_name]
        print('\t', tag_name, ':', tag)
    for prop_name in model.properties:
        prop = model.properties[prop_name]
        print('\t', prop_name, ':', prop)
    print('\n')
```
### Train a model from a file dataset
You've seen how to train a model using training data in a *tabular* dataset; but what about a *file* dataset?
When you're using a file dataset, the dataset argument passed to the script represents a mount point containing file paths. How you read the data from these files depends on the kind of data in the files and what you want to do with it. In the case of the diabetes CSV files, you can use the Python **glob** module to create a list of files in the virtual mount point defined by the dataset, and read them all into Pandas dataframes that are concatenated into a single dataframe.
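As a standalone illustration of that pattern, the snippet below writes two synthetic CSV files into a temporary folder (playing the role of the dataset mount point) and reads them back the same way the training script does:

```python
import glob
import os
import tempfile

import pandas as pd

# Temporary folder standing in for the file dataset's mount point
data_path = tempfile.mkdtemp()
pd.DataFrame({'a': [1, 2], 'b': [3, 4]}).to_csv(os.path.join(data_path, 'part1.csv'), index=False)
pd.DataFrame({'a': [5], 'b': [6]}).to_csv(os.path.join(data_path, 'part2.csv'), index=False)

# Same pattern as the training script: list all CSVs under the mount
# point and concatenate them into a single dataframe
all_files = glob.glob(data_path + "/*.csv")
combined = pd.concat((pd.read_csv(f) for f in all_files), sort=False)
print(len(combined))  # 3 rows across both files
```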
Run the following two code cells to create:
1. A folder named **diabetes_training_from_file_dataset**
2. A script that trains a classification model by using a file dataset that is passed to it as an *input*.
```
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_from_file_dataset'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import os
import argparse
from azureml.core import Dataset, Run
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import glob
# Get script arguments (regularization rate and file dataset mount point)
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument('--input-data', type=str, dest='dataset_folder', help='data mount point')
args = parser.parse_args()
# Set regularization hyperparameter (passed as an argument to the script)
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes dataset
print("Loading Data...")
data_path = run.input_datasets['training_files'] # Get the training data path from the input
# (You could also just use args.dataset_folder if you don't want to rely on a hard-coded friendly name)
# Read the files
all_files = glob.glob(data_path + "/*.csv")
diabetes = pd.concat((pd.read_csv(f) for f in all_files), sort=False)
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', float(auc))
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
```
Just as with tabular datasets, you can retrieve a file dataset from the **input_datasets** collection by using its friendly name. You can also retrieve it from the script argument, which in the case of a file dataset contains a mount path to the files (rather than the dataset ID passed for a tabular dataset).
Next we need to change the way we pass the dataset to the script - it needs to define a path from which the script can read the files. You can use either the **as_download** or **as_mount** method to do this. Using **as_download** causes the files in the file dataset to be downloaded to a temporary location on the compute where the script is being run, while **as_mount** creates a mount point from which the files can be streamed directly from the datastore.
You can combine the access method with the **as_named_input** method to include the dataset in the **input_datasets** collection in the experiment run (if you omit this, for example by setting the argument to `diabetes_ds.as_mount()`, the script will be able to access the dataset mount point from the script arguments, but not from the **input_datasets** collection).
```
from azureml.core import Experiment
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes file dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
                                script='diabetes_training.py',
                                arguments=['--regularization', 0.1, # Regularization rate parameter
                                           '--input-data', diabetes_ds.as_named_input('training_files').as_download()], # Reference to dataset location
                                environment=env) # Use the environment created previously
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
```
When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log to verify that the files in the file dataset were downloaded to a temporary folder to enable the script to read the files.
### Register the trained model
Once again, you can register the model that was trained by the experiment.
```
from azureml.core import Model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
                   tags={'Training context': 'File dataset'},
                   properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})

for model in Model.list(ws):
    print(model.name, 'version:', model.version)
    for tag_name in model.tags:
        tag = model.tags[tag_name]
        print('\t', tag_name, ':', tag)
    for prop_name in model.properties:
        prop = model.properties[prop_name]
        print('\t', prop_name, ':', prop)
    print('\n')
```
> **More Information**: For more information about training with datasets, see [Training with Datasets](https://docs.microsoft.com/azure/machine-learning/how-to-train-with-datasets) in the Azure ML documentation.
# Import Packages
```
import os
import numpy as np
import matplotlib.pyplot as plt
import quantities as pq
import neo
from neurotic._elephant_tools import CausalAlphaKernel, instantaneous_rate
pq.markup.config.use_unicode = True # allow symbols like mu for micro in output
pq.mN = pq.UnitQuantity('millinewton', pq.N/1e3, symbol = 'mN'); # define millinewton
# make figures interactive and open in a separate window
# %matplotlib qt
# make figures interactive and inline
%matplotlib notebook
# make figures non-interactive and inline
# %matplotlib inline
colors = {
    'B38': '#EFBF46',      # yellow
    'I2': '#DC5151',       # red
    'B8a/b': '#DA8BC3',    # pink
    'B6/B9': '#64B5CD',    # light blue
    'B3/B6/B9': '#5A9BC5', # medium blue
    'B3': '#4F80BD',       # dark blue
    'B4/B5': '#00A86B',    # jade green
    'Force': '0.7',        # light gray
    'Model': '0.2',        # dark gray
}
```
# Load Data
```
directory = 'spikes-firing-rates-and-forces'
# filename = 'JG07 Tape nori 0.mat'
# filename = 'JG08 Tape nori 0.mat'
filename = 'JG08 Tape nori 1.mat'
# filename = 'JG08 Tape nori 1 superset.mat' # this file is missing spikes for several swallows
# filename = 'JG08 Tape nori 2.mat'
# filename = 'JG11 Tape nori 0.mat'
# filename = 'JG12 Tape nori 0.mat'
# filename = 'JG12 Tape nori 1.mat'
# filename = 'JG14 Tape nori 0.mat'
file_basename = '.'.join(os.path.basename(filename).split('.')[:-1])
# read the data file containing force and spike trains
reader = neo.io.NeoMatlabIO(os.path.join(directory, filename))
blk = reader.read_block()
seg = blk.segments[0]
sigs = {sig.name:sig for sig in seg.analogsignals}
spiketrains = {st.name:st for st in seg.spiketrains}
```
# Plot Empirical Force
```
# plot the swallowing force measured by the force transducer
fig, ax = plt.subplots(1, 1, sharex=True, figsize=(8,4))
ax.plot(sigs['Force'].times.rescale('s'), sigs['Force'].rescale('mN'), c=colors['Force'])
ax.set_xlabel('Time (s)')
ax.set_ylabel('Force (mN)')
ax.set_title(file_basename)
plt.tight_layout()
```
# Model Parameters
```
# parameters for constructing the model
# - model force = sum of scaled (weighted) firing rates + offset
# - comment/uncomment an entry in firing_rate_params to exclude/include the unit (I2 muscle or motor neurons)
# - weights can be positive or negative
# - rate constants determine how quickly the effect of a unit builds and decays
# - the model will be plotted below against the empirical force, both normalized by their peak values
offset = 0
# firing_rate_params = {
#     # 'I2': {'weight': -0.002, 'rate_constant': 1},
#     # 'B8a/b': {'weight': 0.05, 'rate_constant': 1},
#     'B3': {'weight': 0.05, 'rate_constant': 1},
#     'B6/B9': {'weight': 0.05, 'rate_constant': 0.5},
#     'B38': {'weight': 0.025, 'rate_constant': 1},
#     # 'B4/B5': {'weight': 0.05, 'rate_constant': 1},
# }
firing_rate_params = {
    # 'I2': {'weight': -0.02, 'rate_constant': 1},
    # 'B8a/b': {'weight': 0.05, 'rate_constant': 1},
    'B3': {'weight': 0.05, 'rate_constant': 1},
    'B6/B9': {'weight': 0.1, 'rate_constant': 0.5},
    'B38': {'weight': 0.05, 'rate_constant': 1},
    # 'B4/B5': {'weight': 0.05, 'rate_constant': 1},
}
```
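As a rough, dependency-light sketch of the model described in the comments above (plain NumPy in place of neurotic's `CausalAlphaKernel` and `instantaneous_rate`; every constant here is illustrative, not taken from the data):

```python
import numpy as np

# Sketch of the model idea: convolve a spike train with a causal alpha
# kernel to get a smoothed firing rate, then scale by a weight and add
# an offset. Spike times, tau, weight, and offset are all made up.
dt = 0.001                          # 1 ms time step
t = np.arange(0, 2, dt)             # 2 s of simulated time
tau = 0.5                           # rate constant (s)

# Causal alpha kernel: k(t) = (t / tau^2) * exp(-t / tau) for t >= 0
kernel = (t / tau**2) * np.exp(-t / tau)
kernel /= kernel.sum() * dt         # normalize so the kernel integrates to 1

spikes = np.zeros_like(t)
spikes[[100, 300, 320, 900]] = 1.0  # four spikes at 0.1, 0.3, 0.32, 0.9 s

# Instantaneous rate (Hz): discrete convolution, truncated to the window;
# the area under the rate approximates the spike count
rate = np.convolve(spikes, kernel)[:len(t)]

weight, offset = 0.05, 0.0
model = weight * rate + offset      # one unit's contribution to the model
```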
# Generate Firing Rate Model
```
firing_rates = {}
for name, params in firing_rate_params.items():
    weight = params['weight']
    rate_constant = params['rate_constant']

    # convolve the spike train with the kernel
    firing_rates[name] = instantaneous_rate(
        spiketrain=spiketrains[name],
        sampling_period=0.0002*pq.s, # 5 kHz, same as data acquisition rate
        kernel=CausalAlphaKernel(rate_constant*pq.s),
    )
    firing_rates[name].name = f'{name}\nweight: {weight}\nrate const: {rate_constant} sec'

    # scale the firing rate by its weight
    firing_rates[name] *= weight

# create the model by summing the firing rates and adding the offset
firing_rates['Model'] = None
for name, params in firing_rate_params.items():
    if firing_rates['Model'] is None:
        firing_rates['Model'] = firing_rates[name].copy()
    else:
        firing_rates['Model'] += firing_rates[name]
firing_rates['Model'] += offset*pq.Hz
firing_rates['Model'].name = f'Model = Sum of\nScaled Rates + {offset}'
```
# Plot Model
```
# plot each spike train and the scaled (weighted) firing rate
fig, axes = plt.subplots(len(firing_rates)+1, 1, sharex=True, figsize=(8,2*len(firing_rates)))
for i, name in enumerate(firing_rates):
    ax = axes[i]
    if name in spiketrains:
        ax.eventplot(positions=spiketrains[name], lineoffsets=-1, colors=colors[name])
    ax.plot(firing_rates[name].times.rescale('s'), firing_rates[name].rescale('Hz'), c=colors[name])
    ax.set_ylabel(firing_rates[name].name)
    ax.set_ylim(-2, 3)
# plot force and the model, both normalized by their peaks
axes[-1].plot(sigs['Force'].times.rescale('s'), sigs['Force']/sigs['Force'].max(), c=colors['Force'])
axes[-1].plot(firing_rates['Model'].times.rescale('s'), firing_rates['Model']/firing_rates['Model'].max(), c=colors['Model'])
axes[-1].set_ylabel('Model vs. Force\n(both normalized)')
axes[-1].set_xlabel('Time (s)')
axes[0].set_title(file_basename)
plt.tight_layout()
```
# Plot Model for Grant
```
# use with JG08 Tape nori 1
time_slices = {
    'I2': [670.7, 680.83],
    'B8a/b': [673.5, 679.59],
    'B3': [675.645, 680.83],
    'B6/B9': [674.25, 680.83],
    'B38': [670.7, 680.83],
    'Model': [672.26, 680.2],
    'Force': [672.26, 680.2],
}
# plot each spike train and the scaled (weighted) firing rate
fig, axes = plt.subplots(2*len(firing_rate_params)+1, 1, sharex=True, figsize=(6,len(firing_rate_params)*(16/17)+1*(20/17)), gridspec_kw={'height_ratios': [3, 1]*len(firing_rate_params) + [5]})
for i, name in enumerate(firing_rate_params):
    ax = axes[2*i]
    fr = firing_rates[name]
    st = spiketrains[name]
    if name in time_slices:
        fr = fr.copy().time_slice(time_slices[name][0]*pq.s, time_slices[name][1]*pq.s)
        st = st.copy().time_slice(time_slices[name][0]*pq.s, time_slices[name][1]*pq.s)
    ax.plot(fr.times.rescale('s'), fr.rescale('Hz'), c=colors[name])
    ax.annotate(name, xy=(0, 0.5), xycoords='axes fraction',
                ha='right', va='center', fontsize='large', color=colors[name], fontfamily='Serif',
                )
    # ax.set_ylim(0, 2.2)
    ax.axis('off')

    ax = axes[2*i+1]
    ax.eventplot(positions=st, lineoffsets=-1, colors=colors[name])
    ax.axis('off')
# plot force and the model, both normalized by their peaks
force = sigs['Force'].copy().time_slice(time_slices['Force'][0]*pq.s, time_slices['Force'][1]*pq.s)
model = firing_rates['Model'].time_slice(time_slices['Model'][0]*pq.s, time_slices['Model'][1]*pq.s)
axes[-1].plot(force.times.rescale('s'), force/force.max(), c=colors['Force'])
axes[-1].plot(model.times.rescale('s'), model/model.max(), c=colors['Model'])
axes[-1].annotate('Model\nvs.', xy=(-0.04, 0.6), xycoords='axes fraction',
                  ha='center', va='center', fontsize='large', color=colors['Model'], fontfamily='Serif',
                  )
axes[-1].annotate('Force', xy=(-0.04, 0.35), xycoords='axes fraction',
                  ha='center', va='center', fontsize='large', color=colors['Force'], fontfamily='Serif',
                  )
axes[-1].axis('off')
plt.tight_layout(pad=0)
```
# BERT: As one of Autoencoding Language Models
```
import os
from google.colab import drive
drive.mount('/content/drive')
!pip install transformers
!pip install tokenizers
os.chdir("drive/My Drive/data/")
os.listdir()
import pandas as pd
imdb_df = pd.read_csv("IMDB Dataset.csv")
reviews = imdb_df.review.to_string(index=None)
with open("corpus.txt", "w") as f:
    f.writelines(reviews)
from tokenizers import BertWordPieceTokenizer
bert_wordpiece_tokenizer = BertWordPieceTokenizer()
bert_wordpiece_tokenizer.train("corpus.txt")
bert_wordpiece_tokenizer.get_vocab()
!mkdir tokenizer
bert_wordpiece_tokenizer.save_model("tokenizer")
tokenizer = BertWordPieceTokenizer.from_file("tokenizer/vocab.txt")
tokenized_sentence = tokenizer.encode("Oh it works just fine")
tokenized_sentence.tokens
tokenized_sentence = tokenizer.encode("ohoh i thougt it might be workingg well")
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("tokenizer")
from transformers import LineByLineTextDataset
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="corpus.txt", block_size=128)
from transformers import DataCollatorForLanguageModeling
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
from transformers import TrainingArguments
training_args = TrainingArguments(output_dir="BERT", overwrite_output_dir=True, num_train_epochs=1, per_device_train_batch_size=128)
from transformers import BertConfig, BertForMaskedLM
bert = BertForMaskedLM(BertConfig())
from transformers import Trainer
trainer = Trainer(model=bert, args=training_args, data_collator=data_collator, train_dataset=dataset)
trainer.train()
trainer.save_model("MyBERT")
from transformers import BertConfig
BertConfig()
tiny_bert_config = BertConfig(max_position_embeddings=512, hidden_size=128, num_attention_heads=2, num_hidden_layers=2, intermediate_size=512)
tiny_bert_config
tiny_bert = BertForMaskedLM(tiny_bert_config)
trainer = Trainer(model=tiny_bert, args=training_args, data_collator=data_collator, train_dataset=dataset)
trainer.train()
from transformers import TFBertModel, BertTokenizerFast
bert = TFBertModel.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert.layers
tokenized_text = tokenizer.batch_encode_plus(["hello how is it going with you","lets test it"], return_tensors="tf", max_length=256, truncation=True, pad_to_max_length=True)
bert(tokenized_text)
from tensorflow import keras
import tensorflow as tf
max_length = 256
tokens = keras.layers.Input(shape=(max_length,), dtype=tf.dtypes.int32)
masks = keras.layers.Input(shape=(max_length,), dtype=tf.dtypes.int32)
embedding_layer = bert.layers[0]([tokens,masks])[0][:,0,:]
dense = tf.keras.layers.Dense(units=2, activation="softmax")(embedding_layer)
model = keras.Model([tokens,masks],dense)
tokenized = tokenizer.batch_encode_plus(["hello how is it going with you","hello how is it going with you"], return_tensors="tf", max_length= max_length, truncation=True, pad_to_max_length=True)
model([tokenized["input_ids"],tokenized["attention_mask"]])
model.compile(optimizer="Adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
model.layers[2].trainable = False
import pandas as pd
imdb_df = pd.read_csv("IMDB Dataset.csv")
reviews = list(imdb_df.review)
tokenized_reviews = tokenizer.batch_encode_plus(reviews, return_tensors="tf", max_length=max_length, truncation=True, pad_to_max_length=True)
import numpy as np
train_split = int(0.8 * len(tokenized_reviews["attention_mask"]))
train_tokens = tokenized_reviews["input_ids"][:train_split]
test_tokens = tokenized_reviews["input_ids"][train_split:]
train_masks = tokenized_reviews["attention_mask"][:train_split]
test_masks = tokenized_reviews["attention_mask"][train_split:]
sentiments = list(imdb_df.sentiment)
labels = np.array([[0,1] if sentiment == "positive" else [1,0] for sentiment in sentiments])
train_labels = labels[:train_split]
test_labels = labels[train_split:]
model.fit([train_tokens,train_masks],train_labels, epochs=5)
```
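The `DataCollatorForLanguageModeling` call above masks tokens with `mlm_probability=0.15`. A minimal NumPy sketch of that masking scheme (assuming the standard BERT 80/10/10 split between `[MASK]`, random replacement, and unchanged tokens; the token IDs and vocabulary size below are illustrative, not the real tokenizer's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative vocabulary: 103 stands in for [MASK]; real IDs start at 200
MASK_ID, VOCAB_SIZE = 103, 1000
tokens = rng.integers(200, VOCAB_SIZE, size=1000)

inputs = tokens.copy()
labels = np.full_like(tokens, -100)          # -100 = position ignored by the loss

selected = rng.random(len(tokens)) < 0.15    # mlm_probability=0.15
labels[selected] = tokens[selected]          # loss is computed only here

# Of the selected tokens: 80% -> [MASK], 10% -> random token, 10% unchanged
r = rng.random(len(tokens))
inputs[selected & (r < 0.8)] = MASK_ID
random_pos = selected & (r >= 0.8) & (r < 0.9)
inputs[random_pos] = rng.integers(200, VOCAB_SIZE, size=random_pos.sum())

print(selected.mean())  # ≈ 0.15
```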
```
import torch
import torch.nn as nn
from collections import OrderedDict
import shutil
import time
import gzip
import os
import json
import numpy as np
from dpp_nets.utils.io import make_embd, make_tensor_dataset, load_tensor_dataset
from dpp_nets.utils.io import data_iterator, load_embd
from torch.autograd import Variable
from torch.utils.data.dataloader import DataLoader
import time
from dpp_nets.my_torch.utilities import pad_tensor
root = '/Users/Max/data/beer_reviews'
data_file = 'reviews.aspect3.train.txt.gz'
embd_file = 'review+wiki.filtered.200.txt.gz'
save_path = os.path.join(root,'pytorch/aspect3_train.pt')
data_path = os.path.join(root, data_file)
embd_path = os.path.join(root, embd_file)
def read_rationales(path):
    """
    This reads the json.annotations file.
    Creates a list of dictionaries, which holds the 994 reviews for which
    sentence-level annotations are available.
    """
    data = []
    fopen = gzip.open if path.endswith(".gz") else open
    with fopen(path) as fin:
        for line in fin:
            item = json.loads(line)
            data.append(item)
    return data
from collections import defaultdict
import torch
import torch.nn as nn
from dpp_nets.my_torch.linalg import custom_decomp
from dpp_nets.my_torch.DPP import DPP
from dpp_nets.my_torch.DPP import AllInOne
from dpp_nets.my_torch.utilities import compute_baseline
class DPP_Classifier(nn.Module):
def __init__(self, dtype):
super(DPP_Classifier, self).__init__()
# Float vs Double
self.dtype = dtype
# Network parameters
self.kernel_in = kernel_in = 400
self.kernel_h = kernel_h = 1000
self.kernel_out = kernel_out = 400
self.pred_in = pred_in = 200 # kernel_in / 2
self.pred_h = pred_h = 500
self.pred_h2 = pred_h2 = 200
self.pred_out = pred_out = 3
# 2-Hidden-Layer Networks
self.kernel_net = torch.nn.Sequential(nn.Linear(kernel_in, kernel_h), nn.ELU(),
nn.Linear(kernel_h, kernel_h), nn.ELU(),
nn.Linear(kernel_h, kernel_out))
# 3-Hidden-Layer-Networks
self.pred_net = torch.nn.Sequential(nn.Linear(pred_in, pred_h), nn.ReLU(),
nn.Linear(pred_h, pred_h), nn.ReLU(),
nn.Linear(pred_h, pred_h2), nn.ReLU(),
nn.Linear(pred_h2, pred_out), nn.Sigmoid())
self.kernel_net.type(self.dtype)
self.pred_net.type(self.dtype)
# Sampling Parameter
self.alpha_iter = 5
# Convenience
self.kernels = []
self.subsets = None
self.picks = None
self.preds = None
self.saved_subsets = None
self.saved_losses = None # not really necesary
self.saved_baselines = None # not really necessary
def forward(self, reviews):
"""
reviews: batch_size x max_set_size x embd_dim = 200
Output: batch_size x pred_out (the prediction)
Challenges: Need to resize tensor appropriately and
measure length etc.
"""
batch_size, max_set_size, embd_dim = reviews.size()
alpha_iter = self.alpha_iter
self.saved_subsets = actions = [[] for i in range(batch_size)]
picks = [[] for i in range(batch_size)]
# Create context
lengths = reviews.sum(2).abs().sign().sum(1)
context = (reviews.sum(1) / lengths.expand_as(reviews.sum(1))).expand_as(reviews)
mask = reviews.sum(2).abs().sign().expand_as(reviews).byte()
# Mask out zero words
reviews = reviews.masked_select(mask).view(-1, embd_dim)
context = context.masked_select(mask).view(-1, embd_dim)
# Compute batched_kernel
kernel_input = torch.cat([reviews, context], dim=1)
kernel_output = self.kernel_net(kernel_input)
# Extract the kernel for each review from batched_kernel
starts = list(lengths.squeeze().cumsum(0).long().data - lengths.squeeze().long().data)
ends = list(lengths.squeeze().cumsum(0).long().data)
for i, (s, e) in enumerate(zip(starts, ends)):
review = reviews[s:e] # original review, without zero words
kernel = kernel_output[s:e] # corresponding kernel
self.kernels.append(kernel.data)
#vals, vecs = custom_decomp()(kernel)
for j in range(alpha_iter):
subset = AllInOne()(kernel)
#subset = DPP()(vals, vecs)
actions[i].append(subset)
pick = subset.diag().mm(review).sum(0)
picks[i].append(pick)
# Predictions
picks = torch.stack([torch.stack(pick) for pick in picks]).view(-1, embd_dim)
preds = self.pred_net(picks).view(batch_size, alpha_iter, -1)
return preds
def register_rewards(preds, targets, criterion, net):
#targets = targets.unsqueeze(1).unsqueeze(1).expand_as(preds)
targets = targets.unsqueeze(1).expand_as(preds)
loss = criterion(preds, targets)
actions = net.saved_subsets
losses = ((preds - targets)**2).mean(2)
losses = [[i.data[0] for i in row] for row in losses]
net.saved_losses = losses # not really necessary
baselines = [compute_baseline(i) for i in losses]
net.saved_baselines = baselines # not really necessary
for subset_actions, rewards in zip(actions, baselines):
for action, reward in zip(subset_actions, rewards):
action.reinforce(reward)
return loss
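# A tiny sketch of the variance-reduction idea behind register_rewards: each
# sampled subset is reinforced with its advantage (its loss relative to a
# baseline) rather than with the raw loss. One common choice is the
# leave-one-out mean over the alpha_iter samples. This is an assumption here:
# compute_baseline may implement a different baseline.
def leave_one_out_advantages(losses):
    # For each sample: reward = -(loss - mean of the other samples' losses)
    n = len(losses)
    total = sum(losses)
    return [-(l - (total - l) / (n - 1)) for l in losses]
# e.g. leave_one_out_advantages([1.0, 2.0, 3.0]) == [1.5, 0.0, -1.5]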
# Useful Support
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
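# Quick sanity check of AverageMeter: update(val, n) weights each value by n
# (e.g. by batch size), so that .avg is the n-weighted running mean. The class
# is restated here in condensed form so the check is self-contained.
class _AvgMeterCheck(object):
    def __init__(self):
        self.val = self.avg = self.sum = self.count = 0
    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count

meter_check = _AvgMeterCheck()
meter_check.update(2.0, n=10)   # a batch of 10 reviews with mean loss 2.0
meter_check.update(4.0, n=30)   # a batch of 30 reviews with mean loss 4.0
# meter_check.avg == (2.0*10 + 4.0*30) / 40 == 3.5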
def save_checkpoint(state, is_best, filename='checkpoint.pth.tar'):
"""
Saves the training state; if it is the best so far, also copies it to model_best.pth.tar.
"""
torch.save(state, filename)
if is_best:
shutil.copyfile(filename, 'model_best.pth.tar')
def adjust_learning_rate(optimizer, epoch):
"""Sets the learning rate to the initial LR decayed by 10 every 30 epochs"""
lr = optimizer.state_dict()['param_groups'][0]['lr']
lr = lr * (0.1 ** (epoch // 5))
for param_group in optimizer.param_groups:
param_group['lr'] = lr
def train(train_loader, embd, model, criterion, optimizer, epoch, dtype):
batch_time = AverageMeter()
data_time = AverageMeter()
losses = AverageMeter()
subset_size = AverageMeter()
target_dim = 3
end = time.time()
for i, (review, target) in enumerate(train_loader):
# measure data loading time
data_time.update(time.time() - end)
targets = Variable(target[:,:target_dim].type(dtype))
reviews = embd(Variable(review)).type(dtype)
# compute output
model.alpha_iter = 2
pred = model(reviews)
loss = register_rewards(pred, targets, criterion, model)
# measure accuracy and record loss (TODO)
# prec1, prec5 = accuracy(output.data, target, topk=(1, 5))
losses.update(loss.data[0], reviews.size(0))
for l in model.saved_subsets:
for s in l:
subset_size.update(s.data.sum())
# top1.update(prec1[0], input.size(0))
# top5.update(prec5[0], input.size(0))
# compute gradient and do SGD step
optimizer.zero_grad()
loss.backward()
optimizer.step()
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
#if i % args.print_freq == 0:
if i % print_freq == 0:
print('Epoch: [{0}][{1}/{2}]\t'
'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
'Data {data_time.val:.3f} ({data_time.avg:.3f})\t'
'SSize {subset_size.val:.2f} ({subset_size.avg:.2f})\t'
'Loss {loss.val:.4f} ({loss.avg:.4f})\t'.format(
epoch, i, len(train_loader), batch_time=batch_time,
data_time=data_time, subset_size = subset_size, loss=losses))
def validate(val_loader, model, criterion):
batch_time = AverageMeter()
losses = AverageMeter()
t_prec = AverageMeter()
t_recall = AverageMeter()
t_tp = AverageMeter()
t_fp = AverageMeter()
t_fn = AverageMeter()
target_dim = 3
# switch to evaluate mode
# model.eval()
end = time.time()
for i, (review, target) in enumerate(val_loader):
target = target.sum(1).sign().type(dtype).squeeze().byte()
# targets = target[:,:target_dim,:].type(dtype)
reviews = embd(Variable(review, volatile=True)).type(dtype)
# compute output
model.alpha_iter = 1
preds = model(reviews)
subset = model.saved_subsets[0][0]
subset = pad_tensor(subset.data,0,0,412).byte()
# target = target[:,:target_dim,:].squeeze()
retriev = subset.sum()
relev = target.sum()
tp = target.masked_select(subset).sum()
fp = (1 - target.masked_select(subset)).sum()
fn = (1 - subset.masked_select(target)).sum()
t_tp.update(tp)
t_fp.update(fp)
t_fn.update(fn)
if retriev:
prec = tp / retriev
t_prec.update(prec)
if relev:
recall = tp / relev
t_recall.update(recall)
# measure accuracy and record loss
#prec1, prec5 = accuracy(output.data, target, topk=(1, 5))
#losses.update(loss.data[0], input.size(0))
#top1.update(prec1[0], input.size(0))
#top5.update(prec5[0], input.size(0))
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
if i % 100 == 0:
print('Test: [{0}/{1}]\t'
'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
'Precision {t_prec.val:.4f} ({t_prec.avg:.4f})\t'
'Recall {t_recall.val:.4f} ({t_recall.avg:.4f})\t'.format(
i, len(val_loader), batch_time=batch_time, t_prec=t_prec, t_recall=t_recall))
return t_prec.avg
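# Worked check of the precision/recall bookkeeping in validate, using
# hypothetical 0/1 masks: tp counts words that are both relevant and selected,
# precision = tp / retrieved and recall = tp / relevant.
target_mask = [1, 0, 1, 1, 0]   # ground-truth rationale words (made up)
subset_mask = [1, 1, 1, 0, 0]   # words selected by the DPP (made up)
tp_check = sum(t and s for t, s in zip(target_mask, subset_mask))
prec_check = tp_check / sum(subset_mask)    # 2 / 3
recall_check = tp_check / sum(target_mask)  # 2 / 3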
### MAIN PROGRAMME
global best_prec1
best_prec1 = 0
# set parameters
lr = 1e-1
momentum = 0.9
weight_decay = 0.
start_epoch = 0
epochs = 1
batch_size = 20
print_freq = 10
data = '/Users/Max/data/beer_reviews/pytorch'
dtype = torch.DoubleTensor
# create model
embd = load_embd('/Users/Max/data/beer_reviews/pytorch/embeddings.pt')
model = DPP_Classifier(torch.DoubleTensor)
# define loss function (criterion) and optimizer
criterion = nn.L1Loss()
optimizer = torch.optim.SGD(model.parameters(), lr,
momentum=momentum,
weight_decay=weight_decay)
# Data loading code
trainpath = os.path.join(data, 'aspect1_train.pt')
valpath = os.path.join(data, 'aspect1_heldout.pt')
ratpath = os.path.join(data, 'annotated.pt')
train_set = torch.load(trainpath)
val_set = torch.load(valpath)
rat_set = torch.load(ratpath)
rat_train_set = torch.load(os.path.join(data, 'annotated_common.pt'))
#train_loader = DataLoader(train_set, batch_size, shuffle=True)
#val_loader = DataLoader(val_set)
rat_train_loader = DataLoader(rat_train_set, batch_size, shuffle=True)
rat_loader = DataLoader(rat_set)
epochs = 20
criterion = nn.L1Loss()
for epoch in range(start_epoch, epochs):
adjust_learning_rate(optimizer, epoch)
# train for one epoch
train(rat_train_loader, embd, model, criterion, optimizer, epoch, dtype)
# evaluate on validation set
prec1 = validate(rat_loader, model, criterion)
# remember best prec@1 and save checkpoint
is_best = prec1 > best_prec1
best_prec1 = max(prec1, best_prec1)
save_checkpoint({
'epoch': epoch + 1,
'state_dict': model.state_dict(),
'best_prec1': best_prec1,
'optimizer' : optimizer.state_dict(),
}, is_best)
import random
word_to_ix = make_embd(embd_path, only_index_dict=True)
ix_to_word = {ix: word for word, ix in word_to_ix.items()}
rat_set, ix_to_word
def sample_words(rat_set, model, ix_to_word):
# Sample a review
ix = random.randint(0, len(rat_set) - 1)
# Make a prediction
x = rat_set.data_tensor[ix].unsqueeze(0)
review = embd(Variable(x, volatile=True)).type(dtype)
model.alpha_iter = 1
model(review)
# What words were selected
subset = model.saved_subsets[0][0]
subset = pad_tensor(subset.data,0,0,412).byte()
# Convert to words
all_words = [ix_to_word[ix -1] for ix in x.squeeze() if ix > 0]
filtered_words = [ix_to_word[ix -1] for ix in x.masked_select(subset)]
print(" ".join(all_words) )
print("DPP Selection: ", filtered_words)
ix = random.randint(0, len(rat_set) - 1)
rat_set.data_tensor[ix].unsqueeze(0)
def sample_prediction(rat_set, model):
# Sample a review from the dataset passed in (not the global rat_train_set)
ix = random.randint(0, len(rat_set) - 1)
# Make a prediction
x = rat_set.data_tensor[ix].unsqueeze(0)
target = rat_set.target_tensor[ix][:3]
review = embd(Variable(x, volatile=True)).type(dtype)
model.alpha_iter = 1
pred = model(review).data.squeeze()
print(pred, target)
return pred, target
pred, target = sample_prediction(rat_set, model)
criterion(Variable(pred), Variable(target))
torch.save(model.pred_net.state_dict(), 'pred_dict25.pt')
import torch
e = 0
for i in range(100):
v = torch.normal(torch.FloatTensor([1,2,3,4,5]))
e += v
e / 100
non_lin = torch.sin
torch.sin(v)
non_lin(v)
non_lin
batch_size = 2
set_size = 3
embd_dim = 4
words = torch.randn(batch_size, set_size, embd_dim)
torch.cos(torch.sin(words).mean(1)).squeeze()
v = torch.normal(torch.FloatTensor([1,2,3,4,5]))
torch.log(v)
import numpy as np
batch_size = 100
n_clusters = 10
set_size = 40
embd_dim = pred_in = 50
dtype = torch.DoubleTensor
np.random.seed(0)
means = dtype(np.random.randint(-50,50,[n_clusters, int(pred_in)]).astype("float"))
def generate(batch_size):
"""sdf"
Arguments:
means: Probs best to make this an attribute of the class,
so that repeated training works with the same data distribution.
"""
# Generate index
index = torch.cat([torch.arange(0, float(n_clusters)).expand(batch_size, n_clusters).long(),
torch.multinomial(torch.ones(batch_size, n_clusters), set_size - n_clusters, replacement=True)]
,dim=1)
index = index.t()[torch.randperm(set_size)].t().contiguous()
# Generate words, context, target
words = dtype(torch.normal(means.index_select(0,index.view(index.numel()))).view(batch_size, set_size, embd_dim))
context = dtype(words.sum(1).expand_as(words))
target = torch.sin(torch.pow(words.abs(),2).mean(1)).squeeze()
return words, context, target
words, context, target = generate(5)
print(target)
(torch.std(target, dim=0) / torch.mean(target, dim=0)).mean()
target
v1 = torch.randn(2,2)
v2 = torch.randn(2,2)
v3 = torch.randn(2,2)
v4 = torch.randn(2,2)
v5 = torch.randn(2,2)
v6 = torch.randn(2,2)
import torch.nn as nn
nn.MSELoss()
from dpp_nets.my_torch.simulator import SimKDPPDeepSet
import torch
network_params = {'set_size': 40, 'n_clusters': 10}
dtype = torch.DoubleTensor
sim = SimKDPPDeepSet(network_params, dtype)
mod = torch.nn.Sequential(nn.Linear(10,20), nn.ReLU(), nn.Linear(20,10))
for mod in mod.modules():
print(mod)
A = Variable(torch.randn(10,20))
mod(A)
batch_size = 3
set_size = 4
embd_dim = 5
words = Variable(torch.randn(batch_size, set_size, embd_dim))
print(words)
subset = Variable(torch.ByteTensor([1,0,0,1]),requires_grad=True)
words[1].masked_select(Variable(subset.data.expand_as(words[1].t())).t()).view(-1,embd_dim)
data_path
word_to_ix = make_embd(embd_path, only_index_dict=True)
old_dataset = make_tensor_dataset(data_path, word_to_ix)
old_dataset.data_tensor
ix = 1324
print(torch.cat([old_dataset.data_tensor[ix, 200:220].unsqueeze(1), dataset.data_tensor[ix, 200:220].unsqueeze(1)],dim=1))
# this is bad, as the maximum set_size might increase
from torch.utils.data import TensorDataset
import re
reviews = []
targets = []
max_set_size = 0
for i, (review, target) in enumerate(data_iterator(data_path)):
review_ix = []
for word in review:
if word in word_to_ix:
ix = word_to_ix[word] + 1
review_ix.append(ix)
else:
candidates = re.split('[;|,-/."]',word)
for word in candidates:
if word in word_to_ix:
print(i)
ix = word_to_ix[word] + 1
review_ix.append(ix)
max_set_size = max(max_set_size, len(review_ix))
reviews.append(review_ix)
targets.append(target)
reviews_tensor = []
for review in reviews:
review = torch.LongTensor(review)
review = pad_tensor(review, 0, 0, max_set_size)
reviews_tensor.append(review)
reviews = torch.stack(reviews_tensor)
targets = torch.stack(targets)
dataset = TensorDataset(reviews, targets)
word = 2
dim = 3
embd.weight[:,dim]
dataset.data_tensor
def data_iterator(data_path):
with gzip.open(data_path, 'rt') as f:
for line in f:
target, sep, words = line.partition("\t")
words, target = words.split(), target.split()
if len(words):
target = torch.Tensor([float(v) for v in target])
yield words, target
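# Illustration of the line format data_iterator expects, assumed from the code
# above: tab-separated "<target scores>\t<review words>" (made-up example).
example_line = "0.8 0.6 0.9\tpours a hazy golden color"
example_target, _, example_words = example_line.partition("\t")
# example_words.split() -> ['pours', 'a', 'hazy', 'golden', 'color']
# [float(v) for v in example_target.split()] -> [0.8, 0.6, 0.9]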
def make_tensor_dataset(data_path, word_to_ix, max_set_size=0, save_path=None):
if not max_set_size:
for (review, target) in data_iterator(data_path):
review = [(word in word_to_ix) for word in review]
max_set_size = max(sum(review),max_set_size)
reviews, targets = [], []
for (review, target) in data_iterator(data_path):
review = [word_to_ix[word] + 1 for word in review if word in word_to_ix]
review = torch.LongTensor(review)
review = pad_tensor(review, 0, 0, max_set_size)
reviews.append(review)
targets.append(target)
reviews = torch.stack(reviews)
targets = torch.stack(targets)
dataset = TensorDataset(reviews, targets)
if save_path:
torch.save(dataset, save_path)
else:
return dataset
root = '/Users/Max/data/beer_reviews'
data_file = 'reviews.aspect3.train.txt.gz'
embd_file = 'review+wiki.filtered.200.txt.gz'
save_path = os.path.join(root,'pytorch/aspect3_train.pt')
data_path = os.path.join(root, data_file)
embd_path = os.path.join(root, embd_file)
import torch
import torch.nn as nn
batch_size = 20
input_dim = 12
hidden_dim = 18
output_dim = 2
layer1 = nn.Linear(input_dim, hidden_dim)
batch_norm = nn.BatchNorm1d(hidden_dim)
layer2 = nn.Linear(hidden_dim, output_dim)
model = nn.Sequential(layer1, batch_norm, layer2)
layer1.train()
x = Variable(torch.randn(batch_size, input_dim))
#model2 = nn.Sequential(layer1, layer2)
#y1 = model(x)
#y2 = model2(x)
y = layer2(batch_norm(layer1(x)))
batch_norm(layer1(x))
import argparse
import os
import shutil
import torch
import torch.nn as nn
from torch.autograd import Variable
from torch.utils.data.dataloader import DataLoader
from dpp_nets.utils.io import make_embd, make_tensor_dataset
from dpp_nets.layers.layers import KernelVar, ReinforceSampler, PredNet, ReinforceTrainer
parser = argparse.ArgumentParser(description='REINFORCE VIMCO Trainer')
parser.add_argument('-a', '--aspect', type=str, choices=['aspect1', 'aspect2', 'aspect3', 'all'],
help='what is the target?', required=True)
parser.add_argument('-b', '--batch-size', default=50, type=int,
metavar='N', help='mini-batch size (default: 50)')
parser.add_argument('--epochs', default=30, type=int, metavar='N',
help='number of total epochs to run')
parser.add_argument('--lr_k', '--learning_rate_k', default=1e-3, type=float,
metavar='LRk', help='initial learning rate for kernel net')
parser.add_argument('--lr_p', '--learning_rate_p', default=1e-4, type=float,
metavar='LRp', help='initial learning rate for pred net')
parser.add_argument('--reg', type=float, required=True,
metavar='reg', help='regularization constant')
parser.add_argument('--reg_mean', type=float, required=True,
metavar='reg_mean', help='regularization_mean')
parser.add_argument('--alpha_iter', type=int, required=True,
metavar='alpha_iter', help='How many subsets to sample from DPP? At least 2!')
# Pre-training
parser.add_argument('--pretrain_kernel', type=str, default="",
metavar='pretrain_kernel', help='Give name of pretrain_kernel')
parser.add_argument('--pretrain_pred', type=str, default="",
metavar='pretrain_pred', help='Give name of pretrain_pred')
# Train locally or remotely?
parser.add_argument('--remote', type=int,
help='training locally or on cluster?', required=True)
# Burnt in Paths..
parser.add_argument('--data_path_local', type=str, default='/Users/Max/data/beer_reviews',
help='where is the data folder locally?')
parser.add_argument('--data_path_remote', type=str, default='/cluster/home/paulusm/data/beer_reviews',
help='where is the data folder remotely?')
parser.add_argument('--ckp_path_local', type=str, default='/Users/Max/checkpoints/beer_reviews',
help='where is the checkpoints folder locally?')
parser.add_argument('--ckp_path_remote', type=str, default='/cluster/home/paulusm/checkpoints/beer_reviews',
help='where is the data folder remotely?')
parser.add_argument('--pretrain_path_local', type=str, default='/Users/Max/checkpoints/beer_reviews',
help='where is the pre_trained model? locally')
parser.add_argument('--pretrain_path_remote', type=str, default='/cluster/home/paulusm/pretrain/beer_reviews',
help='where is the data folder? remotely')
def train(loader, trainer, optimizer):
trainer.train()
for t, (review, target) in enumerate(loader):
review = Variable(review)
if args.aspect == 'all':
target = Variable(target[:,:3]).type(dtype)
else:
target = Variable(target[:,int(args.aspect[-1])]).type(dtype)
loss = trainer(review, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print('Weight mean is: ', trainer.kernel_net.layer1.weight.mean())
print('Weight max is: ', trainer.kernel_net.layer1.weight.max())
print('Weight min is: ', trainer.kernel_net.layer1.weight.min())
print('Grad max is: ', trainer.kernel_net.layer1.weight.grad.max())
print('Grad min is: ', trainer.kernel_net.layer1.weight.grad.min())
print("trained one batch")
def validate(loader, trainer):
"""
Note, we keep the sampling as before.
i.e what ever alpha_iter is, we take it.
"""
trainer.eval()
total_loss = 0.0
total_pred_loss = 0.0
total_reg_loss = 0.0
for i, (review, target) in enumerate(loader, 1):
review = Variable(review, volatile=True)
if args.aspect == 'all':
target = Variable(target[:,:3], volatile=True).type(dtype)
else:
target = Variable(target[:,int(args.aspect[-1])], volatile=True).type(dtype)
trainer(review, target)
loss = trainer.loss.data[0]
pred_loss = trainer.pred_loss.data[0]
reg_loss = trainer.reg_loss.data[0]
delta = loss - total_loss
total_loss += (delta / i)
delta = pred_loss - total_pred_loss
total_pred_loss += (delta / i)
delta = reg_loss - total_reg_loss
total_reg_loss += (delta / i)
# print("validated one batch")
return total_loss, total_pred_loss, total_reg_loss
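# The delta / i updates in validate are the standard streaming-mean recurrence,
# mean_i = mean_{i-1} + (x_i - mean_{i-1}) / i, so each total_* ends up equal
# to the plain average over batches. A quick check with made-up batch losses:
stream_vals = [0.5, 1.5, 4.0, 2.0]
running_mean = 0.0
for step, x in enumerate(stream_vals, 1):
    running_mean += (x - running_mean) / step
# running_mean == sum(stream_vals) / len(stream_vals) == 2.0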
def adjust_learning_rate(optimizer, epoch):
"""Sets the learning rate to the initial LR multiplied by factor 0.1 for every 10 epochs"""
if not ((epoch + 1) % 10):
factor = 0.1
for param_group in optimizer.param_groups:
param_group['lr'] = param_group['lr'] * factor
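# Trace of this schedule with a stand-in scalar instead of the optimizer: the
# multiplicative decay fires whenever (epoch + 1) % 10 == 0, i.e. after epochs
# 9, 19 and 29 over a 30-epoch run, leaving the rate 1000x smaller.
lr_trace = 1e-3
for ep in range(30):
    if not ((ep + 1) % 10):
        lr_trace = lr_trace * 0.1
# lr_trace is now roughly 1e-3 * 0.1**3 == 1e-6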
def log(epoch, loss, pred_loss, reg_loss):
string = str.join(" | ", ['Epoch: %d' % (epoch), 'V Loss: %.5f' % (loss),
'V Pred Loss: %.5f' % (pred_loss), 'V Reg Loss: %.5f' % (reg_loss)])
if args.remote:
destination = os.path.join(args.ckp_path_remote, args.aspect + 'reg' + str(args.reg) + 'reg_mean' + str(args.reg_mean) +
'alpha_iter' + str(args.alpha_iter) + str(args.pretrain_kernel) + str(args.pretrain_pred) + 'reinforce_log.txt')
else:
destination = os.path.join(args.ckp_path_local, args.aspect + 'reg' + str(args.reg) + 'reg_mean' + str(args.reg_mean) +
'alpha_iter' + str(args.alpha_iter) + str(args.pretrain_kernel) + str(args.pretrain_pred) + 'reinforce_log.txt')
with open(destination, 'a') as log:
log.write(string + '\n')
def save_checkpoint(state, is_best, filename='reinforce_checkpoint.pth.tar'):
"""
State is a dictionary that contains valuable information to be saved.
"""
if args.remote:
destination = os.path.join(args.ckp_path_remote, args.aspect + 'reg' + str(args.reg) + 'reg_mean' + str(args.reg_mean) +
'alpha_iter' + str(args.alpha_iter) + str(args.pretrain_kernel) + str(args.pretrain_pred) + str(filename))
else:
destination = os.path.join(args.ckp_path_local, args.aspect + 'reg' + str(args.reg) + 'reg_mean' + str(args.reg_mean) +
'alpha_iter' + str(args.alpha_iter) + str(args.pretrain_kernel) + str(args.pretrain_pred) + str(filename))
torch.save(state, destination)
if is_best:
if args.remote:
best_destination = os.path.join(args.ckp_path_remote, args.aspect + 'reg' + str(args.reg) + 'reg_mean' + str(args.reg_mean) +
'alpha_iter' + str(args.alpha_iter) + str(args.pretrain_kernel) + str(args.pretrain_pred) + 'reinforce_best.pth.tar')
else:
best_destination = os.path.join(args.ckp_path_local, args.aspect + 'reg' + str(args.reg) + 'reg_mean' + str(args.reg_mean) +
'alpha_iter' + str(args.alpha_iter) + str(args.pretrain_kernel) + str(args.pretrain_pred) + 'reinforce_best.pth.tar')
shutil.copyfile(destination, best_destination)
global args, lowest_loss, dtype
args = parser.parse_args("-a aspect3 --remote 0 --reg 0.1 --reg_mean 10 --alpha_iter 4 --lr_k 1e-4".split())
lowest_loss = 100 # arbitrary high number as upper bound for loss
dtype = torch.DoubleTensor
### Load data
if args.remote:
# print('training remotely')
train_path = os.path.join(args.data_path_remote, str.join(".",['reviews', args.aspect, 'train.txt.gz']))
val_path = os.path.join(args.data_path_remote, str.join(".",['reviews', args.aspect, 'heldout.txt.gz']))
embd_path = os.path.join(args.data_path_remote, 'review+wiki.filtered.200.txt.gz')
else:
# print('training locally')
train_path = os.path.join(args.data_path_local, str.join(".",['reviews', args.aspect, 'train.txt.gz']))
val_path = os.path.join(args.data_path_local, str.join(".",['reviews', args.aspect, 'heldout.txt.gz']))
embd_path = os.path.join(args.data_path_local, 'review+wiki.filtered.200.txt.gz')
embd, word_to_ix = make_embd(embd_path)
train_set = make_tensor_dataset(train_path, word_to_ix)
val_set = make_tensor_dataset(val_path, word_to_ix)
print("loaded data")
torch.manual_seed(0)
train_loader = DataLoader(train_set, args.batch_size, shuffle=True)
val_loader = DataLoader(val_set, args.batch_size)
print("loader defined")
### Build model
# Network parameters
embd_dim = embd.weight.size(1)
kernel_dim = 200
hidden_dim = 500
enc_dim = 200
if args.aspect == 'all':
target_dim = 3
else:
target_dim = 1
# Model
torch.manual_seed(1)
# Add pre-training here...
kernel_net = KernelVar(embd_dim, hidden_dim, kernel_dim)
sampler = ReinforceSampler(args.alpha_iter)
pred_net = PredNet(embd_dim, hidden_dim, enc_dim, target_dim)
if args.pretrain_kernel:
if args.remote:
state_dict = torch.load(args.pretrain_path_remote + args.pretrain_kernel)
else:
state_dict = torch.load(args.pretrain_path_local + args.pretrain_kernel)
kernel_net.load_state_dict(state_dict)
if args.pretrain_pred:
if args.remote:
state_dict = torch.load(args.pretrain_path_remote + args.pretrain_pred)
else:
state_dict = torch.load(args.pretrain_path_local + args.pretrain_pred)
pred_net.load_state_dict(state_dict)
# continue with trainer
trainer = ReinforceTrainer(embd, kernel_net, sampler, pred_net)
trainer.reg = args.reg
trainer.reg_mean = args.reg_mean
trainer.activation = nn.Sigmoid()
trainer.type(dtype)
print("created trainer")
params = [{'params': trainer.kernel_net.parameters(), 'lr': args.lr_k},
{'params': trainer.pred_net.parameters(), 'lr': args.lr_p}]
optimizer = torch.optim.Adam(params)
print('set-up optimizer')
### Loop
l = []
torch.manual_seed(0)
print("started loop")
for epoch in range(args.epochs):
adjust_learning_rate(optimizer, epoch)
trainer.train()
for t, (review, target) in enumerate(train_loader):
review = Variable(review)
if args.aspect == 'all':
target = Variable(target[:,:3]).type(dtype)
else:
target = Variable(target[:,int(args.aspect[-1])]).type(dtype)
loss = trainer(review, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print('Weight mean is: ', trainer.kernel_net.layer1.weight.mean())
print('Weight max is: ', trainer.kernel_net.layer1.weight.max())
print('Weight min is: ', trainer.kernel_net.layer1.weight.min())
print('Grad max is: ', trainer.kernel_net.layer1.weight.grad.max())
print('Grad min is: ', trainer.kernel_net.layer1.weight.grad.min())
print("trained one batch")
loss, pred_loss, reg_loss = validate(val_loader, trainer)
log(epoch, loss, pred_loss, reg_loss)
print("logged")
is_best = pred_loss < lowest_loss
lowest_loss = min(pred_loss, lowest_loss)
save = {'epoch': epoch + 1,
'model': 'Marginal Trainer',
'state_dict': trainer.state_dict(),
'lowest_loss': lowest_loss,
'optimizer': optimizer.state_dict()}
save_checkpoint(save, is_best)
print("saved a checkpoint")
print('*'*20, 'SUCCESS','*'*20)
trainer.kernel_net.layer1.weight.min()
```
| github_jupyter |
# Monitoring Data Drift
Over time, models can become less effective at predicting accurately due to changing trends in feature data. This phenomenon is known as *data drift*, and it's important to monitor your machine learning solution to detect it so you can retrain your models if necessary.
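To build intuition before wiring up Azure ML's monitoring, here is a toy sketch (not part of the lab, and much cruder than the statistical measures the service computes) that flags a feature whose mean in newly collected data has shifted by more than a chosen number of baseline standard deviations:

```python
import statistics

def mean_shift_drift(baseline, current, threshold=2.0):
    """Flag drift when the new mean moves more than threshold baseline std devs away."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(current) - mu) / sigma > threshold

baseline_age = [30, 32, 29, 31, 30, 33, 28, 31]   # made-up baseline values
drifted_age = [v * 1.2 for v in baseline_age]      # a 20% upward shift

print(mean_shift_drift(baseline_age, baseline_age))  # False
print(mean_shift_drift(baseline_age, drifted_age))   # True
```

Real drift monitors compare whole feature distributions rather than a single mean, but the alerting idea is the same: a drift measure crosses a threshold you configure.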
In this lab, you'll configure data drift monitoring for datasets.
## Before you start
In addition to the latest version of the **azureml-sdk** and **azureml-widgets** packages, you'll need the **azureml-datadrift** package to run the code in this notebook. Run the cell below to verify that it is installed.
```
!pip show azureml-datadrift
```
## Connect to your workspace
With the required SDK packages installed, now you're ready to connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
```
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to work with', ws.name)
```
## Create a *baseline* dataset
To monitor a dataset for data drift, you must register a *baseline* dataset (usually the dataset used to train your model) to use as a point of comparison with data collected in the future.
```
from azureml.core import Datastore, Dataset
# Upload the baseline data
default_ds = ws.get_default_datastore()
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'],
target_path='diabetes-baseline',
overwrite=True,
show_progress=True)
# Create and register the baseline dataset
print('Registering baseline dataset...')
baseline_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-baseline/*.csv'))
baseline_data_set = baseline_data_set.register(workspace=ws,
name='diabetes baseline',
description='diabetes baseline data',
tags = {'format':'CSV'},
create_new_version=True)
print('Baseline dataset registered!')
```
## Create a *target* dataset
Over time, you can collect new data with the same features as your baseline training data. To compare this new data to the baseline data, you must define a target dataset that includes the features you want to analyze for data drift, as well as a timestamp field that indicates the point in time when the new data was current - this enables you to measure data drift over temporal intervals. The timestamp can either be a field in the dataset itself, or derived from the folder and filename pattern used to store the data. For example, you might store new data in a folder hierarchy that consists of a folder for the year, containing a folder for the month, which in turn contains a folder for the day; or you might just encode the year, month, and day in the file name like this: *data_2020-01-29.csv*, which is the approach taken in the following code:
```
import datetime as dt
import pandas as pd
print('Generating simulated data...')
# Load the smaller of the two data files
data = pd.read_csv('data/diabetes2.csv')
# We'll generate data for the past 6 weeks
weeknos = reversed(range(6))
file_paths = []
for weekno in weeknos:
# Get the date X weeks ago
data_date = dt.date.today() - dt.timedelta(weeks=weekno)
# Modify data to create some drift
data['Pregnancies'] = data['Pregnancies'] + 1
data['Age'] = round(data['Age'] * 1.2).astype(int)
data['BMI'] = data['BMI'] * 1.1
# Save the file with the date encoded in the filename
file_path = 'data/diabetes_{}.csv'.format(data_date.strftime("%Y-%m-%d"))
data.to_csv(file_path)
file_paths.append(file_path)
# Upload the files
path_on_datastore = 'diabetes-target'
default_ds.upload_files(files=file_paths,
target_path=path_on_datastore,
overwrite=True,
show_progress=True)
# Use the folder partition format to define a dataset with a 'date' timestamp column
partition_format = path_on_datastore + '/diabetes_{date:yyyy-MM-dd}.csv'
target_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, path_on_datastore + '/*.csv'),
partition_format=partition_format)
# Register the target dataset
print('Registering target dataset...')
target_data_set = target_data_set.with_timestamp_columns('date').register(workspace=ws,
name='diabetes target',
description='diabetes target data',
tags = {'format':'CSV'},
create_new_version=True)
print('Target dataset registered!')
```
## Create a data drift monitor
Now you're ready to create a data drift monitor for the diabetes data. The data drift monitor will run periodically or on-demand to compare the baseline dataset with the target dataset, to which new data will be added over time.
### Create a compute target
To run the data drift monitor, you'll need a compute target. Run the following cell to specify a compute cluster (if it doesn't exist, it will be created).
> **Important**: Change the value of *cluster_name* in the code below to the name of your compute cluster before running it! Cluster names must be globally unique and between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "dp100cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
```
### Define the data drift monitor
Now you're ready to use a **DataDriftDetector** class to define the data drift monitor for your data. You can specify the features you want to monitor for data drift, the name of the compute target to be used to run the monitoring process, the frequency at which the data should be compared, the data drift threshold above which an alert should be triggered, and the latency (in hours) to allow for data collection.
```
from azureml.datadrift import DataDriftDetector
# set up feature list
features = ['Pregnancies', 'Age', 'BMI']
# set up data drift detector
monitor = DataDriftDetector.create_from_datasets(ws, 'mslearn-diabates-drift', baseline_data_set, target_data_set,
compute_target=cluster_name,
frequency='Week',
feature_list=features,
drift_threshold=.3,
latency=24)
monitor
```
## Backfill the data drift monitor
You have a baseline dataset and a target dataset that includes simulated weekly data collection for six weeks. You can use this to backfill the monitor so that it can analyze data drift between the original baseline and the target data.
> **Note**: This may take some time to run, as the compute target must be started to run the backfill analysis. The widget may not always update to show the status, so click the link to observe the experiment status in Azure Machine Learning studio!
```
from azureml.widgets import RunDetails
backfill = monitor.backfill(dt.datetime.now() - dt.timedelta(weeks=6), dt.datetime.now())
RunDetails(backfill).show()
backfill.wait_for_completion()
```
## Analyze data drift
You can use the following code to examine data drift for the points in time collected in the backfill run.
```
drift_metrics = backfill.get_metrics()
for metric in drift_metrics:
print(metric, drift_metrics[metric])
```
You can also visualize the data drift metrics in [Azure Machine Learning studio](https://ml.azure.com) by following these steps:
1. On the **Datasets** page, view the **Dataset monitors** tab.
2. Click the data drift monitor you want to view.
3. Select the date range over which you want to view data drift metrics (if the column chart does not show multiple weeks of data, wait a minute or so and click **Refresh**).
4. Examine the charts in the **Drift overview** section at the top, which show overall drift magnitude and the drift contribution per feature.
5. Explore the charts in the **Feature detail** section at the bottom, which enable you to see various measures of drift for individual features.
> **Note**: For help understanding the data drift metrics, see [How to monitor datasets](https://docs.microsoft.com/azure/machine-learning/how-to-monitor-datasets#understanding-data-drift-results) in the Azure Machine Learning documentation.
## Explore further
This lab is designed to introduce you to the concepts and principles of data drift monitoring. To learn more about monitoring data drift using datasets, see [Detect data drift on datasets](https://docs.microsoft.com/azure/machine-learning/how-to-monitor-datasets) in the Azure Machine Learning documentation.
You can also collect data from published services and use it as a target dataset for data drift monitoring. See [Collect data from models in production](https://docs.microsoft.com/azure/machine-learning/how-to-enable-data-collection) for details.
```
# set tf 1.x for colab
%tensorflow_version 1.x
# setup only for running on google colab
# ! shred -u setup_google_colab.py
! wget https://raw.githubusercontent.com/hse-aml/intro-to-dl/master/setup_google_colab.py -O setup_google_colab.py
import setup_google_colab
# please, uncomment the week you're working on
# setup_google_colab.setup_week1()
# setup_google_colab.setup_week2()
# setup_google_colab.setup_week2_honor()
setup_google_colab.setup_week3()
# setup_google_colab.setup_week4()
# setup_google_colab.setup_week5()
# setup_google_colab.setup_week6()
```
# Fine-tuning InceptionV3 for flowers classification
In this task you will fine-tune InceptionV3 architecture for flowers classification task.
InceptionV3 architecture (https://research.googleblog.com/2016/03/train-your-own-image-classifier-with.html):
<img src="https://github.com/hse-aml/intro-to-dl/blob/master/week3/images/inceptionv3.png?raw=1" style="width:70%">
Flowers classification dataset (http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html) consists of 102 flower categories commonly occurring in the United Kingdom. Each class contains between 40 and 258 images:
<img src="https://github.com/hse-aml/intro-to-dl/blob/master/week3/images/flowers.jpg?raw=1" style="width:70%">
# Import stuff
```
import sys
sys.path.append("..")
import grading
import download_utils
# !!! remember to clear session/graph if you rebuild your graph to avoid out-of-memory errors !!!
download_utils.link_all_keras_resources()
import tensorflow as tf
import keras
from keras import backend as K
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
print(tf.__version__)
print(keras.__version__)
import cv2 # for image processing
from sklearn.model_selection import train_test_split
import scipy.io
import os
import tarfile
import keras_utils
from keras_utils import reset_tf_session
import warnings
warnings.filterwarnings('ignore', category=DeprecationWarning)
warnings.filterwarnings('ignore', category=FutureWarning)
```
# Fill in your Coursera token and email
To successfully submit your answers to our grader, please fill in your Coursera submission token and email
```
grader = grading.Grader(assignment_key="2v-uxpD7EeeMxQ6FWsz5LA",
all_parts=["wuwwC", "a4FK1", "qRsZ1"])
# token expires every 30 min
COURSERA_TOKEN = "kbq50loyqKlaw3NK"
COURSERA_EMAIL = "mailid_coursera@whatever.com"
```
# Load dataset
The dataset has already been downloaded for you; downloading it yourself takes about 12 minutes and 400 MB.
Relevant links (just in case):
- http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html
- http://www.robots.ox.ac.uk/~vgg/data/flowers/102/102flowers.tgz
- http://www.robots.ox.ac.uk/~vgg/data/flowers/102/imagelabels.mat
```
# we downloaded them for you, just link them here
download_utils.link_week_3_resources()
```
# Prepare images for model
```
# we will crop and resize input images to IMG_SIZE x IMG_SIZE
IMG_SIZE = 250
def decode_image_from_raw_bytes(raw_bytes):
    img = cv2.imdecode(np.asarray(bytearray(raw_bytes), dtype=np.uint8), 1)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    return img
```
We will take a center crop from each image like this:
<img src="https://github.com/hse-aml/intro-to-dl/blob/master/week3/images/center_crop.jpg?raw=1" style="width:50%">
```
def image_center_crop(img):
    """
    Makes a square center crop of an img, which is a [h, w, 3] numpy array.
    Returns [min(h, w), min(h, w), 3] output with same width and height.
    For cropping use numpy slicing.
    """
    h, w, c = img.shape
    min_dim = min(h, w) // 2  # half the crop size; an odd min(h, w) loses one pixel
    center = (h // 2, w // 2)
    cropped_img = img[center[0] - min_dim: center[0] + min_dim,
                      center[1] - min_dim: center[1] + min_dim]
    return cropped_img

def prepare_raw_bytes_for_model(raw_bytes, normalize_for_model=True):
    img = decode_image_from_raw_bytes(raw_bytes)  # decode image raw bytes to matrix
    img = image_center_crop(img)  # take squared center crop
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))  # resize for our model
    if normalize_for_model:
        img = img.astype("float32")  # prepare for normalization
        img = keras.applications.inception_v3.preprocess_input(img)  # normalize for model
    return img
# reads bytes directly from tar by filename (slow, but ok for testing, takes ~6 sec)
def read_raw_from_tar(tar_fn, fn):
    with tarfile.open(tar_fn) as f:
        m = f.getmember(fn)
        return f.extractfile(m).read()
# test cropping
raw_bytes = read_raw_from_tar("102flowers.tgz", "jpg/image_00001.jpg")
img = decode_image_from_raw_bytes(raw_bytes)
print(img.shape)
plt.imshow(img)
plt.show()
img = prepare_raw_bytes_for_model(raw_bytes, normalize_for_model=False)
print(img.shape)
plt.imshow(img)
plt.show()
## GRADED PART, DO NOT CHANGE!
# Test image preparation for model
prepared_img = prepare_raw_bytes_for_model(read_raw_from_tar("102flowers.tgz", "jpg/image_00001.jpg"))
grader.set_answer("qRsZ1", list(prepared_img.shape) + [np.mean(prepared_img), np.std(prepared_img)])
list(prepared_img.shape) + [np.mean(prepared_img), np.std(prepared_img)] # expected by grader
# you can make submission with answers so far to check yourself at this stage
grader.submit(COURSERA_EMAIL, COURSERA_TOKEN)
```
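The center-crop logic above can be sanity-checked outside the notebook with a standalone sketch (it assumes only NumPy; `center_crop` here mirrors `image_center_crop`, including the one-pixel loss for an odd minimum dimension):

```
import numpy as np

def center_crop(img):
    # same idea as image_center_crop: take the largest centered square
    h, w, _ = img.shape
    half = min(h, w) // 2
    cy, cx = h // 2, w // 2
    return img[cy - half:cy + half, cx - half:cx + half]

demo = np.arange(101 * 67 * 3).reshape(101, 67, 3)
print(center_crop(demo).shape)  # (66, 66, 3) -- odd min dimension drops one pixel
```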
# Prepare for training
```
# read all filenames and labels for them
# read filenames directly from tar
def get_all_filenames(tar_fn):
    with tarfile.open(tar_fn) as f:
        return [m.name for m in f.getmembers() if m.isfile()]
all_files = sorted(get_all_filenames("102flowers.tgz")) # list all files in tar sorted by name
all_labels = scipy.io.loadmat('imagelabels.mat')['labels'][0] - 1 # read class labels (0, 1, 2, ...)
# all_files and all_labels are aligned now
N_CLASSES = len(np.unique(all_labels))
print(N_CLASSES)
# split into train/test
tr_files, te_files, tr_labels, te_labels = \
train_test_split(all_files, all_labels, test_size=0.2, random_state=42, stratify=all_labels)
# will yield raw image bytes from tar with corresponding label
def raw_generator_with_label_from_tar(tar_fn, files, labels):
    label_by_fn = dict(zip(files, labels))
    with tarfile.open(tar_fn) as f:
        while True:
            m = f.next()
            if m is None:
                break
            if m.name in label_by_fn:
                yield f.extractfile(m).read(), label_by_fn[m.name]
# batch generator
BATCH_SIZE = 32
def batch_generator(items, batch_size):
    """
    Implement batch generator that yields items in batches of size batch_size.
    There's no need to shuffle input items, just chop them into batches.
    Remember about the last batch that can be smaller than batch_size!
    Input: any iterable (list, generator, ...). You should do `for item in items: ...`
    In case of generator you can pass through your items only once!
    Output: In output yield each batch as a list of items.
    """
    batch = []
    for item in items:  # single pass, so this also works for one-shot generators
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # don't forget the last, possibly smaller, batch
        yield batch
## GRADED PART, DO NOT CHANGE!
# Test batch generator
def _test_items_generator():
    for i in range(10):
        yield i
grader.set_answer("a4FK1", list(map(lambda x: len(x), batch_generator(_test_items_generator(), 3))))
list(map(lambda x: len(x), batch_generator(_test_items_generator(), 3)))  # expected by grader
# you can make submission with answers so far to check yourself at this stage
grader.submit(COURSERA_EMAIL, COURSERA_TOKEN)
def train_generator(files, labels):
    while True:  # so that Keras can loop through this as long as it wants
        for batch in batch_generator(raw_generator_with_label_from_tar(
                "102flowers.tgz", files, labels), BATCH_SIZE):
            # prepare batch images
            batch_imgs = []
            batch_targets = []
            for raw, label in batch:
                img = prepare_raw_bytes_for_model(raw)
                batch_imgs.append(img)
                batch_targets.append(label)
            # stack images into 4D tensor [batch_size, img_size, img_size, 3]
            batch_imgs = np.stack(batch_imgs, axis=0)
            # convert targets into 2D tensor [batch_size, num_classes]
            batch_targets = keras.utils.np_utils.to_categorical(batch_targets, N_CLASSES)
            yield batch_imgs, batch_targets

# test training generator
for _ in train_generator(tr_files, tr_labels):
    print(_[0].shape, _[1].shape)
    plt.imshow(np.clip(_[0][0] / 2. + 0.5, 0, 1))
    break
```
# Training
You cannot train such a huge architecture from scratch with such a small dataset.
But by fine-tuning the last layers of a pre-trained network you can get a pretty good classifier very quickly.
```
# remember to clear session if you start building graph from scratch!
s = reset_tf_session()
# don't call K.set_learning_phase() !!! (otherwise will enable dropout in train/test simultaneously)
def inception(use_imagenet=True):
    # load pre-trained model graph, don't add final layer
    model = keras.applications.InceptionV3(include_top=False, input_shape=(IMG_SIZE, IMG_SIZE, 3),
                                           weights='imagenet' if use_imagenet else None)
    # add global pooling just like in InceptionV3
    new_output = keras.layers.GlobalAveragePooling2D()(model.output)
    # add new dense layer for our labels
    new_output = keras.layers.Dense(N_CLASSES, activation='softmax')(new_output)
    model = keras.engine.training.Model(model.inputs, new_output)
    return model
model = inception()
model.summary()
# how many layers our model has
print(len(model.layers)) # deep model with 313 layers
# set all layers trainable by default
for layer in model.layers:
    layer.trainable = True
    if isinstance(layer, keras.layers.BatchNormalization):
        # we do aggressive exponential smoothing of batch norm
        # parameters to faster adjust to our new dataset
        layer.momentum = 0.9

# fix deep layers (fine-tuning only last 50)
for layer in model.layers[:-50]:
    # fix all but batch norm layers, because we need to update moving averages for a new dataset!
    if not isinstance(layer, keras.layers.BatchNormalization):
        layer.trainable = False
# compile new model
model.compile(
loss='categorical_crossentropy', # we train 102-way classification
optimizer=keras.optimizers.adamax(lr=1e-2), # we can take big lr here because we fixed first layers
metrics=['accuracy'] # report accuracy during training
)
# we will save model checkpoints to continue training in case of kernel death
model_filename = 'flowers.{0:03d}.hdf5'
last_finished_epoch = None
#### uncomment below to continue training from model checkpoint
#### fill `last_finished_epoch` with your latest finished epoch
# from keras.models import load_model
# s = reset_tf_session()
# last_finished_epoch = 10
# model = load_model(model_filename.format(last_finished_epoch))
```
Training takes **2 hours**. You're aiming for ~0.93 validation accuracy.
```
# fine tune for 2 epochs (full passes through all training data)
# we make 2*8 epochs, where epoch is 1/8 of our training data to see progress more often
model.fit_generator(
train_generator(tr_files, tr_labels),
steps_per_epoch=len(tr_files) // BATCH_SIZE // 8,
epochs=2 * 8,
validation_data=train_generator(te_files, te_labels),
validation_steps=len(te_files) // BATCH_SIZE // 4,
callbacks=[keras_utils.TqdmProgressCallback(),
keras_utils.ModelSaveCallback(model_filename)],
verbose=0,
initial_epoch=last_finished_epoch or 0
)
## GRADED PART, DO NOT CHANGE!
# Accuracy on validation set
test_accuracy = model.evaluate_generator(
train_generator(te_files, te_labels),
len(te_files) // BATCH_SIZE // 2
)[1]
grader.set_answer("wuwwC", test_accuracy)
print(test_accuracy)
# you can make submission with answers so far to check yourself at this stage
grader.submit(COURSERA_EMAIL, COURSERA_TOKEN)
```
That's it! Congratulations!
What you've done:
- prepared images for the model
- implemented your own batch generator
- fine-tuned the pre-trained model
# Imports
```
import os, re, sys, pickle, datetime
import itertools
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import pandas as pd
from scipy import stats
from sklearn import metrics
from sklearn.metrics import confusion_matrix,f1_score
from sklearn.model_selection import train_test_split,GridSearchCV,RepeatedKFold,LeaveOneOut,cross_val_score,cross_validate
from sklearn.preprocessing import StandardScaler,MinMaxScaler,PolynomialFeatures
from sklearn.tree import DecisionTreeClassifier,DecisionTreeRegressor
import warnings
warnings.filterwarnings("ignore")
```
# Read Data
```
expinput = pd.read_csv("exp_example.csv",sep=",",index_col=0)
expinput.index = expinput.index.astype(int)
comp_file = "comp_example"
comp_sheet = "Sheet1"
compinp = pd.read_excel(comp_file+".xlsx",comp_sheet,index_col = 0, header = 1)
#compinp = ci[ci.columns[:-4]].loc[ci.index[:-5]]
compinp.index = compinp.index.astype(int)
compinp.dropna(axis=0,inplace=True)
print("available datasets:")
for i in expinput.columns[3:]:
    print(i)
```
# Select Data
```
# select dataset and visualize data spread
dataset = "yield_chloro_2to1_2hr" #"yield_doyle_Ni_suzuki_0020"
plt.figure(figsize=(4, 4))
plt.hist(expinput[dataset].dropna(axis=0), bins="auto",color="black")
plt.ylabel("frequency")
plt.xlabel(dataset)
plt.tight_layout()
plt.show()
```
# Threshold Analysis
```
#threshold settings
y_cut = 5 # experimental output (e.g., yield) to define inactive/active
class_weight = {0:1,1:20} # define class weights (here the active class is weighted more heavily)
#parameter settings
num_par = 10 # number of parameters
par_start_col = 2 # parameter start column (0-indexed)
X_labels = list(compinp.columns)[par_start_col-1:num_par+par_start_col-1]
target_names = ['0 ("negatives")', '1 ("positives")']
#set which parameters/features to iterate through
features = range(0,10)
#features = itertools.chain(range(0,2),range(6,9))
#threshold analysis for selected features
for f_ind in features:
    feature = X_labels[f_ind]

    # read in data
    y_all = expinput[dataset].dropna(axis=0)
    X_all = np.array(compinp[feature].loc[y_all.index])

    # filter Buchwald ligands
    buchwaldmask = expinput["buchwald"][expinput[dataset].notna()]
    X_nobw = X_all[buchwaldmask==0]
    y_nobw = y_all[buchwaldmask==0]
    X_bw = X_all[buchwaldmask==1]
    y_bw = y_all[buchwaldmask==1]

    # select which subset of data to use for the threshold analysis (X_all, X_bw, X_nobw). change as appropriate
    X_use = X_all  # X_nobw
    y_use = y_all  # y_nobw
    y_class = np.array([0 if i < y_cut else 1 for i in y_use])

    dt = DecisionTreeClassifier(max_depth=1, class_weight=class_weight).fit(X_use.reshape(-1, 1), y_class)
    print("Dataset: {}\nN = {}\nFeature: {}\nDecision threshold = {:.2f}\nAccuracy: {:.2f}\nf1_score: {:.2f}\nMCC: {:.2f}\nRecall = {:.2f}\nClassification Report: \n{}".format(
        dataset, len(y_use), feature,
        dt.tree_.threshold[0],
        dt.score(X_use.reshape(-1, 1), y_class),
        metrics.f1_score(y_class, dt.predict(X_use.reshape(-1, 1))),
        metrics.matthews_corrcoef(y_class, dt.predict(X_use.reshape(-1, 1))),
        metrics.recall_score(y_class, dt.predict(X_use.reshape(-1, 1)), pos_label=1, average='binary'),
        metrics.classification_report(y_class, dt.predict(X_use.reshape(-1, 1)), target_names=target_names),
    ))

    # begin plot
    dt_plt = DecisionTreeClassifier(max_depth=1, class_weight=class_weight).fit(X_use.reshape(-1, 1), y_class)
    n_classes = 2
    plot_step = 0.02

    # define plot axes limits here as appropriate
    # x_min, x_max = 20, 80
    # y_min, y_max = 0, 100
    x_min, x_max = X_all.min(), X_all.max()
    y_min, y_max = y_all.min(), y_all.max()

    # set plot colors
    cMap_background = ListedColormap(['white', '#ccebc5'])  # background colors: white (inactive) and light green (active)
    cMap_points = ListedColormap(["r", "g"])  # color for each class of the actual data points (i.e., inactive/active); "rg" is red, green
    bwcolor = "#999999"  # color for points removed from threshold analysis, e.g. Buchwald ligands

    # plot code
    dx, dy = x_max - x_min, y_max - y_min
    xx, yy = np.meshgrid(np.arange(x_min - 0.04*dx, x_max + 0.04*dx, plot_step),
                         np.arange(y_min - 0.04*dy, y_max + 0.04*dy, plot_step))
    plt.figure(figsize=(4, 4))
    plt.tight_layout(h_pad=0.5, w_pad=0.5, pad=2.5)
    Z = dt_plt.predict(xx.ravel().reshape(-1, 1))
    Z = Z.reshape(xx.shape)
    cs = plt.contourf(xx, yy, Z, cmap=cMap_background)

    # Axis labels
    plt.xlabel(feature, fontsize=20)
    plt.ylabel("% Yield", fontsize=20)  # change as appropriate, e.g. yield, selectivity, $ΔΔG^{≠}$
    plt.scatter(X_use, y_use, c=y_class, cmap=cMap_points, edgecolor="black", s=15, alpha=0.75)
    # Plot points that were removed from threshold analysis (e.g., Buchwald ligands). Comment out if not needed.
    # plt.scatter(X_bw, y_bw, c=bwcolor, edgecolor=bwcolor, s=15, alpha=0.75)
    plt.xticks(fontsize=15)
    plt.yticks(fontsize=15)
    # Plot horizontal line to indicate y_cut
    plt.axhline(y=y_cut, color='black', linestyle='-', linewidth=0.5)

    # Print plot
    plt.show()

    # optional: save plot as .png
    figname = "threshold_{}_{}".format(dataset, feature)
    # plt.savefig(figname, dpi=300, bbox_inches='tight')
    print('-------------------------------------------------------')
```
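The per-feature analysis above boils down to fitting a depth-1 decision stump and reading off its split point. On synthetic one-dimensional data the idea looks like this (a sketch, assuming scikit-learn and NumPy are available; the data is made up):

```
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# two well-separated 1-D clusters standing in for "inactive" and "active"
X = np.concatenate([rng.normal(0, 1, 50), rng.normal(5, 1, 50)]).reshape(-1, 1)
y = np.array([0] * 50 + [1] * 50)

stump = DecisionTreeClassifier(max_depth=1).fit(X, y)
print("threshold:", stump.tree_.threshold[0])  # lands between the two clusters
print("accuracy:", stump.score(X, y))
```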
<div class="alert alert-block alert-info" style="margin-top: 20px">
<a href="http://cocl.us/NotebooksPython101"><img src = "https://ibm.box.com/shared/static/yfe6h4az47ktg2mm9h05wby2n7e8kei3.png" width = 750, align = "center"></a>
<a href="https://www.bigdatauniversity.com"><img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 300, align = "center"></a>
<h1 align=center><font size = 5> Writing and Saving Files in PYTHON</font></h1>
<br>
This notebook will provide information regarding writing and saving data into **.txt** files.
## Table of Contents
<div class="alert alert-block alert-info" style="margin-top: 20px">
<li><a href="#ref3">Writing Text Files</a></li>
<br>
<p></p>
Estimated Time Needed: <strong>15 min</strong>
</div>
<hr>
<a id="ref3"></a>
<h2 align=center>Writing Files</h2>
We can write text to a file using the method **write()**. To open the file for writing, the mode argument must be set to write: **w**. Let’s write a file **Example2.txt** with the line: “This is line A”
```
with open('/resources/data/Example2.txt','w') as writefile:
    writefile.write("This is line A")
```
We can read the file to see if it worked:
```
with open('/resources/data/Example2.txt','r') as testwritefile:
    print(testwritefile.read())
```
We can write multiple lines:
```
with open('/resources/data/Example2.txt','w') as writefile:
    writefile.write("This is line A\n")
    writefile.write("This is line B\n")
```
The method **.write()** works similarly to the method **.readline()**, except that instead of reading a new line it writes one. The process is illustrated in the figure below; the different colour coding of the grid represents a new line added to the file after each method call.
<a ><img src = "https://ibm.box.com/shared/static/4d86eysjv7fiy5nocgvpbddyj2uckw6z.png" width = 500, align = "center"></a>
<h4 align=center>
An example of “.write()”, the different colour coding of the grid represents a new line added after each method call.
</h4>
You can check the file to see if your results are correct
```
with open('/resources/data/Example2.txt','r') as testwritefile:
    print(testwritefile.read())
```
By setting the mode argument to append **a** you can append a new line as follows:
```
with open('/resources/data/Example2.txt','a') as testwritefile:
    testwritefile.write("This is line C\n")
```
You can verify the file has changed by running the following cell:
```
with open('/resources/data/Example2.txt','r') as testwritefile:
    print(testwritefile.read())
```
We write a list to a **.txt** file as follows:
```
Lines=["This is line A\n","This is line B\n","This is line C\n"]
Lines
with open('Example2.txt','w') as writefile:
    for line in Lines:
        print(line)
        writefile.write(line)
```
We can verify the file is written by reading it and printing out the values:
```
with open('Example2.txt','r') as testwritefile:
    print(testwritefile.read())
```
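An equivalent one-call approach uses the method **writelines()**, which writes every string in a list (a small sketch using a hypothetical file name; note that it does not add newline characters for you):

```
lines = ["This is line A\n", "This is line B\n"]
with open("Example4.txt", "w") as writefile:
    writefile.writelines(lines)

with open("Example4.txt", "r") as testwritefile:
    print(testwritefile.read())
```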
We can again append to the file by changing the mode argument to **a**. This appends a new line:
```
with open('Example2.txt','a') as testwritefile:
    testwritefile.write("This is line D\n")
```
We can see the results of appending the file:
```
with open('Example2.txt','r') as testwritefile:
    print(testwritefile.read())
```
#### Copy a file
Let's copy the file **Example2.txt** to the file **Example3.txt**:
```
with open('Example2.txt','r') as readfile:
    with open('Example3.txt','w') as writefile:
        for line in readfile:
            writefile.write(line)
```
We can read the file to see if everything works:
```
with open('Example3.txt','r') as testwritefile:
    print(testwritefile.read())
```
After reading files, we can also write data into files and save them in different file formats like **.txt**, **.csv**, and **.xls** (for Excel files). Let's take a look at some examples.
Now go to the directory to ensure the .txt file exists and contains the summary data that we wrote.
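For example, tabular data can be saved in **.csv** format with Python's built-in `csv` module (a minimal sketch, using a hypothetical file name and data):

```
import csv

rows = [["name", "score"], ["A", 90], ["B", 85]]
with open("Example_scores.csv", "w", newline="") as csvfile:
    csv.writer(csvfile).writerows(rows)

with open("Example_scores.csv", "r") as csvfile:
    print(csvfile.read())
```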
<a href="http://cocl.us/NotebooksPython101bottom"><img src = "https://ibm.box.com/shared/static/irypdxea2q4th88zu1o1tsd06dya10go.png" width = 750, align = "center"></a>
<hr>
### About the Author:
[Joseph Santarcangelo]( https://www.linkedin.com/in/joseph-s-50398b136/) has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.
<hr>
Copyright © 2017 [cognitiveclass.ai](cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
# Template File
The data is given
in CSV form precisely as would be given in data files. For each table started by the cell
magic `%%Table`, the table name follows immediately.

```
from Frame2D import Frame2D
theframe = Frame2D('Template')
```
# Input Data
## Nodes
Table `nodes` (file `nodes.csv`) provides the $x$-$y$ coordinates of each node. Other columns, such
as the $z$- coordinate are optional, and ignored if given.
```
%%Table nodes
NODEID,X,Y,Z
```
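For illustration, a filled-in `nodes` cell could look like the following (the coordinates are hypothetical, not part of the template):

```
%%Table nodes
NODEID,X,Y,Z
A,0,0,0
B,0,4000,0
C,8000,4000,0
```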
## Supports
Table `supports` (file `supports.csv`) specifies the support fixity, by indicating the constrained
direction for each node. There can be 1, 2 or 3 constraints, selected from the set '`FX`', '`FY`' or '`MZ`',
in any order for each constrained node. Directions not mentioned are 'free' or unconstrained.
```
%%Table supports
NODEID,C0,C1,C2
```
## Members
Table `members` (file `members.csv`) specifies the member incidences. For each member, specify
the id of the nodes at the 'j-' and 'k-' ends. These ends are used to interpret the signs of various values.
```
%%Table members
MEMBERID,NODEJ,NODEK
```
## Releases
Table `releases` (file `releases.csv`) is optional and specifies internal force releases in some members.
Currently only moment releases at the 'j-' end ('`MZJ`') and 'k-' end ('`MZK`') are supported. These specify
that the internal bending moment at those locations are zero. You can only specify one release per line,
but you can have more than one line for a member.
```
%%Table releases
MEMBERID,RELEASE
```
## Properties
Table `properties` (file `properties.csv`) specifies the member properties for each member.
If the '`SST`' library is available, you may specify the size of the member by using the
designation of a shape in the CISC Structural Section Tables. If either `IX` or `A` is missing,
it is retrieved using the `sst` library. If the values on any line are missing, they
are copied from the line above.
```
%%Table properties
MEMBERID,SIZE,IX,A
```
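The "copied from the line above" behaviour is equivalent to a forward-fill of missing values; with pandas it can be sketched as follows (the section sizes and moments of inertia are hypothetical, not from the template):

```
import pandas as pd
import numpy as np

props = pd.DataFrame({
    "MEMBERID": ["AB", "BC", "CD"],
    "SIZE": ["W310x97", np.nan, np.nan],  # hypothetical CISC designations
    "IX": [222e6, np.nan, 129e6],
})
filled = props.ffill()  # missing values copied from the line above
print(filled)
```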
## Node Loads
Table `node_loads` (file `node_loads.csv`) specifies the forces applied directly to the nodes.
`DIRN` (direction) may be one of `FX`, `FY` or `MZ`. `LOAD` is an identifier of the kind of load
being applied and `F` is the value of the load, normally given as a service or specified load.
A later input table will specify load combinations and factors.
```
%%Table node_loads
LOAD,NODEID,DIRN,F
```
## Support Displacements
Table `support_displacements` (file `support_displacements.csv`) is optional and specifies imposed displacements
of the supports. `DIRN` (direction) is one of `DX`, `DY` or `RZ`. `LOAD` is as for Node Loads, above.
Of course, in this example the frame is statically determinate and so the support displacement
will have no effect on the reactions or member end forces.
```
%%Table support_displacements
LOAD,NODEID,DIRN,DELTA
```
## Member Loads
Table `member_loads` (file `member_loads.csv`) specifies loads acting on members. Current
types are `PL` (concentrated transverse, ie point load), `CM` (concentrated moment), `UDL` (uniformly
distributed load over entire span), `LVL` (linearly varying load over a portion of the span) and `PLA` (point load applied parallel to member coincident with centroidal axis). Values `W1` and `W2` are loads or
load intensities and `A`, `B`, and `C` are dimensions appropriate to the kind of load.
```
%%Table member_loads
LOAD,MEMBERID,TYPE,W1,W2,A,B,C
```
## Load Combinations
Table `load_combinations` (file `load_combinations.csv`) is optional and specifies
factored combinations of loads. By default, there is always a load combination
called `all` that includes all loads with a factor of 1.0. A frame solution (see below)
indicates which `CASE` to use.
```
%%Table load_combinations
CASE,LOAD,FACTOR
```
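Conceptually, a combination scales each load case's effect by its factor and sums the results; the bookkeeping can be sketched as follows (the case names and load effects are hypothetical):

```
factors = {("one", "dead"): 1.25, ("one", "wind"): 1.4}  # (CASE, LOAD) -> FACTOR
effects = {"dead": 20.0, "wind": 10.0}                   # hypothetical unfactored effects

total = sum(f * effects[load] for (case, load), f in factors.items() if case == "one")
print(total)  # 1.25*20 + 1.4*10
```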
## Solution
The following outputs all tables, prints a description of the input data,
produces a solution for load case '`one`' (all load and case names are case-insensitive)
and finally prints the results.
```
theframe.input_all()
theframe.print_input()
RS = theframe.solve('one')
theframe.print_results(rs=RS)
```

[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/TEXT_FINDER_EN.ipynb)
# **Find words/phrases in text using word and regex matching**
**Demo of the following annotators:**
* TextMatcher
* RegexMatcher
## 1. Colab Setup
```
# Install java
!apt-get update -qq
!apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
!java -version
# Install pyspark
!pip install --ignore-installed -q pyspark==2.4.4
# Install Sparknlp
!pip install --ignore-installed spark-nlp
import pandas as pd
import numpy as np
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
import json
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
```
## 2. Start Spark Session
```
spark = sparknlp.start()
```
## 3. Select annotator and re-run the cells below
```
#MODEL_NAME='TextMatcher'
MODEL_NAME='RegexMatcher'
```
## 4. Create some sample examples and desired regex/string matching queries
```
## Generating Example Files ##
text_list = ["""Quantum computing is the use of quantum-mechanical phenomena such as superposition and entanglement to perform computation. Computers that perform quantum computations are known as quantum computers. Quantum computers are believed to be able to solve certain computational problems, such as integer factorization (which underlies RSA encryption), substantially faster than classical computers. The study of quantum computing is a subfield of quantum information science. Quantum computing began in the early 1980s, when physicist Paul Benioff proposed a quantum mechanical model of the Turing machine. Richard Feynman and Yuri Manin later suggested that a quantum computer had the potential to simulate things that a classical computer could not. In 1994, Peter Shor developed a quantum algorithm for factoring integers that had the potential to decrypt RSA-encrypted communications. Despite ongoing experimental progress since the late 1990s, most researchers believe that "fault-tolerant quantum computing is still a rather distant dream." In recent years, investment into quantum computing research has increased in both the public and private sector. On 23 October 2019, Google AI, in partnership with the U.S. National Aeronautics and Space Administration (NASA), published a paper in which they claimed to have achieved quantum supremacy. While some have disputed this claim, it is still a significant milestone in the history of quantum computing.""",
"""Instacart has raised a new round of financing that makes it one of the most valuable private companies in the U.S., leapfrogging DoorDash, Palantir and Robinhood. Amid surging demand for grocery delivery due to the coronavirus pandemic, Instacart has raised $225 million in a new funding round led by DST Global and General Catalyst. The round increases Instacart’s valuation to $13.7 billion, up from $8 billion when it last raised money in 2018.""",
]
exact_matches = ['Quantum', 'million', 'payments', 'index', 'market share', 'gap', 'market', 'measure', 'aspects', 'accounts', 'king' ]
regex_rules = ["""Quantum\s\w+""", """million\s\w+""", """John\s\w+, followed by leader""", """payment.*?\s""", """rall.*?\s""", '\d\d\d\d', '\d+ Years' ]
```
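Before wiring these rules into the Spark pipeline, they can be sanity-checked with Python's built-in `re` module (a sketch using a snippet of the first example text):

```
import re

sample = "Quantum computing began in the early 1980s, when physicist Paul Benioff proposed a model."
for rule in [r"Quantum\s\w+", r"\d\d\d\d"]:
    print(rule, "->", re.findall(rule, sample))
```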
## 5. Save the queries in separate files
```
if MODEL_NAME=='TextMatcher':
    with open('text_to_match.txt', 'w') as f:
        for i in exact_matches:
            f.write(i+'\n')
else:
    with open('regex_to_match.txt', 'w') as f:
        for i in regex_rules:
            f.write(i+'\n')
```
## 6. Define Spark NLP pipeline
```
documentAssembler = DocumentAssembler()\
    .setInputCol("text")\
    .setOutputCol("document")

if MODEL_NAME=='TextMatcher':
    tokenizer = Tokenizer() \
        .setInputCols(["document"]) \
        .setOutputCol("token")

    text_matcher = TextMatcher() \
        .setInputCols(["document",'token'])\
        .setOutputCol("matched_text")\
        .setCaseSensitive(False)\
        .setEntities(path="text_to_match.txt")

    nlpPipeline = Pipeline(stages=[documentAssembler,
                                   tokenizer,
                                   text_matcher])
else:
    regex_matcher = RegexMatcher()\
        .setInputCols('document')\
        .setStrategy("MATCH_ALL")\
        .setOutputCol("matched_text")\
        .setExternalRules(path='regex_to_match.txt', delimiter=',')

    nlpPipeline = Pipeline(stages=[documentAssembler,
                                   regex_matcher])
```
## 7. Run the pipeline
```
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
df = spark.createDataFrame(pd.DataFrame({'text':text_list}))
result = pipelineModel.transform(df)
```
## 8. Visualize results
```
result.select(F.explode(F.arrays_zip('matched_text.result', 'matched_text.metadata')).alias("cols")) \
      .select(F.expr("cols['0']").alias("Matches Found"),
              F.expr("cols['1']['identifier']").alias("matching_regex/string")) \
      .show(truncate=False)
```
# Genre recognition: experiment
Goal:
Conclusion:
Observations:
## Hyper-parameters
### Parameter under test
```
Pname = 'lg'
Pvalues = [1, 10, 100]
# Regenerate the graph or the features at each iteration.
regen_graph = False
regen_features = True
regen_baseline = False
```
### Model parameters
```
p = {}
# Preprocessing.
# Graph.
p['data_scaling_graph'] = None
p['K'] = 10 + 1 # 5 to 10 + 1 for self-reference
p['dm'] = 'euclidean'
p['Csigma'] = 1
p['diag'] = True
p['laplacian'] = 'normalized'
# Feature extraction.
p['m'] = 512 # 64, 128, 512
p['ls'] = 1
p['ld'] = 10
p['le'] = None
p['lg'] = 100
# Classification.
p['scale'] = None
p['Nvectors'] = 6
p['svm_type'] = 'C'
p['kernel'] = 'linear'
p['C'] = 1
p['nu'] = 0.5
p['majority_voting'] = False
```
### Data parameters
```
# HDF5 data stores.
p['folder'] = 'data'
p['filename_gtzan'] = 'gtzan.hdf5'
p['filename_audio'] = 'audio.hdf5'
p['filename_graph'] = 'graph.hdf5'
p['filename_features'] = 'features.hdf5'
# Dataset (10,100,644 | 5,100,149 | 2,10,644).
p['Ngenres'] = 10
p['Nclips'] = 100
p['Nframes'] = 644
# Added white noise.
p['noise_std'] = 0
```
### Numerical parameters
```
# Graph.
p['tol'] = 1e-5
# Feature extraction.
p['rtol'] = 1e-5 # 1e-3, 1e-5, 1e-7
p['N_inner'] = 500
p['N_outer'] = 50
# Classification.
p['test_size'] = 0.1
p['Ncv'] = 20
p['dataset_classification'] = 'Z'
```
## Processing
```
import numpy as np
import time

texperiment = time.time()

# Result dictionary.
res = ['accuracy', 'accuracy_std']
res += ['sparsity', 'atoms_D']
res += ['objective_g', 'objective_h', 'objective_i', 'objective_j']
res += ['time_features', 'iterations_inner', 'iterations_outer']
res = {key: [] for key in res}

def separator(name, parameter=False):
    if parameter:
        name += ', {} = {}'.format(Pname, p[Pname])
    dashes = 20 * '-'
    print('\n {} {} {} \n'.format(dashes, name, dashes))

# Fair comparison when tuning parameters.
# Randomnesses: dictionary initialization, training and testing sets.
np.random.seed(1)

#%run gtzan.ipynb
#%run audio_preprocessing.ipynb

if not regen_graph:
    separator('Graph')
    %run audio_graph.ipynb
if not regen_features:
    separator('Features')
    %run audio_features.ipynb

# Hyper-parameter under test.
for p[Pname] in Pvalues:
    if regen_graph:
        separator('Graph', True)
        %run audio_graph.ipynb
    if regen_features:
        separator('Features', True)
        p['filename_features'] = 'features_{}_{}.hdf5'.format(Pname, p[Pname])
        %run audio_features.ipynb
    separator('Classification', True)
    %run audio_classification.ipynb
    # Collect results.
    for key in res:
        res[key].append(globals()[key])

# Baseline, i.e. classification with spectrograms.
p['dataset_classification'] = 'X'
p['scale'] = 'minmax'  # Todo: should be done in pre-processing.
if regen_baseline:
    res['baseline'] = []
    res['baseline_std'] = []
    for p[Pname] in Pvalues:
        separator('Baseline', True)
        %run audio_classification.ipynb
        res['baseline'].append(accuracy)
        res['baseline_std'].append(accuracy_std)
else:
    separator('Baseline')
    %run audio_classification.ipynb
    res['baseline'] = len(Pvalues) * [accuracy]
    res['baseline_std'] = accuracy_std
```
## Results
```
print('{} = {}'.format(Pname, Pvalues))
for key, value in res.items():
    if key != 'atoms_D':
        print('res[\'{}\'] = {}'.format(key, value))

def plot(*args, **kwargs):
    plt.figure(figsize=(8, 5))
    x = range(len(Pvalues))
    log = 'log' in kwargs and kwargs['log'] is True
    pltfunc = plt.semilogy if log else plt.plot
    params = {}
    params['linestyle'] = '-'
    params['marker'] = '.'
    params['markersize'] = 10
    for i, var in enumerate(args):
        if 'err' in kwargs:
            pltfunc = plt.errorbar
            params['yerr'] = res[kwargs['err'][i]]
            params['capsize'] = 5
        pltfunc(x, res[var], label=var, **params)
        for i, j in zip(x, res[var]):
            plt.annotate('{:.2f}'.format(j), xy=(i, j), xytext=(5, 5), textcoords='offset points')
    margin = 0.25
    plt.xlim(-margin, len(Pvalues) - 1 + margin)
    if 'ylim' in kwargs:
        plt.ylim(kwargs['ylim'])
    plt.title('{} vs {}'.format(', '.join(args), Pname))
    plt.xlabel(Pname)
    plt.ylabel(', '.join(args))
    plt.xticks(x, Pvalues)
    plt.grid(True); plt.legend(loc='best'); plt.show()

def div(l):
    div = Pvalues if Pname == l else [p[l]]
    return np.array([1 if v is None else v for v in div])

# Classification results.
res['chance'] = len(Pvalues) * [100. / p['Ngenres']]
res['chance_std'] = 0
err = ['accuracy_std', 'baseline_std', 'chance_std']
plot('accuracy', 'baseline', 'chance', err=err, ylim=[0, 100])

# Features extraction results.
if regen_features:
    plot('objective_g', 'objective_h', 'objective_i', 'objective_j', log=True)
    # Unweighted objectives.
    print('g(Z) = ||X-DZ||_2^2, h(Z) = ||Z-EX||_2^2, i(Z) = ||Z||_1, j(Z) = tr(Z^TLZ)')
    res['objective_g_un'] = res['objective_g'] / div('ld')
    res['objective_h_un'] = res['objective_h'] / div('le')
    res['objective_i_un'] = res['objective_i'] / div('ls')
    res['objective_j_un'] = res['objective_j'] / div('lg')
    plot('objective_g_un', 'objective_h_un', 'objective_i_un', 'objective_j_un', log=True)
    plot('sparsity', ylim=[0, 100])
    plot('time_features')
    plot('iterations_inner')
    plot('iterations_outer')
    for i, fig in enumerate(res['atoms_D']):
        print('Dictionary atoms for {} = {}'.format(Pname, Pvalues[i]))
        fig.show()

print('Experiment time: {:.0f} seconds'.format(time.time() - texperiment))
```
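The four objective terms reported above (g, h, i, j) can be evaluated directly on dense arrays. A small numpy sketch with random data, storing one frame per column (under that convention the smoothness term tr(Z^T L Z) is computed as `trace(Z @ L @ Z.T)`); all sizes here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, N = 8, 12, 5                    # signal dim., dictionary size, no. of frames
X = rng.standard_normal((n, N))       # data, one frame per column
D = rng.standard_normal((n, m))       # dictionary
E = rng.standard_normal((m, n))       # encoder
Z = rng.standard_normal((m, N))       # sparse codes, one frame per column

# Laplacian of a cycle graph over the N frames (symmetric, positive semi-definite).
A = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
L = np.diag(A.sum(axis=1)) - A

g = np.linalg.norm(X - D @ Z)**2      # data fidelity ||X - DZ||_F^2
h = np.linalg.norm(Z - E @ X)**2      # encoder fidelity ||Z - EX||_F^2
i = np.abs(Z).sum()                   # sparsity ||Z||_1
j = np.trace(Z @ L @ Z.T)             # graph smoothness, frames as columns
print(g, h, i, j)
```

Since L is positive semi-definite, j is non-negative, as are the norms; dividing each term by its weight (ld, le, ls, lg), as the cell above does, recovers the unweighted objectives.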
## Boulder Watershed Demo
Process ATL03 data from the Boulder Watershed region and produce a customized ATL06 elevation dataset.
### What is demonstrated
* The `icesat2.atl06p` API is used to perform a SlideRule parallel processing request of the Boulder Watershed region
* The `matplotlib` and `cartopy` packages are used to plot the data returned by SlideRule
### Points of interest
This is a simple notebook showing how a region of interest can be processed by SlideRule and the results analyzed using pandas DataFrames and Matplotlib.
```
import sys
import logging
import geopandas
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import cartopy
import time
from sliderule import icesat2
```
## SlideRule Configuration
```
# Configure ICESat-2 API
icesat2.init("icesat2sliderule.org", False)
# Configure Region of Interest
region = [{"lon": -105.82971551223244, "lat": 39.81983728534918},
          {"lon": -105.30742121965137, "lat": 39.81983728534918},
          {"lon": -105.30742121965137, "lat": 40.164048017973755},
          {"lon": -105.82971551223244, "lat": 40.164048017973755},
          {"lon": -105.82971551223244, "lat": 39.81983728534918}]
```
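Note that the region repeats its first vertex at the end to close the polygon ring. When writing such polygons by hand, a quick plain-Python check helps catch a missing closing vertex; `is_closed` is a hypothetical helper, not part of the SlideRule API:

```python
# Same polygon as the region above: last vertex repeats the first.
region = [{"lon": -105.82971551223244, "lat": 39.81983728534918},
          {"lon": -105.30742121965137, "lat": 39.81983728534918},
          {"lon": -105.30742121965137, "lat": 40.164048017973755},
          {"lon": -105.82971551223244, "lat": 40.164048017973755},
          {"lon": -105.82971551223244, "lat": 39.81983728534918}]

def is_closed(poly):
    """True when the ring has at least a triangle plus the closing vertex."""
    return len(poly) >= 4 and poly[0] == poly[-1]

print(is_closed(region))  # → True
```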
## Execute ATL06 Algorithm using SlideRule
```
# Latch Start Time
perf_start = time.perf_counter()
# Build ATL06 Request
parms = {
    "poly": region,
    "srt": icesat2.SRT_LAND,
    "cnf": icesat2.CNF_SURFACE_HIGH,
    "ats": 10.0,
    "cnt": 10,
    "len": 40.0,
    "res": 20.0,
    "maxi": 1
}
# Request ATL06 Data
gdf = icesat2.atl06p(parms)
# Latch Stop Time
perf_stop = time.perf_counter()
# Display Statistics
perf_duration = perf_stop - perf_start
print("Completed in {:.3f} seconds of wall-clock time".format(perf_duration))
print("Reference Ground Tracks: {}".format(gdf["rgt"].unique()))
print("Cycles: {}".format(gdf["cycle"].unique()))
print("Received {} elevations".format(len(gdf)))
```
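The statistics printed above rely only on ordinary DataFrame operations, so they can be previewed without a live SlideRule request. A sketch on a toy frame standing in for the returned GeoDataFrame; the column names match the real result, but every value below is made up:

```python
import pandas as pd

# Toy stand-in for the GeoDataFrame returned by icesat2.atl06p.
gdf = pd.DataFrame({
    "rgt":    [295, 295, 849, 849, 849],
    "cycle":  [1, 2, 1, 1, 2],
    "h_mean": [2801.4, 2799.9, 3105.2, 3110.7, 3098.3],
})

print("Reference Ground Tracks: {}".format(gdf["rgt"].unique()))
print("Cycles: {}".format(gdf["cycle"].unique()))
print("Received {} elevations".format(len(gdf)))
print("Mean elevation: {:.1f} m".format(gdf["h_mean"].mean()))
```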
## Plot Region
```
# Calculate Extent
lons = [p["lon"] for p in region]
lats = [p["lat"] for p in region]
lon_margin = (max(lons) - min(lons)) * 0.1
lat_margin = (max(lats) - min(lats)) * 0.1
extent = (min(lons) - lon_margin, max(lons) + lon_margin, min(lats) - lat_margin, max(lats) + lat_margin)
# Create Plot
fig = plt.figure(num=None, figsize=(24, 12))
box_lon = [e["lon"] for e in region]
box_lat = [e["lat"] for e in region]
# Plot SlideRule Ground Tracks
ax1 = plt.subplot(121,projection=cartopy.crs.PlateCarree())
ax1.set_title("SlideRule Zoomed Ground Tracks")
ax1.scatter(gdf.geometry.x, gdf.geometry.y, s=2.5, c=gdf["h_mean"], cmap='winter_r', zorder=3, transform=cartopy.crs.PlateCarree())
ax1.set_extent(extent,crs=cartopy.crs.PlateCarree())
ax1.plot(box_lon, box_lat, linewidth=1.5, color='r', zorder=2, transform=cartopy.crs.Geodetic())
# Plot SlideRule Global View
ax2 = plt.subplot(122,projection=cartopy.crs.PlateCarree())
ax2.set_title("SlideRule Global Reference")
#gdf.plot(kind='scatter', ax=ax2, x="lon", y="lat", s=2.5, c='r', zorder=3)
ax2.scatter(gdf.geometry.x, gdf.geometry.y, s=2.5, c='r', zorder=3, transform=cartopy.crs.PlateCarree())
ax2.add_feature(cartopy.feature.LAND, zorder=0, edgecolor='black')
ax2.add_feature(cartopy.feature.LAKES)
ax2.set_extent((-180,180,-90,90),crs=cartopy.crs.PlateCarree())
# Show Plot
plt.show()
```